Compare commits: `secrets-mc...secrets-mc`

7 Commits:

- df701f21b9
- c3c536200e
- 7909f7102d
- 87a29af82d
- 1b11f7e976
- 08e81363c9
- beade4503d
@@ -118,7 +118,7 @@ oauth_accounts (
 
 ### PEM sharing (`key_ref`)
 
-Store the shared PEM as a **`type=key`** entry; other records point to that key's `name` via `metadata.key_ref`. After the key record is updated, referrers pick up the new key through the service-layer resolution/merge logic (see `secrets_core::service`).
+Storing the shared PEM as a **`type=key`** entry is recommended; other records point to the target entry's `name` in `metadata.key_ref` (the `folder/name` form is supported for disambiguation). When a referenced key is deleted, the service automatically migrates to a single copy plus redirects (the ciphertext is copied to the first referrer, and the remaining referrers are repointed to the new owner); see `secrets_core::service::env_map` for the resolution logic.
 
 ## Coding conventions
Cargo.lock — 2 changes (generated)

@@ -1968,7 +1968,7 @@ dependencies = [
 
 [[package]]
 name = "secrets-mcp"
-version = "0.3.0"
+version = "0.3.7"
 dependencies = [
  "anyhow",
  "askama",
README.md — 26 changes

@@ -17,7 +17,10 @@ cargo build --release -p secrets-mcp
 
 | Variable | Description |
 |------|------|
-| `SECRETS_DATABASE_URL` | **Required.** PostgreSQL connection string (a dedicated database such as `secrets-mcp` is recommended). |
+| `SECRETS_DATABASE_URL` | **Required.** PostgreSQL connection string (prefer a domain name such as `db.refining.ltd` over connecting to a raw IP). |
+| `SECRETS_DATABASE_SSL_MODE` | Optional, but strongly recommended (treat as required) in production. Prefer `verify-full` (at minimum `verify-ca`) to avoid falling back to weak TLS modes. |
+| `SECRETS_DATABASE_SSL_ROOT_CERT` | Optional. Path to the CA root certificate for private-CA or self-signed chains (e.g. `/etc/secrets/pg-ca.crt`). |
+| `SECRETS_ENV` | Optional. When set to `prod` / `production`, weak PostgreSQL TLS modes (`prefer`, `disable`, `allow`, `require`) are rejected. |
 | `BASE_URL` | External base URL; the OAuth callback is `{BASE_URL}/auth/google/callback`. Defaults to `http://localhost:9315`. |
 | `SECRETS_MCP_BIND` | Listen address, default `127.0.0.1:9315`. Switch to `0.0.0.0:9315` inside a container or when exposing the port directly; behind a reverse proxy it is usually `127.0.0.1:9315`. |
 | `GOOGLE_CLIENT_ID` / `GOOGLE_CLIENT_SECRET` | Optional; without them there is no Google login entry. Read from the environment at runtime; never put them in CI or bake them into the binary. |
@@ -27,9 +30,26 @@ cargo build --release -p secrets-mcp
 cargo run -p secrets-mcp
 ```
 
+Recommended production example (PostgreSQL TLS):
+
+```bash
+SECRETS_DATABASE_URL=postgres://postgres:***@db.refining.ltd:5432/secrets-mcp
+SECRETS_DATABASE_SSL_MODE=verify-full
+SECRETS_DATABASE_SSL_ROOT_CERT=/etc/secrets/pg-ca.crt
+SECRETS_ENV=production
+```
+
 - **Web**: `BASE_URL` (login, dashboard, setting the passphrase, creating API keys).
 - **MCP**: Streamable HTTP base `{BASE_URL}/mcp`; requires the `Authorization: Bearer <api_key>` and `X-Encryption-Key: <hex>` headers (tools that read ciphertext must carry the key).
 
+## PostgreSQL TLS hardening
+
+- Give the database its own domain, e.g. `db.refining.ltd`, and keep the service domain at `secrets.refining.app`.
+- Use a verifiable certificate chain for the database (e.g. Let's Encrypt or a private CA), and make sure the certificate `SAN` includes `db.refining.ltd`.
+- On the PostgreSQL side, restrict the application's source address with `hostssl` rules (e.g. `47.238.146.244/32`) and phase out plaintext `host` access from the public internet.
+- On the application side, prefer `SECRETS_DATABASE_SSL_MODE=verify-full`; use `verify-ca` only as a transitional measure.
+- For step-by-step ops instructions see [`deploy/postgres-tls-hardening.md`](deploy/postgres-tls-hardening.md).
+
 ## MCP and AI workflows (v0.3+)
 
 Entries are logically unique per user by **`(folder, name)`** (database unique index: `user_id + folder + name`). The same name can exist once per folder (e.g. `refining/aliyun` and `ricnsmart/aliyun`).
@@ -37,6 +57,7 @@ cargo run -p secrets-mcp
 - **`secrets_search`**: discover entries (filter by query / folder / type / name); does not require the encryption header.
 - **`secrets_get` / `secrets_update` / `secrets_delete` (by name) / `secrets_history` / `secrets_rollback`**: a bare `name` that is globally unique is a direct hit; if several entries share the name, a disambiguation error is returned and **`folder`** must be supplied.
 - **`secrets_delete`**: with `dry_run=true` the same disambiguation rules apply as for a real delete — a unique match previews one entry, multiple matches error and require `folder`.
+- **Shared-key auto-migrating delete**: deleting a key entry that is still referenced via `metadata.key_ref` triggers an automatic migration: the ciphertext is copied to the first referrer, the remaining referrers' `key_ref` is redirected to the new owner, and the delete then proceeds.
 
 ## Encryption architecture (hybrid E2EE)
 
@@ -147,7 +168,8 @@ flowchart LR
 
 ### PEM sharing (`key_ref`)
 
-The same PEM can be referenced by multiple records (e.g. of type `server`): store the PEM as a **`type=key`** entry and write that key entry's `name` into the other entries' `metadata.key_ref`; on rotation, only the key's record needs updating.
+The same PEM can be referenced by multiple records (e.g. of type `server`): storing the PEM as a **`type=key`** entry is recommended; write the target entry's `name` into the other entries' `metadata.key_ref` (the `folder/name` form is supported for disambiguation); on rotation, only that target record needs updating.
+When a shared key is deleted, the system migrates the references automatically: the ciphertext is copied to the first referrer (single copy), the remaining referrers' `key_ref` is redirected to that new owner, and the original key record is then deleted.
 
 ## Audit log
@@ -1,4 +1,15 @@
-use anyhow::Result;
+use std::path::PathBuf;
+
+use anyhow::{Context, Result};
+use sqlx::postgres::PgSslMode;
+
+#[derive(Debug, Clone)]
+pub struct DatabaseConfig {
+    pub url: String,
+    pub ssl_mode: Option<PgSslMode>,
+    pub ssl_root_cert: Option<PathBuf>,
+    pub enforce_strict_tls: bool,
+}
 
 /// Resolve database URL from environment.
 /// Priority: `SECRETS_DATABASE_URL` env var → error.
@@ -18,3 +29,54 @@ pub fn resolve_db_url(override_url: &str) -> Result<String> {
         Example: SECRETS_DATABASE_URL=postgres://user:pass@host:port/dbname"
     )
 }
+
+fn env_var_non_empty(name: &str) -> Option<String> {
+    std::env::var(name)
+        .ok()
+        .filter(|value| !value.trim().is_empty())
+}
+
+fn parse_ssl_mode_from_env() -> Result<Option<PgSslMode>> {
+    let Some(mode) = env_var_non_empty("SECRETS_DATABASE_SSL_MODE") else {
+        return Ok(None);
+    };
+
+    let parsed = mode.parse::<PgSslMode>().with_context(|| {
+        format!(
+            "Invalid SECRETS_DATABASE_SSL_MODE='{mode}'. Use one of: disable, allow, prefer, require, verify-ca, verify-full."
+        )
+    })?;
+    Ok(Some(parsed))
+}
+
+fn resolve_ssl_root_cert_from_env() -> Result<Option<PathBuf>> {
+    let Some(path) = env_var_non_empty("SECRETS_DATABASE_SSL_ROOT_CERT") else {
+        return Ok(None);
+    };
+    let path = PathBuf::from(path);
+    if !path.exists() {
+        anyhow::bail!(
+            "SECRETS_DATABASE_SSL_ROOT_CERT points to a missing file: {}",
+            path.display()
+        );
+    }
+    Ok(Some(path))
+}
+
+fn is_production_env() -> bool {
+    matches!(
+        env_var_non_empty("SECRETS_ENV")
+            .as_deref()
+            .map(|value| value.to_ascii_lowercase()),
+        Some(value) if value == "prod" || value == "production"
+    )
+}
+
+pub fn resolve_db_config(override_url: &str) -> Result<DatabaseConfig> {
+    Ok(DatabaseConfig {
+        url: resolve_db_url(override_url)?,
+        ssl_mode: parse_ssl_mode_from_env()?,
+        ssl_root_cert: resolve_ssl_root_cert_from_env()?,
+        enforce_strict_tls: is_production_env(),
+    })
+}
@@ -1,14 +1,45 @@
-use anyhow::Result;
+use std::str::FromStr;
+
+use anyhow::{Context, Result};
 use serde_json::Value;
 use sqlx::PgPool;
-use sqlx::postgres::PgPoolOptions;
+use sqlx::postgres::{PgConnectOptions, PgPoolOptions, PgSslMode};
 
-pub async fn create_pool(database_url: &str) -> Result<PgPool> {
+use crate::config::DatabaseConfig;
+
+fn build_connect_options(config: &DatabaseConfig) -> Result<PgConnectOptions> {
+    let mut options = PgConnectOptions::from_str(&config.url)
+        .with_context(|| "failed to parse SECRETS_DATABASE_URL".to_string())?;
+
+    if let Some(mode) = config.ssl_mode {
+        options = options.ssl_mode(mode);
+    }
+    if let Some(path) = &config.ssl_root_cert {
+        options = options.ssl_root_cert(path);
+    }
+
+    if config.enforce_strict_tls
+        && !matches!(
+            options.get_ssl_mode(),
+            PgSslMode::VerifyCa | PgSslMode::VerifyFull
+        )
+    {
+        anyhow::bail!(
+            "Refusing to start in production with weak PostgreSQL TLS mode. \
+             Set SECRETS_DATABASE_SSL_MODE=verify-ca or verify-full."
+        );
+    }
+
+    Ok(options)
+}
+
+pub async fn create_pool(config: &DatabaseConfig) -> Result<PgPool> {
     tracing::debug!("connecting to database");
+    let connect_options = build_connect_options(config)?;
     let pool = PgPoolOptions::new()
         .max_connections(10)
         .acquire_timeout(std::time::Duration::from_secs(5))
-        .connect(database_url)
+        .connect_with(connect_options)
         .await?;
     tracing::debug!("database connection established");
     Ok(pool)
@@ -51,6 +51,34 @@ pub struct EntryRow {
     pub notes: String,
 }
 
+/// Entry row including `name` (used for id-scoped web / service updates).
+#[derive(Debug, sqlx::FromRow)]
+pub struct EntryWriteRow {
+    pub id: Uuid,
+    pub version: i64,
+    pub folder: String,
+    #[sqlx(rename = "type")]
+    pub entry_type: String,
+    pub name: String,
+    pub tags: Vec<String>,
+    pub metadata: Value,
+    pub notes: String,
+}
+
+impl From<&EntryWriteRow> for EntryRow {
+    fn from(r: &EntryWriteRow) -> Self {
+        EntryRow {
+            id: r.id,
+            version: r.version,
+            folder: r.folder.clone(),
+            entry_type: r.entry_type.clone(),
+            tags: r.tags.clone(),
+            metadata: r.metadata.clone(),
+            notes: r.notes.clone(),
+        }
+    }
+}
+
 /// Minimal secret field row fetched before snapshots or cascade deletes.
 #[derive(Debug, sqlx::FromRow)]
 pub struct SecretFieldRow {
@@ -4,7 +4,7 @@ use sqlx::PgPool;
 use uuid::Uuid;
 
 use crate::db;
-use crate::models::{EntryRow, SecretFieldRow};
+use crate::models::{EntryRow, EntryWriteRow, SecretFieldRow};
 
 #[derive(Debug, serde::Serialize)]
 pub struct DeletedEntry {
@@ -17,6 +17,7 @@ pub struct DeletedEntry {
 #[derive(Debug, serde::Serialize)]
 pub struct DeleteResult {
     pub deleted: Vec<DeletedEntry>,
+    pub migrated: Vec<String>,
     pub dry_run: bool,
 }
 
@@ -31,6 +32,233 @@ pub struct DeleteParams<'a> {
     pub user_id: Option<Uuid>,
 }
 
+#[derive(Debug, sqlx::FromRow)]
+struct KeyReferrer {
+    id: Uuid,
+    folder: String,
+    #[sqlx(rename = "type")]
+    entry_type: String,
+    name: String,
+}
+
+fn ref_label(r: &KeyReferrer) -> String {
+    format!("{}/{} ({})", r.folder, r.name, r.entry_type)
+}
+
+fn ref_path(r: &KeyReferrer) -> String {
+    format!("{}/{}", r.folder, r.name)
+}
+
+async fn fetch_key_referrers_pool(
+    pool: &PgPool,
+    key_entry_id: Uuid,
+    key_folder: &str,
+    key_name: &str,
+    user_id: Option<Uuid>,
+) -> Result<Vec<KeyReferrer>> {
+    let qualified = format!("{}/{}", key_folder, key_name);
+    let refs: Vec<KeyReferrer> = if let Some(uid) = user_id {
+        sqlx::query_as(
+            "SELECT id, folder, type, name FROM entries \
+             WHERE user_id = $1 AND id <> $2 \
+             AND (metadata->>'key_ref' = $3 OR metadata->>'key_ref' = $4) \
+             ORDER BY folder, type, name",
+        )
+        .bind(uid)
+        .bind(key_entry_id)
+        .bind(key_name)
+        .bind(&qualified)
+        .fetch_all(pool)
+        .await?
+    } else {
+        sqlx::query_as(
+            "SELECT id, folder, type, name FROM entries \
+             WHERE user_id IS NULL AND id <> $1 \
+             AND (metadata->>'key_ref' = $2 OR metadata->>'key_ref' = $3) \
+             ORDER BY folder, type, name",
+        )
+        .bind(key_entry_id)
+        .bind(key_name)
+        .bind(&qualified)
+        .fetch_all(pool)
+        .await?
+    };
+    Ok(refs)
+}
+
+async fn migrate_key_refs_if_needed(
+    tx: &mut sqlx::Transaction<'_, sqlx::Postgres>,
+    key_row: &EntryRow,
+    key_name: &str,
+    user_id: Option<Uuid>,
+    dry_run: bool,
+) -> Result<Vec<String>> {
+    let qualified = format!("{}/{}", key_row.folder, key_name);
+    let refs: Vec<KeyReferrer> = if let Some(uid) = user_id {
+        sqlx::query_as(
+            "SELECT id, folder, type, name FROM entries \
+             WHERE user_id = $1 AND id <> $2 \
+             AND (metadata->>'key_ref' = $3 OR metadata->>'key_ref' = $4) \
+             ORDER BY folder, type, name",
+        )
+        .bind(uid)
+        .bind(key_row.id)
+        .bind(key_name)
+        .bind(&qualified)
+        .fetch_all(&mut **tx)
+        .await?
+    } else {
+        sqlx::query_as(
+            "SELECT id, folder, type, name FROM entries \
+             WHERE user_id IS NULL AND id <> $1 \
+             AND (metadata->>'key_ref' = $2 OR metadata->>'key_ref' = $3) \
+             ORDER BY folder, type, name",
+        )
+        .bind(key_row.id)
+        .bind(key_name)
+        .bind(&qualified)
+        .fetch_all(&mut **tx)
+        .await?
+    };
+
+    if refs.is_empty() {
+        return Ok(vec![]);
+    }
+    if dry_run {
+        return Ok(refs.iter().map(ref_label).collect());
+    }
+
+    let owner = &refs[0];
+    let owner_path = ref_path(owner);
+    let key_fields: Vec<SecretFieldRow> =
+        sqlx::query_as("SELECT id, field_name, encrypted FROM secrets WHERE entry_id = $1")
+            .bind(key_row.id)
+            .fetch_all(&mut **tx)
+            .await?;
+
+    for f in &key_fields {
+        sqlx::query(
+            "INSERT INTO secrets (entry_id, field_name, encrypted) VALUES ($1, $2, $3) \
+             ON CONFLICT (entry_id, field_name) DO NOTHING",
+        )
+        .bind(owner.id)
+        .bind(&f.field_name)
+        .bind(&f.encrypted)
+        .execute(&mut **tx)
+        .await?;
+    }
+
+    sqlx::query(
+        "UPDATE entries SET metadata = metadata - 'key_ref', \
+         version = version + 1, updated_at = NOW() WHERE id = $1",
+    )
+    .bind(owner.id)
+    .execute(&mut **tx)
+    .await?;
+
+    crate::audit::log_tx(
+        tx,
+        user_id,
+        "key_migrate",
+        &owner.folder,
+        &owner.entry_type,
+        &owner.name,
+        json!({
+            "from_key": format!("{}/{}", key_row.folder, key_name),
+            "role": "new_owner",
+            "redirect_target": owner_path,
+        }),
+    )
+    .await;
+
+    for r in refs.iter().skip(1) {
+        sqlx::query(
+            "UPDATE entries SET metadata = jsonb_set(metadata, '{key_ref}', to_jsonb($2::text), true), \
+             version = version + 1, updated_at = NOW() WHERE id = $1",
+        )
+        .bind(r.id)
+        .bind(&owner_path)
+        .execute(&mut **tx)
+        .await?;
+
+        crate::audit::log_tx(
+            tx,
+            user_id,
+            "key_migrate",
+            &r.folder,
+            &r.entry_type,
+            &r.name,
+            json!({
+                "from_key": format!("{}/{}", key_row.folder, key_name),
+                "role": "redirected_ref",
+                "redirect_to": owner_path,
+            }),
+        )
+        .await;
+    }
+
+    Ok(refs.iter().map(ref_label).collect())
+}
+
+/// Delete a single entry by id (multi-tenant: `user_id` must match). Cascades `secrets` via FK.
+pub async fn delete_by_id(pool: &PgPool, entry_id: Uuid, user_id: Uuid) -> Result<DeleteResult> {
+    let mut tx = pool.begin().await?;
+    let row: Option<EntryWriteRow> = sqlx::query_as(
+        "SELECT id, version, folder, type, name, tags, metadata, notes FROM entries \
+         WHERE id = $1 AND user_id = $2 FOR UPDATE",
+    )
+    .bind(entry_id)
+    .bind(user_id)
+    .fetch_optional(&mut *tx)
+    .await?;
+
+    let row = match row {
+        Some(r) => r,
+        None => {
+            tx.rollback().await?;
+            anyhow::bail!("Entry not found");
+        }
+    };
+
+    let folder = row.folder.clone();
+    let entry_type = row.entry_type.clone();
+    let name = row.name.clone();
+    let entry_row: EntryRow = (&row).into();
+    let migrated =
+        migrate_key_refs_if_needed(&mut tx, &entry_row, &name, Some(user_id), false).await?;
+
+    snapshot_and_delete(
+        &mut tx,
+        &folder,
+        &entry_type,
+        &name,
+        &entry_row,
+        Some(user_id),
+    )
+    .await?;
+    crate::audit::log_tx(
+        &mut tx,
+        Some(user_id),
+        "delete",
+        &folder,
+        &entry_type,
+        &name,
+        json!({ "source": "web", "entry_id": entry_id }),
+    )
+    .await;
+    tx.commit().await?;
+
+    Ok(DeleteResult {
+        deleted: vec![DeletedEntry {
+            name,
+            folder,
+            entry_type,
+        }],
+        migrated,
+        dry_run: false,
+    })
+}
+
 pub async fn run(pool: &PgPool, params: DeleteParams<'_>) -> Result<DeleteResult> {
     match params.name {
         Some(name) => delete_one(pool, name, params.folder, params.dry_run, params.user_id).await,
@@ -66,6 +294,7 @@ async fn delete_one(
     // - 2+ matches → disambiguation error (same as non-dry-run)
     #[derive(sqlx::FromRow)]
     struct DryRunRow {
+        id: Uuid,
         folder: String,
         #[sqlx(rename = "type")]
         entry_type: String,
@@ -74,7 +303,7 @@ async fn delete_one(
     let rows: Vec<DryRunRow> = if let Some(uid) = user_id {
         if let Some(f) = folder {
             sqlx::query_as(
-                "SELECT folder, type FROM entries WHERE user_id = $1 AND folder = $2 AND name = $3",
+                "SELECT id, folder, type FROM entries WHERE user_id = $1 AND folder = $2 AND name = $3",
             )
             .bind(uid)
             .bind(f)
@@ -82,7 +311,9 @@ async fn delete_one(
             .fetch_all(pool)
             .await?
         } else {
-            sqlx::query_as("SELECT folder, type FROM entries WHERE user_id = $1 AND name = $2")
+            sqlx::query_as(
+                "SELECT id, folder, type FROM entries WHERE user_id = $1 AND name = $2",
+            )
             .bind(uid)
             .bind(name)
             .fetch_all(pool)
@@ -90,14 +321,16 @@ async fn delete_one(
         }
     } else if let Some(f) = folder {
         sqlx::query_as(
-            "SELECT folder, type FROM entries WHERE user_id IS NULL AND folder = $1 AND name = $2",
+            "SELECT id, folder, type FROM entries WHERE user_id IS NULL AND folder = $1 AND name = $2",
        )
        .bind(f)
        .bind(name)
        .fetch_all(pool)
        .await?
    } else {
-        sqlx::query_as("SELECT folder, type FROM entries WHERE user_id IS NULL AND name = $1")
+        sqlx::query_as(
+            "SELECT id, folder, type FROM entries WHERE user_id IS NULL AND name = $1",
+        )
        .bind(name)
        .fetch_all(pool)
        .await?
@@ -106,16 +339,20 @@ async fn delete_one(
     return match rows.len() {
         0 => Ok(DeleteResult {
             deleted: vec![],
+            migrated: vec![],
             dry_run: true,
         }),
         1 => {
             let row = rows.into_iter().next().unwrap();
+            let refs =
+                fetch_key_referrers_pool(pool, row.id, &row.folder, name, user_id).await?;
             Ok(DeleteResult {
                 deleted: vec![DeletedEntry {
                     name: name.to_string(),
                     folder: row.folder,
                     entry_type: row.entry_type,
                 }],
+                migrated: refs.iter().map(ref_label).collect(),
                 dry_run: true,
             })
         }
@@ -180,6 +417,7 @@ async fn delete_one(
         tx.rollback().await?;
         return Ok(DeleteResult {
             deleted: vec![],
+            migrated: vec![],
             dry_run: false,
         });
     }
@@ -199,6 +437,7 @@ async fn delete_one(
 
     let folder = row.folder.clone();
     let entry_type = row.entry_type.clone();
+    let migrated = migrate_key_refs_if_needed(&mut tx, &row, name, user_id, false).await?;
     snapshot_and_delete(&mut tx, &folder, &entry_type, name, &row, user_id).await?;
     crate::audit::log_tx(
         &mut tx,
@@ -218,6 +457,7 @@ async fn delete_one(
             folder,
             entry_type,
         }],
+        migrated,
         dry_run: false,
     })
 }
@@ -278,6 +518,12 @@ async fn delete_bulk(
     let rows = q.fetch_all(pool).await?;
 
     if dry_run {
+        let mut migrated: Vec<String> = Vec::new();
+        for row in &rows {
+            let refs =
+                fetch_key_referrers_pool(pool, row.id, &row.folder, &row.name, user_id).await?;
+            migrated.extend(refs.iter().map(ref_label));
+        }
         let deleted = rows
             .iter()
             .map(|r| DeletedEntry {
@@ -288,11 +534,13 @@ async fn delete_bulk(
             .collect();
         return Ok(DeleteResult {
             deleted,
+            migrated,
             dry_run: true,
         });
     }
 
     let mut deleted = Vec::with_capacity(rows.len());
+    let mut migrated: Vec<String> = Vec::new();
     for row in &rows {
         let entry_row = EntryRow {
             id: row.id,
@@ -304,6 +552,8 @@ async fn delete_bulk(
             notes: row.notes.clone(),
         };
         let mut tx = pool.begin().await?;
+        let m = migrate_key_refs_if_needed(&mut tx, &entry_row, &row.name, user_id, false).await?;
+        migrated.extend(m);
         snapshot_and_delete(
             &mut tx,
             &row.folder,
@@ -333,6 +583,7 @@ async fn delete_bulk(
 
     Ok(DeleteResult {
         deleted,
+        migrated,
         dry_run: false,
     })
 }
@@ -395,3 +646,264 @@ async fn snapshot_and_delete(
 
     Ok(())
 }
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use serde_json::json;
+
+    async fn maybe_test_pool() -> Option<PgPool> {
+        let Ok(url) = std::env::var("SECRETS_DATABASE_URL") else {
+            eprintln!("skip delete migration tests: SECRETS_DATABASE_URL is not set");
+            return None;
+        };
+        let Ok(pool) = PgPool::connect(&url).await else {
+            eprintln!("skip delete migration tests: cannot connect to database");
+            return None;
+        };
+        if let Err(e) = crate::db::migrate(&pool).await {
+            eprintln!("skip delete migration tests: migrate failed: {e}");
+            return None;
+        }
+        Some(pool)
+    }
+
+    async fn insert_entry(
+        pool: &PgPool,
+        id: Uuid,
+        user_id: Uuid,
+        folder: &str,
+        entry_type: &str,
+        name: &str,
+        metadata: serde_json::Value,
+    ) -> Result<()> {
+        sqlx::query(
+            "INSERT INTO entries (id, user_id, folder, type, name, notes, tags, metadata, version) \
+             VALUES ($1, $2, $3, $4, $5, '', ARRAY[]::text[], $6, 1)",
+        )
+        .bind(id)
+        .bind(user_id)
+        .bind(folder)
+        .bind(entry_type)
+        .bind(name)
+        .bind(metadata)
+        .execute(pool)
+        .await?;
+        Ok(())
+    }
+
+    #[tokio::test]
+    async fn delete_shared_key_dry_run_reports_migration_without_writes() -> Result<()> {
+        let Some(pool) = maybe_test_pool().await else {
+            return Ok(());
+        };
+
+        let user_id = Uuid::from_u128(rand::random());
+        let key_id = Uuid::from_u128(rand::random());
+        let ref_a = Uuid::from_u128(rand::random());
+        let ref_b = Uuid::from_u128(rand::random());
+
+        insert_entry(
+            &pool,
+            key_id,
+            user_id,
+            "kfolder",
+            "key",
+            "shared-key",
+            json!({}),
+        )
+        .await?;
+        sqlx::query("INSERT INTO secrets (entry_id, field_name, encrypted) VALUES ($1, $2, $3)")
+            .bind(key_id)
+            .bind("pem")
+            .bind(vec![1_u8, 2, 3])
+            .execute(&pool)
+            .await?;
+
+        insert_entry(
+            &pool,
+            ref_a,
+            user_id,
+            "afolder",
+            "server",
+            "srv-a",
+            json!({"key_ref":"kfolder/shared-key"}),
+        )
+        .await?;
+        insert_entry(
+            &pool,
+            ref_b,
+            user_id,
+            "bfolder",
+            "server",
+            "srv-b",
+            json!({"key_ref":"shared-key"}),
+        )
+        .await?;
+
+        let result = run(
+            &pool,
+            DeleteParams {
+                name: Some("shared-key"),
+                folder: Some("kfolder"),
+                entry_type: None,
+                dry_run: true,
+                user_id: Some(user_id),
+            },
+        )
+        .await?;
+
+        assert!(result.dry_run);
+        assert_eq!(result.deleted.len(), 1);
+        assert_eq!(result.migrated.len(), 2);
+
+        let key_exists: bool = sqlx::query_scalar(
+            "SELECT EXISTS(SELECT 1 FROM entries WHERE id = $1 AND user_id = $2)",
+        )
+        .bind(key_id)
+        .bind(user_id)
+        .fetch_one(&pool)
+        .await?;
+        assert!(key_exists);
+
+        let ref_a_key_ref: Option<String> =
+            sqlx::query_scalar("SELECT metadata->>'key_ref' FROM entries WHERE id = $1")
+                .bind(ref_a)
+                .fetch_one(&pool)
+                .await?;
+        let ref_b_key_ref: Option<String> =
+            sqlx::query_scalar("SELECT metadata->>'key_ref' FROM entries WHERE id = $1")
+                .bind(ref_b)
+                .fetch_one(&pool)
+                .await?;
+        assert_eq!(ref_a_key_ref.as_deref(), Some("kfolder/shared-key"));
+        assert_eq!(ref_b_key_ref.as_deref(), Some("shared-key"));
+
+        sqlx::query("DELETE FROM entries WHERE user_id = $1")
+            .bind(user_id)
+            .execute(&pool)
+            .await?;
+        Ok(())
+    }
+
+    #[tokio::test]
+    async fn delete_shared_key_auto_migrates_single_copy_and_redirects_refs() -> Result<()> {
+        let Some(pool) = maybe_test_pool().await else {
+            return Ok(());
+        };
+
+        let user_id = Uuid::from_u128(rand::random());
+        let key_id = Uuid::from_u128(rand::random());
+        let ref_a = Uuid::from_u128(rand::random());
+        let ref_b = Uuid::from_u128(rand::random());
+        let ref_c = Uuid::from_u128(rand::random());
+
+        insert_entry(
+            &pool,
+            key_id,
+            user_id,
+            "kfolder",
+            "key",
+            "shared-key",
+            json!({}),
+        )
+        .await?;
+        sqlx::query("INSERT INTO secrets (entry_id, field_name, encrypted) VALUES ($1, $2, $3)")
+            .bind(key_id)
+            .bind("pem")
+            .bind(vec![7_u8, 8, 9])
+            .execute(&pool)
+            .await?;
+
+        // owner candidate (sorted first by folder)
+        insert_entry(
+            &pool,
+            ref_a,
+            user_id,
+            "afolder",
+            "server",
+            "srv-a",
+            json!({"key_ref":"kfolder/shared-key"}),
+        )
+        .await?;
+        insert_entry(
+            &pool,
+            ref_b,
+            user_id,
+            "bfolder",
+            "server",
+            "srv-b",
+            json!({"key_ref":"shared-key"}),
+        )
+        .await?;
+        insert_entry(
+            &pool,
+            ref_c,
+            user_id,
+            "cfolder",
+            "service",
+            "svc-c",
+            json!({"key_ref":"kfolder/shared-key"}),
+        )
+        .await?;
+
+        let result = run(
+            &pool,
+            DeleteParams {
+                name: Some("shared-key"),
+                folder: Some("kfolder"),
+                entry_type: None,
+                dry_run: false,
+                user_id: Some(user_id),
+            },
+        )
+        .await?;
+
+        assert!(!result.dry_run);
+        assert_eq!(result.deleted.len(), 1);
+        assert_eq!(result.migrated.len(), 3);
+
+        let key_exists: bool = sqlx::query_scalar(
+            "SELECT EXISTS(SELECT 1 FROM entries WHERE id = $1 AND user_id = $2)",
+        )
+        .bind(key_id)
+        .bind(user_id)
+        .fetch_one(&pool)
+        .await?;
+        assert!(!key_exists);
+
+        let owner_key_ref: Option<String> =
+            sqlx::query_scalar("SELECT metadata->>'key_ref' FROM entries WHERE id = $1")
+                .bind(ref_a)
+                .fetch_one(&pool)
+                .await?;
+        let ref_b_key_ref: Option<String> =
+            sqlx::query_scalar("SELECT metadata->>'key_ref' FROM entries WHERE id = $1")
+                .bind(ref_b)
+                .fetch_one(&pool)
|
||||||
|
sqlx::query("DELETE FROM entries WHERE user_id = $1")
|
||||||
|
.bind(user_id)
|
||||||
|
.execute(&pool)
|
||||||
|
.await?;
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn delete_shared_key_auto_migrates_single_copy_and_redirects_refs() -> Result<()> {
|
||||||
|
let Some(pool) = maybe_test_pool().await else {
|
||||||
|
return Ok(());
|
||||||
|
};
|
||||||
|
|
||||||
|
let user_id = Uuid::from_u128(rand::random());
|
||||||
|
let key_id = Uuid::from_u128(rand::random());
|
||||||
|
let ref_a = Uuid::from_u128(rand::random());
|
||||||
|
let ref_b = Uuid::from_u128(rand::random());
|
||||||
|
let ref_c = Uuid::from_u128(rand::random());
|
||||||
|
|
||||||
|
insert_entry(
|
||||||
|
&pool,
|
||||||
|
key_id,
|
||||||
|
user_id,
|
||||||
|
"kfolder",
|
||||||
|
"key",
|
||||||
|
"shared-key",
|
||||||
|
json!({}),
|
||||||
|
)
|
||||||
|
.await?;
|
||||||
|
sqlx::query("INSERT INTO secrets (entry_id, field_name, encrypted) VALUES ($1, $2, $3)")
|
||||||
|
.bind(key_id)
|
||||||
|
.bind("pem")
|
||||||
|
.bind(vec![7_u8, 8, 9])
|
||||||
|
.execute(&pool)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
// owner candidate (sorted first by folder)
|
||||||
|
insert_entry(
|
||||||
|
&pool,
|
||||||
|
ref_a,
|
||||||
|
user_id,
|
||||||
|
"afolder",
|
||||||
|
"server",
|
||||||
|
"srv-a",
|
||||||
|
json!({"key_ref":"kfolder/shared-key"}),
|
||||||
|
)
|
||||||
|
.await?;
|
||||||
|
insert_entry(
|
||||||
|
&pool,
|
||||||
|
ref_b,
|
||||||
|
user_id,
|
||||||
|
"bfolder",
|
||||||
|
"server",
|
||||||
|
"srv-b",
|
||||||
|
json!({"key_ref":"shared-key"}),
|
||||||
|
)
|
||||||
|
.await?;
|
||||||
|
insert_entry(
|
||||||
|
&pool,
|
||||||
|
ref_c,
|
||||||
|
user_id,
|
||||||
|
"cfolder",
|
||||||
|
"service",
|
||||||
|
"svc-c",
|
||||||
|
json!({"key_ref":"kfolder/shared-key"}),
|
||||||
|
)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
let result = run(
|
||||||
|
&pool,
|
||||||
|
DeleteParams {
|
||||||
|
name: Some("shared-key"),
|
||||||
|
folder: Some("kfolder"),
|
||||||
|
entry_type: None,
|
||||||
|
dry_run: false,
|
||||||
|
user_id: Some(user_id),
|
||||||
|
},
|
||||||
|
)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
assert!(!result.dry_run);
|
||||||
|
assert_eq!(result.deleted.len(), 1);
|
||||||
|
assert_eq!(result.migrated.len(), 3);
|
||||||
|
|
||||||
|
let key_exists: bool = sqlx::query_scalar(
|
||||||
|
"SELECT EXISTS(SELECT 1 FROM entries WHERE id = $1 AND user_id = $2)",
|
||||||
|
)
|
||||||
|
.bind(key_id)
|
||||||
|
.bind(user_id)
|
||||||
|
.fetch_one(&pool)
|
||||||
|
.await?;
|
||||||
|
assert!(!key_exists);
|
||||||
|
|
||||||
|
let owner_key_ref: Option<String> =
|
||||||
|
sqlx::query_scalar("SELECT metadata->>'key_ref' FROM entries WHERE id = $1")
|
||||||
|
.bind(ref_a)
|
||||||
|
.fetch_one(&pool)
|
||||||
|
.await?;
|
||||||
|
let ref_b_key_ref: Option<String> =
|
||||||
|
sqlx::query_scalar("SELECT metadata->>'key_ref' FROM entries WHERE id = $1")
|
||||||
|
.bind(ref_b)
|
||||||
|
.fetch_one(&pool)
|
||||||
|
.await?;
|
||||||
|
let ref_c_key_ref: Option<String> =
|
||||||
|
sqlx::query_scalar("SELECT metadata->>'key_ref' FROM entries WHERE id = $1")
|
||||||
|
.bind(ref_c)
|
||||||
|
.fetch_one(&pool)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
assert_eq!(owner_key_ref, None);
|
||||||
|
assert_eq!(ref_b_key_ref.as_deref(), Some("afolder/srv-a"));
|
||||||
|
assert_eq!(ref_c_key_ref.as_deref(), Some("afolder/srv-a"));
|
||||||
|
|
||||||
|
let owner_has_copied: bool = sqlx::query_scalar(
|
||||||
|
"SELECT EXISTS(SELECT 1 FROM secrets WHERE entry_id = $1 AND field_name = 'pem')",
|
||||||
|
)
|
||||||
|
.bind(ref_a)
|
||||||
|
.fetch_one(&pool)
|
||||||
|
.await?;
|
||||||
|
assert!(owner_has_copied);
|
||||||
|
|
||||||
|
sqlx::query("DELETE FROM entries WHERE user_id = $1")
|
||||||
|
.bind(user_id)
|
||||||
|
.execute(&pool)
|
||||||
|
.await?;
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
}
|
||||||
|
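The expectations in `delete_shared_key_auto_migrates_single_copy_and_redirects_refs` hinge on which referrer becomes the new owner of the migrated PEM: the test comment marks `afolder/srv-a` as the candidate that sorts first by folder. A standalone sketch of that selection rule (the tuple layout and function name are illustrative, not the crate's actual API):

```rust
/// Pick the entry that should receive the single migrated copy:
/// the referrer that sorts first by (folder, name).
fn pick_owner(referrers: &[(&'static str, &'static str)]) -> Option<(&'static str, &'static str)> {
    referrers
        .iter()
        .copied()
        .min_by_key(|&(folder, name)| (folder, name))
}

fn main() {
    let refs = [
        ("bfolder", "srv-b"),
        ("afolder", "srv-a"),
        ("cfolder", "svc-c"),
    ];
    // "afolder" sorts first, matching the test's expected redirect target.
    assert_eq!(pick_owner(&refs), Some(("afolder", "srv-a")));
    assert_eq!(pick_owner(&[]), None);
    println!("ok");
}
```

Under this rule the remaining referrers' `key_ref` values are rewritten to point at the chosen owner, which is why `srv-b` and `svc-c` end up with `afolder/srv-a` in the assertions above.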
@@ -26,7 +26,8 @@ pub async fn build_env_map(
     let mut combined: HashMap<String, String> = HashMap::new();

     for entry in &entries {
-        let entry_map = build_entry_env_map(pool, entry, only_fields, prefix, master_key).await?;
+        let entry_map =
+            build_entry_env_map(pool, entry, only_fields, prefix, master_key, user_id).await?;
         combined.extend(entry_map);
     }

@@ -39,6 +40,7 @@ async fn build_entry_env_map(
     only_fields: &[String],
     prefix: &str,
     master_key: &[u8; 32],
+    user_id: Option<Uuid>,
 ) -> Result<HashMap<String, String>> {
     let entry_ids = vec![entry.id];
     let secrets_map = fetch_secrets_for_entries(pool, &entry_ids).await?;

@@ -66,10 +68,23 @@ async fn build_entry_env_map(
         map.insert(key, json_to_env_string(&decrypted));
     }

-    // Resolve key_ref
+    // Resolve key_ref. Supported formats: "name" or "folder/name".
     if let Some(key_ref) = entry.metadata.get("key_ref").and_then(|v| v.as_str()) {
+        let (ref_folder, ref_name) = if let Some((f, n)) = key_ref.split_once('/') {
+            (Some(f), n)
+        } else {
+            (None, key_ref)
+        };
         let key_entries =
-            fetch_entries(pool, None, Some("key"), Some(key_ref), &[], None, None).await?;
+            fetch_entries(pool, ref_folder, None, Some(ref_name), &[], None, user_id).await?;

+        if key_entries.len() > 1 {
+            anyhow::bail!(
+                "key_ref '{}' matched {} entries; qualify with folder/name to resolve the ambiguity",
+                key_ref,
+                key_entries.len()
+            );
+        }
+
         if let Some(key_entry) = key_entries.first() {
             let key_ids = vec![key_entry.id];

@@ -87,7 +102,7 @@ async fn build_entry_env_map(
             map.insert(key_var, json_to_env_string(&decrypted));
         }
     } else {
-        tracing::warn!(key_ref, "key_ref target not found");
+        tracing::warn!(key_ref, ?user_id, "key_ref target not found");
     }
 }
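The resolution added to `build_entry_env_map` accepts either a bare name or a `folder/name` pair via `split_once('/')`. That parsing step in isolation:

```rust
/// Split a key_ref into (optional folder, name).
/// "shared-key"         -> (None, "shared-key")
/// "kfolder/shared-key" -> (Some("kfolder"), "shared-key")
fn parse_key_ref(key_ref: &str) -> (Option<&str>, &str) {
    match key_ref.split_once('/') {
        Some((folder, name)) => (Some(folder), name),
        None => (None, key_ref),
    }
}

fn main() {
    assert_eq!(parse_key_ref("shared-key"), (None, "shared-key"));
    assert_eq!(
        parse_key_ref("kfolder/shared-key"),
        (Some("kfolder"), "shared-key")
    );
    // Only the first '/' splits; any remainder stays in the name part.
    assert_eq!(parse_key_ref("a/b/c"), (Some("a"), "b/c"));
    println!("ok");
}
```

Because `split_once` only splits on the first separator, a bare name stays ambiguous across folders, which is exactly the case the new `key_entries.len() > 1` bail guards against.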
@@ -5,7 +5,7 @@ use std::collections::HashMap;
 use uuid::Uuid;

 use crate::crypto;
-use crate::service::search::{fetch_secrets_for_entries, resolve_entry};
+use crate::service::search::{fetch_secrets_for_entries, resolve_entry, resolve_entry_by_id};

 /// Decrypt a single named field from an entry.
 /// `folder` is optional; if omitted and multiple entries share the name, an error is returned.

@@ -53,3 +53,52 @@ pub async fn get_all_secrets(
     }
     Ok(map)
 }
+
+/// Decrypt a single named field from an entry, located by its UUID.
+pub async fn get_secret_field_by_id(
+    pool: &PgPool,
+    entry_id: Uuid,
+    field_name: &str,
+    master_key: &[u8; 32],
+    user_id: Option<Uuid>,
+) -> Result<Value> {
+    resolve_entry_by_id(pool, entry_id, user_id)
+        .await
+        .map_err(|_| anyhow::anyhow!("Entry with id '{}' not found", entry_id))?;
+
+    let entry_ids = vec![entry_id];
+    let secrets_map = fetch_secrets_for_entries(pool, &entry_ids).await?;
+    let fields = secrets_map.get(&entry_id).map(Vec::as_slice).unwrap_or(&[]);
+
+    let field = fields
+        .iter()
+        .find(|f| f.field_name == field_name)
+        .ok_or_else(|| anyhow::anyhow!("Secret field '{}' not found", field_name))?;
+
+    crypto::decrypt_json(master_key, &field.encrypted)
+}
+
+/// Decrypt all secret fields from an entry, located by its UUID.
+/// Returns a map field_name → decrypted Value.
+pub async fn get_all_secrets_by_id(
+    pool: &PgPool,
+    entry_id: Uuid,
+    master_key: &[u8; 32],
+    user_id: Option<Uuid>,
+) -> Result<HashMap<String, Value>> {
+    // Validate entry exists (and that it belongs to the requesting user)
+    resolve_entry_by_id(pool, entry_id, user_id)
+        .await
+        .map_err(|_| anyhow::anyhow!("Entry with id '{}' not found", entry_id))?;
+
+    let entry_ids = vec![entry_id];
+    let secrets_map = fetch_secrets_for_entries(pool, &entry_ids).await?;
+    let fields = secrets_map.get(&entry_id).map(Vec::as_slice).unwrap_or(&[]);
+
+    let mut map = HashMap::new();
+    for f in fields {
+        let decrypted = crypto::decrypt_json(master_key, &f.encrypted)?;
+        map.insert(f.field_name.clone(), decrypted);
+    }
+    Ok(map)
+}
@@ -27,49 +27,46 @@ pub struct SearchResult {
     pub secret_schemas: HashMap<Uuid, Vec<SecretField>>,
 }

-pub async fn run(pool: &PgPool, params: SearchParams<'_>) -> Result<SearchResult> {
-    let entries = fetch_entries_paged(pool, &params).await?;
-    let entry_ids: Vec<Uuid> = entries.iter().map(|e| e.id).collect();
-    let secret_schemas = if !entry_ids.is_empty() {
-        fetch_secret_schemas(pool, &entry_ids).await?
-    } else {
-        HashMap::new()
-    };
-    Ok(SearchResult {
-        entries,
-        secret_schemas,
-    })
-}
-
-/// Fetch entries matching the given filters — returns all matching entries up to FETCH_ALL_LIMIT.
-pub async fn fetch_entries(
-    pool: &PgPool,
-    folder: Option<&str>,
-    entry_type: Option<&str>,
-    name: Option<&str>,
-    tags: &[String],
-    query: Option<&str>,
-    user_id: Option<Uuid>,
-) -> Result<Vec<Entry>> {
-    let params = SearchParams {
-        folder,
-        entry_type,
-        name,
-        tags,
-        query,
-        sort: "name",
-        limit: FETCH_ALL_LIMIT,
-        offset: 0,
-        user_id,
-    };
+/// List `entries` rows matching params (paged, ordered per `params.sort`).
+/// Does not read the `secrets` table.
+pub async fn list_entries(pool: &PgPool, params: SearchParams<'_>) -> Result<Vec<Entry>> {
     fetch_entries_paged(pool, &params).await
 }

-async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<Entry>> {
+/// Count `entries` rows matching the same filters as [`list_entries`] (ignores `sort` / `limit` / `offset`).
+/// Does not read the `secrets` table.
+pub async fn count_entries(pool: &PgPool, a: &SearchParams<'_>) -> Result<i64> {
+    let (where_clause, _) = entry_where_clause_and_next_idx(a);
+    let sql = format!("SELECT COUNT(*)::bigint FROM entries {where_clause}");
+    let mut q = sqlx::query_scalar::<_, i64>(&sql);
+    if let Some(uid) = a.user_id {
+        q = q.bind(uid);
+    }
+    if let Some(v) = a.folder {
+        q = q.bind(v);
+    }
+    if let Some(v) = a.entry_type {
+        q = q.bind(v);
+    }
+    if let Some(v) = a.name {
+        q = q.bind(v);
+    }
+    for tag in a.tags {
+        q = q.bind(tag);
+    }
+    if let Some(v) = a.query {
+        let pattern = format!("%{}%", v.replace('%', "\\%").replace('_', "\\_"));
+        q = q.bind(pattern);
+    }
+    let n = q.fetch_one(pool).await?;
+    Ok(n)
+}
+
+/// Shared WHERE clause and the next `$n` index (for LIMIT/OFFSET in paged queries).
+fn entry_where_clause_and_next_idx(a: &SearchParams<'_>) -> (String, i32) {
     let mut conditions: Vec<String> = Vec::new();
     let mut idx: i32 = 1;

-    // user_id filtering — always comes first when present
     if a.user_id.is_some() {
         conditions.push(format!("user_id = ${}", idx));
         idx += 1;

@@ -115,6 +112,55 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
         idx += 1;
     }

+    let where_clause = if conditions.is_empty() {
+        String::new()
+    } else {
+        format!("WHERE {}", conditions.join(" AND "))
+    };
+    (where_clause, idx)
+}
+
+pub async fn run(pool: &PgPool, params: SearchParams<'_>) -> Result<SearchResult> {
+    let entries = fetch_entries_paged(pool, &params).await?;
+    let entry_ids: Vec<Uuid> = entries.iter().map(|e| e.id).collect();
+    let secret_schemas = if !entry_ids.is_empty() {
+        fetch_secret_schemas(pool, &entry_ids).await?
+    } else {
+        HashMap::new()
+    };
+    Ok(SearchResult {
+        entries,
+        secret_schemas,
+    })
+}
+
+/// Fetch entries matching the given filters — returns all matching entries up to FETCH_ALL_LIMIT.
+pub async fn fetch_entries(
+    pool: &PgPool,
+    folder: Option<&str>,
+    entry_type: Option<&str>,
+    name: Option<&str>,
+    tags: &[String],
+    query: Option<&str>,
+    user_id: Option<Uuid>,
+) -> Result<Vec<Entry>> {
+    let params = SearchParams {
+        folder,
+        entry_type,
+        name,
+        tags,
+        query,
+        sort: "name",
+        limit: FETCH_ALL_LIMIT,
+        offset: 0,
+        user_id,
+    };
+    list_entries(pool, params).await
+}
+
+async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<Entry>> {
+    let (where_clause, idx) = entry_where_clause_and_next_idx(a);
+
     let order = match a.sort {
         "updated" => "updated_at DESC",
         "created" => "created_at DESC",

@@ -122,14 +168,7 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
     };

     let limit_idx = idx;
-    idx += 1;
-    let offset_idx = idx;
-
-    let where_clause = if conditions.is_empty() {
-        String::new()
-    } else {
-        format!("WHERE {}", conditions.join(" AND "))
-    };
+    let offset_idx = idx + 1;

     let sql = format!(
         "SELECT id, user_id, folder, type, name, notes, tags, metadata, version, \

@@ -138,7 +177,6 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
     );

     let mut q = sqlx::query_as::<_, EntryRaw>(&sql);
-
     if let Some(uid) = a.user_id {
         q = q.bind(uid);
     }

@@ -208,6 +246,36 @@ pub async fn fetch_secrets_for_entries(
     Ok(map)
 }

+/// Resolve exactly one entry by its UUID primary key.
+///
+/// Returns an error if the entry does not exist or does not belong to the given user.
+pub async fn resolve_entry_by_id(
+    pool: &PgPool,
+    id: Uuid,
+    user_id: Option<Uuid>,
+) -> Result<crate::models::Entry> {
+    let row: Option<EntryRaw> = if let Some(uid) = user_id {
+        sqlx::query_as(
+            "SELECT id, user_id, folder, type, name, notes, tags, metadata, version, \
+             created_at, updated_at FROM entries WHERE id = $1 AND user_id = $2",
+        )
+        .bind(id)
+        .bind(uid)
+        .fetch_optional(pool)
+        .await?
+    } else {
+        sqlx::query_as(
+            "SELECT id, user_id, folder, type, name, notes, tags, metadata, version, \
+             created_at, updated_at FROM entries WHERE id = $1 AND user_id IS NULL",
+        )
+        .bind(id)
+        .fetch_optional(pool)
+        .await?
+    };
+    row.map(Entry::from)
+        .ok_or_else(|| anyhow::anyhow!("Entry with id '{}' not found", id))
+}
+
 /// Resolve exactly one entry by name, with optional folder for disambiguation.
 ///
 /// - If `folder` is provided: exact `(folder, name)` match.
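`entry_where_clause_and_next_idx` numbers PostgreSQL placeholders sequentially so `fetch_entries_paged` can keep binding `LIMIT`/`OFFSET` after the filters, while `count_entries` discards the next index. A reduced standalone version of the same counting scheme (only two filters kept; names are illustrative):

```rust
/// Build "WHERE ..." with sequential $n placeholders; return the clause
/// and the next free index (so LIMIT/OFFSET can bind as $idx and $idx+1).
fn where_clause(user_id: Option<&str>, folder: Option<&str>) -> (String, i32) {
    let mut conditions: Vec<String> = Vec::new();
    let mut idx: i32 = 1;
    if user_id.is_some() {
        conditions.push(format!("user_id = ${idx}"));
        idx += 1;
    }
    if folder.is_some() {
        conditions.push(format!("folder = ${idx}"));
        idx += 1;
    }
    let clause = if conditions.is_empty() {
        String::new()
    } else {
        format!("WHERE {}", conditions.join(" AND "))
    };
    (clause, idx)
}

fn main() {
    assert_eq!(where_clause(None, None), (String::new(), 1));
    let (c, next) = where_clause(Some("u"), Some("f"));
    assert_eq!(c, "WHERE user_id = $1 AND folder = $2");
    // LIMIT would bind as $3 and OFFSET as $4, matching limit_idx / offset_idx.
    assert_eq!(next, 3);
    println!("ok");
}
```

The key invariant is that the caller binds values in exactly the order the conditions were pushed; the shared helper keeps the list and count queries from drifting apart.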
@@ -5,7 +5,7 @@ use uuid::Uuid;

 use crate::crypto;
 use crate::db;
-use crate::models::EntryRow;
+use crate::models::{EntryRow, EntryWriteRow};
 use crate::service::add::{
     collect_field_paths, collect_key_paths, flatten_json_fields, insert_path, parse_key_path,
     parse_kv, remove_path,

@@ -306,3 +306,118 @@ pub async fn run(
         remove_secrets: remove_secret_keys,
     })
 }
+
+/// Update non-sensitive entry columns by primary key (multi-tenant: `user_id` must match).
+/// Does not read or modify `secrets` rows.
+pub struct UpdateEntryFieldsByIdParams<'a> {
+    pub folder: &'a str,
+    pub entry_type: &'a str,
+    pub name: &'a str,
+    pub notes: &'a str,
+    pub tags: &'a [String],
+    pub metadata: &'a serde_json::Value,
+}
+
+pub async fn update_fields_by_id(
+    pool: &PgPool,
+    entry_id: Uuid,
+    user_id: Uuid,
+    params: UpdateEntryFieldsByIdParams<'_>,
+) -> Result<()> {
+    if params.folder.len() > 128 {
+        anyhow::bail!("folder must be at most 128 characters");
+    }
+    if params.entry_type.len() > 64 {
+        anyhow::bail!("type must be at most 64 characters");
+    }
+    if params.name.len() > 256 {
+        anyhow::bail!("name must be at most 256 characters");
+    }
+
+    let mut tx = pool.begin().await?;
+
+    let row: Option<EntryWriteRow> = sqlx::query_as(
+        "SELECT id, version, folder, type, name, tags, metadata, notes FROM entries \
+         WHERE id = $1 AND user_id = $2 FOR UPDATE",
+    )
+    .bind(entry_id)
+    .bind(user_id)
+    .fetch_optional(&mut *tx)
+    .await?;
+
+    let row = match row {
+        Some(r) => r,
+        None => {
+            tx.rollback().await?;
+            anyhow::bail!("Entry not found");
+        }
+    };
+
+    if let Err(e) = db::snapshot_entry_history(
+        &mut tx,
+        db::EntrySnapshotParams {
+            entry_id: row.id,
+            user_id: Some(user_id),
+            folder: &row.folder,
+            entry_type: &row.entry_type,
+            name: &row.name,
+            version: row.version,
+            action: "update",
+            tags: &row.tags,
+            metadata: &row.metadata,
+        },
+    )
+    .await
+    {
+        tracing::warn!(error = %e, "failed to snapshot entry history before web update");
+    }
+
+    let res = sqlx::query(
+        "UPDATE entries SET folder = $1, type = $2, name = $3, notes = $4, tags = $5, metadata = $6, \
+         version = version + 1, updated_at = NOW() \
+         WHERE id = $7 AND version = $8",
+    )
+    .bind(params.folder)
+    .bind(params.entry_type)
+    .bind(params.name)
+    .bind(params.notes)
+    .bind(params.tags)
+    .bind(params.metadata)
+    .bind(row.id)
+    .bind(row.version)
+    .execute(&mut *tx)
+    .await
+    .map_err(|e| {
+        if let sqlx::Error::Database(ref d) = e
+            && d.code().as_deref() == Some("23505")
+        {
+            return anyhow::anyhow!(
+                "An entry with this folder and name already exists for your account."
+            );
+        }
+        e.into()
+    })?;
+
+    if res.rows_affected() == 0 {
+        tx.rollback().await?;
+        anyhow::bail!("Concurrent modification detected. Please refresh and try again.");
+    }
+
+    crate::audit::log_tx(
+        &mut tx,
+        Some(user_id),
+        "update",
+        params.folder,
+        params.entry_type,
+        params.name,
+        serde_json::json!({
+            "source": "web",
+            "entry_id": entry_id,
+            "fields": ["folder", "type", "name", "notes", "tags", "metadata"],
+        }),
+    )
+    .await;
+
+    tx.commit().await?;
+    Ok(())
+}
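`update_fields_by_id` combines a `FOR UPDATE` row lock with an optimistic version check: the `UPDATE` only matches while `version` still equals the value just read, and zero affected rows is surfaced as a concurrent-modification error. The control flow, reduced to an in-memory sketch (the struct and error strings are illustrative):

```rust
struct Entry {
    version: i64,
    notes: String,
}

/// Apply an update only if the caller's expected version still matches;
/// mirrors `WHERE id = $7 AND version = $8` plus the rows_affected() == 0 check.
fn update_if_unchanged(e: &mut Entry, expected_version: i64, notes: &str) -> Result<(), String> {
    if e.version != expected_version {
        // The SQL equivalent: zero rows matched, so the transaction rolls back.
        return Err("Concurrent modification detected. Please refresh and try again.".into());
    }
    e.notes = notes.to_string();
    e.version += 1; // version = version + 1, as in the UPDATE statement
    Ok(())
}

fn main() {
    let mut e = Entry { version: 3, notes: String::new() };
    assert!(update_if_unchanged(&mut e, 3, "first").is_ok());
    assert_eq!(e.version, 4);
    // A writer still holding the old version loses the race and changes nothing.
    assert!(update_if_unchanged(&mut e, 3, "stale").is_err());
    assert_eq!(e.notes, "first");
    println!("ok");
}
```

Bumping the version on every successful write is what turns the equality test into a lost-update guard: a stale writer can never silently overwrite a newer row.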
@@ -1,6 +1,6 @@
 [package]
 name = "secrets-mcp"
-version = "0.3.0"
+version = "0.3.7"
 edition.workspace = true

 [[bin]]
@@ -21,7 +21,7 @@ use tower_sessions_sqlx_store_chrono::PostgresStore;
 use tracing_subscriber::EnvFilter;
 use tracing_subscriber::fmt::time::FormatTime;

-use secrets_core::config::resolve_db_url;
+use secrets_core::config::resolve_db_config;
 use secrets_core::db::{create_pool, migrate};

 use crate::oauth::OAuthConfig;

@@ -78,9 +78,9 @@ async fn main() -> Result<()> {
         .init();

     // ── Database ──────────────────────────────────────────────────────────────
-    let db_url = resolve_db_url("")
+    let db_config = resolve_db_config("")
         .context("Database not configured. Set SECRETS_DATABASE_URL environment variable.")?;
-    let pool = create_pool(&db_url)
+    let pool = create_pool(&db_config)
         .await
         .context("failed to connect to database")?;
     migrate(&pool)
|||||||
@@ -14,6 +14,7 @@ use rmcp::{
|
|||||||
};
|
};
|
||||||
use schemars::JsonSchema;
|
use schemars::JsonSchema;
|
||||||
use serde::Deserialize;
|
use serde::Deserialize;
|
||||||
|
use serde_json::{Map, Value};
|
||||||
use sqlx::PgPool;
|
use sqlx::PgPool;
|
||||||
use uuid::Uuid;
|
use uuid::Uuid;
|
||||||
|
|
||||||
@@ -22,10 +23,10 @@ use secrets_core::service::{
|
|||||||
add::{AddParams, run as svc_add},
|
add::{AddParams, run as svc_add},
|
||||||
delete::{DeleteParams, run as svc_delete},
|
delete::{DeleteParams, run as svc_delete},
|
||||||
export::{ExportParams, export as svc_export},
|
export::{ExportParams, export as svc_export},
|
||||||
get_secret::{get_all_secrets, get_secret_field},
|
get_secret::{get_all_secrets_by_id, get_secret_field_by_id},
|
||||||
history::run as svc_history,
|
history::run as svc_history,
|
||||||
rollback::run as svc_rollback,
|
rollback::run as svc_rollback,
|
||||||
search::{SearchParams, run as svc_search},
|
search::{SearchParams, resolve_entry_by_id, run as svc_search},
|
||||||
update::{UpdateParams, run as svc_update},
|
update::{UpdateParams, run as svc_update},
|
||||||
};
|
};
|
||||||
|
|
||||||
@@ -153,6 +154,25 @@ impl SecretsService {
|
|||||||
|
|
||||||
// ── Tool parameter types ──────────────────────────────────────────────────────
|
// ── Tool parameter types ──────────────────────────────────────────────────────
|
||||||
|
|
||||||
|
#[derive(Debug, Deserialize, JsonSchema)]
|
||||||
|
struct FindInput {
|
||||||
|
#[schemars(
|
||||||
|
description = "Fuzzy search across name, folder, type, notes, tags, and metadata values"
|
||||||
|
)]
|
||||||
|
query: Option<String>,
|
||||||
|
#[schemars(description = "Exact folder filter (e.g. 'refining', 'ricnsmart')")]
|
||||||
|
folder: Option<String>,
|
||||||
|
#[schemars(description = "Exact type filter (e.g. 'server', 'service', 'person', 'key')")]
|
||||||
|
#[serde(rename = "type")]
|
||||||
|
entry_type: Option<String>,
|
||||||
|
#[schemars(description = "Exact name filter")]
|
||||||
|
name: Option<String>,
|
||||||
|
#[schemars(description = "Tag filters (all must match)")]
|
||||||
|
tags: Option<Vec<String>>,
|
||||||
|
#[schemars(description = "Max results (default 20)")]
|
||||||
|
limit: Option<u32>,
|
||||||
|
}
|
||||||
|
|
||||||
#[derive(Debug, Deserialize, JsonSchema)]
|
#[derive(Debug, Deserialize, JsonSchema)]
|
||||||
struct SearchInput {
|
struct SearchInput {
|
||||||
#[schemars(description = "Fuzzy search across name, folder, type, notes, tags, metadata")]
|
#[schemars(description = "Fuzzy search across name, folder, type, notes, tags, metadata")]
|
||||||
@@ -178,12 +198,8 @@ struct SearchInput {
|
|||||||
|
|
||||||
#[derive(Debug, Deserialize, JsonSchema)]
|
#[derive(Debug, Deserialize, JsonSchema)]
|
||||||
struct GetSecretInput {
|
struct GetSecretInput {
|
||||||
#[schemars(description = "Name of the entry")]
|
#[schemars(description = "Entry UUID obtained from secrets_find results")]
|
||||||
name: String,
|
id: String,
|
||||||
#[schemars(
|
|
||||||
description = "Folder for disambiguation when multiple entries share the same name (optional)"
|
|
||||||
)]
|
|
||||||
folder: Option<String>,
|
|
||||||
#[schemars(description = "Specific field to retrieve. If omitted, returns all fields.")]
|
#[schemars(description = "Specific field to retrieve. If omitted, returns all fields.")]
|
||||||
field: Option<String>,
|
field: Option<String>,
|
||||||
}
|
}
|
||||||
@@ -205,8 +221,16 @@ struct AddInput {
|
|||||||
tags: Option<Vec<String>>,
|
tags: Option<Vec<String>>,
|
||||||
#[schemars(description = "Metadata fields as 'key=value' or 'key:=json' strings")]
|
#[schemars(description = "Metadata fields as 'key=value' or 'key:=json' strings")]
|
||||||
meta: Option<Vec<String>>,
|
meta: Option<Vec<String>>,
|
||||||
|
#[schemars(
|
||||||
|
description = "Metadata fields as a JSON object {\"key\": value}. Merged with 'meta' if both provided."
|
||||||
|
)]
|
||||||
|
meta_obj: Option<Map<String, Value>>,
|
||||||
#[schemars(description = "Secret fields as 'key=value' strings")]
|
#[schemars(description = "Secret fields as 'key=value' strings")]
|
||||||
secrets: Option<Vec<String>>,
|
secrets: Option<Vec<String>>,
|
||||||
|
#[schemars(
|
||||||
|
description = "Secret fields as a JSON object {\"key\": \"value\"}. Merged with 'secrets' if both provided."
|
||||||
|
)]
|
||||||
|
secrets_obj: Option<Map<String, Value>>,
|
||||||
}
|
}
|
||||||
|
|
||||||
#[derive(Debug, Deserialize, JsonSchema)]
|
#[derive(Debug, Deserialize, JsonSchema)]
|
||||||
@@ -217,6 +241,10 @@ struct UpdateInput {
|
|||||||
description = "Folder for disambiguation when multiple entries share the same name (optional)"
|
description = "Folder for disambiguation when multiple entries share the same name (optional)"
|
||||||
)]
|
)]
|
||||||
folder: Option<String>,
|
folder: Option<String>,
|
||||||
|
#[schemars(
|
||||||
|
description = "Entry UUID (from secrets_find). If provided, name/folder are used for disambiguation only."
|
||||||
|
)]
|
||||||
|
id: Option<String>,
|
||||||
#[schemars(description = "Update the notes field")]
|
#[schemars(description = "Update the notes field")]
|
||||||
notes: Option<String>,
|
notes: Option<String>,
|
||||||
#[schemars(description = "Tags to add")]
|
#[schemars(description = "Tags to add")]
|
||||||
@@ -225,16 +253,29 @@ struct UpdateInput {
|
|||||||
remove_tags: Option<Vec<String>>,
|
remove_tags: Option<Vec<String>>,
|
||||||
#[schemars(description = "Metadata fields to update/add as 'key=value' strings")]
|
#[schemars(description = "Metadata fields to update/add as 'key=value' strings")]
|
||||||
meta: Option<Vec<String>>,
|
meta: Option<Vec<String>>,
|
||||||
|
#[schemars(
|
||||||
|
description = "Metadata fields to update/add as a JSON object {\"key\": value}. Merged with 'meta' if both provided."
|
||||||
|
)]
|
||||||
|
meta_obj: Option<Map<String, Value>>,
|
||||||
#[schemars(description = "Metadata field keys to remove")]
|
#[schemars(description = "Metadata field keys to remove")]
|
||||||
remove_meta: Option<Vec<String>>,
|
remove_meta: Option<Vec<String>>,
|
||||||
#[schemars(description = "Secret fields to update/add as 'key=value' strings")]
|
#[schemars(description = "Secret fields to update/add as 'key=value' strings")]
|
||||||
secrets: Option<Vec<String>>,
|
secrets: Option<Vec<String>>,
|
||||||
|
#[schemars(
|
||||||
|
description = "Secret fields to update/add as a JSON object {\"key\": \"value\"}. Merged with 'secrets' if both provided."
|
||||||
|
)]
|
||||||
|
secrets_obj: Option<Map<String, Value>>,
|
||||||
#[schemars(description = "Secret field keys to remove")]
|
#[schemars(description = "Secret field keys to remove")]
|
||||||
remove_secrets: Option<Vec<String>>,
|
remove_secrets: Option<Vec<String>>,
|
||||||
}
|
}
|
||||||
|
|
||||||
#[derive(Debug, Deserialize, JsonSchema)]
|
#[derive(Debug, Deserialize, JsonSchema)]
|
||||||
struct DeleteInput {
|
struct DeleteInput {
|
||||||
|
#[schemars(
|
||||||
|
description = "Entry UUID (from secrets_find). If provided, deletes this specific entry \
|
||||||
|
regardless of name/folder."
|
||||||
|
)]
|
||||||
|
id: Option<String>,
|
||||||
#[schemars(description = "Name of the entry to delete (single delete). \
|
#[schemars(description = "Name of the entry to delete (single delete). \
|
||||||
Omit to bulk delete by folder/type filters.")]
|
Omit to bulk delete by folder/type filters.")]
|
||||||
name: Option<String>,
|
name: Option<String>,
|
||||||
@@ -255,6 +296,10 @@ struct HistoryInput {
|
|||||||
description = "Folder for disambiguation when multiple entries share the same name (optional)"
|
description = "Folder for disambiguation when multiple entries share the same name (optional)"
|
||||||
)]
|
)]
|
||||||
folder: Option<String>,
|
folder: Option<String>,
|
||||||
|
#[schemars(
|
||||||
|
description = "Entry UUID (from secrets_find). If provided, name/folder are ignored."
|
||||||
|
)]
|
||||||
|
id: Option<String>,
|
||||||
#[schemars(description = "Max history entries to return (default 20)")]
|
#[schemars(description = "Max history entries to return (default 20)")]
|
||||||
limit: Option<u32>,
|
limit: Option<u32>,
|
||||||
}
|
}
|
||||||
@@ -267,6 +312,10 @@ struct RollbackInput {
|
|||||||
description = "Folder for disambiguation when multiple entries share the same name (optional)"
|
description = "Folder for disambiguation when multiple entries share the same name (optional)"
|
||||||
)]
|
)]
|
||||||
folder: Option<String>,
|
folder: Option<String>,
|
||||||
|
#[schemars(
|
||||||
|
description = "Entry UUID (from secrets_find). If provided, name/folder are ignored."
|
||||||
|
)]
|
||||||
|
id: Option<String>,
|
||||||
#[schemars(description = "Target version number. Omit to restore the most recent snapshot.")]
|
#[schemars(description = "Target version number. Omit to restore the most recent snapshot.")]
|
||||||
to_version: Option<i64>,
|
to_version: Option<i64>,
|
||||||
}
|
}
|
||||||
@@ -301,17 +350,118 @@ struct EnvMapInput {
     tags: Option<Vec<String>>,
     #[schemars(description = "Only include these secret fields")]
     only_fields: Option<Vec<String>>,
-    #[schemars(description = "Environment variable name prefix")]
+    #[schemars(description = "Environment variable name prefix. \
+                              Variable names are built as UPPER(prefix)_UPPER(entry_name)_UPPER(field_name), \
+                              with hyphens and dots replaced by underscores. \
+                              Example: entry 'aliyun', field 'access_key_id' → ALIYUN_ACCESS_KEY_ID \
+                              (or PREFIX_ALIYUN_ACCESS_KEY_ID with prefix set).")]
     prefix: Option<String>,
 }
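The prefix description spells the naming rule out completely. As a pure-std sketch of that documented convention (the helper name `env_var_name` is hypothetical; the actual resolution lives in `secrets_core::service::env_map`):

```rust
// Hypothetical helper illustrating the documented naming rule:
// UPPER(prefix)_UPPER(entry_name)_UPPER(field_name), with '-' and '.' → '_'.
fn env_var_name(prefix: Option<&str>, entry: &str, field: &str) -> String {
    let norm = |s: &str| s.to_uppercase().replace('-', "_").replace('.', "_");
    match prefix {
        Some(p) if !p.is_empty() => format!("{}_{}_{}", norm(p), norm(entry), norm(field)),
        _ => format!("{}_{}", norm(entry), norm(field)),
    }
}

fn main() {
    // The example from the description: entry 'aliyun', field 'access_key_id'.
    assert_eq!(env_var_name(None, "aliyun", "access_key_id"), "ALIYUN_ACCESS_KEY_ID");
    // With a prefix set, the prefix is uppercased and prepended.
    assert_eq!(env_var_name(Some("prod"), "my-db", "api.token"), "PROD_MY_DB_API_TOKEN");
}
```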
+
+#[derive(Debug, Deserialize, JsonSchema)]
+struct OverviewInput {}
+
+// ── Helpers ───────────────────────────────────────────────────────────────────
+
+/// Convert a JSON object map into "key=value" / "key:=json" strings for service-layer parsing.
+fn map_to_kv_strings(map: Map<String, Value>) -> Vec<String> {
+    map.into_iter()
+        .map(|(k, v)| match &v {
+            Value::String(s) => format!("{}={}", k, s),
+            _ => format!("{}:={}", k, v),
+        })
+        .collect()
+}
+
+/// Parse a UUID string, returning an MCP error on failure.
+fn parse_uuid(s: &str) -> Result<Uuid, rmcp::ErrorData> {
+    s.parse::<Uuid>()
+        .map_err(|_| rmcp::ErrorData::invalid_request(format!("Invalid UUID: '{}'", s), None))
+}
+
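`map_to_kv_strings` folds a JSON object into the same `key=value` / `key:=<json>` strings the list-style inputs already use. A simplified pure-std sketch of that convention (here the value arrives as pre-serialized JSON text and embedded escaped quotes are ignored; the real helper matches on `serde_json::Value` instead):

```rust
// Simplified sketch: JSON string values become 'key=value'; any other JSON
// (number, bool, array, object) keeps its JSON form behind 'key:='.
fn to_kv(key: &str, json_text: &str) -> String {
    match json_text.strip_prefix('"').and_then(|r| r.strip_suffix('"')) {
        Some(s) => format!("{}={}", key, s),
        None => format!("{}:={}", key, json_text),
    }
}

fn main() {
    assert_eq!(to_kv("host", "\"db.example.com\""), "host=db.example.com");
    assert_eq!(to_kv("port", "5432"), "port:=5432");
    assert_eq!(to_kv("tls", "true"), "tls:=true");
}
```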
 // ── Tool implementations ──────────────────────────────────────────────────────

 #[tool_router]
 impl SecretsService {
+    #[tool(
+        description = "Find entries in the secrets store by folder, name, type, tags, or a \
+                       fuzzy query that also searches metadata values. Requires Bearer API key. \
+                       Returns 0 or more entries with id, metadata, and secret field names (not values). \
+                       Use the returned id with secrets_get to decrypt secret values. \
+                       Replaces secrets_search for discovery tasks.",
+        annotations(title = "Find Secrets", read_only_hint = true, idempotent_hint = true)
+    )]
+    async fn secrets_find(
+        &self,
+        Parameters(input): Parameters<FindInput>,
+        ctx: RequestContext<RoleServer>,
+    ) -> Result<CallToolResult, rmcp::ErrorData> {
+        let t = Instant::now();
+        let user_id = Self::require_user_id(&ctx)?;
+        tracing::info!(
+            tool = "secrets_find",
+            ?user_id,
+            folder = input.folder.as_deref(),
+            entry_type = input.entry_type.as_deref(),
+            name = input.name.as_deref(),
+            query = input.query.as_deref(),
+            "tool call start",
+        );
+        let tags = input.tags.unwrap_or_default();
+        let result = svc_search(
+            &self.pool,
+            SearchParams {
+                folder: input.folder.as_deref(),
+                entry_type: input.entry_type.as_deref(),
+                name: input.name.as_deref(),
+                tags: &tags,
+                query: input.query.as_deref(),
+                sort: "name",
+                limit: input.limit.unwrap_or(20),
+                offset: 0,
+                user_id: Some(user_id),
+            },
+        )
+        .await
+        .map_err(|e| mcp_err_internal_logged("secrets_find", Some(user_id), e))?;
+
+        let entries: Vec<serde_json::Value> = result
+            .entries
+            .iter()
+            .map(|e| {
+                let schema: Vec<&str> = result
+                    .secret_schemas
+                    .get(&e.id)
+                    .map(|f| f.iter().map(|s| s.field_name.as_str()).collect())
+                    .unwrap_or_default();
+                serde_json::json!({
+                    "id": e.id,
+                    "name": e.name,
+                    "folder": e.folder,
+                    "type": e.entry_type,
+                    "tags": e.tags,
+                    "metadata": e.metadata,
+                    "secret_fields": schema,
+                    "updated_at": e.updated_at.format("%Y-%m-%dT%H:%M:%SZ").to_string(),
+                })
+            })
+            .collect();
+
+        tracing::info!(
+            tool = "secrets_find",
+            ?user_id,
+            result_count = entries.len(),
+            elapsed_ms = t.elapsed().as_millis(),
+            "tool call ok",
+        );
+        let json = serde_json::to_string_pretty(&entries).unwrap_or_else(|_| "[]".to_string());
+        Ok(CallToolResult::success(vec![Content::text(json)]))
+    }
+
     #[tool(
         description = "Search entries in the secrets store. Requires Bearer API key. Returns \
-                       entries with metadata and secret field names (not values). Use secrets_get to decrypt secret values.",
+                       entries with metadata and secret field names (not values). \
+                       Prefer secrets_find for discovery; secrets_search is kept for backward compatibility.",
         annotations(
             title = "Search Secrets",
             read_only_hint = true,
@@ -401,8 +551,8 @@ impl SecretsService {
     }

     #[tool(
-        description = "Get decrypted secret field values for an entry. Requires your \
-                       encryption key via X-Encryption-Key header (64 hex chars, PBKDF2-derived). \
+        description = "Get decrypted secret field values for an entry identified by its UUID \
+                       (from secrets_find). Requires X-Encryption-Key header. \
                        Returns all fields, or a specific field if 'field' is provided.",
         annotations(
             title = "Get Secret Values",
@@ -417,29 +567,23 @@ impl SecretsService {
     ) -> Result<CallToolResult, rmcp::ErrorData> {
         let t = Instant::now();
         let (user_id, user_key) = Self::require_user_and_key(&ctx)?;
+        let entry_id = parse_uuid(&input.id)?;
         tracing::info!(
             tool = "secrets_get",
-            ?user_id,
-            name = %input.name,
+            id = %input.id,
             field = input.field.as_deref(),
             "tool call start",
         );

         if let Some(field_name) = &input.field {
-            let value = get_secret_field(
-                &self.pool,
-                &input.name,
-                input.folder.as_deref(),
-                field_name,
-                &user_key,
-                Some(user_id),
-            )
+            let value =
+                get_secret_field_by_id(&self.pool, entry_id, field_name, &user_key, Some(user_id))
                     .await
-                .map_err(|e| mcp_err_internal_logged("secrets_get", Some(user_id), e))?;
+                    .map_err(|e| mcp_err_internal_logged("secrets_get", None, e))?;

             tracing::info!(
                 tool = "secrets_get",
-                ?user_id,
+                id = %input.id,
                 elapsed_ms = t.elapsed().as_millis(),
                 "tool call ok",
             );
@@ -447,21 +591,14 @@ impl SecretsService {
             let json = serde_json::to_string_pretty(&result).unwrap_or_default();
             Ok(CallToolResult::success(vec![Content::text(json)]))
         } else {
-            let secrets = get_all_secrets(
-                &self.pool,
-                &input.name,
-                input.folder.as_deref(),
-                &user_key,
-                Some(user_id),
-            )
+            let secrets = get_all_secrets_by_id(&self.pool, entry_id, &user_key, Some(user_id))
                 .await
-                .map_err(|e| mcp_err_internal_logged("secrets_get", Some(user_id), e))?;
+                .map_err(|e| mcp_err_internal_logged("secrets_get", None, e))?;

-            let count = secrets.len();
             tracing::info!(
                 tool = "secrets_get",
-                ?user_id,
-                field_count = count,
+                id = %entry_id,
+                field_count = secrets.len(),
                 elapsed_ms = t.elapsed().as_millis(),
                 "tool call ok",
             );
@@ -473,7 +610,8 @@ impl SecretsService {
     #[tool(
         description = "Add or upsert an entry with metadata and encrypted secret fields. \
                        Requires X-Encryption-Key header. \
-                       Meta and secret values use 'key=value', 'key=@file', or 'key:=<json>' format.",
+                       Meta and secret values use 'key=value', 'key=@file', or 'key:=<json>' format, \
+                       or pass a JSON object via meta_obj / secrets_obj.",
         annotations(title = "Add Secret Entry")
     )]
     async fn secrets_add(
@@ -493,8 +631,14 @@ impl SecretsService {
         );

         let tags = input.tags.unwrap_or_default();
-        let meta = input.meta.unwrap_or_default();
-        let secrets = input.secrets.unwrap_or_default();
+        let mut meta = input.meta.unwrap_or_default();
+        if let Some(obj) = input.meta_obj {
+            meta.extend(map_to_kv_strings(obj));
+        }
+        let mut secrets = input.secrets.unwrap_or_default();
+        if let Some(obj) = input.secrets_obj {
+            secrets.extend(map_to_kv_strings(obj));
+        }
         let folder = input.folder.as_deref().unwrap_or("");
         let entry_type = input.entry_type.as_deref().unwrap_or("");
         let notes = input.notes.as_deref().unwrap_or("");
@@ -529,7 +673,8 @@ impl SecretsService {

     #[tool(
         description = "Incrementally update an existing entry. Requires X-Encryption-Key header. \
-                       Only the fields you specify are changed; everything else is preserved.",
+                       Only the fields you specify are changed; everything else is preserved. \
+                       Optionally pass 'id' (from secrets_find) to target the entry directly.",
         annotations(title = "Update Secret Entry")
     )]
     async fn secrets_update(
@@ -543,21 +688,40 @@ impl SecretsService {
             tool = "secrets_update",
             ?user_id,
             name = %input.name,
+            id = ?input.id,
             "tool call start",
         );

+        // When id is provided, resolve to (name, folder) via primary key to skip disambiguation.
+        let (resolved_name, resolved_folder): (String, Option<String>) =
+            if let Some(ref id_str) = input.id {
+                let eid = parse_uuid(id_str)?;
+                let entry = resolve_entry_by_id(&self.pool, eid, Some(user_id))
+                    .await
+                    .map_err(|e| mcp_err_internal_logged("secrets_update", Some(user_id), e))?;
+                (entry.name, Some(entry.folder))
+            } else {
+                (input.name.clone(), input.folder.clone())
+            };
+
         let add_tags = input.add_tags.unwrap_or_default();
         let remove_tags = input.remove_tags.unwrap_or_default();
-        let meta = input.meta.unwrap_or_default();
+        let mut meta = input.meta.unwrap_or_default();
+        if let Some(obj) = input.meta_obj {
+            meta.extend(map_to_kv_strings(obj));
+        }
         let remove_meta = input.remove_meta.unwrap_or_default();
-        let secrets = input.secrets.unwrap_or_default();
+        let mut secrets = input.secrets.unwrap_or_default();
+        if let Some(obj) = input.secrets_obj {
+            secrets.extend(map_to_kv_strings(obj));
+        }
         let remove_secrets = input.remove_secrets.unwrap_or_default();

         let result = svc_update(
             &self.pool,
             UpdateParams {
-                name: &input.name,
-                folder: input.folder.as_deref(),
+                name: &resolved_name,
+                folder: resolved_folder.as_deref(),
                 notes: input.notes.as_deref(),
                 add_tags: &add_tags,
                 remove_tags: &remove_tags,
@@ -575,7 +739,7 @@ impl SecretsService {
         tracing::info!(
             tool = "secrets_update",
             ?user_id,
-            name = %input.name,
+            name = %resolved_name,
             elapsed_ms = t.elapsed().as_millis(),
             "tool call ok",
         );
@@ -584,8 +748,9 @@ impl SecretsService {
     }

     #[tool(
-        description = "Delete one entry by name, or bulk delete entries matching folder and/or type. \
-                       Use dry_run=true to preview.",
+        description = "Delete one entry by name (or id), or bulk delete entries matching folder \
+                       and/or type. Use dry_run=true to preview. \
+                       At least one of id, name, folder, or type must be provided.",
         annotations(title = "Delete Secret Entry", destructive_hint = true)
     )]
     async fn secrets_delete(
@@ -595,9 +760,23 @@ impl SecretsService {
     ) -> Result<CallToolResult, rmcp::ErrorData> {
         let t = Instant::now();
         let user_id = Self::user_id_from_ctx(&ctx)?;
+
+        // Safety: require at least one filter.
+        if input.id.is_none()
+            && input.name.is_none()
+            && input.folder.is_none()
+            && input.entry_type.is_none()
+        {
+            return Err(rmcp::ErrorData::invalid_request(
+                "At least one of id, name, folder, or type must be provided.",
+                None,
+            ));
+        }
+
         tracing::info!(
             tool = "secrets_delete",
             ?user_id,
+            id = ?input.id,
             name = input.name.as_deref(),
             folder = input.folder.as_deref(),
             entry_type = input.entry_type.as_deref(),
@@ -605,11 +784,24 @@ impl SecretsService {
             "tool call start",
         );

+        // When id is provided, resolve to name+folder for the single-entry delete path.
+        let (effective_name, effective_folder): (Option<String>, Option<String>) =
+            if let Some(ref id_str) = input.id {
+                let eid = parse_uuid(id_str)?;
+                let uid = user_id;
+                let entry = resolve_entry_by_id(&self.pool, eid, uid)
+                    .await
+                    .map_err(|e| mcp_err_internal_logged("secrets_delete", uid, e))?;
+                (Some(entry.name), Some(entry.folder))
+            } else {
+                (input.name.clone(), input.folder.clone())
+            };
+
         let result = svc_delete(
             &self.pool,
             DeleteParams {
-                name: input.name.as_deref(),
-                folder: input.folder.as_deref(),
+                name: effective_name.as_deref(),
+                folder: effective_folder.as_deref(),
                 entry_type: input.entry_type.as_deref(),
                 dry_run: input.dry_run.unwrap_or(false),
                 user_id,
@@ -630,7 +822,7 @@ impl SecretsService {

     #[tool(
         description = "View change history for an entry. Returns a list of versions with \
-                       actions and timestamps.",
+                       actions and timestamps. Optionally pass 'id' (from secrets_find) to target directly.",
         annotations(
             title = "View Secret History",
             read_only_hint = true,
@@ -648,13 +840,25 @@ impl SecretsService {
             tool = "secrets_history",
             ?user_id,
             name = %input.name,
+            id = ?input.id,
             "tool call start",
         );

+        let (resolved_name, resolved_folder): (String, Option<String>) =
+            if let Some(ref id_str) = input.id {
+                let eid = parse_uuid(id_str)?;
+                let entry = resolve_entry_by_id(&self.pool, eid, user_id)
+                    .await
+                    .map_err(|e| mcp_err_internal_logged("secrets_history", user_id, e))?;
+                (entry.name, Some(entry.folder))
+            } else {
+                (input.name.clone(), input.folder.clone())
+            };
+
         let result = svc_history(
             &self.pool,
-            &input.name,
-            input.folder.as_deref(),
+            &resolved_name,
+            resolved_folder.as_deref(),
             input.limit.unwrap_or(20),
             user_id,
         )
@@ -673,7 +877,8 @@ impl SecretsService {

     #[tool(
         description = "Rollback an entry to a previous version. Requires X-Encryption-Key header. \
-                       Omit to_version to restore the most recent snapshot.",
+                       Omit to_version to restore the most recent snapshot. \
+                       Optionally pass 'id' (from secrets_find) to target directly.",
         annotations(title = "Rollback Secret Entry", destructive_hint = true)
     )]
     async fn secrets_rollback(
@@ -687,14 +892,26 @@ impl SecretsService {
             tool = "secrets_rollback",
             ?user_id,
             name = %input.name,
+            id = ?input.id,
             to_version = input.to_version,
             "tool call start",
         );

+        let (resolved_name, resolved_folder): (String, Option<String>) =
+            if let Some(ref id_str) = input.id {
+                let eid = parse_uuid(id_str)?;
+                let entry = resolve_entry_by_id(&self.pool, eid, Some(user_id))
+                    .await
+                    .map_err(|e| mcp_err_internal_logged("secrets_rollback", Some(user_id), e))?;
+                (entry.name, Some(entry.folder))
+            } else {
+                (input.name.clone(), input.folder.clone())
+            };
+
         let result = svc_rollback(
             &self.pool,
-            &input.name,
-            input.folder.as_deref(),
+            &resolved_name,
+            resolved_folder.as_deref(),
             input.to_version,
             &user_key,
             Some(user_id),
@@ -784,7 +1001,10 @@ impl SecretsService {
     #[tool(
         description = "Build the environment variable map from entry secrets with decrypted \
                        plaintext values. Requires X-Encryption-Key header. \
-                       Returns a JSON object of VAR_NAME -> plaintext_value ready for injection.",
+                       Returns a JSON object of VAR_NAME -> plaintext_value ready for injection. \
+                       Variable names follow the pattern UPPER(entry_name)_UPPER(field_name), \
+                       with hyphens and dots replaced by underscores. \
+                       Example: entry 'aliyun', field 'access_key_id' → ALIYUN_ACCESS_KEY_ID.",
         annotations(title = "Build Env Map", read_only_hint = true, idempotent_hint = true)
     )]
     async fn secrets_env_map(
@@ -830,6 +1050,67 @@ impl SecretsService {
         let json = serde_json::to_string_pretty(&env_map).unwrap_or_default();
         Ok(CallToolResult::success(vec![Content::text(json)]))
     }
+
+    #[tool(
+        description = "Get an overview of the secrets store: counts of entries per folder and \
+                       per type. Requires Bearer API key. Useful for exploring the store structure.",
+        annotations(
+            title = "Secrets Overview",
+            read_only_hint = true,
+            idempotent_hint = true
+        )
+    )]
+    async fn secrets_overview(
+        &self,
+        Parameters(_input): Parameters<OverviewInput>,
+        ctx: RequestContext<RoleServer>,
+    ) -> Result<CallToolResult, rmcp::ErrorData> {
+        let t = Instant::now();
+        let user_id = Self::require_user_id(&ctx)?;
+        tracing::info!(tool = "secrets_overview", ?user_id, "tool call start");
+
+        #[derive(sqlx::FromRow)]
+        struct CountRow {
+            name: String,
+            count: i64,
+        }
+
+        let folder_rows: Vec<CountRow> = sqlx::query_as(
+            "SELECT folder AS name, COUNT(*) AS count FROM entries \
+             WHERE user_id = $1 GROUP BY folder ORDER BY folder",
+        )
+        .bind(user_id)
+        .fetch_all(&*self.pool)
+        .await
+        .map_err(|e| mcp_err_internal_logged("secrets_overview", Some(user_id), e))?;
+
+        let type_rows: Vec<CountRow> = sqlx::query_as(
+            "SELECT type AS name, COUNT(*) AS count FROM entries \
+             WHERE user_id = $1 GROUP BY type ORDER BY type",
+        )
+        .bind(user_id)
+        .fetch_all(&*self.pool)
+        .await
+        .map_err(|e| mcp_err_internal_logged("secrets_overview", Some(user_id), e))?;
+
+        let total: i64 = folder_rows.iter().map(|r| r.count).sum();
+
+        let result = serde_json::json!({
+            "total": total,
+            "folders": folder_rows.iter().map(|r| serde_json::json!({"name": r.name, "count": r.count})).collect::<Vec<_>>(),
+            "types": type_rows.iter().map(|r| serde_json::json!({"name": r.name, "count": r.count})).collect::<Vec<_>>(),
+        });
+
+        tracing::info!(
+            tool = "secrets_overview",
+            ?user_id,
+            total,
+            elapsed_ms = t.elapsed().as_millis(),
+            "tool call ok",
+        );
+        let json = serde_json::to_string_pretty(&result).unwrap_or_default();
+        Ok(CallToolResult::success(vec![Content::text(json)]))
+    }
 }

 // ── ServerHandler ─────────────────────────────────────────────────────────────
@@ -846,11 +1127,11 @@ impl ServerHandler for SecretsService {
         info.protocol_version = ProtocolVersion::V_2025_06_18;
         info.instructions = Some(
             "Manage cross-device secrets and configuration securely. \
-             Data is encrypted with your passphrase-derived key. \
-             Include your 64-char hex key in the X-Encryption-Key header for all read/write operations. \
-             Use secrets_search to discover entries (Bearer token required; encryption key not needed), \
-             secrets_get to decrypt secret values, \
-             and secrets_add/secrets_update to write encrypted secrets."
+             Use secrets_find to discover entries by folder, name, type, tags, or query \
+             (query also searches metadata values). \
+             Use secrets_get with the entry id (from secrets_find) to decrypt secret values. \
+             Use secrets_add / secrets_update to write entries. \
+             Use secrets_overview for a quick count of entries per folder and type."
                 .to_string(),
         );
         info
@@ -8,9 +8,10 @@ use axum::{
     extract::{ConnectInfo, Path, Query, State},
    http::{HeaderMap, StatusCode, header},
     response::{Html, IntoResponse, Redirect, Response},
-    routing::{get, post},
+    routing::{get, patch, post},
 };
 use serde::{Deserialize, Serialize};
+use serde_json::json;
 use tower_sessions::Session;
 use uuid::Uuid;
@@ -19,6 +20,9 @@ use secrets_core::crypto::hex;
 use secrets_core::service::{
     api_key::{ensure_api_key, regenerate_api_key},
     audit_log::list_for_user,
+    delete::delete_by_id,
+    search::{SearchParams, count_entries, list_entries},
+    update::{UpdateEntryFieldsByIdParams, update_fields_by_id},
     user::{
         OAuthProfile, bind_oauth_account, find_or_create_user, get_user_by_id,
         unbind_oauth_account, update_user_key_setup,
@@ -78,6 +82,44 @@ struct AuditEntryView {
     detail: String,
 }
+
+#[derive(Template)]
+#[template(path = "entries.html")]
+struct EntriesPageTemplate {
+    user_name: String,
+    user_email: String,
+    entries: Vec<EntryListItemView>,
+    total_count: i64,
+    shown_count: usize,
+    limit: u32,
+    filter_folder: String,
+    filter_type: String,
+    version: &'static str,
+}
+
+/// Non-sensitive fields only (no `secrets` / ciphertext).
+struct EntryListItemView {
+    id: String,
+    folder: String,
+    entry_type: String,
+    name: String,
+    notes: String,
+    tags: String,
+    metadata: String,
+    /// RFC3339 UTC for `<time datetime>`; localized in entries.html.
+    updated_at_iso: String,
+}
+
+/// Cap for HTML list (avoids loading unbounded rows into memory).
+const ENTRIES_PAGE_LIMIT: u32 = 5_000;
+
+#[derive(Deserialize)]
+struct EntriesQuery {
+    folder: Option<String>,
+    /// URL query key is `type` (maps to DB column `entries.type`).
+    #[serde(rename = "type")]
+    entry_type: Option<String>,
+}
+
 // ── App state helpers ─────────────────────────────────────────────────────────

 fn google_cfg(state: &AppState) -> Option<&OAuthConfig> {
@@ -149,6 +191,7 @@ pub fn web_router() -> Router<AppState> {
         .route("/auth/google/callback", get(auth_google_callback))
         .route("/auth/logout", post(auth_logout))
         .route("/dashboard", get(dashboard))
+        .route("/entries", get(entries_page))
         .route("/audit", get(audit_page))
         .route("/account/bind/google", get(account_bind_google))
         .route(
@@ -160,6 +203,10 @@ pub fn web_router() -> Router<AppState> {
         .route("/api/key-setup", post(api_key_setup))
         .route("/api/apikey", get(api_apikey_get))
         .route("/api/apikey/regenerate", post(api_apikey_regenerate))
+        .route(
+            "/api/entries/{id}",
+            patch(api_entry_patch).delete(api_entry_delete),
+        )
 }

 fn text_asset_response(content: &'static str, content_type: &'static str) -> Response {
@@ -478,6 +525,89 @@ async fn dashboard(
     render_template(tmpl)
 }
+
+async fn entries_page(
+    State(state): State<AppState>,
+    session: Session,
+    Query(q): Query<EntriesQuery>,
+) -> Result<Response, StatusCode> {
+    let Some(user_id) = current_user_id(&session).await else {
+        return Ok(Redirect::to("/login").into_response());
+    };
+
+    let user = match get_user_by_id(&state.pool, user_id).await.map_err(|e| {
+        tracing::error!(error = %e, %user_id, "failed to load user for entries page");
+        StatusCode::INTERNAL_SERVER_ERROR
+    })? {
+        Some(u) => u,
+        None => return Ok(Redirect::to("/login").into_response()),
+    };
+
+    let folder_filter = q
+        .folder
+        .as_ref()
+        .map(|s| s.trim())
+        .filter(|s| !s.is_empty())
+        .map(|s| s.to_string());
+    let type_filter = q
+        .entry_type
+        .as_ref()
+        .map(|s| s.trim())
+        .filter(|s| !s.is_empty())
+        .map(|s| s.to_string());
+
+    let params = SearchParams {
+        folder: folder_filter.as_deref(),
+        entry_type: type_filter.as_deref(),
+        name: None,
+        tags: &[],
+        query: None,
+        sort: "updated",
+        limit: ENTRIES_PAGE_LIMIT,
+        offset: 0,
+        user_id: Some(user_id),
+    };
+
+    let total_count = count_entries(&state.pool, &params).await.map_err(|e| {
+        tracing::error!(error = %e, "failed to count entries for web");
+        StatusCode::INTERNAL_SERVER_ERROR
+    })?;
+
+    let rows = list_entries(&state.pool, params).await.map_err(|e| {
+        tracing::error!(error = %e, "failed to load entries list for web");
+        StatusCode::INTERNAL_SERVER_ERROR
+    })?;
+    let shown_count = rows.len();
+
+    let entries = rows
+        .into_iter()
+        .map(|e| EntryListItemView {
+            id: e.id.to_string(),
+            folder: e.folder,
+            entry_type: e.entry_type,
+            name: e.name,
+            notes: e.notes,
+            tags: e.tags.join(", "),
+            metadata: serde_json::to_string_pretty(&e.metadata)
+                .unwrap_or_else(|_| "{}".to_string()),
+            updated_at_iso: e.updated_at.to_rfc3339_opts(SecondsFormat::Secs, true),
+        })
+        .collect();
+
+    let tmpl = EntriesPageTemplate {
+        user_name: user.name.clone(),
+        user_email: user.email.clone().unwrap_or_default(),
+        entries,
+        total_count,
+        shown_count,
+        limit: ENTRIES_PAGE_LIMIT,
+        filter_folder: folder_filter.unwrap_or_default(),
+        filter_type: type_filter.unwrap_or_default(),
+        version: env!("CARGO_PKG_VERSION"),
+    };
+
+    render_template(tmpl)
+}
+
 async fn audit_page(
     State(state): State<AppState>,
     session: Session,
@@ -751,6 +881,125 @@ async fn api_apikey_regenerate(
    Ok(Json(ApiKeyResponse { api_key }))
}

// ── Entry management (Web UI, non-sensitive fields only) ───────────────────────

#[derive(Deserialize)]
struct EntryPatchBody {
    folder: String,
    #[serde(rename = "type")]
    entry_type: String,
    name: String,
    notes: String,
    tags: Vec<String>,
    metadata: serde_json::Value,
}

type EntryApiError = (StatusCode, Json<serde_json::Value>);

fn map_entry_mutation_err(e: anyhow::Error) -> EntryApiError {
    let msg = e.to_string();
    if msg.contains("Entry not found") {
        return (
            StatusCode::NOT_FOUND,
            Json(json!({ "error": "Entry not found or access denied" })),
        );
    }
    if msg.contains("already exists") {
        return (
            StatusCode::CONFLICT,
            Json(json!({ "error": "An entry with the same folder + name already exists for this account" })),
        );
    }
    if msg.contains("Concurrent modification") {
        return (
            StatusCode::CONFLICT,
            Json(json!({ "error": "Entry was modified concurrently; refresh and retry" })),
        );
    }
    if msg.contains("must be at most") {
        return (StatusCode::BAD_REQUEST, Json(json!({ "error": msg })));
    }
    tracing::error!(error = %e, "entry mutation failed");
    (
        StatusCode::INTERNAL_SERVER_ERROR,
        Json(json!({ "error": "Operation failed; please try again later" })),
    )
}

async fn api_entry_patch(
    State(state): State<AppState>,
    session: Session,
    Path(entry_id): Path<Uuid>,
    Json(body): Json<EntryPatchBody>,
) -> Result<Json<serde_json::Value>, EntryApiError> {
    let user_id = current_user_id(&session)
        .await
        .ok_or((StatusCode::UNAUTHORIZED, Json(json!({ "error": "Not signed in" }))))?;

    let folder = body.folder.trim();
    let entry_type = body.entry_type.trim();
    let name = body.name.trim();
    let notes = body.notes.trim();

    if name.is_empty() {
        return Err((
            StatusCode::BAD_REQUEST,
            Json(json!({ "error": "name must not be empty" })),
        ));
    }

    let tags: Vec<String> = body
        .tags
        .into_iter()
        .map(|t| t.trim().to_string())
        .filter(|t| !t.is_empty())
        .collect();

    if !body.metadata.is_object() {
        return Err((
            StatusCode::BAD_REQUEST,
            Json(json!({ "error": "metadata must be a JSON object" })),
        ));
    }

    update_fields_by_id(
        &state.pool,
        entry_id,
        user_id,
        UpdateEntryFieldsByIdParams {
            folder,
            entry_type,
            name,
            notes,
            tags: &tags,
            metadata: &body.metadata,
        },
    )
    .await
    .map_err(map_entry_mutation_err)?;

    Ok(Json(json!({ "ok": true })))
}

async fn api_entry_delete(
    State(state): State<AppState>,
    session: Session,
    Path(entry_id): Path<Uuid>,
) -> Result<Json<serde_json::Value>, EntryApiError> {
    let user_id = current_user_id(&session)
        .await
        .ok_or((StatusCode::UNAUTHORIZED, Json(json!({ "error": "Not signed in" }))))?;

    let result = delete_by_id(&state.pool, entry_id, user_id)
        .await
        .map_err(map_entry_mutation_err)?;

    Ok(Json(json!({
        "ok": true,
        "migrated": result.migrated,
    })))
}
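The two endpoints added above can be exercised from the command line. A sketch, not part of the diff: the base URL is the default from the README, the UUID and cookie jar are placeholders, and the live `curl` calls are left commented so the script runs standalone.

```shell
#!/bin/sh
# Sketch: calling the entry-management endpoints with curl.
# BASE and ID are placeholders; a logged-in session cookie (cookies.txt) is assumed.
BASE="${BASE:-http://localhost:9315}"
ID="${ID:-00000000-0000-0000-0000-000000000000}"

patch_body='{"folder":"demo","type":"server","name":"web-1","notes":"","tags":["prod"],"metadata":{}}'

echo "PATCH ${BASE}/api/entries/${ID}"
# curl -b cookies.txt -X PATCH "${BASE}/api/entries/${ID}" \
#   -H 'Content-Type: application/json' -d "$patch_body"

echo "DELETE ${BASE}/api/entries/${ID}"
# curl -b cookies.txt -X DELETE "${BASE}/api/entries/${ID}"
```

A successful `DELETE` answers with `{"ok": true, "migrated": [...]}`; a non-empty `migrated` array means shared-key references were redirected, as handled by the UI below.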
// ── OAuth / Well-known ────────────────────────────────────────────────────────

/// RFC 9728 — OAuth 2.0 Protected Resource Metadata.
@@ -92,6 +92,7 @@
      <a href="/dashboard" class="sidebar-logo"><span>secrets</span></a>
      <nav class="sidebar-menu">
        <a href="/dashboard" class="sidebar-link">MCP</a>
        <a href="/entries" class="sidebar-link">Entries</a>
        <a href="/audit" class="sidebar-link active">Audit</a>
      </nav>
    </aside>
@@ -174,6 +174,7 @@
      <a href="/dashboard" class="sidebar-logo"><span>secrets</span></a>
      <nav class="sidebar-menu">
        <a href="/dashboard" class="sidebar-link active">MCP</a>
        <a href="/entries" class="sidebar-link">Entries</a>
        <a href="/audit" class="sidebar-link">Audit</a>
      </nav>
    </aside>
390	crates/secrets-mcp/templates/entries.html	Normal file
@@ -0,0 +1,390 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <link rel="icon" href="/favicon.svg?v={{ version }}" type="image/svg+xml">
  <title>Secrets — Entries</title>
  <style>
    /* @import must precede other rules to take effect */
    @import url('https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@400;600&family=Inter:wght@400;500;600&display=swap');
    *, *::before, *::after { box-sizing: border-box; margin: 0; padding: 0; }
    :root {
      --bg: #0d1117; --surface: #161b22; --surface2: #21262d;
      --border: #30363d; --text: #e6edf3; --text-muted: #8b949e;
      --accent: #58a6ff; --accent-hover: #79b8ff;
    }
    body { background: var(--bg); color: var(--text); font-family: 'Inter', sans-serif; min-height: 100vh; }
    .layout { display: flex; min-height: 100vh; }
    .sidebar {
      width: 220px; flex-shrink: 0; background: var(--surface); border-right: 1px solid var(--border);
      padding: 24px 16px; display: flex; flex-direction: column; gap: 20px;
    }
    .sidebar-logo { font-family: 'JetBrains Mono', monospace; font-size: 16px; font-weight: 600;
      color: var(--text); text-decoration: none; padding: 0 10px; }
    .sidebar-logo span { color: var(--accent); }
    .sidebar-menu { display: flex; flex-direction: column; gap: 6px; }
    .sidebar-link {
      padding: 10px 12px; border-radius: 8px; color: var(--text-muted); text-decoration: none;
      border: 1px solid transparent; font-size: 13px; font-weight: 500;
    }
    .sidebar-link:hover { background: var(--surface2); color: var(--text); }
    .sidebar-link.active {
      background: rgba(88,166,255,0.12); color: var(--text); border-color: rgba(88,166,255,0.35);
    }
    .content-shell { flex: 1; min-width: 0; display: flex; flex-direction: column; }
    .topbar {
      background: var(--surface); border-bottom: 1px solid var(--border); padding: 0 24px;
      display: flex; align-items: center; gap: 12px; min-height: 52px;
    }
    .topbar-spacer { flex: 1; }
    .nav-user { font-size: 13px; color: var(--text-muted); }
    .btn-sign-out {
      padding: 5px 12px; border-radius: 6px; border: 1px solid var(--border);
      background: none; color: var(--text); font-size: 12px; text-decoration: none; cursor: pointer;
    }
    .btn-sign-out:hover { background: var(--surface2); }
    .main { padding: 32px 24px 40px; flex: 1; }
    .card { background: var(--surface); border: 1px solid var(--border); border-radius: 12px;
      padding: 24px; width: 100%; max-width: 1280px; margin: 0 auto; }
    .card-title { font-size: 20px; font-weight: 600; margin-bottom: 8px; }
    .card-subtitle { color: var(--text-muted); font-size: 13px; margin-bottom: 20px; }
    .filter-bar {
      display: flex; flex-wrap: wrap; align-items: flex-end; gap: 12px 16px;
      margin-bottom: 20px; padding: 16px; background: var(--bg); border: 1px solid var(--border);
      border-radius: 10px;
    }
    .filter-field { display: flex; flex-direction: column; gap: 6px; min-width: 140px; flex: 1; }
    .filter-field label { font-size: 12px; color: var(--text-muted); font-weight: 500; }
    .filter-field input {
      background: var(--surface); border: 1px solid var(--border); border-radius: 6px;
      color: var(--text); padding: 8px 10px; font-size: 13px; font-family: 'JetBrains Mono', monospace;
      outline: none; width: 100%;
    }
    .filter-field input:focus { border-color: var(--accent); }
    .filter-actions { display: flex; flex-wrap: wrap; align-items: center; gap: 8px; }
    .btn-filter {
      padding: 8px 16px; border-radius: 6px; border: none; background: var(--accent); color: #0d1117;
      font-size: 13px; font-weight: 600; cursor: pointer;
    }
    .btn-filter:hover { background: var(--accent-hover); }
    .btn-clear {
      padding: 8px 14px; border-radius: 6px; border: 1px solid var(--border); background: transparent;
      color: var(--text-muted); font-size: 13px; text-decoration: none; cursor: pointer;
    }
    .btn-clear:hover { background: var(--surface2); color: var(--text); }
    .empty { color: var(--text-muted); font-size: 14px; padding: 20px 0; }
    .table-wrap { overflow-x: auto; }
    table { width: 100%; border-collapse: collapse; min-width: 720px; }
    th, td { text-align: left; vertical-align: top; padding: 12px 10px; border-top: 1px solid var(--border); }
    th { color: var(--text-muted); font-size: 12px; font-weight: 600; white-space: nowrap; }
    td { font-size: 13px; }
    .mono { font-family: 'JetBrains Mono', monospace; }
    .cell-notes, .cell-meta {
      max-width: 280px; word-break: break-word;
    }
    .notes-scroll {
      max-height: 160px;
      overflow: auto;
      white-space: pre-wrap;
      word-break: break-word;
      padding: 8px;
      background: var(--bg);
      border: 1px solid var(--border);
      border-radius: 8px;
      font-size: 12px;
    }
    .detail {
      background: var(--bg); border: 1px solid var(--border); border-radius: 8px;
      padding: 10px; white-space: pre-wrap; word-break: break-word; font-size: 12px;
      max-width: 320px; max-height: 160px; overflow: auto;
    }
    .col-actions { white-space: nowrap; }
    .row-actions { display: flex; flex-wrap: wrap; gap: 6px; }
    .btn-row {
      padding: 4px 10px; border-radius: 6px; font-size: 12px; cursor: pointer;
      border: 1px solid var(--border); background: var(--surface2); color: var(--text-muted);
      font-family: inherit;
    }
    .btn-row:hover { color: var(--text); border-color: var(--text-muted); }
    .btn-row.danger:hover { border-color: #f85149; color: #f85149; }
    .modal-overlay {
      position: fixed; inset: 0; background: rgba(1, 4, 9, 0.65); z-index: 200;
      display: flex; align-items: center; justify-content: center; padding: 16px;
    }
    .modal-overlay[hidden] { display: none !important; }
    .modal {
      background: var(--surface); border: 1px solid var(--border); border-radius: 12px;
      padding: 22px; width: 100%; max-width: 520px; max-height: 90vh; overflow: auto;
      box-shadow: 0 16px 48px rgba(0,0,0,0.45);
    }
    .modal-title { font-size: 16px; font-weight: 600; margin-bottom: 14px; }
    .modal-field { margin-bottom: 12px; }
    .modal-field label { display: block; font-size: 12px; color: var(--text-muted); margin-bottom: 5px; }
    .modal-field input, .modal-field textarea {
      width: 100%; background: var(--bg); border: 1px solid var(--border); border-radius: 6px;
      color: var(--text); padding: 8px 10px; font-size: 13px; font-family: 'JetBrains Mono', monospace;
      outline: none;
    }
    .modal-field textarea { min-height: 72px; resize: vertical; }
    .modal-field textarea.metadata-edit { min-height: 140px; }
    .modal-error { color: #f85149; font-size: 12px; margin-bottom: 10px; display: none; }
    .modal-error.visible { display: block; }
    .modal-footer { display: flex; flex-wrap: wrap; gap: 8px; justify-content: flex-end; margin-top: 16px; }
    .btn-modal { padding: 8px 16px; border-radius: 6px; font-size: 13px; cursor: pointer; font-family: inherit; border: 1px solid var(--border); background: transparent; color: var(--text); }
    .btn-modal.primary { background: var(--accent); color: #0d1117; border-color: transparent; font-weight: 600; }
    .btn-modal.primary:hover { background: var(--accent-hover); }
    .btn-modal.danger { border-color: #f85149; color: #f85149; }
    @media (max-width: 900px) {
      .layout { flex-direction: column; }
      .sidebar {
        width: 100%; border-right: none; border-bottom: 1px solid var(--border);
        padding: 16px; gap: 14px;
      }
      .sidebar-menu { flex-direction: row; flex-wrap: wrap; }
      .sidebar-link { flex: 1; text-align: center; min-width: 72px; }
      .main { padding: 20px 12px 28px; }
      .card { padding: 16px; }
      .topbar { padding: 12px 16px; flex-wrap: wrap; }
      table, thead, tbody, th, td, tr { display: block; }
      thead { display: none; }
      tr { border-top: 1px solid var(--border); padding: 12px 0; }
      td { border-top: none; padding: 6px 0; max-width: none; }
      td::before {
        display: block; color: var(--text-muted); font-size: 11px;
        margin-bottom: 4px; text-transform: uppercase;
      }
      td.col-updated::before { content: "Updated"; }
      td.col-folder::before { content: "Folder"; }
      td.col-type::before { content: "Type"; }
      td.col-name::before { content: "Name"; }
      td.col-notes::before { content: "Notes"; }
      td.col-tags::before { content: "Tags"; }
      td.col-meta::before { content: "Metadata"; }
      td.col-actions::before { content: "Actions"; }
      .detail { max-width: none; }
      .notes-scroll { max-width: none; }
    }
  </style>
</head>
<body>
  <div class="layout">
    <aside class="sidebar">
      <a href="/dashboard" class="sidebar-logo"><span>secrets</span></a>
      <nav class="sidebar-menu">
        <a href="/dashboard" class="sidebar-link">MCP</a>
        <a href="/entries" class="sidebar-link active">Entries</a>
        <a href="/audit" class="sidebar-link">Audit</a>
      </nav>
    </aside>

    <div class="content-shell">
      <div class="topbar">
        <span class="topbar-spacer"></span>
        <span class="nav-user">{{ user_name }}{% if !user_email.is_empty() %} · {{ user_email }}{% endif %}</span>
        <form action="/auth/logout" method="post" style="display:inline">
          <button type="submit" class="btn-sign-out">Sign out</button>
        </form>
      </div>

      <main class="main">
        <section class="card">
          <div class="card-title">My entries</div>
          <div class="card-subtitle">Under the current filters there are <strong>{{ total_count }}</strong> records; this page shows <strong>{{ shown_count }}</strong> (newest update first, at most {{ limit }} per page). Ciphertext fields are excluded. Times are shown in the browser's local time zone.</div>

          <form class="filter-bar" method="get" action="/entries">
            <div class="filter-field">
              <label for="filter-folder">Folder (exact match)</label>
              <input id="filter-folder" name="folder" type="text" value="{{ filter_folder }}" placeholder="e.g. refining" autocomplete="off">
            </div>
            <div class="filter-field">
              <label for="filter-type">Type (exact match)</label>
              <input id="filter-type" name="type" type="text" value="{{ filter_type }}" placeholder="e.g. server" autocomplete="off">
            </div>
            <div class="filter-actions">
              <button type="submit" class="btn-filter">Filter</button>
              <a href="/entries" class="btn-clear">Clear</a>
            </div>
          </form>

          {% if entries.is_empty() %}
          <div class="empty">No entries yet.</div>
          {% else %}
          <div class="table-wrap">
            <table>
              <thead>
                <tr>
                  <th>Updated</th>
                  <th>Folder</th>
                  <th>Type</th>
                  <th>Name</th>
                  <th>Notes</th>
                  <th>Tags</th>
                  <th>Metadata</th>
                  <th>Actions</th>
                </tr>
              </thead>
              <tbody>
                {% for entry in entries %}
                <tr data-entry-id="{{ entry.id }}">
                  <td class="col-updated mono"><time class="entry-local-time" datetime="{{ entry.updated_at_iso }}">{{ entry.updated_at_iso }}</time></td>
                  <td class="col-folder mono cell-folder">{{ entry.folder }}</td>
                  <td class="col-type mono cell-type">{{ entry.entry_type }}</td>
                  <td class="col-name mono cell-name">{{ entry.name }}</td>
                  <td class="col-notes cell-notes">{% if !entry.notes.is_empty() %}<div class="notes-scroll cell-notes-val">{{ entry.notes }}</div>{% endif %}</td>
                  <td class="col-tags mono cell-tags-val">{{ entry.tags }}</td>
                  <td class="col-meta cell-meta"><pre class="detail cell-meta-val">{{ entry.metadata }}</pre></td>
                  <td class="col-actions">
                    <div class="row-actions">
                      <button type="button" class="btn-row btn-edit">Edit</button>
                      <button type="button" class="btn-row danger btn-del">Delete</button>
                    </div>
                  </td>
                </tr>
                {% endfor %}
              </tbody>
            </table>
          </div>
          {% endif %}
        </section>
      </main>
    </div>
  </div>

  <div id="edit-overlay" class="modal-overlay" hidden>
    <div class="modal" role="dialog" aria-modal="true" aria-labelledby="edit-title">
      <div class="modal-title" id="edit-title">Edit entry</div>
      <div id="edit-error" class="modal-error"></div>
      <div class="modal-field"><label for="edit-folder">Folder</label><input id="edit-folder" type="text" autocomplete="off"></div>
      <div class="modal-field"><label for="edit-type">Type</label><input id="edit-type" type="text" autocomplete="off"></div>
      <div class="modal-field"><label for="edit-name">Name</label><input id="edit-name" type="text" autocomplete="off"></div>
      <div class="modal-field"><label for="edit-notes">Notes</label><textarea id="edit-notes"></textarea></div>
      <div class="modal-field"><label for="edit-tags">Tags (comma-separated)</label><input id="edit-tags" type="text" autocomplete="off"></div>
      <div class="modal-field"><label for="edit-metadata">Metadata (JSON object)</label><textarea id="edit-metadata" class="metadata-edit"></textarea></div>
      <div class="modal-footer">
        <button type="button" class="btn-modal" id="edit-cancel">Cancel</button>
        <button type="button" class="btn-modal primary" id="edit-save">Save</button>
      </div>
    </div>
  </div>
<script>
|
||||||
|
(function () {
|
||||||
|
document.querySelectorAll('time.entry-local-time[datetime]').forEach(function (el) {
|
||||||
|
var raw = el.getAttribute('datetime');
|
||||||
|
var d = raw ? new Date(raw) : null;
|
||||||
|
if (d && !isNaN(d.getTime())) {
|
||||||
|
el.textContent = d.toLocaleString(undefined, { dateStyle: 'medium', timeStyle: 'medium' });
|
||||||
|
el.title = raw + ' (UTC)';
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
var editOverlay = document.getElementById('edit-overlay');
|
||||||
|
var editError = document.getElementById('edit-error');
|
||||||
|
var editFolder = document.getElementById('edit-folder');
|
||||||
|
var editType = document.getElementById('edit-type');
|
||||||
|
var editName = document.getElementById('edit-name');
|
||||||
|
var editNotes = document.getElementById('edit-notes');
|
||||||
|
var editTags = document.getElementById('edit-tags');
|
||||||
|
var editMetadata = document.getElementById('edit-metadata');
|
||||||
|
var currentEntryId = null;
|
||||||
|
|
||||||
|
function showEditErr(msg) {
|
||||||
|
editError.textContent = msg || '';
|
||||||
|
editError.classList.toggle('visible', !!msg);
|
||||||
|
}
|
||||||
|
|
||||||
|
function openEdit(tr) {
|
||||||
|
var id = tr.getAttribute('data-entry-id');
|
||||||
|
if (!id) return;
|
||||||
|
currentEntryId = id;
|
||||||
|
showEditErr('');
|
||||||
|
editFolder.value = tr.querySelector('.cell-folder') ? tr.querySelector('.cell-folder').textContent.trim() : '';
|
||||||
|
editType.value = tr.querySelector('.cell-type') ? tr.querySelector('.cell-type').textContent.trim() : '';
|
||||||
|
editName.value = tr.querySelector('.cell-name') ? tr.querySelector('.cell-name').textContent.trim() : '';
|
||||||
|
editNotes.value = tr.querySelector('.cell-notes-val') ? tr.querySelector('.cell-notes-val').textContent : '';
|
||||||
|
var tagsText = tr.querySelector('.cell-tags-val') ? tr.querySelector('.cell-tags-val').textContent.trim() : '';
|
||||||
|
editTags.value = tagsText;
|
||||||
|
var metaPre = tr.querySelector('.cell-meta-val');
|
||||||
|
editMetadata.value = metaPre ? metaPre.textContent : '{}';
|
||||||
|
editOverlay.hidden = false;
|
||||||
|
}
|
||||||
|
|
||||||
|
function closeEdit() {
|
||||||
|
editOverlay.hidden = true;
|
||||||
|
currentEntryId = null;
|
||||||
|
showEditErr('');
|
||||||
|
}
|
||||||
|
|
||||||
|
document.getElementById('edit-cancel').addEventListener('click', closeEdit);
|
||||||
|
editOverlay.addEventListener('click', function (e) {
|
||||||
|
if (e.target === editOverlay) closeEdit();
|
||||||
|
});
|
||||||
|
|
||||||
|
document.getElementById('edit-save').addEventListener('click', function () {
|
||||||
|
if (!currentEntryId) return;
|
||||||
|
var meta;
|
||||||
|
try {
|
||||||
|
meta = JSON.parse(editMetadata.value);
|
||||||
|
} catch (err) {
|
||||||
|
showEditErr('Metadata 不是合法 JSON');
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
if (meta === null || typeof meta !== 'object' || Array.isArray(meta)) {
|
||||||
|
showEditErr('Metadata 必须是 JSON 对象');
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
var tags = editTags.value.split(',').map(function (s) { return s.trim(); }).filter(Boolean);
|
||||||
|
var body = {
|
||||||
|
folder: editFolder.value,
|
||||||
|
type: editType.value,
|
||||||
|
name: editName.value.trim(),
|
||||||
|
notes: editNotes.value,
|
||||||
|
tags: tags,
|
||||||
|
metadata: meta
|
||||||
|
};
|
||||||
|
showEditErr('');
|
||||||
|
fetch('/api/entries/' + encodeURIComponent(currentEntryId), {
|
||||||
|
method: 'PATCH',
|
||||||
|
headers: { 'Content-Type': 'application/json' },
|
||||||
|
credentials: 'same-origin',
|
||||||
|
body: JSON.stringify(body)
|
||||||
|
}).then(function (r) {
|
||||||
|
return r.json().then(function (data) {
|
||||||
|
if (!r.ok) throw new Error(data.error || ('HTTP ' + r.status));
|
||||||
|
return data;
|
||||||
|
});
|
||||||
|
}).then(function () {
|
||||||
|
closeEdit();
|
||||||
|
window.location.reload();
|
||||||
|
}).catch(function (e) {
|
||||||
|
showEditErr(e.message || String(e));
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
document.querySelectorAll('tr[data-entry-id]').forEach(function (tr) {
|
||||||
|
tr.querySelector('.btn-edit').addEventListener('click', function () { openEdit(tr); });
|
||||||
|
tr.querySelector('.btn-del').addEventListener('click', function () {
|
||||||
|
var id = tr.getAttribute('data-entry-id');
|
||||||
|
var nameEl = tr.querySelector('.cell-name');
|
||||||
|
var name = nameEl ? nameEl.textContent.trim() : '';
|
||||||
|
if (!id) return;
|
||||||
|
if (!confirm('确定删除条目「' + name + '」?')) return;
|
||||||
|
fetch('/api/entries/' + encodeURIComponent(id), { method: 'DELETE', credentials: 'same-origin' })
|
||||||
|
.then(function (r) {
|
||||||
|
return r.json().then(function (data) {
|
||||||
|
if (!r.ok) throw new Error(data.error || ('HTTP ' + r.status));
|
||||||
|
return data;
|
||||||
|
});
|
||||||
|
})
|
||||||
|
.then(function (data) {
|
||||||
|
if (data && Array.isArray(data.migrated) && data.migrated.length > 0) {
|
||||||
|
alert('已自动迁移共享 key 引用:' + data.migrated.length + ' 个条目完成重定向。');
|
||||||
|
}
|
||||||
|
window.location.reload();
|
||||||
|
})
|
||||||
|
.catch(function (e) { alert(e.message || String(e)); });
|
||||||
|
});
|
||||||
|
});
|
||||||
|
})();
|
||||||
|
</script>
|
||||||
|
</body>
|
||||||
|
</html>
|
||||||
@@ -3,7 +3,13 @@
# ─── Database ─────────────────────────────────────────────────────────
# Web sessions (tower-sessions) share this database with business data; the
# session table is migrated automatically at startup, no extra variable needed.
SECRETS_DATABASE_URL=postgres://postgres:PASSWORD@db.refining.ltd:5432/secrets-mcp
# Strongly recommended in production: verify-full (at minimum verify-ca)
SECRETS_DATABASE_SSL_MODE=verify-full
# CA root certificate path for a private CA or self-managed chain;
# leave unset when using a publicly trusted CA
# SECRETS_DATABASE_SSL_ROOT_CERT=/etc/secrets/pg-ca.crt
# When set to prod/production, the service rejects weak TLS modes (prefer/disable/allow/require)
SECRETS_ENV=production
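The `SECRETS_ENV` gate described above can be sketched as a pre-start shell check. This is illustrative only (the variable names come from the env file; the function and script are not the service's actual startup path):

```shell
#!/bin/sh
# Illustrative pre-start guard mirroring the service's behavior:
# refuse weak PostgreSQL TLS modes when SECRETS_ENV is prod/production.
check_tls_mode() {
  env_name="$1"; ssl_mode="$2"
  case "$env_name" in
    prod|production)
      case "$ssl_mode" in
        prefer|disable|allow|require)
          echo "refusing to start: weak SSL mode '$ssl_mode' in $env_name" >&2
          return 1 ;;
      esac ;;
  esac
  return 0
}

# With the defaults below the check passes and prints a confirmation.
check_tls_mode "${SECRETS_ENV:-dev}" "${SECRETS_DATABASE_SSL_MODE:-prefer}" \
  && echo "ssl mode ok"
```

Outside production (`dev` here), `prefer` is tolerated; in `prod`/`production` only the `verify-*` modes get through.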

# ─── Service address ──────────────────────────────────────────────────
# Internal listen address (use an internal port behind a Cloudflare / Nginx reverse proxy)
92	deploy/postgres-tls-hardening.md	Normal file
@@ -0,0 +1,92 @@
# PostgreSQL TLS Hardening Runbook

This runbook applies to:

- PostgreSQL server: `47.117.131.22` (`db.refining.ltd`)
- `secrets-mcp` app server: `47.238.146.244` (`secrets.refining.app`)

## 1) Issue certificate for `db.refining.ltd` (Let's Encrypt + Cloudflare DNS-01)

Install `acme.sh` on the PostgreSQL server and use a Cloudflare API token with DNS edit permission for the target zone.

```bash
curl https://get.acme.sh | sh -s email=ops@refining.ltd
export CF_Token="your_cloudflare_dns_token"
export CF_Zone_ID="your_zone_id"
~/.acme.sh/acme.sh --issue --dns dns_cf -d db.refining.ltd --keylength ec-256
```

Install cert/key into a PostgreSQL-readable path:

```bash
sudo mkdir -p /etc/postgresql/tls
sudo ~/.acme.sh/acme.sh --install-cert -d db.refining.ltd --ecc \
  --fullchain-file /etc/postgresql/tls/fullchain.pem \
  --key-file /etc/postgresql/tls/privkey.pem \
  --reloadcmd "systemctl reload postgresql || systemctl restart postgresql"
sudo chown -R postgres:postgres /etc/postgresql/tls
sudo chmod 600 /etc/postgresql/tls/privkey.pem
sudo chmod 644 /etc/postgresql/tls/fullchain.pem
```

## 2) Configure PostgreSQL TLS and access rules

In `postgresql.conf`:

```conf
ssl = on
ssl_cert_file = '/etc/postgresql/tls/fullchain.pem'
ssl_key_file = '/etc/postgresql/tls/privkey.pem'
```

In `pg_hba.conf`, allow app traffic via TLS only (example):

```conf
hostssl secrets-mcp postgres 47.238.146.244/32 scram-sha-256
```

Keep a safe admin path (`local` socket or a restricted source CIDR) before removing old plaintext `host` rules.

Reload PostgreSQL:

```bash
sudo systemctl reload postgresql
```

## 3) Verify server-side TLS

```bash
openssl s_client -starttls postgres -connect db.refining.ltd:5432 -servername db.refining.ltd
```

The handshake should succeed and the certificate should match `db.refining.ltd`.
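A complementary check from the client side is to ask the server, through the `pg_stat_ssl` system view, whether the current connection actually negotiated TLS. A sketch (it assumes `psql` is installed on the app server, and is gated behind an environment variable so it can be run without a live database):

```shell
#!/bin/sh
# Query pg_stat_ssl for *this* connection's TLS status.
# Gated: only contacts the server when RUN_DB_CHECKS=1 is set.
if [ "${RUN_DB_CHECKS:-0}" = "1" ]; then
  psql "host=db.refining.ltd dbname=secrets-mcp user=postgres sslmode=verify-full" \
    -c "SELECT ssl, version, cipher FROM pg_stat_ssl WHERE pid = pg_backend_pid();"
  status="checked"
else
  status="skipped"
  echo "skipped: set RUN_DB_CHECKS=1 to query the live server"
fi
```

With TLS working, the query reports `ssl = t` plus the negotiated protocol version and cipher.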
## 4) Update `secrets-mcp` app server env

Use environment values like:

```bash
SECRETS_DATABASE_URL=postgres://postgres:***@db.refining.ltd:5432/secrets-mcp
SECRETS_DATABASE_SSL_MODE=verify-full
SECRETS_ENV=production
```

If you use a private CA instead of a public CA, also set:

```bash
SECRETS_DATABASE_SSL_ROOT_CERT=/etc/secrets/pg-ca.crt
```

Restart `secrets-mcp` after updating the env.

## 5) Verify from app server

Run positive and negative checks:

- Positive: the app starts, migrations pass, and the dashboard + MCP API work.
- Negative:
  - wrong hostname -> connection fails
  - wrong CA file -> connection fails
  - TLS disabled on the DB -> connection fails

This ensures no silent downgrade to weak TLS in production.
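The negative checks above can be scripted around a small `expect_fail` helper. A sketch (hostnames are the ones used earlier in this runbook; the live `psql` calls are gated behind `RUN_DB_CHECKS=1` so the helper itself can be exercised anywhere):

```shell
#!/bin/sh
# expect_fail: succeed only when the wrapped command fails.
expect_fail() {
  if "$@"; then
    echo "expected failure but succeeded: $*" >&2
    return 1
  fi
  return 0
}

if [ "${RUN_DB_CHECKS:-0}" = "1" ]; then
  # Wrong hostname: the bare IP must not verify against the db.refining.ltd certificate.
  expect_fail psql "host=47.117.131.22 dbname=secrets-mcp user=postgres sslmode=verify-full" -c 'SELECT 1'
  # Wrong CA bundle: an empty root cert must fail chain verification.
  expect_fail psql "host=db.refining.ltd dbname=secrets-mcp user=postgres sslmode=verify-full sslrootcert=/dev/null" -c 'SELECT 1'
fi
```

Each `expect_fail` line exits nonzero if the connection unexpectedly succeeds, which makes the script suitable for a post-deploy CI step.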