chore(release): secrets-mcp 0.4.0
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 4m19s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 6s

Bump version for the N:N entry_secrets data model and related MCP/Web
changes. Remove superseded SQL migration artifacts; rely on auto-migrate.
Add structured errors, taxonomy normalization, and web i18n helpers.

Made-with: Cursor
voson
2026-04-04 17:58:12 +08:00
parent b99d821644
commit 1518388374
29 changed files with 2285 additions and 1260 deletions

.gitignore vendored

@@ -2,7 +2,6 @@
.env
.DS_Store
.cursor/
-# JSON credential file downloaded for Google OAuth
-client_secret_*.apps.googleusercontent.com.json
*.pem
tmp/
+client_secret_*.apps.googleusercontent.com.json


@@ -55,13 +55,24 @@ entries (
```sql
secrets (
  id UUID PRIMARY KEY DEFAULT uuidv7(),
- entry_id UUID NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
- field_name VARCHAR(256) NOT NULL,
+ user_id UUID,
+ name VARCHAR(256) NOT NULL,
+ type VARCHAR(64) NOT NULL DEFAULT 'text',
  encrypted BYTEA NOT NULL DEFAULT '\x',
  version BIGINT NOT NULL DEFAULT 1,
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
- updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
- UNIQUE(entry_id, field_name)
+ updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
)
+-- unique: UNIQUE(user_id, name) WHERE user_id IS NOT NULL
+```
+```sql
+entry_secrets (
+ entry_id UUID NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
+ secret_id UUID NOT NULL REFERENCES secrets(id) ON DELETE CASCADE,
+ sort_order INT NOT NULL DEFAULT 0,
+ created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
+ PRIMARY KEY(entry_id, secret_id)
+)
```
@@ -108,17 +119,20 @@ oauth_accounts (
| Field | Meaning | Example |
|------|------|------|
| `folder` | Isolation space (part of the unique key) | `refining` |
-| `type` | Soft category (not part of the unique key) | `server`, `service`, `key`, `person` |
+| `type` | Soft category (not part of the unique key) | `server`, `service`, `person`, `document` |
| `name` | Identifier | `gitea`, `aliyun` |
| `notes` | Non-sensitive notes | free text |
| `tags` | Tags | `["aliyun","prod"]` |
-| `metadata` | Plaintext description | `ip`, `url`, `key_ref` |
-| `secrets.field_name` | Encrypted field name (plaintext) | `token`, `ssh_key` |
+| `metadata` | Plaintext description | `ip`, `url`, `subtype` |
+| `secrets.name` | Secret name (caller-provided) | `token`, `ssh_key`, `password` |
+| `secrets.type` | Secret type (caller-provided, default `text`) | `text`, `password`, `key` |
| `secrets.encrypted` | Ciphertext | AES-GCM |

-### PEM sharing (`key_ref`)
+### Shared secrets (N:N links)

-Store a shared PEM as an entry with **`type=key`**; other records point to the target entry's `name` via `metadata.key_ref` (a `folder/name` form disambiguates). When a referenced key entry is deleted, the service automatically migrates to a single copy plus redirects (the ciphertext is copied to the first referrer, the remaining referrers are repointed to the new owner); the resolution logic lives in `secrets_core::service::env_map`.
+Multiple entries can share the same secret field, linked through the `entry_secrets` join table.
+When adding an entry, use the `link_secret_names` parameter to link existing secrets by name (matched exactly on `(user_id, name)`).
+Deleting an entry only removes the link; a secret still referenced elsewhere is kept, and one no longer referenced by any entry is cleaned up automatically.
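The unlink-then-garbage-collect behavior described above can be modeled in memory. A minimal Rust sketch under stated assumptions (the `Store` type and its methods are hypothetical, not the crate's actual API):

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical in-memory model of the entry_secrets semantics:
// deleting an entry only removes its links; a secret is dropped
// once no entry references it anymore.
struct Store {
    // entry name -> set of linked secret names
    links: HashMap<String, HashSet<String>>,
    secrets: HashSet<String>,
}

impl Store {
    fn new() -> Self {
        Self { links: HashMap::new(), secrets: HashSet::new() }
    }

    fn add_entry(&mut self, entry: &str, secret_names: &[&str]) {
        let set = self.links.entry(entry.to_string()).or_default();
        for s in secret_names {
            self.secrets.insert(s.to_string());
            set.insert(s.to_string());
        }
    }

    fn delete_entry(&mut self, entry: &str) {
        // Unlink first, then clean up secrets with zero remaining references.
        let removed = self.links.remove(entry).unwrap_or_default();
        for secret in removed {
            let still_referenced = self.links.values().any(|set| set.contains(&secret));
            if !still_referenced {
                self.secrets.remove(&secret);
            }
        }
    }
}

fn main() {
    let mut store = Store::new();
    store.add_entry("gitea", &["ssh_key", "token"]);
    store.add_entry("backup-host", &["ssh_key"]); // shares ssh_key

    store.delete_entry("gitea");
    // token was only referenced by gitea, so it is cleaned up
    assert!(!store.secrets.contains("token"));
    // ssh_key is still referenced by backup-host, so it is kept
    assert!(store.secrets.contains("ssh_key"));

    store.delete_entry("backup-host");
    assert!(store.secrets.is_empty());
}
```

The real implementation does the same reference check in SQL with a `NOT EXISTS` subquery inside the delete transaction.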
## Code conventions

Cargo.lock generated

@@ -1960,6 +1960,7 @@ dependencies = [
 "sha2",
 "sqlx",
 "tempfile",
+ "thiserror",
 "tokio",
 "toml",
 "tracing",
@@ -1968,7 +1969,7 @@ dependencies = [
[[package]]
name = "secrets-mcp"
-version = "0.3.9"
+version = "0.4.0"
dependencies = [
 "anyhow",
 "askama",


@@ -28,6 +28,7 @@ rand = "^0.10.0"
# Utils
anyhow = "^1.0.102"
+thiserror = "^2"
chrono = { version = "^0.4.44", features = ["serde"] }
uuid = { version = "^1.22.0", features = ["serde"] }
tracing = "^0.1"


@@ -57,7 +57,7 @@ SECRETS_ENV=production
- **`secrets_search`**: discover entries (filter by query / folder / type / name); does not require the encryption header.
- **`secrets_get` / `secrets_update` / `secrets_delete` (by name) / `secrets_history` / `secrets_rollback`**: a bare `name` that is globally unique is a direct hit; if several entries share the name, a disambiguation error is returned and **`folder`** must be added to the parameters.
- **`secrets_delete`**: `dry_run=true` follows the same disambiguation rules as a real delete: a unique match previews that one entry, multiple matches raise an error asking for `folder`.
-- **Auto-migrating delete of shared keys**: deleting a key entry still referenced via `metadata.key_ref` triggers an automatic migration: the ciphertext is copied to the first referrer, the remaining referrers' `key_ref` is redirected to the new owner, and the delete then proceeds.
+- **Shared secrets**: with N:N links, deleting an entry only removes the link; a shared secret still referenced by another entry is kept, and it is cleaned up automatically once unreferenced.
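The disambiguation rule above can be sketched as a small resolver. This is an illustrative model under stated assumptions, not the server's actual code; `Entry`, `Lookup`, and `resolve` are made-up names:

```rust
// Hypothetical sketch of the name-disambiguation rule: a bare name matching
// exactly one entry is a direct hit; multiple matches require `folder`.
#[derive(Debug)]
struct Entry {
    folder: String,
    name: String,
}

#[derive(Debug)]
enum Lookup<'a> {
    NotFound,
    Hit(&'a Entry),
    // candidate folders reported back so the caller can retry with `folder`
    NeedsFolder(Vec<&'a str>),
}

fn resolve<'a>(entries: &'a [Entry], name: &str, folder: Option<&str>) -> Lookup<'a> {
    let matches: Vec<&Entry> = entries
        .iter()
        .filter(|e| e.name == name && folder.map_or(true, |f| e.folder == f))
        .collect();
    match matches.len() {
        0 => Lookup::NotFound,
        1 => Lookup::Hit(matches[0]),
        _ => Lookup::NeedsFolder(matches.iter().map(|e| e.folder.as_str()).collect()),
    }
}

fn main() {
    let entries = vec![
        Entry { folder: "refining".into(), name: "gitea".into() },
        Entry { folder: "ricnsmart".into(), name: "gitea".into() },
        Entry { folder: "refining".into(), name: "aliyun".into() },
    ];

    // Globally unique name: direct hit without folder.
    assert!(matches!(resolve(&entries, "aliyun", None), Lookup::Hit(_)));
    // Ambiguous name: the caller must supply folder.
    assert!(matches!(resolve(&entries, "gitea", None), Lookup::NeedsFolder(_)));
    // With folder the ambiguity is resolved.
    assert!(matches!(resolve(&entries, "gitea", Some("ricnsmart")), Lookup::Hit(_)));
}
```

The same rule applies to both real deletes and `dry_run=true` previews, so a dry run never reports a different match set than the delete would act on.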
## Encryption architecture (hybrid E2EE)
@@ -151,25 +151,28 @@ flowchart LR
## Data model

-Main table **`entries`** (`folder`, `type`, `name`, `notes`, `tags`, `metadata`; plus `user_id` in multi-tenant mode) + child table **`secrets`** (one encrypted field per row: `field_name`, `encrypted`). **Uniqueness**: `UNIQUE(user_id, folder, name)` (rows with empty `user_id` are legacy, unique on `(folder, name)`). There are also `entries_history`, `secrets_history`, `audit_log`, plus **`users`** (with `key_salt`, `key_check`, `key_params`, `api_key`) and **`oauth_accounts`**. Tables are auto-migrated on first database connect (`migrate` in `secrets-core`); existing databases can follow [`scripts/migrate-v0.3.0.sql`](scripts/migrate-v0.3.0.sql) for the column renames and index rebuilds. **Web login sessions** (tower-sessions) use the same `SECRETS_DATABASE_URL`; the session store is migrated at process start (see `PostgresStore::migrate` in `secrets-mcp`), no extra environment variables required.
+Main table **`entries`** (`folder`, `type`, `name`, `notes`, `tags`, `metadata`; plus `user_id` in multi-tenant mode) + child table **`secrets`** (one encrypted field per row: `name`, `type`, `encrypted`, linked N:N to entries via the `entry_secrets` join table). **Uniqueness**: `UNIQUE(user_id, folder, name)` (rows with empty `user_id` are legacy, unique on `(folder, name)`). There are also `entries_history`, `secrets_history`, `audit_log`, plus **`users`** (with `key_salt`, `key_check`, `key_params`, `api_key`) and **`oauth_accounts`**. Tables are auto-migrated on first database connect (`migrate` in `secrets-core`); existing databases can follow [`scripts/migrate-v0.3.0.sql`](scripts/migrate-v0.3.0.sql) for the column renames and index rebuilds. **Web login sessions** (tower-sessions) use the same `SECRETS_DATABASE_URL`; the session store is migrated at process start (see `PostgresStore::migrate` in `secrets-mcp`), no extra environment variables required.

| Location | Field | Description |
|------|------|------|
| entries | folder | Organization/isolation space, e.g. `refining`, `ricnsmart`; part of the unique key |
-| entries | type | Soft category, e.g. `server`, `service`, `key`, `person` (extensible, not part of the unique key) |
+| entries | type | Soft category, e.g. `server`, `service`, `person`, `document` (extensible, not part of the unique key) |
| entries | name | Human-readable identifier; unique per user together with `folder` |
| entries | notes | Non-sensitive description text |
-| entries | metadata | Plaintext JSON (`ip`, `url`, `key_ref`, etc.) |
+| entries | metadata | Plaintext JSON (`ip`, `url`, `subtype`, etc.) |
-| secrets | field_name | Plaintext field name, useful for schema display |
+| secrets | name | Secret name (caller-provided) |
+| secrets | type | Secret type (caller-provided, default `text`) |
| secrets | encrypted | AES-GCM ciphertext (includes nonce) |
| users | key_salt | PBKDF2 salt (32 B), written when the passphrase is first set |
| users | key_check | A known constant encrypted with the derived key, used to verify the passphrase |
| users | key_params | Key-derivation parameters, e.g. `{"alg":"pbkdf2-sha256","iterations":600000}` |

-### PEM sharing (`key_ref`)
+### Shared secrets (N:N links)

-The same PEM can be referenced by multiple `server` (and similar) records: store the PEM as an entry with **`type=key`** and write the target entry's `name` into the other entries' `metadata.key_ref` (a `folder/name` form disambiguates); rotation then only touches that one record.
-When a shared key is deleted, the system automatically migrates the references: the ciphertext is copied to the first referrer (single copy), the remaining referrers' `key_ref` is redirected to that new owner, and the original key record is then deleted.
+Multiple entries can share the same encrypted field, linked N:N through the `entry_secrets` join table:
+- When adding an entry, pass `link_secret_names` to link existing secrets (looked up by exact `(user_id, name)` match)
+- The same secret can be referenced by multiple entries; deleting one entry does not cascade-delete a shared secret
+- A secret no longer referenced by any entry is cleaned up automatically (a `NOT EXISTS` subquery)
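The `link_secret_names` lookup semantics can be sketched with plain maps. This is an illustrative model under stated assumptions; `resolve_links` and the integer ids standing in for UUIDs are hypothetical, not the real implementation:

```rust
use std::collections::HashMap;

// Hypothetical sketch of link_secret_names resolution: requested names are
// looked up by exact (user_id, name) match; a name that does not resolve
// is reported as an error instead of being silently skipped.
type UserId = Option<u32>; // None = single-user mode, mirroring the real Option<Uuid>

fn resolve_links(
    existing: &HashMap<(UserId, String), u64>, // (user_id, name) -> secret id
    user_id: UserId,
    requested: &[&str],
) -> Result<Vec<u64>, String> {
    let mut ids = Vec::with_capacity(requested.len());
    for name in requested {
        match existing.get(&(user_id, name.to_string())) {
            Some(id) => ids.push(*id),
            None => return Err(format!("secret '{name}' not found for this user")),
        }
    }
    Ok(ids)
}

fn main() {
    let mut existing = HashMap::new();
    existing.insert((Some(1), "ssh_key".to_string()), 10);
    existing.insert((Some(2), "ssh_key".to_string()), 20); // same name, other user

    // Exact (user_id, name) match picks the right user's secret.
    assert_eq!(resolve_links(&existing, Some(1), &["ssh_key"]), Ok(vec![10]));
    // A missing name is an error, not a silent no-op.
    assert!(resolve_links(&existing, Some(1), &["token"]).is_err());
}
```

Keying on the full `(user_id, name)` pair is what keeps one user's `link_secret_names` request from ever attaching another user's secret of the same name.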
## Audit log


@@ -10,6 +10,7 @@ path = "src/lib.rs"
[dependencies]
aes-gcm.workspace = true
anyhow.workspace = true
+thiserror.workspace = true
chrono.workspace = true
rand.workspace = true
serde.workspace = true


@@ -0,0 +1,139 @@
use sqlx::error::DatabaseError;

/// Structured business errors for the secrets service.
///
/// These replace ad-hoc `anyhow` strings for expected failure modes,
/// allowing MCP and Web layers to map to appropriate protocol-level errors.
#[derive(Debug, thiserror::Error)]
pub enum AppError {
    #[error("A secret with the name '{secret_name}' already exists for this user")]
    ConflictSecretName { secret_name: String },
    #[error("An entry with folder='{folder}' and name='{name}' already exists")]
    ConflictEntryName { folder: String, name: String },
    #[error("Entry not found")]
    NotFoundEntry,
    #[error("Validation failed: {message}")]
    Validation { message: String },
    #[error("Concurrent modification detected")]
    ConcurrentModification,
    #[error(transparent)]
    Internal(#[from] anyhow::Error),
}

impl AppError {
    /// Try to convert a sqlx database error into a structured `AppError`.
    ///
    /// The caller should provide the context (which table was being written,
    /// what values were being inserted) so we can produce a meaningful error.
    pub fn from_db_error(err: sqlx::Error, ctx: DbErrorContext<'_>) -> Self {
        if let sqlx::Error::Database(ref db_err) = err
            && db_err.code().as_deref() == Some("23505")
        {
            return Self::from_unique_violation(db_err.as_ref(), ctx);
        }
        AppError::Internal(err.into())
    }

    fn from_unique_violation(db_err: &dyn DatabaseError, ctx: DbErrorContext<'_>) -> Self {
        let constraint = db_err.constraint();
        match constraint {
            Some("idx_secrets_unique_user_name") => AppError::ConflictSecretName {
                secret_name: ctx.secret_name.unwrap_or("unknown").to_string(),
            },
            Some("idx_entries_unique_user") | Some("idx_entries_unique_legacy") => {
                AppError::ConflictEntryName {
                    folder: ctx.folder.unwrap_or("").to_string(),
                    name: ctx.name.unwrap_or("unknown").to_string(),
                }
            }
            _ => {
                // Fall back to message-based detection for unnamed constraints
                let msg = db_err.message();
                if msg.contains("secrets") {
                    AppError::ConflictSecretName {
                        secret_name: ctx.secret_name.unwrap_or("unknown").to_string(),
                    }
                } else {
                    AppError::ConflictEntryName {
                        folder: ctx.folder.unwrap_or("").to_string(),
                        name: ctx.name.unwrap_or("unknown").to_string(),
                    }
                }
            }
        }
    }
}

/// Context hints used when converting a database error to `AppError`.
#[derive(Debug, Default, Clone, Copy)]
pub struct DbErrorContext<'a> {
    pub secret_name: Option<&'a str>,
    pub folder: Option<&'a str>,
    pub name: Option<&'a str>,
}

impl<'a> DbErrorContext<'a> {
    pub fn secret_name(name: &'a str) -> Self {
        Self {
            secret_name: Some(name),
            ..Default::default()
        }
    }

    pub fn entry(folder: &'a str, name: &'a str) -> Self {
        Self {
            folder: Some(folder),
            name: Some(name),
            ..Default::default()
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn app_error_display_messages() {
        let err = AppError::ConflictSecretName {
            secret_name: "token".to_string(),
        };
        assert!(err.to_string().contains("token"));

        let err = AppError::ConflictEntryName {
            folder: "refining".to_string(),
            name: "gitea".to_string(),
        };
        assert!(err.to_string().contains("refining"));
        assert!(err.to_string().contains("gitea"));

        let err = AppError::NotFoundEntry;
        assert_eq!(err.to_string(), "Entry not found");

        let err = AppError::Validation {
            message: "too long".to_string(),
        };
        assert!(err.to_string().contains("too long"));

        let err = AppError::ConcurrentModification;
        assert!(err.to_string().contains("Concurrent modification"));
    }

    #[test]
    fn db_error_context_helpers() {
        let ctx = DbErrorContext::secret_name("my_key");
        assert_eq!(ctx.secret_name, Some("my_key"));
        assert!(ctx.folder.is_none());

        let ctx = DbErrorContext::entry("prod", "db-creds");
        assert_eq!(ctx.folder, Some("prod"));
        assert_eq!(ctx.name, Some("db-creds"));
        assert!(ctx.secret_name.is_none());
    }
}


@@ -2,5 +2,7 @@ pub mod audit;
pub mod config;
pub mod crypto;
pub mod db;
+pub mod error;
pub mod models;
pub mod service;
+pub mod taxonomy;


@@ -7,7 +7,9 @@ use uuid::Uuid;
use crate::crypto;
use crate::db;
+use crate::error::{AppError, DbErrorContext};
use crate::models::EntryRow;
+use crate::taxonomy;

// ── Key/value parsing helpers ─────────────────────────────────────────────────
@@ -177,13 +179,19 @@ pub struct AddParams<'a> {
    pub tags: &'a [String],
    pub meta_entries: &'a [String],
    pub secret_entries: &'a [String],
+   pub secret_types: &'a std::collections::HashMap<String, String>,
    pub link_secret_names: &'a [String],
    /// Optional user_id for multi-user isolation (None = single-user CLI mode)
    pub user_id: Option<Uuid>,
}

pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) -> Result<AddResult> {
-   let metadata = build_json(params.meta_entries)?;
+   let Value::Object(mut metadata_map) = build_json(params.meta_entries)? else {
+       unreachable!("build_json always returns a JSON object");
+   };
+   let normalized_entry_type =
+       taxonomy::normalize_entry_type_and_metadata(params.entry_type, &mut metadata_map);
+   let metadata = Value::Object(metadata_map);
    let secret_json = build_json(params.secret_entries)?;
    let meta_keys = collect_key_paths(params.meta_entries)?;
    let secret_keys = collect_key_paths(params.secret_entries)?;
@@ -224,7 +232,7 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
        entry_id: ex.id,
        user_id: params.user_id,
        folder: params.folder,
-       entry_type: params.entry_type,
+       entry_type: &normalized_entry_type,
        name: params.name,
        version: ex.version,
        action: "add",
@@ -254,7 +262,7 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
    )
    .bind(uid)
    .bind(params.folder)
-   .bind(params.entry_type)
+   .bind(&normalized_entry_type)
    .bind(params.name)
    .bind(params.notes)
    .bind(params.tags)
@@ -277,7 +285,7 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
        RETURNING id"#,
    )
    .bind(params.folder)
-   .bind(params.entry_type)
+   .bind(&normalized_entry_type)
    .bind(params.name)
    .bind(params.notes)
    .bind(params.tags)
@@ -299,7 +307,7 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
        entry_id,
        user_id: params.user_id,
        folder: params.folder,
-       entry_type: params.entry_type,
+       entry_type: &normalized_entry_type,
        name: params.name,
        version: current_entry_version,
        action: "create",
@@ -345,30 +353,42 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
            }
        }

+       let orphan_candidates: Vec<Uuid> = existing_fields.iter().map(|f| f.id).collect();
        sqlx::query("DELETE FROM entry_secrets WHERE entry_id = $1")
            .bind(entry_id)
            .execute(&mut *tx)
            .await?;
-       sqlx::query(
-           "DELETE FROM secrets s \
-            WHERE NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = s.id)",
-       )
-       .execute(&mut *tx)
-       .await?;
+       if !orphan_candidates.is_empty() {
+           sqlx::query(
+               "DELETE FROM secrets s \
+                WHERE s.id = ANY($1) \
+                AND NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = s.id)",
+           )
+           .bind(&orphan_candidates)
+           .execute(&mut *tx)
+           .await?;
+       }
    }

    for (field_name, field_value) in &flat_fields {
        let encrypted = crypto::encrypt_json(master_key, field_value)?;
+       let secret_type = params
+           .secret_types
+           .get(field_name)
+           .map(|s| s.as_str())
+           .unwrap_or("text");
        let secret_id: Uuid = sqlx::query_scalar(
            "INSERT INTO secrets (user_id, name, type, encrypted) VALUES ($1, $2, $3, $4) RETURNING id",
        )
        .bind(params.user_id)
        .bind(field_name)
-       .bind(infer_secret_type(field_name))
+       .bind(secret_type)
        .bind(&encrypted)
-       .fetch_one(&mut *tx)
-       .await?;
+       .fetch_one(&mut *tx)
+       .await
+       .map_err(|e| AppError::from_db_error(e, DbErrorContext::secret_name(field_name)))?;
        sqlx::query("INSERT INTO entry_secrets (entry_id, secret_id) VALUES ($1, $2)")
            .bind(entry_id)
            .bind(secret_id)
@@ -414,7 +434,7 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
        params.user_id,
        "add",
        params.folder,
-       params.entry_type,
+       &normalized_entry_type,
        params.name,
        serde_json::json!({
            "tags": params.tags,
@@ -429,32 +449,13 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
    Ok(AddResult {
        name: params.name.to_string(),
        folder: params.folder.to_string(),
-       entry_type: params.entry_type.to_string(),
+       entry_type: normalized_entry_type,
        tags: params.tags.to_vec(),
        meta_keys,
        secret_keys,
    })
}

-pub(crate) fn infer_secret_type(name: &str) -> &'static str {
-    match name {
-        "ssh_key" => "pem",
-        "password" => "password",
-        "phone" | "phone_2" => "phone",
-        "webhook_url" | "address" => "url",
-        "access_key_id"
-        | "access_key_secret"
-        | "global_api_key"
-        | "api_key"
-        | "secret_key"
-        | "personal_access_token"
-        | "runner_token"
-        | "GOOGLE_CLIENT_ID"
-        | "GOOGLE_CLIENT_SECRET" => "token",
-        _ => "text",
-    }
-}

fn validate_link_secret_names(
    link_secret_names: &[String],
    new_secret_names: &BTreeSet<String>,
@@ -601,6 +602,7 @@ mod tests {
                tags: &[],
                meta_entries: &[],
                secret_entries: &[],
+               secret_types: &Default::default(),
                link_secret_names: std::slice::from_ref(&secret_name),
                user_id: None,
            },
@@ -647,6 +649,7 @@
                tags: &[],
                meta_entries: &[],
                secret_entries: &[],
+               secret_types: &Default::default(),
                link_secret_names: std::slice::from_ref(&secret_name),
                user_id: None,
            },
@@ -697,6 +700,7 @@
                tags: &[],
                meta_entries: &[],
                secret_entries: &[],
+               secret_types: &Default::default(),
                link_secret_names: std::slice::from_ref(&secret_name),
                user_id: None,
            },
@@ -709,4 +713,69 @@ mod tests {
        cleanup_test_rows(&pool, &marker).await?;
        Ok(())
    }
    #[tokio::test]
    async fn add_duplicate_secret_name_returns_conflict_error() -> Result<()> {
        let Some(pool) = maybe_test_pool().await else {
            return Ok(());
        };
        let suffix = Uuid::from_u128(rand::random()).to_string();
        let marker = format!("dup_secret_{}", &suffix[..8]);
        let entry_name = format!("{}_entry", marker);
        let secret_name = "shared_token";
        cleanup_test_rows(&pool, &marker).await?;

        // First add succeeds
        run(
            &pool,
            AddParams {
                name: &entry_name,
                folder: &marker,
                entry_type: "service",
                notes: "",
                tags: &[],
                meta_entries: &[],
                secret_entries: &[format!("{}=value1", secret_name)],
                secret_types: &Default::default(),
                link_secret_names: &[],
                user_id: None,
            },
            &[0_u8; 32],
        )
        .await?;

        // Second add with same secret name under same user_id should fail with ConflictSecretName
        let entry_name2 = format!("{}_entry2", marker);
        let err = run(
            &pool,
            AddParams {
                name: &entry_name2,
                folder: &marker,
                entry_type: "service",
                notes: "",
                tags: &[],
                meta_entries: &[],
                secret_entries: &[format!("{}=value2", secret_name)],
                secret_types: &Default::default(),
                link_secret_names: &[],
                user_id: None,
            },
            &[0_u8; 32],
        )
        .await
        .expect_err("must fail on duplicate secret name");
        let app_err = err
            .downcast_ref::<crate::error::AppError>()
            .expect("error should be AppError");
        assert!(
            matches!(app_err, crate::error::AppError::ConflictSecretName { .. }),
            "expected ConflictSecretName, got: {}",
            app_err
        );

        cleanup_test_rows(&pool, &marker).await?;
        Ok(())
    }
}


@@ -17,7 +17,6 @@ pub struct DeletedEntry {
#[derive(Debug, serde::Serialize)]
pub struct DeleteResult {
    pub deleted: Vec<DeletedEntry>,
-   pub migrated: Vec<String>,
    pub dry_run: bool,
}
@@ -32,174 +31,6 @@ pub struct DeleteParams<'a> {
    pub user_id: Option<Uuid>,
}
#[derive(Debug, sqlx::FromRow)]
struct KeyReferrer {
id: Uuid,
folder: String,
#[sqlx(rename = "type")]
entry_type: String,
name: String,
}
fn ref_label(r: &KeyReferrer) -> String {
format!("{}/{} ({})", r.folder, r.name, r.entry_type)
}
fn ref_path(r: &KeyReferrer) -> String {
format!("{}/{}", r.folder, r.name)
}
async fn fetch_key_referrers_pool(
pool: &PgPool,
key_entry_id: Uuid,
key_folder: &str,
key_name: &str,
user_id: Option<Uuid>,
) -> Result<Vec<KeyReferrer>> {
let qualified = format!("{}/{}", key_folder, key_name);
let refs: Vec<KeyReferrer> = if let Some(uid) = user_id {
sqlx::query_as(
"SELECT id, folder, type, name FROM entries \
WHERE user_id = $1 AND id <> $2 \
AND (metadata->>'key_ref' = $3 OR metadata->>'key_ref' = $4) \
ORDER BY folder, type, name",
)
.bind(uid)
.bind(key_entry_id)
.bind(key_name)
.bind(&qualified)
.fetch_all(pool)
.await?
} else {
sqlx::query_as(
"SELECT id, folder, type, name FROM entries \
WHERE user_id IS NULL AND id <> $1 \
AND (metadata->>'key_ref' = $2 OR metadata->>'key_ref' = $3) \
ORDER BY folder, type, name",
)
.bind(key_entry_id)
.bind(key_name)
.bind(&qualified)
.fetch_all(pool)
.await?
};
Ok(refs)
}
async fn migrate_key_refs_if_needed(
tx: &mut sqlx::Transaction<'_, sqlx::Postgres>,
key_row: &EntryRow,
key_name: &str,
user_id: Option<Uuid>,
dry_run: bool,
) -> Result<Vec<String>> {
let qualified = format!("{}/{}", key_row.folder, key_name);
let refs: Vec<KeyReferrer> = if let Some(uid) = user_id {
sqlx::query_as(
"SELECT id, folder, type, name FROM entries \
WHERE user_id = $1 AND id <> $2 \
AND (metadata->>'key_ref' = $3 OR metadata->>'key_ref' = $4) \
ORDER BY folder, type, name",
)
.bind(uid)
.bind(key_row.id)
.bind(key_name)
.bind(&qualified)
.fetch_all(&mut **tx)
.await?
} else {
sqlx::query_as(
"SELECT id, folder, type, name FROM entries \
WHERE user_id IS NULL AND id <> $1 \
AND (metadata->>'key_ref' = $2 OR metadata->>'key_ref' = $3) \
ORDER BY folder, type, name",
)
.bind(key_row.id)
.bind(key_name)
.bind(&qualified)
.fetch_all(&mut **tx)
.await?
};
if refs.is_empty() {
return Ok(vec![]);
}
if dry_run {
return Ok(refs.iter().map(ref_label).collect());
}
let owner = &refs[0];
let owner_path = ref_path(owner);
let key_fields: Vec<SecretFieldRow> = sqlx::query_as(
"SELECT s.id, s.name, s.encrypted \
FROM entry_secrets es \
JOIN secrets s ON s.id = es.secret_id \
WHERE es.entry_id = $1",
)
.bind(key_row.id)
.fetch_all(&mut **tx)
.await?;
for f in &key_fields {
sqlx::query("INSERT INTO entry_secrets (entry_id, secret_id) VALUES ($1, $2) ON CONFLICT DO NOTHING")
.bind(owner.id)
.bind(f.id)
.execute(&mut **tx)
.await?;
}
sqlx::query(
"UPDATE entries SET metadata = metadata - 'key_ref', \
version = version + 1, updated_at = NOW() WHERE id = $1",
)
.bind(owner.id)
.execute(&mut **tx)
.await?;
crate::audit::log_tx(
tx,
user_id,
"key_migrate",
&owner.folder,
&owner.entry_type,
&owner.name,
json!({
"from_key": format!("{}/{}", key_row.folder, key_name),
"role": "new_owner",
"redirect_target": owner_path,
}),
)
.await;
for r in refs.iter().skip(1) {
sqlx::query(
"UPDATE entries SET metadata = jsonb_set(metadata, '{key_ref}', to_jsonb($2::text), true), \
version = version + 1, updated_at = NOW() WHERE id = $1",
)
.bind(r.id)
.bind(&owner_path)
.execute(&mut **tx)
.await?;
crate::audit::log_tx(
tx,
user_id,
"key_migrate",
&r.folder,
&r.entry_type,
&r.name,
json!({
"from_key": format!("{}/{}", key_row.folder, key_name),
"role": "redirected_ref",
"redirect_to": owner_path,
}),
)
.await;
}
Ok(refs.iter().map(ref_label).collect())
}
/// Delete a single entry by id (multi-tenant: `user_id` must match).
pub async fn delete_by_id(pool: &PgPool, entry_id: Uuid, user_id: Uuid) -> Result<DeleteResult> {
    let mut tx = pool.begin().await?;
@@ -224,8 +55,6 @@ pub async fn delete_by_id(pool: &PgPool, entry_id: Uuid, user_id: Uuid) -> Resul
    let entry_type = row.entry_type.clone();
    let name = row.name.clone();
    let entry_row: EntryRow = (&row).into();
-   let migrated =
-       migrate_key_refs_if_needed(&mut tx, &entry_row, &name, Some(user_id), false).await?;

    snapshot_and_delete(
        &mut tx,
@@ -254,7 +83,6 @@ pub async fn delete_by_id(pool: &PgPool, entry_id: Uuid, user_id: Uuid) -> Resul
            folder,
            entry_type,
        }],
-       migrated,
        dry_run: false,
    })
}
@@ -294,6 +122,7 @@ async fn delete_one(
// - 2+ matches → disambiguation error (same as non-dry-run)
#[derive(sqlx::FromRow)]
struct DryRunRow {
+   #[allow(dead_code)]
    id: Uuid,
    folder: String,
    #[sqlx(rename = "type")]
@@ -339,20 +168,16 @@ async fn delete_one(
        return match rows.len() {
            0 => Ok(DeleteResult {
                deleted: vec![],
-               migrated: vec![],
                dry_run: true,
            }),
            1 => {
                let row = rows.into_iter().next().unwrap();
-               let refs =
-                   fetch_key_referrers_pool(pool, row.id, &row.folder, name, user_id).await?;
                Ok(DeleteResult {
                    deleted: vec![DeletedEntry {
                        name: name.to_string(),
                        folder: row.folder,
                        entry_type: row.entry_type,
                    }],
-                   migrated: refs.iter().map(ref_label).collect(),
                    dry_run: true,
                })
            }
@@ -417,7 +242,6 @@ async fn delete_one(
        tx.rollback().await?;
        return Ok(DeleteResult {
            deleted: vec![],
-           migrated: vec![],
            dry_run: false,
        });
    }
@@ -437,7 +261,6 @@ async fn delete_one(
    let folder = row.folder.clone();
    let entry_type = row.entry_type.clone();
-   let migrated = migrate_key_refs_if_needed(&mut tx, &row, name, user_id, false).await?;
    snapshot_and_delete(&mut tx, &folder, &entry_type, name, &row, user_id).await?;
    crate::audit::log_tx(
        &mut tx,
@@ -457,7 +280,6 @@
            folder,
            entry_type,
        }],
-       migrated,
        dry_run: false,
    })
}
@@ -497,33 +319,29 @@ async fn delete_bulk(
    }
    if entry_type.is_some() {
        conditions.push(format!("type = ${}", idx));
-       idx += 1;
    }
    let where_clause = format!("WHERE {}", conditions.join(" AND "));
+   let _ = idx; // used only for placeholder numbering in conditions

-   let sql = format!(
-       "SELECT id, version, folder, type, name, metadata, tags, notes \
-        FROM entries {where_clause} ORDER BY type, name"
-   );
-   let mut q = sqlx::query_as::<_, FullEntryRow>(&sql);
-   if let Some(uid) = user_id {
-       q = q.bind(uid);
-   }
-   if let Some(f) = folder {
-       q = q.bind(f);
-   }
-   if let Some(t) = entry_type {
-       q = q.bind(t);
-   }
-   let rows = q.fetch_all(pool).await?;
    if dry_run {
-       let mut migrated: Vec<String> = Vec::new();
-       for row in &rows {
-           let refs =
-               fetch_key_referrers_pool(pool, row.id, &row.folder, &row.name, user_id).await?;
-           migrated.extend(refs.iter().map(ref_label));
-       }
+       let sql = format!(
+           "SELECT id, version, folder, type, name, metadata, tags, notes \
+            FROM entries {where_clause} ORDER BY type, name"
+       );
+       let mut q = sqlx::query_as::<_, FullEntryRow>(&sql);
+       if let Some(uid) = user_id {
+           q = q.bind(uid);
+       }
+       if let Some(f) = folder {
+           q = q.bind(f);
+       }
+       if let Some(t) = entry_type {
+           q = q.bind(t);
+       }
+       let rows = q.fetch_all(pool).await?;
        let deleted = rows
            .iter()
            .map(|r| DeletedEntry {
@@ -534,15 +352,31 @@ async fn delete_bulk(
            .collect();
        return Ok(DeleteResult {
            deleted,
-           migrated,
            dry_run: true,
        });
    }

+   let mut tx = pool.begin().await?;
+   let sql = format!(
+       "SELECT id, version, folder, type, name, metadata, tags, notes \
+        FROM entries {where_clause} ORDER BY type, name FOR UPDATE"
+   );
+   let mut q = sqlx::query_as::<_, FullEntryRow>(&sql);
+   if let Some(uid) = user_id {
+       q = q.bind(uid);
+   }
+   if let Some(f) = folder {
+       q = q.bind(f);
+   }
+   if let Some(t) = entry_type {
+       q = q.bind(t);
+   }
+   let rows = q.fetch_all(&mut *tx).await?;
    let mut deleted = Vec::with_capacity(rows.len());
-   let mut migrated: Vec<String> = Vec::new();
    for row in &rows {
-       let entry_row = EntryRow {
+       let entry_row: EntryRow = EntryRow {
            id: row.id,
            version: row.version,
            folder: row.folder.clone(),
@@ -551,9 +385,6 @@ async fn delete_bulk(
            metadata: row.metadata.clone(),
            notes: row.notes.clone(),
        };
-       let mut tx = pool.begin().await?;
-       let m = migrate_key_refs_if_needed(&mut tx, &entry_row, &row.name, user_id, false).await?;
-       migrated.extend(m);
        snapshot_and_delete(
            &mut tx,
            &row.folder,
@@ -573,7 +404,6 @@ async fn delete_bulk(
            json!({"bulk": true}),
        )
        .await;
-       tx.commit().await?;
        deleted.push(DeletedEntry {
            name: row.name.clone(),
            folder: row.folder.clone(),
@@ -581,9 +411,10 @@ async fn delete_bulk(
        });
    }
+   tx.commit().await?;

    Ok(DeleteResult {
        deleted,
-       migrated,
        dry_run: false,
    })
}
@@ -646,12 +477,17 @@ async fn snapshot_and_delete(
     .execute(&mut **tx)
     .await?;
-    sqlx::query(
-        "DELETE FROM secrets s \
-         WHERE NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = s.id)",
-    )
-    .execute(&mut **tx)
-    .await?;
+    let secret_ids: Vec<Uuid> = fields.iter().map(|f| f.id).collect();
+    if !secret_ids.is_empty() {
+        sqlx::query(
+            "DELETE FROM secrets s \
+             WHERE s.id = ANY($1) \
+             AND NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = s.id)",
+        )
+        .bind(&secret_ids)
+        .execute(&mut **tx)
+        .await?;
+    }
     Ok(())
 }
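The new cleanup only garbage-collects the secrets that were linked to the deleted entry, and only when no surviving `entry_secrets` row still references them. The rule can be modeled in memory (a sketch with hypothetical IDs, not the sqlx code above):

```rust
use std::collections::HashSet;

// A secret is an orphan iff it was a candidate (linked to the deleted
// entry) and no remaining entry_secrets row references it.
fn orphan_ids(candidates: &HashSet<u32>, still_referenced: &HashSet<u32>) -> Vec<u32> {
    let mut orphans: Vec<u32> = candidates
        .difference(still_referenced)
        .copied()
        .collect();
    orphans.sort();
    orphans
}

fn main() {
    // Secrets 1..3 were linked to the deleted entry; secret 2 is still
    // shared with another entry, so only 1 and 3 are collected.
    let candidates: HashSet<u32> = [1, 2, 3].into();
    let still_referenced: HashSet<u32> = [2].into();
    assert_eq!(orphan_ids(&candidates, &still_referenced), vec![1, 3]);
}
```

Scoping the `DELETE` to `s.id = ANY($1)` also avoids a full-table anti-join on every delete, which the old unscoped query performed.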
@@ -659,280 +495,153 @@ async fn snapshot_and_delete(
 #[cfg(test)]
 mod tests {
     use super::*;
-    use serde_json::json;
+    use sqlx::PgPool;

     async fn maybe_test_pool() -> Option<PgPool> {
         let Ok(url) = std::env::var("SECRETS_DATABASE_URL") else {
-            eprintln!("skip delete migration tests: SECRETS_DATABASE_URL is not set");
+            eprintln!("skip delete tests: SECRETS_DATABASE_URL is not set");
             return None;
         };
         let Ok(pool) = PgPool::connect(&url).await else {
-            eprintln!("skip delete migration tests: cannot connect to database");
+            eprintln!("skip delete tests: cannot connect to database");
             return None;
         };
         if let Err(e) = crate::db::migrate(&pool).await {
-            eprintln!("skip delete migration tests: migrate failed: {e}");
+            eprintln!("skip delete tests: migrate failed: {e}");
             return None;
         }
         Some(pool)
     }
-    async fn insert_entry(
-        pool: &PgPool,
-        id: Uuid,
-        user_id: Uuid,
-        folder: &str,
-        entry_type: &str,
-        name: &str,
-        metadata: serde_json::Value,
-    ) -> Result<()> {
-        sqlx::query(
-            "INSERT INTO entries (id, user_id, folder, type, name, notes, tags, metadata, version) \
-             VALUES ($1, $2, $3, $4, $5, '', ARRAY[]::text[], $6, 1)",
-        )
-        .bind(id)
-        .bind(user_id)
-        .bind(folder)
-        .bind(entry_type)
-        .bind(name)
-        .bind(metadata)
-        .execute(pool)
-        .await?;
-        Ok(())
-    }
-
-    async fn insert_secret_for_entry(
-        pool: &PgPool,
-        user_id: Uuid,
-        entry_id: Uuid,
-        name: &str,
-        secret_type: &str,
-        encrypted: Vec<u8>,
-    ) -> Result<()> {
-        let secret_id: Uuid = sqlx::query_scalar(
-            "INSERT INTO secrets (user_id, name, type, encrypted) VALUES ($1, $2, $3, $4) RETURNING id",
-        )
-        .bind(user_id)
-        .bind(name)
-        .bind(secret_type)
-        .bind(encrypted)
-        .fetch_one(pool)
-        .await?;
-        sqlx::query("INSERT INTO entry_secrets (entry_id, secret_id) VALUES ($1, $2)")
-            .bind(entry_id)
-            .bind(secret_id)
-            .execute(pool)
-            .await?;
-        Ok(())
-    }
+    async fn cleanup_single_user_rows(pool: &PgPool, marker: &str) -> Result<()> {
+        sqlx::query(
+            "DELETE FROM entries WHERE user_id IS NULL AND (name LIKE $1 OR folder LIKE $1)",
+        )
+        .bind(format!("%{marker}%"))
+        .execute(pool)
+        .await?;
+        sqlx::query(
+            "DELETE FROM secrets WHERE user_id IS NULL AND name LIKE $1 \
+             AND NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = secrets.id)",
+        )
+        .bind(format!("%{marker}%"))
+        .execute(pool)
+        .await?;
+        Ok(())
+    }
     #[tokio::test]
-    async fn delete_shared_key_dry_run_reports_migration_without_writes() -> Result<()> {
+    async fn delete_dry_run_reports_matching_entry_without_writes() -> Result<()> {
         let Some(pool) = maybe_test_pool().await else {
             return Ok(());
         };
-        let user_id = Uuid::from_u128(rand::random());
-        let key_id = Uuid::from_u128(rand::random());
-        let ref_a = Uuid::from_u128(rand::random());
-        let ref_b = Uuid::from_u128(rand::random());
-        insert_entry(
-            &pool,
-            key_id,
-            user_id,
-            "kfolder",
-            "key",
-            "shared-key",
-            json!({}),
-        )
-        .await?;
-        insert_secret_for_entry(&pool, user_id, key_id, "pem", "pem", vec![1_u8, 2, 3]).await?;
-        insert_entry(
-            &pool,
-            ref_a,
-            user_id,
-            "afolder",
-            "server",
-            "srv-a",
-            json!({"key_ref":"kfolder/shared-key"}),
-        )
-        .await?;
-        insert_entry(
-            &pool,
-            ref_b,
-            user_id,
-            "bfolder",
-            "server",
-            "srv-b",
-            json!({"key_ref":"shared-key"}),
-        )
-        .await?;
+        let suffix = Uuid::from_u128(rand::random()).to_string();
+        let marker = format!("delete_dry_{}", &suffix[..8]);
+        let entry_name = format!("{}_entry", marker);
+        cleanup_single_user_rows(&pool, &marker).await?;
+        sqlx::query(
+            "INSERT INTO entries (user_id, folder, type, name, notes, tags, metadata) \
+             VALUES (NULL, $1, 'service', $2, '', '{}', '{}')",
+        )
+        .bind(&marker)
+        .bind(&entry_name)
+        .execute(&pool)
+        .await?;
         let result = run(
             &pool,
             DeleteParams {
-                name: Some("shared-key"),
-                folder: Some("kfolder"),
+                name: Some(&entry_name),
+                folder: Some(&marker),
                 entry_type: None,
                 dry_run: true,
-                user_id: Some(user_id),
+                user_id: None,
             },
         )
         .await?;
         assert!(result.dry_run);
         assert_eq!(result.deleted.len(), 1);
-        assert_eq!(result.migrated.len(), 2);
-        let key_exists: bool = sqlx::query_scalar(
-            "SELECT EXISTS(SELECT 1 FROM entries WHERE id = $1 AND user_id = $2)",
-        )
-        .bind(key_id)
-        .bind(user_id)
-        .fetch_one(&pool)
-        .await?;
-        assert!(key_exists);
-        let ref_a_key_ref: Option<String> =
-            sqlx::query_scalar("SELECT metadata->>'key_ref' FROM entries WHERE id = $1")
-                .bind(ref_a)
-                .fetch_one(&pool)
-                .await?;
-        let ref_b_key_ref: Option<String> =
-            sqlx::query_scalar("SELECT metadata->>'key_ref' FROM entries WHERE id = $1")
-                .bind(ref_b)
-                .fetch_one(&pool)
-                .await?;
-        assert_eq!(ref_a_key_ref.as_deref(), Some("kfolder/shared-key"));
-        assert_eq!(ref_b_key_ref.as_deref(), Some("shared-key"));
-        sqlx::query("DELETE FROM entries WHERE user_id = $1")
-            .bind(user_id)
-            .execute(&pool)
-            .await?;
+        assert_eq!(result.deleted[0].name, entry_name);
+        let still_exists: bool = sqlx::query_scalar(
+            "SELECT EXISTS(SELECT 1 FROM entries WHERE user_id IS NULL AND folder = $1 AND name = $2)",
+        )
+        .bind(&marker)
+        .bind(&entry_name)
+        .fetch_one(&pool)
+        .await?;
+        assert!(still_exists);
+        cleanup_single_user_rows(&pool, &marker).await?;
         Ok(())
     }
     #[tokio::test]
-    async fn delete_shared_key_auto_migrates_single_copy_and_redirects_refs() -> Result<()> {
+    async fn delete_by_id_removes_entry_and_orphan_secret() -> Result<()> {
         let Some(pool) = maybe_test_pool().await else {
             return Ok(());
         };
+        let suffix = Uuid::from_u128(rand::random()).to_string();
+        let marker = format!("delete_id_{}", &suffix[..8]);
         let user_id = Uuid::from_u128(rand::random());
-        let key_id = Uuid::from_u128(rand::random());
-        let ref_a = Uuid::from_u128(rand::random());
-        let ref_b = Uuid::from_u128(rand::random());
-        let ref_c = Uuid::from_u128(rand::random());
-        insert_entry(
-            &pool,
-            key_id,
-            user_id,
-            "kfolder",
-            "key",
-            "shared-key",
-            json!({}),
-        )
-        .await?;
-        insert_secret_for_entry(&pool, user_id, key_id, "pem", "pem", vec![7_u8, 8, 9]).await?;
-        // owner candidate (sorted first by folder)
-        insert_entry(
-            &pool,
-            ref_a,
-            user_id,
-            "afolder",
-            "server",
-            "srv-a",
-            json!({"key_ref":"kfolder/shared-key"}),
-        )
-        .await?;
-        insert_entry(
-            &pool,
-            ref_b,
-            user_id,
-            "bfolder",
-            "server",
-            "srv-b",
-            json!({"key_ref":"shared-key"}),
-        )
-        .await?;
-        insert_entry(
-            &pool,
-            ref_c,
-            user_id,
-            "cfolder",
-            "service",
-            "svc-c",
-            json!({"key_ref":"kfolder/shared-key"}),
-        )
-        .await?;
-        let result = run(
-            &pool,
-            DeleteParams {
-                name: Some("shared-key"),
-                folder: Some("kfolder"),
-                entry_type: None,
-                dry_run: false,
-                user_id: Some(user_id),
-            },
-        )
-        .await?;
-        assert!(!result.dry_run);
-        assert_eq!(result.deleted.len(), 1);
-        assert_eq!(result.migrated.len(), 3);
-        let key_exists: bool = sqlx::query_scalar(
-            "SELECT EXISTS(SELECT 1 FROM entries WHERE id = $1 AND user_id = $2)",
-        )
-        .bind(key_id)
-        .bind(user_id)
-        .fetch_one(&pool)
-        .await?;
-        assert!(!key_exists);
-        let owner_key_ref: Option<String> =
-            sqlx::query_scalar("SELECT metadata->>'key_ref' FROM entries WHERE id = $1")
-                .bind(ref_a)
-                .fetch_one(&pool)
-                .await?;
-        let ref_b_key_ref: Option<String> =
-            sqlx::query_scalar("SELECT metadata->>'key_ref' FROM entries WHERE id = $1")
-                .bind(ref_b)
-                .fetch_one(&pool)
-                .await?;
-        let ref_c_key_ref: Option<String> =
-            sqlx::query_scalar("SELECT metadata->>'key_ref' FROM entries WHERE id = $1")
-                .bind(ref_c)
-                .fetch_one(&pool)
-                .await?;
-        assert_eq!(owner_key_ref, None);
-        assert_eq!(ref_b_key_ref.as_deref(), Some("afolder/srv-a"));
-        assert_eq!(ref_c_key_ref.as_deref(), Some("afolder/srv-a"));
-        let owner_has_copied: bool = sqlx::query_scalar(
-            "SELECT EXISTS( \
-             SELECT 1 \
-             FROM entry_secrets es \
-             JOIN secrets s ON s.id = es.secret_id \
-             WHERE es.entry_id = $1 AND s.name = 'pem' \
-             )",
-        )
-        .bind(ref_a)
-        .fetch_one(&pool)
-        .await?;
-        assert!(owner_has_copied);
-        sqlx::query("DELETE FROM entries WHERE user_id = $1")
+        let entry_name = format!("{}_entry", marker);
+        let secret_name = format!("{}_secret", marker);
+        sqlx::query("DELETE FROM entries WHERE user_id = $1 AND folder = $2")
             .bind(user_id)
+            .bind(&marker)
             .execute(&pool)
             .await?;
+        sqlx::query("DELETE FROM secrets WHERE user_id = $1 AND name = $2")
+            .bind(user_id)
+            .bind(&secret_name)
+            .execute(&pool)
+            .await?;
+        let entry_id: Uuid = sqlx::query_scalar(
+            "INSERT INTO entries (user_id, folder, type, name, notes, tags, metadata) \
+             VALUES ($1, $2, 'service', $3, '', '{}', '{}') RETURNING id",
+        )
+        .bind(user_id)
+        .bind(&marker)
+        .bind(&entry_name)
+        .fetch_one(&pool)
+        .await?;
+        let secret_id: Uuid = sqlx::query_scalar(
+            "INSERT INTO secrets (user_id, name, type, encrypted) VALUES ($1, $2, 'text', $3) RETURNING id",
+        )
+        .bind(user_id)
+        .bind(&secret_name)
+        .bind(vec![1_u8, 2, 3])
+        .fetch_one(&pool)
+        .await?;
+        sqlx::query("INSERT INTO entry_secrets (entry_id, secret_id) VALUES ($1, $2)")
+            .bind(entry_id)
+            .bind(secret_id)
+            .execute(&pool)
+            .await?;
+        let result = delete_by_id(&pool, entry_id, user_id).await?;
+        assert!(!result.dry_run);
+        assert_eq!(result.deleted.len(), 1);
+        assert_eq!(result.deleted[0].name, entry_name);
+        let entry_exists: bool =
+            sqlx::query_scalar("SELECT EXISTS(SELECT 1 FROM entries WHERE id = $1)")
+                .bind(entry_id)
+                .fetch_one(&pool)
+                .await?;
+        let secret_exists: bool =
+            sqlx::query_scalar("SELECT EXISTS(SELECT 1 FROM secrets WHERE id = $1)")
+                .bind(secret_id)
+                .fetch_one(&pool)
+                .await?;
+        assert!(!entry_exists);
+        assert!(!secret_exists);
         Ok(())
     }
 }


@@ -40,7 +40,7 @@ async fn build_entry_env_map(
     only_fields: &[String],
     prefix: &str,
     master_key: &[u8; 32],
-    user_id: Option<Uuid>,
+    _user_id: Option<Uuid>,
 ) -> Result<HashMap<String, String>> {
     let entry_ids = vec![entry.id];
     let secrets_map = fetch_secrets_for_entries(pool, &entry_ids).await?;
@@ -68,44 +68,6 @@ async fn build_entry_env_map(
         map.insert(key, json_to_env_string(&decrypted));
     }
-    // Resolve key_ref. Supported formats: "name" or "folder/name".
-    if let Some(key_ref) = entry.metadata.get("key_ref").and_then(|v| v.as_str()) {
-        let (ref_folder, ref_name) = if let Some((f, n)) = key_ref.split_once('/') {
-            (Some(f), n)
-        } else {
-            (None, key_ref)
-        };
-        let key_entries =
-            fetch_entries(pool, ref_folder, None, Some(ref_name), &[], None, user_id).await?;
-        if key_entries.len() > 1 {
-            anyhow::bail!(
-                "key_ref '{}' matched {} entries; qualify with folder/name to resolve the ambiguity",
-                key_ref,
-                key_entries.len()
-            );
-        }
-        if let Some(key_entry) = key_entries.first() {
-            let key_ids = vec![key_entry.id];
-            let key_fields_map = fetch_secrets_for_entries(pool, &key_ids).await?;
-            let empty = vec![];
-            let key_fields = key_fields_map.get(&key_entry.id).unwrap_or(&empty);
-            let key_prefix = env_prefix(key_entry, prefix);
-            for f in key_fields {
-                let decrypted = crypto::decrypt_json(master_key, &f.encrypted)?;
-                let key_var = format!(
-                    "{}_{}",
-                    key_prefix,
-                    f.name.to_uppercase().replace(['-', '.'], "_")
-                );
-                map.insert(key_var, json_to_env_string(&decrypted));
-            }
-        } else {
-            tracing::warn!(key_ref, ?user_id, "key_ref target not found");
-        }
-    }
     Ok(map)
 }


@@ -85,6 +85,7 @@ pub async fn run(
     tags: &entry.tags,
     meta_entries: &meta_entries,
     secret_entries: &secret_entries,
+    secret_types: &Default::default(),
     link_secret_names: &[],
     user_id: params.user_id,
 },


@@ -8,10 +8,23 @@ use crate::models::{Entry, SecretField};
 pub const FETCH_ALL_LIMIT: u32 = 100_000;

+/// Build an ILIKE pattern for fuzzy matching, escaping `%` and `_` literals.
+pub fn ilike_pattern(value: &str) -> String {
+    format!(
+        "%{}%",
+        value
+            .replace('\\', "\\\\")
+            .replace('%', "\\%")
+            .replace('_', "\\_")
+    )
+}
+
 pub struct SearchParams<'a> {
     pub folder: Option<&'a str>,
     pub entry_type: Option<&'a str>,
     pub name: Option<&'a str>,
+    /// Fuzzy match on `entries.name` only (ILIKE with escaped `%`/`_`).
+    pub name_query: Option<&'a str>,
     pub tags: &'a [String],
     pub query: Option<&'a str>,
     pub sort: &'a str,
@@ -51,11 +64,15 @@ pub async fn count_entries(pool: &PgPool, a: &SearchParams<'_>) -> Result<i64> {
     if let Some(v) = a.name {
         q = q.bind(v);
     }
+    if let Some(v) = a.name_query {
+        let pattern = ilike_pattern(v);
+        q = q.bind(pattern);
+    }
     for tag in a.tags {
         q = q.bind(tag);
     }
     if let Some(v) = a.query {
-        let pattern = format!("%{}%", v.replace('%', "\\%").replace('_', "\\_"));
+        let pattern = ilike_pattern(v);
         q = q.bind(pattern);
     }
     let n = q.fetch_one(pool).await?;
@@ -86,6 +103,10 @@ fn entry_where_clause_and_next_idx(a: &SearchParams<'_>) -> (String, i32) {
         conditions.push(format!("name = ${}", idx));
         idx += 1;
     }
+    if a.name_query.is_some() {
+        conditions.push(format!("name ILIKE ${} ESCAPE '\\'", idx));
+        idx += 1;
+    }
     if !a.tags.is_empty() {
         let placeholders: Vec<String> = a
             .tags
@@ -135,6 +156,7 @@ pub async fn run(pool: &PgPool, params: SearchParams<'_>) -> Result<SearchResult
 }

 /// Fetch entries matching the given filters — returns all matching entries up to FETCH_ALL_LIMIT.
+#[allow(clippy::too_many_arguments)]
 pub async fn fetch_entries(
     pool: &PgPool,
     folder: Option<&str>,
@@ -148,6 +170,7 @@ pub async fn fetch_entries(
         folder,
         entry_type,
         name,
+        name_query: None,
         tags,
         query,
         sort: "name",
@@ -189,11 +212,15 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
     if let Some(v) = a.name {
         q = q.bind(v);
     }
+    if let Some(v) = a.name_query {
+        let pattern = ilike_pattern(v);
+        q = q.bind(pattern);
+    }
     for tag in a.tags {
         q = q.bind(tag);
     }
     if let Some(v) = a.query {
-        let pattern = format!("%{}%", v.replace('%', "\\%").replace('_', "\\_"));
+        let pattern = ilike_pattern(v);
         q = q.bind(pattern);
     }
     q = q.bind(a.limit as i64).bind(a.offset as i64);
@@ -384,3 +411,13 @@ impl EntrySecretRow {
         }
     }
 }
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn ilike_pattern_escapes_backslash_percent_and_underscore() {
+        assert_eq!(ilike_pattern(r"hello\_100%"), r"%hello\\\_100\%%");
+    }
+}
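The escaping performed by `ilike_pattern` can be reproduced standalone (the helper body is copied from the diff for illustration; backslashes are escaped first so later replacements don't double-escape):

```rust
/// Copy of the diff's `ilike_pattern`: escape LIKE metacharacters
/// (`\`, `%`, `_`) before wrapping the value in `%…%`, so user input
/// never acts as a wildcard in the `ILIKE … ESCAPE '\'` condition.
fn ilike_pattern(value: &str) -> String {
    format!(
        "%{}%",
        value
            .replace('\\', "\\\\")
            .replace('%', "\\%")
            .replace('_', "\\_")
    )
}

fn main() {
    // A literal underscore must not match an arbitrary character.
    assert_eq!(ilike_pattern("my_name"), "%my\\_name%");
    // A literal percent must not match an arbitrary suffix.
    assert_eq!(ilike_pattern("100%"), "%100\\%%");
}
```

Note the matching `ESCAPE '\\'` clause in the generated `WHERE` condition: without it, PostgreSQL would still default to backslash escaping, but stating it keeps the pattern's contract explicit.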


@@ -5,11 +5,13 @@ use uuid::Uuid;

 use crate::crypto;
 use crate::db;
+use crate::error::{AppError, DbErrorContext};
 use crate::models::{EntryRow, EntryWriteRow};
 use crate::service::add::{
-    collect_field_paths, collect_key_paths, flatten_json_fields, infer_secret_type, insert_path,
-    parse_key_path, parse_kv, remove_path,
+    collect_field_paths, collect_key_paths, flatten_json_fields, insert_path, parse_key_path,
+    parse_kv, remove_path,
 };
+use crate::taxonomy;

 #[derive(Debug, serde::Serialize)]
 pub struct UpdateResult {
@@ -35,6 +37,7 @@ pub struct UpdateParams<'a> {
     pub meta_entries: &'a [String],
     pub remove_meta: &'a [String],
     pub secret_entries: &'a [String],
+    pub secret_types: &'a std::collections::HashMap<String, String>,
     pub remove_secrets: &'a [String],
     pub user_id: Option<Uuid>,
 }
@@ -90,10 +93,7 @@ pub async fn run(
     let row = match rows.len() {
         0 => {
             tx.rollback().await?;
-            anyhow::bail!(
-                "Not found: '{}'. Use `add` to create it first.",
-                params.name
-            )
+            return Err(AppError::NotFoundEntry.into());
         }
         1 => rows.into_iter().next().unwrap(),
         _ => {
@@ -167,10 +167,7 @@ pub async fn run(
     if result.rows_affected() == 0 {
         tx.rollback().await?;
-        anyhow::bail!(
-            "Concurrent modification detected for '{}'. Please retry.",
-            params.name
-        );
+        return Err(AppError::ConcurrentModification.into());
     }

     for entry in params.secret_entries {
@@ -224,15 +221,21 @@ pub async fn run(
         .execute(&mut *tx)
         .await?;
     } else {
+        let secret_type = params
+            .secret_types
+            .get(field_name)
+            .map(|s| s.as_str())
+            .unwrap_or("text");
         let secret_id: Uuid = sqlx::query_scalar(
             "INSERT INTO secrets (user_id, name, type, encrypted) VALUES ($1, $2, $3, $4) RETURNING id",
         )
         .bind(params.user_id)
-        .bind(field_name)
-        .bind(infer_secret_type(field_name))
+        .bind(field_name.to_string())
+        .bind(secret_type)
         .bind(&encrypted)
         .fetch_one(&mut *tx)
-        .await?;
+        .await
+        .map_err(|e| AppError::from_db_error(e, DbErrorContext::secret_name(field_name)))?;
         sqlx::query("INSERT INTO entry_secrets (entry_id, secret_id) VALUES ($1, $2)")
             .bind(row.id)
             .bind(secret_id)
@@ -347,13 +350,13 @@ pub async fn update_fields_by_id(
     user_id: Uuid,
     params: UpdateEntryFieldsByIdParams<'_>,
 ) -> Result<()> {
-    if params.folder.len() > 128 {
+    if params.folder.chars().count() > 128 {
         anyhow::bail!("folder must be at most 128 characters");
     }
-    if params.entry_type.len() > 64 {
+    if params.entry_type.chars().count() > 64 {
         anyhow::bail!("type must be at most 64 characters");
     }
-    if params.name.len() > 256 {
+    if params.name.chars().count() > 256 {
         anyhow::bail!("name must be at most 256 characters");
     }
@@ -372,7 +375,7 @@ pub async fn update_fields_by_id(
         Some(r) => r,
         None => {
             tx.rollback().await?;
-            anyhow::bail!("Entry not found");
+            return Err(AppError::NotFoundEntry.into());
         }
     };
@@ -395,17 +398,25 @@ pub async fn update_fields_by_id(
         tracing::warn!(error = %e, "failed to snapshot entry history before web update");
     }

+    let mut metadata_map = match params.metadata {
+        Value::Object(m) => m.clone(),
+        _ => Map::new(),
+    };
+    let normalized_type =
+        taxonomy::normalize_entry_type_and_metadata(params.entry_type, &mut metadata_map);
+    let normalized_metadata = Value::Object(metadata_map);
     let res = sqlx::query(
         "UPDATE entries SET folder = $1, type = $2, name = $3, notes = $4, tags = $5, metadata = $6, \
         version = version + 1, updated_at = NOW() \
         WHERE id = $7 AND version = $8",
     )
     .bind(params.folder)
-    .bind(params.entry_type)
+    .bind(&normalized_type)
     .bind(params.name)
     .bind(params.notes)
     .bind(params.tags)
-    .bind(params.metadata)
+    .bind(&normalized_metadata)
     .bind(row.id)
     .bind(row.version)
     .execute(&mut *tx)
@@ -414,16 +425,17 @@ pub async fn update_fields_by_id(
         if let sqlx::Error::Database(ref d) = e
             && d.code().as_deref() == Some("23505")
         {
-            return anyhow::anyhow!(
-                "An entry with this folder and name already exists for your account."
-            );
+            return AppError::ConflictEntryName {
+                folder: params.folder.to_string(),
+                name: params.name.to_string(),
+            };
         }
-        e.into()
+        AppError::Internal(e.into())
     })?;

     if res.rows_affected() == 0 {
         tx.rollback().await?;
-        anyhow::bail!("Concurrent modification detected. Please refresh and try again.");
+        return Err(AppError::ConcurrentModification.into());
     }

     crate::audit::log_tx(
@@ -431,7 +443,7 @@ pub async fn update_fields_by_id(
         Some(user_id),
         "update",
         params.folder,
-        params.entry_type,
+        &normalized_type,
         params.name,
         serde_json::json!({
             "source": "web",
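The switch from `len()` to `chars().count()` in the length checks matters for multibyte names: `str::len()` measures UTF-8 bytes, so a CJK name would hit a byte-based limit three times sooner than intended. A minimal sketch:

```rust
fn main() {
    // `len()` counts UTF-8 bytes; `chars().count()` counts characters.
    // A limit documented as "at most 256 characters" should use the latter.
    let name = "密码条目"; // 4 characters, each encoded as 3 bytes in UTF-8
    assert_eq!(name.len(), 12);
    assert_eq!(name.chars().count(), 4);
}
```

(`chars()` counts Unicode scalar values, not grapheme clusters, which is close enough for a length cap like this.)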


@@ -0,0 +1,111 @@
use serde_json::{Map, Value};
fn normalize_token(input: &str) -> String {
input.trim().to_lowercase().replace('_', "-")
}
fn normalize_subtype_token(input: &str) -> String {
normalize_token(input)
}
fn map_legacy_entry_type(input: &str) -> Option<(&'static str, &'static str)> {
match input {
"log-ingestion-endpoint" => Some(("service", "log-ingestion")),
"cloud-api" => Some(("service", "cloud-api")),
"git-server" => Some(("service", "git")),
"mqtt-broker" => Some(("service", "mqtt-broker")),
"database" => Some(("service", "database")),
"monitoring-dashboard" => Some(("service", "monitoring")),
"dns-api" => Some(("service", "dns-api")),
"notification-webhook" => Some(("service", "webhook")),
"api-endpoint" => Some(("service", "api-endpoint")),
"credential" | "credential-key" => Some(("service", "credential")),
"key" => Some(("service", "credential")),
_ => None,
}
}
/// Normalize entry `type` and optionally backfill `metadata.subtype` for legacy values.
///
/// This keeps backward compatibility:
/// - stable primary types stay unchanged
/// - known legacy long-tail types are mapped to `service` + `metadata.subtype`
/// - unknown values are kept (normalized to kebab-case) instead of hard failing
pub fn normalize_entry_type_and_metadata(
entry_type: &str,
metadata: &mut Map<String, Value>,
) -> String {
let original_raw = entry_type.trim();
let normalized = normalize_token(original_raw);
if normalized.is_empty() {
return String::new();
}
if let Some((mapped_type, mapped_subtype)) = map_legacy_entry_type(&normalized) {
if !metadata.contains_key("subtype") {
metadata.insert(
"subtype".to_string(),
Value::String(mapped_subtype.to_string()),
);
}
if !metadata.contains_key("_original_type") && original_raw != mapped_type {
metadata.insert(
"_original_type".to_string(),
Value::String(original_raw.to_string()),
);
}
return mapped_type.to_string();
}
if let Some(subtype) = metadata.get_mut("subtype")
&& let Some(s) = subtype.as_str()
{
*subtype = Value::String(normalize_subtype_token(s));
}
normalized
}
/// Canonical secret type options for UI dropdowns.
pub const SECRET_TYPE_OPTIONS: &[&str] = &[
"text", "password", "token", "api-key", "ssh-key", "url", "phone", "id-card",
];
#[cfg(test)]
mod tests {
use super::*;
use serde_json::{Map, Value};
#[test]
fn normalize_entry_type_maps_legacy_type_and_backfills_metadata() {
let mut metadata = Map::new();
let normalized = normalize_entry_type_and_metadata("git-server", &mut metadata);
assert_eq!(normalized, "service");
assert_eq!(
metadata.get("subtype"),
Some(&Value::String("git".to_string()))
);
assert_eq!(
metadata.get("_original_type"),
Some(&Value::String("git-server".to_string()))
);
}
#[test]
fn normalize_entry_type_normalizes_existing_subtype() {
let mut metadata = Map::new();
metadata.insert(
"subtype".to_string(),
Value::String("Cloud_API".to_string()),
);
let normalized = normalize_entry_type_and_metadata("service", &mut metadata);
assert_eq!(normalized, "service");
assert_eq!(
metadata.get("subtype"),
Some(&Value::String("cloud-api".to_string()))
);
}
}
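Callers supplying `secret_types` could pre-validate values against `SECRET_TYPE_OPTIONS` using the same kebab-case normalization; a hypothetical helper (not part of the diff), falling back to the documented `"text"` default:

```rust
/// Canonical options, copied from `taxonomy::SECRET_TYPE_OPTIONS` in the diff.
const SECRET_TYPE_OPTIONS: &[&str] = &[
    "text", "password", "token", "api-key", "ssh-key", "url", "phone", "id-card",
];

/// Hypothetical helper: normalize a client-supplied secret type to
/// kebab-case and fall back to "text" for unknown values.
fn normalize_secret_type(input: &str) -> String {
    let t = input.trim().to_lowercase().replace('_', "-");
    if SECRET_TYPE_OPTIONS.contains(&t.as_str()) {
        t
    } else {
        "text".to_string()
    }
}

fn main() {
    assert_eq!(normalize_secret_type("API_KEY"), "api-key");
    assert_eq!(normalize_secret_type("totp-seed"), "text");
}
```

This mirrors the lenient posture of `normalize_entry_type_and_metadata`: unknown inputs degrade gracefully instead of hard-failing.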


@@ -1,6 +1,6 @@
 [package]
 name = "secrets-mcp"
-version = "0.3.9"
+version = "0.4.0"
 edition.workspace = true

 [[bin]]


@@ -0,0 +1,36 @@
use secrets_core::error::AppError;
/// Map a structured `AppError` to an MCP protocol error.
///
/// This replaces the previous pattern of swallowing all errors into `-32603`.
pub fn app_error_to_mcp(err: &AppError) -> rmcp::ErrorData {
match err {
AppError::ConflictSecretName { secret_name } => rmcp::ErrorData::invalid_request(
format!(
"A secret with the name '{secret_name}' already exists for your account. \
Secret names must be unique per user."
),
None,
),
AppError::ConflictEntryName { folder, name } => rmcp::ErrorData::invalid_request(
format!(
"An entry with folder='{folder}' and name='{name}' already exists. \
The combination of folder and name must be unique."
),
None,
),
AppError::NotFoundEntry => rmcp::ErrorData::invalid_request(
"Entry not found. Use secrets_find to discover existing entries.",
None,
),
AppError::Validation { message } => rmcp::ErrorData::invalid_request(message.clone(), None),
AppError::ConcurrentModification => rmcp::ErrorData::invalid_request(
"The entry was modified by another request. Please refresh and try again.",
None,
),
AppError::Internal(_) => rmcp::ErrorData::internal_error(
"Request failed due to a server error. Check service logs if you need details.",
None,
),
}
}


@@ -1,4 +1,5 @@
 mod auth;
+mod error;
 mod logging;
 mod oauth;
 mod tools;


@@ -31,6 +31,7 @@ use secrets_core::service::{
 };

 use crate::auth::AuthUser;
+use crate::error;
 // ── MCP client-facing errors (no internal details) ───────────────────────────
@@ -50,6 +51,17 @@
     )
 }

+fn mcp_err_from_anyhow(
+    tool: &'static str,
+    user_id: Option<Uuid>,
+    err: anyhow::Error,
+) -> rmcp::ErrorData {
+    if let Some(app_err) = err.downcast_ref::<secrets_core::error::AppError>() {
+        return error::app_error_to_mcp(app_err);
+    }
+    mcp_err_internal_logged(tool, user_id, err)
+}
+
 fn mcp_err_invalid_encryption_key_logged(err: impl std::fmt::Display) -> rmcp::ErrorData {
     tracing::warn!(error = %err, "invalid X-Encryption-Key");
     rmcp::ErrorData::invalid_request(
@@ -162,11 +174,17 @@ struct FindInput {
     query: Option<String>,
     #[schemars(description = "Exact folder filter (e.g. 'refining', 'ricnsmart')")]
     folder: Option<String>,
-    #[schemars(description = "Exact type filter (e.g. 'server', 'service', 'person', 'key')")]
+    #[schemars(
+        description = "Exact type filter (recommended: 'server', 'service', 'person', 'document')"
+    )]
     #[serde(rename = "type")]
     entry_type: Option<String>,
-    #[schemars(description = "Exact name filter")]
+    #[schemars(description = "Exact name filter. For fuzzy matching use name_query instead.")]
     name: Option<String>,
+    #[schemars(
+        description = "Fuzzy name filter (ILIKE, case-insensitive partial match). Use this instead of 'name' when you don't know the exact name."
+    )]
+    name_query: Option<String>,
     #[schemars(description = "Tag filters (all must match)")]
     tags: Option<Vec<String>>,
     #[schemars(description = "Max results (default 20)")]
@@ -179,11 +197,17 @@ struct SearchInput {
     query: Option<String>,
     #[schemars(description = "Folder filter (e.g. 'refining', 'personal', 'family')")]
     folder: Option<String>,
-    #[schemars(description = "Type filter (e.g. 'server', 'service', 'person', 'key')")]
+    #[schemars(
+        description = "Type filter (recommended: 'server', 'service', 'person', 'document')"
+    )]
     #[serde(rename = "type")]
     entry_type: Option<String>,
-    #[schemars(description = "Exact name to match")]
+    #[schemars(description = "Exact name to match. For fuzzy matching use name_query instead.")]
     name: Option<String>,
+    #[schemars(
+        description = "Fuzzy name filter (ILIKE, case-insensitive partial match). Use this instead of 'name' when you don't know the exact name."
+    )]
+    name_query: Option<String>,
     #[schemars(description = "Tag filters (all must match)")]
     tags: Option<Vec<String>>,
     #[schemars(description = "Return only summary fields (name/tags/notes/updated_at)")]
@@ -211,7 +235,7 @@ struct AddInput {
     #[schemars(description = "Folder for organization (optional, e.g. 'personal', 'refining')")]
     folder: Option<String>,
     #[schemars(
-        description = "Type/category of this entry (optional, e.g. 'server', 'person', 'key')"
+        description = "Type/category of this entry (optional, recommended: 'server', 'service', 'person', 'document')"
     )]
     #[serde(rename = "type")]
     entry_type: Option<String>,
@@ -233,6 +257,10 @@ struct AddInput {
         description = "Secret fields as a JSON object {\"key\": \"value\"}. Merged with 'secrets' if both provided. Reminder: non-sensitive endpoint/address fields should go to metadata.address."
     )]
     secrets_obj: Option<Map<String, Value>>,
+    #[schemars(
+        description = "Secret types as {\"secret_name\": \"type\"}. Keys must match secret field names. Missing keys default to \"text\"."
+    )]
+    secret_types: Option<Map<String, Value>>,
     #[schemars(
         description = "Link existing secrets by secret name. Names must resolve uniquely under current user."
     )]
@@ -273,6 +301,10 @@ struct UpdateInput {
         description = "Secret fields to update/add as a JSON object {\"key\": \"value\"}. Merged with 'secrets' if both provided. Reminder: non-sensitive endpoint/address fields should go to metadata.address."
     )]
     secrets_obj: Option<Map<String, Value>>,
+    #[schemars(
+        description = "Secret types as {\"secret_name\": \"type\"}. Keys must match secret field names. Missing keys default to \"text\"."
+    )]
+    secret_types: Option<Map<String, Value>>,
     #[schemars(description = "Secret field keys to remove")]
     remove_secrets: Option<Vec<String>>,
 }
@@ -412,6 +444,7 @@ impl SecretsService {
             folder = input.folder.as_deref(),
             entry_type = input.entry_type.as_deref(),
             name = input.name.as_deref(),
+            name_query = input.name_query.as_deref(),
             query = input.query.as_deref(),
             "tool call start",
         );
@@ -422,6 +455,7 @@ impl SecretsService {
             folder: input.folder.as_deref(),
             entry_type: input.entry_type.as_deref(),
             name: input.name.as_deref(),
+            name_query: input.name_query.as_deref(),
             tags: &tags,
             query: input.query.as_deref(),
             sort: "name",
@@ -499,6 +533,7 @@ impl SecretsService {
             folder = input.folder.as_deref(),
             entry_type = input.entry_type.as_deref(),
             name = input.name.as_deref(),
+            name_query = input.name_query.as_deref(),
             query = input.query.as_deref(),
             "tool call start",
         );
@@ -509,6 +544,7 @@ impl SecretsService {
             folder: input.folder.as_deref(),
             entry_type: input.entry_type.as_deref(),
             name: input.name.as_deref(),
+            name_query: input.name_query.as_deref(),
             tags: &tags,
             query: input.query.as_deref(),
             sort: input.sort.as_deref().unwrap_or("name"),
@@ -667,6 +703,11 @@ impl SecretsService {
         if let Some(obj) = input.secrets_obj {
             secrets.extend(map_to_kv_strings(obj));
         }
+        let secret_types = input.secret_types.unwrap_or_default();
+        let secret_types_map: std::collections::HashMap<String, String> = secret_types
.into_iter()
.filter_map(|(k, v)| v.as_str().map(|s| (k, s.to_string())))
.collect();
let link_secret_names = input.link_secret_names.unwrap_or_default(); let link_secret_names = input.link_secret_names.unwrap_or_default();
let folder = input.folder.as_deref().unwrap_or(""); let folder = input.folder.as_deref().unwrap_or("");
let entry_type = input.entry_type.as_deref().unwrap_or(""); let entry_type = input.entry_type.as_deref().unwrap_or("");
@@ -682,13 +723,14 @@ impl SecretsService {
tags: &tags, tags: &tags,
meta_entries: &meta, meta_entries: &meta,
secret_entries: &secrets, secret_entries: &secrets,
secret_types: &secret_types_map,
link_secret_names: &link_secret_names, link_secret_names: &link_secret_names,
user_id: Some(user_id), user_id: Some(user_id),
}, },
&user_key, &user_key,
) )
.await .await
.map_err(|e| mcp_err_internal_logged("secrets_add", Some(user_id), e))?; .map_err(|e| mcp_err_from_anyhow("secrets_add", Some(user_id), e))?;
tracing::info!( tracing::info!(
tool = "secrets_add", tool = "secrets_add",
@@ -745,6 +787,11 @@ impl SecretsService {
if let Some(obj) = input.secrets_obj { if let Some(obj) = input.secrets_obj {
secrets.extend(map_to_kv_strings(obj)); secrets.extend(map_to_kv_strings(obj));
} }
let secret_types = input.secret_types.unwrap_or_default();
let secret_types_map: std::collections::HashMap<String, String> = secret_types
.into_iter()
.filter_map(|(k, v)| v.as_str().map(|s| (k, s.to_string())))
.collect();
let remove_secrets = input.remove_secrets.unwrap_or_default(); let remove_secrets = input.remove_secrets.unwrap_or_default();
let result = svc_update( let result = svc_update(
@@ -758,13 +805,14 @@ impl SecretsService {
meta_entries: &meta, meta_entries: &meta,
remove_meta: &remove_meta, remove_meta: &remove_meta,
secret_entries: &secrets, secret_entries: &secrets,
secret_types: &secret_types_map,
remove_secrets: &remove_secrets, remove_secrets: &remove_secrets,
user_id: Some(user_id), user_id: Some(user_id),
}, },
&user_key, &user_key,
) )
.await .await
.map_err(|e| mcp_err_internal_logged("secrets_update", Some(user_id), e))?; .map_err(|e| mcp_err_from_anyhow("secrets_update", Some(user_id), e))?;
tracing::info!( tracing::info!(
tool = "secrets_update", tool = "secrets_update",


@@ -17,11 +17,12 @@ use uuid::Uuid;
 use secrets_core::audit::log_login;
 use secrets_core::crypto::hex;
+use secrets_core::error::AppError;
 use secrets_core::service::{
 api_key::{ensure_api_key, regenerate_api_key},
 audit_log::list_for_user,
 delete::delete_by_id,
-search::{SearchParams, count_entries, fetch_secret_schemas, list_entries},
+search::{SearchParams, fetch_secret_schemas, ilike_pattern, list_entries},
 update::{UpdateEntryFieldsByIdParams, update_fields_by_id},
 user::{
 OAuthProfile, bind_oauth_account, find_or_create_user, get_user_by_id,
@@ -88,15 +89,16 @@ struct EntriesPageTemplate {
 user_name: String,
 user_email: String,
 entries: Vec<EntryListItemView>,
-total_count: i64,
-shown_count: usize,
-limit: u32,
+folder_tabs: Vec<FolderTabView>,
+type_options: Vec<String>,
+secret_type_options_json: String,
 filter_folder: String,
+filter_name: String,
 filter_type: String,
 version: &'static str,
 }
-/// Non-sensitive fields only (no `secrets` / ciphertext).
+/// Non-sensitive entry fields; `secrets` lists field names/types only (no ciphertext).
 struct EntryListItemView {
 id: String,
 folder: String,
@@ -104,24 +106,37 @@ struct EntryListItemView {
 name: String,
 notes: String,
 tags: String,
-metadata: String,
+/// Compact JSON for `data-entry-metadata` (dialog editor).
+metadata_json: String,
+/// Secret field summaries for table + dialog chips.
 secrets: Vec<SecretSummaryView>,
-/// RFC3339 UTC for `<time datetime>`; localized in entries.html.
+/// JSON array of `{ id, name, secret_type }` for dialog secret chips.
+secrets_json: String,
+/// RFC3339 UTC; shown in edit dialog.
 updated_at_iso: String,
 }
+#[derive(Serialize)]
 struct SecretSummaryView {
 id: String,
 name: String,
 secret_type: String,
 }
+struct FolderTabView {
+name: String,
+count: i64,
+href: String,
+active: bool,
+}
 /// Cap for HTML list (avoids loading unbounded rows into memory).
 const ENTRIES_PAGE_LIMIT: u32 = 5_000;
 #[derive(Deserialize)]
 struct EntriesQuery {
 folder: Option<String>,
+name: Option<String>,
 /// URL query key is `type` (maps to DB column `entries.type`).
 #[serde(rename = "type")]
 entry_type: Option<String>,
@@ -183,6 +198,7 @@ pub fn web_router() -> Router<AppState> {
 .route("/robots.txt", get(robots_txt))
 .route("/llms.txt", get(llms_txt))
 .route("/ai.txt", get(ai_txt))
+.route("/static/i18n.js", get(i18n_js))
 .route("/favicon.svg", get(favicon_svg))
 .route(
 "/favicon.ico",
@@ -218,6 +234,8 @@ pub fn web_router() -> Router<AppState> {
 "/api/entries/{entry_id}/secrets/{secret_id}",
 axum::routing::delete(api_entry_secret_unlink),
 )
+.route("/api/secrets/{secret_id}", patch(api_secret_patch))
+.route("/api/secrets/check-name", get(api_secret_check_name))
 }
 fn text_asset_response(content: &'static str, content_type: &'static str) -> Response {
@@ -247,6 +265,13 @@ async fn ai_txt() -> Response {
 llms_txt().await
 }
+async fn i18n_js() -> Response {
+text_asset_response(
+include_str!("../templates/i18n.js"),
+"application/javascript; charset=utf-8",
+)
+}
 async fn favicon_svg() -> Response {
 Response::builder()
 .status(StatusCode::OK)
@@ -565,11 +590,17 @@ async fn entries_page(
 .map(|s| s.trim())
 .filter(|s| !s.is_empty())
 .map(|s| s.to_string());
+let name_filter = q
+.name
+.as_ref()
+.map(|s| s.trim())
+.filter(|s| !s.is_empty())
+.map(|s| s.to_string());
 let params = SearchParams {
 folder: folder_filter.as_deref(),
 entry_type: type_filter.as_deref(),
 name: None,
+name_query: name_filter.as_deref(),
 tags: &[],
 query: None,
 sort: "updated",
@@ -578,16 +609,10 @@ async fn entries_page(
 user_id: Some(user_id),
 };
-let total_count = count_entries(&state.pool, &params).await.map_err(|e| {
-tracing::error!(error = %e, "failed to count entries for web");
-StatusCode::INTERNAL_SERVER_ERROR
-})?;
 let rows = list_entries(&state.pool, params).await.map_err(|e| {
 tracing::error!(error = %e, "failed to load entries list for web");
 StatusCode::INTERNAL_SERVER_ERROR
 })?;
-let shown_count = rows.len();
 let entry_ids: Vec<Uuid> = rows.iter().map(|e| e.id).collect();
 let secret_schemas = fetch_secret_schemas(&state.pool, &entry_ids)
 .await
@@ -596,18 +621,112 @@ async fn entries_page(
 StatusCode::INTERNAL_SERVER_ERROR
 })?;
#[derive(sqlx::FromRow)]
struct FolderCountRow {
folder: String,
count: i64,
}
let mut folder_sql =
"SELECT folder, COUNT(*)::bigint AS count FROM entries WHERE user_id = $1".to_string();
let mut bind_idx = 2;
if type_filter.is_some() {
folder_sql.push_str(&format!(" AND type = ${bind_idx}"));
bind_idx += 1;
}
if name_filter.is_some() {
folder_sql.push_str(&format!(" AND name ILIKE ${bind_idx} ESCAPE '\\'"));
bind_idx += 1;
}
let _ = bind_idx;
folder_sql.push_str(" GROUP BY folder ORDER BY folder");
let mut folder_query = sqlx::query_as::<_, FolderCountRow>(&folder_sql).bind(user_id);
if let Some(t) = type_filter.as_deref() {
folder_query = folder_query.bind(t);
}
if let Some(n) = name_filter.as_deref() {
folder_query = folder_query.bind(ilike_pattern(n));
}
let folder_rows: Vec<FolderCountRow> =
folder_query.fetch_all(&state.pool).await.map_err(|e| {
tracing::error!(error = %e, "failed to load folder tabs for web");
StatusCode::INTERNAL_SERVER_ERROR
})?;
#[derive(sqlx::FromRow)]
struct TypeOptionRow {
#[sqlx(rename = "type")]
entry_type: String,
}
let mut type_options: Vec<String> = sqlx::query_as::<_, TypeOptionRow>(
"SELECT DISTINCT type FROM entries WHERE user_id = $1 ORDER BY type",
)
.bind(user_id)
.fetch_all(&state.pool)
.await
.map_err(|e| {
tracing::error!(error = %e, "failed to load type options for web");
StatusCode::INTERNAL_SERVER_ERROR
})?
.into_iter()
.map(|r| r.entry_type)
.filter(|t| !t.is_empty())
.collect();
if let Some(current) = type_filter.as_ref()
&& !current.is_empty()
&& !type_options.iter().any(|t| t == current)
{
type_options.push(current.clone());
type_options.sort_unstable();
}
fn entries_href(folder: Option<&str>, entry_type: Option<&str>, name: Option<&str>) -> String {
let mut pairs: Vec<String> = Vec::new();
if let Some(f) = folder
&& !f.is_empty()
{
pairs.push(format!("folder={}", urlencoding::encode(f)));
}
if let Some(t) = entry_type
&& !t.is_empty()
{
pairs.push(format!("type={}", urlencoding::encode(t)));
}
if let Some(n) = name
&& !n.is_empty()
{
pairs.push(format!("name={}", urlencoding::encode(n)));
}
if pairs.is_empty() {
"/entries".to_string()
} else {
format!("/entries?{}", pairs.join("&"))
}
}
let all_count: i64 = folder_rows.iter().map(|r| r.count).sum();
let mut folder_tabs: Vec<FolderTabView> = Vec::with_capacity(folder_rows.len() + 1);
folder_tabs.push(FolderTabView {
name: "全部".to_string(),
count: all_count,
href: entries_href(None, type_filter.as_deref(), name_filter.as_deref()),
active: folder_filter.is_none(),
});
for r in folder_rows {
let name = r.folder;
folder_tabs.push(FolderTabView {
href: entries_href(Some(&name), type_filter.as_deref(), name_filter.as_deref()),
active: folder_filter.as_deref() == Some(name.as_str()),
name,
count: r.count,
});
}
 let entries = rows
 .into_iter()
-.map(|e| EntryListItemView {
-id: e.id.to_string(),
-folder: e.folder,
-entry_type: e.entry_type,
-name: e.name,
-notes: e.notes,
-tags: e.tags.join(", "),
-metadata: serde_json::to_string_pretty(&e.metadata)
-.unwrap_or_else(|_| "{}".to_string()),
-secrets: secret_schemas
+.map(|e| {
+let secrets: Vec<SecretSummaryView> = secret_schemas
 .get(&e.id)
 .map(|fields| {
 fields
@@ -619,8 +738,22 @@ async fn entries_page(
 })
 .collect()
 })
-.unwrap_or_default(),
-updated_at_iso: e.updated_at.to_rfc3339_opts(SecondsFormat::Secs, true),
+.unwrap_or_default();
+let secrets_json = serde_json::to_string(&secrets).unwrap_or_else(|_| "[]".to_string());
+let metadata_json =
+serde_json::to_string(&e.metadata).unwrap_or_else(|_| "{}".to_string());
+EntryListItemView {
+id: e.id.to_string(),
+folder: e.folder,
+entry_type: e.entry_type,
+name: e.name,
+notes: e.notes,
+tags: e.tags.join(", "),
+metadata_json,
+secrets,
+secrets_json,
+updated_at_iso: e.updated_at.to_rfc3339_opts(SecondsFormat::Secs, true),
+}
 })
 .collect();
@@ -628,10 +761,17 @@ async fn entries_page(
 user_name: user.name.clone(),
 user_email: user.email.clone().unwrap_or_default(),
 entries,
-total_count,
-shown_count,
-limit: ENTRIES_PAGE_LIMIT,
+folder_tabs,
+type_options,
+secret_type_options_json: serde_json::to_string(
+&secrets_core::taxonomy::SECRET_TYPE_OPTIONS
+.iter()
+.map(|s| s.to_string())
+.collect::<Vec<_>>(),
+)
+.unwrap_or_default(),
 filter_folder: folder_filter.unwrap_or_default(),
+filter_name: name_filter.unwrap_or_default(),
 filter_type: type_filter.unwrap_or_default(),
 version: env!("CARGO_PKG_VERSION"),
 };
@@ -927,24 +1067,53 @@ struct EntryPatchBody {
 type EntryApiError = (StatusCode, Json<serde_json::Value>);
-fn map_entry_mutation_err(e: anyhow::Error) -> EntryApiError {
-let msg = e.to_string();
-if msg.contains("Entry not found") {
-return (
-StatusCode::NOT_FOUND,
-Json(json!({ "error": "条目不存在或无权访问" })),
-);
-}
+#[derive(Clone, Copy)]
+enum UiLang {
+ZhCn,
+ZhTw,
+En,
+}
+fn request_ui_lang(headers: &HeaderMap) -> UiLang {
+let Some(raw) = headers
+.get(header::ACCEPT_LANGUAGE)
+.and_then(|v| v.to_str().ok())
+else {
+return UiLang::ZhCn;
+};
+let lower = raw.to_ascii_lowercase();
+if lower.contains("zh-tw") || lower.contains("zh-hk") || lower.contains("zh-hant") {
+UiLang::ZhTw
+} else if lower.contains("zh") {
+UiLang::ZhCn
+} else if lower.contains("en") {
+UiLang::En
+} else {
+UiLang::ZhCn
+}
+}
+fn tr(lang: UiLang, zh_cn: &'static str, zh_tw: &'static str, en: &'static str) -> &'static str {
+match lang {
+UiLang::ZhCn => zh_cn,
+UiLang::ZhTw => zh_tw,
+UiLang::En => en,
+}
+}
+fn map_entry_mutation_err(e: anyhow::Error, lang: UiLang) -> EntryApiError {
+if let Some(app_err) = e.downcast_ref::<AppError>() {
+return map_app_error(app_err, lang);
+}
+// Fallback for legacy string-based errors and raw sqlx errors
+let msg = e.to_string();
 if msg.contains("already exists") {
 return (
 StatusCode::CONFLICT,
-Json(json!({ "error": "该账号下已存在相同 folder + name 的条目" })),
+Json(
+json!({ "error": tr(lang, "该账号下已存在相同 folder + name 的条目", "此帳號下已存在相同 folder + name 的條目", "An entry with the same folder + name already exists for this account") }),
+),
 );
 }
-if msg.contains("Concurrent modification") {
-return (
-StatusCode::CONFLICT,
-Json(json!({ "error": "条目已被修改,请刷新后重试" })),
-);
-}
 if msg.contains("must be at most") {
@@ -953,19 +1122,57 @@ fn map_entry_mutation_err(e: anyhow::Error) -> EntryApiError {
 tracing::error!(error = %e, "entry mutation failed");
 (
 StatusCode::INTERNAL_SERVER_ERROR,
-Json(json!({ "error": "操作失败,请稍后重试" })),
+Json(
+json!({ "error": tr(lang, "操作失败,请稍后重试", "操作失敗,請稍後重試", "Operation failed, please try again later") }),
+),
 )
 }
+fn map_app_error(err: &AppError, lang: UiLang) -> EntryApiError {
+match err {
+AppError::ConflictEntryName { .. } | AppError::ConflictSecretName { .. } => (
+StatusCode::CONFLICT,
+Json(json!({ "error": err.to_string() })),
+),
+AppError::NotFoundEntry => (
+StatusCode::NOT_FOUND,
+Json(
+json!({ "error": tr(lang, "条目不存在或无权访问", "條目不存在或無權存取", "Entry not found or no access") }),
+),
+),
+AppError::Validation { message } => {
+(StatusCode::BAD_REQUEST, Json(json!({ "error": message })))
+}
+AppError::ConcurrentModification => (
+StatusCode::CONFLICT,
+Json(
+json!({ "error": tr(lang, "条目已被修改,请刷新后重试", "條目已被修改,請重新整理後重試", "Entry was modified, please refresh and try again") }),
+),
+),
+AppError::Internal(_) => {
+tracing::error!(error = %err, "internal error in entry mutation");
+(
+StatusCode::INTERNAL_SERVER_ERROR,
+Json(
+json!({ "error": tr(lang, "操作失败,请稍后重试", "操作失敗,請稍後重試", "Operation failed, please try again later") }),
+),
+)
+}
+}
+}
 async fn api_entry_patch(
 State(state): State<AppState>,
 session: Session,
+headers: HeaderMap,
 Path(entry_id): Path<Uuid>,
 Json(body): Json<EntryPatchBody>,
 ) -> Result<Json<serde_json::Value>, EntryApiError> {
-let user_id = current_user_id(&session)
-.await
-.ok_or((StatusCode::UNAUTHORIZED, Json(json!({ "error": "未登录" }))))?;
+let lang = request_ui_lang(&headers);
+let user_id = current_user_id(&session).await.ok_or((
+StatusCode::UNAUTHORIZED,
+Json(json!({ "error": tr(lang, "未登录", "尚未登入", "Not logged in") })),
+))?;
 let folder = body.folder.trim();
 let entry_type = body.entry_type.trim();
@@ -975,7 +1182,9 @@ async fn api_entry_patch(
 if name.is_empty() {
 return Err((
 StatusCode::BAD_REQUEST,
-Json(json!({ "error": "name 不能为空" })),
+Json(
+json!({ "error": tr(lang, "name 不能为空", "name 不能為空", "name cannot be empty") }),
+),
 ));
 }
@@ -989,7 +1198,9 @@ async fn api_entry_patch(
 if !body.metadata.is_object() {
 return Err((
 StatusCode::BAD_REQUEST,
-Json(json!({ "error": "metadata 必须是 JSON 对象" })),
+Json(
+json!({ "error": tr(lang, "metadata 必须是 JSON 对象", "metadata 必須是 JSON 物件", "metadata must be a JSON object") }),
+),
 ));
 }
@@ -1007,7 +1218,7 @@ async fn api_entry_patch(
 },
 )
 .await
-.map_err(map_entry_mutation_err)?;
+.map_err(|e| map_entry_mutation_err(e, lang))?;
 Ok(Json(json!({ "ok": true })))
 }
@@ -1015,25 +1226,291 @@ async fn api_entry_patch(
 async fn api_entry_delete(
 State(state): State<AppState>,
 session: Session,
+headers: HeaderMap,
 Path(entry_id): Path<Uuid>,
 ) -> Result<Json<serde_json::Value>, EntryApiError> {
-let user_id = current_user_id(&session)
-.await
-.ok_or((StatusCode::UNAUTHORIZED, Json(json!({ "error": "未登录" }))))?;
+let lang = request_ui_lang(&headers);
+let user_id = current_user_id(&session).await.ok_or((
+StatusCode::UNAUTHORIZED,
+Json(json!({ "error": tr(lang, "未登录", "尚未登入", "Not logged in") })),
+))?;
-let result = delete_by_id(&state.pool, entry_id, user_id)
+delete_by_id(&state.pool, entry_id, user_id)
 .await
-.map_err(map_entry_mutation_err)?;
+.map_err(|e| map_entry_mutation_err(e, lang))?;
 Ok(Json(json!({
 "ok": true,
-"migrated": result.migrated,
 })))
 }
#[derive(Deserialize)]
struct SecretCheckNameQuery {
name: String,
exclude_secret_id: Option<Uuid>,
}
#[derive(Serialize)]
struct SecretCheckNameResponse {
ok: bool,
available: bool,
#[serde(skip_serializing_if = "Option::is_none")]
error: Option<String>,
}
async fn api_secret_check_name(
State(state): State<AppState>,
session: Session,
headers: HeaderMap,
Query(params): Query<SecretCheckNameQuery>,
) -> Result<Json<SecretCheckNameResponse>, EntryApiError> {
let lang = request_ui_lang(&headers);
let user_id = current_user_id(&session).await.ok_or((
StatusCode::UNAUTHORIZED,
Json(json!({ "error": tr(lang, "未登录", "尚未登入", "Not logged in") })),
))?;
let name = params.name.trim();
if name.is_empty() {
return Err((
StatusCode::BAD_REQUEST,
Json(
json!({ "error": tr(lang, "secret name 不能为空", "secret name 不能為空", "secret name cannot be empty") }),
),
));
}
if name.chars().count() > 256 {
return Err((
StatusCode::BAD_REQUEST,
Json(
json!({ "error": tr(lang, "secret name 长度不能超过 256 个字符", "secret name 長度不能超過 256 個字元", "secret name must be at most 256 characters") }),
),
));
}
let count: i64 = if let Some(exclude_id) = params.exclude_secret_id {
sqlx::query_scalar::<_, i64>(
"SELECT COUNT(*) FROM secrets WHERE user_id = $1 AND name = $2 AND id != $3",
)
.bind(user_id)
.bind(name)
.bind(exclude_id)
.fetch_one(&state.pool)
.await
} else {
sqlx::query_scalar::<_, i64>(
"SELECT COUNT(*) FROM secrets WHERE user_id = $1 AND name = $2",
)
.bind(user_id)
.bind(name)
.fetch_one(&state.pool)
.await
}.map_err(|e| {
tracing::error!(error = %e, "failed to check secret name availability");
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(
json!({ "error": tr(lang, "操作失败,请稍后重试", "操作失敗,請稍後重試", "Operation failed, please try again later") }),
),
)
})?;
let available = count == 0;
let error = if available {
None
} else {
Some(
tr(
lang,
"该用户下已存在相同 name 的密文",
"該用戶下已存在相同 name 的密文",
"A secret with the same name already exists for this user",
)
.to_string(),
)
};
Ok(Json(SecretCheckNameResponse {
ok: true,
available,
error,
}))
}
#[derive(Deserialize)]
struct SecretPatchBody {
name: Option<String>,
#[serde(rename = "type")]
secret_type: Option<String>,
}
async fn api_secret_patch(
State(state): State<AppState>,
session: Session,
headers: HeaderMap,
Path(secret_id): Path<Uuid>,
Json(body): Json<SecretPatchBody>,
) -> Result<Json<serde_json::Value>, EntryApiError> {
#[derive(Serialize, sqlx::FromRow)]
struct LinkedEntryAuditRow {
folder: String,
#[sqlx(rename = "type")]
entry_type: String,
name: String,
}
let lang = request_ui_lang(&headers);
let user_id = current_user_id(&session).await.ok_or((
StatusCode::UNAUTHORIZED,
Json(json!({ "error": tr(lang, "未登录", "尚未登入", "Not logged in") })),
))?;
let name = body.name.as_ref().map(|s| s.trim());
let secret_type = body.secret_type.as_ref().map(|s| s.trim());
if let Some(n) = name {
if n.is_empty() {
return Err((
StatusCode::BAD_REQUEST,
Json(
json!({ "error": tr(lang, "secret name 不能为空", "secret name 不能為空", "secret name cannot be empty") }),
),
));
}
if n.chars().count() > 256 {
return Err((
StatusCode::BAD_REQUEST,
Json(
json!({ "error": tr(lang, "secret name 长度不能超过 256 个字符", "secret name 長度不能超過 256 個字元", "secret name must be at most 256 characters") }),
),
));
}
}
if let Some(t) = secret_type {
if t.is_empty() {
return Err((
StatusCode::BAD_REQUEST,
Json(
json!({ "error": tr(lang, "secret type 不能为空", "secret type 不能為空", "secret type cannot be empty") }),
),
));
}
if t.chars().count() > 64 {
return Err((
StatusCode::BAD_REQUEST,
Json(
json!({ "error": tr(lang, "secret type 长度不能超过 64 个字符", "secret type 長度不能超過 64 個字元", "secret type must be at most 64 characters") }),
),
));
}
}
if name.is_none() && secret_type.is_none() {
return Err((
StatusCode::BAD_REQUEST,
Json(
json!({ "error": tr(lang, "至少需要提供 name 或 type 之一", "至少需要提供 name 或 type 之一", "At least one of name or type is required") }),
),
));
}
let mut tx = state
.pool
.begin()
.await
.map_err(|e| map_entry_mutation_err(e.into(), lang))?;
let secret_row: Option<(String, String)> =
sqlx::query_as("SELECT name, type FROM secrets WHERE id = $1 AND user_id = $2 FOR UPDATE")
.bind(secret_id)
.bind(user_id)
.fetch_optional(&mut *tx)
.await
.map_err(|e| map_entry_mutation_err(e.into(), lang))?;
let Some((old_name, old_type)) = secret_row else {
let _ = tx.rollback().await;
return Err((
StatusCode::NOT_FOUND,
Json(
json!({ "error": tr(lang, "密文不存在或无权访问", "密文不存在或無權存取", "Secret not found or no access") }),
),
));
};
let linked_entries: Vec<LinkedEntryAuditRow> = sqlx::query_as(
"SELECT e.folder, e.type, e.name \
FROM entry_secrets es \
JOIN entries e ON e.id = es.entry_id \
WHERE es.secret_id = $1 AND e.user_id = $2 \
ORDER BY e.folder, e.type, e.name",
)
.bind(secret_id)
.bind(user_id)
.fetch_all(&mut *tx)
.await
.map_err(|e| map_entry_mutation_err(e.into(), lang))?;
let new_name = name.unwrap_or(&old_name).to_string();
let new_type = secret_type.unwrap_or(&old_type).to_string();
let result = sqlx::query(
"UPDATE secrets SET name = $1, type = $2, version = version + 1, updated_at = NOW() \
WHERE id = $3",
)
.bind(&new_name)
.bind(&new_type)
.bind(secret_id)
.execute(&mut *tx)
.await;
if let Err(e) = result {
if let Some(db_err) = e.as_database_error()
&& db_err.code() == Some("23505".into())
{
let _ = tx.rollback().await;
return Err(map_app_error(
&AppError::ConflictSecretName {
secret_name: new_name.clone(),
},
lang,
));
}
let _ = tx.rollback().await;
return Err(map_entry_mutation_err(e.into(), lang));
}
secrets_core::audit::log_tx(
&mut tx,
Some(user_id),
"rename_secret",
"",
"",
&old_name,
json!({
"source": "web",
"secret_id": secret_id,
"old_name": old_name,
"new_name": new_name,
"old_type": old_type,
"new_type": new_type,
"linked_entries": linked_entries,
}),
)
.await;
tx.commit()
.await
.map_err(|e| map_entry_mutation_err(e.into(), lang))?;
Ok(Json(json!({ "ok": true })))
}
 async fn api_entry_secret_unlink(
 State(state): State<AppState>,
 session: Session,
+headers: HeaderMap,
 Path((entry_id, secret_id)): Path<(Uuid, Uuid)>,
 ) -> Result<Json<serde_json::Value>, EntryApiError> {
 #[derive(sqlx::FromRow)]
@@ -1044,15 +1521,17 @@ async fn api_entry_secret_unlink(
 name: String,
 }
-let user_id = current_user_id(&session)
-.await
-.ok_or((StatusCode::UNAUTHORIZED, Json(json!({ "error": "未登录" }))))?;
+let lang = request_ui_lang(&headers);
+let user_id = current_user_id(&session).await.ok_or((
+StatusCode::UNAUTHORIZED,
+Json(json!({ "error": tr(lang, "未登录", "尚未登入", "Not logged in") })),
+))?;
 let mut tx = state
 .pool
 .begin()
 .await
-.map_err(|e| map_entry_mutation_err(e.into()))?;
+.map_err(|e| map_entry_mutation_err(e.into(), lang))?;
 let entry_row: Option<EntryAuditRow> =
 sqlx::query_as("SELECT folder, type, name FROM entries WHERE id = $1 AND user_id = $2")
@@ -1060,15 +1539,15 @@ async fn api_entry_secret_unlink(
 .bind(user_id)
 .fetch_optional(&mut *tx)
 .await
-.map_err(|e| map_entry_mutation_err(e.into()))?;
+.map_err(|e| map_entry_mutation_err(e.into(), lang))?;
 let Some(entry_row) = entry_row else {
-tx.rollback()
-.await
-.map_err(|e| map_entry_mutation_err(e.into()))?;
+let _ = tx.rollback().await;
 return Err((
 StatusCode::NOT_FOUND,
-Json(json!({ "error": "条目不存在或无权访问" })),
+Json(
+json!({ "error": tr(lang, "条目不存在或无权访问", "條目不存在或無權存取", "Entry not found or no access") }),
+),
 ));
 };
@@ -1077,16 +1556,14 @@ async fn api_entry_secret_unlink(
 .bind(secret_id)
 .execute(&mut *tx)
 .await
-.map_err(|e| map_entry_mutation_err(e.into()))?
+.map_err(|e| map_entry_mutation_err(e.into(), lang))?
 .rows_affected();
 if deleted == 0 {
-tx.rollback()
-.await
-.map_err(|e| map_entry_mutation_err(e.into()))?;
+let _ = tx.rollback().await;
 return Err((
 StatusCode::NOT_FOUND,
-Json(json!({ "error": "关联不存在" })),
+Json(json!({ "error": tr(lang, "关联不存在", "關聯不存在", "Relation not found") })),
 ));
 }
@@ -1098,7 +1575,7 @@ async fn api_entry_secret_unlink(
 .bind(secret_id)
 .execute(&mut *tx)
 .await
-.map_err(|e| map_entry_mutation_err(e.into()))?
+.map_err(|e| map_entry_mutation_err(e.into(), lang))?
 .rows_affected()
 > 0;
@@ -1120,7 +1597,7 @@ async fn api_entry_secret_unlink(
 tx.commit()
 .await
-.map_err(|e| map_entry_mutation_err(e.into()))?;
+.map_err(|e| map_entry_mutation_err(e.into(), lang))?;
 Ok(Json(json!({
 "ok": true,
@@ -1173,3 +1650,27 @@ fn format_audit_target(folder: &str, entry_type: &str, name: &str) -> String {
 name.to_string()
 }
 }
+#[cfg(test)]
+mod tests {
+use super::*;
+#[test]
+fn request_ui_lang_prefers_zh_cn_over_en_fallback() {
+let mut headers = HeaderMap::new();
+headers.insert(header::ACCEPT_LANGUAGE, "zh-CN, en;q=0.5".parse().unwrap());
+assert!(matches!(request_ui_lang(&headers), UiLang::ZhCn));
+}
+#[test]
+fn request_ui_lang_detects_traditional_chinese_variants() {
+let mut headers = HeaderMap::new();
+headers.insert(
+header::ACCEPT_LANGUAGE,
+"zh-Hant, en;q=0.5".parse().unwrap(),
+);
+assert!(matches!(request_ui_lang(&headers), UiLang::ZhTw));
+}
+}


@@ -38,6 +38,10 @@
} }
.topbar-spacer { flex: 1; } .topbar-spacer { flex: 1; }
.nav-user { font-size: 13px; color: var(--text-muted); } .nav-user { font-size: 13px; color: var(--text-muted); }
.lang-bar { display: flex; gap: 2px; background: var(--surface2); border-radius: 6px; padding: 2px; }
.lang-btn { padding: 3px 9px; border: none; background: none; color: var(--text-muted);
font-size: 12px; cursor: pointer; border-radius: 4px; }
.lang-btn.active { background: var(--border); color: var(--text); }
.btn-sign-out { .btn-sign-out {
padding: 5px 12px; border-radius: 6px; border: 1px solid var(--border); padding: 5px 12px; border-radius: 6px; border: 1px solid var(--border);
background: none; color: var(--text); font-size: 12px; text-decoration: none; cursor: pointer; background: none; color: var(--text); font-size: 12px; text-decoration: none; cursor: pointer;
@@ -77,11 +81,8 @@
td::before {
display: block; color: var(--text-muted); font-size: 11px;
margin-bottom: 4px; text-transform: uppercase;
content: attr(data-label);
}
td.col-time::before { content: "Time"; }
td.col-action::before { content: "Action"; }
td.col-target::before { content: "Target"; }
td.col-detail::before { content: "Detail"; }
.detail { max-width: none; }
}
</style>
@@ -91,9 +92,9 @@
<aside class="sidebar">
<a href="/dashboard" class="sidebar-logo"><span>secrets</span></a>
<nav class="sidebar-menu">
<a href="/dashboard" class="sidebar-link" data-i18n="navMcp">MCP</a>
<a href="/entries" class="sidebar-link" data-i18n="navEntries">条目</a>
<a href="/audit" class="sidebar-link active" data-i18n="navAudit">审计</a>
</nav>
</aside>
@@ -101,35 +102,40 @@
<div class="topbar">
<span class="topbar-spacer"></span>
<span class="nav-user">{{ user_name }}{% if !user_email.is_empty() %} · {{ user_email }}{% endif %}</span>
<div class="lang-bar">
<button class="lang-btn" onclick="setLang('zh-CN')"></button>
<button class="lang-btn" onclick="setLang('zh-TW')"></button>
<button class="lang-btn" onclick="setLang('en')">EN</button>
</div>
<form action="/auth/logout" method="post" style="display:inline">
<button type="submit" class="btn-sign-out" data-i18n="signOut">退出</button>
</form>
</div>
<main class="main">
<section class="card">
<div class="card-title" data-i18n="auditTitle">我的审计</div>
<div class="card-subtitle" data-i18n="auditSubtitle">展示最近 100 条与当前用户相关的新审计记录。时间为浏览器本地时区。</div>
{% if entries.is_empty() %}
<div class="empty" data-i18n="emptyAudit">暂无审计记录。</div>
{% else %}
<table>
<thead>
<tr>
<th data-i18n="colTime">时间</th>
<th data-i18n="colAction">动作</th>
<th data-i18n="colTarget">目标</th>
<th data-i18n="colDetail">详情</th>
</tr>
</thead>
<tbody>
{% for entry in entries %}
<tr>
<td class="col-time mono" data-label="时间"><time class="audit-local-time" datetime="{{ entry.created_at_iso }}">{{ entry.created_at_iso }}</time></td>
<td class="col-action mono" data-label="动作">{{ entry.action }}</td>
<td class="col-target mono" data-label="目标">{{ entry.target }}</td>
<td class="col-detail" data-label="详情"><pre class="detail">{{ entry.detail }}</pre></td>
</tr>
{% endfor %}
</tbody>
@@ -139,8 +145,28 @@
</main>
</div>
</div>
<script src="/static/i18n.js"></script>
<script>
(function () {
I18N_PAGE = {
'zh-CN': { pageTitle: 'Secrets — 审计', auditTitle: '我的审计', auditSubtitle: '展示最近 100 条与当前用户相关的新审计记录。时间为浏览器本地时区。', emptyAudit: '暂无审计记录。', colTime: '时间', colAction: '动作', colTarget: '目标', colDetail: '详情' },
'zh-TW': { pageTitle: 'Secrets — 審計', auditTitle: '我的審計', auditSubtitle: '顯示最近 100 筆與目前使用者相關的新審計記錄。時間為瀏覽器本地時區。', emptyAudit: '暫無審計記錄。', colTime: '時間', colAction: '動作', colTarget: '目標', colDetail: '詳情' },
en: { pageTitle: 'Secrets — Audit', auditTitle: 'My audit', auditSubtitle: 'Shows the latest 100 audit records related to the current user. Time is in browser local timezone.', emptyAudit: 'No audit records.', colTime: 'Time', colAction: 'Action', colTarget: 'Target', colDetail: 'Detail' }
};
window.applyPageLang = function () {
document.querySelectorAll('tbody tr').forEach(function (tr) {
var time = tr.querySelector('.col-time');
var action = tr.querySelector('.col-action');
var target = tr.querySelector('.col-target');
var detail = tr.querySelector('.col-detail');
if (time) time.setAttribute('data-label', t('mobileLabelTime'));
if (action) action.setAttribute('data-label', t('mobileLabelAction'));
if (target) target.setAttribute('data-label', t('mobileLabelTarget'));
if (detail) detail.setAttribute('data-label', t('mobileLabelDetail'));
});
};
document.querySelectorAll('time.audit-local-time[datetime]').forEach(function (el) {
var raw = el.getAttribute('datetime');
var d = raw ? new Date(raw) : null;
@@ -149,6 +175,7 @@
el.title = raw + ' (UTC)';
}
});
applyLang();
})();
</script>
</body>

File diff suppressed because it is too large.

View File

@@ -0,0 +1,76 @@
var I18N_SHARED = {
'zh-CN': {
pageTitleBase: 'Secrets',
navMcp: 'MCP',
navEntries: '条目',
navAudit: '审计',
signOut: '退出',
mobileLabelTime: '时间',
mobileLabelAction: '动作',
mobileLabelTarget: '目标',
mobileLabelDetail: '详情'
},
'zh-TW': {
pageTitleBase: 'Secrets',
navMcp: 'MCP',
navEntries: '條目',
navAudit: '審計',
signOut: '登出',
mobileLabelTime: '時間',
mobileLabelAction: '動作',
mobileLabelTarget: '目標',
mobileLabelDetail: '詳情'
},
en: {
pageTitleBase: 'Secrets',
navMcp: 'MCP',
navEntries: 'Entries',
navAudit: 'Audit',
signOut: 'Sign out',
mobileLabelTime: 'Time',
mobileLabelAction: 'Action',
mobileLabelTarget: 'Target',
mobileLabelDetail: 'Detail'
}
};
var currentLang = localStorage.getItem('lang') || 'zh-CN';
var I18N_PAGE = {};
function t(key) {
var dict = I18N_PAGE[currentLang] || I18N_PAGE['en'] || {};
var val = dict[key] || (I18N_SHARED[currentLang] && I18N_SHARED[currentLang][key]) || (I18N_SHARED.en && I18N_SHARED.en[key]) || key;
return val;
}
function tf(key, vars) {
var tpl = t(key);
return Object.keys(vars || {}).reduce(function (acc, k) {
return acc.replace(new RegExp('\\{' + k + '\\}', 'g'), String(vars[k]));
}, tpl);
}
function applyLang() {
document.documentElement.lang = currentLang;
var title = t('pageTitle');
if (title) document.title = title;
document.querySelectorAll('[data-i18n]').forEach(function (el) {
var key = el.getAttribute('data-i18n');
el.textContent = t(key);
});
document.querySelectorAll('[data-i18n-ph]').forEach(function (el) {
var key = el.getAttribute('data-i18n-ph');
el.placeholder = t(key);
});
document.querySelectorAll('.lang-btn').forEach(function (btn) {
var map = { 'zh-CN': '简', 'zh-TW': '繁', en: 'EN' };
btn.classList.toggle('active', btn.textContent === map[currentLang]);
});
if (typeof applyPageLang === 'function') applyPageLang();
}
window.setLang = function (lang) {
currentLang = lang;
localStorage.setItem('lang', lang);
applyLang();
};
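The fallback chain in `t()` (page dictionary for the current language, then the page's English dictionary, then the shared dictionaries, then the key itself) and the `{var}` interpolation in `tf()` can be exercised standalone. This sketch copies the two functions verbatim with reduced dictionaries; the `greeting` key and sample strings are illustrative only:

```javascript
// Minimal standalone copy of the t()/tf() fallback and interpolation logic.
var I18N_SHARED = { en: { signOut: 'Sign out' }, 'zh-CN': { signOut: '退出' } };
var I18N_PAGE = { en: { greeting: 'Hello, {name}!' } };
var currentLang = 'zh-CN';
function t(key) {
  var dict = I18N_PAGE[currentLang] || I18N_PAGE['en'] || {};
  return dict[key]
    || (I18N_SHARED[currentLang] && I18N_SHARED[currentLang][key])
    || (I18N_SHARED.en && I18N_SHARED.en[key])
    || key; // last resort: echo the key so missing strings are visible
}
function tf(key, vars) {
  var tpl = t(key);
  return Object.keys(vars || {}).reduce(function (acc, k) {
    return acc.replace(new RegExp('\\{' + k + '\\}', 'g'), String(vars[k]));
  }, tpl);
}
```

With `currentLang = 'zh-CN'`, `t('signOut')` misses the zh-CN page dictionary and the English page dictionary, then lands on `I18N_SHARED['zh-CN']`; `tf('greeting', { name: 'Ada' })` pulls the English page string and interpolates it.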

View File

@@ -1,126 +0,0 @@
-- Entry-Secret N:N migration (manual SQL)
-- Safe to re-run: uses IF EXISTS/IF NOT EXISTS guards.
BEGIN;
-- 1) secrets: add new columns
ALTER TABLE secrets
ADD COLUMN IF NOT EXISTS user_id UUID REFERENCES users(id) ON DELETE SET NULL;
ALTER TABLE secrets
ADD COLUMN IF NOT EXISTS type VARCHAR(64) NOT NULL DEFAULT 'text';
-- 2) rename field_name -> name (idempotent)
DO $$ BEGIN
IF EXISTS (
SELECT 1
FROM information_schema.columns
WHERE table_name = 'secrets' AND column_name = 'field_name'
) THEN
ALTER TABLE secrets RENAME COLUMN field_name TO name;
END IF;
END $$;
-- 3) create join table
CREATE TABLE IF NOT EXISTS entry_secrets (
entry_id UUID NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
secret_id UUID NOT NULL REFERENCES secrets(id) ON DELETE CASCADE,
sort_order INT NOT NULL DEFAULT 0,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
PRIMARY KEY (entry_id, secret_id)
);
CREATE INDEX IF NOT EXISTS idx_entry_secrets_secret_id ON entry_secrets(secret_id);
-- 4) backfill user_id and relationship from old secrets.entry_id
DO $$ BEGIN
IF EXISTS (
SELECT 1
FROM information_schema.columns
WHERE table_name = 'secrets' AND column_name = 'entry_id'
) THEN
UPDATE secrets s
SET user_id = e.user_id
FROM entries e
WHERE s.entry_id = e.id AND s.user_id IS NULL;
INSERT INTO entry_secrets(entry_id, secret_id, sort_order)
SELECT entry_id, id, 0
FROM secrets
WHERE entry_id IS NOT NULL
ON CONFLICT DO NOTHING;
END IF;
END $$;
-- 5) backfill secret types
UPDATE secrets SET type = 'pem' WHERE name IN ('ssh_key');
UPDATE secrets SET type = 'password' WHERE name IN ('password');
UPDATE secrets SET type = 'phone' WHERE name LIKE 'phone%';
UPDATE secrets SET type = 'url' WHERE name IN ('webhook_url', 'address');
UPDATE secrets
SET type = 'token'
WHERE name IN (
'access_key_id',
'access_key_secret',
'global_api_key',
'api_key',
'secret_key',
'personal_access_token',
'runner_token',
'GOOGLE_CLIENT_ID',
'GOOGLE_CLIENT_SECRET'
);
-- 6) drop old entry_id path
ALTER TABLE secrets DROP CONSTRAINT IF EXISTS secrets_entry_id_fkey;
DROP INDEX IF EXISTS idx_secrets_entry_id;
ALTER TABLE secrets DROP CONSTRAINT IF EXISTS secrets_entry_id_field_name_key;
ALTER TABLE secrets DROP CONSTRAINT IF EXISTS secrets_entry_id_name_key;
ALTER TABLE secrets DROP COLUMN IF EXISTS entry_id;
-- 7) add indexes for new access paths
CREATE INDEX IF NOT EXISTS idx_secrets_user_id
ON secrets(user_id) WHERE user_id IS NOT NULL;
DO $$
DECLARE
duplicate_samples TEXT;
BEGIN
SELECT string_agg(
format('user_id=%s, name=%s, count=%s', t.user_id, t.name, t.cnt),
E'\n'
)
INTO duplicate_samples
FROM (
SELECT user_id::TEXT AS user_id, name, COUNT(*) AS cnt
FROM secrets
WHERE user_id IS NOT NULL
GROUP BY user_id, name
HAVING COUNT(*) > 1
ORDER BY cnt DESC, user_id, name
LIMIT 20
) t;
IF duplicate_samples IS NOT NULL THEN
RAISE EXCEPTION
'Cannot enforce unique constraint on secrets(user_id, name). Duplicates found:%',
E'\n' || duplicate_samples
USING HINT = 'Please deduplicate conflicting rows, then rerun migration.';
END IF;
END $$;
CREATE UNIQUE INDEX IF NOT EXISTS idx_secrets_unique_user_name
ON secrets(user_id, name) WHERE user_id IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_secrets_name ON secrets(name);
CREATE INDEX IF NOT EXISTS idx_secrets_type ON secrets(type);
-- 8) secrets_history: rename and remove entry-scoped columns
DO $$ BEGIN
IF EXISTS (
SELECT 1
FROM information_schema.columns
WHERE table_name = 'secrets_history' AND column_name = 'field_name'
) THEN
ALTER TABLE secrets_history RENAME COLUMN field_name TO name;
END IF;
END $$;
ALTER TABLE secrets_history DROP COLUMN IF EXISTS entry_id;
ALTER TABLE secrets_history DROP COLUMN IF EXISTS entry_version;
COMMIT;
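The duplicate-detection `DO` block above aborts the migration with named samples instead of letting `CREATE UNIQUE INDEX` fail opaquely. The same grouping rule as a JavaScript sketch (hypothetical `findDuplicateUserNames`, operating on rows already fetched from `secrets`; rows with a null `user_id` are exempt, mirroring the `WHERE user_id IS NOT NULL` predicate):

```javascript
// Find (user_id, name) pairs that would violate the partial unique index.
function findDuplicateUserNames(rows) {
  var counts = {};
  rows.forEach(function (r) {
    if (r.user_id === null || r.user_id === undefined) return; // exempt, like the partial index
    var key = r.user_id + '\u0000' + r.name; // NUL-joined composite key
    counts[key] = (counts[key] || 0) + 1;
  });
  return Object.keys(counts)
    .filter(function (k) { return counts[k] > 1; })
    .map(function (k) {
      var parts = k.split('\u0000');
      return { user_id: parts[0], name: parts[1], count: counts[k] };
    });
}
```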

View File

@@ -1,67 +0,0 @@
-- Metadata cleanup migration (manual SQL)
-- Keep tags/type as dedicated columns; remove duplicated metadata keys.
BEGIN;
-- 1) Promote metadata.type -> entries.type when present.
UPDATE entries
SET type = metadata->>'type'
WHERE metadata->>'type' IS NOT NULL
AND metadata->>'type' <> '';
-- 2) Remove metadata.type.
UPDATE entries
SET metadata = metadata - 'type'
WHERE metadata ? 'type';
-- 3) Remove metadata.environment (duplicated by tags prod/dev).
UPDATE entries
SET metadata = metadata - 'environment'
WHERE metadata ? 'environment';
-- 4) Remove metadata.account when equal to folder.
UPDATE entries
SET metadata = metadata - 'account'
WHERE metadata->>'account' = folder;
-- 5) Normalize manufacturer -> provider.
UPDATE entries
SET metadata = (metadata - 'manufacturer')
|| jsonb_build_object('provider', metadata->>'manufacturer')
WHERE metadata ? 'manufacturer'
AND NOT metadata ? 'provider';
UPDATE entries
SET metadata = metadata - 'manufacturer'
WHERE metadata ? 'manufacturer'
AND metadata ? 'provider';
-- 6) Drop ssh_key_format (moved to secrets.type).
UPDATE entries
SET metadata = metadata - 'ssh_key_format'
WHERE metadata ? 'ssh_key_format';
-- 7) Remove display_name when duplicated by name.
UPDATE entries
SET metadata = metadata - 'display_name'
WHERE metadata->>'display_name' = name;
-- 8) Condense server_* metadata into server_ref.
UPDATE entries
SET metadata = metadata
- 'server_account'
- 'server_hostname'
- 'server_location'
- 'server_public_ip'
|| CASE
WHEN metadata ? 'server_entry_name'
THEN jsonb_build_object('server_ref', metadata->>'server_entry_name')
ELSE '{}'::jsonb
END
WHERE metadata ? 'server_entry_name' OR metadata ? 'server_account';
UPDATE entries
SET metadata = metadata - 'server_entry_name'
WHERE metadata ? 'server_entry_name';
COMMIT;
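Step 5's two `UPDATE` statements together implement "rename `manufacturer` to `provider` unless `provider` already exists, then drop `manufacturer`". The same rule as one pure JavaScript function (hypothetical `normalizeManufacturer`, shown only to make the jsonb logic explicit):

```javascript
// Pure-function equivalent of the two manufacturer->provider UPDATEs:
// drop 'manufacturer' in all cases, and promote its value to 'provider'
// only when 'provider' is not already set.
function normalizeManufacturer(metadata) {
  var out = {};
  Object.keys(metadata).forEach(function (k) {
    if (k !== 'manufacturer') out[k] = metadata[k];
  });
  if ('manufacturer' in metadata && !('provider' in metadata)) {
    out.provider = metadata.manufacturer;
  }
  return out;
}
```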

View File

@@ -1,22 +0,0 @@
-- Run against prod BEFORE deploying secrets-mcp with FK migration.
-- Requires: write access to SECRETS_DATABASE_URL.
-- Example: psql "$SECRETS_DATABASE_URL" -v ON_ERROR_STOP=1 -f scripts/cleanup-orphan-user-ids.sql
BEGIN;
UPDATE entries
SET user_id = NULL
WHERE user_id IS NOT NULL
AND NOT EXISTS (SELECT 1 FROM users u WHERE u.id = entries.user_id);
UPDATE entries_history
SET user_id = NULL
WHERE user_id IS NOT NULL
AND NOT EXISTS (SELECT 1 FROM users u WHERE u.id = entries_history.user_id);
UPDATE audit_log
SET user_id = NULL
WHERE user_id IS NOT NULL
AND NOT EXISTS (SELECT 1 FROM users u WHERE u.id = audit_log.user_id);
COMMIT;

View File

@@ -1,81 +0,0 @@
#!/usr/bin/env bash
# Migrate PostgreSQL data from secrets-mcp-prod to secrets-nn-test.
#
# Prereqs: pg_dump and pg_restore (PostgreSQL client tools) on PATH.
# TLS: Use the same connection parameters as your MCP / app (e.g. sslmode=verify-full
# and PGSSLROOTCERT if needed). If local psql fails with "certificate verify failed",
# run this script from a host that trusts the server CA, or set PGSSLROOTCERT.
#
# Usage:
# export SOURCE_DATABASE_URL='postgres://USER:PASS@host:5432/secrets-mcp-prod?sslmode=verify-full'
# export TARGET_DATABASE_URL='postgres://USER:PASS@host:5432/secrets-nn-test?sslmode=verify-full'
# ./scripts/migrate-db-prod-to-nn-test.sh
#
# Options (env):
# BACKUP_TARGET_FIRST=1 # default: dump target to ./backup-secrets-nn-test-<timestamp>.dump before restore
# RUN_NN_SQL=1 # default: run migrations/001_nn_schema.sql then 002_data_cleanup.sql on target after restore
# SKIP_TARGET_BACKUP=1 # skip target backup
#
# WARNINGS:
# - pg_restore with --clean --if-exists drops objects that exist in the dump; target DB is replaced
# to match the logical content of the source dump (same as typical full restore).
# - Optionally keep a manual dump of the target before proceeding.
# - 001_nn_schema.sql will fail if secrets has duplicate (user_id, name) after backfill; fix data first.
set -euo pipefail
ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
cd "$ROOT"
SOURCE_URL="${SOURCE_DATABASE_URL:-}"
TARGET_URL="${TARGET_DATABASE_URL:-}"
if [[ -z "$SOURCE_URL" || -z "$TARGET_URL" ]]; then
echo "Set SOURCE_DATABASE_URL and TARGET_DATABASE_URL (postgres URLs)." >&2
exit 1
fi
if ! command -v pg_dump >/dev/null || ! command -v pg_restore >/dev/null; then
echo "pg_dump and pg_restore are required." >&2
exit 1
fi
TS="$(date +%Y%m%d%H%M%S)"
DUMP_FILE="${DUMP_FILE:-$ROOT/tmp/secrets-mcp-prod-${TS}.dump}"
mkdir -p "$(dirname "$DUMP_FILE")"
if [[ "${EXCLUDE_TOWER_SESSIONS:-}" == "1" ]]; then
echo "==> Excluding schema tower_sessions from dump"
pg_dump "$SOURCE_URL" -Fc --no-owner --no-acl --exclude-schema=tower_sessions -f "$DUMP_FILE"
else
echo "==> Dumping source (custom format) -> $DUMP_FILE"
pg_dump "$SOURCE_URL" -Fc --no-owner --no-acl -f "$DUMP_FILE"
fi
if [[ "${SKIP_TARGET_BACKUP:-}" != "1" && "${BACKUP_TARGET_FIRST:-1}" == "1" ]]; then
BACKUP_FILE="$ROOT/tmp/secrets-nn-test-before-${TS}.dump"
echo "==> Backing up target -> $BACKUP_FILE"
pg_dump "$TARGET_URL" -Fc --no-owner --no-acl -f "$BACKUP_FILE" || {
echo "Target backup failed (empty DB is OK). Continuing." >&2
}
fi
echo "==> Restoring into target (--clean --if-exists)"
pg_restore -d "$TARGET_URL" --no-owner --no-acl --clean --if-exists --verbose "$DUMP_FILE"
if [[ "${RUN_NN_SQL:-1}" == "1" ]]; then
if [[ ! -f "$ROOT/migrations/001_nn_schema.sql" ]]; then
echo "migrations/001_nn_schema.sql not found; skip NN SQL." >&2
else
echo "==> Applying migrations/001_nn_schema.sql on target"
psql "$TARGET_URL" -v ON_ERROR_STOP=1 -f "$ROOT/migrations/001_nn_schema.sql"
fi
if [[ -f "$ROOT/migrations/002_data_cleanup.sql" ]]; then
echo "==> Applying migrations/002_data_cleanup.sql on target"
psql "$TARGET_URL" -v ON_ERROR_STOP=1 -f "$ROOT/migrations/002_data_cleanup.sql"
fi
fi
echo "==> Done. Suggested verification:"
echo " psql \"\$TARGET_DATABASE_URL\" -c \"SELECT COUNT(*) FROM entries; SELECT COUNT(*) FROM secrets; SELECT COUNT(*) FROM entry_secrets;\""
echo " ./scripts/release-check.sh # optional app-side sanity"

View File

@@ -1,194 +0,0 @@
-- ============================================================================
-- migrate-v0.3.0.sql
-- Schema migration from v0.2.x → v0.3.0
--
-- Changes:
-- • entries: namespace → folder, kind → type; add notes column
-- • audit_log: namespace → folder, kind → type
-- • entries_history: namespace → folder, kind → type; add user_id column
-- • Unique index: (user_id, name) → (user_id, folder, name)
-- Same name in different folders is now allowed; no rename needed.
--
-- Safe to run multiple times (fully idempotent).
-- Preserves all data in users, entries, secrets.
-- ============================================================================
BEGIN;
-- ── entries: rename namespace→folder, kind→type ──────────────────────────────
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'entries' AND column_name = 'namespace'
) THEN
ALTER TABLE entries RENAME COLUMN namespace TO folder;
END IF;
END $$;
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'entries' AND column_name = 'kind'
) THEN
ALTER TABLE entries RENAME COLUMN kind TO type;
END IF;
END $$;
-- Set NOT NULL + default for folder/type in entries
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'entries' AND column_name = 'folder'
) THEN
UPDATE entries SET folder = '' WHERE folder IS NULL;
ALTER TABLE entries ALTER COLUMN folder SET NOT NULL;
ALTER TABLE entries ALTER COLUMN folder SET DEFAULT '';
END IF;
END $$;
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'entries' AND column_name = 'type'
) THEN
UPDATE entries SET type = '' WHERE type IS NULL;
ALTER TABLE entries ALTER COLUMN type SET NOT NULL;
ALTER TABLE entries ALTER COLUMN type SET DEFAULT '';
END IF;
END $$;
-- Add notes column to entries if missing
ALTER TABLE entries ADD COLUMN IF NOT EXISTS notes TEXT NOT NULL DEFAULT '';
-- ── audit_log: rename namespace→folder, kind→type ────────────────────────────
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'audit_log' AND column_name = 'namespace'
) THEN
ALTER TABLE audit_log RENAME COLUMN namespace TO folder;
END IF;
END $$;
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'audit_log' AND column_name = 'kind'
) THEN
ALTER TABLE audit_log RENAME COLUMN kind TO type;
END IF;
END $$;
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'audit_log' AND column_name = 'folder'
) THEN
UPDATE audit_log SET folder = '' WHERE folder IS NULL;
ALTER TABLE audit_log ALTER COLUMN folder SET NOT NULL;
ALTER TABLE audit_log ALTER COLUMN folder SET DEFAULT '';
END IF;
END $$;
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'audit_log' AND column_name = 'type'
) THEN
UPDATE audit_log SET type = '' WHERE type IS NULL;
ALTER TABLE audit_log ALTER COLUMN type SET NOT NULL;
ALTER TABLE audit_log ALTER COLUMN type SET DEFAULT '';
END IF;
END $$;
ALTER TABLE audit_log DROP COLUMN IF EXISTS actor;
-- ── entries_history: rename namespace→folder, kind→type; add user_id ─────────
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'entries_history' AND column_name = 'namespace'
) THEN
ALTER TABLE entries_history RENAME COLUMN namespace TO folder;
END IF;
END $$;
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'entries_history' AND column_name = 'kind'
) THEN
ALTER TABLE entries_history RENAME COLUMN kind TO type;
END IF;
END $$;
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'entries_history' AND column_name = 'folder'
) THEN
UPDATE entries_history SET folder = '' WHERE folder IS NULL;
ALTER TABLE entries_history ALTER COLUMN folder SET NOT NULL;
ALTER TABLE entries_history ALTER COLUMN folder SET DEFAULT '';
END IF;
END $$;
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'entries_history' AND column_name = 'type'
) THEN
UPDATE entries_history SET type = '' WHERE type IS NULL;
ALTER TABLE entries_history ALTER COLUMN type SET NOT NULL;
ALTER TABLE entries_history ALTER COLUMN type SET DEFAULT '';
END IF;
END $$;
ALTER TABLE entries_history ADD COLUMN IF NOT EXISTS user_id UUID;
ALTER TABLE entries_history DROP COLUMN IF EXISTS actor;
-- ── secrets_history: drop actor column ───────────────────────────────────────
ALTER TABLE secrets_history DROP COLUMN IF EXISTS actor;
-- ── Rebuild unique indexes: (user_id, folder, name) ──────────────────────────
-- Note: folder is now part of the key, so same name in different folders is
-- naturally distinct — no rename of existing rows needed.
DROP INDEX IF EXISTS idx_entries_unique_legacy;
DROP INDEX IF EXISTS idx_entries_unique_user;
CREATE UNIQUE INDEX IF NOT EXISTS idx_entries_unique_legacy
ON entries(folder, name)
WHERE user_id IS NULL;
CREATE UNIQUE INDEX IF NOT EXISTS idx_entries_unique_user
ON entries(user_id, folder, name)
WHERE user_id IS NOT NULL;
-- ── Replace old namespace/kind indexes with folder/type ──────────────────────
DROP INDEX IF EXISTS idx_entries_namespace;
DROP INDEX IF EXISTS idx_entries_kind;
DROP INDEX IF EXISTS idx_audit_log_ns_kind;
DROP INDEX IF EXISTS idx_entries_history_ns_kind_name;
CREATE INDEX IF NOT EXISTS idx_entries_folder
ON entries(folder) WHERE folder <> '';
CREATE INDEX IF NOT EXISTS idx_entries_type
ON entries(type) WHERE type <> '';
CREATE INDEX IF NOT EXISTS idx_entries_user_id
ON entries(user_id) WHERE user_id IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_audit_log_folder_type
ON audit_log(folder, type);
CREATE INDEX IF NOT EXISTS idx_entries_history_folder_type_name
ON entries_history(folder, type, name, version DESC);
CREATE INDEX IF NOT EXISTS idx_entries_history_user_id
ON entries_history(user_id) WHERE user_id IS NOT NULL;
COMMIT;
-- ── Verification queries (run these manually to confirm) ─────────────────────
-- SELECT column_name, data_type FROM information_schema.columns
-- WHERE table_name = 'entries' ORDER BY ordinal_position;
-- SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 'entries';
-- SELECT COUNT(*) FROM entries;
-- SELECT COUNT(*) FROM users;
-- SELECT COUNT(*) FROM secrets;

scripts/sync-test-to-prod.sh Executable file
View File

@@ -0,0 +1,95 @@
#!/bin/bash
# Sync test-environment data into production.
# Usage: ./scripts/sync-test-to-prod.sh
set -euo pipefail
# PostgreSQL client tools path (Homebrew libpq)
export PATH="/opt/homebrew/opt/libpq/bin:$PATH"
# SSL configuration
export PGSSLMODE=verify-full
export PGSSLROOTCERT=/etc/ssl/cert.pem
# Connection URLs must come from the environment; never hardcode credentials here.
# Test environment
TEST_DB="${TEST_DATABASE_URL:?Set TEST_DATABASE_URL (postgres URL of secrets-nn-test)}"
# Production environment
PROD_DB="${PROD_DATABASE_URL:?Set PROD_DATABASE_URL (postgres URL of secrets-nn-prod)}"
echo "========================================="
echo " Data sync: test environment -> production"
echo "========================================="
echo ""
# Confirm before proceeding
read -p "⚠️  This will overwrite production data. Continue? (yes/no): " confirm
if [ "$confirm" != "yes" ]; then
echo "Aborted"
exit 0
fi
echo ""
echo "Step 1/4: exporting test-environment data..."
TEMP_DIR=$(mktemp -d)
trap 'rm -rf "$TEMP_DIR"' EXIT
# Export test data (excludes audit logs and history tables)
pg_dump "$TEST_DB" \
--table=entries \
--table=secrets \
--table=entry_secrets \
--table=users \
--table=oauth_accounts \
--data-only \
--column-inserts \
--no-owner \
--no-privileges \
> "$TEMP_DIR/test_data.sql"
echo "✓ Test data exported to a temporary file"
echo "  File size: $(du -h "$TEMP_DIR/test_data.sql" | cut -f1)"
echo ""
echo "Step 2/4: backing up current production data..."
# Keep the backup outside TEMP_DIR so the EXIT trap does not delete it.
BACKUP_FILE="tmp/prod_backup_$(date +%Y%m%d_%H%M%S).sql"
mkdir -p tmp
pg_dump "$PROD_DB" \
--table=entries \
--table=secrets \
--table=entry_secrets \
--table=users \
--table=oauth_accounts \
--data-only \
--column-inserts \
--no-owner \
--no-privileges \
> "$BACKUP_FILE"
echo "✓ Production data backed up to $BACKUP_FILE"
echo ""
echo "Step 3/4: truncating production target tables..."
psql "$PROD_DB" <<'SQL'
TRUNCATE TABLE entry_secrets CASCADE;
TRUNCATE TABLE secrets CASCADE;
TRUNCATE TABLE entries CASCADE;
SQL
echo "✓ Production target tables truncated"
echo ""
echo "Step 4/4: importing test data into production..."
psql "$PROD_DB" -f "$TEMP_DIR/test_data.sql" 2>&1 | tail -20
echo ""
echo "Verifying data..."
echo "Production row counts:"
psql "$PROD_DB" -c "SELECT 'users' as table_name, count(*) FROM users UNION ALL SELECT 'entries', count(*) FROM entries UNION ALL SELECT 'secrets', count(*) FROM secrets UNION ALL SELECT 'entry_secrets', count(*) FROM entry_secrets UNION ALL SELECT 'oauth_accounts', count(*) FROM oauth_accounts ORDER BY table_name;"
echo ""
echo "========================================="
echo " ✓ Data sync complete!"
echo "========================================="
echo ""
echo "Notes:"
echo " - Production backup kept at: $BACKUP_FILE"
echo " - Other temporary files are removed automatically on exit"