Compare commits

14 Commits

Author SHA1 Message Date
voson
0ffb81e57f feat: entry update links existing secrets (link_secret_names)
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 4m19s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 6s
- secrets-core: update flow validates and applies secret links
- secrets-mcp: MCP tool params and UI for managing links on edit
- Align errors and templates; minor crypto/.gitignore tweaks

Made-with: Cursor
2026-04-04 20:30:32 +08:00
voson
4a1654c820 docs: update MCP tools list, env vars, taxonomy and deploy structure
2026-04-04 18:04:35 +08:00
voson
a15e2eaf4a docs: align README with removed SQL migration scripts
Made-with: Cursor
2026-04-04 17:58:26 +08:00
voson
1518388374 chore(release): secrets-mcp 0.4.0
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 4m19s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 6s
Bump version for the N:N entry_secrets data model and related MCP/Web
changes. Remove superseded SQL migration artifacts; rely on auto-migrate.
Add structured errors, taxonomy normalization, and web i18n helpers.

Made-with: Cursor
2026-04-04 17:58:12 +08:00
b99d821644 Merge pull request 'refactor/entry-secret-nn' (#1) from refactor/entry-secret-nn into main
Some checks failed
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 2m42s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Failing after 6s
Reviewed-on: #1
2026-04-03 19:44:47 +08:00
voson
32f275f88a feat(secrets-mcp): bump 0.3.9 and normalize listen address log
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 4m7s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Has been skipped
Prepare a new release version and improve startup log readability by showing localhost for loopback bind addresses without changing runtime binding behavior.

Made-with: Cursor
2026-04-03 19:36:12 +08:00
王松
c6fb457734 feat(nn): entry–secret N:N, unique secret names, web unlink
Some checks failed
Secrets MCP — Build & Release / Check / Build / Release (push) Failing after 2m37s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Has been skipped
Bump secrets-mcp to 0.3.8 (tag 0.3.7 already used).

- Junction table entry_secrets; secrets user-scoped with type
- Per-user unique secrets.name; link_secret_names on add
- Manual migrations + migrate script; MCP/tool and Web updates

Made-with: Cursor
2026-04-03 17:37:04 +08:00
df701f21b9 feat(secrets-mcp): auto-migrate and redirect shared keys on delete (v0.3.7)
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 4m4s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 6s
When deleting a key entry still referenced via metadata.key_ref, copy the ciphertext
to the first referrer within the same transaction and redirect the remaining referrers'
key_ref to the new owner; env_map no longer restricts key_ref resolution to type=key.
The Web delete API returns `migrated`; the Dashboard reports the migration after a successful delete.

Bump secrets-mcp to 0.3.7; add unit tests covering delete migration (requires SECRETS_DATABASE_URL).

Made-with: Cursor
2026-04-03 09:27:20 +08:00
c3c536200e feat(secrets-mcp): Web entry edit API and nicer Notes display in lists (0.3.6)
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 3m55s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 6s
- secrets-core: EntryWriteRow; update/delete by id (handles concurrency conflicts and unique keys)
- Web: PATCH/DELETE /api/entries/{id}; list edit/delete with error mapping
- entries template: Notes capped with scroll; empty Notes no longer shows a placeholder box
- Version 0.3.5 → 0.3.6, Cargo.lock synced

Made-with: Cursor
2026-04-02 14:58:10 +08:00
7909f7102d feat(secrets-mcp): filter entries page by folder/type; release 0.3.5
- entries route accepts ?folder=&type= queries, aligned with SearchParams in the search layer
- entries list page gains a filter form and explanatory copy
- Version 0.3.4 → 0.3.5, Cargo.lock synced

Made-with: Cursor
2026-04-02 14:37:36 +08:00
87a29af82d feat(web): entries list page /entries with total count
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 3m56s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 6s
- Add session-protected GET /entries, exposing only the non-sensitive entries columns
- search: list_entries and count_entries share filter conditions; pagination and counting never read secrets
- Sidebar gains an "Entries" link on dashboard/audit
- secrets-mcp 0.3.4 (tag does not exist yet)

Made-with: Cursor
2026-04-02 11:26:51 +08:00
1b11f7e976 release(secrets-mcp): v0.3.3 — enforce PostgreSQL TLS verification
Some checks failed
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 3m54s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Failing after 7s
Introduce explicit database TLS configuration and reject weak sslmode values in production so connections cannot silently downgrade. Update deploy/README and the ops runbook accordingly, landing the certificate and server configuration procedure for db.refining.ltd.

Made-with: Cursor
2026-04-01 15:18:14 +08:00
08e81363c9 release(secrets-mcp): v0.3.2 — fix key_ref multi-tenancy and ambiguity
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 3m41s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 6s
- env_map: key_ref resolution now receives user_id; supports folder/name; errors on multiple matches
- Docs updated with the key_ref notes
- bump secrets-mcp 0.3.1 → 0.3.2, Cargo.lock updated

Made-with: Cursor
2026-03-27 10:45:12 +08:00
voson
beade4503d release(secrets-mcp): v0.3.1
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 3m45s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 5s
- MCP: secrets_find, secrets_overview; secrets_get id-only; id on update/delete/history/rollback
- Add meta_obj/secrets_obj; delete guard; env_map/instructions updates
- Core: resolve_entry_by_id; get_*_by_id validates entry + tenant before decrypt

Made-with: Cursor
2026-03-26 17:35:56 +08:00
36 changed files with 4813 additions and 609 deletions

.gitignore vendored
View File

@@ -2,6 +2,7 @@
 .env
 .DS_Store
 .cursor/
+# JSON credential files downloaded for Google OAuth
+client_secret_*.apps.googleusercontent.com.json
 *.pem
 tmp/
-client_secret_*.apps.googleusercontent.com.json
 node_modules/

View File

@@ -55,13 +55,24 @@ entries (
 ```sql
 secrets (
   id UUID PRIMARY KEY DEFAULT uuidv7(),
-  entry_id UUID NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
-  field_name VARCHAR(256) NOT NULL,
+  user_id UUID,
+  name VARCHAR(256) NOT NULL,
+  type VARCHAR(64) NOT NULL DEFAULT 'text',
   encrypted BYTEA NOT NULL DEFAULT '\x',
   version BIGINT NOT NULL DEFAULT 1,
   created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
-  updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
-  UNIQUE(entry_id, field_name)
+  updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
 )
+-- unique: UNIQUE(user_id, name) WHERE user_id IS NOT NULL
+```
+
+```sql
+entry_secrets (
+  entry_id UUID NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
+  secret_id UUID NOT NULL REFERENCES secrets(id) ON DELETE CASCADE,
+  sort_order INT NOT NULL DEFAULT 0,
+  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
+  PRIMARY KEY(entry_id, secret_id)
+)
 ```
@@ -108,17 +119,20 @@ oauth_accounts (
 | Field | Meaning | Example |
 |------|------|------|
 | `folder` | isolation space (part of the unique key) | `refining` |
-| `type` | soft classification (not part of the unique key) | `server`, `service`, `key`, `person` |
+| `type` | soft classification (not part of the unique key) | `server`, `service`, `person`, `document` |
 | `name` | identifier | `gitea`, `aliyun` |
 | `notes` | non-sensitive description | free text |
 | `tags` | tags | `["aliyun","prod"]` |
-| `metadata` | plaintext description | `ip`, `url`, `key_ref` |
-| `secrets.field_name` | encrypted field name (plaintext) | `token`, `ssh_key` |
+| `metadata` | plaintext description | `ip`, `url`, `subtype` |
+| `secrets.name` | secret name (caller-provided) | `token`, `ssh_key`, `password` |
+| `secrets.type` | secret type (caller-provided, default `text`) | `text`, `password`, `key` |
 | `secrets.encrypted` | ciphertext | AES-GCM |
 
-### Shared PEM (`key_ref`)
+### Shared secrets (N:N links)
 
-Store a shared PEM as an entry with **`type=key`**; other records point to that key's `name` via `metadata.key_ref`. After updating the key entry, referrers pick up the new key through the service-layer resolution logic (see `secrets_core::service`).
+Multiple entries can share the same secret field via the `entry_secrets` junction table.
+When adding an entry, pass `link_secret_names` to link existing secrets (matched exactly by `(user_id, name)`).
+Deleting an entry only unlinks; a secret still referenced elsewhere is kept, and one no longer referenced by any entry is cleaned up automatically.
 
 ## Code conventions
@@ -166,6 +180,9 @@ git tag -l 'secrets-mcp-*'
 | Variable | Description |
 |------|------|
 | `SECRETS_DATABASE_URL` | **Required**. PostgreSQL URL. |
+| `SECRETS_DATABASE_SSL_MODE` | Optional, but strongly recommended (treat as required) in production. Prefer `verify-full` (at least `verify-ca`). |
+| `SECRETS_DATABASE_SSL_ROOT_CERT` | Optional. CA root certificate path for private-CA or self-signed chains. |
+| `SECRETS_ENV` | Optional. Setting `prod` / `production` rejects weak PostgreSQL TLS modes. |
 | `BASE_URL` | External base URL; OAuth callback is `${BASE_URL}/auth/google/callback`. |
 | `SECRETS_MCP_BIND` | Listen address, default `127.0.0.1:9315` (change to `0.0.0.0:9315` when a container/remote host exposes the port directly). |
 | `GOOGLE_CLIENT_ID` / `GOOGLE_CLIENT_SECRET` | Optional; runtime-only configuration. |

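The partial unique index in the schema above (`UNIQUE(user_id, name) WHERE user_id IS NOT NULL`) is easy to misread: it deduplicates secret names per user but leaves legacy `NULL`-user rows unconstrained. A minimal sqlx sketch of that behavior, using only the table and column names from the diff (the pool setup is assumed):

```rust
use sqlx::PgPool;

/// Sketch: the partial unique index on secrets(user_id, name)
/// only rejects duplicates when user_id IS NOT NULL.
async fn demo_partial_unique(pool: &PgPool, user_id: uuid::Uuid) -> anyhow::Result<()> {
    let insert =
        "INSERT INTO secrets (user_id, name, type, encrypted) VALUES ($1, $2, 'text', '\\x')";

    // First insert for this user succeeds.
    sqlx::query(insert).bind(user_id).bind("token").execute(pool).await?;

    // A second insert with the same (user_id, name) violates
    // idx_secrets_unique_user_name (PostgreSQL error code 23505).
    let dup = sqlx::query(insert).bind(user_id).bind("token").execute(pool).await;
    assert!(dup.is_err());

    // Legacy rows with user_id = NULL sit outside the partial index,
    // so duplicate names are still allowed there.
    let legacy =
        "INSERT INTO secrets (user_id, name, type, encrypted) VALUES (NULL, $1, 'text', '\\x')";
    sqlx::query(legacy).bind("token").execute(pool).await?;
    sqlx::query(legacy).bind("token").execute(pool).await?;
    Ok(())
}
```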
Cargo.lock generated
View File

@@ -1960,6 +1960,7 @@ dependencies = [
  "sha2",
  "sqlx",
  "tempfile",
+ "thiserror",
  "tokio",
  "toml",
  "tracing",
@@ -1968,7 +1969,7 @@ dependencies = [
 [[package]]
 name = "secrets-mcp"
-version = "0.3.0"
+version = "0.5.0"
 dependencies = [
  "anyhow",
  "askama",

View File

@@ -28,6 +28,7 @@ rand = "^0.10.0"
 # Utils
 anyhow = "^1.0.102"
+thiserror = "^2"
 chrono = { version = "^0.4.44", features = ["serde"] }
 uuid = { version = "^1.22.0", features = ["serde"] }
 tracing = "^0.1"

View File

@@ -17,7 +17,10 @@ cargo build --release -p secrets-mcp
 | Variable | Description |
 |------|------|
-| `SECRETS_DATABASE_URL` | **Required**. PostgreSQL connection string (a dedicated database such as `secrets-mcp` is recommended). |
+| `SECRETS_DATABASE_URL` | **Required**. PostgreSQL connection string (prefer a hostname such as `db.refining.ltd` over a raw IP). |
+| `SECRETS_DATABASE_SSL_MODE` | Optional, but strongly recommended (treat as required) in production. Prefer `verify-full` (at least `verify-ca`) to avoid falling back to weak TLS modes. |
+| `SECRETS_DATABASE_SSL_ROOT_CERT` | Optional. CA root certificate path for private-CA or self-signed chains (e.g. `/etc/secrets/pg-ca.crt`). |
+| `SECRETS_ENV` | Optional. Setting `prod` / `production` rejects weak PostgreSQL TLS modes (`prefer`, `disable`, `allow`, `require`). |
 | `BASE_URL` | External base URL; the OAuth callback is `{BASE_URL}/auth/google/callback`. Default `http://localhost:9315`. |
 | `SECRETS_MCP_BIND` | Listen address, default `127.0.0.1:9315`. Change to `0.0.0.0:9315` inside containers or when exposing the port directly; behind a reverse proxy it is usually `127.0.0.1:9315`. |
 | `GOOGLE_CLIENT_ID` / `GOOGLE_CLIENT_SECRET` | Optional; without them there is no Google login entry. Read from the environment at runtime; never commit to CI or bake into the binary. |
@@ -27,16 +30,55 @@ cargo build --release -p secrets-mcp
 cargo run -p secrets-mcp
 ```
 
+Recommended production setup (PostgreSQL TLS):
+
+```bash
+SECRETS_DATABASE_URL=postgres://postgres:***@db.refining.ltd:5432/secrets-mcp
+SECRETS_DATABASE_SSL_MODE=verify-full
+SECRETS_DATABASE_SSL_ROOT_CERT=/etc/secrets/pg-ca.crt
+SECRETS_ENV=production
+```
+
 - **Web** (`BASE_URL`): login, Dashboard, set the passphrase, create API Keys
 - **MCP**: Streamable HTTP base `{BASE_URL}/mcp`; requires `Authorization: Bearer <api_key>` + `X-Encryption-Key: <hex>` headers (tools that read ciphertext must send the key)
 
+## PostgreSQL TLS hardening
+
+- Give the database its own domain, e.g. `db.refining.ltd`, keeping the service domain at `secrets.refining.app`.
+- Use a verifiable certificate chain for the database (e.g. Let's Encrypt or a private CA) and make sure the certificate `SAN` covers `db.refining.ltd`.
+- On the PostgreSQL side, restrict application sources with `hostssl` rules (e.g. `47.238.146.244/32`) and phase out public plaintext `host` access.
+- On the application side, prefer `SECRETS_DATABASE_SSL_MODE=verify-full`; fall back to `verify-ca` only as a transitional step.
+- Actionable ops steps: [`deploy/postgres-tls-hardening.md`](deploy/postgres-tls-hardening.md).
+
 ## MCP and AI workflow (v0.3+)
 
 Entries are logically unique per user by **`(folder, name)`** (database unique index: `user_id + folder + name`). The same name can coexist under different folders (e.g. `refining/aliyun` and `ricnsmart/aliyun`).
-- **`secrets_search`**: discover entries (filter by query / folder / type / name); no encryption header required.
-- **`secrets_get` / `secrets_update` / `secrets_delete` (by name) / `secrets_history` / `secrets_rollback`**: with `name` alone, a globally unique match is hit directly; if several entries share the name, a disambiguation error asks for **`folder`**.
-- **`secrets_delete`**: `dry_run=true` follows the same disambiguation rules as a real delete — a unique match previews one entry; multiple matches error and require `folder`.
+### Tool list
+
+| Tool | Needs encryption key | Description |
+|------|----------------------|-------------|
+| `secrets_find` | No | Discover entries (returns a schema including secret_fields); supports fuzzy `name_query` matching |
+| `secrets_search` | No | Search entries; `query`/`folder`/`type`/`name` filters, `sort`/`offset` pagination, `summary` mode |
+| `secrets_get` | Yes | Fetch a single entry by UUID `id`, with decrypted secrets |
+| `secrets_add` | Yes | Add an entry; supports `meta_obj`/`secrets_obj` JSON object params, `secret_types` for secret types, `link_secret_names` to link existing secrets |
+| `secrets_update` | Yes | Update an entry; locate by `id` or `name`+`folder` |
+| `secrets_delete` | No | Delete an entry; locate by `id` or `name`+`folder`; `dry_run=true` previews the deletion |
+| `secrets_history` | No | View entry history; locate by `id` or `name`+`folder` |
+| `secrets_rollback` | Yes | Roll an entry back to a given history version; locate by `id` or `name`+`folder` |
+| `secrets_export` | Yes | Export entries (including decrypted plaintext) as JSON/TOML/YAML |
+| `secrets_env_map` | Yes | Map secrets to environment variables (`UPPER(entry)_UPPER(field)` format); supports `prefix` |
+| `secrets_overview` | No | Entry counts per folder and type |
+
+### Disambiguation rules
+
+- **Tools that locate by `name`** (`secrets_update` / `secrets_delete` / `secrets_history` / `secrets_rollback`): a single match for the user executes directly; multiple matches (same `name`, different `folder`) return an error asking for `folder`. Passing an `id` (UUID) skips disambiguation.
+- **`secrets_get`** only supports lookup by `id` (UUID).
+- **`secrets_delete`** with `dry_run=true` follows the same disambiguation rules as a real delete — a unique match previews one entry; multiple matches error and require `folder`.
+
+### Shared secrets
+
+Under the N:N model, deleting an entry only unlinks it; a shared secret still referenced by other entries is kept, and one with no references is cleaned up automatically.
 ## Encryption architecture (hybrid E2EE)
 
@@ -130,24 +172,35 @@ flowchart LR
 ## Data model
 
-Main table **`entries`** (`folder`, `type`, `name`, `notes`, `tags`, `metadata`, plus `user_id` for multi-tenancy) + child table **`secrets`** (one encrypted field per row: `field_name`, `encrypted`). **Uniqueness**: `UNIQUE(user_id, folder, name)` (rows with empty `user_id` are legacy, unique by `(folder, name)`). There are also `entries_history`, `secrets_history`, `audit_log`, plus **`users`** (with `key_salt`, `key_check`, `key_params`, `api_key`) and **`oauth_accounts`**. Tables are auto-migrated on first connection (`migrate` in `secrets-core`); an existing database can follow [`scripts/migrate-v0.3.0.sql`](scripts/migrate-v0.3.0.sql) for the column renames and index rebuilds. **Web login sessions** (tower-sessions) use the same `SECRETS_DATABASE_URL`; session storage is migrated at process startup (see `PostgresStore::migrate` in `secrets-mcp`), with no extra environment variables.
+Main table **`entries`** (`folder`, `type`, `name`, `notes`, `tags`, `metadata`, plus `user_id` for multi-tenancy) + child table **`secrets`** (one encrypted field per row: `name`, `type`, `encrypted`, linked N:N to entries via the `entry_secrets` junction table). **Uniqueness**: `UNIQUE(user_id, folder, name)` (rows with empty `user_id` are legacy, unique by `(folder, name)`). There are also `entries_history`, `secrets_history`, `audit_log`, plus **`users`** (with `key_salt`, `key_check`, `key_params`, `api_key`) and **`oauth_accounts`**. Tables are auto-migrated on first connection (`migrate` in `secrets-core`); existing databases are likewise brought up to date incrementally (tables, indexes, and the N:N structure) by the same `migrate()` at process startup. For a one-shot SQL reference from older versions, look up the removed `scripts/migrate-v0.3.0.sql` in git history. **Web login sessions** (tower-sessions) use the same `SECRETS_DATABASE_URL`; session storage is migrated at process startup (see `PostgresStore::migrate` in `secrets-mcp`), with no extra environment variables.
 
 | Location | Field | Description |
 |------|------|------|
 | entries | folder | organization/isolation space, e.g. `refining`, `ricnsmart`; part of the unique key |
-| entries | type | soft classification, e.g. `server`, `service`, `key`, `person` (extensible; not part of the unique key) |
+| entries | type | soft classification, e.g. `server`, `service`, `person`, `document` (extensible; not part of the unique key) |
 | entries | name | human-readable identifier; unique per user together with `folder` |
 | entries | notes | non-sensitive description text |
-| entries | metadata | plaintext JSON (ip, url, `key_ref`, …) |
-| secrets | field_name | plaintext field name, for schema display |
+| entries | metadata | plaintext JSON (ip, url, subtype, …) |
+| secrets | name | secret name (caller-provided) |
+| secrets | type | secret type (caller-provided, default `text`) |
 | secrets | encrypted | AES-GCM ciphertext (nonce included) |
 | users | key_salt | PBKDF2 salt (32B), written when the passphrase is first set |
 | users | key_check | a known constant encrypted with the derived key, used to verify the passphrase |
 | users | key_params | derivation parameters, e.g. `{"alg":"pbkdf2-sha256","iterations":600000}` |
 
-### Shared PEM (`key_ref`)
-
-The same PEM can be referenced by multiple records (e.g. several `server` entries): store the PEM as an entry with **`type=key`** and write that key entry's `name` into the other entries' `metadata.key_ref`; on rotation, only the key entry needs updating.
+### Shared secrets (N:N links)
+
+Multiple entries can share the same encrypted field, linked N:N via the `entry_secrets` junction table:
+
+- When adding an entry, `link_secret_names` links existing secrets (looked up by exact `(user_id, name)` match)
+- The same secret can be referenced by several entries; deleting one entry does not cascade-delete a shared secret
+- A secret no longer referenced by any entry is cleaned up automatically (`NOT EXISTS` subquery)
+
+### Type normalization (taxonomy)
+
+The `type` field is a soft classification; legacy types are automatically mapped onto the standard set:
+
+- `git-server`, `database`, `cache`, `queue`, `storage`, etc. → `service` (the original value is stored in `metadata.subtype`)
+- Prefer the standard types for new entries: `server`, `service`, `person`, `document`
+- The mapping is defined in `crates/secrets-core/src/taxonomy.rs`
 ## Audit log
 
@@ -166,10 +219,18 @@ LIMIT 20;
 ```
 Cargo.toml
 crates/secrets-core/   # db / crypto / models / audit / service
+  src/
+    taxonomy.rs        # type normalization (legacy type → standard type + subtype)
+    service/           # business logic (add, search, update, delete, export, env_map, …)
 crates/secrets-mcp/    # MCP HTTP, Web, OAuth, API Key
 scripts/
-  migrate-v0.3.0.sql   # optional manual SQL migration (namespace/kind → folder/type; unique key gains folder)
-deploy/                # systemd, .env examples
+  release-check.sh     # pre-release fmt / clippy / test
+  setup-gitea-actions.sh
+  sync-test-to-prod.sh # sync the test DB to production (on demand)
+deploy/
+  .env.example         # environment variable template
+  secrets-mcp.service  # systemd unit (for production deployment)
+  postgres-tls-hardening.md # PostgreSQL TLS hardening runbook
 ```
 
 ## CI/CD (Gitea Actions)

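The README above points at `crates/secrets-core/src/taxonomy.rs`, but that file is not part of this compare. Based solely on the README's description, a hedged sketch of what `normalize_entry_type_and_metadata` plausibly does — the exact signature and legacy list are assumptions:

```rust
use serde_json::{Map, Value};

/// Hypothetical sketch: map legacy soft types onto the standard set,
/// preserving the original value in metadata.subtype (per the README).
pub fn normalize_entry_type_and_metadata(
    entry_type: &str,
    metadata: &mut Map<String, Value>,
) -> String {
    // Assumed legacy list, taken from the README bullet.
    const LEGACY_SERVICE_TYPES: &[&str] =
        &["git-server", "database", "cache", "queue", "storage"];
    if LEGACY_SERVICE_TYPES.contains(&entry_type) {
        // Keep the more specific legacy value as plaintext metadata.
        metadata
            .entry("subtype".to_string())
            .or_insert_with(|| Value::String(entry_type.to_string()));
        return "service".to_string();
    }
    entry_type.to_string()
}
```

This matches how the add flow below calls it: the normalized string is bound into the `entries.type` column while the caller's original value survives in `metadata`.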
View File

@@ -10,6 +10,7 @@ path = "src/lib.rs"
 [dependencies]
 aes-gcm.workspace = true
 anyhow.workspace = true
+thiserror.workspace = true
 chrono.workspace = true
 rand.workspace = true
 serde.workspace = true

View File

@@ -1,4 +1,15 @@
-use anyhow::Result;
+use std::path::PathBuf;
+
+use anyhow::{Context, Result};
+use sqlx::postgres::PgSslMode;
+
+#[derive(Debug, Clone)]
+pub struct DatabaseConfig {
+    pub url: String,
+    pub ssl_mode: Option<PgSslMode>,
+    pub ssl_root_cert: Option<PathBuf>,
+    pub enforce_strict_tls: bool,
+}
 
 /// Resolve database URL from environment.
 /// Priority: `SECRETS_DATABASE_URL` env var → error.
@@ -18,3 +29,54 @@ pub fn resolve_db_url(override_url: &str) -> Result<String> {
         Example: SECRETS_DATABASE_URL=postgres://user:pass@host:port/dbname"
     )
 }
+
+fn env_var_non_empty(name: &str) -> Option<String> {
+    std::env::var(name)
+        .ok()
+        .filter(|value| !value.trim().is_empty())
+}
+
+fn parse_ssl_mode_from_env() -> Result<Option<PgSslMode>> {
+    let Some(mode) = env_var_non_empty("SECRETS_DATABASE_SSL_MODE") else {
+        return Ok(None);
+    };
+    let parsed = mode.parse::<PgSslMode>().with_context(|| {
+        format!(
+            "Invalid SECRETS_DATABASE_SSL_MODE='{mode}'. Use one of: disable, allow, prefer, require, verify-ca, verify-full."
+        )
+    })?;
+    Ok(Some(parsed))
+}
+
+fn resolve_ssl_root_cert_from_env() -> Result<Option<PathBuf>> {
+    let Some(path) = env_var_non_empty("SECRETS_DATABASE_SSL_ROOT_CERT") else {
+        return Ok(None);
+    };
+    let path = PathBuf::from(path);
+    if !path.exists() {
+        anyhow::bail!(
+            "SECRETS_DATABASE_SSL_ROOT_CERT points to a missing file: {}",
+            path.display()
+        );
+    }
+    Ok(Some(path))
+}
+
+fn is_production_env() -> bool {
+    matches!(
+        env_var_non_empty("SECRETS_ENV")
+            .as_deref()
+            .map(|value| value.to_ascii_lowercase()),
+        Some(value) if value == "prod" || value == "production"
+    )
+}
+
+pub fn resolve_db_config(override_url: &str) -> Result<DatabaseConfig> {
+    Ok(DatabaseConfig {
+        url: resolve_db_url(override_url)?,
+        ssl_mode: parse_ssl_mode_from_env()?,
+        ssl_root_cert: resolve_ssl_root_cert_from_env()?,
+        enforce_strict_tls: is_production_env(),
+    })
+}

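Taken together with `create_pool` in the next file, the startup wiring would look roughly like this — a minimal sketch, assuming the `resolve_db_config`/`create_pool`/`migrate` signatures shown in this compare and that an empty `override_url` falls through to the environment variable:

```rust
use secrets_core::{config, db};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Reads SECRETS_DATABASE_URL plus the optional TLS variables;
    // enforce_strict_tls is derived from SECRETS_ENV=prod/production.
    // Passing "" is assumed to defer to the env var (per resolve_db_url's docs).
    let db_config = config::resolve_db_config("")?;

    // Fails fast in production if the effective ssl_mode is weaker
    // than verify-ca / verify-full (see build_connect_options below).
    let pool = db::create_pool(&db_config).await?;

    // First connection auto-migrates tables, indexes and the N:N structure.
    db::migrate(&pool).await?;
    Ok(())
}
```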
View File

@@ -5,6 +5,8 @@ use aes_gcm::{
 use anyhow::{Context, Result, bail};
 use serde_json::Value;
 
+use crate::error::AppError;
+
 const NONCE_LEN: usize = 12;
 
 // ─── AES-256-GCM encrypt / decrypt ───────────────────────────────────────────
@@ -38,7 +40,7 @@ pub fn decrypt(master_key: &[u8; 32], data: &[u8]) -> Result<Vec<u8>> {
     let nonce = Nonce::from_slice(nonce_bytes);
     cipher
         .decrypt(nonce, ciphertext)
-        .map_err(|_| anyhow::anyhow!("decryption failed — wrong master key or corrupted data"))
+        .map_err(|_| AppError::DecryptionFailed.into())
 }
 
 // ─── JSON helpers ─────────────────────────────────────────────────────────────

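With `decrypt` now surfacing `AppError::DecryptionFailed` through `anyhow`, callers can branch on the structured variant instead of matching error strings. A sketch, assuming the `AppError` defined in `error.rs` later in this compare:

```rust
use secrets_core::{crypto, error::AppError};

/// Sketch: distinguish a wrong X-Encryption-Key from other failures.
fn try_decrypt(master_key: &[u8; 32], data: &[u8]) -> Result<Vec<u8>, String> {
    match crypto::decrypt(master_key, data) {
        Ok(plaintext) => Ok(plaintext),
        // anyhow preserves the concrete error type, so downcast_ref works here.
        Err(err) => match err.downcast_ref::<AppError>() {
            // Bad key (or corrupted ciphertext): report as an auth-style
            // failure rather than a generic internal error.
            Some(AppError::DecryptionFailed) => Err("invalid encryption key".into()),
            _ => Err(format!("internal error: {err}")),
        },
    }
}
```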
View File

@@ -1,14 +1,45 @@
-use anyhow::Result;
+use std::str::FromStr;
+
+use anyhow::{Context, Result};
 use serde_json::Value;
 use sqlx::PgPool;
-use sqlx::postgres::PgPoolOptions;
+use sqlx::postgres::{PgConnectOptions, PgPoolOptions, PgSslMode};
+
+use crate::config::DatabaseConfig;
+
+fn build_connect_options(config: &DatabaseConfig) -> Result<PgConnectOptions> {
+    let mut options = PgConnectOptions::from_str(&config.url)
+        .with_context(|| "failed to parse SECRETS_DATABASE_URL".to_string())?;
+    if let Some(mode) = config.ssl_mode {
+        options = options.ssl_mode(mode);
+    }
+    if let Some(path) = &config.ssl_root_cert {
+        options = options.ssl_root_cert(path);
+    }
+    if config.enforce_strict_tls
+        && !matches!(
+            options.get_ssl_mode(),
+            PgSslMode::VerifyCa | PgSslMode::VerifyFull
+        )
+    {
+        anyhow::bail!(
+            "Refusing to start in production with weak PostgreSQL TLS mode. \
+             Set SECRETS_DATABASE_SSL_MODE=verify-ca or verify-full."
+        );
+    }
+    Ok(options)
+}
 
-pub async fn create_pool(database_url: &str) -> Result<PgPool> {
+pub async fn create_pool(config: &DatabaseConfig) -> Result<PgPool> {
     tracing::debug!("connecting to database");
+    let connect_options = build_connect_options(config)?;
     let pool = PgPoolOptions::new()
         .max_connections(10)
         .acquire_timeout(std::time::Duration::from_secs(5))
-        .connect(database_url)
+        .connect_with(connect_options)
         .await?;
     tracing::debug!("database connection established");
     Ok(pool)
@@ -52,16 +83,30 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
         -- ── secrets: one row per encrypted field ─────────────────────────────────
         CREATE TABLE IF NOT EXISTS secrets (
             id UUID PRIMARY KEY DEFAULT uuidv7(),
-            entry_id UUID NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
-            field_name VARCHAR(256) NOT NULL,
+            user_id UUID,
+            name VARCHAR(256) NOT NULL,
+            type VARCHAR(64) NOT NULL DEFAULT 'text',
             encrypted BYTEA NOT NULL DEFAULT '\x',
             version BIGINT NOT NULL DEFAULT 1,
             created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
-            updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
-            UNIQUE(entry_id, field_name)
+            updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
         );
-        CREATE INDEX IF NOT EXISTS idx_secrets_entry_id ON secrets(entry_id);
+        CREATE INDEX IF NOT EXISTS idx_secrets_user_id ON secrets(user_id) WHERE user_id IS NOT NULL;
+        CREATE UNIQUE INDEX IF NOT EXISTS idx_secrets_unique_user_name
+            ON secrets(user_id, name) WHERE user_id IS NOT NULL;
+        CREATE INDEX IF NOT EXISTS idx_secrets_name ON secrets(name);
+        CREATE INDEX IF NOT EXISTS idx_secrets_type ON secrets(type);
+
+        -- ── entry_secrets: N:N relation ────────────────────────────────────────────
+        CREATE TABLE IF NOT EXISTS entry_secrets (
+            entry_id UUID NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
+            secret_id UUID NOT NULL REFERENCES secrets(id) ON DELETE CASCADE,
+            sort_order INT NOT NULL DEFAULT 0,
+            created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
+            PRIMARY KEY(entry_id, secret_id)
+        );
+        CREATE INDEX IF NOT EXISTS idx_entry_secrets_secret_id ON entry_secrets(secret_id);
 
         -- ── audit_log: append-only operation log ─────────────────────────────────
         CREATE TABLE IF NOT EXISTS audit_log (
@@ -110,17 +155,13 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
         -- ── secrets_history: field-level snapshot ────────────────────────────────
         CREATE TABLE IF NOT EXISTS secrets_history (
             id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
-            entry_id UUID NOT NULL,
             secret_id UUID NOT NULL,
-            entry_version BIGINT NOT NULL,
-            field_name VARCHAR(256) NOT NULL,
+            name VARCHAR(256) NOT NULL,
             encrypted BYTEA NOT NULL DEFAULT '\x',
             action VARCHAR(16) NOT NULL,
             created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
         );
-        CREATE INDEX IF NOT EXISTS idx_secrets_history_entry_id
-            ON secrets_history(entry_id, entry_version DESC);
         CREATE INDEX IF NOT EXISTS idx_secrets_history_secret_id
             ON secrets_history(secret_id);
@@ -179,6 +220,16 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
         END IF;
     END $$;
 
+    DO $$ BEGIN
+        IF NOT EXISTS (
+            SELECT 1 FROM pg_constraint WHERE conname = 'fk_secrets_user_id'
+        ) THEN
+            ALTER TABLE secrets
+                ADD CONSTRAINT fk_secrets_user_id
+                FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE SET NULL;
+        END IF;
+    END $$;
+
     DO $$ BEGIN
         IF NOT EXISTS (
             SELECT 1 FROM pg_constraint WHERE conname = 'fk_audit_log_user_id'
@@ -468,10 +519,8 @@ pub async fn snapshot_entry_history(
 // ── Secret field-level history snapshot ──────────────────────────────────────
 
 pub struct SecretSnapshotParams<'a> {
-    pub entry_id: uuid::Uuid,
     pub secret_id: uuid::Uuid,
-    pub entry_version: i64,
-    pub field_name: &'a str,
+    pub name: &'a str,
     pub encrypted: &'a [u8],
     pub action: &'a str,
 }
@@ -482,13 +531,11 @@ pub async fn snapshot_secret_history(
 ) -> Result<()> {
     sqlx::query(
         "INSERT INTO secrets_history \
-         (entry_id, secret_id, entry_version, field_name, encrypted, action) \
-         VALUES ($1, $2, $3, $4, $5, $6)",
+         (secret_id, name, encrypted, action) \
+         VALUES ($1, $2, $3, $4)",
     )
-    .bind(p.entry_id)
     .bind(p.secret_id)
-    .bind(p.entry_version)
-    .bind(p.field_name)
+    .bind(p.name)
     .bind(p.encrypted)
     .bind(p.action)
     .execute(&mut **tx)

View File

@@ -0,0 +1,142 @@
+use sqlx::error::DatabaseError;
+
+/// Structured business errors for the secrets service.
+///
+/// These replace ad-hoc `anyhow` strings for expected failure modes,
+/// allowing MCP and Web layers to map to appropriate protocol-level errors.
+#[derive(Debug, thiserror::Error)]
+pub enum AppError {
+    #[error("A secret with the name '{secret_name}' already exists for this user")]
+    ConflictSecretName { secret_name: String },
+
+    #[error("An entry with folder='{folder}' and name='{name}' already exists")]
+    ConflictEntryName { folder: String, name: String },
+
+    #[error("Entry not found")]
+    NotFoundEntry,
+
+    #[error("Validation failed: {message}")]
+    Validation { message: String },
+
+    #[error("Concurrent modification detected")]
+    ConcurrentModification,
+
+    #[error("Decryption failed — the encryption key may be incorrect")]
+    DecryptionFailed,
+
+    #[error(transparent)]
+    Internal(#[from] anyhow::Error),
+}
+
+impl AppError {
+    /// Try to convert a sqlx database error into a structured `AppError`.
+    ///
+    /// The caller should provide the context (which table was being written,
+    /// what values were being inserted) so we can produce a meaningful error.
+    pub fn from_db_error(err: sqlx::Error, ctx: DbErrorContext<'_>) -> Self {
+        if let sqlx::Error::Database(ref db_err) = err
+            && db_err.code().as_deref() == Some("23505")
+        {
+            return Self::from_unique_violation(db_err.as_ref(), ctx);
+        }
+        AppError::Internal(err.into())
+    }
+
+    fn from_unique_violation(db_err: &dyn DatabaseError, ctx: DbErrorContext<'_>) -> Self {
+        let constraint = db_err.constraint();
+        match constraint {
+            Some("idx_secrets_unique_user_name") => AppError::ConflictSecretName {
+                secret_name: ctx.secret_name.unwrap_or("unknown").to_string(),
+            },
+            Some("idx_entries_unique_user") | Some("idx_entries_unique_legacy") => {
+                AppError::ConflictEntryName {
+                    folder: ctx.folder.unwrap_or("").to_string(),
+                    name: ctx.name.unwrap_or("unknown").to_string(),
+                }
+            }
+            _ => {
+                // Fall back to message-based detection for unnamed constraints
+                let msg = db_err.message();
+                if msg.contains("secrets") {
+                    AppError::ConflictSecretName {
+                        secret_name: ctx.secret_name.unwrap_or("unknown").to_string(),
+                    }
+                } else {
+                    AppError::ConflictEntryName {
+                        folder: ctx.folder.unwrap_or("").to_string(),
+                        name: ctx.name.unwrap_or("unknown").to_string(),
+                    }
+                }
+            }
+        }
+    }
+}
+
+/// Context hints used when converting a database error to `AppError`.
+#[derive(Debug, Default, Clone, Copy)]
+pub struct DbErrorContext<'a> {
+    pub secret_name: Option<&'a str>,
+    pub folder: Option<&'a str>,
+    pub name: Option<&'a str>,
+}
+
+impl<'a> DbErrorContext<'a> {
+    pub fn secret_name(name: &'a str) -> Self {
+        Self {
+            secret_name: Some(name),
+            ..Default::default()
+        }
+    }
+
+    pub fn entry(folder: &'a str, name: &'a str) -> Self {
+        Self {
+            folder: Some(folder),
+            name: Some(name),
+            ..Default::default()
+        }
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn app_error_display_messages() {
+        let err = AppError::ConflictSecretName {
+            secret_name: "token".to_string(),
+        };
+        assert!(err.to_string().contains("token"));
+
+        let err = AppError::ConflictEntryName {
+            folder: "refining".to_string(),
+            name: "gitea".to_string(),
+        };
+        assert!(err.to_string().contains("refining"));
+        assert!(err.to_string().contains("gitea"));
+
+        let err = AppError::NotFoundEntry;
+        assert_eq!(err.to_string(), "Entry not found");
+
+        let err = AppError::Validation {
+            message: "too long".to_string(),
+        };
+        assert!(err.to_string().contains("too long"));
+
+        let err = AppError::ConcurrentModification;
+        assert!(err.to_string().contains("Concurrent modification"));
+    }
+
+    #[test]
+    fn db_error_context_helpers() {
+        let ctx = DbErrorContext::secret_name("my_key");
+        assert_eq!(ctx.secret_name, Some("my_key"));
+        assert!(ctx.folder.is_none());
+
+        let ctx = DbErrorContext::entry("prod", "db-creds");
+        assert_eq!(ctx.folder, Some("prod"));
+        assert_eq!(ctx.name, Some("db-creds"));
+        assert!(ctx.secret_name.is_none());
+    }
+}

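The commit messages mention the Web layer mapping these errors to responses, but the handlers themselves are not in this compare. A hedged sketch of what such a mapping could look like — the status-code choices are assumptions, not taken from the repo:

```rust
use secrets_core::error::AppError;

/// Hypothetical web-layer mapping from structured errors to HTTP status codes.
fn status_for(err: &AppError) -> u16 {
    match err {
        // Unique-key violations surface as conflicts.
        AppError::ConflictSecretName { .. } | AppError::ConflictEntryName { .. } => 409,
        AppError::NotFoundEntry => 404,
        AppError::Validation { .. } => 422,
        // Optimistic-locking failure is also a conflict.
        AppError::ConcurrentModification => 409,
        // Wrong X-Encryption-Key reads as an authorization problem.
        AppError::DecryptionFailed => 401,
        AppError::Internal(_) => 500,
    }
}
```

Because every variant carries a stable `Display` message, the same mapping can serve MCP tool errors with the message passed through verbatim.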
View File

@@ -2,5 +2,7 @@ pub mod audit;
 pub mod config;
 pub mod crypto;
 pub mod db;
+pub mod error;
 pub mod models;
 pub mod service;
+pub mod taxonomy;

View File

@@ -27,8 +27,11 @@ pub struct Entry {
 #[derive(Debug, Serialize, Deserialize, sqlx::FromRow)]
 pub struct SecretField {
     pub id: Uuid,
-    pub entry_id: Uuid,
-    pub field_name: String,
+    pub user_id: Option<Uuid>,
+    pub name: String,
+    #[serde(rename = "type")]
+    #[sqlx(rename = "type")]
+    pub secret_type: String,
     /// AES-256-GCM ciphertext: nonce(12B) || ciphertext+tag
     pub encrypted: Vec<u8>,
     pub version: i64,
@@ -51,11 +54,39 @@ pub struct EntryRow {
     pub notes: String,
 }
 
+/// Entry row including `name` (used for id-scoped web / service updates).
+#[derive(Debug, sqlx::FromRow)]
+pub struct EntryWriteRow {
+    pub id: Uuid,
+    pub version: i64,
+    pub folder: String,
+    #[sqlx(rename = "type")]
+    pub entry_type: String,
+    pub name: String,
+    pub tags: Vec<String>,
+    pub metadata: Value,
+    pub notes: String,
+}
+
+impl From<&EntryWriteRow> for EntryRow {
+    fn from(r: &EntryWriteRow) -> Self {
+        EntryRow {
+            id: r.id,
+            version: r.version,
+            folder: r.folder.clone(),
+            entry_type: r.entry_type.clone(),
+            tags: r.tags.clone(),
+            metadata: r.metadata.clone(),
+            notes: r.notes.clone(),
+        }
+    }
+}
+
 /// Minimal secret field row fetched before snapshots or cascade deletes.
 #[derive(Debug, sqlx::FromRow)]
 pub struct SecretFieldRow {
     pub id: Uuid,
-    pub field_name: String,
+    pub name: String,
     pub encrypted: Vec<u8>,
 }

View File

@@ -1,12 +1,15 @@
 use anyhow::Result;
 use serde_json::{Map, Value};
 use sqlx::PgPool;
+use std::collections::{BTreeSet, HashSet};
 use std::fs;
 use uuid::Uuid;
 
 use crate::crypto;
 use crate::db;
+use crate::error::{AppError, DbErrorContext};
 use crate::models::EntryRow;
+use crate::taxonomy;
 
 // ── Key/value parsing helpers ─────────────────────────────────────────────────
@@ -176,15 +179,27 @@ pub struct AddParams<'a> {
     pub tags: &'a [String],
     pub meta_entries: &'a [String],
     pub secret_entries: &'a [String],
+    pub secret_types: &'a std::collections::HashMap<String, String>,
+    pub link_secret_names: &'a [String],
     /// Optional user_id for multi-user isolation (None = single-user CLI mode)
     pub user_id: Option<Uuid>,
 }
 
 pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) -> Result<AddResult> {
-    let metadata = build_json(params.meta_entries)?;
+    let Value::Object(mut metadata_map) = build_json(params.meta_entries)? else {
+        unreachable!("build_json always returns a JSON object");
+    };
+    let normalized_entry_type =
+        taxonomy::normalize_entry_type_and_metadata(params.entry_type, &mut metadata_map);
+    let metadata = Value::Object(metadata_map);
     let secret_json = build_json(params.secret_entries)?;
     let meta_keys = collect_key_paths(params.meta_entries)?;
     let secret_keys = collect_key_paths(params.secret_entries)?;
+    let flat_fields = flatten_json_fields("", &secret_json);
+    let new_secret_names: BTreeSet<String> =
+        flat_fields.iter().map(|(name, _)| name.clone()).collect();
+    let link_secret_names =
+        validate_link_secret_names(params.link_secret_names, &new_secret_names)?;
 
     let mut tx = pool.begin().await?;
@@ -217,7 +232,7 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
             entry_id: ex.id,
             user_id: params.user_id,
             folder: params.folder,
-            entry_type: params.entry_type,
+            entry_type: &normalized_entry_type,
             name: params.name,
             version: ex.version,
             action: "add",
@@ -247,7 +262,7 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
         )
         .bind(uid)
         .bind(params.folder)
-        .bind(params.entry_type)
+        .bind(&normalized_entry_type)
         .bind(params.name)
         .bind(params.notes)
         .bind(params.tags)
@@ -270,7 +285,7 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
             RETURNING id"#,
         )
         .bind(params.folder)
-        .bind(params.entry_type)
+        .bind(&normalized_entry_type)
         .bind(params.name)
         .bind(params.notes)
         .bind(params.tags)
@@ -279,7 +294,8 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
         .await?
     };
 
-    let new_entry_version: i64 = sqlx::query_scalar("SELECT version FROM entries WHERE id = $1")
+    let current_entry_version: i64 =
+        sqlx::query_scalar("SELECT version FROM entries WHERE id = $1")
             .bind(entry_id)
             .fetch_one(&mut *tx)
             .await?;
@@ -291,9 +307,9 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
             entry_id,
             user_id: params.user_id,
             folder: params.folder,
-            entry_type: params.entry_type,
+            entry_type: &normalized_entry_type,
             name: params.name,
-            version: new_entry_version,
+            version: current_entry_version,
             action: "create",
             tags: params.tags,
             metadata: &metadata,
@@ -308,11 +324,15 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
         #[derive(sqlx::FromRow)]
         struct ExistingField {
             id: Uuid,
-            field_name: String,
+            name: String,
             encrypted: Vec<u8>,
         }
-        let existing_fields: Vec<ExistingField> =
-            sqlx::query_as("SELECT id, field_name, encrypted FROM secrets WHERE entry_id = $1")
+        let existing_fields: Vec<ExistingField> = sqlx::query_as(
+            "SELECT s.id, s.name, s.encrypted \
+             FROM entry_secrets es \
+             JOIN secrets s ON s.id = es.secret_id \
+             WHERE es.entry_id = $1",
+        )
         .bind(entry_id)
         .fetch_all(&mut *tx)
         .await?;
@@ -321,10 +341,8 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
             if let Err(e) = db::snapshot_secret_history(
                 &mut tx,
                 db::SecretSnapshotParams {
-                    entry_id,
                     secret_id: f.id,
-                    entry_version: new_entry_version - 1,
-                    field_name: &f.field_name,
+                    name: &f.name,
                     encrypted: &f.encrypted,
                     action: "add",
                 },
@@ -335,29 +353,88 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
             }
         }
 
-        sqlx::query("DELETE FROM secrets WHERE entry_id = $1")
+        let orphan_candidates: Vec<Uuid> = existing_fields.iter().map(|f| f.id).collect();
+        sqlx::query("DELETE FROM entry_secrets WHERE entry_id = $1")
             .bind(entry_id)
             .execute(&mut *tx)
             .await?;
+        if !orphan_candidates.is_empty() {
+            sqlx::query(
+                "DELETE FROM secrets s \
+                 WHERE s.id = ANY($1) \
+                 AND NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = s.id)",
+            )
+            .bind(&orphan_candidates)
+            .execute(&mut *tx)
+            .await?;
+        }
     }
 
-    let flat_fields = flatten_json_fields("", &secret_json);
     for (field_name, field_value) in &flat_fields {
         let encrypted = crypto::encrypt_json(master_key, field_value)?;
-        sqlx::query("INSERT INTO secrets (entry_id, field_name, encrypted) VALUES ($1, $2, $3)")
+        let secret_type = params
+            .secret_types
+            .get(field_name)
+            .map(|s| s.as_str())
+            .unwrap_or("text");
+        let secret_id: Uuid = sqlx::query_scalar(
+            "INSERT INTO secrets (user_id, name, type, encrypted) VALUES ($1, $2, $3, $4) RETURNING id",
+        )
+        .bind(params.user_id)
+        .bind(field_name)
+        .bind(secret_type)
+        .bind(&encrypted)
+        .fetch_one(&mut *tx)
+        .await
+        .map_err(|e| AppError::from_db_error(e, DbErrorContext::secret_name(field_name)))?;
+        sqlx::query("INSERT INTO entry_secrets (entry_id, secret_id) VALUES ($1, $2)")
             .bind(entry_id)
-            .bind(field_name)
-            .bind(&encrypted)
+            .bind(secret_id)
             .execute(&mut *tx)
             .await?;
     }
 
+    for link_name in &link_secret_names {
+        let secret_ids: Vec<Uuid> = if let Some(uid) = params.user_id {
+            sqlx::query_scalar("SELECT id FROM secrets WHERE user_id = $1 AND name = $2")
+                .bind(uid)
+                .bind(link_name)
+                .fetch_all(&mut *tx)
+                .await?
+        } else {
+            sqlx::query_scalar("SELECT id FROM secrets WHERE user_id IS NULL AND name = $1")
+                .bind(link_name)
+                .fetch_all(&mut *tx)
+                .await?
+        };
+        match secret_ids.len() {
+            0 => anyhow::bail!("Not found: secret named '{}'", link_name),
+            1 => {
+                sqlx::query(
+                    "INSERT INTO entry_secrets (entry_id, secret_id) VALUES ($1, $2) ON CONFLICT DO NOTHING",
+                )
+                .bind(entry_id)
+                .bind(secret_ids[0])
+                .execute(&mut *tx)
+                .await?;
+            }
+            n => anyhow::bail!(
+                "Ambiguous: {} secrets named '{}' found. Please deduplicate names first.",
+                n,
+                link_name
+            ),
+        }
+    }
+
     crate::audit::log_tx(
         &mut tx,
         params.user_id,
         "add",
         params.folder,
-        params.entry_type,
+        &normalized_entry_type,
         params.name,
         serde_json::json!({
             "tags": params.tags,
@@ -372,16 +449,44 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
     Ok(AddResult {
         name: params.name.to_string(),
         folder: params.folder.to_string(),
-        entry_type: params.entry_type.to_string(),
+        entry_type: normalized_entry_type,
         tags: params.tags.to_vec(),
         meta_keys,
         secret_keys,
     })
 }
 
+fn validate_link_secret_names(
+    link_secret_names: &[String],
+    new_secret_names: &BTreeSet<String>,
+) -> Result<Vec<String>> {
+    let mut deduped = Vec::new();
+    let mut seen = HashSet::new();
+    for raw in link_secret_names {
+        let trimmed = raw.trim();
+        if trimmed.is_empty() {
+            anyhow::bail!("link_secret_names contains an empty name");
+        }
+        if new_secret_names.contains(trimmed) {
+            anyhow::bail!(
+                "Conflict: secret '{}' is provided both in secrets/secrets_obj and link_secret_names",
+                trimmed
+            );
+        }
+        if seen.insert(trimmed.to_string()) {
+            deduped.push(trimmed.to_string());
+        }
+    }
+    Ok(deduped)
+}
 #[cfg(test)]
 mod tests {
     use super::*;
+    use sqlx::PgPool;
+    use std::collections::BTreeSet;
 
     #[test]
     fn parse_nested_file_shorthand() {
@@ -410,4 +515,267 @@ mod tests {
         assert_eq!(fields[1].0, "credentials.type");
         assert_eq!(fields[2].0, "username");
     }
+
+    #[test]
+    fn validate_link_secret_names_conflict_with_new_secret() {
+        let mut new_names = BTreeSet::new();
+        new_names.insert("password".to_string());
+        let err = validate_link_secret_names(&[String::from("password")], &new_names)
+            .expect_err("must fail on overlap");
+        assert!(
+            err.to_string()
+                .contains("provided both in secrets/secrets_obj and link_secret_names")
+        );
+    }
+
+    #[test]
+    fn validate_link_secret_names_dedup_and_trim() {
+        let names = vec![
+            " shared_key ".to_string(),
+            "shared_key".to_string(),
+            "runner_token".to_string(),
+        ];
+        let deduped = validate_link_secret_names(&names, &BTreeSet::new()).unwrap();
+        assert_eq!(deduped, vec!["shared_key", "runner_token"]);
+    }
+
+    async fn maybe_test_pool() -> Option<PgPool> {
+        let Ok(url) = std::env::var("SECRETS_DATABASE_URL") else {
+            eprintln!("skip add linkage tests: SECRETS_DATABASE_URL is not set");
+            return None;
+        };
+        let Ok(pool) = PgPool::connect(&url).await else {
+            eprintln!("skip add linkage tests: cannot connect to database");
+            return None;
+        };
+        if let Err(e) = crate::db::migrate(&pool).await {
+            eprintln!("skip add linkage tests: migrate failed: {e}");
+            return None;
+        }
+        Some(pool)
+    }
+
+    async fn cleanup_test_rows(pool: &PgPool, marker: &str) -> Result<()> {
+        sqlx::query(
+            "DELETE FROM entries WHERE user_id IS NULL AND (name LIKE $1 OR folder LIKE $1)",
+        )
+        .bind(format!("%{marker}%"))
+        .execute(pool)
+        .await?;
+        sqlx::query(
+            "DELETE FROM secrets WHERE user_id IS NULL AND name LIKE $1 \
+             AND NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = secrets.id)",
+        )
+        .bind(format!("%{marker}%"))
+        .execute(pool)
+        .await?;
+        Ok(())
+    }
+
+    #[tokio::test]
+    async fn add_links_existing_secret_by_unique_name() -> Result<()> {
+        let Some(pool) = maybe_test_pool().await else {
+            return Ok(());
+        };
+        let suffix = Uuid::from_u128(rand::random()).to_string();
+        let marker = format!("link_unique_{}", &suffix[..8]);
+        let secret_name = format!("{}_secret", marker);
+        let entry_name = format!("{}_entry", marker);
+        cleanup_test_rows(&pool, &marker).await?;
+
+        let secret_id: Uuid = sqlx::query_scalar(
+            "INSERT INTO secrets (user_id, name, type, encrypted) VALUES (NULL, $1, 'text', $2) RETURNING id",
+        )
+        .bind(&secret_name)
+        .bind(vec![1_u8, 2, 3])
+        .fetch_one(&pool)
+        .await?;
+
+        run(
+            &pool,
+            AddParams {
+                name: &entry_name,
+                folder: &marker,
+                entry_type: "service",
+                notes: "",
+                tags: &[],
+                meta_entries: &[],
+                secret_entries: &[],
+                secret_types: &Default::default(),
+                link_secret_names: std::slice::from_ref(&secret_name),
+                user_id: None,
+            },
+            &[0_u8; 32],
+        )
+        .await?;
+
+        let linked: bool = sqlx::query_scalar(
+            "SELECT EXISTS( \
+                SELECT 1 FROM entry_secrets es \
+                JOIN entries e ON e.id = es.entry_id \
+                WHERE e.user_id IS NULL AND e.name = $1 AND es.secret_id = $2 \
+            )",
+        )
+        .bind(&entry_name)
+        .bind(secret_id)
+        .fetch_one(&pool)
+        .await?;
+        assert!(linked);
+
+        cleanup_test_rows(&pool, &marker).await?;
+        Ok(())
+    }
+
+    #[tokio::test]
+    async fn add_link_secret_name_not_found_fails() -> Result<()> {
+        let Some(pool) = maybe_test_pool().await else {
+            return Ok(());
+        };
+        let suffix = Uuid::from_u128(rand::random()).to_string();
+        let marker = format!("link_missing_{}", &suffix[..8]);
+        let secret_name = format!("{}_secret", marker);
+        let entry_name = format!("{}_entry", marker);
+        cleanup_test_rows(&pool, &marker).await?;
+
+        let err = run(
+            &pool,
+            AddParams {
+                name: &entry_name,
+                folder: &marker,
+                entry_type: "service",
+                notes: "",
+                tags: &[],
+                meta_entries: &[],
+                secret_entries: &[],
+                secret_types: &Default::default(),
+                link_secret_names: std::slice::from_ref(&secret_name),
+                user_id: None,
+            },
+            &[0_u8; 32],
+        )
+        .await
+        .expect_err("must fail when linked secret is not found");
+        assert!(err.to_string().contains("Not found: secret named"));
+
+        cleanup_test_rows(&pool, &marker).await?;
+        Ok(())
+    }
+
+    #[tokio::test]
+    async fn add_link_secret_name_ambiguous_fails() -> Result<()> {
+        let Some(pool) = maybe_test_pool().await else {
+            return Ok(());
+        };
+        let suffix = Uuid::from_u128(rand::random()).to_string();
+        let marker = format!("link_amb_{}", &suffix[..8]);
+        let secret_name = format!("{}_dup_secret", marker);
+        let entry_name = format!("{}_entry", marker);
+        cleanup_test_rows(&pool, &marker).await?;
+
+        sqlx::query(
+            "INSERT INTO secrets (user_id, name, type, encrypted) VALUES (NULL, $1, 'text', $2)",
+        )
+        .bind(&secret_name)
+        .bind(vec![1_u8])
+        .execute(&pool)
+        .await?;
+        sqlx::query(
+            "INSERT INTO secrets (user_id, name, type, encrypted) VALUES (NULL, $1, 'text', $2)",
+        )
+        .bind(&secret_name)
+        .bind(vec![2_u8])
+        .execute(&pool)
+        .await?;
+
+        let err = run(
+            &pool,
+            AddParams {
+                name: &entry_name,
+                folder: &marker,
+                entry_type: "service",
+                notes: "",
+                tags: &[],
+                meta_entries: &[],
+                secret_entries: &[],
+                secret_types: &Default::default(),
+                link_secret_names: std::slice::from_ref(&secret_name),
+                user_id: None,
+            },
+            &[0_u8; 32],
+        )
+        .await
+        .expect_err("must fail on ambiguous linked secret name");
+        assert!(err.to_string().contains("Ambiguous:"));
+
+        cleanup_test_rows(&pool, &marker).await?;
+        Ok(())
+    }
+
+    #[tokio::test]
+    async fn add_duplicate_secret_name_returns_conflict_error() -> Result<()> {
+        let Some(pool) = maybe_test_pool().await else {
+            return Ok(());
+        };
+        let suffix = Uuid::from_u128(rand::random()).to_string();
+        let marker = format!("dup_secret_{}", &suffix[..8]);
+        let entry_name = format!("{}_entry", marker);
+        let secret_name = "shared_token";
+        cleanup_test_rows(&pool, &marker).await?;
+
+        // First add succeeds
+        run(
+            &pool,
+            AddParams {
+                name: &entry_name,
+                folder: &marker,
+                entry_type: "service",
+                notes: "",
+                tags: &[],
+                meta_entries: &[],
+                secret_entries: &[format!("{}=value1", secret_name)],
+                secret_types: &Default::default(),
+                link_secret_names: &[],
+                user_id: None,
+            },
+            &[0_u8; 32],
+        )
+        .await?;
+
+        // Second add with the same secret name under the same user_id should fail with ConflictSecretName
+        let entry_name2 = format!("{}_entry2", marker);
+        let err = run(
+            &pool,
+            AddParams {
+                name: &entry_name2,
+                folder: &marker,
+                entry_type: "service",
+                notes: "",
+                tags: &[],
+                meta_entries: &[],
+                secret_entries: &[format!("{}=value2", secret_name)],
+                secret_types: &Default::default(),
+                link_secret_names: &[],
+                user_id: None,
+            },
+            &[0_u8; 32],
+        )
+        .await
+        .expect_err("must fail on duplicate secret name");
+        let app_err = err
+            .downcast_ref::<crate::error::AppError>()
+            .expect("error should be AppError");
+        assert!(
+            matches!(app_err, crate::error::AppError::ConflictSecretName { .. }),
+            "expected ConflictSecretName, got: {}",
+            app_err
+        );
+
+        cleanup_test_rows(&pool, &marker).await?;
+        Ok(())
+    }
 }

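For `secrets_env_map`, the README above only states the naming scheme (`UPPER(entry)_UPPER(field)` with an optional `prefix`); the env_map service code is not in this compare. A sketch of the name derivation under that description — the sanitization rule is an assumption:

```rust
/// Sketch: derive an environment variable name per the documented
/// UPPER(entry)_UPPER(field) scheme, with an optional prefix.
fn env_var_name(prefix: Option<&str>, entry: &str, field: &str) -> String {
    // Assumed sanitization: non-alphanumeric characters become '_'.
    let up = |s: &str| -> String {
        s.chars()
            .map(|c| if c.is_ascii_alphanumeric() { c.to_ascii_uppercase() } else { '_' })
            .collect()
    };
    match prefix {
        Some(p) => format!("{}_{}_{}", up(p), up(entry), up(field)),
        None => format!("{}_{}", up(entry), up(field)),
    }
}

// e.g. env_var_name(None, "gitea", "token") == "GITEA_TOKEN"
```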
View File

@@ -4,7 +4,7 @@ use sqlx::PgPool;
 use uuid::Uuid;
 
 use crate::db;
-use crate::models::{EntryRow, SecretFieldRow};
+use crate::models::{EntryRow, EntryWriteRow, SecretFieldRow};
 
 #[derive(Debug, serde::Serialize)]
 pub struct DeletedEntry {
@@ -31,6 +31,62 @@ pub struct DeleteParams<'a> {
     pub user_id: Option<Uuid>,
 }
 
+/// Delete a single entry by id (multi-tenant: `user_id` must match).
+pub async fn delete_by_id(pool: &PgPool, entry_id: Uuid, user_id: Uuid) -> Result<DeleteResult> {
+    let mut tx = pool.begin().await?;
+
+    let row: Option<EntryWriteRow> = sqlx::query_as(
+        "SELECT id, version, folder, type, name, tags, metadata, notes FROM entries \
+         WHERE id = $1 AND user_id = $2 FOR UPDATE",
+    )
+    .bind(entry_id)
+    .bind(user_id)
+    .fetch_optional(&mut *tx)
+    .await?;
+
+    let row = match row {
+        Some(r) => r,
+        None => {
+            tx.rollback().await?;
+            anyhow::bail!("Entry not found");
+        }
+    };
+
+    let folder = row.folder.clone();
+    let entry_type = row.entry_type.clone();
+    let name = row.name.clone();
+    let entry_row: EntryRow = (&row).into();
+
+    snapshot_and_delete(
+        &mut tx,
+        &folder,
+        &entry_type,
+        &name,
+        &entry_row,
+        Some(user_id),
+    )
+    .await?;
+
+    crate::audit::log_tx(
+        &mut tx,
+        Some(user_id),
+        "delete",
+        &folder,
+        &entry_type,
+        &name,
+        json!({ "source": "web", "entry_id": entry_id }),
+    )
+    .await;
+
+    tx.commit().await?;
+
+    Ok(DeleteResult {
+        deleted: vec![DeletedEntry {
+            name,
+            folder,
+            entry_type,
+        }],
+        dry_run: false,
+    })
+}
+
 pub async fn run(pool: &PgPool, params: DeleteParams<'_>) -> Result<DeleteResult> {
     match params.name {
         Some(name) => delete_one(pool, name, params.folder, params.dry_run, params.user_id).await,
@@ -66,6 +122,8 @@ async fn delete_one(
     // - 2+ matches → disambiguation error (same as non-dry-run)
     #[derive(sqlx::FromRow)]
     struct DryRunRow {
+        #[allow(dead_code)]
+        id: Uuid,
         folder: String,
         #[sqlx(rename = "type")]
         entry_type: String,
@@ -74,7 +132,7 @@ async fn delete_one(
     let rows: Vec<DryRunRow> = if let Some(uid) = user_id {
         if let Some(f) = folder {
             sqlx::query_as(
-                "SELECT folder, type FROM entries WHERE user_id = $1 AND folder = $2 AND name = $3",
+                "SELECT id, folder, type FROM entries WHERE user_id = $1 AND folder = $2 AND name = $3",
             )
             .bind(uid)
             .bind(f)
@@ -82,7 +140,9 @@ async fn delete_one(
             .fetch_all(pool)
             .await?
         } else {
-            sqlx::query_as("SELECT folder, type FROM entries WHERE user_id = $1 AND name = $2")
+            sqlx::query_as(
+                "SELECT id, folder, type FROM entries WHERE user_id = $1 AND name = $2",
+            )
                 .bind(uid)
                 .bind(name)
                 .fetch_all(pool)
@@ -90,14 +150,16 @@ async fn delete_one(
         }
     } else if let Some(f) = folder {
         sqlx::query_as(
-            "SELECT folder, type FROM entries WHERE user_id IS NULL AND folder = $1 AND name = $2",
+            "SELECT id, folder, type FROM entries WHERE user_id IS NULL AND folder = $1 AND name = $2",
        )
        .bind(f)
        .bind(name)
        .fetch_all(pool)
        .await?
     } else {
-        sqlx::query_as("SELECT folder, type FROM entries WHERE user_id IS NULL AND name = $1")
+        sqlx::query_as(
+            "SELECT id, folder, type FROM entries WHERE user_id IS NULL AND name = $1",
+        )
            .bind(name)
            .fetch_all(pool)
            .await?
@@ -257,14 +319,17 @@ async fn delete_bulk(
     }
     if entry_type.is_some() {
         conditions.push(format!("type = ${}", idx));
-        idx += 1;
     }
     let where_clause = format!("WHERE {}", conditions.join(" AND "));
+    let _ = idx; // used only for placeholder numbering in conditions
 
+    if dry_run {
         let sql = format!(
             "SELECT id, version, folder, type, name, metadata, tags, notes \
              FROM entries {where_clause} ORDER BY type, name"
         );
         let mut q = sqlx::query_as::<_, FullEntryRow>(&sql);
         if let Some(uid) = user_id {
             q = q.bind(uid);
@@ -277,7 +342,6 @@ async fn delete_bulk(
         }
         let rows = q.fetch_all(pool).await?;
 
-    if dry_run {
         let deleted = rows
             .iter()
             .map(|r| DeletedEntry {
@@ -292,9 +356,27 @@ async fn delete_bulk(
         });
     }
 
+    let mut tx = pool.begin().await?;
+
+    let sql = format!(
+        "SELECT id, version, folder, type, name, metadata, tags, notes \
+         FROM entries {where_clause} ORDER BY type, name FOR UPDATE"
+    );
+    let mut q = sqlx::query_as::<_, FullEntryRow>(&sql);
+    if let Some(uid) = user_id {
+        q = q.bind(uid);
+    }
+    if let Some(f) = folder {
+        q = q.bind(f);
+    }
+    if let Some(t) = entry_type {
+        q = q.bind(t);
+    }
+    let rows = q.fetch_all(&mut *tx).await?;
+
     let mut deleted = Vec::with_capacity(rows.len());
     for row in &rows {
-        let entry_row = EntryRow {
+        let entry_row: EntryRow = EntryRow {
             id: row.id,
             version: row.version,
             folder: row.folder.clone(),
@@ -303,7 +385,6 @@ async fn delete_bulk(
             metadata: row.metadata.clone(),
             notes: row.notes.clone(),
         };
-        let mut tx = pool.begin().await?;
         snapshot_and_delete(
             &mut tx,
             &row.folder,
@@ -323,7 +404,6 @@ async fn delete_bulk(
             json!({"bulk": true}),
         )
        .await;
-        tx.commit().await?;
         deleted.push(DeletedEntry {
             name: row.name.clone(),
             folder: row.folder.clone(),
@@ -331,6 +411,8 @@ async fn delete_bulk(
         });
     }
 
+    tx.commit().await?;
+
     Ok(DeleteResult {
         deleted,
         dry_run: false,
@@ -364,8 +446,12 @@ async fn snapshot_and_delete(
         tracing::warn!(error = %e, "failed to snapshot entry history before delete");
     }
 
-    let fields: Vec<SecretFieldRow> =
-        sqlx::query_as("SELECT id, field_name, encrypted FROM secrets WHERE entry_id = $1")
+    let fields: Vec<SecretFieldRow> = sqlx::query_as(
+        "SELECT s.id, s.name, s.encrypted \
+         FROM entry_secrets es \
+         JOIN secrets s ON s.id = es.secret_id \
+         WHERE es.entry_id = $1",
+    )
     .bind(row.id)
     .fetch_all(&mut **tx)
     .await?;
@@ -374,10 +460,8 @@ async fn snapshot_and_delete(
         if let Err(e) = db::snapshot_secret_history(
             tx,
             db::SecretSnapshotParams {
-                entry_id: row.id,
                 secret_id: f.id,
-                entry_version: row.version,
-                field_name: &f.field_name,
+                name: &f.name,
                 encrypted: &f.encrypted,
                 action: "delete",
             },
@@ -393,5 +477,171 @@ async fn snapshot_and_delete(
     .execute(&mut **tx)
     .await?;
 
+    let secret_ids: Vec<Uuid> = fields.iter().map(|f| f.id).collect();
+    if !secret_ids.is_empty() {
+        sqlx::query(
+            "DELETE FROM secrets s \
+             WHERE s.id = ANY($1) \
+             AND NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = s.id)",
+        )
+        .bind(&secret_ids)
+        .execute(&mut **tx)
+        .await?;
+    }
+
     Ok(())
 }
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use sqlx::PgPool;
+
+    async fn maybe_test_pool() -> Option<PgPool> {
+        let Ok(url) = std::env::var("SECRETS_DATABASE_URL") else {
+            eprintln!("skip delete tests: SECRETS_DATABASE_URL is not set");
+            return None;
+        };
+        let Ok(pool) = PgPool::connect(&url).await else {
+            eprintln!("skip delete tests: cannot connect to database");
+            return None;
+        };
+        if let Err(e) = crate::db::migrate(&pool).await {
+            eprintln!("skip delete tests: migrate failed: {e}");
+            return None;
+        }
+        Some(pool)
+    }
+
+    async fn cleanup_single_user_rows(pool: &PgPool, marker: &str) -> Result<()> {
+        sqlx::query(
+            "DELETE FROM entries WHERE user_id IS NULL AND (name LIKE $1 OR folder LIKE $1)",
+        )
+        .bind(format!("%{marker}%"))
+        .execute(pool)
+        .await?;
+        sqlx::query(
+            "DELETE FROM secrets WHERE user_id IS NULL AND name LIKE $1 \
+             AND NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = secrets.id)",
+        )
+        .bind(format!("%{marker}%"))
+        .execute(pool)
+        .await?;
+        Ok(())
+    }
+
+    #[tokio::test]
+    async fn delete_dry_run_reports_matching_entry_without_writes() -> Result<()> {
+        let Some(pool) = maybe_test_pool().await else {
+            return Ok(());
+        };
+        let suffix = Uuid::from_u128(rand::random()).to_string();
+        let marker = format!("delete_dry_{}", &suffix[..8]);
+        let entry_name = format!("{}_entry", marker);
+        cleanup_single_user_rows(&pool, &marker).await?;
+
+        sqlx::query(
+            "INSERT INTO entries (user_id, folder, type, name, notes, tags, metadata) \
+             VALUES (NULL, $1, 'service', $2, '', '{}', '{}')",
+        )
+        .bind(&marker)
+        .bind(&entry_name)
+        .execute(&pool)
+        .await?;
+
+        let result = run(
+            &pool,
+            DeleteParams {
+                name: Some(&entry_name),
+                folder: Some(&marker),
+                entry_type: None,
+                dry_run: true,
+                user_id: None,
+            },
+        )
+        .await?;
+        assert!(result.dry_run);
+        assert_eq!(result.deleted.len(), 1);
+        assert_eq!(result.deleted[0].name, entry_name);
+
+        let still_exists: bool = sqlx::query_scalar(
+            "SELECT EXISTS(SELECT 1 FROM entries WHERE user_id IS NULL AND folder = $1 AND name = $2)",
+        )
+        .bind(&marker)
+        .bind(&entry_name)
+        .fetch_one(&pool)
+        .await?;
+        assert!(still_exists);
+
+        cleanup_single_user_rows(&pool, &marker).await?;
+        Ok(())
+    }
+
+    #[tokio::test]
+    async fn delete_by_id_removes_entry_and_orphan_secret() -> Result<()> {
+        let Some(pool) = maybe_test_pool().await else {
+            return Ok(());
+        };
+        let suffix = Uuid::from_u128(rand::random()).to_string();
+        let marker = format!("delete_id_{}", &suffix[..8]);
+        let user_id = Uuid::from_u128(rand::random());
+        let entry_name = format!("{}_entry", marker);
+        let secret_name = format!("{}_secret", marker);
+
+        sqlx::query("DELETE FROM entries WHERE user_id = $1 AND folder = $2")
+            .bind(user_id)
+            .bind(&marker)
+            .execute(&pool)
+            .await?;
+        sqlx::query("DELETE FROM secrets WHERE user_id = $1 AND name = $2")
+            .bind(user_id)
+            .bind(&secret_name)
+            .execute(&pool)
+            .await?;
+
+        let entry_id: Uuid = sqlx::query_scalar(
+            "INSERT INTO entries (user_id, folder, type, name, notes, tags, metadata) \
+             VALUES ($1, $2, 'service', $3, '', '{}', '{}') RETURNING id",
+        )
+        .bind(user_id)
+        .bind(&marker)
+        .bind(&entry_name)
+        .fetch_one(&pool)
+        .await?;
+        let secret_id: Uuid = sqlx::query_scalar(
+            "INSERT INTO secrets (user_id, name, type, encrypted) VALUES ($1, $2, 'text', $3) RETURNING id",
+        )
+        .bind(user_id)
+        .bind(&secret_name)
+        .bind(vec![1_u8, 2, 3])
+        .fetch_one(&pool)
+        .await?;
+        sqlx::query("INSERT INTO entry_secrets (entry_id, secret_id) VALUES ($1, $2)")
+            .bind(entry_id)
+            .bind(secret_id)
+            .execute(&pool)
+            .await?;
+
+        let result = delete_by_id(&pool, entry_id, user_id).await?;
+        assert!(!result.dry_run);
+        assert_eq!(result.deleted.len(), 1);
+        assert_eq!(result.deleted[0].name, entry_name);
+
+        let entry_exists: bool =
+            sqlx::query_scalar("SELECT EXISTS(SELECT 1 FROM entries WHERE id = $1)")
.bind(entry_id)
.fetch_one(&pool)
.await?;
let secret_exists: bool =
sqlx::query_scalar("SELECT EXISTS(SELECT 1 FROM secrets WHERE id = $1)")
.bind(secret_id)
.fetch_one(&pool)
.await?;
assert!(!entry_exists);
assert!(!secret_exists);
Ok(())
}
}
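The two tests above pin down dry-run and orphan deletion; the invariant worth a separate check is that a secret still linked to a second entry survives deleting the first. A sketch of such a companion test, assuming the same schema and the `maybe_test_pool`/`delete_by_id` helpers above (this test is not part of the diff):

#[tokio::test]
async fn delete_keeps_secret_linked_to_another_entry() -> Result<()> {
    let Some(pool) = maybe_test_pool().await else {
        return Ok(());
    };
    let suffix = Uuid::from_u128(rand::random()).to_string();
    let marker = format!("delete_shared_{}", &suffix[..8]);
    let user_id = Uuid::from_u128(rand::random());
    let secret_name = format!("{}_secret", marker);
    // Two entries in the same folder, both linked to one secret.
    let mut entry_ids = Vec::new();
    for i in 0..2 {
        let id: Uuid = sqlx::query_scalar(
            "INSERT INTO entries (user_id, folder, type, name, notes, tags, metadata) \
             VALUES ($1, $2, 'service', $3, '', '{}', '{}') RETURNING id",
        )
        .bind(user_id)
        .bind(&marker)
        .bind(format!("{}_entry_{}", marker, i))
        .fetch_one(&pool)
        .await?;
        entry_ids.push(id);
    }
    let secret_id: Uuid = sqlx::query_scalar(
        "INSERT INTO secrets (user_id, name, type, encrypted) VALUES ($1, $2, 'text', $3) RETURNING id",
    )
    .bind(user_id)
    .bind(&secret_name)
    .bind(vec![1_u8, 2, 3])
    .fetch_one(&pool)
    .await?;
    for id in &entry_ids {
        sqlx::query("INSERT INTO entry_secrets (entry_id, secret_id) VALUES ($1, $2)")
            .bind(id)
            .bind(secret_id)
            .execute(&pool)
            .await?;
    }
    delete_by_id(&pool, entry_ids[0], user_id).await?;
    // The NOT EXISTS guard sees the remaining link and keeps the secret row.
    let secret_exists: bool =
        sqlx::query_scalar("SELECT EXISTS(SELECT 1 FROM secrets WHERE id = $1)")
            .bind(secret_id)
            .fetch_one(&pool)
            .await?;
    assert!(secret_exists);
    Ok(())
}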

View File

@@ -26,7 +26,8 @@ pub async fn build_env_map(
    let mut combined: HashMap<String, String> = HashMap::new();
    for entry in &entries {
-        let entry_map = build_entry_env_map(pool, entry, only_fields, prefix, master_key).await?;
+        let entry_map =
+            build_entry_env_map(pool, entry, only_fields, prefix, master_key, user_id).await?;
        combined.extend(entry_map);
    }
@@ -39,6 +40,7 @@ async fn build_entry_env_map(
    only_fields: &[String],
    prefix: &str,
    master_key: &[u8; 32],
+    _user_id: Option<Uuid>,
) -> Result<HashMap<String, String>> {
    let entry_ids = vec![entry.id];
    let secrets_map = fetch_secrets_for_entries(pool, &entry_ids).await?;
@@ -49,7 +51,7 @@ async fn build_entry_env_map(
    } else {
        all_fields
            .iter()
-            .filter(|f| only_fields.contains(&f.field_name))
+            .filter(|f| only_fields.contains(&f.name))
            .collect()
    };
@@ -61,36 +63,11 @@ async fn build_entry_env_map(
        let key = format!(
            "{}_{}",
            effective_prefix,
-            f.field_name.to_uppercase().replace(['-', '.'], "_")
+            f.name.to_uppercase().replace(['-', '.'], "_")
        );
        map.insert(key, json_to_env_string(&decrypted));
    }
-    // Resolve key_ref
-    if let Some(key_ref) = entry.metadata.get("key_ref").and_then(|v| v.as_str()) {
-        let key_entries =
-            fetch_entries(pool, None, Some("key"), Some(key_ref), &[], None, None).await?;
-        if let Some(key_entry) = key_entries.first() {
-            let key_ids = vec![key_entry.id];
-            let key_fields_map = fetch_secrets_for_entries(pool, &key_ids).await?;
-            let empty = vec![];
-            let key_fields = key_fields_map.get(&key_entry.id).unwrap_or(&empty);
-            let key_prefix = env_prefix(key_entry, prefix);
-            for f in key_fields {
-                let decrypted = crypto::decrypt_json(master_key, &f.encrypted)?;
-                let key_var = format!(
-                    "{}_{}",
-                    key_prefix,
-                    f.field_name.to_uppercase().replace(['-', '.'], "_")
-                );
-                map.insert(key_var, json_to_env_string(&decrypted));
-            }
-        } else {
-            tracing::warn!(key_ref, "key_ref target not found");
-        }
-    }
    Ok(map)
}
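For reference, the variable-name rule that survives this hunk — uppercase the field name, fold `-` and `.` to `_`, join to the effective prefix — can be sketched in isolation (hypothetical standalone helper; the real code inlines this in `build_entry_env_map`):

fn env_var_name(effective_prefix: &str, field_name: &str) -> String {
    format!(
        "{}_{}",
        effective_prefix,
        field_name.to_uppercase().replace(['-', '.'], "_")
    )
}

fn main() {
    // field 'access_key_id' under prefix 'ALIYUN' → ALIYUN_ACCESS_KEY_ID
    assert_eq!(env_var_name("ALIYUN", "access_key_id"), "ALIYUN_ACCESS_KEY_ID");
    // hyphens and dots become underscores
    assert_eq!(env_var_name("APP", "api-key.v2"), "APP_API_KEY_V2");
}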

View File

@@ -55,7 +55,7 @@ pub async fn export(
        let mut map = BTreeMap::new();
        for f in fields {
            let decrypted = crypto::decrypt_json(mk, &f.encrypted)?;
-            map.insert(f.field_name.clone(), decrypted);
+            map.insert(f.name.clone(), decrypted);
        }
        Some(map)
    }

View File

@@ -5,7 +5,7 @@ use std::collections::HashMap;
use uuid::Uuid;
use crate::crypto;
-use crate::service::search::{fetch_secrets_for_entries, resolve_entry};
+use crate::service::search::{fetch_secrets_for_entries, resolve_entry, resolve_entry_by_id};
/// Decrypt a single named field from an entry.
/// `folder` is optional; if omitted and multiple entries share the name, an error is returned.
@@ -25,7 +25,7 @@ pub async fn get_secret_field(
    let field = fields
        .iter()
-        .find(|f| f.field_name == field_name)
+        .find(|f| f.name == field_name)
        .ok_or_else(|| anyhow::anyhow!("Secret field '{}' not found", field_name))?;
    crypto::decrypt_json(master_key, &field.encrypted)
@@ -49,7 +49,56 @@ pub async fn get_all_secrets(
    let mut map = HashMap::new();
    for f in fields {
        let decrypted = crypto::decrypt_json(master_key, &f.encrypted)?;
-        map.insert(f.field_name.clone(), decrypted);
+        map.insert(f.name.clone(), decrypted);
}
Ok(map)
}
/// Decrypt a single named field from an entry, located by its UUID.
pub async fn get_secret_field_by_id(
pool: &PgPool,
entry_id: Uuid,
field_name: &str,
master_key: &[u8; 32],
user_id: Option<Uuid>,
) -> Result<Value> {
resolve_entry_by_id(pool, entry_id, user_id)
.await
.map_err(|_| anyhow::anyhow!("Entry with id '{}' not found", entry_id))?;
let entry_ids = vec![entry_id];
let secrets_map = fetch_secrets_for_entries(pool, &entry_ids).await?;
let fields = secrets_map.get(&entry_id).map(Vec::as_slice).unwrap_or(&[]);
let field = fields
.iter()
.find(|f| f.name == field_name)
.ok_or_else(|| anyhow::anyhow!("Secret field '{}' not found", field_name))?;
crypto::decrypt_json(master_key, &field.encrypted)
}
/// Decrypt all secret fields from an entry, located by its UUID.
/// Returns a map field_name → decrypted Value.
pub async fn get_all_secrets_by_id(
pool: &PgPool,
entry_id: Uuid,
master_key: &[u8; 32],
user_id: Option<Uuid>,
) -> Result<HashMap<String, Value>> {
// Validate entry exists (and that it belongs to the requesting user)
resolve_entry_by_id(pool, entry_id, user_id)
.await
.map_err(|_| anyhow::anyhow!("Entry with id '{}' not found", entry_id))?;
let entry_ids = vec![entry_id];
let secrets_map = fetch_secrets_for_entries(pool, &entry_ids).await?;
let fields = secrets_map.get(&entry_id).map(Vec::as_slice).unwrap_or(&[]);
let mut map = HashMap::new();
for f in fields {
let decrypted = crypto::decrypt_json(master_key, &f.encrypted)?;
map.insert(f.name.clone(), decrypted);
    }
    Ok(map)
}
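A hedged sketch of a caller for the new by-id accessors, assuming a connected `PgPool`, a 32-byte master key, and an entry id previously returned by search; the module path matches the imports used later in this diff:

use serde_json::Value;
use sqlx::PgPool;
use uuid::Uuid;

async fn print_password(
    pool: &PgPool,
    entry_id: Uuid,
    master_key: &[u8; 32],
    user_id: Uuid,
) -> anyhow::Result<()> {
    // Ownership is checked first via resolve_entry_by_id; a foreign id fails
    // with "Entry with id '…' not found" rather than leaking existence.
    let value: Value = secrets_core::service::get_secret::get_secret_field_by_id(
        pool, entry_id, "password", master_key, Some(user_id),
    )
    .await?;
    println!("{value}");
    Ok(())
}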

View File

@@ -85,6 +85,8 @@ pub async fn run(
            tags: &entry.tags,
            meta_entries: &meta_entries,
            secret_entries: &secret_entries,
+            secret_types: &Default::default(),
+            link_secret_names: &[],
            user_id: params.user_id,
        },
        master_key,

View File

@@ -3,7 +3,6 @@ use serde_json::Value;
use sqlx::PgPool;
use uuid::Uuid;
-use crate::crypto;
use crate::db;
#[derive(Debug, serde::Serialize)]
@@ -27,7 +26,6 @@ pub async fn run(
) -> Result<RollbackResult> {
    #[derive(sqlx::FromRow)]
    struct EntryHistoryRow {
-        entry_id: Uuid,
        folder: String,
        #[sqlx(rename = "type")]
        entry_type: String,
@@ -122,7 +120,7 @@ pub async fn run(
    let snap: Option<EntryHistoryRow> = if let Some(ver) = to_version {
        sqlx::query_as(
-            "SELECT entry_id, folder, type, version, action, tags, metadata \
+            "SELECT folder, type, version, action, tags, metadata \
             FROM entries_history \
             WHERE entry_id = $1 AND version = $2 ORDER BY id DESC LIMIT 1",
        )
@@ -132,7 +130,7 @@ pub async fn run(
        .await?
    } else {
        sqlx::query_as(
-            "SELECT entry_id, folder, type, version, action, tags, metadata \
+            "SELECT folder, type, version, action, tags, metadata \
             FROM entries_history \
             WHERE entry_id = $1 ORDER BY id DESC LIMIT 1",
        )
@@ -151,33 +149,7 @@ pub async fn run(
        )
    })?;
-    #[derive(sqlx::FromRow)]
-    struct SecretHistoryRow {
-        field_name: String,
-        encrypted: Vec<u8>,
-        action: String,
-    }
-    let field_snaps: Vec<SecretHistoryRow> = sqlx::query_as(
-        "SELECT field_name, encrypted, action FROM secrets_history \
-         WHERE entry_id = $1 AND entry_version = $2 ORDER BY field_name",
-    )
-    .bind(snap.entry_id)
-    .bind(snap.version)
-    .fetch_all(pool)
-    .await?;
-    for f in &field_snaps {
-        if f.action != "delete" && !f.encrypted.is_empty() {
-            crypto::decrypt_json(master_key, &f.encrypted).map_err(|e| {
-                anyhow::anyhow!(
-                    "Cannot decrypt snapshot for field '{}': {}",
-                    f.field_name,
-                    e
-                )
-            })?;
-        }
-    }
+    let _ = master_key;
    let mut tx = pool.begin().await?;
@@ -226,11 +198,15 @@ pub async fn run(
    #[derive(sqlx::FromRow)]
    struct LiveField {
        id: Uuid,
-        field_name: String,
+        name: String,
        encrypted: Vec<u8>,
    }
-    let live_fields: Vec<LiveField> =
-        sqlx::query_as("SELECT id, field_name, encrypted FROM secrets WHERE entry_id = $1")
+    let live_fields: Vec<LiveField> = sqlx::query_as(
+        "SELECT s.id, s.name, s.encrypted \
+         FROM entry_secrets es \
+         JOIN secrets s ON s.id = es.secret_id \
+         WHERE es.entry_id = $1",
+    )
    .bind(lr.id)
    .fetch_all(&mut *tx)
    .await?;
@@ -239,10 +215,8 @@ pub async fn run(
        if let Err(e) = db::snapshot_secret_history(
            &mut tx,
            db::SecretSnapshotParams {
-                entry_id: lr.id,
                secret_id: f.id,
-                entry_version: lr.version,
-                field_name: &f.field_name,
+                name: &f.name,
                encrypted: &f.encrypted,
                action: "rollback",
            },
@@ -297,22 +271,9 @@ pub async fn run(
        }
    };
-    sqlx::query("DELETE FROM secrets WHERE entry_id = $1")
-        .bind(live_entry_id)
-        .execute(&mut *tx)
-        .await?;
-    for f in &field_snaps {
-        if f.action == "delete" {
-            continue;
-        }
-        sqlx::query("INSERT INTO secrets (entry_id, field_name, encrypted) VALUES ($1, $2, $3)")
-            .bind(live_entry_id)
-            .bind(&f.field_name)
-            .bind(&f.encrypted)
-            .execute(&mut *tx)
-            .await?;
-    }
+    // In N:N mode, rollback restores entry metadata/tags only.
+    // Secret snapshots are kept for audit but secret linkage/content is not rewritten here.
+    let _ = live_entry_id;
    crate::audit::log_tx(
        &mut tx,

View File

@@ -8,10 +8,23 @@ use crate::models::{Entry, SecretField};
pub const FETCH_ALL_LIMIT: u32 = 100_000;
/// Build an ILIKE pattern for fuzzy matching, escaping `%` and `_` literals.
pub fn ilike_pattern(value: &str) -> String {
format!(
"%{}%",
value
.replace('\\', "\\\\")
.replace('%', "\\%")
.replace('_', "\\_")
)
}
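A small example of the escaping contract, written as an extra test (assumes it sits in the same module as `ilike_pattern`); the matching queries below append `ESCAPE '\'` so the escaped characters match literally:

#[cfg(test)]
mod ilike_pattern_examples {
    use super::ilike_pattern;

    // `%` and `_` in user input never act as wildcards after escaping.
    #[test]
    fn wildcards_match_literally() {
        assert_eq!(ilike_pattern("50%_off"), "%50\\%\\_off%");
        assert_eq!(ilike_pattern("plain"), "%plain%");
    }
}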
pub struct SearchParams<'a> {
    pub folder: Option<&'a str>,
    pub entry_type: Option<&'a str>,
    pub name: Option<&'a str>,
+    /// Fuzzy match on `entries.name` only (ILIKE with escaped `%`/`_`).
+    pub name_query: Option<&'a str>,
    pub tags: &'a [String],
    pub query: Option<&'a str>,
    pub sort: &'a str,
@@ -27,49 +40,50 @@ pub struct SearchResult {
    pub secret_schemas: HashMap<Uuid, Vec<SecretField>>,
}
-pub async fn run(pool: &PgPool, params: SearchParams<'_>) -> Result<SearchResult> {
-    let entries = fetch_entries_paged(pool, &params).await?;
-    let entry_ids: Vec<Uuid> = entries.iter().map(|e| e.id).collect();
-    let secret_schemas = if !entry_ids.is_empty() {
-        fetch_secret_schemas(pool, &entry_ids).await?
-    } else {
-        HashMap::new()
-    };
-    Ok(SearchResult {
-        entries,
-        secret_schemas,
-    })
-}
-/// Fetch entries matching the given filters — returns all matching entries up to FETCH_ALL_LIMIT.
-pub async fn fetch_entries(
-    pool: &PgPool,
-    folder: Option<&str>,
-    entry_type: Option<&str>,
-    name: Option<&str>,
-    tags: &[String],
-    query: Option<&str>,
-    user_id: Option<Uuid>,
-) -> Result<Vec<Entry>> {
-    let params = SearchParams {
-        folder,
-        entry_type,
-        name,
-        tags,
-        query,
-        sort: "name",
-        limit: FETCH_ALL_LIMIT,
-        offset: 0,
-        user_id,
-    };
+/// List `entries` rows matching params (paged, ordered per `params.sort`).
+/// Does not read the `secrets` table.
+pub async fn list_entries(pool: &PgPool, params: SearchParams<'_>) -> Result<Vec<Entry>> {
    fetch_entries_paged(pool, &params).await
}
-async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<Entry>> {
+/// Count `entries` rows matching the same filters as [`list_entries`] (ignores `sort` / `limit` / `offset`).
+/// Does not read the `secrets` table.
+pub async fn count_entries(pool: &PgPool, a: &SearchParams<'_>) -> Result<i64> {
+    let (where_clause, _) = entry_where_clause_and_next_idx(a);
+    let sql = format!("SELECT COUNT(*)::bigint FROM entries {where_clause}");
+    let mut q = sqlx::query_scalar::<_, i64>(&sql);
+    if let Some(uid) = a.user_id {
+        q = q.bind(uid);
+    }
+    if let Some(v) = a.folder {
+        q = q.bind(v);
+    }
+    if let Some(v) = a.entry_type {
+        q = q.bind(v);
+    }
+    if let Some(v) = a.name {
+        q = q.bind(v);
+    }
+    if let Some(v) = a.name_query {
+        let pattern = ilike_pattern(v);
+        q = q.bind(pattern);
+    }
+    for tag in a.tags {
+        q = q.bind(tag);
+    }
+    if let Some(v) = a.query {
+        let pattern = ilike_pattern(v);
+        q = q.bind(pattern);
+    }
+    let n = q.fetch_one(pool).await?;
+    Ok(n)
+}
+/// Shared WHERE clause and the next `$n` index (for LIMIT/OFFSET in paged queries).
+fn entry_where_clause_and_next_idx(a: &SearchParams<'_>) -> (String, i32) {
    let mut conditions: Vec<String> = Vec::new();
    let mut idx: i32 = 1;
+    // user_id filtering — always comes first when present
    if a.user_id.is_some() {
        conditions.push(format!("user_id = ${}", idx));
        idx += 1;
@@ -89,6 +103,10 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
        conditions.push(format!("name = ${}", idx));
        idx += 1;
    }
+    if a.name_query.is_some() {
+        conditions.push(format!("name ILIKE ${} ESCAPE '\\'", idx));
+        idx += 1;
+    }
    if !a.tags.is_empty() {
        let placeholders: Vec<String> = a
            .tags
@@ -115,6 +133,57 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
        idx += 1;
    }
let where_clause = if conditions.is_empty() {
String::new()
} else {
format!("WHERE {}", conditions.join(" AND "))
};
(where_clause, idx)
}
pub async fn run(pool: &PgPool, params: SearchParams<'_>) -> Result<SearchResult> {
let entries = fetch_entries_paged(pool, &params).await?;
let entry_ids: Vec<Uuid> = entries.iter().map(|e| e.id).collect();
let secret_schemas = if !entry_ids.is_empty() {
fetch_secret_schemas(pool, &entry_ids).await?
} else {
HashMap::new()
};
Ok(SearchResult {
entries,
secret_schemas,
})
}
/// Fetch entries matching the given filters — returns all matching entries up to FETCH_ALL_LIMIT.
#[allow(clippy::too_many_arguments)]
pub async fn fetch_entries(
pool: &PgPool,
folder: Option<&str>,
entry_type: Option<&str>,
name: Option<&str>,
tags: &[String],
query: Option<&str>,
user_id: Option<Uuid>,
) -> Result<Vec<Entry>> {
let params = SearchParams {
folder,
entry_type,
name,
name_query: None,
tags,
query,
sort: "name",
limit: FETCH_ALL_LIMIT,
offset: 0,
user_id,
};
list_entries(pool, params).await
}
async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<Entry>> {
let (where_clause, idx) = entry_where_clause_and_next_idx(a);
    let order = match a.sort {
        "updated" => "updated_at DESC",
        "created" => "created_at DESC",
@@ -122,14 +191,7 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
    };
    let limit_idx = idx;
-    idx += 1;
-    let offset_idx = idx;
-    let where_clause = if conditions.is_empty() {
-        String::new()
-    } else {
-        format!("WHERE {}", conditions.join(" AND "))
-    };
+    let offset_idx = idx + 1;
    let sql = format!(
        "SELECT id, user_id, folder, type, name, notes, tags, metadata, version, \
@@ -138,7 +200,6 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
    );
    let mut q = sqlx::query_as::<_, EntryRaw>(&sql);
    if let Some(uid) = a.user_id {
        q = q.bind(uid);
    }
@@ -151,11 +212,15 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
    if let Some(v) = a.name {
        q = q.bind(v);
    }
+    if let Some(v) = a.name_query {
+        let pattern = ilike_pattern(v);
+        q = q.bind(pattern);
+    }
    for tag in a.tags {
        q = q.bind(tag);
    }
    if let Some(v) = a.query {
-        let pattern = format!("%{}%", v.replace('%', "\\%").replace('_', "\\_"));
+        let pattern = ilike_pattern(v);
        q = q.bind(pattern);
    }
    q = q.bind(a.limit as i64).bind(a.offset as i64);
@@ -172,8 +237,12 @@ pub async fn fetch_secret_schemas(
    if entry_ids.is_empty() {
        return Ok(HashMap::new());
    }
-    let fields: Vec<SecretField> = sqlx::query_as(
-        "SELECT * FROM secrets WHERE entry_id = ANY($1) ORDER BY entry_id, field_name",
+    let fields: Vec<EntrySecretRow> = sqlx::query_as(
+        "SELECT es.entry_id, s.id, s.user_id, s.name, s.type, s.encrypted, s.version, s.created_at, s.updated_at \
+         FROM entry_secrets es \
+         JOIN secrets s ON s.id = es.secret_id \
+         WHERE es.entry_id = ANY($1) \
+         ORDER BY es.entry_id, es.sort_order, s.name",
    )
    .bind(entry_ids)
    .fetch_all(pool)
@@ -181,7 +250,8 @@ pub async fn fetch_secret_schemas(
    let mut map: HashMap<Uuid, Vec<SecretField>> = HashMap::new();
    for f in fields {
-        map.entry(f.entry_id).or_default().push(f);
+        let entry_id = f.entry_id;
+        map.entry(entry_id).or_default().push(f.secret());
    }
    Ok(map)
}
@@ -194,8 +264,12 @@ pub async fn fetch_secrets_for_entries(
    if entry_ids.is_empty() {
        return Ok(HashMap::new());
    }
-    let fields: Vec<SecretField> = sqlx::query_as(
-        "SELECT * FROM secrets WHERE entry_id = ANY($1) ORDER BY entry_id, field_name",
+    let fields: Vec<EntrySecretRow> = sqlx::query_as(
+        "SELECT es.entry_id, s.id, s.user_id, s.name, s.type, s.encrypted, s.version, s.created_at, s.updated_at \
+         FROM entry_secrets es \
+         JOIN secrets s ON s.id = es.secret_id \
+         WHERE es.entry_id = ANY($1) \
+         ORDER BY es.entry_id, es.sort_order, s.name",
    )
    .bind(entry_ids)
    .fetch_all(pool)
@@ -203,11 +277,42 @@ pub async fn fetch_secrets_for_entries(
    let mut map: HashMap<Uuid, Vec<SecretField>> = HashMap::new();
    for f in fields {
-        map.entry(f.entry_id).or_default().push(f);
+        let entry_id = f.entry_id;
+        map.entry(entry_id).or_default().push(f.secret());
    }
    Ok(map)
}
/// Resolve exactly one entry by its UUID primary key.
///
/// Returns an error if the entry does not exist or does not belong to the given user.
pub async fn resolve_entry_by_id(
pool: &PgPool,
id: Uuid,
user_id: Option<Uuid>,
) -> Result<crate::models::Entry> {
let row: Option<EntryRaw> = if let Some(uid) = user_id {
sqlx::query_as(
"SELECT id, user_id, folder, type, name, notes, tags, metadata, version, \
created_at, updated_at FROM entries WHERE id = $1 AND user_id = $2",
)
.bind(id)
.bind(uid)
.fetch_optional(pool)
.await?
} else {
sqlx::query_as(
"SELECT id, user_id, folder, type, name, notes, tags, metadata, version, \
created_at, updated_at FROM entries WHERE id = $1 AND user_id IS NULL",
)
.bind(id)
.fetch_optional(pool)
.await?
};
row.map(Entry::from)
.ok_or_else(|| anyhow::anyhow!("Entry with id '{}' not found", id))
}
/// Resolve exactly one entry by name, with optional folder for disambiguation.
///
/// - If `folder` is provided: exact `(folder, name)` match.
@@ -277,3 +382,42 @@ impl From<EntryRaw> for Entry {
        }
    }
}
#[derive(sqlx::FromRow)]
struct EntrySecretRow {
entry_id: Uuid,
id: Uuid,
user_id: Option<Uuid>,
name: String,
#[sqlx(rename = "type")]
secret_type: String,
encrypted: Vec<u8>,
version: i64,
created_at: chrono::DateTime<chrono::Utc>,
updated_at: chrono::DateTime<chrono::Utc>,
}
impl EntrySecretRow {
fn secret(self) -> SecretField {
SecretField {
id: self.id,
user_id: self.user_id,
name: self.name,
secret_type: self.secret_type,
encrypted: self.encrypted,
version: self.version,
created_at: self.created_at,
updated_at: self.updated_at,
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn ilike_pattern_escapes_backslash_percent_and_underscore() {
assert_eq!(ilike_pattern(r"hello\_100%"), r"%hello\\\_100\%%");
}
}
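A sketch of how the split API composes for a paged listing, assuming a connected pool (`page_folder` and its arguments are hypothetical); because `count_entries` and `list_entries` share `entry_where_clause_and_next_idx`, the total always agrees with the page filters:

use sqlx::PgPool;

async fn page_folder(pool: &PgPool, folder: &str, page: u32) -> anyhow::Result<()> {
    let params = SearchParams {
        folder: Some(folder),
        entry_type: None,
        name: None,
        name_query: None,
        tags: &[],
        query: None,
        sort: "updated",
        limit: 20,
        offset: page * 20,
        user_id: None,
    };
    // Count first (borrows params), then fetch the page (consumes params).
    let total = count_entries(pool, &params).await?;
    let entries = list_entries(pool, params).await?;
    println!("page {page}: {} of {total} entries", entries.len());
    Ok(())
}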

View File

@@ -5,11 +5,13 @@ use uuid::Uuid;
use crate::crypto;
use crate::db;
-use crate::models::EntryRow;
+use crate::error::{AppError, DbErrorContext};
+use crate::models::{EntryRow, EntryWriteRow};
use crate::service::add::{
    collect_field_paths, collect_key_paths, flatten_json_fields, insert_path, parse_key_path,
    parse_kv, remove_path,
};
+use crate::taxonomy;
#[derive(Debug, serde::Serialize)]
pub struct UpdateResult {
@@ -23,6 +25,8 @@ pub struct UpdateResult {
    pub remove_meta: Vec<String>,
    pub secret_keys: Vec<String>,
    pub remove_secrets: Vec<String>,
+    pub linked_secrets: Vec<String>,
+    pub unlinked_secrets: Vec<String>,
}
pub struct UpdateParams<'a> {
@@ -35,7 +39,10 @@ pub struct UpdateParams<'a> {
    pub meta_entries: &'a [String],
    pub remove_meta: &'a [String],
    pub secret_entries: &'a [String],
+    pub secret_types: &'a std::collections::HashMap<String, String>,
    pub remove_secrets: &'a [String],
+    pub link_secret_names: &'a [String],
+    pub unlink_secret_names: &'a [String],
    pub user_id: Option<Uuid>,
}
@@ -90,10 +97,7 @@ pub async fn run(
    let row = match rows.len() {
        0 => {
            tx.rollback().await?;
-            anyhow::bail!(
-                "Not found: '{}'. Use `add` to create it first.",
-                params.name
-            )
+            return Err(AppError::NotFoundEntry.into());
        }
        1 => rows.into_iter().next().unwrap(),
        _ => {
@@ -167,14 +171,9 @@ pub async fn run(
    if result.rows_affected() == 0 {
        tx.rollback().await?;
-        anyhow::bail!(
-            "Concurrent modification detected for '{}'. Please retry.",
-            params.name
-        );
+        return Err(AppError::ConcurrentModification.into());
    }
-    let new_version = row.version + 1;
    for entry in params.secret_entries {
        let (path, field_value) = parse_kv(entry)?;
        let flat = flatten_json_fields("", &{
@@ -192,7 +191,10 @@ pub async fn run(
            encrypted: Vec<u8>,
        }
        let ef: Option<ExistingField> = sqlx::query_as(
-            "SELECT id, encrypted FROM secrets WHERE entry_id = $1 AND field_name = $2",
+            "SELECT s.id, s.encrypted \
+             FROM entry_secrets es \
+             JOIN secrets s ON s.id = es.secret_id \
+             WHERE es.entry_id = $1 AND s.name = $2",
        )
        .bind(row.id)
        .bind(field_name)
@@ -203,10 +205,8 @@ pub async fn run(
            && let Err(e) = db::snapshot_secret_history(
                &mut tx,
                db::SecretSnapshotParams {
-                    entry_id: row.id,
                    secret_id: ef.id,
-                    entry_version: row.version,
-                    field_name,
+                    name: field_name,
                    encrypted: &ef.encrypted,
                    action: "update",
                },
@@ -216,16 +216,36 @@ pub async fn run(
            tracing::warn!(error = %e, "failed to snapshot secret field history");
        }
+        if let Some(ef) = ef {
            sqlx::query(
-                "INSERT INTO secrets (entry_id, field_name, encrypted) VALUES ($1, $2, $3) \
-                 ON CONFLICT (entry_id, field_name) DO UPDATE SET \
-                 encrypted = EXCLUDED.encrypted, version = secrets.version + 1, updated_at = NOW()",
+                "UPDATE secrets SET encrypted = $1, version = version + 1, updated_at = NOW() WHERE id = $2",
            )
-            .bind(row.id)
-            .bind(field_name)
            .bind(&encrypted)
+            .bind(ef.id)
            .execute(&mut *tx)
            .await?;
} else {
let secret_type = params
.secret_types
.get(field_name)
.map(|s| s.as_str())
.unwrap_or("text");
let secret_id: Uuid = sqlx::query_scalar(
"INSERT INTO secrets (user_id, name, type, encrypted) VALUES ($1, $2, $3, $4) RETURNING id",
)
.bind(params.user_id)
.bind(field_name.to_string())
.bind(secret_type)
.bind(&encrypted)
.fetch_one(&mut *tx)
.await
.map_err(|e| AppError::from_db_error(e, DbErrorContext::secret_name(field_name)))?;
sqlx::query("INSERT INTO entry_secrets (entry_id, secret_id) VALUES ($1, $2)")
.bind(row.id)
.bind(secret_id)
.execute(&mut *tx)
.await?;
}
        }
    }
@@ -239,7 +259,10 @@ pub async fn run(
            encrypted: Vec<u8>,
        }
        let field: Option<FieldToDelete> = sqlx::query_as(
-            "SELECT id, encrypted FROM secrets WHERE entry_id = $1 AND field_name = $2",
+            "SELECT s.id, s.encrypted \
+             FROM entry_secrets es \
+             JOIN secrets s ON s.id = es.secret_id \
+             WHERE es.entry_id = $1 AND s.name = $2",
        )
        .bind(row.id)
        .bind(&field_name)
@@ -250,10 +273,8 @@ pub async fn run(
        if let Err(e) = db::snapshot_secret_history(
            &mut tx,
            db::SecretSnapshotParams {
-                entry_id: row.id,
                secret_id: f.id,
-                entry_version: new_version,
-                field_name: &field_name,
+                name: &field_name,
                encrypted: &f.encrypted,
                action: "delete",
            },
@@ -262,10 +283,114 @@ pub async fn run(
        {
            tracing::warn!(error = %e, "failed to snapshot secret field history before delete");
        }
-        sqlx::query("DELETE FROM secrets WHERE id = $1")
+        sqlx::query("DELETE FROM entry_secrets WHERE entry_id = $1 AND secret_id = $2")
+            .bind(row.id)
            .bind(f.id)
            .execute(&mut *tx)
            .await?;
sqlx::query(
"DELETE FROM secrets s \
WHERE s.id = $1 \
AND NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = s.id)",
)
.bind(f.id)
.execute(&mut *tx)
.await?;
}
}
// Link existing secrets by name
let mut linked_secrets = Vec::new();
for link_name in params.link_secret_names {
let link_name = link_name.trim();
if link_name.is_empty() {
anyhow::bail!("link_secret_names contains an empty name");
}
let secret_ids: Vec<Uuid> = if let Some(uid) = params.user_id {
sqlx::query_scalar("SELECT id FROM secrets WHERE user_id = $1 AND name = $2")
.bind(uid)
.bind(link_name)
.fetch_all(&mut *tx)
.await?
} else {
sqlx::query_scalar("SELECT id FROM secrets WHERE user_id IS NULL AND name = $1")
.bind(link_name)
.fetch_all(&mut *tx)
.await?
};
match secret_ids.len() {
0 => anyhow::bail!("Not found: secret named '{}'", link_name),
1 => {
sqlx::query(
"INSERT INTO entry_secrets (entry_id, secret_id) VALUES ($1, $2) ON CONFLICT DO NOTHING",
)
.bind(row.id)
.bind(secret_ids[0])
.execute(&mut *tx)
.await?;
linked_secrets.push(link_name.to_string());
}
n => anyhow::bail!(
"Ambiguous: {} secrets named '{}' found. Please deduplicate names first.",
n,
link_name
),
}
}
// Unlink secrets by name
let mut unlinked_secrets = Vec::new();
for unlink_name in params.unlink_secret_names {
let unlink_name = unlink_name.trim();
if unlink_name.is_empty() {
continue;
}
#[derive(sqlx::FromRow)]
struct SecretToUnlink {
id: Uuid,
encrypted: Vec<u8>,
}
let secret: Option<SecretToUnlink> = sqlx::query_as(
"SELECT s.id, s.encrypted \
FROM entry_secrets es \
JOIN secrets s ON s.id = es.secret_id \
WHERE es.entry_id = $1 AND s.name = $2",
)
.bind(row.id)
.bind(unlink_name)
.fetch_optional(&mut *tx)
.await?;
if let Some(s) = secret {
if let Err(e) = db::snapshot_secret_history(
&mut tx,
db::SecretSnapshotParams {
secret_id: s.id,
name: unlink_name,
encrypted: &s.encrypted,
action: "delete",
},
)
.await
{
tracing::warn!(error = %e, "failed to snapshot secret field history before unlink");
}
sqlx::query("DELETE FROM entry_secrets WHERE entry_id = $1 AND secret_id = $2")
.bind(row.id)
.bind(s.id)
.execute(&mut *tx)
.await?;
sqlx::query(
"DELETE FROM secrets s \
WHERE s.id = $1 \
AND NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = s.id)",
)
.bind(s.id)
.execute(&mut *tx)
.await?;
unlinked_secrets.push(unlink_name.to_string());
        }
    }
@@ -288,6 +413,8 @@ pub async fn run(
            "remove_meta": remove_meta_keys,
            "secret_keys": secret_keys,
            "remove_secrets": remove_secret_keys,
+            "linked_secrets": linked_secrets,
+            "unlinked_secrets": unlinked_secrets,
        }),
    )
    .await;
@@ -304,5 +431,131 @@ pub async fn run(
        remove_meta: remove_meta_keys,
        secret_keys,
        remove_secrets: remove_secret_keys,
+        linked_secrets,
+        unlinked_secrets,
    })
}
/// Update non-sensitive entry columns by primary key (multi-tenant: `user_id` must match).
/// Does not read or modify `secrets` rows.
pub struct UpdateEntryFieldsByIdParams<'a> {
pub folder: &'a str,
pub entry_type: &'a str,
pub name: &'a str,
pub notes: &'a str,
pub tags: &'a [String],
pub metadata: &'a serde_json::Value,
}
pub async fn update_fields_by_id(
pool: &PgPool,
entry_id: Uuid,
user_id: Uuid,
params: UpdateEntryFieldsByIdParams<'_>,
) -> Result<()> {
if params.folder.chars().count() > 128 {
anyhow::bail!("folder must be at most 128 characters");
}
if params.entry_type.chars().count() > 64 {
anyhow::bail!("type must be at most 64 characters");
}
if params.name.chars().count() > 256 {
anyhow::bail!("name must be at most 256 characters");
}
let mut tx = pool.begin().await?;
let row: Option<EntryWriteRow> = sqlx::query_as(
"SELECT id, version, folder, type, name, tags, metadata, notes FROM entries \
WHERE id = $1 AND user_id = $2 FOR UPDATE",
)
.bind(entry_id)
.bind(user_id)
.fetch_optional(&mut *tx)
.await?;
let row = match row {
Some(r) => r,
None => {
tx.rollback().await?;
return Err(AppError::NotFoundEntry.into());
}
};
if let Err(e) = db::snapshot_entry_history(
&mut tx,
db::EntrySnapshotParams {
entry_id: row.id,
user_id: Some(user_id),
folder: &row.folder,
entry_type: &row.entry_type,
name: &row.name,
version: row.version,
action: "update",
tags: &row.tags,
metadata: &row.metadata,
},
)
.await
{
tracing::warn!(error = %e, "failed to snapshot entry history before web update");
}
let mut metadata_map = match params.metadata {
Value::Object(m) => m.clone(),
_ => Map::new(),
};
let normalized_type =
taxonomy::normalize_entry_type_and_metadata(params.entry_type, &mut metadata_map);
let normalized_metadata = Value::Object(metadata_map);
let res = sqlx::query(
"UPDATE entries SET folder = $1, type = $2, name = $3, notes = $4, tags = $5, metadata = $6, \
version = version + 1, updated_at = NOW() \
WHERE id = $7 AND version = $8",
)
.bind(params.folder)
.bind(&normalized_type)
.bind(params.name)
.bind(params.notes)
.bind(params.tags)
.bind(&normalized_metadata)
.bind(row.id)
.bind(row.version)
.execute(&mut *tx)
.await
.map_err(|e| {
if let sqlx::Error::Database(ref d) = e
&& d.code().as_deref() == Some("23505")
{
return AppError::ConflictEntryName {
folder: params.folder.to_string(),
name: params.name.to_string(),
};
}
AppError::Internal(e.into())
})?;
if res.rows_affected() == 0 {
tx.rollback().await?;
return Err(AppError::ConcurrentModification.into());
}
crate::audit::log_tx(
&mut tx,
Some(user_id),
"update",
params.folder,
&normalized_type,
params.name,
serde_json::json!({
"source": "web",
"entry_id": entry_id,
"fields": ["folder", "type", "name", "notes", "tags", "metadata"],
}),
)
.await;
tx.commit().await?;
Ok(())
}
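The name-resolution contract for `link_secret_names` above is: zero matches → not-found error, exactly one → link it, several → ambiguity error. Isolated as a sketch (hypothetical helper; the real code inlines this match):

fn resolve_link(ids: &[uuid::Uuid], name: &str) -> anyhow::Result<uuid::Uuid> {
    match ids.len() {
        0 => anyhow::bail!("Not found: secret named '{name}'"),
        1 => Ok(ids[0]),
        n => anyhow::bail!(
            "Ambiguous: {n} secrets named '{name}' found. Please deduplicate names first."
        ),
    }
}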

View File

@@ -0,0 +1,111 @@
use serde_json::{Map, Value};
fn normalize_token(input: &str) -> String {
input.trim().to_lowercase().replace('_', "-")
}
fn normalize_subtype_token(input: &str) -> String {
normalize_token(input)
}
fn map_legacy_entry_type(input: &str) -> Option<(&'static str, &'static str)> {
match input {
"log-ingestion-endpoint" => Some(("service", "log-ingestion")),
"cloud-api" => Some(("service", "cloud-api")),
"git-server" => Some(("service", "git")),
"mqtt-broker" => Some(("service", "mqtt-broker")),
"database" => Some(("service", "database")),
"monitoring-dashboard" => Some(("service", "monitoring")),
"dns-api" => Some(("service", "dns-api")),
"notification-webhook" => Some(("service", "webhook")),
"api-endpoint" => Some(("service", "api-endpoint")),
"credential" | "credential-key" => Some(("service", "credential")),
"key" => Some(("service", "credential")),
_ => None,
}
}
/// Normalize entry `type` and optionally backfill `metadata.subtype` for legacy values.
///
/// This keeps backward compatibility:
/// - stable primary types stay unchanged
/// - known legacy long-tail types are mapped to `service` + `metadata.subtype`
/// - unknown values are kept (normalized to kebab-case) instead of hard failing
pub fn normalize_entry_type_and_metadata(
entry_type: &str,
metadata: &mut Map<String, Value>,
) -> String {
let original_raw = entry_type.trim();
let normalized = normalize_token(original_raw);
if normalized.is_empty() {
return String::new();
}
if let Some((mapped_type, mapped_subtype)) = map_legacy_entry_type(&normalized) {
if !metadata.contains_key("subtype") {
metadata.insert(
"subtype".to_string(),
Value::String(mapped_subtype.to_string()),
);
}
if !metadata.contains_key("_original_type") && original_raw != mapped_type {
metadata.insert(
"_original_type".to_string(),
Value::String(original_raw.to_string()),
);
}
return mapped_type.to_string();
}
if let Some(subtype) = metadata.get_mut("subtype")
&& let Some(s) = subtype.as_str()
{
*subtype = Value::String(normalize_subtype_token(s));
}
normalized
}
/// Canonical secret type options for UI dropdowns.
pub const SECRET_TYPE_OPTIONS: &[&str] = &[
"text", "password", "token", "api-key", "ssh-key", "url", "phone", "id-card",
];
#[cfg(test)]
mod tests {
use super::*;
use serde_json::{Map, Value};
#[test]
fn normalize_entry_type_maps_legacy_type_and_backfills_metadata() {
let mut metadata = Map::new();
let normalized = normalize_entry_type_and_metadata("git-server", &mut metadata);
assert_eq!(normalized, "service");
assert_eq!(
metadata.get("subtype"),
Some(&Value::String("git".to_string()))
);
assert_eq!(
metadata.get("_original_type"),
Some(&Value::String("git-server".to_string()))
);
}
#[test]
fn normalize_entry_type_normalizes_existing_subtype() {
let mut metadata = Map::new();
metadata.insert(
"subtype".to_string(),
Value::String("Cloud_API".to_string()),
);
let normalized = normalize_entry_type_and_metadata("service", &mut metadata);
assert_eq!(normalized, "service");
assert_eq!(
metadata.get("subtype"),
Some(&Value::String("cloud-api".to_string()))
);
}
}
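For unknown types the normalizer is a no-op apart from token cleanup; a quick example (assumes `normalize_entry_type_and_metadata` is in scope, input value is hypothetical):

use serde_json::Map;

fn main() {
    let mut metadata = Map::new();
    let t = normalize_entry_type_and_metadata(" Edge_Gateway ", &mut metadata);
    assert_eq!(t, "edge-gateway"); // trimmed, lowercased, '_' folded to '-'
    assert!(metadata.is_empty()); // no legacy mapping, so nothing backfilled
}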

View File

@@ -1,6 +1,6 @@
[package]
name = "secrets-mcp"
-version = "0.3.0"
+version = "0.5.0"
edition.workspace = true
[[bin]]

View File

@@ -0,0 +1,40 @@
use secrets_core::error::AppError;
/// Map a structured `AppError` to an MCP protocol error.
///
/// This replaces the previous pattern of swallowing all errors into `-32603`.
pub fn app_error_to_mcp(err: &AppError) -> rmcp::ErrorData {
match err {
AppError::ConflictSecretName { secret_name } => rmcp::ErrorData::invalid_request(
format!(
"A secret with the name '{secret_name}' already exists for your account. \
Secret names must be unique per user."
),
None,
),
AppError::ConflictEntryName { folder, name } => rmcp::ErrorData::invalid_request(
format!(
"An entry with folder='{folder}' and name='{name}' already exists. \
The combination of folder and name must be unique."
),
None,
),
AppError::NotFoundEntry => rmcp::ErrorData::invalid_request(
"Entry not found. Use secrets_find to discover existing entries.",
None,
),
AppError::Validation { message } => rmcp::ErrorData::invalid_request(message.clone(), None),
AppError::ConcurrentModification => rmcp::ErrorData::invalid_request(
"The entry was modified by another request. Please refresh and try again.",
None,
),
AppError::DecryptionFailed => rmcp::ErrorData::invalid_request(
"Decryption failed — the encryption key may be incorrect or does not match the data.",
None,
),
AppError::Internal(_) => rmcp::ErrorData::internal_error(
"Request failed due to a server error. Check service logs if you need details.",
None,
),
}
}
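A hedged sketch of a call site for this mapper, assuming the error arrives wrapped in `anyhow::Error`; the adapter actually added later in this diff (`mcp_err_from_anyhow`) performs exactly this downcast:

use secrets_core::error::AppError;

// Hypothetical wrapper for illustration only.
fn classify(err: &anyhow::Error) -> rmcp::ErrorData {
    match err.downcast_ref::<AppError>() {
        Some(app_err) => app_error_to_mcp(app_err),
        None => rmcp::ErrorData::internal_error("Request failed.", None),
    }
}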

View File

@@ -1,4 +1,5 @@
mod auth;
+mod error;
mod logging;
mod oauth;
mod tools;
@@ -21,7 +22,7 @@ use tower_sessions_sqlx_store_chrono::PostgresStore;
use tracing_subscriber::EnvFilter;
use tracing_subscriber::fmt::time::FormatTime;
-use secrets_core::config::resolve_db_url;
+use secrets_core::config::resolve_db_config;
use secrets_core::db::{create_pool, migrate};
use crate::oauth::OAuthConfig;
@@ -40,6 +41,14 @@ fn load_env_var(name: &str) -> Option<String> {
    std::env::var(name).ok().filter(|s| !s.is_empty())
}
+/// Pretty-print bind address in logs (`127.0.0.1` → `localhost`); actual socket bind unchanged.
+fn listen_addr_log_display(bind_addr: &str) -> String {
+    bind_addr
+        .strip_prefix("127.0.0.1:")
+        .map(|port| format!("localhost:{port}"))
+        .unwrap_or_else(|| bind_addr.to_string())
+}
fn load_oauth_config(prefix: &str, base_url: &str, path: &str) -> Option<OAuthConfig> {
    let client_id = load_env_var(&format!("{}_CLIENT_ID", prefix))?;
    let client_secret = load_env_var(&format!("{}_CLIENT_SECRET", prefix))?;
@@ -78,9 +87,9 @@ async fn main() -> Result<()> {
        .init();
    // ── Database ──────────────────────────────────────────────────────────────
-    let db_url = resolve_db_url("")
+    let db_config = resolve_db_config("")
        .context("Database not configured. Set SECRETS_DATABASE_URL environment variable.")?;
-    let pool = create_pool(&db_url)
+    let pool = create_pool(&db_config)
        .await
        .context("failed to connect to database")?;
    migrate(&pool)
@@ -168,7 +177,10 @@ async fn main() -> Result<()> {
    .await
    .with_context(|| format!("failed to bind to {}", bind_addr))?;
-    tracing::info!("Secrets MCP Server listening on http://{}", bind_addr);
+    tracing::info!(
+        "Secrets MCP Server listening on http://{}",
+        listen_addr_log_display(&bind_addr)
+    );
    tracing::info!("MCP endpoint: {}/mcp", base_url);
    axum::serve(
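The display-only rewrite above is easy to pin down with a unit test; a sketch assuming it lives next to `listen_addr_log_display`:

#[cfg(test)]
mod listen_display_tests {
    use super::listen_addr_log_display;

    #[test]
    fn loopback_shows_localhost() {
        assert_eq!(listen_addr_log_display("127.0.0.1:8080"), "localhost:8080");
        // Non-loopback binds are printed unchanged.
        assert_eq!(listen_addr_log_display("0.0.0.0:8080"), "0.0.0.0:8080");
    }
}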

View File

@@ -13,23 +13,79 @@ use rmcp::{
    tool, tool_handler, tool_router,
};
use schemars::JsonSchema;
-use serde::Deserialize;
+use serde::{Deserialize, Deserializer, de};
+use serde_json::{Map, Value};
use sqlx::PgPool;
use uuid::Uuid;
// ── Serde helpers for numeric parameters that may arrive as strings ──────────
mod deser {
use super::*;
/// Deserialize a value that may come as a JSON number or a JSON string.
pub fn option_u32_from_string<'de, D>(deserializer: D) -> Result<Option<u32>, D::Error>
where
D: Deserializer<'de>,
{
#[derive(Deserialize)]
#[serde(untagged)]
enum NumOrStr {
Num(u32),
Str(String),
}
match Option::<NumOrStr>::deserialize(deserializer)? {
None => Ok(None),
Some(NumOrStr::Num(n)) => Ok(Some(n)),
Some(NumOrStr::Str(s)) => {
if s.is_empty() {
return Ok(None);
}
s.parse::<u32>().map(Some).map_err(de::Error::custom)
}
}
}
/// Deserialize an i64 that may come as a JSON number or a JSON string.
pub fn option_i64_from_string<'de, D>(deserializer: D) -> Result<Option<i64>, D::Error>
where
D: Deserializer<'de>,
{
#[derive(Deserialize)]
#[serde(untagged)]
enum NumOrStr {
Num(i64),
Str(String),
}
match Option::<NumOrStr>::deserialize(deserializer)? {
None => Ok(None),
Some(NumOrStr::Num(n)) => Ok(Some(n)),
Some(NumOrStr::Str(s)) => {
if s.is_empty() {
return Ok(None);
}
s.parse::<i64>().map(Some).map_err(de::Error::custom)
}
}
}
}
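An example of what the helpers buy: a numeric field deserializes identically from a JSON number, a numeric string, or an empty string (the `Page` struct is hypothetical; the `deser` module above is assumed in scope):

#[derive(serde::Deserialize)]
struct Page {
    #[serde(default, deserialize_with = "deser::option_u32_from_string")]
    limit: Option<u32>,
}

fn main() {
    let a: Page = serde_json::from_str(r#"{"limit": 20}"#).unwrap();
    let b: Page = serde_json::from_str(r#"{"limit": "20"}"#).unwrap();
    let c: Page = serde_json::from_str(r#"{"limit": ""}"#).unwrap();
    assert_eq!(a.limit, Some(20));
    assert_eq!(b.limit, Some(20));
    assert_eq!(c.limit, None); // empty string treated as absent
}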
use secrets_core::models::ExportFormat;
use secrets_core::service::{
    add::{AddParams, run as svc_add},
    delete::{DeleteParams, run as svc_delete},
    export::{ExportParams, export as svc_export},
-    get_secret::{get_all_secrets, get_secret_field},
+    get_secret::{get_all_secrets_by_id, get_secret_field_by_id},
    history::run as svc_history,
    rollback::run as svc_rollback,
-    search::{SearchParams, run as svc_search},
+    search::{SearchParams, resolve_entry_by_id, run as svc_search},
    update::{UpdateParams, run as svc_update},
};
use crate::auth::AuthUser;
+use crate::error;
// ── MCP client-facing errors (no internal details) ───────────────────────────
@@ -49,6 +105,17 @@ fn mcp_err_internal_logged(
    )
}
+fn mcp_err_from_anyhow(
+    tool: &'static str,
+    user_id: Option<Uuid>,
+    err: anyhow::Error,
+) -> rmcp::ErrorData {
+    if let Some(app_err) = err.downcast_ref::<secrets_core::error::AppError>() {
+        return error::app_error_to_mcp(app_err);
+    }
+    mcp_err_internal_logged(tool, user_id, err)
+}
fn mcp_err_invalid_encryption_key_logged(err: impl std::fmt::Display) -> rmcp::ErrorData {
    tracing::warn!(error = %err, "invalid X-Encryption-Key");
    rmcp::ErrorData::invalid_request(
@@ -153,17 +220,49 @@ impl SecretsService {
// ── Tool parameter types ──────────────────────────────────────────────────────
#[derive(Debug, Deserialize, JsonSchema)]
struct FindInput {
#[schemars(
description = "Fuzzy search across name, folder, type, notes, tags, and metadata values"
)]
query: Option<String>,
#[schemars(description = "Exact folder filter (e.g. 'refining', 'ricnsmart')")]
folder: Option<String>,
#[schemars(
description = "Exact type filter (recommended: 'server', 'service', 'person', 'document')"
)]
#[serde(rename = "type")]
entry_type: Option<String>,
#[schemars(description = "Exact name filter. For fuzzy matching use name_query instead.")]
name: Option<String>,
#[schemars(
description = "Fuzzy name filter (ILIKE, case-insensitive partial match). Use this instead of 'name' when you don't know the exact name."
)]
name_query: Option<String>,
#[schemars(description = "Tag filters (all must match)")]
tags: Option<Vec<String>>,
#[schemars(description = "Max results (default 20)")]
#[serde(default, deserialize_with = "deser::option_u32_from_string")]
limit: Option<u32>,
}
#[derive(Debug, Deserialize, JsonSchema)]
struct SearchInput {
    #[schemars(description = "Fuzzy search across name, folder, type, notes, tags, metadata")]
    query: Option<String>,
    #[schemars(description = "Folder filter (e.g. 'refining', 'personal', 'family')")]
    folder: Option<String>,
-    #[schemars(description = "Type filter (e.g. 'server', 'service', 'person', 'key')")]
+    #[schemars(
+        description = "Type filter (recommended: 'server', 'service', 'person', 'document')"
+    )]
    #[serde(rename = "type")]
    entry_type: Option<String>,
-    #[schemars(description = "Exact name to match")]
+    #[schemars(description = "Exact name to match. For fuzzy matching use name_query instead.")]
    name: Option<String>,
+    #[schemars(
+        description = "Fuzzy name filter (ILIKE, case-insensitive partial match). Use this instead of 'name' when you don't know the exact name."
+    )]
+    name_query: Option<String>,
    #[schemars(description = "Tag filters (all must match)")]
    tags: Option<Vec<String>>,
    #[schemars(description = "Return only summary fields (name/tags/notes/updated_at)")]
@@ -171,19 +270,17 @@ struct SearchInput {
    #[schemars(description = "Sort order: 'name' (default), 'updated', 'created'")]
    sort: Option<String>,
    #[schemars(description = "Max results (default 20)")]
+    #[serde(default, deserialize_with = "deser::option_u32_from_string")]
    limit: Option<u32>,
    #[schemars(description = "Pagination offset (default 0)")]
+    #[serde(default, deserialize_with = "deser::option_u32_from_string")]
    offset: Option<u32>,
}
#[derive(Debug, Deserialize, JsonSchema)]
struct GetSecretInput {
-    #[schemars(description = "Name of the entry")]
-    name: String,
-    #[schemars(
-        description = "Folder for disambiguation when multiple entries share the same name (optional)"
-    )]
-    folder: Option<String>,
+    #[schemars(description = "Entry UUID obtained from secrets_find results")]
+    id: String,
    #[schemars(description = "Specific field to retrieve. If omitted, returns all fields.")]
    field: Option<String>,
}
@@ -195,7 +292,7 @@ struct AddInput {
    #[schemars(description = "Folder for organization (optional, e.g. 'personal', 'refining')")]
    folder: Option<String>,
    #[schemars(
-        description = "Type/category of this entry (optional, e.g. 'server', 'person', 'key')"
+        description = "Type/category of this entry (optional, recommended: 'server', 'service', 'person', 'document')"
    )]
    #[serde(rename = "type")]
    entry_type: Option<String>,
@@ -205,8 +302,26 @@ struct AddInput {
    tags: Option<Vec<String>>,
    #[schemars(description = "Metadata fields as 'key=value' or 'key:=json' strings")]
    meta: Option<Vec<String>>,
-    #[schemars(description = "Secret fields as 'key=value' strings")]
+    #[schemars(
+        description = "Metadata fields as a JSON object {\"key\": value}. Merged with 'meta' if both provided."
+    )]
+    meta_obj: Option<Map<String, Value>>,
+    #[schemars(
+        description = "Secret fields as 'key=value' strings. Reminder: non-sensitive endpoint/address fields should go to metadata.address instead of secrets."
+    )]
    secrets: Option<Vec<String>>,
+    #[schemars(
+        description = "Secret fields as a JSON object {\"key\": \"value\"}. Merged with 'secrets' if both provided. Reminder: non-sensitive endpoint/address fields should go to metadata.address."
+    )]
+    secrets_obj: Option<Map<String, Value>>,
+    #[schemars(
+        description = "Secret types as {\"secret_name\": \"type\"}. Keys must match secret field names. Missing keys default to \"text\"."
+    )]
+    secret_types: Option<Map<String, Value>>,
+    #[schemars(
+        description = "Link existing secrets by secret name. Names must resolve uniquely under current user."
+    )]
+    link_secret_names: Option<Vec<String>>,
}
#[derive(Debug, Deserialize, JsonSchema)]
@@ -217,6 +332,10 @@ struct UpdateInput {
        description = "Folder for disambiguation when multiple entries share the same name (optional)"
    )]
    folder: Option<String>,
+    #[schemars(
+        description = "Entry UUID (from secrets_find). If provided, name/folder are used for disambiguation only."
+    )]
+    id: Option<String>,
    #[schemars(description = "Update the notes field")]
    notes: Option<String>,
    #[schemars(description = "Tags to add")]
@@ -225,16 +344,43 @@ struct UpdateInput {
    remove_tags: Option<Vec<String>>,
    #[schemars(description = "Metadata fields to update/add as 'key=value' strings")]
    meta: Option<Vec<String>>,
+    #[schemars(
+        description = "Metadata fields to update/add as a JSON object {\"key\": value}. Merged with 'meta' if both provided."
+    )]
+    meta_obj: Option<Map<String, Value>>,
    #[schemars(description = "Metadata field keys to remove")]
    remove_meta: Option<Vec<String>>,
-    #[schemars(description = "Secret fields to update/add as 'key=value' strings")]
+    #[schemars(
+        description = "Secret fields to update/add as 'key=value' strings. Reminder: non-sensitive endpoint/address fields should go to metadata.address instead of secrets."
+    )]
    secrets: Option<Vec<String>>,
+    #[schemars(
+        description = "Secret fields to update/add as a JSON object {\"key\": \"value\"}. Merged with 'secrets' if both provided. Reminder: non-sensitive endpoint/address fields should go to metadata.address."
+    )]
+    secrets_obj: Option<Map<String, Value>>,
+    #[schemars(
+        description = "Secret types as {\"secret_name\": \"type\"}. Keys must match secret field names. Missing keys default to \"text\"."
+    )]
+    secret_types: Option<Map<String, Value>>,
    #[schemars(description = "Secret field keys to remove")]
    remove_secrets: Option<Vec<String>>,
+    #[schemars(
+        description = "Link existing secrets by name to this entry. Names must resolve uniquely under current user."
+    )]
+    link_secret_names: Option<Vec<String>>,
+    #[schemars(
+        description = "Unlink secrets by name from this entry. Orphaned secrets are auto-deleted."
+    )]
+    unlink_secret_names: Option<Vec<String>>,
}
#[derive(Debug, Deserialize, JsonSchema)]
struct DeleteInput {
+    #[schemars(
+        description = "Entry UUID (from secrets_find). If provided, deletes this specific entry \
+                       regardless of name/folder."
+    )]
+    id: Option<String>,
    #[schemars(description = "Name of the entry to delete (single delete). \
                              Omit to bulk delete by folder/type filters.")]
    name: Option<String>,
@@ -255,7 +401,12 @@ struct HistoryInput {
description = "Folder for disambiguation when multiple entries share the same name (optional)" description = "Folder for disambiguation when multiple entries share the same name (optional)"
)] )]
folder: Option<String>, folder: Option<String>,
#[schemars(
description = "Entry UUID (from secrets_find). If provided, name/folder are ignored."
)]
id: Option<String>,
#[schemars(description = "Max history entries to return (default 20)")] #[schemars(description = "Max history entries to return (default 20)")]
#[serde(default, deserialize_with = "deser::option_u32_from_string")]
limit: Option<u32>, limit: Option<u32>,
} }
@@ -267,7 +418,12 @@ struct RollbackInput {
description = "Folder for disambiguation when multiple entries share the same name (optional)" description = "Folder for disambiguation when multiple entries share the same name (optional)"
)] )]
folder: Option<String>, folder: Option<String>,
#[schemars(
description = "Entry UUID (from secrets_find). If provided, name/folder are ignored."
)]
id: Option<String>,
#[schemars(description = "Target version number. Omit to restore the most recent snapshot.")] #[schemars(description = "Target version number. Omit to restore the most recent snapshot.")]
#[serde(default, deserialize_with = "deser::option_i64_from_string")]
to_version: Option<i64>, to_version: Option<i64>,
} }
@@ -301,17 +457,130 @@ struct EnvMapInput {
tags: Option<Vec<String>>,
#[schemars(description = "Only include these secret fields")]
only_fields: Option<Vec<String>>,
#[schemars(description = "Environment variable name prefix. \
Variable names are built as UPPER(prefix)_UPPER(entry_name)_UPPER(field_name), \
with hyphens and dots replaced by underscores. \
Example: entry 'aliyun', field 'access_key_id' → ALIYUN_ACCESS_KEY_ID \
(or PREFIX_ALIYUN_ACCESS_KEY_ID with prefix set).")]
prefix: Option<String>,
}
#[derive(Debug, Deserialize, JsonSchema)]
struct OverviewInput {}
// ── Helpers ───────────────────────────────────────────────────────────────────
/// Convert a JSON object map into "key=value" / "key:=json" strings for service-layer parsing.
fn map_to_kv_strings(map: Map<String, Value>) -> Vec<String> {
map.into_iter()
.map(|(k, v)| match &v {
Value::String(s) => format!("{}={}", k, s),
_ => format!("{}:={}", k, v),
})
.collect()
}
/// Parse a UUID string, returning an MCP error on failure.
fn parse_uuid(s: &str) -> Result<Uuid, rmcp::ErrorData> {
s.parse::<Uuid>()
.map_err(|_| rmcp::ErrorData::invalid_request(format!("Invalid UUID: '{}'", s), None))
}
// ── Tool implementations ──────────────────────────────────────────────────────
#[tool_router]
impl SecretsService {
#[tool(
description = "Find entries in the secrets store by folder, name, type, tags, or a \
fuzzy query that also searches metadata values. Requires Bearer API key. \
Returns 0 or more entries with id, metadata, and secret field names (not values). \
Use the returned id with secrets_get to decrypt secret values. \
Replaces secrets_search for discovery tasks.",
annotations(title = "Find Secrets", read_only_hint = true, idempotent_hint = true)
)]
async fn secrets_find(
&self,
Parameters(input): Parameters<FindInput>,
ctx: RequestContext<RoleServer>,
) -> Result<CallToolResult, rmcp::ErrorData> {
let t = Instant::now();
let user_id = Self::require_user_id(&ctx)?;
tracing::info!(
tool = "secrets_find",
?user_id,
folder = input.folder.as_deref(),
entry_type = input.entry_type.as_deref(),
name = input.name.as_deref(),
name_query = input.name_query.as_deref(),
query = input.query.as_deref(),
"tool call start",
);
let tags = input.tags.unwrap_or_default();
let result = svc_search(
&self.pool,
SearchParams {
folder: input.folder.as_deref(),
entry_type: input.entry_type.as_deref(),
name: input.name.as_deref(),
name_query: input.name_query.as_deref(),
tags: &tags,
query: input.query.as_deref(),
sort: "name",
limit: input.limit.unwrap_or(20),
offset: 0,
user_id: Some(user_id),
},
)
.await
.map_err(|e| mcp_err_internal_logged("secrets_find", Some(user_id), e))?;
let entries: Vec<serde_json::Value> = result
.entries
.iter()
.map(|e| {
let schema: Vec<serde_json::Value> = result
.secret_schemas
.get(&e.id)
.map(|f| {
f.iter()
.map(|s| {
serde_json::json!({
"id": s.id,
"name": s.name,
"type": s.secret_type,
})
})
.collect()
})
.unwrap_or_default();
serde_json::json!({
"id": e.id,
"name": e.name,
"folder": e.folder,
"type": e.entry_type,
"tags": e.tags,
"metadata": e.metadata,
"secret_fields": schema,
"updated_at": e.updated_at.format("%Y-%m-%dT%H:%M:%SZ").to_string(),
})
})
.collect();
tracing::info!(
tool = "secrets_find",
?user_id,
result_count = entries.len(),
elapsed_ms = t.elapsed().as_millis(),
"tool call ok",
);
let json = serde_json::to_string_pretty(&entries).unwrap_or_else(|_| "[]".to_string());
Ok(CallToolResult::success(vec![Content::text(json)]))
}
#[tool(
description = "Search entries in the secrets store. Requires Bearer API key. Returns \
entries with metadata and secret field names (not values). \
Prefer secrets_find for discovery; secrets_search is kept for backward compatibility.",
annotations(
title = "Search Secrets",
read_only_hint = true,
@@ -331,6 +600,7 @@ impl SecretsService {
folder = input.folder.as_deref(),
entry_type = input.entry_type.as_deref(),
name = input.name.as_deref(),
name_query = input.name_query.as_deref(),
query = input.query.as_deref(),
"tool call start",
);
@@ -341,6 +611,7 @@ impl SecretsService {
folder: input.folder.as_deref(),
entry_type: input.entry_type.as_deref(),
name: input.name.as_deref(),
name_query: input.name_query.as_deref(),
tags: &tags,
query: input.query.as_deref(),
sort: input.sort.as_deref().unwrap_or("name"),
@@ -367,10 +638,20 @@ impl SecretsService {
"updated_at": e.updated_at.format("%Y-%m-%dT%H:%M:%SZ").to_string(), "updated_at": e.updated_at.format("%Y-%m-%dT%H:%M:%SZ").to_string(),
}) })
} else { } else {
let schema: Vec<&str> = result let schema: Vec<serde_json::Value> = result
.secret_schemas .secret_schemas
.get(&e.id) .get(&e.id)
.map(|f| f.iter().map(|s| s.field_name.as_str()).collect()) .map(|f| {
f.iter()
.map(|s| {
serde_json::json!({
"id": s.id,
"name": s.name,
"type": s.secret_type,
})
})
.collect()
})
.unwrap_or_default(); .unwrap_or_default();
serde_json::json!({ serde_json::json!({
"id": e.id, "id": e.id,
@@ -401,8 +682,8 @@ impl SecretsService {
}
#[tool(
description = "Get decrypted secret field values for an entry identified by its UUID \
(from secrets_find). Requires X-Encryption-Key header. \
Returns all fields, or a specific field if 'field' is provided.",
annotations(
title = "Get Secret Values",
@@ -417,29 +698,23 @@ impl SecretsService {
) -> Result<CallToolResult, rmcp::ErrorData> {
let t = Instant::now();
let (user_id, user_key) = Self::require_user_and_key(&ctx)?;
let entry_id = parse_uuid(&input.id)?;
tracing::info!(
tool = "secrets_get",
id = %input.id,
field = input.field.as_deref(),
"tool call start",
);
if let Some(field_name) = &input.field {
let value =
get_secret_field_by_id(&self.pool, entry_id, field_name, &user_key, Some(user_id))
.await
.map_err(|e| mcp_err_from_anyhow("secrets_get", Some(user_id), e))?;
tracing::info!(
tool = "secrets_get",
id = %input.id,
elapsed_ms = t.elapsed().as_millis(),
"tool call ok",
);
@@ -447,21 +722,14 @@ impl SecretsService {
let json = serde_json::to_string_pretty(&result).unwrap_or_default();
Ok(CallToolResult::success(vec![Content::text(json)]))
} else {
let secrets = get_all_secrets_by_id(&self.pool, entry_id, &user_key, Some(user_id))
.await
.map_err(|e| mcp_err_from_anyhow("secrets_get", Some(user_id), e))?;
tracing::info!(
tool = "secrets_get",
id = %entry_id,
field_count = secrets.len(),
elapsed_ms = t.elapsed().as_millis(),
"tool call ok",
);
@@ -473,7 +741,8 @@ impl SecretsService {
#[tool(
description = "Add or upsert an entry with metadata and encrypted secret fields. \
Requires X-Encryption-Key header. \
Meta and secret values use 'key=value', 'key=@file', or 'key:=<json>' format, \
or pass a JSON object via meta_obj / secrets_obj.",
annotations(title = "Add Secret Entry")
)]
async fn secrets_add(
@@ -493,8 +762,20 @@ impl SecretsService {
);
let tags = input.tags.unwrap_or_default();
let mut meta = input.meta.unwrap_or_default();
if let Some(obj) = input.meta_obj {
meta.extend(map_to_kv_strings(obj));
}
let mut secrets = input.secrets.unwrap_or_default();
if let Some(obj) = input.secrets_obj {
secrets.extend(map_to_kv_strings(obj));
}
let secret_types = input.secret_types.unwrap_or_default();
let secret_types_map: std::collections::HashMap<String, String> = secret_types
.into_iter()
.filter_map(|(k, v)| v.as_str().map(|s| (k, s.to_string())))
.collect();
let link_secret_names = input.link_secret_names.unwrap_or_default();
let folder = input.folder.as_deref().unwrap_or("");
let entry_type = input.entry_type.as_deref().unwrap_or("");
let notes = input.notes.as_deref().unwrap_or("");
@@ -509,12 +790,14 @@ impl SecretsService {
tags: &tags,
meta_entries: &meta,
secret_entries: &secrets,
secret_types: &secret_types_map,
link_secret_names: &link_secret_names,
user_id: Some(user_id),
},
&user_key,
)
.await
.map_err(|e| mcp_err_from_anyhow("secrets_add", Some(user_id), e))?;
tracing::info!(
tool = "secrets_add",
@@ -529,7 +812,8 @@ impl SecretsService {
#[tool(
description = "Incrementally update an existing entry. Requires X-Encryption-Key header. \
Only the fields you specify are changed; everything else is preserved. \
Optionally pass 'id' (from secrets_find) to target the entry directly.",
annotations(title = "Update Secret Entry")
)]
async fn secrets_update(
@@ -543,39 +827,68 @@ impl SecretsService {
tool = "secrets_update", tool = "secrets_update",
?user_id, ?user_id,
name = %input.name, name = %input.name,
id = ?input.id,
"tool call start", "tool call start",
); );
// When id is provided, resolve to (name, folder) via primary key to skip disambiguation.
let (resolved_name, resolved_folder): (String, Option<String>) =
if let Some(ref id_str) = input.id {
let eid = parse_uuid(id_str)?;
let entry = resolve_entry_by_id(&self.pool, eid, Some(user_id))
.await
.map_err(|e| mcp_err_internal_logged("secrets_update", Some(user_id), e))?;
(entry.name, Some(entry.folder))
} else {
(input.name.clone(), input.folder.clone())
};
let add_tags = input.add_tags.unwrap_or_default();
let remove_tags = input.remove_tags.unwrap_or_default();
let mut meta = input.meta.unwrap_or_default();
if let Some(obj) = input.meta_obj {
meta.extend(map_to_kv_strings(obj));
}
let remove_meta = input.remove_meta.unwrap_or_default();
let mut secrets = input.secrets.unwrap_or_default();
if let Some(obj) = input.secrets_obj {
secrets.extend(map_to_kv_strings(obj));
}
let secret_types = input.secret_types.unwrap_or_default();
let secret_types_map: std::collections::HashMap<String, String> = secret_types
.into_iter()
.filter_map(|(k, v)| v.as_str().map(|s| (k, s.to_string())))
.collect();
let remove_secrets = input.remove_secrets.unwrap_or_default();
let link_secret_names = input.link_secret_names.unwrap_or_default();
let unlink_secret_names = input.unlink_secret_names.unwrap_or_default();
let result = svc_update(
&self.pool,
UpdateParams {
name: &resolved_name,
folder: resolved_folder.as_deref(),
notes: input.notes.as_deref(),
add_tags: &add_tags,
remove_tags: &remove_tags,
meta_entries: &meta,
remove_meta: &remove_meta,
secret_entries: &secrets,
secret_types: &secret_types_map,
remove_secrets: &remove_secrets,
link_secret_names: &link_secret_names,
unlink_secret_names: &unlink_secret_names,
user_id: Some(user_id),
},
&user_key,
)
.await
.map_err(|e| mcp_err_from_anyhow("secrets_update", Some(user_id), e))?;
tracing::info!(
tool = "secrets_update",
?user_id,
name = %resolved_name,
elapsed_ms = t.elapsed().as_millis(),
"tool call ok",
);
@@ -584,8 +897,9 @@ impl SecretsService {
}
#[tool(
description = "Delete one entry by name (or id), or bulk delete entries matching folder \
and/or type. Use dry_run=true to preview. \
At least one of id, name, folder, or type must be provided.",
annotations(title = "Delete Secret Entry", destructive_hint = true)
)]
async fn secrets_delete(
@@ -595,9 +909,23 @@ impl SecretsService {
) -> Result<CallToolResult, rmcp::ErrorData> {
let t = Instant::now();
let user_id = Self::user_id_from_ctx(&ctx)?;
// Safety: require at least one filter.
if input.id.is_none()
&& input.name.is_none()
&& input.folder.is_none()
&& input.entry_type.is_none()
{
return Err(rmcp::ErrorData::invalid_request(
"At least one of id, name, folder, or type must be provided.",
None,
));
}
tracing::info!(
tool = "secrets_delete",
?user_id,
id = ?input.id,
name = input.name.as_deref(),
folder = input.folder.as_deref(),
entry_type = input.entry_type.as_deref(),
@@ -605,11 +933,24 @@ impl SecretsService {
"tool call start", "tool call start",
); );
// When id is provided, resolve to name+folder for the single-entry delete path.
let (effective_name, effective_folder): (Option<String>, Option<String>) =
if let Some(ref id_str) = input.id {
let eid = parse_uuid(id_str)?;
let uid = user_id;
let entry = resolve_entry_by_id(&self.pool, eid, uid)
.await
.map_err(|e| mcp_err_internal_logged("secrets_delete", uid, e))?;
(Some(entry.name), Some(entry.folder))
} else {
(input.name.clone(), input.folder.clone())
};
let result = svc_delete(
&self.pool,
DeleteParams {
name: effective_name.as_deref(),
folder: effective_folder.as_deref(),
entry_type: input.entry_type.as_deref(),
dry_run: input.dry_run.unwrap_or(false),
user_id,
@@ -630,7 +971,7 @@ impl SecretsService {
#[tool(
description = "View change history for an entry. Returns a list of versions with \
actions and timestamps. Optionally pass 'id' (from secrets_find) to target directly.",
annotations(
title = "View Secret History",
read_only_hint = true,
@@ -648,13 +989,25 @@ impl SecretsService {
tool = "secrets_history", tool = "secrets_history",
?user_id, ?user_id,
name = %input.name, name = %input.name,
id = ?input.id,
"tool call start", "tool call start",
); );
let (resolved_name, resolved_folder): (String, Option<String>) =
if let Some(ref id_str) = input.id {
let eid = parse_uuid(id_str)?;
let entry = resolve_entry_by_id(&self.pool, eid, user_id)
.await
.map_err(|e| mcp_err_internal_logged("secrets_history", user_id, e))?;
(entry.name, Some(entry.folder))
} else {
(input.name.clone(), input.folder.clone())
};
let result = svc_history( let result = svc_history(
&self.pool, &self.pool,
&input.name, &resolved_name,
input.folder.as_deref(), resolved_folder.as_deref(),
input.limit.unwrap_or(20), input.limit.unwrap_or(20),
user_id, user_id,
) )
@@ -673,7 +1026,8 @@ impl SecretsService {
#[tool(
description = "Rollback an entry to a previous version. Requires X-Encryption-Key header. \
Omit to_version to restore the most recent snapshot. \
Optionally pass 'id' (from secrets_find) to target directly.",
annotations(title = "Rollback Secret Entry", destructive_hint = true)
)]
async fn secrets_rollback(
@@ -687,14 +1041,26 @@ impl SecretsService {
tool = "secrets_rollback", tool = "secrets_rollback",
?user_id, ?user_id,
name = %input.name, name = %input.name,
id = ?input.id,
to_version = input.to_version, to_version = input.to_version,
"tool call start", "tool call start",
); );
let (resolved_name, resolved_folder): (String, Option<String>) =
if let Some(ref id_str) = input.id {
let eid = parse_uuid(id_str)?;
let entry = resolve_entry_by_id(&self.pool, eid, Some(user_id))
.await
.map_err(|e| mcp_err_internal_logged("secrets_rollback", Some(user_id), e))?;
(entry.name, Some(entry.folder))
} else {
(input.name.clone(), input.folder.clone())
};
let result = svc_rollback( let result = svc_rollback(
&self.pool, &self.pool,
&input.name, &resolved_name,
input.folder.as_deref(), resolved_folder.as_deref(),
input.to_version, input.to_version,
&user_key, &user_key,
Some(user_id), Some(user_id),
@@ -753,7 +1119,7 @@ impl SecretsService {
Some(&user_key),
)
.await
.map_err(|e| mcp_err_from_anyhow("secrets_export", Some(user_id), e))?;
let fmt = format.parse::<ExportFormat>().map_err(|e| {
tracing::warn!(
@@ -769,7 +1135,7 @@ impl SecretsService {
})?;
let serialized = fmt
.serialize(&data)
.map_err(|e| mcp_err_from_anyhow("secrets_export", Some(user_id), e))?;
tracing::info!(
tool = "secrets_export",
@@ -784,7 +1150,10 @@ impl SecretsService {
#[tool(
description = "Build the environment variable map from entry secrets with decrypted \
plaintext values. Requires X-Encryption-Key header. \
Returns a JSON object of VAR_NAME -> plaintext_value ready for injection. \
Variable names follow the pattern UPPER(entry_name)_UPPER(field_name), \
with hyphens and dots replaced by underscores. \
Example: entry 'aliyun', field 'access_key_id' → ALIYUN_ACCESS_KEY_ID.",
annotations(title = "Build Env Map", read_only_hint = true, idempotent_hint = true)
)]
async fn secrets_env_map(
@@ -817,7 +1186,7 @@ impl SecretsService {
Some(user_id),
)
.await
.map_err(|e| mcp_err_from_anyhow("secrets_env_map", Some(user_id), e))?;
let entry_count = env_map.len();
tracing::info!(
@@ -830,6 +1199,67 @@ impl SecretsService {
let json = serde_json::to_string_pretty(&env_map).unwrap_or_default();
Ok(CallToolResult::success(vec![Content::text(json)]))
}
#[tool(
description = "Get an overview of the secrets store: counts of entries per folder and \
per type. Requires Bearer API key. Useful for exploring the store structure.",
annotations(
title = "Secrets Overview",
read_only_hint = true,
idempotent_hint = true
)
)]
async fn secrets_overview(
&self,
Parameters(_input): Parameters<OverviewInput>,
ctx: RequestContext<RoleServer>,
) -> Result<CallToolResult, rmcp::ErrorData> {
let t = Instant::now();
let user_id = Self::require_user_id(&ctx)?;
tracing::info!(tool = "secrets_overview", ?user_id, "tool call start");
#[derive(sqlx::FromRow)]
struct CountRow {
name: String,
count: i64,
}
let folder_rows: Vec<CountRow> = sqlx::query_as(
"SELECT folder AS name, COUNT(*) AS count FROM entries \
WHERE user_id = $1 GROUP BY folder ORDER BY folder",
)
.bind(user_id)
.fetch_all(&*self.pool)
.await
.map_err(|e| mcp_err_internal_logged("secrets_overview", Some(user_id), e))?;
let type_rows: Vec<CountRow> = sqlx::query_as(
"SELECT type AS name, COUNT(*) AS count FROM entries \
WHERE user_id = $1 GROUP BY type ORDER BY type",
)
.bind(user_id)
.fetch_all(&*self.pool)
.await
.map_err(|e| mcp_err_internal_logged("secrets_overview", Some(user_id), e))?;
let total: i64 = folder_rows.iter().map(|r| r.count).sum();
let result = serde_json::json!({
"total": total,
"folders": folder_rows.iter().map(|r| serde_json::json!({"name": r.name, "count": r.count})).collect::<Vec<_>>(),
"types": type_rows.iter().map(|r| serde_json::json!({"name": r.name, "count": r.count})).collect::<Vec<_>>(),
});
tracing::info!(
tool = "secrets_overview",
?user_id,
total,
elapsed_ms = t.elapsed().as_millis(),
"tool call ok",
);
let json = serde_json::to_string_pretty(&result).unwrap_or_default();
Ok(CallToolResult::success(vec![Content::text(json)]))
}
}
// ── ServerHandler ─────────────────────────────────────────────────────────────
@@ -846,11 +1276,11 @@ impl ServerHandler for SecretsService {
info.protocol_version = ProtocolVersion::V_2025_06_18;
info.instructions = Some(
"Manage cross-device secrets and configuration securely. \
Use secrets_find to discover entries by folder, name, type, tags, or query \
(query also searches metadata values). \
Use secrets_get with the entry id (from secrets_find) to decrypt secret values. \
Use secrets_add / secrets_update to write entries. \
Use secrets_overview for a quick count of entries per folder and type."
.to_string(),
);
info


@@ -8,17 +8,22 @@ use axum::{
extract::{ConnectInfo, Path, Query, State},
http::{HeaderMap, StatusCode, header},
response::{Html, IntoResponse, Redirect, Response},
routing::{get, patch, post},
};
use serde::{Deserialize, Serialize};
use serde_json::json;
use tower_sessions::Session;
use uuid::Uuid;
use secrets_core::audit::log_login;
use secrets_core::crypto::hex;
use secrets_core::error::AppError;
use secrets_core::service::{
api_key::{ensure_api_key, regenerate_api_key},
audit_log::list_for_user,
delete::delete_by_id,
search::{SearchParams, fetch_secret_schemas, ilike_pattern, list_entries},
update::{UpdateEntryFieldsByIdParams, update_fields_by_id},
user::{
OAuthProfile, bind_oauth_account, find_or_create_user, get_user_by_id,
unbind_oauth_account, update_user_key_setup,
@@ -78,6 +83,65 @@ struct AuditEntryView {
detail: String,
}
#[derive(Template)]
#[template(path = "entries.html")]
struct EntriesPageTemplate {
user_name: String,
user_email: String,
entries: Vec<EntryListItemView>,
folder_tabs: Vec<FolderTabView>,
type_options: Vec<String>,
secret_type_options_json: String,
filter_folder: String,
filter_name: String,
filter_type: String,
version: &'static str,
}
/// Non-sensitive entry fields; `secrets` lists field names/types only (no ciphertext).
struct EntryListItemView {
id: String,
folder: String,
entry_type: String,
name: String,
notes: String,
tags: String,
/// Compact JSON for `data-entry-metadata` (dialog editor).
metadata_json: String,
/// Secret field summaries for table + dialog chips.
secrets: Vec<SecretSummaryView>,
/// JSON array of `{ id, name, secret_type }` for dialog secret chips.
secrets_json: String,
/// RFC3339 UTC; shown in edit dialog.
updated_at_iso: String,
}
#[derive(Serialize)]
struct SecretSummaryView {
id: String,
name: String,
secret_type: String,
}
struct FolderTabView {
name: String,
count: i64,
href: String,
active: bool,
}
/// Cap for HTML list (avoids loading unbounded rows into memory).
const ENTRIES_PAGE_LIMIT: u32 = 5_000;
#[derive(Deserialize)]
struct EntriesQuery {
folder: Option<String>,
name: Option<String>,
/// URL query key is `type` (maps to DB column `entries.type`).
#[serde(rename = "type")]
entry_type: Option<String>,
}
// ── App state helpers ─────────────────────────────────────────────────────────
fn google_cfg(state: &AppState) -> Option<&OAuthConfig> {
@@ -134,6 +198,7 @@ pub fn web_router() -> Router<AppState> {
.route("/robots.txt", get(robots_txt)) .route("/robots.txt", get(robots_txt))
.route("/llms.txt", get(llms_txt)) .route("/llms.txt", get(llms_txt))
.route("/ai.txt", get(ai_txt)) .route("/ai.txt", get(ai_txt))
.route("/static/i18n.js", get(i18n_js))
.route("/favicon.svg", get(favicon_svg)) .route("/favicon.svg", get(favicon_svg))
.route( .route(
"/favicon.ico", "/favicon.ico",
@@ -149,6 +214,7 @@ pub fn web_router() -> Router<AppState> {
.route("/auth/google/callback", get(auth_google_callback)) .route("/auth/google/callback", get(auth_google_callback))
.route("/auth/logout", post(auth_logout)) .route("/auth/logout", post(auth_logout))
.route("/dashboard", get(dashboard)) .route("/dashboard", get(dashboard))
.route("/entries", get(entries_page))
.route("/audit", get(audit_page)) .route("/audit", get(audit_page))
.route("/account/bind/google", get(account_bind_google)) .route("/account/bind/google", get(account_bind_google))
.route( .route(
@@ -160,6 +226,16 @@ pub fn web_router() -> Router<AppState> {
.route("/api/key-setup", post(api_key_setup)) .route("/api/key-setup", post(api_key_setup))
.route("/api/apikey", get(api_apikey_get)) .route("/api/apikey", get(api_apikey_get))
.route("/api/apikey/regenerate", post(api_apikey_regenerate)) .route("/api/apikey/regenerate", post(api_apikey_regenerate))
.route(
"/api/entries/{id}",
patch(api_entry_patch).delete(api_entry_delete),
)
.route(
"/api/entries/{entry_id}/secrets/{secret_id}",
axum::routing::delete(api_entry_secret_unlink),
)
.route("/api/secrets/{secret_id}", patch(api_secret_patch))
.route("/api/secrets/check-name", get(api_secret_check_name))
} }
fn text_asset_response(content: &'static str, content_type: &'static str) -> Response {
@@ -189,6 +265,13 @@ async fn ai_txt() -> Response {
llms_txt().await
}
async fn i18n_js() -> Response {
text_asset_response(
include_str!("../templates/i18n.js"),
"application/javascript; charset=utf-8",
)
}
async fn favicon_svg() -> Response {
Response::builder()
.status(StatusCode::OK)
@@ -478,6 +561,224 @@ async fn dashboard(
render_template(tmpl)
}
async fn entries_page(
State(state): State<AppState>,
session: Session,
Query(q): Query<EntriesQuery>,
) -> Result<Response, StatusCode> {
let Some(user_id) = current_user_id(&session).await else {
return Ok(Redirect::to("/login").into_response());
};
let user = match get_user_by_id(&state.pool, user_id).await.map_err(|e| {
tracing::error!(error = %e, %user_id, "failed to load user for entries page");
StatusCode::INTERNAL_SERVER_ERROR
})? {
Some(u) => u,
None => return Ok(Redirect::to("/login").into_response()),
};
let folder_filter = q
.folder
.as_ref()
.map(|s| s.trim())
.filter(|s| !s.is_empty())
.map(|s| s.to_string());
let type_filter = q
.entry_type
.as_ref()
.map(|s| s.trim())
.filter(|s| !s.is_empty())
.map(|s| s.to_string());
let name_filter = q
.name
.as_ref()
.map(|s| s.trim())
.filter(|s| !s.is_empty())
.map(|s| s.to_string());
let params = SearchParams {
folder: folder_filter.as_deref(),
entry_type: type_filter.as_deref(),
name: None,
name_query: name_filter.as_deref(),
tags: &[],
query: None,
sort: "updated",
limit: ENTRIES_PAGE_LIMIT,
offset: 0,
user_id: Some(user_id),
};
let rows = list_entries(&state.pool, params).await.map_err(|e| {
tracing::error!(error = %e, "failed to load entries list for web");
StatusCode::INTERNAL_SERVER_ERROR
})?;
let entry_ids: Vec<Uuid> = rows.iter().map(|e| e.id).collect();
let secret_schemas = fetch_secret_schemas(&state.pool, &entry_ids)
.await
.map_err(|e| {
tracing::error!(error = %e, "failed to load secret schema list for web");
StatusCode::INTERNAL_SERVER_ERROR
})?;
#[derive(sqlx::FromRow)]
struct FolderCountRow {
folder: String,
count: i64,
}
let mut folder_sql =
"SELECT folder, COUNT(*)::bigint AS count FROM entries WHERE user_id = $1".to_string();
let mut bind_idx = 2;
if type_filter.is_some() {
folder_sql.push_str(&format!(" AND type = ${bind_idx}"));
bind_idx += 1;
}
if name_filter.is_some() {
folder_sql.push_str(&format!(" AND name ILIKE ${bind_idx} ESCAPE '\\'"));
bind_idx += 1;
}
let _ = bind_idx;
folder_sql.push_str(" GROUP BY folder ORDER BY folder");
let mut folder_query = sqlx::query_as::<_, FolderCountRow>(&folder_sql).bind(user_id);
if let Some(t) = type_filter.as_deref() {
folder_query = folder_query.bind(t);
}
if let Some(n) = name_filter.as_deref() {
folder_query = folder_query.bind(ilike_pattern(n));
}
let folder_rows: Vec<FolderCountRow> =
folder_query.fetch_all(&state.pool).await.map_err(|e| {
tracing::error!(error = %e, "failed to load folder tabs for web");
StatusCode::INTERNAL_SERVER_ERROR
})?;
#[derive(sqlx::FromRow)]
struct TypeOptionRow {
#[sqlx(rename = "type")]
entry_type: String,
}
let mut type_options: Vec<String> = sqlx::query_as::<_, TypeOptionRow>(
"SELECT DISTINCT type FROM entries WHERE user_id = $1 ORDER BY type",
)
.bind(user_id)
.fetch_all(&state.pool)
.await
.map_err(|e| {
tracing::error!(error = %e, "failed to load type options for web");
StatusCode::INTERNAL_SERVER_ERROR
})?
.into_iter()
.map(|r| r.entry_type)
.filter(|t| !t.is_empty())
.collect();
if let Some(current) = type_filter.as_ref()
&& !current.is_empty()
&& !type_options.iter().any(|t| t == current)
{
type_options.push(current.clone());
type_options.sort_unstable();
}
fn entries_href(folder: Option<&str>, entry_type: Option<&str>, name: Option<&str>) -> String {
let mut pairs: Vec<String> = Vec::new();
if let Some(f) = folder
&& !f.is_empty()
{
pairs.push(format!("folder={}", urlencoding::encode(f)));
}
if let Some(t) = entry_type
&& !t.is_empty()
{
pairs.push(format!("type={}", urlencoding::encode(t)));
}
if let Some(n) = name
&& !n.is_empty()
{
pairs.push(format!("name={}", urlencoding::encode(n)));
}
if pairs.is_empty() {
"/entries".to_string()
} else {
format!("/entries?{}", pairs.join("&"))
}
}
let all_count: i64 = folder_rows.iter().map(|r| r.count).sum();
let mut folder_tabs: Vec<FolderTabView> = Vec::with_capacity(folder_rows.len() + 1);
folder_tabs.push(FolderTabView {
name: "全部".to_string(),
count: all_count,
href: entries_href(None, type_filter.as_deref(), name_filter.as_deref()),
active: folder_filter.is_none(),
});
for r in folder_rows {
let name = r.folder;
folder_tabs.push(FolderTabView {
href: entries_href(Some(&name), type_filter.as_deref(), name_filter.as_deref()),
active: folder_filter.as_deref() == Some(name.as_str()),
name,
count: r.count,
});
}
let entries = rows
.into_iter()
.map(|e| {
let secrets: Vec<SecretSummaryView> = secret_schemas
.get(&e.id)
.map(|fields| {
fields
.iter()
.map(|f| SecretSummaryView {
id: f.id.to_string(),
name: f.name.clone(),
secret_type: f.secret_type.clone(),
})
.collect()
})
.unwrap_or_default();
let secrets_json = serde_json::to_string(&secrets).unwrap_or_else(|_| "[]".to_string());
let metadata_json =
serde_json::to_string(&e.metadata).unwrap_or_else(|_| "{}".to_string());
EntryListItemView {
id: e.id.to_string(),
folder: e.folder,
entry_type: e.entry_type,
name: e.name,
notes: e.notes,
tags: e.tags.join(", "),
metadata_json,
secrets,
secrets_json,
updated_at_iso: e.updated_at.to_rfc3339_opts(SecondsFormat::Secs, true),
}
})
.collect();
let tmpl = EntriesPageTemplate {
user_name: user.name.clone(),
user_email: user.email.clone().unwrap_or_default(),
entries,
folder_tabs,
type_options,
secret_type_options_json: serde_json::to_string(
&secrets_core::taxonomy::SECRET_TYPE_OPTIONS
.iter()
.map(|s| s.to_string())
.collect::<Vec<_>>(),
)
.unwrap_or_default(),
filter_folder: folder_filter.unwrap_or_default(),
filter_name: name_filter.unwrap_or_default(),
filter_type: type_filter.unwrap_or_default(),
version: env!("CARGO_PKG_VERSION"),
};
render_template(tmpl)
}
async fn audit_page(
State(state): State<AppState>,
session: Session,
@@ -751,6 +1052,566 @@ async fn api_apikey_regenerate(
Ok(Json(ApiKeyResponse { api_key }))
}
// ── Entry management (Web UI, non-sensitive fields only) ───────────────────────
#[derive(Deserialize)]
struct EntryPatchBody {
folder: String,
#[serde(rename = "type")]
entry_type: String,
name: String,
notes: String,
tags: Vec<String>,
metadata: serde_json::Value,
}
type EntryApiError = (StatusCode, Json<serde_json::Value>);
#[derive(Clone, Copy)]
enum UiLang {
ZhCn,
ZhTw,
En,
}
fn request_ui_lang(headers: &HeaderMap) -> UiLang {
let Some(raw) = headers
.get(header::ACCEPT_LANGUAGE)
.and_then(|v| v.to_str().ok())
else {
return UiLang::ZhCn;
};
let lower = raw.to_ascii_lowercase();
if lower.contains("zh-tw") || lower.contains("zh-hk") || lower.contains("zh-hant") {
UiLang::ZhTw
} else if lower.contains("zh") {
UiLang::ZhCn
} else if lower.contains("en") {
UiLang::En
} else {
UiLang::ZhCn
}
}
fn tr(lang: UiLang, zh_cn: &'static str, zh_tw: &'static str, en: &'static str) -> &'static str {
match lang {
UiLang::ZhCn => zh_cn,
UiLang::ZhTw => zh_tw,
UiLang::En => en,
}
}
fn map_entry_mutation_err(e: anyhow::Error, lang: UiLang) -> EntryApiError {
if let Some(app_err) = e.downcast_ref::<AppError>() {
return map_app_error(app_err, lang);
}
// Fallback for legacy string-based errors and raw sqlx errors
let msg = e.to_string();
if msg.contains("already exists") {
return (
StatusCode::CONFLICT,
Json(
json!({ "error": tr(lang, "该账号下已存在相同 folder + name 的条目", "此帳號下已存在相同 folder + name 的條目", "An entry with the same folder + name already exists for this account") }),
),
);
}
if msg.contains("must be at most") {
return (StatusCode::BAD_REQUEST, Json(json!({ "error": msg })));
}
tracing::error!(error = %e, "entry mutation failed");
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(
json!({ "error": tr(lang, "操作失败,请稍后重试", "操作失敗,請稍後重試", "Operation failed, please try again later") }),
),
)
}
fn map_app_error(err: &AppError, lang: UiLang) -> EntryApiError {
match err {
AppError::ConflictEntryName { .. } | AppError::ConflictSecretName { .. } => (
StatusCode::CONFLICT,
Json(json!({ "error": err.to_string() })),
),
AppError::NotFoundEntry => (
StatusCode::NOT_FOUND,
Json(
json!({ "error": tr(lang, "条目不存在或无权访问", "條目不存在或無權存取", "Entry not found or no access") }),
),
),
AppError::Validation { message } => {
(StatusCode::BAD_REQUEST, Json(json!({ "error": message })))
}
AppError::ConcurrentModification => (
StatusCode::CONFLICT,
Json(
json!({ "error": tr(lang, "条目已被修改,请刷新后重试", "條目已被修改,請重新整理後重試", "Entry was modified, please refresh and try again") }),
),
),
AppError::DecryptionFailed => (
StatusCode::BAD_REQUEST,
Json(
json!({ "error": tr(lang, "解密失败,请检查密码短语", "解密失敗,請檢查密碼短語", "Decryption failed — please check your passphrase") }),
),
),
AppError::Internal(_) => {
tracing::error!(error = %err, "internal error in entry mutation");
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(
json!({ "error": tr(lang, "操作失败,请稍后重试", "操作失敗,請稍後重試", "Operation failed, please try again later") }),
),
)
}
}
}
async fn api_entry_patch(
State(state): State<AppState>,
session: Session,
headers: HeaderMap,
Path(entry_id): Path<Uuid>,
Json(body): Json<EntryPatchBody>,
) -> Result<Json<serde_json::Value>, EntryApiError> {
let lang = request_ui_lang(&headers);
let user_id = current_user_id(&session).await.ok_or((
StatusCode::UNAUTHORIZED,
Json(json!({ "error": tr(lang, "未登录", "尚未登入", "Not logged in") })),
))?;
let folder = body.folder.trim();
let entry_type = body.entry_type.trim();
let name = body.name.trim();
let notes = body.notes.trim();
if name.is_empty() {
return Err((
StatusCode::BAD_REQUEST,
Json(
json!({ "error": tr(lang, "name 不能为空", "name 不能為空", "name cannot be empty") }),
),
));
}
let tags: Vec<String> = body
.tags
.into_iter()
.map(|t| t.trim().to_string())
.filter(|t| !t.is_empty())
.collect();
if !body.metadata.is_object() {
return Err((
StatusCode::BAD_REQUEST,
Json(
json!({ "error": tr(lang, "metadata 必须是 JSON 对象", "metadata 必須是 JSON 物件", "metadata must be a JSON object") }),
),
));
}
update_fields_by_id(
&state.pool,
entry_id,
user_id,
UpdateEntryFieldsByIdParams {
folder,
entry_type,
name,
notes,
tags: &tags,
metadata: &body.metadata,
},
)
.await
.map_err(|e| map_entry_mutation_err(e, lang))?;
Ok(Json(json!({ "ok": true })))
}
async fn api_entry_delete(
State(state): State<AppState>,
session: Session,
headers: HeaderMap,
Path(entry_id): Path<Uuid>,
) -> Result<Json<serde_json::Value>, EntryApiError> {
let lang = request_ui_lang(&headers);
let user_id = current_user_id(&session).await.ok_or((
StatusCode::UNAUTHORIZED,
Json(json!({ "error": tr(lang, "未登录", "尚未登入", "Not logged in") })),
))?;
delete_by_id(&state.pool, entry_id, user_id)
.await
.map_err(|e| map_entry_mutation_err(e, lang))?;
Ok(Json(json!({
"ok": true,
})))
}
#[derive(Deserialize)]
struct SecretCheckNameQuery {
name: String,
exclude_secret_id: Option<Uuid>,
}
#[derive(Serialize)]
struct SecretCheckNameResponse {
ok: bool,
available: bool,
#[serde(skip_serializing_if = "Option::is_none")]
error: Option<String>,
}
async fn api_secret_check_name(
State(state): State<AppState>,
session: Session,
headers: HeaderMap,
Query(params): Query<SecretCheckNameQuery>,
) -> Result<Json<SecretCheckNameResponse>, EntryApiError> {
let lang = request_ui_lang(&headers);
let user_id = current_user_id(&session).await.ok_or((
StatusCode::UNAUTHORIZED,
Json(json!({ "error": tr(lang, "未登录", "尚未登入", "Not logged in") })),
))?;
let name = params.name.trim();
if name.is_empty() {
return Err((
StatusCode::BAD_REQUEST,
Json(
json!({ "error": tr(lang, "secret name 不能为空", "secret name 不能為空", "secret name cannot be empty") }),
),
));
}
if name.chars().count() > 256 {
return Err((
StatusCode::BAD_REQUEST,
Json(
json!({ "error": tr(lang, "secret name 长度不能超过 256 个字符", "secret name 長度不能超過 256 個字元", "secret name must be at most 256 characters") }),
),
));
}
let count: i64 = if let Some(exclude_id) = params.exclude_secret_id {
sqlx::query_scalar::<_, i64>(
"SELECT COUNT(*) FROM secrets WHERE user_id = $1 AND name = $2 AND id != $3",
)
.bind(user_id)
.bind(name)
.bind(exclude_id)
.fetch_one(&state.pool)
.await
} else {
sqlx::query_scalar::<_, i64>(
"SELECT COUNT(*) FROM secrets WHERE user_id = $1 AND name = $2",
)
.bind(user_id)
.bind(name)
.fetch_one(&state.pool)
.await
}.map_err(|e| {
tracing::error!(error = %e, "failed to check secret name availability");
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(
json!({ "error": tr(lang, "操作失败,请稍后重试", "操作失敗,請稍後重試", "Operation failed, please try again later") }),
),
)
})?;
let available = count == 0;
let error = if available {
None
} else {
Some(
tr(
lang,
"该用户下已存在相同 name 的密文",
"該用戶下已存在相同 name 的密文",
"A secret with the same name already exists for this user",
)
.to_string(),
)
};
Ok(Json(SecretCheckNameResponse {
ok: true,
available,
error,
}))
}
#[derive(Deserialize)]
struct SecretPatchBody {
name: Option<String>,
#[serde(rename = "type")]
secret_type: Option<String>,
}
async fn api_secret_patch(
State(state): State<AppState>,
session: Session,
headers: HeaderMap,
Path(secret_id): Path<Uuid>,
Json(body): Json<SecretPatchBody>,
) -> Result<Json<serde_json::Value>, EntryApiError> {
#[derive(Serialize, sqlx::FromRow)]
struct LinkedEntryAuditRow {
folder: String,
#[sqlx(rename = "type")]
entry_type: String,
name: String,
}
let lang = request_ui_lang(&headers);
let user_id = current_user_id(&session).await.ok_or((
StatusCode::UNAUTHORIZED,
Json(json!({ "error": tr(lang, "未登录", "尚未登入", "Not logged in") })),
))?;
let name = body.name.as_ref().map(|s| s.trim());
let secret_type = body.secret_type.as_ref().map(|s| s.trim());
if let Some(n) = name {
if n.is_empty() {
return Err((
StatusCode::BAD_REQUEST,
Json(
json!({ "error": tr(lang, "secret name 不能为空", "secret name 不能為空", "secret name cannot be empty") }),
),
));
}
if n.chars().count() > 256 {
return Err((
StatusCode::BAD_REQUEST,
Json(
json!({ "error": tr(lang, "secret name 长度不能超过 256 个字符", "secret name 長度不能超過 256 個字元", "secret name must be at most 256 characters") }),
),
));
}
}
if let Some(t) = secret_type {
if t.is_empty() {
return Err((
StatusCode::BAD_REQUEST,
Json(
json!({ "error": tr(lang, "secret type 不能为空", "secret type 不能為空", "secret type cannot be empty") }),
),
));
}
if t.chars().count() > 64 {
return Err((
StatusCode::BAD_REQUEST,
Json(
json!({ "error": tr(lang, "secret type 长度不能超过 64 个字符", "secret type 長度不能超過 64 個字元", "secret type must be at most 64 characters") }),
),
));
}
}
if name.is_none() && secret_type.is_none() {
return Err((
StatusCode::BAD_REQUEST,
Json(
json!({ "error": tr(lang, "至少需要提供 name 或 type 之一", "至少需要提供 name 或 type 之一", "At least one of name or type is required") }),
),
));
}
let mut tx = state
.pool
.begin()
.await
.map_err(|e| map_entry_mutation_err(e.into(), lang))?;
let secret_row: Option<(String, String)> =
sqlx::query_as("SELECT name, type FROM secrets WHERE id = $1 AND user_id = $2 FOR UPDATE")
.bind(secret_id)
.bind(user_id)
.fetch_optional(&mut *tx)
.await
.map_err(|e| map_entry_mutation_err(e.into(), lang))?;
let Some((old_name, old_type)) = secret_row else {
let _ = tx.rollback().await;
return Err((
StatusCode::NOT_FOUND,
Json(
json!({ "error": tr(lang, "密文不存在或无权访问", "密文不存在或無權存取", "Secret not found or no access") }),
),
));
};
let linked_entries: Vec<LinkedEntryAuditRow> = sqlx::query_as(
"SELECT e.folder, e.type, e.name \
FROM entry_secrets es \
JOIN entries e ON e.id = es.entry_id \
WHERE es.secret_id = $1 AND e.user_id = $2 \
ORDER BY e.folder, e.type, e.name",
)
.bind(secret_id)
.bind(user_id)
.fetch_all(&mut *tx)
.await
.map_err(|e| map_entry_mutation_err(e.into(), lang))?;
let new_name = name.unwrap_or(&old_name).to_string();
let new_type = secret_type.unwrap_or(&old_type).to_string();
let result = sqlx::query(
"UPDATE secrets SET name = $1, type = $2, version = version + 1, updated_at = NOW() \
WHERE id = $3",
)
.bind(&new_name)
.bind(&new_type)
.bind(secret_id)
.execute(&mut *tx)
.await;
if let Err(e) = result {
if let Some(db_err) = e.as_database_error()
&& db_err.code() == Some("23505".into())
{
let _ = tx.rollback().await;
return Err(map_app_error(
&AppError::ConflictSecretName {
secret_name: new_name.clone(),
},
lang,
));
}
let _ = tx.rollback().await;
return Err(map_entry_mutation_err(e.into(), lang));
}
secrets_core::audit::log_tx(
&mut tx,
Some(user_id),
"rename_secret",
"",
"",
&old_name,
json!({
"source": "web",
"secret_id": secret_id,
"old_name": old_name,
"new_name": new_name,
"old_type": old_type,
"new_type": new_type,
"linked_entries": linked_entries,
}),
)
.await;
tx.commit()
.await
.map_err(|e| map_entry_mutation_err(e.into(), lang))?;
Ok(Json(json!({ "ok": true })))
}
async fn api_entry_secret_unlink(
State(state): State<AppState>,
session: Session,
headers: HeaderMap,
Path((entry_id, secret_id)): Path<(Uuid, Uuid)>,
) -> Result<Json<serde_json::Value>, EntryApiError> {
#[derive(sqlx::FromRow)]
struct EntryAuditRow {
folder: String,
#[sqlx(rename = "type")]
entry_type: String,
name: String,
}
let lang = request_ui_lang(&headers);
let user_id = current_user_id(&session).await.ok_or((
StatusCode::UNAUTHORIZED,
Json(json!({ "error": tr(lang, "未登录", "尚未登入", "Not logged in") })),
))?;
let mut tx = state
.pool
.begin()
.await
.map_err(|e| map_entry_mutation_err(e.into(), lang))?;
let entry_row: Option<EntryAuditRow> =
sqlx::query_as("SELECT folder, type, name FROM entries WHERE id = $1 AND user_id = $2")
.bind(entry_id)
.bind(user_id)
.fetch_optional(&mut *tx)
.await
.map_err(|e| map_entry_mutation_err(e.into(), lang))?;
let Some(entry_row) = entry_row else {
let _ = tx.rollback().await;
return Err((
StatusCode::NOT_FOUND,
Json(
json!({ "error": tr(lang, "条目不存在或无权访问", "條目不存在或無權存取", "Entry not found or no access") }),
),
));
};
let deleted = sqlx::query("DELETE FROM entry_secrets WHERE entry_id = $1 AND secret_id = $2")
.bind(entry_id)
.bind(secret_id)
.execute(&mut *tx)
.await
.map_err(|e| map_entry_mutation_err(e.into(), lang))?
.rows_affected();
if deleted == 0 {
let _ = tx.rollback().await;
return Err((
StatusCode::NOT_FOUND,
Json(json!({ "error": tr(lang, "关联不存在", "關聯不存在", "Relation not found") })),
));
}
let secret_deleted = sqlx::query(
"DELETE FROM secrets s \
WHERE s.id = $1 \
AND NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = s.id)",
)
.bind(secret_id)
.execute(&mut *tx)
.await
.map_err(|e| map_entry_mutation_err(e.into(), lang))?
.rows_affected()
> 0;
secrets_core::audit::log_tx(
&mut tx,
Some(user_id),
"unlink_secret",
&entry_row.folder,
&entry_row.entry_type,
&entry_row.name,
json!({
"source": "web",
"entry_id": entry_id,
"secret_id": secret_id,
"deleted_secret": secret_deleted,
}),
)
.await;
tx.commit()
.await
.map_err(|e| map_entry_mutation_err(e.into(), lang))?;
Ok(Json(json!({
"ok": true,
"deleted_relation": true,
"deleted_secret": secret_deleted,
})))
}
// ── OAuth / Well-known ────────────────────────────────────────────────────────
/// RFC 9728 — OAuth 2.0 Protected Resource Metadata.
@@ -795,3 +1656,27 @@ fn format_audit_target(folder: &str, entry_type: &str, name: &str) -> String {
name.to_string()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn request_ui_lang_prefers_zh_cn_over_en_fallback() {
let mut headers = HeaderMap::new();
headers.insert(header::ACCEPT_LANGUAGE, "zh-CN, en;q=0.5".parse().unwrap());
assert!(matches!(request_ui_lang(&headers), UiLang::ZhCn));
}
#[test]
fn request_ui_lang_detects_traditional_chinese_variants() {
let mut headers = HeaderMap::new();
headers.insert(
header::ACCEPT_LANGUAGE,
"zh-Hant, en;q=0.5".parse().unwrap(),
);
assert!(matches!(request_ui_lang(&headers), UiLang::ZhTw));
}
}


@@ -38,6 +38,10 @@
}
.topbar-spacer { flex: 1; }
.nav-user { font-size: 13px; color: var(--text-muted); }
.lang-bar { display: flex; gap: 2px; background: var(--surface2); border-radius: 6px; padding: 2px; }
.lang-btn { padding: 3px 9px; border: none; background: none; color: var(--text-muted);
font-size: 12px; cursor: pointer; border-radius: 4px; }
.lang-btn.active { background: var(--border); color: var(--text); }
.btn-sign-out {
padding: 5px 12px; border-radius: 6px; border: 1px solid var(--border);
background: none; color: var(--text); font-size: 12px; text-decoration: none; cursor: pointer;
@@ -77,11 +81,8 @@
td::before {
display: block; color: var(--text-muted); font-size: 11px;
margin-bottom: 4px; text-transform: uppercase;
content: attr(data-label);
}
.detail { max-width: none; }
}
</style>
@@ -91,8 +92,9 @@
<aside class="sidebar"> <aside class="sidebar">
<a href="/dashboard" class="sidebar-logo"><span>secrets</span></a> <a href="/dashboard" class="sidebar-logo"><span>secrets</span></a>
<nav class="sidebar-menu"> <nav class="sidebar-menu">
<a href="/dashboard" class="sidebar-link">MCP</a> <a href="/dashboard" class="sidebar-link" data-i18n="navMcp">MCP</a>
<a href="/audit" class="sidebar-link active">审计</a> <a href="/entries" class="sidebar-link" data-i18n="navEntries">条目</a>
<a href="/audit" class="sidebar-link active" data-i18n="navAudit">审计</a>
</nav> </nav>
</aside> </aside>
@@ -100,35 +102,40 @@
<div class="topbar"> <div class="topbar">
<span class="topbar-spacer"></span> <span class="topbar-spacer"></span>
<span class="nav-user">{{ user_name }}{% if !user_email.is_empty() %} · {{ user_email }}{% endif %}</span> <span class="nav-user">{{ user_name }}{% if !user_email.is_empty() %} · {{ user_email }}{% endif %}</span>
<div class="lang-bar">
<button class="lang-btn" onclick="setLang('zh-CN')"></button>
<button class="lang-btn" onclick="setLang('zh-TW')"></button>
<button class="lang-btn" onclick="setLang('en')">EN</button>
</div>
<form action="/auth/logout" method="post" style="display:inline"> <form action="/auth/logout" method="post" style="display:inline">
<button type="submit" class="btn-sign-out">退出</button> <button type="submit" class="btn-sign-out" data-i18n="signOut">退出</button>
</form> </form>
</div> </div>
<main class="main"> <main class="main">
<section class="card"> <section class="card">
<div class="card-title">我的审计</div> <div class="card-title" data-i18n="auditTitle">我的审计</div>
<div class="card-subtitle">展示最近 100 条与当前用户相关的新审计记录。时间为浏览器本地时区。</div> <div class="card-subtitle" data-i18n="auditSubtitle">展示最近 100 条与当前用户相关的新审计记录。时间为浏览器本地时区。</div>
{% if entries.is_empty() %} {% if entries.is_empty() %}
<div class="empty">暂无审计记录。</div> <div class="empty" data-i18n="emptyAudit">暂无审计记录。</div>
{% else %} {% else %}
<table> <table>
<thead> <thead>
<tr> <tr>
<th>时间</th> <th data-i18n="colTime">时间</th>
<th>动作</th> <th data-i18n="colAction">动作</th>
<th>目标</th> <th data-i18n="colTarget">目标</th>
<th>详情</th> <th data-i18n="colDetail">详情</th>
</tr> </tr>
</thead> </thead>
<tbody> <tbody>
{% for entry in entries %} {% for entry in entries %}
<tr> <tr>
<td class="col-time mono"><time class="audit-local-time" datetime="{{ entry.created_at_iso }}">{{ entry.created_at_iso }}</time></td> <td class="col-time mono" data-label="时间"><time class="audit-local-time" datetime="{{ entry.created_at_iso }}">{{ entry.created_at_iso }}</time></td>
<td class="col-action mono">{{ entry.action }}</td> <td class="col-action mono" data-label="动作">{{ entry.action }}</td>
<td class="col-target mono">{{ entry.target }}</td> <td class="col-target mono" data-label="目标">{{ entry.target }}</td>
<td class="col-detail"><pre class="detail">{{ entry.detail }}</pre></td> <td class="col-detail" data-label="详情"><pre class="detail">{{ entry.detail }}</pre></td>
</tr> </tr>
{% endfor %} {% endfor %}
</tbody> </tbody>
@@ -138,8 +145,28 @@
</main>
</div>
</div>
<script src="/static/i18n.js"></script>
<script>
(function () {
I18N_PAGE = {
'zh-CN': { pageTitle: 'Secrets — 审计', auditTitle: '我的审计', auditSubtitle: '展示最近 100 条与当前用户相关的新审计记录。时间为浏览器本地时区。', emptyAudit: '暂无审计记录。', colTime: '时间', colAction: '动作', colTarget: '目标', colDetail: '详情' },
'zh-TW': { pageTitle: 'Secrets — 審計', auditTitle: '我的審計', auditSubtitle: '顯示最近 100 筆與目前使用者相關的新審計記錄。時間為瀏覽器本地時區。', emptyAudit: '暫無審計記錄。', colTime: '時間', colAction: '動作', colTarget: '目標', colDetail: '詳情' },
en: { pageTitle: 'Secrets — Audit', auditTitle: 'My audit', auditSubtitle: 'Shows the latest 100 audit records related to the current user. Time is in browser local timezone.', emptyAudit: 'No audit records.', colTime: 'Time', colAction: 'Action', colTarget: 'Target', colDetail: 'Detail' }
};
window.applyPageLang = function () {
document.querySelectorAll('tbody tr').forEach(function (tr) {
var time = tr.querySelector('.col-time');
var action = tr.querySelector('.col-action');
var target = tr.querySelector('.col-target');
var detail = tr.querySelector('.col-detail');
if (time) time.setAttribute('data-label', t('mobileLabelTime'));
if (action) action.setAttribute('data-label', t('mobileLabelAction'));
if (target) target.setAttribute('data-label', t('mobileLabelTarget'));
if (detail) detail.setAttribute('data-label', t('mobileLabelDetail'));
});
};
document.querySelectorAll('time.audit-local-time[datetime]').forEach(function (el) {
var raw = el.getAttribute('datetime');
var d = raw ? new Date(raw) : null;
@@ -148,6 +175,7 @@
el.title = raw + ' (UTC)';
}
});
applyLang();
})();
</script>
</body>


@@ -174,6 +174,7 @@
<a href="/dashboard" class="sidebar-logo"><span>secrets</span></a> <a href="/dashboard" class="sidebar-logo"><span>secrets</span></a>
<nav class="sidebar-menu"> <nav class="sidebar-menu">
<a href="/dashboard" class="sidebar-link active">MCP</a> <a href="/dashboard" class="sidebar-link active">MCP</a>
<a href="/entries" class="sidebar-link">条目</a>
<a href="/audit" class="sidebar-link">审计</a> <a href="/audit" class="sidebar-link">审计</a>
</nav> </nav>
</aside> </aside>

File diff suppressed because it is too large


@@ -0,0 +1,76 @@
var I18N_SHARED = {
'zh-CN': {
pageTitleBase: 'Secrets',
navMcp: 'MCP',
navEntries: '条目',
navAudit: '审计',
signOut: '退出',
mobileLabelTime: '时间',
mobileLabelAction: '动作',
mobileLabelTarget: '目标',
mobileLabelDetail: '详情'
},
'zh-TW': {
pageTitleBase: 'Secrets',
navMcp: 'MCP',
navEntries: '條目',
navAudit: '審計',
signOut: '登出',
mobileLabelTime: '時間',
mobileLabelAction: '動作',
mobileLabelTarget: '目標',
mobileLabelDetail: '詳情'
},
en: {
pageTitleBase: 'Secrets',
navMcp: 'MCP',
navEntries: 'Entries',
navAudit: 'Audit',
signOut: 'Sign out',
mobileLabelTime: 'Time',
mobileLabelAction: 'Action',
mobileLabelTarget: 'Target',
mobileLabelDetail: 'Detail'
}
};
var currentLang = localStorage.getItem('lang') || 'zh-CN';
var I18N_PAGE = {};
function t(key) {
var dict = I18N_PAGE[currentLang] || I18N_PAGE['en'] || {};
var val = dict[key] || (I18N_SHARED[currentLang] && I18N_SHARED[currentLang][key]) || (I18N_SHARED.en && I18N_SHARED.en[key]) || key;
return val;
}
function tf(key, vars) {
var tpl = t(key);
return Object.keys(vars || {}).reduce(function (acc, k) {
return acc.replace(new RegExp('\\{' + k + '\\}', 'g'), String(vars[k]));
}, tpl);
}
function applyLang() {
document.documentElement.lang = currentLang;
var title = t('pageTitle');
if (title) document.title = title;
document.querySelectorAll('[data-i18n]').forEach(function (el) {
var key = el.getAttribute('data-i18n');
el.textContent = t(key);
});
document.querySelectorAll('[data-i18n-ph]').forEach(function (el) {
var key = el.getAttribute('data-i18n-ph');
el.placeholder = t(key);
});
document.querySelectorAll('.lang-btn').forEach(function (btn) {
var map = { 'zh-CN': '简', 'zh-TW': '繁', en: 'EN' };
btn.classList.toggle('active', btn.textContent === map[currentLang]);
});
if (typeof applyPageLang === 'function') applyPageLang();
}
window.setLang = function (lang) {
currentLang = lang;
localStorage.setItem('lang', lang);
applyLang();
};
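For reference, a hypothetical page script using these helpers might look like the sketch below. The `greeting` key and its values are invented for illustration; only `I18N_PAGE`, `t`, `tf`, `applyLang`, and `setLang` come from the file above.
```js
// Assumes /static/i18n.js is loaded first so the helpers are in scope.
I18N_PAGE = {
  'zh-CN': { pageTitle: 'Secrets — 演示', greeting: '你好,{name}' },
  en: { pageTitle: 'Secrets — Demo', greeting: 'Hello, {name}' }
};
applyLang();                                     // fills [data-i18n] nodes and sets document.title
console.log(tf('greeting', { name: 'voson' })); // "你好,voson" while currentLang is zh-CN
setLang('en');                                   // persists the choice and re-applies translations
```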


@@ -3,7 +3,13 @@
# ─── Database ─────────────────────────────────────────────────────────
# Web sessions (tower-sessions) share this database with business data; the session table is auto-migrated at startup, so no extra env vars are needed.
SECRETS_DATABASE_URL=postgres://postgres:PASSWORD@db.refining.ltd:5432/secrets-mcp
# Strongly recommended in production: verify-full (at minimum verify-ca)
SECRETS_DATABASE_SSL_MODE=verify-full
# Path to the CA root certificate for a private CA or self-managed chain; leave unset when using a publicly trusted CA
# SECRETS_DATABASE_SSL_ROOT_CERT=/etc/secrets/pg-ca.crt
# When set to prod/production, the service rejects weak TLS modes (prefer/disable/allow/require)
SECRETS_ENV=production
# ─── Service address ──────────────────────────────────────────────────
# Internal listen address (use an internal port when behind a Cloudflare / Nginx reverse proxy)
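To see what that enforcement means in practice, a deploy-time pre-flight could mirror it; the sketch below is illustrative only (the authoritative check lives inside the service), with variable names taken from the env file above.
```bash
#!/bin/bash
# Illustrative pre-flight: fail fast before starting the service if a weak
# libpq TLS mode is combined with a production environment.
case "${SECRETS_ENV:-}" in
  prod|production)
    case "${SECRETS_DATABASE_SSL_MODE:-prefer}" in
      disable|allow|prefer|require)
        echo "Refusing weak SECRETS_DATABASE_SSL_MODE='${SECRETS_DATABASE_SSL_MODE:-prefer}' in production; use verify-ca or verify-full" >&2
        exit 1
        ;;
    esac
    ;;
esac
```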


@@ -0,0 +1,92 @@
# PostgreSQL TLS Hardening Runbook
This runbook applies to:
- PostgreSQL server: `47.117.131.22` (`db.refining.ltd`)
- `secrets-mcp` app server: `47.238.146.244` (`secrets.refining.app`)
## 1) Issue certificate for `db.refining.ltd` (Let's Encrypt + Cloudflare DNS-01)
Install `acme.sh` on the PostgreSQL server and use a Cloudflare API token with DNS edit permission for the target zone.
```bash
curl https://get.acme.sh | sh -s email=ops@refining.ltd
export CF_Token="your_cloudflare_dns_token"
export CF_Zone_ID="your_zone_id"
~/.acme.sh/acme.sh --issue --dns dns_cf -d db.refining.ltd --keylength ec-256
```
Install cert/key into a PostgreSQL-readable path:
```bash
sudo mkdir -p /etc/postgresql/tls
sudo ~/.acme.sh/acme.sh --install-cert -d db.refining.ltd --ecc \
--fullchain-file /etc/postgresql/tls/fullchain.pem \
--key-file /etc/postgresql/tls/privkey.pem \
--reloadcmd "systemctl reload postgresql || systemctl restart postgresql"
sudo chown -R postgres:postgres /etc/postgresql/tls
sudo chmod 600 /etc/postgresql/tls/privkey.pem
sudo chmod 644 /etc/postgresql/tls/fullchain.pem
```
## 2) Configure PostgreSQL TLS and access rules
In `postgresql.conf`:
```conf
ssl = on
ssl_cert_file = '/etc/postgresql/tls/fullchain.pem'
ssl_key_file = '/etc/postgresql/tls/privkey.pem'
```
In `pg_hba.conf`, allow app traffic via TLS only (example):
```conf
hostssl secrets-mcp postgres 47.238.146.244/32 scram-sha-256
```
Keep a safe admin path (`local` socket or restricted source CIDR) before removing old plaintext `host` rules.
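As an illustration (not from the repo; adjust users and CIDRs to your topology), such a fallback could look like:
```conf
# Local socket access for admin tasks, independent of the TLS rule above
local    all   postgres                      peer
# Optional: an emergency TCP path restricted to a management subnet
hostssl  all   postgres   10.0.0.0/24        scram-sha-256
```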
Reload PostgreSQL:
```bash
sudo systemctl reload postgresql
```
## 3) Verify server-side TLS
```bash
openssl s_client -starttls postgres -connect db.refining.ltd:5432 -servername db.refining.ltd
```
The handshake should succeed and the certificate should match `db.refining.ltd`.
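To script that verification, one option is the sketch below (the `-ext` flag needs OpenSSL 1.1.1+; output formatting varies by version):
```bash
# Print the presented certificate's subject, expiry, and SANs;
# expect db.refining.ltd to appear in the subjectAltName list.
echo | openssl s_client -starttls postgres \
  -connect db.refining.ltd:5432 -servername db.refining.ltd 2>/dev/null \
  | openssl x509 -noout -subject -enddate -ext subjectAltName
```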
## 4) Update `secrets-mcp` app server env
Use environment values like:
```bash
SECRETS_DATABASE_URL=postgres://postgres:***@db.refining.ltd:5432/secrets-mcp
SECRETS_DATABASE_SSL_MODE=verify-full
SECRETS_ENV=production
```
If you use a private CA instead of a publicly trusted one, also set:
```bash
SECRETS_DATABASE_SSL_ROOT_CERT=/etc/secrets/pg-ca.crt
```
Restart `secrets-mcp` after updating env.
## 5) Verify from app server
Run positive and negative checks:
- Positive: app starts, migrations pass, dashboard + MCP API work.
- Negative:
- wrong hostname -> connection fails
- wrong CA file -> connection fails
- disable TLS on DB -> connection fails
This ensures there is no silent downgrade to weak TLS in production; a scripted sketch of these checks follows.
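A sketch of those checks using plain `psql` (paths are placeholders; libpq accepts `sslmode`/`sslrootcert` as URI parameters, and you will be prompted for the password unless `PGPASSWORD` is set):
```bash
# Positive: verify-full against the real hostname should connect.
psql "postgres://postgres@db.refining.ltd:5432/secrets-mcp?sslmode=verify-full" -c 'SELECT 1;'
# Negative: connecting by raw IP must fail hostname verification.
psql "postgres://postgres@47.117.131.22:5432/secrets-mcp?sslmode=verify-full" -c 'SELECT 1;' \
  && echo "UNEXPECTED: hostname check did not fail"
# Negative: an unrelated CA bundle must fail chain verification
# (/tmp/wrong-ca.pem stands in for any CA that did not sign the cert).
psql "postgres://postgres@db.refining.ltd:5432/secrets-mcp?sslmode=verify-full&sslrootcert=/tmp/wrong-ca.pem" -c 'SELECT 1;' \
  && echo "UNEXPECTED: CA check did not fail"
```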


@@ -1,22 +0,0 @@
-- Run against prod BEFORE deploying secrets-mcp with FK migration.
-- Requires: write access to SECRETS_DATABASE_URL.
-- Example: psql "$SECRETS_DATABASE_URL" -v ON_ERROR_STOP=1 -f scripts/cleanup-orphan-user-ids.sql
BEGIN;
UPDATE entries
SET user_id = NULL
WHERE user_id IS NOT NULL
AND NOT EXISTS (SELECT 1 FROM users u WHERE u.id = entries.user_id);
UPDATE entries_history
SET user_id = NULL
WHERE user_id IS NOT NULL
AND NOT EXISTS (SELECT 1 FROM users u WHERE u.id = entries_history.user_id);
UPDATE audit_log
SET user_id = NULL
WHERE user_id IS NOT NULL
AND NOT EXISTS (SELECT 1 FROM users u WHERE u.id = audit_log.user_id);
COMMIT;


@@ -1,194 +0,0 @@
-- ============================================================================
-- migrate-v0.3.0.sql
-- Schema migration from v0.2.x → v0.3.0
--
-- Changes:
-- • entries: namespace → folder, kind → type; add notes column
-- • audit_log: namespace → folder, kind → type
-- • entries_history: namespace → folder, kind → type; add user_id column
-- • Unique index: (user_id, name) → (user_id, folder, name)
-- Same name in different folders is now allowed; no rename needed.
--
-- Safe to run multiple times (fully idempotent).
-- Preserves all data in users, entries, secrets.
-- ============================================================================
BEGIN;
-- ── entries: rename namespace→folder, kind→type ──────────────────────────────
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'entries' AND column_name = 'namespace'
) THEN
ALTER TABLE entries RENAME COLUMN namespace TO folder;
END IF;
END $$;
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'entries' AND column_name = 'kind'
) THEN
ALTER TABLE entries RENAME COLUMN kind TO type;
END IF;
END $$;
-- Set NOT NULL + default for folder/type in entries
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'entries' AND column_name = 'folder'
) THEN
UPDATE entries SET folder = '' WHERE folder IS NULL;
ALTER TABLE entries ALTER COLUMN folder SET NOT NULL;
ALTER TABLE entries ALTER COLUMN folder SET DEFAULT '';
END IF;
END $$;
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'entries' AND column_name = 'type'
) THEN
UPDATE entries SET type = '' WHERE type IS NULL;
ALTER TABLE entries ALTER COLUMN type SET NOT NULL;
ALTER TABLE entries ALTER COLUMN type SET DEFAULT '';
END IF;
END $$;
-- Add notes column to entries if missing
ALTER TABLE entries ADD COLUMN IF NOT EXISTS notes TEXT NOT NULL DEFAULT '';
-- ── audit_log: rename namespace→folder, kind→type ────────────────────────────
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'audit_log' AND column_name = 'namespace'
) THEN
ALTER TABLE audit_log RENAME COLUMN namespace TO folder;
END IF;
END $$;
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'audit_log' AND column_name = 'kind'
) THEN
ALTER TABLE audit_log RENAME COLUMN kind TO type;
END IF;
END $$;
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'audit_log' AND column_name = 'folder'
) THEN
UPDATE audit_log SET folder = '' WHERE folder IS NULL;
ALTER TABLE audit_log ALTER COLUMN folder SET NOT NULL;
ALTER TABLE audit_log ALTER COLUMN folder SET DEFAULT '';
END IF;
END $$;
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'audit_log' AND column_name = 'type'
) THEN
UPDATE audit_log SET type = '' WHERE type IS NULL;
ALTER TABLE audit_log ALTER COLUMN type SET NOT NULL;
ALTER TABLE audit_log ALTER COLUMN type SET DEFAULT '';
END IF;
END $$;
ALTER TABLE audit_log DROP COLUMN IF EXISTS actor;
-- ── entries_history: rename namespace→folder, kind→type; add user_id ─────────
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'entries_history' AND column_name = 'namespace'
) THEN
ALTER TABLE entries_history RENAME COLUMN namespace TO folder;
END IF;
END $$;
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'entries_history' AND column_name = 'kind'
) THEN
ALTER TABLE entries_history RENAME COLUMN kind TO type;
END IF;
END $$;
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'entries_history' AND column_name = 'folder'
) THEN
UPDATE entries_history SET folder = '' WHERE folder IS NULL;
ALTER TABLE entries_history ALTER COLUMN folder SET NOT NULL;
ALTER TABLE entries_history ALTER COLUMN folder SET DEFAULT '';
END IF;
END $$;
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'entries_history' AND column_name = 'type'
) THEN
UPDATE entries_history SET type = '' WHERE type IS NULL;
ALTER TABLE entries_history ALTER COLUMN type SET NOT NULL;
ALTER TABLE entries_history ALTER COLUMN type SET DEFAULT '';
END IF;
END $$;
ALTER TABLE entries_history ADD COLUMN IF NOT EXISTS user_id UUID;
ALTER TABLE entries_history DROP COLUMN IF EXISTS actor;
-- ── secrets_history: drop actor column ───────────────────────────────────────
ALTER TABLE secrets_history DROP COLUMN IF EXISTS actor;
-- ── Rebuild unique indexes: (user_id, folder, name) ──────────────────────────
-- Note: folder is now part of the key, so same name in different folders is
-- naturally distinct — no rename of existing rows needed.
DROP INDEX IF EXISTS idx_entries_unique_legacy;
DROP INDEX IF EXISTS idx_entries_unique_user;
CREATE UNIQUE INDEX IF NOT EXISTS idx_entries_unique_legacy
ON entries(folder, name)
WHERE user_id IS NULL;
CREATE UNIQUE INDEX IF NOT EXISTS idx_entries_unique_user
ON entries(user_id, folder, name)
WHERE user_id IS NOT NULL;
-- ── Replace old namespace/kind indexes with folder/type ──────────────────────
DROP INDEX IF EXISTS idx_entries_namespace;
DROP INDEX IF EXISTS idx_entries_kind;
DROP INDEX IF EXISTS idx_audit_log_ns_kind;
DROP INDEX IF EXISTS idx_entries_history_ns_kind_name;
CREATE INDEX IF NOT EXISTS idx_entries_folder
ON entries(folder) WHERE folder <> '';
CREATE INDEX IF NOT EXISTS idx_entries_type
ON entries(type) WHERE type <> '';
CREATE INDEX IF NOT EXISTS idx_entries_user_id
ON entries(user_id) WHERE user_id IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_audit_log_folder_type
ON audit_log(folder, type);
CREATE INDEX IF NOT EXISTS idx_entries_history_folder_type_name
ON entries_history(folder, type, name, version DESC);
CREATE INDEX IF NOT EXISTS idx_entries_history_user_id
ON entries_history(user_id) WHERE user_id IS NOT NULL;
COMMIT;
-- ── Verification queries (run these manually to confirm) ─────────────────────
-- SELECT column_name, data_type FROM information_schema.columns
-- WHERE table_name = 'entries' ORDER BY ordinal_position;
-- SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 'entries';
-- SELECT COUNT(*) FROM entries;
-- SELECT COUNT(*) FROM users;
-- SELECT COUNT(*) FROM secrets;

scripts/sync-test-to-prod.sh Executable file

@@ -0,0 +1,95 @@
#!/bin/bash
# Sync test-environment data into the production environment
# Usage: ./scripts/sync-test-to-prod.sh
set -euo pipefail
# PostgreSQL client tools path (Homebrew libpq)
export PATH="/opt/homebrew/opt/libpq/bin:$PATH"
# SSL configuration
export PGSSLMODE=verify-full
export PGSSLROOTCERT=/etc/ssl/cert.pem
# Test environment
TEST_DB="postgres://postgres:Voson_2026_Pg18!@db.refining.ltd:5432/secrets-nn-test"
# Production environment
PROD_DB="postgres://postgres:Voson_2026_Pg18!@db.refining.ltd:5432/secrets-nn-prod"
echo "========================================="
echo "  Test environment -> production data sync"
echo "========================================="
echo ""
# Ask for confirmation before touching production
read -p "⚠️  This will overwrite production data. Continue? (yes/no): " confirm
if [ "$confirm" != "yes" ]; then
echo "Aborted"
exit 0
fi
echo ""
echo "Step 1/4: exporting test-environment data..."
TEMP_DIR=$(mktemp -d)
trap 'rm -rf "$TEMP_DIR"' EXIT
# Export test data (audit logs and history tables excluded)
pg_dump "$TEST_DB" \
--table=entries \
--table=secrets \
--table=entry_secrets \
--table=users \
--table=oauth_accounts \
--data-only \
--column-inserts \
--no-owner \
--no-privileges \
> "$TEMP_DIR/test_data.sql"
echo "✓ Test data exported to a temp file"
echo "  size: $(du -h "$TEMP_DIR/test_data.sql" | cut -f1)"
echo ""
echo "Step 2/4: backing up current production data..."
# Keep the backup outside $TEMP_DIR so the EXIT trap does not delete it
BACKUP_FILE="./prod_backup_$(date +%Y%m%d_%H%M%S).sql"
pg_dump "$PROD_DB" \
--table=entries \
--table=secrets \
--table=entry_secrets \
--table=users \
--table=oauth_accounts \
--data-only \
--column-inserts \
--no-owner \
--no-privileges \
> "$BACKUP_FILE"
echo "✓ Production data backed up to $BACKUP_FILE"
echo ""
echo "Step 3/4: truncating target tables in production..."
# users/oauth_accounts are intentionally not truncated; INSERTs for rows that
# already exist will fail with duplicate keys and are skipped, since the
# import below does not set ON_ERROR_STOP.
psql "$PROD_DB" <<'SQL'
TRUNCATE TABLE entry_secrets CASCADE;
TRUNCATE TABLE secrets CASCADE;
TRUNCATE TABLE entries CASCADE;
SQL
echo "✓ Production target tables truncated"
echo ""
echo "Step 4/4: importing test data into production..."
psql "$PROD_DB" -f "$TEMP_DIR/test_data.sql" 2>&1 | tail -20
echo ""
echo "Verifying data..."
echo "Production row counts:"
psql "$PROD_DB" -c "SELECT 'users' as table_name, count(*) FROM users UNION ALL SELECT 'entries', count(*) FROM entries UNION ALL SELECT 'secrets', count(*) FROM secrets UNION ALL SELECT 'entry_secrets', count(*) FROM entry_secrets UNION ALL SELECT 'oauth_accounts', count(*) FROM oauth_accounts ORDER BY table_name;"
echo ""
echo "========================================="
echo "  ✓ Data sync complete!"
echo "========================================="
echo ""
echo "Notes:"
echo "  - The production backup is kept at $BACKUP_FILE"
echo "  - Temp files are removed automatically when the script exits"