Compare commits


24 Commits

Author SHA1 Message Date
voson
98d69f5f12 fix(update): include deleted_at in SELECT for EntryWriteRow mapping
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 5m43s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 1m35s
The update_fields_by_id query was missing the deleted_at column, causing
sqlx FromRow mapping to fail against the EntryWriteRow struct.
2026-04-09 20:55:05 +08:00
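For context on the fix above, a minimal sketch of why the mapping failed: `sqlx::FromRow` binds columns by name, so every struct field needs a matching column in the SELECT list. Field names follow the `EntryWriteRow` shown later in this diff; the query text is paraphrased, not the repo's exact SQL.

```rust
use chrono::{DateTime, Utc};
use serde_json::Value;
use uuid::Uuid;

// Derived FromRow maps by column name: a `deleted_at` field with no matching
// column in the SELECT makes row mapping fail at runtime for this struct.
#[derive(sqlx::FromRow)]
struct EntryWriteRow {
    id: Uuid,
    version: i64,
    folder: String,
    #[sqlx(rename = "type")]
    entry_type: String,
    name: String,
    tags: Vec<String>,
    metadata: Value,
    notes: String,
    deleted_at: Option<DateTime<Utc>>, // must now appear in every SELECT feeding this struct
}

// Before: SELECT id, version, folder, type, name, tags, metadata, notes FROM entries …            → mapping error
// After:  SELECT id, version, folder, type, name, tags, metadata, notes, deleted_at FROM entries …
```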
agent
089d0b4b58 style(dashboard): move version footer out of card
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 6m30s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 1m37s
2026-04-09 17:32:40 +08:00
agent
10da51c203 ci: add a hard version-bump check so code changes cannot ship without a release
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 5m47s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 1m36s
- When the CI workflow parses the version, it now checks whether crates/ changes come with a version bump
- If code changed but the version number did not, the build fails immediately with a hint
- Forms a double safety net together with the local scripts/release-check.sh check
2026-04-07 14:05:59 +08:00
agent
bc8995cf71 chore: sync Cargo.lock with Cargo.toml version 0.5.12
Some checks failed
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Has been cancelled
Secrets MCP — Build & Release / Check / Build / Release (push) Has been cancelled
2026-04-07 13:47:51 +08:00
agent
5333b863c5 refactor(entries): move secret management from the edit modal into the view-secrets modal
Some checks failed
Secrets MCP — Build & Release / Check / Build / Release (push) Failing after 1m44s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Has been skipped
- Edit modal: remove the secrets section (rename, type change, unlink)
- View-secrets modal: add rename (with debounced validation), type selection, unlink, and save
- List-row secret chips remain read-only; the unlink button is removed
- Simplify the edit modal's save logic; it no longer handles secret changes
- bump 0.5.12
2026-04-07 13:32:29 +08:00
agent
6fde982c20 refactor(entries): move secret management from the edit modal into the view-secrets modal
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 6m6s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 1m36s
- Edit modal: remove the secrets section (rename, type change, unlink)
- View-secrets modal: add rename (with debounced validation), type selection, unlink, and save
- List-row secret chips remain read-only; the unlink button is removed
- Simplify the edit modal's save logic; it no longer handles secret changes
2026-04-07 13:25:33 +08:00
agent
a2a80a1744 fix(dashboard): fix the OpenCode MCP config by removing the enabled field and adding oauth: false
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 5m50s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 1m36s
2026-04-07 11:17:03 +08:00
dfe282095c feat(dashboard): switch the OpenCode MCP config to native Streamable HTTP, dropping the mcp-remote relay; bump 0.5.11
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 6m10s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 1m36s
2026-04-07 10:43:17 +08:00
voson
59084a409d release(secrets-mcp): 0.5.10 — web modularization, performance, and error handling
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 6m3s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 1m36s
- Split web.rs into web/ submodules; unify client_ip extraction
- core: reuse the user_scope SQL, eliminate the env_map N+1, adjust the FETCH_ALL cap
- Parallelize the entries list-page queries; drop the Arc around PgPool; structured NotFound and related errors
- CI: write the SSH private key safely; clean up crypto/hex and dependencies; validate MCP input lengths
- AGENTS: document the plaintext API-key storage design
2026-04-06 23:41:07 +08:00
voson
b0fcb83592 release(secrets-mcp): 0.5.9 — users.key_version with session invalidation; web entry-decryption API and list enhancements
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 5m24s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 1m36s
2026-04-06 17:23:20 +08:00
voson
8942718641 scripts: add a CSV-driven re-encryption repair tool for MCP secrets
Reads entry_id/secret_name/secret_value rows and calls secrets_update so the server re-encrypts each value with the current key. Ships with a template CSV; .gitignore now ignores *.pyc.
2026-04-06 16:38:37 +08:00
voson
53d53ff96a release(secrets-mcp): 0.5.8 — fix the passphrase-change flow
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 5m17s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 1m36s
- secrets-core: change_user_key() decrypts and re-encrypts all secrets inside a single transaction
- web: POST /api/key-change; POST /api/key-setup is rejected (409) when a key already exists
- dashboard: changing the passphrase requires the current one and calls key-change
- Sync Cargo.lock
2026-04-06 12:04:35 +08:00
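A hedged sketch of the flow named above: `change_user_key()` decrypts every secret with the old key and re-encrypts with the new one inside a single transaction, so a wrong passphrase or a mid-batch failure rolls everything back. The SQL and the `crypto::encrypt` signature here are assumptions inferred from this diff, not the crate's exact code.

```rust
use sqlx::PgPool;
use uuid::Uuid;

use crate::crypto; // within secrets-core; decrypt() is shown later in this diff

async fn change_user_key(
    pool: &PgPool,
    user_id: Uuid,
    old_key: &[u8; 32],
    new_key: &[u8; 32],
) -> anyhow::Result<()> {
    let mut tx = pool.begin().await?;
    // Lock all of this user's secret rows for the duration of the re-encryption.
    let rows: Vec<(Uuid, Vec<u8>)> = sqlx::query_as(
        "SELECT s.id, s.encrypted FROM secrets s \
         WHERE s.id IN (SELECT es.secret_id FROM entry_secrets es \
                        JOIN entries e ON e.id = es.entry_id \
                        WHERE e.user_id = $1) \
         FOR UPDATE",
    )
    .bind(user_id)
    .fetch_all(&mut *tx)
    .await?;
    for (id, encrypted) in rows {
        let plain = crypto::decrypt(old_key, &encrypted)?; // fails fast on a wrong old key
        let reencrypted = crypto::encrypt(new_key, &plain)?; // assumed counterpart of decrypt()
        sqlx::query("UPDATE secrets SET encrypted = $1 WHERE id = $2")
            .bind(reencrypted)
            .bind(id)
            .execute(&mut *tx)
            .await?;
    }
    tx.commit().await?; // any error above rolls the whole batch back
    Ok(())
}
```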
voson
cab234cfcb feat(secrets-mcp): richer MCP request logging and encryption_key parameter support
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 3m28s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 1m36s
logging.rs:
- Every MCP POST log line gains auth_key (Bearer token masked to its first 12 chars),
  enc_key (an X-Encryption-Key fingerprint of the first 4 and last 4 chars, e.g. 146b…5516(64), or absent),
  user_id, and tool_args (a whitelisted summary of non-sensitive arguments)
- New helpers: mask_bearer / mask_enc_key / extract_tool_args / summarize_value

tools.rs:
- extract_enc_key now logs a debug-level fingerprint on the success path (raw_len/trimmed_len/prefix/suffix)
- New extract_enc_key_or_arg / require_user_and_key_or_arg: prefer a key passed as a tool argument,
  falling back to the X-Encryption-Key header, to work around Cursor Chat's broken MCP header pass-through
- GetSecretInput / AddInput / UpdateInput / ExportInput / EnvMapInput each gain an optional
  encryption_key field; the corresponding tool implementations now use require_user_and_key_or_arg
2026-04-06 11:03:01 +08:00
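A hedged reconstruction of the masking helpers named above (`mask_bearer` / `mask_enc_key`); the exact signatures are assumptions, but the output format matches the commit message (12-char Bearer prefix; a 4+4-char key fingerprint such as `146b…5516(64)`, or `absent`).

```rust
// Log-safe mask for an Authorization: Bearer token: keep only the first 12 chars.
fn mask_bearer(token: &str) -> String {
    let prefix: String = token.chars().take(12).collect();
    format!("{prefix}…")
}

// Fingerprint for the X-Encryption-Key header: first 4 + last 4 chars + length.
// Byte-index slicing is safe here because the key is ASCII hex.
fn mask_enc_key(key: Option<&str>) -> String {
    match key {
        None => "absent".to_string(),
        Some(k) if k.len() >= 8 => format!("{}…{}({})", &k[..4], &k[k.len() - 4..], k.len()),
        Some(k) => format!("short({})", k.len()),
    }
}
```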
voson
e0fee639c1 release(secrets-mcp): 0.5.7
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 5m8s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 1m36s
2026-04-05 17:07:31 +08:00
voson
7c53bfb782 feat(core): entry history carries ciphertext snapshots of linked secrets; rollback restores N:N links and ciphertexts
- db: metadata_with_secret_snapshot / strip / parse helpers
- add/update/delete/rollback merge the snapshot before writing entries_history
- rollback: syncs entry_secrets from the historical snapshot, updating or inserting secrets
- Satisfy clippy's collapsible_if lint
2026-04-05 17:06:53 +08:00
voson
63cb3a8216 release(secrets-mcp): 0.5.6
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 5m8s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 1m36s
Fix the invalid aggregate-with-FOR-UPDATE query when unbinding OAuth; align web OAuth audit IPs with TRUST_PROXY and validate the IP; roll back the bind flag if writing oauth_state fails during account binding. Restore folder/type when rolling back an entry; propagate DB errors from the import conflict check; require a logged-in user for MCP delete/history; enforce a global 10 MiB request-body limit. CI deploys support DEPLOY_KNOWN_HOSTS, defaulting to accept-new; docs and the deploy example now cover connection pooling, rate limiting, and TRUST_PROXY. Remove the sync-test-to-prod script, which contained plaintext credentials.
2026-04-05 15:29:03 +08:00
voson
2b994141b8 release(secrets-mcp): 0.5.5 — explicit allow_methods for production CORS; fix the tower-http startup panic
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 4m59s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 6s
tower-http forbids credentials combined with wildcard methods/headers; production now uses an explicit GET/POST/PATCH/DELETE/OPTIONS whitelist.
2026-04-05 12:27:40 +08:00
voson
9d6ac5c13a release(secrets-mcp): 0.5.4 — web pagination fix and hex decoding; bulk-delete cap; MCP @-path detection
Some checks failed
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 4m55s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Failing after 6s
2026-04-05 12:14:40 +08:00
voson
1860cce86c release(secrets-mcp): 0.5.3 — audit-log pagination and web view; CONTRIBUTING; docs and template fixes 2026-04-05 11:34:04 +08:00
dd24f7cc44 release: secrets-mcp 0.5.2
Some checks failed
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 6m7s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Failing after 6s
Bump version: the secrets-mcp-0.5.1 tag already existed while the crates had further changes.

Made-with: Cursor
2026-04-05 10:38:50 +08:00
voson
aefad33870 chore(secrets-mcp): 0.5.1 — drop entry-type normalization; MCP parameters also accept string forms
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 4m22s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 6s
- Remove taxonomy's automatic entry-type mapping and the metadata.subtype backfill; values are only trimmed before storage
- MCP tools: optional Vec/Map/bool fields now accept JSON embedded in strings, with clearer parse-failure messages
- Add deser unit tests; sync README/AGENTS and the models comments

Made-with: Cursor
2026-04-04 21:27:33 +08:00
voson
0ffb81e57f feat: entry update links existing secrets (link_secret_names)
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 4m19s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 6s
- secrets-core: update flow validates and applies secret links
- secrets-mcp: MCP tool params and UI for managing links on edit
- Align errors and templates; minor crypto/.gitignore tweaks

Made-with: Cursor
2026-04-04 20:30:32 +08:00
voson
4a1654c820 docs: update MCP tools list, env vars, taxonomy and deploy structure 2026-04-04 18:04:35 +08:00
voson
a15e2eaf4a docs: align README with removed SQL migration scripts
Made-with: Cursor
2026-04-04 17:58:26 +08:00
59 changed files with 7922 additions and 3143 deletions

View File

@@ -48,6 +48,18 @@ jobs:
echo "version=${version}" >> "$GITHUB_OUTPUT"
echo "tag=${tag}" >> "$GITHUB_OUTPUT"
# Hard version-bump check: if this push touches crates/ or Cargo.toml
# but the version matches the previous commit, treat it as an unreleased change and fail.
prev_version=$(git show HEAD^:crates/secrets-mcp/Cargo.toml 2>/dev/null | grep -m1 '^version' | sed 's/.*"\(.*\)".*/\1/' || true)
if [ -n "$prev_version" ] && [ "$version" = "$prev_version" ]; then
# Confirm whether this push actually touches crates/ or Cargo.toml
if git diff --name-only HEAD^ HEAD 2>/dev/null | grep -qE '^crates/|^Cargo\.toml$'; then
echo "::error::crates/ or Cargo.toml changed but the version was not bumped: ${version} == ${prev_version}"
echo "Per policy, every code change must bump version in crates/secrets-mcp/Cargo.toml."
exit 1
fi
fi
if git rev-parse "refs/tags/${tag}" >/dev/null 2>&1; then
echo "⚠ Version ${tag} already exists; it will be overwritten and re-released."
echo "tag_exists=true" >> "$GITHUB_OUTPUT"
@@ -208,27 +220,35 @@ jobs:
DEPLOY_HOST: ${{ vars.DEPLOY_HOST }}
DEPLOY_USER: ${{ vars.DEPLOY_USER }}
DEPLOY_SSH_KEY: ${{ secrets.DEPLOY_SSH_KEY }}
DEPLOY_KNOWN_HOSTS: ${{ vars.DEPLOY_KNOWN_HOSTS }}
run: |
if [ -z "$DEPLOY_HOST" ] || [ -z "$DEPLOY_USER" ] || [ -z "$DEPLOY_SSH_KEY" ]; then
echo "部署跳过:请配置 vars.DEPLOY_HOST、vars.DEPLOY_USER 与 secrets.DEPLOY_SSH_KEY"
exit 0
fi
install -m 600 /dev/null /tmp/deploy_key
echo "$DEPLOY_SSH_KEY" > /tmp/deploy_key
chmod 600 /tmp/deploy_key
trap 'rm -f /tmp/deploy_key' EXIT
scp -i /tmp/deploy_key -o StrictHostKeyChecking=no \
if [ -n "$DEPLOY_KNOWN_HOSTS" ]; then
echo "$DEPLOY_KNOWN_HOSTS" > /tmp/deploy_known_hosts
ssh_opts="-o UserKnownHostsFile=/tmp/deploy_known_hosts -o StrictHostKeyChecking=yes"
else
ssh_opts="-o StrictHostKeyChecking=accept-new"
fi
scp -i /tmp/deploy_key $ssh_opts \
"/tmp/artifact/${MCP_BINARY}" \
"${DEPLOY_USER}@${DEPLOY_HOST}:/tmp/secrets-mcp.new"
ssh -i /tmp/deploy_key -o StrictHostKeyChecking=no "${DEPLOY_USER}@${DEPLOY_HOST}" "
ssh -i /tmp/deploy_key $ssh_opts "${DEPLOY_USER}@${DEPLOY_HOST}" "
sudo mv /tmp/secrets-mcp.new /opt/secrets-mcp/secrets-mcp
sudo chmod +x /opt/secrets-mcp/secrets-mcp
sudo systemctl restart secrets-mcp
sleep 2
sudo systemctl is-active secrets-mcp && echo 'service started successfully' || (sudo journalctl -u secrets-mcp -n 20 && exit 1)
"
rm -f /tmp/deploy_key
- name: Feishu notification
if: always()

4
.gitignore vendored
View File

@@ -4,4 +4,6 @@
.cursor/
*.pem
tmp/
client_secret_*.apps.googleusercontent.com.json
client_secret_*.apps.googleusercontent.com.json
node_modules/
*.pyc

3
.vscode/tasks.json vendored
View File

@@ -22,7 +22,6 @@
"label": "test: workspace",
"type": "shell",
"command": "cargo test --workspace --locked",
"dependsOn": "build",
"group": { "kind": "test", "isDefault": true }
},
{
@@ -35,7 +34,7 @@
"label": "clippy: workspace",
"type": "shell",
"command": "cargo clippy --workspace --locked -- -D warnings",
"dependsOn": "build"
"problemMatcher": []
},
{
"label": "ci: release-check",

View File

@@ -2,12 +2,37 @@
This repository is an **MCP SaaS**: `secrets-core` (business logic and persistence) + `secrets-mcp` (Streamable HTTP MCP, Web, OAuth, API Key). The public entry point is `crates/secrets-mcp`.
## Version control
This repository uses **[Jujutsu (jj)](https://jj-vcs.dev/)** as its version control system (pure jj mode, no `.git` directory).
### Common jj commands
| Operation | jj command |
|------|---------|
| View history | `jj log` / `jj log 'all()'` |
| View status | `jj status` |
| Commit | `jj commit` |
| Create a new change | `jj new` |
| Rebase | `jj rebase` |
| Squash commits | `jj squash` |
| Undo an operation | `jj undo` |
| List tags | `jj tag list` |
| List bookmarks | `jj bookmark list` |
| Push to remote | `jj git push` |
| Fetch from remote | `jj git fetch` |
### Notes
- This repository is in **pure jj mode** with no `.git` directory; do not use `git` commands locally
- CI/CD (Gitea Actions) still pulls code over the Git protocol (the runner uses `git` automatically); no changes needed
- To check whether a tag exists, use `jj log --no-graph --revisions "tag(${tag})"` rather than `git rev-parse`
## Commit / push hard rules (take precedence over everything below)
**Run the following checks before every commit and push, whether or not a release was explicitly requested:**
1. Commits touching `crates/**`, the root `Cargo.toml`/`Cargo.lock`, or `secrets-mcp` behavior are treated as **requiring a release** by default, unless explicitly marked as "no release this time".
2. Before committing, check `version` in `crates/secrets-mcp/Cargo.toml`, then check tags (`git tag -l 'secrets-mcp-*'`). If a tag for the current version already exists and there are code changes, you **must bump the version** and run `cargo build` to sync `Cargo.lock`.
2. Before committing, check `version` in `crates/secrets-mcp/Cargo.toml`, then check tags (`jj tag list`). If a tag for the current version already exists and there are code changes, you **must bump the version** and run `cargo build` to sync `Cargo.lock`.
3. Before committing, run `./scripts/release-check.sh` (version/tag + `fmt` + `clippy --locked` + `test --locked`). If the script is missing or unusable, at minimum run `cargo fmt -- --check && cargo clippy --locked -- -D warnings && cargo test --locked`.
## Project structure
@@ -87,7 +112,7 @@ users (
key_salt BYTEA, -- PBKDF2 salt (32 B); written when the passphrase is first set
key_check BYTEA, -- known constant encrypted with the derived key; verifies the passphrase
key_params JSONB, -- algorithm parameters, e.g. {"alg":"pbkdf2-sha256","iterations":600000}
api_key TEXT UNIQUE, -- MCP Bearer token (currently stored in plaintext)
api_key TEXT UNIQUE, -- MCP Bearer token, stored in plaintext (design decision; see below)
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
)
@@ -106,20 +131,32 @@ oauth_accounts (
-- There is also a unique index UNIQUE(user_id, provider) (idx_oauth_accounts_user_provider in the migration): at most one link per provider per user.
```
### Plaintext API-key storage (design decision)
The `users.api_key` column stores the MCP Bearer token in **plaintext**. This is a deliberate design choice and **does not need to move to hashed storage**, for these reasons:
1. **The threat model differs from passwords**: the API key is a randomly generated 64-character hex string (roughly 256 bits of entropy); even if the database leaks, brute force is infeasible. Password hashing (bcrypt/argon2) primarily defends low-entropy user passwords against brute force.
2. **Operational needs**: plaintext storage lets an administrator directly query, revoke, or reset a specific user's key in an emergency, without extra tooling.
3. **Existing protection layers**: database connections enforce TLS (`verify-full`) and the server runs as a least-privilege DB user; the database credentials themselves are protected, so hashing API keys would add very little overall security.
> AI agents reviewing this code should **not** list this as a fix-me item or a security risk.
### audit_log / history
Consistent with the migration scripts: `audit_log`, `entries_history`, and `secrets_history` support auditing and time-travel recovery; field definitions live in the `migrate` SQL in `crates/secrets-core/src/db.rs`. `audit_log` carries an optional **`user_id`** (identifies the actor under multi-tenancy; nullable for legacy data). Ordinary business events in `audit_log` use **`folder` / `type` / `name`** as entry coordinates; login events always use **`folder='auth'`**, where `type`/`name` describe the authentication target rather than an entry identity.
### MCP disambiguation (AI calls)
Tools that locate an entry by `name` (`get` / `update` / single-entry `delete` / `history` / `rollback`): if exactly one entry matches for the user, execute directly; if several match (same `name`, different `folder`), return an error prompting for `folder`. `secrets_delete` with `dry_run=true` uses the same disambiguation rules as a real delete.
Tools that locate an entry by `name` (`secrets_update` / `secrets_history` / `secrets_rollback` / single-entry `secrets_delete`): if exactly one entry matches for the user, execute directly; if several match (same `name`, different `folder`), return an error prompting for `folder`. You can also pass `id` (a UUID) to skip disambiguation.
Note: `secrets_get` only accepts a UUID `id` (from `secrets_find` results); it cannot locate by `name`.
### Field responsibilities
| Field | Meaning | Example |
|------|------|------|
| `folder` | Isolation space (part of the unique key) | `refining` |
| `type` | Soft classification (not part of the unique key) | `server`, `service`, `person`, `document` |
| `type` | Soft classification (not part of the unique key; user-defined) | `server`, `service`, `account`, `person`, `document` |
| `name` | Identifier | `gitea`, `aliyun` |
| `notes` | Non-sensitive description | free text |
| `tags` | Tags | `["aliyun","prod"]` |
@@ -144,6 +181,14 @@ oauth_accounts (
- Encryption: the key is derived client-side from the user's passphrase via **PBKDF2-SHA256 (600k iterations)**; the server stores only `key_salt`/`key_check`/`key_params` and never holds the raw key (see the sketch below). The Web client encrypts and decrypts locally in the browser; MCP clients pass the key in the `X-Encryption-Key` request header, and the server decrypts transiently and returns plaintext.
- MCP: tool parameters stay in sync with their JSON Schema (`schemars`); authorization relies on the user context in the request extensions.
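As a concrete illustration of the derivation just described, a minimal sketch using the `pbkdf2`, `sha2`, and `hex` crates (the real Web client does this in the browser; the crate choice and function shape here are assumptions):

```rust
use pbkdf2::pbkdf2_hmac;
use sha2::Sha256;

/// Derive the 32-byte key from a passphrase and the stored `key_salt`,
/// per key_params {"alg":"pbkdf2-sha256","iterations":600000}.
fn derive_key_hex(passphrase: &str, key_salt: &[u8]) -> String {
    let mut key = [0u8; 32];
    pbkdf2_hmac::<Sha256>(passphrase.as_bytes(), key_salt, 600_000, &mut key);
    hex::encode(key) // 64 hex chars: what MCP clients send as X-Encryption-Key
}
```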
## Production CORS
Production CORS uses an explicit request-header whitelist (`build_cors_layer`) rather than `allow_headers(Any)`,
because `tower-http` forbids combining `allow_credentials(true)` with `allow_headers(Any)`.
**Maintenance constraint**: if the MCP protocol or a client adds a custom request header, `production_allowed_headers()` must be updated in step.
Currently allowed headers: `Authorization`, `Content-Type`, `X-Encryption-Key`, `mcp-session-id`, `x-mcp-session` (see the sketch below).
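A minimal sketch of the constraint just described, in the shape of `build_cors_layer` (the function body and origin handling are assumptions; the header list mirrors `production_allowed_headers()` above):

```rust
use http::{header::{AUTHORIZATION, CONTENT_TYPE}, HeaderName, Method};
use tower_http::cors::{AllowOrigin, CorsLayer};

// With allow_credentials(true), tower-http panics at startup on wildcard
// methods or headers, so production enumerates both explicitly.
fn build_cors_layer(origin: AllowOrigin) -> CorsLayer {
    CorsLayer::new()
        .allow_origin(origin)
        .allow_credentials(true)
        .allow_methods([
            Method::GET,
            Method::POST,
            Method::PATCH,
            Method::DELETE,
            Method::OPTIONS,
        ])
        .allow_headers([
            AUTHORIZATION,
            CONTENT_TYPE,
            HeaderName::from_static("x-encryption-key"),
            HeaderName::from_static("mcp-session-id"),
            HeaderName::from_static("x-mcp-session"),
        ])
}
```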
## Pre-commit checks
```bash
@@ -162,7 +207,7 @@ cargo test --locked
```bash
grep '^version' crates/secrets-mcp/Cargo.toml
git tag -l 'secrets-mcp-*'
jj tag list
```
## CI/CD
@@ -180,9 +225,19 @@ git tag -l 'secrets-mcp-*'
| Variable | Description |
|------|------|
| `SECRETS_DATABASE_URL` | **Required.** PostgreSQL URL. |
| `SECRETS_DATABASE_SSL_MODE` | Optional, but strongly recommended; treat as required in production. Prefer `verify-full` (at least `verify-ca`). |
| `SECRETS_DATABASE_SSL_ROOT_CERT` | Optional. CA root certificate path for private-CA or self-signed chains. |
| `SECRETS_DATABASE_POOL_SIZE` | Optional. Maximum pool connections, default `10`. |
| `SECRETS_DATABASE_ACQUIRE_TIMEOUT` | Optional. Connection-acquire timeout in seconds, default `5`. |
| `SECRETS_ENV` | Optional. Set to `prod` / `production` to reject weak PostgreSQL TLS modes. |
| `BASE_URL` | Public base URL (OAuth callback: `${BASE_URL}/auth/google/callback`). |
| `SECRETS_MCP_BIND` | Listen address, default `127.0.0.1:9315` (change to `0.0.0.0:9315` when a container or remote host exposes the port directly). |
| `GOOGLE_CLIENT_ID` / `GOOGLE_CLIENT_SECRET` | Optional; runtime configuration only. |
| `RUST_LOG` | e.g. `secrets_mcp=debug`. |
| `RATE_LIMIT_GLOBAL_PER_SECOND` | Optional. Global rate limit, default `100` req/s. |
| `RATE_LIMIT_GLOBAL_BURST` | Optional. Global burst allowance, default `200`. |
| `RATE_LIMIT_IP_PER_SECOND` | Optional. Per-IP rate limit, default `20` req/s. |
| `RATE_LIMIT_IP_BURST` | Optional. Per-IP burst allowance, default `40`. |
| `TRUST_PROXY` | Optional. Set to `1`/`true`/`yes` to extract the client IP from `X-Forwarded-For` / `X-Real-IP`. |
> `SERVER_MASTER_KEY` is no longer needed. Under the new architecture the key is derived client-side from the user's passphrase; the server never holds it.

55
CONTRIBUTING.md Normal file
View File

@@ -0,0 +1,55 @@
# Contributing
## Version control
This repository uses **[Jujutsu (jj)](https://jj-vcs.dev/)**. Do not use `git` commands.
```bash
jj log      # view history
jj status   # view status
jj new      # create a new change
jj commit   # commit
jj rebase   # rebase
jj squash   # squash commits
jj git push # push to the remote
```
See the "Version control" section of [AGENTS.md](AGENTS.md) for details.
## Local development
```bash
# Copy the environment template
cp deploy/.env.example .env
# Fill in the database connection and other settings, then:
cargo build
cargo test --locked
```
## Pre-commit checks
Every commit must pass:
```bash
cargo fmt -- --check
cargo clippy --locked -- -D warnings
cargo test --locked
```
Or use the script:
```bash
./scripts/release-check.sh
```
## Release rules
Commits touching `crates/**`, the root `Cargo.toml`/`Cargo.lock`, or `secrets-mcp` behavior require a release by default.
1. Check `version` in `crates/secrets-mcp/Cargo.toml`
2. Run `jj tag list` to see whether the corresponding tag already exists
3. If the tag exists and there are code changes, you **must bump the version** and run `cargo build` to sync `Cargo.lock`
4. Commit only after release-check passes
See the "Commit / push hard rules" section of [AGENTS.md](AGENTS.md) for details.

137
Cargo.lock generated
View File

@@ -464,6 +464,20 @@ dependencies = [
"syn",
]
[[package]]
name = "dashmap"
version = "6.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5041cc499144891f3790297212f32a74fb938e5136a14943f338ef9e0ae276cf"
dependencies = [
"cfg-if",
"crossbeam-utils",
"hashbrown 0.14.5",
"lock_api",
"once_cell",
"parking_lot_core",
]
[[package]]
name = "der"
version = "0.7.10"
@@ -596,6 +610,12 @@ version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d9c4f5dac5e15c24eb999c26181a6ca40b39fe946cbe4c263c7209467bc83af2"
[[package]]
name = "foldhash"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "77ce24cb58228fbb8aa041425bb1050850ac19177686ea6e0f41a70416f56fdb"
[[package]]
name = "form_urlencoded"
version = "1.2.2"
@@ -687,6 +707,12 @@ version = "0.3.32"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "037711b3d59c33004d3856fbdc83b99d4ff37a24768fa1be9ce3538a1cde4393"
[[package]]
name = "futures-timer"
version = "3.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f288b0a4f20f9a56b5d1da57e2227c661b7b16168e2f72365f57b63326e29b24"
[[package]]
name = "futures-util"
version = "0.3.32"
@@ -765,6 +791,35 @@ dependencies = [
"polyval",
]
[[package]]
name = "governor"
version = "0.10.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9efcab3c1958580ff1f25a2a41be1668f7603d849bb63af523b208a3cc1223b8"
dependencies = [
"cfg-if",
"dashmap",
"futures-sink",
"futures-timer",
"futures-util",
"getrandom 0.3.4",
"hashbrown 0.16.1",
"nonzero_ext",
"parking_lot",
"portable-atomic",
"quanta",
"rand 0.9.2",
"smallvec",
"spinning_top",
"web-time",
]
[[package]]
name = "hashbrown"
version = "0.14.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e5274423e17b7c9fc20b6e7e208532f9b19825d82dfd615708b70edd83df41f1"
[[package]]
name = "hashbrown"
version = "0.15.5"
@@ -773,7 +828,7 @@ checksum = "9229cfe53dfd69f0609a49f65461bd93001ea1ef889cd5529dd176593f5338a1"
dependencies = [
"allocator-api2",
"equivalent",
"foldhash",
"foldhash 0.1.5",
]
[[package]]
@@ -781,6 +836,11 @@ name = "hashbrown"
version = "0.16.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "841d1cc9bed7f9236f321df977030373f4a4163ae1a7dbfe1a51a2c1a51d9100"
dependencies = [
"allocator-api2",
"equivalent",
"foldhash 0.2.0",
]
[[package]]
name = "hashlink"
@@ -1283,6 +1343,12 @@ dependencies = [
"windows-sys 0.61.2",
]
[[package]]
name = "nonzero_ext"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "38bf9645c8b145698bb0b18a4637dcacbc421ea49bef2317e4fd8065a387cf21"
[[package]]
name = "nu-ansi-term"
version = "0.50.3"
@@ -1463,6 +1529,12 @@ dependencies = [
"universal-hash",
]
[[package]]
name = "portable-atomic"
version = "1.13.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c33a9471896f1c69cecef8d20cbe2f7accd12527ce60845ff44c153bb2a21b49"
[[package]]
name = "potential_utf"
version = "0.1.4"
@@ -1506,6 +1578,21 @@ dependencies = [
"unicode-ident",
]
[[package]]
name = "quanta"
version = "0.12.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f3ab5a9d756f0d97bdc89019bd2e4ea098cf9cde50ee7564dde6b81ccc8f06c7"
dependencies = [
"crossbeam-utils",
"libc",
"once_cell",
"raw-cpuid",
"wasi",
"web-sys",
"winapi",
]
[[package]]
name = "quinn"
version = "0.11.9"
@@ -1658,6 +1745,15 @@ version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0c8d0fd677905edcbeedbf2edb6494d676f0e98d54d5cf9bda0b061cb8fb8aba"
[[package]]
name = "raw-cpuid"
version = "11.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "498cd0dc59d73224351ee52a95fee0f1a617a2eae0e7d9d720cc622c73a54186"
dependencies = [
"bitflags",
]
[[package]]
name = "redox_syscall"
version = "0.5.18"
@@ -1953,11 +2049,11 @@ dependencies = [
"aes-gcm",
"anyhow",
"chrono",
"hex",
"rand 0.10.0",
"serde",
"serde_json",
"serde_yaml",
"sha2",
"sqlx",
"tempfile",
"thiserror",
@@ -1969,7 +2065,7 @@ dependencies = [
[[package]]
name = "secrets-mcp"
version = "0.4.0"
version = "0.5.14"
dependencies = [
"anyhow",
"askama",
@@ -1977,6 +2073,7 @@ dependencies = [
"axum-extra",
"chrono",
"dotenvy",
"governor",
"http",
"rand 0.10.0",
"reqwest",
@@ -1985,7 +2082,6 @@ dependencies = [
"secrets-core",
"serde",
"serde_json",
"sha2",
"sqlx",
"time",
"tokio",
@@ -1995,6 +2091,7 @@ dependencies = [
"tower-sessions-sqlx-store-chrono",
"tracing",
"tracing-subscriber",
"url",
"urlencoding",
"uuid",
]
@@ -2195,6 +2292,15 @@ dependencies = [
"lock_api",
]
[[package]]
name = "spinning_top"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d96d2d1d716fb500937168cc09353ffdc7a012be8475ac7308e1bdf0e3923300"
dependencies = [
"lock_api",
]
[[package]]
name = "spki"
version = "0.7.3"
@@ -2717,6 +2823,7 @@ dependencies = [
"futures-util",
"http",
"http-body",
"http-body-util",
"iri-string",
"pin-project-lite",
"tower",
@@ -3167,6 +3274,28 @@ dependencies = [
"wasite",
]
[[package]]
name = "winapi"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5c839a674fcd7a98952e593242ea400abe93992746761e38641405d28b00f419"
dependencies = [
"winapi-i686-pc-windows-gnu",
"winapi-x86_64-pc-windows-gnu",
]
[[package]]
name = "winapi-i686-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6"
[[package]]
name = "winapi-x86_64-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
[[package]]
name = "windows-core"
version = "0.62.2"

View File

@@ -25,6 +25,13 @@ cargo build --release -p secrets-mcp
| `SECRETS_MCP_BIND` | Listen address, default `127.0.0.1:9315`. Use `0.0.0.0:9315` inside a container or when exposing the port directly; behind a reverse proxy it is usually `127.0.0.1:9315`. |
| `GOOGLE_CLIENT_ID` / `GOOGLE_CLIENT_SECRET` | Optional; without them there is no Google login entry. Read from the environment at runtime; never write into CI or bake into the binary. |
| `RUST_LOG` | Optional; log level, e.g. `secrets_mcp=debug`. |
| `SECRETS_DATABASE_POOL_SIZE` | Optional. Maximum pool connections, default `10`. |
| `SECRETS_DATABASE_ACQUIRE_TIMEOUT` | Optional. Connection-acquire timeout in seconds, default `5`. |
| `RATE_LIMIT_GLOBAL_PER_SECOND` | Optional. Global rate limit, default `100` req/s. |
| `RATE_LIMIT_GLOBAL_BURST` | Optional. Global burst allowance, default `200`. |
| `RATE_LIMIT_IP_PER_SECOND` | Optional. Per-IP rate limit, default `20` req/s. |
| `RATE_LIMIT_IP_BURST` | Optional. Per-IP burst allowance, default `40`. |
| `TRUST_PROXY` | Optional. Set to `1`/`true`/`yes` to extract the client IP from `X-Forwarded-For` / `X-Real-IP` (enable only behind a reverse proxy). |
```bash
cargo run -p secrets-mcp
@@ -54,10 +61,31 @@ SECRETS_ENV=production
Entries are logically unique per user by **`(folder, name)`** (database unique index: `user_id + folder + name`). The same name can live in different folders (e.g. `refining/aliyun` and `ricnsmart/aliyun`).
- **`secrets_search`**: discover entries (filter by query / folder / type / name); no encryption header required.
- **`secrets_get` / `secrets_update` / `secrets_delete` (by name) / `secrets_history` / `secrets_rollback`**: with only `name`, a globally unique match executes directly; multiple same-name matches return a disambiguation error asking for **`folder`**.
- **`secrets_delete`**: `dry_run=true` follows the same disambiguation rules as a real delete: a unique match previews one entry; multiple matches error and require `folder`.
- **Shared secrets**: under the N:N model, deleting an entry only removes the link; a shared secret still referenced by other entries is kept, and is cleaned up automatically once unreferenced.
### Tool list
| Tool | Needs encryption key | Description |
|------|-------------|------|
| `secrets_find` | No | Discover entries (returns a schema including secret_fields); supports fuzzy `name_query` matching |
| `secrets_search` | No | Search entries; supports `query`/`folder`/`type`/`name` filters, `sort`/`offset` paging, and a `summary` mode |
| `secrets_get` | Yes | Fetch one entry and its decrypted secrets by UUID `id` |
| `secrets_add` | Yes | Add an entry; supports `meta_obj`/`secrets_obj` JSON object parameters, `secret_types` for field types, and `link_secret_names` to link existing secrets |
| `secrets_update` | Yes | Update an entry; locate by `id` or `name`+`folder` |
| `secrets_delete` | No | Delete entries; locate by `id` or `name`+`folder`; `dry_run=true` previews the deletion |
| `secrets_history` | No | View entry history; locate by `id` or `name`+`folder` |
| `secrets_rollback` | Yes | Roll an entry back to a given historical version; locate by `id` or `name`+`folder` |
| `secrets_export` | Yes | Export entries (including decrypted plaintext) as JSON/TOML/YAML |
| `secrets_env_map` | Yes | Map secrets to environment variables (`UPPER(entry)_UPPER(field)` format, see the sketch after this table); supports a `prefix` |
| `secrets_overview` | No | Entry counts per folder and type |
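To make the `secrets_env_map` naming rule concrete, a small sketch (the sanitization of non-alphanumeric characters is an assumption; the `UPPER(entry)_UPPER(field)` shape and optional `prefix` come from the table above):

```rust
// entry "gitea" + field "api_key"            → GITEA_API_KEY
// prefix "prod" + entry "gitea" + "api_key"  → PROD_GITEA_API_KEY
fn env_var_name(prefix: Option<&str>, entry: &str, field: &str) -> String {
    let up = |s: &str| -> String {
        s.chars()
            .map(|c| if c.is_ascii_alphanumeric() { c.to_ascii_uppercase() } else { '_' })
            .collect()
    };
    match prefix {
        Some(p) => format!("{}_{}_{}", up(p), up(entry), up(field)),
        None => format!("{}_{}", up(entry), up(field)),
    }
}
```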
### Disambiguation rules
- **Tools that locate by `name`** (`secrets_update` / `secrets_delete` / `secrets_history` / `secrets_rollback`): a single match for the user executes directly; multiple matches (same `name`, different `folder`) return an error asking for `folder`. Passing `id` (a UUID) skips disambiguation.
- **`secrets_get`** only fetches by `id` (UUID).
- **`secrets_delete`** with `dry_run=true` uses the same disambiguation rules as a real delete: a unique match previews one entry; multiple matches error and require `folder`.
### Shared secrets
Under the N:N model, deleting an entry only removes the link; a shared secret still referenced by other entries is kept, and is cleaned up automatically once unreferenced.
## Encryption architecture (hybrid E2EE)
@@ -151,12 +179,12 @@ flowchart LR
## Data model
The main table **`entries`** (`folder`, `type`, `name`, `notes`, `tags`, `metadata`, plus `user_id` under multi-tenancy) + the child table **`secrets`** (one encrypted field per row: `name`, `type`, `encrypted`; linked N:N to entries via the `entry_secrets` join table). **Uniqueness**: `UNIQUE(user_id, folder, name)` (legacy rows with NULL `user_id` are unique by `(folder, name)`). There are also `entries_history`, `secrets_history`, and `audit_log`, plus **`users`** (with `key_salt`, `key_check`, `key_params`, `api_key`) and **`oauth_accounts`**. Tables are created by automatic migration on first database connect (`migrate` in `secrets-core`); existing databases can follow [`scripts/migrate-v0.3.0.sql`](scripts/migrate-v0.3.0.sql) for column renames and index rebuilds. **Web login sessions** (tower-sessions) use the same `SECRETS_DATABASE_URL`; the session store is migrated at process start (see `PostgresStore::migrate` in `secrets-mcp`), with no extra environment variables.
The main table **`entries`** (`folder`, `type`, `name`, `notes`, `tags`, `metadata`, plus `user_id` under multi-tenancy) + the child table **`secrets`** (one encrypted field per row: `name`, `type`, `encrypted`; linked N:N to entries via the `entry_secrets` join table). **Uniqueness**: `UNIQUE(user_id, folder, name)` (legacy rows with NULL `user_id` are unique by `(folder, name)`). There are also `entries_history`, `secrets_history`, and `audit_log`, plus **`users`** (with `key_salt`, `key_check`, `key_params`, `api_key`) and **`oauth_accounts`**. Tables are created by automatic migration on first database connect (`migrate` in `secrets-core`); existing databases are likewise brought up to date incrementally by the same `migrate()` at process start (tables, indexes, and the N:N structure). If you need the one-shot SQL from an earlier version, look up the removed `scripts/migrate-v0.3.0.sql` in the git history. **Web login sessions** (tower-sessions) use the same `SECRETS_DATABASE_URL`; the session store is migrated at process start (see `PostgresStore::migrate` in `secrets-mcp`), with no extra environment variables.
| Location | Field | Description |
|------|------|------|
| entries | folder | Organization/isolation space, e.g. `refining`, `ricnsmart`; part of the unique key |
| entries | type | Soft classification, e.g. `server`, `service`, `person`, `document` (extensible; not part of the unique key) |
| entries | type | Soft classification, user-defined, e.g. `server`, `service`, `account`, `person`, `document` (not part of the unique key) |
| entries | name | Human-readable identifier; unique per user together with `folder` |
| entries | notes | Non-sensitive description text |
| entries | metadata | Plaintext JSON (ip, url, subtype, …) |
@@ -174,6 +202,10 @@ flowchart LR
- The same secret can be referenced by multiple entries; deleting one entry does not cascade-delete a shared secret
- A secret no longer referenced by any entry is cleaned up automatically (`NOT EXISTS` subquery)
### Type
The `type` field is a soft classification, freely chosen by the user, with no automatic conversion or normalization. Common examples: `server`, `service`, `account`, `person`, `document`, but any value is accepted.
## Audit log
Write operations such as `add`, `update`, and `delete` are recorded in **`audit_log`** (operation type, target, and summary; never secret plaintext). Under multi-tenancy a **`user_id`** may be recorded (nullable for legacy-row compatibility).
@@ -191,10 +223,18 @@ LIMIT 20;
```
Cargo.toml
crates/secrets-core/ # db / crypto / models / audit / service
src/
taxonomy.rs # SECRET_TYPE_OPTIONS (dropdown options for secret field types)
service/ # business logic (add, search, update, delete, export, env_map, …)
crates/secrets-mcp/ # MCP HTTP, Web, OAuth, API Key
scripts/
migrate-v0.3.0.sql # optional: manual SQL migration (namespace/kind → folder/type; unique key includes folder)
deploy/ # systemd, .env examples
release-check.sh # pre-release fmt / clippy / test
setup-gitea-actions.sh
sync-test-to-prod.sh # sync the test DB to production (as needed)
deploy/
.env.example # environment variable template
secrets-mcp.service # systemd unit file (for production deployment)
postgres-tls-hardening.md # PostgreSQL TLS hardening runbook
```
## CI/CD (Gitea Actions)

View File

@@ -12,11 +12,11 @@ aes-gcm.workspace = true
anyhow.workspace = true
thiserror.workspace = true
chrono.workspace = true
hex = "0.4"
rand.workspace = true
serde.workspace = true
serde_json.workspace = true
serde_yaml.workspace = true
sha2.workspace = true
sqlx.workspace = true
toml.workspace = true
tokio.workspace = true

View File

@@ -5,6 +5,8 @@ use aes_gcm::{
use anyhow::{Context, Result, bail};
use serde_json::Value;
use crate::error::AppError;
const NONCE_LEN: usize = 12;
// ─── AES-256-GCM encrypt / decrypt ───────────────────────────────────────────
@@ -38,7 +40,7 @@ pub fn decrypt(master_key: &[u8; 32], data: &[u8]) -> Result<Vec<u8>> {
let nonce = Nonce::from_slice(nonce_bytes);
cipher
.decrypt(nonce, ciphertext)
.map_err(|_| anyhow::anyhow!("decryption failed — wrong master key or corrupted data"))
.map_err(|_| AppError::DecryptionFailed.into())
}
// ─── JSON helpers ─────────────────────────────────────────────────────────────
@@ -59,7 +61,7 @@ pub fn decrypt_json(master_key: &[u8; 32], data: &[u8]) -> Result<Value> {
/// Parse a 64-char hex string (from X-Encryption-Key header) into a 32-byte key.
pub fn extract_key_from_hex(hex_str: &str) -> Result<[u8; 32]> {
let bytes = hex::decode_hex(hex_str.trim())?;
let bytes = ::hex::decode(hex_str.trim())?;
if bytes.len() != 32 {
bail!(
"X-Encryption-Key must be 64 hex chars (32 bytes), got {} bytes",
@@ -74,21 +76,14 @@ pub fn extract_key_from_hex(hex_str: &str) -> Result<[u8; 32]> {
// ─── Public hex helpers ───────────────────────────────────────────────────────
pub mod hex {
use anyhow::{Result, bail};
use anyhow::Result;
pub fn encode_hex(bytes: &[u8]) -> String {
bytes.iter().map(|b| format!("{:02x}", b)).collect()
::hex::encode(bytes)
}
pub fn decode_hex(s: &str) -> Result<Vec<u8>> {
let s = s.trim();
if !s.len().is_multiple_of(2) {
bail!("hex string has odd length");
}
(0..s.len())
.step_by(2)
.map(|i| u8::from_str_radix(&s[i..i + 2], 16).map_err(|e| anyhow::anyhow!("{}", e)))
.collect()
Ok(::hex::decode(s.trim())?)
}
}

View File

@@ -1,7 +1,7 @@
use std::str::FromStr;
use anyhow::{Context, Result};
use serde_json::Value;
use serde_json::{Map, Value};
use sqlx::PgPool;
use sqlx::postgres::{PgConnectOptions, PgPoolOptions, PgSslMode};
@@ -36,12 +36,31 @@ fn build_connect_options(config: &DatabaseConfig) -> Result<PgConnectOptions> {
pub async fn create_pool(config: &DatabaseConfig) -> Result<PgPool> {
tracing::debug!("connecting to database");
let connect_options = build_connect_options(config)?;
// Connection pool configuration from environment
let max_connections = std::env::var("SECRETS_DATABASE_POOL_SIZE")
.ok()
.and_then(|v| v.parse::<u32>().ok())
.unwrap_or(10);
let acquire_timeout_secs = std::env::var("SECRETS_DATABASE_ACQUIRE_TIMEOUT")
.ok()
.and_then(|v| v.parse::<u64>().ok())
.unwrap_or(5);
let pool = PgPoolOptions::new()
.max_connections(10)
.acquire_timeout(std::time::Duration::from_secs(5))
.max_connections(max_connections)
.acquire_timeout(std::time::Duration::from_secs(acquire_timeout_secs))
.max_lifetime(std::time::Duration::from_secs(1800)) // 30 minutes
.idle_timeout(std::time::Duration::from_secs(600)) // 10 minutes
.connect_with(connect_options)
.await?;
tracing::debug!("database connection established");
tracing::debug!(
max_connections,
acquire_timeout_secs,
"database connection established"
);
Ok(pool)
}
@@ -61,10 +80,12 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
metadata JSONB NOT NULL DEFAULT '{}',
version BIGINT NOT NULL DEFAULT 1,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
deleted_at TIMESTAMPTZ
);
-- Legacy unique constraint without user_id (single-user mode)
-- NOTE: These are rebuilt below with `deleted_at IS NULL` for soft-delete support.
CREATE UNIQUE INDEX IF NOT EXISTS idx_entries_unique_legacy
ON entries(folder, name)
WHERE user_id IS NULL;
@@ -108,6 +129,17 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
);
CREATE INDEX IF NOT EXISTS idx_entry_secrets_secret_id ON entry_secrets(secret_id);
-- ── entry_relations: parent-child links between entries ──────────────────
CREATE TABLE IF NOT EXISTS entry_relations (
parent_entry_id UUID NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
child_entry_id UUID NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
PRIMARY KEY(parent_entry_id, child_entry_id),
CHECK (parent_entry_id <> child_entry_id)
);
CREATE INDEX IF NOT EXISTS idx_entry_relations_parent ON entry_relations(parent_entry_id);
CREATE INDEX IF NOT EXISTS idx_entry_relations_child ON entry_relations(child_entry_id);
-- ── audit_log: append-only operation log ─────────────────────────────────
CREATE TABLE IF NOT EXISTS audit_log (
id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
@@ -151,6 +183,7 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
-- Backfill: add notes to entries if not present (fresh installs already have it)
ALTER TABLE entries ADD COLUMN IF NOT EXISTS notes TEXT NOT NULL DEFAULT '';
ALTER TABLE entries ADD COLUMN IF NOT EXISTS deleted_at TIMESTAMPTZ;
-- ── secrets_history: field-level snapshot ────────────────────────────────
CREATE TABLE IF NOT EXISTS secrets_history (
@@ -385,11 +418,11 @@ async fn migrate_schema(pool: &PgPool) -> Result<()> {
CREATE UNIQUE INDEX IF NOT EXISTS idx_entries_unique_legacy
ON entries(folder, name)
WHERE user_id IS NULL;
WHERE user_id IS NULL AND deleted_at IS NULL;
CREATE UNIQUE INDEX IF NOT EXISTS idx_entries_unique_user
ON entries(user_id, folder, name)
WHERE user_id IS NOT NULL;
WHERE user_id IS NOT NULL AND deleted_at IS NULL;
-- ── Replace old namespace/kind indexes ────────────────────────────────────
DROP INDEX IF EXISTS idx_entries_namespace;
@@ -401,6 +434,8 @@ async fn migrate_schema(pool: &PgPool) -> Result<()> {
ON entries(folder) WHERE folder <> '';
CREATE INDEX IF NOT EXISTS idx_entries_type
ON entries(type) WHERE type <> '';
CREATE INDEX IF NOT EXISTS idx_entries_deleted_at
ON entries(deleted_at) WHERE deleted_at IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_audit_log_folder_type
ON audit_log(folder, type);
CREATE INDEX IF NOT EXISTS idx_entries_history_folder_type_name
@@ -409,6 +444,9 @@ async fn migrate_schema(pool: &PgPool) -> Result<()> {
-- ── Drop legacy actor columns ─────────────────────────────────────────────
ALTER TABLE secrets_history DROP COLUMN IF EXISTS actor;
ALTER TABLE audit_log DROP COLUMN IF EXISTS actor;
-- ── key_version: incremented on passphrase change to invalidate other sessions ──
ALTER TABLE users ADD COLUMN IF NOT EXISTS key_version BIGINT NOT NULL DEFAULT 0;
"#,
)
.execute(pool)
@@ -543,4 +581,75 @@ pub async fn snapshot_secret_history(
Ok(())
}
pub const ENTRY_HISTORY_SECRETS_KEY: &str = "__secrets_snapshot_v1";
#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
pub struct EntrySecretSnapshot {
pub name: String,
#[serde(rename = "type")]
pub secret_type: String,
pub encrypted_hex: String,
}
pub async fn metadata_with_secret_snapshot(
tx: &mut sqlx::Transaction<'_, sqlx::Postgres>,
entry_id: uuid::Uuid,
metadata: &Value,
) -> Result<Value> {
#[derive(sqlx::FromRow)]
struct Row {
name: String,
#[sqlx(rename = "type")]
secret_type: String,
encrypted: Vec<u8>,
}
let rows: Vec<Row> = sqlx::query_as(
"SELECT s.name, s.type, s.encrypted \
FROM entry_secrets es \
JOIN secrets s ON s.id = es.secret_id \
WHERE es.entry_id = $1 \
ORDER BY s.name ASC",
)
.bind(entry_id)
.fetch_all(&mut **tx)
.await?;
let snapshots: Vec<EntrySecretSnapshot> = rows
.into_iter()
.map(|r| EntrySecretSnapshot {
name: r.name,
secret_type: r.secret_type,
encrypted_hex: ::hex::encode(r.encrypted),
})
.collect();
let mut merged = match metadata.clone() {
Value::Object(obj) => obj,
_ => Map::new(),
};
merged.insert(
ENTRY_HISTORY_SECRETS_KEY.to_string(),
serde_json::to_value(snapshots)?,
);
Ok(Value::Object(merged))
}
pub fn strip_secret_snapshot_from_metadata(metadata: &Value) -> Value {
let mut m = match metadata.clone() {
Value::Object(obj) => obj,
_ => return metadata.clone(),
};
m.remove(ENTRY_HISTORY_SECRETS_KEY);
Value::Object(m)
}
pub fn entry_secret_snapshot_from_metadata(metadata: &Value) -> Option<Vec<EntrySecretSnapshot>> {
let Value::Object(map) = metadata else {
return None;
};
let raw = map.get(ENTRY_HISTORY_SECRETS_KEY)?;
serde_json::from_value(raw.clone()).ok()
}
// ── DB helpers ────────────────────────────────────────────────────────────────

View File

@@ -15,12 +15,30 @@ pub enum AppError {
#[error("Entry not found")]
NotFoundEntry,
#[error("User not found")]
NotFoundUser,
#[error("Secret not found")]
NotFoundSecret,
#[error("Authentication failed")]
AuthenticationFailed,
#[error("Unauthorized: insufficient permissions")]
Unauthorized,
#[error("Validation failed: {message}")]
Validation { message: String },
#[error("Concurrent modification detected")]
ConcurrentModification,
#[error("Decryption failed — the encryption key may be incorrect")]
DecryptionFailed,
#[error("Encryption key not set — user must set passphrase first")]
EncryptionKeyNotSet,
#[error(transparent)]
Internal(#[from] anyhow::Error),
}
@@ -116,6 +134,18 @@ mod tests {
let err = AppError::NotFoundEntry;
assert_eq!(err.to_string(), "Entry not found");
let err = AppError::NotFoundUser;
assert_eq!(err.to_string(), "User not found");
let err = AppError::NotFoundSecret;
assert_eq!(err.to_string(), "Secret not found");
let err = AppError::AuthenticationFailed;
assert_eq!(err.to_string(), "Authentication failed");
let err = AppError::Unauthorized;
assert!(err.to_string().contains("Unauthorized"));
let err = AppError::Validation {
message: "too long".to_string(),
};
@@ -123,6 +153,9 @@ mod tests {
let err = AppError::ConcurrentModification;
assert!(err.to_string().contains("Concurrent modification"));
let err = AppError::EncryptionKeyNotSet;
assert!(err.to_string().contains("Encryption key not set"));
}
#[test]

View File

@@ -4,7 +4,7 @@ use serde_json::Value;
use std::collections::BTreeMap;
use uuid::Uuid;
/// A top-level entry (server, service, key, person, …).
/// A top-level entry (server, service, account, person, …).
/// Sensitive fields are stored separately in `secrets`.
#[derive(Debug, Serialize, Deserialize, sqlx::FromRow)]
pub struct Entry {
@@ -21,6 +21,7 @@ pub struct Entry {
pub version: i64,
pub created_at: DateTime<Utc>,
pub updated_at: DateTime<Utc>,
pub deleted_at: Option<DateTime<Utc>>,
}
/// A single encrypted field belonging to an Entry.
@@ -52,6 +53,7 @@ pub struct EntryRow {
pub tags: Vec<String>,
pub metadata: Value,
pub notes: String,
pub name: String,
}
/// Entry row including `name` (used for id-scoped web / service updates).
@@ -66,6 +68,7 @@ pub struct EntryWriteRow {
pub tags: Vec<String>,
pub metadata: Value,
pub notes: String,
pub deleted_at: Option<DateTime<Utc>>,
}
impl From<&EntryWriteRow> for EntryRow {
@@ -78,6 +81,7 @@ impl From<&EntryWriteRow> for EntryRow {
tags: r.tags.clone(),
metadata: r.metadata.clone(),
notes: r.notes.clone(),
name: r.name.clone(),
}
}
}
@@ -200,6 +204,8 @@ pub struct User {
pub key_params: Option<serde_json::Value>,
/// Plaintext API key for MCP Bearer authentication. Auto-created on first login.
pub api_key: Option<String>,
/// Incremented each time the passphrase is changed; used to invalidate sessions on other devices.
pub key_version: i64,
pub created_at: DateTime<Utc>,
pub updated_at: DateTime<Utc>,
}

View File

@@ -9,7 +9,6 @@ use crate::crypto;
use crate::db;
use crate::error::{AppError, DbErrorContext};
use crate::models::EntryRow;
use crate::taxonomy;
// ── Key/value parsing helpers ─────────────────────────────────────────────────
@@ -186,11 +185,19 @@ pub struct AddParams<'a> {
}
pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) -> Result<AddResult> {
let Value::Object(mut metadata_map) = build_json(params.meta_entries)? else {
if params.folder.chars().count() > 128 {
anyhow::bail!("folder must be at most 128 characters");
}
if params.name.chars().count() > 256 {
anyhow::bail!("name must be at most 256 characters");
}
if params.entry_type.trim().chars().count() > 64 {
anyhow::bail!("type must be at most 64 characters");
}
let Value::Object(metadata_map) = build_json(params.meta_entries)? else {
unreachable!("build_json always returns a JSON object");
};
let normalized_entry_type =
taxonomy::normalize_entry_type_and_metadata(params.entry_type, &mut metadata_map);
let entry_type = params.entry_type.trim();
let metadata = Value::Object(metadata_map);
let secret_json = build_json(params.secret_entries)?;
let meta_keys = collect_key_paths(params.meta_entries)?;
@@ -206,8 +213,8 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
// Fetch existing entry by (user_id, folder, name) — the natural unique key
let existing: Option<EntryRow> = if let Some(uid) = params.user_id {
sqlx::query_as(
"SELECT id, version, folder, type, tags, metadata, notes FROM entries \
WHERE user_id = $1 AND folder = $2 AND name = $3",
"SELECT id, version, folder, type, tags, metadata, notes, name FROM entries \
WHERE user_id = $1 AND folder = $2 AND name = $3 AND deleted_at IS NULL",
)
.bind(uid)
.bind(params.folder)
@@ -216,8 +223,8 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
.await?
} else {
sqlx::query_as(
"SELECT id, version, folder, type, tags, metadata, notes FROM entries \
WHERE user_id IS NULL AND folder = $1 AND name = $2",
"SELECT id, version, folder, type, tags, metadata, notes, name FROM entries \
WHERE user_id IS NULL AND folder = $1 AND name = $2 AND deleted_at IS NULL",
)
.bind(params.folder)
.bind(params.name)
@@ -225,26 +232,40 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
.await?
};
if let Some(ref ex) = existing
&& let Err(e) = db::snapshot_entry_history(
if let Some(ref ex) = existing {
let history_metadata =
match db::metadata_with_secret_snapshot(&mut tx, ex.id, &ex.metadata).await {
Ok(v) => v,
Err(e) => {
tracing::warn!(error = %e, "failed to build secret snapshot for entry history");
ex.metadata.clone()
}
};
if let Err(e) = db::snapshot_entry_history(
&mut tx,
db::EntrySnapshotParams {
entry_id: ex.id,
user_id: params.user_id,
folder: params.folder,
entry_type: &normalized_entry_type,
entry_type,
name: params.name,
version: ex.version,
action: "add",
tags: &ex.tags,
metadata: &ex.metadata,
metadata: &history_metadata,
},
)
.await
{
tracing::warn!(error = %e, "failed to snapshot entry history before upsert");
{
tracing::warn!(error = %e, "failed to snapshot entry history before upsert");
}
}
// Upsert the entry row. On conflict (existing entry with same user_id+folder+name),
// the entry columns are replaced wholesale. The old secret associations are torn down
// below within the same transaction, so the whole operation is atomic: if any step
// after this point fails, the transaction rolls back and the entry reverts to its
// pre-upsert state (including the version bump that happened in the DO UPDATE clause).
let entry_id: Uuid = if let Some(uid) = params.user_id {
sqlx::query_scalar(
r#"INSERT INTO entries (user_id, folder, type, name, notes, tags, metadata, version, updated_at)
@@ -262,7 +283,7 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
)
.bind(uid)
.bind(params.folder)
.bind(&normalized_entry_type)
.bind(entry_type)
.bind(params.name)
.bind(params.notes)
.bind(params.tags)
@@ -285,7 +306,7 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
RETURNING id"#,
)
.bind(params.folder)
.bind(&normalized_entry_type)
.bind(entry_type)
.bind(params.name)
.bind(params.notes)
.bind(params.tags)
@@ -300,26 +321,6 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
.fetch_one(&mut *tx)
.await?;
if existing.is_none()
&& let Err(e) = db::snapshot_entry_history(
&mut tx,
db::EntrySnapshotParams {
entry_id,
user_id: params.user_id,
folder: params.folder,
entry_type: &normalized_entry_type,
name: params.name,
version: current_entry_version,
action: "create",
tags: params.tags,
metadata: &metadata,
},
)
.await
{
tracing::warn!(error = %e, "failed to snapshot entry history on create");
}
if existing.is_some() {
#[derive(sqlx::FromRow)]
struct ExistingField {
@@ -429,12 +430,41 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
}
}
if existing.is_none() {
let history_metadata =
match db::metadata_with_secret_snapshot(&mut tx, entry_id, &metadata).await {
Ok(v) => v,
Err(e) => {
tracing::warn!(error = %e, "failed to build secret snapshot for entry history");
metadata.clone()
}
};
if let Err(e) = db::snapshot_entry_history(
&mut tx,
db::EntrySnapshotParams {
entry_id,
user_id: params.user_id,
folder: params.folder,
entry_type,
name: params.name,
version: current_entry_version,
action: "create",
tags: params.tags,
metadata: &history_metadata,
},
)
.await
{
tracing::warn!(error = %e, "failed to snapshot entry history on create");
}
}
crate::audit::log_tx(
&mut tx,
params.user_id,
"add",
params.folder,
&normalized_entry_type,
entry_type,
params.name,
serde_json::json!({
"tags": params.tags,
@@ -449,7 +479,7 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
Ok(AddResult {
name: params.name.to_string(),
folder: params.folder.to_string(),
entry_type: normalized_entry_type,
entry_type: entry_type.to_string(),
tags: params.tags.to_vec(),
meta_keys,
secret_keys,

View File

@@ -2,6 +2,8 @@ use anyhow::Result;
use sqlx::PgPool;
use uuid::Uuid;
use crate::error::AppError;
const KEY_PREFIX: &str = "sk_";
/// Generate a new API key: `sk_<64 hex chars>` = 67 characters total.
@@ -9,28 +11,36 @@ pub fn generate_api_key() -> String {
use rand::RngExt;
let mut bytes = [0u8; 32];
rand::rng().fill(&mut bytes);
let hex: String = bytes.iter().map(|b| format!("{:02x}", b)).collect();
format!("{}{}", KEY_PREFIX, hex)
format!("{}{}", KEY_PREFIX, ::hex::encode(bytes))
}
/// Return the user's existing API key, or generate and store a new one if NULL.
/// Uses a transaction with atomic update to prevent TOCTOU race conditions.
pub async fn ensure_api_key(pool: &PgPool, user_id: Uuid) -> Result<String> {
let existing: Option<(Option<String>,)> =
sqlx::query_as("SELECT api_key FROM users WHERE id = $1")
.bind(user_id)
.fetch_optional(pool)
.await?;
let mut tx = pool.begin().await?;
if let Some((Some(key),)) = existing {
// Lock the row and check existing key
let existing: (Option<String>,) =
sqlx::query_as("SELECT api_key FROM users WHERE id = $1 FOR UPDATE")
.bind(user_id)
.fetch_optional(&mut *tx)
.await?
.ok_or(AppError::NotFoundUser)?;
if let Some(key) = existing.0 {
tx.commit().await?;
return Ok(key);
}
// Generate and store new key atomically
let new_key = generate_api_key();
sqlx::query("UPDATE users SET api_key = $1 WHERE id = $2")
.bind(&new_key)
.bind(user_id)
.execute(pool)
.execute(&mut *tx)
.await?;
tx.commit().await?;
Ok(new_key)
}

View File

@@ -4,20 +4,36 @@ use uuid::Uuid;
use crate::models::AuditLogEntry;
pub async fn list_for_user(pool: &PgPool, user_id: Uuid, limit: i64) -> Result<Vec<AuditLogEntry>> {
pub async fn list_for_user(
pool: &PgPool,
user_id: Uuid,
limit: i64,
offset: i64,
) -> Result<Vec<AuditLogEntry>> {
let limit = limit.clamp(1, 200);
let offset = offset.max(0);
let rows = sqlx::query_as(
"SELECT id, user_id, action, folder, type, name, detail, created_at \
FROM audit_log \
WHERE user_id = $1 \
ORDER BY created_at DESC, id DESC \
LIMIT $2",
LIMIT $2 OFFSET $3",
)
.bind(user_id)
.bind(limit)
.bind(offset)
.fetch_all(pool)
.await?;
Ok(rows)
}
pub async fn count_for_user(pool: &PgPool, user_id: Uuid) -> Result<i64> {
let count: i64 =
sqlx::query_scalar("SELECT COUNT(*)::bigint FROM audit_log WHERE user_id = $1")
.bind(user_id)
.fetch_one(pool)
.await?;
Ok(count)
}

View File

@@ -4,7 +4,9 @@ use sqlx::PgPool;
use uuid::Uuid;
use crate::db;
use crate::error::AppError;
use crate::models::{EntryRow, EntryWriteRow, SecretFieldRow};
use crate::service::util::user_scope_condition;
#[derive(Debug, serde::Serialize)]
pub struct DeletedEntry {
@@ -20,6 +22,17 @@ pub struct DeleteResult {
pub dry_run: bool,
}
#[derive(Debug, serde::Serialize, sqlx::FromRow)]
pub struct TrashEntry {
pub id: Uuid,
pub name: String,
pub folder: String,
#[serde(rename = "type")]
#[sqlx(rename = "type")]
pub entry_type: String,
pub deleted_at: chrono::DateTime<chrono::Utc>,
}
pub struct DeleteParams<'a> {
/// If set, delete a single entry by name.
pub name: Option<&'a str>,
@@ -31,12 +44,160 @@ pub struct DeleteParams<'a> {
pub user_id: Option<Uuid>,
}
/// Maximum number of entries that can be deleted in a single bulk operation.
/// Prevents accidental mass deletion when filters are too broad.
pub const MAX_BULK_DELETE: usize = 1000;
pub async fn list_deleted_entries(
pool: &PgPool,
user_id: Uuid,
limit: u32,
offset: u32,
) -> Result<Vec<TrashEntry>> {
sqlx::query_as(
"SELECT id, name, folder, type, deleted_at FROM entries \
WHERE user_id = $1 AND deleted_at IS NOT NULL \
ORDER BY deleted_at DESC, name ASC LIMIT $2 OFFSET $3",
)
.bind(user_id)
.bind(limit as i64)
.bind(offset as i64)
.fetch_all(pool)
.await
.map_err(Into::into)
}
pub async fn count_deleted_entries(pool: &PgPool, user_id: Uuid) -> Result<i64> {
sqlx::query_scalar::<_, i64>(
"SELECT COUNT(*)::bigint FROM entries WHERE user_id = $1 AND deleted_at IS NOT NULL",
)
.bind(user_id)
.fetch_one(pool)
.await
.map_err(Into::into)
}
pub async fn restore_deleted_by_id(pool: &PgPool, entry_id: Uuid, user_id: Uuid) -> Result<()> {
let mut tx = pool.begin().await?;
let row: Option<EntryWriteRow> = sqlx::query_as(
"SELECT id, version, folder, type, name, tags, metadata, notes, deleted_at FROM entries \
WHERE id = $1 AND user_id = $2 AND deleted_at IS NOT NULL FOR UPDATE",
)
.bind(entry_id)
.bind(user_id)
.fetch_optional(&mut *tx)
.await?;
let row = match row {
Some(r) => r,
None => {
tx.rollback().await?;
return Err(AppError::NotFoundEntry.into());
}
};
let conflict_exists: bool = sqlx::query_scalar(
"SELECT EXISTS(SELECT 1 FROM entries \
WHERE user_id = $1 AND folder = $2 AND name = $3 AND deleted_at IS NULL AND id <> $4)",
)
.bind(user_id)
.bind(&row.folder)
.bind(&row.name)
.bind(row.id)
.fetch_one(&mut *tx)
.await?;
if conflict_exists {
tx.rollback().await?;
return Err(AppError::ConflictEntryName {
folder: row.folder,
name: row.name,
}
.into());
}
sqlx::query("UPDATE entries SET deleted_at = NULL, updated_at = NOW() WHERE id = $1")
.bind(row.id)
.execute(&mut *tx)
.await?;
crate::audit::log_tx(
&mut tx,
Some(user_id),
"restore",
&row.folder,
&row.entry_type,
&row.name,
json!({ "entry_id": row.id }),
)
.await;
tx.commit().await?;
Ok(())
}
pub async fn purge_deleted_by_id(pool: &PgPool, entry_id: Uuid, user_id: Uuid) -> Result<()> {
let mut tx = pool.begin().await?;
let row: Option<EntryWriteRow> = sqlx::query_as(
"SELECT id, version, folder, type, name, tags, metadata, notes, deleted_at FROM entries \
WHERE id = $1 AND user_id = $2 AND deleted_at IS NOT NULL FOR UPDATE",
)
.bind(entry_id)
.bind(user_id)
.fetch_optional(&mut *tx)
.await?;
let row = match row {
Some(r) => r,
None => {
tx.rollback().await?;
return Err(AppError::NotFoundEntry.into());
}
};
purge_entry_record(&mut tx, row.id).await?;
crate::audit::log_tx(
&mut tx,
Some(user_id),
"purge",
&row.folder,
&row.entry_type,
&row.name,
json!({ "entry_id": row.id }),
)
.await;
tx.commit().await?;
Ok(())
}
pub async fn purge_expired_deleted_entries(pool: &PgPool) -> Result<u64> {
#[derive(sqlx::FromRow)]
struct ExpiredRow {
id: Uuid,
}
let mut tx = pool.begin().await?;
let rows: Vec<ExpiredRow> = sqlx::query_as(
"SELECT id FROM entries \
WHERE deleted_at IS NOT NULL \
AND deleted_at < NOW() - INTERVAL '3 months' \
FOR UPDATE",
)
.fetch_all(&mut *tx)
.await?;
for row in &rows {
purge_entry_record(&mut tx, row.id).await?;
}
tx.commit().await?;
Ok(rows.len() as u64)
}
/// Delete a single entry by id (multi-tenant: `user_id` must match).
pub async fn delete_by_id(pool: &PgPool, entry_id: Uuid, user_id: Uuid) -> Result<DeleteResult> {
let mut tx = pool.begin().await?;
let row: Option<EntryWriteRow> = sqlx::query_as(
"SELECT id, version, folder, type, name, tags, metadata, notes FROM entries \
WHERE id = $1 AND user_id = $2 FOR UPDATE",
"SELECT id, version, folder, type, name, tags, metadata, notes, deleted_at FROM entries \
WHERE id = $1 AND user_id = $2 AND deleted_at IS NULL FOR UPDATE",
)
.bind(entry_id)
.bind(user_id)
@@ -56,7 +217,7 @@ pub async fn delete_by_id(pool: &PgPool, entry_id: Uuid, user_id: Uuid) -> Resul
let name = row.name.clone();
let entry_row: EntryRow = (&row).into();
snapshot_and_delete(
snapshot_and_soft_delete(
&mut tx,
&folder,
&entry_type,
@@ -122,48 +283,32 @@ async fn delete_one(
// - 2+ matches → disambiguation error (same as non-dry-run)
#[derive(sqlx::FromRow)]
struct DryRunRow {
#[allow(dead_code)]
id: Uuid,
folder: String,
#[sqlx(rename = "type")]
entry_type: String,
}
let rows: Vec<DryRunRow> = if let Some(uid) = user_id {
if let Some(f) = folder {
sqlx::query_as(
"SELECT id, folder, type FROM entries WHERE user_id = $1 AND folder = $2 AND name = $3",
)
.bind(uid)
.bind(f)
.bind(name)
.fetch_all(pool)
.await?
} else {
sqlx::query_as(
"SELECT id, folder, type FROM entries WHERE user_id = $1 AND name = $2",
)
.bind(uid)
.bind(name)
.fetch_all(pool)
.await?
}
} else if let Some(f) = folder {
sqlx::query_as(
"SELECT id, folder, type FROM entries WHERE user_id IS NULL AND folder = $1 AND name = $2",
)
.bind(f)
.bind(name)
.fetch_all(pool)
.await?
} else {
sqlx::query_as(
"SELECT id, folder, type FROM entries WHERE user_id IS NULL AND name = $1",
)
.bind(name)
.fetch_all(pool)
.await?
};
let mut idx = 1i32;
let user_cond = user_scope_condition(user_id, &mut idx);
let mut conditions = vec![user_cond];
if folder.is_some() {
conditions.push(format!("folder = ${}", idx));
idx += 1;
}
conditions.push(format!("name = ${}", idx));
let sql = format!(
"SELECT folder, type FROM entries WHERE {} AND deleted_at IS NULL",
conditions.join(" AND ")
);
let mut q = sqlx::query_as::<_, DryRunRow>(&sql);
if let Some(uid) = user_id {
q = q.bind(uid);
}
if let Some(f) = folder {
q = q.bind(f);
}
q = q.bind(name);
let rows = q.fetch_all(pool).await?;
return match rows.len() {
0 => Ok(DeleteResult {
@@ -171,7 +316,10 @@ async fn delete_one(
dry_run: true,
}),
1 => {
let row = rows.into_iter().next().unwrap();
let row = rows
.into_iter()
.next()
.ok_or_else(|| anyhow::anyhow!("internal: matched row vanished"))?;
Ok(DeleteResult {
deleted: vec![DeletedEntry {
name: name.to_string(),
@@ -197,45 +345,28 @@ async fn delete_one(
let mut tx = pool.begin().await?;
// Fetch matching rows with FOR UPDATE; use folder when provided to resolve ambiguity.
let rows: Vec<EntryRow> = if let Some(uid) = user_id {
if let Some(f) = folder {
sqlx::query_as(
"SELECT id, version, folder, type, tags, metadata, notes FROM entries \
WHERE user_id = $1 AND folder = $2 AND name = $3 FOR UPDATE",
)
.bind(uid)
.bind(f)
.bind(name)
.fetch_all(&mut *tx)
.await?
} else {
sqlx::query_as(
"SELECT id, version, folder, type, tags, metadata, notes FROM entries \
WHERE user_id = $1 AND name = $2 FOR UPDATE",
)
.bind(uid)
.bind(name)
.fetch_all(&mut *tx)
.await?
}
} else if let Some(f) = folder {
sqlx::query_as(
"SELECT id, version, folder, type, tags, metadata, notes FROM entries \
WHERE user_id IS NULL AND folder = $1 AND name = $2 FOR UPDATE",
)
.bind(f)
.bind(name)
.fetch_all(&mut *tx)
.await?
} else {
sqlx::query_as(
"SELECT id, version, folder, type, tags, metadata, notes FROM entries \
WHERE user_id IS NULL AND name = $1 FOR UPDATE",
)
.bind(name)
.fetch_all(&mut *tx)
.await?
};
let mut idx = 1i32;
let user_cond = user_scope_condition(user_id, &mut idx);
let mut conditions = vec![user_cond];
if folder.is_some() {
conditions.push(format!("folder = ${}", idx));
idx += 1;
}
conditions.push(format!("name = ${}", idx));
let sql = format!(
"SELECT id, version, folder, type, tags, metadata, notes, name FROM entries \
WHERE {} AND deleted_at IS NULL FOR UPDATE",
conditions.join(" AND ")
);
let mut q = sqlx::query_as::<_, EntryRow>(&sql);
if let Some(uid) = user_id {
q = q.bind(uid);
}
if let Some(f) = folder {
q = q.bind(f);
}
q = q.bind(name);
let rows = q.fetch_all(&mut *tx).await?;
let row = match rows.len() {
0 => {
@@ -245,7 +376,10 @@ async fn delete_one(
dry_run: false,
});
}
1 => rows.into_iter().next().unwrap(),
1 => rows
.into_iter()
.next()
.ok_or_else(|| anyhow::anyhow!("internal: matched row vanished"))?,
_ => {
tx.rollback().await?;
let folders: Vec<&str> = rows.iter().map(|r| r.folder.as_str()).collect();
@@ -261,7 +395,7 @@ async fn delete_one(
let folder = row.folder.clone();
let entry_type = row.entry_type.clone();
snapshot_and_delete(&mut tx, &folder, &entry_type, name, &row, user_id).await?;
snapshot_and_soft_delete(&mut tx, &folder, &entry_type, name, &row, user_id).await?;
crate::audit::log_tx(
&mut tx,
user_id,
@@ -328,7 +462,7 @@ async fn delete_bulk(
if dry_run {
let sql = format!(
"SELECT id, version, folder, type, name, metadata, tags, notes \
FROM entries {where_clause} ORDER BY type, name"
FROM entries {where_clause} AND deleted_at IS NULL ORDER BY type, name"
);
let mut q = sqlx::query_as::<_, FullEntryRow>(&sql);
if let Some(uid) = user_id {
@@ -360,7 +494,7 @@ async fn delete_bulk(
let sql = format!(
"SELECT id, version, folder, type, name, metadata, tags, notes \
FROM entries {where_clause} ORDER BY type, name FOR UPDATE"
FROM entries {where_clause} AND deleted_at IS NULL ORDER BY type, name FOR UPDATE"
);
let mut q = sqlx::query_as::<_, FullEntryRow>(&sql);
if let Some(uid) = user_id {
@@ -374,6 +508,16 @@ async fn delete_bulk(
}
let rows = q.fetch_all(&mut *tx).await?;
if rows.len() > MAX_BULK_DELETE {
tx.rollback().await?;
anyhow::bail!(
"Bulk delete would affect {} entries (limit: {}). \
Narrow your filters or delete entries individually.",
rows.len(),
MAX_BULK_DELETE,
);
}
let mut deleted = Vec::with_capacity(rows.len());
for row in &rows {
let entry_row: EntryRow = EntryRow {
@@ -384,8 +528,9 @@ async fn delete_bulk(
tags: row.tags.clone(),
metadata: row.metadata.clone(),
notes: row.notes.clone(),
name: row.name.clone(),
};
snapshot_and_delete(
snapshot_and_soft_delete(
&mut tx,
&row.folder,
&row.entry_type,
@@ -419,7 +564,7 @@ async fn delete_bulk(
})
}
async fn snapshot_and_delete(
async fn snapshot_and_soft_delete(
tx: &mut sqlx::Transaction<'_, sqlx::Postgres>,
folder: &str,
entry_type: &str,
@@ -427,6 +572,15 @@ async fn snapshot_and_delete(
row: &EntryRow,
user_id: Option<Uuid>,
) -> Result<()> {
let history_metadata = match db::metadata_with_secret_snapshot(tx, row.id, &row.metadata).await
{
Ok(v) => v,
Err(e) => {
tracing::warn!(error = %e, "failed to build secret snapshot for entry history");
row.metadata.clone()
}
};
if let Err(e) = db::snapshot_entry_history(
tx,
db::EntrySnapshotParams {
@@ -438,7 +592,7 @@ async fn snapshot_and_delete(
version: row.version,
action: "delete",
tags: &row.tags,
metadata: &row.metadata,
metadata: &history_metadata,
},
)
.await
@@ -472,11 +626,33 @@ async fn snapshot_and_delete(
}
}
sqlx::query("DELETE FROM entries WHERE id = $1")
sqlx::query("UPDATE entries SET deleted_at = NOW(), updated_at = NOW() WHERE id = $1")
.bind(row.id)
.execute(&mut **tx)
.await?;
Ok(())
}
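This hunk turns the hard DELETE into a soft delete: rows are flagged via `deleted_at`, every live-path query in this diff gains an `AND deleted_at IS NULL` filter, and `purge_entry_record` below performs the eventual hard removal. A minimal sketch of both sides of that contract (assumes a `PgPool`; table and column names as above):

```rust
use sqlx::PgPool;
use uuid::Uuid;

// Soft delete: the row stays in place, flagged by deleted_at.
async fn soft_delete(pool: &PgPool, id: Uuid) -> anyhow::Result<()> {
    sqlx::query("UPDATE entries SET deleted_at = NOW(), updated_at = NOW() WHERE id = $1")
        .bind(id)
        .execute(pool)
        .await?;
    Ok(())
}

// Every live-path read must now exclude soft-deleted rows explicitly.
async fn is_live(pool: &PgPool, id: Uuid) -> anyhow::Result<bool> {
    let live: bool = sqlx::query_scalar(
        "SELECT EXISTS(SELECT 1 FROM entries WHERE id = $1 AND deleted_at IS NULL)",
    )
    .bind(id)
    .fetch_one(pool)
    .await?;
    Ok(live)
}
```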
async fn purge_entry_record(
tx: &mut sqlx::Transaction<'_, sqlx::Postgres>,
entry_id: Uuid,
) -> Result<()> {
let fields: Vec<SecretFieldRow> = sqlx::query_as(
"SELECT s.id, s.name, s.encrypted \
FROM entry_secrets es \
JOIN secrets s ON s.id = es.secret_id \
WHERE es.entry_id = $1",
)
.bind(entry_id)
.fetch_all(&mut **tx)
.await?;
sqlx::query("DELETE FROM entries WHERE id = $1")
.bind(entry_id)
.execute(&mut **tx)
.await?;
let secret_ids: Vec<Uuid> = fields.iter().map(|f| f.id).collect();
if !secret_ids.is_empty() {
sqlx::query(


@@ -5,7 +5,6 @@ use std::collections::HashMap;
use uuid::Uuid;
use crate::crypto;
use crate::models::Entry;
use crate::service::search::{fetch_entries, fetch_secrets_for_entries};
/// Build an env variable map from entry secrets (for dry-run preview or injection).
@@ -21,57 +20,44 @@ pub async fn build_env_map(
master_key: &[u8; 32],
user_id: Option<Uuid>,
) -> Result<HashMap<String, String>> {
let entries = fetch_entries(pool, folder, entry_type, name, tags, None, user_id).await?;
let entries = fetch_entries(pool, folder, entry_type, name, tags, None, None, user_id).await?;
if entries.is_empty() {
return Ok(HashMap::new());
}
let entry_ids: Vec<Uuid> = entries.iter().map(|e| e.id).collect();
let secrets_map = fetch_secrets_for_entries(pool, &entry_ids).await?;
let mut combined: HashMap<String, String> = HashMap::new();
for entry in &entries {
let entry_map =
build_entry_env_map(pool, entry, only_fields, prefix, master_key, user_id).await?;
combined.extend(entry_map);
let all_fields = secrets_map.get(&entry.id).map(Vec::as_slice).unwrap_or(&[]);
let effective_prefix = env_prefix(entry, prefix);
let fields: Vec<_> = if only_fields.is_empty() {
all_fields.iter().collect()
} else {
all_fields
.iter()
.filter(|f| only_fields.contains(&f.name))
.collect()
};
for f in fields {
let decrypted = crypto::decrypt_json(master_key, &f.encrypted)?;
let key = format!(
"{}_{}",
effective_prefix,
f.name.to_uppercase().replace(['-', '.'], "_")
);
combined.insert(key, json_to_env_string(&decrypted));
}
}
Ok(combined)
}
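The inlined loop above derives each env key as `{PREFIX}_{FIELD}`, upper-casing the field name and mapping `-` and `.` to `_` (while `env_prefix` below additionally folds spaces in the entry name). A self-contained sketch of that derivation:

```rust
// Sketch of the key derivation used in the loop above; the effective prefix is
// assumed to come from env_prefix (entry name folded, or a caller-supplied prefix).
fn env_key(effective_prefix: &str, field_name: &str) -> String {
    format!(
        "{}_{}",
        effective_prefix,
        field_name.to_uppercase().replace(['-', '.'], "_")
    )
}

fn main() {
    // An entry named "my-app" with no caller prefix yields the prefix "MY_APP".
    assert_eq!(env_key("MY_APP", "api.key"), "MY_APP_API_KEY");
    assert_eq!(env_key("MY_APP", "db-password"), "MY_APP_DB_PASSWORD");
}
```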
async fn build_entry_env_map(
pool: &PgPool,
entry: &Entry,
only_fields: &[String],
prefix: &str,
master_key: &[u8; 32],
_user_id: Option<Uuid>,
) -> Result<HashMap<String, String>> {
let entry_ids = vec![entry.id];
let secrets_map = fetch_secrets_for_entries(pool, &entry_ids).await?;
let all_fields = secrets_map.get(&entry.id).map(Vec::as_slice).unwrap_or(&[]);
let fields: Vec<_> = if only_fields.is_empty() {
all_fields.iter().collect()
} else {
all_fields
.iter()
.filter(|f| only_fields.contains(&f.name))
.collect()
};
let effective_prefix = env_prefix(entry, prefix);
let mut map = HashMap::new();
for f in fields {
let decrypted = crypto::decrypt_json(master_key, &f.encrypted)?;
let key = format!(
"{}_{}",
effective_prefix,
f.name.to_uppercase().replace(['-', '.'], "_")
);
map.insert(key, json_to_env_string(&decrypted));
}
Ok(map)
}
fn env_prefix(entry: &Entry, prefix: &str) -> String {
fn env_prefix(entry: &crate::models::Entry, prefix: &str) -> String {
let name_part = entry.name.to_uppercase().replace(['-', '.', ' '], "_");
if prefix.is_empty() {
name_part


@@ -30,6 +30,7 @@ pub async fn export(
params.name,
params.tags,
params.query,
None,
params.user_id,
)
.await?;


@@ -5,6 +5,7 @@ use std::collections::HashMap;
use uuid::Uuid;
use crate::crypto;
use crate::error::AppError;
use crate::service::search::{fetch_secrets_for_entries, resolve_entry, resolve_entry_by_id};
/// Decrypt a single named field from an entry.
@@ -64,7 +65,7 @@ pub async fn get_secret_field_by_id(
) -> Result<Value> {
resolve_entry_by_id(pool, entry_id, user_id)
.await
.map_err(|_| anyhow::anyhow!("Entry with id '{}' not found", entry_id))?;
.map_err(|_| anyhow::Error::from(AppError::NotFoundEntry))?;
let entry_ids = vec![entry_id];
let secrets_map = fetch_secrets_for_entries(pool, &entry_ids).await?;
@@ -89,7 +90,7 @@ pub async fn get_all_secrets_by_id(
// Validate entry exists (and that it belongs to the requesting user)
resolve_entry_by_id(pool, entry_id, user_id)
.await
.map_err(|_| anyhow::anyhow!("Entry with id '{}' not found", entry_id))?;
.map_err(|_| anyhow::Error::from(AppError::NotFoundEntry))?;
let entry_ids = vec![entry_id];
let secrets_map = fetch_secrets_for_entries(pool, &entry_ids).await?;


@@ -31,8 +31,11 @@ pub async fn run(
let entry = resolve_entry(pool, name, folder, user_id).await?;
let rows: Vec<Row> = sqlx::query_as(
"SELECT version, action, created_at FROM entries_history \
WHERE entry_id = $1 ORDER BY id DESC LIMIT $2",
"SELECT DISTINCT ON (version) version, action, created_at \
FROM entries_history \
WHERE entry_id = $1 \
ORDER BY version DESC, id DESC \
LIMIT $2",
)
.bind(entry.id)
.bind(limit as i64)
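`DISTINCT ON (version)` keeps exactly one history row per version: the first row under the `ORDER BY`, i.e. the one with the highest `id`, so re-recorded versions no longer show up twice in the listing. An in-memory equivalent of that dedup:

```rust
use std::collections::BTreeMap;

// In-memory equivalent of `SELECT DISTINCT ON (version) ... ORDER BY version DESC, id DESC`:
// keep the row with the highest id per version, newest version first.
fn dedup_history(rows: &[(i64 /* version */, i64 /* id */)]) -> Vec<(i64, i64)> {
    let mut best: BTreeMap<i64, i64> = BTreeMap::new();
    for &(version, id) in rows {
        let e = best.entry(version).or_insert(id);
        if id > *e {
            *e = id;
        }
    }
    best.into_iter().rev().collect() // version DESC
}

fn main() {
    let rows = [(3, 10), (3, 12), (2, 7), (1, 4)];
    assert_eq!(dedup_history(&rows), vec![(3, 12), (2, 7), (1, 4)]);
}
```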


@@ -54,7 +54,13 @@ pub async fn run(
.bind(params.user_id)
.fetch_one(pool)
.await
.unwrap_or(false);
.map_err(|e| {
anyhow::anyhow!(
"Failed to check entry existence for '{}': {}",
entry.name,
e
)
})?;
if exists && !params.force {
return Err(anyhow::anyhow!(


@@ -7,7 +7,9 @@ pub mod export;
pub mod get_secret;
pub mod history;
pub mod import;
pub mod relations;
pub mod rollback;
pub mod search;
pub mod update;
pub mod user;
pub mod util;


@@ -0,0 +1,324 @@
use std::collections::{BTreeSet, HashMap};
use anyhow::Result;
use sqlx::PgPool;
use uuid::Uuid;
use crate::error::AppError;
#[derive(Debug, Clone, serde::Serialize, sqlx::FromRow)]
pub struct RelationEntrySummary {
pub id: Uuid,
pub folder: String,
#[serde(rename = "type")]
#[sqlx(rename = "type")]
pub entry_type: String,
pub name: String,
}
#[derive(Debug, Clone, Default, serde::Serialize)]
pub struct EntryRelations {
pub parents: Vec<RelationEntrySummary>,
pub children: Vec<RelationEntrySummary>,
}
pub async fn add_parent_relation(
pool: &PgPool,
parent_entry_id: Uuid,
child_entry_id: Uuid,
user_id: Option<Uuid>,
) -> Result<()> {
if parent_entry_id == child_entry_id {
return Err(AppError::Validation {
message: "entry cannot reference itself".to_string(),
}
.into());
}
let mut tx = pool.begin().await?;
validate_live_entries(&mut tx, &[parent_entry_id, child_entry_id], user_id).await?;
let cycle_exists: bool = sqlx::query_scalar(
"WITH RECURSIVE descendants AS ( \
SELECT child_entry_id FROM entry_relations WHERE parent_entry_id = $1 \
UNION \
SELECT er.child_entry_id \
FROM entry_relations er \
JOIN descendants d ON d.child_entry_id = er.parent_entry_id \
) \
SELECT EXISTS(SELECT 1 FROM descendants WHERE child_entry_id = $2)",
)
.bind(child_entry_id)
.bind(parent_entry_id)
.fetch_one(&mut *tx)
.await?;
if cycle_exists {
tx.rollback().await?;
return Err(AppError::Validation {
message: "adding this relation would create a cycle".to_string(),
}
.into());
}
sqlx::query(
"INSERT INTO entry_relations (parent_entry_id, child_entry_id) \
VALUES ($1, $2) ON CONFLICT DO NOTHING",
)
.bind(parent_entry_id)
.bind(child_entry_id)
.execute(&mut *tx)
.await?;
tx.commit().await?;
Ok(())
}
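The recursive CTE walks every descendant of the prospective child; if the prospective parent shows up in that set, inserting the edge would close a cycle. An in-memory equivalent of the same reachability check:

```rust
use std::collections::{HashMap, HashSet};

// In-memory equivalent of the recursive CTE above: starting from `child`, walk
// parent -> child edges; if `parent` is reachable, the new edge would form a cycle.
fn would_create_cycle(edges: &HashMap<u32, Vec<u32>>, parent: u32, child: u32) -> bool {
    let mut stack = vec![child];
    let mut seen = HashSet::new();
    while let Some(node) = stack.pop() {
        if node == parent {
            return true;
        }
        if seen.insert(node) {
            if let Some(children) = edges.get(&node) {
                stack.extend(children);
            }
        }
    }
    false
}

fn main() {
    let mut edges: HashMap<u32, Vec<u32>> = HashMap::new();
    edges.insert(1, vec![2]); // 1 is a parent of 2
    edges.insert(2, vec![3]); // 2 is a parent of 3
    assert!(would_create_cycle(&edges, 3, 1)); // 3 -> 1 would close 1 -> 2 -> 3 -> 1
    assert!(!would_create_cycle(&edges, 1, 4)); // 1 -> 4 is fine
}
```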
pub async fn remove_parent_relation(
pool: &PgPool,
parent_entry_id: Uuid,
child_entry_id: Uuid,
user_id: Option<Uuid>,
) -> Result<()> {
let mut tx = pool.begin().await?;
validate_live_entries(&mut tx, &[parent_entry_id, child_entry_id], user_id).await?;
sqlx::query("DELETE FROM entry_relations WHERE parent_entry_id = $1 AND child_entry_id = $2")
.bind(parent_entry_id)
.bind(child_entry_id)
.execute(&mut *tx)
.await?;
tx.commit().await?;
Ok(())
}
pub async fn set_parent_relations(
pool: &PgPool,
child_entry_id: Uuid,
parent_entry_ids: &[Uuid],
user_id: Option<Uuid>,
) -> Result<()> {
let deduped: Vec<Uuid> = parent_entry_ids
.iter()
.copied()
.collect::<BTreeSet<_>>()
.into_iter()
.collect();
if deduped.contains(&child_entry_id) {
return Err(AppError::Validation {
message: "entry cannot reference itself".to_string(),
}
.into());
}
let mut tx = pool.begin().await?;
let mut validate_ids = Vec::with_capacity(deduped.len() + 1);
validate_ids.push(child_entry_id);
validate_ids.extend(deduped.iter().copied());
validate_live_entries(&mut tx, &validate_ids, user_id).await?;
let current_parent_ids: Vec<Uuid> =
sqlx::query_scalar("SELECT parent_entry_id FROM entry_relations WHERE child_entry_id = $1")
.bind(child_entry_id)
.fetch_all(&mut *tx)
.await?;
let current: BTreeSet<Uuid> = current_parent_ids.into_iter().collect();
let target: BTreeSet<Uuid> = deduped.iter().copied().collect();
for parent_id in current.difference(&target) {
sqlx::query(
"DELETE FROM entry_relations WHERE parent_entry_id = $1 AND child_entry_id = $2",
)
.bind(*parent_id)
.bind(child_entry_id)
.execute(&mut *tx)
.await?;
}
for parent_id in target.difference(&current) {
let cycle_exists: bool = sqlx::query_scalar(
"WITH RECURSIVE descendants AS ( \
SELECT child_entry_id FROM entry_relations WHERE parent_entry_id = $1 \
UNION \
SELECT er.child_entry_id \
FROM entry_relations er \
JOIN descendants d ON d.child_entry_id = er.parent_entry_id \
) \
SELECT EXISTS(SELECT 1 FROM descendants WHERE child_entry_id = $2)",
)
.bind(child_entry_id)
.bind(*parent_id)
.fetch_one(&mut *tx)
.await?;
if cycle_exists {
tx.rollback().await?;
return Err(AppError::Validation {
message: "adding this relation would create a cycle".to_string(),
}
.into());
}
sqlx::query(
"INSERT INTO entry_relations (parent_entry_id, child_entry_id) VALUES ($1, $2) \
ON CONFLICT DO NOTHING",
)
.bind(*parent_id)
.bind(child_entry_id)
.execute(&mut *tx)
.await?;
}
tx.commit().await?;
Ok(())
}
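`set_parent_relations` reconciles by set difference, so links present in both `current` and `target` are never touched: only rows in `current` but not `target` are deleted, and rows in `target` but not `current` are inserted (each insert re-running the cycle check). A sketch of the diff step:

```rust
use std::collections::BTreeSet;

// Sketch of the reconciliation above: compute which relations to delete and
// which to insert, leaving the intersection alone.
fn reconcile(current: &BTreeSet<u32>, target: &BTreeSet<u32>) -> (Vec<u32>, Vec<u32>) {
    let to_delete = current.difference(target).copied().collect();
    let to_insert = target.difference(current).copied().collect();
    (to_delete, to_insert)
}

fn main() {
    let current: BTreeSet<u32> = [1, 2, 3].into();
    let target: BTreeSet<u32> = [2, 3, 4].into();
    assert_eq!(reconcile(&current, &target), (vec![1], vec![4]));
}
```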
pub async fn get_relations_for_entries(
pool: &PgPool,
entry_ids: &[Uuid],
user_id: Option<Uuid>,
) -> Result<HashMap<Uuid, EntryRelations>> {
if entry_ids.is_empty() {
return Ok(HashMap::new());
}
#[derive(sqlx::FromRow)]
struct ParentRow {
owner_entry_id: Uuid,
id: Uuid,
folder: String,
#[sqlx(rename = "type")]
entry_type: String,
name: String,
}
#[derive(sqlx::FromRow)]
struct ChildRow {
owner_entry_id: Uuid,
id: Uuid,
folder: String,
#[sqlx(rename = "type")]
entry_type: String,
name: String,
}
let (parents, children): (Vec<ParentRow>, Vec<ChildRow>) = if let Some(uid) = user_id {
let parents = sqlx::query_as(
"SELECT er.child_entry_id AS owner_entry_id, p.id, p.folder, p.type, p.name \
FROM entry_relations er \
JOIN entries p ON p.id = er.parent_entry_id \
JOIN entries c ON c.id = er.child_entry_id \
WHERE er.child_entry_id = ANY($1) \
AND p.user_id = $2 AND c.user_id = $2 \
AND p.deleted_at IS NULL AND c.deleted_at IS NULL \
ORDER BY er.child_entry_id, p.name ASC",
)
.bind(entry_ids)
.bind(uid)
.fetch_all(pool);
let children = sqlx::query_as(
"SELECT er.parent_entry_id AS owner_entry_id, c.id, c.folder, c.type, c.name \
FROM entry_relations er \
JOIN entries c ON c.id = er.child_entry_id \
JOIN entries p ON p.id = er.parent_entry_id \
WHERE er.parent_entry_id = ANY($1) \
AND p.user_id = $2 AND c.user_id = $2 \
AND p.deleted_at IS NULL AND c.deleted_at IS NULL \
ORDER BY er.parent_entry_id, c.name ASC",
)
.bind(entry_ids)
.bind(uid)
.fetch_all(pool);
(parents.await?, children.await?)
} else {
let parents = sqlx::query_as(
"SELECT er.child_entry_id AS owner_entry_id, p.id, p.folder, p.type, p.name \
FROM entry_relations er \
JOIN entries p ON p.id = er.parent_entry_id \
JOIN entries c ON c.id = er.child_entry_id \
WHERE er.child_entry_id = ANY($1) \
AND p.user_id IS NULL AND c.user_id IS NULL \
AND p.deleted_at IS NULL AND c.deleted_at IS NULL \
ORDER BY er.child_entry_id, p.name ASC",
)
.bind(entry_ids)
.fetch_all(pool);
let children = sqlx::query_as(
"SELECT er.parent_entry_id AS owner_entry_id, c.id, c.folder, c.type, c.name \
FROM entry_relations er \
JOIN entries c ON c.id = er.child_entry_id \
JOIN entries p ON p.id = er.parent_entry_id \
WHERE er.parent_entry_id = ANY($1) \
AND p.user_id IS NULL AND c.user_id IS NULL \
AND p.deleted_at IS NULL AND c.deleted_at IS NULL \
ORDER BY er.parent_entry_id, c.name ASC",
)
.bind(entry_ids)
.fetch_all(pool);
(parents.await?, children.await?)
};
let mut map: HashMap<Uuid, EntryRelations> = entry_ids
.iter()
.copied()
.map(|id| (id, EntryRelations::default()))
.collect();
for row in parents {
map.entry(row.owner_entry_id)
.or_default()
.parents
.push(RelationEntrySummary {
id: row.id,
folder: row.folder,
entry_type: row.entry_type,
name: row.name,
});
}
for row in children {
map.entry(row.owner_entry_id)
.or_default()
.children
.push(RelationEntrySummary {
id: row.id,
folder: row.folder,
entry_type: row.entry_type,
name: row.name,
});
}
Ok(map)
}
async fn validate_live_entries(
tx: &mut sqlx::Transaction<'_, sqlx::Postgres>,
entry_ids: &[Uuid],
user_id: Option<Uuid>,
) -> Result<()> {
let unique_ids: Vec<Uuid> = entry_ids
.iter()
.copied()
.collect::<BTreeSet<_>>()
.into_iter()
.collect();
let live_count: i64 = if let Some(uid) = user_id {
sqlx::query_scalar(
"SELECT COUNT(*)::bigint FROM entries \
WHERE id = ANY($1) AND user_id = $2 AND deleted_at IS NULL",
)
.bind(&unique_ids)
.bind(uid)
.fetch_one(&mut **tx)
.await?
} else {
sqlx::query_scalar(
"SELECT COUNT(*)::bigint FROM entries \
WHERE id = ANY($1) AND user_id IS NULL AND deleted_at IS NULL",
)
.bind(&unique_ids)
.fetch_one(&mut **tx)
.await?
};
if live_count != unique_ids.len() as i64 {
return Err(AppError::NotFoundEntry.into());
}
Ok(())
}


@@ -1,9 +1,13 @@
use std::collections::HashSet;
use anyhow::Result;
use serde_json::Value;
use sqlx::PgPool;
use uuid::Uuid;
use crate::db;
use crate::error::AppError;
use crate::models::EntryWriteRow;
#[derive(Debug, serde::Serialize)]
pub struct RollbackResult {
@@ -15,13 +19,10 @@ pub struct RollbackResult {
}
/// Roll back the entry identified by `entry_id` to `to_version` (or the most recent snapshot if None).
pub async fn run(
pool: &PgPool,
name: &str,
folder: Option<&str>,
entry_id: Uuid,
to_version: Option<i64>,
master_key: &[u8; 32],
user_id: Option<Uuid>,
) -> Result<RollbackResult> {
#[derive(sqlx::FromRow)]
@@ -35,94 +36,32 @@ pub async fn run(
metadata: Value,
}
// Disambiguate: find the unique entry_id for (name, folder).
// Query entries_history by entry_id once we know it; first resolve via name + optional folder.
let entry_id: Option<Uuid> = if let Some(uid) = user_id {
if let Some(f) = folder {
sqlx::query_scalar(
"SELECT DISTINCT entry_id FROM entries_history \
WHERE name = $1 AND folder = $2 AND user_id = $3 LIMIT 1",
)
.bind(name)
.bind(f)
.bind(uid)
.fetch_optional(pool)
.await?
} else {
let ids: Vec<Uuid> = sqlx::query_scalar(
"SELECT DISTINCT entry_id FROM entries_history \
WHERE name = $1 AND user_id = $2",
)
.bind(name)
.bind(uid)
.fetch_all(pool)
.await?;
match ids.len() {
0 => None,
1 => Some(ids[0]),
_ => {
let folders: Vec<String> = sqlx::query_scalar(
"SELECT DISTINCT folder FROM entries_history \
WHERE name = $1 AND user_id = $2",
)
.bind(name)
.bind(uid)
.fetch_all(pool)
.await?;
anyhow::bail!(
"Ambiguous: entries named '{}' exist in folders: [{}]. \
Specify 'folder' to disambiguate.",
name,
folders.join(", ")
)
}
}
}
} else if let Some(f) = folder {
sqlx::query_scalar(
"SELECT DISTINCT entry_id FROM entries_history \
WHERE name = $1 AND folder = $2 AND user_id IS NULL LIMIT 1",
let live_entry: Option<EntryWriteRow> = if let Some(uid) = user_id {
sqlx::query_as(
"SELECT id, version, folder, type, name, tags, metadata, notes, deleted_at FROM entries \
WHERE id = $1 AND user_id = $2 AND deleted_at IS NULL",
)
.bind(name)
.bind(f)
.bind(entry_id)
.bind(uid)
.fetch_optional(pool)
.await?
} else {
let ids: Vec<Uuid> = sqlx::query_scalar(
"SELECT DISTINCT entry_id FROM entries_history \
WHERE name = $1 AND user_id IS NULL",
sqlx::query_as(
"SELECT id, version, folder, type, name, tags, metadata, notes, deleted_at FROM entries \
WHERE id = $1 AND user_id IS NULL AND deleted_at IS NULL",
)
.bind(name)
.fetch_all(pool)
.await?;
match ids.len() {
0 => None,
1 => Some(ids[0]),
_ => {
let folders: Vec<String> = sqlx::query_scalar(
"SELECT DISTINCT folder FROM entries_history \
WHERE name = $1 AND user_id IS NULL",
)
.bind(name)
.fetch_all(pool)
.await?;
anyhow::bail!(
"Ambiguous: entries named '{}' exist in folders: [{}]. \
Specify 'folder' to disambiguate.",
name,
folders.join(", ")
)
}
}
.bind(entry_id)
.fetch_optional(pool)
.await?
};
let entry_id = entry_id.ok_or_else(|| anyhow::anyhow!("No history found for '{}'", name))?;
let live_entry = live_entry.ok_or(AppError::NotFoundEntry)?;
let snap: Option<EntryHistoryRow> = if let Some(ver) = to_version {
sqlx::query_as(
"SELECT folder, type, version, action, tags, metadata \
FROM entries_history \
WHERE entry_id = $1 AND version = $2 ORDER BY id DESC LIMIT 1",
WHERE entry_id = $1 AND version = $2 ORDER BY id ASC LIMIT 1",
)
.bind(entry_id)
.bind(ver)
@@ -141,41 +80,37 @@ pub async fn run(
let snap = snap.ok_or_else(|| {
anyhow::anyhow!(
"No history found for '{}'{}.",
name,
"No history found for entry '{}'{}.",
live_entry.name,
to_version
.map(|v| format!(" at version {}", v))
.unwrap_or_default()
)
})?;
let _ = master_key;
let snap_secret_snapshot = db::entry_secret_snapshot_from_metadata(&snap.metadata);
let snap_metadata = db::strip_secret_snapshot_from_metadata(&snap.metadata);
let mut tx = pool.begin().await?;
#[derive(sqlx::FromRow)]
struct LiveEntry {
id: Uuid,
version: i64,
folder: String,
#[sqlx(rename = "type")]
entry_type: String,
tags: Vec<String>,
metadata: Value,
#[allow(dead_code)]
notes: String,
}
// Lock the live entry if it exists (matched by entry_id for precision).
let live: Option<LiveEntry> = sqlx::query_as(
"SELECT id, version, folder, type, tags, metadata, notes FROM entries \
WHERE id = $1 FOR UPDATE",
let live: Option<EntryWriteRow> = sqlx::query_as(
"SELECT id, version, folder, type, name, tags, metadata, notes, deleted_at FROM entries \
WHERE id = $1 AND deleted_at IS NULL FOR UPDATE",
)
.bind(entry_id)
.fetch_optional(&mut *tx)
.await?;
let live_entry_id = if let Some(ref lr) = live {
let history_metadata =
match db::metadata_with_secret_snapshot(&mut tx, lr.id, &lr.metadata).await {
Ok(v) => v,
Err(e) => {
tracing::warn!(error = %e, "failed to build secret snapshot for entry history");
lr.metadata.clone()
}
};
if let Err(e) = db::snapshot_entry_history(
&mut tx,
db::EntrySnapshotParams {
@@ -183,11 +118,11 @@ pub async fn run(
user_id,
folder: &lr.folder,
entry_type: &lr.entry_type,
name,
name: &lr.name,
version: lr.version,
action: "rollback",
tags: &lr.tags,
metadata: &lr.metadata,
metadata: &history_metadata,
},
)
.await
@@ -228,52 +163,27 @@ pub async fn run(
}
sqlx::query(
"UPDATE entries SET tags = $1, metadata = $2, version = version + 1, \
updated_at = NOW() WHERE id = $3",
"UPDATE entries SET folder = $1, type = $2, name = $3, notes = $4, tags = $5, metadata = $6, \
version = version + 1, updated_at = NOW() WHERE id = $7",
)
.bind(&snap.folder)
.bind(&snap.entry_type)
.bind(&live_entry.name)
.bind(&live_entry.notes)
.bind(&snap.tags)
.bind(&snap.metadata)
.bind(&snap_metadata)
.bind(lr.id)
.execute(&mut *tx)
.await?;
lr.id
} else {
if let Some(uid) = user_id {
sqlx::query_scalar(
"INSERT INTO entries \
(user_id, folder, type, name, notes, tags, metadata, version, updated_at) \
VALUES ($1, $2, $3, $4, '', $5, $6, $7, NOW()) RETURNING id",
)
.bind(uid)
.bind(&snap.folder)
.bind(&snap.entry_type)
.bind(name)
.bind(&snap.tags)
.bind(&snap.metadata)
.bind(snap.version)
.fetch_one(&mut *tx)
.await?
} else {
sqlx::query_scalar(
"INSERT INTO entries \
(folder, type, name, notes, tags, metadata, version, updated_at) \
VALUES ($1, $2, $3, '', $4, $5, $6, NOW()) RETURNING id",
)
.bind(&snap.folder)
.bind(&snap.entry_type)
.bind(name)
.bind(&snap.tags)
.bind(&snap.metadata)
.bind(snap.version)
.fetch_one(&mut *tx)
.await?
}
return Err(AppError::NotFoundEntry.into());
};
// In N:N mode, rollback restores entry metadata/tags only.
// Secret snapshots are kept for audit but secret linkage/content is not rewritten here.
let _ = live_entry_id;
if let Some(secret_snapshot) = snap_secret_snapshot {
restore_entry_secrets(&mut tx, live_entry_id, user_id, &secret_snapshot).await?;
}
crate::audit::log_tx(
&mut tx,
@@ -281,8 +191,9 @@ pub async fn run(
"rollback",
&snap.folder,
&snap.entry_type,
name,
&live_entry.name,
serde_json::json!({
"entry_id": entry_id,
"restored_version": snap.version,
"original_action": snap.action,
}),
@@ -292,9 +203,150 @@ pub async fn run(
tx.commit().await?;
Ok(RollbackResult {
name: name.to_string(),
name: live_entry.name,
folder: snap.folder,
entry_type: snap.entry_type,
restored_version: snap.version,
})
}
async fn restore_entry_secrets(
tx: &mut sqlx::Transaction<'_, sqlx::Postgres>,
entry_id: Uuid,
user_id: Option<Uuid>,
snapshot: &[db::EntrySecretSnapshot],
) -> Result<()> {
#[derive(sqlx::FromRow)]
struct LinkedSecret {
id: Uuid,
name: String,
encrypted: Vec<u8>,
}
let linked: Vec<LinkedSecret> = sqlx::query_as(
"SELECT s.id, s.name, s.encrypted \
FROM entry_secrets es \
JOIN secrets s ON s.id = es.secret_id \
WHERE es.entry_id = $1",
)
.bind(entry_id)
.fetch_all(&mut **tx)
.await?;
let target_names: HashSet<&str> = snapshot.iter().map(|s| s.name.as_str()).collect();
for s in &linked {
if target_names.contains(s.name.as_str()) {
continue;
}
if let Err(e) = db::snapshot_secret_history(
tx,
db::SecretSnapshotParams {
secret_id: s.id,
name: &s.name,
encrypted: &s.encrypted,
action: "rollback",
},
)
.await
{
tracing::warn!(error = %e, "failed to snapshot secret before rollback unlink");
}
sqlx::query("DELETE FROM entry_secrets WHERE entry_id = $1 AND secret_id = $2")
.bind(entry_id)
.bind(s.id)
.execute(&mut **tx)
.await?;
sqlx::query(
"DELETE FROM secrets s \
WHERE s.id = $1 \
AND NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = s.id)",
)
.bind(s.id)
.execute(&mut **tx)
.await?;
}
for snap in snapshot {
let encrypted = ::hex::decode(&snap.encrypted_hex).map_err(|e| {
anyhow::anyhow!("invalid secret snapshot data for '{}': {}", snap.name, e)
})?;
#[derive(sqlx::FromRow)]
struct ExistingSecret {
id: Uuid,
encrypted: Vec<u8>,
}
let existing: Option<ExistingSecret> = if let Some(uid) = user_id {
sqlx::query_as("SELECT id, encrypted FROM secrets WHERE user_id = $1 AND name = $2")
.bind(uid)
.bind(&snap.name)
.fetch_optional(&mut **tx)
.await?
} else {
sqlx::query_as("SELECT id, encrypted FROM secrets WHERE user_id IS NULL AND name = $1")
.bind(&snap.name)
.fetch_optional(&mut **tx)
.await?
};
let secret_id = if let Some(ex) = existing {
if ex.encrypted != encrypted
&& let Err(e) = db::snapshot_secret_history(
tx,
db::SecretSnapshotParams {
secret_id: ex.id,
name: &snap.name,
encrypted: &ex.encrypted,
action: "rollback",
},
)
.await
{
tracing::warn!(error = %e, "failed to snapshot secret before rollback restore");
}
sqlx::query(
"UPDATE secrets SET type = $1, encrypted = $2, version = version + 1, updated_at = NOW() \
WHERE id = $3",
)
.bind(&snap.secret_type)
.bind(&encrypted)
.bind(ex.id)
.execute(&mut **tx)
.await?;
ex.id
} else if let Some(uid) = user_id {
sqlx::query_scalar(
"INSERT INTO secrets (user_id, name, type, encrypted) VALUES ($1, $2, $3, $4) RETURNING id",
)
.bind(uid)
.bind(&snap.name)
.bind(&snap.secret_type)
.bind(&encrypted)
.fetch_one(&mut **tx)
.await?
} else {
sqlx::query_scalar(
"INSERT INTO secrets (user_id, name, type, encrypted) VALUES (NULL, $1, $2, $3) RETURNING id",
)
.bind(&snap.name)
.bind(&snap.secret_type)
.bind(&encrypted)
.fetch_one(&mut **tx)
.await?
};
sqlx::query(
"INSERT INTO entry_secrets (entry_id, secret_id) VALUES ($1, $2) ON CONFLICT DO NOTHING",
)
.bind(entry_id)
.bind(secret_id)
.execute(&mut **tx)
.await?;
}
Ok(())
}


@@ -6,7 +6,7 @@ use uuid::Uuid;
use crate::models::{Entry, SecretField};
pub const FETCH_ALL_LIMIT: u32 = 100_000;
pub const FETCH_ALL_LIMIT: u32 = 10_000;
/// Build an ILIKE pattern for fuzzy matching, escaping `%` and `_` literals.
pub fn ilike_pattern(value: &str) -> String {
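The body of `ilike_pattern` is not shown in this hunk; a minimal re-implementation consistent with its doc comment and with the `ESCAPE '\\'` clauses used below, assuming backslash escaping and `%...%` wrapping for substring matching:

```rust
// Minimal sketch consistent with the doc comment above (the real body is not
// shown in this hunk): escape ILIKE metacharacters, then wrap for substring match.
fn ilike_pattern_sketch(value: &str) -> String {
    let escaped = value
        .replace('\\', "\\\\") // escape the escape character first
        .replace('%', "\\%")
        .replace('_', "\\_");
    format!("%{escaped}%")
}

fn main() {
    assert_eq!(ilike_pattern_sketch("50%_off"), "%50\\%\\_off%");
}
```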
@@ -27,6 +27,7 @@ pub struct SearchParams<'a> {
pub name_query: Option<&'a str>,
pub tags: &'a [String],
pub query: Option<&'a str>,
pub metadata_query: Option<&'a str>,
pub sort: &'a str,
pub limit: u32,
pub offset: u32,
@@ -75,6 +76,10 @@ pub async fn count_entries(pool: &PgPool, a: &SearchParams<'_>) -> Result<i64> {
let pattern = ilike_pattern(v);
q = q.bind(pattern);
}
if let Some(v) = a.metadata_query {
let pattern = ilike_pattern(v);
q = q.bind(pattern);
}
let n = q.fetch_one(pool).await?;
Ok(n)
}
@@ -90,6 +95,7 @@ fn entry_where_clause_and_next_idx(a: &SearchParams<'_>) -> (String, i32) {
} else {
conditions.push("user_id IS NULL".to_string());
}
conditions.push("deleted_at IS NULL".to_string());
if a.folder.is_some() {
conditions.push(format!("folder = ${}", idx));
@@ -132,6 +138,14 @@ fn entry_where_clause_and_next_idx(a: &SearchParams<'_>) -> (String, i32) {
));
idx += 1;
}
if a.metadata_query.is_some() {
conditions.push(format!(
"EXISTS (SELECT 1 FROM jsonb_path_query(metadata, 'strict $.** ? (@.type() != \"object\" && @.type() != \"array\")') AS val \
WHERE (val #>> '{{}}') ILIKE ${} ESCAPE '\\')",
idx
));
idx += 1;
}
let where_clause = if conditions.is_empty() {
String::new()
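The new `metadata_query` condition uses `jsonb_path_query` with `strict $.**` plus a `type()` filter, which enumerates every scalar leaf of the metadata document (objects and arrays themselves are skipped) and compares its text form against the ILIKE pattern. A hypothetical standalone probe of that jsonpath (assumes a `PgPool`; the sample document is purely illustrative):

```rust
use sqlx::PgPool;

// Lists every scalar leaf the metadata_query filter above would compare
// against the pattern. Objects and arrays are excluded by the type() filter.
async fn probe_scalar_leaves(pool: &PgPool) -> anyhow::Result<Vec<String>> {
    let leaves: Vec<String> = sqlx::query_scalar(
        "SELECT val #>> '{}' \
         FROM jsonb_path_query('{\"a\":{\"b\":\"x\"},\"n\":1}'::jsonb, \
              'strict $.** ? (@.type() != \"object\" && @.type() != \"array\")') AS val",
    )
    .fetch_all(pool)
    .await?;
    // Expected: ["x", "1"] — the nested string and the top-level number.
    Ok(leaves)
}
```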
@@ -145,7 +159,7 @@ pub async fn run(pool: &PgPool, params: SearchParams<'_>) -> Result<SearchResult
let entries = fetch_entries_paged(pool, &params).await?;
let entry_ids: Vec<Uuid> = entries.iter().map(|e| e.id).collect();
let secret_schemas = if !entry_ids.is_empty() {
fetch_secret_schemas(pool, &entry_ids).await?
fetch_secrets_for_entries(pool, &entry_ids).await?
} else {
HashMap::new()
};
@@ -164,6 +178,7 @@ pub async fn fetch_entries(
name: Option<&str>,
tags: &[String],
query: Option<&str>,
metadata_query: Option<&str>,
user_id: Option<Uuid>,
) -> Result<Vec<Entry>> {
let params = SearchParams {
@@ -173,6 +188,7 @@ pub async fn fetch_entries(
name_query: None,
tags,
query,
metadata_query,
sort: "name",
limit: FETCH_ALL_LIMIT,
offset: 0,
@@ -195,7 +211,7 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
let sql = format!(
"SELECT id, user_id, folder, type, name, notes, tags, metadata, version, \
created_at, updated_at \
created_at, updated_at, deleted_at \
FROM entries {where_clause} ORDER BY {order} LIMIT ${limit_idx} OFFSET ${offset_idx}"
);
@@ -223,39 +239,16 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
let pattern = ilike_pattern(v);
q = q.bind(pattern);
}
if let Some(v) = a.metadata_query {
let pattern = ilike_pattern(v);
q = q.bind(pattern);
}
q = q.bind(a.limit as i64).bind(a.offset as i64);
let rows = q.fetch_all(pool).await?;
Ok(rows.into_iter().map(Entry::from).collect())
}
/// Fetch secret field names for a set of entry ids (no decryption).
pub async fn fetch_secret_schemas(
pool: &PgPool,
entry_ids: &[Uuid],
) -> Result<HashMap<Uuid, Vec<SecretField>>> {
if entry_ids.is_empty() {
return Ok(HashMap::new());
}
let fields: Vec<EntrySecretRow> = sqlx::query_as(
"SELECT es.entry_id, s.id, s.user_id, s.name, s.type, s.encrypted, s.version, s.created_at, s.updated_at \
FROM entry_secrets es \
JOIN secrets s ON s.id = es.secret_id \
WHERE es.entry_id = ANY($1) \
ORDER BY es.entry_id, es.sort_order, s.name",
)
.bind(entry_ids)
.fetch_all(pool)
.await?;
let mut map: HashMap<Uuid, Vec<SecretField>> = HashMap::new();
for f in fields {
let entry_id = f.entry_id;
map.entry(entry_id).or_default().push(f.secret());
}
Ok(map)
}
/// Fetch all secret fields (including encrypted bytes) for a set of entry ids.
pub async fn fetch_secrets_for_entries(
pool: &PgPool,
@@ -294,7 +287,7 @@ pub async fn resolve_entry_by_id(
let row: Option<EntryRaw> = if let Some(uid) = user_id {
sqlx::query_as(
"SELECT id, user_id, folder, type, name, notes, tags, metadata, version, \
created_at, updated_at FROM entries WHERE id = $1 AND user_id = $2",
created_at, updated_at, deleted_at FROM entries WHERE id = $1 AND user_id = $2 AND deleted_at IS NULL",
)
.bind(id)
.bind(uid)
@@ -303,7 +296,7 @@ pub async fn resolve_entry_by_id(
} else {
sqlx::query_as(
"SELECT id, user_id, folder, type, name, notes, tags, metadata, version, \
created_at, updated_at FROM entries WHERE id = $1 AND user_id IS NULL",
created_at, updated_at, deleted_at FROM entries WHERE id = $1 AND user_id IS NULL AND deleted_at IS NULL",
)
.bind(id)
.fetch_optional(pool)
@@ -325,7 +318,7 @@ pub async fn resolve_entry(
folder: Option<&str>,
user_id: Option<Uuid>,
) -> Result<crate::models::Entry> {
let entries = fetch_entries(pool, folder, None, Some(name), &[], None, user_id).await?;
let entries = fetch_entries(pool, folder, None, Some(name), &[], None, None, user_id).await?;
match entries.len() {
0 => {
if let Some(f) = folder {
@@ -334,7 +327,10 @@ pub async fn resolve_entry(
anyhow::bail!("Not found: '{}'", name)
}
}
1 => Ok(entries.into_iter().next().unwrap()),
1 => entries
.into_iter()
.next()
.ok_or_else(|| anyhow::anyhow!("internal: resolve_entry result vanished")),
_ => {
let folders: Vec<&str> = entries.iter().map(|e| e.folder.as_str()).collect();
anyhow::bail!(
@@ -363,6 +359,7 @@ struct EntryRaw {
version: i64,
created_at: chrono::DateTime<chrono::Utc>,
updated_at: chrono::DateTime<chrono::Utc>,
deleted_at: Option<chrono::DateTime<chrono::Utc>>,
}
impl From<EntryRaw> for Entry {
@@ -379,6 +376,7 @@ impl From<EntryRaw> for Entry {
version: r.version,
created_at: r.created_at,
updated_at: r.updated_at,
deleted_at: r.deleted_at,
}
}
}


@@ -11,7 +11,7 @@ use crate::service::add::{
collect_field_paths, collect_key_paths, flatten_json_fields, insert_path, parse_key_path,
parse_kv, remove_path,
};
use crate::taxonomy;
use crate::service::util::user_scope_condition;
#[derive(Debug, serde::Serialize)]
pub struct UpdateResult {
@@ -25,6 +25,8 @@ pub struct UpdateResult {
pub remove_meta: Vec<String>,
pub secret_keys: Vec<String>,
pub remove_secrets: Vec<String>,
pub linked_secrets: Vec<String>,
pub unlinked_secrets: Vec<String>,
}
pub struct UpdateParams<'a> {
@@ -39,6 +41,8 @@ pub struct UpdateParams<'a> {
pub secret_entries: &'a [String],
pub secret_types: &'a std::collections::HashMap<String, String>,
pub remove_secrets: &'a [String],
pub link_secret_names: &'a [String],
pub unlink_secret_names: &'a [String],
pub user_id: Option<Uuid>,
}
@@ -47,55 +51,44 @@ pub async fn run(
params: UpdateParams<'_>,
master_key: &[u8; 32],
) -> Result<UpdateResult> {
if params.name.chars().count() > 256 {
anyhow::bail!("name must be at most 256 characters");
}
let mut tx = pool.begin().await?;
// Fetch matching rows with FOR UPDATE; use folder when provided to resolve ambiguity.
let rows: Vec<EntryRow> = if let Some(uid) = params.user_id {
if let Some(folder) = params.folder {
sqlx::query_as(
"SELECT id, version, folder, type, tags, metadata, notes FROM entries \
WHERE user_id = $1 AND folder = $2 AND name = $3 FOR UPDATE",
)
.bind(uid)
.bind(folder)
.bind(params.name)
.fetch_all(&mut *tx)
.await?
} else {
sqlx::query_as(
"SELECT id, version, folder, type, tags, metadata, notes FROM entries \
WHERE user_id = $1 AND name = $2 FOR UPDATE",
)
.bind(uid)
.bind(params.name)
.fetch_all(&mut *tx)
.await?
}
} else if let Some(folder) = params.folder {
sqlx::query_as(
"SELECT id, version, folder, type, tags, metadata, notes FROM entries \
WHERE user_id IS NULL AND folder = $1 AND name = $2 FOR UPDATE",
)
.bind(folder)
.bind(params.name)
.fetch_all(&mut *tx)
.await?
} else {
sqlx::query_as(
"SELECT id, version, folder, type, tags, metadata, notes FROM entries \
WHERE user_id IS NULL AND name = $1 FOR UPDATE",
)
.bind(params.name)
.fetch_all(&mut *tx)
.await?
};
let mut idx = 1i32;
let user_cond = user_scope_condition(params.user_id, &mut idx);
let mut conditions = vec![user_cond];
if params.folder.is_some() {
conditions.push(format!("folder = ${}", idx));
idx += 1;
}
conditions.push(format!("name = ${}", idx));
let sql = format!(
"SELECT id, version, folder, type, tags, metadata, notes, name FROM entries \
WHERE {} AND deleted_at IS NULL FOR UPDATE",
conditions.join(" AND ")
);
let mut q = sqlx::query_as::<_, EntryRow>(&sql);
if let Some(uid) = params.user_id {
q = q.bind(uid);
}
if let Some(folder) = params.folder {
q = q.bind(folder);
}
q = q.bind(params.name);
let rows = q.fetch_all(&mut *tx).await?;
let row = match rows.len() {
0 => {
tx.rollback().await?;
return Err(AppError::NotFoundEntry.into());
}
1 => rows.into_iter().next().unwrap(),
1 => rows
.into_iter()
.next()
.ok_or_else(|| anyhow::anyhow!("internal: matched row vanished"))?,
_ => {
tx.rollback().await?;
let folders: Vec<&str> = rows.iter().map(|r| r.folder.as_str()).collect();
@@ -109,6 +102,15 @@ pub async fn run(
}
};
let history_metadata =
match db::metadata_with_secret_snapshot(&mut tx, row.id, &row.metadata).await {
Ok(v) => v,
Err(e) => {
tracing::warn!(error = %e, "failed to build secret snapshot for entry history");
row.metadata.clone()
}
};
if let Err(e) = db::snapshot_entry_history(
&mut tx,
db::EntrySnapshotParams {
@@ -120,7 +122,7 @@ pub async fn run(
version: row.version,
action: "update",
tags: &row.tags,
metadata: &row.metadata,
metadata: &history_metadata,
},
)
.await
@@ -295,6 +297,101 @@ pub async fn run(
}
}
// Link existing secrets by name
let mut linked_secrets = Vec::new();
for link_name in params.link_secret_names {
let link_name = link_name.trim();
if link_name.is_empty() {
anyhow::bail!("link_secret_names contains an empty name");
}
let secret_ids: Vec<Uuid> = if let Some(uid) = params.user_id {
sqlx::query_scalar("SELECT id FROM secrets WHERE user_id = $1 AND name = $2")
.bind(uid)
.bind(link_name)
.fetch_all(&mut *tx)
.await?
} else {
sqlx::query_scalar("SELECT id FROM secrets WHERE user_id IS NULL AND name = $1")
.bind(link_name)
.fetch_all(&mut *tx)
.await?
};
match secret_ids.len() {
0 => anyhow::bail!("Not found: secret named '{}'", link_name),
1 => {
sqlx::query(
"INSERT INTO entry_secrets (entry_id, secret_id) VALUES ($1, $2) ON CONFLICT DO NOTHING",
)
.bind(row.id)
.bind(secret_ids[0])
.execute(&mut *tx)
.await?;
linked_secrets.push(link_name.to_string());
}
n => anyhow::bail!(
"Ambiguous: {} secrets named '{}' found. Please deduplicate names first.",
n,
link_name
),
}
}
// Unlink secrets by name
let mut unlinked_secrets = Vec::new();
for unlink_name in params.unlink_secret_names {
let unlink_name = unlink_name.trim();
if unlink_name.is_empty() {
continue;
}
#[derive(sqlx::FromRow)]
struct SecretToUnlink {
id: Uuid,
encrypted: Vec<u8>,
}
let secret: Option<SecretToUnlink> = sqlx::query_as(
"SELECT s.id, s.encrypted \
FROM entry_secrets es \
JOIN secrets s ON s.id = es.secret_id \
WHERE es.entry_id = $1 AND s.name = $2",
)
.bind(row.id)
.bind(unlink_name)
.fetch_optional(&mut *tx)
.await?;
if let Some(s) = secret {
if let Err(e) = db::snapshot_secret_history(
&mut tx,
db::SecretSnapshotParams {
secret_id: s.id,
name: unlink_name,
encrypted: &s.encrypted,
action: "delete",
},
)
.await
{
tracing::warn!(error = %e, "failed to snapshot secret field history before unlink");
}
sqlx::query("DELETE FROM entry_secrets WHERE entry_id = $1 AND secret_id = $2")
.bind(row.id)
.bind(s.id)
.execute(&mut *tx)
.await?;
sqlx::query(
"DELETE FROM secrets s \
WHERE s.id = $1 \
AND NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = s.id)",
)
.bind(s.id)
.execute(&mut *tx)
.await?;
unlinked_secrets.push(unlink_name.to_string());
}
}
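Unlinking removes only the `entry_secrets` join row, then garbage-collects the secret itself only when no other entry still references it; the `NOT EXISTS` guard is what keeps this safe under shared N:N links. The same pattern as a standalone helper (a sketch, assuming the schema above):

```rust
use sqlx::PgPool;
use uuid::Uuid;

// Sketch of the unlink + orphan-GC pattern above: drop the join row first,
// then delete the secret only if no entry_secrets row still points at it.
async fn unlink_and_gc(pool: &PgPool, entry_id: Uuid, secret_id: Uuid) -> anyhow::Result<()> {
    let mut tx = pool.begin().await?;
    sqlx::query("DELETE FROM entry_secrets WHERE entry_id = $1 AND secret_id = $2")
        .bind(entry_id)
        .bind(secret_id)
        .execute(&mut *tx)
        .await?;
    sqlx::query(
        "DELETE FROM secrets s WHERE s.id = $1 \
         AND NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = s.id)",
    )
    .bind(secret_id)
    .execute(&mut *tx)
    .await?;
    tx.commit().await?;
    Ok(())
}
```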
let meta_keys = collect_key_paths(params.meta_entries)?;
let remove_meta_keys = collect_field_paths(params.remove_meta)?;
let secret_keys = collect_key_paths(params.secret_entries)?;
@@ -304,8 +401,8 @@ pub async fn run(
&mut tx,
params.user_id,
"update",
"",
"",
&row.folder,
&row.entry_type,
params.name,
serde_json::json!({
"add_tags": params.add_tags,
@@ -314,6 +411,8 @@ pub async fn run(
"remove_meta": remove_meta_keys,
"secret_keys": secret_keys,
"remove_secrets": remove_secret_keys,
"linked_secrets": linked_secrets,
"unlinked_secrets": unlinked_secrets,
}),
)
.await;
@@ -330,6 +429,8 @@ pub async fn run(
remove_meta: remove_meta_keys,
secret_keys,
remove_secrets: remove_secret_keys,
linked_secrets,
unlinked_secrets,
})
}
@@ -363,8 +464,8 @@ pub async fn update_fields_by_id(
let mut tx = pool.begin().await?;
let row: Option<EntryWriteRow> = sqlx::query_as(
"SELECT id, version, folder, type, name, tags, metadata, notes FROM entries \
WHERE id = $1 AND user_id = $2 FOR UPDATE",
"SELECT id, version, folder, type, name, tags, metadata, notes, deleted_at FROM entries \
WHERE id = $1 AND user_id = $2 AND deleted_at IS NULL FOR UPDATE",
)
.bind(entry_id)
.bind(user_id)
@@ -379,6 +480,15 @@ pub async fn update_fields_by_id(
}
};
let history_metadata =
match db::metadata_with_secret_snapshot(&mut tx, row.id, &row.metadata).await {
Ok(v) => v,
Err(e) => {
tracing::warn!(error = %e, "failed to build secret snapshot for entry history");
row.metadata.clone()
}
};
if let Err(e) = db::snapshot_entry_history(
&mut tx,
db::EntrySnapshotParams {
@@ -390,7 +500,7 @@ pub async fn update_fields_by_id(
version: row.version,
action: "update",
tags: &row.tags,
metadata: &row.metadata,
metadata: &history_metadata,
},
)
.await
@@ -398,13 +508,7 @@ pub async fn update_fields_by_id(
tracing::warn!(error = %e, "failed to snapshot entry history before web update");
}
let mut metadata_map = match params.metadata {
Value::Object(m) => m.clone(),
_ => Map::new(),
};
let normalized_type =
taxonomy::normalize_entry_type_and_metadata(params.entry_type, &mut metadata_map);
let normalized_metadata = Value::Object(metadata_map);
let entry_type = params.entry_type.trim();
let res = sqlx::query(
"UPDATE entries SET folder = $1, type = $2, name = $3, notes = $4, tags = $5, metadata = $6, \
@@ -412,11 +516,11 @@ pub async fn update_fields_by_id(
WHERE id = $7 AND version = $8",
)
.bind(params.folder)
.bind(&normalized_type)
.bind(entry_type)
.bind(params.name)
.bind(params.notes)
.bind(params.tags)
.bind(&normalized_metadata)
.bind(params.metadata)
.bind(row.id)
.bind(row.version)
.execute(&mut *tx)
@@ -443,7 +547,7 @@ pub async fn update_fields_by_id(
Some(user_id),
"update",
params.folder,
&normalized_type,
entry_type,
params.name,
serde_json::json!({
"source": "web",


@@ -16,24 +16,28 @@ pub struct OAuthProfile {
/// Find or create a user from an OAuth profile.
/// Returns (user, is_new) where is_new indicates first-time registration.
pub async fn find_or_create_user(pool: &PgPool, profile: OAuthProfile) -> Result<(User, bool)> {
// Check if this OAuth account already exists
// Use a transaction with FOR UPDATE to prevent TOCTOU race conditions
let mut tx = pool.begin().await?;
// Check if this OAuth account already exists (with row lock)
let existing: Option<OauthAccount> = sqlx::query_as(
"SELECT id, user_id, provider, provider_id, email, name, avatar_url, created_at \
FROM oauth_accounts WHERE provider = $1 AND provider_id = $2",
FROM oauth_accounts WHERE provider = $1 AND provider_id = $2 FOR UPDATE",
)
.bind(&profile.provider)
.bind(&profile.provider_id)
.fetch_optional(pool)
.fetch_optional(&mut *tx)
.await?;
if let Some(oa) = existing {
let user: User = sqlx::query_as(
"SELECT id, email, name, avatar_url, key_salt, key_check, key_params, api_key, created_at, updated_at \
"SELECT id, email, name, avatar_url, key_salt, key_check, key_params, api_key, key_version, created_at, updated_at \
FROM users WHERE id = $1",
)
.bind(oa.user_id)
.fetch_one(pool)
.fetch_one(&mut *tx)
.await?;
tx.commit().await?;
return Ok((user, false));
}
@@ -43,12 +47,10 @@ pub async fn find_or_create_user(pool: &PgPool, profile: OAuthProfile) -> Result
.clone()
.unwrap_or_else(|| profile.email.clone().unwrap_or_else(|| "User".to_string()));
let mut tx = pool.begin().await?;
let user: User = sqlx::query_as(
"INSERT INTO users (email, name, avatar_url) \
VALUES ($1, $2, $3) \
RETURNING id, email, name, avatar_url, key_salt, key_check, key_params, api_key, created_at, updated_at",
RETURNING id, email, name, avatar_url, key_salt, key_check, key_params, api_key, key_version, created_at, updated_at",
)
.bind(&profile.email)
.bind(&display_name)
@@ -74,6 +76,53 @@ pub async fn find_or_create_user(pool: &PgPool, profile: OAuthProfile) -> Result
Ok((user, true))
}
/// Re-encrypt all of a user's secrets from `old_key` to `new_key` and update the key metadata.
///
/// Runs entirely inside a single database transaction: if any secret fails to re-encrypt,
/// the whole operation is rolled back, leaving the database unchanged.
pub async fn change_user_key(
pool: &PgPool,
user_id: Uuid,
old_key: &[u8; 32],
new_key: &[u8; 32],
new_salt: &[u8],
new_key_check: &[u8],
new_key_params: &Value,
) -> Result<()> {
let mut tx = pool.begin().await?;
let secrets: Vec<(uuid::Uuid, Vec<u8>)> =
sqlx::query_as("SELECT id, encrypted FROM secrets WHERE user_id = $1 FOR UPDATE")
.bind(user_id)
.fetch_all(&mut *tx)
.await?;
for (id, encrypted) in &secrets {
let plaintext = crate::crypto::decrypt(old_key, encrypted)?;
let new_encrypted = crate::crypto::encrypt(new_key, &plaintext)?;
sqlx::query("UPDATE secrets SET encrypted = $1, updated_at = NOW() WHERE id = $2")
.bind(&new_encrypted)
.bind(id)
.execute(&mut *tx)
.await?;
}
sqlx::query(
"UPDATE users SET key_salt = $1, key_check = $2, key_params = $3, \
key_version = key_version + 1, updated_at = NOW() \
WHERE id = $4",
)
.bind(new_salt)
.bind(new_key_check)
.bind(new_key_params)
.bind(user_id)
.execute(&mut *tx)
.await?;
tx.commit().await?;
Ok(())
}
/// Store the PBKDF2 salt, key_check, and params for a user's passphrase setup.
pub async fn update_user_key_setup(
pool: &PgPool,
@@ -98,7 +147,7 @@ pub async fn update_user_key_setup(
/// Fetch a user by ID.
pub async fn get_user_by_id(pool: &PgPool, user_id: Uuid) -> Result<Option<User>> {
let user = sqlx::query_as(
"SELECT id, email, name, avatar_url, key_salt, key_check, key_params, api_key, created_at, updated_at \
"SELECT id, email, name, avatar_url, key_salt, key_check, key_params, api_key, key_version, created_at, updated_at \
FROM users WHERE id = $1",
)
.bind(user_id)
@@ -125,13 +174,16 @@ pub async fn bind_oauth_account(
user_id: Uuid,
profile: OAuthProfile,
) -> Result<OauthAccount> {
// Check if this provider_id is already linked to someone else
// Use a transaction with FOR UPDATE to prevent TOCTOU race conditions
let mut tx = pool.begin().await?;
// Check if this provider_id is already linked to someone else (with row lock)
let conflict: Option<(Uuid,)> = sqlx::query_as(
"SELECT user_id FROM oauth_accounts WHERE provider = $1 AND provider_id = $2",
"SELECT user_id FROM oauth_accounts WHERE provider = $1 AND provider_id = $2 FOR UPDATE",
)
.bind(&profile.provider)
.bind(&profile.provider_id)
.fetch_optional(pool)
.fetch_optional(&mut *tx)
.await?;
if let Some((existing_user_id,)) = conflict {
@@ -148,11 +200,11 @@ pub async fn bind_oauth_account(
}
let existing_provider_for_user: Option<(String,)> = sqlx::query_as(
"SELECT provider_id FROM oauth_accounts WHERE user_id = $1 AND provider = $2",
"SELECT provider_id FROM oauth_accounts WHERE user_id = $1 AND provider = $2 FOR UPDATE",
)
.bind(user_id)
.bind(&profile.provider)
.fetch_optional(pool)
.fetch_optional(&mut *tx)
.await?;
if existing_provider_for_user.is_some() {
@@ -174,9 +226,10 @@ pub async fn bind_oauth_account(
.bind(&profile.email)
.bind(&profile.name)
.bind(&profile.avatar_url)
.fetch_one(pool)
.fetch_one(&mut *tx)
.await?;
tx.commit().await?;
Ok(account)
}
@@ -194,10 +247,14 @@ pub async fn unbind_oauth_account(
);
}
let count: i64 = sqlx::query_scalar("SELECT COUNT(*) FROM oauth_accounts WHERE user_id = $1")
.bind(user_id)
.fetch_one(pool)
.await?;
let mut tx = pool.begin().await?;
let locked_accounts: Vec<(String,)> =
sqlx::query_as("SELECT provider FROM oauth_accounts WHERE user_id = $1 FOR UPDATE")
.bind(user_id)
.fetch_all(&mut *tx)
.await?;
let count = locked_accounts.len();
if count <= 1 {
anyhow::bail!("Cannot unbind the last OAuth account. Please link another account first.");
@@ -206,8 +263,87 @@ pub async fn unbind_oauth_account(
sqlx::query("DELETE FROM oauth_accounts WHERE user_id = $1 AND provider = $2")
.bind(user_id)
.bind(provider)
.execute(pool)
.execute(&mut *tx)
.await?;
tx.commit().await?;
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
async fn maybe_test_pool() -> Option<PgPool> {
let database_url = match std::env::var("SECRETS_DATABASE_URL") {
Ok(v) => v,
Err(_) => {
eprintln!("skip user service tests: SECRETS_DATABASE_URL not set");
return None;
}
};
let pool = match sqlx::PgPool::connect(&database_url).await {
Ok(pool) => pool,
Err(e) => {
eprintln!("skip user service tests: cannot connect to database: {e}");
return None;
}
};
if let Err(e) = crate::db::migrate(&pool).await {
eprintln!("skip user service tests: migrate failed: {e}");
return None;
}
Some(pool)
}
async fn cleanup_user_rows(pool: &PgPool, user_id: Uuid) -> Result<()> {
sqlx::query("DELETE FROM oauth_accounts WHERE user_id = $1")
.bind(user_id)
.execute(pool)
.await?;
sqlx::query("DELETE FROM users WHERE id = $1")
.bind(user_id)
.execute(pool)
.await?;
Ok(())
}
#[tokio::test]
async fn unbind_oauth_account_removes_only_requested_provider() -> Result<()> {
let Some(pool) = maybe_test_pool().await else {
return Ok(());
};
let user_id = Uuid::from_u128(rand::random());
cleanup_user_rows(&pool, user_id).await?;
sqlx::query("INSERT INTO users (id, name) VALUES ($1, '')")
.bind(user_id)
.execute(&pool)
.await?;
sqlx::query(
"INSERT INTO oauth_accounts (user_id, provider, provider_id, email, name, avatar_url) \
VALUES ($1, 'google', $2, NULL, NULL, NULL), \
($1, 'github', $3, NULL, NULL, NULL)",
)
.bind(user_id)
.bind(format!("google-{user_id}"))
.bind(format!("github-{user_id}"))
.execute(&pool)
.await?;
unbind_oauth_account(&pool, user_id, "github", Some("google")).await?;
let remaining: Vec<(String,)> = sqlx::query_as(
"SELECT provider FROM oauth_accounts WHERE user_id = $1 ORDER BY provider",
)
.bind(user_id)
.fetch_all(&pool)
.await?;
assert_eq!(remaining, vec![("google".to_string(),)]);
cleanup_user_rows(&pool, user_id).await?;
Ok(())
}
}


@@ -0,0 +1,27 @@
use uuid::Uuid;
/// Returns a WHERE condition fragment for user scope and advances `idx` if `user_id` is Some.
///
/// - `Some(uid)` → `"user_id = $N"` with idx incremented.
/// - `None` → `"user_id IS NULL"` with idx unchanged.
///
/// # Usage
///
/// ```rust,ignore
/// let mut idx = 1i32;
/// let user_cond = user_scope_condition(user_id, &mut idx);
/// // idx is now 2 if user_id is Some, still 1 if None
/// let sql = format!("SELECT ... FROM entries WHERE {user_cond} AND name = ${idx}");
/// let mut q = sqlx::query_as::<_, Row>(&sql);
/// if let Some(uid) = user_id { q = q.bind(uid); }
/// q = q.bind(name);
/// ```
pub fn user_scope_condition(user_id: Option<Uuid>, idx: &mut i32) -> String {
if user_id.is_some() {
let s = format!("user_id = ${}", *idx);
*idx += 1;
s
} else {
"user_id IS NULL".to_string()
}
}


@@ -1,111 +1,4 @@
use serde_json::{Map, Value};
fn normalize_token(input: &str) -> String {
input.trim().to_lowercase().replace('_', "-")
}
fn normalize_subtype_token(input: &str) -> String {
normalize_token(input)
}
fn map_legacy_entry_type(input: &str) -> Option<(&'static str, &'static str)> {
match input {
"log-ingestion-endpoint" => Some(("service", "log-ingestion")),
"cloud-api" => Some(("service", "cloud-api")),
"git-server" => Some(("service", "git")),
"mqtt-broker" => Some(("service", "mqtt-broker")),
"database" => Some(("service", "database")),
"monitoring-dashboard" => Some(("service", "monitoring")),
"dns-api" => Some(("service", "dns-api")),
"notification-webhook" => Some(("service", "webhook")),
"api-endpoint" => Some(("service", "api-endpoint")),
"credential" | "credential-key" => Some(("service", "credential")),
"key" => Some(("service", "credential")),
_ => None,
}
}
/// Normalize entry `type` and optionally backfill `metadata.subtype` for legacy values.
///
/// This keeps backward compatibility:
/// - stable primary types stay unchanged
/// - known legacy long-tail types are mapped to `service` + `metadata.subtype`
/// - unknown values are kept (normalized to kebab-case) instead of hard failing
pub fn normalize_entry_type_and_metadata(
entry_type: &str,
metadata: &mut Map<String, Value>,
) -> String {
let original_raw = entry_type.trim();
let normalized = normalize_token(original_raw);
if normalized.is_empty() {
return String::new();
}
if let Some((mapped_type, mapped_subtype)) = map_legacy_entry_type(&normalized) {
if !metadata.contains_key("subtype") {
metadata.insert(
"subtype".to_string(),
Value::String(mapped_subtype.to_string()),
);
}
if !metadata.contains_key("_original_type") && original_raw != mapped_type {
metadata.insert(
"_original_type".to_string(),
Value::String(original_raw.to_string()),
);
}
return mapped_type.to_string();
}
if let Some(subtype) = metadata.get_mut("subtype")
&& let Some(s) = subtype.as_str()
{
*subtype = Value::String(normalize_subtype_token(s));
}
normalized
}
/// Canonical secret type options for UI dropdowns.
pub const SECRET_TYPE_OPTIONS: &[&str] = &[
"text", "password", "token", "api-key", "ssh-key", "url", "phone", "id-card",
];
#[cfg(test)]
mod tests {
use super::*;
use serde_json::{Map, Value};
#[test]
fn normalize_entry_type_maps_legacy_type_and_backfills_metadata() {
let mut metadata = Map::new();
let normalized = normalize_entry_type_and_metadata("git-server", &mut metadata);
assert_eq!(normalized, "service");
assert_eq!(
metadata.get("subtype"),
Some(&Value::String("git".to_string()))
);
assert_eq!(
metadata.get("_original_type"),
Some(&Value::String("git-server".to_string()))
);
}
#[test]
fn normalize_entry_type_normalizes_existing_subtype() {
let mut metadata = Map::new();
metadata.insert(
"subtype".to_string(),
Value::String("Cloud_API".to_string()),
);
let normalized = normalize_entry_type_and_metadata("service", &mut metadata);
assert_eq!(normalized, "service");
assert_eq!(
metadata.get("subtype"),
Some(&Value::String("cloud-api".to_string()))
);
}
}


@@ -1,6 +1,6 @@
[package]
name = "secrets-mcp"
version = "0.4.0"
version = "0.5.14"
edition.workspace = true
[[bin]]
@@ -17,9 +17,10 @@ rmcp = { version = "1", features = ["server", "macros", "transport-streamable-ht
axum = "0.8"
axum-extra = { version = "0.10", features = ["typed-header"] }
tower = "0.5"
tower-http = { version = "0.6", features = ["cors", "trace"] }
tower-http = { version = "0.6", features = ["cors", "trace", "limit"] }
tower-sessions = "0.14"
tower-sessions-sqlx-store-chrono = { version = "0.14", features = ["postgres"] }
governor = { version = "0.10", features = ["std", "jitter"] }
time = "0.3"
# OAuth (manual token exchange via reqwest)
@@ -33,7 +34,6 @@ anyhow.workspace = true
chrono.workspace = true
serde.workspace = true
serde_json.workspace = true
sha2.workspace = true
rand.workspace = true
sqlx.workspace = true
tokio.workspace = true
@@ -44,3 +44,4 @@ dotenvy.workspace = true
urlencoding = "2"
schemars = "1"
http = "1"
url = "2"


@@ -1,7 +1,5 @@
use std::net::SocketAddr;
use axum::{
extract::{ConnectInfo, Request, State},
extract::{Request, State},
http::StatusCode,
middleware::Next,
response::Response,
@@ -11,29 +9,14 @@ use uuid::Uuid;
use secrets_core::service::api_key::validate_api_key;
use crate::client_ip;
/// Injected into request extensions after Bearer token validation.
#[derive(Clone, Debug)]
pub struct AuthUser {
pub user_id: Uuid,
}
fn log_client_ip(req: &Request) -> Option<String> {
if let Some(first) = req
.headers()
.get("x-forwarded-for")
.and_then(|v| v.to_str().ok())
.and_then(|s| s.split(',').next())
{
let s = first.trim();
if !s.is_empty() {
return Some(s.to_string());
}
}
req.extensions()
.get::<ConnectInfo<SocketAddr>>()
.map(|c| c.ip().to_string())
}
/// Axum middleware that validates Bearer API keys for the /mcp route.
/// Passes all non-MCP paths through without authentication.
pub async fn bearer_auth_middleware(
@@ -43,7 +26,7 @@ pub async fn bearer_auth_middleware(
) -> Result<Response, StatusCode> {
let path = req.uri().path();
let method = req.method().as_str();
let client_ip = log_client_ip(&req);
let client_ip = client_ip::extract_client_ip(&req);
// Only authenticate /mcp paths
if !path.starts_with("/mcp") {
@@ -66,7 +49,7 @@ pub async fn bearer_auth_middleware(
tracing::warn!(
method,
path,
client_ip = client_ip.as_deref(),
%client_ip,
"invalid Authorization header format on /mcp (expected Bearer …)"
);
return Err(StatusCode::UNAUTHORIZED);
@@ -75,7 +58,7 @@ pub async fn bearer_auth_middleware(
tracing::warn!(
method,
path,
client_ip = client_ip.as_deref(),
%client_ip,
"missing Authorization header on /mcp"
);
return Err(StatusCode::UNAUTHORIZED);
@@ -93,7 +76,7 @@ pub async fn bearer_auth_middleware(
tracing::warn!(
method,
path,
client_ip = client_ip.as_deref(),
%client_ip,
key_prefix = %&raw_key.chars().take(12).collect::<String>(),
key_len = raw_key.len(),
"invalid api key (not found in database — e.g. revoked key or DB was reset; update MCP client Bearer token)"
@@ -104,7 +87,7 @@ pub async fn bearer_auth_middleware(
tracing::error!(
method,
path,
client_ip = client_ip.as_deref(),
%client_ip,
error = %e,
"api key validation error"
);


@@ -0,0 +1,85 @@
use axum::extract::Request;
use std::net::{IpAddr, SocketAddr};
/// Extract the client IP from a request.
///
/// When the `TRUST_PROXY` environment variable is set to `1` or `true`, the
/// `X-Forwarded-For` and `X-Real-IP` headers are consulted first, which is
/// appropriate when the service runs behind a trusted reverse proxy (e.g.
/// Caddy). Otherwise — or if those headers are absent/empty — the direct TCP
/// connection address from `ConnectInfo` is used.
///
/// **Important**: only enable `TRUST_PROXY` when the application is guaranteed
/// to receive traffic exclusively through a controlled reverse proxy. Enabling
/// it on a directly-exposed port allows clients to spoof their IP address and
/// bypass per-IP rate limiting.
pub fn extract_client_ip(req: &Request) -> String {
if trust_proxy_enabled() {
if let Some(ip) = forwarded_for_ip(req.headers()) {
return ip;
}
if let Some(ip) = real_ip(req.headers()) {
return ip;
}
}
connect_info_ip(req).unwrap_or_else(|| "unknown".to_string())
}
/// Extract the client IP from individual header map and socket address components.
///
/// This variant is used by handlers that receive headers and connect info as
/// separate axum extractor parameters (e.g. OAuth callback handlers).
/// The same `TRUST_PROXY` logic applies.
pub fn extract_client_ip_parts(
headers: &axum::http::HeaderMap,
addr: std::net::SocketAddr,
) -> String {
if trust_proxy_enabled() {
if let Some(ip) = forwarded_for_ip(headers) {
return ip;
}
if let Some(ip) = real_ip(headers) {
return ip;
}
}
addr.ip().to_string()
}
fn trust_proxy_enabled() -> bool {
static CACHE: std::sync::OnceLock<bool> = std::sync::OnceLock::new();
*CACHE.get_or_init(|| {
matches!(
std::env::var("TRUST_PROXY").as_deref(),
Ok("1") | Ok("true") | Ok("yes")
)
})
}
fn forwarded_for_ip(headers: &axum::http::HeaderMap) -> Option<String> {
let value = headers.get("x-forwarded-for")?.to_str().ok()?;
let first = value.split(',').next()?.trim();
if first.is_empty() {
None
} else {
validate_ip(first)
}
}
fn real_ip(headers: &axum::http::HeaderMap) -> Option<String> {
let value = headers.get("x-real-ip")?.to_str().ok()?;
let ip = value.trim();
if ip.is_empty() { None } else { validate_ip(ip) }
}
/// Validate that a string is a valid IP address.
/// Returns Some(ip) if valid, None otherwise.
fn validate_ip(s: &str) -> Option<String> {
s.parse::<IpAddr>().ok().map(|ip| ip.to_string())
}
fn connect_info_ip(req: &Request) -> Option<String> {
req.extensions()
.get::<axum::extract::ConnectInfo<SocketAddr>>()
.map(|c| c.0.ip().to_string())
}
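// Hedged usage sketch, not part of the diff: with TRUST_PROXY=1 the left-most
// X-Forwarded-For hop wins, provided it parses as an IP. The flag is cached in
// a OnceLock, so it must be set before the first extraction; set_var is
// process-global, which is acceptable for a one-off illustration.
#[allow(dead_code)]
fn trust_proxy_sketch() {
    std::env::set_var("TRUST_PROXY", "1");
    let mut headers = axum::http::HeaderMap::new();
    headers.insert("x-forwarded-for", "203.0.113.7, 10.0.0.1".parse().unwrap());
    let addr: SocketAddr = "192.0.2.1:443".parse().unwrap();
    assert_eq!(extract_client_ip_parts(&headers, addr), "203.0.113.7");
}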


@@ -23,11 +23,29 @@ pub fn app_error_to_mcp(err: &AppError) -> rmcp::ErrorData {
"Entry not found. Use secrets_find to discover existing entries.",
None,
),
AppError::NotFoundUser => rmcp::ErrorData::invalid_request("User not found.", None),
AppError::NotFoundSecret => rmcp::ErrorData::invalid_request("Secret not found.", None),
AppError::AuthenticationFailed => rmcp::ErrorData::invalid_request(
"Authentication failed. Please check your API key or login credentials.",
None,
),
AppError::Unauthorized => rmcp::ErrorData::invalid_request(
"Unauthorized: you do not have permission to access this resource.",
None,
),
AppError::Validation { message } => rmcp::ErrorData::invalid_request(message.clone(), None),
AppError::ConcurrentModification => rmcp::ErrorData::invalid_request(
"The entry was modified by another request. Please refresh and try again.",
None,
),
AppError::DecryptionFailed => rmcp::ErrorData::invalid_request(
"Decryption failed — the encryption key may be incorrect or does not match the data.",
None,
),
AppError::EncryptionKeyNotSet => rmcp::ErrorData::invalid_request(
"Encryption key not set. You must set a passphrase before using this feature.",
None,
),
AppError::Internal(_) => rmcp::ErrorData::internal_error(
"Request failed due to a server error. Check service logs if you need details.",
None,


@@ -1,25 +1,28 @@
use std::net::SocketAddr;
use std::time::Instant;
use axum::{
body::{Body, Bytes, to_bytes},
extract::{ConnectInfo, Request},
extract::Request,
http::{
HeaderMap, Method, StatusCode,
header::{CONTENT_LENGTH, CONTENT_TYPE, USER_AGENT},
header::{AUTHORIZATION, CONTENT_LENGTH, CONTENT_TYPE, USER_AGENT},
},
middleware::Next,
response::{IntoResponse, Response},
};
use crate::auth::AuthUser;
/// Axum middleware that logs structured info for every HTTP request.
///
/// All requests: method, path, status, latency_ms, client_ip, user_agent.
/// POST /mcp requests: additionally parses JSON-RPC body for jsonrpc_method,
/// tool_name, jsonrpc_id, mcp_session, batch_size.
/// tool_name, jsonrpc_id, mcp_session, batch_size, tool_args (non-sensitive
/// arguments only), plus masked auth_key / enc_key fingerprints and user_id
/// for diagnosing header forwarding issues.
///
/// Sensitive headers (Authorization, X-Encryption-Key) and secret values
/// are never logged.
/// Sensitive headers (Authorization, X-Encryption-Key) are never logged in
/// full — only short fingerprints are emitted.
pub async fn request_logging_middleware(req: Request, next: Next) -> Response {
let method = req.method().clone();
let path = req.uri().path().to_string();
@@ -33,6 +36,10 @@ pub async fn request_logging_middleware(req: Request, next: Next) -> Response {
.and_then(|v| v.to_str().ok())
.map(|s| s.to_string());
// Capture header fingerprints before consuming the request.
let auth_key = mask_bearer(req.headers());
let enc_key = mask_enc_key(req.headers());
let is_mcp_post = path.starts_with("/mcp") && method == Method::POST;
let is_json = header_str(req.headers(), CONTENT_TYPE)
.map(|ct| ct.contains("application/json"))
@@ -46,6 +53,11 @@ pub async fn request_logging_middleware(req: Request, next: Next) -> Response {
let cap = content_len.unwrap_or(0);
if cap <= 512 * 1024 {
let (parts, body) = req.into_parts();
// user_id is available after auth middleware has run (injected into extensions).
let user_id = parts
.extensions
.get::<AuthUser>()
.map(|a| a.user_id.to_string());
match to_bytes(body, 512 * 1024).await {
Ok(bytes) => {
let rpc = parse_jsonrpc_meta(&bytes);
@@ -62,6 +74,9 @@ pub async fn request_logging_middleware(req: Request, next: Next) -> Response {
ua.as_deref(),
content_len,
mcp_session.as_deref(),
auth_key.as_deref(),
&enc_key,
user_id.as_deref(),
&rpc,
);
return resp;
@@ -78,6 +93,9 @@ pub async fn request_logging_middleware(req: Request, next: Next) -> Response {
ua = ua.as_deref(),
content_length = content_len,
mcp_session = mcp_session.as_deref(),
auth_key = auth_key.as_deref(),
enc_key = enc_key.as_str(),
user_id = user_id.as_deref(),
"mcp request",
);
return (
@@ -160,6 +178,9 @@ fn log_mcp_request(
ua: Option<&str>,
content_length: Option<u64>,
mcp_session: Option<&str>,
auth_key: Option<&str>,
enc_key: &str,
user_id: Option<&str>,
rpc: &JsonRpcMeta,
) {
tracing::info!(
@@ -175,18 +196,94 @@ fn log_mcp_request(
tool = rpc.tool_name.as_deref(),
jsonrpc_id = rpc.request_id.as_deref(),
batch_size = rpc.batch_size,
tool_args = rpc.tool_args.as_deref(),
auth_key,
enc_key,
user_id,
"mcp request",
);
}
// ── Sensitive header masking ──────────────────────────────────────────────────
/// Mask a Bearer token: emit only the first 12 characters followed by `…`.
/// Returns `None` if the Authorization header is absent or not a Bearer token.
/// Example: `sk_90c88844e…`
fn mask_bearer(headers: &HeaderMap) -> Option<String> {
let val = headers.get(AUTHORIZATION)?.to_str().ok()?;
let token = val.strip_prefix("Bearer ")?.trim();
if token.is_empty() {
return None;
}
if token.len() > 12 {
Some(format!("{}", &token[..12]))
} else {
Some(token.to_string())
}
}
/// Fingerprint the X-Encryption-Key header.
///
/// Emits the first 4 chars, last 4 chars, and the trimmed length (plus the
/// raw length when they differ), e.g. `146b…5516(64)`.
/// Returns `"absent"` when the header is missing. Reveals enough to confirm
/// which key arrived and whether it was truncated or padded, without revealing
/// the full value.
fn mask_enc_key(headers: &HeaderMap) -> String {
match headers
.get("x-encryption-key")
.and_then(|v| v.to_str().ok())
{
Some(val) => {
let raw_len = val.len();
let t = val.trim();
let len = t.len();
if len >= 8 {
let prefix = &t[..4];
let suffix = &t[len - 4..];
if raw_len != len {
// Trailing/leading whitespace detected — extra diagnostic.
format!("{prefix}{suffix}({len}, raw={raw_len})")
} else {
format!("{prefix}{suffix}({len})")
}
} else {
format!("…({len})")
}
}
None => "absent".to_string(),
}
}
// ── JSON-RPC body parsing ─────────────────────────────────────────────────────
/// Safe (non-sensitive) argument keys that may be included verbatim in logs.
/// Keys NOT in this list (e.g. `secrets`, `secrets_obj`, `meta_obj`,
/// `encryption_key`) are silently dropped.
const SAFE_ARG_KEYS: &[&str] = &[
"id",
"name",
"name_query",
"folder",
"type",
"entry_type",
"field",
"query",
"tags",
"limit",
"offset",
"format",
"dry_run",
"prefix",
];
#[derive(Debug, Default)]
struct JsonRpcMeta {
request_id: Option<String>,
rpc_method: Option<String>,
tool_name: Option<String>,
batch_size: Option<usize>,
/// Non-sensitive tool call arguments for diagnostic logging.
tool_args: Option<String>,
}
fn parse_jsonrpc_meta(bytes: &Bytes) -> JsonRpcMeta {
@@ -216,12 +313,47 @@ fn parse_single(value: &serde_json::Value) -> JsonRpcMeta {
.pointer("/params/name")
.and_then(|v| v.as_str())
.map(|s| s.to_string());
let tool_args = extract_tool_args(value);
JsonRpcMeta {
request_id,
rpc_method,
tool_name,
batch_size: None,
tool_args,
}
}
/// Extract a compact summary of non-sensitive tool arguments for logging.
/// Only keys listed in `SAFE_ARG_KEYS` are included.
fn extract_tool_args(value: &serde_json::Value) -> Option<String> {
let args = value.pointer("/params/arguments")?;
let obj = args.as_object()?;
let pairs: Vec<String> = obj
.iter()
.filter(|(k, v)| SAFE_ARG_KEYS.contains(&k.as_str()) && !v.is_null())
.map(|(k, v)| format!("{}={}", k, summarize_value(v)))
.collect();
if pairs.is_empty() {
None
} else {
Some(pairs.join(" "))
}
}
/// Produce a short, log-safe representation of a JSON value.
fn summarize_value(v: &serde_json::Value) -> String {
match v {
serde_json::Value::String(s) => {
if s.chars().count() > 64 {
// Truncate by chars, not bytes, so multi-byte UTF-8 cannot split and panic.
format!("\"{}\"", s.chars().take(64).collect::<String>())
} else {
format!("\"{s}\"")
}
}
serde_json::Value::Array(arr) => format!("[…{}]", arr.len()),
serde_json::Value::Object(_) => "{…}".to_string(),
other => other.to_string(),
}
}
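// Hedged sketch, not part of the diff: only SAFE_ARG_KEYS survive into
// tool_args, and values are summarized rather than dumped. The result below
// is the same under serde_json's default (sorted) map and preserve_order.
#[allow(dead_code)]
fn tool_args_sketch() {
    let body = serde_json::json!({
        "params": { "arguments": {
            "name": "stripe",
            "tags": ["prod", "billing"],
            "secrets_obj": { "api_key": "sk_live_…" } // dropped: not a safe key
        }}
    });
    assert_eq!(
        extract_tool_args(&body).as_deref(),
        Some(r#"name="stripe" tags=[…2]"#)
    );
}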
@@ -245,18 +377,5 @@ fn header_str(headers: &HeaderMap, name: impl axum::http::header::AsHeaderName)
}
fn client_ip(req: &Request) -> Option<String> {
if let Some(first) = req
.headers()
.get("x-forwarded-for")
.and_then(|v| v.to_str().ok())
.and_then(|s| s.split(',').next())
{
let s = first.trim();
if !s.is_empty() {
return Some(s.to_string());
}
}
req.extensions()
.get::<ConnectInfo<SocketAddr>>()
.map(|c| c.ip().to_string())
Some(crate::client_ip::extract_client_ip(req))
}
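// Hedged sketch, not part of the diff, of the fingerprint shapes documented
// on mask_bearer and mask_enc_key (sample values are made up):
#[cfg(test)]
mod masking_sketch {
    use super::*;
    use axum::http::{HeaderMap, HeaderValue, header::AUTHORIZATION};

    #[test]
    fn fingerprints_match_documented_shapes() {
        let mut headers = HeaderMap::new();
        headers.insert(
            AUTHORIZATION,
            HeaderValue::from_static("Bearer sk_90c88844e4e5aa00"),
        );
        headers.insert(
            "x-encryption-key",
            HeaderValue::from_static(
                "146bdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeef5516",
            ),
        );
        // First 12 chars of the token, then an ellipsis.
        assert_eq!(mask_bearer(&headers).as_deref(), Some("sk_90c88844e…"));
        // First 4 chars, an ellipsis, last 4 chars, and the length.
        assert_eq!(mask_enc_key(&headers), "146b…5516(64)");
    }
}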


@@ -1,12 +1,14 @@
mod auth;
mod client_ip;
mod error;
mod logging;
mod oauth;
mod rate_limit;
mod tools;
mod validation;
mod web;
use std::net::SocketAddr;
use std::sync::Arc;
use anyhow::{Context, Result};
use axum::Router;
@@ -24,6 +26,7 @@ use tracing_subscriber::fmt::time::FormatTime;
use secrets_core::config::resolve_db_config;
use secrets_core::db::{create_pool, migrate};
use secrets_core::service::delete::purge_expired_deleted_entries;
use crate::oauth::OAuthConfig;
use crate::tools::SecretsService;
@@ -141,11 +144,11 @@ async fn main() -> Result<()> {
};
// ── MCP service ───────────────────────────────────────────────────────────
let pool_arc = Arc::new(pool.clone());
let pool_for_mcp = pool.clone();
let mcp_service = StreamableHttpService::new(
move || {
let p = pool_arc.clone();
let p = pool_for_mcp.clone();
Ok(SecretsService::new(p))
},
LocalSessionManager::default().into(),
@@ -153,10 +156,21 @@ async fn main() -> Result<()> {
);
// ── Router ────────────────────────────────────────────────────────────────
let cors = CorsLayer::new()
.allow_origin(Any)
.allow_methods(Any)
.allow_headers(Any);
// CORS: restrict origins in production, allow all in development
let is_production = matches!(
load_env_var("SECRETS_ENV")
.as_deref()
.map(|s| s.to_ascii_lowercase())
.as_deref(),
Some("prod" | "production")
);
let cors = build_cors_layer(&base_url, is_production);
// Rate limiting
let rate_limit_state = rate_limit::RateLimitState::new();
let rate_limit_cleanup = rate_limit::spawn_cleanup_task(rate_limit_state.ip_limiter.clone());
let recycle_bin_cleanup = tokio::spawn(start_recycle_bin_cleanup_task(pool.clone()));
let router = Router::new()
.merge(web::web_router())
@@ -168,8 +182,15 @@ async fn main() -> Result<()> {
pool,
auth::bearer_auth_middleware,
))
.layer(axum::middleware::from_fn_with_state(
rate_limit_state.clone(),
rate_limit::rate_limit_middleware,
))
.layer(session_layer)
.layer(cors)
.layer(tower_http::limit::RequestBodyLimitLayer::new(
10 * 1024 * 1024,
))
.with_state(app_state);
// ── Start server ──────────────────────────────────────────────────────────
@@ -192,12 +213,154 @@ async fn main() -> Result<()> {
.context("server error")?;
session_cleanup.abort();
rate_limit_cleanup.abort();
recycle_bin_cleanup.abort();
Ok(())
}
async fn start_recycle_bin_cleanup_task(pool: PgPool) {
let mut interval = tokio::time::interval(tokio::time::Duration::from_secs(24 * 60 * 60));
interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
loop {
interval.tick().await;
match purge_expired_deleted_entries(&pool).await {
Ok(count) if count > 0 => {
tracing::info!(purged_count = count, "purged expired recycle bin entries");
}
Ok(_) => {}
Err(error) => {
tracing::warn!(error = %error, "failed to purge expired recycle bin entries");
}
}
}
}
async fn shutdown_signal() {
tokio::signal::ctrl_c()
.await
.expect("failed to install CTRL+C signal handler");
let ctrl_c = tokio::signal::ctrl_c();
#[cfg(unix)]
let terminate = async {
tokio::signal::unix::signal(tokio::signal::unix::SignalKind::terminate())
.expect("failed to install SIGTERM handler")
.recv()
.await;
};
#[cfg(not(unix))]
let terminate = std::future::pending::<()>();
tokio::select! {
_ = ctrl_c => {},
_ = terminate => {},
}
tracing::info!("Shutting down gracefully...");
}
/// Production CORS allowed headers.
///
/// When adding a new custom header to the MCP or Web API, this list must be
/// updated accordingly — otherwise browsers will block the request during
/// the CORS preflight check.
fn production_allowed_headers() -> [axum::http::HeaderName; 5] {
[
axum::http::header::AUTHORIZATION,
axum::http::header::CONTENT_TYPE,
axum::http::HeaderName::from_static("x-encryption-key"),
axum::http::HeaderName::from_static("mcp-session-id"),
axum::http::HeaderName::from_static("x-mcp-session"),
]
}
/// Production CORS allowed methods.
///
/// Keep this list explicit because tower-http rejects
/// `allow_credentials(true)` together with `allow_methods(Any)`.
fn production_allowed_methods() -> [axum::http::Method; 5] {
[
axum::http::Method::GET,
axum::http::Method::POST,
axum::http::Method::PATCH,
axum::http::Method::DELETE,
axum::http::Method::OPTIONS,
]
}
/// Build the CORS layer for the application.
///
/// In production mode the origin is restricted to the BASE_URL origin
/// (scheme://host:port, path stripped) and credentials are allowed.
/// `allow_headers` and `allow_methods` use explicit whitelists to avoid the
/// tower-http restriction on `allow_credentials(true)` + wildcards.
///
/// In development mode all origins, methods and headers are allowed.
fn build_cors_layer(base_url: &str, is_production: bool) -> CorsLayer {
if is_production {
let allowed_origin = if let Ok(parsed) = base_url.parse::<url::Url>() {
let origin = parsed.origin().ascii_serialization();
origin
.parse::<axum::http::HeaderValue>()
.unwrap_or_else(|_| panic!("invalid BASE_URL origin: {}", origin))
} else {
base_url
.parse::<axum::http::HeaderValue>()
.unwrap_or_else(|_| panic!("invalid BASE_URL: {}", base_url))
};
CorsLayer::new()
.allow_origin(allowed_origin)
.allow_methods(production_allowed_methods())
.allow_headers(production_allowed_headers())
.allow_credentials(true)
} else {
CorsLayer::new()
.allow_origin(Any)
.allow_methods(Any)
.allow_headers(Any)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn production_cors_does_not_panic() {
let layer = build_cors_layer("https://secrets.example.com/app", true);
let _ = layer;
}
#[test]
fn production_cors_headers_include_all_required() {
let headers = production_allowed_headers();
let names: Vec<&str> = headers.iter().map(|h| h.as_str()).collect();
assert!(names.contains(&"authorization"));
assert!(names.contains(&"content-type"));
assert!(names.contains(&"x-encryption-key"));
assert!(names.contains(&"mcp-session-id"));
assert!(names.contains(&"x-mcp-session"));
}
#[test]
fn production_cors_methods_include_all_required() {
let methods = production_allowed_methods();
assert!(methods.contains(&axum::http::Method::GET));
assert!(methods.contains(&axum::http::Method::POST));
assert!(methods.contains(&axum::http::Method::PATCH));
assert!(methods.contains(&axum::http::Method::DELETE));
assert!(methods.contains(&axum::http::Method::OPTIONS));
}
#[test]
fn production_cors_normalizes_base_url_with_path() {
let url = url::Url::parse("https://secrets.example.com/secrets/app").unwrap();
let origin = url.origin().ascii_serialization();
assert_eq!(origin, "https://secrets.example.com");
}
#[test]
fn development_cors_allows_everything() {
let layer = build_cors_layer("http://localhost:9315", false);
let _ = layer;
}
}


@@ -41,5 +41,5 @@ pub fn random_state() -> String {
use rand::RngExt;
let mut bytes = [0u8; 16];
rand::rng().fill(&mut bytes);
bytes.iter().map(|b| format!("{:02x}", b)).collect()
secrets_core::crypto::hex::encode_hex(&bytes)
}


@@ -8,7 +8,7 @@ use super::{OAuthConfig, OAuthUserInfo};
/// - Docs: https://developers.weixin.qq.com/doc/oplatform/Website_App/WeChat_Login/Wechat_Login.html
use anyhow::{Result, bail};
#[allow(dead_code)]
#[allow(dead_code)] // Placeholder — implement when WeChat login is needed.
pub async fn exchange_code(
_client: &reqwest::Client,
_config: &OAuthConfig,


@@ -0,0 +1,160 @@
use std::num::NonZeroU32;
use std::sync::Arc;
use std::time::Duration;
use axum::{
extract::{Request, State},
http::{HeaderMap, HeaderValue, StatusCode},
middleware::Next,
response::{IntoResponse, Response},
};
use governor::{
Quota, RateLimiter,
clock::{Clock, DefaultClock},
state::{InMemoryState, NotKeyed, keyed::DashMapStateStore},
};
use serde_json::json;
use crate::client_ip;
/// Per-IP rate limiter (keyed by client IP string)
type IpRateLimiter = RateLimiter<String, DashMapStateStore<String>, DefaultClock>;
/// Global rate limiter (not keyed)
type GlobalRateLimiter = RateLimiter<NotKeyed, InMemoryState, DefaultClock>;
/// Parse a u32 env value into NonZeroU32, logging a warning and falling back
/// to the default if the value is zero.
fn nz_or_log(value: u32, default: u32, name: &str) -> NonZeroU32 {
NonZeroU32::new(value).unwrap_or_else(|| {
tracing::warn!(
configured = value,
default,
"{name} must be non-zero, using default"
);
NonZeroU32::new(default).unwrap()
})
}
#[derive(Clone)]
pub struct RateLimitState {
pub ip_limiter: Arc<IpRateLimiter>,
pub global_limiter: Arc<GlobalRateLimiter>,
}
impl RateLimitState {
/// Create a new RateLimitState with default limits.
///
/// Default limits (can be overridden via environment variables):
/// - Global: 100 req/s, burst 200
/// - Per-IP: 20 req/s, burst 40
pub fn new() -> Self {
let global_rate = std::env::var("RATE_LIMIT_GLOBAL_PER_SECOND")
.ok()
.and_then(|v| v.parse::<u32>().ok())
.unwrap_or(100);
let global_burst = std::env::var("RATE_LIMIT_GLOBAL_BURST")
.ok()
.and_then(|v| v.parse::<u32>().ok())
.unwrap_or(200);
let ip_rate = std::env::var("RATE_LIMIT_IP_PER_SECOND")
.ok()
.and_then(|v| v.parse::<u32>().ok())
.unwrap_or(20);
let ip_burst = std::env::var("RATE_LIMIT_IP_BURST")
.ok()
.and_then(|v| v.parse::<u32>().ok())
.unwrap_or(40);
let global_rate_nz = nz_or_log(global_rate, 100, "RATE_LIMIT_GLOBAL_PER_SECOND");
let global_burst_nz = nz_or_log(global_burst, 200, "RATE_LIMIT_GLOBAL_BURST");
let ip_rate_nz = nz_or_log(ip_rate, 20, "RATE_LIMIT_IP_PER_SECOND");
let ip_burst_nz = nz_or_log(ip_burst, 40, "RATE_LIMIT_IP_BURST");
let global_quota = Quota::per_second(global_rate_nz).allow_burst(global_burst_nz);
let ip_quota = Quota::per_second(ip_rate_nz).allow_burst(ip_burst_nz);
tracing::info!(
global_rate = global_rate_nz.get(),
global_burst = global_burst_nz.get(),
ip_rate = ip_rate_nz.get(),
ip_burst = ip_burst_nz.get(),
"rate limiter initialized"
);
Self {
global_limiter: Arc::new(RateLimiter::direct(global_quota)),
ip_limiter: Arc::new(RateLimiter::dashmap(ip_quota)),
}
}
}
/// Rate limiting middleware function.
///
/// Checks both global and per-IP rate limits before allowing the request through.
/// Returns 429 Too Many Requests if either limit is exceeded.
pub async fn rate_limit_middleware(
State(rl): State<RateLimitState>,
req: Request,
next: Next,
) -> Result<Response, Response> {
// Check global rate limit first
if let Err(negative) = rl.global_limiter.check() {
let retry_after = negative.wait_time_from(DefaultClock::default().now());
tracing::warn!(
retry_after_secs = retry_after.as_secs(),
"global rate limit exceeded"
);
return Err(too_many_requests_response(Some(retry_after)));
}
// Check per-IP rate limit
let key = client_ip::extract_client_ip(&req);
if let Err(negative) = rl.ip_limiter.check_key(&key) {
let retry_after = negative.wait_time_from(DefaultClock::default().now());
tracing::warn!(
client_ip = %key,
retry_after_secs = retry_after.as_secs(),
"per-IP rate limit exceeded"
);
return Err(too_many_requests_response(Some(retry_after)));
}
Ok(next.run(req).await)
}
/// Start a background task to clean up expired rate limiter entries.
///
/// This should be called once during application startup.
/// The task runs every 60 seconds and will be aborted on shutdown.
pub fn spawn_cleanup_task(ip_limiter: Arc<IpRateLimiter>) -> tokio::task::JoinHandle<()> {
tokio::spawn(async move {
let mut interval = tokio::time::interval(Duration::from_secs(60));
loop {
interval.tick().await;
ip_limiter.retain_recent();
}
})
}
/// Create a 429 Too Many Requests response.
fn too_many_requests_response(retry_after: Option<Duration>) -> Response {
let mut headers = HeaderMap::new();
headers.insert("Content-Type", HeaderValue::from_static("application/json"));
if let Some(duration) = retry_after {
let secs = duration.as_secs().max(1);
if let Ok(value) = HeaderValue::from_str(&secs.to_string()) {
headers.insert("Retry-After", value);
}
}
let body = json!({
"error": "Too many requests, please try again later"
});
(StatusCode::TOO_MANY_REQUESTS, headers, body.to_string()).into_response()
}
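// Hedged configuration sketch, not part of the diff: the four env knobs read
// by RateLimitState::new() and the defaults they fall back to.
#[allow(dead_code)]
fn rate_limit_config_sketch() {
    std::env::set_var("RATE_LIMIT_GLOBAL_PER_SECOND", "200"); // default 100
    std::env::set_var("RATE_LIMIT_GLOBAL_BURST", "400"); // default 200
    std::env::set_var("RATE_LIMIT_IP_PER_SECOND", "50"); // default 20
    std::env::set_var("RATE_LIMIT_IP_BURST", "100"); // default 40
    let state = RateLimitState::new();
    // A fresh limiter admits the first request.
    assert!(state.global_limiter.check().is_ok());
}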

File diff suppressed because it is too large

@@ -0,0 +1,149 @@
/// Validation constants for input field lengths.
pub const MAX_NAME_LENGTH: usize = 256;
pub const MAX_FOLDER_LENGTH: usize = 128;
pub const MAX_ENTRY_TYPE_LENGTH: usize = 64;
pub const MAX_NOTES_LENGTH: usize = 10000;
pub const MAX_TAG_LENGTH: usize = 64;
pub const MAX_TAG_COUNT: usize = 50;
pub const MAX_META_KEY_LENGTH: usize = 128;
pub const MAX_META_VALUE_LENGTH: usize = 4096;
pub const MAX_META_COUNT: usize = 100;
/// Validate input field lengths for MCP tools.
///
/// Returns an error if any field exceeds its maximum length.
pub fn validate_input_lengths(
name: &str,
folder: Option<&str>,
entry_type: Option<&str>,
notes: Option<&str>,
) -> Result<(), rmcp::ErrorData> {
if name.chars().count() > MAX_NAME_LENGTH {
return Err(rmcp::ErrorData::invalid_params(
format!("name must be at most {} characters", MAX_NAME_LENGTH),
None,
));
}
if let Some(folder) = folder
&& folder.chars().count() > MAX_FOLDER_LENGTH
{
return Err(rmcp::ErrorData::invalid_params(
format!("folder must be at most {} characters", MAX_FOLDER_LENGTH),
None,
));
}
if let Some(entry_type) = entry_type
&& entry_type.chars().count() > MAX_ENTRY_TYPE_LENGTH
{
return Err(rmcp::ErrorData::invalid_params(
format!("type must be at most {} characters", MAX_ENTRY_TYPE_LENGTH),
None,
));
}
if let Some(notes) = notes
&& notes.chars().count() > MAX_NOTES_LENGTH
{
return Err(rmcp::ErrorData::invalid_params(
format!("notes must be at most {} characters", MAX_NOTES_LENGTH),
None,
));
}
Ok(())
}
/// Validate the tags list.
///
/// Checks total count and per-tag character length.
pub fn validate_tags(tags: &[String]) -> Result<(), rmcp::ErrorData> {
if tags.len() > MAX_TAG_COUNT {
return Err(rmcp::ErrorData::invalid_params(
format!("at most {} tags are allowed", MAX_TAG_COUNT),
None,
));
}
for tag in tags {
if tag.chars().count() > MAX_TAG_LENGTH {
return Err(rmcp::ErrorData::invalid_params(
format!(
"tag '{}' exceeds the maximum length of {} characters",
tag, MAX_TAG_LENGTH
),
None,
));
}
}
Ok(())
}
/// Validate metadata KV strings (key=value / key:=json format).
///
/// Checks total count and per-key/per-value character lengths.
/// This is a best-effort check on the raw KV strings before parsing;
/// keys containing `:` path separators are checked as a whole.
pub fn validate_meta_entries(entries: &[String]) -> Result<(), rmcp::ErrorData> {
if entries.len() > MAX_META_COUNT {
return Err(rmcp::ErrorData::invalid_params(
format!("at most {} metadata entries are allowed", MAX_META_COUNT),
None,
));
}
for entry in entries {
// key:=json — check both key and JSON value length
if let Some((key, value)) = entry.split_once(":=") {
if key.chars().count() > MAX_META_KEY_LENGTH {
return Err(rmcp::ErrorData::invalid_params(
format!(
"metadata key '{}' exceeds the maximum length of {} characters",
key, MAX_META_KEY_LENGTH
),
None,
));
}
if value.chars().count() > MAX_META_VALUE_LENGTH {
return Err(rmcp::ErrorData::invalid_params(
format!(
"metadata JSON value for key '{}' exceeds the maximum length of {} characters",
key, MAX_META_VALUE_LENGTH
),
None,
));
}
continue;
}
// key=value or key@path
if let Some((key, value)) = entry.split_once('=') {
if key.chars().count() > MAX_META_KEY_LENGTH {
return Err(rmcp::ErrorData::invalid_params(
format!(
"metadata key '{}' exceeds the maximum length of {} characters",
key, MAX_META_KEY_LENGTH
),
None,
));
}
if value.chars().count() > MAX_META_VALUE_LENGTH {
return Err(rmcp::ErrorData::invalid_params(
format!(
"metadata value for key '{}' exceeds the maximum length of {} characters",
key, MAX_META_VALUE_LENGTH
),
None,
));
}
} else {
// Fallback: entry without = or := — check total length
let max_total = MAX_META_KEY_LENGTH + MAX_META_VALUE_LENGTH;
if entry.chars().count() > max_total {
let preview = entry.chars().take(50).collect::<String>();
return Err(rmcp::ErrorData::invalid_params(
format!(
"metadata entry '{}' exceeds the maximum length of {} characters",
preview, max_total
),
None,
));
}
}
}
Ok(())
}
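// Hedged sketch, not part of the diff, of the KV shapes this validator
// accepts and rejects:
#[allow(dead_code)]
fn meta_validation_sketch() {
    let ok = vec![
        "env=prod".to_string(),              // key=value
        r#"limits:={"rps":20}"#.to_string(), // key:=json
    ];
    assert!(validate_meta_entries(&ok).is_ok());
    // A value longer than MAX_META_VALUE_LENGTH characters is rejected.
    let too_long = vec![format!("note={}", "x".repeat(MAX_META_VALUE_LENGTH + 1))];
    assert!(validate_meta_entries(&too_long).is_err());
}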

File diff suppressed because it is too large

@@ -0,0 +1,307 @@
use askama::Template;
use axum::{Json, extract::State, http::StatusCode, response::Response};
use serde::{Deserialize, Serialize};
use tower_sessions::Session;
use secrets_core::crypto::hex;
use secrets_core::service::{
api_key::{ensure_api_key, regenerate_api_key},
user::{change_user_key, get_user_by_id, update_user_key_setup},
};
use crate::AppState;
use super::{SESSION_KEY_VERSION, current_user_id, render_template, require_valid_user};
#[derive(Template)]
#[template(path = "dashboard.html")]
struct DashboardTemplate {
user_name: String,
user_email: String,
has_passphrase: bool,
base_url: String,
version: &'static str,
}
#[derive(Serialize)]
pub(super) struct KeySaltResponse {
has_passphrase: bool,
#[serde(skip_serializing_if = "Option::is_none")]
salt: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
key_check: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
params: Option<serde_json::Value>,
}
#[derive(Deserialize)]
pub(super) struct KeySetupRequest {
/// Hex-encoded 32-byte random salt
salt: String,
/// Hex-encoded AES-256-GCM encryption of "secrets-mcp-key-check" with the derived key
key_check: String,
/// Key derivation parameters, e.g. {"alg":"pbkdf2-sha256","iterations":600000}
params: serde_json::Value,
}
#[derive(Serialize)]
pub(super) struct KeySetupResponse {
ok: bool,
}
#[derive(Deserialize)]
pub(super) struct KeyChangeRequest {
/// Old derived key as 64-char hex — used to decrypt existing secrets
old_key: String,
/// New derived key as 64-char hex — used to re-encrypt secrets
new_key: String,
/// New 32-byte hex salt
salt: String,
/// New key_check: AES-256-GCM of KEY_CHECK_PLAINTEXT with the new key (hex)
key_check: String,
/// New key derivation parameters
params: serde_json::Value,
}
#[derive(Serialize)]
pub(super) struct ApiKeyResponse {
api_key: String,
}
pub(super) async fn dashboard(
State(state): State<AppState>,
session: Session,
) -> Result<Response, StatusCode> {
let user = match require_valid_user(&state.pool, &session, "dashboard").await {
Ok(u) => u,
Err(r) => return Ok(r),
};
let tmpl = DashboardTemplate {
user_name: user.name.clone(),
user_email: user.email.clone().unwrap_or_default(),
has_passphrase: user.key_salt.is_some(),
base_url: state.base_url.clone(),
version: env!("CARGO_PKG_VERSION"),
};
render_template(tmpl)
}
pub(super) async fn api_key_salt(
State(state): State<AppState>,
session: Session,
) -> Result<Json<KeySaltResponse>, StatusCode> {
let user_id = current_user_id(&session)
.await
.ok_or(StatusCode::UNAUTHORIZED)?;
let user = get_user_by_id(&state.pool, user_id)
.await
.map_err(|e| {
tracing::error!(error = %e, %user_id, "failed to load user for key-salt API");
StatusCode::INTERNAL_SERVER_ERROR
})?
.ok_or(StatusCode::UNAUTHORIZED)?;
if user.key_salt.is_none() {
return Ok(Json(KeySaltResponse {
has_passphrase: false,
salt: None,
key_check: None,
params: None,
}));
}
Ok(Json(KeySaltResponse {
has_passphrase: true,
salt: user.key_salt.as_deref().map(hex::encode_hex),
key_check: user.key_check.as_deref().map(hex::encode_hex),
params: user.key_params,
}))
}
pub(super) async fn api_key_setup(
State(state): State<AppState>,
session: Session,
Json(body): Json<KeySetupRequest>,
) -> Result<Json<KeySetupResponse>, StatusCode> {
let user_id = current_user_id(&session)
.await
.ok_or(StatusCode::UNAUTHORIZED)?;
// Guard: if a passphrase is already configured, reject and direct to /api/key-change
let user = get_user_by_id(&state.pool, user_id)
.await
.map_err(|e| {
tracing::error!(error = %e, %user_id, "failed to load user for key-setup guard");
StatusCode::INTERNAL_SERVER_ERROR
})?
.ok_or(StatusCode::UNAUTHORIZED)?;
if user.key_salt.is_some() {
tracing::warn!(%user_id, "key-setup called but passphrase already configured; use /api/key-change");
return Err(StatusCode::CONFLICT);
}
let salt = hex::decode_hex(&body.salt).map_err(|e| {
tracing::warn!(error = %e, "invalid hex in key-setup salt");
StatusCode::BAD_REQUEST
})?;
let key_check = hex::decode_hex(&body.key_check).map_err(|e| {
tracing::warn!(error = %e, "invalid hex in key-setup key_check");
StatusCode::BAD_REQUEST
})?;
if salt.len() != 32 {
tracing::warn!(salt_len = salt.len(), "key-setup salt must be 32 bytes");
return Err(StatusCode::BAD_REQUEST);
}
update_user_key_setup(&state.pool, user_id, &salt, &key_check, &body.params)
.await
.map_err(|e| {
tracing::error!(error = %e, "failed to update key setup");
StatusCode::INTERNAL_SERVER_ERROR
})?;
Ok(Json(KeySetupResponse { ok: true }))
}
// ── Change passphrase (re-encrypts all secrets) ───────────────────────────────
pub(super) async fn api_key_change(
State(state): State<AppState>,
session: Session,
Json(body): Json<KeyChangeRequest>,
) -> Result<Json<KeySetupResponse>, StatusCode> {
let user_id = current_user_id(&session)
.await
.ok_or(StatusCode::UNAUTHORIZED)?;
let user = get_user_by_id(&state.pool, user_id)
.await
.map_err(|e| {
tracing::error!(error = %e, %user_id, "failed to load user for key-change");
StatusCode::INTERNAL_SERVER_ERROR
})?
.ok_or(StatusCode::UNAUTHORIZED)?;
// Must have an existing passphrase to change
let existing_key_check = user.key_check.ok_or_else(|| {
tracing::warn!(%user_id, "key-change called but no passphrase configured; use /api/key-setup");
StatusCode::BAD_REQUEST
})?;
// Validate and decode old key
let old_key_bytes = secrets_core::crypto::extract_key_from_hex(&body.old_key).map_err(|e| {
tracing::warn!(error = %e, "invalid old_key hex in key-change");
StatusCode::BAD_REQUEST
})?;
// Verify old_key against the stored key_check
let plaintext = secrets_core::crypto::decrypt(&old_key_bytes, &existing_key_check).map_err(|_| {
tracing::warn!(%user_id, "key-change rejected: old_key does not match stored key_check");
StatusCode::UNAUTHORIZED
})?;
if plaintext != b"secrets-mcp-key-check" {
tracing::warn!(%user_id, "key-change rejected: decrypted key_check content mismatch");
return Err(StatusCode::UNAUTHORIZED);
}
// Validate and decode new key
let new_key_bytes = secrets_core::crypto::extract_key_from_hex(&body.new_key).map_err(|e| {
tracing::warn!(error = %e, "invalid new_key hex in key-change");
StatusCode::BAD_REQUEST
})?;
// Decode new salt and key_check
let new_salt = hex::decode_hex(&body.salt).map_err(|e| {
tracing::warn!(error = %e, "invalid hex in key-change salt");
StatusCode::BAD_REQUEST
})?;
if new_salt.len() != 32 {
tracing::warn!(
salt_len = new_salt.len(),
"key-change salt must be 32 bytes"
);
return Err(StatusCode::BAD_REQUEST);
}
let new_key_check = hex::decode_hex(&body.key_check).map_err(|e| {
tracing::warn!(error = %e, "invalid hex in key-change key_check");
StatusCode::BAD_REQUEST
})?;
change_user_key(
&state.pool,
user_id,
&old_key_bytes,
&new_key_bytes,
&new_salt,
&new_key_check,
&body.params,
)
.await
.map_err(|e| {
tracing::error!(error = %e, %user_id, "failed to change user key");
StatusCode::INTERNAL_SERVER_ERROR
})?;
// Refresh the session's key_version so the current session is not immediately
// invalidated by require_valid_user on the next page load.
match get_user_by_id(&state.pool, user_id).await {
Ok(Some(updated_user)) => {
if let Err(e) = session
.insert(SESSION_KEY_VERSION, updated_user.key_version)
.await
{
tracing::warn!(error = %e, %user_id, "failed to update key_version in session after key change");
}
}
Ok(None) => {
tracing::warn!(%user_id, "user not found after key change; session not updated");
}
Err(e) => {
tracing::warn!(error = %e, %user_id, "failed to reload user after key change; session not updated");
}
}
tracing::info!(%user_id, secrets_count = "(see service log)", "passphrase changed and secrets re-encrypted");
Ok(Json(KeySetupResponse { ok: true }))
}
// ── API Key management ────────────────────────────────────────────────────────
pub(super) async fn api_apikey_get(
State(state): State<AppState>,
session: Session,
) -> Result<Json<ApiKeyResponse>, StatusCode> {
let user_id = current_user_id(&session)
.await
.ok_or(StatusCode::UNAUTHORIZED)?;
let api_key = ensure_api_key(&state.pool, user_id).await.map_err(|e| {
tracing::error!(error = %e, %user_id, "ensure_api_key failed");
StatusCode::INTERNAL_SERVER_ERROR
})?;
Ok(Json(ApiKeyResponse { api_key }))
}
pub(super) async fn api_apikey_regenerate(
State(state): State<AppState>,
session: Session,
) -> Result<Json<ApiKeyResponse>, StatusCode> {
let user_id = current_user_id(&session)
.await
.ok_or(StatusCode::UNAUTHORIZED)?;
let api_key = regenerate_api_key(&state.pool, user_id)
.await
.map_err(|e| {
tracing::error!(error = %e, %user_id, "regenerate_api_key failed");
StatusCode::INTERNAL_SERVER_ERROR
})?;
Ok(Json(ApiKeyResponse { api_key }))
}
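// Hedged sketch, not part of the diff: a POST /api/key-setup body matching
// KeySetupRequest. The iteration count mirrors the example in the field docs;
// the hex strings are placeholders, not real values.
#[allow(dead_code)]
fn key_setup_body_sketch() -> serde_json::Value {
    serde_json::json!({
        // 32 random bytes, hex-encoded (64 chars).
        "salt": "<64 hex chars>",
        // AES-256-GCM of "secrets-mcp-key-check" under the derived key, hex.
        "key_check": "<hex ciphertext>",
        "params": { "alg": "pbkdf2-sha256", "iterations": 600000 }
    })
}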


@@ -0,0 +1,73 @@
use axum::{
body::Body,
extract::State,
http::{StatusCode, header},
response::{IntoResponse, Response},
};
use crate::AppState;
pub(super) fn text_asset_response(content: &'static str, content_type: &'static str) -> Response {
Response::builder()
.status(StatusCode::OK)
.header(header::CONTENT_TYPE, content_type)
.header(header::CACHE_CONTROL, "public, max-age=86400")
.body(Body::from(content))
.expect("text asset response")
}
pub(super) async fn robots_txt() -> Response {
text_asset_response(
include_str!("../../static/robots.txt"),
"text/plain; charset=utf-8",
)
}
pub(super) async fn llms_txt() -> Response {
text_asset_response(
include_str!("../../static/llms.txt"),
"text/markdown; charset=utf-8",
)
}
pub(super) async fn ai_txt() -> Response {
llms_txt().await
}
pub(super) async fn i18n_js() -> Response {
text_asset_response(
include_str!("../../templates/i18n.js"),
"application/javascript; charset=utf-8",
)
}
pub(super) async fn favicon_svg() -> Response {
Response::builder()
.status(StatusCode::OK)
.header(header::CONTENT_TYPE, "image/svg+xml")
.header(header::CACHE_CONTROL, "public, max-age=86400")
.body(Body::from(include_str!("../../static/favicon.svg")))
.expect("favicon response")
}
/// RFC 9728 — OAuth 2.0 Protected Resource Metadata.
///
/// Advertises that this server accepts Bearer tokens in the `Authorization`
/// header. We deliberately omit `authorization_servers` because this service
/// issues its own API keys (no external OAuth AS is involved). MCP clients
/// that probe this endpoint will see the resource identifier and stop looking
/// for a delegated OAuth flow.
pub(super) async fn oauth_protected_resource_metadata(
State(state): State<AppState>,
) -> impl IntoResponse {
let body = serde_json::json!({
"resource": state.base_url,
"bearer_methods_supported": ["header"],
"resource_documentation": format!("{}/dashboard", state.base_url),
});
(
StatusCode::OK,
[(header::CONTENT_TYPE, "application/json")],
axum::Json(body),
)
}


@@ -0,0 +1,104 @@
use askama::Template;
use axum::{
extract::{Query, State},
http::StatusCode,
response::Response,
};
use chrono::SecondsFormat;
use serde::Deserialize;
use tower_sessions::Session;
use crate::AppState;
use super::{AUDIT_PAGE_LIMIT, paginate, render_template, require_valid_user};
#[derive(Template)]
#[template(path = "audit.html")]
struct AuditPageTemplate {
user_name: String,
user_email: String,
entries: Vec<AuditEntryView>,
current_page: u32,
total_pages: u32,
total_count: i64,
version: &'static str,
}
struct AuditEntryView {
/// RFC3339 UTC for `<time datetime>`; rendered as browser-local in audit.html.
created_at_iso: String,
action: String,
target: String,
detail: String,
}
#[derive(Deserialize)]
pub(super) struct AuditQuery {
page: Option<u32>,
}
fn format_audit_target(folder: &str, entry_type: &str, name: &str) -> String {
// Auth events (folder="auth") use entry_type/name as provider-scoped target.
if folder == "auth" {
format!("{}/{}", entry_type, name)
} else if !folder.is_empty() && !entry_type.is_empty() {
format!("[{}/{}] {}", folder, entry_type, name)
} else if !folder.is_empty() {
format!("[{}] {}", folder, name)
} else {
name.to_string()
}
}
pub(super) async fn audit_page(
State(state): State<AppState>,
session: Session,
Query(aq): Query<AuditQuery>,
) -> Result<Response, StatusCode> {
use secrets_core::service::audit_log::{count_for_user, list_for_user};
let user = match require_valid_user(&state.pool, &session, "audit_page").await {
Ok(u) => u,
Err(r) => return Ok(r),
};
let user_id = user.id;
let page = aq.page.unwrap_or(1).max(1);
let total_count = count_for_user(&state.pool, user_id).await.map_err(|e| {
tracing::error!(error = %e, "failed to count audit log for user");
StatusCode::INTERNAL_SERVER_ERROR
})?;
let (current_page, total_pages, offset) = paginate(page, total_count, AUDIT_PAGE_LIMIT as u32);
let actual_offset = i64::from(offset);
let rows = list_for_user(&state.pool, user_id, AUDIT_PAGE_LIMIT, actual_offset)
.await
.map_err(|e| {
tracing::error!(error = %e, "failed to load audit log for user");
StatusCode::INTERNAL_SERVER_ERROR
})?;
let entries = rows
.into_iter()
.map(|row| AuditEntryView {
created_at_iso: row.created_at.to_rfc3339_opts(SecondsFormat::Secs, true),
action: row.action,
target: format_audit_target(&row.folder, &row.entry_type, &row.name),
detail: serde_json::to_string_pretty(&row.detail).unwrap_or_else(|_| "{}".to_string()),
})
.collect();
let tmpl = AuditPageTemplate {
user_name: user.name.clone(),
user_email: user.email.clone().unwrap_or_default(),
entries,
current_page,
total_pages,
total_count,
version: env!("CARGO_PKG_VERSION"),
};
render_template(tmpl)
}
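// Hedged sketch, not part of the diff, of the target strings each branch of
// format_audit_target produces:
#[allow(dead_code)]
fn audit_target_sketch() {
    assert_eq!(format_audit_target("auth", "google", "login"), "google/login");
    assert_eq!(
        format_audit_target("prod", "api_key", "stripe"),
        "[prod/api_key] stripe"
    );
    assert_eq!(format_audit_target("prod", "", "stripe"), "[prod] stripe");
    assert_eq!(format_audit_target("", "", "stripe"), "stripe");
}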


@@ -0,0 +1,360 @@
use std::net::SocketAddr;
use askama::Template;
use axum::{
extract::{ConnectInfo, Path, Query, State},
http::{HeaderMap, StatusCode},
response::{IntoResponse, Redirect, Response},
};
use serde::Deserialize;
use tower_sessions::Session;
use secrets_core::audit::log_login;
use secrets_core::service::user::{
OAuthProfile, bind_oauth_account, find_or_create_user, unbind_oauth_account,
};
use crate::AppState;
use crate::oauth::{OAuthConfig, OAuthUserInfo, google_auth_url, random_state};
use super::{
SESSION_KEY_VERSION, SESSION_LOGIN_PROVIDER, SESSION_OAUTH_BIND_MODE, SESSION_OAUTH_STATE,
SESSION_USER_ID, current_user_id, google_cfg, render_template, request_user_agent,
};
#[derive(Template)]
#[template(path = "login.html")]
struct LoginTemplate {
has_google: bool,
base_url: String,
version: &'static str,
}
#[derive(Template)]
#[template(path = "home.html")]
struct HomeTemplate {
is_logged_in: bool,
base_url: String,
version: &'static str,
}
// ── Home page (public) ───────────────────────────────────────────────────────
pub(super) async fn home_page(
State(state): State<AppState>,
session: Session,
) -> Result<Response, StatusCode> {
let is_logged_in = current_user_id(&session).await.is_some();
let tmpl = HomeTemplate {
is_logged_in,
base_url: state.base_url.clone(),
version: env!("CARGO_PKG_VERSION"),
};
render_template(tmpl)
}
// ── Login page ────────────────────────────────────────────────────────────────
pub(super) async fn login_page(
State(state): State<AppState>,
session: Session,
) -> Result<Response, StatusCode> {
if let Some(_uid) = current_user_id(&session).await {
return Ok(Redirect::to("/dashboard").into_response());
}
let tmpl = LoginTemplate {
has_google: state.google_config.is_some(),
base_url: state.base_url.clone(),
version: env!("CARGO_PKG_VERSION"),
};
render_template(tmpl)
}
// ── Google OAuth ──────────────────────────────────────────────────────────────
pub(super) async fn auth_google(
State(state): State<AppState>,
session: Session,
) -> Result<Response, StatusCode> {
let config = google_cfg(&state).ok_or(StatusCode::SERVICE_UNAVAILABLE)?;
let oauth_state = random_state();
session
.insert(SESSION_OAUTH_STATE, &oauth_state)
.await
.map_err(|e| {
tracing::error!(error = %e, "failed to insert oauth_state into session");
StatusCode::INTERNAL_SERVER_ERROR
})?;
let url = google_auth_url(config, &oauth_state);
Ok(Redirect::to(&url).into_response())
}
#[derive(Deserialize)]
pub(super) struct OAuthCallbackQuery {
code: Option<String>,
state: Option<String>,
error: Option<String>,
}
pub(super) async fn auth_google_callback(
State(state): State<AppState>,
connect_info: ConnectInfo<SocketAddr>,
headers: HeaderMap,
session: Session,
Query(params): Query<OAuthCallbackQuery>,
) -> Result<Response, StatusCode> {
let client_ip = Some(crate::client_ip::extract_client_ip_parts(
&headers,
connect_info.0,
));
let user_agent = request_user_agent(&headers);
handle_oauth_callback(
&state,
&session,
params,
"google",
client_ip.as_deref(),
user_agent.as_deref(),
|s, cfg, code| {
Box::pin(crate::oauth::google::exchange_code(
&s.http_client,
cfg,
code,
))
},
)
.await
}
// ── Shared OAuth callback handler ─────────────────────────────────────────────
async fn handle_oauth_callback<F>(
state: &AppState,
session: &Session,
params: OAuthCallbackQuery,
provider: &str,
client_ip: Option<&str>,
user_agent: Option<&str>,
exchange_fn: F,
) -> Result<Response, StatusCode>
where
F: for<'a> Fn(
&'a AppState,
&'a OAuthConfig,
&'a str,
) -> std::pin::Pin<
Box<dyn std::future::Future<Output = anyhow::Result<OAuthUserInfo>> + Send + 'a>,
>,
{
if let Some(err) = params.error {
tracing::warn!(provider, error = %err, "OAuth error");
return Ok(Redirect::to("/login?error=oauth_error").into_response());
}
let Some(code) = params.code else {
tracing::warn!(provider, "OAuth callback missing code");
return Ok(Redirect::to("/login?error=oauth_missing_code").into_response());
};
let Some(returned_state) = params.state.as_deref() else {
tracing::warn!(provider, "OAuth callback missing state");
return Ok(Redirect::to("/login?error=oauth_missing_state").into_response());
};
let expected_state: Option<String> = session.get(SESSION_OAUTH_STATE).await.map_err(|e| {
tracing::error!(provider, error = %e, "failed to read oauth_state from session");
StatusCode::INTERNAL_SERVER_ERROR
})?;
if expected_state.as_deref() != Some(returned_state) {
tracing::warn!(
provider,
expected_present = expected_state.is_some(),
"OAuth state mismatch (empty session often means SameSite=Strict or server restart)"
);
return Ok(Redirect::to("/login?error=oauth_state").into_response());
}
if let Err(e) = session.remove::<String>(SESSION_OAUTH_STATE).await {
tracing::warn!(provider, error = %e, "failed to remove oauth_state from session");
}
let config = match provider {
"google" => state
.google_config
.as_ref()
.ok_or(StatusCode::SERVICE_UNAVAILABLE)?,
_ => return Err(StatusCode::BAD_REQUEST),
};
let user_info = exchange_fn(state, config, code.as_str())
.await
.map_err(|e| {
tracing::error!(provider, error = %e, "failed to exchange OAuth code");
StatusCode::INTERNAL_SERVER_ERROR
})?;
let bind_mode: bool = match session.get::<bool>(SESSION_OAUTH_BIND_MODE).await {
Ok(v) => v.unwrap_or(false),
Err(e) => {
tracing::error!(
provider,
error = %e,
"failed to read oauth_bind_mode from session"
);
return Err(StatusCode::INTERNAL_SERVER_ERROR);
}
};
if bind_mode {
let user_id = current_user_id(session)
.await
.ok_or(StatusCode::UNAUTHORIZED)?;
if let Err(e) = session.remove::<bool>(SESSION_OAUTH_BIND_MODE).await {
tracing::warn!(provider, error = %e, "failed to remove oauth_bind_mode from session after bind");
}
let profile = OAuthProfile {
provider: user_info.provider,
provider_id: user_info.provider_id,
email: user_info.email,
name: user_info.name,
avatar_url: user_info.avatar_url,
};
bind_oauth_account(&state.pool, user_id, profile)
.await
.map_err(|e| {
tracing::error!(error = %e, "failed to bind OAuth account");
StatusCode::INTERNAL_SERVER_ERROR
})?;
return Ok(Redirect::to("/dashboard?bound=1").into_response());
}
let profile = OAuthProfile {
provider: user_info.provider,
provider_id: user_info.provider_id,
email: user_info.email,
name: user_info.name,
avatar_url: user_info.avatar_url,
};
let (user, _is_new) = find_or_create_user(&state.pool, profile)
.await
.map_err(|e| {
tracing::error!(error = %e, "failed to find or create user");
StatusCode::INTERNAL_SERVER_ERROR
})?;
session
.insert(SESSION_USER_ID, user.id.to_string())
.await
.map_err(|e| {
tracing::error!(
error = %e,
user_id = %user.id,
"failed to insert user_id into session after OAuth"
);
StatusCode::INTERNAL_SERVER_ERROR
})?;
session
.insert(SESSION_LOGIN_PROVIDER, &provider)
.await
.map_err(|e| {
tracing::error!(
provider,
error = %e,
"failed to insert login_provider into session after OAuth"
);
StatusCode::INTERNAL_SERVER_ERROR
})?;
if let Err(e) = session.insert(SESSION_KEY_VERSION, user.key_version).await {
tracing::warn!(error = %e, user_id = %user.id, "failed to insert key_version into session after OAuth");
}
log_login(
&state.pool,
"oauth",
provider,
user.id,
client_ip,
user_agent,
)
.await;
Ok(Redirect::to("/dashboard").into_response())
}
// ── Logout ────────────────────────────────────────────────────────────────────
pub(super) async fn auth_logout(session: Session) -> impl IntoResponse {
if let Err(e) = session.flush().await {
tracing::warn!(error = %e, "failed to flush session on logout");
}
Redirect::to("/")
}
// ── Account bind/unbind ───────────────────────────────────────────────────────
pub(super) async fn account_bind_google(
State(state): State<AppState>,
session: Session,
) -> Result<Response, StatusCode> {
let _ = current_user_id(&session)
.await
.ok_or(StatusCode::UNAUTHORIZED)?;
session
.insert(SESSION_OAUTH_BIND_MODE, true)
.await
.map_err(|e| {
tracing::error!(error = %e, "failed to insert oauth_bind_mode into session");
StatusCode::INTERNAL_SERVER_ERROR
})?;
let config = google_cfg(&state).ok_or(StatusCode::SERVICE_UNAVAILABLE)?;
let oauth_state = random_state();
if let Err(e) = session.insert(SESSION_OAUTH_STATE, &oauth_state).await {
tracing::error!(error = %e, "failed to insert oauth_state for account bind flow");
if let Err(rm) = session.remove::<bool>(SESSION_OAUTH_BIND_MODE).await {
tracing::warn!(error = %rm, "failed to roll back oauth_bind_mode after oauth_state insert failure");
}
return Err(StatusCode::INTERNAL_SERVER_ERROR);
}
let url = google_auth_url(config, &oauth_state);
Ok(Redirect::to(&url).into_response())
}
pub(super) async fn account_unbind(
State(state): State<AppState>,
Path(provider): Path<String>,
session: Session,
) -> Result<Response, StatusCode> {
let user_id = current_user_id(&session)
.await
.ok_or(StatusCode::UNAUTHORIZED)?;
let current_login_provider = session
.get::<String>(SESSION_LOGIN_PROVIDER)
.await
.map_err(|e| {
tracing::error!(error = %e, "failed to read login_provider from session");
StatusCode::INTERNAL_SERVER_ERROR
})?;
unbind_oauth_account(
&state.pool,
user_id,
&provider,
current_login_provider.as_deref(),
)
.await
.map_err(|e| {
tracing::warn!(error = %e, "failed to unbind oauth account");
StatusCode::BAD_REQUEST
})?;
Ok(Redirect::to("/dashboard?unbound=1").into_response())
}

File diff suppressed because it is too large

@@ -0,0 +1,313 @@
use askama::Template;
use axum::{
Router,
http::{HeaderMap, StatusCode, header},
response::{Html, IntoResponse, Redirect, Response},
routing::{get, patch, post},
};
use tower_sessions::Session;
use uuid::Uuid;
use crate::AppState;
use crate::oauth::OAuthConfig;
mod account;
mod assets;
mod audit;
mod auth;
mod entries;
// ── Session keys ──────────────────────────────────────────────────────────────
const SESSION_USER_ID: &str = "user_id";
const SESSION_OAUTH_STATE: &str = "oauth_state";
const SESSION_OAUTH_BIND_MODE: &str = "oauth_bind_mode";
const SESSION_LOGIN_PROVIDER: &str = "login_provider";
const SESSION_KEY_VERSION: &str = "key_version";
// ── Page limits ───────────────────────────────────────────────────────────────
/// Cap for HTML list (avoids loading unbounded rows into memory).
const ENTRIES_PAGE_LIMIT: u32 = 50;
const AUDIT_PAGE_LIMIT: i64 = 10;
// ── UI language ───────────────────────────────────────────────────────────────
#[derive(Clone, Copy)]
enum UiLang {
ZhCn,
ZhTw,
En,
}
fn request_ui_lang(headers: &HeaderMap) -> UiLang {
let Some(raw) = headers
.get(header::ACCEPT_LANGUAGE)
.and_then(|v| v.to_str().ok())
else {
return UiLang::ZhCn;
};
let lower = raw.to_ascii_lowercase();
if lower.contains("zh-tw") || lower.contains("zh-hk") || lower.contains("zh-hant") {
UiLang::ZhTw
} else if lower.contains("zh") {
UiLang::ZhCn
} else if lower.contains("en") {
UiLang::En
} else {
UiLang::ZhCn
}
}
fn tr(lang: UiLang, zh_cn: &'static str, zh_tw: &'static str, en: &'static str) -> &'static str {
match lang {
UiLang::ZhCn => zh_cn,
UiLang::ZhTw => zh_tw,
UiLang::En => en,
}
}
// ── App state helpers ─────────────────────────────────────────────────────────
fn google_cfg(state: &AppState) -> Option<&OAuthConfig> {
state.google_config.as_ref()
}
async fn current_user_id(session: &Session) -> Option<Uuid> {
match session.get::<String>(SESSION_USER_ID).await {
Ok(opt) => match opt {
Some(s) => match Uuid::parse_str(&s) {
Ok(id) => Some(id),
Err(e) => {
tracing::warn!(error = %e, user_id_str = %s, "invalid user_id UUID in session");
None
}
},
None => None,
},
Err(e) => {
tracing::warn!(error = %e, "failed to read user_id from session");
None
}
}
}
/// Load and validate the current user from session and DB.
///
/// Returns the user if the session is valid. Flushes the session and returns
/// `Err(Redirect::to("/login"))` when:
/// - the session has no `user_id`,
/// - the user no longer exists in the database, or
/// - the stored `key_version` does not match the DB value (passphrase changed on
/// another device since this session was created).
async fn require_valid_user(
pool: &sqlx::PgPool,
session: &Session,
context: &str,
) -> Result<secrets_core::models::User, Response> {
let Some(user_id) = current_user_id(session).await else {
return Err(Redirect::to("/login").into_response());
};
let user = match secrets_core::service::user::get_user_by_id(pool, user_id).await {
Err(e) => {
tracing::error!(error = %e, %user_id, context, "failed to load user");
return Err(StatusCode::INTERNAL_SERVER_ERROR.into_response());
}
Ok(None) => {
if let Err(e) = session.flush().await {
tracing::warn!(error = %e, "failed to flush stale session");
}
return Err(Redirect::to("/login").into_response());
}
Ok(Some(u)) => u,
};
let session_kv: Option<i64> = match session.get::<i64>(SESSION_KEY_VERSION).await {
Ok(v) => v,
Err(e) => {
tracing::warn!(error = %e, "failed to read key_version from session; treating as missing");
None
}
};
if let Some(kv) = session_kv
&& kv != user.key_version
{
tracing::info!(%user_id, session_kv = kv, db_kv = user.key_version, "key_version mismatch; invalidating session");
if let Err(e) = session.flush().await {
tracing::warn!(error = %e, "failed to flush outdated session");
}
return Err(Redirect::to("/login").into_response());
}
Ok(user)
}
fn request_user_agent(headers: &HeaderMap) -> Option<String> {
headers
.get(header::USER_AGENT)
.and_then(|value| value.to_str().ok())
.map(str::trim)
.filter(|value| !value.is_empty())
.map(ToOwned::to_owned)
}
fn paginate(page: u32, total_count: i64, page_size: u32) -> (u32, u32, u32) {
let page_size = page_size.max(1);
let safe_total_count = u32::try_from(total_count.max(0)).unwrap_or(u32::MAX);
let total_pages = safe_total_count.div_ceil(page_size).max(1);
let current_page = page.max(1).min(total_pages);
let offset = (current_page - 1).saturating_mul(page_size);
(current_page, total_pages, offset)
}
fn render_template<T: Template>(tmpl: T) -> Result<Response, StatusCode> {
let html = tmpl.render().map_err(|e| {
tracing::error!(error = %e, "template render error");
StatusCode::INTERNAL_SERVER_ERROR
})?;
Ok(Html(html).into_response())
}
// ── Routes ────────────────────────────────────────────────────────────────────
pub fn web_router() -> Router<AppState> {
Router::new()
.route("/robots.txt", get(assets::robots_txt))
.route("/llms.txt", get(assets::llms_txt))
.route("/ai.txt", get(assets::ai_txt))
.route("/static/i18n.js", get(assets::i18n_js))
.route("/favicon.svg", get(assets::favicon_svg))
.route(
"/favicon.ico",
get(|| async { Redirect::permanent("/favicon.svg") }),
)
.route(
"/.well-known/oauth-protected-resource",
get(assets::oauth_protected_resource_metadata),
)
.route("/", get(auth::home_page))
.route("/login", get(auth::login_page))
.route("/auth/google", get(auth::auth_google))
.route("/auth/google/callback", get(auth::auth_google_callback))
.route("/auth/logout", post(auth::auth_logout))
.route("/dashboard", get(account::dashboard))
.route("/entries", get(entries::entries_page))
.route("/trash", get(entries::trash_page))
.route("/audit", get(audit::audit_page))
.route("/account/bind/google", get(auth::account_bind_google))
.route("/account/unbind/{provider}", post(auth::account_unbind))
.route("/api/key-salt", get(account::api_key_salt))
.route("/api/key-setup", post(account::api_key_setup))
.route("/api/key-change", post(account::api_key_change))
.route("/api/apikey", get(account::api_apikey_get))
.route("/api/entries/options", get(entries::api_entry_options))
.route(
"/api/apikey/regenerate",
post(account::api_apikey_regenerate),
)
.route(
"/api/entries/{id}",
patch(entries::api_entry_patch).delete(entries::api_entry_delete),
)
.route("/api/trash/{id}/restore", post(entries::api_trash_restore))
.route(
"/api/trash/{id}",
axum::routing::delete(entries::api_trash_purge),
)
.route(
"/api/entries/{entry_id}/secrets/{secret_id}",
axum::routing::delete(entries::api_entry_secret_unlink),
)
.route(
"/api/entries/{id}/secrets/decrypt",
get(entries::api_entry_secrets_decrypt),
)
.route("/api/secrets/{secret_id}", patch(entries::api_secret_patch))
.route(
"/api/secrets/check-name",
get(entries::api_secret_check_name),
)
}
#[cfg(test)]
mod tests {
use std::net::SocketAddr;
use super::*;
#[test]
fn client_ip_ignores_forwarded_headers_without_trusted_proxy() {
let mut headers = HeaderMap::new();
headers.insert("x-forwarded-for", "203.0.113.10".parse().unwrap());
let ip = crate::client_ip::extract_client_ip_parts(
&headers,
SocketAddr::from(([127, 0, 0, 1], 9315)),
);
assert_eq!(ip, "127.0.0.1");
}
#[test]
fn client_ip_ignores_forwarded_chain_when_trust_proxy_unset() {
// This test relies on TRUST_PROXY being unset (default); skip if set in env
if std::env::var("TRUST_PROXY").is_ok() {
return;
}
let mut headers = HeaderMap::new();
headers.insert("x-forwarded-for", "203.0.113.10, 10.0.0.1".parse().unwrap());
// Direct connection IP is used when TRUST_PROXY is not set
let ip = crate::client_ip::extract_client_ip_parts(
&headers,
SocketAddr::from(([127, 0, 0, 1], 9315)),
);
assert_eq!(ip, "127.0.0.1");
}
#[test]
fn request_ui_lang_prefers_zh_cn_over_en_fallback() {
let mut headers = HeaderMap::new();
headers.insert(header::ACCEPT_LANGUAGE, "zh-CN, en;q=0.5".parse().unwrap());
assert!(matches!(request_ui_lang(&headers), UiLang::ZhCn));
}
#[test]
fn request_ui_lang_detects_traditional_chinese_variants() {
let mut headers = HeaderMap::new();
headers.insert(
header::ACCEPT_LANGUAGE,
"zh-Hant, en;q=0.5".parse().unwrap(),
);
assert!(matches!(request_ui_lang(&headers), UiLang::ZhTw));
}
#[test]
fn paginate_clamps_page_before_computing_offset() {
let (current_page, total_pages, offset) = paginate(100, 12, 10);
assert_eq!(current_page, 2);
assert_eq!(total_pages, 2);
assert_eq!(offset, 10);
}
#[test]
fn paginate_handles_large_page_without_overflow() {
let (current_page, total_pages, offset) = paginate(u32::MAX, 1, ENTRIES_PAGE_LIMIT);
assert_eq!(current_page, 1);
assert_eq!(total_pages, 1);
assert_eq!(offset, 0);
}
#[test]
fn paginate_saturates_large_total_count() {
let (_, total_pages, _) = paginate(1, i64::MAX, ENTRIES_PAGE_LIMIT);
assert_eq!(total_pages, u32::MAX.div_ceil(ENTRIES_PAGE_LIMIT));
}
}


@@ -13,60 +13,87 @@
--border: #30363d; --text: #e6edf3; --text-muted: #8b949e;
--accent: #58a6ff; --accent-hover: #79b8ff;
}
body { background: var(--bg); color: var(--text); font-family: 'Inter', sans-serif; min-height: 100vh; }
body { background: #0d1117; color: #c9d1d9; font-family: 'Inter', sans-serif; min-height: 100vh; }
.layout { display: flex; min-height: 100vh; }
.sidebar {
width: 220px; flex-shrink: 0; background: var(--surface); border-right: 1px solid var(--border);
padding: 24px 16px; display: flex; flex-direction: column; gap: 20px;
width: 200px; flex-shrink: 0; background: #0b1220; border-right: 1px solid rgba(240,246,252,0.08);
padding: 20px 12px; display: flex; flex-direction: column; gap: 20px;
}
.sidebar-logo { font-family: 'JetBrains Mono', monospace; font-size: 16px; font-weight: 600;
color: var(--text); text-decoration: none; padding: 0 10px; }
.sidebar-logo span { color: var(--accent); }
.sidebar-menu { display: flex; flex-direction: column; gap: 6px; }
.sidebar-logo { font-family: 'Inter', sans-serif; font-size: 16px; font-weight: 700;
color: #fff; text-decoration: none; padding: 0 10px; }
.sidebar-menu { display: grid; gap: 6px; }
.sidebar-link {
padding: 10px 12px; border-radius: 8px; color: var(--text-muted); text-decoration: none;
border: 1px solid transparent; font-size: 13px; font-weight: 500;
padding: 10px 12px; border-radius: 10px; color: #8b949e; text-decoration: none;
font-size: 13px; font-weight: 500;
}
.sidebar-link:hover { background: var(--surface2); color: var(--text); }
.sidebar-link:hover { background: rgba(56,139,253,0.14); color: #fff; }
.sidebar-link.active {
background: rgba(88,166,255,0.12); color: var(--text); border-color: rgba(88,166,255,0.35);
background: rgba(56,139,253,0.14); color: #fff;
}
.content-shell { flex: 1; min-width: 0; display: flex; flex-direction: column; }
.topbar {
background: var(--surface); border-bottom: 1px solid var(--border); padding: 0 24px;
display: flex; align-items: center; gap: 12px; min-height: 52px;
background: transparent; border-bottom: none; padding: 0 24px;
display: flex; align-items: center; gap: 12px; min-height: 44px;
}
.topbar-spacer { flex: 1; }
.nav-user { font-size: 13px; color: var(--text-muted); }
.lang-bar { display: flex; gap: 2px; background: var(--surface2); border-radius: 6px; padding: 2px; }
.lang-btn { padding: 3px 9px; border: none; background: none; color: var(--text-muted);
font-size: 12px; cursor: pointer; border-radius: 4px; }
.lang-btn.active { background: var(--border); color: var(--text); }
.nav-user { font-size: 14px; color: #8b949e; }
.lang-bar { display: flex; gap: 2px; background: rgba(240,246,252,0.06); border-radius: 8px; padding: 2px; }
.lang-btn { padding: 4px 10px; border: none; background: none; color: #8b949e;
font-size: 12px; cursor: pointer; border-radius: 6px; }
.lang-btn.active { background: rgba(240,246,252,0.1); color: #fff; }
.btn-sign-out {
padding: 5px 12px; border-radius: 6px; border: 1px solid var(--border);
background: none; color: var(--text); font-size: 12px; text-decoration: none; cursor: pointer;
padding: 6px 14px; border-radius: 10px; border: 1px solid rgba(240,246,252,0.12);
background: #161b22; color: #c9d1d9; font-size: 13px; text-decoration: none; cursor: pointer;
}
.btn-sign-out:hover { background: var(--surface2); }
.main { padding: 32px 24px 40px; flex: 1; }
.card { background: var(--surface); border: 1px solid var(--border); border-radius: 12px;
padding: 24px; width: 100%; max-width: 1180px; margin: 0 auto; }
.card-title { font-size: 20px; font-weight: 600; margin-bottom: 8px; }
.card-subtitle { color: var(--text-muted); font-size: 13px; margin-bottom: 20px; }
.empty { color: var(--text-muted); font-size: 14px; padding: 20px 0; }
.btn-sign-out:hover { border-color: rgba(56,139,253,0.45); color: #fff; }
.main { padding: 16px 16px 24px; flex: 1; }
.card { background: #111827; border: 1px solid rgba(240,246,252,0.08); border-radius: 18px;
padding: 20px; width: 100%; }
.card-title-row {
display: flex; align-items: center; flex-wrap: wrap; gap: 8px;
margin-bottom: 18px;
}
.card-title { font-size: 22px; font-weight: 700; margin: 0; color: #fff; }
.card-title-count {
display: inline-flex;
align-items: center;
min-height: 24px;
padding: 0 8px;
border: 1px solid rgba(240,246,252,0.08);
border-radius: 999px;
background: #0d1117;
color: #8b949e;
font-size: 12px;
font-weight: 600;
line-height: 1;
font-family: 'JetBrains Mono', monospace;
}
.empty { color: #8b949e; font-size: 14px; padding: 20px 0; }
table { width: 100%; border-collapse: collapse; }
th, td { text-align: left; vertical-align: top; padding: 12px 10px; border-top: 1px solid var(--border); }
th { color: var(--text-muted); font-size: 12px; font-weight: 600; }
td { font-size: 13px; }
th, td { text-align: left; vertical-align: top; padding: 14px 12px; border-top: 1px solid rgba(240,246,252,0.08); }
th { color: #8b949e; font-size: 12px; font-weight: 600; }
td { font-size: 13px; color: #c9d1d9; }
.mono { font-family: 'JetBrains Mono', monospace; }
.detail {
background: var(--bg); border: 1px solid var(--border); border-radius: 8px;
padding: 10px; white-space: pre-wrap; word-break: break-word; font-size: 12px;
max-width: 460px;
.col-detail { min-width: 260px; max-width: 460px; }
.detail-scroll {
height: calc(1.5em * 3 + 20px);
min-height: calc(1.5em * 3 + 20px);
overflow: auto;
resize: vertical;
white-space: pre-wrap;
word-break: break-word;
padding: 10px;
background: #0d1117;
border: 1px solid rgba(240,246,252,0.08);
border-radius: 10px;
font-size: 12px;
font-family: 'JetBrains Mono', monospace;
margin: 0;
}
@media (max-width: 900px) {
.layout { flex-direction: column; }
.sidebar {
width: 100%; border-right: none; border-bottom: 1px solid var(--border);
width: 100%; border-right: none; border-bottom: 1px solid rgba(240,246,252,0.08);
padding: 16px; gap: 14px;
}
.sidebar-menu { flex-direction: row; }
@@ -76,24 +103,43 @@
.topbar { padding: 12px 16px; flex-wrap: wrap; }
table, thead, tbody, th, td, tr { display: block; }
thead { display: none; }
tr { border-top: 1px solid var(--border); padding: 12px 0; }
tr { border-top: 1px solid rgba(240,246,252,0.08); padding: 12px 0; }
td { border-top: none; padding: 6px 0; }
td::before {
display: block; color: var(--text-muted); font-size: 11px;
display: block; color: #8b949e; font-size: 11px;
margin-bottom: 4px; text-transform: uppercase;
content: attr(data-label);
}
.detail { max-width: none; }
}
.pagination {
display: flex; align-items: center; gap: 12px; margin-top: 18px;
justify-content: center; padding: 12px 0;
}
.page-btn {
padding: 8px 12px; border-radius: 10px; border: 1px solid rgba(240,246,252,0.12);
background: #161b22; color: #c9d1d9; text-decoration: none;
font-size: 13px; cursor: pointer;
}
.page-btn:hover { border-color: rgba(56,139,253,0.45); color: #fff; }
.page-btn-disabled {
padding: 8px 12px; border-radius: 10px; border: 1px solid rgba(240,246,252,0.12);
background: #161b22; color: #6e7681; font-size: 13px;
opacity: 0.5; cursor: not-allowed;
}
.page-info {
color: #8b949e; font-size: 13px; font-family: 'JetBrains Mono', monospace;
}
</style>
</head>
<body>
<div class="layout">
<aside class="sidebar">
<a href="/dashboard" class="sidebar-logo"><span>secrets</span></a>
<a href="/dashboard" class="sidebar-logo">secrets</a>
<nav class="sidebar-menu">
<a href="/dashboard" class="sidebar-link" data-i18n="navMcp">MCP</a>
<a href="/entries" class="sidebar-link" data-i18n="navEntries">条目</a>
<a href="/trash" class="sidebar-link" data-i18n="navTrash">回收站</a>
<a href="/audit" class="sidebar-link active" data-i18n="navAudit">审计</a>
</nav>
</aside>
@@ -114,8 +160,10 @@
<main class="main">
<section class="card">
<div class="card-title" data-i18n="auditTitle">我的审计</div>
<div class="card-subtitle" data-i18n="auditSubtitle">展示最近 100 条与当前用户相关的新审计记录。时间为浏览器本地时区。</div>
<div class="card-title-row">
<div class="card-title" data-i18n="auditTitle">我的审计</div>
<span class="card-title-count">{{ total_count }}</span>
</div>
{% if entries.is_empty() %}
<div class="empty" data-i18n="emptyAudit">暂无审计记录。</div>
@@ -135,23 +183,39 @@
<td class="col-time mono" data-label="时间"><time class="audit-local-time" datetime="{{ entry.created_at_iso }}">{{ entry.created_at_iso }}</time></td>
<td class="col-action mono" data-label="动作">{{ entry.action }}</td>
<td class="col-target mono" data-label="目标">{{ entry.target }}</td>
<td class="col-detail" data-label="详情"><pre class="detail">{{ entry.detail }}</pre></td>
<td class="col-detail" data-label="详情">{% if !entry.detail.is_empty() %}<pre class="detail-scroll">{{ entry.detail }}</pre>{% endif %}</td>
</tr>
{% endfor %}
</tbody>
</table>
{% if total_count > 0 %}
<div class="pagination">
{% if current_page > 1 %}
<a href="?page={{ current_page - 1 }}" class="page-btn" data-i18n="prevPage">上一页</a>
{% else %}
<span class="page-btn page-btn-disabled" data-i18n="prevPage">上一页</span>
{% endif %}
<span class="page-info">{{ current_page }} / {{ total_pages }}</span>
{% if current_page < total_pages %}
<a href="?page={{ current_page + 1 }}" class="page-btn" data-i18n="nextPage">下一页</a>
{% else %}
<span class="page-btn page-btn-disabled" data-i18n="nextPage">下一页</span>
{% endif %}
</div>
{% endif %}
{% endif %}
</section>
</main>
</div>
</div>
<script src="/static/i18n.js"></script>
<script src="/static/i18n.js?v={{ version }}"></script>
<script>
(function () {
I18N_PAGE = {
'zh-CN': { pageTitle: 'Secrets — 审计', auditTitle: '我的审计', auditSubtitle: '展示最近 100 条与当前用户相关的新审计记录。时间为浏览器本地时区。', emptyAudit: '暂无审计记录。', colTime: '时间', colAction: '动作', colTarget: '目标', colDetail: '详情' },
'zh-TW': { pageTitle: 'Secrets — 審計', auditTitle: '我的審計', auditSubtitle: '顯示最近 100 筆與目前使用者相關的新審計記錄。時間為瀏覽器本地時區。', emptyAudit: '暫無審計記錄。', colTime: '時間', colAction: '動作', colTarget: '目標', colDetail: '詳情' },
en: { pageTitle: 'Secrets — Audit', auditTitle: 'My audit', auditSubtitle: 'Shows the latest 100 audit records related to the current user. Time is in browser local timezone.', emptyAudit: 'No audit records.', colTime: 'Time', colAction: 'Action', colTarget: 'Target', colDetail: 'Detail' }
'zh-CN': { pageTitle: 'Secrets — 审计', auditTitle: '我的审计', emptyAudit: '暂无审计记录。', colTime: '时间', colAction: '动作', colTarget: '目标', colDetail: '详情', prevPage: '上一页', nextPage: '下一页' },
'zh-TW': { pageTitle: 'Secrets — 審計', auditTitle: '我的審計', emptyAudit: '暫無審計記錄。', colTime: '時間', colAction: '動作', colTarget: '目標', colDetail: '詳情', prevPage: '上一頁', nextPage: '下一頁' },
en: { pageTitle: 'Secrets — Audit', auditTitle: 'My audit', emptyAudit: 'No audit records.', colTime: 'Time', colAction: 'Action', colTarget: 'Target', colDetail: 'Detail', prevPage: 'Previous', nextPage: 'Next' }
};
window.applyPageLang = function () {
@@ -175,6 +239,7 @@
el.title = raw + ' (UTC)';
}
});
applyLang();
})();
</script>

View File

@@ -18,110 +18,108 @@
.layout { display: flex; min-height: 100vh; }
.sidebar {
width: 220px; flex-shrink: 0; background: var(--surface); border-right: 1px solid var(--border);
padding: 24px 16px; display: flex; flex-direction: column; gap: 20px;
width: 200px; flex-shrink: 0; background: #0b1220; border-right: 1px solid rgba(240,246,252,0.08);
padding: 20px 12px; display: flex; flex-direction: column; gap: 20px;
}
.sidebar-logo { font-family: 'JetBrains Mono', monospace; font-size: 16px; font-weight: 600;
color: var(--text); text-decoration: none; padding: 0 10px; }
.sidebar-logo span { color: var(--accent); }
.sidebar-menu { display: flex; flex-direction: column; gap: 6px; }
.sidebar-logo { font-family: 'Inter', sans-serif; font-size: 16px; font-weight: 700;
color: #fff; text-decoration: none; padding: 0 10px; }
.sidebar-menu { display: grid; gap: 6px; }
.sidebar-link {
padding: 10px 12px; border-radius: 8px; color: var(--text-muted); text-decoration: none;
border: 1px solid transparent; font-size: 13px; font-weight: 500;
}
.sidebar-link:hover { background: var(--surface2); color: var(--text); }
.sidebar-link.active {
background: rgba(88,166,255,0.12); color: var(--text); border-color: rgba(88,166,255,0.35);
padding: 10px 12px; border-radius: 10px; color: #8b949e; text-decoration: none;
font-size: 13px; font-weight: 500;
}
.sidebar-link:hover { background: rgba(56,139,253,0.14); color: #fff; }
.sidebar-link.active { background: rgba(56,139,253,0.14); color: #fff; }
.content-shell { flex: 1; min-width: 0; display: flex; flex-direction: column; }
.topbar {
background: var(--surface); border-bottom: 1px solid var(--border); padding: 0 24px;
display: flex; align-items: center; gap: 12px; min-height: 52px;
background: transparent; border-bottom: none; padding: 0 24px;
display: flex; align-items: center; gap: 12px; min-height: 44px;
}
.topbar-spacer { flex: 1; }
.nav-user { font-size: 13px; color: var(--text-muted); }
.lang-bar { display: flex; gap: 2px; background: var(--surface2); border-radius: 6px; padding: 2px; }
.lang-btn { padding: 3px 9px; border: none; background: none; color: var(--text-muted);
font-size: 12px; cursor: pointer; border-radius: 4px; }
.lang-btn.active { background: var(--border); color: var(--text); }
.btn-sign-out { padding: 5px 12px; border-radius: 6px; border: 1px solid var(--border);
background: none; color: var(--text); font-size: 12px; cursor: pointer; }
.btn-sign-out:hover { background: var(--surface2); }
.nav-user { font-size: 14px; color: #8b949e; }
.lang-bar { display: flex; gap: 2px; background: rgba(240,246,252,0.06); border-radius: 8px; padding: 2px; }
.lang-btn { padding: 4px 10px; border: none; background: none; color: #8b949e;
font-size: 12px; cursor: pointer; border-radius: 6px; }
.lang-btn.active { background: rgba(240,246,252,0.1); color: #fff; }
.btn-sign-out {
padding: 6px 14px; border-radius: 10px; border: 1px solid rgba(240,246,252,0.12);
background: #161b22; color: #c9d1d9; font-size: 13px; text-decoration: none; cursor: pointer;
}
.btn-sign-out:hover { border-color: rgba(56,139,253,0.45); color: #fff; }
/* Main content column */
.main { display: flex; flex-direction: column; align-items: center;
padding: 24px 20px 8px; min-height: 0; }
.main { padding: 16px 16px 0; flex: 1; min-height: 0; display: flex; flex-direction: column; }
.app-footer {
margin-top: auto;
text-align: center;
padding: 4px 20px 12px;
font-size: 12px;
color: #9da7b3;
padding: 12px 0;
font-size: 11px;
color: var(--text-muted);
font-family: 'JetBrains Mono', monospace;
margin-top: auto;
}
.card { background: var(--surface); border: 1px solid var(--border); border-radius: 12px;
padding: 24px; width: 100%; max-width: 980px; }
.card-title { font-size: 18px; font-weight: 600; margin-bottom: 24px; }
.card { background: #111827; border: 1px solid rgba(240,246,252,0.08); border-radius: 18px;
padding: 20px; width: 100%; }
.card-title { font-size: 22px; font-weight: 700; margin-bottom: 24px; color: #fff; }
/* Form */
.field { margin-bottom: 12px; }
.field label { display: block; font-size: 12px; color: var(--text-muted); margin-bottom: 5px; }
.field input { width: 100%; background: var(--bg); border: 1px solid var(--border);
color: var(--text); padding: 9px 12px; border-radius: 6px;
.field label { display: block; font-size: 12px; color: #8b949e; margin-bottom: 5px; }
.field input { width: 100%; background: #0d1117; border: 1px solid rgba(240,246,252,0.08);
color: #c9d1d9; padding: 9px 12px; border-radius: 10px;
font-size: 13px; outline: none; }
.field input:focus { border-color: var(--accent); }
.field input:focus { border-color: rgba(56,139,253,0.5); }
.pw-field { position: relative; }
.pw-field > input { padding-right: 42px; }
.pw-toggle {
position: absolute; right: 6px; top: 50%; transform: translateY(-50%);
display: flex; align-items: center; justify-content: center;
width: 32px; height: 32px; border: none; border-radius: 6px;
background: transparent; color: var(--text-muted); cursor: pointer;
width: 32px; height: 32px; border: none; border-radius: 8px;
background: transparent; color: #8b949e; cursor: pointer;
}
.pw-toggle:hover { color: var(--text); background: var(--surface2); }
.pw-toggle:focus-visible { outline: 2px solid var(--accent); outline-offset: 2px; }
.pw-toggle:hover { color: #c9d1d9; background: rgba(240,246,252,0.06); }
.pw-toggle:focus-visible { outline: 2px solid rgba(56,139,253,0.5); outline-offset: 2px; }
.pw-icon svg { display: block; }
.pw-icon.hidden { display: none; }
.error-msg { color: var(--danger); font-size: 12px; margin-top: 6px; display: none; }
.error-msg { color: #f85149; font-size: 12px; margin-top: 6px; display: none; }
/* Buttons */
.btn-primary { display: inline-flex; align-items: center; gap: 6px; width: 100%;
justify-content: center; padding: 10px 20px; border-radius: 7px;
border: none; background: var(--accent); color: #0d1117;
justify-content: center; padding: 10px 20px; border-radius: 10px;
border: none; background: #388bfd; color: #fff;
font-size: 14px; font-weight: 600; cursor: pointer; transition: background 0.15s; }
.btn-primary:hover { background: var(--accent-hover); }
.btn-primary:hover { background: #58a6ff; }
.btn-primary:disabled { opacity: 0.5; cursor: not-allowed; }
.btn-sm { display: inline-flex; align-items: center; gap: 4px; padding: 5px 12px;
border-radius: 5px; border: 1px solid var(--border); background: none;
color: var(--text-muted); font-size: 12px; cursor: pointer; }
.btn-sm:hover { color: var(--text); border-color: var(--text-muted); }
.btn-sm { display: inline-flex; align-items: center; gap: 4px; padding: 8px 12px;
border-radius: 10px; border: 1px solid rgba(240,246,252,0.12); background: #161b22;
color: #8b949e; font-size: 13px; cursor: pointer; font-family: inherit; }
.btn-sm:hover { border-color: rgba(56,139,253,0.45); color: #fff; }
.btn-copy { display: flex; align-items: center; gap: 8px; width: 100%; justify-content: center;
padding: 11px 20px; border-radius: 7px; border: 1px solid var(--success);
background: rgba(63,185,80,0.1); color: var(--success);
font-size: 14px; font-weight: 600; cursor: pointer; transition: all 0.15s; }
padding: 11px 20px; border-radius: 10px; border: 1px solid #3fb950;
background: rgba(63,185,80,0.1); color: #3fb950;
font-size: 14px; font-weight: 600; cursor: pointer; transition: all 0.15s; font-family: inherit; }
.btn-copy:hover { background: rgba(63,185,80,0.2); }
.btn-copy.copied { background: var(--success); color: #0d1117; border-color: var(--success); }
.btn-copy.copied { background: #3fb950; color: #0d1117; border-color: #3fb950; }
/* Config format switcher */
.config-tabs { display: grid; grid-template-columns: repeat(2, minmax(0, 1fr)); gap: 10px; margin-bottom: 12px; }
.config-tab { padding: 12px 14px; border-radius: 10px; border: 1px solid var(--border);
background: var(--surface2); color: var(--text-muted); cursor: pointer;
.config-tab { padding: 12px 14px; border-radius: 10px; border: 1px solid rgba(240,246,252,0.08);
background: #161b22; color: #8b949e; cursor: pointer;
font-family: inherit; text-align: left; transition: border-color 0.15s, background 0.15s, transform 0.15s; }
.config-tab:hover { color: var(--text); border-color: var(--accent); transform: translateY(-1px); }
.config-tab.active { background: rgba(88,166,255,0.1); color: var(--text); border-color: var(--accent); }
.config-tab:hover { color: #c9d1d9; border-color: rgba(56,139,253,0.45); transform: translateY(-1px); }
.config-tab.active { background: rgba(56,139,253,0.14); color: #fff; border-color: rgba(56,139,253,0.3); }
.config-tab-title { display: block; font-size: 13px; font-weight: 600; color: inherit; }
/* Config box */
.config-wrap { position: relative; margin-bottom: 14px; }
.config-box { background: var(--bg); border: 1px solid var(--border); border-radius: 8px;
.config-box { background: #0d1117; border: 1px solid rgba(240,246,252,0.08); border-radius: 10px;
padding: 16px; font-family: 'JetBrains Mono', monospace; font-size: 11px;
line-height: 1.7; color: var(--text); overflow-x: auto; white-space: pre; }
.config-box.locked { color: var(--text-muted); filter: blur(3px); user-select: none;
line-height: 1.7; color: #c9d1d9; overflow-x: auto; white-space: pre; }
.config-box.locked { color: #8b949e; filter: blur(3px); user-select: none;
pointer-events: none; }
.config-key { color: #79c0ff; }
.config-str { color: #a5d6ff; }
.config-val { color: var(--accent); }
.config-val { color: #58a6ff; }
/* Divider */
.divider { border: none; border-top: 1px solid var(--border); margin: 20px 0; }
.divider { border: none; border-top: 1px solid rgba(240,246,252,0.08); margin: 20px 0; }
/* Actions row */
.actions-row { display: flex; gap: 8px; flex-wrap: wrap; justify-content: center; }
@@ -135,34 +133,29 @@
.modal-bd { display: none; position: fixed; inset: 0; background: rgba(0,0,0,0.75);
z-index: 100; align-items: center; justify-content: center; }
.modal-bd.open { display: flex; }
.modal { background: var(--surface); border: 1px solid var(--border); border-radius: 12px;
.modal { background: #111827; border: 1px solid rgba(240,246,252,0.08); border-radius: 18px;
padding: 28px; width: 100%; max-width: 420px; }
.modal h3 { font-size: 16px; font-weight: 600; margin-bottom: 16px; }
.modal h3 { font-size: 18px; font-weight: 700; margin-bottom: 16px; color: #fff; }
.modal-actions { display: flex; gap: 8px; margin-top: 16px; }
.btn-modal-ok { flex: 1; padding: 8px; border-radius: 6px; border: none;
background: var(--accent); color: #0d1117; font-size: 13px;
font-weight: 600; cursor: pointer; }
.btn-modal-ok:hover { background: var(--accent-hover); }
.btn-modal-cancel { padding: 8px 16px; border-radius: 6px; border: 1px solid var(--border);
background: none; color: var(--text); font-size: 13px; cursor: pointer; }
.btn-modal-cancel:hover { background: var(--surface2); }
.btn-modal-ok { flex: 1; padding: 8px; border-radius: 10px; border: none;
background: #388bfd; color: #fff; font-size: 13px;
font-weight: 600; cursor: pointer; font-family: inherit; }
.btn-modal-ok:hover { background: #58a6ff; }
.btn-modal-cancel { padding: 8px 16px; border-radius: 10px; border: 1px solid rgba(240,246,252,0.12);
background: #161b22; color: #c9d1d9; font-size: 13px; cursor: pointer; font-family: inherit; }
.btn-modal-cancel:hover { border-color: rgba(56,139,253,0.45); color: #fff; }
@media (max-width: 900px) {
.layout { flex-direction: column; }
.sidebar {
width: 100%; border-right: none; border-bottom: 1px solid var(--border);
width: 100%; border-right: none; border-bottom: 1px solid rgba(240,246,252,0.08);
padding: 16px; gap: 14px;
}
.sidebar-menu { flex-direction: row; }
.sidebar-link { flex: 1; text-align: center; }
}
@media (max-width: 720px) {
.config-tabs { grid-template-columns: 1fr; }
.sidebar-menu { flex-direction: row; flex-wrap: wrap; }
.sidebar-link { flex: 1; text-align: center; min-width: 72px; }
.main { padding: 20px 12px 28px; }
.card { padding: 16px; }
.topbar { padding: 12px 16px; flex-wrap: wrap; }
.main { padding: 16px 12px 6px; }
.app-footer { padding: 4px 12px 10px; }
.card { padding: 18px; }
}
</style>
@@ -171,11 +164,12 @@
<div class="layout">
<aside class="sidebar">
<a href="/dashboard" class="sidebar-logo"><span>secrets</span></a>
<a href="/dashboard" class="sidebar-logo">secrets</a>
<nav class="sidebar-menu">
<a href="/dashboard" class="sidebar-link active">MCP</a>
<a href="/entries" class="sidebar-link">条目</a>
<a href="/audit" class="sidebar-link">审计</a>
<a href="/dashboard" class="sidebar-link active" data-i18n="navMcp">MCP</a>
<a href="/entries" class="sidebar-link" data-i18n="navEntries">条目</a>
<a href="/trash" class="sidebar-link" data-i18n="navTrash">回收站</a>
<a href="/audit" class="sidebar-link" data-i18n="navAudit">审计</a>
</nav>
</aside>
@@ -293,18 +287,27 @@
<button class="btn-sm" onclick="confirmRegenerate()" data-i18n="btnRegen">重置 API Key</button>
</div>
</div>
</div>
</div>
<footer class="app-footer">{{ version }}</footer>
</div>
</div>
</div>
</div><!-- /main -->
</div><!-- /content-shell -->
</div><!-- /layout -->
<!-- ── Change passphrase modal ──────────────────────────────────────────────── -->
<div class="modal-bd" id="change-modal">
<div class="modal">
<h3 data-i18n="changeTitle">更换密码</h3>
<div class="field">
<label data-i18n="labelCurrent">当前密码</label>
<div class="pw-field">
<input type="password" id="change-pass-old" data-i18n-ph="phCurrent" autocomplete="current-password">
<button type="button" class="pw-toggle" data-target="change-pass-old" aria-pressed="false"
onclick="togglePwVisibility(this)" aria-label="">
<span class="pw-icon pw-icon-show" aria-hidden="true"><svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"><path d="M1 12s4-8 11-8 11 8 11 8-4 8-11 8-11-8-11-8z"/><circle cx="12" cy="12" r="3"/></svg></span>
<span class="pw-icon pw-icon-hide hidden" aria-hidden="true"><svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"><path d="M17.94 17.94A10.07 10.07 0 0 1 12 20c-7 0-11-8-11-8a18.45 18.45 0 0 1 5.06-5.94M9.9 4.24A9.12 9.12 0 0 1 12 4c7 0 11 8 11 8a18.5 18.5 0 0 1-2.16 3.19m-6.72-1.07a3 3 0 1 1-4.24-4.24"/><line x1="1" y1="1" x2="23" y2="23"/></svg></span>
</button>
</div>
</div>
<div class="field">
<label data-i18n="labelNew">新密码</label>
<div class="pw-field">
@@ -340,13 +343,16 @@
const T = {
'zh-CN': {
navMcp: 'MCP', navEntries: '条目', navTrash: '回收站', navAudit: '审计',
signOut: '退出',
lockedTitle: '获取 MCP 配置',
labelPassphrase: '加密密码',
labelConfirm: '确认密码',
labelNew: '新密码',
labelCurrent: '当前密码',
phPassphrase: '输入密码…',
phConfirm: '再次输入…',
phCurrent: '输入当前密码…',
btnSetup: '设置并获取配置',
btnUnlock: '解锁并获取配置',
setupNote: '密码不会上传服务器。遗忘后数据将无法恢复。',
@@ -354,6 +360,7 @@ const T = {
errShort: '密码至少需要 8 个字符。',
errMismatch: '两次输入不一致。',
errWrong: '密码错误,请重试。',
errWrongOld: '当前密码错误,请重试。',
unlockedTitle: 'MCP 配置',
tabMcp: 'Cursor、Claude Code、Codex、Gemini CLI',
tabOpencode: 'OpenCode',
@@ -374,13 +381,16 @@ const T = {
ariaHidePw: '隐藏密码',
},
'zh-TW': {
navMcp: 'MCP', navEntries: '條目', navTrash: '回收站', navAudit: '審計',
signOut: '登出',
lockedTitle: '取得 MCP 設定',
labelPassphrase: '加密密碼',
labelConfirm: '確認密碼',
labelNew: '新密碼',
labelCurrent: '目前密碼',
phPassphrase: '輸入密碼…',
phConfirm: '再次輸入…',
phCurrent: '輸入目前密碼…',
btnSetup: '設定並取得設定',
btnUnlock: '解鎖並取得設定',
setupNote: '密碼不會上傳伺服器。遺忘後資料將無法復原。',
@@ -388,6 +398,7 @@ const T = {
errShort: '密碼至少需要 8 個字元。',
errMismatch: '兩次輸入不一致。',
errWrong: '密碼錯誤,請重試。',
errWrongOld: '目前密碼錯誤,請重試。',
unlockedTitle: 'MCP 設定',
tabMcp: 'Cursor、Claude Code、Codex、Gemini CLI',
tabOpencode: 'OpenCode',
@@ -408,13 +419,16 @@ const T = {
ariaHidePw: '隱藏密碼',
},
'en': {
navMcp: 'MCP', navEntries: 'Entries', navTrash: 'Trash', navAudit: 'Audit',
signOut: 'Sign out',
lockedTitle: 'Get MCP Config',
labelPassphrase: 'Encryption password',
labelConfirm: 'Confirm password',
labelNew: 'New password',
labelCurrent: 'Current password',
phPassphrase: 'Enter password…',
phConfirm: 'Repeat password…',
phCurrent: 'Enter current password…',
btnSetup: 'Set up & get config',
btnUnlock: 'Unlock & get config',
setupNote: 'Your password never leaves this device. If forgotten, encrypted data cannot be recovered.',
@@ -422,6 +436,7 @@ const T = {
errShort: 'Password must be at least 8 characters.',
errMismatch: 'Passwords do not match.',
errWrong: 'Incorrect password, please try again.',
errWrongOld: 'Current password is incorrect, please try again.',
unlockedTitle: 'MCP Config',
tabMcp: 'Cursor, Claude Code, Codex, Gemini CLI',
tabOpencode: 'OpenCode',
@@ -557,20 +572,16 @@ function buildSecretsConfigText(apiKey, encKey) {
return lines.length < 3 ? wrapped : lines.slice(1, -1).join('\n');
}
/** OpenCode: local stdio bridge to Streamable HTTP MCP (mcp-remote --transport http-only). */
/** OpenCode: native Streamable HTTP transport (no mcp-remote bridge needed). */
function buildOpencodeEntry(apiKey, encKey) {
return {
type: 'local',
command: [
'npx', '-y', 'mcp-remote',
BASE_URL + '/mcp',
'--header',
'Authorization: Bearer ' + apiKey,
'--header',
'X-Encryption-Key: ' + encKey,
'--transport',
'http-only'
]
type: 'remote',
url: BASE_URL + '/mcp',
headers: {
'Authorization': 'Bearer ' + apiKey,
'X-Encryption-Key': encKey
},
oauth: false
};
}
@@ -832,14 +843,16 @@ async function confirmRegenerate() {
// ── Change passphrase modal ────────────────────────────────────────────────────
function openChangeModal() {
document.getElementById('change-pass-old').value = '';
document.getElementById('change-pass1').value = '';
document.getElementById('change-pass2').value = '';
document.getElementById('change-pass-old').type = 'password';
document.getElementById('change-pass1').type = 'password';
document.getElementById('change-pass2').type = 'password';
document.getElementById('change-error').style.display = 'none';
document.getElementById('change-modal').classList.add('open');
syncPwToggleI18n();
setTimeout(() => document.getElementById('change-pass1').focus(), 50);
setTimeout(() => document.getElementById('change-pass-old').focus(), 50);
}
function closeChangeModal() {
@@ -847,11 +860,13 @@ function closeChangeModal() {
}
async function doChange() {
const passOld = document.getElementById('change-pass-old').value;
const pass1 = document.getElementById('change-pass1').value;
const pass2 = document.getElementById('change-pass2').value;
const errEl = document.getElementById('change-error');
errEl.style.display = 'none';
if (!passOld) { showErr(errEl, t('errEmpty')); return; }
if (!pass1) { showErr(errEl, t('errEmpty')); return; }
if (pass1.length < 8) { showErr(errEl, t('errShort')); return; }
if (pass1 !== pass2) { showErr(errEl, t('errMismatch')); return; }
@@ -860,24 +875,39 @@ async function doChange() {
btn.disabled = true;
btn.innerHTML = '<span class="spinner" style="border-top-color:#0d1117"></span>';
try {
const salt = crypto.getRandomValues(new Uint8Array(32));
const cryptoKey = await deriveKey(pass1, salt, true);
const keyCheckHex = await encryptKeyCheck(cryptoKey);
const hexKey = await exportKeyHex(cryptoKey);
// Fetch current salt to derive old key for verification
const saltResp = await fetchAuth('/api/key-salt');
if (!saltResp.ok) throw new Error('HTTP ' + saltResp.status);
const saltData = await saltResp.json();
if (!saltData.has_passphrase) throw new Error('No passphrase configured');
const resp = await fetchAuth('/api/key-setup', {
// Derive old key and verify it
const oldCryptoKey = await deriveKey(passOld, hexToBytes(saltData.salt), true);
const validOld = await verifyKeyCheck(oldCryptoKey, saltData.key_check);
if (!validOld) { showErr(errEl, t('errWrongOld')); return; }
const oldHexKey = await exportKeyHex(oldCryptoKey);
// Derive new key
const newSalt = crypto.getRandomValues(new Uint8Array(32));
const newCryptoKey = await deriveKey(pass1, newSalt, true);
const newKeyCheckHex = await encryptKeyCheck(newCryptoKey);
const newHexKey = await exportKeyHex(newCryptoKey);
const resp = await fetchAuth('/api/key-change', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
salt: bytesToHex(salt),
key_check: keyCheckHex,
old_key: oldHexKey,
new_key: newHexKey,
salt: bytesToHex(newSalt),
key_check: newKeyCheckHex,
params: { alg: 'pbkdf2-sha256', iterations: PBKDF2_ITERATIONS }
})
});
if (!resp.ok) throw new Error('HTTP ' + resp.status);
currentEncKey = hexKey;
sessionStorage.setItem('enc_key', hexKey);
currentEncKey = newHexKey;
sessionStorage.setItem('enc_key', newHexKey);
renderRealConfig();
closeChangeModal();
} catch (e) {

File diff suppressed because it is too large

View File

@@ -3,6 +3,7 @@ var I18N_SHARED = {
pageTitleBase: 'Secrets',
navMcp: 'MCP',
navEntries: '条目',
navTrash: '回收站',
navAudit: '审计',
signOut: '退出',
mobileLabelTime: '时间',
@@ -14,6 +15,7 @@ var I18N_SHARED = {
pageTitleBase: 'Secrets',
navMcp: 'MCP',
navEntries: '條目',
navTrash: '回收站',
navAudit: '審計',
signOut: '登出',
mobileLabelTime: '時間',
@@ -25,6 +27,7 @@ var I18N_SHARED = {
pageTitleBase: 'Secrets',
navMcp: 'MCP',
navEntries: 'Entries',
navTrash: 'Trash',
navAudit: 'Audit',
signOut: 'Sign out',
mobileLabelTime: 'Time',

View File

@@ -0,0 +1,272 @@
<!DOCTYPE html>
<html lang="zh-CN">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="icon" href="/favicon.svg?v={{ version }}" type="image/svg+xml">
<title>Secrets — 回收站</title>
<style>
@import url('https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@400;600&family=Inter:wght@400;500;600&display=swap');
*, *::before, *::after { box-sizing: border-box; margin: 0; padding: 0; }
:root {
--bg: #0d1117; --surface: #161b22; --surface2: #21262d;
--border: #30363d; --text: #e6edf3; --text-muted: #8b949e;
--accent: #58a6ff; --accent-hover: #79b8ff;
}
body { background: var(--bg); color: var(--text); font-family: 'Inter', sans-serif; min-height: 100vh; }
.layout { display: flex; min-height: 100vh; }
.sidebar {
width: 200px; flex-shrink: 0; background: #0b1220; border-right: 1px solid rgba(240,246,252,0.08);
padding: 20px 12px; display: flex; flex-direction: column; gap: 20px;
}
.sidebar-logo { font-family: 'Inter', sans-serif; font-size: 16px; font-weight: 700;
color: #fff; text-decoration: none; padding: 0 10px; }
.sidebar-menu { display: grid; gap: 6px; }
.sidebar-link {
padding: 10px 12px; border-radius: 10px; color: #8b949e; text-decoration: none;
font-size: 13px; font-weight: 500;
}
.sidebar-link:hover { background: rgba(56,139,253,0.14); color: #fff; }
.sidebar-link.active { background: rgba(56,139,253,0.14); color: #fff; }
.content-shell { flex: 1; min-width: 0; display: flex; flex-direction: column; }
.topbar {
background: transparent; border-bottom: none; padding: 0 24px;
display: flex; align-items: center; gap: 12px; min-height: 44px;
}
.topbar-spacer { flex: 1; }
.nav-user { font-size: 14px; color: #8b949e; }
.lang-bar { display: flex; gap: 2px; background: rgba(240,246,252,0.06); border-radius: 8px; padding: 2px; }
.lang-btn { padding: 4px 10px; border: none; background: none; color: #8b949e;
font-size: 12px; cursor: pointer; border-radius: 6px; }
.lang-btn.active { background: rgba(240,246,252,0.1); color: #fff; }
.btn-sign-out {
padding: 6px 14px; border-radius: 10px; border: 1px solid rgba(240,246,252,0.12);
background: #161b22; color: #c9d1d9; font-size: 13px; text-decoration: none; cursor: pointer;
}
.btn-sign-out:hover { border-color: rgba(56,139,253,0.45); color: #fff; }
.main { padding: 16px 16px 24px; flex: 1; }
.card { background: #111827; border: 1px solid rgba(240,246,252,0.08); border-radius: 18px;
padding: 20px; width: 100%; }
.card-title { font-size: 22px; font-weight: 700; margin-bottom: 8px; color: #fff; }
.card-subtitle { color: #8b949e; font-size: 14px; margin-bottom: 18px; }
table { width: 100%; border-collapse: collapse; }
th, td { text-align: left; padding: 14px 12px; border-top: 1px solid rgba(240,246,252,0.08); vertical-align: top; }
th { color: #8b949e; font-size: 12px; font-weight: 600; }
td { font-size: 13px; color: #c9d1d9; }
.mono { font-family: 'JetBrains Mono', monospace; }
.row-actions { display: flex; gap: 8px; flex-wrap: wrap; }
.btn { border: 1px solid rgba(240,246,252,0.12); background: #161b22; color: #c9d1d9; border-radius: 10px; padding: 8px 12px; cursor: pointer; font-size: 13px; font-family: inherit; }
.btn:hover { border-color: rgba(56,139,253,0.45); color: #fff; }
.btn-danger { color: #f85149; }
.empty { color: #8b949e; font-size: 14px; padding: 20px 0; }
.pagination {
display: flex; align-items: center; gap: 12px; margin-top: 18px;
justify-content: center; padding: 12px 0;
}
.page-btn {
padding: 8px 12px; border-radius: 10px; border: 1px solid rgba(240,246,252,0.12);
background: #161b22; color: #c9d1d9; text-decoration: none;
font-size: 13px; cursor: pointer;
}
.page-btn:hover { border-color: rgba(56,139,253,0.45); color: #fff; }
.page-btn.disabled {
padding: 8px 12px; border-radius: 10px; border: 1px solid rgba(240,246,252,0.12);
background: #161b22; color: #6e7681; font-size: 13px;
opacity: 0.5; cursor: not-allowed;
}
.page-info {
color: #8b949e; font-size: 13px; font-family: 'JetBrains Mono', monospace;
}
@media (max-width: 900px) {
.layout { flex-direction: column; }
.sidebar {
width: 100%; border-right: none; border-bottom: 1px solid rgba(240,246,252,0.08);
padding: 16px; gap: 14px;
}
.sidebar-menu { flex-direction: row; flex-wrap: wrap; }
.sidebar-link { flex: 1; text-align: center; min-width: 72px; }
.main { padding: 20px 12px 28px; }
.card { padding: 16px; }
.topbar { padding: 12px 16px; flex-wrap: wrap; }
table, thead, tbody, th, td, tr { display: block; }
thead { display: none; }
tr { border-top: 1px solid rgba(240,246,252,0.08); padding: 12px 0; }
td { border-top: none; padding: 6px 0; }
td::before {
display: block; color: #8b949e; font-size: 11px;
margin-bottom: 4px; text-transform: uppercase;
content: attr(data-label);
}
}
</style>
</head>
<body>
<div class="layout">
<aside class="sidebar">
<a href="/dashboard" class="sidebar-logo">secrets</a>
<nav class="sidebar-menu">
<a href="/dashboard" class="sidebar-link" data-i18n="navMcp">MCP</a>
<a href="/entries" class="sidebar-link" data-i18n="navEntries">条目</a>
<a href="/trash" class="sidebar-link active" data-i18n="navTrash">回收站</a>
<a href="/audit" class="sidebar-link" data-i18n="navAudit">审计</a>
</nav>
</aside>
<div class="content-shell">
<div class="topbar">
<span class="topbar-spacer"></span>
<span class="nav-user">{{ user_name }}{% if !user_email.is_empty() %} · {{ user_email }}{% endif %}</span>
<div class="lang-bar">
<button class="lang-btn" onclick="setLang('zh-CN')"></button>
<button class="lang-btn" onclick="setLang('zh-TW')"></button>
<button class="lang-btn" onclick="setLang('en')">EN</button>
</div>
<form action="/auth/logout" method="post" style="display:inline">
<button type="submit" class="btn-sign-out" data-i18n="signOut">退出</button>
</form>
</div>
<main class="main">
<section class="card">
<div class="card-title" data-i18n="trashTitle">回收站</div>
<div class="card-subtitle" data-i18n="trashSubtitle">已删除条目会保留 3 个月,可在此恢复或永久删除。</div>
{% if entries.is_empty() %}
<div class="empty" data-i18n="emptyTrash">回收站为空。</div>
{% else %}
<table>
<thead>
<tr>
<th data-i18n="colName">名称</th>
<th data-i18n="colType">类型</th>
<th data-i18n="colFolder">文件夹</th>
<th data-i18n="colDeletedAt">删除时间</th>
<th data-i18n="colActions">操作</th>
</tr>
</thead>
<tbody>
{% for entry in entries %}
<tr data-trash-entry-id="{{ entry.id }}">
<td class="mono" data-label="名称">{{ entry.name }}</td>
<td class="mono" data-label="类型">{{ entry.entry_type }}</td>
<td class="mono" data-label="文件夹">{{ entry.folder }}</td>
<td class="mono" data-label="删除时间" title="{{ entry.deleted_at_iso }}">{{ entry.deleted_at_label }}</td>
<td data-label="操作">
<div class="row-actions">
<button type="button" class="btn btn-restore" data-i18n="btnRestore">恢复</button>
<button type="button" class="btn btn-danger btn-purge" data-i18n="btnPurge">永久删除</button>
</div>
</td>
</tr>
{% endfor %}
</tbody>
</table>
{% endif %}
{% if total_count > 0 %}
<div class="pagination">
{% if current_page > 1 %}
<a class="page-btn" href="/trash?page={{ current_page - 1 }}" data-i18n="prevPage">上一页</a>
{% else %}
<span class="page-btn disabled" data-i18n="prevPage">上一页</span>
{% endif %}
<span class="page-info">{{ current_page }} / {{ total_pages }}</span>
{% if current_page < total_pages %}
<a class="page-btn" href="/trash?page={{ current_page + 1 }}" data-i18n="nextPage">下一页</a>
{% else %}
<span class="page-btn disabled" data-i18n="nextPage">下一页</span>
{% endif %}
</div>
{% endif %}
</section>
</main>
</div>
</div>
<script src="/static/i18n.js?v={{ version }}"></script>
<script>
(function () {
I18N_PAGE = {
'zh-CN': {
navMcp: 'MCP', navEntries: '条目', navTrash: '回收站', navAudit: '审计',
signOut: '退出', trashTitle: '回收站', trashSubtitle: '已删除条目会保留 3 个月,可在此恢复或永久删除。',
emptyTrash: '回收站为空。', colName: '名称', colType: '类型', colFolder: '文件夹',
colDeletedAt: '删除时间', colActions: '操作', btnRestore: '恢复', btnPurge: '永久删除',
prevPage: '上一页', nextPage: '下一页', confirmPurge: '确定永久删除该条目?此操作不可撤销。',
mobileLabelName: '名称', mobileLabelType: '类型', mobileLabelFolder: '文件夹',
mobileLabelDeletedAt: '删除时间', mobileLabelActions: '操作'
},
'zh-TW': {
navMcp: 'MCP', navEntries: '條目', navTrash: '回收站', navAudit: '審計',
signOut: '登出', trashTitle: '回收站', trashSubtitle: '已刪除條目會保留 3 個月,可在此恢復或永久刪除。',
emptyTrash: '回收站為空。', colName: '名稱', colType: '類型', colFolder: '文件夾',
colDeletedAt: '刪除時間', colActions: '操作', btnRestore: '恢復', btnPurge: '永久刪除',
prevPage: '上一頁', nextPage: '下一頁', confirmPurge: '確定永久刪除該條目?此操作不可撤銷。',
mobileLabelName: '名稱', mobileLabelType: '類型', mobileLabelFolder: '文件夾',
mobileLabelDeletedAt: '刪除時間', mobileLabelActions: '操作'
},
en: {
navMcp: 'MCP', navEntries: 'Entries', navTrash: 'Trash', navAudit: 'Audit',
signOut: 'Sign out', trashTitle: 'Trash', trashSubtitle: 'Deleted entries are kept for 3 months. Restore or permanently delete them here.',
emptyTrash: 'Trash is empty.', colName: 'Name', colType: 'Type', colFolder: 'Folder',
colDeletedAt: 'Deleted at', colActions: 'Actions', btnRestore: 'Restore', btnPurge: 'Purge',
prevPage: 'Previous', nextPage: 'Next', confirmPurge: 'Permanently delete this entry? This action cannot be undone.',
mobileLabelName: 'Name', mobileLabelType: 'Type', mobileLabelFolder: 'Folder',
mobileLabelDeletedAt: 'Deleted at', mobileLabelActions: 'Actions'
}
};
window.applyPageLang = function () {
document.querySelectorAll('tbody tr').forEach(function (tr) {
var tds = tr.querySelectorAll('td[data-label]');
['Name', 'Type', 'Folder', 'DeletedAt', 'Actions'].forEach(function (col, idx) {
if (tds[idx]) tds[idx].setAttribute('data-label', t('mobileLabel' + col));
});
});
};
applyLang();
})();
document.querySelectorAll('tr[data-trash-entry-id]').forEach(function (row) {
var entryId = row.getAttribute('data-trash-entry-id');
var restoreButton = row.querySelector('.btn-restore');
var purgeButton = row.querySelector('.btn-purge');
restoreButton.addEventListener('click', function () {
fetch('/api/trash/' + encodeURIComponent(entryId) + '/restore', {
method: 'POST',
credentials: 'same-origin'
}).then(function (response) {
return response.json().then(function (body) {
if (!response.ok) throw new Error(body.error || ('HTTP ' + response.status));
return body;
});
}).then(function () {
row.remove();
if (!document.querySelector('tr[data-trash-entry-id]')) window.location.reload();
}).catch(function (error) {
window.alert(error.message || String(error));
});
});
purgeButton.addEventListener('click', function () {
if (!window.confirm(t('confirmPurge') || '确定永久删除该条目?此操作不可撤销。')) return;
fetch('/api/trash/' + encodeURIComponent(entryId), {
method: 'DELETE',
credentials: 'same-origin'
}).then(function (response) {
return response.json().then(function (body) {
if (!response.ok) throw new Error(body.error || ('HTTP ' + response.status));
return body;
});
}).then(function () {
row.remove();
if (!document.querySelector('tr[data-trash-entry-id]')) window.location.reload();
}).catch(function (error) {
window.alert(error.message || String(error));
});
});
});
</script>
</body>
</html>

View File

@@ -31,7 +31,23 @@ GOOGLE_CLIENT_SECRET=
# ─── Logging (optional) ──────────────────────────────────────────────
# RUST_LOG=secrets_mcp=debug
# ─── Note ────────────────────────────────────────────────────────────
# SERVER_MASTER_KEY is no longer required.
# In the new E2EE architecture, the encryption key is derived client-side
# from the user's passphrase; the server never holds the raw key.
# Enable it temporarily only when migrating legacy wrapped_key data.
# ─── Database pool (optional) ────────────────────────────────────────
# Maximum number of connections, default 10
# SECRETS_DATABASE_POOL_SIZE=10
# Connection acquire timeout in seconds, default 5
# SECRETS_DATABASE_ACQUIRE_TIMEOUT=5
# ─── Rate limiting (optional) ────────────────────────────────────────
# Global rate limit (req/s), default 100
# RATE_LIMIT_GLOBAL_PER_SECOND=100
# Global burst, default 200
# RATE_LIMIT_GLOBAL_BURST=200
# Per-IP rate limit (req/s), default 20
# RATE_LIMIT_IP_PER_SECOND=20
# Per-IP burst, default 40
# RATE_LIMIT_IP_BURST=40
# ─── Proxy trust (optional) ──────────────────────────────────────────
# When set to 1/true/yes, extract the client IP from X-Forwarded-For / X-Real-IP.
# Enable only behind a reverse proxy; otherwise clients can forge IPs to bypass rate limiting.
# TRUST_PROXY=1

View File

@@ -0,0 +1,392 @@
# Metadata Value Search & Entry Relations (DAG)
## Overview
Two new features for secrets-mcp:
1. **Metadata Value Search** — fuzzy search across all JSON scalar values in `metadata`, excluding keys
2. **Entry Relations** — directional parent-child associations between entries (DAG, multiple parents allowed, cycle detection)
---
## Feature 1: Metadata Value Search
### Problem
The existing `query` parameter in `secrets_find`/`secrets_search` searches `metadata::text ILIKE`, which matches keys, JSON punctuation, and structural characters. Users want to search **only metadata values** (e.g. find entries where any metadata value contains "1.2.3.4", regardless of key name).
### Solution
Add a new `metadata_query` filter to `SearchParams` that uses PostgreSQL `jsonb_path_query` to iterate over only scalar values (strings, numbers, booleans), then applies ILIKE matching.
### Changes
#### secrets-core
**`crates/secrets-core/src/service/search.rs`**
- Add `metadata_query: Option<&'a str>` field to `SearchParams`
- In `entry_where_clause_and_next_idx`, when `metadata_query` is set, add:
```sql
EXISTS (
SELECT 1 FROM jsonb_path_query(
entries.metadata,
'strict $.** ? (@.type() != "object" && @.type() != "array")'
) AS val
WHERE (val#>>'{}') ILIKE $N ESCAPE '\'
)
```
- Bind `ilike_pattern(metadata_query)` at the correct `$N` position in both `fetch_entries_paged` and `count_entries` (see the sketch below)
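A minimal sketch of how that clause could be appended, assuming `entry_where_clause_and_next_idx` mutates a WHERE string and tracks the next bind index; `push_metadata_query_clause` and this `ilike_pattern` body are illustrative stand-ins, not the crate's actual code:
```rust
/// Illustrative stand-in for `ilike_pattern`: escape LIKE metacharacters so
/// `%` and `_` match literally, then wrap the query in %...%.
fn ilike_pattern(q: &str) -> String {
    let escaped = q
        .replace('\\', "\\\\")
        .replace('%', "\\%")
        .replace('_', "\\_");
    format!("%{escaped}%")
}

/// Append the metadata-value condition and advance the bind index.
fn push_metadata_query_clause(where_sql: &mut String, next_idx: &mut usize) {
    where_sql.push_str(&format!(
        " AND EXISTS (SELECT 1 FROM jsonb_path_query(entries.metadata, \
         'strict $.** ? (@.type() != \"object\" && @.type() != \"array\")') AS val \
         WHERE (val#>>'{{}}') ILIKE ${next_idx} ESCAPE '\\')"
    ));
    *next_idx += 1;
}
```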
#### secrets-mcp (MCP tools)
**`crates/secrets-mcp/src/tools.rs`**
- Add `metadata_query` field to `FindInput`:
```rust
#[schemars(description = "Fuzzy search across metadata values only (keys excluded)")]
metadata_query: Option<String>,
```
- Add same field to `SearchInput`
- Pass `metadata_query` through to `SearchParams` in both `secrets_find` and `secrets_search` handlers
#### secrets-mcp (Web)
**`crates/secrets-mcp/src/web/entries.rs`**
- Add `metadata_query: Option<String>` to `EntriesQuery`
- Thread it into all `SearchParams` usages (count, list, folder counts)
- Pass it into template context
- Add `metadata_query` to `EntriesPageTemplate` and filter form hidden fields
- Include `metadata_query` in pagination `href` links
**`crates/secrets-mcp/templates/entries.html`**
- Add a "metadata 值" text input to the filter bar (after name, before type)
- Preserve value in the input on re-render
### i18n Keys
| Key | zh | zh-Hant | en |
|-----|-----|---------|-----|
| `filterMetaLabel` | 元数据值 | 元資料值 | Metadata value |
| `filterMetaPlaceholder` | 搜索元数据值 | 搜尋元資料值 | Search metadata values |
### Performance Notes
- The `jsonb_path_query` over `$.**` recursively enumerates every nested value, so the predicate is re-evaluated against each candidate row's metadata and cannot use an index
- The existing GIN index on `metadata jsonb_path_ops` supports `@>` containment queries but NOT this pattern
- For production datasets > 10k entries, consider a generated column or materialized search column in a future iteration
- First version prioritizes semantic correctness over index optimization
---
## Feature 2: Entry Relations (DAG)
### Data Model
New table `entry_relations`:
```sql
CREATE TABLE IF NOT EXISTS entry_relations (
parent_entry_id UUID NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
child_entry_id UUID NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
PRIMARY KEY (parent_entry_id, child_entry_id),
CHECK (parent_entry_id <> child_entry_id)
);
CREATE INDEX idx_entry_relations_parent ON entry_relations(parent_entry_id);
CREATE INDEX idx_entry_relations_child ON entry_relations(child_entry_id);
-- Multi-tenant isolation (parent and child must belong to the same user)
-- cannot be expressed by these foreign keys alone; it is enforced in
-- add_relation at the application layer (see Security Considerations).
```
Shared secrets already use `entry_secrets` as an N:N relation, so this is consistent with the existing pattern.
### Cycle Detection
On every `INSERT INTO entry_relations(parent, child)`, check that no path exists from `child` back to `parent`:
PostgreSQL has no Oracle-style `START WITH ... CONNECT BY`, so walk the ancestor chain with a recursive CTE:
```sql
WITH RECURSIVE chain AS (
SELECT parent_entry_id AS ancestor
FROM entry_relations
WHERE child_entry_id = $1  -- proposed parent
UNION ALL
SELECT er.parent_entry_id
FROM entry_relations er
JOIN chain c ON c.ancestor = er.child_entry_id
)
SELECT EXISTS(SELECT 1 FROM chain WHERE ancestor = $2);
-- $1 = proposed parent, $2 = proposed child
```
If `EXISTS` returns true, reject with `AppError::Validation { message: "cycle detected" }`.
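A hedged sqlx sketch of the check (`would_create_cycle` is an illustrative name; the real `add_relation` would run it in the same transaction as the INSERT, after the ownership check):
```rust
// Sketch only: true when inserting (parent, child) would close a cycle,
// i.e. when the proposed child is already an ancestor of the proposed parent.
async fn would_create_cycle(
    pool: &sqlx::PgPool,
    parent: uuid::Uuid,
    child: uuid::Uuid,
) -> Result<bool, sqlx::Error> {
    let (cycle,): (bool,) = sqlx::query_as(
        r#"
        WITH RECURSIVE chain AS (
            SELECT parent_entry_id AS ancestor
            FROM entry_relations
            WHERE child_entry_id = $1
            UNION ALL
            SELECT er.parent_entry_id
            FROM entry_relations er
            JOIN chain c ON c.ancestor = er.child_entry_id
        )
        SELECT EXISTS(SELECT 1 FROM chain WHERE ancestor = $2)
        "#,
    )
    .bind(parent) // $1: walk the proposed parent's ancestors
    .bind(child)  // $2: cycle iff the proposed child is among them
    .fetch_one(pool)
    .await?;
    Ok(cycle)
}
```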
### secrets-core Changes
**New file: `crates/secrets-core/src/service/relations.rs`**
```rust
pub struct RelationSummary {
pub parent_id: Uuid,
pub parent_name: String,
pub parent_folder: String,
pub parent_type: String,
}
pub struct AddRelationParams {
pub parent_entry_id: Uuid,
pub child_entry_id: Uuid,
pub user_id: Option<Uuid>,
}
pub struct RemoveRelationParams {
pub parent_entry_id: Uuid,
pub child_entry_id: Uuid,
pub user_id: Option<Uuid>,
}
/// Add a parent→child relation. Validates:
/// - Both entries exist and belong to the same user
/// - No self-reference (enforced by CHECK constraint)
/// - No cycle (recursive CTE check)
pub async fn add_relation(pool: &PgPool, params: AddRelationParams) -> Result<()>
/// Remove a parent→child relation.
pub async fn remove_relation(pool: &PgPool, params: RemoveRelationParams) -> Result<()>
/// Get all parents of an entry (with summary info).
pub async fn get_parents(pool: &PgPool, entry_id: Uuid, user_id: Option<Uuid>) -> Result<Vec<RelationSummary>>
/// Get all children of an entry (with summary info).
pub async fn get_children(pool: &PgPool, entry_id: Uuid, user_id: Option<Uuid>) -> Result<Vec<RelationSummary>>
/// Get parents + children for a batch of entry IDs (for list pages).
pub async fn get_relations_for_entries(
pool: &PgPool,
entry_ids: &[Uuid],
user_id: Option<Uuid>,
) -> Result<HashMap<Uuid, Vec<RelationSummary>>>
```
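`get_relations_for_entries` can be backed by one batched query with `= ANY($1)` instead of a per-entry round trip. A sketch of the parents half, with children symmetric under a flipped join (the `entries.entry_type` column name is an assumption, and user scoping is omitted for brevity):
```rust
use std::collections::HashMap;
use uuid::Uuid;

// Sketch: map each requested entry id to its parent summaries in one query.
async fn parents_for_entries(
    pool: &sqlx::PgPool,
    entry_ids: &[Uuid],
) -> Result<HashMap<Uuid, Vec<RelationSummary>>, sqlx::Error> {
    let rows: Vec<(Uuid, Uuid, String, String, String)> = sqlx::query_as(
        r#"
        SELECT r.child_entry_id, e.id, e.name, e.folder, e.entry_type
        FROM entry_relations r
        JOIN entries e ON e.id = r.parent_entry_id
        WHERE r.child_entry_id = ANY($1)
        "#,
    )
    .bind(entry_ids)
    .fetch_all(pool)
    .await?;
    let mut map: HashMap<Uuid, Vec<RelationSummary>> = HashMap::new();
    for (child_id, parent_id, parent_name, parent_folder, parent_type) in rows {
        map.entry(child_id).or_default().push(RelationSummary {
            parent_id,
            parent_name,
            parent_folder,
            parent_type,
        });
    }
    Ok(map)
}
```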
**`crates/secrets-core/src/service/mod.rs`** — add `pub mod relations;`
**`crates/secrets-core/src/db.rs`** — add entry_relations table creation in `migrate()`
**`crates/secrets-core/src/error.rs`** — no new error variant needed; use `AppError::Validation { message }` for cycle detection and permission errors
### MCP Tool Changes
**`crates/secrets-mcp/src/tools.rs`**
1. **`secrets_add`** (`AddInput`): add optional `parent_ids: Option<Vec<String>>` field
- Description: "UUIDs of parent entries to link. Creates parent→child relations."
- After creating the entry, call `relations::add_relation` for each parent
2. **`secrets_update`** (`UpdateInput`): add two fields:
- `add_parent_ids: Option<Vec<String>>` — "UUIDs of parent entries to link"
- `remove_parent_ids: Option<Vec<String>>` — "UUIDs of parent entries to unlink"
3. **`secrets_find`** and `secrets_search` output: add `parents` and `children` arrays to each entry result:
```json
{
"id": "...",
"name": "...",
"parents": [{"id": "...", "name": "...", "folder": "...", "type": "..."}],
"children": [{"id": "...", "name": "...", "folder": "...", "type": "..."}]
}
```
- Fetch relations for all returned entry IDs in a single batch query
### Web Changes
**`crates/secrets-mcp/src/web/entries.rs`**
1. **New API endpoints:**
- `POST /api/entries/{id}/relations` — add parent relation
- Body: `{ "parent_id": "uuid" }`
- Validates same-user ownership and cycle detection
- `DELETE /api/entries/{id}/relations/{parent_id}` — remove parent relation
- `GET /api/entries/options?q=xxx` — lightweight search for parent selection modal
- Returns `[{ "id": "...", "name": "...", "folder": "...", "type": "..." }]`
- Used by the edit dialog's parent selection autocomplete
2. **Entry list template data** — include parent/child counts per entry row
3. **`api_entry_patch`** — extend `EntryPatchBody` with optional `parent_ids: Option<Vec<Uuid>>`
- When present, replace all parent relations for this entry with the given list
  - This is simpler than incremental add/remove in the Web UI context (a sketch follows)
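A minimal sketch of those replace-all semantics, assuming a sqlx transaction; the real handler must re-run the ownership and cycle checks for every edge before inserting:
```rust
// Replace every parent of `child` with `parents`, atomically (sketch only).
async fn replace_parents(
    tx: &mut sqlx::Transaction<'_, sqlx::Postgres>,
    child: uuid::Uuid,
    parents: &[uuid::Uuid],
) -> Result<(), sqlx::Error> {
    sqlx::query("DELETE FROM entry_relations WHERE child_entry_id = $1")
        .bind(child)
        .execute(&mut **tx)
        .await?;
    for &parent in parents {
        // Real code: ownership + cycle check (recursive CTE above) per edge.
        sqlx::query(
            "INSERT INTO entry_relations (parent_entry_id, child_entry_id) VALUES ($1, $2)",
        )
        .bind(parent)
        .bind(child)
        .execute(&mut **tx)
        .await?;
    }
    Ok(())
}
```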
**`crates/secrets-mcp/templates/entries.html`**
1. **List table**: add a "关联" (relations) column showing parent/child counts as clickable chips
2. **Edit dialog**: add "上级条目" (parent entries) section
- Show current parents as removable chips
- Add a search-as-you-type input that queries `/api/entries/options`
- Click a search result to add it as parent
- On save, send `parent_ids` in the PATCH body
3. **View dialog / detail**: show "下级条目" (children) list with clickable links that navigate to the child entry
4. **i18n**: add keys for all new UI elements
### i18n Keys (Entry Relations)
| Key | zh | zh-Hant | en |
|-----|-----|---------|-----|
| `colRelations` | 关联 | 關聯 | Relations |
| `parentEntriesLabel` | 上级条目 | 上級條目 | Parent entries |
| `childrenEntriesLabel` | 下级条目 | 下級條目 | Child entries |
| `addParentLabel` | 添加上级 | 新增上級 | Add parent |
| `removeParentLabel` | 移除上级 | 移除上級 | Remove parent |
| `searchEntriesPlaceholder` | 搜索条目… | 搜尋條目… | Search entries… |
| `noParents` | 无上级 | 無上級 | No parents |
| `noChildren` | 无下级 | 無下級 | No children |
| `relationCycleError` | 无法添加:会形成循环引用 | 無法新增:會形成循環引用 | Cannot add: would create a cycle |
### Audit Logging
Log relation changes in the existing `audit::log_tx` system:
- Action: `"add_relation"` / `"remove_relation"`
- Detail JSON: `{ "parent_id": "...", "parent_name": "...", "child_id": "...", "child_name": "..." }`
### Export / Import
**`ExportEntry`** — add optional `parents: Vec<ParentRef>` where:
```rust
pub struct ParentRef {
pub folder: String,
pub name: String,
}
```
- On export, resolve each entry's parent IDs to `(folder, name)` pairs
- On import, two-phase:
1. Create all entries (skip parents)
2. For each entry with `parents`, resolve `(folder, name)` → `entry_id` and call `add_relation`
  3. If a parent reference cannot be resolved, log a warning and skip it rather than failing the entire import (a sketch follows)
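A hedged sketch of phase 2, reusing the `relations::add_relation` API declared earlier; the `entries` lookup query (including the `deleted_at` filter) and the bare `Result` alias are assumptions:
```rust
// Resolve each ParentRef to an entry id in the same user scope and link it;
// unresolvable references are logged and skipped (sketch only).
async fn link_imported_parents(
    pool: &sqlx::PgPool,
    child_entry_id: uuid::Uuid,
    user_id: Option<uuid::Uuid>,
    parents: &[ParentRef],
) -> Result<()> {
    for p in parents {
        let found: Option<(uuid::Uuid,)> = sqlx::query_as(
            "SELECT id FROM entries WHERE folder = $1 AND name = $2 \
             AND user_id IS NOT DISTINCT FROM $3 AND deleted_at IS NULL",
        )
        .bind(&p.folder)
        .bind(&p.name)
        .bind(user_id)
        .fetch_optional(pool)
        .await?;
        match found {
            Some((parent_entry_id,)) => {
                relations::add_relation(
                    pool,
                    AddRelationParams { parent_entry_id, child_entry_id, user_id },
                )
                .await?;
            }
            None => tracing::warn!(
                folder = %p.folder,
                name = %p.name,
                "import: parent entry not found, skipping relation"
            ),
        }
    }
    Ok(())
}
```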
### History / Rollback
- Relation changes are **not** versioned in `entries_history`. They are tracked only via `audit_log`.
- Rationale: relations are a cross-entry concern; rolling them back alongside entry fields would require complex multi-entry coordination. The audit log provides sufficient traceability.
- If the user explicitly requests rollback of relations in the future, it can be implemented as a separate feature.
---
## Implementation Order
### Phase 1: Metadata Value Search
1. `secrets-core/src/service/search.rs` — add `metadata_query` to `SearchParams`, implement SQL condition
2. `secrets-mcp/src/tools.rs` — add `metadata_query` to `FindInput` and `SearchInput`, wire through
3. `secrets-mcp/src/web/entries.rs` — add `metadata_query` to `EntriesQuery`, `SearchParams`, pagination, folder counts
4. `secrets-mcp/templates/entries.html` — add input field, i18n
5. Test: existing `query` still works; `metadata_query` only matches values
### Phase 2: Entry Relations (Core)
1. `secrets-core/src/db.rs` — add `entry_relations` table to `migrate()`
2. `secrets-core/src/service/relations.rs` — implement `add_relation`, `remove_relation`, `get_parents`, `get_children`, `get_relations_for_entries`, cycle detection
3. `secrets-core/src/service/mod.rs` — add `pub mod relations`
4. Test: add/remove/query relations, cycle detection, same-user validation
### Phase 3: Entry Relations (MCP)
1. `secrets-mcp/src/tools.rs` — extend `AddInput`, `UpdateInput` with parent IDs
2. `secrets-mcp/src/tools.rs` — extend `secrets_find`/`secrets_search` output with `parents`/`children`
3. Test: MCP tools work end-to-end
### Phase 4: Entry Relations (Web)
1. `secrets-mcp/src/web/entries.rs` — add API endpoints, extend `EntryPatchBody`, extend template data
2. `secrets-mcp/templates/entries.html` — add relations column, edit dialog parent selector, view dialog children list
3. Test: Web UI works end-to-end
### Phase 5: Export / Import (Optional)
1. `secrets-core/src/models.rs` — add `parents` to `ExportEntry`
2. `secrets-core/src/service/export.rs` — populate parents
3. `secrets-core/src/service/import.rs` — two-phase import with relation resolution
---
## Database Migration
Add to `secrets-core/src/db.rs` `migrate()`:
```sql
CREATE TABLE IF NOT EXISTS entry_relations (
parent_entry_id UUID NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
child_entry_id UUID NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
PRIMARY KEY (parent_entry_id, child_entry_id),
CHECK (parent_entry_id <> child_entry_id)
);
CREATE INDEX IF NOT EXISTS idx_entry_relations_parent ON entry_relations(parent_entry_id);
CREATE INDEX IF NOT EXISTS idx_entry_relations_child ON entry_relations(child_entry_id);
```
This is idempotent (uses `IF NOT EXISTS`) and will run automatically on next startup.
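In sqlx terms, the addition to `migrate()` might look like the sketch below; the actual function may batch or order its statements differently:
```rust
// Append the entry_relations DDL to the startup migration; every statement
// is idempotent, so re-running on each boot is safe.
pub async fn migrate_entry_relations(pool: &sqlx::PgPool) -> Result<(), sqlx::Error> {
    for stmt in [
        "CREATE TABLE IF NOT EXISTS entry_relations (
             parent_entry_id UUID NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
             child_entry_id UUID NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
             created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
             PRIMARY KEY (parent_entry_id, child_entry_id),
             CHECK (parent_entry_id <> child_entry_id)
         )",
        "CREATE INDEX IF NOT EXISTS idx_entry_relations_parent ON entry_relations(parent_entry_id)",
        "CREATE INDEX IF NOT EXISTS idx_entry_relations_child ON entry_relations(child_entry_id)",
    ] {
        sqlx::query(stmt).execute(pool).await?;
    }
    Ok(())
}
```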
---
## Security Considerations
- **Same-user isolation**: `add_relation` must verify both `parent_entry_id` and `child_entry_id` belong to the same `user_id` (or both are `NULL` for legacy single-user mode)
- **Cycle detection**: Recursive CTE query prevents any directed cycle, regardless of depth
- **CASCADE delete**: When an entry is deleted, all its relation edges are automatically removed via the `ON DELETE CASCADE` foreign key. This is the same pattern used by `entry_secrets`.
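A sketch of the same-user guard (column names and error handling are assumptions):
```rust
use sqlx::PgPool;
use uuid::Uuid;

/// Both entries must exist and share one owner. Comparing `Option<Uuid>`
/// makes two NULL user_ids equal, which covers legacy single-user mode.
async fn assert_same_user(pool: &PgPool, a: Uuid, b: Uuid) -> anyhow::Result<()> {
    let owners: Vec<Option<Uuid>> = sqlx::query_scalar(
        "SELECT user_id FROM entries WHERE id = ANY($1) AND deleted_at IS NULL",
    )
    .bind(vec![a, b])
    .fetch_all(pool)
    .await?;
    // Report cross-user pairs exactly like missing entries so the API does
    // not leak which entry IDs exist for other users.
    if owners.len() != 2 || owners[0] != owners[1] {
        anyhow::bail!("entry not found");
    }
    Ok(())
}
```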
---
## Testing Checklist
### Metadata Search
- [ ] `metadata_query=1.2.3.4` matches entries where any metadata value contains "1.2.3.4"
- [ ] `metadata_query=1.2.3.4` does NOT match entries where only the key contains "1.2.3.4"
- [ ] `metadata_query` works with nested metadata (e.g. `{"server": {"ip": "1.2.3.4"}}`)
- [ ] `metadata_query` combined with `folder`/`type`/`tags` filters works correctly
- [ ] `metadata_query` with special characters (`%`, `_`) is properly escaped (see the helper sketch after this list)
- [ ] Existing `query` parameter behavior is unchanged
- [ ] Web filter bar preserves `metadata_query` across pagination and folder tab clicks
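For the escaping item above, a small helper is enough (a sketch; backslash is Postgres's default `LIKE` escape character):
```rust
/// Escape LIKE/ILIKE wildcards so user input is matched literally.
fn escape_like(input: &str) -> String {
    input
        .replace('\\', "\\\\")
        .replace('%', "\\%")
        .replace('_', "\\_")
}

// Usage: bind format!("%{}%", escape_like(&metadata_query)) as the pattern.
```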
### Entry Relations
- [ ] Can add a parent→child relation between two entries
- [ ] Can add multiple parents to a single entry
- [ ] Cannot add self-referencing relation (CHECK constraint)
- [ ] Cannot create a direct cycle (A→B→A)
- [ ] Cannot create an indirect cycle (A→B→C→A)
- [ ] Cannot link entries from different users
- [ ] Deleting an entry removes all its relation edges but leaves related entries intact
- [ ] MCP `secrets_add` with `parent_ids` creates relations
- [ ] MCP `secrets_update` with `add_parent_ids`/`remove_parent_ids` modifies relations
- [ ] MCP `secrets_find`/`secrets_search` output includes `parents` and `children`
- [ ] Web entry list shows relation counts
- [ ] Web edit dialog allows adding/removing parents
- [ ] Web entry view shows children with navigation links

View File

@@ -0,0 +1,54 @@
# Move secret management from the edit dialog to the view-secrets dialog
## Current State
- **Edit dialog**: secret rename (input), type change (select), unlink (× button), name availability validation
- **View-secrets dialog**: decrypted values, copy, show/hide password
- **List rows**: secret chips (name + type) + unlink button
## Changes
### 1. Edit dialog — remove the secrets section
- Remove the `.modal-secrets` div containing `#edit-secrets-list` from the HTML (line 559)
- Remove the `renderEditSecrets` and `bindSecretValidation` functions from the JS
- Remove the logic in `openEdit` that reads/renders `data-entry-secrets`
- Remove the secret rename/type PATCH logic from `edit-save`
- Remove the unlink event listener inside the edit dialog (lines 1492-1517)
- `refreshListAfterSave` no longer handles the secretRows parameter
### 2. View-secrets dialog — add management features
Add to each decrypted field row:
- **Rename input** (inline edit, with debounced validation)
- **Type dropdown**
- **Unlink button**
- **Save button** (per row or one unified save)
- Reuse the existing `PATCH /api/secrets/{id}` and `DELETE /api/entries/{entry_id}/secrets/{secret_id}` endpoints
`openView` must additionally receive `data-entry-secrets` (with secret id/name/type) so the management controls can be tied to the decrypted values.
### 3. List rows — keep a read-only summary
- Keep the name + type display on the secret chips
- **Remove** the unlink button (×) from the chips
- **Remove** the list-row unlink event listener (lines 1466-1490)
### 4. i18n updates
- Add Chinese and English translations for rename, type change, and unlink in the view dialog
- Clean up i18n keys the edit dialog no longer needs
### 5. CSS adjustments
- Style the management controls in the view dialog (inline layout for input/select/button)
## Out of Scope
- Backend API: no changes needed (existing endpoints are reused)
- Version bump: treated as part of the previous 0.5.11 release (its tag has not yet been created by CI)
## Affected Files
- `crates/secrets-mcp/templates/entries.html` (HTML + JS + CSS)
- `crates/secrets-mcp/src/web/entries.rs` (no changes; existing API reused)

View File

@@ -5,13 +5,39 @@ set -euo pipefail
repo_root="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
cd "$repo_root"
# ── Version parsing ──────────────────────────────────────────────────────
version="$(grep -m1 '^version' crates/secrets-mcp/Cargo.toml | sed 's/.*"\(.*\)".*/\1/')"
tag="secrets-mcp-${version}"
echo "==> 当前 secrets-mcp 版本: ${version}"
echo "==> 检查是否已存在 tag: ${tag}"
if git rev-parse "refs/tags/${tag}" >/dev/null 2>&1; then
# ── 版本 bump 硬检查 ──────────────────────────────────────────────────────
# 若工作区存在 crates/** 或 Cargo.toml/Cargo.lock 变更,且版本号与父提交一致,则视为未发版,直接失败。
has_code_changes=false
diff_stat="$(jj diff --stat 2>/dev/null || true)"
if [ -n "$diff_stat" ]; then
# Only crates/ or root Cargo.toml changes count as behavior changes; Cargo.lock is a build artifact and does not trigger the version check
if echo "$diff_stat" | grep -qE 'crates/|^[^ ]*Cargo\.toml'; then
has_code_changes=true
fi
fi
if [ "$has_code_changes" = true ]; then
parent_version="$(jj file show --revision @- crates/secrets-mcp/Cargo.toml 2>/dev/null | grep -m1 '^version' | sed 's/.*"\(.*\)".*/\1/' || true)"
if [ -z "$parent_version" ]; then
# Parent version unreadable (e.g. the initial commit); skip this check
echo "==> Cannot read the parent commit's version; skipping bump check"
elif [ "$version" = "$parent_version" ]; then
echo "==> Error: working copy contains crates/ or Cargo changes but the version was not bumped (${version} == ${parent_version})"
echo "    Per policy, every code change must bump version in crates/secrets-mcp/Cargo.toml."
exit 1
else
echo "==> 版本已 bump: ${parent_version}${version}"
fi
fi
echo "==> 检查是否已存在 tag: ${tag}"
if jj log --no-graph --revisions "tag(${tag})" --limit 1 >/dev/null 2>&1; then
echo "提示: 已存在 tag ${tag},将按重复构建处理,不阻断检查。"
echo "如需创建新的发布版本,请先 bump crates/secrets-mcp/Cargo.toml 中的 version。"
else

View File

@@ -0,0 +1 @@
entry_id,secret_name,secret_value

View File

@@ -0,0 +1,383 @@
#!/usr/bin/env python3
"""
Batch re-encrypt secret fields from a CSV file.
CSV format:
entry_id,secret_name,secret_value
019d...,api_key,sk-xxxx
019d...,password,hunter2
The script groups rows by entry_id, then calls `secrets_update` with `secrets_obj`
so the server re-encrypts the provided plaintext values with the current key.
Warnings:
- Keep the CSV outside version control whenever possible.
- Delete the filled CSV after the repair is complete.
"""
from __future__ import annotations
import argparse
import csv
import json
import sys
import urllib.error
import urllib.request
from collections import OrderedDict
from pathlib import Path
from typing import Any
DEFAULT_USER_AGENT = "Cursor/3.0.12 (darwin arm64)"
REQUIRED_COLUMNS = {"entry_id", "secret_name", "secret_value"}
def parse_args() -> argparse.Namespace:
parser = argparse.ArgumentParser(
description="Repair secret ciphertexts by re-submitting plaintext via secrets_update."
)
parser.add_argument(
"--csv",
required=True,
help="Path to CSV file with columns: entry_id,secret_name,secret_value",
)
parser.add_argument(
"--mcp-json",
default=str(Path.home() / ".cursor" / "mcp.json"),
help="Path to mcp.json used to resolve URL and headers",
)
parser.add_argument(
"--server",
default="secrets",
help="MCP server name inside mcp.json (default: secrets)",
)
parser.add_argument("--url", help="Override MCP URL")
parser.add_argument("--auth", help="Override Authorization header value")
parser.add_argument("--encryption-key", help="Override X-Encryption-Key header value")
parser.add_argument(
"--user-agent",
default=DEFAULT_USER_AGENT,
help=f"User-Agent header (default: {DEFAULT_USER_AGENT})",
)
parser.add_argument(
"--dry-run",
action="store_true",
help="Parse and print grouped updates without sending requests",
)
return parser.parse_args()
def load_mcp_config(path: str, server_name: str) -> dict[str, Any]:
data = json.loads(Path(path).read_text(encoding="utf-8"))
servers = data.get("mcpServers", {})
if server_name not in servers:
raise KeyError(f"Server '{server_name}' not found in {path}")
return servers[server_name]
def resolve_connection_settings(args: argparse.Namespace) -> tuple[str, str, str]:
server = load_mcp_config(args.mcp_json, args.server)
headers = server.get("headers", {})
url = args.url or server.get("url")
auth = args.auth or headers.get("Authorization")
encryption_key = args.encryption_key or headers.get("X-Encryption-Key")
if not url:
raise ValueError("Missing MCP URL. Pass --url or configure it in mcp.json.")
if not auth:
raise ValueError(
"Missing Authorization header. Pass --auth or configure it in mcp.json."
)
if not encryption_key:
raise ValueError(
"Missing X-Encryption-Key. Pass --encryption-key or configure it in mcp.json."
)
return url, auth, encryption_key
def load_updates(csv_path: str) -> OrderedDict[str, OrderedDict[str, str]]:
grouped: OrderedDict[str, OrderedDict[str, str]] = OrderedDict()
with Path(csv_path).open("r", encoding="utf-8-sig", newline="") as fh:
reader = csv.DictReader(fh)
fieldnames = set(reader.fieldnames or [])
missing = REQUIRED_COLUMNS - fieldnames
if missing:
raise ValueError(
"CSV missing required columns: " + ", ".join(sorted(missing))
)
for line_no, row in enumerate(reader, start=2):
entry_id = (row.get("entry_id") or "").strip()
secret_name = (row.get("secret_name") or "").strip()
secret_value = row.get("secret_value") or ""
if not entry_id and not secret_name and not secret_value:
continue
if not entry_id:
raise ValueError(f"Line {line_no}: entry_id is required")
if not secret_name:
raise ValueError(f"Line {line_no}: secret_name is required")
entry_group = grouped.setdefault(entry_id, OrderedDict())
if secret_name in entry_group:
raise ValueError(
f"Line {line_no}: duplicate secret_name '{secret_name}' for entry_id '{entry_id}'"
)
entry_group[secret_name] = secret_value
if not grouped:
raise ValueError("CSV contains no updates")
return grouped
def post_json(
url: str,
payload: dict[str, Any],
auth: str,
encryption_key: str,
user_agent: str,
session_id: str | None = None,
) -> tuple[int, str | None, str]:
headers = {
"Content-Type": "application/json",
"Accept": "application/json, text/event-stream",
"Authorization": auth,
"X-Encryption-Key": encryption_key,
"User-Agent": user_agent,
}
if session_id:
headers["mcp-session-id"] = session_id
req = urllib.request.Request(
url,
data=json.dumps(payload).encode("utf-8"),
headers=headers,
method="POST",
)
try:
with urllib.request.urlopen(req, timeout=30) as resp:
return (
resp.status,
resp.headers.get("mcp-session-id") or session_id,
resp.read().decode("utf-8"),
)
except urllib.error.HTTPError as exc:
body = exc.read().decode("utf-8", errors="replace")
return exc.code, session_id, body
def parse_sse_json(body: str) -> list[dict[str, Any]]:
items: list[dict[str, Any]] = []
for line in body.splitlines():
if line.startswith("data: {"):
items.append(json.loads(line[6:]))
return items
def initialize_session(
url: str, auth: str, encryption_key: str, user_agent: str
) -> str:
status, session_id, body = post_json(
url,
{
"jsonrpc": "2.0",
"id": 1,
"method": "initialize",
"params": {
"protocolVersion": "2025-06-18",
"capabilities": {},
"clientInfo": {"name": "repair-script", "version": "1.0"},
},
},
auth,
encryption_key,
user_agent,
)
if status != 200 or not session_id:
raise RuntimeError(f"initialize failed: status={status}, body={body[:500]}")
status, _, body = post_json(
url,
{"jsonrpc": "2.0", "method": "notifications/initialized", "params": {}},
auth,
encryption_key,
user_agent,
session_id,
)
if status not in (200, 202):
raise RuntimeError(
f"notifications/initialized failed: status={status}, body={body[:500]}"
)
return session_id
def load_entry_index(
url: str, auth: str, encryption_key: str, user_agent: str, session_id: str
) -> dict[str, tuple[str, str]]:
status, _, body = post_json(
url,
{
"jsonrpc": "2.0",
"id": 999_001,
"method": "tools/call",
"params": {
"name": "secrets_find",
"arguments": {
"limit": 1000,
},
},
},
auth,
encryption_key,
user_agent,
session_id,
)
items = parse_sse_json(body)
last = items[-1] if items else {"raw": body[:1000]}
if status != 200:
raise RuntimeError(
f"secrets_find failed: status={status}, body={body[:500]}"
)
if "error" in last:
raise RuntimeError(f"secrets_find returned error: {last}")
content = last.get("result", {}).get("content", [])
if not content:
raise RuntimeError("secrets_find returned no content")
payload = json.loads(content[0]["text"])
index: dict[str, tuple[str, str]] = {}
for entry in payload.get("entries", []):
entry_id = entry.get("id")
name = entry.get("name")
folder = entry.get("folder", "")
if entry_id and name is not None:
index[entry_id] = (name, folder)
return index
def call_secrets_update(
url: str,
auth: str,
encryption_key: str,
user_agent: str,
session_id: str,
request_id: int,
entry_id: str,
entry_name: str,
entry_folder: str,
secrets_obj: dict[str, str],
) -> dict[str, Any]:
payload = {
"jsonrpc": "2.0",
"id": request_id,
"method": "tools/call",
"params": {
"name": "secrets_update",
"arguments": {
"id": entry_id,
"name": entry_name,
"folder": entry_folder,
"secrets_obj": secrets_obj,
# Pass the key as an argument too, so repair can still work
# even when a client/proxy mishandles custom headers.
"encryption_key": encryption_key,
},
},
}
status, _, body = post_json(
url, payload, auth, encryption_key, user_agent, session_id
)
items = parse_sse_json(body)
last = items[-1] if items else {"raw": body[:1000]}
if status != 200:
raise RuntimeError(
f"secrets_update failed for {entry_id}: status={status}, body={body[:500]}"
)
return last
def main() -> int:
args = parse_args()
try:
url, auth, encryption_key = resolve_connection_settings(args)
updates = load_updates(args.csv)
except Exception as exc:
print(f"ERROR: {exc}", file=sys.stderr)
return 1
print(f"Loaded {len(updates)} entries from {args.csv}")
if args.dry_run:
for entry_id, secrets_obj in updates.items():
print(
json.dumps(
{"id": entry_id, "secrets_obj": secrets_obj},
ensure_ascii=False,
indent=2,
)
)
return 0
try:
session_id = initialize_session(url, auth, encryption_key, args.user_agent)
entry_index = load_entry_index(
url, auth, encryption_key, args.user_agent, session_id
)
except Exception as exc:
print(f"ERROR: {exc}", file=sys.stderr)
return 1
success = 0
failures = 0
for request_id, (entry_id, secrets_obj) in enumerate(updates.items(), start=2):
try:
if entry_id not in entry_index:
raise RuntimeError(
f"entry id not found in secrets_find results: {entry_id}"
)
entry_name, entry_folder = entry_index[entry_id]
result = call_secrets_update(
url,
auth,
encryption_key,
args.user_agent,
session_id,
request_id,
entry_id,
entry_name,
entry_folder,
secrets_obj,
)
if "error" in result:
failures += 1
print(
json.dumps(
{"id": entry_id, "status": "error", "result": result},
ensure_ascii=False,
),
file=sys.stderr,
)
else:
success += 1
print(
json.dumps(
{"id": entry_id, "status": "ok", "result": result},
ensure_ascii=False,
)
)
except Exception as exc:
failures += 1
print(f"{entry_id}: ERROR: {exc}", file=sys.stderr)
print(f"Done. success={success} failure={failures}")
return 0 if failures == 0 else 2
if __name__ == "__main__":
raise SystemExit(main())

View File

@@ -1,95 +0,0 @@
#!/bin/bash
# Sync test-environment data to production
# Usage: ./scripts/sync-test-to-prod.sh
set -euo pipefail
# PostgreSQL client tools path (Homebrew libpq)
export PATH="/opt/homebrew/opt/libpq/bin:$PATH"
# SSL configuration
export PGSSLMODE=verify-full
export PGSSLROOTCERT=/etc/ssl/cert.pem
# Test environment
TEST_DB="postgres://postgres:Voson_2026_Pg18!@db.refining.ltd:5432/secrets-nn-test"
# Production environment
PROD_DB="postgres://postgres:Voson_2026_Pg18!@db.refining.ltd:5432/secrets-nn-prod"
echo "========================================="
echo " 测试环境 -> 生产环境 数据同步"
echo "========================================="
echo ""
# Confirm before proceeding
read -p "⚠️  This will overwrite production data. Continue? (yes/no): " confirm
if [ "$confirm" != "yes" ]; then
echo "已取消"
exit 0
fi
echo ""
echo "步骤 1/4: 导出测试环境数据..."
TEMP_DIR=$(mktemp -d)
trap "rm -rf $TEMP_DIR" EXIT
# Export test data (excluding audit log and history tables)
pg_dump "$TEST_DB" \
--table=entries \
--table=secrets \
--table=entry_secrets \
--table=users \
--table=oauth_accounts \
--data-only \
--column-inserts \
--no-owner \
--no-privileges \
> "$TEMP_DIR/test_data.sql"
echo "✓ 测试数据已导出到临时文件"
echo " 文件大小: $(du -h "$TEMP_DIR/test_data.sql" | cut -f1)"
echo ""
echo "步骤 2/4: 备份当前生产数据..."
pg_dump "$PROD_DB" \
--table=entries \
--table=secrets \
--table=entry_secrets \
--table=users \
--table=oauth_accounts \
--data-only \
--column-inserts \
--no-owner \
--no-privileges \
> "$TEMP_DIR/prod_backup_$(date +%Y%m%d_%H%M%S).sql"
echo "✓ 生产数据已备份"
echo ""
echo "步骤 3/4: 清空生产环境目标表..."
psql "$PROD_DB" <<'SQL'
TRUNCATE TABLE entry_secrets CASCADE;
TRUNCATE TABLE secrets CASCADE;
TRUNCATE TABLE entries CASCADE;
SQL
echo "✓ 生产环境目标表已清空"
echo ""
echo "步骤 4/4: 导入测试数据到生产环境..."
psql "$PROD_DB" -f "$TEMP_DIR/test_data.sql" 2>&1 | tail -20
echo ""
echo "验证数据..."
echo "生产环境数据统计:"
psql "$PROD_DB" -c "SELECT 'users' as table_name, count(*) FROM users UNION ALL SELECT 'entries', count(*) FROM entries UNION ALL SELECT 'secrets', count(*) FROM secrets UNION ALL SELECT 'entry_secrets', count(*) FROM entry_secrets UNION ALL SELECT 'oauth_accounts', count(*) FROM oauth_accounts ORDER BY table_name;"
echo ""
echo "========================================="
echo " ✓ 数据同步完成!"
echo "========================================="
echo ""
echo "提示:"
echo " - 生产数据备份已保存在临时目录"
echo " - 临时文件将在脚本退出后自动删除"