Compare commits


20 Commits

Author SHA1 Message Date
voson
2b994141b8 release(secrets-mcp): 0.5.5 — explicit CORS allow_methods in production; fix tower-http startup panic
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 4m59s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 6s
tower-http forbids combining credentials with wildcard methods/headers; production now uses an explicit GET/POST/PATCH/DELETE/OPTIONS allowlist.
2026-04-05 12:27:40 +08:00
voson
9d6ac5c13a release(secrets-mcp): 0.5.4 — Web pagination fixes and hex decoding; bulk-delete cap; MCP @-path detection
Some checks failed
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 4m55s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Failing after 6s
2026-04-05 12:14:40 +08:00
voson
1860cce86c release(secrets-mcp): 0.5.3 — audit log pagination and Web view; CONTRIBUTING; docs and template fixes 2026-04-05 11:34:04 +08:00
dd24f7cc44 release: secrets-mcp 0.5.2
Some checks failed
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 6m7s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Failing after 6s
Bump version: the secrets-mcp-0.5.1 tag already existed while the crates had further changes.

Made-with: Cursor
2026-04-05 10:38:50 +08:00
voson
aefad33870 chore(secrets-mcp): 0.5.1 — remove entry type normalization; MCP params accept string forms
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 4m22s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 6s
- Drop taxonomy's automatic mapping of entry type and the metadata.subtype backfill; values are only trimmed before storage
- MCP tools: optional Vec/Map/bool fields accept JSON embedded as strings, with improved parse-failure messages
- Add deser unit tests; sync README/AGENTS and models comments

Made-with: Cursor
2026-04-04 21:27:33 +08:00
voson
0ffb81e57f feat: entry update links existing secrets (link_secret_names)
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 4m19s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 6s
- secrets-core: update flow validates and applies secret links
- secrets-mcp: MCP tool params and UI for managing links on edit
- Align errors and templates; minor crypto/.gitignore tweaks

Made-with: Cursor
2026-04-04 20:30:32 +08:00
voson
4a1654c820 docs: update MCP tools list, env vars, taxonomy and deploy structure 2026-04-04 18:04:35 +08:00
voson
a15e2eaf4a docs: align README with removed SQL migration scripts
Made-with: Cursor
2026-04-04 17:58:26 +08:00
voson
1518388374 chore(release): secrets-mcp 0.4.0
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 4m19s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 6s
Bump version for the N:N entry_secrets data model and related MCP/Web
changes. Remove superseded SQL migration artifacts; rely on auto-migrate.
Add structured errors, taxonomy normalization, and web i18n helpers.

Made-with: Cursor
2026-04-04 17:58:12 +08:00
b99d821644 Merge pull request 'refactor/entry-secret-nn' (#1) from refactor/entry-secret-nn into main
Some checks failed
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 2m42s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Failing after 6s
Reviewed-on: #1
2026-04-03 19:44:47 +08:00
voson
32f275f88a feat(secrets-mcp): bump 0.3.9 and normalize listen address log
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 4m7s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Has been skipped
Prepare a new release version and improve startup log readability by showing localhost for loopback bind addresses without changing runtime binding behavior.

Made-with: Cursor
2026-04-03 19:36:12 +08:00
王松
c6fb457734 feat(nn): entry–secret N:N, unique secret names, web unlink
Some checks failed
Secrets MCP — Build & Release / Check / Build / Release (push) Failing after 2m37s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Has been skipped
Bump secrets-mcp to 0.3.8 (tag 0.3.7 already used).

- Junction table entry_secrets; secrets user-scoped with type
- Per-user unique secrets.name; link_secret_names on add
- Manual migrations + migrate script; MCP/tool and Web updates

Made-with: Cursor
2026-04-03 17:37:04 +08:00
df701f21b9 feat(secrets-mcp): auto-migrate and redirect when deleting a shared key (v0.3.7)
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 4m4s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 6s
When deleting a key entry that is still referenced via metadata.key_ref, copy the ciphertext to the first referrer within the same transaction
and redirect the remaining referrers' key_ref to the new owner; env_map no longer restricts key_ref resolution to type=key.
The Web delete API returns migrated; the Dashboard shows a migration notice after a successful delete.

Bump secrets-mcp to 0.3.7; add unit tests covering delete migration (requires SECRETS_DATABASE_URL).

Made-with: Cursor
2026-04-03 09:27:20 +08:00
c3c536200e feat(secrets-mcp): Web entry edit API and improved Notes display in lists (0.3.6)
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 3m55s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 6s
- secrets-core: EntryWriteRow; update/delete by id (with concurrency conflicts and unique keys)
- Web: PATCH/DELETE /api/entries/{id}; list edit/delete and error mapping
- entries template: Notes capped height with scroll; empty Notes no longer renders a placeholder box
- Version 0.3.5 → 0.3.6; sync Cargo.lock

Made-with: Cursor
2026-04-02 14:58:10 +08:00
7909f7102d feat(secrets-mcp): filter entries page by folder/type; release 0.3.5
- entries route supports ?folder=&type= queries, aligned with SearchParams in the search layer
- Add a filter form and explanatory copy to the entries list page
- Version 0.3.4 → 0.3.5; sync Cargo.lock

Made-with: Cursor
2026-04-02 14:37:36 +08:00
87a29af82d feat(web): entries list page /entries with total count
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 3m56s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 6s
- Add session-protected GET /entries, showing only non-sensitive entries columns
- search: list_entries and count_entries share filter conditions; pagination and counting never read secrets
- Sidebar gains an "Entries" link on dashboard/audit
- secrets-mcp 0.3.4 (tag does not exist yet)

Made-with: Cursor
2026-04-02 11:26:51 +08:00
1b11f7e976 release(secrets-mcp): v0.3.3 — enforce PostgreSQL TLS verification
Some checks failed
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 3m54s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Failing after 7s
Explicitly introduce database TLS configuration and reject weak sslmode in production to prevent silent connection downgrades. Update deploy/README and the ops runbook accordingly, including the certificate and server configuration flow for db.refining.ltd.

Made-with: Cursor
2026-04-01 15:18:14 +08:00
08e81363c9 release(secrets-mcp): v0.3.2 — fix key_ref multi-tenancy and ambiguity
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 3m41s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 6s
- env_map: key_ref resolution now takes user_id; supports folder/name; errors on multiple matches
- Sync key_ref documentation
- bump secrets-mcp 0.3.1 → 0.3.2; update Cargo.lock

Made-with: Cursor
2026-03-27 10:45:12 +08:00
voson
beade4503d release(secrets-mcp): v0.3.1
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 3m45s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 5s
- MCP: secrets_find, secrets_overview; secrets_get id-only; id on update/delete/history/rollback
- Add meta_obj/secrets_obj; delete guard; env_map/instructions updates
- Core: resolve_entry_by_id; get_*_by_id validates entry + tenant before decrypt

Made-with: Cursor
2026-03-26 17:35:56 +08:00
voson
409fd78a35 Release secrets-mcp 0.3.0: folder/type schema and MCP folder disambiguation
All checks were successful
Secrets MCP — Build & Release / Check / Build / Release (push) Successful in 3m39s
Secrets MCP — Build & Release / Deploy secrets-mcp (push) Successful in 5s
- Rename namespace/kind to folder/type on entries, audit_log, and history tables;
  add notes. Unique key is (user_id, folder, name).
- Service layer and MCP tools support name-first lookup with optional folder when
  multiple entries share the same name.
- secrets_delete dry_run uses the same disambiguation as real deletes.
- Add scripts/migrate-v0.3.0.sql for manual DB migration. Refresh README and
  AGENTS.md.

Made-with: Cursor
2026-03-26 15:12:28 +08:00
46 changed files with 7076 additions and 978 deletions

.gitignore (vendored, 5 lines changed)

@@ -2,6 +2,7 @@
.env
.DS_Store
.cursor/
# JSON credential files downloaded from Google OAuth
client_secret_*.apps.googleusercontent.com.json
*.pem
tmp/
client_secret_*.apps.googleusercontent.com.json
node_modules/

AGENTS.md (108 lines changed)

@@ -2,12 +2,37 @@
This repository is an **MCP SaaS**: `secrets-core` (business logic and persistence) + `secrets-mcp` (Streamable HTTP MCP, Web, OAuth, API Key). The external entry point is `crates/secrets-mcp`.
## Version Control
This repository uses **[Jujutsu (jj)](https://jj-vcs.dev/)** for version control (pure jj mode, no `.git` directory).
### Common jj commands
| Operation | jj command |
|------|---------|
| View history | `jj log` / `jj log 'all()'` |
| View status | `jj status` |
| Commit | `jj commit` |
| Create a new change | `jj new` |
| Rebase | `jj rebase` |
| Squash commits | `jj squash` |
| Undo | `jj undo` |
| List tags | `jj tag list` |
| List bookmarks | `jj bookmark list` |
| Push to remote | `jj git push` |
| Fetch from remote | `jj git fetch` |
### Notes
- This repository is in **pure jj mode** (no `.git` directory); do not use `git` commands locally
- CI/CD (Gitea Actions) still fetches over the Git protocol (the runner uses `git` automatically); no changes needed
- To check whether a tag exists, use `jj log --no-graph --revisions "tag(${tag})"` instead of `git rev-parse`
## Commit / Push Hard Rules (take precedence over everything below)
**Run the following checks before every commit and push, whether or not a release was explicitly requested:**
1. Commits touching `crates/**`, the root `Cargo.toml`/`Cargo.lock`, or `secrets-mcp` behavior are treated as **requiring a release** by default, unless explicitly marked "no release this time".
2. Before committing, check `version` in `crates/secrets-mcp/Cargo.toml`, then check tags: `git tag -l 'secrets-mcp-*'`. If the tag for the current version already exists and there are code changes, you **must bump the version** and run `cargo build` to sync `Cargo.lock`.
2. Before committing, check `version` in `crates/secrets-mcp/Cargo.toml`, then check tags: `jj tag list`. If the tag for the current version already exists and there are code changes, you **must bump the version** and run `cargo build` to sync `Cargo.lock`.
3. Before committing, run `./scripts/release-check.sh` (version/tag check + `fmt` + `clippy --locked` + `test --locked`). If the script is missing or unusable, at minimum run `cargo fmt -- --check && cargo clippy --locked -- -D warnings && cargo test --locked`.
## Project Structure
@@ -29,7 +54,8 @@ secrets/
- **Suggested database name**: `secrets-mcp` (a dedicated instance, distinct from historical database names).
- **Connection**: environment variable **`SECRETS_DATABASE_URL`** (this branch has no local config-file path).
- **Tables**: `entries` (with `user_id`), `secrets`, `entries_history`, `secrets_history`, `audit_log`, `users`, `oauth_accounts`; **auto-migrate** on first connection.
- **Tables**: `entries` (with `user_id`), `secrets`, `entries_history`, `secrets_history`, `audit_log`, `users`, `oauth_accounts`; **auto-migrate** on first connection (`migrate` in `secrets-core`).
- **Web sessions**: the **same database URL** as above; at startup `secrets-mcp` **auto-migrates** the tower-sessions PostgreSQL store (session tables coexist with business tables in the same instance; no second connection string required).
### Table Schema (excerpt)
@@ -37,27 +63,41 @@ secrets/
entries (
id UUID PRIMARY KEY DEFAULT uuidv7(),
user_id UUID, -- multi-tenant: NULL = legacy row; non-NULL = owned by a user
namespace VARCHAR(64) NOT NULL,
kind VARCHAR(64) NOT NULL,
folder VARCHAR(128) NOT NULL DEFAULT '',
type VARCHAR(64) NOT NULL DEFAULT '',
name VARCHAR(256) NOT NULL,
notes TEXT NOT NULL DEFAULT '',
tags TEXT[] NOT NULL DEFAULT '{}',
metadata JSONB NOT NULL DEFAULT '{}',
version BIGINT NOT NULL DEFAULT 1,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
)
-- Unique: UNIQUE(user_id, folder, name) WHERE user_id IS NOT NULL
-- UNIQUE(folder, name) WHERE user_id IS NULL (single-tenant legacy)
```
```sql
secrets (
id UUID PRIMARY KEY DEFAULT uuidv7(),
entry_id UUID NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
field_name VARCHAR(256) NOT NULL,
user_id UUID,
name VARCHAR(256) NOT NULL,
type VARCHAR(64) NOT NULL DEFAULT 'text',
encrypted BYTEA NOT NULL DEFAULT '\x',
version BIGINT NOT NULL DEFAULT 1,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
UNIQUE(entry_id, field_name)
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
)
-- Unique: UNIQUE(user_id, name) WHERE user_id IS NOT NULL
```
```sql
entry_secrets (
entry_id UUID NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
secret_id UUID NOT NULL REFERENCES secrets(id) ON DELETE CASCADE,
sort_order INT NOT NULL DEFAULT 0,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
PRIMARY KEY(entry_id, secret_id)
)
```
@@ -82,30 +122,44 @@ oauth_accounts (
user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
provider VARCHAR(32) NOT NULL,
provider_id VARCHAR(256) NOT NULL,
...
email VARCHAR(256),
name VARCHAR(256),
avatar_url TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
UNIQUE(provider, provider_id)
)
-- There is also a unique index UNIQUE(user_id, provider) (idx_oauth_accounts_user_provider in the migration): at most one link per provider per user.
```
### audit_log / history
Consistent with the migration scripts: `audit_log`, `entries_history`, and `secrets_history` support auditing and time-travel recovery; field definitions live in the `migrate` SQL in `crates/secrets-core/src/db.rs`. For ordinary business events in `audit_log`, `namespace/kind/name` are the entry coordinates; login events always use `namespace='auth'`, where `kind/name` identify the authentication target rather than an entry.
Consistent with the migration scripts: `audit_log`, `entries_history`, and `secrets_history` support auditing and time-travel recovery; field definitions live in the `migrate` SQL in `crates/secrets-core/src/db.rs`. `audit_log` carries an optional **`user_id`** (identifies the actor under multi-tenancy; nullable for legacy data). Ordinary business events in `audit_log` use **`folder` / `type` / `name`** as entry coordinates; login events always use **`folder='auth'`**, where `type`/`name` identify the authentication target rather than an entry.
### MCP Disambiguation (AI calls)
For tools that locate an entry by `name` (`secrets_update` / `secrets_history` / `secrets_rollback` / single-entry `secrets_delete`): if exactly one entry matches for the user, execute directly; if several match (same `name`, different `folder`), return an error asking the caller to supply `folder`. Passing an `id` (UUID) skips disambiguation entirely.
Note: `secrets_get` accepts only a UUID `id` (taken from `secrets_find` results); it does not locate entries by `name`.
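The rule above amounts to a single user-scoped query; the helper below is a minimal sketch with an invented name (the real logic lives in `secrets-core`'s service layer):

```rust
// Hypothetical sketch of the name-first disambiguation described above;
// the helper name and exact SQL are illustrative, not the actual API.
use sqlx::PgPool;
use uuid::Uuid;

async fn resolve_by_name(
    pool: &PgPool,
    user_id: Uuid,
    name: &str,
    folder: Option<&str>,
) -> anyhow::Result<Uuid> {
    let rows: Vec<(Uuid, String)> = sqlx::query_as(
        "SELECT id, folder FROM entries \
         WHERE user_id = $1 AND name = $2 AND ($3::text IS NULL OR folder = $3)",
    )
    .bind(user_id)
    .bind(name)
    .bind(folder)
    .fetch_all(pool)
    .await?;
    match rows.as_slice() {
        [] => anyhow::bail!("entry not found"),
        [(id, _)] => Ok(*id), // exactly one match: proceed
        many => anyhow::bail!(
            "ambiguous name; pass folder (candidates: {})",
            many.iter().map(|(_, f)| f.as_str()).collect::<Vec<_>>().join(", ")
        ),
    }
}
```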
### Field Responsibilities
| Field | Meaning | Example |
|------|------|------|
| `namespace` | Isolation space | `refining` |
| `kind` | Record type | `server`, `service`, `key` |
| `name` | Identifier | `gitea`, `i-example0…` |
| `folder` | Isolation space (part of the unique key) | `refining` |
| `type` | Soft classification (user-defined, not part of the unique key) | `server`, `service`, `account`, `person`, `document` |
| `name` | Identifier | `gitea`, `aliyun` |
| `notes` | Non-sensitive description | free text |
| `tags` | Tags | `["aliyun","prod"]` |
| `metadata` | Plaintext description | `ip`, `url`, `key_ref` |
| `secrets.field_name` | Encrypted field name (plaintext) | `token`, `ssh_key` |
| `metadata` | Plaintext description | `ip`, `url`, `subtype` |
| `secrets.name` | Secret name (caller-provided) | `token`, `ssh_key`, `password` |
| `secrets.type` | Secret type (caller-provided, default `text`) | `text`, `password`, `key` |
| `secrets.encrypted` | Ciphertext | AES-GCM |
### PEM sharing (`key_ref`)
### Shared Secrets (N:N links)
Store a shared PEM as an entry with `kind=key`; other records point to that key's `name` via `metadata.key_ref`. After the key record is updated, referrers pick up the new key through the service layer's resolution/merge logic (see `secrets_core::service`).
Multiple entries can share the same secret field, linked through the `entry_secrets` junction table.
When adding an entry, pass `link_secret_names` to link existing secrets by name (matched exactly on `(user_id, name)`).
Deleting an entry only removes the link; a secret still referenced elsewhere is kept, and one no longer referenced by any entry is cleaned up automatically.
## Code Conventions
@@ -117,6 +171,14 @@ oauth_accounts (
- Encryption: keys are derived client-side from the user's passphrase via **PBKDF2-SHA256 (600k iterations)**; the server stores only `key_salt`/`key_check`/`key_params` and never holds the raw key. The Web client encrypts and decrypts locally in the browser; MCP clients pass the key via the `X-Encryption-Key` request header, and the server decrypts transiently and returns plaintext.
- MCP: tool parameters stay in sync with the JSON Schema (`schemars`); authorization is based on the user context in the request extensions.
## Production CORS
Production CORS uses an explicit request-header allowlist (`build_cors_layer`) rather than `allow_headers(Any)`,
because `tower-http` forbids combining `allow_credentials(true)` with `allow_headers(Any)`.
**Maintenance constraint**: if the MCP protocol or a client adds custom request headers, `production_allowed_headers()` must be updated in step.
Currently allowed headers: `Authorization`, `Content-Type`, `X-Encryption-Key`, `mcp-session-id`, `x-mcp-session`.
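A minimal sketch of such a layer with tower-http (the real `build_cors_layer` lives in `secrets-mcp`; the origin value here is illustrative):

```rust
use http::{header, HeaderName, Method};
use tower_http::cors::{AllowOrigin, CorsLayer};

// Sketch of an explicit-allowlist CORS layer; tower-http panics at startup
// if allow_credentials(true) is combined with wildcard methods or headers.
fn cors_sketch() -> CorsLayer {
    CorsLayer::new()
        .allow_credentials(true)
        .allow_origin(AllowOrigin::exact(
            "https://secrets.refining.app".parse().unwrap(), // illustrative origin
        ))
        .allow_methods([
            Method::GET, Method::POST, Method::PATCH, Method::DELETE, Method::OPTIONS,
        ])
        .allow_headers([
            header::AUTHORIZATION,
            header::CONTENT_TYPE,
            HeaderName::from_static("x-encryption-key"),
            HeaderName::from_static("mcp-session-id"),
            HeaderName::from_static("x-mcp-session"),
        ])
}
```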
## Pre-commit Checks
```bash
@@ -135,7 +197,7 @@ cargo test --locked
```bash
grep '^version' crates/secrets-mcp/Cargo.toml
git tag -l 'secrets-mcp-*'
jj tag list
```
## CI/CD
@@ -153,9 +215,19 @@ git tag -l 'secrets-mcp-*'
| Variable | Description |
|------|------|
| `SECRETS_DATABASE_URL` | **Required**. PostgreSQL URL. |
| `SECRETS_DATABASE_SSL_MODE` | Optional, but strongly recommended as required in production. Prefer `verify-full` (at least `verify-ca`). |
| `SECRETS_DATABASE_SSL_ROOT_CERT` | Optional. CA root certificate path for private-CA or self-signed chains. |
| `SECRETS_DATABASE_POOL_SIZE` | Optional. Maximum pool connections, default `10`. |
| `SECRETS_DATABASE_ACQUIRE_TIMEOUT` | Optional. Connection-acquire timeout in seconds, default `5`. |
| `SECRETS_ENV` | Optional. Set to `prod` / `production` to reject weak PostgreSQL TLS modes. |
| `BASE_URL` | External base URL; the OAuth callback is `${BASE_URL}/auth/google/callback`. |
| `SECRETS_MCP_BIND` | Listen address, default `127.0.0.1:9315` (use `0.0.0.0:9315` when exposing directly from a container or remote host). |
| `GOOGLE_CLIENT_ID` / `GOOGLE_CLIENT_SECRET` | Optional; runtime configuration only. |
| `RUST_LOG` | e.g. `secrets_mcp=debug`. |
| `RATE_LIMIT_GLOBAL_PER_SECOND` | Optional. Global rate limit, default `100` req/s. |
| `RATE_LIMIT_GLOBAL_BURST` | Optional. Global burst size, default `200`. |
| `RATE_LIMIT_IP_PER_SECOND` | Optional. Per-IP rate limit, default `20` req/s. |
| `RATE_LIMIT_IP_BURST` | Optional. Per-IP burst size, default `40`. |
| `TRUST_PROXY` | Optional. Set to `1`/`true`/`yes` to extract the client IP from `X-Forwarded-For` / `X-Real-IP`. |
> `SERVER_MASTER_KEY` is no longer needed. In the new architecture the key is derived client-side from the user's passphrase; the server never holds it.
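The `RATE_LIMIT_*` defaults above map naturally onto quotas in the `governor` crate (the dependency added in this release's `Cargo.lock`); a hedged sketch, with the function name invented for illustration:

```rust
use std::num::NonZeroU32;
use governor::{Quota, RateLimiter};

// Illustrative only: build a global limiter from the documented defaults
// (100 req/s, burst 200); the real wiring inside secrets-mcp may differ.
fn rate_limit_demo() {
    let per_second = std::env::var("RATE_LIMIT_GLOBAL_PER_SECOND")
        .ok().and_then(|v| v.parse().ok()).and_then(NonZeroU32::new)
        .unwrap_or(NonZeroU32::new(100).unwrap());
    let burst = std::env::var("RATE_LIMIT_GLOBAL_BURST")
        .ok().and_then(|v| v.parse().ok()).and_then(NonZeroU32::new)
        .unwrap_or(NonZeroU32::new(200).unwrap());
    let limiter = RateLimiter::direct(Quota::per_second(per_second).allow_burst(burst));
    // Each request consumes one cell; an exhausted bucket returns Err (HTTP 429).
    assert!(limiter.check().is_ok());
}
```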

CONTRIBUTING.md (new file, 55 lines changed)

@@ -0,0 +1,55 @@
# Contributing
## Version Control
This repository uses **[Jujutsu (jj)](https://jj-vcs.dev/)**. Do not use `git` commands.
```bash
jj log      # view history
jj status   # view status
jj new      # create a new change
jj commit   # commit
jj rebase   # rebase
jj squash   # squash commits
jj git push # push to the remote
```
See the "Version Control" section in [AGENTS.md](AGENTS.md) for details.
## Local Development
```bash
# Copy the environment variables
cp deploy/.env.example .env
# After filling in the database connection and other settings:
cargo build
cargo test --locked
```
## Pre-commit Checks
Every commit must pass:
```bash
cargo fmt -- --check
cargo clippy --locked -- -D warnings
cargo test --locked
```
Or use the script:
```bash
./scripts/release-check.sh
```
## Release Rules
Commits touching `crates/**`, the root `Cargo.toml`/`Cargo.lock`, or `secrets-mcp` behavior require a release by default.
1. Check `version` in `crates/secrets-mcp/Cargo.toml`
2. Run `jj tag list` to check whether the corresponding tag already exists
3. If the tag exists and there are code changes, **bump the version** and run `cargo build` to sync `Cargo.lock`
4. Commit only after release-check passes
See the "Commit / Push Hard Rules" section in [AGENTS.md](AGENTS.md) for details.

Cargo.lock (generated, 136 lines changed)

@@ -464,6 +464,20 @@ dependencies = [
"syn",
]
[[package]]
name = "dashmap"
version = "6.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5041cc499144891f3790297212f32a74fb938e5136a14943f338ef9e0ae276cf"
dependencies = [
"cfg-if",
"crossbeam-utils",
"hashbrown 0.14.5",
"lock_api",
"once_cell",
"parking_lot_core",
]
[[package]]
name = "der"
version = "0.7.10"
@@ -596,6 +610,12 @@ version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d9c4f5dac5e15c24eb999c26181a6ca40b39fe946cbe4c263c7209467bc83af2"
[[package]]
name = "foldhash"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "77ce24cb58228fbb8aa041425bb1050850ac19177686ea6e0f41a70416f56fdb"
[[package]]
name = "form_urlencoded"
version = "1.2.2"
@@ -687,6 +707,12 @@ version = "0.3.32"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "037711b3d59c33004d3856fbdc83b99d4ff37a24768fa1be9ce3538a1cde4393"
[[package]]
name = "futures-timer"
version = "3.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f288b0a4f20f9a56b5d1da57e2227c661b7b16168e2f72365f57b63326e29b24"
[[package]]
name = "futures-util"
version = "0.3.32"
@@ -765,6 +791,35 @@ dependencies = [
"polyval",
]
[[package]]
name = "governor"
version = "0.10.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9efcab3c1958580ff1f25a2a41be1668f7603d849bb63af523b208a3cc1223b8"
dependencies = [
"cfg-if",
"dashmap",
"futures-sink",
"futures-timer",
"futures-util",
"getrandom 0.3.4",
"hashbrown 0.16.1",
"nonzero_ext",
"parking_lot",
"portable-atomic",
"quanta",
"rand 0.9.2",
"smallvec",
"spinning_top",
"web-time",
]
[[package]]
name = "hashbrown"
version = "0.14.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e5274423e17b7c9fc20b6e7e208532f9b19825d82dfd615708b70edd83df41f1"
[[package]]
name = "hashbrown"
version = "0.15.5"
@@ -773,7 +828,7 @@ checksum = "9229cfe53dfd69f0609a49f65461bd93001ea1ef889cd5529dd176593f5338a1"
dependencies = [
"allocator-api2",
"equivalent",
"foldhash",
"foldhash 0.1.5",
]
[[package]]
@@ -781,6 +836,11 @@ name = "hashbrown"
version = "0.16.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "841d1cc9bed7f9236f321df977030373f4a4163ae1a7dbfe1a51a2c1a51d9100"
dependencies = [
"allocator-api2",
"equivalent",
"foldhash 0.2.0",
]
[[package]]
name = "hashlink"
@@ -1283,6 +1343,12 @@ dependencies = [
"windows-sys 0.61.2",
]
[[package]]
name = "nonzero_ext"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "38bf9645c8b145698bb0b18a4637dcacbc421ea49bef2317e4fd8065a387cf21"
[[package]]
name = "nu-ansi-term"
version = "0.50.3"
@@ -1463,6 +1529,12 @@ dependencies = [
"universal-hash",
]
[[package]]
name = "portable-atomic"
version = "1.13.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c33a9471896f1c69cecef8d20cbe2f7accd12527ce60845ff44c153bb2a21b49"
[[package]]
name = "potential_utf"
version = "0.1.4"
@@ -1506,6 +1578,21 @@ dependencies = [
"unicode-ident",
]
[[package]]
name = "quanta"
version = "0.12.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f3ab5a9d756f0d97bdc89019bd2e4ea098cf9cde50ee7564dde6b81ccc8f06c7"
dependencies = [
"crossbeam-utils",
"libc",
"once_cell",
"raw-cpuid",
"wasi",
"web-sys",
"winapi",
]
[[package]]
name = "quinn"
version = "0.11.9"
@@ -1658,6 +1745,15 @@ version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0c8d0fd677905edcbeedbf2edb6494d676f0e98d54d5cf9bda0b061cb8fb8aba"
[[package]]
name = "raw-cpuid"
version = "11.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "498cd0dc59d73224351ee52a95fee0f1a617a2eae0e7d9d720cc622c73a54186"
dependencies = [
"bitflags",
]
[[package]]
name = "redox_syscall"
version = "0.5.18"
@@ -1953,6 +2049,7 @@ dependencies = [
"aes-gcm",
"anyhow",
"chrono",
"hex",
"rand 0.10.0",
"serde",
"serde_json",
@@ -1960,6 +2057,7 @@ dependencies = [
"sha2",
"sqlx",
"tempfile",
"thiserror",
"tokio",
"toml",
"tracing",
@@ -1968,7 +2066,7 @@ dependencies = [
[[package]]
name = "secrets-mcp"
version = "0.2.2"
version = "0.5.5"
dependencies = [
"anyhow",
"askama",
@@ -1976,6 +2074,7 @@ dependencies = [
"axum-extra",
"chrono",
"dotenvy",
"governor",
"http",
"rand 0.10.0",
"reqwest",
@@ -1994,6 +2093,7 @@ dependencies = [
"tower-sessions-sqlx-store-chrono",
"tracing",
"tracing-subscriber",
"url",
"urlencoding",
"uuid",
]
@@ -2194,6 +2294,15 @@ dependencies = [
"lock_api",
]
[[package]]
name = "spinning_top"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d96d2d1d716fb500937168cc09353ffdc7a012be8475ac7308e1bdf0e3923300"
dependencies = [
"lock_api",
]
[[package]]
name = "spki"
version = "0.7.3"
@@ -2716,6 +2825,7 @@ dependencies = [
"futures-util",
"http",
"http-body",
"http-body-util",
"iri-string",
"pin-project-lite",
"tower",
@@ -3166,6 +3276,28 @@ dependencies = [
"wasite",
]
[[package]]
name = "winapi"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5c839a674fcd7a98952e593242ea400abe93992746761e38641405d28b00f419"
dependencies = [
"winapi-i686-pc-windows-gnu",
"winapi-x86_64-pc-windows-gnu",
]
[[package]]
name = "winapi-i686-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6"
[[package]]
name = "winapi-x86_64-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
[[package]]
name = "windows-core"
version = "0.62.2"


@@ -28,6 +28,7 @@ rand = "^0.10.0"
# Utils
anyhow = "^1.0.102"
thiserror = "^2"
chrono = { version = "^0.4.44", features = ["serde"] }
uuid = { version = "^1.22.0", features = ["serde"] }
tracing = "^0.1"


@@ -17,7 +17,10 @@ cargo build --release -p secrets-mcp
| Variable | Description |
|------|------|
| `SECRETS_DATABASE_URL` | **Required**. PostgreSQL connection string (a dedicated database such as `secrets-mcp` is recommended). |
| `SECRETS_DATABASE_URL` | **Required**. PostgreSQL connection string (prefer a domain name such as `db.refining.ltd` over a raw IP). |
| `SECRETS_DATABASE_SSL_MODE` | Optional, but strongly recommended as required in production. Prefer `verify-full` (at least `verify-ca`) to avoid falling back to weak TLS modes. |
| `SECRETS_DATABASE_SSL_ROOT_CERT` | Optional. CA root certificate path for private-CA or self-signed chains (e.g. `/etc/secrets/pg-ca.crt`). |
| `SECRETS_ENV` | Optional. Set to `prod` / `production` to reject weak PostgreSQL TLS modes (`prefer`, `disable`, `allow`, `require`). |
| `BASE_URL` | External base URL; the OAuth callback is `{BASE_URL}/auth/google/callback`. Default `http://localhost:9315`. |
| `SECRETS_MCP_BIND` | Listen address, default `127.0.0.1:9315`. Use `0.0.0.0:9315` inside a container or when exposing the port directly; behind a reverse proxy it is usually `127.0.0.1:9315`. |
| `GOOGLE_CLIENT_ID` / `GOOGLE_CLIENT_SECRET` | Optional; without them there is no Google login entry. Read from the environment at runtime; never commit to CI or bake into the binary. |
@@ -27,8 +30,55 @@ cargo build --release -p secrets-mcp
cargo run -p secrets-mcp
```
Recommended production example (PostgreSQL TLS):
```bash
SECRETS_DATABASE_URL=postgres://postgres:***@db.refining.ltd:5432/secrets-mcp
SECRETS_DATABASE_SSL_MODE=verify-full
SECRETS_DATABASE_SSL_ROOT_CERT=/etc/secrets/pg-ca.crt
SECRETS_ENV=production
```
- **Web**: `BASE_URL` (login, Dashboard, passphrase setup, API Key creation).
- **MCP**: Streamable HTTP base URL `{BASE_URL}/mcp`; requires `Authorization: Bearer <api_key>` + `X-Encryption-Key: <hex>` request headers.
- **MCP**: Streamable HTTP base URL `{BASE_URL}/mcp`; requires `Authorization: Bearer <api_key>` + `X-Encryption-Key: <hex>` request headers (tools that read ciphertext must carry the key).
## PostgreSQL TLS Hardening
- Give the database its own domain, `db.refining.ltd`, and keep the service domain at `secrets.refining.app`.
- Use a verifiable certificate chain for the database (e.g. Let's Encrypt or a private CA) and ensure the certificate `SAN` includes `db.refining.ltd`.
- On the PostgreSQL side, restrict application sources with `hostssl` rules (e.g. `47.238.146.244/32`) and progressively remove plaintext public `host` access.
- On the application side, prefer `SECRETS_DATABASE_SSL_MODE=verify-full`; `verify-ca` is acceptable only as a transitional step.
- Actionable ops steps: [`deploy/postgres-tls-hardening.md`](deploy/postgres-tls-hardening.md).
## MCP and AI Workflow (v0.3+)
Entries are logically unique per user by **`(folder, name)`** (database unique index: `user_id + folder + name`). The same name can exist in different folders (e.g. `refining/aliyun` and `ricnsmart/aliyun`).
### Tool List
| Tool | Needs encryption key | Description |
|------|-------------|------|
| `secrets_find` | No | Discover entries (returns a schema including secret_fields); supports `name_query` fuzzy matching |
| `secrets_search` | No | Search entries; supports `query`/`folder`/`type`/`name` filters, `sort`/`offset` pagination, and a `summary` mode |
| `secrets_get` | Yes | Fetch a single entry with decrypted secrets by UUID `id` |
| `secrets_add` | Yes | Add an entry; supports `meta_obj`/`secrets_obj` JSON object params, `secret_types` to set secret types, and `link_secret_names` to link existing secrets |
| `secrets_update` | Yes | Update an entry, located by `id` or `name`+`folder` |
| `secrets_delete` | No | Delete an entry, located by `id` or `name`+`folder`; `dry_run=true` previews the deletion |
| `secrets_history` | No | View entry history, located by `id` or `name`+`folder` |
| `secrets_rollback` | Yes | Roll an entry back to a given historical version, located by `id` or `name`+`folder` |
| `secrets_export` | Yes | Export entries (including decrypted plaintext) as JSON/TOML/YAML |
| `secrets_env_map` | Yes | Map secrets to environment variables (`UPPER(entry)_UPPER(field)` format); supports `prefix` |
| `secrets_overview` | No | Return entry counts per folder and type |
### Disambiguation Rules
- **Tools that locate by `name`** (`secrets_update` / `secrets_delete` / `secrets_history` / `secrets_rollback`): if exactly one entry matches for the user, execute directly; if several match (same `name`, different `folder`), return an error asking for `folder`. Passing an `id` (UUID) skips disambiguation.
- **`secrets_get`** fetches only by `id` (UUID).
- **`secrets_delete`** with `dry_run=true` follows the same disambiguation as a real delete: a unique match previews one entry, multiple matches error and require `folder`.
### Shared Secrets
Under the N:N model, deleting an entry only removes the link; a shared secret still referenced by another entry is kept, and one with no remaining references is cleaned up automatically.
## Encryption Architecture (hybrid E2EE)
@@ -122,31 +172,40 @@ flowchart LR
## Data Model
Main table **`entries`** (`namespace`, `kind`, `name`, `tags`, `metadata`; `user_id` under multi-tenancy) + child table **`secrets`** (one encrypted field per row: `field_name`, `encrypted`). Plus `entries_history`, `secrets_history`, `audit_log`, as well as **`users`** (with `key_salt`, `key_check`, `key_params`, `api_key`) and **`oauth_accounts`**. Tables auto-migrate on first connection.
Main table **`entries`** (`folder`, `type`, `name`, `notes`, `tags`, `metadata`; `user_id` under multi-tenancy) + child table **`secrets`** (one encrypted field per row: `name`, `type`, `encrypted`; linked to entries N:N through the `entry_secrets` junction table). **Uniqueness**: `UNIQUE(user_id, folder, name)` (legacy rows with NULL `user_id` are unique by `(folder, name)`). Plus `entries_history`, `secrets_history`, `audit_log`, as well as **`users`** (with `key_salt`, `key_check`, `key_params`, `api_key`) and **`oauth_accounts`**. Tables auto-migrate on first connection (`migrate` in `secrets-core`); existing databases are brought up to date incrementally (tables, indexes, and the N:N structure) by the same `migrate()` at process start. To compare against a one-shot SQL from an earlier version, search the git history for the removed `scripts/migrate-v0.3.0.sql`. **Web login sessions** (tower-sessions) use the same `SECRETS_DATABASE_URL`; the session store is migrated at process start (see `PostgresStore::migrate` in `secrets-mcp`), with no extra environment variable.
| Location | Field | Description |
|------|------|------|
| entries | namespace | First-level isolation, e.g. `refining`, `ricnsmart` |
| entries | kind | `server`, `service`, `key`, etc. (extensible) |
| entries | name | Human-readable identifier |
| entries | metadata | Plaintext JSON (ip, url, `key_ref`, …) |
| secrets | field_name | Plaintext field name, for schema display |
| entries | folder | Organization/isolation space, e.g. `refining`, `ricnsmart`; part of the unique key |
| entries | type | Soft classification, user-defined, e.g. `server`, `service`, `account`, `person`, `document` (not part of the unique key) |
| entries | name | Human-readable identifier; unique per user together with `folder` |
| entries | notes | Non-sensitive description text |
| entries | metadata | Plaintext JSON (ip, url, subtype, …) |
| secrets | name | Secret name (caller-provided) |
| secrets | type | Secret type (caller-provided, default `text`) |
| secrets | encrypted | AES-GCM ciphertext (includes nonce) |
| users | key_salt | PBKDF2 salt (32 B), written when the passphrase is first set |
| users | key_check | A known constant encrypted with the derived key, used to verify the passphrase |
| users | key_params | Derivation parameters, e.g. `{"alg":"pbkdf2-sha256","iterations":600000}` |
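A minimal sketch of the derivation and passphrase check implied by these fields, assuming the `pbkdf2` and `sha2` crates; the helper name is illustrative:

```rust
use pbkdf2::pbkdf2_hmac;
use sha2::Sha256;

// Illustrative: derive a 32-byte AES key from the passphrase per key_params,
// then verify it by decrypting key_check (a known constant encrypted earlier).
fn derive_key(passphrase: &str, key_salt: &[u8], iterations: u32) -> [u8; 32] {
    let mut key = [0u8; 32];
    pbkdf2_hmac::<Sha256>(passphrase.as_bytes(), key_salt, iterations, &mut key);
    key
}

// let key = derive_key(pass, &user.key_salt, 600_000);
// decrypt(&key, &user.key_check)?; // succeeds only if the passphrase matches
```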
### PEM sharing (`key_ref`)
### Shared Secrets (N:N links)
The same PEM can be referenced by multiple `server` records: store the PEM as an entry with `kind=key` and write the key's name into each server entry's `metadata.key_ref`; on rotation, only the key record needs updating.
Multiple entries can share the same encrypted field, linked N:N through the `entry_secrets` junction table (see the sketch after this list):
- When adding an entry, `link_secret_names` links existing secrets (looked up by exact `(user_id, name)` match)
- The same secret can be referenced by several entries; deleting one entry does not cascade-delete a shared secret
- A secret no longer referenced by any entry is cleaned up automatically (a `NOT EXISTS` subquery)
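A sketch of that orphan cleanup as it might look with sqlx (the actual statement lives in `secrets-core`; this version is illustrative):

```rust
use sqlx::{Postgres, Transaction};
use uuid::Uuid;

// Illustrative orphan cleanup: after unlinking an entry, delete the user's
// secrets that no entry_secrets row references anymore.
async fn cleanup_orphan_secrets(
    tx: &mut Transaction<'_, Postgres>,
    user_id: Uuid,
) -> sqlx::Result<u64> {
    let result = sqlx::query(
        "DELETE FROM secrets s \
         WHERE s.user_id = $1 \
           AND NOT EXISTS ( \
               SELECT 1 FROM entry_secrets es WHERE es.secret_id = s.id \
           )",
    )
    .bind(user_id)
    .execute(&mut **tx)
    .await?;
    Ok(result.rows_affected())
}
```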
### Types (`type`)
The `type` field is a soft classification, freely chosen by the user, with no automatic conversion or normalization. Common examples: `server`, `service`, `account`, `person`, `document`, but any value is accepted.
## Audit Log
Write operations such as `add`, `update`, and `delete` are recorded in **`audit_log`** (action, object, summary; never secret plaintext).
Business entry events use the `[namespace/kind] name` semantics; login events use `namespace='auth'`, where `kind/name` identify the authentication target (e.g. `oauth/google`), not a secrets entry.
Write operations such as `add`, `update`, and `delete` are recorded in **`audit_log`** (action, object, summary; never secret plaintext). Under multi-tenancy, an optional **`user_id`** is written (nullable for legacy rows).
Business entry events use **`folder` / `type` / `name`**; login events use **`folder='auth'`**, where `type`/`name` identify the authentication target (e.g. `oauth` / `google`), not a secrets entry.
```sql
SELECT action, namespace, kind, name, detail, created_at
SELECT action, folder, type, name, detail, user_id, created_at
FROM audit_log
ORDER BY created_at DESC
LIMIT 20;
@@ -157,9 +216,18 @@ LIMIT 20;
```
Cargo.toml
crates/secrets-core/ # db / crypto / models / audit / service
src/
taxonomy.rs # SECRET_TYPE_OPTIONS (dropdown options for secret field types)
service/ # business logic (add, search, update, delete, export, env_map, …)
crates/secrets-mcp/ # MCP HTTP, Web, OAuth, API Key
scripts/
deploy/ # systemd and .env examples
release-check.sh # pre-release fmt / clippy / test
setup-gitea-actions.sh
sync-test-to-prod.sh # sync the test database to production (on demand)
deploy/
.env.example # environment variable template
secrets-mcp.service # systemd unit (for production deployment)
postgres-tls-hardening.md # PostgreSQL TLS hardening ops runbook
```
## CI/CD (Gitea Actions)


@@ -10,7 +10,9 @@ path = "src/lib.rs"
[dependencies]
aes-gcm.workspace = true
anyhow.workspace = true
thiserror.workspace = true
chrono.workspace = true
hex = "0.4"
rand.workspace = true
serde.workspace = true
serde_json.workspace = true


@@ -3,7 +3,7 @@ use sqlx::{PgPool, Postgres, Transaction};
use uuid::Uuid;
pub const ACTION_LOGIN: &str = "login";
pub const NAMESPACE_AUTH: &str = "auth";
pub const FOLDER_AUTH: &str = "auth";
fn login_detail(provider: &str, client_ip: Option<&str>, user_agent: Option<&str>) -> Value {
json!({
@@ -16,7 +16,7 @@ fn login_detail(provider: &str, client_ip: Option<&str>, user_agent: Option<&str
/// Write a login audit entry without requiring an explicit transaction.
pub async fn log_login(
pool: &PgPool,
kind: &str,
entry_type: &str,
provider: &str,
user_id: Uuid,
client_ip: Option<&str>,
@@ -24,22 +24,22 @@ pub async fn log_login(
) {
let detail = login_detail(provider, client_ip, user_agent);
let result: Result<_, sqlx::Error> = sqlx::query(
"INSERT INTO audit_log (user_id, action, namespace, kind, name, detail) \
"INSERT INTO audit_log (user_id, action, folder, type, name, detail) \
VALUES ($1, $2, $3, $4, $5, $6)",
)
.bind(user_id)
.bind(ACTION_LOGIN)
.bind(NAMESPACE_AUTH)
.bind(kind)
.bind(FOLDER_AUTH)
.bind(entry_type)
.bind(provider)
.bind(&detail)
.execute(pool)
.await;
if let Err(e) = result {
tracing::warn!(error = %e, kind, provider, "failed to write login audit log");
tracing::warn!(error = %e, entry_type, provider, "failed to write login audit log");
} else {
tracing::debug!(kind, provider, ?user_id, "login audit logged");
tracing::debug!(entry_type, provider, ?user_id, "login audit logged");
}
}
@@ -48,19 +48,19 @@ pub async fn log_tx(
tx: &mut Transaction<'_, Postgres>,
user_id: Option<Uuid>,
action: &str,
namespace: &str,
kind: &str,
folder: &str,
entry_type: &str,
name: &str,
detail: Value,
) {
let result: Result<_, sqlx::Error> = sqlx::query(
"INSERT INTO audit_log (user_id, action, namespace, kind, name, detail) \
"INSERT INTO audit_log (user_id, action, folder, type, name, detail) \
VALUES ($1, $2, $3, $4, $5, $6)",
)
.bind(user_id)
.bind(action)
.bind(namespace)
.bind(kind)
.bind(folder)
.bind(entry_type)
.bind(name)
.bind(&detail)
.execute(&mut **tx)
@@ -69,7 +69,7 @@ pub async fn log_tx(
if let Err(e) = result {
tracing::warn!(error = %e, "failed to write audit log");
} else {
tracing::debug!(action, namespace, kind, name, "audit logged");
tracing::debug!(action, folder, entry_type, name, "audit logged");
}
}


@@ -1,4 +1,15 @@
use anyhow::Result;
use std::path::PathBuf;
use anyhow::{Context, Result};
use sqlx::postgres::PgSslMode;
#[derive(Debug, Clone)]
pub struct DatabaseConfig {
pub url: String,
pub ssl_mode: Option<PgSslMode>,
pub ssl_root_cert: Option<PathBuf>,
pub enforce_strict_tls: bool,
}
/// Resolve database URL from environment.
/// Priority: `SECRETS_DATABASE_URL` env var → error.
@@ -18,3 +29,54 @@ pub fn resolve_db_url(override_url: &str) -> Result<String> {
Example: SECRETS_DATABASE_URL=postgres://user:pass@host:port/dbname"
)
}
fn env_var_non_empty(name: &str) -> Option<String> {
std::env::var(name)
.ok()
.filter(|value| !value.trim().is_empty())
}
fn parse_ssl_mode_from_env() -> Result<Option<PgSslMode>> {
let Some(mode) = env_var_non_empty("SECRETS_DATABASE_SSL_MODE") else {
return Ok(None);
};
let parsed = mode.parse::<PgSslMode>().with_context(|| {
format!(
"Invalid SECRETS_DATABASE_SSL_MODE='{mode}'. Use one of: disable, allow, prefer, require, verify-ca, verify-full."
)
})?;
Ok(Some(parsed))
}
fn resolve_ssl_root_cert_from_env() -> Result<Option<PathBuf>> {
let Some(path) = env_var_non_empty("SECRETS_DATABASE_SSL_ROOT_CERT") else {
return Ok(None);
};
let path = PathBuf::from(path);
if !path.exists() {
anyhow::bail!(
"SECRETS_DATABASE_SSL_ROOT_CERT points to a missing file: {}",
path.display()
);
}
Ok(Some(path))
}
fn is_production_env() -> bool {
matches!(
env_var_non_empty("SECRETS_ENV")
.as_deref()
.map(|value| value.to_ascii_lowercase()),
Some(value) if value == "prod" || value == "production"
)
}
pub fn resolve_db_config(override_url: &str) -> Result<DatabaseConfig> {
Ok(DatabaseConfig {
url: resolve_db_url(override_url)?,
ssl_mode: parse_ssl_mode_from_env()?,
ssl_root_cert: resolve_ssl_root_cert_from_env()?,
enforce_strict_tls: is_production_env(),
})
}


@@ -5,6 +5,8 @@ use aes_gcm::{
use anyhow::{Context, Result, bail};
use serde_json::Value;
use crate::error::AppError;
const NONCE_LEN: usize = 12;
// ─── AES-256-GCM encrypt / decrypt ───────────────────────────────────────────
@@ -38,7 +40,7 @@ pub fn decrypt(master_key: &[u8; 32], data: &[u8]) -> Result<Vec<u8>> {
let nonce = Nonce::from_slice(nonce_bytes);
cipher
.decrypt(nonce, ciphertext)
.map_err(|_| anyhow::anyhow!("decryption failed — wrong master key or corrupted data"))
.map_err(|_| AppError::DecryptionFailed.into())
}
// ─── JSON helpers ─────────────────────────────────────────────────────────────
@@ -59,7 +61,7 @@ pub fn decrypt_json(master_key: &[u8; 32], data: &[u8]) -> Result<Value> {
/// Parse a 64-char hex string (from X-Encryption-Key header) into a 32-byte key.
pub fn extract_key_from_hex(hex_str: &str) -> Result<[u8; 32]> {
let bytes = hex::decode_hex(hex_str.trim())?;
let bytes = ::hex::decode(hex_str.trim())?;
if bytes.len() != 32 {
bail!(
"X-Encryption-Key must be 64 hex chars (32 bytes), got {} bytes",
@@ -74,21 +76,14 @@ pub fn extract_key_from_hex(hex_str: &str) -> Result<[u8; 32]> {
// ─── Public hex helpers ───────────────────────────────────────────────────────
pub mod hex {
use anyhow::{Result, bail};
use anyhow::Result;
pub fn encode_hex(bytes: &[u8]) -> String {
bytes.iter().map(|b| format!("{:02x}", b)).collect()
}
pub fn decode_hex(s: &str) -> Result<Vec<u8>> {
let s = s.trim();
if !s.len().is_multiple_of(2) {
bail!("hex string has odd length");
}
(0..s.len())
.step_by(2)
.map(|i| u8::from_str_radix(&s[i..i + 2], 16).map_err(|e| anyhow::anyhow!("{}", e)))
.collect()
Ok(::hex::decode(s.trim())?)
}
}


@@ -1,16 +1,66 @@
use anyhow::Result;
use std::str::FromStr;
use anyhow::{Context, Result};
use serde_json::Value;
use sqlx::PgPool;
use sqlx::postgres::PgPoolOptions;
use sqlx::postgres::{PgConnectOptions, PgPoolOptions, PgSslMode};
pub async fn create_pool(database_url: &str) -> Result<PgPool> {
use crate::config::DatabaseConfig;
fn build_connect_options(config: &DatabaseConfig) -> Result<PgConnectOptions> {
let mut options = PgConnectOptions::from_str(&config.url)
.with_context(|| "failed to parse SECRETS_DATABASE_URL".to_string())?;
if let Some(mode) = config.ssl_mode {
options = options.ssl_mode(mode);
}
if let Some(path) = &config.ssl_root_cert {
options = options.ssl_root_cert(path);
}
if config.enforce_strict_tls
&& !matches!(
options.get_ssl_mode(),
PgSslMode::VerifyCa | PgSslMode::VerifyFull
)
{
anyhow::bail!(
"Refusing to start in production with weak PostgreSQL TLS mode. \
Set SECRETS_DATABASE_SSL_MODE=verify-ca or verify-full."
);
}
Ok(options)
}
pub async fn create_pool(config: &DatabaseConfig) -> Result<PgPool> {
tracing::debug!("connecting to database");
let connect_options = build_connect_options(config)?;
// Connection pool configuration from environment
let max_connections = std::env::var("SECRETS_DATABASE_POOL_SIZE")
.ok()
.and_then(|v| v.parse::<u32>().ok())
.unwrap_or(10);
let acquire_timeout_secs = std::env::var("SECRETS_DATABASE_ACQUIRE_TIMEOUT")
.ok()
.and_then(|v| v.parse::<u64>().ok())
.unwrap_or(5);
let pool = PgPoolOptions::new()
.max_connections(10)
.acquire_timeout(std::time::Duration::from_secs(5))
.connect(database_url)
.max_connections(max_connections)
.acquire_timeout(std::time::Duration::from_secs(acquire_timeout_secs))
.max_lifetime(std::time::Duration::from_secs(1800)) // 30 minutes
.idle_timeout(std::time::Duration::from_secs(600)) // 10 minutes
.connect_with(connect_options)
.await?;
tracing::debug!("database connection established");
tracing::debug!(
max_connections,
acquire_timeout_secs,
"database connection established"
);
Ok(pool)
}
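Putting the pieces above together, startup would plausibly look like the following (a sketch; the exact call site is in `secrets-mcp`, and passing an empty override string to fall back to `SECRETS_DATABASE_URL` is an assumption):

```rust
// Sketch of the startup sequence implied by these APIs; error handling
// and the actual wiring in secrets-mcp are simplified here.
use secrets_core::{config, db};

async fn startup_sketch() -> anyhow::Result<sqlx::PgPool> {
    let cfg = config::resolve_db_config("")?; // assumed to fall back to SECRETS_DATABASE_URL
    let pool = db::create_pool(&cfg).await?;  // enforces strict TLS when SECRETS_ENV=prod
    db::migrate(&pool).await?;                // idempotent schema setup + renames
    Ok(pool)
}
```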
@@ -22,9 +72,10 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
CREATE TABLE IF NOT EXISTS entries (
id UUID PRIMARY KEY DEFAULT uuidv7(),
user_id UUID,
namespace VARCHAR(64) NOT NULL,
kind VARCHAR(64) NOT NULL,
folder VARCHAR(128) NOT NULL DEFAULT '',
type VARCHAR(64) NOT NULL DEFAULT '',
name VARCHAR(256) NOT NULL,
notes TEXT NOT NULL DEFAULT '',
tags TEXT[] NOT NULL DEFAULT '{}',
metadata JSONB NOT NULL DEFAULT '{}',
version BIGINT NOT NULL DEFAULT 1,
@@ -34,16 +85,16 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
-- Legacy unique constraint without user_id (single-user mode)
CREATE UNIQUE INDEX IF NOT EXISTS idx_entries_unique_legacy
ON entries(namespace, kind, name)
ON entries(folder, name)
WHERE user_id IS NULL;
-- Multi-user unique constraint
CREATE UNIQUE INDEX IF NOT EXISTS idx_entries_unique_user
ON entries(user_id, namespace, kind, name)
ON entries(user_id, folder, name)
WHERE user_id IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_entries_namespace ON entries(namespace);
CREATE INDEX IF NOT EXISTS idx_entries_kind ON entries(kind);
CREATE INDEX IF NOT EXISTS idx_entries_folder ON entries(folder) WHERE folder <> '';
CREATE INDEX IF NOT EXISTS idx_entries_type ON entries(type) WHERE type <> '';
CREATE INDEX IF NOT EXISTS idx_entries_user_id ON entries(user_id) WHERE user_id IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_entries_tags ON entries USING GIN(tags);
CREATE INDEX IF NOT EXISTS idx_entries_metadata ON entries USING GIN(metadata jsonb_path_ops);
@@ -51,39 +102,53 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
-- ── secrets: one row per encrypted field ─────────────────────────────────
CREATE TABLE IF NOT EXISTS secrets (
id UUID PRIMARY KEY DEFAULT uuidv7(),
entry_id UUID NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
field_name VARCHAR(256) NOT NULL,
user_id UUID,
name VARCHAR(256) NOT NULL,
type VARCHAR(64) NOT NULL DEFAULT 'text',
encrypted BYTEA NOT NULL DEFAULT '\x',
version BIGINT NOT NULL DEFAULT 1,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
UNIQUE(entry_id, field_name)
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE INDEX IF NOT EXISTS idx_secrets_entry_id ON secrets(entry_id);
CREATE INDEX IF NOT EXISTS idx_secrets_user_id ON secrets(user_id) WHERE user_id IS NOT NULL;
CREATE UNIQUE INDEX IF NOT EXISTS idx_secrets_unique_user_name
ON secrets(user_id, name) WHERE user_id IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_secrets_name ON secrets(name);
CREATE INDEX IF NOT EXISTS idx_secrets_type ON secrets(type);
-- ── entry_secrets: N:N relation ────────────────────────────────────────────
CREATE TABLE IF NOT EXISTS entry_secrets (
entry_id UUID NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
secret_id UUID NOT NULL REFERENCES secrets(id) ON DELETE CASCADE,
sort_order INT NOT NULL DEFAULT 0,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
PRIMARY KEY(entry_id, secret_id)
);
CREATE INDEX IF NOT EXISTS idx_entry_secrets_secret_id ON entry_secrets(secret_id);
-- ── audit_log: append-only operation log ─────────────────────────────────
CREATE TABLE IF NOT EXISTS audit_log (
id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
user_id UUID,
action VARCHAR(32) NOT NULL,
namespace VARCHAR(64) NOT NULL,
kind VARCHAR(64) NOT NULL,
folder VARCHAR(128) NOT NULL DEFAULT '',
type VARCHAR(64) NOT NULL DEFAULT '',
name VARCHAR(256) NOT NULL,
detail JSONB NOT NULL DEFAULT '{}',
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE INDEX IF NOT EXISTS idx_audit_log_created ON audit_log(created_at DESC);
CREATE INDEX IF NOT EXISTS idx_audit_log_ns_kind ON audit_log(namespace, kind);
CREATE INDEX IF NOT EXISTS idx_audit_log_folder_type ON audit_log(folder, type);
CREATE INDEX IF NOT EXISTS idx_audit_log_user_id ON audit_log(user_id) WHERE user_id IS NOT NULL;
-- ── entries_history ───────────────────────────────────────────────────────
CREATE TABLE IF NOT EXISTS entries_history (
id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
entry_id UUID NOT NULL,
namespace VARCHAR(64) NOT NULL,
kind VARCHAR(64) NOT NULL,
folder VARCHAR(128) NOT NULL DEFAULT '',
type VARCHAR(64) NOT NULL DEFAULT '',
name VARCHAR(256) NOT NULL,
version BIGINT NOT NULL,
action VARCHAR(16) NOT NULL,
@@ -94,8 +159,8 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
CREATE INDEX IF NOT EXISTS idx_entries_history_entry_id
ON entries_history(entry_id, version DESC);
CREATE INDEX IF NOT EXISTS idx_entries_history_ns_kind_name
ON entries_history(namespace, kind, name, version DESC);
CREATE INDEX IF NOT EXISTS idx_entries_history_folder_type_name
ON entries_history(folder, type, name, version DESC);
-- Backfill: add user_id to entries_history for multi-tenant isolation
ALTER TABLE entries_history ADD COLUMN IF NOT EXISTS user_id UUID;
@@ -103,29 +168,25 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
ON entries_history(user_id) WHERE user_id IS NOT NULL;
ALTER TABLE entries_history DROP COLUMN IF EXISTS actor;
-- Backfill: add notes to entries if not present (fresh installs already have it)
ALTER TABLE entries ADD COLUMN IF NOT EXISTS notes TEXT NOT NULL DEFAULT '';
-- ── secrets_history: field-level snapshot ────────────────────────────────
CREATE TABLE IF NOT EXISTS secrets_history (
id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
entry_id UUID NOT NULL,
secret_id UUID NOT NULL,
entry_version BIGINT NOT NULL,
field_name VARCHAR(256) NOT NULL,
name VARCHAR(256) NOT NULL,
encrypted BYTEA NOT NULL DEFAULT '\x',
action VARCHAR(16) NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE INDEX IF NOT EXISTS idx_secrets_history_entry_id
ON secrets_history(entry_id, entry_version DESC);
CREATE INDEX IF NOT EXISTS idx_secrets_history_secret_id
ON secrets_history(secret_id);
-- Drop redundant actor column (derivable via entries_history JOIN)
ALTER TABLE secrets_history DROP COLUMN IF EXISTS actor;
-- Drop redundant actor column; user_id already identifies the business user
ALTER TABLE audit_log DROP COLUMN IF EXISTS actor;
-- ── users ─────────────────────────────────────────────────────────────────
CREATE TABLE IF NOT EXISTS users (
id UUID PRIMARY KEY DEFAULT uuidv7(),
@@ -178,6 +239,16 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
END IF;
END $$;
DO $$ BEGIN
IF NOT EXISTS (
SELECT 1 FROM pg_constraint WHERE conname = 'fk_secrets_user_id'
) THEN
ALTER TABLE secrets
ADD CONSTRAINT fk_secrets_user_id
FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE SET NULL;
END IF;
END $$;
DO $$ BEGIN
IF NOT EXISTS (
SELECT 1 FROM pg_constraint WHERE conname = 'fk_audit_log_user_id'
@@ -191,12 +262,179 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
)
.execute(pool)
.await?;
migrate_schema(pool).await?;
restore_plaintext_api_keys(pool).await?;
tracing::debug!("migrations complete");
Ok(())
}
/// Idempotent schema migration: rename namespace→folder, kind→type in existing databases.
async fn migrate_schema(pool: &PgPool) -> Result<()> {
sqlx::raw_sql(
r#"
-- ── entries: rename namespace→folder, kind→type ──────────────────────────
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'entries' AND column_name = 'namespace'
) THEN
ALTER TABLE entries RENAME COLUMN namespace TO folder;
END IF;
END $$;
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'entries' AND column_name = 'kind'
) THEN
ALTER TABLE entries RENAME COLUMN kind TO type;
END IF;
END $$;
-- ── audit_log: rename namespace→folder, kind→type ────────────────────────
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'audit_log' AND column_name = 'namespace'
) THEN
ALTER TABLE audit_log RENAME COLUMN namespace TO folder;
END IF;
END $$;
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'audit_log' AND column_name = 'kind'
) THEN
ALTER TABLE audit_log RENAME COLUMN kind TO type;
END IF;
END $$;
-- ── entries_history: rename namespace→folder, kind→type ──────────────────
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'entries_history' AND column_name = 'namespace'
) THEN
ALTER TABLE entries_history RENAME COLUMN namespace TO folder;
END IF;
END $$;
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'entries_history' AND column_name = 'kind'
) THEN
ALTER TABLE entries_history RENAME COLUMN kind TO type;
END IF;
END $$;
-- ── Set empty defaults for new folder/type columns ────────────────────────
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'entries' AND column_name = 'folder'
) THEN
UPDATE entries SET folder = '' WHERE folder IS NULL;
ALTER TABLE entries ALTER COLUMN folder SET NOT NULL;
ALTER TABLE entries ALTER COLUMN folder SET DEFAULT '';
END IF;
END $$;
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'entries' AND column_name = 'type'
) THEN
UPDATE entries SET type = '' WHERE type IS NULL;
ALTER TABLE entries ALTER COLUMN type SET NOT NULL;
ALTER TABLE entries ALTER COLUMN type SET DEFAULT '';
END IF;
END $$;
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'audit_log' AND column_name = 'folder'
) THEN
UPDATE audit_log SET folder = '' WHERE folder IS NULL;
ALTER TABLE audit_log ALTER COLUMN folder SET NOT NULL;
ALTER TABLE audit_log ALTER COLUMN folder SET DEFAULT '';
END IF;
END $$;
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'audit_log' AND column_name = 'type'
) THEN
UPDATE audit_log SET type = '' WHERE type IS NULL;
ALTER TABLE audit_log ALTER COLUMN type SET NOT NULL;
ALTER TABLE audit_log ALTER COLUMN type SET DEFAULT '';
END IF;
END $$;
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'entries_history' AND column_name = 'folder'
) THEN
UPDATE entries_history SET folder = '' WHERE folder IS NULL;
ALTER TABLE entries_history ALTER COLUMN folder SET NOT NULL;
ALTER TABLE entries_history ALTER COLUMN folder SET DEFAULT '';
END IF;
END $$;
DO $$ BEGIN
IF EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = 'entries_history' AND column_name = 'type'
) THEN
UPDATE entries_history SET type = '' WHERE type IS NULL;
ALTER TABLE entries_history ALTER COLUMN type SET NOT NULL;
ALTER TABLE entries_history ALTER COLUMN type SET DEFAULT '';
END IF;
END $$;
-- ── Rebuild unique indexes on entries: folder is now part of the key ────────
-- (user_id, folder, name) allows same name in different folders.
DROP INDEX IF EXISTS idx_entries_unique_legacy;
DROP INDEX IF EXISTS idx_entries_unique_user;
CREATE UNIQUE INDEX IF NOT EXISTS idx_entries_unique_legacy
ON entries(folder, name)
WHERE user_id IS NULL;
CREATE UNIQUE INDEX IF NOT EXISTS idx_entries_unique_user
ON entries(user_id, folder, name)
WHERE user_id IS NOT NULL;
-- ── Replace old namespace/kind indexes ────────────────────────────────────
DROP INDEX IF EXISTS idx_entries_namespace;
DROP INDEX IF EXISTS idx_entries_kind;
DROP INDEX IF EXISTS idx_audit_log_ns_kind;
DROP INDEX IF EXISTS idx_entries_history_ns_kind_name;
CREATE INDEX IF NOT EXISTS idx_entries_folder
ON entries(folder) WHERE folder <> '';
CREATE INDEX IF NOT EXISTS idx_entries_type
ON entries(type) WHERE type <> '';
CREATE INDEX IF NOT EXISTS idx_audit_log_folder_type
ON audit_log(folder, type);
CREATE INDEX IF NOT EXISTS idx_entries_history_folder_type_name
ON entries_history(folder, type, name, version DESC);
-- ── Drop legacy actor columns ─────────────────────────────────────────────
ALTER TABLE secrets_history DROP COLUMN IF EXISTS actor;
ALTER TABLE audit_log DROP COLUMN IF EXISTS actor;
"#,
)
.execute(pool)
.await?;
Ok(())
}
async fn restore_plaintext_api_keys(pool: &PgPool) -> Result<()> {
let has_users_api_key: bool = sqlx::query_scalar(
"SELECT EXISTS (
@@ -265,8 +503,8 @@ async fn restore_plaintext_api_keys(pool: &PgPool) -> Result<()> {
pub struct EntrySnapshotParams<'a> {
pub entry_id: uuid::Uuid,
pub user_id: Option<uuid::Uuid>,
pub namespace: &'a str,
pub kind: &'a str,
pub folder: &'a str,
pub entry_type: &'a str,
pub name: &'a str,
pub version: i64,
pub action: &'a str,
@@ -280,12 +518,12 @@ pub async fn snapshot_entry_history(
) -> Result<()> {
sqlx::query(
"INSERT INTO entries_history \
(entry_id, namespace, kind, name, version, action, tags, metadata, user_id) \
(entry_id, folder, type, name, version, action, tags, metadata, user_id) \
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9)",
)
.bind(p.entry_id)
.bind(p.namespace)
.bind(p.kind)
.bind(p.folder)
.bind(p.entry_type)
.bind(p.name)
.bind(p.version)
.bind(p.action)
@@ -300,10 +538,8 @@ pub async fn snapshot_entry_history(
// ── Secret field-level history snapshot ──────────────────────────────────────
pub struct SecretSnapshotParams<'a> {
pub entry_id: uuid::Uuid,
pub secret_id: uuid::Uuid,
pub entry_version: i64,
pub field_name: &'a str,
pub name: &'a str,
pub encrypted: &'a [u8],
pub action: &'a str,
}
@@ -314,13 +550,11 @@ pub async fn snapshot_secret_history(
) -> Result<()> {
sqlx::query(
"INSERT INTO secrets_history \
(entry_id, secret_id, entry_version, field_name, encrypted, action) \
VALUES ($1, $2, $3, $4, $5, $6)",
(secret_id, name, encrypted, action) \
VALUES ($1, $2, $3, $4)",
)
.bind(p.entry_id)
.bind(p.secret_id)
.bind(p.entry_version)
.bind(p.field_name)
.bind(p.name)
.bind(p.encrypted)
.bind(p.action)
.execute(&mut **tx)


@@ -0,0 +1,172 @@
use sqlx::error::DatabaseError;
/// Structured business errors for the secrets service.
///
/// These replace ad-hoc `anyhow` strings for expected failure modes,
/// allowing MCP and Web layers to map to appropriate protocol-level errors.
#[derive(Debug, thiserror::Error)]
pub enum AppError {
#[error("A secret with the name '{secret_name}' already exists for this user")]
ConflictSecretName { secret_name: String },
#[error("An entry with folder='{folder}' and name='{name}' already exists")]
ConflictEntryName { folder: String, name: String },
#[error("Entry not found")]
NotFoundEntry,
#[error("User not found")]
NotFoundUser,
#[error("Secret not found")]
NotFoundSecret,
#[error("Authentication failed")]
AuthenticationFailed,
#[error("Unauthorized: insufficient permissions")]
Unauthorized,
#[error("Validation failed: {message}")]
Validation { message: String },
#[error("Concurrent modification detected")]
ConcurrentModification,
#[error("Decryption failed — the encryption key may be incorrect")]
DecryptionFailed,
#[error("Encryption key not set — user must set passphrase first")]
EncryptionKeyNotSet,
#[error(transparent)]
Internal(#[from] anyhow::Error),
}
impl AppError {
/// Try to convert a sqlx database error into a structured `AppError`.
///
/// The caller should provide the context (which table was being written,
/// what values were being inserted) so we can produce a meaningful error.
pub fn from_db_error(err: sqlx::Error, ctx: DbErrorContext<'_>) -> Self {
if let sqlx::Error::Database(ref db_err) = err
&& db_err.code().as_deref() == Some("23505")
{
return Self::from_unique_violation(db_err.as_ref(), ctx);
}
AppError::Internal(err.into())
}
fn from_unique_violation(db_err: &dyn DatabaseError, ctx: DbErrorContext<'_>) -> Self {
let constraint = db_err.constraint();
match constraint {
Some("idx_secrets_unique_user_name") => AppError::ConflictSecretName {
secret_name: ctx.secret_name.unwrap_or("unknown").to_string(),
},
Some("idx_entries_unique_user") | Some("idx_entries_unique_legacy") => {
AppError::ConflictEntryName {
folder: ctx.folder.unwrap_or("").to_string(),
name: ctx.name.unwrap_or("unknown").to_string(),
}
}
_ => {
// Fall back to message-based detection for unnamed constraints
let msg = db_err.message();
if msg.contains("secrets") {
AppError::ConflictSecretName {
secret_name: ctx.secret_name.unwrap_or("unknown").to_string(),
}
} else {
AppError::ConflictEntryName {
folder: ctx.folder.unwrap_or("").to_string(),
name: ctx.name.unwrap_or("unknown").to_string(),
}
}
}
}
}
}
/// Context hints used when converting a database error to `AppError`.
#[derive(Debug, Default, Clone, Copy)]
pub struct DbErrorContext<'a> {
pub secret_name: Option<&'a str>,
pub folder: Option<&'a str>,
pub name: Option<&'a str>,
}
impl<'a> DbErrorContext<'a> {
pub fn secret_name(name: &'a str) -> Self {
Self {
secret_name: Some(name),
..Default::default()
}
}
pub fn entry(folder: &'a str, name: &'a str) -> Self {
Self {
folder: Some(folder),
name: Some(name),
..Default::default()
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn app_error_display_messages() {
let err = AppError::ConflictSecretName {
secret_name: "token".to_string(),
};
assert!(err.to_string().contains("token"));
let err = AppError::ConflictEntryName {
folder: "refining".to_string(),
name: "gitea".to_string(),
};
assert!(err.to_string().contains("refining"));
assert!(err.to_string().contains("gitea"));
let err = AppError::NotFoundEntry;
assert_eq!(err.to_string(), "Entry not found");
let err = AppError::NotFoundUser;
assert_eq!(err.to_string(), "User not found");
let err = AppError::NotFoundSecret;
assert_eq!(err.to_string(), "Secret not found");
let err = AppError::AuthenticationFailed;
assert_eq!(err.to_string(), "Authentication failed");
let err = AppError::Unauthorized;
assert!(err.to_string().contains("Unauthorized"));
let err = AppError::Validation {
message: "too long".to_string(),
};
assert!(err.to_string().contains("too long"));
let err = AppError::ConcurrentModification;
assert!(err.to_string().contains("Concurrent modification"));
let err = AppError::EncryptionKeyNotSet;
assert!(err.to_string().contains("Encryption key not set"));
}
#[test]
fn db_error_context_helpers() {
let ctx = DbErrorContext::secret_name("my_key");
assert_eq!(ctx.secret_name, Some("my_key"));
assert!(ctx.folder.is_none());
let ctx = DbErrorContext::entry("prod", "db-creds");
assert_eq!(ctx.folder, Some("prod"));
assert_eq!(ctx.name, Some("db-creds"));
assert!(ctx.secret_name.is_none());
}
}
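At a write site, a database error can then be mapped through this module roughly as follows (illustrative; the actual call sites live in the service layer):

```rust
// Illustrative use: convert a unique-violation from an entry insert into the
// structured conflict error, using the context hints defined above.
use secrets_core::error::{AppError, DbErrorContext};

fn map_insert_error(err: sqlx::Error, folder: &str, name: &str) -> AppError {
    AppError::from_db_error(err, DbErrorContext::entry(folder, name))
}
```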


@@ -2,5 +2,7 @@ pub mod audit;
pub mod config;
pub mod crypto;
pub mod db;
pub mod error;
pub mod models;
pub mod service;
pub mod taxonomy;


@@ -4,15 +4,18 @@ use serde_json::Value;
use std::collections::BTreeMap;
use uuid::Uuid;
/// A top-level entry (server, service, key, …).
/// A top-level entry (server, service, account, person, …).
/// Sensitive fields are stored separately in `secrets`.
#[derive(Debug, Serialize, Deserialize, sqlx::FromRow)]
pub struct Entry {
pub id: Uuid,
pub user_id: Option<Uuid>,
pub namespace: String,
pub kind: String,
pub folder: String,
#[serde(rename = "type")]
#[sqlx(rename = "type")]
pub entry_type: String,
pub name: String,
pub notes: String,
pub tags: Vec<String>,
pub metadata: Value,
pub version: i64,
@@ -24,8 +27,11 @@ pub struct Entry {
#[derive(Debug, Serialize, Deserialize, sqlx::FromRow)]
pub struct SecretField {
pub id: Uuid,
pub entry_id: Uuid,
pub field_name: String,
pub user_id: Option<Uuid>,
pub name: String,
#[serde(rename = "type")]
#[sqlx(rename = "type")]
pub secret_type: String,
/// AES-256-GCM ciphertext: nonce(12B) || ciphertext+tag
pub encrypted: Vec<u8>,
pub version: i64,
@@ -40,15 +46,47 @@ pub struct SecretField {
pub struct EntryRow {
pub id: Uuid,
pub version: i64,
pub folder: String,
#[sqlx(rename = "type")]
pub entry_type: String,
pub tags: Vec<String>,
pub metadata: Value,
pub notes: String,
}
/// Entry row including `name` (used for id-scoped web / service updates).
#[derive(Debug, sqlx::FromRow)]
pub struct EntryWriteRow {
pub id: Uuid,
pub version: i64,
pub folder: String,
#[sqlx(rename = "type")]
pub entry_type: String,
pub name: String,
pub tags: Vec<String>,
pub metadata: Value,
pub notes: String,
}
impl From<&EntryWriteRow> for EntryRow {
fn from(r: &EntryWriteRow) -> Self {
EntryRow {
id: r.id,
version: r.version,
folder: r.folder.clone(),
entry_type: r.entry_type.clone(),
tags: r.tags.clone(),
metadata: r.metadata.clone(),
notes: r.notes.clone(),
}
}
}
/// Minimal secret field row fetched before snapshots or cascade deletes.
#[derive(Debug, sqlx::FromRow)]
pub struct SecretFieldRow {
pub id: Uuid,
pub field_name: String,
pub name: String,
pub encrypted: Vec<u8>,
}
@@ -128,10 +166,14 @@ pub struct ExportData {
/// A single entry with decrypted secrets for export/import.
#[derive(Debug, Serialize, Deserialize)]
pub struct ExportEntry {
pub namespace: String,
pub kind: String,
pub name: String,
#[serde(default)]
pub folder: String,
#[serde(default, rename = "type")]
pub entry_type: String,
#[serde(default)]
pub notes: String,
#[serde(default)]
pub tags: Vec<String>,
#[serde(default)]
pub metadata: Value,
@@ -181,8 +223,10 @@ pub struct AuditLogEntry {
pub id: i64,
pub user_id: Option<Uuid>,
pub action: String,
pub namespace: String,
pub kind: String,
pub folder: String,
#[serde(rename = "type")]
#[sqlx(rename = "type")]
pub entry_type: String,
pub name: String,
pub detail: Value,
pub created_at: DateTime<Utc>,

View File

@@ -1,11 +1,13 @@
use anyhow::Result;
use serde_json::{Map, Value};
use sqlx::PgPool;
use std::collections::{BTreeSet, HashSet};
use std::fs;
use uuid::Uuid;
use crate::crypto;
use crate::db;
use crate::error::{AppError, DbErrorContext};
use crate::models::EntryRow;
// ── Key/value parsing helpers ─────────────────────────────────────────────────
@@ -159,52 +161,63 @@ pub fn flatten_json_fields(prefix: &str, value: &Value) -> Vec<(String, Value)>
#[derive(Debug, serde::Serialize)]
pub struct AddResult {
pub namespace: String,
pub kind: String,
pub name: String,
pub folder: String,
#[serde(rename = "type")]
pub entry_type: String,
pub tags: Vec<String>,
pub meta_keys: Vec<String>,
pub secret_keys: Vec<String>,
}
pub struct AddParams<'a> {
pub namespace: &'a str,
pub kind: &'a str,
pub name: &'a str,
pub folder: &'a str,
pub entry_type: &'a str,
pub notes: &'a str,
pub tags: &'a [String],
pub meta_entries: &'a [String],
pub secret_entries: &'a [String],
pub secret_types: &'a std::collections::HashMap<String, String>,
pub link_secret_names: &'a [String],
/// Optional user_id for multi-user isolation (None = single-user CLI mode)
pub user_id: Option<Uuid>,
}
pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) -> Result<AddResult> {
let metadata = build_json(params.meta_entries)?;
let Value::Object(metadata_map) = build_json(params.meta_entries)? else {
unreachable!("build_json always returns a JSON object");
};
let entry_type = params.entry_type.trim();
let metadata = Value::Object(metadata_map);
let secret_json = build_json(params.secret_entries)?;
let meta_keys = collect_key_paths(params.meta_entries)?;
let secret_keys = collect_key_paths(params.secret_entries)?;
let flat_fields = flatten_json_fields("", &secret_json);
let new_secret_names: BTreeSet<String> =
flat_fields.iter().map(|(name, _)| name.clone()).collect();
let link_secret_names =
validate_link_secret_names(params.link_secret_names, &new_secret_names)?;
let mut tx = pool.begin().await?;
// Fetch existing entry (user-scoped or global depending on user_id)
// Fetch existing entry by (user_id, folder, name) — the natural unique key
let existing: Option<EntryRow> = if let Some(uid) = params.user_id {
sqlx::query_as(
"SELECT id, version, tags, metadata FROM entries \
WHERE user_id = $1 AND namespace = $2 AND kind = $3 AND name = $4",
"SELECT id, version, folder, type, tags, metadata, notes FROM entries \
WHERE user_id = $1 AND folder = $2 AND name = $3",
)
.bind(uid)
.bind(params.namespace)
.bind(params.kind)
.bind(params.folder)
.bind(params.name)
.fetch_optional(&mut *tx)
.await?
} else {
sqlx::query_as(
"SELECT id, version, tags, metadata FROM entries \
WHERE user_id IS NULL AND namespace = $1 AND kind = $2 AND name = $3",
"SELECT id, version, folder, type, tags, metadata, notes FROM entries \
WHERE user_id IS NULL AND folder = $1 AND name = $2",
)
.bind(params.namespace)
.bind(params.kind)
.bind(params.folder)
.bind(params.name)
.fetch_optional(&mut *tx)
.await?
@@ -216,8 +229,8 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
db::EntrySnapshotParams {
entry_id: ex.id,
user_id: params.user_id,
namespace: params.namespace,
kind: params.kind,
folder: params.folder,
entry_type,
name: params.name,
version: ex.version,
action: "add",
@@ -230,12 +243,20 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
tracing::warn!(error = %e, "failed to snapshot entry history before upsert");
}
// Upsert the entry row. On conflict (existing entry with same user_id+folder+name),
// the entry columns are replaced wholesale. The old secret associations are torn down
// below within the same transaction, so the whole operation is atomic: if any step
// after this point fails, the transaction rolls back and the entry reverts to its
    // pre-upsert state (undoing the version bump applied by the DO UPDATE clause).
let entry_id: Uuid = if let Some(uid) = params.user_id {
sqlx::query_scalar(
r#"INSERT INTO entries (user_id, namespace, kind, name, tags, metadata, version, updated_at)
VALUES ($1, $2, $3, $4, $5, $6, 1, NOW())
ON CONFLICT (user_id, namespace, kind, name) WHERE user_id IS NOT NULL
r#"INSERT INTO entries (user_id, folder, type, name, notes, tags, metadata, version, updated_at)
VALUES ($1, $2, $3, $4, $5, $6, $7, 1, NOW())
ON CONFLICT (user_id, folder, name) WHERE user_id IS NOT NULL
DO UPDATE SET
folder = EXCLUDED.folder,
type = EXCLUDED.type,
notes = EXCLUDED.notes,
tags = EXCLUDED.tags,
metadata = EXCLUDED.metadata,
version = entries.version + 1,
@@ -243,35 +264,41 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
RETURNING id"#,
)
.bind(uid)
.bind(params.namespace)
.bind(params.kind)
.bind(params.folder)
.bind(entry_type)
.bind(params.name)
.bind(params.notes)
.bind(params.tags)
.bind(&metadata)
.fetch_one(&mut *tx)
.await?
} else {
sqlx::query_scalar(
r#"INSERT INTO entries (namespace, kind, name, tags, metadata, version, updated_at)
VALUES ($1, $2, $3, $4, $5, 1, NOW())
ON CONFLICT (namespace, kind, name) WHERE user_id IS NULL
r#"INSERT INTO entries (folder, type, name, notes, tags, metadata, version, updated_at)
VALUES ($1, $2, $3, $4, $5, $6, 1, NOW())
ON CONFLICT (folder, name) WHERE user_id IS NULL
DO UPDATE SET
folder = EXCLUDED.folder,
type = EXCLUDED.type,
notes = EXCLUDED.notes,
tags = EXCLUDED.tags,
metadata = EXCLUDED.metadata,
version = entries.version + 1,
updated_at = NOW()
RETURNING id"#,
)
.bind(params.namespace)
.bind(params.kind)
.bind(params.folder)
.bind(entry_type)
.bind(params.name)
.bind(params.notes)
.bind(params.tags)
.bind(&metadata)
.fetch_one(&mut *tx)
.await?
};
let new_entry_version: i64 = sqlx::query_scalar("SELECT version FROM entries WHERE id = $1")
let current_entry_version: i64 =
sqlx::query_scalar("SELECT version FROM entries WHERE id = $1")
.bind(entry_id)
.fetch_one(&mut *tx)
.await?;
@@ -282,10 +309,10 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
db::EntrySnapshotParams {
entry_id,
user_id: params.user_id,
namespace: params.namespace,
kind: params.kind,
folder: params.folder,
entry_type,
name: params.name,
version: new_entry_version,
version: current_entry_version,
action: "create",
tags: params.tags,
metadata: &metadata,
@@ -300,11 +327,15 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
#[derive(sqlx::FromRow)]
struct ExistingField {
id: Uuid,
field_name: String,
name: String,
encrypted: Vec<u8>,
}
let existing_fields: Vec<ExistingField> =
sqlx::query_as("SELECT id, field_name, encrypted FROM secrets WHERE entry_id = $1")
let existing_fields: Vec<ExistingField> = sqlx::query_as(
"SELECT s.id, s.name, s.encrypted \
FROM entry_secrets es \
JOIN secrets s ON s.id = es.secret_id \
WHERE es.entry_id = $1",
)
.bind(entry_id)
.fetch_all(&mut *tx)
.await?;
@@ -313,10 +344,8 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
if let Err(e) = db::snapshot_secret_history(
&mut tx,
db::SecretSnapshotParams {
entry_id,
secret_id: f.id,
entry_version: new_entry_version - 1,
field_name: &f.field_name,
name: &f.name,
encrypted: &f.encrypted,
action: "add",
},
@@ -327,29 +356,88 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
}
}
sqlx::query("DELETE FROM secrets WHERE entry_id = $1")
let orphan_candidates: Vec<Uuid> = existing_fields.iter().map(|f| f.id).collect();
sqlx::query("DELETE FROM entry_secrets WHERE entry_id = $1")
.bind(entry_id)
.execute(&mut *tx)
.await?;
if !orphan_candidates.is_empty() {
sqlx::query(
"DELETE FROM secrets s \
WHERE s.id = ANY($1) \
AND NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = s.id)",
)
.bind(&orphan_candidates)
.execute(&mut *tx)
.await?;
}
}
for (field_name, field_value) in &flat_fields {
let encrypted = crypto::encrypt_json(master_key, field_value)?;
let secret_type = params
.secret_types
.get(field_name)
.map(|s| s.as_str())
.unwrap_or("text");
let secret_id: Uuid = sqlx::query_scalar(
"INSERT INTO secrets (user_id, name, type, encrypted) VALUES ($1, $2, $3, $4) RETURNING id",
)
.bind(params.user_id)
.bind(field_name)
.bind(secret_type)
.bind(&encrypted)
.fetch_one(&mut *tx)
.await
.map_err(|e| AppError::from_db_error(e, DbErrorContext::secret_name(field_name)))?;
sqlx::query("INSERT INTO entry_secrets (entry_id, secret_id) VALUES ($1, $2)")
.bind(entry_id)
.bind(secret_id)
.execute(&mut *tx)
.await?;
}
let flat_fields = flatten_json_fields("", &secret_json);
for (field_name, field_value) in &flat_fields {
let encrypted = crypto::encrypt_json(master_key, field_value)?;
sqlx::query("INSERT INTO secrets (entry_id, field_name, encrypted) VALUES ($1, $2, $3)")
for link_name in &link_secret_names {
let secret_ids: Vec<Uuid> = if let Some(uid) = params.user_id {
sqlx::query_scalar("SELECT id FROM secrets WHERE user_id = $1 AND name = $2")
.bind(uid)
.bind(link_name)
.fetch_all(&mut *tx)
.await?
} else {
sqlx::query_scalar("SELECT id FROM secrets WHERE user_id IS NULL AND name = $1")
.bind(link_name)
.fetch_all(&mut *tx)
.await?
};
match secret_ids.len() {
0 => anyhow::bail!("Not found: secret named '{}'", link_name),
1 => {
sqlx::query(
"INSERT INTO entry_secrets (entry_id, secret_id) VALUES ($1, $2) ON CONFLICT DO NOTHING",
)
.bind(entry_id)
.bind(field_name)
.bind(&encrypted)
.bind(secret_ids[0])
.execute(&mut *tx)
.await?;
}
n => anyhow::bail!(
"Ambiguous: {} secrets named '{}' found. Please deduplicate names first.",
n,
link_name
),
}
}
crate::audit::log_tx(
&mut tx,
params.user_id,
"add",
params.namespace,
params.kind,
params.folder,
entry_type,
params.name,
serde_json::json!({
"tags": params.tags,
@@ -362,18 +450,46 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
tx.commit().await?;
Ok(AddResult {
namespace: params.namespace.to_string(),
kind: params.kind.to_string(),
name: params.name.to_string(),
folder: params.folder.to_string(),
entry_type: entry_type.to_string(),
tags: params.tags.to_vec(),
meta_keys,
secret_keys,
})
}
fn validate_link_secret_names(
link_secret_names: &[String],
new_secret_names: &BTreeSet<String>,
) -> Result<Vec<String>> {
let mut deduped = Vec::new();
let mut seen = HashSet::new();
for raw in link_secret_names {
let trimmed = raw.trim();
if trimmed.is_empty() {
anyhow::bail!("link_secret_names contains an empty name");
}
if new_secret_names.contains(trimmed) {
anyhow::bail!(
"Conflict: secret '{}' is provided both in secrets/secrets_obj and link_secret_names",
trimmed
);
}
if seen.insert(trimmed.to_string()) {
deduped.push(trimmed.to_string());
}
}
Ok(deduped)
}
#[cfg(test)]
mod tests {
use super::*;
use sqlx::PgPool;
use std::collections::BTreeSet;
#[test]
fn parse_nested_file_shorthand() {
@@ -402,4 +518,267 @@ mod tests {
assert_eq!(fields[1].0, "credentials.type");
assert_eq!(fields[2].0, "username");
}
#[test]
fn validate_link_secret_names_conflict_with_new_secret() {
let mut new_names = BTreeSet::new();
new_names.insert("password".to_string());
let err = validate_link_secret_names(&[String::from("password")], &new_names)
.expect_err("must fail on overlap");
assert!(
err.to_string()
.contains("provided both in secrets/secrets_obj and link_secret_names")
);
}
#[test]
fn validate_link_secret_names_dedup_and_trim() {
let names = vec![
" shared_key ".to_string(),
"shared_key".to_string(),
"runner_token".to_string(),
];
let deduped = validate_link_secret_names(&names, &BTreeSet::new()).unwrap();
assert_eq!(deduped, vec!["shared_key", "runner_token"]);
}
async fn maybe_test_pool() -> Option<PgPool> {
let Ok(url) = std::env::var("SECRETS_DATABASE_URL") else {
eprintln!("skip add linkage tests: SECRETS_DATABASE_URL is not set");
return None;
};
let Ok(pool) = PgPool::connect(&url).await else {
eprintln!("skip add linkage tests: cannot connect to database");
return None;
};
if let Err(e) = crate::db::migrate(&pool).await {
eprintln!("skip add linkage tests: migrate failed: {e}");
return None;
}
Some(pool)
}
async fn cleanup_test_rows(pool: &PgPool, marker: &str) -> Result<()> {
sqlx::query(
"DELETE FROM entries WHERE user_id IS NULL AND (name LIKE $1 OR folder LIKE $1)",
)
.bind(format!("%{marker}%"))
.execute(pool)
.await?;
sqlx::query(
"DELETE FROM secrets WHERE user_id IS NULL AND name LIKE $1 \
AND NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = secrets.id)",
)
.bind(format!("%{marker}%"))
.execute(pool)
.await?;
Ok(())
}
#[tokio::test]
async fn add_links_existing_secret_by_unique_name() -> Result<()> {
let Some(pool) = maybe_test_pool().await else {
return Ok(());
};
let suffix = Uuid::from_u128(rand::random()).to_string();
let marker = format!("link_unique_{}", &suffix[..8]);
let secret_name = format!("{}_secret", marker);
let entry_name = format!("{}_entry", marker);
cleanup_test_rows(&pool, &marker).await?;
let secret_id: Uuid = sqlx::query_scalar(
"INSERT INTO secrets (user_id, name, type, encrypted) VALUES (NULL, $1, 'text', $2) RETURNING id",
)
.bind(&secret_name)
.bind(vec![1_u8, 2, 3])
.fetch_one(&pool)
.await?;
run(
&pool,
AddParams {
name: &entry_name,
folder: &marker,
entry_type: "service",
notes: "",
tags: &[],
meta_entries: &[],
secret_entries: &[],
secret_types: &Default::default(),
link_secret_names: std::slice::from_ref(&secret_name),
user_id: None,
},
&[0_u8; 32],
)
.await?;
let linked: bool = sqlx::query_scalar(
"SELECT EXISTS( \
SELECT 1 FROM entry_secrets es \
JOIN entries e ON e.id = es.entry_id \
WHERE e.user_id IS NULL AND e.name = $1 AND es.secret_id = $2 \
)",
)
.bind(&entry_name)
.bind(secret_id)
.fetch_one(&pool)
.await?;
assert!(linked);
cleanup_test_rows(&pool, &marker).await?;
Ok(())
}
#[tokio::test]
async fn add_link_secret_name_not_found_fails() -> Result<()> {
let Some(pool) = maybe_test_pool().await else {
return Ok(());
};
let suffix = Uuid::from_u128(rand::random()).to_string();
let marker = format!("link_missing_{}", &suffix[..8]);
let secret_name = format!("{}_secret", marker);
let entry_name = format!("{}_entry", marker);
cleanup_test_rows(&pool, &marker).await?;
let err = run(
&pool,
AddParams {
name: &entry_name,
folder: &marker,
entry_type: "service",
notes: "",
tags: &[],
meta_entries: &[],
secret_entries: &[],
secret_types: &Default::default(),
link_secret_names: std::slice::from_ref(&secret_name),
user_id: None,
},
&[0_u8; 32],
)
.await
.expect_err("must fail when linked secret is not found");
assert!(err.to_string().contains("Not found: secret named"));
cleanup_test_rows(&pool, &marker).await?;
Ok(())
}
#[tokio::test]
async fn add_link_secret_name_ambiguous_fails() -> Result<()> {
let Some(pool) = maybe_test_pool().await else {
return Ok(());
};
let suffix = Uuid::from_u128(rand::random()).to_string();
let marker = format!("link_amb_{}", &suffix[..8]);
let secret_name = format!("{}_dup_secret", marker);
let entry_name = format!("{}_entry", marker);
cleanup_test_rows(&pool, &marker).await?;
sqlx::query(
"INSERT INTO secrets (user_id, name, type, encrypted) VALUES (NULL, $1, 'text', $2)",
)
.bind(&secret_name)
.bind(vec![1_u8])
.execute(&pool)
.await?;
sqlx::query(
"INSERT INTO secrets (user_id, name, type, encrypted) VALUES (NULL, $1, 'text', $2)",
)
.bind(&secret_name)
.bind(vec![2_u8])
.execute(&pool)
.await?;
let err = run(
&pool,
AddParams {
name: &entry_name,
folder: &marker,
entry_type: "service",
notes: "",
tags: &[],
meta_entries: &[],
secret_entries: &[],
secret_types: &Default::default(),
link_secret_names: std::slice::from_ref(&secret_name),
user_id: None,
},
&[0_u8; 32],
)
.await
.expect_err("must fail on ambiguous linked secret name");
assert!(err.to_string().contains("Ambiguous:"));
cleanup_test_rows(&pool, &marker).await?;
Ok(())
}
#[tokio::test]
async fn add_duplicate_secret_name_returns_conflict_error() -> Result<()> {
let Some(pool) = maybe_test_pool().await else {
return Ok(());
};
let suffix = Uuid::from_u128(rand::random()).to_string();
let marker = format!("dup_secret_{}", &suffix[..8]);
let entry_name = format!("{}_entry", marker);
let secret_name = "shared_token";
cleanup_test_rows(&pool, &marker).await?;
// First add succeeds
run(
&pool,
AddParams {
name: &entry_name,
folder: &marker,
entry_type: "service",
notes: "",
tags: &[],
meta_entries: &[],
secret_entries: &[format!("{}=value1", secret_name)],
secret_types: &Default::default(),
link_secret_names: &[],
user_id: None,
},
&[0_u8; 32],
)
.await?;
// Second add with same secret name under same user_id should fail with ConflictSecretName
let entry_name2 = format!("{}_entry2", marker);
let err = run(
&pool,
AddParams {
name: &entry_name2,
folder: &marker,
entry_type: "service",
notes: "",
tags: &[],
meta_entries: &[],
secret_entries: &[format!("{}=value2", secret_name)],
secret_types: &Default::default(),
link_secret_names: &[],
user_id: None,
},
&[0_u8; 32],
)
.await
.expect_err("must fail on duplicate secret name");
let app_err = err
.downcast_ref::<crate::error::AppError>()
.expect("error should be AppError");
assert!(
matches!(app_err, crate::error::AppError::ConflictSecretName { .. }),
"expected ConflictSecretName, got: {}",
app_err
);
cleanup_test_rows(&pool, &marker).await?;
Ok(())
}
}

View File

@@ -2,6 +2,8 @@ use anyhow::Result;
use sqlx::PgPool;
use uuid::Uuid;
use crate::error::AppError;
const KEY_PREFIX: &str = "sk_";
/// Generate a new API key: `sk_<64 hex chars>` = 67 characters total.
@@ -14,23 +16,32 @@ pub fn generate_api_key() -> String {
}
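
The body of `generate_api_key` falls outside this hunk; a plausible sketch that matches the documented format, assuming the `rand` and `hex` crates:

use rand::RngCore;

fn generate_api_key_sketch() -> String {
    let mut bytes = [0u8; 32]; // 32 random bytes hex-encode to 64 chars
    rand::rngs::OsRng.fill_bytes(&mut bytes); // OS CSPRNG for credential material
    format!("{}{}", KEY_PREFIX, hex::encode(bytes)) // "sk_" (3) + 64 = 67 chars
}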
/// Return the user's existing API key, or generate and store a new one if NULL.
/// Locks the row with `SELECT ... FOR UPDATE` inside a transaction so two
/// concurrent calls cannot both generate a key (prevents the TOCTOU race).
pub async fn ensure_api_key(pool: &PgPool, user_id: Uuid) -> Result<String> {
let existing: Option<(Option<String>,)> =
sqlx::query_as("SELECT api_key FROM users WHERE id = $1")
.bind(user_id)
.fetch_optional(pool)
.await?;
let mut tx = pool.begin().await?;
if let Some((Some(key),)) = existing {
// Lock the row and check existing key
let existing: (Option<String>,) =
sqlx::query_as("SELECT api_key FROM users WHERE id = $1 FOR UPDATE")
.bind(user_id)
.fetch_optional(&mut *tx)
.await?
.ok_or(AppError::NotFoundUser)?;
if let Some(key) = existing.0 {
tx.commit().await?;
return Ok(key);
}
// Generate and store new key atomically
let new_key = generate_api_key();
sqlx::query("UPDATE users SET api_key = $1 WHERE id = $2")
.bind(&new_key)
.bind(user_id)
.execute(pool)
.execute(&mut *tx)
.await?;
tx.commit().await?;
Ok(new_key)
}

View File

@@ -4,20 +4,36 @@ use uuid::Uuid;
use crate::models::AuditLogEntry;
pub async fn list_for_user(pool: &PgPool, user_id: Uuid, limit: i64) -> Result<Vec<AuditLogEntry>> {
pub async fn list_for_user(
pool: &PgPool,
user_id: Uuid,
limit: i64,
offset: i64,
) -> Result<Vec<AuditLogEntry>> {
let limit = limit.clamp(1, 200);
let offset = offset.max(0);
let rows = sqlx::query_as(
"SELECT id, user_id, action, namespace, kind, name, detail, created_at \
"SELECT id, user_id, action, folder, type, name, detail, created_at \
FROM audit_log \
WHERE user_id = $1 \
ORDER BY created_at DESC, id DESC \
LIMIT $2",
LIMIT $2 OFFSET $3",
)
.bind(user_id)
.bind(limit)
.bind(offset)
.fetch_all(pool)
.await?;
Ok(rows)
}
pub async fn count_for_user(pool: &PgPool, user_id: Uuid) -> Result<i64> {
let count: i64 =
sqlx::query_scalar("SELECT COUNT(*)::bigint FROM audit_log WHERE user_id = $1")
.bind(user_id)
.fetch_one(pool)
.await?;
Ok(count)
}
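
Together the two functions support plain offset pagination; a usage sketch (the `page` and `page_size` locals are invented for illustration):

let page: i64 = 1; // 1-based page index from the UI
let page_size: i64 = 50; // stays within list_for_user's 1..=200 clamp
let total = count_for_user(&pool, user_id).await?;
let page_count = (total + page_size - 1) / page_size; // ceiling division
let rows = list_for_user(&pool, user_id, page_size, (page - 1) * page_size).await?;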

View File

@@ -4,13 +4,14 @@ use sqlx::PgPool;
use uuid::Uuid;
use crate::db;
use crate::models::{EntryRow, SecretFieldRow};
use crate::models::{EntryRow, EntryWriteRow, SecretFieldRow};
#[derive(Debug, serde::Serialize)]
pub struct DeletedEntry {
pub namespace: String,
pub kind: String,
pub name: String,
pub folder: String,
#[serde(rename = "type")]
pub entry_type: String,
}
#[derive(Debug, serde::Serialize)]
@@ -20,34 +21,89 @@ pub struct DeleteResult {
}
pub struct DeleteParams<'a> {
pub namespace: &'a str,
pub kind: Option<&'a str>,
/// If set, delete a single entry by name.
pub name: Option<&'a str>,
    /// Folder filter for bulk delete; also disambiguates a by-name delete when names collide.
pub folder: Option<&'a str>,
/// Type filter for bulk delete.
pub entry_type: Option<&'a str>,
pub dry_run: bool,
pub user_id: Option<Uuid>,
}
/// Maximum number of entries that can be deleted in a single bulk operation.
/// Prevents accidental mass deletion when filters are too broad.
pub const MAX_BULK_DELETE: usize = 1000;
/// Delete a single entry by id (multi-tenant: `user_id` must match).
pub async fn delete_by_id(pool: &PgPool, entry_id: Uuid, user_id: Uuid) -> Result<DeleteResult> {
let mut tx = pool.begin().await?;
let row: Option<EntryWriteRow> = sqlx::query_as(
"SELECT id, version, folder, type, name, tags, metadata, notes FROM entries \
WHERE id = $1 AND user_id = $2 FOR UPDATE",
)
.bind(entry_id)
.bind(user_id)
.fetch_optional(&mut *tx)
.await?;
let row = match row {
Some(r) => r,
None => {
tx.rollback().await?;
anyhow::bail!("Entry not found");
}
};
let folder = row.folder.clone();
let entry_type = row.entry_type.clone();
let name = row.name.clone();
let entry_row: EntryRow = (&row).into();
snapshot_and_delete(
&mut tx,
&folder,
&entry_type,
&name,
&entry_row,
Some(user_id),
)
.await?;
crate::audit::log_tx(
&mut tx,
Some(user_id),
"delete",
&folder,
&entry_type,
&name,
json!({ "source": "web", "entry_id": entry_id }),
)
.await;
tx.commit().await?;
Ok(DeleteResult {
deleted: vec![DeletedEntry {
name,
folder,
entry_type,
}],
dry_run: false,
})
}
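
A sketch of the intended preview-then-delete flow through `run` below (the folder value and threshold are illustrative):

let preview = run(&pool, DeleteParams {
    name: None,
    folder: Some("staging"),
    entry_type: None,
    dry_run: true,
    user_id: None,
}).await?;
if preview.deleted.len() <= 10 {
    // Same filters with dry_run off; still bounded server-side by MAX_BULK_DELETE.
    run(&pool, DeleteParams {
        name: None,
        folder: Some("staging"),
        entry_type: None,
        dry_run: false,
        user_id: None,
    }).await?;
}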
pub async fn run(pool: &PgPool, params: DeleteParams<'_>) -> Result<DeleteResult> {
match params.name {
Some(name) => {
let kind = params
.kind
.ok_or_else(|| anyhow::anyhow!("--kind is required when --name is specified"))?;
delete_one(
pool,
params.namespace,
kind,
name,
params.dry_run,
params.user_id,
)
.await
}
Some(name) => delete_one(pool, name, params.folder, params.dry_run, params.user_id).await,
None => {
if params.folder.is_none() && params.entry_type.is_none() {
anyhow::bail!(
"Bulk delete requires at least one of: name, folder, or type filter."
);
}
delete_bulk(
pool,
params.namespace,
params.kind,
params.folder,
params.entry_type,
params.dry_run,
params.user_id,
)
@@ -58,93 +114,175 @@ pub async fn run(pool: &PgPool, params: DeleteParams<'_>) -> Result<DeleteResult
async fn delete_one(
pool: &PgPool,
namespace: &str,
kind: &str,
name: &str,
folder: Option<&str>,
dry_run: bool,
user_id: Option<Uuid>,
) -> Result<DeleteResult> {
if dry_run {
let exists: bool = if let Some(uid) = user_id {
sqlx::query_scalar(
"SELECT EXISTS(SELECT 1 FROM entries \
WHERE user_id = $1 AND namespace = $2 AND kind = $3 AND name = $4)",
// Dry-run uses the same disambiguation logic as actual delete:
// - 0 matches → nothing to delete
// - 1 match → show what would be deleted (with correct folder/type)
// - 2+ matches → disambiguation error (same as non-dry-run)
#[derive(sqlx::FromRow)]
struct DryRunRow {
#[allow(dead_code)]
id: Uuid,
folder: String,
#[sqlx(rename = "type")]
entry_type: String,
}
let rows: Vec<DryRunRow> = if let Some(uid) = user_id {
if let Some(f) = folder {
sqlx::query_as(
"SELECT id, folder, type FROM entries WHERE user_id = $1 AND folder = $2 AND name = $3",
)
.bind(uid)
.bind(namespace)
.bind(kind)
.bind(f)
.bind(name)
.fetch_one(pool)
.fetch_all(pool)
.await?
} else {
sqlx::query_scalar(
"SELECT EXISTS(SELECT 1 FROM entries \
WHERE user_id IS NULL AND namespace = $1 AND kind = $2 AND name = $3)",
sqlx::query_as(
"SELECT id, folder, type FROM entries WHERE user_id = $1 AND name = $2",
)
.bind(namespace)
.bind(kind)
.bind(uid)
.bind(name)
.fetch_one(pool)
.fetch_all(pool)
.await?
}
} else if let Some(f) = folder {
sqlx::query_as(
"SELECT id, folder, type FROM entries WHERE user_id IS NULL AND folder = $1 AND name = $2",
)
.bind(f)
.bind(name)
.fetch_all(pool)
.await?
} else {
sqlx::query_as(
"SELECT id, folder, type FROM entries WHERE user_id IS NULL AND name = $1",
)
.bind(name)
.fetch_all(pool)
.await?
};
let deleted = if exists {
vec![DeletedEntry {
namespace: namespace.to_string(),
kind: kind.to_string(),
name: name.to_string(),
}]
} else {
vec![]
};
return Ok(DeleteResult {
deleted,
return match rows.len() {
0 => Ok(DeleteResult {
deleted: vec![],
dry_run: true,
});
}),
1 => {
let row = rows.into_iter().next().unwrap();
Ok(DeleteResult {
deleted: vec![DeletedEntry {
name: name.to_string(),
folder: row.folder,
entry_type: row.entry_type,
}],
dry_run: true,
})
}
_ => {
let folders: Vec<&str> = rows.iter().map(|r| r.folder.as_str()).collect();
anyhow::bail!(
"Ambiguous: {} entries named '{}' found in folders: [{}]. \
Specify 'folder' to disambiguate.",
rows.len(),
name,
folders.join(", ")
)
}
};
}
let mut tx = pool.begin().await?;
let row: Option<EntryRow> = if let Some(uid) = user_id {
// Fetch matching rows with FOR UPDATE; use folder when provided to resolve ambiguity.
let rows: Vec<EntryRow> = if let Some(uid) = user_id {
if let Some(f) = folder {
sqlx::query_as(
"SELECT id, version, tags, metadata FROM entries \
WHERE user_id = $1 AND namespace = $2 AND kind = $3 AND name = $4 FOR UPDATE",
"SELECT id, version, folder, type, tags, metadata, notes FROM entries \
WHERE user_id = $1 AND folder = $2 AND name = $3 FOR UPDATE",
)
.bind(uid)
.bind(namespace)
.bind(kind)
.bind(f)
.bind(name)
.fetch_optional(&mut *tx)
.fetch_all(&mut *tx)
.await?
} else {
sqlx::query_as(
"SELECT id, version, tags, metadata FROM entries \
WHERE user_id IS NULL AND namespace = $1 AND kind = $2 AND name = $3 FOR UPDATE",
"SELECT id, version, folder, type, tags, metadata, notes FROM entries \
WHERE user_id = $1 AND name = $2 FOR UPDATE",
)
.bind(namespace)
.bind(kind)
.bind(uid)
.bind(name)
.fetch_optional(&mut *tx)
.fetch_all(&mut *tx)
.await?
}
} else if let Some(f) = folder {
sqlx::query_as(
"SELECT id, version, folder, type, tags, metadata, notes FROM entries \
WHERE user_id IS NULL AND folder = $1 AND name = $2 FOR UPDATE",
)
.bind(f)
.bind(name)
.fetch_all(&mut *tx)
.await?
} else {
sqlx::query_as(
"SELECT id, version, folder, type, tags, metadata, notes FROM entries \
WHERE user_id IS NULL AND name = $1 FOR UPDATE",
)
.bind(name)
.fetch_all(&mut *tx)
.await?
};
let Some(row) = row else {
let row = match rows.len() {
0 => {
tx.rollback().await?;
return Ok(DeleteResult {
deleted: vec![],
dry_run: false,
});
}
1 => rows.into_iter().next().unwrap(),
_ => {
tx.rollback().await?;
let folders: Vec<&str> = rows.iter().map(|r| r.folder.as_str()).collect();
anyhow::bail!(
"Ambiguous: {} entries named '{}' found in folders: [{}]. \
Specify 'folder' to disambiguate.",
rows.len(),
name,
folders.join(", ")
)
}
};
snapshot_and_delete(&mut tx, namespace, kind, name, &row, user_id).await?;
crate::audit::log_tx(&mut tx, user_id, "delete", namespace, kind, name, json!({})).await;
let folder = row.folder.clone();
let entry_type = row.entry_type.clone();
snapshot_and_delete(&mut tx, &folder, &entry_type, name, &row, user_id).await?;
crate::audit::log_tx(
&mut tx,
user_id,
"delete",
&folder,
&entry_type,
name,
json!({}),
)
.await;
tx.commit().await?;
Ok(DeleteResult {
deleted: vec![DeletedEntry {
namespace: namespace.to_string(),
kind: kind.to_string(),
name: name.to_string(),
folder,
entry_type,
}],
dry_run: false,
})
@@ -152,8 +290,8 @@ async fn delete_one(
async fn delete_bulk(
pool: &PgPool,
namespace: &str,
kind: Option<&str>,
folder: Option<&str>,
entry_type: Option<&str>,
dry_run: bool,
user_id: Option<Uuid>,
) -> Result<DeleteResult> {
@@ -161,62 +299,59 @@ async fn delete_bulk(
struct FullEntryRow {
id: Uuid,
version: i64,
kind: String,
folder: String,
#[sqlx(rename = "type")]
entry_type: String,
name: String,
metadata: serde_json::Value,
tags: Vec<String>,
notes: String,
}
let rows: Vec<FullEntryRow> = match (user_id, kind) {
(Some(uid), Some(k)) => {
sqlx::query_as(
"SELECT id, version, kind, name, metadata, tags FROM entries \
WHERE user_id = $1 AND namespace = $2 AND kind = $3 ORDER BY name",
)
.bind(uid)
.bind(namespace)
.bind(k)
.fetch_all(pool)
.await?
let mut conditions: Vec<String> = Vec::new();
let mut idx: i32 = 1;
if user_id.is_some() {
conditions.push(format!("user_id = ${}", idx));
idx += 1;
} else {
conditions.push("user_id IS NULL".to_string());
}
(Some(uid), None) => {
sqlx::query_as(
"SELECT id, version, kind, name, metadata, tags FROM entries \
WHERE user_id = $1 AND namespace = $2 ORDER BY kind, name",
)
.bind(uid)
.bind(namespace)
.fetch_all(pool)
.await?
if folder.is_some() {
conditions.push(format!("folder = ${}", idx));
idx += 1;
}
(None, Some(k)) => {
sqlx::query_as(
"SELECT id, version, kind, name, metadata, tags FROM entries \
WHERE user_id IS NULL AND namespace = $1 AND kind = $2 ORDER BY name",
)
.bind(namespace)
.bind(k)
.fetch_all(pool)
.await?
if entry_type.is_some() {
conditions.push(format!("type = ${}", idx));
idx += 1;
}
(None, None) => {
sqlx::query_as(
"SELECT id, version, kind, name, metadata, tags FROM entries \
WHERE user_id IS NULL AND namespace = $1 ORDER BY kind, name",
)
.bind(namespace)
.fetch_all(pool)
.await?
}
};
let where_clause = format!("WHERE {}", conditions.join(" AND "));
    let _ = idx; // idx only numbers the placeholders built above; discard it to silence the unused-assignment lint
if dry_run {
let sql = format!(
"SELECT id, version, folder, type, name, metadata, tags, notes \
FROM entries {where_clause} ORDER BY type, name"
);
let mut q = sqlx::query_as::<_, FullEntryRow>(&sql);
if let Some(uid) = user_id {
q = q.bind(uid);
}
if let Some(f) = folder {
q = q.bind(f);
}
if let Some(t) = entry_type {
q = q.bind(t);
}
let rows = q.fetch_all(pool).await?;
let deleted = rows
.iter()
.map(|r| DeletedEntry {
namespace: namespace.to_string(),
kind: r.kind.clone(),
name: r.name.clone(),
folder: r.folder.clone(),
entry_type: r.entry_type.clone(),
})
.collect();
return Ok(DeleteResult {
@@ -225,37 +360,73 @@ async fn delete_bulk(
});
}
let mut tx = pool.begin().await?;
let sql = format!(
"SELECT id, version, folder, type, name, metadata, tags, notes \
FROM entries {where_clause} ORDER BY type, name FOR UPDATE"
);
let mut q = sqlx::query_as::<_, FullEntryRow>(&sql);
if let Some(uid) = user_id {
q = q.bind(uid);
}
if let Some(f) = folder {
q = q.bind(f);
}
if let Some(t) = entry_type {
q = q.bind(t);
}
let rows = q.fetch_all(&mut *tx).await?;
if rows.len() > MAX_BULK_DELETE {
tx.rollback().await?;
anyhow::bail!(
"Bulk delete would affect {} entries (limit: {}). \
Narrow your filters or delete entries individually.",
rows.len(),
MAX_BULK_DELETE,
);
}
let mut deleted = Vec::with_capacity(rows.len());
for row in &rows {
let entry_row = EntryRow {
let entry_row: EntryRow = EntryRow {
id: row.id,
version: row.version,
folder: row.folder.clone(),
entry_type: row.entry_type.clone(),
tags: row.tags.clone(),
metadata: row.metadata.clone(),
notes: row.notes.clone(),
};
let mut tx = pool.begin().await?;
snapshot_and_delete(
&mut tx, namespace, &row.kind, &row.name, &entry_row, user_id,
&mut tx,
&row.folder,
&row.entry_type,
&row.name,
&entry_row,
user_id,
)
.await?;
crate::audit::log_tx(
&mut tx,
user_id,
"delete",
namespace,
&row.kind,
&row.folder,
&row.entry_type,
&row.name,
json!({"bulk": true}),
)
.await;
tx.commit().await?;
deleted.push(DeletedEntry {
namespace: namespace.to_string(),
kind: row.kind.clone(),
name: row.name.clone(),
folder: row.folder.clone(),
entry_type: row.entry_type.clone(),
});
}
tx.commit().await?;
Ok(DeleteResult {
deleted,
dry_run: false,
@@ -264,8 +435,8 @@ async fn delete_bulk(
async fn snapshot_and_delete(
tx: &mut sqlx::Transaction<'_, sqlx::Postgres>,
namespace: &str,
kind: &str,
folder: &str,
entry_type: &str,
name: &str,
row: &EntryRow,
user_id: Option<Uuid>,
@@ -275,8 +446,8 @@ async fn snapshot_and_delete(
db::EntrySnapshotParams {
entry_id: row.id,
user_id,
namespace,
kind,
folder,
entry_type,
name,
version: row.version,
action: "delete",
@@ -289,8 +460,12 @@ async fn snapshot_and_delete(
tracing::warn!(error = %e, "failed to snapshot entry history before delete");
}
let fields: Vec<SecretFieldRow> =
sqlx::query_as("SELECT id, field_name, encrypted FROM secrets WHERE entry_id = $1")
let fields: Vec<SecretFieldRow> = sqlx::query_as(
"SELECT s.id, s.name, s.encrypted \
FROM entry_secrets es \
JOIN secrets s ON s.id = es.secret_id \
WHERE es.entry_id = $1",
)
.bind(row.id)
.fetch_all(&mut **tx)
.await?;
@@ -299,10 +474,8 @@ async fn snapshot_and_delete(
if let Err(e) = db::snapshot_secret_history(
tx,
db::SecretSnapshotParams {
entry_id: row.id,
secret_id: f.id,
entry_version: row.version,
field_name: &f.field_name,
name: &f.name,
encrypted: &f.encrypted,
action: "delete",
},
@@ -318,5 +491,171 @@ async fn snapshot_and_delete(
.execute(&mut **tx)
.await?;
let secret_ids: Vec<Uuid> = fields.iter().map(|f| f.id).collect();
if !secret_ids.is_empty() {
sqlx::query(
"DELETE FROM secrets s \
WHERE s.id = ANY($1) \
AND NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = s.id)",
)
.bind(&secret_ids)
.execute(&mut **tx)
.await?;
}
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
use sqlx::PgPool;
async fn maybe_test_pool() -> Option<PgPool> {
let Ok(url) = std::env::var("SECRETS_DATABASE_URL") else {
eprintln!("skip delete tests: SECRETS_DATABASE_URL is not set");
return None;
};
let Ok(pool) = PgPool::connect(&url).await else {
eprintln!("skip delete tests: cannot connect to database");
return None;
};
if let Err(e) = crate::db::migrate(&pool).await {
eprintln!("skip delete tests: migrate failed: {e}");
return None;
}
Some(pool)
}
async fn cleanup_single_user_rows(pool: &PgPool, marker: &str) -> Result<()> {
sqlx::query(
"DELETE FROM entries WHERE user_id IS NULL AND (name LIKE $1 OR folder LIKE $1)",
)
.bind(format!("%{marker}%"))
.execute(pool)
.await?;
sqlx::query(
"DELETE FROM secrets WHERE user_id IS NULL AND name LIKE $1 \
AND NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = secrets.id)",
)
.bind(format!("%{marker}%"))
.execute(pool)
.await?;
Ok(())
}
#[tokio::test]
async fn delete_dry_run_reports_matching_entry_without_writes() -> Result<()> {
let Some(pool) = maybe_test_pool().await else {
return Ok(());
};
let suffix = Uuid::from_u128(rand::random()).to_string();
let marker = format!("delete_dry_{}", &suffix[..8]);
let entry_name = format!("{}_entry", marker);
cleanup_single_user_rows(&pool, &marker).await?;
sqlx::query(
"INSERT INTO entries (user_id, folder, type, name, notes, tags, metadata) \
VALUES (NULL, $1, 'service', $2, '', '{}', '{}')",
)
.bind(&marker)
.bind(&entry_name)
.execute(&pool)
.await?;
let result = run(
&pool,
DeleteParams {
name: Some(&entry_name),
folder: Some(&marker),
entry_type: None,
dry_run: true,
user_id: None,
},
)
.await?;
assert!(result.dry_run);
assert_eq!(result.deleted.len(), 1);
assert_eq!(result.deleted[0].name, entry_name);
let still_exists: bool = sqlx::query_scalar(
"SELECT EXISTS(SELECT 1 FROM entries WHERE user_id IS NULL AND folder = $1 AND name = $2)",
)
.bind(&marker)
.bind(&entry_name)
.fetch_one(&pool)
.await?;
assert!(still_exists);
cleanup_single_user_rows(&pool, &marker).await?;
Ok(())
}
#[tokio::test]
async fn delete_by_id_removes_entry_and_orphan_secret() -> Result<()> {
let Some(pool) = maybe_test_pool().await else {
return Ok(());
};
let suffix = Uuid::from_u128(rand::random()).to_string();
let marker = format!("delete_id_{}", &suffix[..8]);
let user_id = Uuid::from_u128(rand::random());
let entry_name = format!("{}_entry", marker);
let secret_name = format!("{}_secret", marker);
sqlx::query("DELETE FROM entries WHERE user_id = $1 AND folder = $2")
.bind(user_id)
.bind(&marker)
.execute(&pool)
.await?;
sqlx::query("DELETE FROM secrets WHERE user_id = $1 AND name = $2")
.bind(user_id)
.bind(&secret_name)
.execute(&pool)
.await?;
let entry_id: Uuid = sqlx::query_scalar(
"INSERT INTO entries (user_id, folder, type, name, notes, tags, metadata) \
VALUES ($1, $2, 'service', $3, '', '{}', '{}') RETURNING id",
)
.bind(user_id)
.bind(&marker)
.bind(&entry_name)
.fetch_one(&pool)
.await?;
let secret_id: Uuid = sqlx::query_scalar(
"INSERT INTO secrets (user_id, name, type, encrypted) VALUES ($1, $2, 'text', $3) RETURNING id",
)
.bind(user_id)
.bind(&secret_name)
.bind(vec![1_u8, 2, 3])
.fetch_one(&pool)
.await?;
sqlx::query("INSERT INTO entry_secrets (entry_id, secret_id) VALUES ($1, $2)")
.bind(entry_id)
.bind(secret_id)
.execute(&pool)
.await?;
let result = delete_by_id(&pool, entry_id, user_id).await?;
assert!(!result.dry_run);
assert_eq!(result.deleted.len(), 1);
assert_eq!(result.deleted[0].name, entry_name);
let entry_exists: bool =
sqlx::query_scalar("SELECT EXISTS(SELECT 1 FROM entries WHERE id = $1)")
.bind(entry_id)
.fetch_one(&pool)
.await?;
let secret_exists: bool =
sqlx::query_scalar("SELECT EXISTS(SELECT 1 FROM secrets WHERE id = $1)")
.bind(secret_id)
.fetch_one(&pool)
.await?;
assert!(!entry_exists);
assert!(!secret_exists);
Ok(())
}
}

View File

@@ -12,8 +12,8 @@ use crate::service::search::{fetch_entries, fetch_secrets_for_entries};
#[allow(clippy::too_many_arguments)]
pub async fn build_env_map(
pool: &PgPool,
namespace: Option<&str>,
kind: Option<&str>,
folder: Option<&str>,
entry_type: Option<&str>,
name: Option<&str>,
tags: &[String],
only_fields: &[String],
@@ -21,12 +21,13 @@ pub async fn build_env_map(
master_key: &[u8; 32],
user_id: Option<Uuid>,
) -> Result<HashMap<String, String>> {
let entries = fetch_entries(pool, namespace, kind, name, tags, None, user_id).await?;
let entries = fetch_entries(pool, folder, entry_type, name, tags, None, user_id).await?;
let mut combined: HashMap<String, String> = HashMap::new();
for entry in &entries {
let entry_map = build_entry_env_map(pool, entry, only_fields, prefix, master_key).await?;
let entry_map =
build_entry_env_map(pool, entry, only_fields, prefix, master_key, user_id).await?;
combined.extend(entry_map);
}
@@ -39,6 +40,7 @@ async fn build_entry_env_map(
only_fields: &[String],
prefix: &str,
master_key: &[u8; 32],
_user_id: Option<Uuid>,
) -> Result<HashMap<String, String>> {
let entry_ids = vec![entry.id];
let secrets_map = fetch_secrets_for_entries(pool, &entry_ids).await?;
@@ -49,7 +51,7 @@ async fn build_entry_env_map(
} else {
all_fields
.iter()
.filter(|f| only_fields.contains(&f.field_name))
.filter(|f| only_fields.contains(&f.name))
.collect()
};
@@ -61,44 +63,11 @@ async fn build_entry_env_map(
let key = format!(
"{}_{}",
effective_prefix,
f.field_name.to_uppercase().replace(['-', '.'], "_")
f.name.to_uppercase().replace(['-', '.'], "_")
);
map.insert(key, json_to_env_string(&decrypted));
}
// Resolve key_ref
if let Some(key_ref) = entry.metadata.get("key_ref").and_then(|v| v.as_str()) {
let key_entries = fetch_entries(
pool,
Some(&entry.namespace),
Some("key"),
Some(key_ref),
&[],
None,
None,
)
.await?;
if let Some(key_entry) = key_entries.first() {
let key_ids = vec![key_entry.id];
let key_fields_map = fetch_secrets_for_entries(pool, &key_ids).await?;
let empty = vec![];
let key_fields = key_fields_map.get(&key_entry.id).unwrap_or(&empty);
let key_prefix = env_prefix(key_entry, prefix);
for f in key_fields {
let decrypted = crypto::decrypt_json(master_key, &f.encrypted)?;
let key_var = format!(
"{}_{}",
key_prefix,
f.field_name.to_uppercase().replace(['-', '.'], "_")
);
map.insert(key_var, json_to_env_string(&decrypted));
}
} else {
tracing::warn!(key_ref, "key_ref target not found");
}
}
Ok(map)
}
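
A worked example of the naming rule above (prefix and secret name invented):

let effective_prefix = "PROD_DB";
let secret_name = "admin.password";
let key = format!(
    "{}_{}",
    effective_prefix,
    secret_name.to_uppercase().replace(['-', '.'], "_")
);
assert_eq!(key, "PROD_DB_ADMIN_PASSWORD");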

View File

@@ -9,8 +9,8 @@ use crate::models::{ExportData, ExportEntry, ExportFormat};
use crate::service::search::{fetch_entries, fetch_secrets_for_entries};
pub struct ExportParams<'a> {
pub namespace: Option<&'a str>,
pub kind: Option<&'a str>,
pub folder: Option<&'a str>,
pub entry_type: Option<&'a str>,
pub name: Option<&'a str>,
pub tags: &'a [String],
pub query: Option<&'a str>,
@@ -25,8 +25,8 @@ pub async fn export(
) -> Result<ExportData> {
let entries = fetch_entries(
pool,
params.namespace,
params.kind,
params.folder,
params.entry_type,
params.name,
params.tags,
params.query,
@@ -55,16 +55,17 @@ pub async fn export(
let mut map = BTreeMap::new();
for f in fields {
let decrypted = crypto::decrypt_json(mk, &f.encrypted)?;
map.insert(f.field_name.clone(), decrypted);
map.insert(f.name.clone(), decrypted);
}
Some(map)
}
};
export_entries.push(ExportEntry {
namespace: entry.namespace.clone(),
kind: entry.kind.clone(),
name: entry.name.clone(),
folder: entry.folder.clone(),
entry_type: entry.entry_type.clone(),
notes: entry.notes.clone(),
tags: entry.tags.clone(),
metadata: entry.metadata.clone(),
secrets,

View File

@@ -5,31 +5,19 @@ use std::collections::HashMap;
use uuid::Uuid;
use crate::crypto;
use crate::service::search::{fetch_entries, fetch_secrets_for_entries};
use crate::service::search::{fetch_secrets_for_entries, resolve_entry, resolve_entry_by_id};
/// Decrypt a single named field from an entry.
/// `folder` is optional; if omitted and multiple entries share the name, an error is returned.
pub async fn get_secret_field(
pool: &PgPool,
namespace: &str,
kind: &str,
name: &str,
folder: Option<&str>,
field_name: &str,
master_key: &[u8; 32],
user_id: Option<Uuid>,
) -> Result<Value> {
let entries = fetch_entries(
pool,
Some(namespace),
Some(kind),
Some(name),
&[],
None,
user_id,
)
.await?;
let entry = entries
.first()
.ok_or_else(|| anyhow::anyhow!("Not found: [{}/{}] {}", namespace, kind, name))?;
let entry = resolve_entry(pool, name, folder, user_id).await?;
let entry_ids = vec![entry.id];
let secrets_map = fetch_secrets_for_entries(pool, &entry_ids).await?;
@@ -37,34 +25,22 @@ pub async fn get_secret_field(
let field = fields
.iter()
.find(|f| f.field_name == field_name)
.find(|f| f.name == field_name)
.ok_or_else(|| anyhow::anyhow!("Secret field '{}' not found", field_name))?;
crypto::decrypt_json(master_key, &field.encrypted)
}
/// Decrypt all secret fields from an entry. Returns a map field_name → decrypted Value.
/// `folder` is optional; if omitted and multiple entries share the name, an error is returned.
pub async fn get_all_secrets(
pool: &PgPool,
namespace: &str,
kind: &str,
name: &str,
folder: Option<&str>,
master_key: &[u8; 32],
user_id: Option<Uuid>,
) -> Result<HashMap<String, Value>> {
let entries = fetch_entries(
pool,
Some(namespace),
Some(kind),
Some(name),
&[],
None,
user_id,
)
.await?;
let entry = entries
.first()
.ok_or_else(|| anyhow::anyhow!("Not found: [{}/{}] {}", namespace, kind, name))?;
let entry = resolve_entry(pool, name, folder, user_id).await?;
let entry_ids = vec![entry.id];
let secrets_map = fetch_secrets_for_entries(pool, &entry_ids).await?;
@@ -73,7 +49,56 @@ pub async fn get_all_secrets(
let mut map = HashMap::new();
for f in fields {
let decrypted = crypto::decrypt_json(master_key, &f.encrypted)?;
map.insert(f.field_name.clone(), decrypted);
map.insert(f.name.clone(), decrypted);
}
Ok(map)
}
/// Decrypt a single named field from an entry, located by its UUID.
pub async fn get_secret_field_by_id(
pool: &PgPool,
entry_id: Uuid,
field_name: &str,
master_key: &[u8; 32],
user_id: Option<Uuid>,
) -> Result<Value> {
resolve_entry_by_id(pool, entry_id, user_id)
.await
.map_err(|_| anyhow::anyhow!("Entry with id '{}' not found", entry_id))?;
let entry_ids = vec![entry_id];
let secrets_map = fetch_secrets_for_entries(pool, &entry_ids).await?;
let fields = secrets_map.get(&entry_id).map(Vec::as_slice).unwrap_or(&[]);
let field = fields
.iter()
.find(|f| f.name == field_name)
.ok_or_else(|| anyhow::anyhow!("Secret field '{}' not found", field_name))?;
crypto::decrypt_json(master_key, &field.encrypted)
}
/// Decrypt all secret fields from an entry, located by its UUID.
/// Returns a map field_name → decrypted Value.
pub async fn get_all_secrets_by_id(
pool: &PgPool,
entry_id: Uuid,
master_key: &[u8; 32],
user_id: Option<Uuid>,
) -> Result<HashMap<String, Value>> {
// Validate entry exists (and that it belongs to the requesting user)
resolve_entry_by_id(pool, entry_id, user_id)
.await
.map_err(|_| anyhow::anyhow!("Entry with id '{}' not found", entry_id))?;
let entry_ids = vec![entry_id];
let secrets_map = fetch_secrets_for_entries(pool, &entry_ids).await?;
let fields = secrets_map.get(&entry_id).map(Vec::as_slice).unwrap_or(&[]);
let mut map = HashMap::new();
for f in fields {
let decrypted = crypto::decrypt_json(master_key, &f.encrypted)?;
map.insert(f.name.clone(), decrypted);
}
Ok(map)
}
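
Usage sketch for the name-based getters (entry, folder, and field names invented): the folder argument is only needed when the entry name is ambiguous.

// Unique name: no folder required.
let v = get_secret_field(&pool, "gitea", None, "password", &master_key, None).await?;
// Duplicated name: pass the folder, otherwise resolve_entry returns an error.
let v = get_secret_field(&pool, "gitea", Some("refining"), "password", &master_key, None).await?;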

View File

@@ -3,6 +3,8 @@ use serde_json::Value;
use sqlx::PgPool;
use uuid::Uuid;
use crate::service::search::resolve_entry;
#[derive(Debug, serde::Serialize)]
pub struct HistoryEntry {
pub version: i64,
@@ -10,11 +12,12 @@ pub struct HistoryEntry {
pub created_at: String,
}
/// Return version history for the entry identified by `name`.
/// `folder` is optional; if omitted and multiple entries share the name, an error is returned.
pub async fn run(
pool: &PgPool,
namespace: &str,
kind: &str,
name: &str,
folder: Option<&str>,
limit: u32,
user_id: Option<Uuid>,
) -> Result<Vec<HistoryEntry>> {
@@ -25,32 +28,16 @@ pub async fn run(
created_at: chrono::DateTime<chrono::Utc>,
}
let rows: Vec<Row> = if let Some(uid) = user_id {
sqlx::query_as(
let entry = resolve_entry(pool, name, folder, user_id).await?;
let rows: Vec<Row> = sqlx::query_as(
"SELECT version, action, created_at FROM entries_history \
WHERE namespace = $1 AND kind = $2 AND name = $3 AND user_id = $4 \
ORDER BY id DESC LIMIT $5",
WHERE entry_id = $1 ORDER BY id DESC LIMIT $2",
)
.bind(namespace)
.bind(kind)
.bind(name)
.bind(uid)
.bind(entry.id)
.bind(limit as i64)
.fetch_all(pool)
.await?
} else {
sqlx::query_as(
"SELECT version, action, created_at FROM entries_history \
WHERE namespace = $1 AND kind = $2 AND name = $3 AND user_id IS NULL \
ORDER BY id DESC LIMIT $4",
)
.bind(namespace)
.bind(kind)
.bind(name)
.bind(limit as i64)
.fetch_all(pool)
.await?
};
.await?;
Ok(rows
.into_iter()
@@ -64,12 +51,11 @@ pub async fn run(
pub async fn run_json(
pool: &PgPool,
namespace: &str,
kind: &str,
name: &str,
folder: Option<&str>,
limit: u32,
user_id: Option<Uuid>,
) -> Result<Value> {
let entries = run(pool, namespace, kind, name, limit, user_id).await?;
let entries = run(pool, name, folder, limit, user_id).await?;
Ok(serde_json::to_value(entries)?)
}

View File

@@ -47,10 +47,9 @@ pub async fn run(
for entry in &data.entries {
let exists: bool = sqlx::query_scalar(
"SELECT EXISTS(SELECT 1 FROM entries \
WHERE namespace = $1 AND kind = $2 AND name = $3 AND user_id IS NOT DISTINCT FROM $4)",
WHERE folder = $1 AND name = $2 AND user_id IS NOT DISTINCT FROM $3)",
)
.bind(&entry.namespace)
.bind(&entry.kind)
.bind(&entry.folder)
.bind(&entry.name)
.bind(params.user_id)
.fetch_one(pool)
@@ -59,9 +58,7 @@ pub async fn run(
if exists && !params.force {
return Err(anyhow::anyhow!(
"Import aborted: conflict on [{}/{}/{}]",
entry.namespace,
entry.kind,
"Import aborted: conflict on '{}'",
entry.name
));
}
@@ -81,12 +78,15 @@ pub async fn run(
match add_run(
pool,
AddParams {
namespace: &entry.namespace,
kind: &entry.kind,
name: &entry.name,
folder: &entry.folder,
entry_type: &entry.entry_type,
notes: &entry.notes,
tags: &entry.tags,
meta_entries: &meta_entries,
secret_entries: &secret_entries,
secret_types: &Default::default(),
link_secret_names: &[],
user_id: params.user_id,
},
master_key,
@@ -98,8 +98,6 @@ pub async fn run(
}
Err(e) => {
tracing::error!(
namespace = entry.namespace,
kind = entry.kind,
name = entry.name,
error = %e,
"failed to import entry"

View File

@@ -3,92 +3,145 @@ use serde_json::Value;
use sqlx::PgPool;
use uuid::Uuid;
use crate::crypto;
use crate::db;
#[derive(Debug, serde::Serialize)]
pub struct RollbackResult {
pub namespace: String,
pub kind: String,
pub name: String,
pub folder: String,
#[serde(rename = "type")]
pub entry_type: String,
pub restored_version: i64,
}
/// Roll back entry `name` to `to_version` (or the most recent snapshot if None).
/// `folder` is optional; if omitted and multiple entries share the name, an error is returned.
pub async fn run(
pool: &PgPool,
namespace: &str,
kind: &str,
name: &str,
folder: Option<&str>,
to_version: Option<i64>,
master_key: &[u8; 32],
user_id: Option<Uuid>,
) -> Result<RollbackResult> {
#[derive(sqlx::FromRow)]
struct EntryHistoryRow {
entry_id: Uuid,
folder: String,
#[sqlx(rename = "type")]
entry_type: String,
version: i64,
action: String,
tags: Vec<String>,
metadata: Value,
}
let snap: Option<EntryHistoryRow> = if let Some(ver) = to_version {
if let Some(uid) = user_id {
sqlx::query_as(
"SELECT entry_id, version, action, tags, metadata FROM entries_history \
WHERE namespace = $1 AND kind = $2 AND name = $3 AND version = $4 \
AND user_id = $5 ORDER BY id DESC LIMIT 1",
    // Disambiguate: resolve a unique entry_id from `name` plus the optional `folder`,
    // then query entries_history by that id.
let entry_id: Option<Uuid> = if let Some(uid) = user_id {
if let Some(f) = folder {
sqlx::query_scalar(
"SELECT DISTINCT entry_id FROM entries_history \
WHERE name = $1 AND folder = $2 AND user_id = $3 LIMIT 1",
)
.bind(namespace)
.bind(kind)
.bind(name)
.bind(ver)
.bind(f)
.bind(uid)
.fetch_optional(pool)
.await?
} else {
sqlx::query_as(
"SELECT entry_id, version, action, tags, metadata FROM entries_history \
WHERE namespace = $1 AND kind = $2 AND name = $3 AND version = $4 \
AND user_id IS NULL ORDER BY id DESC LIMIT 1",
let ids: Vec<Uuid> = sqlx::query_scalar(
"SELECT DISTINCT entry_id FROM entries_history \
WHERE name = $1 AND user_id = $2",
)
.bind(namespace)
.bind(kind)
.bind(name)
.bind(ver)
.fetch_optional(pool)
.await?
.bind(uid)
.fetch_all(pool)
.await?;
match ids.len() {
0 => None,
1 => Some(ids[0]),
_ => {
let folders: Vec<String> = sqlx::query_scalar(
"SELECT DISTINCT folder FROM entries_history \
WHERE name = $1 AND user_id = $2",
)
.bind(name)
.bind(uid)
.fetch_all(pool)
.await?;
anyhow::bail!(
"Ambiguous: entries named '{}' exist in folders: [{}]. \
Specify 'folder' to disambiguate.",
name,
folders.join(", ")
)
}
} else if let Some(uid) = user_id {
sqlx::query_as(
"SELECT entry_id, version, action, tags, metadata FROM entries_history \
WHERE namespace = $1 AND kind = $2 AND name = $3 \
AND user_id = $4 ORDER BY id DESC LIMIT 1",
}
}
} else if let Some(f) = folder {
sqlx::query_scalar(
"SELECT DISTINCT entry_id FROM entries_history \
WHERE name = $1 AND folder = $2 AND user_id IS NULL LIMIT 1",
)
.bind(namespace)
.bind(kind)
.bind(name)
.bind(uid)
.bind(f)
.fetch_optional(pool)
.await?
} else {
let ids: Vec<Uuid> = sqlx::query_scalar(
"SELECT DISTINCT entry_id FROM entries_history \
WHERE name = $1 AND user_id IS NULL",
)
.bind(name)
.fetch_all(pool)
.await?;
match ids.len() {
0 => None,
1 => Some(ids[0]),
_ => {
let folders: Vec<String> = sqlx::query_scalar(
"SELECT DISTINCT folder FROM entries_history \
WHERE name = $1 AND user_id IS NULL",
)
.bind(name)
.fetch_all(pool)
.await?;
anyhow::bail!(
"Ambiguous: entries named '{}' exist in folders: [{}]. \
Specify 'folder' to disambiguate.",
name,
folders.join(", ")
)
}
}
};
let entry_id = entry_id.ok_or_else(|| anyhow::anyhow!("No history found for '{}'", name))?;
let snap: Option<EntryHistoryRow> = if let Some(ver) = to_version {
sqlx::query_as(
"SELECT folder, type, version, action, tags, metadata \
FROM entries_history \
WHERE entry_id = $1 AND version = $2 ORDER BY id DESC LIMIT 1",
)
.bind(entry_id)
.bind(ver)
.fetch_optional(pool)
.await?
} else {
sqlx::query_as(
"SELECT entry_id, version, action, tags, metadata FROM entries_history \
WHERE namespace = $1 AND kind = $2 AND name = $3 \
AND user_id IS NULL ORDER BY id DESC LIMIT 1",
"SELECT folder, type, version, action, tags, metadata \
FROM entries_history \
WHERE entry_id = $1 ORDER BY id DESC LIMIT 1",
)
.bind(namespace)
.bind(kind)
.bind(name)
.bind(entry_id)
.fetch_optional(pool)
.await?
};
let snap = snap.ok_or_else(|| {
anyhow::anyhow!(
"No history found for [{}/{}] {}{}.",
namespace,
kind,
"No history found for '{}'{}.",
name,
to_version
.map(|v| format!(" at version {}", v))
@@ -96,33 +149,7 @@ pub async fn run(
)
})?;
#[derive(sqlx::FromRow)]
struct SecretHistoryRow {
field_name: String,
encrypted: Vec<u8>,
action: String,
}
let field_snaps: Vec<SecretHistoryRow> = sqlx::query_as(
"SELECT field_name, encrypted, action FROM secrets_history \
WHERE entry_id = $1 AND entry_version = $2 ORDER BY field_name",
)
.bind(snap.entry_id)
.bind(snap.version)
.fetch_all(pool)
.await?;
for f in &field_snaps {
if f.action != "delete" && !f.encrypted.is_empty() {
crypto::decrypt_json(master_key, &f.encrypted).map_err(|e| {
anyhow::anyhow!(
"Cannot decrypt snapshot for field '{}': {}",
f.field_name,
e
)
})?;
}
}
    let _ = master_key; // unused in N:N mode: rollback restores entry metadata/tags only (see below)
let mut tx = pool.begin().await?;
@@ -130,43 +157,32 @@ pub async fn run(
struct LiveEntry {
id: Uuid,
version: i64,
folder: String,
#[sqlx(rename = "type")]
entry_type: String,
tags: Vec<String>,
metadata: Value,
#[allow(dead_code)]
notes: String,
}
// Query live entry with correct user_id scoping to avoid PK conflicts
let live: Option<LiveEntry> = if let Some(uid) = user_id {
sqlx::query_as(
"SELECT id, version, tags, metadata FROM entries \
WHERE user_id = $1 AND namespace = $2 AND kind = $3 AND name = $4 FOR UPDATE",
// Lock the live entry if it exists (matched by entry_id for precision).
let live: Option<LiveEntry> = sqlx::query_as(
"SELECT id, version, folder, type, tags, metadata, notes FROM entries \
WHERE id = $1 FOR UPDATE",
)
.bind(uid)
.bind(namespace)
.bind(kind)
.bind(name)
.bind(entry_id)
.fetch_optional(&mut *tx)
.await?
} else {
sqlx::query_as(
"SELECT id, version, tags, metadata FROM entries \
WHERE user_id IS NULL AND namespace = $1 AND kind = $2 AND name = $3 FOR UPDATE",
)
.bind(namespace)
.bind(kind)
.bind(name)
.fetch_optional(&mut *tx)
.await?
};
.await?;
let entry_id = if let Some(ref lr) = live {
// Snapshot current state before overwriting
let live_entry_id = if let Some(ref lr) = live {
if let Err(e) = db::snapshot_entry_history(
&mut tx,
db::EntrySnapshotParams {
entry_id: lr.id,
user_id,
namespace,
kind,
folder: &lr.folder,
entry_type: &lr.entry_type,
name,
version: lr.version,
action: "rollback",
@@ -182,11 +198,15 @@ pub async fn run(
#[derive(sqlx::FromRow)]
struct LiveField {
id: Uuid,
field_name: String,
name: String,
encrypted: Vec<u8>,
}
let live_fields: Vec<LiveField> =
sqlx::query_as("SELECT id, field_name, encrypted FROM secrets WHERE entry_id = $1")
let live_fields: Vec<LiveField> = sqlx::query_as(
"SELECT s.id, s.name, s.encrypted \
FROM entry_secrets es \
JOIN secrets s ON s.id = es.secret_id \
WHERE es.entry_id = $1",
)
.bind(lr.id)
.fetch_all(&mut *tx)
.await?;
@@ -195,10 +215,8 @@ pub async fn run(
if let Err(e) = db::snapshot_secret_history(
&mut tx,
db::SecretSnapshotParams {
entry_id: lr.id,
secret_id: f.id,
entry_version: lr.version,
field_name: &f.field_name,
name: &f.name,
encrypted: &f.encrypted,
action: "rollback",
},
@@ -209,7 +227,6 @@ pub async fn run(
}
}
// Update the existing row in-place to preserve its primary key and user_id
sqlx::query(
"UPDATE entries SET tags = $1, metadata = $2, version = version + 1, \
updated_at = NOW() WHERE id = $3",
@@ -222,16 +239,15 @@ pub async fn run(
lr.id
} else {
// No live entry — insert a fresh one with a new UUID
if let Some(uid) = user_id {
sqlx::query_scalar(
"INSERT INTO entries \
(user_id, namespace, kind, name, tags, metadata, version, updated_at) \
VALUES ($1, $2, $3, $4, $5, $6, $7, NOW()) RETURNING id",
(user_id, folder, type, name, notes, tags, metadata, version, updated_at) \
VALUES ($1, $2, $3, $4, '', $5, $6, $7, NOW()) RETURNING id",
)
.bind(uid)
.bind(namespace)
.bind(kind)
.bind(&snap.folder)
.bind(&snap.entry_type)
.bind(name)
.bind(&snap.tags)
.bind(&snap.metadata)
@@ -241,11 +257,11 @@ pub async fn run(
} else {
sqlx::query_scalar(
"INSERT INTO entries \
(namespace, kind, name, tags, metadata, version, updated_at) \
VALUES ($1, $2, $3, $4, $5, $6, NOW()) RETURNING id",
(folder, type, name, notes, tags, metadata, version, updated_at) \
VALUES ($1, $2, $3, '', $4, $5, $6, NOW()) RETURNING id",
)
.bind(namespace)
.bind(kind)
.bind(&snap.folder)
.bind(&snap.entry_type)
.bind(name)
.bind(&snap.tags)
.bind(&snap.metadata)
@@ -255,29 +271,16 @@ pub async fn run(
}
};
sqlx::query("DELETE FROM secrets WHERE entry_id = $1")
.bind(entry_id)
.execute(&mut *tx)
.await?;
for f in &field_snaps {
if f.action == "delete" {
continue;
}
sqlx::query("INSERT INTO secrets (entry_id, field_name, encrypted) VALUES ($1, $2, $3)")
.bind(entry_id)
.bind(&f.field_name)
.bind(&f.encrypted)
.execute(&mut *tx)
.await?;
}
// In N:N mode, rollback restores entry metadata/tags only.
// Secret snapshots are kept for audit but secret linkage/content is not rewritten here.
let _ = live_entry_id;
crate::audit::log_tx(
&mut tx,
user_id,
"rollback",
namespace,
kind,
&snap.folder,
&snap.entry_type,
name,
serde_json::json!({
"restored_version": snap.version,
@@ -289,9 +292,9 @@ pub async fn run(
tx.commit().await?;
Ok(RollbackResult {
namespace: namespace.to_string(),
kind: kind.to_string(),
name: name.to_string(),
folder: snap.folder,
entry_type: snap.entry_type,
restored_version: snap.version,
})
}

View File

@@ -8,10 +8,23 @@ use crate::models::{Entry, SecretField};
pub const FETCH_ALL_LIMIT: u32 = 100_000;
/// Build an ILIKE pattern for fuzzy matching, escaping `\`, `%` and `_` literals.
pub fn ilike_pattern(value: &str) -> String {
format!(
"%{}%",
value
.replace('\\', "\\\\")
.replace('%', "\\%")
.replace('_', "\\_")
)
}
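A quick illustration of the escaping order (backslash first, then the SQL wildcards); the inputs below are invented, and the behavior matches the unit test at the end of this file:

// Hypothetical inputs — `\` is doubled before `%`/`_` are escaped.
assert_eq!(ilike_pattern("plain"), "%plain%");
assert_eq!(ilike_pattern("50%_off"), r"%50\%\_off%");
// The result is bound to `… ILIKE $n ESCAPE '\'`, so escaped `%`/`_` match
// literally while the surrounding `%…%` still provide the fuzzy match.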
pub struct SearchParams<'a> {
pub namespace: Option<&'a str>,
pub kind: Option<&'a str>,
pub folder: Option<&'a str>,
pub entry_type: Option<&'a str>,
pub name: Option<&'a str>,
/// Fuzzy match on `entries.name` only (ILIKE with `\`/`%`/`_` escaped).
pub name_query: Option<&'a str>,
pub tags: &'a [String],
pub query: Option<&'a str>,
pub sort: &'a str,
@@ -27,49 +40,50 @@ pub struct SearchResult {
pub secret_schemas: HashMap<Uuid, Vec<SecretField>>,
}
pub async fn run(pool: &PgPool, params: SearchParams<'_>) -> Result<SearchResult> {
let entries = fetch_entries_paged(pool, &params).await?;
let entry_ids: Vec<Uuid> = entries.iter().map(|e| e.id).collect();
let secret_schemas = if !entry_ids.is_empty() {
fetch_secret_schemas(pool, &entry_ids).await?
} else {
HashMap::new()
};
Ok(SearchResult {
entries,
secret_schemas,
})
}
/// Fetch entries matching the given filters — returns all matching entries up to FETCH_ALL_LIMIT.
pub async fn fetch_entries(
pool: &PgPool,
namespace: Option<&str>,
kind: Option<&str>,
name: Option<&str>,
tags: &[String],
query: Option<&str>,
user_id: Option<Uuid>,
) -> Result<Vec<Entry>> {
let params = SearchParams {
namespace,
kind,
name,
tags,
query,
sort: "name",
limit: FETCH_ALL_LIMIT,
offset: 0,
user_id,
};
/// List `entries` rows matching params (paged, ordered per `params.sort`).
/// Does not read the `secrets` table.
pub async fn list_entries(pool: &PgPool, params: SearchParams<'_>) -> Result<Vec<Entry>> {
fetch_entries_paged(pool, &params).await
}
async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<Entry>> {
/// Count `entries` rows matching the same filters as [`list_entries`] (ignores `sort` / `limit` / `offset`).
/// Does not read the `secrets` table.
pub async fn count_entries(pool: &PgPool, a: &SearchParams<'_>) -> Result<i64> {
let (where_clause, _) = entry_where_clause_and_next_idx(a);
let sql = format!("SELECT COUNT(*)::bigint FROM entries {where_clause}");
let mut q = sqlx::query_scalar::<_, i64>(&sql);
if let Some(uid) = a.user_id {
q = q.bind(uid);
}
if let Some(v) = a.folder {
q = q.bind(v);
}
if let Some(v) = a.entry_type {
q = q.bind(v);
}
if let Some(v) = a.name {
q = q.bind(v);
}
if let Some(v) = a.name_query {
let pattern = ilike_pattern(v);
q = q.bind(pattern);
}
for tag in a.tags {
q = q.bind(tag);
}
if let Some(v) = a.query {
let pattern = ilike_pattern(v);
q = q.bind(pattern);
}
let n = q.fetch_one(pool).await?;
Ok(n)
}
/// Shared WHERE clause and the next `$n` index (for LIMIT/OFFSET in paged queries).
fn entry_where_clause_and_next_idx(a: &SearchParams<'_>) -> (String, i32) {
let mut conditions: Vec<String> = Vec::new();
let mut idx: i32 = 1;
// user_id filtering — always comes first when present
if a.user_id.is_some() {
conditions.push(format!("user_id = ${}", idx));
idx += 1;
@@ -77,18 +91,22 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
conditions.push("user_id IS NULL".to_string());
}
if a.namespace.is_some() {
conditions.push(format!("namespace = ${}", idx));
if a.folder.is_some() {
conditions.push(format!("folder = ${}", idx));
idx += 1;
}
if a.kind.is_some() {
conditions.push(format!("kind = ${}", idx));
if a.entry_type.is_some() {
conditions.push(format!("type = ${}", idx));
idx += 1;
}
if a.name.is_some() {
conditions.push(format!("name = ${}", idx));
idx += 1;
}
if a.name_query.is_some() {
conditions.push(format!("name ILIKE ${} ESCAPE '\\'", idx));
idx += 1;
}
if !a.tags.is_empty() {
let placeholders: Vec<String> = a
.tags
@@ -106,14 +124,66 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
}
if a.query.is_some() {
conditions.push(format!(
"(name ILIKE ${i} ESCAPE '\\' OR namespace ILIKE ${i} ESCAPE '\\' \
OR kind ILIKE ${i} ESCAPE '\\' OR metadata::text ILIKE ${i} ESCAPE '\\' \
"(name ILIKE ${i} ESCAPE '\\' OR folder ILIKE ${i} ESCAPE '\\' \
OR type ILIKE ${i} ESCAPE '\\' OR notes ILIKE ${i} ESCAPE '\\' \
OR metadata::text ILIKE ${i} ESCAPE '\\' \
OR EXISTS (SELECT 1 FROM unnest(tags) t WHERE t ILIKE ${i} ESCAPE '\\'))",
i = idx
));
idx += 1;
}
let where_clause = if conditions.is_empty() {
String::new()
} else {
format!("WHERE {}", conditions.join(" AND "))
};
(where_clause, idx)
}
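For orientation, a sketch of the contract between this helper and its callers (example filters invented): placeholders are numbered in the fixed order user_id, folder, type, name, name_query, tags, query, and callers must bind values in exactly that order before appending LIMIT/OFFSET.

// With user_id, folder and an exact name set (and nothing else), the helper yields:
//   where_clause == "WHERE user_id = $1 AND folder = $2 AND name = $3"
//   next_idx     == 4
let (where_clause, next_idx) = entry_where_clause_and_next_idx(&params);
// fetch_entries_paged then uses $4 for LIMIT and $5 for OFFSET, binding uid,
// folder and name first so positions line up with the generated clause.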
pub async fn run(pool: &PgPool, params: SearchParams<'_>) -> Result<SearchResult> {
let entries = fetch_entries_paged(pool, &params).await?;
let entry_ids: Vec<Uuid> = entries.iter().map(|e| e.id).collect();
let secret_schemas = if !entry_ids.is_empty() {
fetch_secret_schemas(pool, &entry_ids).await?
} else {
HashMap::new()
};
Ok(SearchResult {
entries,
secret_schemas,
})
}
/// Fetch entries matching the given filters — returns all matching entries up to FETCH_ALL_LIMIT.
#[allow(clippy::too_many_arguments)]
pub async fn fetch_entries(
pool: &PgPool,
folder: Option<&str>,
entry_type: Option<&str>,
name: Option<&str>,
tags: &[String],
query: Option<&str>,
user_id: Option<Uuid>,
) -> Result<Vec<Entry>> {
let params = SearchParams {
folder,
entry_type,
name,
name_query: None,
tags,
query,
sort: "name",
limit: FETCH_ALL_LIMIT,
offset: 0,
user_id,
};
list_entries(pool, params).await
}
async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<Entry>> {
let (where_clause, idx) = entry_where_clause_and_next_idx(a);
let order = match a.sort {
"updated" => "updated_at DESC",
"created" => "created_at DESC",
@@ -121,40 +191,36 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
};
let limit_idx = idx;
idx += 1;
let offset_idx = idx;
let where_clause = if conditions.is_empty() {
String::new()
} else {
format!("WHERE {}", conditions.join(" AND "))
};
let offset_idx = idx + 1;
let sql = format!(
"SELECT id, user_id, \
namespace, kind, name, tags, metadata, version, created_at, updated_at \
"SELECT id, user_id, folder, type, name, notes, tags, metadata, version, \
created_at, updated_at \
FROM entries {where_clause} ORDER BY {order} LIMIT ${limit_idx} OFFSET ${offset_idx}"
);
let mut q = sqlx::query_as::<_, EntryRaw>(&sql);
if let Some(uid) = a.user_id {
q = q.bind(uid);
}
if let Some(v) = a.namespace {
if let Some(v) = a.folder {
q = q.bind(v);
}
if let Some(v) = a.kind {
if let Some(v) = a.entry_type {
q = q.bind(v);
}
if let Some(v) = a.name {
q = q.bind(v);
}
if let Some(v) = a.name_query {
let pattern = ilike_pattern(v);
q = q.bind(pattern);
}
for tag in a.tags {
q = q.bind(tag);
}
if let Some(v) = a.query {
let pattern = format!("%{}%", v.replace('%', "\\%").replace('_', "\\_"));
let pattern = ilike_pattern(v);
q = q.bind(pattern);
}
q = q.bind(a.limit as i64).bind(a.offset as i64);
@@ -171,8 +237,12 @@ pub async fn fetch_secret_schemas(
if entry_ids.is_empty() {
return Ok(HashMap::new());
}
let fields: Vec<SecretField> = sqlx::query_as(
"SELECT * FROM secrets WHERE entry_id = ANY($1) ORDER BY entry_id, field_name",
let fields: Vec<EntrySecretRow> = sqlx::query_as(
"SELECT es.entry_id, s.id, s.user_id, s.name, s.type, s.encrypted, s.version, s.created_at, s.updated_at \
FROM entry_secrets es \
JOIN secrets s ON s.id = es.secret_id \
WHERE es.entry_id = ANY($1) \
ORDER BY es.entry_id, es.sort_order, s.name",
)
.bind(entry_ids)
.fetch_all(pool)
@@ -180,7 +250,8 @@ pub async fn fetch_secret_schemas(
let mut map: HashMap<Uuid, Vec<SecretField>> = HashMap::new();
for f in fields {
map.entry(f.entry_id).or_default().push(f);
let entry_id = f.entry_id;
map.entry(entry_id).or_default().push(f.secret());
}
Ok(map)
}
@@ -193,8 +264,12 @@ pub async fn fetch_secrets_for_entries(
if entry_ids.is_empty() {
return Ok(HashMap::new());
}
let fields: Vec<SecretField> = sqlx::query_as(
"SELECT * FROM secrets WHERE entry_id = ANY($1) ORDER BY entry_id, field_name",
let fields: Vec<EntrySecretRow> = sqlx::query_as(
"SELECT es.entry_id, s.id, s.user_id, s.name, s.type, s.encrypted, s.version, s.created_at, s.updated_at \
FROM entry_secrets es \
JOIN secrets s ON s.id = es.secret_id \
WHERE es.entry_id = ANY($1) \
ORDER BY es.entry_id, es.sort_order, s.name",
)
.bind(entry_ids)
.fetch_all(pool)
@@ -202,20 +277,87 @@ pub async fn fetch_secrets_for_entries(
let mut map: HashMap<Uuid, Vec<SecretField>> = HashMap::new();
for f in fields {
map.entry(f.entry_id).or_default().push(f);
let entry_id = f.entry_id;
map.entry(entry_id).or_default().push(f.secret());
}
Ok(map)
}
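A minimal usage sketch of the grouped result (entry ids and names invented; the `&[Uuid]` slice parameter is assumed from the `.bind(entry_ids)` call above):

let by_entry = fetch_secrets_for_entries(&pool, &entry_ids).await?;
for (entry_id, fields) in &by_entry {
    // Within each entry, fields arrive ordered by entry_secrets.sort_order, then name.
    let names: Vec<&str> = fields.iter().map(|f| f.name.as_str()).collect();
    tracing::debug!(%entry_id, ?names, "linked secrets");
}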
// ── Internal raw row (because user_id is nullable in DB) ─────────────────────
/// Resolve exactly one entry by its UUID primary key.
///
/// Returns an error if the entry does not exist or does not belong to the given user.
pub async fn resolve_entry_by_id(
pool: &PgPool,
id: Uuid,
user_id: Option<Uuid>,
) -> Result<crate::models::Entry> {
let row: Option<EntryRaw> = if let Some(uid) = user_id {
sqlx::query_as(
"SELECT id, user_id, folder, type, name, notes, tags, metadata, version, \
created_at, updated_at FROM entries WHERE id = $1 AND user_id = $2",
)
.bind(id)
.bind(uid)
.fetch_optional(pool)
.await?
} else {
sqlx::query_as(
"SELECT id, user_id, folder, type, name, notes, tags, metadata, version, \
created_at, updated_at FROM entries WHERE id = $1 AND user_id IS NULL",
)
.bind(id)
.fetch_optional(pool)
.await?
};
row.map(Entry::from)
.ok_or_else(|| anyhow::anyhow!("Entry with id '{}' not found", id))
}
/// Resolve exactly one entry by name, with optional folder for disambiguation.
///
/// - If `folder` is provided: exact `(folder, name)` match.
/// - If `folder` is None and exactly one entry matches: returns it.
/// - If `folder` is None and multiple entries match: returns an error listing
/// the folders and asking the caller to specify one.
pub async fn resolve_entry(
pool: &PgPool,
name: &str,
folder: Option<&str>,
user_id: Option<Uuid>,
) -> Result<crate::models::Entry> {
let entries = fetch_entries(pool, folder, None, Some(name), &[], None, user_id).await?;
match entries.len() {
0 => {
if let Some(f) = folder {
anyhow::bail!("Not found: '{}' in folder '{}'", name, f)
} else {
anyhow::bail!("Not found: '{}'", name)
}
}
1 => Ok(entries.into_iter().next().unwrap()),
_ => {
let folders: Vec<&str> = entries.iter().map(|e| e.folder.as_str()).collect();
anyhow::bail!(
"Ambiguous: {} entries named '{}' found in folders: [{}]. \
Specify 'folder' to disambiguate.",
entries.len(),
name,
folders.join(", ")
)
}
}
}
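A hypothetical call-site sketch of the disambiguation rules (names and folders invented):

// Exact (folder, name) match:
let entry = resolve_entry(&pool, "db-password", Some("prod"), user_id).await?;
// Name-only lookup — succeeds iff the name is unique across folders:
match resolve_entry(&pool, "db-password", None, user_id).await {
    Ok(e) => tracing::info!(folder = %e.folder, "unique match"),
    // With copies in prod and staging this yields:
    //   Ambiguous: 2 entries named 'db-password' found in folders: [prod, staging].
    //   Specify 'folder' to disambiguate.
    Err(e) => tracing::warn!(error = %e, "need a folder"),
}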
// ── Internal raw row (because user_id is nullable in DB) ─────────────────────
#[derive(sqlx::FromRow)]
struct EntryRaw {
id: Uuid,
user_id: Option<Uuid>,
namespace: String,
kind: String,
folder: String,
#[sqlx(rename = "type")]
entry_type: String,
name: String,
notes: String,
tags: Vec<String>,
metadata: Value,
version: i64,
@@ -228,9 +370,10 @@ impl From<EntryRaw> for Entry {
Entry {
id: r.id,
user_id: r.user_id,
namespace: r.namespace,
kind: r.kind,
folder: r.folder,
entry_type: r.entry_type,
name: r.name,
notes: r.notes,
tags: r.tags,
metadata: r.metadata,
version: r.version,
@@ -239,3 +382,42 @@ impl From<EntryRaw> for Entry {
}
}
}
#[derive(sqlx::FromRow)]
struct EntrySecretRow {
entry_id: Uuid,
id: Uuid,
user_id: Option<Uuid>,
name: String,
#[sqlx(rename = "type")]
secret_type: String,
encrypted: Vec<u8>,
version: i64,
created_at: chrono::DateTime<chrono::Utc>,
updated_at: chrono::DateTime<chrono::Utc>,
}
impl EntrySecretRow {
fn secret(self) -> SecretField {
SecretField {
id: self.id,
user_id: self.user_id,
name: self.name,
secret_type: self.secret_type,
encrypted: self.encrypted,
version: self.version,
created_at: self.created_at,
updated_at: self.updated_at,
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn ilike_pattern_escapes_backslash_percent_and_underscore() {
assert_eq!(ilike_pattern(r"hello\_100%"), r"%hello\\\_100\%%");
}
}

View File

@@ -5,7 +5,8 @@ use uuid::Uuid;
use crate::crypto;
use crate::db;
use crate::models::EntryRow;
use crate::error::{AppError, DbErrorContext};
use crate::models::{EntryRow, EntryWriteRow};
use crate::service::add::{
collect_field_paths, collect_key_paths, flatten_json_fields, insert_path, parse_key_path,
parse_kv, remove_path,
@@ -13,27 +14,34 @@ use crate::service::add::{
#[derive(Debug, serde::Serialize)]
pub struct UpdateResult {
pub namespace: String,
pub kind: String,
pub name: String,
pub folder: String,
#[serde(rename = "type")]
pub entry_type: String,
pub add_tags: Vec<String>,
pub remove_tags: Vec<String>,
pub meta_keys: Vec<String>,
pub remove_meta: Vec<String>,
pub secret_keys: Vec<String>,
pub remove_secrets: Vec<String>,
pub linked_secrets: Vec<String>,
pub unlinked_secrets: Vec<String>,
}
pub struct UpdateParams<'a> {
pub namespace: &'a str,
pub kind: &'a str,
pub name: &'a str,
/// Optional folder for disambiguation when multiple entries share the same name.
pub folder: Option<&'a str>,
pub notes: Option<&'a str>,
pub add_tags: &'a [String],
pub remove_tags: &'a [String],
pub meta_entries: &'a [String],
pub remove_meta: &'a [String],
pub secret_entries: &'a [String],
pub secret_types: &'a std::collections::HashMap<String, String>,
pub remove_secrets: &'a [String],
pub link_secret_names: &'a [String],
pub unlink_secret_names: &'a [String],
pub user_id: Option<Uuid>,
}
@@ -44,45 +52,73 @@ pub async fn run(
) -> Result<UpdateResult> {
let mut tx = pool.begin().await?;
let row: Option<EntryRow> = if let Some(uid) = params.user_id {
// Fetch matching rows with FOR UPDATE; use folder when provided to resolve ambiguity.
let rows: Vec<EntryRow> = if let Some(uid) = params.user_id {
if let Some(folder) = params.folder {
sqlx::query_as(
"SELECT id, version, tags, metadata FROM entries \
WHERE user_id = $1 AND namespace = $2 AND kind = $3 AND name = $4 FOR UPDATE",
"SELECT id, version, folder, type, tags, metadata, notes FROM entries \
WHERE user_id = $1 AND folder = $2 AND name = $3 FOR UPDATE",
)
.bind(uid)
.bind(params.namespace)
.bind(params.kind)
.bind(folder)
.bind(params.name)
.fetch_optional(&mut *tx)
.fetch_all(&mut *tx)
.await?
} else {
sqlx::query_as(
"SELECT id, version, tags, metadata FROM entries \
WHERE user_id IS NULL AND namespace = $1 AND kind = $2 AND name = $3 FOR UPDATE",
"SELECT id, version, folder, type, tags, metadata, notes FROM entries \
WHERE user_id = $1 AND name = $2 FOR UPDATE",
)
.bind(params.namespace)
.bind(params.kind)
.bind(uid)
.bind(params.name)
.fetch_optional(&mut *tx)
.fetch_all(&mut *tx)
.await?
}
} else if let Some(folder) = params.folder {
sqlx::query_as(
"SELECT id, version, folder, type, tags, metadata, notes FROM entries \
WHERE user_id IS NULL AND folder = $1 AND name = $2 FOR UPDATE",
)
.bind(folder)
.bind(params.name)
.fetch_all(&mut *tx)
.await?
} else {
sqlx::query_as(
"SELECT id, version, folder, type, tags, metadata, notes FROM entries \
WHERE user_id IS NULL AND name = $1 FOR UPDATE",
)
.bind(params.name)
.fetch_all(&mut *tx)
.await?
};
let row = row.ok_or_else(|| {
anyhow::anyhow!(
"Not found: [{}/{}] {}. Use `add` to create it first.",
params.namespace,
params.kind,
params.name
let row = match rows.len() {
0 => {
tx.rollback().await?;
return Err(AppError::NotFoundEntry.into());
}
1 => rows.into_iter().next().unwrap(),
_ => {
tx.rollback().await?;
let folders: Vec<&str> = rows.iter().map(|r| r.folder.as_str()).collect();
anyhow::bail!(
"Ambiguous: {} entries named '{}' found in folders: [{}]. \
Specify 'folder' to disambiguate.",
rows.len(),
params.name,
folders.join(", ")
)
})?;
}
};
if let Err(e) = db::snapshot_entry_history(
&mut tx,
db::EntrySnapshotParams {
entry_id: row.id,
user_id: params.user_id,
namespace: params.namespace,
kind: params.kind,
folder: &row.folder,
entry_type: &row.entry_type,
name: params.name,
version: row.version,
action: "update",
@@ -117,12 +153,16 @@ pub async fn run(
}
let metadata = Value::Object(meta_map);
let new_notes = params.notes.unwrap_or(&row.notes);
let result = sqlx::query(
"UPDATE entries SET tags = $1, metadata = $2, version = version + 1, updated_at = NOW() \
WHERE id = $3 AND version = $4",
"UPDATE entries SET tags = $1, metadata = $2, notes = $3, \
version = version + 1, updated_at = NOW() \
WHERE id = $4 AND version = $5",
)
.bind(&tags)
.bind(&metadata)
.bind(new_notes)
.bind(row.id)
.bind(row.version)
.execute(&mut *tx)
@@ -130,16 +170,9 @@ pub async fn run(
if result.rows_affected() == 0 {
tx.rollback().await?;
anyhow::bail!(
"Concurrent modification detected for [{}/{}] {}. Please retry.",
params.namespace,
params.kind,
params.name
);
return Err(AppError::ConcurrentModification.into());
}
let new_version = row.version + 1;
for entry in params.secret_entries {
let (path, field_value) = parse_kv(entry)?;
let flat = flatten_json_fields("", &{
@@ -157,7 +190,10 @@ pub async fn run(
encrypted: Vec<u8>,
}
let ef: Option<ExistingField> = sqlx::query_as(
"SELECT id, encrypted FROM secrets WHERE entry_id = $1 AND field_name = $2",
"SELECT s.id, s.encrypted \
FROM entry_secrets es \
JOIN secrets s ON s.id = es.secret_id \
WHERE es.entry_id = $1 AND s.name = $2",
)
.bind(row.id)
.bind(field_name)
@@ -168,10 +204,8 @@ pub async fn run(
&& let Err(e) = db::snapshot_secret_history(
&mut tx,
db::SecretSnapshotParams {
entry_id: row.id,
secret_id: ef.id,
entry_version: row.version,
field_name,
name: field_name,
encrypted: &ef.encrypted,
action: "update",
},
@@ -181,16 +215,36 @@ pub async fn run(
tracing::warn!(error = %e, "failed to snapshot secret field history");
}
if let Some(ef) = ef {
sqlx::query(
"INSERT INTO secrets (entry_id, field_name, encrypted) VALUES ($1, $2, $3) \
ON CONFLICT (entry_id, field_name) DO UPDATE SET \
encrypted = EXCLUDED.encrypted, version = secrets.version + 1, updated_at = NOW()",
"UPDATE secrets SET encrypted = $1, version = version + 1, updated_at = NOW() WHERE id = $2",
)
.bind(row.id)
.bind(field_name)
.bind(&encrypted)
.bind(ef.id)
.execute(&mut *tx)
.await?;
} else {
let secret_type = params
.secret_types
.get(field_name)
.map(|s| s.as_str())
.unwrap_or("text");
let secret_id: Uuid = sqlx::query_scalar(
"INSERT INTO secrets (user_id, name, type, encrypted) VALUES ($1, $2, $3, $4) RETURNING id",
)
.bind(params.user_id)
.bind(field_name.to_string())
.bind(secret_type)
.bind(&encrypted)
.fetch_one(&mut *tx)
.await
.map_err(|e| AppError::from_db_error(e, DbErrorContext::secret_name(field_name)))?;
sqlx::query("INSERT INTO entry_secrets (entry_id, secret_id) VALUES ($1, $2)")
.bind(row.id)
.bind(secret_id)
.execute(&mut *tx)
.await?;
}
}
}
@@ -204,7 +258,10 @@ pub async fn run(
encrypted: Vec<u8>,
}
let field: Option<FieldToDelete> = sqlx::query_as(
"SELECT id, encrypted FROM secrets WHERE entry_id = $1 AND field_name = $2",
"SELECT s.id, s.encrypted \
FROM entry_secrets es \
JOIN secrets s ON s.id = es.secret_id \
WHERE es.entry_id = $1 AND s.name = $2",
)
.bind(row.id)
.bind(&field_name)
@@ -215,10 +272,8 @@ pub async fn run(
if let Err(e) = db::snapshot_secret_history(
&mut tx,
db::SecretSnapshotParams {
entry_id: row.id,
secret_id: f.id,
entry_version: new_version,
field_name: &field_name,
name: &field_name,
encrypted: &f.encrypted,
action: "delete",
},
@@ -227,10 +282,114 @@ pub async fn run(
{
tracing::warn!(error = %e, "failed to snapshot secret field history before delete");
}
sqlx::query("DELETE FROM secrets WHERE id = $1")
sqlx::query("DELETE FROM entry_secrets WHERE entry_id = $1 AND secret_id = $2")
.bind(row.id)
.bind(f.id)
.execute(&mut *tx)
.await?;
sqlx::query(
"DELETE FROM secrets s \
WHERE s.id = $1 \
AND NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = s.id)",
)
.bind(f.id)
.execute(&mut *tx)
.await?;
}
}
// Link existing secrets by name
let mut linked_secrets = Vec::new();
for link_name in params.link_secret_names {
let link_name = link_name.trim();
if link_name.is_empty() {
anyhow::bail!("link_secret_names contains an empty name");
}
let secret_ids: Vec<Uuid> = if let Some(uid) = params.user_id {
sqlx::query_scalar("SELECT id FROM secrets WHERE user_id = $1 AND name = $2")
.bind(uid)
.bind(link_name)
.fetch_all(&mut *tx)
.await?
} else {
sqlx::query_scalar("SELECT id FROM secrets WHERE user_id IS NULL AND name = $1")
.bind(link_name)
.fetch_all(&mut *tx)
.await?
};
match secret_ids.len() {
0 => anyhow::bail!("Not found: secret named '{}'", link_name),
1 => {
sqlx::query(
"INSERT INTO entry_secrets (entry_id, secret_id) VALUES ($1, $2) ON CONFLICT DO NOTHING",
)
.bind(row.id)
.bind(secret_ids[0])
.execute(&mut *tx)
.await?;
linked_secrets.push(link_name.to_string());
}
n => anyhow::bail!(
"Ambiguous: {} secrets named '{}' found. Please deduplicate names first.",
n,
link_name
),
}
}
// Unlink secrets by name
let mut unlinked_secrets = Vec::new();
for unlink_name in params.unlink_secret_names {
let unlink_name = unlink_name.trim();
if unlink_name.is_empty() {
continue;
}
#[derive(sqlx::FromRow)]
struct SecretToUnlink {
id: Uuid,
encrypted: Vec<u8>,
}
let secret: Option<SecretToUnlink> = sqlx::query_as(
"SELECT s.id, s.encrypted \
FROM entry_secrets es \
JOIN secrets s ON s.id = es.secret_id \
WHERE es.entry_id = $1 AND s.name = $2",
)
.bind(row.id)
.bind(unlink_name)
.fetch_optional(&mut *tx)
.await?;
if let Some(s) = secret {
if let Err(e) = db::snapshot_secret_history(
&mut tx,
db::SecretSnapshotParams {
secret_id: s.id,
name: unlink_name,
encrypted: &s.encrypted,
action: "delete",
},
)
.await
{
tracing::warn!(error = %e, "failed to snapshot secret field history before unlink");
}
sqlx::query("DELETE FROM entry_secrets WHERE entry_id = $1 AND secret_id = $2")
.bind(row.id)
.bind(s.id)
.execute(&mut *tx)
.await?;
sqlx::query(
"DELETE FROM secrets s \
WHERE s.id = $1 \
AND NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = s.id)",
)
.bind(s.id)
.execute(&mut *tx)
.await?;
unlinked_secrets.push(unlink_name.to_string());
}
}
@@ -243,8 +402,8 @@ pub async fn run(
&mut tx,
params.user_id,
"update",
params.namespace,
params.kind,
&row.folder,
&row.entry_type,
params.name,
serde_json::json!({
"add_tags": params.add_tags,
@@ -253,6 +412,8 @@ pub async fn run(
"remove_meta": remove_meta_keys,
"secret_keys": secret_keys,
"remove_secrets": remove_secret_keys,
"linked_secrets": linked_secrets,
"unlinked_secrets": unlinked_secrets,
}),
)
.await;
@@ -260,14 +421,134 @@ pub async fn run(
tx.commit().await?;
Ok(UpdateResult {
namespace: params.namespace.to_string(),
kind: params.kind.to_string(),
name: params.name.to_string(),
folder: row.folder.clone(),
entry_type: row.entry_type.clone(),
add_tags: params.add_tags.to_vec(),
remove_tags: params.remove_tags.to_vec(),
meta_keys,
remove_meta: remove_meta_keys,
secret_keys,
remove_secrets: remove_secret_keys,
linked_secrets,
unlinked_secrets,
})
}
/// Update non-sensitive entry columns by primary key (multi-tenant: `user_id` must match).
/// Does not read or modify `secrets` rows.
pub struct UpdateEntryFieldsByIdParams<'a> {
pub folder: &'a str,
pub entry_type: &'a str,
pub name: &'a str,
pub notes: &'a str,
pub tags: &'a [String],
pub metadata: &'a serde_json::Value,
}
pub async fn update_fields_by_id(
pool: &PgPool,
entry_id: Uuid,
user_id: Uuid,
params: UpdateEntryFieldsByIdParams<'_>,
) -> Result<()> {
if params.folder.chars().count() > 128 {
anyhow::bail!("folder must be at most 128 characters");
}
if params.entry_type.chars().count() > 64 {
anyhow::bail!("type must be at most 64 characters");
}
if params.name.chars().count() > 256 {
anyhow::bail!("name must be at most 256 characters");
}
let mut tx = pool.begin().await?;
let row: Option<EntryWriteRow> = sqlx::query_as(
"SELECT id, version, folder, type, name, tags, metadata, notes FROM entries \
WHERE id = $1 AND user_id = $2 FOR UPDATE",
)
.bind(entry_id)
.bind(user_id)
.fetch_optional(&mut *tx)
.await?;
let row = match row {
Some(r) => r,
None => {
tx.rollback().await?;
return Err(AppError::NotFoundEntry.into());
}
};
if let Err(e) = db::snapshot_entry_history(
&mut tx,
db::EntrySnapshotParams {
entry_id: row.id,
user_id: Some(user_id),
folder: &row.folder,
entry_type: &row.entry_type,
name: &row.name,
version: row.version,
action: "update",
tags: &row.tags,
metadata: &row.metadata,
},
)
.await
{
tracing::warn!(error = %e, "failed to snapshot entry history before web update");
}
let entry_type = params.entry_type.trim();
let res = sqlx::query(
"UPDATE entries SET folder = $1, type = $2, name = $3, notes = $4, tags = $5, metadata = $6, \
version = version + 1, updated_at = NOW() \
WHERE id = $7 AND version = $8",
)
.bind(params.folder)
.bind(entry_type)
.bind(params.name)
.bind(params.notes)
.bind(params.tags)
.bind(params.metadata)
.bind(row.id)
.bind(row.version)
.execute(&mut *tx)
.await
.map_err(|e| {
if let sqlx::Error::Database(ref d) = e
&& d.code().as_deref() == Some("23505")
{
return AppError::ConflictEntryName {
folder: params.folder.to_string(),
name: params.name.to_string(),
};
}
AppError::Internal(e.into())
})?;
if res.rows_affected() == 0 {
tx.rollback().await?;
return Err(AppError::ConcurrentModification.into());
}
crate::audit::log_tx(
&mut tx,
Some(user_id),
"update",
params.folder,
entry_type,
params.name,
serde_json::json!({
"source": "web",
"entry_id": entry_id,
"fields": ["folder", "type", "name", "notes", "tags", "metadata"],
}),
)
.await;
tx.commit().await?;
Ok(())
}
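A minimal sketch of the web-edit call path (all field values are placeholders):

update_fields_by_id(
    &pool,
    entry_id,
    user_id,
    UpdateEntryFieldsByIdParams {
        folder: "prod",
        entry_type: "login",
        name: "db-password",
        notes: "rotated on 2026-04-05",
        tags: &["db".to_string()],
        metadata: &serde_json::json!({ "host": "db.internal" }),
    },
)
.await?;
// A duplicate (folder, name) surfaces as AppError::ConflictEntryName (SQLSTATE 23505);
// a concurrent bump of `version` surfaces as AppError::ConcurrentModification.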

View File

@@ -16,14 +16,17 @@ pub struct OAuthProfile {
/// Find or create a user from an OAuth profile.
/// Returns (user, is_new) where is_new indicates first-time registration.
pub async fn find_or_create_user(pool: &PgPool, profile: OAuthProfile) -> Result<(User, bool)> {
// Check if this OAuth account already exists
// Use a transaction with FOR UPDATE to prevent TOCTOU race conditions
let mut tx = pool.begin().await?;
// Check if this OAuth account already exists (with row lock)
let existing: Option<OauthAccount> = sqlx::query_as(
"SELECT id, user_id, provider, provider_id, email, name, avatar_url, created_at \
FROM oauth_accounts WHERE provider = $1 AND provider_id = $2",
FROM oauth_accounts WHERE provider = $1 AND provider_id = $2 FOR UPDATE",
)
.bind(&profile.provider)
.bind(&profile.provider_id)
.fetch_optional(pool)
.fetch_optional(&mut *tx)
.await?;
if let Some(oa) = existing {
@@ -32,8 +35,9 @@ pub async fn find_or_create_user(pool: &PgPool, profile: OAuthProfile) -> Result
FROM users WHERE id = $1",
)
.bind(oa.user_id)
.fetch_one(pool)
.fetch_one(&mut *tx)
.await?;
tx.commit().await?;
return Ok((user, false));
}
@@ -43,8 +47,6 @@ pub async fn find_or_create_user(pool: &PgPool, profile: OAuthProfile) -> Result
.clone()
.unwrap_or_else(|| profile.email.clone().unwrap_or_else(|| "User".to_string()));
let mut tx = pool.begin().await?;
let user: User = sqlx::query_as(
"INSERT INTO users (email, name, avatar_url) \
VALUES ($1, $2, $3) \
@@ -125,13 +127,16 @@ pub async fn bind_oauth_account(
user_id: Uuid,
profile: OAuthProfile,
) -> Result<OauthAccount> {
// Check if this provider_id is already linked to someone else
// Use a transaction with FOR UPDATE to prevent TOCTOU race conditions
let mut tx = pool.begin().await?;
// Check if this provider_id is already linked to someone else (with row lock)
let conflict: Option<(Uuid,)> = sqlx::query_as(
"SELECT user_id FROM oauth_accounts WHERE provider = $1 AND provider_id = $2",
"SELECT user_id FROM oauth_accounts WHERE provider = $1 AND provider_id = $2 FOR UPDATE",
)
.bind(&profile.provider)
.bind(&profile.provider_id)
.fetch_optional(pool)
.fetch_optional(&mut *tx)
.await?;
if let Some((existing_user_id,)) = conflict {
@@ -148,11 +153,11 @@ pub async fn bind_oauth_account(
}
let existing_provider_for_user: Option<(String,)> = sqlx::query_as(
"SELECT provider_id FROM oauth_accounts WHERE user_id = $1 AND provider = $2",
"SELECT provider_id FROM oauth_accounts WHERE user_id = $1 AND provider = $2 FOR UPDATE",
)
.bind(user_id)
.bind(&profile.provider)
.fetch_optional(pool)
.fetch_optional(&mut *tx)
.await?;
if existing_provider_for_user.is_some() {
@@ -174,9 +179,10 @@ pub async fn bind_oauth_account(
.bind(&profile.email)
.bind(&profile.name)
.bind(&profile.avatar_url)
.fetch_one(pool)
.fetch_one(&mut *tx)
.await?;
tx.commit().await?;
Ok(account)
}
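The TOCTOU fix in both functions reduces to the same shape — read with a row lock and write inside one transaction; a stripped-down sketch (bindings illustrative):

let mut tx = pool.begin().await?;
let existing: Option<(Uuid,)> = sqlx::query_as(
    "SELECT user_id FROM oauth_accounts WHERE provider = $1 AND provider_id = $2 FOR UPDATE",
)
.bind(&profile.provider)
.bind(&profile.provider_id)
.fetch_optional(&mut *tx)
.await?;
// …decide, then INSERT/UPDATE against &mut *tx…
tx.commit().await?; // the lock is held until here, so a racing bind cannot interleave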

View File

@@ -0,0 +1,4 @@
/// Canonical secret type options for UI dropdowns.
pub const SECRET_TYPE_OPTIONS: &[&str] = &[
"text", "password", "token", "api-key", "ssh-key", "url", "phone", "id-card",
];

View File

@@ -1,6 +1,6 @@
[package]
name = "secrets-mcp"
version = "0.2.2"
version = "0.5.5"
edition.workspace = true
[[bin]]
@@ -17,9 +17,10 @@ rmcp = { version = "1", features = ["server", "macros", "transport-streamable-ht
axum = "0.8"
axum-extra = { version = "0.10", features = ["typed-header"] }
tower = "0.5"
tower-http = { version = "0.6", features = ["cors", "trace"] }
tower-http = { version = "0.6", features = ["cors", "trace", "limit"] }
tower-sessions = "0.14"
tower-sessions-sqlx-store-chrono = { version = "0.14", features = ["postgres"] }
governor = { version = "0.10", features = ["std", "jitter"] }
time = "0.3"
# OAuth (manual token exchange via reqwest)
@@ -44,3 +45,4 @@ dotenvy.workspace = true
urlencoding = "2"
schemars = "1"
http = "1"
url = "2"

View File

@@ -1,7 +1,5 @@
use std::net::SocketAddr;
use axum::{
extract::{ConnectInfo, Request, State},
extract::{Request, State},
http::StatusCode,
middleware::Next,
response::Response,
@@ -11,29 +9,14 @@ use uuid::Uuid;
use secrets_core::service::api_key::validate_api_key;
use crate::client_ip;
/// Injected into request extensions after Bearer token validation.
#[derive(Clone, Debug)]
pub struct AuthUser {
pub user_id: Uuid,
}
fn log_client_ip(req: &Request) -> Option<String> {
if let Some(first) = req
.headers()
.get("x-forwarded-for")
.and_then(|v| v.to_str().ok())
.and_then(|s| s.split(',').next())
{
let s = first.trim();
if !s.is_empty() {
return Some(s.to_string());
}
}
req.extensions()
.get::<ConnectInfo<SocketAddr>>()
.map(|c| c.ip().to_string())
}
/// Axum middleware that validates Bearer API keys for the /mcp route.
/// Passes all non-MCP paths through without authentication.
pub async fn bearer_auth_middleware(
@@ -43,7 +26,7 @@ pub async fn bearer_auth_middleware(
) -> Result<Response, StatusCode> {
let path = req.uri().path();
let method = req.method().as_str();
let client_ip = log_client_ip(&req);
let client_ip = client_ip::extract_client_ip(&req);
// Only authenticate /mcp paths
if !path.starts_with("/mcp") {
@@ -66,7 +49,7 @@ pub async fn bearer_auth_middleware(
tracing::warn!(
method,
path,
client_ip = client_ip.as_deref(),
%client_ip,
"invalid Authorization header format on /mcp (expected Bearer …)"
);
return Err(StatusCode::UNAUTHORIZED);
@@ -75,7 +58,7 @@ pub async fn bearer_auth_middleware(
tracing::warn!(
method,
path,
client_ip = client_ip.as_deref(),
%client_ip,
"missing Authorization header on /mcp"
);
return Err(StatusCode::UNAUTHORIZED);
@@ -93,7 +76,7 @@ pub async fn bearer_auth_middleware(
tracing::warn!(
method,
path,
client_ip = client_ip.as_deref(),
%client_ip,
key_prefix = %&raw_key.chars().take(12).collect::<String>(),
key_len = raw_key.len(),
"invalid api key (not found in database — e.g. revoked key or DB was reset; update MCP client Bearer token)"
@@ -104,7 +87,7 @@ pub async fn bearer_auth_middleware(
tracing::error!(
method,
path,
client_ip = client_ip.as_deref(),
%client_ip,
error = %e,
"api key validation error"
);

View File

@@ -0,0 +1,65 @@
use axum::extract::Request;
use std::net::{IpAddr, SocketAddr};
/// Extract the client IP from a request.
///
/// When the `TRUST_PROXY` environment variable is set to `1` or `true`, the
/// `X-Forwarded-For` and `X-Real-IP` headers are consulted first, which is
/// appropriate when the service runs behind a trusted reverse proxy (e.g.
/// Caddy). Otherwise — or if those headers are absent/empty — the direct TCP
/// connection address from `ConnectInfo` is used.
///
/// **Important**: only enable `TRUST_PROXY` when the application is guaranteed
/// to receive traffic exclusively through a controlled reverse proxy. Enabling
/// it on a directly-exposed port allows clients to spoof their IP address and
/// bypass per-IP rate limiting.
pub fn extract_client_ip(req: &Request) -> String {
if trust_proxy_enabled() {
if let Some(ip) = forwarded_for_ip(req.headers()) {
return ip;
}
if let Some(ip) = real_ip(req.headers()) {
return ip;
}
}
connect_info_ip(req).unwrap_or_else(|| "unknown".to_string())
}
fn trust_proxy_enabled() -> bool {
static CACHE: std::sync::OnceLock<bool> = std::sync::OnceLock::new();
*CACHE.get_or_init(|| {
matches!(
std::env::var("TRUST_PROXY").as_deref(),
Ok("1") | Ok("true") | Ok("yes")
)
})
}
fn forwarded_for_ip(headers: &axum::http::HeaderMap) -> Option<String> {
let value = headers.get("x-forwarded-for")?.to_str().ok()?;
let first = value.split(',').next()?.trim();
if first.is_empty() {
None
} else {
validate_ip(first)
}
}
fn real_ip(headers: &axum::http::HeaderMap) -> Option<String> {
let value = headers.get("x-real-ip")?.to_str().ok()?;
let ip = value.trim();
if ip.is_empty() { None } else { validate_ip(ip) }
}
/// Validate that a string is a valid IP address.
/// Returns Some(ip) if valid, None otherwise.
fn validate_ip(s: &str) -> Option<String> {
s.parse::<IpAddr>().ok().map(|ip| ip.to_string())
}
fn connect_info_ip(req: &Request) -> Option<String> {
req.extensions()
.get::<axum::extract::ConnectInfo<SocketAddr>>()
.map(|c| c.0.ip().to_string())
}
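A sketch of the non-proxy default (addresses invented); note that `trust_proxy_enabled` caches its answer in a `OnceLock`, so the mode is fixed for the process lifetime:

use axum::{body::Body, extract::ConnectInfo};
use std::net::SocketAddr;

let mut req = axum::extract::Request::new(Body::empty());
req.headers_mut()
    .insert("x-forwarded-for", "203.0.113.7, 10.0.0.2".parse().unwrap());
req.extensions_mut()
    .insert(ConnectInfo("192.0.2.1:9000".parse::<SocketAddr>().unwrap()));
// With TRUST_PROXY unset, the spoofable header is ignored:
assert_eq!(extract_client_ip(&req), "192.0.2.1");
// With TRUST_PROXY=1 the same request would resolve to "203.0.113.7".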

View File

@@ -0,0 +1,54 @@
use secrets_core::error::AppError;
/// Map a structured `AppError` to an MCP protocol error.
///
/// This replaces the previous pattern of swallowing all errors into `-32603`.
pub fn app_error_to_mcp(err: &AppError) -> rmcp::ErrorData {
match err {
AppError::ConflictSecretName { secret_name } => rmcp::ErrorData::invalid_request(
format!(
"A secret with the name '{secret_name}' already exists for your account. \
Secret names must be unique per user."
),
None,
),
AppError::ConflictEntryName { folder, name } => rmcp::ErrorData::invalid_request(
format!(
"An entry with folder='{folder}' and name='{name}' already exists. \
The combination of folder and name must be unique."
),
None,
),
AppError::NotFoundEntry => rmcp::ErrorData::invalid_request(
"Entry not found. Use secrets_find to discover existing entries.",
None,
),
AppError::NotFoundUser => rmcp::ErrorData::invalid_request("User not found.", None),
AppError::NotFoundSecret => rmcp::ErrorData::invalid_request("Secret not found.", None),
AppError::AuthenticationFailed => rmcp::ErrorData::invalid_request(
"Authentication failed. Please check your API key or login credentials.",
None,
),
AppError::Unauthorized => rmcp::ErrorData::invalid_request(
"Unauthorized: you do not have permission to access this resource.",
None,
),
AppError::Validation { message } => rmcp::ErrorData::invalid_request(message.clone(), None),
AppError::ConcurrentModification => rmcp::ErrorData::invalid_request(
"The entry was modified by another request. Please refresh and try again.",
None,
),
AppError::DecryptionFailed => rmcp::ErrorData::invalid_request(
"Decryption failed — the encryption key may be incorrect or does not match the data.",
None,
),
AppError::EncryptionKeyNotSet => rmcp::ErrorData::invalid_request(
"Encryption key not set. You must set a passphrase before using this feature.",
None,
),
AppError::Internal(_) => rmcp::ErrorData::internal_error(
"Request failed due to a server error. Check service logs if you need details.",
None,
),
}
}
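A sketch of a tool handler using the mapping (the handler shape, the module path, and the `downcast_ref` step are illustrative — `resolve_entry` returns `anyhow::Result`, so the structured error has to be recovered first):

let entry = secrets_core::service::search::resolve_entry(&pool, &name, folder.as_deref(), user_id)
    .await
    .map_err(|e| match e.downcast_ref::<AppError>() {
        Some(app) => app_error_to_mcp(app),
        None => rmcp::ErrorData::internal_error("request failed", None),
    })?;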

View File

@@ -1,7 +1,11 @@
mod auth;
mod client_ip;
mod error;
mod logging;
mod oauth;
mod rate_limit;
mod tools;
mod validation;
mod web;
use std::net::SocketAddr;
@@ -21,7 +25,7 @@ use tower_sessions_sqlx_store_chrono::PostgresStore;
use tracing_subscriber::EnvFilter;
use tracing_subscriber::fmt::time::FormatTime;
use secrets_core::config::resolve_db_url;
use secrets_core::config::resolve_db_config;
use secrets_core::db::{create_pool, migrate};
use crate::oauth::OAuthConfig;
@@ -40,6 +44,14 @@ fn load_env_var(name: &str) -> Option<String> {
std::env::var(name).ok().filter(|s| !s.is_empty())
}
/// Pretty-print bind address in logs (`127.0.0.1` → `localhost`); actual socket bind unchanged.
fn listen_addr_log_display(bind_addr: &str) -> String {
bind_addr
.strip_prefix("127.0.0.1:")
.map(|port| format!("localhost:{port}"))
.unwrap_or_else(|| bind_addr.to_string())
}
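Two illustrative cases — the rewrite is purely cosmetic for the log line:

assert_eq!(listen_addr_log_display("127.0.0.1:9315"), "localhost:9315");
assert_eq!(listen_addr_log_display("0.0.0.0:9315"), "0.0.0.0:9315"); // unchanged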
fn load_oauth_config(prefix: &str, base_url: &str, path: &str) -> Option<OAuthConfig> {
let client_id = load_env_var(&format!("{}_CLIENT_ID", prefix))?;
let client_secret = load_env_var(&format!("{}_CLIENT_SECRET", prefix))?;
@@ -78,9 +90,9 @@ async fn main() -> Result<()> {
.init();
// ── Database ──────────────────────────────────────────────────────────────
let db_url = resolve_db_url("")
let db_config = resolve_db_config("")
.context("Database not configured. Set SECRETS_DATABASE_URL environment variable.")?;
let pool = create_pool(&db_url)
let pool = create_pool(&db_config)
.await
.context("failed to connect to database")?;
migrate(&pool)
@@ -144,10 +156,20 @@ async fn main() -> Result<()> {
);
// ── Router ────────────────────────────────────────────────────────────────
let cors = CorsLayer::new()
.allow_origin(Any)
.allow_methods(Any)
.allow_headers(Any);
// CORS: restrict origins in production, allow all in development
let is_production = matches!(
load_env_var("SECRETS_ENV")
.as_deref()
.map(|s| s.to_ascii_lowercase())
.as_deref(),
Some("prod" | "production")
);
let cors = build_cors_layer(&base_url, is_production);
// Rate limiting
let rate_limit_state = rate_limit::RateLimitState::new();
let rate_limit_cleanup = rate_limit::spawn_cleanup_task(rate_limit_state.ip_limiter.clone());
let router = Router::new()
.merge(web::web_router())
@@ -159,6 +181,10 @@ async fn main() -> Result<()> {
pool,
auth::bearer_auth_middleware,
))
.layer(axum::middleware::from_fn_with_state(
rate_limit_state.clone(),
rate_limit::rate_limit_middleware,
))
.layer(session_layer)
.layer(cors)
.with_state(app_state);
@@ -168,7 +194,10 @@ async fn main() -> Result<()> {
.await
.with_context(|| format!("failed to bind to {}", bind_addr))?;
tracing::info!("Secrets MCP Server listening on http://{}", bind_addr);
tracing::info!(
"Secrets MCP Server listening on http://{}",
listen_addr_log_display(&bind_addr)
);
tracing::info!("MCP endpoint: {}/mcp", base_url);
axum::serve(
@@ -180,12 +209,135 @@ async fn main() -> Result<()> {
.context("server error")?;
session_cleanup.abort();
rate_limit_cleanup.abort();
Ok(())
}
async fn shutdown_signal() {
tokio::signal::ctrl_c()
.await
.expect("failed to install CTRL+C signal handler");
let ctrl_c = tokio::signal::ctrl_c();
#[cfg(unix)]
let terminate = async {
tokio::signal::unix::signal(tokio::signal::unix::SignalKind::terminate())
.expect("failed to install SIGTERM handler")
.recv()
.await;
};
#[cfg(not(unix))]
let terminate = std::future::pending::<()>();
tokio::select! {
_ = ctrl_c => {},
_ = terminate => {},
}
tracing::info!("Shutting down gracefully...");
}
/// Production CORS allowed headers.
///
/// When adding a new custom header to the MCP or Web API, this list must be
/// updated accordingly — otherwise browsers will block the request during
/// the CORS preflight check.
fn production_allowed_headers() -> [axum::http::HeaderName; 5] {
[
axum::http::header::AUTHORIZATION,
axum::http::header::CONTENT_TYPE,
axum::http::HeaderName::from_static("x-encryption-key"),
axum::http::HeaderName::from_static("mcp-session-id"),
axum::http::HeaderName::from_static("x-mcp-session"),
]
}
/// Production CORS allowed methods.
///
/// Keep this list explicit because tower-http rejects
/// `allow_credentials(true)` together with `allow_methods(Any)`.
fn production_allowed_methods() -> [axum::http::Method; 5] {
[
axum::http::Method::GET,
axum::http::Method::POST,
axum::http::Method::PATCH,
axum::http::Method::DELETE,
axum::http::Method::OPTIONS,
]
}
/// Build the CORS layer for the application.
///
/// In production mode the origin is restricted to the BASE_URL origin
/// (scheme://host:port, path stripped) and credentials are allowed.
/// `allow_headers` and `allow_methods` use explicit whitelists to avoid the
/// tower-http restriction on `allow_credentials(true)` + wildcards.
///
/// In development mode all origins, methods and headers are allowed.
fn build_cors_layer(base_url: &str, is_production: bool) -> CorsLayer {
if is_production {
let allowed_origin = if let Ok(parsed) = base_url.parse::<url::Url>() {
let origin = parsed.origin().ascii_serialization();
origin
.parse::<axum::http::HeaderValue>()
.unwrap_or_else(|_| panic!("invalid BASE_URL origin: {}", origin))
} else {
base_url
.parse::<axum::http::HeaderValue>()
.unwrap_or_else(|_| panic!("invalid BASE_URL: {}", base_url))
};
CorsLayer::new()
.allow_origin(allowed_origin)
.allow_methods(production_allowed_methods())
.allow_headers(production_allowed_headers())
.allow_credentials(true)
} else {
CorsLayer::new()
.allow_origin(Any)
.allow_methods(Any)
.allow_headers(Any)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn production_cors_does_not_panic() {
let layer = build_cors_layer("https://secrets.example.com/app", true);
let _ = layer;
}
#[test]
fn production_cors_headers_include_all_required() {
let headers = production_allowed_headers();
let names: Vec<&str> = headers.iter().map(|h| h.as_str()).collect();
assert!(names.contains(&"authorization"));
assert!(names.contains(&"content-type"));
assert!(names.contains(&"x-encryption-key"));
assert!(names.contains(&"mcp-session-id"));
assert!(names.contains(&"x-mcp-session"));
}
#[test]
fn production_cors_methods_include_all_required() {
let methods = production_allowed_methods();
assert!(methods.contains(&axum::http::Method::GET));
assert!(methods.contains(&axum::http::Method::POST));
assert!(methods.contains(&axum::http::Method::PATCH));
assert!(methods.contains(&axum::http::Method::DELETE));
assert!(methods.contains(&axum::http::Method::OPTIONS));
}
#[test]
fn production_cors_normalizes_base_url_with_path() {
let url = url::Url::parse("https://secrets.example.com/secrets/app").unwrap();
let origin = url.origin().ascii_serialization();
assert_eq!(origin, "https://secrets.example.com");
}
#[test]
fn development_cors_allows_everything() {
let layer = build_cors_layer("http://localhost:9315", false);
let _ = layer;
}
}

View File

@@ -0,0 +1,160 @@
use std::num::NonZeroU32;
use std::sync::Arc;
use std::time::Duration;
use axum::{
extract::{Request, State},
http::{HeaderMap, HeaderValue, StatusCode},
middleware::Next,
response::{IntoResponse, Response},
};
use governor::{
Quota, RateLimiter,
clock::{Clock, DefaultClock},
state::{InMemoryState, NotKeyed, keyed::DashMapStateStore},
};
use serde_json::json;
use crate::client_ip;
/// Per-IP rate limiter (keyed by client IP string)
type IpRateLimiter = RateLimiter<String, DashMapStateStore<String>, DefaultClock>;
/// Global rate limiter (not keyed)
type GlobalRateLimiter = RateLimiter<NotKeyed, InMemoryState, DefaultClock>;
/// Parse a u32 env value into NonZeroU32, logging a warning and falling back
/// to the default if the value is zero.
fn nz_or_log(value: u32, default: u32, name: &str) -> NonZeroU32 {
NonZeroU32::new(value).unwrap_or_else(|| {
tracing::warn!(
configured = value,
default,
"{name} must be non-zero, using default"
);
NonZeroU32::new(default).unwrap()
})
}
#[derive(Clone)]
pub struct RateLimitState {
pub ip_limiter: Arc<IpRateLimiter>,
pub global_limiter: Arc<GlobalRateLimiter>,
}
impl RateLimitState {
/// Create a new RateLimitState with default limits.
///
/// Default limits (can be overridden via environment variables):
/// - Global: 100 req/s, burst 200
/// - Per-IP: 20 req/s, burst 40
pub fn new() -> Self {
let global_rate = std::env::var("RATE_LIMIT_GLOBAL_PER_SECOND")
.ok()
.and_then(|v| v.parse::<u32>().ok())
.unwrap_or(100);
let global_burst = std::env::var("RATE_LIMIT_GLOBAL_BURST")
.ok()
.and_then(|v| v.parse::<u32>().ok())
.unwrap_or(200);
let ip_rate = std::env::var("RATE_LIMIT_IP_PER_SECOND")
.ok()
.and_then(|v| v.parse::<u32>().ok())
.unwrap_or(20);
let ip_burst = std::env::var("RATE_LIMIT_IP_BURST")
.ok()
.and_then(|v| v.parse::<u32>().ok())
.unwrap_or(40);
let global_rate_nz = nz_or_log(global_rate, 100, "RATE_LIMIT_GLOBAL_PER_SECOND");
let global_burst_nz = nz_or_log(global_burst, 200, "RATE_LIMIT_GLOBAL_BURST");
let ip_rate_nz = nz_or_log(ip_rate, 20, "RATE_LIMIT_IP_PER_SECOND");
let ip_burst_nz = nz_or_log(ip_burst, 40, "RATE_LIMIT_IP_BURST");
let global_quota = Quota::per_second(global_rate_nz).allow_burst(global_burst_nz);
let ip_quota = Quota::per_second(ip_rate_nz).allow_burst(ip_burst_nz);
tracing::info!(
global_rate = global_rate_nz.get(),
global_burst = global_burst_nz.get(),
ip_rate = ip_rate_nz.get(),
ip_burst = ip_burst_nz.get(),
"rate limiter initialized"
);
Self {
global_limiter: Arc::new(RateLimiter::direct(global_quota)),
ip_limiter: Arc::new(RateLimiter::dashmap(ip_quota)),
}
}
}
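A construction sketch — quotas are read from the environment once, so overrides must be in place before `new()` runs (values below are the defaults):

// RATE_LIMIT_GLOBAL_PER_SECOND=100  RATE_LIMIT_GLOBAL_BURST=200
// RATE_LIMIT_IP_PER_SECOND=20       RATE_LIMIT_IP_BURST=40
let state = RateLimitState::new();
let cleanup = spawn_cleanup_task(state.ip_limiter.clone()); // prunes idle IPs every 60s
// …serve…
cleanup.abort(); // mirrors the shutdown path in main()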
/// Rate limiting middleware function.
///
/// Checks both global and per-IP rate limits before allowing the request through.
/// Returns 429 Too Many Requests if either limit is exceeded.
pub async fn rate_limit_middleware(
State(rl): State<RateLimitState>,
req: Request,
next: Next,
) -> Result<Response, Response> {
// Check global rate limit first
if let Err(negative) = rl.global_limiter.check() {
let retry_after = negative.wait_time_from(DefaultClock::default().now());
tracing::warn!(
retry_after_secs = retry_after.as_secs(),
"global rate limit exceeded"
);
return Err(too_many_requests_response(Some(retry_after)));
}
// Check per-IP rate limit
let key = client_ip::extract_client_ip(&req);
if let Err(negative) = rl.ip_limiter.check_key(&key) {
let retry_after = negative.wait_time_from(DefaultClock::default().now());
tracing::warn!(
client_ip = %key,
retry_after_secs = retry_after.as_secs(),
"per-IP rate limit exceeded"
);
return Err(too_many_requests_response(Some(retry_after)));
}
Ok(next.run(req).await)
}
/// Start a background task to clean up expired rate limiter entries.
///
/// This should be called once during application startup.
/// The task runs every 60 seconds and will be aborted on shutdown.
pub fn spawn_cleanup_task(ip_limiter: Arc<IpRateLimiter>) -> tokio::task::JoinHandle<()> {
tokio::spawn(async move {
let mut interval = tokio::time::interval(Duration::from_secs(60));
loop {
interval.tick().await;
ip_limiter.retain_recent();
}
})
}
/// Create a 429 Too Many Requests response.
fn too_many_requests_response(retry_after: Option<Duration>) -> Response {
let mut headers = HeaderMap::new();
headers.insert("Content-Type", HeaderValue::from_static("application/json"));
if let Some(duration) = retry_after {
let secs = duration.as_secs().max(1);
if let Ok(value) = HeaderValue::from_str(&secs.to_string()) {
headers.insert("Retry-After", value);
}
}
let body = json!({
"error": "Too many requests, please try again later"
});
(StatusCode::TOO_MANY_REQUESTS, headers, body.to_string()).into_response()
}

File diff suppressed because it is too large.

View File

@@ -0,0 +1,149 @@
/// Validation constants for input field lengths.
pub const MAX_NAME_LENGTH: usize = 256;
pub const MAX_FOLDER_LENGTH: usize = 128;
pub const MAX_ENTRY_TYPE_LENGTH: usize = 64;
pub const MAX_NOTES_LENGTH: usize = 10000;
pub const MAX_TAG_LENGTH: usize = 64;
pub const MAX_TAG_COUNT: usize = 50;
pub const MAX_META_KEY_LENGTH: usize = 128;
pub const MAX_META_VALUE_LENGTH: usize = 4096;
pub const MAX_META_COUNT: usize = 100;
/// Validate input field lengths for MCP tools.
///
/// Returns an error if any field exceeds its maximum length.
pub fn validate_input_lengths(
name: &str,
folder: Option<&str>,
entry_type: Option<&str>,
notes: Option<&str>,
) -> Result<(), rmcp::ErrorData> {
if name.chars().count() > MAX_NAME_LENGTH {
return Err(rmcp::ErrorData::invalid_params(
format!("name must be at most {} characters", MAX_NAME_LENGTH),
None,
));
}
if let Some(folder) = folder
&& folder.chars().count() > MAX_FOLDER_LENGTH
{
return Err(rmcp::ErrorData::invalid_params(
format!("folder must be at most {} characters", MAX_FOLDER_LENGTH),
None,
));
}
if let Some(entry_type) = entry_type
&& entry_type.chars().count() > MAX_ENTRY_TYPE_LENGTH
{
return Err(rmcp::ErrorData::invalid_params(
format!("type must be at most {} characters", MAX_ENTRY_TYPE_LENGTH),
None,
));
}
if let Some(notes) = notes
&& notes.chars().count() > MAX_NOTES_LENGTH
{
return Err(rmcp::ErrorData::invalid_params(
format!("notes must be at most {} characters", MAX_NOTES_LENGTH),
None,
));
}
Ok(())
}
/// Validate the tags list.
///
/// Checks total count and per-tag character length.
pub fn validate_tags(tags: &[String]) -> Result<(), rmcp::ErrorData> {
if tags.len() > MAX_TAG_COUNT {
return Err(rmcp::ErrorData::invalid_params(
format!("at most {} tags are allowed", MAX_TAG_COUNT),
None,
));
}
for tag in tags {
if tag.chars().count() > MAX_TAG_LENGTH {
return Err(rmcp::ErrorData::invalid_params(
format!(
"tag '{}' exceeds the maximum length of {} characters",
tag, MAX_TAG_LENGTH
),
None,
));
}
}
Ok(())
}
/// Validate metadata KV strings (key=value / key:=json format).
///
/// Checks total count and per-key/per-value character lengths.
/// This is a best-effort check on the raw KV strings before parsing;
/// keys containing `:` path separators are checked as a whole.
pub fn validate_meta_entries(entries: &[String]) -> Result<(), rmcp::ErrorData> {
if entries.len() > MAX_META_COUNT {
return Err(rmcp::ErrorData::invalid_params(
format!("at most {} metadata entries are allowed", MAX_META_COUNT),
None,
));
}
for entry in entries {
// key:=json — check both key and JSON value length
if let Some((key, value)) = entry.split_once(":=") {
if key.chars().count() > MAX_META_KEY_LENGTH {
return Err(rmcp::ErrorData::invalid_params(
format!(
"metadata key '{}' exceeds the maximum length of {} characters",
key, MAX_META_KEY_LENGTH
),
None,
));
}
if value.chars().count() > MAX_META_VALUE_LENGTH {
return Err(rmcp::ErrorData::invalid_params(
format!(
"metadata JSON value for key '{}' exceeds the maximum length of {} characters",
key, MAX_META_VALUE_LENGTH
),
None,
));
}
continue;
}
// key=value or key@path
if let Some((key, value)) = entry.split_once('=') {
if key.chars().count() > MAX_META_KEY_LENGTH {
return Err(rmcp::ErrorData::invalid_params(
format!(
"metadata key '{}' exceeds the maximum length of {} characters",
key, MAX_META_KEY_LENGTH
),
None,
));
}
if value.chars().count() > MAX_META_VALUE_LENGTH {
return Err(rmcp::ErrorData::invalid_params(
format!(
"metadata value for key '{}' exceeds the maximum length of {} characters",
key, MAX_META_VALUE_LENGTH
),
None,
));
}
} else {
// Fallback: entry without = or := — check total length
let max_total = MAX_META_KEY_LENGTH + MAX_META_VALUE_LENGTH;
if entry.chars().count() > max_total {
let preview = entry.chars().take(50).collect::<String>();
return Err(rmcp::ErrorData::invalid_params(
format!(
"metadata entry '{}' exceeds the maximum length of {} characters",
preview, max_total
),
None,
));
}
}
}
Ok(())
}
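A hypothetical pre-flight for an MCP add/update call, exercising the three accepted metadata shapes (all values invented):

validate_input_lengths("db-password", Some("prod"), Some("login"), Some("short note"))?;
validate_tags(&["db".to_string(), "prod".to_string()])?;
validate_meta_entries(&[
    "host=db.internal".to_string(),    // key=value
    "ports:=[5432, 5433]".to_string(), // key:=json — value length checked as raw JSON text
    "plain-note".to_string(),          // neither form — checked against key+value total max
])?;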

File diff suppressed because it is too large.

View File

@@ -38,6 +38,10 @@
}
.topbar-spacer { flex: 1; }
.nav-user { font-size: 13px; color: var(--text-muted); }
.lang-bar { display: flex; gap: 2px; background: var(--surface2); border-radius: 6px; padding: 2px; }
.lang-btn { padding: 3px 9px; border: none; background: none; color: var(--text-muted);
font-size: 12px; cursor: pointer; border-radius: 4px; }
.lang-btn.active { background: var(--border); color: var(--text); }
.btn-sign-out {
padding: 5px 12px; border-radius: 6px; border: 1px solid var(--border);
background: none; color: var(--text); font-size: 12px; text-decoration: none; cursor: pointer;
@@ -46,8 +50,25 @@
.main { padding: 32px 24px 40px; flex: 1; }
.card { background: var(--surface); border: 1px solid var(--border); border-radius: 12px;
padding: 24px; width: 100%; max-width: 1180px; margin: 0 auto; }
.card-title { font-size: 20px; font-weight: 600; margin-bottom: 8px; }
.card-subtitle { color: var(--text-muted); font-size: 13px; margin-bottom: 20px; }
.card-title-row {
display: flex; align-items: center; flex-wrap: wrap; gap: 8px;
margin-bottom: 20px;
}
.card-title { font-size: 20px; font-weight: 600; margin: 0; }
.card-title-count {
display: inline-flex;
align-items: center;
min-height: 24px;
padding: 0 8px;
border: 1px solid var(--border);
border-radius: 999px;
background: var(--bg);
color: var(--text-muted);
font-size: 12px;
font-weight: 600;
line-height: 1;
font-family: 'JetBrains Mono', monospace;
}
.empty { color: var(--text-muted); font-size: 14px; padding: 20px 0; }
table { width: 100%; border-collapse: collapse; }
th, td { text-align: left; vertical-align: top; padding: 12px 10px; border-top: 1px solid var(--border); }
@@ -77,13 +98,28 @@
td::before {
display: block; color: var(--text-muted); font-size: 11px;
margin-bottom: 4px; text-transform: uppercase;
content: attr(data-label);
}
td.col-time::before { content: "Time"; }
td.col-action::before { content: "Action"; }
td.col-target::before { content: "Target"; }
td.col-detail::before { content: "Detail"; }
.detail { max-width: none; }
}
.pagination {
display: flex; align-items: center; gap: 8px; margin-top: 20px;
justify-content: center; padding: 12px 0;
}
.page-btn {
padding: 6px 14px; border-radius: 6px; border: 1px solid var(--border);
background: var(--surface); color: var(--text); text-decoration: none;
font-size: 13px; cursor: pointer;
}
.page-btn:hover { background: var(--surface2); }
.page-btn-disabled {
padding: 6px 14px; border-radius: 6px; border: 1px solid var(--border);
background: var(--surface); color: var(--text-muted); font-size: 13px;
opacity: 0.5; cursor: not-allowed;
}
.page-info {
color: var(--text-muted); font-size: 13px; font-family: 'JetBrains Mono', monospace;
}
</style>
</head>
<body>
@@ -91,8 +127,9 @@
<aside class="sidebar">
<a href="/dashboard" class="sidebar-logo"><span>secrets</span></a>
<nav class="sidebar-menu">
<a href="/dashboard" class="sidebar-link">MCP</a>
<a href="/audit" class="sidebar-link active">审计</a>
<a href="/dashboard" class="sidebar-link" data-i18n="navMcp">MCP</a>
<a href="/entries" class="sidebar-link" data-i18n="navEntries">条目</a>
<a href="/audit" class="sidebar-link active" data-i18n="navAudit">审计</a>
</nav>
</aside>
@@ -100,46 +137,89 @@
<div class="topbar">
<span class="topbar-spacer"></span>
<span class="nav-user">{{ user_name }}{% if !user_email.is_empty() %} · {{ user_email }}{% endif %}</span>
<div class="lang-bar">
<button class="lang-btn" onclick="setLang('zh-CN')"></button>
<button class="lang-btn" onclick="setLang('zh-TW')"></button>
<button class="lang-btn" onclick="setLang('en')">EN</button>
</div>
<form action="/auth/logout" method="post" style="display:inline">
<button type="submit" class="btn-sign-out">退出</button>
<button type="submit" class="btn-sign-out" data-i18n="signOut">退出</button>
</form>
</div>
<main class="main">
<section class="card">
<div class="card-title">我的审计</div>
<div class="card-subtitle">展示最近 100 条与当前用户相关的新审计记录。时间为浏览器本地时区。</div>
<div class="card-title-row">
<div class="card-title" data-i18n="auditTitle">我的审计</div>
<span class="card-title-count">{{ total_count }}</span>
</div>
{% if entries.is_empty() %}
<div class="empty">暂无审计记录。</div>
<div class="empty" data-i18n="emptyAudit">暂无审计记录。</div>
{% else %}
<table>
<thead>
<tr>
<th>时间</th>
<th>动作</th>
<th>目标</th>
<th>详情</th>
<th data-i18n="colTime">时间</th>
<th data-i18n="colAction">动作</th>
<th data-i18n="colTarget">目标</th>
<th data-i18n="colDetail">详情</th>
</tr>
</thead>
<tbody>
{% for entry in entries %}
<tr>
<td class="col-time mono"><time class="audit-local-time" datetime="{{ entry.created_at_iso }}">{{ entry.created_at_iso }}</time></td>
<td class="col-action mono">{{ entry.action }}</td>
<td class="col-target mono">{{ entry.target }}</td>
<td class="col-detail"><pre class="detail">{{ entry.detail }}</pre></td>
<td class="col-time mono" data-label="时间"><time class="audit-local-time" datetime="{{ entry.created_at_iso }}">{{ entry.created_at_iso }}</time></td>
<td class="col-action mono" data-label="动作">{{ entry.action }}</td>
<td class="col-target mono" data-label="目标">{{ entry.target }}</td>
<td class="col-detail" data-label="详情"><pre class="detail">{{ entry.detail }}</pre></td>
</tr>
{% endfor %}
</tbody>
</table>
{% if total_count > 0 %}
<div class="pagination">
{% if current_page > 1 %}
<a href="?page={{ current_page - 1 }}" class="page-btn" data-i18n="prevPage">上一页</a>
{% else %}
<span class="page-btn page-btn-disabled" data-i18n="prevPage">上一页</span>
{% endif %}
<span class="page-info">{{ current_page }} / {{ total_pages }}</span>
{% if current_page < total_pages %}
<a href="?page={{ current_page + 1 }}" class="page-btn" data-i18n="nextPage">下一页</a>
{% else %}
<span class="page-btn page-btn-disabled" data-i18n="nextPage">下一页</span>
{% endif %}
</div>
{% endif %}
{% endif %}
</section>
</main>
</div>
</div>
<script src="/static/i18n.js"></script>
<script>
(function () {
I18N_PAGE = {
'zh-CN': { pageTitle: 'Secrets — 审计', auditTitle: '我的审计', emptyAudit: '暂无审计记录。', colTime: '时间', colAction: '动作', colTarget: '目标', colDetail: '详情', prevPage: '上一页', nextPage: '下一页' },
'zh-TW': { pageTitle: 'Secrets — 審計', auditTitle: '我的審計', emptyAudit: '暫無審計記錄。', colTime: '時間', colAction: '動作', colTarget: '目標', colDetail: '詳情', prevPage: '上一頁', nextPage: '下一頁' },
en: { pageTitle: 'Secrets — Audit', auditTitle: 'My audit', emptyAudit: 'No audit records.', colTime: 'Time', colAction: 'Action', colTarget: 'Target', colDetail: 'Detail', prevPage: 'Previous', nextPage: 'Next' }
};
window.applyPageLang = function () {
document.querySelectorAll('tbody tr').forEach(function (tr) {
var time = tr.querySelector('.col-time');
var action = tr.querySelector('.col-action');
var target = tr.querySelector('.col-target');
var detail = tr.querySelector('.col-detail');
if (time) time.setAttribute('data-label', t('mobileLabelTime'));
if (action) action.setAttribute('data-label', t('mobileLabelAction'));
if (target) target.setAttribute('data-label', t('mobileLabelTarget'));
if (detail) detail.setAttribute('data-label', t('mobileLabelDetail'));
});
};
document.querySelectorAll('time.audit-local-time[datetime]').forEach(function (el) {
var raw = el.getAttribute('datetime');
var d = raw ? new Date(raw) : null;
@@ -148,6 +228,7 @@
el.title = raw + ' (UTC)';
}
});
applyLang();
})();
</script>
</body>

View File

@@ -174,6 +174,7 @@
<a href="/dashboard" class="sidebar-logo"><span>secrets</span></a>
<nav class="sidebar-menu">
<a href="/dashboard" class="sidebar-link active">MCP</a>
<a href="/entries" class="sidebar-link">条目</a>
<a href="/audit" class="sidebar-link">审计</a>
</nav>
</aside>

File diff suppressed because it is too large

View File

@@ -0,0 +1,76 @@
var I18N_SHARED = {
'zh-CN': {
pageTitleBase: 'Secrets',
navMcp: 'MCP',
navEntries: '条目',
navAudit: '审计',
signOut: '退出',
mobileLabelTime: '时间',
mobileLabelAction: '动作',
mobileLabelTarget: '目标',
mobileLabelDetail: '详情'
},
'zh-TW': {
pageTitleBase: 'Secrets',
navMcp: 'MCP',
navEntries: '條目',
navAudit: '審計',
signOut: '登出',
mobileLabelTime: '時間',
mobileLabelAction: '動作',
mobileLabelTarget: '目標',
mobileLabelDetail: '詳情'
},
en: {
pageTitleBase: 'Secrets',
navMcp: 'MCP',
navEntries: 'Entries',
navAudit: 'Audit',
signOut: 'Sign out',
mobileLabelTime: 'Time',
mobileLabelAction: 'Action',
mobileLabelTarget: 'Target',
mobileLabelDetail: 'Detail'
}
};
var currentLang = localStorage.getItem('lang') || 'zh-CN';
var I18N_PAGE = {};
function t(key) {
var dict = I18N_PAGE[currentLang] || I18N_PAGE['en'] || {};
var val = dict[key] || (I18N_SHARED[currentLang] && I18N_SHARED[currentLang][key]) || (I18N_SHARED.en && I18N_SHARED.en[key]) || key;
return val;
}
function tf(key, vars) {
var tpl = t(key);
return Object.keys(vars || {}).reduce(function (acc, k) {
return acc.replace(new RegExp('\\{' + k + '\\}', 'g'), String(vars[k]));
}, tpl);
}
function applyLang() {
document.documentElement.lang = currentLang;
var title = t('pageTitle');
if (title) document.title = title;
document.querySelectorAll('[data-i18n]').forEach(function (el) {
var key = el.getAttribute('data-i18n');
el.textContent = t(key);
});
document.querySelectorAll('[data-i18n-ph]').forEach(function (el) {
var key = el.getAttribute('data-i18n-ph');
el.placeholder = t(key);
});
document.querySelectorAll('.lang-btn').forEach(function (btn) {
var map = { 'zh-CN': '简', 'zh-TW': '繁', en: 'EN' };
btn.classList.toggle('active', btn.textContent === map[currentLang]);
});
if (typeof applyPageLang === 'function') applyPageLang();
}
window.setLang = function (lang) {
currentLang = lang;
localStorage.setItem('lang', lang);
applyLang();
};

View File

@@ -3,7 +3,13 @@
# ─── Database ─────────────────────────────────────────────────────────
# Web sessions (tower-sessions) share this database with business data; the session table is auto-migrated at startup, no extra env var needed.
SECRETS_DATABASE_URL=postgres://postgres:PASSWORD@HOST:PORT/secrets-mcp
SECRETS_DATABASE_URL=postgres://postgres:PASSWORD@db.refining.ltd:5432/secrets-mcp
# Strongly recommended for production: verify-full (verify-ca at minimum)
SECRETS_DATABASE_SSL_MODE=verify-full
# Path to the CA root certificate when using a private CA or self-managed chain; leave unset with a publicly trusted CA
# SECRETS_DATABASE_SSL_ROOT_CERT=/etc/secrets/pg-ca.crt
# When set to prod/production, the service rejects weak TLS modes (prefer/disable/allow/require)
SECRETS_ENV=production
# ─── Listen address ───────────────────────────────────────────────────
# Internal listen address (use the internal port when behind a Cloudflare / Nginx reverse proxy)
@@ -24,8 +30,3 @@ GOOGLE_CLIENT_SECRET=
# ─── Logging (optional) ───────────────────────────────────────────────
# RUST_LOG=secrets_mcp=debug
# ─── Notes ────────────────────────────────────────────────────────────
# SERVER_MASTER_KEY is no longer required.
# New architecture (E2EE): encryption keys are derived client-side from the user's passphrase; the server never holds the raw key.
# Enable temporarily only when migrating legacy wrapped_key data.
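As a quick sanity check of the URL and TLS mode above, a minimal sketch using `psql` (assuming libpq is installed; note libpq reads `PGSSLROOTCERT` rather than the system trust store, and `/etc/ssl/cert.pem` is only an example bundle path):
```bash
# Hypothetical smoke test of the configured DSN before starting the service.
export PGSSLMODE=verify-full
export PGSSLROOTCERT=/etc/ssl/cert.pem   # adjust to your system CA bundle
psql "postgres://postgres:PASSWORD@db.refining.ltd:5432/secrets-mcp" -c 'SELECT version();'
```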

View File

@@ -0,0 +1,92 @@
# PostgreSQL TLS Hardening Runbook
This runbook applies to:
- PostgreSQL server: `47.117.131.22` (`db.refining.ltd`)
- `secrets-mcp` app server: `47.238.146.244` (`secrets.refining.app`)
## 1) Issue certificate for `db.refining.ltd` (Let's Encrypt + Cloudflare DNS-01)
Install `acme.sh` on the PostgreSQL server and use a Cloudflare API token with DNS edit permission for the target zone.
```bash
curl https://get.acme.sh | sh -s email=ops@refining.ltd
export CF_Token="your_cloudflare_dns_token"
export CF_Zone_ID="your_zone_id"
~/.acme.sh/acme.sh --issue --dns dns_cf -d db.refining.ltd --keylength ec-256
```
Install cert/key into a PostgreSQL-readable path:
```bash
sudo mkdir -p /etc/postgresql/tls
sudo ~/.acme.sh/acme.sh --install-cert -d db.refining.ltd --ecc \
--fullchain-file /etc/postgresql/tls/fullchain.pem \
--key-file /etc/postgresql/tls/privkey.pem \
--reloadcmd "systemctl reload postgresql || systemctl restart postgresql"
sudo chown -R postgres:postgres /etc/postgresql/tls
sudo chmod 600 /etc/postgresql/tls/privkey.pem
sudo chmod 644 /etc/postgresql/tls/fullchain.pem
```
## 2) Configure PostgreSQL TLS and access rules
In `postgresql.conf`:
```conf
ssl = on
ssl_cert_file = '/etc/postgresql/tls/fullchain.pem'
ssl_key_file = '/etc/postgresql/tls/privkey.pem'
```
In `pg_hba.conf`, allow app traffic via TLS only (example):
```conf
hostssl secrets-mcp postgres 47.238.146.244/32 scram-sha-256
```
Keep a safe admin path (`local` socket or restricted source CIDR) before removing old plaintext `host` rules.
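One way to keep that admin path while adding the TLS-only rule, as a sketch (the config path assumes a Debian-style PostgreSQL 16 layout; adjust to your install):
```bash
# Append a local peer-auth admin rule alongside the TLS-only app rule.
# /etc/postgresql/16/main/pg_hba.conf is an assumed path.
sudo tee -a /etc/postgresql/16/main/pg_hba.conf >/dev/null <<'EOF'
local    all          postgres                        peer
hostssl  secrets-mcp  postgres  47.238.146.244/32     scram-sha-256
EOF
sudo systemctl reload postgresql
```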
Reload PostgreSQL:
```bash
sudo systemctl reload postgresql
```
## 3) Verify server-side TLS
```bash
openssl s_client -starttls postgres -connect db.refining.ltd:5432 -servername db.refining.ltd
```
The handshake should succeed and the certificate should match `db.refining.ltd`.
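To inspect the served certificate itself, a small follow-up sketch with the same openssl tooling:
```bash
# Print the subject and expiry of the certificate PostgreSQL presents.
openssl s_client -starttls postgres -connect db.refining.ltd:5432 \
  -servername db.refining.ltd </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -enddate
```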
## 4) Update `secrets-mcp` app server env
Use environment values like:
```bash
SECRETS_DATABASE_URL=postgres://postgres:***@db.refining.ltd:5432/secrets-mcp
SECRETS_DATABASE_SSL_MODE=verify-full
SECRETS_ENV=production
```
If you use a private CA instead of a public one, also set:
```bash
SECRETS_DATABASE_SSL_ROOT_CERT=/etc/secrets/pg-ca.crt
```
Restart `secrets-mcp` after updating env.
## 5) Verify from app server
Run positive and negative checks:
- Positive: app starts, migrations pass, dashboard + MCP API work.
- Negative:
- wrong hostname -> connection fails
- wrong CA file -> connection fails
- disable TLS on DB -> connection fails
This ensures no silent downgrade to weak TLS in production.
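A minimal sketch of those checks with `psql` (assuming libpq on the app server; `***` stands for the real password, and `PGSSLROOTCERT` must point at a bundle containing the issuing CA):
```bash
export PGSSLROOTCERT=/etc/ssl/cert.pem   # assumption: adjust to your CA bundle
# Positive: verify-full against the real hostname should connect.
psql "postgres://postgres:***@db.refining.ltd:5432/secrets-mcp?sslmode=verify-full" -c 'SELECT 1;'
# Negative: the raw IP fails hostname verification and must be rejected.
psql "postgres://postgres:***@47.117.131.22:5432/secrets-mcp?sslmode=verify-full" -c 'SELECT 1;' \
  && echo "UNEXPECTED: hostname check did not fail"
```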

View File

@@ -1,22 +0,0 @@
-- Run against prod BEFORE deploying secrets-mcp with FK migration.
-- Requires: write access to SECRETS_DATABASE_URL.
-- Example: psql "$SECRETS_DATABASE_URL" -v ON_ERROR_STOP=1 -f scripts/cleanup-orphan-user-ids.sql
BEGIN;
UPDATE entries
SET user_id = NULL
WHERE user_id IS NOT NULL
AND NOT EXISTS (SELECT 1 FROM users u WHERE u.id = entries.user_id);
UPDATE entries_history
SET user_id = NULL
WHERE user_id IS NOT NULL
AND NOT EXISTS (SELECT 1 FROM users u WHERE u.id = entries_history.user_id);
UPDATE audit_log
SET user_id = NULL
WHERE user_id IS NOT NULL
AND NOT EXISTS (SELECT 1 FROM users u WHERE u.id = audit_log.user_id);
COMMIT;

View File

@@ -11,7 +11,7 @@ tag="secrets-mcp-${version}"
echo "==> 当前 secrets-mcp 版本: ${version}"
echo "==> 检查是否已存在 tag: ${tag}"
if git rev-parse "refs/tags/${tag}" >/dev/null 2>&1; then
if jj log --no-graph --revisions "tag(${tag})" --limit 1 >/dev/null 2>&1; then
echo "提示: 已存在 tag ${tag},将按重复构建处理,不阻断检查。"
echo "如需创建新的发布版本,请先 bump crates/secrets-mcp/Cargo.toml 中的 version。"
else
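The swapped-in check relies on `jj` failing to resolve a missing tag. A hypothetical standalone sketch of the same probe (the tag name is illustrative, and it assumes jj exits non-zero when the revset cannot resolve, as the script does):
```bash
# Probe whether a release tag resolves in jj; a resolvable tag means
# the release already exists and the run is treated as a rebuild.
tag="secrets-mcp-0.5.5"
if jj log --no-graph --revisions "tag(${tag})" --limit 1 >/dev/null 2>&1; then
  echo "tag ${tag} exists: rebuild, not a new release"
else
  echo "tag ${tag} not found: bump the Cargo.toml version to cut a release"
fi
```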

scripts/sync-test-to-prod.sh (new executable file, 95 lines)
View File

@@ -0,0 +1,95 @@
#!/bin/bash
# Sync test-environment data into production
# Usage: ./scripts/sync-test-to-prod.sh
set -euo pipefail
# PostgreSQL client tool path (Homebrew libpq)
export PATH="/opt/homebrew/opt/libpq/bin:$PATH"
# SSL configuration
export PGSSLMODE=verify-full
export PGSSLROOTCERT=/etc/ssl/cert.pem
# Test environment
TEST_DB="postgres://postgres:Voson_2026_Pg18!@db.refining.ltd:5432/secrets-nn-test"
# Production environment
PROD_DB="postgres://postgres:Voson_2026_Pg18!@db.refining.ltd:5432/secrets-nn-prod"
echo "========================================="
echo " 测试环境 -> 生产环境 数据同步"
echo "========================================="
echo ""
# 确认操作
read -p "⚠️ 此操作将覆盖生产环境数据,确认继续? (yes/no): " confirm
if [ "$confirm" != "yes" ]; then
echo "已取消"
exit 0
fi
echo ""
echo "步骤 1/4: 导出测试环境数据..."
TEMP_DIR=$(mktemp -d)
trap "rm -rf $TEMP_DIR" EXIT
# 导出测试环境数据(不含审计日志和历史记录)
pg_dump "$TEST_DB" \
--table=entries \
--table=secrets \
--table=entry_secrets \
--table=users \
--table=oauth_accounts \
--data-only \
--column-inserts \
--no-owner \
--no-privileges \
> "$TEMP_DIR/test_data.sql"
echo "✓ 测试数据已导出到临时文件"
echo " 文件大小: $(du -h "$TEMP_DIR/test_data.sql" | cut -f1)"
echo ""
echo "步骤 2/4: 备份当前生产数据..."
pg_dump "$PROD_DB" \
--table=entries \
--table=secrets \
--table=entry_secrets \
--table=users \
--table=oauth_accounts \
--data-only \
--column-inserts \
--no-owner \
--no-privileges \
> "$TEMP_DIR/prod_backup_$(date +%Y%m%d_%H%M%S).sql"
echo "✓ 生产数据已备份"
echo ""
echo "步骤 3/4: 清空生产环境目标表..."
psql "$PROD_DB" <<'SQL'
TRUNCATE TABLE entry_secrets CASCADE;
TRUNCATE TABLE secrets CASCADE;
TRUNCATE TABLE entries CASCADE;
SQL
echo "✓ 生产环境目标表已清空"
echo ""
echo "步骤 4/4: 导入测试数据到生产环境..."
psql "$PROD_DB" -f "$TEMP_DIR/test_data.sql" 2>&1 | tail -20
echo ""
echo "验证数据..."
echo "生产环境数据统计:"
psql "$PROD_DB" -c "SELECT 'users' as table_name, count(*) FROM users UNION ALL SELECT 'entries', count(*) FROM entries UNION ALL SELECT 'secrets', count(*) FROM secrets UNION ALL SELECT 'entry_secrets', count(*) FROM entry_secrets UNION ALL SELECT 'oauth_accounts', count(*) FROM oauth_accounts ORDER BY table_name;"
echo ""
echo "========================================="
echo " ✓ 数据同步完成!"
echo "========================================="
echo ""
echo "提示:"
echo " - 生产数据备份已保存在临时目录"
echo " - 临时文件将在脚本退出后自动删除"