Compare commits
12 Commits
secrets-mc
...
secrets-mc
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
cab234cfcb | ||
|
|
e0fee639c1 | ||
|
|
7c53bfb782 | ||
|
|
63cb3a8216 | ||
|
|
2b994141b8 | ||
|
|
9d6ac5c13a | ||
|
|
1860cce86c | ||
| dd24f7cc44 | |||
|
|
aefad33870 | ||
|
|
0ffb81e57f | ||
|
|
4a1654c820 | ||
|
|
a15e2eaf4a |
@@ -208,6 +208,7 @@ jobs:
|
||||
DEPLOY_HOST: ${{ vars.DEPLOY_HOST }}
|
||||
DEPLOY_USER: ${{ vars.DEPLOY_USER }}
|
||||
DEPLOY_SSH_KEY: ${{ secrets.DEPLOY_SSH_KEY }}
|
||||
DEPLOY_KNOWN_HOSTS: ${{ vars.DEPLOY_KNOWN_HOSTS }}
|
||||
run: |
|
||||
if [ -z "$DEPLOY_HOST" ] || [ -z "$DEPLOY_USER" ] || [ -z "$DEPLOY_SSH_KEY" ]; then
|
||||
echo "部署跳过:请配置 vars.DEPLOY_HOST、vars.DEPLOY_USER 与 secrets.DEPLOY_SSH_KEY"
|
||||
@@ -216,19 +217,26 @@ jobs:
|
||||
|
||||
echo "$DEPLOY_SSH_KEY" > /tmp/deploy_key
|
||||
chmod 600 /tmp/deploy_key
|
||||
trap 'rm -f /tmp/deploy_key' EXIT
|
||||
|
||||
scp -i /tmp/deploy_key -o StrictHostKeyChecking=no \
|
||||
if [ -n "$DEPLOY_KNOWN_HOSTS" ]; then
|
||||
echo "$DEPLOY_KNOWN_HOSTS" > /tmp/deploy_known_hosts
|
||||
ssh_opts="-o UserKnownHostsFile=/tmp/deploy_known_hosts -o StrictHostKeyChecking=yes"
|
||||
else
|
||||
ssh_opts="-o StrictHostKeyChecking=accept-new"
|
||||
fi
|
||||
|
||||
scp -i /tmp/deploy_key $ssh_opts \
|
||||
"/tmp/artifact/${MCP_BINARY}" \
|
||||
"${DEPLOY_USER}@${DEPLOY_HOST}:/tmp/secrets-mcp.new"
|
||||
|
||||
ssh -i /tmp/deploy_key -o StrictHostKeyChecking=no "${DEPLOY_USER}@${DEPLOY_HOST}" "
|
||||
ssh -i /tmp/deploy_key $ssh_opts "${DEPLOY_USER}@${DEPLOY_HOST}" "
|
||||
sudo mv /tmp/secrets-mcp.new /opt/secrets-mcp/secrets-mcp
|
||||
sudo chmod +x /opt/secrets-mcp/secrets-mcp
|
||||
sudo systemctl restart secrets-mcp
|
||||
sleep 2
|
||||
sudo systemctl is-active secrets-mcp && echo '服务启动成功' || (sudo journalctl -u secrets-mcp -n 20 && exit 1)
|
||||
"
|
||||
rm -f /tmp/deploy_key
|
||||
|
||||
- name: 飞书通知
|
||||
if: always()
|
||||
|
||||
3
.gitignore
vendored
3
.gitignore
vendored
@@ -4,4 +4,5 @@
|
||||
.cursor/
|
||||
*.pem
|
||||
tmp/
|
||||
client_secret_*.apps.googleusercontent.com.json
|
||||
client_secret_*.apps.googleusercontent.com.json
|
||||
node_modules/
|
||||
3
.vscode/tasks.json
vendored
3
.vscode/tasks.json
vendored
@@ -22,7 +22,6 @@
|
||||
"label": "test: workspace",
|
||||
"type": "shell",
|
||||
"command": "cargo test --workspace --locked",
|
||||
"dependsOn": "build",
|
||||
"group": { "kind": "test", "isDefault": true }
|
||||
},
|
||||
{
|
||||
@@ -35,7 +34,7 @@
|
||||
"label": "clippy: workspace",
|
||||
"type": "shell",
|
||||
"command": "cargo clippy --workspace --locked -- -D warnings",
|
||||
"dependsOn": "build"
|
||||
"problemMatcher": []
|
||||
},
|
||||
{
|
||||
"label": "ci: release-check",
|
||||
|
||||
53
AGENTS.md
53
AGENTS.md
@@ -2,12 +2,37 @@
|
||||
|
||||
本仓库为 **MCP SaaS**:`secrets-core`(业务与持久化)+ `secrets-mcp`(Streamable HTTP MCP、Web、OAuth、API Key)。对外入口见 `crates/secrets-mcp`。
|
||||
|
||||
## 版本控制
|
||||
|
||||
本仓库使用 **[Jujutsu (jj)](https://jj-vcs.dev/)** 作为版本控制系统(纯 jj 模式,无 `.git` 目录)。
|
||||
|
||||
### 常用 jj 命令对照
|
||||
|
||||
| 操作 | jj 命令 |
|
||||
|------|---------|
|
||||
| 查看历史 | `jj log` / `jj log 'all()'` |
|
||||
| 查看状态 | `jj status` |
|
||||
| 新建提交 | `jj commit` |
|
||||
| 创建新变更 | `jj new` |
|
||||
| 变基 | `jj rebase` |
|
||||
| 合并提交 | `jj squash` |
|
||||
| 撤销操作 | `jj undo` |
|
||||
| 查看标签 | `jj tag list` |
|
||||
| 查看分支 | `jj bookmark list` |
|
||||
| 推送远端 | `jj git push` |
|
||||
| 拉取远端 | `jj git fetch` |
|
||||
|
||||
### 注意事项
|
||||
- 本仓库为**纯 jj 模式**,无 `.git` 目录;本地不要使用 `git` 命令
|
||||
- CI/CD(Gitea Actions)仍通过 Git 协议拉取代码,Runner 侧自动使用 `git`,无需修改
|
||||
- 检查标签是否存在时使用 `jj log --no-graph --revisions "tag(${tag})"` 而非 `git rev-parse`
|
||||
|
||||
## 提交 / 推送硬规则(优先于下文)
|
||||
|
||||
**每次提交和推送前必须执行以下检查,无论是否明确「发版」:**
|
||||
|
||||
1. 涉及 `crates/**`、根目录 `Cargo.toml`/`Cargo.lock`、`secrets-mcp` 行为变更的提交,默认视为**需要发版**,除非明确说明「本次不发版」。
|
||||
2. 提交前检查 `crates/secrets-mcp/Cargo.toml` 的 `version`,再查 tag:`git tag -l 'secrets-mcp-*'`。若当前版本对应 tag 已存在且有代码变更,**必须 bump 版本号**并 `cargo build` 同步 `Cargo.lock`。
|
||||
2. 提交前检查 `crates/secrets-mcp/Cargo.toml` 的 `version`,再查 tag:`jj tag list`。若当前版本对应 tag 已存在且有代码变更,**必须 bump 版本号**并 `cargo build` 同步 `Cargo.lock`。
|
||||
3. 提交前运行 `./scripts/release-check.sh`(版本/tag + `fmt` + `clippy --locked` + `test --locked`)。若脚本不存在或不可用,至少运行 `cargo fmt -- --check && cargo clippy --locked -- -D warnings && cargo test --locked`。
|
||||
|
||||
## 项目结构
|
||||
@@ -112,14 +137,16 @@ oauth_accounts (
|
||||
|
||||
### MCP 消歧(AI 调用)
|
||||
|
||||
按 `name` 定位条目的工具(`get` / `update` / 单条 `delete` / `history` / `rollback`):若该用户下仅一条匹配则直接执行;若多条(同 `name`、不同 `folder`)则返回错误并提示补全 `folder`。`secrets_delete` 的 `dry_run=true` 与真实删除使用相同消歧规则。
|
||||
按 `name` 定位条目的工具(`secrets_update` / `secrets_history` / `secrets_rollback` / `secrets_delete` 单条模式):若该用户下仅一条匹配则直接执行;若多条(同 `name`、不同 `folder`)则返回错误并提示补全 `folder`。也可直接传 `id`(UUID)跳过消歧。
|
||||
|
||||
注意:`secrets_get` 只接受 UUID `id`(来自 `secrets_find` 结果),不支持按 `name` 定位。
|
||||
|
||||
### 字段职责
|
||||
|
||||
| 字段 | 含义 | 示例 |
|
||||
|------|------|------|
|
||||
| `folder` | 隔离空间(参与唯一键) | `refining` |
|
||||
| `type` | 软分类(不参与唯一键) | `server`, `service`, `person`, `document` |
|
||||
| `type` | 软分类(不参与唯一键,用户自定义) | `server`, `service`, `account`, `person`, `document` |
|
||||
| `name` | 标识名 | `gitea`, `aliyun` |
|
||||
| `notes` | 非敏感说明 | 自由文本 |
|
||||
| `tags` | 标签 | `["aliyun","prod"]` |
|
||||
@@ -144,6 +171,14 @@ oauth_accounts (
|
||||
- 加密:密钥由用户密码短语通过 **PBKDF2-SHA256(600k 次)** 在客户端派生,服务端只存 `key_salt`/`key_check`/`key_params`,不持有原始密钥。Web 客户端在浏览器本地完成加解密;MCP 客户端通过 `X-Encryption-Key` 请求头传递密钥,服务端临时解密后返回明文。
|
||||
- MCP:tools 参数与 JSON Schema(`schemars`)保持同步,鉴权以请求扩展中的用户上下文为准。
|
||||
|
||||
## 生产 CORS
|
||||
|
||||
生产环境 CORS 使用显式请求头白名单(`build_cors_layer`),而非 `allow_headers(Any)`,
|
||||
因为 `tower-http` 禁止 `allow_credentials(true)` 与 `allow_headers(Any)` 同时使用。
|
||||
|
||||
**维护约束**:若 MCP 协议或客户端新增自定义请求头,必须同步更新 `production_allowed_headers()`。
|
||||
当前允许的请求头:`Authorization`、`Content-Type`、`X-Encryption-Key`、`mcp-session-id`、`x-mcp-session`。
|
||||
|
||||
## 提交前检查
|
||||
|
||||
```bash
|
||||
@@ -162,7 +197,7 @@ cargo test --locked
|
||||
|
||||
```bash
|
||||
grep '^version' crates/secrets-mcp/Cargo.toml
|
||||
git tag -l 'secrets-mcp-*'
|
||||
jj tag list
|
||||
```
|
||||
|
||||
## CI/CD
|
||||
@@ -180,9 +215,19 @@ git tag -l 'secrets-mcp-*'
|
||||
| 变量 | 说明 |
|
||||
|------|------|
|
||||
| `SECRETS_DATABASE_URL` | **必填**。PostgreSQL URL。 |
|
||||
| `SECRETS_DATABASE_SSL_MODE` | 可选但强烈建议生产必填。推荐 `verify-full`(至少 `verify-ca`)。 |
|
||||
| `SECRETS_DATABASE_SSL_ROOT_CERT` | 可选。私有 CA 或自签链路时指定 CA 根证书路径。 |
|
||||
| `SECRETS_DATABASE_POOL_SIZE` | 可选。连接池最大连接数,默认 `10`。 |
|
||||
| `SECRETS_DATABASE_ACQUIRE_TIMEOUT` | 可选。获取连接超时秒数,默认 `5`。 |
|
||||
| `SECRETS_ENV` | 可选。设为 `prod` / `production` 时会拒绝弱 PostgreSQL TLS 模式。 |
|
||||
| `BASE_URL` | 对外基址;OAuth 回调 `${BASE_URL}/auth/google/callback`。 |
|
||||
| `SECRETS_MCP_BIND` | 监听地址,默认 `127.0.0.1:9315`(容器/远程直接暴露时需改为 `0.0.0.0:9315`)。 |
|
||||
| `GOOGLE_CLIENT_ID` / `GOOGLE_CLIENT_SECRET` | 可选;仅运行时配置。 |
|
||||
| `RUST_LOG` | 如 `secrets_mcp=debug`。 |
|
||||
| `RATE_LIMIT_GLOBAL_PER_SECOND` | 可选。全局限流速率,默认 `100` req/s。 |
|
||||
| `RATE_LIMIT_GLOBAL_BURST` | 可选。全局限流突发量,默认 `200`。 |
|
||||
| `RATE_LIMIT_IP_PER_SECOND` | 可选。单 IP 限流速率,默认 `20` req/s。 |
|
||||
| `RATE_LIMIT_IP_BURST` | 可选。单 IP 限流突发量,默认 `40`。 |
|
||||
| `TRUST_PROXY` | 可选。设为 `1`/`true`/`yes` 时从 `X-Forwarded-For` / `X-Real-IP` 提取客户端 IP。 |
|
||||
|
||||
> `SERVER_MASTER_KEY` 已不再需要。新架构下密钥由用户密码短语在客户端派生,服务端不持有。
|
||||
|
||||
55
CONTRIBUTING.md
Normal file
55
CONTRIBUTING.md
Normal file
@@ -0,0 +1,55 @@
|
||||
# Contributing
|
||||
|
||||
## 版本控制
|
||||
|
||||
本仓库使用 **[Jujutsu (jj)](https://jj-vcs.dev/)**。请勿使用 `git` 命令。
|
||||
|
||||
```bash
|
||||
jj log # 查看历史
|
||||
jj status # 查看状态
|
||||
jj new # 创建新变更
|
||||
jj commit # 提交
|
||||
jj rebase # 变基
|
||||
jj squash # 合并提交
|
||||
jj git push # 推送到远端
|
||||
```
|
||||
|
||||
详见 [AGENTS.md](AGENTS.md) 的「版本控制」章节。
|
||||
|
||||
## 本地开发
|
||||
|
||||
```bash
|
||||
# 复制环境变量
|
||||
cp deploy/.env.example .env
|
||||
|
||||
# 填写数据库连接等配置后
|
||||
cargo build
|
||||
cargo test --locked
|
||||
```
|
||||
|
||||
## 提交前检查
|
||||
|
||||
每次提交前必须通过:
|
||||
|
||||
```bash
|
||||
cargo fmt -- --check
|
||||
cargo clippy --locked -- -D warnings
|
||||
cargo test --locked
|
||||
```
|
||||
|
||||
或使用脚本:
|
||||
|
||||
```bash
|
||||
./scripts/release-check.sh
|
||||
```
|
||||
|
||||
## 发版规则
|
||||
|
||||
涉及 `crates/**`、根目录 `Cargo.toml`/`Cargo.lock`、`secrets-mcp` 行为变更的提交,默认需要发版。
|
||||
|
||||
1. 检查 `crates/secrets-mcp/Cargo.toml` 的 `version`
|
||||
2. 运行 `jj tag list` 确认对应 tag 是否已存在
|
||||
3. 若 tag 已存在且有代码变更,**必须 bump 版本**并 `cargo build` 同步 `Cargo.lock`
|
||||
4. 通过 release-check 后再提交
|
||||
|
||||
详见 [AGENTS.md](AGENTS.md) 的「提交 / 推送硬规则」章节。
|
||||
135
Cargo.lock
generated
135
Cargo.lock
generated
@@ -464,6 +464,20 @@ dependencies = [
|
||||
"syn",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "dashmap"
|
||||
version = "6.1.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "5041cc499144891f3790297212f32a74fb938e5136a14943f338ef9e0ae276cf"
|
||||
dependencies = [
|
||||
"cfg-if",
|
||||
"crossbeam-utils",
|
||||
"hashbrown 0.14.5",
|
||||
"lock_api",
|
||||
"once_cell",
|
||||
"parking_lot_core",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "der"
|
||||
version = "0.7.10"
|
||||
@@ -596,6 +610,12 @@ version = "0.1.5"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "d9c4f5dac5e15c24eb999c26181a6ca40b39fe946cbe4c263c7209467bc83af2"
|
||||
|
||||
[[package]]
|
||||
name = "foldhash"
|
||||
version = "0.2.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "77ce24cb58228fbb8aa041425bb1050850ac19177686ea6e0f41a70416f56fdb"
|
||||
|
||||
[[package]]
|
||||
name = "form_urlencoded"
|
||||
version = "1.2.2"
|
||||
@@ -687,6 +707,12 @@ version = "0.3.32"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "037711b3d59c33004d3856fbdc83b99d4ff37a24768fa1be9ce3538a1cde4393"
|
||||
|
||||
[[package]]
|
||||
name = "futures-timer"
|
||||
version = "3.0.3"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "f288b0a4f20f9a56b5d1da57e2227c661b7b16168e2f72365f57b63326e29b24"
|
||||
|
||||
[[package]]
|
||||
name = "futures-util"
|
||||
version = "0.3.32"
|
||||
@@ -765,6 +791,35 @@ dependencies = [
|
||||
"polyval",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "governor"
|
||||
version = "0.10.4"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "9efcab3c1958580ff1f25a2a41be1668f7603d849bb63af523b208a3cc1223b8"
|
||||
dependencies = [
|
||||
"cfg-if",
|
||||
"dashmap",
|
||||
"futures-sink",
|
||||
"futures-timer",
|
||||
"futures-util",
|
||||
"getrandom 0.3.4",
|
||||
"hashbrown 0.16.1",
|
||||
"nonzero_ext",
|
||||
"parking_lot",
|
||||
"portable-atomic",
|
||||
"quanta",
|
||||
"rand 0.9.2",
|
||||
"smallvec",
|
||||
"spinning_top",
|
||||
"web-time",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "hashbrown"
|
||||
version = "0.14.5"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "e5274423e17b7c9fc20b6e7e208532f9b19825d82dfd615708b70edd83df41f1"
|
||||
|
||||
[[package]]
|
||||
name = "hashbrown"
|
||||
version = "0.15.5"
|
||||
@@ -773,7 +828,7 @@ checksum = "9229cfe53dfd69f0609a49f65461bd93001ea1ef889cd5529dd176593f5338a1"
|
||||
dependencies = [
|
||||
"allocator-api2",
|
||||
"equivalent",
|
||||
"foldhash",
|
||||
"foldhash 0.1.5",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
@@ -781,6 +836,11 @@ name = "hashbrown"
|
||||
version = "0.16.1"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "841d1cc9bed7f9236f321df977030373f4a4163ae1a7dbfe1a51a2c1a51d9100"
|
||||
dependencies = [
|
||||
"allocator-api2",
|
||||
"equivalent",
|
||||
"foldhash 0.2.0",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "hashlink"
|
||||
@@ -1283,6 +1343,12 @@ dependencies = [
|
||||
"windows-sys 0.61.2",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "nonzero_ext"
|
||||
version = "0.3.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "38bf9645c8b145698bb0b18a4637dcacbc421ea49bef2317e4fd8065a387cf21"
|
||||
|
||||
[[package]]
|
||||
name = "nu-ansi-term"
|
||||
version = "0.50.3"
|
||||
@@ -1463,6 +1529,12 @@ dependencies = [
|
||||
"universal-hash",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "portable-atomic"
|
||||
version = "1.13.1"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "c33a9471896f1c69cecef8d20cbe2f7accd12527ce60845ff44c153bb2a21b49"
|
||||
|
||||
[[package]]
|
||||
name = "potential_utf"
|
||||
version = "0.1.4"
|
||||
@@ -1506,6 +1578,21 @@ dependencies = [
|
||||
"unicode-ident",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "quanta"
|
||||
version = "0.12.6"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "f3ab5a9d756f0d97bdc89019bd2e4ea098cf9cde50ee7564dde6b81ccc8f06c7"
|
||||
dependencies = [
|
||||
"crossbeam-utils",
|
||||
"libc",
|
||||
"once_cell",
|
||||
"raw-cpuid",
|
||||
"wasi",
|
||||
"web-sys",
|
||||
"winapi",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "quinn"
|
||||
version = "0.11.9"
|
||||
@@ -1658,6 +1745,15 @@ version = "0.10.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "0c8d0fd677905edcbeedbf2edb6494d676f0e98d54d5cf9bda0b061cb8fb8aba"
|
||||
|
||||
[[package]]
|
||||
name = "raw-cpuid"
|
||||
version = "11.6.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "498cd0dc59d73224351ee52a95fee0f1a617a2eae0e7d9d720cc622c73a54186"
|
||||
dependencies = [
|
||||
"bitflags",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "redox_syscall"
|
||||
version = "0.5.18"
|
||||
@@ -1953,6 +2049,7 @@ dependencies = [
|
||||
"aes-gcm",
|
||||
"anyhow",
|
||||
"chrono",
|
||||
"hex",
|
||||
"rand 0.10.0",
|
||||
"serde",
|
||||
"serde_json",
|
||||
@@ -1969,7 +2066,7 @@ dependencies = [
|
||||
|
||||
[[package]]
|
||||
name = "secrets-mcp"
|
||||
version = "0.4.0"
|
||||
version = "0.5.7"
|
||||
dependencies = [
|
||||
"anyhow",
|
||||
"askama",
|
||||
@@ -1977,6 +2074,7 @@ dependencies = [
|
||||
"axum-extra",
|
||||
"chrono",
|
||||
"dotenvy",
|
||||
"governor",
|
||||
"http",
|
||||
"rand 0.10.0",
|
||||
"reqwest",
|
||||
@@ -1995,6 +2093,7 @@ dependencies = [
|
||||
"tower-sessions-sqlx-store-chrono",
|
||||
"tracing",
|
||||
"tracing-subscriber",
|
||||
"url",
|
||||
"urlencoding",
|
||||
"uuid",
|
||||
]
|
||||
@@ -2195,6 +2294,15 @@ dependencies = [
|
||||
"lock_api",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "spinning_top"
|
||||
version = "0.3.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "d96d2d1d716fb500937168cc09353ffdc7a012be8475ac7308e1bdf0e3923300"
|
||||
dependencies = [
|
||||
"lock_api",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "spki"
|
||||
version = "0.7.3"
|
||||
@@ -2717,6 +2825,7 @@ dependencies = [
|
||||
"futures-util",
|
||||
"http",
|
||||
"http-body",
|
||||
"http-body-util",
|
||||
"iri-string",
|
||||
"pin-project-lite",
|
||||
"tower",
|
||||
@@ -3167,6 +3276,28 @@ dependencies = [
|
||||
"wasite",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "winapi"
|
||||
version = "0.3.9"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "5c839a674fcd7a98952e593242ea400abe93992746761e38641405d28b00f419"
|
||||
dependencies = [
|
||||
"winapi-i686-pc-windows-gnu",
|
||||
"winapi-x86_64-pc-windows-gnu",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "winapi-i686-pc-windows-gnu"
|
||||
version = "0.4.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6"
|
||||
|
||||
[[package]]
|
||||
name = "winapi-x86_64-pc-windows-gnu"
|
||||
version = "0.4.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
|
||||
|
||||
[[package]]
|
||||
name = "windows-core"
|
||||
version = "0.62.2"
|
||||
|
||||
56
README.md
56
README.md
@@ -25,6 +25,13 @@ cargo build --release -p secrets-mcp
|
||||
| `SECRETS_MCP_BIND` | 监听地址,默认 `127.0.0.1:9315`。容器内或直接对外暴露端口时请改为 `0.0.0.0:9315`;反代时常为 `127.0.0.1:9315`。 |
|
||||
| `GOOGLE_CLIENT_ID` / `GOOGLE_CLIENT_SECRET` | 可选;不配置则无 Google 登录入口。运行时从环境读取,勿写入 CI、勿打入二进制。 |
|
||||
| `RUST_LOG` | 可选;日志级别,如 `secrets_mcp=debug`。 |
|
||||
| `SECRETS_DATABASE_POOL_SIZE` | 可选。连接池最大连接数,默认 `10`。 |
|
||||
| `SECRETS_DATABASE_ACQUIRE_TIMEOUT` | 可选。获取连接超时秒数,默认 `5`。 |
|
||||
| `RATE_LIMIT_GLOBAL_PER_SECOND` | 可选。全局限流速率,默认 `100` req/s。 |
|
||||
| `RATE_LIMIT_GLOBAL_BURST` | 可选。全局限流突发量,默认 `200`。 |
|
||||
| `RATE_LIMIT_IP_PER_SECOND` | 可选。单 IP 限流速率,默认 `20` req/s。 |
|
||||
| `RATE_LIMIT_IP_BURST` | 可选。单 IP 限流突发量,默认 `40`。 |
|
||||
| `TRUST_PROXY` | 可选。设为 `1`/`true`/`yes` 时从 `X-Forwarded-For` / `X-Real-IP` 提取客户端 IP;仅在反代环境下启用。 |
|
||||
|
||||
```bash
|
||||
cargo run -p secrets-mcp
|
||||
@@ -54,10 +61,31 @@ SECRETS_ENV=production
|
||||
|
||||
条目在逻辑上以 **`(folder, name)`** 在用户内唯一(数据库唯一索引:`user_id + folder + name`)。同名可在不同 folder 下各存一条(例如 `refining/aliyun` 与 `ricnsmart/aliyun`)。
|
||||
|
||||
- **`secrets_search`**:发现条目(可按 query / folder / type / name 过滤);不要求加密头。
|
||||
- **`secrets_get` / `secrets_update` / `secrets_delete`(按 name)/ `secrets_history` / `secrets_rollback`**:仅 `name` 且全局唯一则直接命中;若多条同名,返回消歧错误,需在参数中补 **`folder`**。
|
||||
- **`secrets_delete`**:`dry_run=true` 时与真实删除相同的消歧规则——唯一则预览一条,多条则报错并要求 `folder`。
|
||||
- **共享密钥**:N:N 关联下,删除 entry 仅解除关联,被共享的 secret 若仍被其他 entry 引用则保留;无引用时自动清理。
|
||||
### 工具列表
|
||||
|
||||
| 工具 | 需要加密密钥 | 说明 |
|
||||
|------|-------------|------|
|
||||
| `secrets_find` | 否 | 发现条目(返回含 secret_fields schema),支持 `name_query` 模糊匹配 |
|
||||
| `secrets_search` | 否 | 搜索条目,支持 `query`/`folder`/`type`/`name` 过滤、`sort`/`offset` 分页、`summary` 摘要模式 |
|
||||
| `secrets_get` | 是 | 按 UUID `id` 获取单条条目及解密后的 secrets |
|
||||
| `secrets_add` | 是 | 添加新条目,支持 `meta_obj`/`secrets_obj` JSON 对象参数、`secret_types` 指定密钥类型、`link_secret_names` 关联已有 secret |
|
||||
| `secrets_update` | 是 | 更新条目,支持 `id` 或 `name`+`folder` 定位 |
|
||||
| `secrets_delete` | 否 | 删除条目,支持 `id` 或 `name`+`folder` 定位;`dry_run=true` 预览删除 |
|
||||
| `secrets_history` | 否 | 查看条目历史,支持 `id` 或 `name`+`folder` 定位 |
|
||||
| `secrets_rollback` | 是 | 回滚条目到指定历史版本,支持 `id` 或 `name`+`folder` 定位 |
|
||||
| `secrets_export` | 是 | 导出条目(含解密明文),支持 JSON/TOML/YAML 格式 |
|
||||
| `secrets_env_map` | 是 | 将 secrets 转换为环境变量映射(`UPPER(entry)_UPPER(field)` 格式),支持 `prefix` |
|
||||
| `secrets_overview` | 否 | 返回各 folder 和 type 的 entry 计数概览 |
|
||||
|
||||
### 消歧规则
|
||||
|
||||
- **按 `name` 定位的工具**(`secrets_update` / `secrets_delete` / `secrets_history` / `secrets_rollback`):若该用户下仅一条匹配则直接执行;若多条(同 `name`、不同 `folder`)则返回错误并提示补全 `folder`。也可直接传 `id`(UUID)跳过消歧。
|
||||
- **`secrets_get`** 仅支持通过 `id`(UUID)获取。
|
||||
- **`secrets_delete`** 的 `dry_run=true` 与真实删除使用相同消歧规则——唯一则预览一条,多条则报错并要求 `folder`。
|
||||
|
||||
### 共享密钥
|
||||
|
||||
N:N 关联下,删除 entry 仅解除关联,被共享的 secret 若仍被其他 entry 引用则保留;无引用时自动清理。
|
||||
|
||||
## 加密架构(混合 E2EE)
|
||||
|
||||
@@ -151,12 +179,12 @@ flowchart LR
|
||||
|
||||
## 数据模型
|
||||
|
||||
主表 **`entries`**(`folder`、`type`、`name`、`notes`、`tags`、`metadata`,多租户时带 `user_id`)+ 子表 **`secrets`**(每行一个加密字段:`name`、`type`、`encrypted`,通过 `entry_secrets` 中间表与 entry 建立 N:N 关联)。**唯一性**:`UNIQUE(user_id, folder, name)`(`user_id` 为空时为遗留行唯一 `(folder, name)`)。另有 `entries_history`、`secrets_history`、`audit_log`,以及 **`users`**(含 `key_salt`、`key_check`、`key_params`、`api_key`)、**`oauth_accounts`**。首次连库自动迁移建表(`secrets-core` 的 `migrate`);已有库可对照 [`scripts/migrate-v0.3.0.sql`](scripts/migrate-v0.3.0.sql) 做列重命名与索引重建。**Web 登录会话**(tower-sessions)使用同一 `SECRETS_DATABASE_URL`,进程启动时对会话存储执行迁移(见 `secrets-mcp` 中 `PostgresStore::migrate`),无需额外环境变量。
|
||||
主表 **`entries`**(`folder`、`type`、`name`、`notes`、`tags`、`metadata`,多租户时带 `user_id`)+ 子表 **`secrets`**(每行一个加密字段:`name`、`type`、`encrypted`,通过 `entry_secrets` 中间表与 entry 建立 N:N 关联)。**唯一性**:`UNIQUE(user_id, folder, name)`(`user_id` 为空时为遗留行唯一 `(folder, name)`)。另有 `entries_history`、`secrets_history`、`audit_log`,以及 **`users`**(含 `key_salt`、`key_check`、`key_params`、`api_key`)、**`oauth_accounts`**。首次连库自动迁移建表(`secrets-core` 的 `migrate`);已有库在进程启动时亦由同一 `migrate()` 增量补齐表、索引与 N:N 结构。若需从更早版本对照一次性 SQL,可在 git 历史中检索已移除的 `scripts/migrate-v0.3.0.sql`。**Web 登录会话**(tower-sessions)使用同一 `SECRETS_DATABASE_URL`,进程启动时对会话存储执行迁移(见 `secrets-mcp` 中 `PostgresStore::migrate`),无需额外环境变量。
|
||||
|
||||
| 位置 | 字段 | 说明 |
|
||||
|------|------|------|
|
||||
| entries | folder | 组织/隔离空间,如 `refining`、`ricnsmart`;参与唯一键 |
|
||||
| entries | type | 软分类,如 `server`、`service`、`person`、`document`(可扩展,不参与唯一键) |
|
||||
| entries | type | 软分类,用户自定义,如 `server`、`service`、`account`、`person`、`document`(不参与唯一键) |
|
||||
| entries | name | 人类可读标识;与 `folder` 一起在用户内唯一 |
|
||||
| entries | notes | 非敏感说明文本 |
|
||||
| entries | metadata | 明文 JSON(ip、url、subtype 等) |
|
||||
@@ -174,6 +202,10 @@ flowchart LR
|
||||
- 同一 secret 可被多个 entry 引用,删除某 entry 不会级联删除被共享的 secret
|
||||
- 当 secret 不再被任何 entry 引用时,自动清理(`NOT EXISTS` 子查询)
|
||||
|
||||
### 类型(Type)
|
||||
|
||||
`type` 字段用于软分类,由用户自由填写,不做任何自动转换或归一化。常见示例:`server`、`service`、`account`、`person`、`document`,但任何值均可接受。
|
||||
|
||||
## 审计日志
|
||||
|
||||
`add`、`update`、`delete` 等写操作写入 **`audit_log`**(操作类型、对象、摘要,不含 secret 明文)。多租户场景下可写 **`user_id`**(可空,兼容遗留行)。
|
||||
@@ -191,10 +223,18 @@ LIMIT 20;
|
||||
```
|
||||
Cargo.toml
|
||||
crates/secrets-core/ # db / crypto / models / audit / service
|
||||
src/
|
||||
taxonomy.rs # SECRET_TYPE_OPTIONS(secret 字段类型下拉选项)
|
||||
service/ # 业务逻辑(add, search, update, delete, export, env_map 等)
|
||||
crates/secrets-mcp/ # MCP HTTP、Web、OAuth、API Key
|
||||
scripts/
|
||||
migrate-v0.3.0.sql # 可选:手动 SQL 迁移(namespace/kind → folder/type、唯一键含 folder)
|
||||
deploy/ # systemd、.env 示例
|
||||
release-check.sh # 发版前 fmt / clippy / test
|
||||
setup-gitea-actions.sh
|
||||
sync-test-to-prod.sh # 测试库同步到生产(按需)
|
||||
deploy/
|
||||
.env.example # 环境变量模板
|
||||
secrets-mcp.service # systemd 服务文件(生产部署用)
|
||||
postgres-tls-hardening.md # PostgreSQL TLS 加固运维手册
|
||||
```
|
||||
|
||||
## CI/CD(Gitea Actions)
|
||||
|
||||
@@ -12,6 +12,7 @@ aes-gcm.workspace = true
|
||||
anyhow.workspace = true
|
||||
thiserror.workspace = true
|
||||
chrono.workspace = true
|
||||
hex = "0.4"
|
||||
rand.workspace = true
|
||||
serde.workspace = true
|
||||
serde_json.workspace = true
|
||||
|
||||
@@ -5,6 +5,8 @@ use aes_gcm::{
|
||||
use anyhow::{Context, Result, bail};
|
||||
use serde_json::Value;
|
||||
|
||||
use crate::error::AppError;
|
||||
|
||||
const NONCE_LEN: usize = 12;
|
||||
|
||||
// ─── AES-256-GCM encrypt / decrypt ───────────────────────────────────────────
|
||||
@@ -38,7 +40,7 @@ pub fn decrypt(master_key: &[u8; 32], data: &[u8]) -> Result<Vec<u8>> {
|
||||
let nonce = Nonce::from_slice(nonce_bytes);
|
||||
cipher
|
||||
.decrypt(nonce, ciphertext)
|
||||
.map_err(|_| anyhow::anyhow!("decryption failed — wrong master key or corrupted data"))
|
||||
.map_err(|_| AppError::DecryptionFailed.into())
|
||||
}
|
||||
|
||||
// ─── JSON helpers ─────────────────────────────────────────────────────────────
|
||||
@@ -59,7 +61,7 @@ pub fn decrypt_json(master_key: &[u8; 32], data: &[u8]) -> Result<Value> {
|
||||
|
||||
/// Parse a 64-char hex string (from X-Encryption-Key header) into a 32-byte key.
|
||||
pub fn extract_key_from_hex(hex_str: &str) -> Result<[u8; 32]> {
|
||||
let bytes = hex::decode_hex(hex_str.trim())?;
|
||||
let bytes = ::hex::decode(hex_str.trim())?;
|
||||
if bytes.len() != 32 {
|
||||
bail!(
|
||||
"X-Encryption-Key must be 64 hex chars (32 bytes), got {} bytes",
|
||||
@@ -74,21 +76,14 @@ pub fn extract_key_from_hex(hex_str: &str) -> Result<[u8; 32]> {
|
||||
// ─── Public hex helpers ───────────────────────────────────────────────────────
|
||||
|
||||
pub mod hex {
|
||||
use anyhow::{Result, bail};
|
||||
use anyhow::Result;
|
||||
|
||||
pub fn encode_hex(bytes: &[u8]) -> String {
|
||||
bytes.iter().map(|b| format!("{:02x}", b)).collect()
|
||||
}
|
||||
|
||||
pub fn decode_hex(s: &str) -> Result<Vec<u8>> {
|
||||
let s = s.trim();
|
||||
if !s.len().is_multiple_of(2) {
|
||||
bail!("hex string has odd length");
|
||||
}
|
||||
(0..s.len())
|
||||
.step_by(2)
|
||||
.map(|i| u8::from_str_radix(&s[i..i + 2], 16).map_err(|e| anyhow::anyhow!("{}", e)))
|
||||
.collect()
|
||||
Ok(::hex::decode(s.trim())?)
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
use std::str::FromStr;
|
||||
|
||||
use anyhow::{Context, Result};
|
||||
use serde_json::Value;
|
||||
use serde_json::{Map, Value};
|
||||
use sqlx::PgPool;
|
||||
use sqlx::postgres::{PgConnectOptions, PgPoolOptions, PgSslMode};
|
||||
|
||||
@@ -36,12 +36,31 @@ fn build_connect_options(config: &DatabaseConfig) -> Result<PgConnectOptions> {
|
||||
pub async fn create_pool(config: &DatabaseConfig) -> Result<PgPool> {
|
||||
tracing::debug!("connecting to database");
|
||||
let connect_options = build_connect_options(config)?;
|
||||
|
||||
// Connection pool configuration from environment
|
||||
let max_connections = std::env::var("SECRETS_DATABASE_POOL_SIZE")
|
||||
.ok()
|
||||
.and_then(|v| v.parse::<u32>().ok())
|
||||
.unwrap_or(10);
|
||||
|
||||
let acquire_timeout_secs = std::env::var("SECRETS_DATABASE_ACQUIRE_TIMEOUT")
|
||||
.ok()
|
||||
.and_then(|v| v.parse::<u64>().ok())
|
||||
.unwrap_or(5);
|
||||
|
||||
let pool = PgPoolOptions::new()
|
||||
.max_connections(10)
|
||||
.acquire_timeout(std::time::Duration::from_secs(5))
|
||||
.max_connections(max_connections)
|
||||
.acquire_timeout(std::time::Duration::from_secs(acquire_timeout_secs))
|
||||
.max_lifetime(std::time::Duration::from_secs(1800)) // 30 minutes
|
||||
.idle_timeout(std::time::Duration::from_secs(600)) // 10 minutes
|
||||
.connect_with(connect_options)
|
||||
.await?;
|
||||
tracing::debug!("database connection established");
|
||||
|
||||
tracing::debug!(
|
||||
max_connections,
|
||||
acquire_timeout_secs,
|
||||
"database connection established"
|
||||
);
|
||||
Ok(pool)
|
||||
}
|
||||
|
||||
@@ -543,4 +562,75 @@ pub async fn snapshot_secret_history(
|
||||
Ok(())
|
||||
}
|
||||
|
||||
pub const ENTRY_HISTORY_SECRETS_KEY: &str = "__secrets_snapshot_v1";
|
||||
|
||||
#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
|
||||
pub struct EntrySecretSnapshot {
|
||||
pub name: String,
|
||||
#[serde(rename = "type")]
|
||||
pub secret_type: String,
|
||||
pub encrypted_hex: String,
|
||||
}
|
||||
|
||||
pub async fn metadata_with_secret_snapshot(
|
||||
tx: &mut sqlx::Transaction<'_, sqlx::Postgres>,
|
||||
entry_id: uuid::Uuid,
|
||||
metadata: &Value,
|
||||
) -> Result<Value> {
|
||||
#[derive(sqlx::FromRow)]
|
||||
struct Row {
|
||||
name: String,
|
||||
#[sqlx(rename = "type")]
|
||||
secret_type: String,
|
||||
encrypted: Vec<u8>,
|
||||
}
|
||||
|
||||
let rows: Vec<Row> = sqlx::query_as(
|
||||
"SELECT s.name, s.type, s.encrypted \
|
||||
FROM entry_secrets es \
|
||||
JOIN secrets s ON s.id = es.secret_id \
|
||||
WHERE es.entry_id = $1 \
|
||||
ORDER BY s.name ASC",
|
||||
)
|
||||
.bind(entry_id)
|
||||
.fetch_all(&mut **tx)
|
||||
.await?;
|
||||
|
||||
let snapshots: Vec<EntrySecretSnapshot> = rows
|
||||
.into_iter()
|
||||
.map(|r| EntrySecretSnapshot {
|
||||
name: r.name,
|
||||
secret_type: r.secret_type,
|
||||
encrypted_hex: ::hex::encode(r.encrypted),
|
||||
})
|
||||
.collect();
|
||||
|
||||
let mut merged = match metadata.clone() {
|
||||
Value::Object(obj) => obj,
|
||||
_ => Map::new(),
|
||||
};
|
||||
merged.insert(
|
||||
ENTRY_HISTORY_SECRETS_KEY.to_string(),
|
||||
serde_json::to_value(snapshots)?,
|
||||
);
|
||||
Ok(Value::Object(merged))
|
||||
}
|
||||
|
||||
pub fn strip_secret_snapshot_from_metadata(metadata: &Value) -> Value {
|
||||
let mut m = match metadata.clone() {
|
||||
Value::Object(obj) => obj,
|
||||
_ => return metadata.clone(),
|
||||
};
|
||||
m.remove(ENTRY_HISTORY_SECRETS_KEY);
|
||||
Value::Object(m)
|
||||
}
|
||||
|
||||
pub fn entry_secret_snapshot_from_metadata(metadata: &Value) -> Option<Vec<EntrySecretSnapshot>> {
|
||||
let Value::Object(map) = metadata else {
|
||||
return None;
|
||||
};
|
||||
let raw = map.get(ENTRY_HISTORY_SECRETS_KEY)?;
|
||||
serde_json::from_value(raw.clone()).ok()
|
||||
}
|
||||
|
||||
// ── DB helpers ────────────────────────────────────────────────────────────────
|
||||
|
||||
@@ -15,12 +15,30 @@ pub enum AppError {
|
||||
#[error("Entry not found")]
|
||||
NotFoundEntry,
|
||||
|
||||
#[error("User not found")]
|
||||
NotFoundUser,
|
||||
|
||||
#[error("Secret not found")]
|
||||
NotFoundSecret,
|
||||
|
||||
#[error("Authentication failed")]
|
||||
AuthenticationFailed,
|
||||
|
||||
#[error("Unauthorized: insufficient permissions")]
|
||||
Unauthorized,
|
||||
|
||||
#[error("Validation failed: {message}")]
|
||||
Validation { message: String },
|
||||
|
||||
#[error("Concurrent modification detected")]
|
||||
ConcurrentModification,
|
||||
|
||||
#[error("Decryption failed — the encryption key may be incorrect")]
|
||||
DecryptionFailed,
|
||||
|
||||
#[error("Encryption key not set — user must set passphrase first")]
|
||||
EncryptionKeyNotSet,
|
||||
|
||||
#[error(transparent)]
|
||||
Internal(#[from] anyhow::Error),
|
||||
}
|
||||
@@ -116,6 +134,18 @@ mod tests {
|
||||
let err = AppError::NotFoundEntry;
|
||||
assert_eq!(err.to_string(), "Entry not found");
|
||||
|
||||
let err = AppError::NotFoundUser;
|
||||
assert_eq!(err.to_string(), "User not found");
|
||||
|
||||
let err = AppError::NotFoundSecret;
|
||||
assert_eq!(err.to_string(), "Secret not found");
|
||||
|
||||
let err = AppError::AuthenticationFailed;
|
||||
assert_eq!(err.to_string(), "Authentication failed");
|
||||
|
||||
let err = AppError::Unauthorized;
|
||||
assert!(err.to_string().contains("Unauthorized"));
|
||||
|
||||
let err = AppError::Validation {
|
||||
message: "too long".to_string(),
|
||||
};
|
||||
@@ -123,6 +153,9 @@ mod tests {
|
||||
|
||||
let err = AppError::ConcurrentModification;
|
||||
assert!(err.to_string().contains("Concurrent modification"));
|
||||
|
||||
let err = AppError::EncryptionKeyNotSet;
|
||||
assert!(err.to_string().contains("Encryption key not set"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
|
||||
@@ -4,7 +4,7 @@ use serde_json::Value;
|
||||
use std::collections::BTreeMap;
|
||||
use uuid::Uuid;
|
||||
|
||||
/// A top-level entry (server, service, key, person, …).
|
||||
/// A top-level entry (server, service, account, person, …).
|
||||
/// Sensitive fields are stored separately in `secrets`.
|
||||
#[derive(Debug, Serialize, Deserialize, sqlx::FromRow)]
|
||||
pub struct Entry {
|
||||
|
||||
@@ -9,7 +9,6 @@ use crate::crypto;
|
||||
use crate::db;
|
||||
use crate::error::{AppError, DbErrorContext};
|
||||
use crate::models::EntryRow;
|
||||
use crate::taxonomy;
|
||||
|
||||
// ── Key/value parsing helpers ─────────────────────────────────────────────────
|
||||
|
||||
@@ -186,11 +185,10 @@ pub struct AddParams<'a> {
|
||||
}
|
||||
|
||||
pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) -> Result<AddResult> {
|
||||
let Value::Object(mut metadata_map) = build_json(params.meta_entries)? else {
|
||||
let Value::Object(metadata_map) = build_json(params.meta_entries)? else {
|
||||
unreachable!("build_json always returns a JSON object");
|
||||
};
|
||||
let normalized_entry_type =
|
||||
taxonomy::normalize_entry_type_and_metadata(params.entry_type, &mut metadata_map);
|
||||
let entry_type = params.entry_type.trim();
|
||||
let metadata = Value::Object(metadata_map);
|
||||
let secret_json = build_json(params.secret_entries)?;
|
||||
let meta_keys = collect_key_paths(params.meta_entries)?;
|
||||
@@ -225,26 +223,40 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
|
||||
.await?
|
||||
};
|
||||
|
||||
if let Some(ref ex) = existing
|
||||
&& let Err(e) = db::snapshot_entry_history(
|
||||
if let Some(ref ex) = existing {
|
||||
let history_metadata =
|
||||
match db::metadata_with_secret_snapshot(&mut tx, ex.id, &ex.metadata).await {
|
||||
Ok(v) => v,
|
||||
Err(e) => {
|
||||
tracing::warn!(error = %e, "failed to build secret snapshot for entry history");
|
||||
ex.metadata.clone()
|
||||
}
|
||||
};
|
||||
if let Err(e) = db::snapshot_entry_history(
|
||||
&mut tx,
|
||||
db::EntrySnapshotParams {
|
||||
entry_id: ex.id,
|
||||
user_id: params.user_id,
|
||||
folder: params.folder,
|
||||
entry_type: &normalized_entry_type,
|
||||
entry_type,
|
||||
name: params.name,
|
||||
version: ex.version,
|
||||
action: "add",
|
||||
tags: &ex.tags,
|
||||
metadata: &ex.metadata,
|
||||
metadata: &history_metadata,
|
||||
},
|
||||
)
|
||||
.await
|
||||
{
|
||||
tracing::warn!(error = %e, "failed to snapshot entry history before upsert");
|
||||
{
|
||||
tracing::warn!(error = %e, "failed to snapshot entry history before upsert");
|
||||
}
|
||||
}
|
||||
|
||||
// Upsert the entry row. On conflict (existing entry with same user_id+folder+name),
|
||||
// the entry columns are replaced wholesale. The old secret associations are torn down
|
||||
// below within the same transaction, so the whole operation is atomic: if any step
|
||||
// after this point fails, the transaction rolls back and the entry reverts to its
|
||||
// pre-upsert state (including the version bump that happened in the DO UPDATE clause).
|
||||
let entry_id: Uuid = if let Some(uid) = params.user_id {
|
||||
sqlx::query_scalar(
|
||||
r#"INSERT INTO entries (user_id, folder, type, name, notes, tags, metadata, version, updated_at)
|
||||
@@ -262,7 +274,7 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
|
||||
)
|
||||
.bind(uid)
|
||||
.bind(params.folder)
|
||||
.bind(&normalized_entry_type)
|
||||
.bind(entry_type)
|
||||
.bind(params.name)
|
||||
.bind(params.notes)
|
||||
.bind(params.tags)
|
||||
@@ -285,7 +297,7 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
|
||||
RETURNING id"#,
|
||||
)
|
||||
.bind(params.folder)
|
||||
.bind(&normalized_entry_type)
|
||||
.bind(entry_type)
|
||||
.bind(params.name)
|
||||
.bind(params.notes)
|
||||
.bind(params.tags)
|
||||
@@ -300,26 +312,6 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
|
||||
.fetch_one(&mut *tx)
|
||||
.await?;
|
||||
|
||||
if existing.is_none()
|
||||
&& let Err(e) = db::snapshot_entry_history(
|
||||
&mut tx,
|
||||
db::EntrySnapshotParams {
|
||||
entry_id,
|
||||
user_id: params.user_id,
|
||||
folder: params.folder,
|
||||
entry_type: &normalized_entry_type,
|
||||
name: params.name,
|
||||
version: current_entry_version,
|
||||
action: "create",
|
||||
tags: params.tags,
|
||||
metadata: &metadata,
|
||||
},
|
||||
)
|
||||
.await
|
||||
{
|
||||
tracing::warn!(error = %e, "failed to snapshot entry history on create");
|
||||
}
|
||||
|
||||
if existing.is_some() {
|
||||
#[derive(sqlx::FromRow)]
|
||||
struct ExistingField {
|
||||
@@ -429,12 +421,41 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
|
||||
}
|
||||
}
|
||||
|
||||
if existing.is_none() {
|
||||
let history_metadata =
|
||||
match db::metadata_with_secret_snapshot(&mut tx, entry_id, &metadata).await {
|
||||
Ok(v) => v,
|
||||
Err(e) => {
|
||||
tracing::warn!(error = %e, "failed to build secret snapshot for entry history");
|
||||
metadata.clone()
|
||||
}
|
||||
};
|
||||
if let Err(e) = db::snapshot_entry_history(
|
||||
&mut tx,
|
||||
db::EntrySnapshotParams {
|
||||
entry_id,
|
||||
user_id: params.user_id,
|
||||
folder: params.folder,
|
||||
entry_type,
|
||||
name: params.name,
|
||||
version: current_entry_version,
|
||||
action: "create",
|
||||
tags: params.tags,
|
||||
metadata: &history_metadata,
|
||||
},
|
||||
)
|
||||
.await
|
||||
{
|
||||
tracing::warn!(error = %e, "failed to snapshot entry history on create");
|
||||
}
|
||||
}
|
||||
|
||||
crate::audit::log_tx(
|
||||
&mut tx,
|
||||
params.user_id,
|
||||
"add",
|
||||
params.folder,
|
||||
&normalized_entry_type,
|
||||
entry_type,
|
||||
params.name,
|
||||
serde_json::json!({
|
||||
"tags": params.tags,
|
||||
@@ -449,7 +470,7 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
|
||||
Ok(AddResult {
|
||||
name: params.name.to_string(),
|
||||
folder: params.folder.to_string(),
|
||||
entry_type: normalized_entry_type,
|
||||
entry_type: entry_type.to_string(),
|
||||
tags: params.tags.to_vec(),
|
||||
meta_keys,
|
||||
secret_keys,
|
||||
|
||||
@@ -2,6 +2,8 @@ use anyhow::Result;
|
||||
use sqlx::PgPool;
|
||||
use uuid::Uuid;
|
||||
|
||||
use crate::error::AppError;
|
||||
|
||||
const KEY_PREFIX: &str = "sk_";
|
||||
|
||||
/// Generate a new API key: `sk_<64 hex chars>` = 67 characters total.
|
||||
@@ -14,23 +16,32 @@ pub fn generate_api_key() -> String {
|
||||
}
|
||||
|
||||
/// Return the user's existing API key, or generate and store a new one if NULL.
|
||||
/// Uses a transaction with atomic update to prevent TOCTOU race conditions.
|
||||
pub async fn ensure_api_key(pool: &PgPool, user_id: Uuid) -> Result<String> {
|
||||
let existing: Option<(Option<String>,)> =
|
||||
sqlx::query_as("SELECT api_key FROM users WHERE id = $1")
|
||||
.bind(user_id)
|
||||
.fetch_optional(pool)
|
||||
.await?;
|
||||
let mut tx = pool.begin().await?;
|
||||
|
||||
if let Some((Some(key),)) = existing {
|
||||
// Lock the row and check existing key
|
||||
let existing: (Option<String>,) =
|
||||
sqlx::query_as("SELECT api_key FROM users WHERE id = $1 FOR UPDATE")
|
||||
.bind(user_id)
|
||||
.fetch_optional(&mut *tx)
|
||||
.await?
|
||||
.ok_or(AppError::NotFoundUser)?;
|
||||
|
||||
if let Some(key) = existing.0 {
|
||||
tx.commit().await?;
|
||||
return Ok(key);
|
||||
}
|
||||
|
||||
// Generate and store new key atomically
|
||||
let new_key = generate_api_key();
|
||||
sqlx::query("UPDATE users SET api_key = $1 WHERE id = $2")
|
||||
.bind(&new_key)
|
||||
.bind(user_id)
|
||||
.execute(pool)
|
||||
.execute(&mut *tx)
|
||||
.await?;
|
||||
|
||||
tx.commit().await?;
|
||||
Ok(new_key)
|
||||
}
|
||||
|
||||
|
||||
@@ -4,20 +4,36 @@ use uuid::Uuid;
|
||||
|
||||
use crate::models::AuditLogEntry;
|
||||
|
||||
pub async fn list_for_user(pool: &PgPool, user_id: Uuid, limit: i64) -> Result<Vec<AuditLogEntry>> {
|
||||
pub async fn list_for_user(
|
||||
pool: &PgPool,
|
||||
user_id: Uuid,
|
||||
limit: i64,
|
||||
offset: i64,
|
||||
) -> Result<Vec<AuditLogEntry>> {
|
||||
let limit = limit.clamp(1, 200);
|
||||
let offset = offset.max(0);
|
||||
|
||||
let rows = sqlx::query_as(
|
||||
"SELECT id, user_id, action, folder, type, name, detail, created_at \
|
||||
FROM audit_log \
|
||||
WHERE user_id = $1 \
|
||||
ORDER BY created_at DESC, id DESC \
|
||||
LIMIT $2",
|
||||
LIMIT $2 OFFSET $3",
|
||||
)
|
||||
.bind(user_id)
|
||||
.bind(limit)
|
||||
.bind(offset)
|
||||
.fetch_all(pool)
|
||||
.await?;
|
||||
|
||||
Ok(rows)
|
||||
}
|
||||
|
||||
pub async fn count_for_user(pool: &PgPool, user_id: Uuid) -> Result<i64> {
|
||||
let count: i64 =
|
||||
sqlx::query_scalar("SELECT COUNT(*)::bigint FROM audit_log WHERE user_id = $1")
|
||||
.bind(user_id)
|
||||
.fetch_one(pool)
|
||||
.await?;
|
||||
Ok(count)
|
||||
}
|
||||
|
||||
@@ -31,6 +31,10 @@ pub struct DeleteParams<'a> {
|
||||
pub user_id: Option<Uuid>,
|
||||
}
|
||||
|
||||
/// Maximum number of entries that can be deleted in a single bulk operation.
|
||||
/// Prevents accidental mass deletion when filters are too broad.
|
||||
pub const MAX_BULK_DELETE: usize = 1000;
|
||||
|
||||
/// Delete a single entry by id (multi-tenant: `user_id` must match).
|
||||
pub async fn delete_by_id(pool: &PgPool, entry_id: Uuid, user_id: Uuid) -> Result<DeleteResult> {
|
||||
let mut tx = pool.begin().await?;
|
||||
@@ -374,6 +378,16 @@ async fn delete_bulk(
|
||||
}
|
||||
let rows = q.fetch_all(&mut *tx).await?;
|
||||
|
||||
if rows.len() > MAX_BULK_DELETE {
|
||||
tx.rollback().await?;
|
||||
anyhow::bail!(
|
||||
"Bulk delete would affect {} entries (limit: {}). \
|
||||
Narrow your filters or delete entries individually.",
|
||||
rows.len(),
|
||||
MAX_BULK_DELETE,
|
||||
);
|
||||
}
|
||||
|
||||
let mut deleted = Vec::with_capacity(rows.len());
|
||||
for row in &rows {
|
||||
let entry_row: EntryRow = EntryRow {
|
||||
@@ -427,6 +441,15 @@ async fn snapshot_and_delete(
|
||||
row: &EntryRow,
|
||||
user_id: Option<Uuid>,
|
||||
) -> Result<()> {
|
||||
let history_metadata = match db::metadata_with_secret_snapshot(tx, row.id, &row.metadata).await
|
||||
{
|
||||
Ok(v) => v,
|
||||
Err(e) => {
|
||||
tracing::warn!(error = %e, "failed to build secret snapshot for entry history");
|
||||
row.metadata.clone()
|
||||
}
|
||||
};
|
||||
|
||||
if let Err(e) = db::snapshot_entry_history(
|
||||
tx,
|
||||
db::EntrySnapshotParams {
|
||||
@@ -438,7 +461,7 @@ async fn snapshot_and_delete(
|
||||
version: row.version,
|
||||
action: "delete",
|
||||
tags: &row.tags,
|
||||
metadata: &row.metadata,
|
||||
metadata: &history_metadata,
|
||||
},
|
||||
)
|
||||
.await
|
||||
|
||||
@@ -31,8 +31,11 @@ pub async fn run(
|
||||
let entry = resolve_entry(pool, name, folder, user_id).await?;
|
||||
|
||||
let rows: Vec<Row> = sqlx::query_as(
|
||||
"SELECT version, action, created_at FROM entries_history \
|
||||
WHERE entry_id = $1 ORDER BY id DESC LIMIT $2",
|
||||
"SELECT DISTINCT ON (version) version, action, created_at \
|
||||
FROM entries_history \
|
||||
WHERE entry_id = $1 \
|
||||
ORDER BY version DESC, id DESC \
|
||||
LIMIT $2",
|
||||
)
|
||||
.bind(entry.id)
|
||||
.bind(limit as i64)
|
||||
|
||||
@@ -54,7 +54,13 @@ pub async fn run(
|
||||
.bind(params.user_id)
|
||||
.fetch_one(pool)
|
||||
.await
|
||||
.unwrap_or(false);
|
||||
.map_err(|e| {
|
||||
anyhow::anyhow!(
|
||||
"Failed to check entry existence for '{}': {}",
|
||||
entry.name,
|
||||
e
|
||||
)
|
||||
})?;
|
||||
|
||||
if exists && !params.force {
|
||||
return Err(anyhow::anyhow!(
|
||||
|
||||
@@ -1,3 +1,5 @@
|
||||
use std::collections::HashSet;
|
||||
|
||||
use anyhow::Result;
|
||||
use serde_json::Value;
|
||||
use sqlx::PgPool;
|
||||
@@ -122,7 +124,7 @@ pub async fn run(
|
||||
sqlx::query_as(
|
||||
"SELECT folder, type, version, action, tags, metadata \
|
||||
FROM entries_history \
|
||||
WHERE entry_id = $1 AND version = $2 ORDER BY id DESC LIMIT 1",
|
||||
WHERE entry_id = $1 AND version = $2 ORDER BY id ASC LIMIT 1",
|
||||
)
|
||||
.bind(entry_id)
|
||||
.bind(ver)
|
||||
@@ -149,6 +151,9 @@ pub async fn run(
|
||||
)
|
||||
})?;
|
||||
|
||||
let snap_secret_snapshot = db::entry_secret_snapshot_from_metadata(&snap.metadata);
|
||||
let snap_metadata = db::strip_secret_snapshot_from_metadata(&snap.metadata);
|
||||
|
||||
let _ = master_key;
|
||||
|
||||
let mut tx = pool.begin().await?;
|
||||
@@ -176,6 +181,15 @@ pub async fn run(
|
||||
.await?;
|
||||
|
||||
let live_entry_id = if let Some(ref lr) = live {
|
||||
let history_metadata =
|
||||
match db::metadata_with_secret_snapshot(&mut tx, lr.id, &lr.metadata).await {
|
||||
Ok(v) => v,
|
||||
Err(e) => {
|
||||
tracing::warn!(error = %e, "failed to build secret snapshot for entry history");
|
||||
lr.metadata.clone()
|
||||
}
|
||||
};
|
||||
|
||||
if let Err(e) = db::snapshot_entry_history(
|
||||
&mut tx,
|
||||
db::EntrySnapshotParams {
|
||||
@@ -187,7 +201,7 @@ pub async fn run(
|
||||
version: lr.version,
|
||||
action: "rollback",
|
||||
tags: &lr.tags,
|
||||
metadata: &lr.metadata,
|
||||
metadata: &history_metadata,
|
||||
},
|
||||
)
|
||||
.await
|
||||
@@ -228,11 +242,13 @@ pub async fn run(
|
||||
}
|
||||
|
||||
sqlx::query(
|
||||
"UPDATE entries SET tags = $1, metadata = $2, version = version + 1, \
|
||||
updated_at = NOW() WHERE id = $3",
|
||||
"UPDATE entries SET folder = $1, type = $2, tags = $3, metadata = $4, version = version + 1, \
|
||||
updated_at = NOW() WHERE id = $5",
|
||||
)
|
||||
.bind(&snap.folder)
|
||||
.bind(&snap.entry_type)
|
||||
.bind(&snap.tags)
|
||||
.bind(&snap.metadata)
|
||||
.bind(&snap_metadata)
|
||||
.bind(lr.id)
|
||||
.execute(&mut *tx)
|
||||
.await?;
|
||||
@@ -250,7 +266,7 @@ pub async fn run(
|
||||
.bind(&snap.entry_type)
|
||||
.bind(name)
|
||||
.bind(&snap.tags)
|
||||
.bind(&snap.metadata)
|
||||
.bind(&snap_metadata)
|
||||
.bind(snap.version)
|
||||
.fetch_one(&mut *tx)
|
||||
.await?
|
||||
@@ -264,16 +280,16 @@ pub async fn run(
|
||||
.bind(&snap.entry_type)
|
||||
.bind(name)
|
||||
.bind(&snap.tags)
|
||||
.bind(&snap.metadata)
|
||||
.bind(&snap_metadata)
|
||||
.bind(snap.version)
|
||||
.fetch_one(&mut *tx)
|
||||
.await?
|
||||
}
|
||||
};
|
||||
|
||||
// In N:N mode, rollback restores entry metadata/tags only.
|
||||
// Secret snapshots are kept for audit but secret linkage/content is not rewritten here.
|
||||
let _ = live_entry_id;
|
||||
if let Some(secret_snapshot) = snap_secret_snapshot {
|
||||
restore_entry_secrets(&mut tx, live_entry_id, user_id, &secret_snapshot).await?;
|
||||
}
|
||||
|
||||
crate::audit::log_tx(
|
||||
&mut tx,
|
||||
@@ -298,3 +314,144 @@ pub async fn run(
|
||||
restored_version: snap.version,
|
||||
})
|
||||
}
|
||||
|
||||
async fn restore_entry_secrets(
|
||||
tx: &mut sqlx::Transaction<'_, sqlx::Postgres>,
|
||||
entry_id: Uuid,
|
||||
user_id: Option<Uuid>,
|
||||
snapshot: &[db::EntrySecretSnapshot],
|
||||
) -> Result<()> {
|
||||
#[derive(sqlx::FromRow)]
|
||||
struct LinkedSecret {
|
||||
id: Uuid,
|
||||
name: String,
|
||||
encrypted: Vec<u8>,
|
||||
}
|
||||
|
||||
let linked: Vec<LinkedSecret> = sqlx::query_as(
|
||||
"SELECT s.id, s.name, s.encrypted \
|
||||
FROM entry_secrets es \
|
||||
JOIN secrets s ON s.id = es.secret_id \
|
||||
WHERE es.entry_id = $1",
|
||||
)
|
||||
.bind(entry_id)
|
||||
.fetch_all(&mut **tx)
|
||||
.await?;
|
||||
|
||||
let target_names: HashSet<&str> = snapshot.iter().map(|s| s.name.as_str()).collect();
|
||||
|
||||
for s in &linked {
|
||||
if target_names.contains(s.name.as_str()) {
|
||||
continue;
|
||||
}
|
||||
if let Err(e) = db::snapshot_secret_history(
|
||||
tx,
|
||||
db::SecretSnapshotParams {
|
||||
secret_id: s.id,
|
||||
name: &s.name,
|
||||
encrypted: &s.encrypted,
|
||||
action: "rollback",
|
||||
},
|
||||
)
|
||||
.await
|
||||
{
|
||||
tracing::warn!(error = %e, "failed to snapshot secret before rollback unlink");
|
||||
}
|
||||
|
||||
sqlx::query("DELETE FROM entry_secrets WHERE entry_id = $1 AND secret_id = $2")
|
||||
.bind(entry_id)
|
||||
.bind(s.id)
|
||||
.execute(&mut **tx)
|
||||
.await?;
|
||||
|
||||
sqlx::query(
|
||||
"DELETE FROM secrets s \
|
||||
WHERE s.id = $1 \
|
||||
AND NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = s.id)",
|
||||
)
|
||||
.bind(s.id)
|
||||
.execute(&mut **tx)
|
||||
.await?;
|
||||
}
|
||||
|
||||
for snap in snapshot {
|
||||
let encrypted = ::hex::decode(&snap.encrypted_hex).map_err(|e| {
|
||||
anyhow::anyhow!("invalid secret snapshot data for '{}': {}", snap.name, e)
|
||||
})?;
|
||||
|
||||
#[derive(sqlx::FromRow)]
|
||||
struct ExistingSecret {
|
||||
id: Uuid,
|
||||
encrypted: Vec<u8>,
|
||||
}
|
||||
|
||||
let existing: Option<ExistingSecret> = if let Some(uid) = user_id {
|
||||
sqlx::query_as("SELECT id, encrypted FROM secrets WHERE user_id = $1 AND name = $2")
|
||||
.bind(uid)
|
||||
.bind(&snap.name)
|
||||
.fetch_optional(&mut **tx)
|
||||
.await?
|
||||
} else {
|
||||
sqlx::query_as("SELECT id, encrypted FROM secrets WHERE user_id IS NULL AND name = $1")
|
||||
.bind(&snap.name)
|
||||
.fetch_optional(&mut **tx)
|
||||
.await?
|
||||
};
|
||||
|
||||
let secret_id = if let Some(ex) = existing {
|
||||
if ex.encrypted != encrypted
|
||||
&& let Err(e) = db::snapshot_secret_history(
|
||||
tx,
|
||||
db::SecretSnapshotParams {
|
||||
secret_id: ex.id,
|
||||
name: &snap.name,
|
||||
encrypted: &ex.encrypted,
|
||||
action: "rollback",
|
||||
},
|
||||
)
|
||||
.await
|
||||
{
|
||||
tracing::warn!(error = %e, "failed to snapshot secret before rollback restore");
|
||||
}
|
||||
sqlx::query(
|
||||
"UPDATE secrets SET type = $1, encrypted = $2, version = version + 1, updated_at = NOW() \
|
||||
WHERE id = $3",
|
||||
)
|
||||
.bind(&snap.secret_type)
|
||||
.bind(&encrypted)
|
||||
.bind(ex.id)
|
||||
.execute(&mut **tx)
|
||||
.await?;
|
||||
ex.id
|
||||
} else if let Some(uid) = user_id {
|
||||
sqlx::query_scalar(
|
||||
"INSERT INTO secrets (user_id, name, type, encrypted) VALUES ($1, $2, $3, $4) RETURNING id",
|
||||
)
|
||||
.bind(uid)
|
||||
.bind(&snap.name)
|
||||
.bind(&snap.secret_type)
|
||||
.bind(&encrypted)
|
||||
.fetch_one(&mut **tx)
|
||||
.await?
|
||||
} else {
|
||||
sqlx::query_scalar(
|
||||
"INSERT INTO secrets (user_id, name, type, encrypted) VALUES (NULL, $1, $2, $3) RETURNING id",
|
||||
)
|
||||
.bind(&snap.name)
|
||||
.bind(&snap.secret_type)
|
||||
.bind(&encrypted)
|
||||
.fetch_one(&mut **tx)
|
||||
.await?
|
||||
};
|
||||
|
||||
sqlx::query(
|
||||
"INSERT INTO entry_secrets (entry_id, secret_id) VALUES ($1, $2) ON CONFLICT DO NOTHING",
|
||||
)
|
||||
.bind(entry_id)
|
||||
.bind(secret_id)
|
||||
.execute(&mut **tx)
|
||||
.await?;
|
||||
}
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
@@ -11,7 +11,6 @@ use crate::service::add::{
|
||||
collect_field_paths, collect_key_paths, flatten_json_fields, insert_path, parse_key_path,
|
||||
parse_kv, remove_path,
|
||||
};
|
||||
use crate::taxonomy;
|
||||
|
||||
#[derive(Debug, serde::Serialize)]
|
||||
pub struct UpdateResult {
|
||||
@@ -25,6 +24,8 @@ pub struct UpdateResult {
|
||||
pub remove_meta: Vec<String>,
|
||||
pub secret_keys: Vec<String>,
|
||||
pub remove_secrets: Vec<String>,
|
||||
pub linked_secrets: Vec<String>,
|
||||
pub unlinked_secrets: Vec<String>,
|
||||
}
|
||||
|
||||
pub struct UpdateParams<'a> {
|
||||
@@ -39,6 +40,8 @@ pub struct UpdateParams<'a> {
|
||||
pub secret_entries: &'a [String],
|
||||
pub secret_types: &'a std::collections::HashMap<String, String>,
|
||||
pub remove_secrets: &'a [String],
|
||||
pub link_secret_names: &'a [String],
|
||||
pub unlink_secret_names: &'a [String],
|
||||
pub user_id: Option<Uuid>,
|
||||
}
|
||||
|
||||
@@ -109,6 +112,15 @@ pub async fn run(
|
||||
}
|
||||
};
|
||||
|
||||
let history_metadata =
|
||||
match db::metadata_with_secret_snapshot(&mut tx, row.id, &row.metadata).await {
|
||||
Ok(v) => v,
|
||||
Err(e) => {
|
||||
tracing::warn!(error = %e, "failed to build secret snapshot for entry history");
|
||||
row.metadata.clone()
|
||||
}
|
||||
};
|
||||
|
||||
if let Err(e) = db::snapshot_entry_history(
|
||||
&mut tx,
|
||||
db::EntrySnapshotParams {
|
||||
@@ -120,7 +132,7 @@ pub async fn run(
|
||||
version: row.version,
|
||||
action: "update",
|
||||
tags: &row.tags,
|
||||
metadata: &row.metadata,
|
||||
metadata: &history_metadata,
|
||||
},
|
||||
)
|
||||
.await
|
||||
@@ -295,6 +307,101 @@ pub async fn run(
|
||||
}
|
||||
}
|
||||
|
||||
// Link existing secrets by name
|
||||
let mut linked_secrets = Vec::new();
|
||||
for link_name in params.link_secret_names {
|
||||
let link_name = link_name.trim();
|
||||
if link_name.is_empty() {
|
||||
anyhow::bail!("link_secret_names contains an empty name");
|
||||
}
|
||||
let secret_ids: Vec<Uuid> = if let Some(uid) = params.user_id {
|
||||
sqlx::query_scalar("SELECT id FROM secrets WHERE user_id = $1 AND name = $2")
|
||||
.bind(uid)
|
||||
.bind(link_name)
|
||||
.fetch_all(&mut *tx)
|
||||
.await?
|
||||
} else {
|
||||
sqlx::query_scalar("SELECT id FROM secrets WHERE user_id IS NULL AND name = $1")
|
||||
.bind(link_name)
|
||||
.fetch_all(&mut *tx)
|
||||
.await?
|
||||
};
|
||||
|
||||
match secret_ids.len() {
|
||||
0 => anyhow::bail!("Not found: secret named '{}'", link_name),
|
||||
1 => {
|
||||
sqlx::query(
|
||||
"INSERT INTO entry_secrets (entry_id, secret_id) VALUES ($1, $2) ON CONFLICT DO NOTHING",
|
||||
)
|
||||
.bind(row.id)
|
||||
.bind(secret_ids[0])
|
||||
.execute(&mut *tx)
|
||||
.await?;
|
||||
linked_secrets.push(link_name.to_string());
|
||||
}
|
||||
n => anyhow::bail!(
|
||||
"Ambiguous: {} secrets named '{}' found. Please deduplicate names first.",
|
||||
n,
|
||||
link_name
|
||||
),
|
||||
}
|
||||
}
|
||||
|
||||
// Unlink secrets by name
|
||||
let mut unlinked_secrets = Vec::new();
|
||||
for unlink_name in params.unlink_secret_names {
|
||||
let unlink_name = unlink_name.trim();
|
||||
if unlink_name.is_empty() {
|
||||
continue;
|
||||
}
|
||||
|
||||
#[derive(sqlx::FromRow)]
|
||||
struct SecretToUnlink {
|
||||
id: Uuid,
|
||||
encrypted: Vec<u8>,
|
||||
}
|
||||
let secret: Option<SecretToUnlink> = sqlx::query_as(
|
||||
"SELECT s.id, s.encrypted \
|
||||
FROM entry_secrets es \
|
||||
JOIN secrets s ON s.id = es.secret_id \
|
||||
WHERE es.entry_id = $1 AND s.name = $2",
|
||||
)
|
||||
.bind(row.id)
|
||||
.bind(unlink_name)
|
||||
.fetch_optional(&mut *tx)
|
||||
.await?;
|
||||
|
||||
if let Some(s) = secret {
|
||||
if let Err(e) = db::snapshot_secret_history(
|
||||
&mut tx,
|
||||
db::SecretSnapshotParams {
|
||||
secret_id: s.id,
|
||||
name: unlink_name,
|
||||
encrypted: &s.encrypted,
|
||||
action: "delete",
|
||||
},
|
||||
)
|
||||
.await
|
||||
{
|
||||
tracing::warn!(error = %e, "failed to snapshot secret field history before unlink");
|
||||
}
|
||||
sqlx::query("DELETE FROM entry_secrets WHERE entry_id = $1 AND secret_id = $2")
|
||||
.bind(row.id)
|
||||
.bind(s.id)
|
||||
.execute(&mut *tx)
|
||||
.await?;
|
||||
sqlx::query(
|
||||
"DELETE FROM secrets s \
|
||||
WHERE s.id = $1 \
|
||||
AND NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = s.id)",
|
||||
)
|
||||
.bind(s.id)
|
||||
.execute(&mut *tx)
|
||||
.await?;
|
||||
unlinked_secrets.push(unlink_name.to_string());
|
||||
}
|
||||
}
|
||||
|
||||
let meta_keys = collect_key_paths(params.meta_entries)?;
|
||||
let remove_meta_keys = collect_field_paths(params.remove_meta)?;
|
||||
let secret_keys = collect_key_paths(params.secret_entries)?;
|
||||
@@ -304,8 +411,8 @@ pub async fn run(
|
||||
&mut tx,
|
||||
params.user_id,
|
||||
"update",
|
||||
"",
|
||||
"",
|
||||
&row.folder,
|
||||
&row.entry_type,
|
||||
params.name,
|
||||
serde_json::json!({
|
||||
"add_tags": params.add_tags,
|
||||
@@ -314,6 +421,8 @@ pub async fn run(
|
||||
"remove_meta": remove_meta_keys,
|
||||
"secret_keys": secret_keys,
|
||||
"remove_secrets": remove_secret_keys,
|
||||
"linked_secrets": linked_secrets,
|
||||
"unlinked_secrets": unlinked_secrets,
|
||||
}),
|
||||
)
|
||||
.await;
|
||||
@@ -330,6 +439,8 @@ pub async fn run(
|
||||
remove_meta: remove_meta_keys,
|
||||
secret_keys,
|
||||
remove_secrets: remove_secret_keys,
|
||||
linked_secrets,
|
||||
unlinked_secrets,
|
||||
})
|
||||
}
|
||||
|
||||
@@ -379,6 +490,15 @@ pub async fn update_fields_by_id(
|
||||
}
|
||||
};
|
||||
|
||||
let history_metadata =
|
||||
match db::metadata_with_secret_snapshot(&mut tx, row.id, &row.metadata).await {
|
||||
Ok(v) => v,
|
||||
Err(e) => {
|
||||
tracing::warn!(error = %e, "failed to build secret snapshot for entry history");
|
||||
row.metadata.clone()
|
||||
}
|
||||
};
|
||||
|
||||
if let Err(e) = db::snapshot_entry_history(
|
||||
&mut tx,
|
||||
db::EntrySnapshotParams {
|
||||
@@ -390,7 +510,7 @@ pub async fn update_fields_by_id(
|
||||
version: row.version,
|
||||
action: "update",
|
||||
tags: &row.tags,
|
||||
metadata: &row.metadata,
|
||||
metadata: &history_metadata,
|
||||
},
|
||||
)
|
||||
.await
|
||||
@@ -398,13 +518,7 @@ pub async fn update_fields_by_id(
|
||||
tracing::warn!(error = %e, "failed to snapshot entry history before web update");
|
||||
}
|
||||
|
||||
let mut metadata_map = match params.metadata {
|
||||
Value::Object(m) => m.clone(),
|
||||
_ => Map::new(),
|
||||
};
|
||||
let normalized_type =
|
||||
taxonomy::normalize_entry_type_and_metadata(params.entry_type, &mut metadata_map);
|
||||
let normalized_metadata = Value::Object(metadata_map);
|
||||
let entry_type = params.entry_type.trim();
|
||||
|
||||
let res = sqlx::query(
|
||||
"UPDATE entries SET folder = $1, type = $2, name = $3, notes = $4, tags = $5, metadata = $6, \
|
||||
@@ -412,11 +526,11 @@ pub async fn update_fields_by_id(
|
||||
WHERE id = $7 AND version = $8",
|
||||
)
|
||||
.bind(params.folder)
|
||||
.bind(&normalized_type)
|
||||
.bind(entry_type)
|
||||
.bind(params.name)
|
||||
.bind(params.notes)
|
||||
.bind(params.tags)
|
||||
.bind(&normalized_metadata)
|
||||
.bind(params.metadata)
|
||||
.bind(row.id)
|
||||
.bind(row.version)
|
||||
.execute(&mut *tx)
|
||||
@@ -443,7 +557,7 @@ pub async fn update_fields_by_id(
|
||||
Some(user_id),
|
||||
"update",
|
||||
params.folder,
|
||||
&normalized_type,
|
||||
entry_type,
|
||||
params.name,
|
||||
serde_json::json!({
|
||||
"source": "web",
|
||||
|
||||
@@ -16,14 +16,17 @@ pub struct OAuthProfile {
|
||||
/// Find or create a user from an OAuth profile.
|
||||
/// Returns (user, is_new) where is_new indicates first-time registration.
|
||||
pub async fn find_or_create_user(pool: &PgPool, profile: OAuthProfile) -> Result<(User, bool)> {
|
||||
// Check if this OAuth account already exists
|
||||
// Use a transaction with FOR UPDATE to prevent TOCTOU race conditions
|
||||
let mut tx = pool.begin().await?;
|
||||
|
||||
// Check if this OAuth account already exists (with row lock)
|
||||
let existing: Option<OauthAccount> = sqlx::query_as(
|
||||
"SELECT id, user_id, provider, provider_id, email, name, avatar_url, created_at \
|
||||
FROM oauth_accounts WHERE provider = $1 AND provider_id = $2",
|
||||
FROM oauth_accounts WHERE provider = $1 AND provider_id = $2 FOR UPDATE",
|
||||
)
|
||||
.bind(&profile.provider)
|
||||
.bind(&profile.provider_id)
|
||||
.fetch_optional(pool)
|
||||
.fetch_optional(&mut *tx)
|
||||
.await?;
|
||||
|
||||
if let Some(oa) = existing {
|
||||
@@ -32,8 +35,9 @@ pub async fn find_or_create_user(pool: &PgPool, profile: OAuthProfile) -> Result
|
||||
FROM users WHERE id = $1",
|
||||
)
|
||||
.bind(oa.user_id)
|
||||
.fetch_one(pool)
|
||||
.fetch_one(&mut *tx)
|
||||
.await?;
|
||||
tx.commit().await?;
|
||||
return Ok((user, false));
|
||||
}
|
||||
|
||||
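The hunk above moves the existence check into a transaction and adds `FOR UPDATE`, so the check and the later write happen under a row lock. A minimal sketch of that pattern (not part of the diff; the provider values are placeholders):

```rust
// Sketch of the SELECT ... FOR UPDATE pattern used in the hunk above.
// "google" / "provider-123" are illustrative placeholders only.
async fn link_once(pool: &sqlx::PgPool) -> anyhow::Result<()> {
    let mut tx = pool.begin().await?;

    // Any matching row stays locked until the transaction ends, so a concurrent
    // OAuth callback cannot act on the same account in between (closing the TOCTOU window).
    let existing: Option<(uuid::Uuid,)> = sqlx::query_as(
        "SELECT user_id FROM oauth_accounts \
         WHERE provider = $1 AND provider_id = $2 FOR UPDATE",
    )
    .bind("google")
    .bind("provider-123")
    .fetch_optional(&mut *tx)
    .await?;

    if existing.is_none() {
        // Safe to insert the new account here; a unique constraint on
        // (provider, provider_id), if defined, remains the final safety net.
    }

    tx.commit().await?;
    Ok(())
}
```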
@@ -43,8 +47,6 @@ pub async fn find_or_create_user(pool: &PgPool, profile: OAuthProfile) -> Result
|
||||
.clone()
|
||||
.unwrap_or_else(|| profile.email.clone().unwrap_or_else(|| "User".to_string()));
|
||||
|
||||
let mut tx = pool.begin().await?;
|
||||
|
||||
let user: User = sqlx::query_as(
|
||||
"INSERT INTO users (email, name, avatar_url) \
|
||||
VALUES ($1, $2, $3) \
|
||||
@@ -125,13 +127,16 @@ pub async fn bind_oauth_account(
|
||||
user_id: Uuid,
|
||||
profile: OAuthProfile,
|
||||
) -> Result<OauthAccount> {
|
||||
// Check if this provider_id is already linked to someone else
|
||||
// Use a transaction with FOR UPDATE to prevent TOCTOU race conditions
|
||||
let mut tx = pool.begin().await?;
|
||||
|
||||
// Check if this provider_id is already linked to someone else (with row lock)
|
||||
let conflict: Option<(Uuid,)> = sqlx::query_as(
|
||||
"SELECT user_id FROM oauth_accounts WHERE provider = $1 AND provider_id = $2",
|
||||
"SELECT user_id FROM oauth_accounts WHERE provider = $1 AND provider_id = $2 FOR UPDATE",
|
||||
)
|
||||
.bind(&profile.provider)
|
||||
.bind(&profile.provider_id)
|
||||
.fetch_optional(pool)
|
||||
.fetch_optional(&mut *tx)
|
||||
.await?;
|
||||
|
||||
if let Some((existing_user_id,)) = conflict {
|
||||
@@ -148,11 +153,11 @@ pub async fn bind_oauth_account(
|
||||
}
|
||||
|
||||
let existing_provider_for_user: Option<(String,)> = sqlx::query_as(
|
||||
"SELECT provider_id FROM oauth_accounts WHERE user_id = $1 AND provider = $2",
|
||||
"SELECT provider_id FROM oauth_accounts WHERE user_id = $1 AND provider = $2 FOR UPDATE",
|
||||
)
|
||||
.bind(user_id)
|
||||
.bind(&profile.provider)
|
||||
.fetch_optional(pool)
|
||||
.fetch_optional(&mut *tx)
|
||||
.await?;
|
||||
|
||||
if existing_provider_for_user.is_some() {
|
||||
@@ -174,9 +179,10 @@ pub async fn bind_oauth_account(
|
||||
.bind(&profile.email)
|
||||
.bind(&profile.name)
|
||||
.bind(&profile.avatar_url)
|
||||
.fetch_one(pool)
|
||||
.fetch_one(&mut *tx)
|
||||
.await?;
|
||||
|
||||
tx.commit().await?;
|
||||
Ok(account)
|
||||
}
|
||||
|
||||
@@ -194,10 +200,14 @@ pub async fn unbind_oauth_account(
|
||||
);
|
||||
}
|
||||
|
||||
let count: i64 = sqlx::query_scalar("SELECT COUNT(*) FROM oauth_accounts WHERE user_id = $1")
|
||||
.bind(user_id)
|
||||
.fetch_one(pool)
|
||||
.await?;
|
||||
let mut tx = pool.begin().await?;
|
||||
|
||||
let locked_accounts: Vec<(String,)> =
|
||||
sqlx::query_as("SELECT provider FROM oauth_accounts WHERE user_id = $1 FOR UPDATE")
|
||||
.bind(user_id)
|
||||
.fetch_all(&mut *tx)
|
||||
.await?;
|
||||
let count = locked_accounts.len();
|
||||
|
||||
if count <= 1 {
|
||||
anyhow::bail!("Cannot unbind the last OAuth account. Please link another account first.");
|
||||
@@ -206,8 +216,87 @@ pub async fn unbind_oauth_account(
|
||||
sqlx::query("DELETE FROM oauth_accounts WHERE user_id = $1 AND provider = $2")
|
||||
.bind(user_id)
|
||||
.bind(provider)
|
||||
.execute(pool)
|
||||
.execute(&mut *tx)
|
||||
.await?;
|
||||
|
||||
tx.commit().await?;
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
|
||||
async fn maybe_test_pool() -> Option<PgPool> {
|
||||
let database_url = match std::env::var("SECRETS_DATABASE_URL") {
|
||||
Ok(v) => v,
|
||||
Err(_) => {
|
||||
eprintln!("skip user service tests: SECRETS_DATABASE_URL not set");
|
||||
return None;
|
||||
}
|
||||
};
|
||||
let pool = match sqlx::PgPool::connect(&database_url).await {
|
||||
Ok(pool) => pool,
|
||||
Err(e) => {
|
||||
eprintln!("skip user service tests: cannot connect to database: {e}");
|
||||
return None;
|
||||
}
|
||||
};
|
||||
if let Err(e) = crate::db::migrate(&pool).await {
|
||||
eprintln!("skip user service tests: migrate failed: {e}");
|
||||
return None;
|
||||
}
|
||||
Some(pool)
|
||||
}
|
||||
|
||||
async fn cleanup_user_rows(pool: &PgPool, user_id: Uuid) -> Result<()> {
|
||||
sqlx::query("DELETE FROM oauth_accounts WHERE user_id = $1")
|
||||
.bind(user_id)
|
||||
.execute(pool)
|
||||
.await?;
|
||||
sqlx::query("DELETE FROM users WHERE id = $1")
|
||||
.bind(user_id)
|
||||
.execute(pool)
|
||||
.await?;
|
||||
Ok(())
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn unbind_oauth_account_removes_only_requested_provider() -> Result<()> {
|
||||
let Some(pool) = maybe_test_pool().await else {
|
||||
return Ok(());
|
||||
};
|
||||
let user_id = Uuid::from_u128(rand::random());
|
||||
|
||||
cleanup_user_rows(&pool, user_id).await?;
|
||||
|
||||
sqlx::query("INSERT INTO users (id, name) VALUES ($1, '')")
|
||||
.bind(user_id)
|
||||
.execute(&pool)
|
||||
.await?;
|
||||
sqlx::query(
|
||||
"INSERT INTO oauth_accounts (user_id, provider, provider_id, email, name, avatar_url) \
|
||||
VALUES ($1, 'google', $2, NULL, NULL, NULL), \
|
||||
($1, 'github', $3, NULL, NULL, NULL)",
|
||||
)
|
||||
.bind(user_id)
|
||||
.bind(format!("google-{user_id}"))
|
||||
.bind(format!("github-{user_id}"))
|
||||
.execute(&pool)
|
||||
.await?;
|
||||
|
||||
unbind_oauth_account(&pool, user_id, "github", Some("google")).await?;
|
||||
|
||||
let remaining: Vec<(String,)> = sqlx::query_as(
|
||||
"SELECT provider FROM oauth_accounts WHERE user_id = $1 ORDER BY provider",
|
||||
)
|
||||
.bind(user_id)
|
||||
.fetch_all(&pool)
|
||||
.await?;
|
||||
assert_eq!(remaining, vec![("google".to_string(),)]);
|
||||
|
||||
cleanup_user_rows(&pool, user_id).await?;
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1,111 +1,4 @@
|
||||
use serde_json::{Map, Value};
|
||||
|
||||
fn normalize_token(input: &str) -> String {
|
||||
input.trim().to_lowercase().replace('_', "-")
|
||||
}
|
||||
|
||||
fn normalize_subtype_token(input: &str) -> String {
|
||||
normalize_token(input)
|
||||
}
|
||||
|
||||
fn map_legacy_entry_type(input: &str) -> Option<(&'static str, &'static str)> {
|
||||
match input {
|
||||
"log-ingestion-endpoint" => Some(("service", "log-ingestion")),
|
||||
"cloud-api" => Some(("service", "cloud-api")),
|
||||
"git-server" => Some(("service", "git")),
|
||||
"mqtt-broker" => Some(("service", "mqtt-broker")),
|
||||
"database" => Some(("service", "database")),
|
||||
"monitoring-dashboard" => Some(("service", "monitoring")),
|
||||
"dns-api" => Some(("service", "dns-api")),
|
||||
"notification-webhook" => Some(("service", "webhook")),
|
||||
"api-endpoint" => Some(("service", "api-endpoint")),
|
||||
"credential" | "credential-key" => Some(("service", "credential")),
|
||||
"key" => Some(("service", "credential")),
|
||||
_ => None,
|
||||
}
|
||||
}
|
||||
|
||||
/// Normalize entry `type` and optionally backfill `metadata.subtype` for legacy values.
|
||||
///
|
||||
/// This keeps backward compatibility:
|
||||
/// - stable primary types stay unchanged
|
||||
/// - known legacy long-tail types are mapped to `service` + `metadata.subtype`
|
||||
/// - unknown values are kept (normalized to kebab-case) instead of hard failing
|
||||
pub fn normalize_entry_type_and_metadata(
|
||||
entry_type: &str,
|
||||
metadata: &mut Map<String, Value>,
|
||||
) -> String {
|
||||
let original_raw = entry_type.trim();
|
||||
let normalized = normalize_token(original_raw);
|
||||
if normalized.is_empty() {
|
||||
return String::new();
|
||||
}
|
||||
|
||||
if let Some((mapped_type, mapped_subtype)) = map_legacy_entry_type(&normalized) {
|
||||
if !metadata.contains_key("subtype") {
|
||||
metadata.insert(
|
||||
"subtype".to_string(),
|
||||
Value::String(mapped_subtype.to_string()),
|
||||
);
|
||||
}
|
||||
if !metadata.contains_key("_original_type") && original_raw != mapped_type {
|
||||
metadata.insert(
|
||||
"_original_type".to_string(),
|
||||
Value::String(original_raw.to_string()),
|
||||
);
|
||||
}
|
||||
return mapped_type.to_string();
|
||||
}
|
||||
|
||||
if let Some(subtype) = metadata.get_mut("subtype")
|
||||
&& let Some(s) = subtype.as_str()
|
||||
{
|
||||
*subtype = Value::String(normalize_subtype_token(s));
|
||||
}
|
||||
|
||||
normalized
|
||||
}
|
||||
|
||||
/// Canonical secret type options for UI dropdowns.
|
||||
pub const SECRET_TYPE_OPTIONS: &[&str] = &[
|
||||
"text", "password", "token", "api-key", "ssh-key", "url", "phone", "id-card",
|
||||
];
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
use serde_json::{Map, Value};
|
||||
|
||||
#[test]
|
||||
fn normalize_entry_type_maps_legacy_type_and_backfills_metadata() {
|
||||
let mut metadata = Map::new();
|
||||
let normalized = normalize_entry_type_and_metadata("git-server", &mut metadata);
|
||||
|
||||
assert_eq!(normalized, "service");
|
||||
assert_eq!(
|
||||
metadata.get("subtype"),
|
||||
Some(&Value::String("git".to_string()))
|
||||
);
|
||||
assert_eq!(
|
||||
metadata.get("_original_type"),
|
||||
Some(&Value::String("git-server".to_string()))
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn normalize_entry_type_normalizes_existing_subtype() {
|
||||
let mut metadata = Map::new();
|
||||
metadata.insert(
|
||||
"subtype".to_string(),
|
||||
Value::String("Cloud_API".to_string()),
|
||||
);
|
||||
|
||||
let normalized = normalize_entry_type_and_metadata("service", &mut metadata);
|
||||
|
||||
assert_eq!(normalized, "service");
|
||||
assert_eq!(
|
||||
metadata.get("subtype"),
|
||||
Some(&Value::String("cloud-api".to_string()))
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
[package]
|
||||
name = "secrets-mcp"
|
||||
version = "0.4.0"
|
||||
version = "0.5.7"
|
||||
edition.workspace = true
|
||||
|
||||
[[bin]]
|
||||
@@ -17,9 +17,10 @@ rmcp = { version = "1", features = ["server", "macros", "transport-streamable-ht
|
||||
axum = "0.8"
|
||||
axum-extra = { version = "0.10", features = ["typed-header"] }
|
||||
tower = "0.5"
|
||||
tower-http = { version = "0.6", features = ["cors", "trace"] }
|
||||
tower-http = { version = "0.6", features = ["cors", "trace", "limit"] }
|
||||
tower-sessions = "0.14"
|
||||
tower-sessions-sqlx-store-chrono = { version = "0.14", features = ["postgres"] }
|
||||
governor = { version = "0.10", features = ["std", "jitter"] }
|
||||
time = "0.3"
|
||||
|
||||
# OAuth (manual token exchange via reqwest)
|
||||
@@ -44,3 +45,4 @@ dotenvy.workspace = true
|
||||
urlencoding = "2"
|
||||
schemars = "1"
|
||||
http = "1"
|
||||
url = "2"
|
||||
|
||||
@@ -1,7 +1,5 @@
|
||||
use std::net::SocketAddr;
|
||||
|
||||
use axum::{
|
||||
extract::{ConnectInfo, Request, State},
|
||||
extract::{Request, State},
|
||||
http::StatusCode,
|
||||
middleware::Next,
|
||||
response::Response,
|
||||
@@ -11,29 +9,14 @@ use uuid::Uuid;
|
||||
|
||||
use secrets_core::service::api_key::validate_api_key;
|
||||
|
||||
use crate::client_ip;
|
||||
|
||||
/// Injected into request extensions after Bearer token validation.
|
||||
#[derive(Clone, Debug)]
|
||||
pub struct AuthUser {
|
||||
pub user_id: Uuid,
|
||||
}
|
||||
|
||||
fn log_client_ip(req: &Request) -> Option<String> {
|
||||
if let Some(first) = req
|
||||
.headers()
|
||||
.get("x-forwarded-for")
|
||||
.and_then(|v| v.to_str().ok())
|
||||
.and_then(|s| s.split(',').next())
|
||||
{
|
||||
let s = first.trim();
|
||||
if !s.is_empty() {
|
||||
return Some(s.to_string());
|
||||
}
|
||||
}
|
||||
req.extensions()
|
||||
.get::<ConnectInfo<SocketAddr>>()
|
||||
.map(|c| c.ip().to_string())
|
||||
}
|
||||
|
||||
/// Axum middleware that validates Bearer API keys for the /mcp route.
|
||||
/// Passes all non-MCP paths through without authentication.
|
||||
pub async fn bearer_auth_middleware(
|
||||
@@ -43,7 +26,7 @@ pub async fn bearer_auth_middleware(
|
||||
) -> Result<Response, StatusCode> {
|
||||
let path = req.uri().path();
|
||||
let method = req.method().as_str();
|
||||
let client_ip = log_client_ip(&req);
|
||||
let client_ip = client_ip::extract_client_ip(&req);
|
||||
|
||||
// Only authenticate /mcp paths
|
||||
if !path.starts_with("/mcp") {
|
||||
@@ -66,7 +49,7 @@ pub async fn bearer_auth_middleware(
|
||||
tracing::warn!(
|
||||
method,
|
||||
path,
|
||||
client_ip = client_ip.as_deref(),
|
||||
%client_ip,
|
||||
"invalid Authorization header format on /mcp (expected Bearer …)"
|
||||
);
|
||||
return Err(StatusCode::UNAUTHORIZED);
|
||||
@@ -75,7 +58,7 @@ pub async fn bearer_auth_middleware(
|
||||
tracing::warn!(
|
||||
method,
|
||||
path,
|
||||
client_ip = client_ip.as_deref(),
|
||||
%client_ip,
|
||||
"missing Authorization header on /mcp"
|
||||
);
|
||||
return Err(StatusCode::UNAUTHORIZED);
|
||||
@@ -93,7 +76,7 @@ pub async fn bearer_auth_middleware(
|
||||
tracing::warn!(
|
||||
method,
|
||||
path,
|
||||
client_ip = client_ip.as_deref(),
|
||||
%client_ip,
|
||||
key_prefix = %&raw_key.chars().take(12).collect::<String>(),
|
||||
key_len = raw_key.len(),
|
||||
"invalid api key (not found in database — e.g. revoked key or DB was reset; update MCP client Bearer token)"
|
||||
@@ -104,7 +87,7 @@ pub async fn bearer_auth_middleware(
|
||||
tracing::error!(
|
||||
method,
|
||||
path,
|
||||
client_ip = client_ip.as_deref(),
|
||||
%client_ip,
|
||||
error = %e,
|
||||
"api key validation error"
|
||||
);
|
||||
|
||||
crates/secrets-mcp/src/client_ip.rs (new file, 65 lines)
@@ -0,0 +1,65 @@
use axum::extract::Request;
use std::net::{IpAddr, SocketAddr};

/// Extract the client IP from a request.
///
/// When the `TRUST_PROXY` environment variable is set to `1` or `true`, the
/// `X-Forwarded-For` and `X-Real-IP` headers are consulted first, which is
/// appropriate when the service runs behind a trusted reverse proxy (e.g.
/// Caddy). Otherwise — or if those headers are absent/empty — the direct TCP
/// connection address from `ConnectInfo` is used.
///
/// **Important**: only enable `TRUST_PROXY` when the application is guaranteed
/// to receive traffic exclusively through a controlled reverse proxy. Enabling
/// it on a directly-exposed port allows clients to spoof their IP address and
/// bypass per-IP rate limiting.
pub fn extract_client_ip(req: &Request) -> String {
    if trust_proxy_enabled() {
        if let Some(ip) = forwarded_for_ip(req.headers()) {
            return ip;
        }
        if let Some(ip) = real_ip(req.headers()) {
            return ip;
        }
    }

    connect_info_ip(req).unwrap_or_else(|| "unknown".to_string())
}

fn trust_proxy_enabled() -> bool {
    static CACHE: std::sync::OnceLock<bool> = std::sync::OnceLock::new();
    *CACHE.get_or_init(|| {
        matches!(
            std::env::var("TRUST_PROXY").as_deref(),
            Ok("1") | Ok("true") | Ok("yes")
        )
    })
}

fn forwarded_for_ip(headers: &axum::http::HeaderMap) -> Option<String> {
    let value = headers.get("x-forwarded-for")?.to_str().ok()?;
    let first = value.split(',').next()?.trim();
    if first.is_empty() {
        None
    } else {
        validate_ip(first)
    }
}

fn real_ip(headers: &axum::http::HeaderMap) -> Option<String> {
    let value = headers.get("x-real-ip")?.to_str().ok()?;
    let ip = value.trim();
    if ip.is_empty() { None } else { validate_ip(ip) }
}

/// Validate that a string is a valid IP address.
/// Returns Some(ip) if valid, None otherwise.
fn validate_ip(s: &str) -> Option<String> {
    s.parse::<IpAddr>().ok().map(|ip| ip.to_string())
}

fn connect_info_ip(req: &Request) -> Option<String> {
    req.extensions()
        .get::<axum::extract::ConnectInfo<SocketAddr>>()
        .map(|c| c.0.ip().to_string())
}
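A small usage sketch (not part of the diff; the `demo` function, addresses, and header value are invented) showing how `extract_client_ip` is expected to resolve the caller, assuming it is invoked from within this module:

```rust
use axum::{
    body::Body,
    extract::{ConnectInfo, Request},
};
use std::net::SocketAddr;

fn demo() {
    // Simulate a request that arrived through a reverse proxy.
    let mut req = Request::new(Body::empty());
    req.headers_mut()
        .insert("x-forwarded-for", "203.0.113.10, 10.0.0.1".parse().unwrap());
    req.extensions_mut()
        .insert(ConnectInfo(SocketAddr::from(([10, 0, 0, 1], 443))));

    // With TRUST_PROXY=1 the first X-Forwarded-For hop ("203.0.113.10") wins;
    // otherwise the direct peer address ("10.0.0.1") from ConnectInfo is used.
    let ip = extract_client_ip(&req);
    println!("client ip: {ip}");
}
```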
@@ -23,11 +23,29 @@ pub fn app_error_to_mcp(err: &AppError) -> rmcp::ErrorData {
|
||||
"Entry not found. Use secrets_find to discover existing entries.",
|
||||
None,
|
||||
),
|
||||
AppError::NotFoundUser => rmcp::ErrorData::invalid_request("User not found.", None),
|
||||
AppError::NotFoundSecret => rmcp::ErrorData::invalid_request("Secret not found.", None),
|
||||
AppError::AuthenticationFailed => rmcp::ErrorData::invalid_request(
|
||||
"Authentication failed. Please check your API key or login credentials.",
|
||||
None,
|
||||
),
|
||||
AppError::Unauthorized => rmcp::ErrorData::invalid_request(
|
||||
"Unauthorized: you do not have permission to access this resource.",
|
||||
None,
|
||||
),
|
||||
AppError::Validation { message } => rmcp::ErrorData::invalid_request(message.clone(), None),
|
||||
AppError::ConcurrentModification => rmcp::ErrorData::invalid_request(
|
||||
"The entry was modified by another request. Please refresh and try again.",
|
||||
None,
|
||||
),
|
||||
AppError::DecryptionFailed => rmcp::ErrorData::invalid_request(
|
||||
"Decryption failed — the encryption key may be incorrect or does not match the data.",
|
||||
None,
|
||||
),
|
||||
AppError::EncryptionKeyNotSet => rmcp::ErrorData::invalid_request(
|
||||
"Encryption key not set. You must set a passphrase before using this feature.",
|
||||
None,
|
||||
),
|
||||
AppError::Internal(_) => rmcp::ErrorData::internal_error(
|
||||
"Request failed due to a server error. Check service logs if you need details.",
|
||||
None,
|
||||
|
||||
@@ -1,25 +1,28 @@
|
||||
use std::net::SocketAddr;
|
||||
use std::time::Instant;
|
||||
|
||||
use axum::{
|
||||
body::{Body, Bytes, to_bytes},
|
||||
extract::{ConnectInfo, Request},
|
||||
extract::Request,
|
||||
http::{
|
||||
HeaderMap, Method, StatusCode,
|
||||
header::{CONTENT_LENGTH, CONTENT_TYPE, USER_AGENT},
|
||||
header::{AUTHORIZATION, CONTENT_LENGTH, CONTENT_TYPE, USER_AGENT},
|
||||
},
|
||||
middleware::Next,
|
||||
response::{IntoResponse, Response},
|
||||
};
|
||||
|
||||
use crate::auth::AuthUser;
|
||||
|
||||
/// Axum middleware that logs structured info for every HTTP request.
|
||||
///
|
||||
/// All requests: method, path, status, latency_ms, client_ip, user_agent.
|
||||
/// POST /mcp requests: additionally parses JSON-RPC body for jsonrpc_method,
|
||||
/// tool_name, jsonrpc_id, mcp_session, batch_size.
|
||||
/// tool_name, jsonrpc_id, mcp_session, batch_size, tool_args (non-sensitive
|
||||
/// arguments only), plus masked auth_key / enc_key fingerprints and user_id
|
||||
/// for diagnosing header forwarding issues.
|
||||
///
|
||||
/// Sensitive headers (Authorization, X-Encryption-Key) and secret values
|
||||
/// are never logged.
|
||||
/// Sensitive headers (Authorization, X-Encryption-Key) are never logged in
|
||||
/// full — only short fingerprints are emitted.
|
||||
pub async fn request_logging_middleware(req: Request, next: Next) -> Response {
|
||||
let method = req.method().clone();
|
||||
let path = req.uri().path().to_string();
|
||||
@@ -33,6 +36,10 @@ pub async fn request_logging_middleware(req: Request, next: Next) -> Response {
|
||||
.and_then(|v| v.to_str().ok())
|
||||
.map(|s| s.to_string());
|
||||
|
||||
// Capture header fingerprints before consuming the request.
|
||||
let auth_key = mask_bearer(req.headers());
|
||||
let enc_key = mask_enc_key(req.headers());
|
||||
|
||||
let is_mcp_post = path.starts_with("/mcp") && method == Method::POST;
|
||||
let is_json = header_str(req.headers(), CONTENT_TYPE)
|
||||
.map(|ct| ct.contains("application/json"))
|
||||
@@ -46,6 +53,11 @@ pub async fn request_logging_middleware(req: Request, next: Next) -> Response {
|
||||
let cap = content_len.unwrap_or(0);
|
||||
if cap <= 512 * 1024 {
|
||||
let (parts, body) = req.into_parts();
|
||||
// user_id is available after auth middleware has run (injected into extensions).
|
||||
let user_id = parts
|
||||
.extensions
|
||||
.get::<AuthUser>()
|
||||
.map(|a| a.user_id.to_string());
|
||||
match to_bytes(body, 512 * 1024).await {
|
||||
Ok(bytes) => {
|
||||
let rpc = parse_jsonrpc_meta(&bytes);
|
||||
@@ -62,6 +74,9 @@ pub async fn request_logging_middleware(req: Request, next: Next) -> Response {
|
||||
ua.as_deref(),
|
||||
content_len,
|
||||
mcp_session.as_deref(),
|
||||
auth_key.as_deref(),
|
||||
&enc_key,
|
||||
user_id.as_deref(),
|
||||
&rpc,
|
||||
);
|
||||
return resp;
|
||||
@@ -78,6 +93,9 @@ pub async fn request_logging_middleware(req: Request, next: Next) -> Response {
|
||||
ua = ua.as_deref(),
|
||||
content_length = content_len,
|
||||
mcp_session = mcp_session.as_deref(),
|
||||
auth_key = auth_key.as_deref(),
|
||||
enc_key = enc_key.as_str(),
|
||||
user_id = user_id.as_deref(),
|
||||
"mcp request",
|
||||
);
|
||||
return (
|
||||
@@ -160,6 +178,9 @@ fn log_mcp_request(
|
||||
ua: Option<&str>,
|
||||
content_length: Option<u64>,
|
||||
mcp_session: Option<&str>,
|
||||
auth_key: Option<&str>,
|
||||
enc_key: &str,
|
||||
user_id: Option<&str>,
|
||||
rpc: &JsonRpcMeta,
|
||||
) {
|
||||
tracing::info!(
|
||||
@@ -175,18 +196,94 @@ fn log_mcp_request(
|
||||
tool = rpc.tool_name.as_deref(),
|
||||
jsonrpc_id = rpc.request_id.as_deref(),
|
||||
batch_size = rpc.batch_size,
|
||||
tool_args = rpc.tool_args.as_deref(),
|
||||
auth_key,
|
||||
enc_key,
|
||||
user_id,
|
||||
"mcp request",
|
||||
);
|
||||
}
|
||||
|
||||
// ── Sensitive header masking ──────────────────────────────────────────────────
|
||||
|
||||
/// Mask a Bearer token: emit only the first 12 characters followed by `…`.
|
||||
/// Returns `None` if the Authorization header is absent or not a Bearer token.
|
||||
/// Example: `sk_90c88844e4e5…`
|
||||
fn mask_bearer(headers: &HeaderMap) -> Option<String> {
|
||||
let val = headers.get(AUTHORIZATION)?.to_str().ok()?;
|
||||
let token = val.strip_prefix("Bearer ")?.trim();
|
||||
if token.is_empty() {
|
||||
return None;
|
||||
}
|
||||
if token.len() > 12 {
|
||||
Some(format!("{}…", &token[..12]))
|
||||
} else {
|
||||
Some(token.to_string())
|
||||
}
|
||||
}
|
||||
|
||||
/// Fingerprint the X-Encryption-Key header.
|
||||
///
|
||||
/// Emits first 4 chars, last 4 chars, and raw byte length, e.g. `146b…5516(64)`.
|
||||
/// Returns `"absent"` when the header is missing. Reveals enough to confirm
|
||||
/// which key arrived and whether it was truncated or padded, without revealing
|
||||
/// the full value.
|
||||
fn mask_enc_key(headers: &HeaderMap) -> String {
|
||||
match headers
|
||||
.get("x-encryption-key")
|
||||
.and_then(|v| v.to_str().ok())
|
||||
{
|
||||
Some(val) => {
|
||||
let raw_len = val.len();
|
||||
let t = val.trim();
|
||||
let len = t.len();
|
||||
if len >= 8 {
|
||||
let prefix = &t[..4];
|
||||
let suffix = &t[len - 4..];
|
||||
if raw_len != len {
|
||||
// Trailing/leading whitespace detected — extra diagnostic.
|
||||
format!("{prefix}…{suffix}({len}, raw={raw_len})")
|
||||
} else {
|
||||
format!("{prefix}…{suffix}({len})")
|
||||
}
|
||||
} else {
|
||||
format!("…({len})")
|
||||
}
|
||||
}
|
||||
None => "absent".to_string(),
|
||||
}
|
||||
}
|
||||
|
||||
// ── JSON-RPC body parsing ─────────────────────────────────────────────────────
|
||||
|
||||
/// Safe (non-sensitive) argument keys that may be included verbatim in logs.
|
||||
/// Keys NOT in this list (e.g. `secrets`, `secrets_obj`, `meta_obj`,
|
||||
/// `encryption_key`) are silently dropped.
|
||||
const SAFE_ARG_KEYS: &[&str] = &[
|
||||
"id",
|
||||
"name",
|
||||
"name_query",
|
||||
"folder",
|
||||
"type",
|
||||
"entry_type",
|
||||
"field",
|
||||
"query",
|
||||
"tags",
|
||||
"limit",
|
||||
"offset",
|
||||
"format",
|
||||
"dry_run",
|
||||
"prefix",
|
||||
];
|
||||
|
||||
#[derive(Debug, Default)]
|
||||
struct JsonRpcMeta {
|
||||
request_id: Option<String>,
|
||||
rpc_method: Option<String>,
|
||||
tool_name: Option<String>,
|
||||
batch_size: Option<usize>,
|
||||
/// Non-sensitive tool call arguments for diagnostic logging.
|
||||
tool_args: Option<String>,
|
||||
}
|
||||
|
||||
fn parse_jsonrpc_meta(bytes: &Bytes) -> JsonRpcMeta {
|
||||
@@ -216,12 +313,47 @@ fn parse_single(value: &serde_json::Value) -> JsonRpcMeta {
|
||||
.pointer("/params/name")
|
||||
.and_then(|v| v.as_str())
|
||||
.map(|s| s.to_string());
|
||||
let tool_args = extract_tool_args(value);
|
||||
|
||||
JsonRpcMeta {
|
||||
request_id,
|
||||
rpc_method,
|
||||
tool_name,
|
||||
batch_size: None,
|
||||
tool_args,
|
||||
}
|
||||
}
|
||||
|
||||
/// Extract a compact summary of non-sensitive tool arguments for logging.
|
||||
/// Only keys listed in `SAFE_ARG_KEYS` are included.
|
||||
fn extract_tool_args(value: &serde_json::Value) -> Option<String> {
|
||||
let args = value.pointer("/params/arguments")?;
|
||||
let obj = args.as_object()?;
|
||||
let pairs: Vec<String> = obj
|
||||
.iter()
|
||||
.filter(|(k, v)| SAFE_ARG_KEYS.contains(&k.as_str()) && !v.is_null())
|
||||
.map(|(k, v)| format!("{}={}", k, summarize_value(v)))
|
||||
.collect();
|
||||
if pairs.is_empty() {
|
||||
None
|
||||
} else {
|
||||
Some(pairs.join(" "))
|
||||
}
|
||||
}
|
||||
|
||||
/// Produce a short, log-safe representation of a JSON value.
|
||||
fn summarize_value(v: &serde_json::Value) -> String {
|
||||
match v {
|
||||
serde_json::Value::String(s) => {
|
||||
if s.len() > 64 {
|
||||
format!("\"{}…\"", &s[..64])
|
||||
} else {
|
||||
format!("\"{s}\"")
|
||||
}
|
||||
}
|
||||
serde_json::Value::Array(arr) => format!("[…{}]", arr.len()),
|
||||
serde_json::Value::Object(_) => "{…}".to_string(),
|
||||
other => other.to_string(),
|
||||
}
|
||||
}
|
||||
|
||||
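To make the whitelist concrete, here is a hypothetical `tools/call` body (not taken from the diff) and the summary it would produce, assuming the helpers above are in scope:

```rust
fn demo_tool_args() {
    // The secret-bearing key is included on purpose to show it never reaches the log.
    let body = serde_json::json!({
        "jsonrpc": "2.0",
        "id": 7,
        "method": "tools/call",
        "params": {
            "name": "secrets_find",
            "arguments": {
                "folder": "prod",
                "limit": 20,
                "secrets_obj": { "token": "super-secret" }
            }
        }
    });

    // Only SAFE_ARG_KEYS survive; `secrets_obj` is dropped because it is not whitelisted.
    let args = extract_tool_args(&body);
    assert_eq!(args.as_deref(), Some("folder=\"prod\" limit=20"));
}
```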
@@ -245,18 +377,5 @@ fn header_str(headers: &HeaderMap, name: impl axum::http::header::AsHeaderName)
|
||||
}
|
||||
|
||||
fn client_ip(req: &Request) -> Option<String> {
|
||||
if let Some(first) = req
|
||||
.headers()
|
||||
.get("x-forwarded-for")
|
||||
.and_then(|v| v.to_str().ok())
|
||||
.and_then(|s| s.split(',').next())
|
||||
{
|
||||
let s = first.trim();
|
||||
if !s.is_empty() {
|
||||
return Some(s.to_string());
|
||||
}
|
||||
}
|
||||
req.extensions()
|
||||
.get::<ConnectInfo<SocketAddr>>()
|
||||
.map(|c| c.ip().to_string())
|
||||
crate::client_ip::extract_client_ip(req).into()
|
||||
}
|
||||
|
||||
@@ -1,8 +1,11 @@
|
||||
mod auth;
|
||||
mod client_ip;
|
||||
mod error;
|
||||
mod logging;
|
||||
mod oauth;
|
||||
mod rate_limit;
|
||||
mod tools;
|
||||
mod validation;
|
||||
mod web;
|
||||
|
||||
use std::net::SocketAddr;
|
||||
@@ -153,10 +156,20 @@ async fn main() -> Result<()> {
|
||||
);
|
||||
|
||||
// ── Router ────────────────────────────────────────────────────────────────
|
||||
let cors = CorsLayer::new()
|
||||
.allow_origin(Any)
|
||||
.allow_methods(Any)
|
||||
.allow_headers(Any);
|
||||
// CORS: restrict origins in production, allow all in development
|
||||
let is_production = matches!(
|
||||
load_env_var("SECRETS_ENV")
|
||||
.as_deref()
|
||||
.map(|s| s.to_ascii_lowercase())
|
||||
.as_deref(),
|
||||
Some("prod" | "production")
|
||||
);
|
||||
|
||||
let cors = build_cors_layer(&base_url, is_production);
|
||||
|
||||
// Rate limiting
|
||||
let rate_limit_state = rate_limit::RateLimitState::new();
|
||||
let rate_limit_cleanup = rate_limit::spawn_cleanup_task(rate_limit_state.ip_limiter.clone());
|
||||
|
||||
let router = Router::new()
|
||||
.merge(web::web_router())
|
||||
@@ -168,8 +181,15 @@ async fn main() -> Result<()> {
|
||||
pool,
|
||||
auth::bearer_auth_middleware,
|
||||
))
|
||||
.layer(axum::middleware::from_fn_with_state(
|
||||
rate_limit_state.clone(),
|
||||
rate_limit::rate_limit_middleware,
|
||||
))
|
||||
.layer(session_layer)
|
||||
.layer(cors)
|
||||
.layer(tower_http::limit::RequestBodyLimitLayer::new(
|
||||
10 * 1024 * 1024,
|
||||
))
|
||||
.with_state(app_state);
|
||||
|
||||
// ── Start server ──────────────────────────────────────────────────────────
|
||||
@@ -192,12 +212,135 @@ async fn main() -> Result<()> {
|
||||
.context("server error")?;
|
||||
|
||||
session_cleanup.abort();
|
||||
rate_limit_cleanup.abort();
|
||||
Ok(())
|
||||
}
|
||||
|
||||
async fn shutdown_signal() {
|
||||
tokio::signal::ctrl_c()
|
||||
.await
|
||||
.expect("failed to install CTRL+C signal handler");
|
||||
let ctrl_c = tokio::signal::ctrl_c();
|
||||
|
||||
#[cfg(unix)]
|
||||
let terminate = async {
|
||||
tokio::signal::unix::signal(tokio::signal::unix::SignalKind::terminate())
|
||||
.expect("failed to install SIGTERM handler")
|
||||
.recv()
|
||||
.await;
|
||||
};
|
||||
|
||||
#[cfg(not(unix))]
|
||||
let terminate = std::future::pending::<()>();
|
||||
|
||||
tokio::select! {
|
||||
_ = ctrl_c => {},
|
||||
_ = terminate => {},
|
||||
}
|
||||
|
||||
tracing::info!("Shutting down gracefully...");
|
||||
}
|
||||
|
||||
/// Production CORS allowed headers.
|
||||
///
|
||||
/// When adding a new custom header to the MCP or Web API, this list must be
|
||||
/// updated accordingly — otherwise browsers will block the request during
|
||||
/// the CORS preflight check.
|
||||
fn production_allowed_headers() -> [axum::http::HeaderName; 5] {
|
||||
[
|
||||
axum::http::header::AUTHORIZATION,
|
||||
axum::http::header::CONTENT_TYPE,
|
||||
axum::http::HeaderName::from_static("x-encryption-key"),
|
||||
axum::http::HeaderName::from_static("mcp-session-id"),
|
||||
axum::http::HeaderName::from_static("x-mcp-session"),
|
||||
]
|
||||
}
|
||||
|
||||
/// Production CORS allowed methods.
|
||||
///
|
||||
/// Keep this list explicit because tower-http rejects
|
||||
/// `allow_credentials(true)` together with `allow_methods(Any)`.
|
||||
fn production_allowed_methods() -> [axum::http::Method; 5] {
|
||||
[
|
||||
axum::http::Method::GET,
|
||||
axum::http::Method::POST,
|
||||
axum::http::Method::PATCH,
|
||||
axum::http::Method::DELETE,
|
||||
axum::http::Method::OPTIONS,
|
||||
]
|
||||
}
|
||||
|
||||
/// Build the CORS layer for the application.
|
||||
///
|
||||
/// In production mode the origin is restricted to the BASE_URL origin
|
||||
/// (scheme://host:port, path stripped) and credentials are allowed.
|
||||
/// `allow_headers` and `allow_methods` use explicit whitelists to avoid the
|
||||
/// tower-http restriction on `allow_credentials(true)` + wildcards.
|
||||
///
|
||||
/// In development mode all origins, methods and headers are allowed.
|
||||
fn build_cors_layer(base_url: &str, is_production: bool) -> CorsLayer {
|
||||
if is_production {
|
||||
let allowed_origin = if let Ok(parsed) = base_url.parse::<url::Url>() {
|
||||
let origin = parsed.origin().ascii_serialization();
|
||||
origin
|
||||
.parse::<axum::http::HeaderValue>()
|
||||
.unwrap_or_else(|_| panic!("invalid BASE_URL origin: {}", origin))
|
||||
} else {
|
||||
base_url
|
||||
.parse::<axum::http::HeaderValue>()
|
||||
.unwrap_or_else(|_| panic!("invalid BASE_URL: {}", base_url))
|
||||
};
|
||||
CorsLayer::new()
|
||||
.allow_origin(allowed_origin)
|
||||
.allow_methods(production_allowed_methods())
|
||||
.allow_headers(production_allowed_headers())
|
||||
.allow_credentials(true)
|
||||
} else {
|
||||
CorsLayer::new()
|
||||
.allow_origin(Any)
|
||||
.allow_methods(Any)
|
||||
.allow_headers(Any)
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
|
||||
#[test]
|
||||
fn production_cors_does_not_panic() {
|
||||
let layer = build_cors_layer("https://secrets.example.com/app", true);
|
||||
let _ = layer;
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn production_cors_headers_include_all_required() {
|
||||
let headers = production_allowed_headers();
|
||||
let names: Vec<&str> = headers.iter().map(|h| h.as_str()).collect();
|
||||
assert!(names.contains(&"authorization"));
|
||||
assert!(names.contains(&"content-type"));
|
||||
assert!(names.contains(&"x-encryption-key"));
|
||||
assert!(names.contains(&"mcp-session-id"));
|
||||
assert!(names.contains(&"x-mcp-session"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn production_cors_methods_include_all_required() {
|
||||
let methods = production_allowed_methods();
|
||||
assert!(methods.contains(&axum::http::Method::GET));
|
||||
assert!(methods.contains(&axum::http::Method::POST));
|
||||
assert!(methods.contains(&axum::http::Method::PATCH));
|
||||
assert!(methods.contains(&axum::http::Method::DELETE));
|
||||
assert!(methods.contains(&axum::http::Method::OPTIONS));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn production_cors_normalizes_base_url_with_path() {
|
||||
let url = url::Url::parse("https://secrets.example.com/secrets/app").unwrap();
|
||||
let origin = url.origin().ascii_serialization();
|
||||
assert_eq!(origin, "https://secrets.example.com");
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn development_cors_allows_everything() {
|
||||
let layer = build_cors_layer("http://localhost:9315", false);
|
||||
let _ = layer;
|
||||
}
|
||||
}
|
||||
|
||||
crates/secrets-mcp/src/rate_limit.rs (new file, 160 lines)
@@ -0,0 +1,160 @@
|
||||
use std::num::NonZeroU32;
|
||||
use std::sync::Arc;
|
||||
use std::time::Duration;
|
||||
|
||||
use axum::{
|
||||
extract::{Request, State},
|
||||
http::{HeaderMap, HeaderValue, StatusCode},
|
||||
middleware::Next,
|
||||
response::{IntoResponse, Response},
|
||||
};
|
||||
use governor::{
|
||||
Quota, RateLimiter,
|
||||
clock::{Clock, DefaultClock},
|
||||
state::{InMemoryState, NotKeyed, keyed::DashMapStateStore},
|
||||
};
|
||||
use serde_json::json;
|
||||
|
||||
use crate::client_ip;
|
||||
|
||||
/// Per-IP rate limiter (keyed by client IP string)
|
||||
type IpRateLimiter = RateLimiter<String, DashMapStateStore<String>, DefaultClock>;
|
||||
|
||||
/// Global rate limiter (not keyed)
|
||||
type GlobalRateLimiter = RateLimiter<NotKeyed, InMemoryState, DefaultClock>;
|
||||
|
||||
/// Parse a u32 env value into NonZeroU32, logging a warning and falling back
|
||||
/// to the default if the value is zero.
|
||||
fn nz_or_log(value: u32, default: u32, name: &str) -> NonZeroU32 {
|
||||
NonZeroU32::new(value).unwrap_or_else(|| {
|
||||
tracing::warn!(
|
||||
configured = value,
|
||||
default,
|
||||
"{name} must be non-zero, using default"
|
||||
);
|
||||
NonZeroU32::new(default).unwrap()
|
||||
})
|
||||
}
|
||||
|
||||
#[derive(Clone)]
|
||||
pub struct RateLimitState {
|
||||
pub ip_limiter: Arc<IpRateLimiter>,
|
||||
pub global_limiter: Arc<GlobalRateLimiter>,
|
||||
}
|
||||
|
||||
impl RateLimitState {
|
||||
/// Create a new RateLimitState with default limits.
|
||||
///
|
||||
/// Default limits (can be overridden via environment variables):
|
||||
/// - Global: 100 req/s, burst 200
|
||||
/// - Per-IP: 20 req/s, burst 40
|
||||
pub fn new() -> Self {
|
||||
let global_rate = std::env::var("RATE_LIMIT_GLOBAL_PER_SECOND")
|
||||
.ok()
|
||||
.and_then(|v| v.parse::<u32>().ok())
|
||||
.unwrap_or(100);
|
||||
|
||||
let global_burst = std::env::var("RATE_LIMIT_GLOBAL_BURST")
|
||||
.ok()
|
||||
.and_then(|v| v.parse::<u32>().ok())
|
||||
.unwrap_or(200);
|
||||
|
||||
let ip_rate = std::env::var("RATE_LIMIT_IP_PER_SECOND")
|
||||
.ok()
|
||||
.and_then(|v| v.parse::<u32>().ok())
|
||||
.unwrap_or(20);
|
||||
|
||||
let ip_burst = std::env::var("RATE_LIMIT_IP_BURST")
|
||||
.ok()
|
||||
.and_then(|v| v.parse::<u32>().ok())
|
||||
.unwrap_or(40);
|
||||
|
||||
let global_rate_nz = nz_or_log(global_rate, 100, "RATE_LIMIT_GLOBAL_PER_SECOND");
|
||||
let global_burst_nz = nz_or_log(global_burst, 200, "RATE_LIMIT_GLOBAL_BURST");
|
||||
let ip_rate_nz = nz_or_log(ip_rate, 20, "RATE_LIMIT_IP_PER_SECOND");
|
||||
let ip_burst_nz = nz_or_log(ip_burst, 40, "RATE_LIMIT_IP_BURST");
|
||||
|
||||
let global_quota = Quota::per_second(global_rate_nz).allow_burst(global_burst_nz);
|
||||
let ip_quota = Quota::per_second(ip_rate_nz).allow_burst(ip_burst_nz);
|
||||
|
||||
tracing::info!(
|
||||
global_rate = global_rate_nz.get(),
|
||||
global_burst = global_burst_nz.get(),
|
||||
ip_rate = ip_rate_nz.get(),
|
||||
ip_burst = ip_burst_nz.get(),
|
||||
"rate limiter initialized"
|
||||
);
|
||||
|
||||
Self {
|
||||
global_limiter: Arc::new(RateLimiter::direct(global_quota)),
|
||||
ip_limiter: Arc::new(RateLimiter::dashmap(ip_quota)),
|
||||
}
|
||||
}
|
||||
}
|
||||
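For reference, a minimal sketch (not part of the diff) of how a governor quota with a burst allowance behaves; the numbers mirror the per-IP defaults documented above:

```rust
use governor::{Quota, RateLimiter};
use std::num::NonZeroU32;

fn demo_quota() {
    // 20 requests/second refill rate, with up to 40 tokens available at once.
    let quota = Quota::per_second(NonZeroU32::new(20).unwrap())
        .allow_burst(NonZeroU32::new(40).unwrap());
    let limiter = RateLimiter::direct(quota);

    // Roughly the burst size is admitted immediately; the remainder must wait.
    let admitted = (0..50).filter(|_| limiter.check().is_ok()).count();
    assert!(admitted <= 40);
}
```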
|
||||
/// Rate limiting middleware function.
|
||||
///
|
||||
/// Checks both global and per-IP rate limits before allowing the request through.
|
||||
/// Returns 429 Too Many Requests if either limit is exceeded.
|
||||
pub async fn rate_limit_middleware(
|
||||
State(rl): State<RateLimitState>,
|
||||
req: Request,
|
||||
next: Next,
|
||||
) -> Result<Response, Response> {
|
||||
// Check global rate limit first
|
||||
if let Err(negative) = rl.global_limiter.check() {
|
||||
let retry_after = negative.wait_time_from(DefaultClock::default().now());
|
||||
tracing::warn!(
|
||||
retry_after_secs = retry_after.as_secs(),
|
||||
"global rate limit exceeded"
|
||||
);
|
||||
return Err(too_many_requests_response(Some(retry_after)));
|
||||
}
|
||||
|
||||
// Check per-IP rate limit
|
||||
let key = client_ip::extract_client_ip(&req);
|
||||
if let Err(negative) = rl.ip_limiter.check_key(&key) {
|
||||
let retry_after = negative.wait_time_from(DefaultClock::default().now());
|
||||
tracing::warn!(
|
||||
client_ip = %key,
|
||||
retry_after_secs = retry_after.as_secs(),
|
||||
"per-IP rate limit exceeded"
|
||||
);
|
||||
return Err(too_many_requests_response(Some(retry_after)));
|
||||
}
|
||||
|
||||
Ok(next.run(req).await)
|
||||
}
|
||||
|
||||
/// Start a background task to clean up expired rate limiter entries.
|
||||
///
|
||||
/// This should be called once during application startup.
|
||||
/// The task runs every 60 seconds and will be aborted on shutdown.
|
||||
pub fn spawn_cleanup_task(ip_limiter: Arc<IpRateLimiter>) -> tokio::task::JoinHandle<()> {
|
||||
tokio::spawn(async move {
|
||||
let mut interval = tokio::time::interval(Duration::from_secs(60));
|
||||
loop {
|
||||
interval.tick().await;
|
||||
ip_limiter.retain_recent();
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
/// Create a 429 Too Many Requests response.
|
||||
fn too_many_requests_response(retry_after: Option<Duration>) -> Response {
|
||||
let mut headers = HeaderMap::new();
|
||||
headers.insert("Content-Type", HeaderValue::from_static("application/json"));
|
||||
|
||||
if let Some(duration) = retry_after {
|
||||
let secs = duration.as_secs().max(1);
|
||||
if let Ok(value) = HeaderValue::from_str(&secs.to_string()) {
|
||||
headers.insert("Retry-After", value);
|
||||
}
|
||||
}
|
||||
|
||||
let body = json!({
|
||||
"error": "Too many requests, please try again later"
|
||||
});
|
||||
|
||||
(StatusCode::TOO_MANY_REQUESTS, headers, body.to_string()).into_response()
|
||||
}
|
||||
(The diff of one other file is omitted here: the viewer suppressed it as too large.)
crates/secrets-mcp/src/validation.rs (new file, 149 lines)
@@ -0,0 +1,149 @@
|
||||
/// Validation constants for input field lengths.
|
||||
pub const MAX_NAME_LENGTH: usize = 256;
|
||||
pub const MAX_FOLDER_LENGTH: usize = 128;
|
||||
pub const MAX_ENTRY_TYPE_LENGTH: usize = 64;
|
||||
pub const MAX_NOTES_LENGTH: usize = 10000;
|
||||
pub const MAX_TAG_LENGTH: usize = 64;
|
||||
pub const MAX_TAG_COUNT: usize = 50;
|
||||
pub const MAX_META_KEY_LENGTH: usize = 128;
|
||||
pub const MAX_META_VALUE_LENGTH: usize = 4096;
|
||||
pub const MAX_META_COUNT: usize = 100;
|
||||
|
||||
/// Validate input field lengths for MCP tools.
|
||||
///
|
||||
/// Returns an error if any field exceeds its maximum length.
|
||||
pub fn validate_input_lengths(
|
||||
name: &str,
|
||||
folder: Option<&str>,
|
||||
entry_type: Option<&str>,
|
||||
notes: Option<&str>,
|
||||
) -> Result<(), rmcp::ErrorData> {
|
||||
if name.chars().count() > MAX_NAME_LENGTH {
|
||||
return Err(rmcp::ErrorData::invalid_params(
|
||||
format!("name must be at most {} characters", MAX_NAME_LENGTH),
|
||||
None,
|
||||
));
|
||||
}
|
||||
if let Some(folder) = folder
|
||||
&& folder.chars().count() > MAX_FOLDER_LENGTH
|
||||
{
|
||||
return Err(rmcp::ErrorData::invalid_params(
|
||||
format!("folder must be at most {} characters", MAX_FOLDER_LENGTH),
|
||||
None,
|
||||
));
|
||||
}
|
||||
if let Some(entry_type) = entry_type
|
||||
&& entry_type.chars().count() > MAX_ENTRY_TYPE_LENGTH
|
||||
{
|
||||
return Err(rmcp::ErrorData::invalid_params(
|
||||
format!("type must be at most {} characters", MAX_ENTRY_TYPE_LENGTH),
|
||||
None,
|
||||
));
|
||||
}
|
||||
if let Some(notes) = notes
|
||||
&& notes.chars().count() > MAX_NOTES_LENGTH
|
||||
{
|
||||
return Err(rmcp::ErrorData::invalid_params(
|
||||
format!("notes must be at most {} characters", MAX_NOTES_LENGTH),
|
||||
None,
|
||||
));
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Validate the tags list.
|
||||
///
|
||||
/// Checks total count and per-tag character length.
|
||||
pub fn validate_tags(tags: &[String]) -> Result<(), rmcp::ErrorData> {
|
||||
if tags.len() > MAX_TAG_COUNT {
|
||||
return Err(rmcp::ErrorData::invalid_params(
|
||||
format!("at most {} tags are allowed", MAX_TAG_COUNT),
|
||||
None,
|
||||
));
|
||||
}
|
||||
for tag in tags {
|
||||
if tag.chars().count() > MAX_TAG_LENGTH {
|
||||
return Err(rmcp::ErrorData::invalid_params(
|
||||
format!(
|
||||
"tag '{}' exceeds the maximum length of {} characters",
|
||||
tag, MAX_TAG_LENGTH
|
||||
),
|
||||
None,
|
||||
));
|
||||
}
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Validate metadata KV strings (key=value / key:=json format).
|
||||
///
|
||||
/// Checks total count and per-key/per-value character lengths.
|
||||
/// This is a best-effort check on the raw KV strings before parsing;
|
||||
/// keys containing `:` path separators are checked as a whole.
|
||||
pub fn validate_meta_entries(entries: &[String]) -> Result<(), rmcp::ErrorData> {
|
||||
if entries.len() > MAX_META_COUNT {
|
||||
return Err(rmcp::ErrorData::invalid_params(
|
||||
format!("at most {} metadata entries are allowed", MAX_META_COUNT),
|
||||
None,
|
||||
));
|
||||
}
|
||||
for entry in entries {
|
||||
// key:=json — check both key and JSON value length
|
||||
if let Some((key, value)) = entry.split_once(":=") {
|
||||
if key.chars().count() > MAX_META_KEY_LENGTH {
|
||||
return Err(rmcp::ErrorData::invalid_params(
|
||||
format!(
|
||||
"metadata key '{}' exceeds the maximum length of {} characters",
|
||||
key, MAX_META_KEY_LENGTH
|
||||
),
|
||||
None,
|
||||
));
|
||||
}
|
||||
if value.chars().count() > MAX_META_VALUE_LENGTH {
|
||||
return Err(rmcp::ErrorData::invalid_params(
|
||||
format!(
|
||||
"metadata JSON value for key '{}' exceeds the maximum length of {} characters",
|
||||
key, MAX_META_VALUE_LENGTH
|
||||
),
|
||||
None,
|
||||
));
|
||||
}
|
||||
continue;
|
||||
}
|
||||
// key=value or key@path
|
||||
if let Some((key, value)) = entry.split_once('=') {
|
||||
if key.chars().count() > MAX_META_KEY_LENGTH {
|
||||
return Err(rmcp::ErrorData::invalid_params(
|
||||
format!(
|
||||
"metadata key '{}' exceeds the maximum length of {} characters",
|
||||
key, MAX_META_KEY_LENGTH
|
||||
),
|
||||
None,
|
||||
));
|
||||
}
|
||||
if value.chars().count() > MAX_META_VALUE_LENGTH {
|
||||
return Err(rmcp::ErrorData::invalid_params(
|
||||
format!(
|
||||
"metadata value for key '{}' exceeds the maximum length of {} characters",
|
||||
key, MAX_META_VALUE_LENGTH
|
||||
),
|
||||
None,
|
||||
));
|
||||
}
|
||||
} else {
|
||||
// Fallback: entry without = or := — check total length
|
||||
let max_total = MAX_META_KEY_LENGTH + MAX_META_VALUE_LENGTH;
|
||||
if entry.chars().count() > max_total {
|
||||
let preview = entry.chars().take(50).collect::<String>();
|
||||
return Err(rmcp::ErrorData::invalid_params(
|
||||
format!(
|
||||
"metadata entry '{}' exceeds the maximum length of {} characters",
|
||||
preview, max_total
|
||||
),
|
||||
None,
|
||||
));
|
||||
}
|
||||
}
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
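A short sketch (not in the diff; the sample entries are invented) of the two metadata formats this validator accepts, assuming it runs in the same module:

```rust
fn demo_meta_validation() {
    let ok = vec![
        "region=us-east-1".to_string(),    // key=value
        "ports:=[8080, 9315]".to_string(), // key:=json
    ];
    assert!(validate_meta_entries(&ok).is_ok());

    // A key longer than MAX_META_KEY_LENGTH (128 chars) is rejected.
    let too_long = vec![format!("{}=x", "k".repeat(200))];
    assert!(validate_meta_entries(&too_long).is_err());
}
```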
@@ -1,6 +1,6 @@
|
||||
use askama::Template;
|
||||
use chrono::SecondsFormat;
|
||||
use std::net::SocketAddr;
|
||||
use std::net::{IpAddr, SocketAddr};
|
||||
|
||||
use axum::{
|
||||
Json, Router,
|
||||
@@ -20,9 +20,9 @@ use secrets_core::crypto::hex;
|
||||
use secrets_core::error::AppError;
|
||||
use secrets_core::service::{
|
||||
api_key::{ensure_api_key, regenerate_api_key},
|
||||
audit_log::list_for_user,
|
||||
audit_log::{count_for_user, list_for_user},
|
||||
delete::delete_by_id,
|
||||
search::{SearchParams, fetch_secret_schemas, ilike_pattern, list_entries},
|
||||
search::{SearchParams, count_entries, fetch_secret_schemas, ilike_pattern, list_entries},
|
||||
update::{UpdateEntryFieldsByIdParams, update_fields_by_id},
|
||||
user::{
|
||||
OAuthProfile, bind_oauth_account, find_or_create_user, get_user_by_id,
|
||||
@@ -72,6 +72,9 @@ struct AuditPageTemplate {
|
||||
user_name: String,
|
||||
user_email: String,
|
||||
entries: Vec<AuditEntryView>,
|
||||
current_page: u32,
|
||||
total_pages: u32,
|
||||
total_count: i64,
|
||||
version: &'static str,
|
||||
}
|
||||
|
||||
@@ -95,6 +98,9 @@ struct EntriesPageTemplate {
|
||||
filter_folder: String,
|
||||
filter_name: String,
|
||||
filter_type: String,
|
||||
current_page: u32,
|
||||
total_pages: u32,
|
||||
total_count: i64,
|
||||
version: &'static str,
|
||||
}
|
||||
|
||||
@@ -131,7 +137,8 @@ struct FolderTabView {
|
||||
}
|
||||
|
||||
/// Cap for HTML list (avoids loading unbounded rows into memory).
|
||||
const ENTRIES_PAGE_LIMIT: u32 = 5_000;
|
||||
const ENTRIES_PAGE_LIMIT: u32 = 50;
|
||||
const AUDIT_PAGE_LIMIT: i64 = 10;
|
||||
|
||||
#[derive(Deserialize)]
|
||||
struct EntriesQuery {
|
||||
@@ -140,6 +147,7 @@ struct EntriesQuery {
|
||||
/// URL query key is `type` (maps to DB column `entries.type`).
|
||||
#[serde(rename = "type")]
|
||||
entry_type: Option<String>,
|
||||
page: Option<u32>,
|
||||
}
|
||||
|
||||
// ── App state helpers ─────────────────────────────────────────────────────────
|
||||
@@ -168,14 +176,33 @@ async fn current_user_id(session: &Session) -> Option<Uuid> {
|
||||
}
|
||||
|
||||
fn request_client_ip(headers: &HeaderMap, connect_info: ConnectInfo<SocketAddr>) -> Option<String> {
|
||||
if let Some(first) = headers
|
||||
.get("x-forwarded-for")
|
||||
.and_then(|v| v.to_str().ok())
|
||||
.and_then(|s| s.split(',').next())
|
||||
{
|
||||
let value = first.trim();
|
||||
if !value.is_empty() {
|
||||
return Some(value.to_string());
|
||||
let trust_proxy = std::env::var("TRUST_PROXY")
|
||||
.as_deref()
|
||||
.is_ok_and(|v| matches!(v, "1" | "true" | "yes"));
|
||||
request_client_ip_with_trust_proxy(headers, connect_info, trust_proxy)
|
||||
}
|
||||
|
||||
fn request_client_ip_with_trust_proxy(
|
||||
headers: &HeaderMap,
|
||||
connect_info: ConnectInfo<SocketAddr>,
|
||||
trust_proxy: bool,
|
||||
) -> Option<String> {
|
||||
if trust_proxy {
|
||||
if let Some(first) = headers
|
||||
.get("x-forwarded-for")
|
||||
.and_then(|v| v.to_str().ok())
|
||||
.and_then(|s| s.split(',').next())
|
||||
{
|
||||
let value = first.trim();
|
||||
if let Ok(ip) = value.parse::<IpAddr>() {
|
||||
return Some(ip.to_string());
|
||||
}
|
||||
}
|
||||
if let Some(value) = headers.get("x-real-ip").and_then(|v| v.to_str().ok()) {
|
||||
let value = value.trim();
|
||||
if let Ok(ip) = value.parse::<IpAddr>() {
|
||||
return Some(ip.to_string());
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -191,6 +218,15 @@ fn request_user_agent(headers: &HeaderMap) -> Option<String> {
|
||||
.map(ToOwned::to_owned)
|
||||
}
|
||||
|
||||
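/// Clamp `page` into `[1, total_pages]` and compute the row offset.
/// Returns `(current_page, total_pages, offset)`; `total_pages` is at least 1
/// even when `total_count` is 0, and oversized values saturate instead of overflowing.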
fn paginate(page: u32, total_count: i64, page_size: u32) -> (u32, u32, u32) {
|
||||
let page_size = page_size.max(1);
|
||||
let safe_total_count = u32::try_from(total_count.max(0)).unwrap_or(u32::MAX);
|
||||
let total_pages = safe_total_count.div_ceil(page_size).max(1);
|
||||
let current_page = page.max(1).min(total_pages);
|
||||
let offset = (current_page - 1).saturating_mul(page_size);
|
||||
(current_page, total_pages, offset)
|
||||
}
|
||||
|
||||
// ── Routes ────────────────────────────────────────────────────────────────────
|
||||
|
||||
pub fn web_router() -> Router<AppState> {
|
||||
@@ -217,10 +253,6 @@ pub fn web_router() -> Router<AppState> {
|
||||
.route("/entries", get(entries_page))
|
||||
.route("/audit", get(audit_page))
|
||||
.route("/account/bind/google", get(account_bind_google))
|
||||
.route(
|
||||
"/account/bind/google/callback",
|
||||
get(account_bind_google_callback),
|
||||
)
|
||||
.route("/account/unbind/{provider}", post(account_unbind))
|
||||
.route("/api/key-salt", get(api_key_salt))
|
||||
.route("/api/key-setup", post(api_key_setup))
|
||||
@@ -596,7 +628,8 @@ async fn entries_page(
|
||||
.map(|s| s.trim())
|
||||
.filter(|s| !s.is_empty())
|
||||
.map(|s| s.to_string());
|
||||
let params = SearchParams {
|
||||
let page = q.page.unwrap_or(1).max(1);
|
||||
let count_params = SearchParams {
|
||||
folder: folder_filter.as_deref(),
|
||||
entry_type: type_filter.as_deref(),
|
||||
name: None,
|
||||
@@ -609,7 +642,18 @@ async fn entries_page(
|
||||
user_id: Some(user_id),
|
||||
};
|
||||
|
||||
let rows = list_entries(&state.pool, params).await.map_err(|e| {
|
||||
let total_count = count_entries(&state.pool, &count_params)
|
||||
.await
|
||||
.inspect_err(|e| tracing::warn!(error = %e, "count_entries failed for web entries page"))
|
||||
.unwrap_or(0);
|
||||
let (current_page, total_pages, offset) = paginate(page, total_count, ENTRIES_PAGE_LIMIT);
|
||||
|
||||
let list_params = SearchParams {
|
||||
offset,
|
||||
..count_params
|
||||
};
|
||||
|
||||
let rows = list_entries(&state.pool, list_params).await.map_err(|e| {
|
||||
tracing::error!(error = %e, "failed to load entries list for web");
|
||||
StatusCode::INTERNAL_SERVER_ERROR
|
||||
})?;
|
||||
@@ -681,7 +725,12 @@ async fn entries_page(
|
||||
type_options.sort_unstable();
|
||||
}
|
||||
|
||||
fn entries_href(folder: Option<&str>, entry_type: Option<&str>, name: Option<&str>) -> String {
|
||||
fn entries_href(
|
||||
folder: Option<&str>,
|
||||
entry_type: Option<&str>,
|
||||
name: Option<&str>,
|
||||
page: Option<u32>,
|
||||
) -> String {
|
||||
let mut pairs: Vec<String> = Vec::new();
|
||||
if let Some(f) = folder
|
||||
&& !f.is_empty()
|
||||
@@ -698,6 +747,9 @@ async fn entries_page(
|
||||
{
|
||||
pairs.push(format!("name={}", urlencoding::encode(n)));
|
||||
}
|
||||
if let Some(p) = page {
|
||||
pairs.push(format!("page={}", p));
|
||||
}
|
||||
if pairs.is_empty() {
|
||||
"/entries".to_string()
|
||||
} else {
|
||||
@@ -710,13 +762,23 @@ async fn entries_page(
|
||||
folder_tabs.push(FolderTabView {
|
||||
name: "全部".to_string(),
|
||||
count: all_count,
|
||||
href: entries_href(None, type_filter.as_deref(), name_filter.as_deref()),
|
||||
href: entries_href(
|
||||
None,
|
||||
type_filter.as_deref(),
|
||||
name_filter.as_deref(),
|
||||
Some(1),
|
||||
),
|
||||
active: folder_filter.is_none(),
|
||||
});
|
||||
for r in folder_rows {
|
||||
let name = r.folder;
|
||||
folder_tabs.push(FolderTabView {
|
||||
href: entries_href(Some(&name), type_filter.as_deref(), name_filter.as_deref()),
|
||||
href: entries_href(
|
||||
Some(&name),
|
||||
type_filter.as_deref(),
|
||||
name_filter.as_deref(),
|
||||
Some(1),
|
||||
),
|
||||
active: folder_filter.as_deref() == Some(name.as_str()),
|
||||
name,
|
||||
count: r.count,
|
||||
@@ -773,15 +835,24 @@ async fn entries_page(
|
||||
filter_folder: folder_filter.unwrap_or_default(),
|
||||
filter_name: name_filter.unwrap_or_default(),
|
||||
filter_type: type_filter.unwrap_or_default(),
|
||||
current_page,
|
||||
total_pages,
|
||||
total_count,
|
||||
version: env!("CARGO_PKG_VERSION"),
|
||||
};
|
||||
|
||||
render_template(tmpl)
|
||||
}
|
||||
|
||||
#[derive(Deserialize)]
|
||||
struct AuditQuery {
|
||||
page: Option<u32>,
|
||||
}
|
||||
|
||||
async fn audit_page(
|
||||
State(state): State<AppState>,
|
||||
session: Session,
|
||||
Query(aq): Query<AuditQuery>,
|
||||
) -> Result<Response, StatusCode> {
|
||||
let Some(user_id) = current_user_id(&session).await else {
|
||||
return Ok(Redirect::to("/login").into_response());
|
||||
@@ -795,7 +866,17 @@ async fn audit_page(
None => return Ok(Redirect::to("/login").into_response()),
};

let rows = list_for_user(&state.pool, user_id, 100)
let page = aq.page.unwrap_or(1).max(1);

let total_count = count_for_user(&state.pool, user_id).await.map_err(|e| {
tracing::error!(error = %e, "failed to count audit log for user");
StatusCode::INTERNAL_SERVER_ERROR
})?;

let (current_page, total_pages, offset) = paginate(page, total_count, AUDIT_PAGE_LIMIT as u32);
let actual_offset = i64::from(offset);

let rows = list_for_user(&state.pool, user_id, AUDIT_PAGE_LIMIT, actual_offset)
.await
.map_err(|e| {
tracing::error!(error = %e, "failed to load audit log for user");
@@ -816,6 +897,9 @@ async fn audit_page(
user_name: user.name.clone(),
user_email: user.email.clone().unwrap_or_default(),
entries,
current_page,
total_pages,
total_count,
version: env!("CARGO_PKG_VERSION"),
};

@@ -840,14 +924,9 @@ async fn account_bind_google(
StatusCode::INTERNAL_SERVER_ERROR
})?;

let redirect_uri = format!("{}/account/bind/google/callback", state.base_url);
let mut cfg = state
.google_config
.clone()
.ok_or(StatusCode::SERVICE_UNAVAILABLE)?;
cfg.redirect_uri = redirect_uri;
let st = random_state();
if let Err(e) = session.insert(SESSION_OAUTH_STATE, &st).await {
let config = google_cfg(&state).ok_or(StatusCode::SERVICE_UNAVAILABLE)?;
let oauth_state = random_state();
if let Err(e) = session.insert(SESSION_OAUTH_STATE, &oauth_state).await {
tracing::error!(error = %e, "failed to insert oauth_state for account bind flow");
if let Err(rm) = session.remove::<bool>(SESSION_OAUTH_BIND_MODE).await {
tracing::warn!(error = %rm, "failed to roll back oauth_bind_mode after oauth_state insert failure");
@@ -855,34 +934,8 @@ async fn account_bind_google(
return Err(StatusCode::INTERNAL_SERVER_ERROR);
}

Ok(Redirect::to(&google_auth_url(&cfg, &st)).into_response())
}

async fn account_bind_google_callback(
State(state): State<AppState>,
connect_info: ConnectInfo<SocketAddr>,
headers: HeaderMap,
session: Session,
Query(params): Query<OAuthCallbackQuery>,
) -> Result<Response, StatusCode> {
let client_ip = request_client_ip(&headers, connect_info);
let user_agent = request_user_agent(&headers);
handle_oauth_callback(
&state,
&session,
params,
"google",
client_ip.as_deref(),
user_agent.as_deref(),
|s, cfg, code| {
Box::pin(crate::oauth::google::exchange_code(
&s.http_client,
cfg,
code,
))
},
)
.await
let url = google_auth_url(config, &oauth_state);
Ok(Redirect::to(&url).into_response())
}

async fn account_unbind(
@@ -1134,10 +1187,16 @@ fn map_app_error(err: &AppError, lang: UiLang) -> EntryApiError {
StatusCode::CONFLICT,
Json(json!({ "error": err.to_string() })),
),
AppError::NotFoundEntry => (
AppError::NotFoundEntry | AppError::NotFoundUser | AppError::NotFoundSecret => (
StatusCode::NOT_FOUND,
Json(
json!({ "error": tr(lang, "条目不存在或无权访问", "條目不存在或無權存取", "Entry not found or no access") }),
json!({ "error": tr(lang, "资源不存在或无权访问", "資源不存在或無權存取", "Resource not found or no access") }),
),
),
AppError::AuthenticationFailed | AppError::Unauthorized => (
StatusCode::UNAUTHORIZED,
Json(
json!({ "error": tr(lang, "认证失败或无权访问", "認證失敗或無權存取", "Authentication failed or unauthorized") }),
),
),
AppError::Validation { message } => {
@@ -1149,6 +1208,18 @@ fn map_app_error(err: &AppError, lang: UiLang) -> EntryApiError {
json!({ "error": tr(lang, "条目已被修改,请刷新后重试", "條目已被修改,請重新整理後重試", "Entry was modified, please refresh and try again") }),
),
),
AppError::DecryptionFailed => (
StatusCode::BAD_REQUEST,
Json(
json!({ "error": tr(lang, "解密失败,请检查密码短语", "解密失敗,請檢查密碼短語", "Decryption failed — please check your passphrase") }),
),
),
AppError::EncryptionKeyNotSet => (
StatusCode::BAD_REQUEST,
Json(
json!({ "error": tr(lang, "请先设置密码短语后再使用此功能", "請先設定密碼短語再使用此功能", "Please set a passphrase before using this feature") }),
),
),
AppError::Internal(_) => {
tracing::error!(error = %err, "internal error in entry mutation");
(
@@ -1655,6 +1726,34 @@ fn format_audit_target(folder: &str, entry_type: &str, name: &str) -> String {
mod tests {
use super::*;

#[test]
fn request_client_ip_ignores_forwarded_headers_without_trusted_proxy() {
let mut headers = HeaderMap::new();
headers.insert("x-forwarded-for", "203.0.113.10".parse().unwrap());

let ip = request_client_ip_with_trust_proxy(
&headers,
ConnectInfo(SocketAddr::from(([127, 0, 0, 1], 9315))),
false,
);

assert_eq!(ip.as_deref(), Some("127.0.0.1"));
}

#[test]
fn request_client_ip_uses_valid_forwarded_header_with_trusted_proxy() {
let mut headers = HeaderMap::new();
headers.insert("x-forwarded-for", "203.0.113.10, 10.0.0.1".parse().unwrap());

let ip = request_client_ip_with_trust_proxy(
&headers,
ConnectInfo(SocketAddr::from(([127, 0, 0, 1], 9315))),
true,
);

assert_eq!(ip.as_deref(), Some("203.0.113.10"));
}

#[test]
fn request_ui_lang_prefers_zh_cn_over_en_fallback() {
let mut headers = HeaderMap::new();
@@ -1673,4 +1772,29 @@ mod tests {

assert!(matches!(request_ui_lang(&headers), UiLang::ZhTw));
}

#[test]
fn paginate_clamps_page_before_computing_offset() {
let (current_page, total_pages, offset) = paginate(100, 12, 10);

assert_eq!(current_page, 2);
assert_eq!(total_pages, 2);
assert_eq!(offset, 10);
}

#[test]
fn paginate_handles_large_page_without_overflow() {
let (current_page, total_pages, offset) = paginate(u32::MAX, 1, ENTRIES_PAGE_LIMIT);

assert_eq!(current_page, 1);
assert_eq!(total_pages, 1);
assert_eq!(offset, 0);
}

#[test]
fn paginate_saturates_large_total_count() {
let (_, total_pages, _) = paginate(1, i64::MAX, ENTRIES_PAGE_LIMIT);

assert_eq!(total_pages, u32::MAX.div_ceil(ENTRIES_PAGE_LIMIT));
}
}
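The new tests exercise two helpers whose definitions fall outside this hunk: request_client_ip_with_trust_proxy and paginate. As a reading aid, here are minimal sketches reconstructed only from the assertions above; the committed implementations may differ (for example in X-Real-IP handling or in how an empty result set is paged), so treat every detail as an assumption.

    // Hypothetical reconstruction from the tests only; not the committed implementation.
    fn request_client_ip_with_trust_proxy(
        headers: &HeaderMap,
        ConnectInfo(peer): ConnectInfo<SocketAddr>,
        trust_proxy: bool,
    ) -> Option<String> {
        if trust_proxy {
            // Only honour X-Forwarded-For when the deployment explicitly trusts its proxy.
            if let Some(forwarded) = headers.get("x-forwarded-for").and_then(|v| v.to_str().ok()) {
                // The left-most entry is the original client; accept it only if it parses as an IP.
                if let Some(ip) = forwarded.split(',').next().map(str::trim) {
                    if ip.parse::<std::net::IpAddr>().is_ok() {
                        return Some(ip.to_string());
                    }
                }
            }
        }
        // Otherwise fall back to the socket peer address.
        Some(peer.ip().to_string())
    }

    // Hypothetical reconstruction of the pagination helper from its three tests.
    fn paginate(page: u32, total_count: i64, limit: u32) -> (u32, u32, u32) {
        // Saturate the i64 row count into u32 before computing the page count.
        let total = u32::try_from(total_count.max(0)).unwrap_or(u32::MAX);
        // Keep at least one page so current_page stays well defined (assumption).
        let total_pages = total.div_ceil(limit).max(1);
        // Clamp the requested page into range before deriving the offset.
        let current_page = page.clamp(1, total_pages);
        let offset = (current_page - 1).saturating_mul(limit);
        (current_page, total_pages, offset)
    }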
@@ -50,8 +50,25 @@
.main { padding: 32px 24px 40px; flex: 1; }
.card { background: var(--surface); border: 1px solid var(--border); border-radius: 12px;
padding: 24px; width: 100%; max-width: 1180px; margin: 0 auto; }
.card-title { font-size: 20px; font-weight: 600; margin-bottom: 8px; }
.card-subtitle { color: var(--text-muted); font-size: 13px; margin-bottom: 20px; }
.card-title-row {
display: flex; align-items: center; flex-wrap: wrap; gap: 8px;
margin-bottom: 20px;
}
.card-title { font-size: 20px; font-weight: 600; margin: 0; }
.card-title-count {
display: inline-flex;
align-items: center;
min-height: 24px;
padding: 0 8px;
border: 1px solid var(--border);
border-radius: 999px;
background: var(--bg);
color: var(--text-muted);
font-size: 12px;
font-weight: 600;
line-height: 1;
font-family: 'JetBrains Mono', monospace;
}
.empty { color: var(--text-muted); font-size: 14px; padding: 20px 0; }
table { width: 100%; border-collapse: collapse; }
th, td { text-align: left; vertical-align: top; padding: 12px 10px; border-top: 1px solid var(--border); }
@@ -85,6 +102,24 @@
}
.detail { max-width: none; }
}
.pagination {
display: flex; align-items: center; gap: 8px; margin-top: 20px;
justify-content: center; padding: 12px 0;
}
.page-btn {
padding: 6px 14px; border-radius: 6px; border: 1px solid var(--border);
background: var(--surface); color: var(--text); text-decoration: none;
font-size: 13px; cursor: pointer;
}
.page-btn:hover { background: var(--surface2); }
.page-btn-disabled {
padding: 6px 14px; border-radius: 6px; border: 1px solid var(--border);
background: var(--surface); color: var(--text-muted); font-size: 13px;
opacity: 0.5; cursor: not-allowed;
}
.page-info {
color: var(--text-muted); font-size: 13px; font-family: 'JetBrains Mono', monospace;
}
</style>
</head>
<body>
@@ -114,8 +149,10 @@

<main class="main">
<section class="card">
<div class="card-title" data-i18n="auditTitle">我的审计</div>
<div class="card-subtitle" data-i18n="auditSubtitle">展示最近 100 条与当前用户相关的新审计记录。时间为浏览器本地时区。</div>
<div class="card-title-row">
<div class="card-title" data-i18n="auditTitle">我的审计</div>
<span class="card-title-count">{{ total_count }}</span>
</div>

{% if entries.is_empty() %}
<div class="empty" data-i18n="emptyAudit">暂无审计记录。</div>
@@ -140,6 +177,22 @@
{% endfor %}
</tbody>
</table>

{% if total_count > 0 %}
<div class="pagination">
{% if current_page > 1 %}
<a href="?page={{ current_page - 1 }}" class="page-btn" data-i18n="prevPage">上一页</a>
{% else %}
<span class="page-btn page-btn-disabled" data-i18n="prevPage">上一页</span>
{% endif %}
<span class="page-info">{{ current_page }} / {{ total_pages }}</span>
{% if current_page < total_pages %}
<a href="?page={{ current_page + 1 }}" class="page-btn" data-i18n="nextPage">下一页</a>
{% else %}
<span class="page-btn page-btn-disabled" data-i18n="nextPage">下一页</span>
{% endif %}
</div>
{% endif %}
{% endif %}
</section>
</main>
@@ -149,9 +202,9 @@
<script>
(function () {
I18N_PAGE = {
'zh-CN': { pageTitle: 'Secrets — 审计', auditTitle: '我的审计', auditSubtitle: '展示最近 100 条与当前用户相关的新审计记录。时间为浏览器本地时区。', emptyAudit: '暂无审计记录。', colTime: '时间', colAction: '动作', colTarget: '目标', colDetail: '详情' },
'zh-TW': { pageTitle: 'Secrets — 審計', auditTitle: '我的審計', auditSubtitle: '顯示最近 100 筆與目前使用者相關的新審計記錄。時間為瀏覽器本地時區。', emptyAudit: '暫無審計記錄。', colTime: '時間', colAction: '動作', colTarget: '目標', colDetail: '詳情' },
en: { pageTitle: 'Secrets — Audit', auditTitle: 'My audit', auditSubtitle: 'Shows the latest 100 audit records related to the current user. Time is in browser local timezone.', emptyAudit: 'No audit records.', colTime: 'Time', colAction: 'Action', colTarget: 'Target', colDetail: 'Detail' }
'zh-CN': { pageTitle: 'Secrets — 审计', auditTitle: '我的审计', emptyAudit: '暂无审计记录。', colTime: '时间', colAction: '动作', colTarget: '目标', colDetail: '详情', prevPage: '上一页', nextPage: '下一页' },
'zh-TW': { pageTitle: 'Secrets — 審計', auditTitle: '我的審計', emptyAudit: '暫無審計記錄。', colTime: '時間', colAction: '動作', colTarget: '目標', colDetail: '詳情', prevPage: '上一頁', nextPage: '下一頁' },
en: { pageTitle: 'Secrets — Audit', auditTitle: 'My audit', emptyAudit: 'No audit records.', colTime: 'Time', colAction: 'Action', colTarget: 'Target', colDetail: 'Detail', prevPage: 'Previous', nextPage: 'Next' }
};

window.applyPageLang = function () {
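The ?page=… links added to this template feed the aq.page query parameter read by the audit handler earlier in this changeset. The parameter struct itself is outside the diff; a plausible shape (the name and derive are assumptions) would be:

    // Hypothetical query-parameter struct for GET /audit?page=N; the real definition is not shown in this diff.
    #[derive(serde::Deserialize)]
    struct AuditPageQuery {
        page: Option<u32>,
    }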
@@ -123,7 +123,7 @@
background: var(--bg);
}
table {
width: max-content;
width: 100%;
min-width: 960px;
border-collapse: separate;
border-spacing: 0;
@@ -142,12 +142,12 @@
td { font-size: 13px; line-height: 1.45; }
tbody tr:nth-child(2n) td { background: rgba(255, 255, 255, 0.01); }
.mono { font-family: 'JetBrains Mono', monospace; }
.col-type { min-width: 108px; }
.col-type { min-width: 108px; width: 1%; }
.col-name { min-width: 180px; max-width: 260px; }
.col-tags { min-width: 160px; max-width: 220px; }
.col-secrets { min-width: 220px; max-width: 420px; vertical-align: top; }
.col-secrets .secret-list { max-height: 120px; overflow: auto; }
.col-actions { min-width: 132px; }
.col-actions { min-width: 132px; width: 1%; }
.cell-name, .cell-tags-val {
overflow-wrap: anywhere;
word-break: break-word;
@@ -350,6 +350,24 @@
}
.detail, .notes-scroll, .secret-list { max-width: none; }
}
.pagination {
display: flex; align-items: center; gap: 8px; margin-top: 20px;
justify-content: center; padding: 12px 0;
}
.page-btn {
padding: 6px 14px; border-radius: 6px; border: 1px solid var(--border);
background: var(--surface); color: var(--text); text-decoration: none;
font-size: 13px; cursor: pointer;
}
.page-btn:hover { background: var(--surface2); }
.page-btn-disabled {
padding: 6px 14px; border-radius: 6px; border: 1px solid var(--border);
background: var(--surface); color: var(--text-muted); font-size: 13px;
opacity: 0.5; cursor: not-allowed;
}
.page-info {
color: var(--text-muted); font-size: 13px; font-family: 'JetBrains Mono', monospace;
}
</style>
</head>
<body>
@@ -456,6 +474,22 @@
</tbody>
</table>
</div>

{% if total_count > 0 %}
<div class="pagination">
{% if current_page > 1 %}
<a href="?{% if !filter_folder.is_empty() %}folder={{ filter_folder | urlencode }}&{% endif %}{% if !filter_type.is_empty() %}type={{ filter_type | urlencode }}&{% endif %}{% if !filter_name.is_empty() %}name={{ filter_name | urlencode }}&{% endif %}page={{ current_page - 1 }}" class="page-btn" data-i18n="prevPage">上一页</a>
{% else %}
<span class="page-btn page-btn-disabled" data-i18n="prevPage">上一页</span>
{% endif %}
<span class="page-info">{{ current_page }} / {{ total_pages }}</span>
{% if current_page < total_pages %}
<a href="?{% if !filter_folder.is_empty() %}folder={{ filter_folder | urlencode }}&{% endif %}{% if !filter_type.is_empty() %}type={{ filter_type | urlencode }}&{% endif %}{% if !filter_name.is_empty() %}name={{ filter_name | urlencode }}&{% endif %}page={{ current_page + 1 }}" class="page-btn" data-i18n="nextPage">下一页</a>
{% else %}
<span class="page-btn page-btn-disabled" data-i18n="nextPage">下一页</span>
{% endif %}
</div>
{% endif %}
{% endif %}
</section>
</main>
@@ -554,7 +588,9 @@ var SECRET_TYPE_OPTIONS = JSON.parse(document.getElementById('secret-type-option
secretNameCheckError: '校验失败,请重试',
secretNameFixBeforeSave: '请先修复密文名称校验问题后再保存',
secretTypePlaceholder: '选择类型',
secretTypeInvalid: '类型不能为空'
secretTypeInvalid: '类型不能为空',
prevPage: '上一页',
nextPage: '下一页',
},
'zh-TW': {
pageTitle: 'Secrets — 條目',
@@ -610,7 +646,9 @@ var SECRET_TYPE_OPTIONS = JSON.parse(document.getElementById('secret-type-option
secretNameCheckError: '校驗失敗,請重試',
secretNameFixBeforeSave: '請先修復密文名稱校驗問題後再儲存',
secretTypePlaceholder: '選擇類型',
secretTypeInvalid: '類型不能為空'
secretTypeInvalid: '類型不能為空',
prevPage: '上一頁',
nextPage: '下一頁',
},
en: {
pageTitle: 'Secrets — Entries',
@@ -666,7 +704,9 @@ var SECRET_TYPE_OPTIONS = JSON.parse(document.getElementById('secret-type-option
secretNameCheckError: 'Validation failed, please retry',
secretNameFixBeforeSave: 'Fix secret name validation errors before saving',
secretTypePlaceholder: 'Select type',
secretTypeInvalid: 'Type cannot be empty'
secretTypeInvalid: 'Type cannot be empty',
prevPage: 'Previous',
nextPage: 'Next'
}
};

@@ -961,6 +1001,114 @@ var SECRET_TYPE_OPTIONS = JSON.parse(document.getElementById('secret-type-option
showDeleteErr('');
}

function refreshListAfterSave(entryId, body, secretRows) {
var tr = document.querySelector('tr[data-entry-id="' + entryId + '"]');
if (!tr) { window.location.reload(); return; }
var nameCell = tr.querySelector('.cell-name');
if (nameCell) nameCell.textContent = body.name;
var typeCell = tr.querySelector('.cell-type');
if (typeCell) typeCell.textContent = body.type;
var notesCell = tr.querySelector('.cell-notes-val');
if (notesCell) {
if (body.notes) { notesCell.textContent = body.notes; }
else { var notesWrap = tr.querySelector('.cell-notes'); if (notesWrap) notesWrap.innerHTML = ''; }
}
var tagsCell = tr.querySelector('.cell-tags-val');
if (tagsCell) tagsCell.textContent = body.tags.join(', ');
var secretsList = tr.querySelector('.secret-list');
if (secretsList) {
secretsList.innerHTML = '';
secretRows.forEach(function (info) {
var chip = document.createElement('span');
chip.className = 'secret-chip';
var nameSpan = document.createElement('span');
nameSpan.className = 'secret-name';
nameSpan.textContent = info.newName;
nameSpan.title = info.newName;
var typeSpan = document.createElement('span');
typeSpan.className = 'secret-type';
typeSpan.textContent = info.newType || 'text';
var unlinkBtn = document.createElement('button');
unlinkBtn.type = 'button';
unlinkBtn.className = 'btn-unlink-secret';
unlinkBtn.setAttribute('data-secret-id', info.secretId);
unlinkBtn.setAttribute('data-secret-name', info.newName);
unlinkBtn.title = t('unlinkTitle');
unlinkBtn.textContent = '\u00d7';
chip.appendChild(nameSpan);
chip.appendChild(typeSpan);
chip.appendChild(unlinkBtn);
secretsList.appendChild(chip);
});
}
tr.setAttribute('data-entry-folder', body.folder);
tr.setAttribute('data-entry-metadata', JSON.stringify(body.metadata));
var updatedSecrets = secretRows.map(function (info) {
return { id: info.secretId, name: info.newName, secret_type: info.newType || 'text' };
});
tr.setAttribute('data-entry-secrets', JSON.stringify(updatedSecrets));
}

function refreshListAfterDelete(entryId) {
var tr = document.querySelector('tr[data-entry-id="' + entryId + '"]');
var folder = tr ? tr.getAttribute('data-entry-folder') : null;
if (tr) tr.remove();
var tbody = document.querySelector('table tbody');
if (tbody && !tbody.querySelector('tr[data-entry-id]')) {
var card = document.querySelector('.card');
if (card) {
var tableWrap = card.querySelector('.table-wrap');
if (tableWrap) tableWrap.remove();
var existingEmpty = card.querySelector('.empty');
if (!existingEmpty) {
var emptyDiv = document.createElement('div');
emptyDiv.className = 'empty';
emptyDiv.setAttribute('data-i18n', 'emptyEntries');
emptyDiv.textContent = t('emptyEntries');
var filterBar = card.querySelector('.filter-bar');
if (filterBar) { card.insertBefore(emptyDiv, filterBar.nextSibling); }
else { card.appendChild(emptyDiv); }
}
}
}
var allTab = document.querySelector('.folder-tab[data-all-tab="1"]');
if (allTab) {
var count = parseInt(allTab.getAttribute('data-count') || '0', 10);
if (count > 0) {
count -= 1;
allTab.setAttribute('data-count', String(count));
allTab.textContent = t('allTab') + ' (' + count + ')';
}
}
if (folder) {
document.querySelectorAll('.folder-tab:not([data-all-tab])').forEach(function (tab) {
if (tab.textContent.trim().indexOf(folder) === 0) {
var m = tab.textContent.match(/\((\d+)\)/);
if (m) {
var c = parseInt(m[1], 10);
if (c > 0) {
c -= 1;
tab.textContent = folder + ' (' + c + ')';
}
}
}
});
}
}

function refreshListAfterUnlink(entryId, secretId) {
var tr = document.querySelector('tr[data-entry-id="' + entryId + '"]');
if (!tr) return;
var chip = tr.querySelector('.btn-unlink-secret[data-secret-id="' + secretId + '"]');
if (chip && chip.parentElement) chip.parentElement.remove();
var secrets = tr.getAttribute('data-entry-secrets');
try {
var arr = JSON.parse(secrets);
arr = arr.filter(function (s) { return s.id !== secretId; });
tr.setAttribute('data-entry-secrets', JSON.stringify(arr));
} catch (e) {}
}

document.getElementById('delete-cancel').addEventListener('click', closeDelete);
deleteOverlay.addEventListener('click', function (e) {
if (e.target === deleteOverlay) closeDelete();
@@ -975,8 +1123,9 @@ var SECRET_TYPE_OPTIONS = JSON.parse(document.getElementById('secret-type-option
});
})
.then(function () {
var deletedId = pendingDeleteId;
closeDelete();
window.location.reload();
refreshListAfterDelete(deletedId);
})
.catch(function (e) { showDeleteErr(e.message || String(e)); });
});
@@ -1086,7 +1235,7 @@ var SECRET_TYPE_OPTIONS = JSON.parse(document.getElementById('secret-type-option
}));
}).then(function () {
closeEdit();
window.location.reload();
refreshListAfterSave(currentEntryId, body, secretRows);
}).catch(function (e) {
showEditErr(e.message || String(e));
});
@@ -1102,7 +1251,6 @@ var SECRET_TYPE_OPTIONS = JSON.parse(document.getElementById('secret-type-option
var secretId = btn.getAttribute('data-secret-id');
var secretName = btn.getAttribute('data-secret-name') || '';
if (!entryId || !secretId) return;
if (!confirm(tf('confirmUnlinkSecret', { name: secretName }))) return;
fetch('/api/entries/' + encodeURIComponent(entryId) + '/secrets/' + encodeURIComponent(secretId), {
method: 'DELETE',
credentials: 'same-origin'
@@ -1112,7 +1260,7 @@ var SECRET_TYPE_OPTIONS = JSON.parse(document.getElementById('secret-type-option
return data;
});
}).then(function () {
window.location.reload();
refreshListAfterUnlink(entryId, secretId);
}).catch(function (err) {
alert(err.message || String(err));
});
@@ -1126,7 +1274,6 @@ var SECRET_TYPE_OPTIONS = JSON.parse(document.getElementById('secret-type-option
var secretId = btn.getAttribute('data-secret-id');
var secretName = btn.getAttribute('data-secret-name') || '';
if (!entryId || !secretId) return;
if (!confirm(tf('confirmUnlinkSecret', { name: secretName }))) return;
fetch('/api/entries/' + encodeURIComponent(entryId) + '/secrets/' + encodeURIComponent(secretId), {
method: 'DELETE',
credentials: 'same-origin'
@@ -1136,7 +1283,12 @@ var SECRET_TYPE_OPTIONS = JSON.parse(document.getElementById('secret-type-option
return data;
});
}).then(function () {
window.location.reload();
btn.closest('.secret-edit-row').remove();
var tableRow = document.querySelector('tr[data-entry-id="' + entryId + '"]');
if (tableRow) {
var chip = tableRow.querySelector('.btn-unlink-secret[data-secret-id="' + secretId + '"]');
if (chip && chip.parentElement) chip.parentElement.remove();
}
}).catch(function (err) {
alert(err.message || String(err));
});
@@ -31,7 +31,23 @@ GOOGLE_CLIENT_SECRET=
# ─── 日志(可选)──────────────────────────────────────────────────────
# RUST_LOG=secrets_mcp=debug

# ─── 注意 ─────────────────────────────────────────────────────────────
# SERVER_MASTER_KEY 已不再需要。
# 新架构(E2EE)中,加密密钥由用户密码短语在客户端本地派生,服务端不持有原始密钥。
# 仅在需要迁移旧版 wrapped_key 数据时临时启用。
# ─── 数据库连接池(可选)──────────────────────────────────────────────
# 最大连接数,默认 10
# SECRETS_DATABASE_POOL_SIZE=10
# 获取连接超时秒数,默认 5
# SECRETS_DATABASE_ACQUIRE_TIMEOUT=5

# ─── 限流(可选)──────────────────────────────────────────────────────
# 全局限流速率(req/s),默认 100
# RATE_LIMIT_GLOBAL_PER_SECOND=100
# 全局限流突发量,默认 200
# RATE_LIMIT_GLOBAL_BURST=200
# 单 IP 限流速率(req/s),默认 20
# RATE_LIMIT_IP_PER_SECOND=20
# 单 IP 限流突发量,默认 40
# RATE_LIMIT_IP_BURST=40

# ─── 代理信任(可选)─────────────────────────────────────────────────
# 设为 1/true/yes 时从 X-Forwarded-For / X-Real-IP 提取客户端 IP
# 仅在反代环境下启用,否则客户端可伪造 IP 绕过限流
# TRUST_PROXY=1
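The new TRUST_PROXY knob documented above accepts 1/true/yes. A minimal sketch of how such a flag could be parsed at startup follows; the actual config loader is not part of this diff, so the helper name and placement are assumptions.

    // Hypothetical helper: treat "1", "true" or "yes" (case-insensitive) as enabled.
    fn env_flag(name: &str) -> bool {
        std::env::var(name)
            .map(|v| matches!(v.trim().to_ascii_lowercase().as_str(), "1" | "true" | "yes"))
            .unwrap_or(false)
    }

    // Example usage: let trust_proxy = env_flag("TRUST_PROXY");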
@@ -11,7 +11,7 @@ tag="secrets-mcp-${version}"
echo "==> 当前 secrets-mcp 版本: ${version}"
echo "==> 检查是否已存在 tag: ${tag}"

if git rev-parse "refs/tags/${tag}" >/dev/null 2>&1; then
if jj log --no-graph --revisions "tag(${tag})" --limit 1 >/dev/null 2>&1; then
echo "提示: 已存在 tag ${tag},将按重复构建处理,不阻断检查。"
echo "如需创建新的发布版本,请先 bump crates/secrets-mcp/Cargo.toml 中的 version。"
else
@@ -1,95 +0,0 @@
#!/bin/bash
# 同步测试环境数据到生产环境
# 用法: ./scripts/sync-test-to-prod.sh

set -euo pipefail

# PostgreSQL 客户端工具路径 (Homebrew libpq)
export PATH="/opt/homebrew/opt/libpq/bin:$PATH"

# SSL 配置
export PGSSLMODE=verify-full
export PGSSLROOTCERT=/etc/ssl/cert.pem

# 测试环境
TEST_DB="postgres://postgres:Voson_2026_Pg18!@db.refining.ltd:5432/secrets-nn-test"

# 生产环境
PROD_DB="postgres://postgres:Voson_2026_Pg18!@db.refining.ltd:5432/secrets-nn-prod"

echo "========================================="
echo " 测试环境 -> 生产环境 数据同步"
echo "========================================="
echo ""

# 确认操作
read -p "⚠️ 此操作将覆盖生产环境数据,确认继续? (yes/no): " confirm
if [ "$confirm" != "yes" ]; then
echo "已取消"
exit 0
fi

echo ""
echo "步骤 1/4: 导出测试环境数据..."
TEMP_DIR=$(mktemp -d)
trap "rm -rf $TEMP_DIR" EXIT

# 导出测试环境数据(不含审计日志和历史记录)
pg_dump "$TEST_DB" \
--table=entries \
--table=secrets \
--table=entry_secrets \
--table=users \
--table=oauth_accounts \
--data-only \
--column-inserts \
--no-owner \
--no-privileges \
> "$TEMP_DIR/test_data.sql"

echo "✓ 测试数据已导出到临时文件"
echo " 文件大小: $(du -h "$TEMP_DIR/test_data.sql" | cut -f1)"

echo ""
echo "步骤 2/4: 备份当前生产数据..."
pg_dump "$PROD_DB" \
--table=entries \
--table=secrets \
--table=entry_secrets \
--table=users \
--table=oauth_accounts \
--data-only \
--column-inserts \
--no-owner \
--no-privileges \
> "$TEMP_DIR/prod_backup_$(date +%Y%m%d_%H%M%S).sql"

echo "✓ 生产数据已备份"

echo ""
echo "步骤 3/4: 清空生产环境目标表..."
psql "$PROD_DB" <<'SQL'
TRUNCATE TABLE entry_secrets CASCADE;
TRUNCATE TABLE secrets CASCADE;
TRUNCATE TABLE entries CASCADE;
SQL

echo "✓ 生产环境目标表已清空"

echo ""
echo "步骤 4/4: 导入测试数据到生产环境..."
psql "$PROD_DB" -f "$TEMP_DIR/test_data.sql" 2>&1 | tail -20

echo ""
echo "验证数据..."
echo "生产环境数据统计:"
psql "$PROD_DB" -c "SELECT 'users' as table_name, count(*) FROM users UNION ALL SELECT 'entries', count(*) FROM entries UNION ALL SELECT 'secrets', count(*) FROM secrets UNION ALL SELECT 'entry_secrets', count(*) FROM entry_secrets UNION ALL SELECT 'oauth_accounts', count(*) FROM oauth_accounts ORDER BY table_name;"

echo ""
echo "========================================="
echo " ✓ 数据同步完成!"
echo "========================================="
echo ""
echo "提示:"
echo " - 生产数据备份已保存在临时目录"
echo " - 临时文件将在脚本退出后自动删除"