Compare commits


14 Commits

Author SHA1 Message Date
voson
b6349dd1c8 chore(secrets-mcp): bump version to 0.1.10
Made-with: Cursor
2026-03-21 16:46:33 +08:00
voson
f720983328 refactor(db): remove the meaningless actor; fix history multi-tenancy and model
- Drop the actor column and its write path from entries_history / audit_log / secrets_history
- MCP secrets_history now passes the current user_id through
- Entry gains user_id; search queries no longer use a pseudo UUID
- Migration: keep users.api_key; when rolling back from the api_keys table, generate a new plaintext key and drop the table
- Docs: audit_log auth semantics and API Key storage notes

Made-with: Cursor
2026-03-21 16:45:50 +08:00
voson
7bd0603dc6 chore(secrets-mcp): bump version to 0.1.9
Made-with: Cursor
2026-03-21 12:25:38 +08:00
voson
17a95bea5b refactor(audit): drop legacy-format compatibility; user_id now always comes from the column
- audit_log queries drop the detail->>'user_id' fallback branch
- login_detail no longer redundantly writes user_id into the detail JSON
- Migration SQL drops the redundant ALTER TABLE ADD COLUMN

Made-with: Cursor
2026-03-21 12:24:00 +08:00
voson
a42db62702 style(secrets-mcp): rustfmt web.rs audit mapping
Made-with: Cursor
2026-03-21 12:06:29 +08:00
voson
2edb970cba chore(secrets-mcp): bump version to 0.1.8
Made-with: Cursor
2026-03-21 12:05:22 +08:00
voson
17f8ac0dbc web: render audit-page timestamps in the browser's local time zone
Made-with: Cursor
2026-03-21 12:03:44 +08:00
voson
259fbe10a6 ci: streamline the Release upsert logic
Extract shared auth/api variables to avoid repetition; replace the while loop that
cleans up old assets with a single xargs line; the POST branch pipes the response
straight into jq to get the id, dropping the temp file.
279 lines → 248 lines.

Made-with: Cursor
2026-03-21 11:36:43 +08:00
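The xargs one-liner mentioned above can be sketched as follows. This is a stand-in sketch, not the workflow's actual command: `printf` substitutes for the real `curl …/assets | jq -r '.[].id'` listing, and `echo` substitutes for the per-asset `curl -X DELETE` call, so only the pipeline shape is illustrated.

```shell
# One xargs line replacing a while-read loop over asset ids.
# printf stands in for `curl ... /assets | jq -r '.[].id'`;
# echo stands in for the per-asset DELETE request.
printf '101\n102\n103\n' | xargs -I{} echo "DELETE assets/{}"
```

The `-I{}` flag runs the command once per input line with `{}` replaced by that line, which is exactly the shape the old loop had.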
voson
c815fb4cc8 ci: fix the Release unique-constraint conflict when force re-releasing
DELETE + POST of a same-named release trips Gitea's UQE_release_n constraint.
New behavior: release exists → PATCH name/body, then delete the old assets one by
one and re-upload; no release → POST a new one as before.

Made-with: Cursor
2026-03-21 11:33:45 +08:00
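The upsert control flow described above can be sketched with stubs. `lookup()` here is a hypothetical stand-in for `curl …/releases/tags/$tag | jq -r '.id // empty'`, and the `echo` lines stand in for the real PATCH / POST requests:

```shell
# Stubbed sketch of the release upsert: update in place when a release
# with this tag already exists, otherwise create a fresh one.
lookup() { printf '%s' "$EXISTING_ID"; }   # stand-in for the GET-by-tag call
upsert() {
  release_id=$(lookup)
  if [ -n "$release_id" ]; then
    echo "PATCH releases/$release_id"   # update name/body, keep the id
  else
    echo "POST releases"                # no same-named release: create fresh
  fi
}
EXISTING_ID=42 upsert   # → PATCH releases/42
EXISTING_ID=   upsert   # → POST releases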
voson
90cd1eca15 ci: allow overwriting and re-releasing the same version
- Version parsing no longer exits 1; it records tag_exists=true and prints a warning
- Create-tag step: if the tag already exists, delete it locally, then remotely, then recreate the annotated tag
- Create-release step: look up a same-named Release first; if found, DELETE the old Release, then POST a new one

Made-with: Cursor
2026-03-21 11:22:24 +08:00
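The tag-overwrite flow can be demonstrated in a throwaway repository. The remote deletion is shown only as a comment since there is no remote here; names and the "(rebuilt)" message are illustrative:

```shell
# Runnable demo of "delete locally, (delete remotely,) re-tag" for an
# existing annotated tag, in a temporary git repo.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q && git config user.email ci@example.com && git config user.name ci
git commit -q --allow-empty -m init
tag=secrets-mcp-0.1.9
git tag -a "$tag" -m "Release $tag"        # first run: tag created
git tag -d "$tag" >/dev/null               # overwrite run: drop the local tag
# git push origin ":refs/tags/$tag"        # ...and the remote one, in real CI
git tag -a "$tag" -m "Release $tag (rebuilt)"
git tag -l 'secrets-mcp-*'
```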
voson
da007348ea ci: merge into two jobs, ci + deploy, with check running before build
On a single self-hosted runner, parallel jobs just queue up; splitting into many
jobs only adds artifact hand-off, repeated checkouts, and scheduling delay, making
things slower overall.

Changes:
- The former version / check / build-linux / publish-release jobs are merged into one ci job
- Step order: version gate → fmt/clippy/test → build → tag → publish Release
- The tag is created only after a successful build, so failed runs leave no stale tag
- Release creation + upload + publish collapse into one step, dropping the draft intermediate
- The deploy job keeps only artifact download + SSH deploy logic, with no duplicate compile
- Overall trimmed from 400 lines to 244

Made-with: Cursor
2026-03-21 11:18:10 +08:00
voson
f2344b7543 feat(secrets-mcp): audit page, audit_log user_id, OAuth login, and dashboard footer
- audit_log gains user_id; business audit writes pass the user_id through
- Web /audit page plus sidebar entry; Dashboard version footer pinned to the bottom (margin-top: auto)
- Stop writing login audit entries on successful API Key auth
- Matching updates to docs, CI, and release-check

Made-with: Cursor
2026-03-21 11:12:11 +08:00
voson
ee028d45c3 ci: improve workflow parallelism and artifact hand-off
- check and build-linux now run in parallel, saving about 10 min
- Add upload-artifact / download-artifact so deploy-mcp reuses the binary directly, skipping a rebuild (saves about 15 min)
- check / build caches now include the target/ directory, speeding up incremental builds
- Extract a global MUSL_TARGET variable, removing the hard-coded x86_64-unknown-linux-musl
- publish-release now checks the check job's result and skips publishing when quality gates fail
- Drop build-linux's redundant Feishu notification; the publish-release summary already covers it

Made-with: Cursor
2026-03-21 10:07:29 +08:00
voson
a44c8ebf08 feat(mcp): persist login audit for OAuth and API key
- Add audit::log_login in secrets-core (audit_log detail: user_id, provider, client_ip, user_agent)
- Log web Google OAuth success after session established
- Log MCP Bearer API key auth success in middleware
- Bump secrets-mcp to 0.1.6 (tag 0.1.5 existed)

Made-with: Cursor
2026-03-21 09:48:52 +08:00
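The login-audit `detail` payload described in this commit can be sketched with jq. The field set (user_id, provider, client_ip, user_agent) comes from the commit message; the concrete values below are made up for illustration:

```shell
# Build the login-audit detail JSON shape with jq (illustrative values).
detail=$(jq -n \
  --arg uid "user-123" --arg p "google" \
  --arg ip "203.0.113.7" --arg ua "curl/8.5" \
  '{user_id:$uid, provider:$p, client_ip:$ip, user_agent:$ua}')
echo "$detail" | jq -r '.provider'
```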
21 changed files with 687 additions and 393 deletions


@@ -7,7 +7,6 @@ on:
       - 'crates/**'
       - 'Cargo.toml'
       - 'Cargo.lock'
-      # systemd / 部署模板变更也应跑构建(产物无变时可快速跳过 check)
       - 'deploy/**'
       - '.gitea/workflows/**'
@@ -25,225 +24,162 @@ env:
   CARGO_NET_RETRY: 10
   CARGO_TERM_COLOR: always
   RUST_BACKTRACE: short
+  MUSL_TARGET: x86_64-unknown-linux-musl
 jobs:
-  version:
-    name: 版本 & Release
+  ci:
+    name: 检查 / 构建 / 发版
     runs-on: debian
+    timeout-minutes: 40
     outputs:
-      version: ${{ steps.ver.outputs.version }}
       tag: ${{ steps.ver.outputs.tag }}
-      tag_exists: ${{ steps.ver.outputs.tag_exists }}
-      release_id: ${{ steps.release.outputs.release_id }}
+      version: ${{ steps.ver.outputs.version }}
     steps:
       - uses: actions/checkout@v4
         with:
           fetch-depth: 0
+      # ── 版本解析 ────────────────────────────────────────────────────────
       - name: 解析版本
         id: ver
         run: |
           version=$(grep -m1 '^version' crates/secrets-mcp/Cargo.toml | sed 's/.*"\(.*\)".*/\1/')
           tag="secrets-mcp-${version}"
-          previous_tag=$(git tag --list 'secrets-mcp-*' --sort=-v:refname | awk -v tag="$tag" '$0 != tag { print; exit }')
           echo "version=${version}" >> "$GITHUB_OUTPUT"
           echo "tag=${tag}" >> "$GITHUB_OUTPUT"
-          echo "previous_tag=${previous_tag}" >> "$GITHUB_OUTPUT"
           if git rev-parse "refs/tags/${tag}" >/dev/null 2>&1; then
+            echo "⚠ 版本 ${tag} 已存在,将覆盖重新发版。"
             echo "tag_exists=true" >> "$GITHUB_OUTPUT"
-            echo "版本 ${tag} 已存在"
           else
-            echo "tag_exists=false" >> "$GITHUB_OUTPUT"
             echo "将创建新版本 ${tag}"
+            echo "tag_exists=false" >> "$GITHUB_OUTPUT"
           fi
-      - name: 严格拦截重复版本
-        if: steps.ver.outputs.tag_exists == 'true'
-        run: |
-          echo "错误: 版本 ${{ steps.ver.outputs.tag }} 已存在,禁止重复发版。"
-          echo "请先 bump crates/secrets-mcp/Cargo.toml 中的 version,并执行 cargo build 同步 Cargo.lock。"
-          exit 1
+      # ── Rust 工具链 ──────────────────────────────────────────────────────
+      - name: 安装 Rust 与 musl 工具链
+        run: |
+          sudo apt-get update -qq
+          sudo apt-get install -y -qq pkg-config musl-tools binutils jq
+          if ! command -v rustup >/dev/null 2>&1; then
+            curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain "${RUST_TOOLCHAIN}"
+            echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
+          fi
+          source "$HOME/.cargo/env" 2>/dev/null || true
+          rustup toolchain install "${RUST_TOOLCHAIN}" --profile minimal \
+            --component rustfmt --component clippy
+          rustup default "${RUST_TOOLCHAIN}"
+          rustup target add "${MUSL_TARGET}" --toolchain "${RUST_TOOLCHAIN}"
+          rustc -V && cargo -V
+      - name: 缓存 Cargo
+        uses: actions/cache@v4
+        with:
+          path: |
+            ~/.cargo/registry/index
+            ~/.cargo/registry/cache
+            ~/.cargo/git/db
+            target
+          key: cargo-${{ env.MUSL_TARGET }}-${{ env.RUST_TOOLCHAIN }}-${{ hashFiles('Cargo.lock') }}
+          restore-keys: |
+            cargo-${{ env.MUSL_TARGET }}-${{ env.RUST_TOOLCHAIN }}-
+            cargo-${{ env.MUSL_TARGET }}-
+      # ── 质量检查(先于构建,失败即止)──────────────────────────────────
+      - name: fmt
+        run: cargo fmt -- --check
+      - name: clippy
+        run: cargo clippy --locked -- -D warnings
+      - name: test
+        run: cargo test --locked
+      # ── 构建(质量检查通过后才执行)────────────────────────────────────
+      - name: 构建 secrets-mcp (musl)
+        run: |
+          cargo build --release --locked --target "${MUSL_TARGET}" -p secrets-mcp
+          strip "target/${MUSL_TARGET}/release/${MCP_BINARY}"
+      - name: 上传构建产物
+        uses: actions/upload-artifact@v3
+        with:
+          name: ${{ env.MCP_BINARY }}-linux-musl
+          path: target/${{ env.MUSL_TARGET }}/release/${{ env.MCP_BINARY }}
+          retention-days: 3
+      # ── 创建 / 覆盖 Tag(构建成功后才打)───────────────────────────────
       - name: 创建 Tag
-        if: steps.ver.outputs.tag_exists == 'false'
         run: |
           git config user.name "github-actions[bot]"
           git config user.email "github-actions[bot]@users.noreply.github.com"
-          git tag -a "${{ steps.ver.outputs.tag }}" -m "Release ${{ steps.ver.outputs.tag }}"
-          git push origin "${{ steps.ver.outputs.tag }}"
+          tag="${{ steps.ver.outputs.tag }}"
+          if [ "${{ steps.ver.outputs.tag_exists }}" = "true" ]; then
+            git tag -d "$tag" 2>/dev/null || true
+            git push origin ":refs/tags/$tag" 2>/dev/null || true
+          fi
+          git tag -a "$tag" -m "Release $tag"
+          git push origin "$tag"
-      - name: 解析或创建 Release
-        id: release
+      # ── Release(可选,需配置 RELEASE_TOKEN)───────────────────────────
+      - name: Upsert Release
+        if: env.RELEASE_TOKEN != ''
         env:
           RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
         run: |
-          if [ -z "$RELEASE_TOKEN" ]; then
-            echo "release_id=" >> "$GITHUB_OUTPUT"
-            exit 0
-          fi
-          command -v jq >/dev/null 2>&1 || (sudo apt-get update -qq && sudo apt-get install -y -qq jq)
           tag="${{ steps.ver.outputs.tag }}"
           version="${{ steps.ver.outputs.version }}"
-          release_api="${{ github.server_url }}/api/v1/repos/${{ github.repository }}/releases"
+          api="${{ github.server_url }}/api/v1/repos/${{ github.repository }}/releases"
+          auth="Authorization: token $RELEASE_TOKEN"
-          http_code=$(curl -sS -o /tmp/release.json -w '%{http_code}' \
-            -H "Authorization: token $RELEASE_TOKEN" \
-            "${release_api}/tags/${tag}")
-          if [ "$http_code" = "200" ]; then
-            release_id=$(jq -r '.id // empty' /tmp/release.json)
-            if [ -n "$release_id" ]; then
-              echo "已找到现有 Release: ${release_id}"
-              echo "release_id=${release_id}" >> "$GITHUB_OUTPUT"
-              exit 0
-            fi
-          fi
-          previous_tag="${{ steps.ver.outputs.previous_tag }}"
+          previous_tag=$(git tag --list 'secrets-mcp-*' --sort=-v:refname | awk -v t="$tag" '$0 != t { print; exit }')
           if [ -n "$previous_tag" ]; then
            changes=$(git log --pretty=format:'- %s (%h)' "${previous_tag}..HEAD")
           else
            changes=$(git log --pretty=format:'- %s (%h)')
           fi
           [ -z "$changes" ] && changes="- 首次发布"
           body=$(printf '## 变更日志\n\n%s' "$changes")
-          payload=$(jq -n \
-            --arg tag "$tag" \
-            --arg name "secrets-mcp ${version}" \
-            --arg body "$body" \
-            '{tag_name: $tag, name: $name, body: $body, draft: true}')
-          http_code=$(curl -sS -o /tmp/create-release.json -w '%{http_code}' \
-            -H "Authorization: token $RELEASE_TOKEN" \
-            -H "Content-Type: application/json" \
-            -X POST "$release_api" \
-            -d "$payload")
-          if [ "$http_code" = "201" ] || [ "$http_code" = "200" ]; then
-            release_id=$(jq -r '.id // empty' /tmp/create-release.json)
-          fi
+          # Upsert: 存在 → PATCH + 清旧 assets;不存在 → POST
+          release_id=$(curl -sS -H "$auth" "${api}/tags/${tag}" 2>/dev/null | jq -r '.id // empty')
           if [ -n "$release_id" ]; then
-            echo "已创建草稿 Release: ${release_id}"
-            echo "release_id=${release_id}" >> "$GITHUB_OUTPUT"
+            curl -sS -o /dev/null -H "$auth" -H "Content-Type: application/json" \
+              -X PATCH "${api}/${release_id}" \
+              -d "$(jq -n --arg n "secrets-mcp ${version}" --arg b "$body" '{name:$n,body:$b,draft:false}')"
+            curl -sS -H "$auth" "${api}/${release_id}/assets" | \
+              jq -r '.[].id' | xargs -I{} curl -sS -o /dev/null -H "$auth" -X DELETE "${api}/${release_id}/assets/{}"
+            echo "已更新 Release ${release_id}"
           else
-            echo "⚠ 创建 Release 失败 (HTTP ${http_code}),跳过产物上传"
-            cat /tmp/create-release.json 2>/dev/null || true
-            echo "release_id=" >> "$GITHUB_OUTPUT"
+            release_id=$(curl -fsS -H "$auth" -H "Content-Type: application/json" \
+              -X POST "$api" \
+              -d "$(jq -n --arg t "$tag" --arg n "secrets-mcp ${version}" --arg b "$body" \
+                '{tag_name:$t,name:$n,body:$b,draft:false}')" | jq -r '.id')
+            echo "已创建 Release ${release_id}"
           fi
-  check:
-    name: 质量检查 (fmt / clippy / test)
-    needs: [version]
-    runs-on: debian
-    timeout-minutes: 15
-    steps:
-      - name: 安装 Rust
-        run: |
-          if ! command -v rustup >/dev/null 2>&1; then
-            curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain "${RUST_TOOLCHAIN}"
-            echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
-          fi
-          source "$HOME/.cargo/env" 2>/dev/null || true
-          rustup toolchain install "${RUST_TOOLCHAIN}" --profile minimal --component rustfmt --component clippy
-          rustup default "${RUST_TOOLCHAIN}"
-          rustc -V
-          cargo -V
-      - uses: actions/checkout@v4
-      - name: 缓存 Cargo
-        uses: actions/cache@v4
-        with:
-          path: |
-            ~/.cargo/registry/index
-            ~/.cargo/registry/cache
-            ~/.cargo/git/db
-          key: cargo-check-${{ env.RUST_TOOLCHAIN }}-${{ hashFiles('Cargo.lock') }}
-          restore-keys: |
-            cargo-check-${{ env.RUST_TOOLCHAIN }}-
-            cargo-check-
-      - run: cargo fmt -- --check
-      - run: cargo clippy --locked -- -D warnings
-      - run: cargo test --locked
-  build-linux:
-    name: Build Linux (secrets-mcp, musl)
-    needs: [version, check]
-    runs-on: debian
-    timeout-minutes: 25
-    steps:
-      - name: 安装依赖
-        run: |
-          sudo apt-get update
-          sudo apt-get install -y pkg-config musl-tools binutils curl
-          if ! command -v rustup >/dev/null 2>&1; then
-            curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain "${RUST_TOOLCHAIN}"
-            echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
-          fi
-          source "$HOME/.cargo/env" 2>/dev/null || true
-          rustup toolchain install "${RUST_TOOLCHAIN}" --profile minimal
-          rustup default "${RUST_TOOLCHAIN}"
-          rustup target add x86_64-unknown-linux-musl --toolchain "${RUST_TOOLCHAIN}"
-          rustc -V
-          cargo -V
-      - uses: actions/checkout@v4
-      - name: 缓存 Cargo
-        uses: actions/cache@v4
-        with:
-          path: |
-            ~/.cargo/registry/index
-            ~/.cargo/registry/cache
-            ~/.cargo/git/db
-          key: cargo-x86_64-unknown-linux-musl-${{ env.RUST_TOOLCHAIN }}-${{ hashFiles('Cargo.lock') }}
-          restore-keys: |
-            cargo-x86_64-unknown-linux-musl-${{ env.RUST_TOOLCHAIN }}-
-            cargo-x86_64-unknown-linux-musl-
-      - name: 构建 secrets-mcp (musl)
-        run: |
-          cargo build --release --locked --target x86_64-unknown-linux-musl -p secrets-mcp
-          strip target/x86_64-unknown-linux-musl/release/${{ env.MCP_BINARY }}
-      - name: 上传 Release 产物
-        if: needs.version.outputs.release_id != ''
-        env:
-          RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
-        run: |
-          [ -z "$RELEASE_TOKEN" ] && exit 0
-          tag="${{ needs.version.outputs.tag }}"
-          bin="target/x86_64-unknown-linux-musl/release/${{ env.MCP_BINARY }}"
-          archive="${{ env.MCP_BINARY }}-${tag}-x86_64-linux-musl.tar.gz"
+          bin="target/${MUSL_TARGET}/release/${MCP_BINARY}"
+          archive="${MCP_BINARY}-${tag}-x86_64-linux-musl.tar.gz"
           tar -czf "$archive" -C "$(dirname "$bin")" "$(basename "$bin")"
           sha256sum "$archive" > "${archive}.sha256"
-          release_url="${{ github.server_url }}/api/v1/repos/${{ github.repository }}/releases/${{ needs.version.outputs.release_id }}/assets"
-          curl -fsS -H "Authorization: token $RELEASE_TOKEN" \
-            -F "attachment=@${archive}" "$release_url"
-          curl -fsS -H "Authorization: token $RELEASE_TOKEN" \
-            -F "attachment=@${archive}.sha256" "$release_url"
+          curl -fsS -H "$auth" -F "attachment=@${archive}" "${api}/${release_id}/assets"
+          curl -fsS -H "$auth" -F "attachment=@${archive}.sha256" "${api}/${release_id}/assets"
+          echo "Release ${tag} 已发布"
+      # ── 飞书汇总通知 ─────────────────────────────────────────────────────
       - name: 飞书通知
         if: always()
         env:
           WEBHOOK_URL: ${{ vars.WEBHOOK_URL }}
         run: |
           [ -z "$WEBHOOK_URL" ] && exit 0
-          command -v jq >/dev/null 2>&1 || (sudo apt-get update -qq && sudo apt-get install -y -qq jq)
-          tag="${{ needs.version.outputs.tag }}"
-          commit=$(git log -1 --pretty=format:"%s" 2>/dev/null || echo "N/A")
+          tag="${{ steps.ver.outputs.tag }}"
+          commit="${{ github.event.head_commit.message }}"
+          [ -z "$commit" ] && commit="${{ github.sha }}"
           url="${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_number }}"
           result="${{ job.status }}"
           if [ "$result" = "success" ]; then icon="✅"; else icon="❌"; fi
-          msg="secrets-mcp linux 构建${icon}
+          msg="secrets-mcp 构建&发版 ${icon}
           版本:${tag}
           提交:${commit}
           作者:${{ github.actor }}
@@ -251,50 +187,21 @@ jobs:
           payload=$(jq -n --arg text "$msg" '{msg_type: "text", content: {text: $text}}')
           curl -sS -H "Content-Type: application/json" -X POST -d "$payload" "$WEBHOOK_URL"
-  deploy-mcp:
+  deploy:
     name: 部署 secrets-mcp
-    needs: [version, build-linux]
-    # 部署目标由仓库 Actions 配置:vars.DEPLOY_HOST / vars.DEPLOY_USER;私钥 secrets.DEPLOY_SSH_KEY(PEM 原文,勿 base64)
-    # (可用 scripts/setup-gitea-actions.sh 或 Gitea API 写入,勿写进本文件)。
-    # Google OAuth / SERVER_MASTER_KEY / SECRETS_DATABASE_URL 等勿写入 CI;请在 ECS 上
-    # /opt/secrets-mcp/.env 配置(见 deploy/.env.example)
-    # 若仓库 main 仍为纯 CLI、仅 feat/mcp 含本 workflow,请去掉条件里的 main,避免误部署。
-    if: needs.build-linux.result == 'success' && (github.ref == 'refs/heads/main' || github.ref == 'refs/heads/feat/mcp' || github.ref == 'refs/heads/mcp')
+    needs: [ci]
+    if: |
+      github.ref == 'refs/heads/main' ||
+      github.ref == 'refs/heads/feat/mcp' ||
+      github.ref == 'refs/heads/mcp'
     runs-on: debian
     timeout-minutes: 10
     steps:
-      - uses: actions/checkout@v4
-      - name: 安装 Rust
-        run: |
-          if ! command -v rustup >/dev/null 2>&1; then
-            curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain "${RUST_TOOLCHAIN}"
-            echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
-          fi
-          source "$HOME/.cargo/env" 2>/dev/null || true
-          sudo apt-get update -qq && sudo apt-get install -y -qq pkg-config musl-tools
-          rustup toolchain install "${RUST_TOOLCHAIN}" --profile minimal
-          rustup default "${RUST_TOOLCHAIN}"
-          rustup target add x86_64-unknown-linux-musl --toolchain "${RUST_TOOLCHAIN}"
-          rustc -V
-          cargo -V
-      - name: 缓存 Cargo
-        uses: actions/cache@v4
+      - name: 下载构建产物
+        uses: actions/download-artifact@v3
         with:
-          path: |
-            ~/.cargo/registry/index
-            ~/.cargo/registry/cache
-            ~/.cargo/git/db
-          key: cargo-x86_64-unknown-linux-musl-${{ env.RUST_TOOLCHAIN }}-${{ hashFiles('Cargo.lock') }}
-          restore-keys: |
-            cargo-x86_64-unknown-linux-musl-${{ env.RUST_TOOLCHAIN }}-
-            cargo-x86_64-unknown-linux-musl-
-      - name: 构建 secrets-mcp
-        run: |
-          cargo build --release --locked --target x86_64-unknown-linux-musl -p secrets-mcp
-          strip target/x86_64-unknown-linux-musl/release/${{ env.MCP_BINARY }}
+          name: ${{ env.MCP_BINARY }}-linux-musl
+          path: /tmp/artifact
       - name: 部署到阿里云 ECS
         env:
@@ -303,16 +210,15 @@ jobs:
           DEPLOY_SSH_KEY: ${{ secrets.DEPLOY_SSH_KEY }}
         run: |
           if [ -z "$DEPLOY_HOST" ] || [ -z "$DEPLOY_USER" ] || [ -z "$DEPLOY_SSH_KEY" ]; then
-            echo "部署跳过:请在仓库 Actions 中配置 vars.DEPLOY_HOST、vars.DEPLOY_USER 与 secrets.DEPLOY_SSH_KEY"
+            echo "部署跳过:请配置 vars.DEPLOY_HOST、vars.DEPLOY_USER 与 secrets.DEPLOY_SSH_KEY"
             exit 0
           fi
           echo "$DEPLOY_SSH_KEY" > /tmp/deploy_key
           chmod 600 /tmp/deploy_key
-          SCP="scp -i /tmp/deploy_key -o StrictHostKeyChecking=no"
-          $SCP target/x86_64-unknown-linux-musl/release/${{ env.MCP_BINARY }} \
+          scp -i /tmp/deploy_key -o StrictHostKeyChecking=no \
+            "/tmp/artifact/${MCP_BINARY}" \
             "${DEPLOY_USER}@${DEPLOY_HOST}:/tmp/secrets-mcp.new"
           ssh -i /tmp/deploy_key -o StrictHostKeyChecking=no "${DEPLOY_USER}@${DEPLOY_HOST}" "
@@ -322,7 +228,6 @@ jobs:
             sleep 2
             sudo systemctl is-active secrets-mcp && echo '服务启动成功' || (sudo journalctl -u secrets-mcp -n 20 && exit 1)
           "
           rm -f /tmp/deploy_key
       - name: 飞书通知
@@ -331,94 +236,13 @@ jobs:
           WEBHOOK_URL: ${{ vars.WEBHOOK_URL }}
         run: |
           [ -z "$WEBHOOK_URL" ] && exit 0
-          command -v jq >/dev/null 2>&1 || (sudo apt-get update -qq && sudo apt-get install -y -qq jq)
-          tag="${{ needs.version.outputs.tag }}"
-          commit=$(git log -1 --pretty=format:"%s" 2>/dev/null || echo "N/A")
+          tag="${{ needs.ci.outputs.tag }}"
           url="${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_number }}"
           result="${{ job.status }}"
           if [ "$result" = "success" ]; then icon="✅"; else icon="❌"; fi
-          msg="secrets-mcp 部署${icon}
+          msg="secrets-mcp 部署 ${icon}
           版本:${tag}
-          提交:${commit}
           作者:${{ github.actor }}
           详情:${url}"
           payload=$(jq -n --arg text "$msg" '{msg_type: "text", content: {text: $text}}')
           curl -sS -H "Content-Type: application/json" -X POST -d "$payload" "$WEBHOOK_URL"
-  publish-release:
-    name: 发布草稿 Release
-    needs: [version, build-linux]
-    if: always() && needs.version.outputs.release_id != ''
-    runs-on: debian
-    timeout-minutes: 5
-    steps:
-      - uses: actions/checkout@v4
-      - name: 发布草稿
-        env:
-          RELEASE_TOKEN: ${{ secrets.RELEASE_TOKEN }}
-        run: |
-          [ -z "$RELEASE_TOKEN" ] && exit 0
-          linux_r="${{ needs.build-linux.result }}"
-          if [ "$linux_r" != "success" ]; then
-            echo "linux 构建未成功,保留草稿 Release"
-            exit 0
-          fi
-          release_api="${{ github.server_url }}/api/v1/repos/${{ github.repository }}/releases/${{ needs.version.outputs.release_id }}"
-          http_code=$(curl -sS -o /tmp/publish-release.json -w '%{http_code}' \
-            -H "Authorization: token $RELEASE_TOKEN" \
-            -H "Content-Type: application/json" \
-            -X PATCH "$release_api" \
-            -d '{"draft":false}')
-          if [ "$http_code" != "200" ]; then
-            echo "发布草稿 Release 失败 (HTTP ${http_code})"
-            cat /tmp/publish-release.json 2>/dev/null || true
-            exit 1
-          fi
-          echo "Release 已发布"
-      - name: 飞书汇总通知
-        if: always()
-        env:
-          WEBHOOK_URL: ${{ vars.WEBHOOK_URL }}
-        run: |
-          [ -z "$WEBHOOK_URL" ] && exit 0
-          command -v jq >/dev/null 2>&1 || (sudo apt-get update -qq && sudo apt-get install -y -qq jq)
-          tag="${{ needs.version.outputs.tag }}"
-          tag_exists="${{ needs.version.outputs.tag_exists }}"
-          commit="${{ github.event.head_commit.message }}"
-          [ -z "$commit" ] && commit="${{ github.sha }}"
-          url="${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_number }}"
-          linux_r="${{ needs.build-linux.result }}"
-          publish_r="${{ job.status }}"
-          icon() { case "$1" in success) echo "✅";; skipped) echo "⏭";; *) echo "❌";; esac; }
-          if [ "$linux_r" = "success" ] && [ "$publish_r" = "success" ]; then
-            status="发布成功 ✅"
-          elif [ "$linux_r" != "success" ]; then
-            status="构建失败 ❌"
-          else
-            status="发布失败 ❌"
-          fi
-          if [ "$tag_exists" = "false" ]; then
-            version_line="🆕 新版本 ${tag}"
-          else
-            version_line="🔄 重复构建 ${tag}"
-          fi
-          msg="secrets-mcp ${status}
-          ${version_line}
-          linux $(icon "$linux_r") | Release $(icon "$publish_r")
-          提交:${commit}
-          作者:${{ github.actor }}
-          详情:${url}"
-          payload=$(jq -n --arg text "$msg" '{msg_type: "text", content: {text: $text}}')
-          curl -sS -H "Content-Type: application/json" -X POST -d "$payload" "$WEBHOOK_URL"


@@ -6,7 +6,7 @@
 1. 涉及 `crates/**`、根目录 `Cargo.toml`/`Cargo.lock` 或 `secrets-mcp` 行为变更的提交,默认视为**需要发版**,除非明确说明「本次不发版」。
 2. 发版前检查 `crates/secrets-mcp/Cargo.toml` 的 `version`,再查 tag:`git tag -l 'secrets-mcp-*'`。
-3. 若当前版本对应 tag 已存在,须先 bump `version`,再 `cargo build` 同步 `Cargo.lock` 后提交。
+3. 若当前版本对应 tag 已存在,默认允许复用现有 tag 继续构建;仅在需要新的发布版本时再 bump `version` 并 `cargo build` 同步 `Cargo.lock`。
 4. 提交前优先运行 `./scripts/release-check.sh`(版本/tag + `fmt` + `clippy --locked` + `test --locked`)。
 
 ## 项目结构
@@ -28,7 +28,7 @@ secrets/
 - **建议库名**:`secrets-mcp`(专用实例,与历史库名区分)。
 - **连接**:环境变量 **`SECRETS_DATABASE_URL`**(本分支无本地配置文件路径)。
-- **表**:`entries`(含 `user_id`)、`secrets`、`entries_history`、`secrets_history`、`audit_log`、`users`、`oauth_accounts`、`api_keys`,首次连接 **auto-migrate**。
+- **表**:`entries`(含 `user_id`)、`secrets`、`entries_history`、`secrets_history`、`audit_log`、`users`、`oauth_accounts`,首次连接 **auto-migrate**。
 
 ### 表结构(摘录)
@@ -60,7 +60,7 @@ secrets (
 )
 ```
-### users / oauth_accounts / api_keys
+### users / oauth_accounts
 ```sql
 users (
@@ -71,6 +71,7 @@ users (
   key_salt BYTEA,      -- PBKDF2 salt(32B),首次设置密码短语时写入
   key_check BYTEA,     -- 派生密钥加密已知常量,用于验证密码短语
   key_params JSONB,    -- 算法参数,如 {"alg":"pbkdf2-sha256","iterations":600000}
+  api_key TEXT UNIQUE, -- MCP Bearer token(当前实现为明文存储)
   created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
   updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
 )
@@ -83,21 +84,11 @@ oauth_accounts (
   ...
   UNIQUE(provider, provider_id)
 )
-api_keys (
-  id UUID PRIMARY KEY DEFAULT uuidv7(),
-  user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
-  name VARCHAR(256) NOT NULL,
-  key_hash VARCHAR(64) NOT NULL UNIQUE,
-  key_prefix VARCHAR(12) NOT NULL,
-  last_used_at TIMESTAMPTZ,
-  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
-)
 ```
 
 ### audit_log / history
-与迁移脚本一致:`audit_log`、`entries_history`、`secrets_history` 用于审计与时间旅行恢复;字段定义见 `crates/secrets-core/src/db.rs` 的 `migrate` SQL。
+与迁移脚本一致:`audit_log`、`entries_history`、`secrets_history` 用于审计与时间旅行恢复;字段定义见 `crates/secrets-core/src/db.rs` 的 `migrate` SQL。`audit_log` 中普通业务事件的 `namespace/kind/name` 对应 entry 坐标;登录类事件固定使用 `namespace='auth'`,此时 `kind/name` 表示认证目标而非 entry 身份。
 
 ### 字段职责
@@ -149,7 +140,7 @@ git tag -l 'secrets-mcp-*'
 ## CI/CD
 - **触发**:任意分支 `push`,且路径含 `crates/**`、`deploy/**`、根目录 `Cargo.toml` 与 `Cargo.lock`(见 `.gitea/workflows/secrets.yml`)。
-- **版本与 tag**:从 `crates/secrets-mcp/Cargo.toml` 读版本;若远程已存在同名 `secrets-mcp-<version>` tag 则**工作流失败**(须先 bump 版本并 `cargo build` 同步 `Cargo.lock`);否则由 CI 创建并推送该 tag。
+- **版本与 tag**:从 `crates/secrets-mcp/Cargo.toml` 读版本;若远程已存在同名 `secrets-mcp-<version>` tag 则复用现有 tag 继续构建;否则由 CI 创建并推送该 tag。
 - **质量与构建**:`fmt` / `clippy --locked` / `test --locked`;`x86_64-unknown-linux-musl` 发布构建 `secrets-mcp`。
 - **Release(可选)**:`secrets.RELEASE_TOKEN`(Gitea PAT)用于创建草稿 Release、上传 `tar.gz` + `.sha256`、构建成功后发布;未配置则跳过 API Release(仅 tag + 构建)。
 - **部署(可选)**:仅 `main`、`feat/mcp`、`mcp` 分支在构建成功时跑 `deploy-mcp`;需 `vars.DEPLOY_HOST`、`vars.DEPLOY_USER`、`secrets.DEPLOY_SSH_KEY`。勿把 OAuth/DB 等写进 workflow,按 `deploy/.env.example` 在目标机配置。
@@ -165,6 +156,5 @@ git tag -l 'secrets-mcp-*'
 | `SECRETS_MCP_BIND` | 监听地址,默认 `0.0.0.0:9315`。 |
 | `GOOGLE_CLIENT_ID` / `GOOGLE_CLIENT_SECRET` | 可选;仅运行时配置。 |
 | `RUST_LOG` | 如 `secrets_mcp=debug`。 |
-| `USER` | 若写入审计 `actor`,由运行环境提供。 |
 
 > `SERVER_MASTER_KEY` 已不再需要。新架构下密钥由用户密码短语在客户端派生,服务端不持有。

Cargo.lock (generated)

@@ -1949,7 +1949,7 @@ dependencies = [
 [[package]]
 name = "secrets-mcp"
-version = "0.1.5"
+version = "0.1.10"
 dependencies = [
  "anyhow",
  "askama",


@@ -77,7 +77,7 @@ flowchart LR
 ### 敏感数据传输
 - **OAuth `client_secret`** 只存服务端环境变量,不发给浏览器
-- **API Key** 创建时原始 key 仅展示一次,库中只存 SHA-256 哈希
+- **API Key** 当前存放在 `users.api_key`,Dashboard 会明文展示并可重置
 - **X-Encryption-Key** 随 MCP 请求经 TLS 传输,服务端仅在请求处理期间持有(不持久化)
 - **生产环境必须走 HTTPS/TLS**
@@ -121,7 +121,7 @@ flowchart LR
 ## 数据模型
-主表 **`entries`**(`namespace`、`kind`、`name`、`tags`、`metadata`,多租户时带 `user_id`)+ 子表 **`secrets`**(每行一个加密字段:`field_name` + `encrypted`)。另有 `entries_history`、`secrets_history`、`audit_log`,以及 **`users`**(含 `key_salt`、`key_check`、`key_params`)、**`oauth_accounts`**、**`api_keys`**。首次连库自动迁移建表。
+主表 **`entries`**(`namespace`、`kind`、`name`、`tags`、`metadata`,多租户时带 `user_id`)+ 子表 **`secrets`**(每行一个加密字段:`field_name` + `encrypted`)。另有 `entries_history`、`secrets_history`、`audit_log`,以及 **`users`**(含 `key_salt`、`key_check`、`key_params`、`api_key`)、**`oauth_accounts`**。首次连库自动迁移建表。
 | 位置 | 字段 | 说明 |
 |------|------|------|
@@ -142,9 +142,10 @@ flowchart LR
 ## 审计日志
 `add`、`update`、`delete` 等写操作写入 **`audit_log`**(操作类型、对象、摘要,不含 secret 明文)。
+其中业务条目事件使用 `[namespace/kind] name` 语义;登录类事件使用 `namespace='auth'`,此时 `kind/name` 表示认证目标(例如 `oauth/google`),不表示某条 secrets entry。
 ```sql
-SELECT action, namespace, kind, name, actor, detail, created_at
+SELECT action, namespace, kind, name, detail, created_at
 FROM audit_log
 ORDER BY created_at DESC
 LIMIT 20;
@@ -165,7 +166,7 @@ deploy/ # systemd, .env examples
 See [`.gitea/workflows/secrets.yml`](.gitea/workflows/secrets.yml).
 - **Trigger**: `push` on any branch whose changed paths include `crates/**`, `deploy/**`, or the root `Cargo.toml` / `Cargo.lock`
-- **Pipeline**: parse the version from `crates/secrets-mcp/Cargo.toml` → **fail the whole run if the `secrets-mcp-<version>` tag already exists** (prevents duplicate releases) → otherwise tag automatically → `cargo fmt` / `clippy --locked` / `test --locked` → cross-compile `secrets-mcp` for `x86_64-unknown-linux-musl`
+- **Pipeline**: parse the version from `crates/secrets-mcp/Cargo.toml` → if the `secrets-mcp-<version>` tag already exists, **reuse the existing tag and continue building**; otherwise tag automatically → `cargo fmt` / `clippy --locked` / `test --locked` → cross-compile `secrets-mcp` for `x86_64-unknown-linux-musl`
 - **Release (optional)**: with the repo secret `RELEASE_TOKEN` (a Gitea PAT, plaintext, not base64) configured, the workflow creates a **draft** Release via the API, uploads the `tar.gz` and `.sha256` after a successful Linux build, then **publishes** the draft; if unset, Release creation and artifact upload are skipped, keeping only the tag and build results
 - **Deploy (optional)**: only on the `main`, `feat/mcp`, or `mcp` branches after a successful build; if `vars.DEPLOY_HOST`, `vars.DEPLOY_USER`, and `secrets.DEPLOY_SSH_KEY` are configured, `deploy-mcp` updates the target host's binary over SCP/SSH and runs `systemctl restart secrets-mcp`
 - **Notification (optional)**: when `vars.WEBHOOK_URL` is a Feishu webhook, the build/deploy/release steps push a brief status
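The tag decision in the pipeline bullet above reduces to a small rule (sketch only; the real workflow shells out to git rather than using Rust):

```rust
/// Given the set of existing tags and the crate version, decide whether the
/// workflow reuses the existing `secrets-mcp-<version>` tag or creates it.
fn tag_action(existing: &[&str], version: &str) -> (String, &'static str) {
    let tag = format!("secrets-mcp-{version}");
    if existing.contains(&tag.as_str()) {
        (tag, "reuse") // tag exists: keep building against it
    } else {
        (tag, "create") // tag missing: push a new one
    }
}

fn main() {
    assert_eq!(tag_action(&["secrets-mcp-0.1.9"], "0.1.9").1, "reuse");
    assert_eq!(tag_action(&["secrets-mcp-0.1.9"], "0.1.10").1, "create");
}
```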


@@ -1,37 +1,88 @@
-use serde_json::Value;
-use sqlx::{Postgres, Transaction};
+use serde_json::{Value, json};
+use sqlx::{PgPool, Postgres, Transaction};
+use uuid::Uuid;

-/// Return the current OS user as the audit actor (falls back to empty string).
-pub fn current_actor() -> String {
-    std::env::var("USER").unwrap_or_default()
-}
+pub const ACTION_LOGIN: &str = "login";
+pub const NAMESPACE_AUTH: &str = "auth";
+
+fn login_detail(provider: &str, client_ip: Option<&str>, user_agent: Option<&str>) -> Value {
+    json!({
+        "provider": provider,
+        "client_ip": client_ip,
+        "user_agent": user_agent,
+    })
+}
+
+/// Write a login audit entry without requiring an explicit transaction.
+pub async fn log_login(
+    pool: &PgPool,
+    kind: &str,
+    provider: &str,
+    user_id: Uuid,
+    client_ip: Option<&str>,
+    user_agent: Option<&str>,
+) {
+    let detail = login_detail(provider, client_ip, user_agent);
+    let result: Result<_, sqlx::Error> = sqlx::query(
+        "INSERT INTO audit_log (user_id, action, namespace, kind, name, detail) \
+         VALUES ($1, $2, $3, $4, $5, $6)",
+    )
+    .bind(user_id)
+    .bind(ACTION_LOGIN)
+    .bind(NAMESPACE_AUTH)
+    .bind(kind)
+    .bind(provider)
+    .bind(&detail)
+    .execute(pool)
+    .await;
+    if let Err(e) = result {
+        tracing::warn!(error = %e, kind, provider, "failed to write login audit log");
+    } else {
+        tracing::debug!(kind, provider, ?user_id, "login audit logged");
+    }
+}

 /// Write an audit entry within an existing transaction.
 pub async fn log_tx(
     tx: &mut Transaction<'_, Postgres>,
+    user_id: Option<Uuid>,
     action: &str,
     namespace: &str,
     kind: &str,
     name: &str,
     detail: Value,
 ) {
-    let actor = current_actor();
     let result: Result<_, sqlx::Error> = sqlx::query(
-        "INSERT INTO audit_log (action, namespace, kind, name, detail, actor) \
+        "INSERT INTO audit_log (user_id, action, namespace, kind, name, detail) \
         VALUES ($1, $2, $3, $4, $5, $6)",
     )
+    .bind(user_id)
     .bind(action)
     .bind(namespace)
     .bind(kind)
     .bind(name)
     .bind(&detail)
-    .bind(&actor)
     .execute(&mut **tx)
     .await;
     if let Err(e) = result {
         tracing::warn!(error = %e, "failed to write audit log");
     } else {
-        tracing::debug!(action, namespace, kind, name, actor, "audit logged");
+        tracing::debug!(action, namespace, kind, name, "audit logged");
     }
 }
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn login_detail_includes_expected_fields() {
+        let detail = login_detail("google", Some("127.0.0.1"), Some("Mozilla/5.0"));
+        assert_eq!(detail["provider"], "google");
+        assert_eq!(detail["client_ip"], "127.0.0.1");
+        assert_eq!(detail["user_agent"], "Mozilla/5.0");
+    }
+}


@@ -3,8 +3,6 @@ use serde_json::Value;
 use sqlx::PgPool;
 use sqlx::postgres::PgPoolOptions;

-use crate::audit::current_actor;
-
 pub async fn create_pool(database_url: &str) -> Result<PgPool> {
     tracing::debug!("connecting to database");
     let pool = PgPoolOptions::new()
@@ -67,17 +65,18 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
         -- ── audit_log: append-only operation log ─────────────────────────────────
         CREATE TABLE IF NOT EXISTS audit_log (
             id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
+            user_id UUID,
             action VARCHAR(32) NOT NULL,
             namespace VARCHAR(64) NOT NULL,
             kind VARCHAR(64) NOT NULL,
             name VARCHAR(256) NOT NULL,
             detail JSONB NOT NULL DEFAULT '{}',
-            actor VARCHAR(128) NOT NULL DEFAULT '',
             created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
         );
         CREATE INDEX IF NOT EXISTS idx_audit_log_created ON audit_log(created_at DESC);
         CREATE INDEX IF NOT EXISTS idx_audit_log_ns_kind ON audit_log(namespace, kind);
+        CREATE INDEX IF NOT EXISTS idx_audit_log_user_id ON audit_log(user_id) WHERE user_id IS NOT NULL;

         -- ── entries_history ───────────────────────────────────────────────────────
         CREATE TABLE IF NOT EXISTS entries_history (
@@ -90,7 +89,6 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
             action VARCHAR(16) NOT NULL,
             tags TEXT[] NOT NULL DEFAULT '{}',
             metadata JSONB NOT NULL DEFAULT '{}',
-            actor VARCHAR(128) NOT NULL DEFAULT '',
             created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
         );
@@ -103,6 +101,7 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
         ALTER TABLE entries_history ADD COLUMN IF NOT EXISTS user_id UUID;
         CREATE INDEX IF NOT EXISTS idx_entries_history_user_id
             ON entries_history(user_id) WHERE user_id IS NOT NULL;
+        ALTER TABLE entries_history DROP COLUMN IF EXISTS actor;

         -- ── secrets_history: field-level snapshot ────────────────────────────────
         CREATE TABLE IF NOT EXISTS secrets_history (
@@ -113,7 +112,6 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
             field_name VARCHAR(256) NOT NULL,
             encrypted BYTEA NOT NULL DEFAULT '\x',
             action VARCHAR(16) NOT NULL,
-            actor VARCHAR(128) NOT NULL DEFAULT '',
             created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
         );
@@ -122,6 +120,12 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
         CREATE INDEX IF NOT EXISTS idx_secrets_history_secret_id
             ON secrets_history(secret_id);

+        -- Drop redundant actor column (derivable via entries_history JOIN)
+        ALTER TABLE secrets_history DROP COLUMN IF EXISTS actor;
+
+        -- Drop redundant actor column; user_id already identifies the business user
+        ALTER TABLE audit_log DROP COLUMN IF EXISTS actor;
+
         -- ── users ─────────────────────────────────────────────────────────────────
         CREATE TABLE IF NOT EXISTS users (
             id UUID PRIMARY KEY DEFAULT uuidv7(),
@@ -156,10 +160,75 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
     )
     .execute(pool)
     .await?;

+    restore_plaintext_api_keys(pool).await?;
+
     tracing::debug!("migrations complete");
     Ok(())
 }
+
+async fn restore_plaintext_api_keys(pool: &PgPool) -> Result<()> {
+    let has_users_api_key: bool = sqlx::query_scalar(
+        "SELECT EXISTS (
+            SELECT 1
+            FROM information_schema.columns
+            WHERE table_schema = 'public'
+              AND table_name = 'users'
+              AND column_name = 'api_key'
+        )",
+    )
+    .fetch_one(pool)
+    .await?;
+    if !has_users_api_key {
+        sqlx::query("ALTER TABLE users ADD COLUMN api_key TEXT")
+            .execute(pool)
+            .await?;
+        sqlx::query("CREATE UNIQUE INDEX IF NOT EXISTS idx_users_api_key ON users(api_key) WHERE api_key IS NOT NULL")
+            .execute(pool)
+            .await?;
+    }
+
+    let has_api_keys_table: bool = sqlx::query_scalar(
+        "SELECT EXISTS (
+            SELECT 1
+            FROM information_schema.tables
+            WHERE table_schema = 'public'
+              AND table_name = 'api_keys'
+        )",
+    )
+    .fetch_one(pool)
+    .await?;
+    if !has_api_keys_table {
+        return Ok(());
+    }
+
+    #[derive(sqlx::FromRow)]
+    struct UserWithoutKey {
+        id: uuid::Uuid,
+    }
+
+    let users_without_key: Vec<UserWithoutKey> =
+        sqlx::query_as("SELECT DISTINCT user_id AS id FROM api_keys WHERE user_id NOT IN (SELECT id FROM users WHERE api_key IS NOT NULL)")
+            .fetch_all(pool)
+            .await?;
+    for user in users_without_key {
+        let new_key = crate::service::api_key::generate_api_key();
+        sqlx::query("UPDATE users SET api_key = $1 WHERE id = $2")
+            .bind(&new_key)
+            .bind(user.id)
+            .execute(pool)
+            .await?;
+    }
+
+    sqlx::query("DROP TABLE IF EXISTS api_keys")
+        .execute(pool)
+        .await?;
+    Ok(())
+}

 // ── Entry-level history snapshot ─────────────────────────────────────────────

 pub struct EntrySnapshotParams<'a> {
@@ -178,11 +247,10 @@ pub async fn snapshot_entry_history(
     tx: &mut sqlx::Transaction<'_, sqlx::Postgres>,
     p: EntrySnapshotParams<'_>,
 ) -> Result<()> {
-    let actor = current_actor();
     sqlx::query(
         "INSERT INTO entries_history \
-         (entry_id, namespace, kind, name, version, action, tags, metadata, actor, user_id) \
-         VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)",
+         (entry_id, namespace, kind, name, version, action, tags, metadata, user_id) \
+         VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9)",
     )
     .bind(p.entry_id)
     .bind(p.namespace)
@@ -192,7 +260,6 @@ pub async fn snapshot_entry_history(
     .bind(p.action)
     .bind(p.tags)
     .bind(p.metadata)
-    .bind(&actor)
     .bind(p.user_id)
     .execute(&mut **tx)
     .await?;
@@ -214,11 +281,10 @@ pub async fn snapshot_secret_history(
     tx: &mut sqlx::Transaction<'_, sqlx::Postgres>,
     p: SecretSnapshotParams<'_>,
 ) -> Result<()> {
-    let actor = current_actor();
     sqlx::query(
         "INSERT INTO secrets_history \
-         (entry_id, secret_id, entry_version, field_name, encrypted, action, actor) \
-         VALUES ($1, $2, $3, $4, $5, $6, $7)",
+         (entry_id, secret_id, entry_version, field_name, encrypted, action) \
+         VALUES ($1, $2, $3, $4, $5, $6)",
     )
     .bind(p.entry_id)
     .bind(p.secret_id)
@@ -226,7 +292,6 @@ pub async fn snapshot_secret_history(
     .bind(p.field_name)
     .bind(p.encrypted)
     .bind(p.action)
-    .bind(&actor)
     .execute(&mut **tx)
     .await?;
     Ok(())


@@ -9,6 +9,7 @@ use uuid::Uuid;
 #[derive(Debug, Serialize, Deserialize, sqlx::FromRow)]
 pub struct Entry {
     pub id: Uuid,
+    pub user_id: Option<Uuid>,
     pub namespace: String,
     pub kind: String,
     pub name: String,
@@ -174,6 +175,19 @@ pub struct OauthAccount {
     pub created_at: DateTime<Utc>,
 }

+/// A single audit log row, optionally scoped to a business user.
+#[derive(Debug, Serialize, Deserialize, sqlx::FromRow)]
+pub struct AuditLogEntry {
+    pub id: i64,
+    pub user_id: Option<Uuid>,
+    pub action: String,
+    pub namespace: String,
+    pub kind: String,
+    pub name: String,
+    pub detail: Value,
+    pub created_at: DateTime<Utc>,
+}
+
 // ── TOML ↔ JSON value conversion ──────────────────────────────────────────────

 /// Convert a serde_json Value to a toml Value.


@@ -346,6 +346,7 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
     crate::audit::log_tx(
         &mut tx,
+        params.user_id,
         "add",
         params.namespace,
         params.kind,


@@ -0,0 +1,23 @@
use anyhow::Result;
use sqlx::PgPool;
use uuid::Uuid;

use crate::models::AuditLogEntry;

pub async fn list_for_user(pool: &PgPool, user_id: Uuid, limit: i64) -> Result<Vec<AuditLogEntry>> {
    let limit = limit.clamp(1, 200);
    let rows = sqlx::query_as(
        "SELECT id, user_id, action, namespace, kind, name, detail, created_at \
         FROM audit_log \
         WHERE user_id = $1 \
         ORDER BY created_at DESC, id DESC \
         LIMIT $2",
    )
    .bind(user_id)
    .bind(limit)
    .fetch_all(pool)
    .await?;
    Ok(rows)
}


@@ -137,7 +137,7 @@ async fn delete_one(
     };

     snapshot_and_delete(&mut tx, namespace, kind, name, &row, user_id).await?;
-    crate::audit::log_tx(&mut tx, "delete", namespace, kind, name, json!({})).await;
+    crate::audit::log_tx(&mut tx, user_id, "delete", namespace, kind, name, json!({})).await;

     tx.commit().await?;
     Ok(DeleteResult {
@@ -240,6 +240,7 @@ async fn delete_bulk(
         .await?;
         crate::audit::log_tx(
             &mut tx,
+            user_id,
             "delete",
             namespace,
             &row.kind,


@@ -7,7 +7,6 @@ use uuid::Uuid;
 pub struct HistoryEntry {
     pub version: i64,
     pub action: String,
-    pub actor: String,
     pub created_at: String,
 }
@@ -23,13 +22,12 @@ pub async fn run(
     struct Row {
         version: i64,
         action: String,
-        actor: String,
         created_at: chrono::DateTime<chrono::Utc>,
     }

     let rows: Vec<Row> = if let Some(uid) = user_id {
         sqlx::query_as(
-            "SELECT version, action, actor, created_at FROM entries_history \
+            "SELECT version, action, created_at FROM entries_history \
             WHERE namespace = $1 AND kind = $2 AND name = $3 AND user_id = $4 \
             ORDER BY id DESC LIMIT $5",
         )
@@ -42,7 +40,7 @@ pub async fn run(
         .await?
     } else {
         sqlx::query_as(
-            "SELECT version, action, actor, created_at FROM entries_history \
+            "SELECT version, action, created_at FROM entries_history \
             WHERE namespace = $1 AND kind = $2 AND name = $3 AND user_id IS NULL \
             ORDER BY id DESC LIMIT $4",
         )
@@ -59,7 +57,6 @@ pub async fn run(
         .map(|r| HistoryEntry {
             version: r.version,
             action: r.action,
-            actor: r.actor,
             created_at: r.created_at.format("%Y-%m-%dT%H:%M:%SZ").to_string(),
         })
         .collect())


@@ -1,5 +1,6 @@
 pub mod add;
 pub mod api_key;
+pub mod audit_log;
 pub mod delete;
 pub mod env_map;
 pub mod export;


@@ -274,6 +274,7 @@ pub async fn run(
     crate::audit::log_tx(
         &mut tx,
+        user_id,
         "rollback",
         namespace,
         kind,


@@ -131,7 +131,7 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
     };
     let sql = format!(
-        "SELECT id, COALESCE(user_id, '00000000-0000-0000-0000-000000000000'::uuid) AS user_id, \
+        "SELECT id, user_id, \
         namespace, kind, name, tags, metadata, version, created_at, updated_at \
         FROM entries {where_clause} ORDER BY {order} LIMIT ${limit_idx} OFFSET ${offset_idx}"
     );
@@ -212,8 +212,7 @@ pub async fn fetch_secrets_for_entries(
 #[derive(sqlx::FromRow)]
 struct EntryRaw {
     id: Uuid,
-    #[allow(dead_code)] // Selected for row shape; Entry model has no user_id field
-    user_id: Uuid,
+    user_id: Option<Uuid>,
     namespace: String,
     kind: String,
     name: String,
@@ -228,6 +227,7 @@ impl From<EntryRaw> for Entry {
     fn from(r: EntryRaw) -> Self {
         Entry {
             id: r.id,
+            user_id: r.user_id,
             namespace: r.namespace,
             kind: r.kind,
             name: r.name,


@@ -241,6 +241,7 @@ pub async fn run(
     crate::audit::log_tx(
         &mut tx,
+        params.user_id,
         "update",
         params.namespace,
         params.kind,


@@ -1,6 +1,6 @@
 [package]
 name = "secrets-mcp"
-version = "0.1.5"
+version = "0.1.10"
 edition.workspace = true

 [[bin]]


@@ -473,15 +473,16 @@ impl SecretsService {
     async fn secrets_history(
         &self,
         Parameters(input): Parameters<HistoryInput>,
-        _ctx: RequestContext<RoleServer>,
+        ctx: RequestContext<RoleServer>,
     ) -> Result<CallToolResult, rmcp::ErrorData> {
+        let user_id = Self::user_id_from_ctx(&ctx)?;
         let result = svc_history(
             &self.pool,
             &input.namespace,
             &input.kind,
             &input.name,
             input.limit.unwrap_or(20),
-            None,
+            user_id,
         )
         .await
         .map_err(|e| rmcp::ErrorData::internal_error(e.to_string(), None))?;


@@ -1,9 +1,12 @@
 use askama::Template;
+use chrono::SecondsFormat;
+use std::net::SocketAddr;
+
 use axum::{
     Json, Router,
     body::Body,
-    extract::{Path, Query, State},
-    http::{StatusCode, header},
+    extract::{ConnectInfo, Path, Query, State},
+    http::{HeaderMap, StatusCode, header},
     response::{Html, IntoResponse, Redirect, Response},
     routing::{get, post},
 };
@@ -11,9 +14,11 @@ use serde::{Deserialize, Serialize};
 use tower_sessions::Session;
 use uuid::Uuid;

+use secrets_core::audit::log_login;
 use secrets_core::crypto::hex;
 use secrets_core::service::{
     api_key::{ensure_api_key, regenerate_api_key},
+    audit_log::list_for_user,
     user::{
         OAuthProfile, bind_oauth_account, find_or_create_user, get_user_by_id,
         unbind_oauth_account, update_user_key_setup,
@@ -47,6 +52,23 @@ struct DashboardTemplate {
     version: &'static str,
 }

+#[derive(Template)]
+#[template(path = "audit.html")]
+struct AuditPageTemplate {
+    user_name: String,
+    user_email: String,
+    entries: Vec<AuditEntryView>,
+    version: &'static str,
+}
+
+struct AuditEntryView {
+    /// RFC3339 UTC for `<time datetime>`; rendered as browser-local in audit.html.
+    created_at_iso: String,
+    action: String,
+    target: String,
+    detail: String,
+}
+
 // ── App state helpers ─────────────────────────────────────────────────────────

 fn google_cfg(state: &AppState) -> Option<&OAuthConfig> {
@@ -62,6 +84,30 @@ async fn current_user_id(session: &Session) -> Option<Uuid> {
         .and_then(|s| Uuid::parse_str(&s).ok())
 }

+fn request_client_ip(headers: &HeaderMap, connect_info: ConnectInfo<SocketAddr>) -> Option<String> {
+    if let Some(first) = headers
+        .get("x-forwarded-for")
+        .and_then(|v| v.to_str().ok())
+        .and_then(|s| s.split(',').next())
+    {
+        let value = first.trim();
+        if !value.is_empty() {
+            return Some(value.to_string());
+        }
+    }
+    Some(connect_info.ip().to_string())
+}
+
+fn request_user_agent(headers: &HeaderMap) -> Option<String> {
+    headers
+        .get(header::USER_AGENT)
+        .and_then(|value| value.to_str().ok())
+        .map(str::trim)
+        .filter(|value| !value.is_empty())
+        .map(ToOwned::to_owned)
+}
+
 // ── Routes ────────────────────────────────────────────────────────────────────

 pub fn web_router() -> Router<AppState> {
@@ -76,6 +122,7 @@ pub fn web_router() -> Router<AppState> {
         .route("/auth/google/callback", get(auth_google_callback))
         .route("/auth/logout", post(auth_logout))
         .route("/dashboard", get(dashboard))
+        .route("/audit", get(audit_page))
         .route("/account/bind/google", get(account_bind_google))
         .route(
             "/account/bind/google/callback",
@@ -141,16 +188,28 @@ struct OAuthCallbackQuery {

 async fn auth_google_callback(
     State(state): State<AppState>,
+    connect_info: ConnectInfo<SocketAddr>,
+    headers: HeaderMap,
     session: Session,
     Query(params): Query<OAuthCallbackQuery>,
 ) -> Result<Response, StatusCode> {
-    handle_oauth_callback(&state, &session, params, "google", |s, cfg, code| {
-        Box::pin(crate::oauth::google::exchange_code(
-            &s.http_client,
-            cfg,
-            code,
-        ))
-    })
+    let client_ip = request_client_ip(&headers, connect_info);
+    let user_agent = request_user_agent(&headers);
+    handle_oauth_callback(
+        &state,
+        &session,
+        params,
+        "google",
+        client_ip.as_deref(),
+        user_agent.as_deref(),
+        |s, cfg, code| {
+            Box::pin(crate::oauth::google::exchange_code(
+                &s.http_client,
+                cfg,
+                code,
+            ))
+        },
+    )
     .await
 }
@@ -161,6 +220,8 @@ async fn handle_oauth_callback<F>(
     session: &Session,
     params: OAuthCallbackQuery,
     provider: &str,
+    client_ip: Option<&str>,
+    user_agent: Option<&str>,
     exchange_fn: F,
 ) -> Result<Response, StatusCode>
 where
@@ -260,11 +321,6 @@ where
         StatusCode::INTERNAL_SERVER_ERROR
     })?;

-    // Ensure the user has an API key (auto-creates on first login).
-    if let Err(e) = ensure_api_key(&state.pool, user.id).await {
-        tracing::warn!(error = %e, "failed to ensure api key for user");
-    }
-
     session
         .insert(SESSION_USER_ID, user.id.to_string())
         .await
@@ -274,6 +330,16 @@ where
         .await
         .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;

+    log_login(
+        &state.pool,
+        "oauth",
+        provider,
+        user.id,
+        client_ip,
+        user_agent,
+    )
+    .await;
+
     Ok(Redirect::to("/dashboard").into_response())
 }
@@ -313,6 +379,49 @@ async fn dashboard(
     render_template(tmpl)
 }

+async fn audit_page(
+    State(state): State<AppState>,
+    session: Session,
+) -> Result<Response, StatusCode> {
+    let Some(user_id) = current_user_id(&session).await else {
+        return Ok(Redirect::to("/").into_response());
+    };
+    let user = match get_user_by_id(&state.pool, user_id)
+        .await
+        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?
+    {
+        Some(u) => u,
+        None => return Ok(Redirect::to("/").into_response()),
+    };
+
+    let rows = list_for_user(&state.pool, user_id, 100)
+        .await
+        .map_err(|e| {
+            tracing::error!(error = %e, "failed to load audit log for user");
+            StatusCode::INTERNAL_SERVER_ERROR
+        })?;
+
+    let entries = rows
+        .into_iter()
+        .map(|row| AuditEntryView {
+            created_at_iso: row.created_at.to_rfc3339_opts(SecondsFormat::Secs, true),
+            action: row.action,
+            target: format_audit_target(&row.namespace, &row.kind, &row.name),
+            detail: serde_json::to_string_pretty(&row.detail).unwrap_or_else(|_| "{}".to_string()),
+        })
+        .collect();
+
+    let tmpl = AuditPageTemplate {
+        user_name: user.name.clone(),
+        user_email: user.email.clone().unwrap_or_default(),
+        entries,
+        version: env!("CARGO_PKG_VERSION"),
+    };
+    render_template(tmpl)
+}
+
 // ── Account bind/unbind ───────────────────────────────────────────────────────

 async fn account_bind_google(
@@ -342,16 +451,28 @@ async fn account_bind_google(

 async fn account_bind_google_callback(
     State(state): State<AppState>,
+    connect_info: ConnectInfo<SocketAddr>,
+    headers: HeaderMap,
     session: Session,
     Query(params): Query<OAuthCallbackQuery>,
 ) -> Result<Response, StatusCode> {
-    handle_oauth_callback(&state, &session, params, "google", |s, cfg, code| {
-        Box::pin(crate::oauth::google::exchange_code(
-            &s.http_client,
-            cfg,
-            code,
-        ))
-    })
+    let client_ip = request_client_ip(&headers, connect_info);
+    let user_agent = request_user_agent(&headers);
+    handle_oauth_callback(
+        &state,
+        &session,
+        params,
+        "google",
+        client_ip.as_deref(),
+        user_agent.as_deref(),
+        |s, cfg, code| {
+            Box::pin(crate::oauth::google::exchange_code(
+                &s.http_client,
+                cfg,
+                code,
+            ))
+        },
+    )
     .await
 }
@@ -514,3 +635,12 @@ fn render_template<T: Template>(tmpl: T) -> Result<Response, StatusCode> {
     })?;
     Ok(Html(html).into_response())
 }
+
+fn format_audit_target(namespace: &str, kind: &str, name: &str) -> String {
+    // Auth events reuse kind/name as a provider-scoped target, not an entry identity.
+    if namespace == "auth" {
+        format!("{}/{}", kind, name)
+    } else {
+        format!("[{}/{}] {}", namespace, kind, name)
+    }
+}


@@ -0,0 +1,154 @@
<!DOCTYPE html>
<html lang="zh-CN">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="icon" href="/favicon.svg?v={{ version }}" type="image/svg+xml">
<title>Secrets — Audit</title>
<style>
/* @import must precede all other rules, or browsers ignore it. */
@import url('https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@400;600&family=Inter:wght@400;500;600&display=swap');
*, *::before, *::after { box-sizing: border-box; margin: 0; padding: 0; }
:root {
--bg: #0d1117; --surface: #161b22; --surface2: #21262d;
--border: #30363d; --text: #e6edf3; --text-muted: #8b949e;
--accent: #58a6ff; --accent-hover: #79b8ff;
}
body { background: var(--bg); color: var(--text); font-family: 'Inter', sans-serif; min-height: 100vh; }
.layout { display: flex; min-height: 100vh; }
.sidebar {
width: 220px; flex-shrink: 0; background: var(--surface); border-right: 1px solid var(--border);
padding: 24px 16px; display: flex; flex-direction: column; gap: 20px;
}
.sidebar-logo { font-family: 'JetBrains Mono', monospace; font-size: 16px; font-weight: 600;
color: var(--text); text-decoration: none; padding: 0 10px; }
.sidebar-logo span { color: var(--accent); }
.sidebar-menu { display: flex; flex-direction: column; gap: 6px; }
.sidebar-link {
padding: 10px 12px; border-radius: 8px; color: var(--text-muted); text-decoration: none;
border: 1px solid transparent; font-size: 13px; font-weight: 500;
}
.sidebar-link:hover { background: var(--surface2); color: var(--text); }
.sidebar-link.active {
background: rgba(88,166,255,0.12); color: var(--text); border-color: rgba(88,166,255,0.35);
}
.content-shell { flex: 1; min-width: 0; display: flex; flex-direction: column; }
.topbar {
background: var(--surface); border-bottom: 1px solid var(--border); padding: 0 24px;
display: flex; align-items: center; gap: 12px; min-height: 52px;
}
.topbar-spacer { flex: 1; }
.nav-user { font-size: 13px; color: var(--text-muted); }
.btn-sign-out {
padding: 5px 12px; border-radius: 6px; border: 1px solid var(--border);
background: none; color: var(--text); font-size: 12px; text-decoration: none; cursor: pointer;
}
.btn-sign-out:hover { background: var(--surface2); }
.main { padding: 32px 24px 40px; flex: 1; }
.card { background: var(--surface); border: 1px solid var(--border); border-radius: 12px;
padding: 24px; width: 100%; max-width: 1180px; margin: 0 auto; }
.card-title { font-size: 20px; font-weight: 600; margin-bottom: 8px; }
.card-subtitle { color: var(--text-muted); font-size: 13px; margin-bottom: 20px; }
.empty { color: var(--text-muted); font-size: 14px; padding: 20px 0; }
table { width: 100%; border-collapse: collapse; }
th, td { text-align: left; vertical-align: top; padding: 12px 10px; border-top: 1px solid var(--border); }
th { color: var(--text-muted); font-size: 12px; font-weight: 600; }
td { font-size: 13px; }
.mono { font-family: 'JetBrains Mono', monospace; }
.detail {
background: var(--bg); border: 1px solid var(--border); border-radius: 8px;
padding: 10px; white-space: pre-wrap; word-break: break-word; font-size: 12px;
max-width: 460px;
}
@media (max-width: 900px) {
.layout { flex-direction: column; }
.sidebar {
width: 100%; border-right: none; border-bottom: 1px solid var(--border);
padding: 16px; gap: 14px;
}
.sidebar-menu { flex-direction: row; }
.sidebar-link { flex: 1; text-align: center; }
.main { padding: 20px 12px 28px; }
.card { padding: 16px; }
.topbar { padding: 12px 16px; flex-wrap: wrap; }
table, thead, tbody, th, td, tr { display: block; }
thead { display: none; }
tr { border-top: 1px solid var(--border); padding: 12px 0; }
td { border-top: none; padding: 6px 0; }
td::before {
display: block; color: var(--text-muted); font-size: 11px;
margin-bottom: 4px; text-transform: uppercase;
}
td.col-time::before { content: "Time"; }
td.col-action::before { content: "Action"; }
td.col-target::before { content: "Target"; }
td.col-detail::before { content: "Detail"; }
.detail { max-width: none; }
}
</style>
</head>
<body>
<div class="layout">
<aside class="sidebar">
<a href="/dashboard" class="sidebar-logo"><span>secrets</span></a>
<nav class="sidebar-menu">
<a href="/dashboard" class="sidebar-link">MCP</a>
<a href="/audit" class="sidebar-link active">审计</a>
</nav>
</aside>
<div class="content-shell">
<div class="topbar">
<span class="topbar-spacer"></span>
<span class="nav-user">{{ user_name }}{% if !user_email.is_empty() %} · {{ user_email }}{% endif %}</span>
<form action="/auth/logout" method="post" style="display:inline">
<button type="submit" class="btn-sign-out">退出</button>
</form>
</div>
<main class="main">
<section class="card">
<div class="card-title">我的审计</div>
<div class="card-subtitle">展示最近 100 条与当前用户相关的审计记录。时间按浏览器本地时区显示。</div>
{% if entries.is_empty() %}
<div class="empty">暂无审计记录。</div>
{% else %}
<table>
<thead>
<tr>
<th>时间</th>
<th>动作</th>
<th>目标</th>
<th>详情</th>
</tr>
</thead>
<tbody>
{% for entry in entries %}
<tr>
<td class="col-time mono"><time class="audit-local-time" datetime="{{ entry.created_at_iso }}">{{ entry.created_at_iso }}</time></td>
<td class="col-action mono">{{ entry.action }}</td>
<td class="col-target mono">{{ entry.target }}</td>
<td class="col-detail"><pre class="detail">{{ entry.detail }}</pre></td>
</tr>
{% endfor %}
</tbody>
</table>
{% endif %}
</section>
</main>
</div>
</div>
<script>
(function () {
document.querySelectorAll('time.audit-local-time[datetime]').forEach(function (el) {
var raw = el.getAttribute('datetime');
var d = raw ? new Date(raw) : null;
if (d && !isNaN(d.getTime())) {
el.textContent = d.toLocaleString(undefined, { dateStyle: 'medium', timeStyle: 'medium' });
el.title = raw + ' (UTC)';
}
});
})();
</script>
</body>
</html>
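For context, the `{{ entry.* }}` placeholders in the table above are rendered server-side with HTML auto-escaping, so audit `detail` payloads cannot inject markup. A minimal Rust sketch of that behavior; the struct and function names are assumptions for illustration, not the repo's actual types, and the escaping mirrors what an engine such as Askama applies automatically:

```rust
// Hypothetical view-model for one audit row; field names mirror the
// template placeholders but are assumed, not taken from the repo.
struct AuditEntryView {
    created_at_iso: String, // RFC 3339 UTC timestamp fed to the <time> element
    action: String,
    target: String,
    detail: String,
}

// Minimal HTML escaping, as a template engine applies to `{{ entry.detail }}`.
fn escape_html(s: &str) -> String {
    let mut out = String::with_capacity(s.len());
    for c in s.chars() {
        match c {
            '&' => out.push_str("&amp;"),
            '<' => out.push_str("&lt;"),
            '>' => out.push_str("&gt;"),
            '"' => out.push_str("&quot;"),
            '\'' => out.push_str("&#x27;"),
            other => out.push(other),
        }
    }
    out
}

// Render one <tr> the way the loop body in the template does.
fn render_row(e: &AuditEntryView) -> String {
    format!(
        "<tr><td class=\"col-time mono\"><time class=\"audit-local-time\" datetime=\"{t}\">{t}</time></td><td class=\"col-action mono\">{a}</td><td class=\"col-target mono\">{g}</td><td class=\"col-detail\"><pre class=\"detail\">{d}</pre></td></tr>",
        t = escape_html(&e.created_at_iso),
        a = escape_html(&e.action),
        g = escape_html(&e.target),
        d = escape_html(&e.detail),
    )
}
```

The client-side script then only reads the escaped `datetime` attribute back out and reformats it with `toLocaleString`, so the server can always emit UTC.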


@@ -16,13 +16,29 @@
 }
 body { background: var(--bg); color: var(--text); font-family: 'Inter', sans-serif; min-height: 100vh; }
-/* Nav */
-.nav { background: var(--surface); border-bottom: 1px solid var(--border);
-  padding: 0 24px; display: flex; align-items: center; gap: 12px; height: 52px; }
-.nav-logo { font-family: 'JetBrains Mono', monospace; font-size: 15px; font-weight: 600;
-  color: var(--text); text-decoration: none; }
-.nav-logo span { color: var(--accent); }
-.nav-spacer { flex: 1; }
+.layout { display: flex; min-height: 100vh; }
+.sidebar {
+  width: 220px; flex-shrink: 0; background: var(--surface); border-right: 1px solid var(--border);
+  padding: 24px 16px; display: flex; flex-direction: column; gap: 20px;
+}
+.sidebar-logo { font-family: 'JetBrains Mono', monospace; font-size: 16px; font-weight: 600;
+  color: var(--text); text-decoration: none; padding: 0 10px; }
+.sidebar-logo span { color: var(--accent); }
+.sidebar-menu { display: flex; flex-direction: column; gap: 6px; }
+.sidebar-link {
+  padding: 10px 12px; border-radius: 8px; color: var(--text-muted); text-decoration: none;
+  border: 1px solid transparent; font-size: 13px; font-weight: 500;
+}
+.sidebar-link:hover { background: var(--surface2); color: var(--text); }
+.sidebar-link.active {
+  background: rgba(88,166,255,0.12); color: var(--text); border-color: rgba(88,166,255,0.35);
+}
+.content-shell { flex: 1; min-width: 0; display: flex; flex-direction: column; }
+.topbar {
+  background: var(--surface); border-bottom: 1px solid var(--border); padding: 0 24px;
+  display: flex; align-items: center; gap: 12px; min-height: 52px;
+}
+.topbar-spacer { flex: 1; }
 .nav-user { font-size: 13px; color: var(--text-muted); }
 .lang-bar { display: flex; gap: 2px; background: var(--surface2); border-radius: 6px; padding: 2px; }
 .lang-btn { padding: 3px 9px; border: none; background: none; color: var(--text-muted);
@@ -32,11 +48,19 @@
   background: none; color: var(--text); font-size: 12px; cursor: pointer; }
 .btn-sign-out:hover { background: var(--surface2); }
-/* Main: column so footer can sit at bottom of viewport when content is short */
+/* Main content column */
 .main { display: flex; flex-direction: column; align-items: center;
-  padding: 48px 24px 24px; min-height: calc(100vh - 52px); }
+  padding: 24px 20px 8px; min-height: 0; }
+.app-footer {
+  margin-top: auto;
+  text-align: center;
+  padding: 4px 20px 12px;
+  font-size: 12px;
+  color: #9da7b3;
+  font-family: 'JetBrains Mono', monospace;
+}
 .card { background: var(--surface); border: 1px solid var(--border); border-radius: 12px;
-  padding: 32px; width: 100%; max-width: 980px; }
+  padding: 24px; width: 100%; max-width: 980px; }
 .card-title { font-size: 18px; font-weight: 600; margin-bottom: 24px; }
 /* Form */
 .field { margin-bottom: 12px; }
@@ -123,28 +147,40 @@
   background: none; color: var(--text); font-size: 13px; cursor: pointer; }
 .btn-modal-cancel:hover { background: var(--surface2); }
-@media (max-width: 720px) {
-  .config-tabs { grid-template-columns: 1fr; }
+@media (max-width: 900px) {
+  .layout { flex-direction: column; }
+  .sidebar {
+    width: 100%; border-right: none; border-bottom: 1px solid var(--border);
+    padding: 16px; gap: 14px;
+  }
+  .sidebar-menu { flex-direction: row; }
+  .sidebar-link { flex: 1; text-align: center; }
 }
-.app-footer {
-  margin-top: auto;
-  width: 100%;
-  max-width: 980px;
-  flex-shrink: 0;
-  text-align: center;
-  padding-top: 28px;
-  font-size: 12px;
-  color: #9da7b3;
-  font-family: 'JetBrains Mono', monospace;
+@media (max-width: 720px) {
+  .config-tabs { grid-template-columns: 1fr; }
+  .topbar { padding: 12px 16px; flex-wrap: wrap; }
+  .main { padding: 16px 12px 6px; }
+  .app-footer { padding: 4px 12px 10px; }
+  .card { padding: 18px; }
 }
 </style>
 </head>
 <body data-has-passphrase="{{ has_passphrase }}" data-base-url="{{ base_url }}">
-<nav class="nav">
-  <a href="/dashboard" class="nav-logo"><span>secrets</span></a>
-  <span class="nav-spacer"></span>
+<div class="layout">
+  <aside class="sidebar">
+    <a href="/dashboard" class="sidebar-logo"><span>secrets</span></a>
+    <nav class="sidebar-menu">
+      <a href="/dashboard" class="sidebar-link active">MCP</a>
+      <a href="/audit" class="sidebar-link">审计</a>
+    </nav>
+  </aside>
+  <div class="content-shell">
+    <div class="topbar">
+      <span class="topbar-spacer"></span>
 <span class="nav-user">{{ user_name }}{% if !user_email.is_empty() %} · {{ user_email }}{% endif %}</span>
 <div class="lang-bar">
 <button class="lang-btn" onclick="setLang('zh-CN')"></button>
@@ -154,7 +190,7 @@
 <form action="/auth/logout" method="post" style="display:inline">
   <button type="submit" class="btn-sign-out" data-i18n="signOut">退出</button>
 </form>
-</nav>
+</div>
 <div class="main">
 <div class="card">
@@ -258,9 +294,11 @@
 </div>
 </div>
+</div>
 <footer class="app-footer">{{ version }}</footer>
 </div>
+</div>
+</div>
 <!-- ── Change passphrase modal ──────────────────────────────────────────────── -->
 <div class="modal-bd" id="change-modal">


@@ -12,12 +12,13 @@ echo "==> 当前 secrets-mcp 版本: ${version}"
 echo "==> 检查是否已存在 tag: ${tag}"
 if git rev-parse "refs/tags/${tag}" >/dev/null 2>&1; then
-  echo "错误: 已存在 tag ${tag}"
-  echo "请先 bump crates/secrets-mcp/Cargo.toml 中的 version,再执行 cargo build 同步 Cargo.lock。"
-  exit 1
+  echo "提示: 已存在 tag ${tag},将按重复构建处理,不阻断检查。"
+  echo "如需创建新的发布版本,请先 bump crates/secrets-mcp/Cargo.toml 中的 version。"
+else
+  echo "==> 未发现重复 tag,将创建新版本"
 fi
-echo "==> 未发现重复 tag,开始执行检查"
+echo "==> 开始执行检查"
 cargo fmt -- --check
 cargo clippy --locked -- -D warnings
 cargo test --locked
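The relaxed tag check above can be exercised in isolation. A minimal sketch in a throwaway repository, assuming `git` is on `PATH`; the tag name and user config are illustrative, not taken from the CI script:

```shell
# Sketch of the non-blocking tag check: an existing tag is reported as a
# rebuild instead of failing the job, matching the diff above.
set -eu
repo="$(mktemp -d)"
git -C "$repo" init -q
git -C "$repo" -c user.email=ci@example.com -c user.name=ci \
  commit --allow-empty -qm "init"
tag="secrets-mcp-v0.1.10"   # illustrative tag name
git -C "$repo" tag "$tag"
if git -C "$repo" rev-parse "refs/tags/${tag}" >/dev/null 2>&1; then
  verdict="rebuild"         # duplicate tag: report it, keep running checks
else
  verdict="release"         # new tag: proceed to create the release
fi
echo "tag ${tag}: ${verdict}"
rm -rf "$repo"
```

Because `git rev-parse refs/tags/<tag>` exits 0 only when the ref exists, the branch taken depends purely on repository state, which keeps reruns of the pipeline idempotent.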