Compare commits: secrets-mc...secrets-mc (7 commits)
| Author | SHA1 | Date |
|---|---|---|
|  | b6349dd1c8 |  |
|  | f720983328 |  |
|  | 7bd0603dc6 |  |
|  | 17a95bea5b |  |
|  | a42db62702 |  |
|  | 2edb970cba |  |
|  | 17f8ac0dbc |  |
AGENTS.md (18 lines changed)
@@ -28,7 +28,7 @@ secrets/

 - **Suggested database name**: `secrets-mcp` (a dedicated instance, distinct from historical database names).
 - **Connection**: environment variable **`SECRETS_DATABASE_URL`** (this branch has no local config-file path).
-- **Tables**: `entries` (with `user_id`), `secrets`, `entries_history`, `secrets_history`, `audit_log`, `users`, `oauth_accounts`, `api_keys`; **auto-migrate** on first connection.
+- **Tables**: `entries` (with `user_id`), `secrets`, `entries_history`, `secrets_history`, `audit_log`, `users`, `oauth_accounts`; **auto-migrate** on first connection.

 ### Table schema (excerpt)

@@ -60,7 +60,7 @@ secrets (
 )
 ```

-### users / oauth_accounts / api_keys
+### users / oauth_accounts

 ```sql
 users (
@@ -71,6 +71,7 @@ users (
   key_salt BYTEA,    -- PBKDF2 salt (32 B), written when the passphrase is first set
   key_check BYTEA,   -- a known constant encrypted with the derived key, used to verify the passphrase
   key_params JSONB,  -- algorithm parameters, e.g. {"alg":"pbkdf2-sha256","iterations":600000}
+  api_key TEXT UNIQUE,  -- MCP Bearer token (currently stored in plaintext)
   created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
   updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
 )
@@ -83,21 +84,11 @@ oauth_accounts (
   ...
   UNIQUE(provider, provider_id)
 )

-api_keys (
-  id UUID PRIMARY KEY DEFAULT uuidv7(),
-  user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
-  name VARCHAR(256) NOT NULL,
-  key_hash VARCHAR(64) NOT NULL UNIQUE,
-  key_prefix VARCHAR(12) NOT NULL,
-  last_used_at TIMESTAMPTZ,
-  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
-)
 ```

 ### audit_log / history

-Consistent with the migration script: `audit_log`, `entries_history`, and `secrets_history` serve auditing and time-travel recovery; field definitions are in the `migrate` SQL in `crates/secrets-core/src/db.rs`.
+Consistent with the migration script: `audit_log`, `entries_history`, and `secrets_history` serve auditing and time-travel recovery; field definitions are in the `migrate` SQL in `crates/secrets-core/src/db.rs`. For ordinary business events in `audit_log`, `namespace/kind/name` are the entry coordinates; login events always use `namespace='auth'`, in which case `kind/name` denote the authentication target rather than an entry identity.

 ### Field responsibilities

@@ -165,6 +156,5 @@ git tag -l 'secrets-mcp-*'
 | `SECRETS_MCP_BIND` | Listen address, default `0.0.0.0:9315`. |
 | `GOOGLE_CLIENT_ID` / `GOOGLE_CLIENT_SECRET` | Optional; runtime configuration only. |
 | `RUST_LOG` | e.g. `secrets_mcp=debug`. |
-| `USER` | Provided by the runtime environment, if written as the audit `actor`. |

 > `SERVER_MASTER_KEY` is no longer needed. Under the new architecture, keys are derived client-side from the user's passphrase; the server never holds them.
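To illustrate how the table's defaults behave (not code from this compare — a hypothetical helper; the real binary's startup code is not shown here):

```rust
use std::env;

// Hypothetical sketch of default handling for the variables documented above.
// SECRETS_MCP_BIND falls back to the documented default; SECRETS_DATABASE_URL
// has no default and must be provided.
fn bind_or_default() -> String {
    env::var("SECRETS_MCP_BIND").unwrap_or_else(|_| "0.0.0.0:9315".to_string())
}

fn main() {
    let db_configured = env::var("SECRETS_DATABASE_URL").is_ok();
    println!("bind={} db_configured={}", bind_or_default(), db_configured);
}
```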
Cargo.lock (generated; 2 lines changed)
@@ -1949,7 +1949,7 @@ dependencies = [

 [[package]]
 name = "secrets-mcp"
-version = "0.1.7"
+version = "0.1.10"
 dependencies = [
  "anyhow",
  "askama",
@@ -77,7 +77,7 @@ flowchart LR
 ### Sensitive data in transit

 - The **OAuth `client_secret`** lives only in server-side environment variables and is never sent to the browser
-- **API Key**: the raw key is shown only once at creation; only its SHA-256 hash is stored in the database
+- **API Key** is currently stored in `users.api_key`; the Dashboard displays it in plaintext and can reset it
 - **X-Encryption-Key** travels with MCP requests over TLS; the server holds it only while handling the request (never persisted)
 - **Production must use HTTPS/TLS**

@@ -121,7 +121,7 @@ flowchart LR

 ## Data model

-Main table **`entries`** (`namespace`, `kind`, `name`, `tags`, `metadata`, plus `user_id` for multi-tenancy) + child table **`secrets`** (one encrypted field per row: `field_name`, `encrypted`). There are also `entries_history`, `secrets_history`, and `audit_log`, plus **`users`** (with `key_salt`, `key_check`, `key_params`), **`oauth_accounts`**, and **`api_keys`**. Tables are auto-migrated on first connection.
+Main table **`entries`** (`namespace`, `kind`, `name`, `tags`, `metadata`, plus `user_id` for multi-tenancy) + child table **`secrets`** (one encrypted field per row: `field_name`, `encrypted`). There are also `entries_history`, `secrets_history`, and `audit_log`, plus **`users`** (with `key_salt`, `key_check`, `key_params`, `api_key`) and **`oauth_accounts`**. Tables are auto-migrated on first connection.

 | Location | Field | Description |
 |------|------|------|
@@ -142,9 +142,10 @@ flowchart LR
 ## Audit log

 Write operations such as `add`, `update`, and `delete` are recorded in **`audit_log`** (operation type, target, summary; no secret plaintext).
+Business entry events use the `[namespace/kind] name` semantics; login events use `namespace='auth'`, where `kind/name` denote the authentication target (e.g. `oauth/google`), not a secrets entry.

 ```sql
-SELECT action, namespace, kind, name, actor, detail, created_at
+SELECT action, namespace, kind, name, detail, created_at
 FROM audit_log
 ORDER BY created_at DESC
 LIMIT 20;
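As an aside (not part of the diff), the two target shapes described above could be rendered in one query; this is a hypothetical Postgres sketch, not code from the repository:

```sql
-- Hypothetical: render audit targets per the documented semantics.
SELECT action,
       CASE WHEN namespace = 'auth'
            THEN kind || '/' || name                          -- e.g. oauth/google
            ELSE '[' || namespace || '/' || kind || '] ' || name
       END AS target,
       created_at
FROM audit_log
ORDER BY created_at DESC
LIMIT 20;
```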
@@ -5,19 +5,8 @@ use uuid::Uuid;
 pub const ACTION_LOGIN: &str = "login";
 pub const NAMESPACE_AUTH: &str = "auth";

-/// Return the current OS user as the audit actor (falls back to empty string).
-pub fn current_actor() -> String {
-    std::env::var("USER").unwrap_or_default()
-}
-
-fn login_detail(
-    user_id: Uuid,
-    provider: &str,
-    client_ip: Option<&str>,
-    user_agent: Option<&str>,
-) -> Value {
+fn login_detail(provider: &str, client_ip: Option<&str>, user_agent: Option<&str>) -> Value {
     json!({
-        "user_id": user_id,
         "provider": provider,
         "client_ip": client_ip,
         "user_agent": user_agent,
@@ -33,11 +22,10 @@ pub async fn log_login(
     client_ip: Option<&str>,
     user_agent: Option<&str>,
 ) {
-    let actor = current_actor();
-    let detail = login_detail(user_id, provider, client_ip, user_agent);
+    let detail = login_detail(provider, client_ip, user_agent);
     let result: Result<_, sqlx::Error> = sqlx::query(
-        "INSERT INTO audit_log (user_id, action, namespace, kind, name, detail, actor) \
-         VALUES ($1, $2, $3, $4, $5, $6, $7)",
+        "INSERT INTO audit_log (user_id, action, namespace, kind, name, detail) \
+         VALUES ($1, $2, $3, $4, $5, $6)",
     )
     .bind(user_id)
     .bind(ACTION_LOGIN)
@@ -45,14 +33,13 @@ pub async fn log_login(
     .bind(kind)
     .bind(provider)
     .bind(&detail)
-    .bind(&actor)
     .execute(pool)
     .await;

     if let Err(e) = result {
         tracing::warn!(error = %e, kind, provider, "failed to write login audit log");
     } else {
-        tracing::debug!(kind, provider, ?user_id, actor, "login audit logged");
+        tracing::debug!(kind, provider, ?user_id, "login audit logged");
     }
 }

@@ -66,10 +53,9 @@ pub async fn log_tx(
     name: &str,
     detail: Value,
 ) {
-    let actor = current_actor();
     let result: Result<_, sqlx::Error> = sqlx::query(
-        "INSERT INTO audit_log (user_id, action, namespace, kind, name, detail, actor) \
-         VALUES ($1, $2, $3, $4, $5, $6, $7)",
+        "INSERT INTO audit_log (user_id, action, namespace, kind, name, detail) \
+         VALUES ($1, $2, $3, $4, $5, $6)",
     )
     .bind(user_id)
     .bind(action)
@@ -77,14 +63,13 @@ pub async fn log_tx(
     .bind(kind)
     .bind(name)
     .bind(&detail)
-    .bind(&actor)
     .execute(&mut **tx)
     .await;

     if let Err(e) = result {
         tracing::warn!(error = %e, "failed to write audit log");
     } else {
-        tracing::debug!(action, namespace, kind, name, actor, "audit logged");
+        tracing::debug!(action, namespace, kind, name, "audit logged");
     }
 }

@@ -94,10 +79,8 @@ mod tests {

     #[test]
     fn login_detail_includes_expected_fields() {
-        let user_id = Uuid::nil();
-        let detail = login_detail(user_id, "google", Some("127.0.0.1"), Some("Mozilla/5.0"));
+        let detail = login_detail("google", Some("127.0.0.1"), Some("Mozilla/5.0"));

-        assert_eq!(detail["user_id"], json!(user_id));
         assert_eq!(detail["provider"], "google");
         assert_eq!(detail["client_ip"], "127.0.0.1");
         assert_eq!(detail["user_agent"], "Mozilla/5.0");
@@ -3,8 +3,6 @@ use serde_json::Value;
 use sqlx::PgPool;
 use sqlx::postgres::PgPoolOptions;

-use crate::audit::current_actor;
-
 pub async fn create_pool(database_url: &str) -> Result<PgPool> {
     tracing::debug!("connecting to database");
     let pool = PgPoolOptions::new()
@@ -73,11 +71,9 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
        kind VARCHAR(64) NOT NULL,
        name VARCHAR(256) NOT NULL,
        detail JSONB NOT NULL DEFAULT '{}',
-       actor VARCHAR(128) NOT NULL DEFAULT '',
        created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
    );

-   ALTER TABLE audit_log ADD COLUMN IF NOT EXISTS user_id UUID;
    CREATE INDEX IF NOT EXISTS idx_audit_log_created ON audit_log(created_at DESC);
    CREATE INDEX IF NOT EXISTS idx_audit_log_ns_kind ON audit_log(namespace, kind);
    CREATE INDEX IF NOT EXISTS idx_audit_log_user_id ON audit_log(user_id) WHERE user_id IS NOT NULL;
@@ -93,7 +89,6 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
        action VARCHAR(16) NOT NULL,
        tags TEXT[] NOT NULL DEFAULT '{}',
        metadata JSONB NOT NULL DEFAULT '{}',
-       actor VARCHAR(128) NOT NULL DEFAULT '',
        created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
    );

@@ -106,6 +101,7 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
    ALTER TABLE entries_history ADD COLUMN IF NOT EXISTS user_id UUID;
    CREATE INDEX IF NOT EXISTS idx_entries_history_user_id
        ON entries_history(user_id) WHERE user_id IS NOT NULL;
+   ALTER TABLE entries_history DROP COLUMN IF EXISTS actor;

    -- ── secrets_history: field-level snapshot ────────────────────────────────
    CREATE TABLE IF NOT EXISTS secrets_history (
@@ -116,7 +112,6 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
        field_name VARCHAR(256) NOT NULL,
        encrypted BYTEA NOT NULL DEFAULT '\x',
        action VARCHAR(16) NOT NULL,
-       actor VARCHAR(128) NOT NULL DEFAULT '',
        created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
    );

@@ -125,6 +120,12 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
    CREATE INDEX IF NOT EXISTS idx_secrets_history_secret_id
        ON secrets_history(secret_id);

+   -- Drop redundant actor column (derivable via entries_history JOIN)
+   ALTER TABLE secrets_history DROP COLUMN IF EXISTS actor;
+
+   -- Drop redundant actor column; user_id already identifies the business user
+   ALTER TABLE audit_log DROP COLUMN IF EXISTS actor;
+
    -- ── users ─────────────────────────────────────────────────────────────────
    CREATE TABLE IF NOT EXISTS users (
        id UUID PRIMARY KEY DEFAULT uuidv7(),
@@ -159,10 +160,75 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
     )
     .execute(pool)
     .await?;
+    restore_plaintext_api_keys(pool).await?;

     tracing::debug!("migrations complete");
     Ok(())
 }

+async fn restore_plaintext_api_keys(pool: &PgPool) -> Result<()> {
+    let has_users_api_key: bool = sqlx::query_scalar(
+        "SELECT EXISTS (
+            SELECT 1
+            FROM information_schema.columns
+            WHERE table_schema = 'public'
+              AND table_name = 'users'
+              AND column_name = 'api_key'
+        )",
+    )
+    .fetch_one(pool)
+    .await?;
+
+    if !has_users_api_key {
+        sqlx::query("ALTER TABLE users ADD COLUMN api_key TEXT")
+            .execute(pool)
+            .await?;
+        sqlx::query("CREATE UNIQUE INDEX IF NOT EXISTS idx_users_api_key ON users(api_key) WHERE api_key IS NOT NULL")
+            .execute(pool)
+            .await?;
+    }
+
+    let has_api_keys_table: bool = sqlx::query_scalar(
+        "SELECT EXISTS (
+            SELECT 1
+            FROM information_schema.tables
+            WHERE table_schema = 'public'
+              AND table_name = 'api_keys'
+        )",
+    )
+    .fetch_one(pool)
+    .await?;
+
+    if !has_api_keys_table {
+        return Ok(());
+    }
+
+    #[derive(sqlx::FromRow)]
+    struct UserWithoutKey {
+        id: uuid::Uuid,
+    }
+
+    let users_without_key: Vec<UserWithoutKey> =
+        sqlx::query_as("SELECT DISTINCT user_id AS id FROM api_keys WHERE user_id NOT IN (SELECT id FROM users WHERE api_key IS NOT NULL)")
+            .fetch_all(pool)
+            .await?;
+
+    for user in users_without_key {
+        let new_key = crate::service::api_key::generate_api_key();
+        sqlx::query("UPDATE users SET api_key = $1 WHERE id = $2")
+            .bind(&new_key)
+            .bind(user.id)
+            .execute(pool)
+            .await?;
+    }
+
+    sqlx::query("DROP TABLE IF EXISTS api_keys")
+        .execute(pool)
+        .await?;
+
+    Ok(())
+}
+
 // ── Entry-level history snapshot ─────────────────────────────────────────────

 pub struct EntrySnapshotParams<'a> {
@@ -181,11 +247,10 @@ pub async fn snapshot_entry_history(
     tx: &mut sqlx::Transaction<'_, sqlx::Postgres>,
     p: EntrySnapshotParams<'_>,
 ) -> Result<()> {
-    let actor = current_actor();
     sqlx::query(
         "INSERT INTO entries_history \
-         (entry_id, namespace, kind, name, version, action, tags, metadata, actor, user_id) \
-         VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)",
+         (entry_id, namespace, kind, name, version, action, tags, metadata, user_id) \
+         VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9)",
     )
     .bind(p.entry_id)
     .bind(p.namespace)
@@ -195,7 +260,6 @@ pub async fn snapshot_entry_history(
     .bind(p.action)
     .bind(p.tags)
     .bind(p.metadata)
-    .bind(&actor)
     .bind(p.user_id)
     .execute(&mut **tx)
     .await?;
@@ -217,11 +281,10 @@ pub async fn snapshot_secret_history(
     tx: &mut sqlx::Transaction<'_, sqlx::Postgres>,
     p: SecretSnapshotParams<'_>,
 ) -> Result<()> {
-    let actor = current_actor();
     sqlx::query(
         "INSERT INTO secrets_history \
-         (entry_id, secret_id, entry_version, field_name, encrypted, action, actor) \
-         VALUES ($1, $2, $3, $4, $5, $6, $7)",
+         (entry_id, secret_id, entry_version, field_name, encrypted, action) \
+         VALUES ($1, $2, $3, $4, $5, $6)",
     )
     .bind(p.entry_id)
     .bind(p.secret_id)
@@ -229,7 +292,6 @@ pub async fn snapshot_secret_history(
     .bind(p.field_name)
     .bind(p.encrypted)
     .bind(p.action)
-    .bind(&actor)
     .execute(&mut **tx)
     .await?;
     Ok(())
@@ -9,6 +9,7 @@ use uuid::Uuid;
 #[derive(Debug, Serialize, Deserialize, sqlx::FromRow)]
 pub struct Entry {
     pub id: Uuid,
+    pub user_id: Option<Uuid>,
     pub namespace: String,
     pub kind: String,
     pub name: String,
@@ -184,7 +185,6 @@ pub struct AuditLogEntry {
     pub kind: String,
     pub name: String,
     pub detail: Value,
-    pub actor: String,
     pub created_at: DateTime<Utc>,
 }

@@ -8,9 +8,9 @@ pub async fn list_for_user(pool: &PgPool, user_id: Uuid, limit: i64) -> Result<V
     let limit = limit.clamp(1, 200);

     let rows = sqlx::query_as(
-        "SELECT id, user_id, action, namespace, kind, name, detail, actor, created_at \
+        "SELECT id, user_id, action, namespace, kind, name, detail, created_at \
          FROM audit_log \
-         WHERE user_id = $1 OR (user_id IS NULL AND detail->>'user_id' = $1::text) \
+         WHERE user_id = $1 \
          ORDER BY created_at DESC, id DESC \
          LIMIT $2",
     )
@@ -7,7 +7,6 @@ use uuid::Uuid;
 pub struct HistoryEntry {
     pub version: i64,
     pub action: String,
-    pub actor: String,
     pub created_at: String,
 }

@@ -23,13 +22,12 @@ pub async fn run(
     struct Row {
         version: i64,
         action: String,
-        actor: String,
         created_at: chrono::DateTime<chrono::Utc>,
     }

     let rows: Vec<Row> = if let Some(uid) = user_id {
         sqlx::query_as(
-            "SELECT version, action, actor, created_at FROM entries_history \
+            "SELECT version, action, created_at FROM entries_history \
              WHERE namespace = $1 AND kind = $2 AND name = $3 AND user_id = $4 \
              ORDER BY id DESC LIMIT $5",
         )
@@ -42,7 +40,7 @@ pub async fn run(
         .await?
     } else {
         sqlx::query_as(
-            "SELECT version, action, actor, created_at FROM entries_history \
+            "SELECT version, action, created_at FROM entries_history \
              WHERE namespace = $1 AND kind = $2 AND name = $3 AND user_id IS NULL \
              ORDER BY id DESC LIMIT $4",
         )
@@ -59,7 +57,6 @@ pub async fn run(
         .map(|r| HistoryEntry {
             version: r.version,
             action: r.action,
-            actor: r.actor,
             created_at: r.created_at.format("%Y-%m-%dT%H:%M:%SZ").to_string(),
         })
         .collect())
@@ -131,7 +131,7 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
     };

     let sql = format!(
-        "SELECT id, COALESCE(user_id, '00000000-0000-0000-0000-000000000000'::uuid) AS user_id, \
+        "SELECT id, user_id, \
          namespace, kind, name, tags, metadata, version, created_at, updated_at \
          FROM entries {where_clause} ORDER BY {order} LIMIT ${limit_idx} OFFSET ${offset_idx}"
     );
@@ -212,8 +212,7 @@ pub async fn fetch_secrets_for_entries(
 #[derive(sqlx::FromRow)]
 struct EntryRaw {
     id: Uuid,
-    #[allow(dead_code)] // Selected for row shape; Entry model has no user_id field
-    user_id: Uuid,
+    user_id: Option<Uuid>,
     namespace: String,
     kind: String,
     name: String,
@@ -228,6 +227,7 @@ impl From<EntryRaw> for Entry {
     fn from(r: EntryRaw) -> Self {
         Entry {
             id: r.id,
+            user_id: r.user_id,
             namespace: r.namespace,
             kind: r.kind,
             name: r.name,
@@ -1,6 +1,6 @@
 [package]
 name = "secrets-mcp"
-version = "0.1.7"
+version = "0.1.10"
 edition.workspace = true

 [[bin]]
@@ -473,15 +473,16 @@ impl SecretsService {
     async fn secrets_history(
         &self,
         Parameters(input): Parameters<HistoryInput>,
-        _ctx: RequestContext<RoleServer>,
+        ctx: RequestContext<RoleServer>,
     ) -> Result<CallToolResult, rmcp::ErrorData> {
+        let user_id = Self::user_id_from_ctx(&ctx)?;
         let result = svc_history(
             &self.pool,
             &input.namespace,
             &input.kind,
             &input.name,
             input.limit.unwrap_or(20),
-            None,
+            user_id,
         )
         .await
         .map_err(|e| rmcp::ErrorData::internal_error(e.to_string(), None))?;
@@ -1,4 +1,5 @@
 use askama::Template;
+use chrono::SecondsFormat;
 use std::net::SocketAddr;

 use axum::{
@@ -61,7 +62,8 @@ struct AuditPageTemplate {
 }

 struct AuditEntryView {
-    created_at: String,
+    /// RFC3339 UTC for `<time datetime>`; rendered as browser-local in audit.html.
+    created_at_iso: String,
     action: String,
     target: String,
     detail: String,
@@ -319,11 +321,6 @@ where
             StatusCode::INTERNAL_SERVER_ERROR
         })?;

-        // Ensure the user has an API key (auto-creates on first login).
-        if let Err(e) = ensure_api_key(&state.pool, user.id).await {
-            tracing::warn!(error = %e, "failed to ensure api key for user");
-        }
-
         session
             .insert(SESSION_USER_ID, user.id.to_string())
             .await
@@ -408,7 +405,7 @@ async fn audit_page(
     let entries = rows
         .into_iter()
         .map(|row| AuditEntryView {
-            created_at: row.created_at.format("%Y-%m-%d %H:%M:%S UTC").to_string(),
+            created_at_iso: row.created_at.to_rfc3339_opts(SecondsFormat::Secs, true),
             action: row.action,
             target: format_audit_target(&row.namespace, &row.kind, &row.name),
             detail: serde_json::to_string_pretty(&row.detail).unwrap_or_else(|_| "{}".to_string()),
@@ -640,6 +637,7 @@ fn render_template<T: Template>(tmpl: T) -> Result<Response, StatusCode> {
 }

 fn format_audit_target(namespace: &str, kind: &str, name: &str) -> String {
+    // Auth events reuse kind/name as a provider-scoped target, not an entry identity.
     if namespace == "auth" {
         format!("{}/{}", kind, name)
     } else {
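The hunk above ends before the non-auth branch. Based on the `[namespace/kind] name` semantics documented earlier in this compare, a self-contained sketch of the whole function could look like this (the `else` body is a reconstruction from the docs, not code shown in the diff):

```rust
// Sketch of format_audit_target per the documented semantics; the else branch
// is an assumption reconstructed from the `[namespace/kind] name` description.
fn format_audit_target(namespace: &str, kind: &str, name: &str) -> String {
    if namespace == "auth" {
        // Auth events: kind/name identify the provider target, e.g. "oauth/google".
        format!("{}/{}", kind, name)
    } else {
        // Business events: entry coordinates.
        format!("[{}/{}] {}", namespace, kind, name)
    }
}

fn main() {
    // A login event vs. an ordinary entry event.
    println!("{}", format_audit_target("auth", "oauth", "google"));
    println!("{}", format_audit_target("prod", "db", "primary"));
}
```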
@@ -108,7 +108,7 @@
 <main class="main">
   <section class="card">
     <div class="card-title">My audit log</div>
-    <div class="card-subtitle">Shows the most recent 100 new audit records related to the current user.</div>
+    <div class="card-subtitle">Shows the most recent 100 new audit records related to the current user. Times are shown in the browser's local time zone.</div>

 {% if entries.is_empty() %}
     <div class="empty">No audit records yet.</div>
@@ -125,7 +125,7 @@
 <tbody>
 {% for entry in entries %}
   <tr>
-    <td class="col-time mono">{{ entry.created_at }}</td>
+    <td class="col-time mono"><time class="audit-local-time" datetime="{{ entry.created_at_iso }}">{{ entry.created_at_iso }}</time></td>
     <td class="col-action mono">{{ entry.action }}</td>
     <td class="col-target mono">{{ entry.target }}</td>
     <td class="col-detail"><pre class="detail">{{ entry.detail }}</pre></td>
@@ -138,5 +138,17 @@
     </main>
   </div>
 </div>
+<script>
+  (function () {
+    document.querySelectorAll('time.audit-local-time[datetime]').forEach(function (el) {
+      var raw = el.getAttribute('datetime');
+      var d = raw ? new Date(raw) : null;
+      if (d && !isNaN(d.getTime())) {
+        el.textContent = d.toLocaleString(undefined, { dateStyle: 'medium', timeStyle: 'medium' });
+        el.title = raw + ' (UTC)';
+      }
+    });
+  })();
+</script>
 </body>
 </html>