Compare commits


5 Commits

Author SHA1 Message Date
df701f21b9 feat(secrets-mcp): auto-migrate and redirect shared keys on deletion (v0.3.7)
All checks were successful
Secrets MCP — Build & Release / check / build / release (push) Successful in 4m4s
Secrets MCP — Build & Release / deploy secrets-mcp (push) Successful in 6s
When deleting a key entry that is still referenced via metadata.key_ref, copy the ciphertext to the first referrer within the same transaction
and redirect the remaining referrers' key_ref to the new owner; env_map no longer restricts key_ref resolution to type=key.
The Web delete API now returns migrated; the Dashboard shows a migration notice after a successful delete.

Bump secrets-mcp to 0.3.7; add unit tests for delete-time migration (they require SECRETS_DATABASE_URL).

Made-with: Cursor
2026-04-03 09:27:20 +08:00
c3c536200e feat(secrets-mcp): Web entry edit API and Notes list display improvements (0.3.6)
All checks were successful
Secrets MCP — Build & Release / check / build / release (push) Successful in 3m55s
Secrets MCP — Build & Release / deploy secrets-mcp (push) Successful in 6s
- secrets-core: EntryWriteRow; update/delete by id (handles concurrent-modification conflicts and unique-key violations)
- Web: PATCH/DELETE /api/entries/{id}; list edit/delete with error mapping
- entries template: Notes capped height with scrolling; empty Notes no longer renders a placeholder box
- Bump version 0.3.5 → 0.3.6; sync Cargo.lock

Made-with: Cursor
2026-04-02 14:58:10 +08:00
7909f7102d feat(secrets-mcp): filter the entries page by folder/type; release 0.3.5
- entries route supports ?folder=&type= query params, aligned with the search layer's SearchParams
- add a filter form and explanatory copy to the entry list page
- Bump version 0.3.4 → 0.3.5; sync Cargo.lock

Made-with: Cursor
2026-04-02 14:37:36 +08:00
87a29af82d feat(web): entry list page /entries with total-count statistics
All checks were successful
Secrets MCP — Build & Release / check / build / release (push) Successful in 3m56s
Secrets MCP — Build & Release / deploy secrets-mcp (push) Successful in 6s
- add session-protected GET /entries that shows only non-sensitive columns from entries
- search: list_entries and count_entries share filter conditions; paging and counting never read secrets
- add an "Entries" link to the sidebar on dashboard/audit
- secrets-mcp 0.3.4 (tag does not exist yet)

Made-with: Cursor
2026-04-02 11:26:51 +08:00
1b11f7e976 release(secrets-mcp): v0.3.3 — enforce PostgreSQL TLS verification
Some checks failed
Secrets MCP — Build & Release / check / build / release (push) Successful in 3m54s
Secrets MCP — Build & Release / deploy secrets-mcp (push) Failing after 7s
Explicitly introduce database TLS configuration and reject weak sslmode values in production to avoid silent connection downgrades. Update deploy/README and the ops runbook accordingly, documenting the certificate and server setup workflow for db.refining.ltd.

Made-with: Cursor
2026-04-01 15:18:14 +08:00
18 changed files with 1623 additions and 84 deletions


@@ -118,7 +118,7 @@ oauth_accounts (
### PEM sharing (`key_ref`)
Store the shared PEM as a **`type=key`** entry; other records point at that key's `name` via `metadata.key_ref` (the `folder/name` form disambiguates). After updating the key record, referrers pick up the new key through the service-layer resolution and merge logic (implemented in `secrets_core::service::env_map`).
We recommend storing the shared PEM as a **`type=key`** entry; other records point at the target entry's `name` via `metadata.key_ref` (the `folder/name` form disambiguates). When a referenced key is deleted, the service automatically migrates to a single copy plus redirects (the ciphertext is copied to the first referrer; the remaining referrers are repointed to the new owner); the resolution logic lives in `secrets_core::service::env_map`.
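As a standalone illustration of the `folder/name` disambiguation (a sketch, not the project's actual parser — the real resolution lives in `secrets_core::service::env_map`):

```rust
/// Sketch: split a `metadata.key_ref` value into an optional folder and the
/// referenced entry name. A bare name matches by name alone; `folder/name`
/// pins the reference to one folder.
fn parse_key_ref(key_ref: &str) -> (Option<&str>, &str) {
    match key_ref.split_once('/') {
        Some((folder, name)) => (Some(folder), name),
        None => (None, key_ref),
    }
}

fn main() {
    // Bare reference: resolved by name across folders.
    assert_eq!(parse_key_ref("shared-key"), (None, "shared-key"));
    // Qualified reference: disambiguates to a single folder.
    assert_eq!(parse_key_ref("kfolder/shared-key"), (Some("kfolder"), "shared-key"));
}
```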
## Coding conventions

Cargo.lock generated

@@ -1968,7 +1968,7 @@ dependencies = [
[[package]]
name = "secrets-mcp"
version = "0.3.2"
version = "0.3.7"
dependencies = [
"anyhow",
"askama",


@@ -17,7 +17,10 @@ cargo build --release -p secrets-mcp
| Variable | Description |
|------|------|
| `SECRETS_DATABASE_URL` | **Required.** PostgreSQL connection string (a dedicated database such as `secrets-mcp` is recommended). |
| `SECRETS_DATABASE_URL` | **Required.** PostgreSQL connection string (use a hostname such as `db.refining.ltd` rather than a raw IP). |
| `SECRETS_DATABASE_SSL_MODE` | Optional, but effectively required in production. Prefer `verify-full` (at minimum `verify-ca`) to avoid falling back to weak TLS modes. |
| `SECRETS_DATABASE_SSL_ROOT_CERT` | Optional. Path to the CA root certificate for private-CA or self-signed chains (e.g. `/etc/secrets/pg-ca.crt`). |
| `SECRETS_ENV` | Optional. When set to `prod` / `production`, weak PostgreSQL TLS modes (`prefer`, `disable`, `allow`, `require`) are rejected. |
| `BASE_URL` | Public base URL; the OAuth callback is `{BASE_URL}/auth/google/callback`. Defaults to `http://localhost:9315`. |
| `SECRETS_MCP_BIND` | Listen address, default `127.0.0.1:9315`. Use `0.0.0.0:9315` inside a container or when exposing the port directly; behind a reverse proxy, `127.0.0.1:9315` is typical. |
| `GOOGLE_CLIENT_ID` / `GOOGLE_CLIENT_SECRET` | Optional; without them there is no Google login entry point. Read from the environment at runtime; never commit them to CI or bake them into the binary. |
@@ -27,9 +30,26 @@ cargo build --release -p secrets-mcp
cargo run -p secrets-mcp
```
Recommended production example (PostgreSQL TLS):
```bash
SECRETS_DATABASE_URL=postgres://postgres:***@db.refining.ltd:5432/secrets-mcp
SECRETS_DATABASE_SSL_MODE=verify-full
SECRETS_DATABASE_SSL_ROOT_CERT=/etc/secrets/pg-ca.crt
SECRETS_ENV=production
```
- **Web**: `BASE_URL` (login, Dashboard, passphrase setup, API key creation)
- **MCP**: Streamable HTTP base URL `{BASE_URL}/mcp`; requires the `Authorization: Bearer <api_key>` and `X-Encryption-Key: <hex>` headers (tools that read ciphertext must send the encryption key).
## PostgreSQL TLS hardening
- Put the database on its own hostname, `db.refining.ltd`, while the service keeps `secrets.refining.app`.
- Use a verifiable certificate chain for the database (e.g. Let's Encrypt or a private CA) and make sure the certificate's `SAN` includes `db.refining.ltd`.
- On the PostgreSQL side, restrict application access with a `hostssl` rule (e.g. `47.238.146.244/32`) and phase out plaintext `host` access from the public internet.
- On the application side, prefer `SECRETS_DATABASE_SSL_MODE=verify-full`; `verify-ca` is acceptable only as a transitional step.
- Executable ops steps live in [`deploy/postgres-tls-hardening.md`](deploy/postgres-tls-hardening.md).
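The `hostssl` restriction above might be sketched in `pg_hba.conf` as follows (the database name, user scope, and auth method are illustrative assumptions; only the CIDR comes from this runbook):

```
# TLS-only access for the application host (sketch — adjust db/user/method)
hostssl  secrets-mcp  all  47.238.146.244/32  scram-sha-256
# and remove any plaintext `host` rule for public networks
```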
## MCP and AI workflow (v0.3+)
Entries are logically unique per user by **`(folder, name)`** (database unique index: `user_id + folder + name`). The same name can exist once per folder (e.g. `refining/aliyun` and `ricnsmart/aliyun`).
@@ -37,6 +57,7 @@ cargo run -p secrets-mcp
- **`secrets_search`**: discover entries (filter by query / folder / type / name); does not require the encryption header.
- **`secrets_get` / `secrets_update` / `secrets_delete` (by name) / `secrets_history` / `secrets_rollback`**: with only `name`, a globally unique match resolves directly; if several entries share the name, a disambiguation error is returned and **`folder`** must be supplied.
- **`secrets_delete`**: with `dry_run=true` the disambiguation rules match a real delete: a unique match previews that one entry; multiple matches return an error and require `folder`.
- **Auto-migrating deletion of shared keys**: when deleting a key entry that is still referenced via `metadata.key_ref`, the system migrates automatically: the ciphertext is copied to the first referrer, the remaining referrers' `key_ref` is redirected to the new owner, and only then is the key deleted.
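The migration rule in the last bullet can be sketched as a pure function (a hypothetical helper for illustration; the real `migrate_key_refs_if_needed` also copies the `secrets` rows and writes `key_migrate` audit events inside the transaction):

```rust
/// Sketch (hypothetical helper): given referrer paths already sorted by
/// (folder, name) — matching the migration query's ORDER BY — the first
/// referrer becomes the new single-copy owner, and every other referrer's
/// `key_ref` is redirected to the owner's `folder/name` path.
fn plan_key_migration(referrers: &[String]) -> Option<(String, Vec<(String, String)>)> {
    let (owner, rest) = referrers.split_first()?;
    let redirects = rest
        .iter()
        .map(|r| (r.clone(), owner.clone())) // (referrer path, new key_ref value)
        .collect();
    Some((owner.clone(), redirects))
}

fn main() {
    let refs = vec![
        "afolder/srv-a".to_string(),
        "bfolder/srv-b".to_string(),
        "cfolder/svc-c".to_string(),
    ];
    let (owner, redirects) = plan_key_migration(&refs).unwrap();
    assert_eq!(owner, "afolder/srv-a");
    // srv-b and svc-c now point their key_ref at afolder/srv-a.
    assert_eq!(redirects.len(), 2);
}
```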
## Encryption architecture (hybrid E2EE)
@@ -147,7 +168,8 @@ flowchart LR
### PEM sharing (`key_ref`)
The same PEM can be referenced by multiple `server` (and similar) records: store the PEM as a **`type=key`** entry and write that key entry's `name` into the other entries' `metadata.key_ref` (the `folder/name` form disambiguates); rotation only requires updating the key record.
The same PEM can be referenced by multiple `server` (and similar) records: we recommend storing the PEM as a **`type=key`** entry and writing the target entry's `name` into the other entries' `metadata.key_ref` (the `folder/name` form disambiguates); rotation only requires updating that target record.
When a shared key is deleted, the system automatically migrates the references: the ciphertext is copied to the first referrer (a single copy), the remaining referrers' `key_ref` is redirected to that new owner, and the original key record is then deleted.
## Audit log


@@ -1,4 +1,15 @@
use anyhow::Result;
use std::path::PathBuf;
use anyhow::{Context, Result};
use sqlx::postgres::PgSslMode;
#[derive(Debug, Clone)]
pub struct DatabaseConfig {
pub url: String,
pub ssl_mode: Option<PgSslMode>,
pub ssl_root_cert: Option<PathBuf>,
pub enforce_strict_tls: bool,
}
/// Resolve database URL from environment.
/// Priority: `SECRETS_DATABASE_URL` env var → error.
@@ -18,3 +29,54 @@ pub fn resolve_db_url(override_url: &str) -> Result<String> {
Example: SECRETS_DATABASE_URL=postgres://user:pass@host:port/dbname"
)
}
fn env_var_non_empty(name: &str) -> Option<String> {
std::env::var(name)
.ok()
.filter(|value| !value.trim().is_empty())
}
fn parse_ssl_mode_from_env() -> Result<Option<PgSslMode>> {
let Some(mode) = env_var_non_empty("SECRETS_DATABASE_SSL_MODE") else {
return Ok(None);
};
let parsed = mode.parse::<PgSslMode>().with_context(|| {
format!(
"Invalid SECRETS_DATABASE_SSL_MODE='{mode}'. Use one of: disable, allow, prefer, require, verify-ca, verify-full."
)
})?;
Ok(Some(parsed))
}
fn resolve_ssl_root_cert_from_env() -> Result<Option<PathBuf>> {
let Some(path) = env_var_non_empty("SECRETS_DATABASE_SSL_ROOT_CERT") else {
return Ok(None);
};
let path = PathBuf::from(path);
if !path.exists() {
anyhow::bail!(
"SECRETS_DATABASE_SSL_ROOT_CERT points to a missing file: {}",
path.display()
);
}
Ok(Some(path))
}
fn is_production_env() -> bool {
matches!(
env_var_non_empty("SECRETS_ENV")
.as_deref()
.map(|value| value.to_ascii_lowercase()),
Some(value) if value == "prod" || value == "production"
)
}
pub fn resolve_db_config(override_url: &str) -> Result<DatabaseConfig> {
Ok(DatabaseConfig {
url: resolve_db_url(override_url)?,
ssl_mode: parse_ssl_mode_from_env()?,
ssl_root_cert: resolve_ssl_root_cert_from_env()?,
enforce_strict_tls: is_production_env(),
})
}


@@ -1,14 +1,45 @@
use anyhow::Result;
use std::str::FromStr;
use anyhow::{Context, Result};
use serde_json::Value;
use sqlx::PgPool;
use sqlx::postgres::PgPoolOptions;
use sqlx::postgres::{PgConnectOptions, PgPoolOptions, PgSslMode};
pub async fn create_pool(database_url: &str) -> Result<PgPool> {
use crate::config::DatabaseConfig;
fn build_connect_options(config: &DatabaseConfig) -> Result<PgConnectOptions> {
let mut options = PgConnectOptions::from_str(&config.url)
.with_context(|| "failed to parse SECRETS_DATABASE_URL".to_string())?;
if let Some(mode) = config.ssl_mode {
options = options.ssl_mode(mode);
}
if let Some(path) = &config.ssl_root_cert {
options = options.ssl_root_cert(path);
}
if config.enforce_strict_tls
&& !matches!(
options.get_ssl_mode(),
PgSslMode::VerifyCa | PgSslMode::VerifyFull
)
{
anyhow::bail!(
"Refusing to start in production with weak PostgreSQL TLS mode. \
Set SECRETS_DATABASE_SSL_MODE=verify-ca or verify-full."
);
}
Ok(options)
}
pub async fn create_pool(config: &DatabaseConfig) -> Result<PgPool> {
tracing::debug!("connecting to database");
let connect_options = build_connect_options(config)?;
let pool = PgPoolOptions::new()
.max_connections(10)
.acquire_timeout(std::time::Duration::from_secs(5))
.connect(database_url)
.connect_with(connect_options)
.await?;
tracing::debug!("database connection established");
Ok(pool)


@@ -51,6 +51,34 @@ pub struct EntryRow {
pub notes: String,
}
/// Entry row including `name` (used for id-scoped web / service updates).
#[derive(Debug, sqlx::FromRow)]
pub struct EntryWriteRow {
pub id: Uuid,
pub version: i64,
pub folder: String,
#[sqlx(rename = "type")]
pub entry_type: String,
pub name: String,
pub tags: Vec<String>,
pub metadata: Value,
pub notes: String,
}
impl From<&EntryWriteRow> for EntryRow {
fn from(r: &EntryWriteRow) -> Self {
EntryRow {
id: r.id,
version: r.version,
folder: r.folder.clone(),
entry_type: r.entry_type.clone(),
tags: r.tags.clone(),
metadata: r.metadata.clone(),
notes: r.notes.clone(),
}
}
}
/// Minimal secret field row fetched before snapshots or cascade deletes.
#[derive(Debug, sqlx::FromRow)]
pub struct SecretFieldRow {


@@ -4,7 +4,7 @@ use sqlx::PgPool;
use uuid::Uuid;
use crate::db;
use crate::models::{EntryRow, SecretFieldRow};
use crate::models::{EntryRow, EntryWriteRow, SecretFieldRow};
#[derive(Debug, serde::Serialize)]
pub struct DeletedEntry {
@@ -17,6 +17,7 @@ pub struct DeletedEntry {
#[derive(Debug, serde::Serialize)]
pub struct DeleteResult {
pub deleted: Vec<DeletedEntry>,
pub migrated: Vec<String>,
pub dry_run: bool,
}
@@ -31,6 +32,233 @@ pub struct DeleteParams<'a> {
pub user_id: Option<Uuid>,
}
#[derive(Debug, sqlx::FromRow)]
struct KeyReferrer {
id: Uuid,
folder: String,
#[sqlx(rename = "type")]
entry_type: String,
name: String,
}
fn ref_label(r: &KeyReferrer) -> String {
format!("{}/{} ({})", r.folder, r.name, r.entry_type)
}
fn ref_path(r: &KeyReferrer) -> String {
format!("{}/{}", r.folder, r.name)
}
async fn fetch_key_referrers_pool(
pool: &PgPool,
key_entry_id: Uuid,
key_folder: &str,
key_name: &str,
user_id: Option<Uuid>,
) -> Result<Vec<KeyReferrer>> {
let qualified = format!("{}/{}", key_folder, key_name);
let refs: Vec<KeyReferrer> = if let Some(uid) = user_id {
sqlx::query_as(
"SELECT id, folder, type, name FROM entries \
WHERE user_id = $1 AND id <> $2 \
AND (metadata->>'key_ref' = $3 OR metadata->>'key_ref' = $4) \
ORDER BY folder, type, name",
)
.bind(uid)
.bind(key_entry_id)
.bind(key_name)
.bind(&qualified)
.fetch_all(pool)
.await?
} else {
sqlx::query_as(
"SELECT id, folder, type, name FROM entries \
WHERE user_id IS NULL AND id <> $1 \
AND (metadata->>'key_ref' = $2 OR metadata->>'key_ref' = $3) \
ORDER BY folder, type, name",
)
.bind(key_entry_id)
.bind(key_name)
.bind(&qualified)
.fetch_all(pool)
.await?
};
Ok(refs)
}
async fn migrate_key_refs_if_needed(
tx: &mut sqlx::Transaction<'_, sqlx::Postgres>,
key_row: &EntryRow,
key_name: &str,
user_id: Option<Uuid>,
dry_run: bool,
) -> Result<Vec<String>> {
let qualified = format!("{}/{}", key_row.folder, key_name);
let refs: Vec<KeyReferrer> = if let Some(uid) = user_id {
sqlx::query_as(
"SELECT id, folder, type, name FROM entries \
WHERE user_id = $1 AND id <> $2 \
AND (metadata->>'key_ref' = $3 OR metadata->>'key_ref' = $4) \
ORDER BY folder, type, name",
)
.bind(uid)
.bind(key_row.id)
.bind(key_name)
.bind(&qualified)
.fetch_all(&mut **tx)
.await?
} else {
sqlx::query_as(
"SELECT id, folder, type, name FROM entries \
WHERE user_id IS NULL AND id <> $1 \
AND (metadata->>'key_ref' = $2 OR metadata->>'key_ref' = $3) \
ORDER BY folder, type, name",
)
.bind(key_row.id)
.bind(key_name)
.bind(&qualified)
.fetch_all(&mut **tx)
.await?
};
if refs.is_empty() {
return Ok(vec![]);
}
if dry_run {
return Ok(refs.iter().map(ref_label).collect());
}
let owner = &refs[0];
let owner_path = ref_path(owner);
let key_fields: Vec<SecretFieldRow> =
sqlx::query_as("SELECT id, field_name, encrypted FROM secrets WHERE entry_id = $1")
.bind(key_row.id)
.fetch_all(&mut **tx)
.await?;
for f in &key_fields {
sqlx::query(
"INSERT INTO secrets (entry_id, field_name, encrypted) VALUES ($1, $2, $3) \
ON CONFLICT (entry_id, field_name) DO NOTHING",
)
.bind(owner.id)
.bind(&f.field_name)
.bind(&f.encrypted)
.execute(&mut **tx)
.await?;
}
sqlx::query(
"UPDATE entries SET metadata = metadata - 'key_ref', \
version = version + 1, updated_at = NOW() WHERE id = $1",
)
.bind(owner.id)
.execute(&mut **tx)
.await?;
crate::audit::log_tx(
tx,
user_id,
"key_migrate",
&owner.folder,
&owner.entry_type,
&owner.name,
json!({
"from_key": format!("{}/{}", key_row.folder, key_name),
"role": "new_owner",
"redirect_target": owner_path,
}),
)
.await;
for r in refs.iter().skip(1) {
sqlx::query(
"UPDATE entries SET metadata = jsonb_set(metadata, '{key_ref}', to_jsonb($2::text), true), \
version = version + 1, updated_at = NOW() WHERE id = $1",
)
.bind(r.id)
.bind(&owner_path)
.execute(&mut **tx)
.await?;
crate::audit::log_tx(
tx,
user_id,
"key_migrate",
&r.folder,
&r.entry_type,
&r.name,
json!({
"from_key": format!("{}/{}", key_row.folder, key_name),
"role": "redirected_ref",
"redirect_to": owner_path,
}),
)
.await;
}
Ok(refs.iter().map(ref_label).collect())
}
/// Delete a single entry by id (multi-tenant: `user_id` must match). Cascades `secrets` via FK.
pub async fn delete_by_id(pool: &PgPool, entry_id: Uuid, user_id: Uuid) -> Result<DeleteResult> {
let mut tx = pool.begin().await?;
let row: Option<EntryWriteRow> = sqlx::query_as(
"SELECT id, version, folder, type, name, tags, metadata, notes FROM entries \
WHERE id = $1 AND user_id = $2 FOR UPDATE",
)
.bind(entry_id)
.bind(user_id)
.fetch_optional(&mut *tx)
.await?;
let row = match row {
Some(r) => r,
None => {
tx.rollback().await?;
anyhow::bail!("Entry not found");
}
};
let folder = row.folder.clone();
let entry_type = row.entry_type.clone();
let name = row.name.clone();
let entry_row: EntryRow = (&row).into();
let migrated =
migrate_key_refs_if_needed(&mut tx, &entry_row, &name, Some(user_id), false).await?;
snapshot_and_delete(
&mut tx,
&folder,
&entry_type,
&name,
&entry_row,
Some(user_id),
)
.await?;
crate::audit::log_tx(
&mut tx,
Some(user_id),
"delete",
&folder,
&entry_type,
&name,
json!({ "source": "web", "entry_id": entry_id }),
)
.await;
tx.commit().await?;
Ok(DeleteResult {
deleted: vec![DeletedEntry {
name,
folder,
entry_type,
}],
migrated,
dry_run: false,
})
}
pub async fn run(pool: &PgPool, params: DeleteParams<'_>) -> Result<DeleteResult> {
match params.name {
Some(name) => delete_one(pool, name, params.folder, params.dry_run, params.user_id).await,
@@ -66,6 +294,7 @@ async fn delete_one(
// - 2+ matches → disambiguation error (same as non-dry-run)
#[derive(sqlx::FromRow)]
struct DryRunRow {
id: Uuid,
folder: String,
#[sqlx(rename = "type")]
entry_type: String,
@@ -74,7 +303,7 @@ async fn delete_one(
let rows: Vec<DryRunRow> = if let Some(uid) = user_id {
if let Some(f) = folder {
sqlx::query_as(
"SELECT folder, type FROM entries WHERE user_id = $1 AND folder = $2 AND name = $3",
"SELECT id, folder, type FROM entries WHERE user_id = $1 AND folder = $2 AND name = $3",
)
.bind(uid)
.bind(f)
@@ -82,40 +311,48 @@ async fn delete_one(
.fetch_all(pool)
.await?
} else {
sqlx::query_as("SELECT folder, type FROM entries WHERE user_id = $1 AND name = $2")
.bind(uid)
.bind(name)
.fetch_all(pool)
.await?
sqlx::query_as(
"SELECT id, folder, type FROM entries WHERE user_id = $1 AND name = $2",
)
.bind(uid)
.bind(name)
.fetch_all(pool)
.await?
}
} else if let Some(f) = folder {
sqlx::query_as(
"SELECT folder, type FROM entries WHERE user_id IS NULL AND folder = $1 AND name = $2",
"SELECT id, folder, type FROM entries WHERE user_id IS NULL AND folder = $1 AND name = $2",
)
.bind(f)
.bind(name)
.fetch_all(pool)
.await?
} else {
sqlx::query_as("SELECT folder, type FROM entries WHERE user_id IS NULL AND name = $1")
.bind(name)
.fetch_all(pool)
.await?
sqlx::query_as(
"SELECT id, folder, type FROM entries WHERE user_id IS NULL AND name = $1",
)
.bind(name)
.fetch_all(pool)
.await?
};
return match rows.len() {
0 => Ok(DeleteResult {
deleted: vec![],
migrated: vec![],
dry_run: true,
}),
1 => {
let row = rows.into_iter().next().unwrap();
let refs =
fetch_key_referrers_pool(pool, row.id, &row.folder, name, user_id).await?;
Ok(DeleteResult {
deleted: vec![DeletedEntry {
name: name.to_string(),
folder: row.folder,
entry_type: row.entry_type,
}],
migrated: refs.iter().map(ref_label).collect(),
dry_run: true,
})
}
@@ -180,6 +417,7 @@ async fn delete_one(
tx.rollback().await?;
return Ok(DeleteResult {
deleted: vec![],
migrated: vec![],
dry_run: false,
});
}
@@ -199,6 +437,7 @@ async fn delete_one(
let folder = row.folder.clone();
let entry_type = row.entry_type.clone();
let migrated = migrate_key_refs_if_needed(&mut tx, &row, name, user_id, false).await?;
snapshot_and_delete(&mut tx, &folder, &entry_type, name, &row, user_id).await?;
crate::audit::log_tx(
&mut tx,
@@ -218,6 +457,7 @@ async fn delete_one(
folder,
entry_type,
}],
migrated,
dry_run: false,
})
}
@@ -278,6 +518,12 @@ async fn delete_bulk(
let rows = q.fetch_all(pool).await?;
if dry_run {
let mut migrated: Vec<String> = Vec::new();
for row in &rows {
let refs =
fetch_key_referrers_pool(pool, row.id, &row.folder, &row.name, user_id).await?;
migrated.extend(refs.iter().map(ref_label));
}
let deleted = rows
.iter()
.map(|r| DeletedEntry {
@@ -288,11 +534,13 @@ async fn delete_bulk(
.collect();
return Ok(DeleteResult {
deleted,
migrated,
dry_run: true,
});
}
let mut deleted = Vec::with_capacity(rows.len());
let mut migrated: Vec<String> = Vec::new();
for row in &rows {
let entry_row = EntryRow {
id: row.id,
@@ -304,6 +552,8 @@ async fn delete_bulk(
notes: row.notes.clone(),
};
let mut tx = pool.begin().await?;
let m = migrate_key_refs_if_needed(&mut tx, &entry_row, &row.name, user_id, false).await?;
migrated.extend(m);
snapshot_and_delete(
&mut tx,
&row.folder,
@@ -333,6 +583,7 @@ async fn delete_bulk(
Ok(DeleteResult {
deleted,
migrated,
dry_run: false,
})
}
@@ -395,3 +646,264 @@ async fn snapshot_and_delete(
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
use serde_json::json;
async fn maybe_test_pool() -> Option<PgPool> {
let Ok(url) = std::env::var("SECRETS_DATABASE_URL") else {
eprintln!("skip delete migration tests: SECRETS_DATABASE_URL is not set");
return None;
};
let Ok(pool) = PgPool::connect(&url).await else {
eprintln!("skip delete migration tests: cannot connect to database");
return None;
};
if let Err(e) = crate::db::migrate(&pool).await {
eprintln!("skip delete migration tests: migrate failed: {e}");
return None;
}
Some(pool)
}
async fn insert_entry(
pool: &PgPool,
id: Uuid,
user_id: Uuid,
folder: &str,
entry_type: &str,
name: &str,
metadata: serde_json::Value,
) -> Result<()> {
sqlx::query(
"INSERT INTO entries (id, user_id, folder, type, name, notes, tags, metadata, version) \
VALUES ($1, $2, $3, $4, $5, '', ARRAY[]::text[], $6, 1)",
)
.bind(id)
.bind(user_id)
.bind(folder)
.bind(entry_type)
.bind(name)
.bind(metadata)
.execute(pool)
.await?;
Ok(())
}
#[tokio::test]
async fn delete_shared_key_dry_run_reports_migration_without_writes() -> Result<()> {
let Some(pool) = maybe_test_pool().await else {
return Ok(());
};
let user_id = Uuid::from_u128(rand::random());
let key_id = Uuid::from_u128(rand::random());
let ref_a = Uuid::from_u128(rand::random());
let ref_b = Uuid::from_u128(rand::random());
insert_entry(
&pool,
key_id,
user_id,
"kfolder",
"key",
"shared-key",
json!({}),
)
.await?;
sqlx::query("INSERT INTO secrets (entry_id, field_name, encrypted) VALUES ($1, $2, $3)")
.bind(key_id)
.bind("pem")
.bind(vec![1_u8, 2, 3])
.execute(&pool)
.await?;
insert_entry(
&pool,
ref_a,
user_id,
"afolder",
"server",
"srv-a",
json!({"key_ref":"kfolder/shared-key"}),
)
.await?;
insert_entry(
&pool,
ref_b,
user_id,
"bfolder",
"server",
"srv-b",
json!({"key_ref":"shared-key"}),
)
.await?;
let result = run(
&pool,
DeleteParams {
name: Some("shared-key"),
folder: Some("kfolder"),
entry_type: None,
dry_run: true,
user_id: Some(user_id),
},
)
.await?;
assert!(result.dry_run);
assert_eq!(result.deleted.len(), 1);
assert_eq!(result.migrated.len(), 2);
let key_exists: bool = sqlx::query_scalar(
"SELECT EXISTS(SELECT 1 FROM entries WHERE id = $1 AND user_id = $2)",
)
.bind(key_id)
.bind(user_id)
.fetch_one(&pool)
.await?;
assert!(key_exists);
let ref_a_key_ref: Option<String> =
sqlx::query_scalar("SELECT metadata->>'key_ref' FROM entries WHERE id = $1")
.bind(ref_a)
.fetch_one(&pool)
.await?;
let ref_b_key_ref: Option<String> =
sqlx::query_scalar("SELECT metadata->>'key_ref' FROM entries WHERE id = $1")
.bind(ref_b)
.fetch_one(&pool)
.await?;
assert_eq!(ref_a_key_ref.as_deref(), Some("kfolder/shared-key"));
assert_eq!(ref_b_key_ref.as_deref(), Some("shared-key"));
sqlx::query("DELETE FROM entries WHERE user_id = $1")
.bind(user_id)
.execute(&pool)
.await?;
Ok(())
}
#[tokio::test]
async fn delete_shared_key_auto_migrates_single_copy_and_redirects_refs() -> Result<()> {
let Some(pool) = maybe_test_pool().await else {
return Ok(());
};
let user_id = Uuid::from_u128(rand::random());
let key_id = Uuid::from_u128(rand::random());
let ref_a = Uuid::from_u128(rand::random());
let ref_b = Uuid::from_u128(rand::random());
let ref_c = Uuid::from_u128(rand::random());
insert_entry(
&pool,
key_id,
user_id,
"kfolder",
"key",
"shared-key",
json!({}),
)
.await?;
sqlx::query("INSERT INTO secrets (entry_id, field_name, encrypted) VALUES ($1, $2, $3)")
.bind(key_id)
.bind("pem")
.bind(vec![7_u8, 8, 9])
.execute(&pool)
.await?;
// owner candidate (sorted first by folder)
insert_entry(
&pool,
ref_a,
user_id,
"afolder",
"server",
"srv-a",
json!({"key_ref":"kfolder/shared-key"}),
)
.await?;
insert_entry(
&pool,
ref_b,
user_id,
"bfolder",
"server",
"srv-b",
json!({"key_ref":"shared-key"}),
)
.await?;
insert_entry(
&pool,
ref_c,
user_id,
"cfolder",
"service",
"svc-c",
json!({"key_ref":"kfolder/shared-key"}),
)
.await?;
let result = run(
&pool,
DeleteParams {
name: Some("shared-key"),
folder: Some("kfolder"),
entry_type: None,
dry_run: false,
user_id: Some(user_id),
},
)
.await?;
assert!(!result.dry_run);
assert_eq!(result.deleted.len(), 1);
assert_eq!(result.migrated.len(), 3);
let key_exists: bool = sqlx::query_scalar(
"SELECT EXISTS(SELECT 1 FROM entries WHERE id = $1 AND user_id = $2)",
)
.bind(key_id)
.bind(user_id)
.fetch_one(&pool)
.await?;
assert!(!key_exists);
let owner_key_ref: Option<String> =
sqlx::query_scalar("SELECT metadata->>'key_ref' FROM entries WHERE id = $1")
.bind(ref_a)
.fetch_one(&pool)
.await?;
let ref_b_key_ref: Option<String> =
sqlx::query_scalar("SELECT metadata->>'key_ref' FROM entries WHERE id = $1")
.bind(ref_b)
.fetch_one(&pool)
.await?;
let ref_c_key_ref: Option<String> =
sqlx::query_scalar("SELECT metadata->>'key_ref' FROM entries WHERE id = $1")
.bind(ref_c)
.fetch_one(&pool)
.await?;
assert_eq!(owner_key_ref, None);
assert_eq!(ref_b_key_ref.as_deref(), Some("afolder/srv-a"));
assert_eq!(ref_c_key_ref.as_deref(), Some("afolder/srv-a"));
let owner_has_copied: bool = sqlx::query_scalar(
"SELECT EXISTS(SELECT 1 FROM secrets WHERE entry_id = $1 AND field_name = 'pem')",
)
.bind(ref_a)
.fetch_one(&pool)
.await?;
assert!(owner_has_copied);
sqlx::query("DELETE FROM entries WHERE user_id = $1")
.bind(user_id)
.execute(&pool)
.await?;
Ok(())
}
}


@@ -75,16 +75,8 @@ async fn build_entry_env_map(
} else {
(None, key_ref)
};
let key_entries = fetch_entries(
pool,
ref_folder,
Some("key"),
Some(ref_name),
&[],
None,
user_id,
)
.await?;
let key_entries =
fetch_entries(pool, ref_folder, None, Some(ref_name), &[], None, user_id).await?;
if key_entries.len() > 1 {
anyhow::bail!(


@@ -27,49 +27,46 @@ pub struct SearchResult {
pub secret_schemas: HashMap<Uuid, Vec<SecretField>>,
}
pub async fn run(pool: &PgPool, params: SearchParams<'_>) -> Result<SearchResult> {
let entries = fetch_entries_paged(pool, &params).await?;
let entry_ids: Vec<Uuid> = entries.iter().map(|e| e.id).collect();
let secret_schemas = if !entry_ids.is_empty() {
fetch_secret_schemas(pool, &entry_ids).await?
} else {
HashMap::new()
};
Ok(SearchResult {
entries,
secret_schemas,
})
}
/// Fetch entries matching the given filters — returns all matching entries up to FETCH_ALL_LIMIT.
pub async fn fetch_entries(
pool: &PgPool,
folder: Option<&str>,
entry_type: Option<&str>,
name: Option<&str>,
tags: &[String],
query: Option<&str>,
user_id: Option<Uuid>,
) -> Result<Vec<Entry>> {
let params = SearchParams {
folder,
entry_type,
name,
tags,
query,
sort: "name",
limit: FETCH_ALL_LIMIT,
offset: 0,
user_id,
};
/// List `entries` rows matching params (paged, ordered per `params.sort`).
/// Does not read the `secrets` table.
pub async fn list_entries(pool: &PgPool, params: SearchParams<'_>) -> Result<Vec<Entry>> {
fetch_entries_paged(pool, &params).await
}
async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<Entry>> {
/// Count `entries` rows matching the same filters as [`list_entries`] (ignores `sort` / `limit` / `offset`).
/// Does not read the `secrets` table.
pub async fn count_entries(pool: &PgPool, a: &SearchParams<'_>) -> Result<i64> {
let (where_clause, _) = entry_where_clause_and_next_idx(a);
let sql = format!("SELECT COUNT(*)::bigint FROM entries {where_clause}");
let mut q = sqlx::query_scalar::<_, i64>(&sql);
if let Some(uid) = a.user_id {
q = q.bind(uid);
}
if let Some(v) = a.folder {
q = q.bind(v);
}
if let Some(v) = a.entry_type {
q = q.bind(v);
}
if let Some(v) = a.name {
q = q.bind(v);
}
for tag in a.tags {
q = q.bind(tag);
}
if let Some(v) = a.query {
let pattern = format!("%{}%", v.replace('%', "\\%").replace('_', "\\_"));
q = q.bind(pattern);
}
let n = q.fetch_one(pool).await?;
Ok(n)
}
/// Shared WHERE clause and the next `$n` index (for LIMIT/OFFSET in paged queries).
fn entry_where_clause_and_next_idx(a: &SearchParams<'_>) -> (String, i32) {
let mut conditions: Vec<String> = Vec::new();
let mut idx: i32 = 1;
// user_id filtering — always comes first when present
if a.user_id.is_some() {
conditions.push(format!("user_id = ${}", idx));
idx += 1;
@@ -115,6 +112,55 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
idx += 1;
}
let where_clause = if conditions.is_empty() {
String::new()
} else {
format!("WHERE {}", conditions.join(" AND "))
};
(where_clause, idx)
}
pub async fn run(pool: &PgPool, params: SearchParams<'_>) -> Result<SearchResult> {
let entries = fetch_entries_paged(pool, &params).await?;
let entry_ids: Vec<Uuid> = entries.iter().map(|e| e.id).collect();
let secret_schemas = if !entry_ids.is_empty() {
fetch_secret_schemas(pool, &entry_ids).await?
} else {
HashMap::new()
};
Ok(SearchResult {
entries,
secret_schemas,
})
}
/// Fetch entries matching the given filters — returns all matching entries up to FETCH_ALL_LIMIT.
pub async fn fetch_entries(
pool: &PgPool,
folder: Option<&str>,
entry_type: Option<&str>,
name: Option<&str>,
tags: &[String],
query: Option<&str>,
user_id: Option<Uuid>,
) -> Result<Vec<Entry>> {
let params = SearchParams {
folder,
entry_type,
name,
tags,
query,
sort: "name",
limit: FETCH_ALL_LIMIT,
offset: 0,
user_id,
};
list_entries(pool, params).await
}
async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<Entry>> {
let (where_clause, idx) = entry_where_clause_and_next_idx(a);
let order = match a.sort {
"updated" => "updated_at DESC",
"created" => "created_at DESC",
@@ -122,14 +168,7 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
};
let limit_idx = idx;
idx += 1;
let offset_idx = idx;
let where_clause = if conditions.is_empty() {
String::new()
} else {
format!("WHERE {}", conditions.join(" AND "))
};
let offset_idx = idx + 1;
let sql = format!(
"SELECT id, user_id, folder, type, name, notes, tags, metadata, version, \
@@ -138,7 +177,6 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
);
let mut q = sqlx::query_as::<_, EntryRaw>(&sql);
if let Some(uid) = a.user_id {
q = q.bind(uid);
}


@@ -5,7 +5,7 @@ use uuid::Uuid;
use crate::crypto;
use crate::db;
use crate::models::EntryRow;
use crate::models::{EntryRow, EntryWriteRow};
use crate::service::add::{
collect_field_paths, collect_key_paths, flatten_json_fields, insert_path, parse_key_path,
parse_kv, remove_path,
@@ -306,3 +306,118 @@ pub async fn run(
remove_secrets: remove_secret_keys,
})
}
/// Update non-sensitive entry columns by primary key (multi-tenant: `user_id` must match).
/// Does not read or modify `secrets` rows.
pub struct UpdateEntryFieldsByIdParams<'a> {
pub folder: &'a str,
pub entry_type: &'a str,
pub name: &'a str,
pub notes: &'a str,
pub tags: &'a [String],
pub metadata: &'a serde_json::Value,
}
pub async fn update_fields_by_id(
pool: &PgPool,
entry_id: Uuid,
user_id: Uuid,
params: UpdateEntryFieldsByIdParams<'_>,
) -> Result<()> {
if params.folder.len() > 128 {
anyhow::bail!("folder must be at most 128 characters");
}
if params.entry_type.len() > 64 {
anyhow::bail!("type must be at most 64 characters");
}
if params.name.len() > 256 {
anyhow::bail!("name must be at most 256 characters");
}
let mut tx = pool.begin().await?;
let row: Option<EntryWriteRow> = sqlx::query_as(
"SELECT id, version, folder, type, name, tags, metadata, notes FROM entries \
WHERE id = $1 AND user_id = $2 FOR UPDATE",
)
.bind(entry_id)
.bind(user_id)
.fetch_optional(&mut *tx)
.await?;
let row = match row {
Some(r) => r,
None => {
tx.rollback().await?;
anyhow::bail!("Entry not found");
}
};
if let Err(e) = db::snapshot_entry_history(
&mut tx,
db::EntrySnapshotParams {
entry_id: row.id,
user_id: Some(user_id),
folder: &row.folder,
entry_type: &row.entry_type,
name: &row.name,
version: row.version,
action: "update",
tags: &row.tags,
metadata: &row.metadata,
},
)
.await
{
tracing::warn!(error = %e, "failed to snapshot entry history before web update");
}
let res = sqlx::query(
"UPDATE entries SET folder = $1, type = $2, name = $3, notes = $4, tags = $5, metadata = $6, \
version = version + 1, updated_at = NOW() \
WHERE id = $7 AND version = $8",
)
.bind(params.folder)
.bind(params.entry_type)
.bind(params.name)
.bind(params.notes)
.bind(params.tags)
.bind(params.metadata)
.bind(row.id)
.bind(row.version)
.execute(&mut *tx)
.await
.map_err(|e| {
if let sqlx::Error::Database(ref d) = e
&& d.code().as_deref() == Some("23505")
{
return anyhow::anyhow!(
"An entry with this folder and name already exists for your account."
);
}
e.into()
})?;
if res.rows_affected() == 0 {
tx.rollback().await?;
anyhow::bail!("Concurrent modification detected. Please refresh and try again.");
}
crate::audit::log_tx(
&mut tx,
Some(user_id),
"update",
params.folder,
params.entry_type,
params.name,
serde_json::json!({
"source": "web",
"entry_id": entry_id,
"fields": ["folder", "type", "name", "notes", "tags", "metadata"],
}),
)
.await;
tx.commit().await?;
Ok(())
}


@@ -1,6 +1,6 @@
[package]
name = "secrets-mcp"
version = "0.3.2"
version = "0.3.7"
edition.workspace = true
[[bin]]


@@ -21,7 +21,7 @@ use tower_sessions_sqlx_store_chrono::PostgresStore;
use tracing_subscriber::EnvFilter;
use tracing_subscriber::fmt::time::FormatTime;
use secrets_core::config::resolve_db_url;
use secrets_core::config::resolve_db_config;
use secrets_core::db::{create_pool, migrate};
use crate::oauth::OAuthConfig;
@@ -78,9 +78,9 @@ async fn main() -> Result<()> {
.init();
// ── Database ──────────────────────────────────────────────────────────────
let db_url = resolve_db_url("")
let db_config = resolve_db_config("")
.context("Database not configured. Set SECRETS_DATABASE_URL environment variable.")?;
let pool = create_pool(&db_url)
let pool = create_pool(&db_config)
.await
.context("failed to connect to database")?;
migrate(&pool)

View File

@@ -8,9 +8,10 @@ use axum::{
extract::{ConnectInfo, Path, Query, State},
http::{HeaderMap, StatusCode, header},
response::{Html, IntoResponse, Redirect, Response},
routing::{get, post},
routing::{get, patch, post},
};
use serde::{Deserialize, Serialize};
use serde_json::json;
use tower_sessions::Session;
use uuid::Uuid;
@@ -19,6 +20,9 @@ use secrets_core::crypto::hex;
use secrets_core::service::{
api_key::{ensure_api_key, regenerate_api_key},
audit_log::list_for_user,
delete::delete_by_id,
search::{SearchParams, count_entries, list_entries},
update::{UpdateEntryFieldsByIdParams, update_fields_by_id},
user::{
OAuthProfile, bind_oauth_account, find_or_create_user, get_user_by_id,
unbind_oauth_account, update_user_key_setup,
@@ -78,6 +82,44 @@ struct AuditEntryView {
detail: String,
}
#[derive(Template)]
#[template(path = "entries.html")]
struct EntriesPageTemplate {
user_name: String,
user_email: String,
entries: Vec<EntryListItemView>,
total_count: i64,
shown_count: usize,
limit: u32,
filter_folder: String,
filter_type: String,
version: &'static str,
}
/// Non-sensitive fields only (no `secrets` / ciphertext).
struct EntryListItemView {
id: String,
folder: String,
entry_type: String,
name: String,
notes: String,
tags: String,
metadata: String,
/// RFC3339 UTC for `<time datetime>`; localized in entries.html.
updated_at_iso: String,
}
/// Cap for HTML list (avoids loading unbounded rows into memory).
const ENTRIES_PAGE_LIMIT: u32 = 5_000;
#[derive(Deserialize)]
struct EntriesQuery {
folder: Option<String>,
/// URL query key is `type` (maps to DB column `entries.type`).
#[serde(rename = "type")]
entry_type: Option<String>,
}
// ── App state helpers ─────────────────────────────────────────────────────────
fn google_cfg(state: &AppState) -> Option<&OAuthConfig> {
@@ -149,6 +191,7 @@ pub fn web_router() -> Router<AppState> {
.route("/auth/google/callback", get(auth_google_callback))
.route("/auth/logout", post(auth_logout))
.route("/dashboard", get(dashboard))
.route("/entries", get(entries_page))
.route("/audit", get(audit_page))
.route("/account/bind/google", get(account_bind_google))
.route(
@@ -160,6 +203,10 @@ pub fn web_router() -> Router<AppState> {
.route("/api/key-setup", post(api_key_setup))
.route("/api/apikey", get(api_apikey_get))
.route("/api/apikey/regenerate", post(api_apikey_regenerate))
.route(
"/api/entries/{id}",
patch(api_entry_patch).delete(api_entry_delete),
)
}
fn text_asset_response(content: &'static str, content_type: &'static str) -> Response {
@@ -478,6 +525,89 @@ async fn dashboard(
render_template(tmpl)
}
async fn entries_page(
State(state): State<AppState>,
session: Session,
Query(q): Query<EntriesQuery>,
) -> Result<Response, StatusCode> {
let Some(user_id) = current_user_id(&session).await else {
return Ok(Redirect::to("/login").into_response());
};
let user = match get_user_by_id(&state.pool, user_id).await.map_err(|e| {
tracing::error!(error = %e, %user_id, "failed to load user for entries page");
StatusCode::INTERNAL_SERVER_ERROR
})? {
Some(u) => u,
None => return Ok(Redirect::to("/login").into_response()),
};
let folder_filter = q
.folder
.as_ref()
.map(|s| s.trim())
.filter(|s| !s.is_empty())
.map(|s| s.to_string());
let type_filter = q
.entry_type
.as_ref()
.map(|s| s.trim())
.filter(|s| !s.is_empty())
.map(|s| s.to_string());
let params = SearchParams {
folder: folder_filter.as_deref(),
entry_type: type_filter.as_deref(),
name: None,
tags: &[],
query: None,
sort: "updated",
limit: ENTRIES_PAGE_LIMIT,
offset: 0,
user_id: Some(user_id),
};
let total_count = count_entries(&state.pool, &params).await.map_err(|e| {
tracing::error!(error = %e, "failed to count entries for web");
StatusCode::INTERNAL_SERVER_ERROR
})?;
let rows = list_entries(&state.pool, params).await.map_err(|e| {
tracing::error!(error = %e, "failed to load entries list for web");
StatusCode::INTERNAL_SERVER_ERROR
})?;
let shown_count = rows.len();
let entries = rows
.into_iter()
.map(|e| EntryListItemView {
id: e.id.to_string(),
folder: e.folder,
entry_type: e.entry_type,
name: e.name,
notes: e.notes,
tags: e.tags.join(", "),
metadata: serde_json::to_string_pretty(&e.metadata)
.unwrap_or_else(|_| "{}".to_string()),
updated_at_iso: e.updated_at.to_rfc3339_opts(SecondsFormat::Secs, true),
})
.collect();
let tmpl = EntriesPageTemplate {
user_name: user.name.clone(),
user_email: user.email.clone().unwrap_or_default(),
entries,
total_count,
shown_count,
limit: ENTRIES_PAGE_LIMIT,
filter_folder: folder_filter.unwrap_or_default(),
filter_type: type_filter.unwrap_or_default(),
version: env!("CARGO_PKG_VERSION"),
};
render_template(tmpl)
}
async fn audit_page(
State(state): State<AppState>,
session: Session,
@@ -751,6 +881,125 @@ async fn api_apikey_regenerate(
Ok(Json(ApiKeyResponse { api_key }))
}
// ── Entry management (Web UI, non-sensitive fields only) ───────────────────────
#[derive(Deserialize)]
struct EntryPatchBody {
folder: String,
#[serde(rename = "type")]
entry_type: String,
name: String,
notes: String,
tags: Vec<String>,
metadata: serde_json::Value,
}
type EntryApiError = (StatusCode, Json<serde_json::Value>);
fn map_entry_mutation_err(e: anyhow::Error) -> EntryApiError {
let msg = e.to_string();
if msg.contains("Entry not found") {
return (
StatusCode::NOT_FOUND,
Json(json!({ "error": "条目不存在或无权访问" })),
);
}
if msg.contains("already exists") {
return (
StatusCode::CONFLICT,
Json(json!({ "error": "该账号下已存在相同 folder + name 的条目" })),
);
}
if msg.contains("Concurrent modification") {
return (
StatusCode::CONFLICT,
Json(json!({ "error": "条目已被修改,请刷新后重试" })),
);
}
if msg.contains("must be at most") {
return (StatusCode::BAD_REQUEST, Json(json!({ "error": msg })));
}
tracing::error!(error = %e, "entry mutation failed");
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(json!({ "error": "操作失败,请稍后重试" })),
)
}
async fn api_entry_patch(
State(state): State<AppState>,
session: Session,
Path(entry_id): Path<Uuid>,
Json(body): Json<EntryPatchBody>,
) -> Result<Json<serde_json::Value>, EntryApiError> {
let user_id = current_user_id(&session)
.await
.ok_or((StatusCode::UNAUTHORIZED, Json(json!({ "error": "未登录" }))))?;
let folder = body.folder.trim();
let entry_type = body.entry_type.trim();
let name = body.name.trim();
let notes = body.notes.trim();
if name.is_empty() {
return Err((
StatusCode::BAD_REQUEST,
Json(json!({ "error": "name 不能为空" })),
));
}
let tags: Vec<String> = body
.tags
.into_iter()
.map(|t| t.trim().to_string())
.filter(|t| !t.is_empty())
.collect();
if !body.metadata.is_object() {
return Err((
StatusCode::BAD_REQUEST,
Json(json!({ "error": "metadata 必须是 JSON 对象" })),
));
}
update_fields_by_id(
&state.pool,
entry_id,
user_id,
UpdateEntryFieldsByIdParams {
folder,
entry_type,
name,
notes,
tags: &tags,
metadata: &body.metadata,
},
)
.await
.map_err(map_entry_mutation_err)?;
Ok(Json(json!({ "ok": true })))
}
async fn api_entry_delete(
State(state): State<AppState>,
session: Session,
Path(entry_id): Path<Uuid>,
) -> Result<Json<serde_json::Value>, EntryApiError> {
let user_id = current_user_id(&session)
.await
.ok_or((StatusCode::UNAUTHORIZED, Json(json!({ "error": "未登录" }))))?;
let result = delete_by_id(&state.pool, entry_id, user_id)
.await
.map_err(map_entry_mutation_err)?;
Ok(Json(json!({
"ok": true,
"migrated": result.migrated,
})))
}
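
`map_entry_mutation_err` above keys HTTP statuses off substrings of the underlying error message. Stripped of the JSON bodies, the mapping reduces to this (a hedged sketch, not the handler itself):

```rust
// Message-substring → HTTP status, mirroring map_entry_mutation_err:
// not-found → 404, uniqueness/version conflicts → 409,
// length validation → 400, anything else → 500.
fn status_for(msg: &str) -> u16 {
    if msg.contains("Entry not found") {
        404
    } else if msg.contains("already exists") || msg.contains("Concurrent modification") {
        409
    } else if msg.contains("must be at most") {
        400
    } else {
        500
    }
}

fn main() {
    assert_eq!(status_for("Entry not found"), 404);
    assert_eq!(
        status_for("An entry with this folder and name already exists for your account."),
        409
    );
    assert_eq!(
        status_for("Concurrent modification detected. Please refresh and try again."),
        409
    );
    assert_eq!(status_for("name must be at most 255 characters"), 400);
    assert_eq!(status_for("connection reset by peer"), 500);
    println!("ok");
}
```

Matching on display strings is brittle if the messages change; typed error variants would be sturdier, but this mirrors the current code.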
// ── OAuth / Well-known ────────────────────────────────────────────────────────
/// RFC 9728 — OAuth 2.0 Protected Resource Metadata.

View File

@@ -92,6 +92,7 @@
<a href="/dashboard" class="sidebar-logo"><span>secrets</span></a>
<nav class="sidebar-menu">
<a href="/dashboard" class="sidebar-link">MCP</a>
<a href="/entries" class="sidebar-link">条目</a>
<a href="/audit" class="sidebar-link active">审计</a>
</nav>
</aside>

View File

@@ -174,6 +174,7 @@
<a href="/dashboard" class="sidebar-logo"><span>secrets</span></a>
<nav class="sidebar-menu">
<a href="/dashboard" class="sidebar-link active">MCP</a>
<a href="/entries" class="sidebar-link">条目</a>
<a href="/audit" class="sidebar-link">审计</a>
</nav>
</aside>

View File

@@ -0,0 +1,390 @@
<!DOCTYPE html>
<html lang="zh-CN">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="icon" href="/favicon.svg?v={{ version }}" type="image/svg+xml">
<title>Secrets — 条目</title>
<style>
@import url('https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@400;600&family=Inter:wght@400;500;600&display=swap');
*, *::before, *::after { box-sizing: border-box; margin: 0; padding: 0; }
:root {
--bg: #0d1117; --surface: #161b22; --surface2: #21262d;
--border: #30363d; --text: #e6edf3; --text-muted: #8b949e;
--accent: #58a6ff; --accent-hover: #79b8ff;
}
body { background: var(--bg); color: var(--text); font-family: 'Inter', sans-serif; min-height: 100vh; }
.layout { display: flex; min-height: 100vh; }
.sidebar {
width: 220px; flex-shrink: 0; background: var(--surface); border-right: 1px solid var(--border);
padding: 24px 16px; display: flex; flex-direction: column; gap: 20px;
}
.sidebar-logo { font-family: 'JetBrains Mono', monospace; font-size: 16px; font-weight: 600;
color: var(--text); text-decoration: none; padding: 0 10px; }
.sidebar-logo span { color: var(--accent); }
.sidebar-menu { display: flex; flex-direction: column; gap: 6px; }
.sidebar-link {
padding: 10px 12px; border-radius: 8px; color: var(--text-muted); text-decoration: none;
border: 1px solid transparent; font-size: 13px; font-weight: 500;
}
.sidebar-link:hover { background: var(--surface2); color: var(--text); }
.sidebar-link.active {
background: rgba(88,166,255,0.12); color: var(--text); border-color: rgba(88,166,255,0.35);
}
.content-shell { flex: 1; min-width: 0; display: flex; flex-direction: column; }
.topbar {
background: var(--surface); border-bottom: 1px solid var(--border); padding: 0 24px;
display: flex; align-items: center; gap: 12px; min-height: 52px;
}
.topbar-spacer { flex: 1; }
.nav-user { font-size: 13px; color: var(--text-muted); }
.btn-sign-out {
padding: 5px 12px; border-radius: 6px; border: 1px solid var(--border);
background: none; color: var(--text); font-size: 12px; text-decoration: none; cursor: pointer;
}
.btn-sign-out:hover { background: var(--surface2); }
.main { padding: 32px 24px 40px; flex: 1; }
.card { background: var(--surface); border: 1px solid var(--border); border-radius: 12px;
padding: 24px; width: 100%; max-width: 1280px; margin: 0 auto; }
.card-title { font-size: 20px; font-weight: 600; margin-bottom: 8px; }
.card-subtitle { color: var(--text-muted); font-size: 13px; margin-bottom: 20px; }
.filter-bar {
display: flex; flex-wrap: wrap; align-items: flex-end; gap: 12px 16px;
margin-bottom: 20px; padding: 16px; background: var(--bg); border: 1px solid var(--border);
border-radius: 10px;
}
.filter-field { display: flex; flex-direction: column; gap: 6px; min-width: 140px; flex: 1; }
.filter-field label { font-size: 12px; color: var(--text-muted); font-weight: 500; }
.filter-field input {
background: var(--surface); border: 1px solid var(--border); border-radius: 6px;
color: var(--text); padding: 8px 10px; font-size: 13px; font-family: 'JetBrains Mono', monospace;
outline: none; width: 100%;
}
.filter-field input:focus { border-color: var(--accent); }
.filter-actions { display: flex; flex-wrap: wrap; align-items: center; gap: 8px; }
.btn-filter {
padding: 8px 16px; border-radius: 6px; border: none; background: var(--accent); color: #0d1117;
font-size: 13px; font-weight: 600; cursor: pointer;
}
.btn-filter:hover { background: var(--accent-hover); }
.btn-clear {
padding: 8px 14px; border-radius: 6px; border: 1px solid var(--border); background: transparent;
color: var(--text-muted); font-size: 13px; text-decoration: none; cursor: pointer;
}
.btn-clear:hover { background: var(--surface2); color: var(--text); }
.empty { color: var(--text-muted); font-size: 14px; padding: 20px 0; }
.table-wrap { overflow-x: auto; }
table { width: 100%; border-collapse: collapse; min-width: 720px; }
th, td { text-align: left; vertical-align: top; padding: 12px 10px; border-top: 1px solid var(--border); }
th { color: var(--text-muted); font-size: 12px; font-weight: 600; white-space: nowrap; }
td { font-size: 13px; }
.mono { font-family: 'JetBrains Mono', monospace; }
.cell-notes, .cell-meta {
max-width: 280px; word-break: break-word;
}
.notes-scroll {
max-height: 160px;
overflow: auto;
white-space: pre-wrap;
word-break: break-word;
padding: 8px;
background: var(--bg);
border: 1px solid var(--border);
border-radius: 8px;
font-size: 12px;
}
.detail {
background: var(--bg); border: 1px solid var(--border); border-radius: 8px;
padding: 10px; white-space: pre-wrap; word-break: break-word; font-size: 12px;
max-width: 320px; max-height: 160px; overflow: auto;
}
.col-actions { white-space: nowrap; }
.row-actions { display: flex; flex-wrap: wrap; gap: 6px; }
.btn-row {
padding: 4px 10px; border-radius: 6px; font-size: 12px; cursor: pointer;
border: 1px solid var(--border); background: var(--surface2); color: var(--text-muted);
font-family: inherit;
}
.btn-row:hover { color: var(--text); border-color: var(--text-muted); }
.btn-row.danger:hover { border-color: #f85149; color: #f85149; }
.modal-overlay {
position: fixed; inset: 0; background: rgba(1, 4, 9, 0.65); z-index: 200;
display: flex; align-items: center; justify-content: center; padding: 16px;
}
.modal-overlay[hidden] { display: none !important; }
.modal {
background: var(--surface); border: 1px solid var(--border); border-radius: 12px;
padding: 22px; width: 100%; max-width: 520px; max-height: 90vh; overflow: auto;
box-shadow: 0 16px 48px rgba(0,0,0,0.45);
}
.modal-title { font-size: 16px; font-weight: 600; margin-bottom: 14px; }
.modal-field { margin-bottom: 12px; }
.modal-field label { display: block; font-size: 12px; color: var(--text-muted); margin-bottom: 5px; }
.modal-field input, .modal-field textarea {
width: 100%; background: var(--bg); border: 1px solid var(--border); border-radius: 6px;
color: var(--text); padding: 8px 10px; font-size: 13px; font-family: 'JetBrains Mono', monospace;
outline: none;
}
.modal-field textarea { min-height: 72px; resize: vertical; }
.modal-field textarea.metadata-edit { min-height: 140px; }
.modal-error { color: #f85149; font-size: 12px; margin-bottom: 10px; display: none; }
.modal-error.visible { display: block; }
.modal-footer { display: flex; flex-wrap: wrap; gap: 8px; justify-content: flex-end; margin-top: 16px; }
.btn-modal { padding: 8px 16px; border-radius: 6px; font-size: 13px; cursor: pointer; font-family: inherit; border: 1px solid var(--border); background: transparent; color: var(--text); }
.btn-modal.primary { background: var(--accent); color: #0d1117; border-color: transparent; font-weight: 600; }
.btn-modal.primary:hover { background: var(--accent-hover); }
.btn-modal.danger { border-color: #f85149; color: #f85149; }
@media (max-width: 900px) {
.layout { flex-direction: column; }
.sidebar {
width: 100%; border-right: none; border-bottom: 1px solid var(--border);
padding: 16px; gap: 14px;
}
.sidebar-menu { flex-direction: row; flex-wrap: wrap; }
.sidebar-link { flex: 1; text-align: center; min-width: 72px; }
.main { padding: 20px 12px 28px; }
.card { padding: 16px; }
.topbar { padding: 12px 16px; flex-wrap: wrap; }
table, thead, tbody, th, td, tr { display: block; }
thead { display: none; }
tr { border-top: 1px solid var(--border); padding: 12px 0; }
td { border-top: none; padding: 6px 0; max-width: none; }
td::before {
display: block; color: var(--text-muted); font-size: 11px;
margin-bottom: 4px; text-transform: uppercase;
}
td.col-updated::before { content: "更新"; }
td.col-folder::before { content: "Folder"; }
td.col-type::before { content: "Type"; }
td.col-name::before { content: "Name"; }
td.col-notes::before { content: "Notes"; }
td.col-tags::before { content: "Tags"; }
td.col-meta::before { content: "Metadata"; }
td.col-actions::before { content: "操作"; }
.detail { max-width: none; }
.notes-scroll { max-width: none; }
}
</style>
</head>
<body>
<div class="layout">
<aside class="sidebar">
<a href="/dashboard" class="sidebar-logo"><span>secrets</span></a>
<nav class="sidebar-menu">
<a href="/dashboard" class="sidebar-link">MCP</a>
<a href="/entries" class="sidebar-link active">条目</a>
<a href="/audit" class="sidebar-link">审计</a>
</nav>
</aside>
<div class="content-shell">
<div class="topbar">
<span class="topbar-spacer"></span>
<span class="nav-user">{{ user_name }}{% if !user_email.is_empty() %} · {{ user_email }}{% endif %}</span>
<form action="/auth/logout" method="post" style="display:inline">
<button type="submit" class="btn-sign-out">退出</button>
</form>
</div>
<main class="main">
<section class="card">
<div class="card-title">我的条目</div>
<div class="card-subtitle">在当前筛选条件下,共 <strong>{{ total_count }}</strong> 条记录;本页显示 <strong>{{ shown_count }}</strong> 条(按更新时间降序,单页最多 {{ limit }} 条)。不含密文字段。时间为浏览器本地时区。</div>
<form class="filter-bar" method="get" action="/entries">
<div class="filter-field">
<label for="filter-folder">Folder(精确匹配)</label>
<input id="filter-folder" name="folder" type="text" value="{{ filter_folder }}" placeholder="例如 refining" autocomplete="off">
</div>
<div class="filter-field">
<label for="filter-type">Type(精确匹配)</label>
<input id="filter-type" name="type" type="text" value="{{ filter_type }}" placeholder="例如 server" autocomplete="off">
</div>
<div class="filter-actions">
<button type="submit" class="btn-filter">筛选</button>
<a href="/entries" class="btn-clear">清空</a>
</div>
</form>
{% if entries.is_empty() %}
<div class="empty">暂无条目。</div>
{% else %}
<div class="table-wrap">
<table>
<thead>
<tr>
<th>更新</th>
<th>Folder</th>
<th>Type</th>
<th>Name</th>
<th>Notes</th>
<th>Tags</th>
<th>Metadata</th>
<th>操作</th>
</tr>
</thead>
<tbody>
{% for entry in entries %}
<tr data-entry-id="{{ entry.id }}">
<td class="col-updated mono"><time class="entry-local-time" datetime="{{ entry.updated_at_iso }}">{{ entry.updated_at_iso }}</time></td>
<td class="col-folder mono cell-folder">{{ entry.folder }}</td>
<td class="col-type mono cell-type">{{ entry.entry_type }}</td>
<td class="col-name mono cell-name">{{ entry.name }}</td>
<td class="col-notes cell-notes">{% if !entry.notes.is_empty() %}<div class="notes-scroll cell-notes-val">{{ entry.notes }}</div>{% endif %}</td>
<td class="col-tags mono cell-tags-val">{{ entry.tags }}</td>
<td class="col-meta cell-meta"><pre class="detail cell-meta-val">{{ entry.metadata }}</pre></td>
<td class="col-actions">
<div class="row-actions">
<button type="button" class="btn-row btn-edit">编辑</button>
<button type="button" class="btn-row danger btn-del">删除</button>
</div>
</td>
</tr>
{% endfor %}
</tbody>
</table>
</div>
{% endif %}
</section>
</main>
</div>
</div>
<div id="edit-overlay" class="modal-overlay" hidden>
<div class="modal" role="dialog" aria-modal="true" aria-labelledby="edit-title">
<div class="modal-title" id="edit-title">编辑条目</div>
<div id="edit-error" class="modal-error"></div>
<div class="modal-field"><label for="edit-folder">Folder</label><input id="edit-folder" type="text" autocomplete="off"></div>
<div class="modal-field"><label for="edit-type">Type</label><input id="edit-type" type="text" autocomplete="off"></div>
<div class="modal-field"><label for="edit-name">Name</label><input id="edit-name" type="text" autocomplete="off"></div>
<div class="modal-field"><label for="edit-notes">Notes</label><textarea id="edit-notes"></textarea></div>
<div class="modal-field"><label for="edit-tags">Tags(逗号分隔)</label><input id="edit-tags" type="text" autocomplete="off"></div>
<div class="modal-field"><label for="edit-metadata">Metadata(JSON 对象)</label><textarea id="edit-metadata" class="metadata-edit"></textarea></div>
<div class="modal-footer">
<button type="button" class="btn-modal" id="edit-cancel">取消</button>
<button type="button" class="btn-modal primary" id="edit-save">保存</button>
</div>
</div>
</div>
<script>
(function () {
document.querySelectorAll('time.entry-local-time[datetime]').forEach(function (el) {
var raw = el.getAttribute('datetime');
var d = raw ? new Date(raw) : null;
if (d && !isNaN(d.getTime())) {
el.textContent = d.toLocaleString(undefined, { dateStyle: 'medium', timeStyle: 'medium' });
el.title = raw + ' (UTC)';
}
});
var editOverlay = document.getElementById('edit-overlay');
var editError = document.getElementById('edit-error');
var editFolder = document.getElementById('edit-folder');
var editType = document.getElementById('edit-type');
var editName = document.getElementById('edit-name');
var editNotes = document.getElementById('edit-notes');
var editTags = document.getElementById('edit-tags');
var editMetadata = document.getElementById('edit-metadata');
var currentEntryId = null;
function showEditErr(msg) {
editError.textContent = msg || '';
editError.classList.toggle('visible', !!msg);
}
function openEdit(tr) {
var id = tr.getAttribute('data-entry-id');
if (!id) return;
currentEntryId = id;
showEditErr('');
editFolder.value = tr.querySelector('.cell-folder') ? tr.querySelector('.cell-folder').textContent.trim() : '';
editType.value = tr.querySelector('.cell-type') ? tr.querySelector('.cell-type').textContent.trim() : '';
editName.value = tr.querySelector('.cell-name') ? tr.querySelector('.cell-name').textContent.trim() : '';
editNotes.value = tr.querySelector('.cell-notes-val') ? tr.querySelector('.cell-notes-val').textContent : '';
var tagsText = tr.querySelector('.cell-tags-val') ? tr.querySelector('.cell-tags-val').textContent.trim() : '';
editTags.value = tagsText;
var metaPre = tr.querySelector('.cell-meta-val');
editMetadata.value = metaPre ? metaPre.textContent : '{}';
editOverlay.hidden = false;
}
function closeEdit() {
editOverlay.hidden = true;
currentEntryId = null;
showEditErr('');
}
document.getElementById('edit-cancel').addEventListener('click', closeEdit);
editOverlay.addEventListener('click', function (e) {
if (e.target === editOverlay) closeEdit();
});
document.getElementById('edit-save').addEventListener('click', function () {
if (!currentEntryId) return;
var meta;
try {
meta = JSON.parse(editMetadata.value);
} catch (err) {
showEditErr('Metadata 不是合法 JSON');
return;
}
if (meta === null || typeof meta !== 'object' || Array.isArray(meta)) {
showEditErr('Metadata 必须是 JSON 对象');
return;
}
var tags = editTags.value.split(',').map(function (s) { return s.trim(); }).filter(Boolean);
var body = {
folder: editFolder.value,
type: editType.value,
name: editName.value.trim(),
notes: editNotes.value,
tags: tags,
metadata: meta
};
showEditErr('');
fetch('/api/entries/' + encodeURIComponent(currentEntryId), {
method: 'PATCH',
headers: { 'Content-Type': 'application/json' },
credentials: 'same-origin',
body: JSON.stringify(body)
}).then(function (r) {
return r.json().then(function (data) {
if (!r.ok) throw new Error(data.error || ('HTTP ' + r.status));
return data;
});
}).then(function () {
closeEdit();
window.location.reload();
}).catch(function (e) {
showEditErr(e.message || String(e));
});
});
document.querySelectorAll('tr[data-entry-id]').forEach(function (tr) {
tr.querySelector('.btn-edit').addEventListener('click', function () { openEdit(tr); });
tr.querySelector('.btn-del').addEventListener('click', function () {
var id = tr.getAttribute('data-entry-id');
var nameEl = tr.querySelector('.cell-name');
var name = nameEl ? nameEl.textContent.trim() : '';
if (!id) return;
if (!confirm('确定删除条目「' + name + '」?')) return;
fetch('/api/entries/' + encodeURIComponent(id), { method: 'DELETE', credentials: 'same-origin' })
.then(function (r) {
return r.json().then(function (data) {
if (!r.ok) throw new Error(data.error || ('HTTP ' + r.status));
return data;
});
})
.then(function (data) {
if (data && Array.isArray(data.migrated) && data.migrated.length > 0) {
alert('已自动迁移共享 key 引用:' + data.migrated.length + ' 个条目完成重定向。');
}
window.location.reload();
})
.catch(function (e) { alert(e.message || String(e)); });
});
});
})();
</script>
</body>
</html>

View File

@@ -3,7 +3,13 @@
# ─── 数据库 ───────────────────────────────────────────────────────────
# Web 会话(tower-sessions)与业务数据共用此库;启动时会自动 migrate 会话表,无需额外环境变量。
SECRETS_DATABASE_URL=postgres://postgres:PASSWORD@HOST:PORT/secrets-mcp
SECRETS_DATABASE_URL=postgres://postgres:PASSWORD@db.refining.ltd:5432/secrets-mcp
# 强烈建议生产使用 verify-full(至少 verify-ca)
SECRETS_DATABASE_SSL_MODE=verify-full
# 私有 CA 或自建链路时填写 CA 根证书路径;使用公共受信 CA 可留空
# SECRETS_DATABASE_SSL_ROOT_CERT=/etc/secrets/pg-ca.crt
# 当设为 prod/production 时,服务会拒绝弱 TLS 模式(prefer/disable/allow/require)
SECRETS_ENV=production
# ─── 服务地址 ─────────────────────────────────────────────────────────
# 内网监听地址(Cloudflare / Nginx 反代时填内网端口)
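
Per the comment above, the service refuses weak TLS modes whenever `SECRETS_ENV` is `prod`/`production`. A minimal sketch of that guard (function names assumed for illustration; not necessarily `secrets-core`'s actual API):

```rust
/// Weak libpq-style sslmode values: they allow plaintext or skip
/// certificate verification entirely.
fn is_weak_ssl_mode(mode: &str) -> bool {
    matches!(mode, "disable" | "allow" | "prefer" | "require")
}

/// Reject weak modes in production; anything is accepted in dev.
fn validate_ssl_mode(env: &str, mode: &str) -> Result<(), String> {
    if matches!(env, "prod" | "production") && is_weak_ssl_mode(mode) {
        return Err(format!(
            "refusing weak sslmode `{mode}` in production; use verify-ca or verify-full"
        ));
    }
    Ok(())
}

fn main() {
    assert!(validate_ssl_mode("production", "prefer").is_err());
    assert!(validate_ssl_mode("production", "verify-full").is_ok());
    assert!(validate_ssl_mode("dev", "prefer").is_ok());
    println!("ok");
}
```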

View File

@@ -0,0 +1,92 @@
# PostgreSQL TLS Hardening Runbook
This runbook applies to:
- PostgreSQL server: `47.117.131.22` (`db.refining.ltd`)
- `secrets-mcp` app server: `47.238.146.244` (`secrets.refining.app`)
## 1) Issue certificate for `db.refining.ltd` (Let's Encrypt + Cloudflare DNS-01)
Install `acme.sh` on the PostgreSQL server and use a Cloudflare API token with DNS edit permission for the target zone.
```bash
curl https://get.acme.sh | sh -s email=ops@refining.ltd
export CF_Token="your_cloudflare_dns_token"
export CF_Zone_ID="your_zone_id"
~/.acme.sh/acme.sh --issue --dns dns_cf -d db.refining.ltd --keylength ec-256
```
Install the certificate and key into a path readable by PostgreSQL:
```bash
sudo mkdir -p /etc/postgresql/tls
sudo ~/.acme.sh/acme.sh --install-cert -d db.refining.ltd --ecc \
--fullchain-file /etc/postgresql/tls/fullchain.pem \
--key-file /etc/postgresql/tls/privkey.pem \
--reloadcmd "systemctl reload postgresql || systemctl restart postgresql"
sudo chown -R postgres:postgres /etc/postgresql/tls
sudo chmod 600 /etc/postgresql/tls/privkey.pem
sudo chmod 644 /etc/postgresql/tls/fullchain.pem
```
## 2) Configure PostgreSQL TLS and access rules
In `postgresql.conf`:
```conf
ssl = on
ssl_cert_file = '/etc/postgresql/tls/fullchain.pem'
ssl_key_file = '/etc/postgresql/tls/privkey.pem'
```
In `pg_hba.conf`, allow app traffic via TLS only (example):
```conf
hostssl secrets-mcp postgres 47.238.146.244/32 scram-sha-256
```
Keep a safe admin path (`local` socket or restricted source CIDR) before removing old plaintext `host` rules.
Reload PostgreSQL:
```bash
sudo systemctl reload postgresql
```
## 3) Verify server-side TLS
```bash
openssl s_client -starttls postgres -connect db.refining.ltd:5432 -servername db.refining.ltd
```
The handshake should succeed and the certificate should match `db.refining.ltd`.
## 4) Update `secrets-mcp` app server env
Use environment values like:
```bash
SECRETS_DATABASE_URL=postgres://postgres:***@db.refining.ltd:5432/secrets-mcp
SECRETS_DATABASE_SSL_MODE=verify-full
SECRETS_ENV=production
```
If you use a private CA instead of a public one, also set:
```bash
SECRETS_DATABASE_SSL_ROOT_CERT=/etc/secrets/pg-ca.crt
```
Restart `secrets-mcp` after updating env.
## 5) Verify from app server
Run positive and negative checks:
- Positive: app starts, migrations pass, dashboard + MCP API work.
- Negative:
- wrong hostname -> connection fails
- wrong CA file -> connection fails
- disable TLS on DB -> connection fails
This ensures no silent downgrade to weak TLS in production.
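
The negative checks above work because of what each sslmode actually verifies: only the `verify-*` modes check the CA chain, and only `verify-full` pins the certificate to the hostname. A small reference sketch (illustrative helpers, not part of `secrets-core`):

```rust
/// Does this libpq-style sslmode validate the server certificate chain?
fn verifies_ca(mode: &str) -> bool {
    matches!(mode, "verify-ca" | "verify-full")
}

/// Does it also check the certificate against the server hostname?
fn verifies_hostname(mode: &str) -> bool {
    mode == "verify-full"
}

fn main() {
    // `require` encrypts but trusts any certificate: the silent-downgrade risk.
    assert!(!verifies_ca("require"));
    // `verify-ca` checks the chain but not the server name.
    assert!(verifies_ca("verify-ca") && !verifies_hostname("verify-ca"));
    // `verify-full` is the only mode where the "wrong hostname" check fails.
    assert!(verifies_hostname("verify-full"));
    println!("ok");
}
```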