Compare commits

8 commits: secrets-mc ... b99d821644

| SHA1 |
|---|
| b99d821644 |
| 32f275f88a |
| c6fb457734 |
| df701f21b9 |
| c3c536200e |
| 7909f7102d |
| 87a29af82d |
| 1b11f7e976 |
.gitignore (vendored, 1 change)

@@ -5,3 +5,4 @@
 # JSON credential files downloaded from Google OAuth
 client_secret_*.apps.googleusercontent.com.json
 *.pem
+tmp/
@@ -118,7 +118,7 @@ oauth_accounts (

 ### PEM sharing (`key_ref`)

-Store a shared PEM as an entry of **`type=key`**; other records point to that key's `name` via `metadata.key_ref` (the `folder/name` form is supported for disambiguation). After updating the key record, referrers pick up the new key through the service-layer resolution and merge logic (implemented in `secrets_core::service::env_map`).
+Storing a shared PEM as an entry of **`type=key`** is recommended; other records point to the target entry's `name` via `metadata.key_ref` (the `folder/name` form is supported for disambiguation). When a key that is still referenced is deleted, the service automatically migrates it to a single copy plus redirects (the ciphertext is copied to the first referrer, and the remaining referrers are re-pointed to the new owner); the resolution logic lives in `secrets_core::service::env_map`.

 ## Code conventions

Cargo.lock (generated, 2 changes)

@@ -1968,7 +1968,7 @@ dependencies = [

 [[package]]
 name = "secrets-mcp"
-version = "0.3.2"
+version = "0.3.9"
 dependencies = [
  "anyhow",
  "askama",
README.md (26 changes)

@@ -17,7 +17,10 @@ cargo build --release -p secrets-mcp

 | Variable | Description |
 |------|------|
-| `SECRETS_DATABASE_URL` | **Required.** PostgreSQL connection string (a dedicated database such as `secrets-mcp` is recommended). |
+| `SECRETS_DATABASE_URL` | **Required.** PostgreSQL connection string (prefer a hostname such as `db.refining.ltd` over a raw IP). |
+| `SECRETS_DATABASE_SSL_MODE` | Optional, but strongly recommended in production. Prefer `verify-full` (at minimum `verify-ca`) to avoid falling back to weak TLS modes. |
+| `SECRETS_DATABASE_SSL_ROOT_CERT` | Optional. CA root certificate path for private-CA or self-signed chains (e.g. `/etc/secrets/pg-ca.crt`). |
+| `SECRETS_ENV` | Optional. When set to `prod` / `production`, weak PostgreSQL TLS modes (`prefer`, `disable`, `allow`, `require`) are rejected. |
 | `BASE_URL` | Public base URL; the OAuth callback is `{BASE_URL}/auth/google/callback`. Defaults to `http://localhost:9315`. |
 | `SECRETS_MCP_BIND` | Listen address, default `127.0.0.1:9315`. Use `0.0.0.0:9315` inside a container or when exposing the port directly; behind a reverse proxy it is usually `127.0.0.1:9315`. |
 | `GOOGLE_CLIENT_ID` / `GOOGLE_CLIENT_SECRET` | Optional; without them there is no Google login entry. Read from the environment at runtime; never write them into CI or bake them into the binary. |

@@ -27,9 +30,26 @@ cargo build --release -p secrets-mcp
 cargo run -p secrets-mcp
 ```

+Recommended production setup (PostgreSQL TLS):
+
+```bash
+SECRETS_DATABASE_URL=postgres://postgres:***@db.refining.ltd:5432/secrets-mcp
+SECRETS_DATABASE_SSL_MODE=verify-full
+SECRETS_DATABASE_SSL_ROOT_CERT=/etc/secrets/pg-ca.crt
+SECRETS_ENV=production
+```
+
 - **Web**: `BASE_URL` (login, dashboard, setting the passphrase, creating API keys).
 - **MCP**: Streamable HTTP base `{BASE_URL}/mcp`; requires `Authorization: Bearer <api_key>` and `X-Encryption-Key: <hex>` request headers (tools that read ciphertext must send the key).

+## PostgreSQL TLS hardening
+
+- Give the database its own hostname, e.g. `db.refining.ltd`, keeping the service domain at `secrets.refining.app`.
+- Use a verifiable certificate chain for the database (e.g. Let's Encrypt or a private CA) and make sure the certificate `SAN` covers `db.refining.ltd`.
+- On the PostgreSQL side, restrict application sources with `hostssl` rules (e.g. `47.238.146.244/32`) and phase out plaintext `host` access over the public network.
+- On the application side, prefer `SECRETS_DATABASE_SSL_MODE=verify-full`; use `verify-ca` only as a transitional step.
+- Step-by-step operational notes: [`deploy/postgres-tls-hardening.md`](deploy/postgres-tls-hardening.md).
+
 ## MCP and AI workflow (v0.3+)

 Entries are logically unique per user by **`(folder, name)`** (database unique index: `user_id + folder + name`). The same name can exist in different folders (e.g. `refining/aliyun` and `ricnsmart/aliyun`).
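A sketch of the `(folder, name)` disambiguation this implies for tool calls (the payload shape is an assumption; only `name` and `folder` come from the README):

```rust
use serde_json::json;

fn main() {
    // First attempt: bare name. If `aliyun` exists in several folders,
    // the server answers with a disambiguation error instead of an entry.
    let ambiguous = json!({ "name": "aliyun" });

    // Retry with `folder` to pin down one (folder, name) pair.
    let resolved = json!({ "name": "aliyun", "folder": "refining" });

    println!("{ambiguous}\n{resolved}");
}
```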
@@ -37,6 +57,7 @@ cargo run -p secrets-mcp

 - **`secrets_search`**: discover entries (filter by query / folder / type / name); does not require the encryption header.
 - **`secrets_get` / `secrets_update` / `secrets_delete` (by name) / `secrets_history` / `secrets_rollback`**: a bare `name` resolves directly when it is globally unique; if several entries share the name, a disambiguation error is returned and **`folder`** must be added to the arguments.
 - **`secrets_delete`**: with `dry_run=true` the disambiguation rules match a real delete; a unique match previews one entry, multiple matches error and require `folder`.
+- **Shared-key auto-migrating delete**: deleting a key entry that is still referenced via `metadata.key_ref` triggers an automatic migration: the ciphertext is copied to the first referrer, the remaining referrers' `key_ref` is redirected to the new owner, and the delete then proceeds.

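A minimal client-side sketch of the two MCP headers the tools above require (not part of this diff; the endpoint shape and the JSON-RPC `tools/list` payload are assumptions based on the README):

```rust
use anyhow::Result;

// Sketch: call the MCP endpoint with both required headers.
// `<api_key>` / `<hex>` are placeholders, not real credentials.
#[tokio::main]
async fn main() -> Result<()> {
    let client = reqwest::Client::new();
    let resp = client
        .post("http://localhost:9315/mcp")
        .header("Authorization", "Bearer <api_key>")
        .header("X-Encryption-Key", "<hex>")
        .json(&serde_json::json!({
            "jsonrpc": "2.0", "id": 1, "method": "tools/list"
        }))
        .send()
        .await?;
    println!("{}", resp.text().await?);
    Ok(())
}
```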
 ## Encryption architecture (hybrid E2EE)

@@ -147,7 +168,8 @@ flowchart LR

 ### PEM sharing (`key_ref`)

-The same PEM can be referenced by multiple `server`-style records: store the PEM as an entry of **`type=key`**, and write that key entry's `name` into the other entries' `metadata.key_ref` (the `folder/name` form is supported for disambiguation); on rotation, only the key record needs updating.
+The same PEM can be referenced by multiple `server`-style records: storing the PEM as an entry of **`type=key`** is recommended, writing the target entry's `name` into the other entries' `metadata.key_ref` (the `folder/name` form is supported for disambiguation); on rotation, only that target record needs updating.
+When a shared key is deleted, the system migrates the references automatically: the ciphertext is copied to the first referrer (a single copy), the remaining referrers' `key_ref` is redirected to that new owner, and the original key record is then deleted.

 ## Audit log

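For concreteness, a sketch of the `key_ref` shape (only `key_ref` and the `folder/name` form come from the text above; the entry values are made up):

```rust
use serde_json::json;

fn main() {
    // The shared key itself: an entry of type=key holding the PEM.
    let key_entry = json!({ "folder": "infra", "type": "key", "name": "deploy-pem" });

    // A server entry pointing at it; the folder/name form disambiguates.
    let server_entry = json!({
        "folder": "refining", "type": "server", "name": "web-1",
        "metadata": { "key_ref": "infra/deploy-pem" }
    });

    println!("{key_entry}\n{server_entry}");
}
```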
@@ -1,4 +1,15 @@
-use anyhow::Result;
+use std::path::PathBuf;
+
+use anyhow::{Context, Result};
+use sqlx::postgres::PgSslMode;
+
+#[derive(Debug, Clone)]
+pub struct DatabaseConfig {
+    pub url: String,
+    pub ssl_mode: Option<PgSslMode>,
+    pub ssl_root_cert: Option<PathBuf>,
+    pub enforce_strict_tls: bool,
+}

 /// Resolve database URL from environment.
 /// Priority: `SECRETS_DATABASE_URL` env var → error.

@@ -18,3 +29,54 @@ pub fn resolve_db_url(override_url: &str) -> Result<String> {
         Example: SECRETS_DATABASE_URL=postgres://user:pass@host:port/dbname"
     )
 }
+
+fn env_var_non_empty(name: &str) -> Option<String> {
+    std::env::var(name)
+        .ok()
+        .filter(|value| !value.trim().is_empty())
+}
+
+fn parse_ssl_mode_from_env() -> Result<Option<PgSslMode>> {
+    let Some(mode) = env_var_non_empty("SECRETS_DATABASE_SSL_MODE") else {
+        return Ok(None);
+    };
+
+    let parsed = mode.parse::<PgSslMode>().with_context(|| {
+        format!(
+            "Invalid SECRETS_DATABASE_SSL_MODE='{mode}'. Use one of: disable, allow, prefer, require, verify-ca, verify-full."
+        )
+    })?;
+    Ok(Some(parsed))
+}
+
+fn resolve_ssl_root_cert_from_env() -> Result<Option<PathBuf>> {
+    let Some(path) = env_var_non_empty("SECRETS_DATABASE_SSL_ROOT_CERT") else {
+        return Ok(None);
+    };
+    let path = PathBuf::from(path);
+    if !path.exists() {
+        anyhow::bail!(
+            "SECRETS_DATABASE_SSL_ROOT_CERT points to a missing file: {}",
+            path.display()
+        );
+    }
+    Ok(Some(path))
+}
+
+fn is_production_env() -> bool {
+    matches!(
+        env_var_non_empty("SECRETS_ENV")
+            .as_deref()
+            .map(|value| value.to_ascii_lowercase()),
+        Some(value) if value == "prod" || value == "production"
+    )
+}
+
+pub fn resolve_db_config(override_url: &str) -> Result<DatabaseConfig> {
+    Ok(DatabaseConfig {
+        url: resolve_db_url(override_url)?,
+        ssl_mode: parse_ssl_mode_from_env()?,
+        ssl_root_cert: resolve_ssl_root_cert_from_env()?,
+        enforce_strict_tls: is_production_env(),
+    })
+}
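// Editor's sketch (not part of the diff): expected behavior of the env
// helpers above, assuming a pre-2024 Rust edition where `set_var` is safe.
//
//     std::env::set_var("SECRETS_ENV", "Production");
//     assert!(is_production_env()); // case-insensitive "prod"/"production"
//     std::env::set_var("SECRETS_DATABASE_SSL_MODE", "verify-full");
//     assert!(matches!(parse_ssl_mode_from_env()?, Some(PgSslMode::VerifyFull)));
//     std::env::set_var("SECRETS_DATABASE_SSL_MODE", "full"); // typo
//     assert!(parse_ssl_mode_from_env().is_err()); // error lists the valid modes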
@@ -1,14 +1,45 @@
-use anyhow::Result;
+use std::str::FromStr;
+
+use anyhow::{Context, Result};
 use serde_json::Value;
 use sqlx::PgPool;
-use sqlx::postgres::PgPoolOptions;
+use sqlx::postgres::{PgConnectOptions, PgPoolOptions, PgSslMode};

-pub async fn create_pool(database_url: &str) -> Result<PgPool> {
+use crate::config::DatabaseConfig;
+
+fn build_connect_options(config: &DatabaseConfig) -> Result<PgConnectOptions> {
+    let mut options = PgConnectOptions::from_str(&config.url)
+        .with_context(|| "failed to parse SECRETS_DATABASE_URL".to_string())?;
+
+    if let Some(mode) = config.ssl_mode {
+        options = options.ssl_mode(mode);
+    }
+    if let Some(path) = &config.ssl_root_cert {
+        options = options.ssl_root_cert(path);
+    }
+
+    if config.enforce_strict_tls
+        && !matches!(
+            options.get_ssl_mode(),
+            PgSslMode::VerifyCa | PgSslMode::VerifyFull
+        )
+    {
+        anyhow::bail!(
+            "Refusing to start in production with weak PostgreSQL TLS mode. \
+             Set SECRETS_DATABASE_SSL_MODE=verify-ca or verify-full."
+        );
+    }
+
+    Ok(options)
+}
+
+pub async fn create_pool(config: &DatabaseConfig) -> Result<PgPool> {
     tracing::debug!("connecting to database");
+    let connect_options = build_connect_options(config)?;
     let pool = PgPoolOptions::new()
         .max_connections(10)
         .acquire_timeout(std::time::Duration::from_secs(5))
-        .connect(database_url)
+        .connect_with(connect_options)
         .await?;
     tracing::debug!("database connection established");
     Ok(pool)
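// Editor's sketch (not part of the diff): how config and pool compose at
// startup; the module paths and empty override URL are assumptions.
//
//     let config = crate::config::resolve_db_config("")?;
//     let pool = crate::db::create_pool(&config).await?; // refuses weak TLS in prod
//     crate::db::migrate(&pool).await?;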
@@ -52,16 +83,30 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
     -- ── secrets: one row per encrypted field ─────────────────────────────────
     CREATE TABLE IF NOT EXISTS secrets (
         id UUID PRIMARY KEY DEFAULT uuidv7(),
-        entry_id UUID NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
-        field_name VARCHAR(256) NOT NULL,
+        user_id UUID,
+        name VARCHAR(256) NOT NULL,
+        type VARCHAR(64) NOT NULL DEFAULT 'text',
         encrypted BYTEA NOT NULL DEFAULT '\x',
         version BIGINT NOT NULL DEFAULT 1,
         created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
-        updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
-        UNIQUE(entry_id, field_name)
+        updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
     );

-    CREATE INDEX IF NOT EXISTS idx_secrets_entry_id ON secrets(entry_id);
+    CREATE INDEX IF NOT EXISTS idx_secrets_user_id ON secrets(user_id) WHERE user_id IS NOT NULL;
+    CREATE UNIQUE INDEX IF NOT EXISTS idx_secrets_unique_user_name
+        ON secrets(user_id, name) WHERE user_id IS NOT NULL;
+    CREATE INDEX IF NOT EXISTS idx_secrets_name ON secrets(name);
+    CREATE INDEX IF NOT EXISTS idx_secrets_type ON secrets(type);
+
+    -- ── entry_secrets: N:N relation ────────────────────────────────────────────
+    CREATE TABLE IF NOT EXISTS entry_secrets (
+        entry_id UUID NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
+        secret_id UUID NOT NULL REFERENCES secrets(id) ON DELETE CASCADE,
+        sort_order INT NOT NULL DEFAULT 0,
+        created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
+        PRIMARY KEY(entry_id, secret_id)
+    );
+    CREATE INDEX IF NOT EXISTS idx_entry_secrets_secret_id ON entry_secrets(secret_id);
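    -- Editor's sketch (not part of the diff): with the N:N relation above, one
    -- ciphertext row can back several entries; re-linking is a plain insert and
    -- the composite primary key makes it idempotent:
    --
    --     INSERT INTO entry_secrets (entry_id, secret_id)
    --     VALUES ($1, $2) ON CONFLICT DO NOTHING;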
     -- ── audit_log: append-only operation log ─────────────────────────────────
     CREATE TABLE IF NOT EXISTS audit_log (

@@ -110,17 +155,13 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
     -- ── secrets_history: field-level snapshot ────────────────────────────────
     CREATE TABLE IF NOT EXISTS secrets_history (
         id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
-        entry_id UUID NOT NULL,
         secret_id UUID NOT NULL,
-        entry_version BIGINT NOT NULL,
-        field_name VARCHAR(256) NOT NULL,
+        name VARCHAR(256) NOT NULL,
         encrypted BYTEA NOT NULL DEFAULT '\x',
         action VARCHAR(16) NOT NULL,
         created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
     );

-    CREATE INDEX IF NOT EXISTS idx_secrets_history_entry_id
-        ON secrets_history(entry_id, entry_version DESC);
+    CREATE INDEX IF NOT EXISTS idx_secrets_history_secret_id
+        ON secrets_history(secret_id);

@@ -179,6 +220,16 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
         END IF;
     END $$;

+    DO $$ BEGIN
+        IF NOT EXISTS (
+            SELECT 1 FROM pg_constraint WHERE conname = 'fk_secrets_user_id'
+        ) THEN
+            ALTER TABLE secrets
+                ADD CONSTRAINT fk_secrets_user_id
+                FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE SET NULL;
+        END IF;
+    END $$;
+
     DO $$ BEGIN
         IF NOT EXISTS (
             SELECT 1 FROM pg_constraint WHERE conname = 'fk_audit_log_user_id'

@@ -468,10 +519,8 @@ pub async fn snapshot_entry_history(
 // ── Secret field-level history snapshot ──────────────────────────────────────

 pub struct SecretSnapshotParams<'a> {
-    pub entry_id: uuid::Uuid,
     pub secret_id: uuid::Uuid,
-    pub entry_version: i64,
-    pub field_name: &'a str,
+    pub name: &'a str,
     pub encrypted: &'a [u8],
     pub action: &'a str,
 }

@@ -482,13 +531,11 @@ pub async fn snapshot_secret_history(
 ) -> Result<()> {
     sqlx::query(
         "INSERT INTO secrets_history \
-         (entry_id, secret_id, entry_version, field_name, encrypted, action) \
-         VALUES ($1, $2, $3, $4, $5, $6)",
+         (secret_id, name, encrypted, action) \
+         VALUES ($1, $2, $3, $4)",
     )
-    .bind(p.entry_id)
     .bind(p.secret_id)
-    .bind(p.entry_version)
-    .bind(p.field_name)
+    .bind(p.name)
     .bind(p.encrypted)
     .bind(p.action)
     .execute(&mut **tx)
@@ -27,8 +27,11 @@ pub struct Entry {
 #[derive(Debug, Serialize, Deserialize, sqlx::FromRow)]
 pub struct SecretField {
     pub id: Uuid,
-    pub entry_id: Uuid,
-    pub field_name: String,
+    pub user_id: Option<Uuid>,
+    pub name: String,
+    #[serde(rename = "type")]
+    #[sqlx(rename = "type")]
+    pub secret_type: String,
     /// AES-256-GCM ciphertext: nonce(12B) || ciphertext+tag
     pub encrypted: Vec<u8>,
     pub version: i64,

@@ -51,11 +54,39 @@ pub struct EntryRow {
     pub notes: String,
 }

+/// Entry row including `name` (used for id-scoped web / service updates).
+#[derive(Debug, sqlx::FromRow)]
+pub struct EntryWriteRow {
+    pub id: Uuid,
+    pub version: i64,
+    pub folder: String,
+    #[sqlx(rename = "type")]
+    pub entry_type: String,
+    pub name: String,
+    pub tags: Vec<String>,
+    pub metadata: Value,
+    pub notes: String,
+}
+
+impl From<&EntryWriteRow> for EntryRow {
+    fn from(r: &EntryWriteRow) -> Self {
+        EntryRow {
+            id: r.id,
+            version: r.version,
+            folder: r.folder.clone(),
+            entry_type: r.entry_type.clone(),
+            tags: r.tags.clone(),
+            metadata: r.metadata.clone(),
+            notes: r.notes.clone(),
+        }
+    }
+}
+
 /// Minimal secret field row fetched before snapshots or cascade deletes.
 #[derive(Debug, sqlx::FromRow)]
 pub struct SecretFieldRow {
     pub id: Uuid,
-    pub field_name: String,
+    pub name: String,
     pub encrypted: Vec<u8>,
 }

@@ -1,6 +1,7 @@
 use anyhow::Result;
 use serde_json::{Map, Value};
 use sqlx::PgPool;
 use std::collections::{BTreeSet, HashSet};
 use std::fs;
 use uuid::Uuid;

@@ -176,6 +177,7 @@ pub struct AddParams<'a> {
     pub tags: &'a [String],
     pub meta_entries: &'a [String],
     pub secret_entries: &'a [String],
+    pub link_secret_names: &'a [String],
     /// Optional user_id for multi-user isolation (None = single-user CLI mode)
     pub user_id: Option<Uuid>,
 }

@@ -185,6 +187,11 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
     let secret_json = build_json(params.secret_entries)?;
     let meta_keys = collect_key_paths(params.meta_entries)?;
     let secret_keys = collect_key_paths(params.secret_entries)?;
+    let flat_fields = flatten_json_fields("", &secret_json);
+    let new_secret_names: BTreeSet<String> =
+        flat_fields.iter().map(|(name, _)| name.clone()).collect();
+    let link_secret_names =
+        validate_link_secret_names(params.link_secret_names, &new_secret_names)?;

     let mut tx = pool.begin().await?;

@@ -279,7 +286,8 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
         .await?
     };

-    let new_entry_version: i64 = sqlx::query_scalar("SELECT version FROM entries WHERE id = $1")
+    let current_entry_version: i64 =
+        sqlx::query_scalar("SELECT version FROM entries WHERE id = $1")
             .bind(entry_id)
             .fetch_one(&mut *tx)
             .await?;

@@ -293,7 +301,7 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
             folder: params.folder,
             entry_type: params.entry_type,
             name: params.name,
-            version: new_entry_version,
+            version: current_entry_version,
             action: "create",
             tags: params.tags,
             metadata: &metadata,

@@ -308,11 +316,15 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
         #[derive(sqlx::FromRow)]
         struct ExistingField {
             id: Uuid,
-            field_name: String,
+            name: String,
             encrypted: Vec<u8>,
         }
-        let existing_fields: Vec<ExistingField> =
-            sqlx::query_as("SELECT id, field_name, encrypted FROM secrets WHERE entry_id = $1")
+        let existing_fields: Vec<ExistingField> = sqlx::query_as(
+            "SELECT s.id, s.name, s.encrypted \
+             FROM entry_secrets es \
+             JOIN secrets s ON s.id = es.secret_id \
+             WHERE es.entry_id = $1",
+        )
         .bind(entry_id)
         .fetch_all(&mut *tx)
         .await?;

@@ -321,10 +333,8 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
             if let Err(e) = db::snapshot_secret_history(
                 &mut tx,
                 db::SecretSnapshotParams {
-                    entry_id,
                     secret_id: f.id,
-                    entry_version: new_entry_version - 1,
-                    field_name: &f.field_name,
+                    name: &f.name,
                     encrypted: &f.encrypted,
                     action: "add",
                 },

@@ -335,23 +345,70 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
             }
         }

-        sqlx::query("DELETE FROM secrets WHERE entry_id = $1")
+        sqlx::query("DELETE FROM entry_secrets WHERE entry_id = $1")
             .bind(entry_id)
             .execute(&mut *tx)
             .await?;

+        sqlx::query(
+            "DELETE FROM secrets s \
+             WHERE NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = s.id)",
+        )
+        .execute(&mut *tx)
+        .await?;
     }

-    let flat_fields = flatten_json_fields("", &secret_json);
     for (field_name, field_value) in &flat_fields {
         let encrypted = crypto::encrypt_json(master_key, field_value)?;
-        sqlx::query("INSERT INTO secrets (entry_id, field_name, encrypted) VALUES ($1, $2, $3)")
-            .bind(entry_id)
+        let secret_id: Uuid = sqlx::query_scalar(
+            "INSERT INTO secrets (user_id, name, type, encrypted) VALUES ($1, $2, $3, $4) RETURNING id",
+        )
+        .bind(params.user_id)
         .bind(field_name)
+        .bind(infer_secret_type(field_name))
         .bind(&encrypted)
+        .fetch_one(&mut *tx)
         .await?;
+        sqlx::query("INSERT INTO entry_secrets (entry_id, secret_id) VALUES ($1, $2)")
+            .bind(entry_id)
+            .bind(secret_id)
+            .execute(&mut *tx)
+            .await?;
     }

+    for link_name in &link_secret_names {
+        let secret_ids: Vec<Uuid> = if let Some(uid) = params.user_id {
+            sqlx::query_scalar("SELECT id FROM secrets WHERE user_id = $1 AND name = $2")
+                .bind(uid)
+                .bind(link_name)
+                .fetch_all(&mut *tx)
+                .await?
+        } else {
+            sqlx::query_scalar("SELECT id FROM secrets WHERE user_id IS NULL AND name = $1")
+                .bind(link_name)
+                .fetch_all(&mut *tx)
+                .await?
+        };
+
+        match secret_ids.len() {
+            0 => anyhow::bail!("Not found: secret named '{}'", link_name),
+            1 => {
+                sqlx::query(
+                    "INSERT INTO entry_secrets (entry_id, secret_id) VALUES ($1, $2) ON CONFLICT DO NOTHING",
+                )
+                .bind(entry_id)
+                .bind(secret_ids[0])
+                .execute(&mut *tx)
+                .await?;
+            }
+            n => anyhow::bail!(
+                "Ambiguous: {} secrets named '{}' found. Please deduplicate names first.",
+                n,
+                link_name
+            ),
+        }
+    }
+
     crate::audit::log_tx(
         &mut tx,
         params.user_id,

@@ -379,9 +436,56 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
     })
 }

+pub(crate) fn infer_secret_type(name: &str) -> &'static str {
+    match name {
+        "ssh_key" => "pem",
+        "password" => "password",
+        "phone" | "phone_2" => "phone",
+        "webhook_url" | "address" => "url",
+        "access_key_id"
+        | "access_key_secret"
+        | "global_api_key"
+        | "api_key"
+        | "secret_key"
+        | "personal_access_token"
+        | "runner_token"
+        | "GOOGLE_CLIENT_ID"
+        | "GOOGLE_CLIENT_SECRET" => "token",
+        _ => "text",
+    }
+}
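// Editor's sketch (not part of the diff): a few expected mappings for
// infer_secret_type, read directly off the match arms above.
//
//     assert_eq!(infer_secret_type("ssh_key"), "pem");
//     assert_eq!(infer_secret_type("api_key"), "token");
//     assert_eq!(infer_secret_type("nickname"), "text"); // fallthrough arm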
+fn validate_link_secret_names(
+    link_secret_names: &[String],
+    new_secret_names: &BTreeSet<String>,
+) -> Result<Vec<String>> {
+    let mut deduped = Vec::new();
+    let mut seen = HashSet::new();
+
+    for raw in link_secret_names {
+        let trimmed = raw.trim();
+        if trimmed.is_empty() {
+            anyhow::bail!("link_secret_names contains an empty name");
+        }
+        if new_secret_names.contains(trimmed) {
+            anyhow::bail!(
+                "Conflict: secret '{}' is provided both in secrets/secrets_obj and link_secret_names",
+                trimmed
+            );
+        }
+        if seen.insert(trimmed.to_string()) {
+            deduped.push(trimmed.to_string());
+        }
+    }
+
+    Ok(deduped)
+}

 #[cfg(test)]
 mod tests {
     use super::*;
+    use sqlx::PgPool;
+    use std::collections::BTreeSet;

     #[test]
     fn parse_nested_file_shorthand() {

@@ -410,4 +514,199 @@ mod tests {
         assert_eq!(fields[1].0, "credentials.type");
         assert_eq!(fields[2].0, "username");
     }
+
+    #[test]
+    fn validate_link_secret_names_conflict_with_new_secret() {
+        let mut new_names = BTreeSet::new();
+        new_names.insert("password".to_string());
+        let err = validate_link_secret_names(&[String::from("password")], &new_names)
+            .expect_err("must fail on overlap");
+        assert!(
+            err.to_string()
+                .contains("provided both in secrets/secrets_obj and link_secret_names")
+        );
+    }
+
+    #[test]
+    fn validate_link_secret_names_dedup_and_trim() {
+        let names = vec![
+            " shared_key ".to_string(),
+            "shared_key".to_string(),
+            "runner_token".to_string(),
+        ];
+        let deduped = validate_link_secret_names(&names, &BTreeSet::new()).unwrap();
+        assert_eq!(deduped, vec!["shared_key", "runner_token"]);
+    }
+
+    async fn maybe_test_pool() -> Option<PgPool> {
+        let Ok(url) = std::env::var("SECRETS_DATABASE_URL") else {
+            eprintln!("skip add linkage tests: SECRETS_DATABASE_URL is not set");
+            return None;
+        };
+        let Ok(pool) = PgPool::connect(&url).await else {
+            eprintln!("skip add linkage tests: cannot connect to database");
+            return None;
+        };
+        if let Err(e) = crate::db::migrate(&pool).await {
+            eprintln!("skip add linkage tests: migrate failed: {e}");
+            return None;
+        }
+        Some(pool)
+    }
+
+    async fn cleanup_test_rows(pool: &PgPool, marker: &str) -> Result<()> {
+        sqlx::query(
+            "DELETE FROM entries WHERE user_id IS NULL AND (name LIKE $1 OR folder LIKE $1)",
+        )
+        .bind(format!("%{marker}%"))
+        .execute(pool)
+        .await?;
+        sqlx::query(
+            "DELETE FROM secrets WHERE user_id IS NULL AND name LIKE $1 \
+             AND NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = secrets.id)",
+        )
+        .bind(format!("%{marker}%"))
+        .execute(pool)
+        .await?;
+        Ok(())
+    }
+
+    #[tokio::test]
+    async fn add_links_existing_secret_by_unique_name() -> Result<()> {
+        let Some(pool) = maybe_test_pool().await else {
+            return Ok(());
+        };
+        let suffix = Uuid::from_u128(rand::random()).to_string();
+        let marker = format!("link_unique_{}", &suffix[..8]);
+        let secret_name = format!("{}_secret", marker);
+        let entry_name = format!("{}_entry", marker);
+
+        cleanup_test_rows(&pool, &marker).await?;
+
+        let secret_id: Uuid = sqlx::query_scalar(
+            "INSERT INTO secrets (user_id, name, type, encrypted) VALUES (NULL, $1, 'text', $2) RETURNING id",
+        )
+        .bind(&secret_name)
+        .bind(vec![1_u8, 2, 3])
+        .fetch_one(&pool)
+        .await?;
+
+        run(
+            &pool,
+            AddParams {
+                name: &entry_name,
+                folder: &marker,
+                entry_type: "service",
+                notes: "",
+                tags: &[],
+                meta_entries: &[],
+                secret_entries: &[],
+                link_secret_names: std::slice::from_ref(&secret_name),
+                user_id: None,
+            },
+            &[0_u8; 32],
+        )
+        .await?;
+
+        let linked: bool = sqlx::query_scalar(
+            "SELECT EXISTS( \
+             SELECT 1 FROM entry_secrets es \
+             JOIN entries e ON e.id = es.entry_id \
+             WHERE e.user_id IS NULL AND e.name = $1 AND es.secret_id = $2 \
+             )",
+        )
+        .bind(&entry_name)
+        .bind(secret_id)
+        .fetch_one(&pool)
+        .await?;
+        assert!(linked);
+
+        cleanup_test_rows(&pool, &marker).await?;
+        Ok(())
+    }
+
+    #[tokio::test]
+    async fn add_link_secret_name_not_found_fails() -> Result<()> {
+        let Some(pool) = maybe_test_pool().await else {
+            return Ok(());
+        };
+        let suffix = Uuid::from_u128(rand::random()).to_string();
+        let marker = format!("link_missing_{}", &suffix[..8]);
+        let secret_name = format!("{}_secret", marker);
+        let entry_name = format!("{}_entry", marker);
+
+        cleanup_test_rows(&pool, &marker).await?;
+
+        let err = run(
+            &pool,
+            AddParams {
+                name: &entry_name,
+                folder: &marker,
+                entry_type: "service",
+                notes: "",
+                tags: &[],
+                meta_entries: &[],
+                secret_entries: &[],
+                link_secret_names: std::slice::from_ref(&secret_name),
+                user_id: None,
+            },
+            &[0_u8; 32],
+        )
+        .await
+        .expect_err("must fail when linked secret is not found");
+        assert!(err.to_string().contains("Not found: secret named"));
+
+        cleanup_test_rows(&pool, &marker).await?;
+        Ok(())
+    }
+
+    #[tokio::test]
+    async fn add_link_secret_name_ambiguous_fails() -> Result<()> {
+        let Some(pool) = maybe_test_pool().await else {
+            return Ok(());
+        };
+        let suffix = Uuid::from_u128(rand::random()).to_string();
+        let marker = format!("link_amb_{}", &suffix[..8]);
+        let secret_name = format!("{}_dup_secret", marker);
+        let entry_name = format!("{}_entry", marker);
+
+        cleanup_test_rows(&pool, &marker).await?;
+
+        sqlx::query(
+            "INSERT INTO secrets (user_id, name, type, encrypted) VALUES (NULL, $1, 'text', $2)",
+        )
+        .bind(&secret_name)
+        .bind(vec![1_u8])
+        .execute(&pool)
+        .await?;
+        sqlx::query(
+            "INSERT INTO secrets (user_id, name, type, encrypted) VALUES (NULL, $1, 'text', $2)",
+        )
+        .bind(&secret_name)
+        .bind(vec![2_u8])
+        .execute(&pool)
+        .await?;
+
+        let err = run(
+            &pool,
+            AddParams {
+                name: &entry_name,
+                folder: &marker,
+                entry_type: "service",
+                notes: "",
+                tags: &[],
+                meta_entries: &[],
+                secret_entries: &[],
+                link_secret_names: std::slice::from_ref(&secret_name),
+                user_id: None,
+            },
+            &[0_u8; 32],
+        )
+        .await
+        .expect_err("must fail on ambiguous linked secret name");
+        assert!(err.to_string().contains("Ambiguous:"));
+
+        cleanup_test_rows(&pool, &marker).await?;
+        Ok(())
+    }
+}

@@ -4,7 +4,7 @@ use sqlx::PgPool;
 use uuid::Uuid;

 use crate::db;
-use crate::models::{EntryRow, SecretFieldRow};
+use crate::models::{EntryRow, EntryWriteRow, SecretFieldRow};

 #[derive(Debug, serde::Serialize)]
 pub struct DeletedEntry {

@@ -17,6 +17,7 @@ pub struct DeletedEntry {
 #[derive(Debug, serde::Serialize)]
 pub struct DeleteResult {
     pub deleted: Vec<DeletedEntry>,
+    pub migrated: Vec<String>,
     pub dry_run: bool,
 }

@@ -31,6 +32,233 @@ pub struct DeleteParams<'a> {
     pub user_id: Option<Uuid>,
 }

+#[derive(Debug, sqlx::FromRow)]
+struct KeyReferrer {
+    id: Uuid,
+    folder: String,
+    #[sqlx(rename = "type")]
+    entry_type: String,
+    name: String,
+}
+
+fn ref_label(r: &KeyReferrer) -> String {
+    format!("{}/{} ({})", r.folder, r.name, r.entry_type)
+}
+
+fn ref_path(r: &KeyReferrer) -> String {
+    format!("{}/{}", r.folder, r.name)
+}
+
+async fn fetch_key_referrers_pool(
+    pool: &PgPool,
+    key_entry_id: Uuid,
+    key_folder: &str,
+    key_name: &str,
+    user_id: Option<Uuid>,
+) -> Result<Vec<KeyReferrer>> {
+    let qualified = format!("{}/{}", key_folder, key_name);
+    let refs: Vec<KeyReferrer> = if let Some(uid) = user_id {
+        sqlx::query_as(
+            "SELECT id, folder, type, name FROM entries \
+             WHERE user_id = $1 AND id <> $2 \
+             AND (metadata->>'key_ref' = $3 OR metadata->>'key_ref' = $4) \
+             ORDER BY folder, type, name",
+        )
+        .bind(uid)
+        .bind(key_entry_id)
+        .bind(key_name)
+        .bind(&qualified)
+        .fetch_all(pool)
+        .await?
+    } else {
+        sqlx::query_as(
+            "SELECT id, folder, type, name FROM entries \
+             WHERE user_id IS NULL AND id <> $1 \
+             AND (metadata->>'key_ref' = $2 OR metadata->>'key_ref' = $3) \
+             ORDER BY folder, type, name",
+        )
+        .bind(key_entry_id)
+        .bind(key_name)
+        .bind(&qualified)
+        .fetch_all(pool)
+        .await?
+    };
+    Ok(refs)
+}
+
+async fn migrate_key_refs_if_needed(
+    tx: &mut sqlx::Transaction<'_, sqlx::Postgres>,
+    key_row: &EntryRow,
+    key_name: &str,
+    user_id: Option<Uuid>,
+    dry_run: bool,
+) -> Result<Vec<String>> {
+    let qualified = format!("{}/{}", key_row.folder, key_name);
+    let refs: Vec<KeyReferrer> = if let Some(uid) = user_id {
+        sqlx::query_as(
+            "SELECT id, folder, type, name FROM entries \
+             WHERE user_id = $1 AND id <> $2 \
+             AND (metadata->>'key_ref' = $3 OR metadata->>'key_ref' = $4) \
+             ORDER BY folder, type, name",
+        )
+        .bind(uid)
+        .bind(key_row.id)
+        .bind(key_name)
+        .bind(&qualified)
+        .fetch_all(&mut **tx)
+        .await?
+    } else {
+        sqlx::query_as(
+            "SELECT id, folder, type, name FROM entries \
+             WHERE user_id IS NULL AND id <> $1 \
+             AND (metadata->>'key_ref' = $2 OR metadata->>'key_ref' = $3) \
+             ORDER BY folder, type, name",
+        )
+        .bind(key_row.id)
+        .bind(key_name)
+        .bind(&qualified)
+        .fetch_all(&mut **tx)
+        .await?
+    };
+
+    if refs.is_empty() {
+        return Ok(vec![]);
+    }
+    if dry_run {
+        return Ok(refs.iter().map(ref_label).collect());
+    }
+
+    let owner = &refs[0];
+    let owner_path = ref_path(owner);
+    let key_fields: Vec<SecretFieldRow> = sqlx::query_as(
+        "SELECT s.id, s.name, s.encrypted \
+         FROM entry_secrets es \
+         JOIN secrets s ON s.id = es.secret_id \
+         WHERE es.entry_id = $1",
+    )
+    .bind(key_row.id)
+    .fetch_all(&mut **tx)
+    .await?;
+
+    for f in &key_fields {
+        sqlx::query("INSERT INTO entry_secrets (entry_id, secret_id) VALUES ($1, $2) ON CONFLICT DO NOTHING")
+            .bind(owner.id)
+            .bind(f.id)
+            .execute(&mut **tx)
+            .await?;
+    }
+
+    sqlx::query(
+        "UPDATE entries SET metadata = metadata - 'key_ref', \
+         version = version + 1, updated_at = NOW() WHERE id = $1",
+    )
+    .bind(owner.id)
+    .execute(&mut **tx)
+    .await?;
+
+    crate::audit::log_tx(
+        tx,
+        user_id,
+        "key_migrate",
+        &owner.folder,
+        &owner.entry_type,
+        &owner.name,
+        json!({
+            "from_key": format!("{}/{}", key_row.folder, key_name),
+            "role": "new_owner",
+            "redirect_target": owner_path,
+        }),
+    )
+    .await;
+
+    for r in refs.iter().skip(1) {
+        sqlx::query(
+            "UPDATE entries SET metadata = jsonb_set(metadata, '{key_ref}', to_jsonb($2::text), true), \
+             version = version + 1, updated_at = NOW() WHERE id = $1",
+        )
+        .bind(r.id)
+        .bind(&owner_path)
+        .execute(&mut **tx)
+        .await?;
+
+        crate::audit::log_tx(
+            tx,
+            user_id,
+            "key_migrate",
+            &r.folder,
+            &r.entry_type,
+            &r.name,
+            json!({
+                "from_key": format!("{}/{}", key_row.folder, key_name),
+                "role": "redirected_ref",
+                "redirect_to": owner_path,
+            }),
+        )
+        .await;
+    }
+
+    Ok(refs.iter().map(ref_label).collect())
+}
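// Editor's sketch (not part of the diff): expected outcome of the migration
// above for a key kfolder/shared-key with referrers afolder/srv-a and
// bfolder/srv-b, mirrored by the tests at the end of this file:
//
//     srv-a: metadata.key_ref removed; the key's ciphertext rows now linked to it
//     srv-b: metadata.key_ref rewritten to "afolder/srv-a"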
+/// Delete a single entry by id (multi-tenant: `user_id` must match).
+pub async fn delete_by_id(pool: &PgPool, entry_id: Uuid, user_id: Uuid) -> Result<DeleteResult> {
+    let mut tx = pool.begin().await?;
+    let row: Option<EntryWriteRow> = sqlx::query_as(
+        "SELECT id, version, folder, type, name, tags, metadata, notes FROM entries \
+         WHERE id = $1 AND user_id = $2 FOR UPDATE",
+    )
+    .bind(entry_id)
+    .bind(user_id)
+    .fetch_optional(&mut *tx)
+    .await?;
+
+    let row = match row {
+        Some(r) => r,
+        None => {
+            tx.rollback().await?;
+            anyhow::bail!("Entry not found");
+        }
+    };
+
+    let folder = row.folder.clone();
+    let entry_type = row.entry_type.clone();
+    let name = row.name.clone();
+    let entry_row: EntryRow = (&row).into();
+    let migrated =
+        migrate_key_refs_if_needed(&mut tx, &entry_row, &name, Some(user_id), false).await?;
+
+    snapshot_and_delete(
+        &mut tx,
+        &folder,
+        &entry_type,
+        &name,
+        &entry_row,
+        Some(user_id),
+    )
+    .await?;
+    crate::audit::log_tx(
+        &mut tx,
+        Some(user_id),
+        "delete",
+        &folder,
+        &entry_type,
+        &name,
+        json!({ "source": "web", "entry_id": entry_id }),
+    )
+    .await;
+    tx.commit().await?;
+
+    Ok(DeleteResult {
+        deleted: vec![DeletedEntry {
+            name,
+            folder,
+            entry_type,
+        }],
+        migrated,
+        dry_run: false,
+    })
+}
+
 pub async fn run(pool: &PgPool, params: DeleteParams<'_>) -> Result<DeleteResult> {
     match params.name {
         Some(name) => delete_one(pool, name, params.folder, params.dry_run, params.user_id).await,

@@ -66,6 +294,7 @@ async fn delete_one(
     // - 2+ matches → disambiguation error (same as non-dry-run)
     #[derive(sqlx::FromRow)]
     struct DryRunRow {
+        id: Uuid,
         folder: String,
         #[sqlx(rename = "type")]
         entry_type: String,

@@ -74,7 +303,7 @@ async fn delete_one(
     let rows: Vec<DryRunRow> = if let Some(uid) = user_id {
         if let Some(f) = folder {
             sqlx::query_as(
-                "SELECT folder, type FROM entries WHERE user_id = $1 AND folder = $2 AND name = $3",
+                "SELECT id, folder, type FROM entries WHERE user_id = $1 AND folder = $2 AND name = $3",
             )
             .bind(uid)
             .bind(f)

@@ -82,7 +311,9 @@ async fn delete_one(
             .fetch_all(pool)
             .await?
         } else {
-            sqlx::query_as("SELECT folder, type FROM entries WHERE user_id = $1 AND name = $2")
+            sqlx::query_as(
+                "SELECT id, folder, type FROM entries WHERE user_id = $1 AND name = $2",
+            )
             .bind(uid)
             .bind(name)
             .fetch_all(pool)

@@ -90,14 +321,16 @@ async fn delete_one(
         }
     } else if let Some(f) = folder {
         sqlx::query_as(
-            "SELECT folder, type FROM entries WHERE user_id IS NULL AND folder = $1 AND name = $2",
+            "SELECT id, folder, type FROM entries WHERE user_id IS NULL AND folder = $1 AND name = $2",
         )
         .bind(f)
         .bind(name)
         .fetch_all(pool)
         .await?
     } else {
-        sqlx::query_as("SELECT folder, type FROM entries WHERE user_id IS NULL AND name = $1")
+        sqlx::query_as(
+            "SELECT id, folder, type FROM entries WHERE user_id IS NULL AND name = $1",
+        )
         .bind(name)
         .fetch_all(pool)
         .await?

@@ -106,16 +339,20 @@ async fn delete_one(
     return match rows.len() {
         0 => Ok(DeleteResult {
             deleted: vec![],
+            migrated: vec![],
             dry_run: true,
         }),
         1 => {
             let row = rows.into_iter().next().unwrap();
+            let refs =
+                fetch_key_referrers_pool(pool, row.id, &row.folder, name, user_id).await?;
             Ok(DeleteResult {
                 deleted: vec![DeletedEntry {
                     name: name.to_string(),
                     folder: row.folder,
                     entry_type: row.entry_type,
                 }],
+                migrated: refs.iter().map(ref_label).collect(),
                 dry_run: true,
             })
         }

@@ -180,6 +417,7 @@ async fn delete_one(
         tx.rollback().await?;
         return Ok(DeleteResult {
             deleted: vec![],
+            migrated: vec![],
             dry_run: false,
         });
     }

@@ -199,6 +437,7 @@ async fn delete_one(

     let folder = row.folder.clone();
     let entry_type = row.entry_type.clone();
+    let migrated = migrate_key_refs_if_needed(&mut tx, &row, name, user_id, false).await?;
     snapshot_and_delete(&mut tx, &folder, &entry_type, name, &row, user_id).await?;
     crate::audit::log_tx(
         &mut tx,

@@ -218,6 +457,7 @@ async fn delete_one(
             folder,
             entry_type,
         }],
+        migrated,
         dry_run: false,
     })
 }

@@ -278,6 +518,12 @@ async fn delete_bulk(
     let rows = q.fetch_all(pool).await?;

     if dry_run {
+        let mut migrated: Vec<String> = Vec::new();
+        for row in &rows {
+            let refs =
+                fetch_key_referrers_pool(pool, row.id, &row.folder, &row.name, user_id).await?;
+            migrated.extend(refs.iter().map(ref_label));
+        }
         let deleted = rows
             .iter()
             .map(|r| DeletedEntry {

@@ -288,11 +534,13 @@ async fn delete_bulk(
             .collect();
         return Ok(DeleteResult {
             deleted,
+            migrated,
             dry_run: true,
         });
     }

     let mut deleted = Vec::with_capacity(rows.len());
+    let mut migrated: Vec<String> = Vec::new();
     for row in &rows {
         let entry_row = EntryRow {
             id: row.id,

@@ -304,6 +552,8 @@ async fn delete_bulk(
             notes: row.notes.clone(),
         };
         let mut tx = pool.begin().await?;
+        let m = migrate_key_refs_if_needed(&mut tx, &entry_row, &row.name, user_id, false).await?;
+        migrated.extend(m);
         snapshot_and_delete(
             &mut tx,
             &row.folder,

@@ -333,6 +583,7 @@ async fn delete_bulk(

     Ok(DeleteResult {
         deleted,
+        migrated,
         dry_run: false,
     })
 }

@@ -364,8 +615,12 @@ async fn snapshot_and_delete(
         tracing::warn!(error = %e, "failed to snapshot entry history before delete");
     }

-    let fields: Vec<SecretFieldRow> =
-        sqlx::query_as("SELECT id, field_name, encrypted FROM secrets WHERE entry_id = $1")
+    let fields: Vec<SecretFieldRow> = sqlx::query_as(
+        "SELECT s.id, s.name, s.encrypted \
+         FROM entry_secrets es \
+         JOIN secrets s ON s.id = es.secret_id \
+         WHERE es.entry_id = $1",
+    )
     .bind(row.id)
     .fetch_all(&mut **tx)
     .await?;

@@ -374,10 +629,8 @@ async fn snapshot_and_delete(
         if let Err(e) = db::snapshot_secret_history(
             tx,
             db::SecretSnapshotParams {
-                entry_id: row.id,
                 secret_id: f.id,
-                entry_version: row.version,
-                field_name: &f.field_name,
+                name: &f.name,
                 encrypted: &f.encrypted,
                 action: "delete",
             },

@@ -393,5 +646,293 @@ async fn snapshot_and_delete(
     .execute(&mut **tx)
     .await?;

+    sqlx::query(
+        "DELETE FROM secrets s \
+         WHERE NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = s.id)",
+    )
+    .execute(&mut **tx)
+    .await?;
+
     Ok(())
 }
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use serde_json::json;
+
+    async fn maybe_test_pool() -> Option<PgPool> {
+        let Ok(url) = std::env::var("SECRETS_DATABASE_URL") else {
+            eprintln!("skip delete migration tests: SECRETS_DATABASE_URL is not set");
+            return None;
+        };
+        let Ok(pool) = PgPool::connect(&url).await else {
+            eprintln!("skip delete migration tests: cannot connect to database");
+            return None;
+        };
+        if let Err(e) = crate::db::migrate(&pool).await {
+            eprintln!("skip delete migration tests: migrate failed: {e}");
+            return None;
+        }
+        Some(pool)
+    }
+
+    async fn insert_entry(
+        pool: &PgPool,
+        id: Uuid,
+        user_id: Uuid,
+        folder: &str,
+        entry_type: &str,
+        name: &str,
+        metadata: serde_json::Value,
+    ) -> Result<()> {
+        sqlx::query(
+            "INSERT INTO entries (id, user_id, folder, type, name, notes, tags, metadata, version) \
+             VALUES ($1, $2, $3, $4, $5, '', ARRAY[]::text[], $6, 1)",
+        )
+        .bind(id)
+        .bind(user_id)
+        .bind(folder)
+        .bind(entry_type)
+        .bind(name)
+        .bind(metadata)
+        .execute(pool)
+        .await?;
+        Ok(())
+    }
+
+    async fn insert_secret_for_entry(
+        pool: &PgPool,
+        user_id: Uuid,
+        entry_id: Uuid,
+        name: &str,
+        secret_type: &str,
+        encrypted: Vec<u8>,
+    ) -> Result<()> {
+        let secret_id: Uuid = sqlx::query_scalar(
+            "INSERT INTO secrets (user_id, name, type, encrypted) VALUES ($1, $2, $3, $4) RETURNING id",
+        )
+        .bind(user_id)
+        .bind(name)
+        .bind(secret_type)
+        .bind(encrypted)
+        .fetch_one(pool)
+        .await?;
+        sqlx::query("INSERT INTO entry_secrets (entry_id, secret_id) VALUES ($1, $2)")
+            .bind(entry_id)
+            .bind(secret_id)
+            .execute(pool)
+            .await?;
+        Ok(())
+    }
+
+    #[tokio::test]
+    async fn delete_shared_key_dry_run_reports_migration_without_writes() -> Result<()> {
+        let Some(pool) = maybe_test_pool().await else {
+            return Ok(());
+        };
+
+        let user_id = Uuid::from_u128(rand::random());
+        let key_id = Uuid::from_u128(rand::random());
+        let ref_a = Uuid::from_u128(rand::random());
+        let ref_b = Uuid::from_u128(rand::random());
+
+        insert_entry(
+            &pool,
+            key_id,
+            user_id,
+            "kfolder",
+            "key",
+            "shared-key",
+            json!({}),
+        )
+        .await?;
+        insert_secret_for_entry(&pool, user_id, key_id, "pem", "pem", vec![1_u8, 2, 3]).await?;
+
+        insert_entry(
+            &pool,
+            ref_a,
+            user_id,
+            "afolder",
+            "server",
+            "srv-a",
+            json!({"key_ref":"kfolder/shared-key"}),
+        )
+        .await?;
+        insert_entry(
+            &pool,
+            ref_b,
+            user_id,
+            "bfolder",
+            "server",
+            "srv-b",
+            json!({"key_ref":"shared-key"}),
+        )
+        .await?;
+
+        let result = run(
+            &pool,
+            DeleteParams {
+                name: Some("shared-key"),
+                folder: Some("kfolder"),
+                entry_type: None,
+                dry_run: true,
+                user_id: Some(user_id),
+            },
+        )
+        .await?;
+
+        assert!(result.dry_run);
+        assert_eq!(result.deleted.len(), 1);
+        assert_eq!(result.migrated.len(), 2);
+
+        let key_exists: bool = sqlx::query_scalar(
+            "SELECT EXISTS(SELECT 1 FROM entries WHERE id = $1 AND user_id = $2)",
+        )
+        .bind(key_id)
+        .bind(user_id)
+        .fetch_one(&pool)
+        .await?;
+        assert!(key_exists);
+
+        let ref_a_key_ref: Option<String> =
+            sqlx::query_scalar("SELECT metadata->>'key_ref' FROM entries WHERE id = $1")
+                .bind(ref_a)
+                .fetch_one(&pool)
+                .await?;
+        let ref_b_key_ref: Option<String> =
+            sqlx::query_scalar("SELECT metadata->>'key_ref' FROM entries WHERE id = $1")
+                .bind(ref_b)
+                .fetch_one(&pool)
+                .await?;
+        assert_eq!(ref_a_key_ref.as_deref(), Some("kfolder/shared-key"));
+        assert_eq!(ref_b_key_ref.as_deref(), Some("shared-key"));
+
+        sqlx::query("DELETE FROM entries WHERE user_id = $1")
+            .bind(user_id)
+            .execute(&pool)
+            .await?;
+        Ok(())
+    }
+
+    #[tokio::test]
+    async fn delete_shared_key_auto_migrates_single_copy_and_redirects_refs() -> Result<()> {
+        let Some(pool) = maybe_test_pool().await else {
+            return Ok(());
+        };
+
+        let user_id = Uuid::from_u128(rand::random());
+        let key_id = Uuid::from_u128(rand::random());
+        let ref_a = Uuid::from_u128(rand::random());
+        let ref_b = Uuid::from_u128(rand::random());
+        let ref_c = Uuid::from_u128(rand::random());
+
+        insert_entry(
+            &pool,
+            key_id,
+            user_id,
+            "kfolder",
+            "key",
+            "shared-key",
+            json!({}),
+        )
+        .await?;
+        insert_secret_for_entry(&pool, user_id, key_id, "pem", "pem", vec![7_u8, 8, 9]).await?;
+
+        // owner candidate (sorted first by folder)
+        insert_entry(
+            &pool,
+            ref_a,
+            user_id,
+            "afolder",
+            "server",
+            "srv-a",
+            json!({"key_ref":"kfolder/shared-key"}),
+        )
+        .await?;
+        insert_entry(
+            &pool,
+            ref_b,
+            user_id,
+            "bfolder",
+            "server",
+            "srv-b",
+            json!({"key_ref":"shared-key"}),
+        )
+        .await?;
+        insert_entry(
+            &pool,
+            ref_c,
+            user_id,
+            "cfolder",
+            "service",
+            "svc-c",
+            json!({"key_ref":"kfolder/shared-key"}),
+        )
+        .await?;
+
+        let result = run(
+            &pool,
+            DeleteParams {
+                name: Some("shared-key"),
+                folder: Some("kfolder"),
+                entry_type: None,
+                dry_run: false,
+                user_id: Some(user_id),
+            },
+        )
+        .await?;
+
+        assert!(!result.dry_run);
+        assert_eq!(result.deleted.len(), 1);
+        assert_eq!(result.migrated.len(), 3);
+
+        let key_exists: bool = sqlx::query_scalar(
+            "SELECT EXISTS(SELECT 1 FROM entries WHERE id = $1 AND user_id = $2)",
+        )
+        .bind(key_id)
+        .bind(user_id)
+        .fetch_one(&pool)
+        .await?;
+        assert!(!key_exists);
+
+        let owner_key_ref: Option<String> =
+            sqlx::query_scalar("SELECT metadata->>'key_ref' FROM entries WHERE id = $1")
+                .bind(ref_a)
+                .fetch_one(&pool)
+                .await?;
+        let ref_b_key_ref: Option<String> =
+            sqlx::query_scalar("SELECT metadata->>'key_ref' FROM entries WHERE id = $1")
+                .bind(ref_b)
+                .fetch_one(&pool)
+                .await?;
+        let ref_c_key_ref: Option<String> =
+            sqlx::query_scalar("SELECT metadata->>'key_ref' FROM entries WHERE id = $1")
+                .bind(ref_c)
+                .fetch_one(&pool)
+                .await?;
+
+        assert_eq!(owner_key_ref, None);
+        assert_eq!(ref_b_key_ref.as_deref(), Some("afolder/srv-a"));
+        assert_eq!(ref_c_key_ref.as_deref(), Some("afolder/srv-a"));
+
+        let owner_has_copied: bool = sqlx::query_scalar(
+            "SELECT EXISTS( \
+             SELECT 1 \
+             FROM entry_secrets es \
+             JOIN secrets s ON s.id = es.secret_id \
+             WHERE es.entry_id = $1 AND s.name = 'pem' \
+             )",
+        )
+        .bind(ref_a)
+        .fetch_one(&pool)
+        .await?;
+        assert!(owner_has_copied);
+
+        sqlx::query("DELETE FROM entries WHERE user_id = $1")
+            .bind(user_id)
+            .execute(&pool)
+            .await?;
+        Ok(())
+    }
+}

@@ -51,7 +51,7 @@ async fn build_entry_env_map(
     } else {
         all_fields
             .iter()
-            .filter(|f| only_fields.contains(&f.field_name))
+            .filter(|f| only_fields.contains(&f.name))
             .collect()
     };

@@ -63,7 +63,7 @@ async fn build_entry_env_map(
         let key = format!(
             "{}_{}",
             effective_prefix,
-            f.field_name.to_uppercase().replace(['-', '.'], "_")
+            f.name.to_uppercase().replace(['-', '.'], "_")
         );
         map.insert(key, json_to_env_string(&decrypted));
     }

@@ -75,16 +75,8 @@ async fn build_entry_env_map(
     } else {
         (None, key_ref)
     };
-    let key_entries = fetch_entries(
-        pool,
-        ref_folder,
-        Some("key"),
-        Some(ref_name),
-        &[],
-        None,
-        user_id,
-    )
-    .await?;
+    let key_entries =
+        fetch_entries(pool, ref_folder, None, Some(ref_name), &[], None, user_id).await?;

     if key_entries.len() > 1 {
         anyhow::bail!(

@@ -105,7 +97,7 @@ async fn build_entry_env_map(
         let key_var = format!(
             "{}_{}",
             key_prefix,
-            f.field_name.to_uppercase().replace(['-', '.'], "_")
+            f.name.to_uppercase().replace(['-', '.'], "_")
         );
         map.insert(key_var, json_to_env_string(&decrypted));
     }

@@ -55,7 +55,7 @@ pub async fn export(
     let mut map = BTreeMap::new();
     for f in fields {
         let decrypted = crypto::decrypt_json(mk, &f.encrypted)?;
-        map.insert(f.field_name.clone(), decrypted);
+        map.insert(f.name.clone(), decrypted);
     }
     Some(map)
 }

@@ -25,7 +25,7 @@ pub async fn get_secret_field(

     let field = fields
         .iter()
-        .find(|f| f.field_name == field_name)
+        .find(|f| f.name == field_name)
         .ok_or_else(|| anyhow::anyhow!("Secret field '{}' not found", field_name))?;

     crypto::decrypt_json(master_key, &field.encrypted)

@@ -49,7 +49,7 @@ pub async fn get_all_secrets(
     let mut map = HashMap::new();
     for f in fields {
         let decrypted = crypto::decrypt_json(master_key, &f.encrypted)?;
-        map.insert(f.field_name.clone(), decrypted);
+        map.insert(f.name.clone(), decrypted);
     }
     Ok(map)
 }

@@ -72,7 +72,7 @@ pub async fn get_secret_field_by_id(

     let field = fields
         .iter()
-        .find(|f| f.field_name == field_name)
+        .find(|f| f.name == field_name)
         .ok_or_else(|| anyhow::anyhow!("Secret field '{}' not found", field_name))?;

     crypto::decrypt_json(master_key, &field.encrypted)

@@ -98,7 +98,7 @@ pub async fn get_all_secrets_by_id(
     let mut map = HashMap::new();
     for f in fields {
         let decrypted = crypto::decrypt_json(master_key, &f.encrypted)?;
-        map.insert(f.field_name.clone(), decrypted);
+        map.insert(f.name.clone(), decrypted);
     }
     Ok(map)
 }

@@ -85,6 +85,7 @@ pub async fn run(
         tags: &entry.tags,
         meta_entries: &meta_entries,
         secret_entries: &secret_entries,
+        link_secret_names: &[],
         user_id: params.user_id,
     },
     master_key,

@@ -3,7 +3,6 @@ use serde_json::Value;
 use sqlx::PgPool;
 use uuid::Uuid;

-use crate::crypto;
 use crate::db;

 #[derive(Debug, serde::Serialize)]

@@ -27,7 +26,6 @@ pub async fn run(
 ) -> Result<RollbackResult> {
     #[derive(sqlx::FromRow)]
     struct EntryHistoryRow {
-        entry_id: Uuid,
         folder: String,
         #[sqlx(rename = "type")]
         entry_type: String,

@@ -122,7 +120,7 @@ pub async fn run(

     let snap: Option<EntryHistoryRow> = if let Some(ver) = to_version {
         sqlx::query_as(
-            "SELECT entry_id, folder, type, version, action, tags, metadata \
+            "SELECT folder, type, version, action, tags, metadata \
             FROM entries_history \
             WHERE entry_id = $1 AND version = $2 ORDER BY id DESC LIMIT 1",
        )

@@ -132,7 +130,7 @@ pub async fn run(
        .await?
     } else {
         sqlx::query_as(
-            "SELECT entry_id, folder, type, version, action, tags, metadata \
+            "SELECT folder, type, version, action, tags, metadata \
             FROM entries_history \
             WHERE entry_id = $1 ORDER BY id DESC LIMIT 1",
        )

@@ -151,33 +149,7 @@ pub async fn run(
         )
     })?;

-    #[derive(sqlx::FromRow)]
-    struct SecretHistoryRow {
-        field_name: String,
-        encrypted: Vec<u8>,
-        action: String,
-    }
-
-    let field_snaps: Vec<SecretHistoryRow> = sqlx::query_as(
-        "SELECT field_name, encrypted, action FROM secrets_history \
-         WHERE entry_id = $1 AND entry_version = $2 ORDER BY field_name",
-    )
-    .bind(snap.entry_id)
-    .bind(snap.version)
-    .fetch_all(pool)
-    .await?;
-
-    for f in &field_snaps {
-        if f.action != "delete" && !f.encrypted.is_empty() {
-            crypto::decrypt_json(master_key, &f.encrypted).map_err(|e| {
-                anyhow::anyhow!(
-                    "Cannot decrypt snapshot for field '{}': {}",
-                    f.field_name,
-                    e
-                )
-            })?;
-        }
-    }
+    let _ = master_key;

     let mut tx = pool.begin().await?;

@@ -226,11 +198,15 @@ pub async fn run(
         #[derive(sqlx::FromRow)]
         struct LiveField {
             id: Uuid,
-            field_name: String,
+            name: String,
             encrypted: Vec<u8>,
         }
-        let live_fields: Vec<LiveField> =
-            sqlx::query_as("SELECT id, field_name, encrypted FROM secrets WHERE entry_id = $1")
+        let live_fields: Vec<LiveField> = sqlx::query_as(
+            "SELECT s.id, s.name, s.encrypted \
+             FROM entry_secrets es \
+             JOIN secrets s ON s.id = es.secret_id \
+             WHERE es.entry_id = $1",
+        )
         .bind(lr.id)
         .fetch_all(&mut *tx)
         .await?;

@@ -239,10 +215,8 @@ pub async fn run(
             if let Err(e) = db::snapshot_secret_history(
                 &mut tx,
                 db::SecretSnapshotParams {
-                    entry_id: lr.id,
                     secret_id: f.id,
-                    entry_version: lr.version,
-                    field_name: &f.field_name,
+                    name: &f.name,
                     encrypted: &f.encrypted,
                     action: "rollback",
                 },

@@ -297,22 +271,9 @@ pub async fn run(
         }
     };

-    sqlx::query("DELETE FROM secrets WHERE entry_id = $1")
-        .bind(live_entry_id)
-        .execute(&mut *tx)
-        .await?;
-
-    for f in &field_snaps {
-        if f.action == "delete" {
-            continue;
-        }
-        sqlx::query("INSERT INTO secrets (entry_id, field_name, encrypted) VALUES ($1, $2, $3)")
-            .bind(live_entry_id)
-            .bind(&f.field_name)
-            .bind(&f.encrypted)
-            .execute(&mut *tx)
-            .await?;
-    }
+    // In N:N mode, rollback restores entry metadata/tags only.
+    // Secret snapshots are kept for audit but secret linkage/content is not rewritten here.
+    let _ = live_entry_id;

     crate::audit::log_tx(
         &mut tx,

@@ -27,49 +27,46 @@ pub struct SearchResult {
     pub secret_schemas: HashMap<Uuid, Vec<SecretField>>,
 }

-pub async fn run(pool: &PgPool, params: SearchParams<'_>) -> Result<SearchResult> {
-    let entries = fetch_entries_paged(pool, &params).await?;
-    let entry_ids: Vec<Uuid> = entries.iter().map(|e| e.id).collect();
-    let secret_schemas = if !entry_ids.is_empty() {
-        fetch_secret_schemas(pool, &entry_ids).await?
-    } else {
-        HashMap::new()
-    };
-    Ok(SearchResult {
-        entries,
-        secret_schemas,
-    })
-}
-
-/// Fetch entries matching the given filters — returns all matching entries up to FETCH_ALL_LIMIT.
-pub async fn fetch_entries(
-    pool: &PgPool,
-    folder: Option<&str>,
-    entry_type: Option<&str>,
-    name: Option<&str>,
-    tags: &[String],
-    query: Option<&str>,
-    user_id: Option<Uuid>,
-) -> Result<Vec<Entry>> {
-    let params = SearchParams {
-        folder,
-        entry_type,
-        name,
-        tags,
-        query,
-        sort: "name",
-        limit: FETCH_ALL_LIMIT,
-        offset: 0,
-        user_id,
-    };
+/// List `entries` rows matching params (paged, ordered per `params.sort`).
+/// Does not read the `secrets` table.
+pub async fn list_entries(pool: &PgPool, params: SearchParams<'_>) -> Result<Vec<Entry>> {
+    fetch_entries_paged(pool, &params).await
+}

-async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<Entry>> {
+/// Count `entries` rows matching the same filters as [`list_entries`] (ignores `sort` / `limit` / `offset`).
+/// Does not read the `secrets` table.
+pub async fn count_entries(pool: &PgPool, a: &SearchParams<'_>) -> Result<i64> {
+    let (where_clause, _) = entry_where_clause_and_next_idx(a);
+    let sql = format!("SELECT COUNT(*)::bigint FROM entries {where_clause}");
+    let mut q = sqlx::query_scalar::<_, i64>(&sql);
+    if let Some(uid) = a.user_id {
+        q = q.bind(uid);
+    }
+    if let Some(v) = a.folder {
+        q = q.bind(v);
+    }
+    if let Some(v) = a.entry_type {
+        q = q.bind(v);
+    }
+    if let Some(v) = a.name {
+        q = q.bind(v);
+    }
+    for tag in a.tags {
+        q = q.bind(tag);
+    }
+    if let Some(v) = a.query {
+        let pattern = format!("%{}%", v.replace('%', "\\%").replace('_', "\\_"));
+        q = q.bind(pattern);
+    }
+    let n = q.fetch_one(pool).await?;
+    Ok(n)
+}

/// Shared WHERE clause and the next `$n` index (for LIMIT/OFFSET in paged queries).
|
||||
fn entry_where_clause_and_next_idx(a: &SearchParams<'_>) -> (String, i32) {
|
||||
let mut conditions: Vec<String> = Vec::new();
|
||||
let mut idx: i32 = 1;
|
||||
|
||||
// user_id filtering — always comes first when present
|
||||
if a.user_id.is_some() {
|
||||
conditions.push(format!("user_id = ${}", idx));
|
||||
idx += 1;
|
||||
@@ -115,6 +112,55 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
|
||||
idx += 1;
|
||||
}
|
||||
|
||||
let where_clause = if conditions.is_empty() {
|
||||
String::new()
|
||||
} else {
|
||||
format!("WHERE {}", conditions.join(" AND "))
|
||||
};
|
||||
(where_clause, idx)
|
||||
}
|
||||
|
||||
pub async fn run(pool: &PgPool, params: SearchParams<'_>) -> Result<SearchResult> {
|
||||
let entries = fetch_entries_paged(pool, ¶ms).await?;
|
||||
let entry_ids: Vec<Uuid> = entries.iter().map(|e| e.id).collect();
|
||||
let secret_schemas = if !entry_ids.is_empty() {
|
||||
fetch_secret_schemas(pool, &entry_ids).await?
|
||||
} else {
|
||||
HashMap::new()
|
||||
};
|
||||
Ok(SearchResult {
|
||||
entries,
|
||||
secret_schemas,
|
||||
})
|
||||
}
|
||||
|
||||
/// Fetch entries matching the given filters — returns all matching entries up to FETCH_ALL_LIMIT.
|
||||
pub async fn fetch_entries(
|
||||
pool: &PgPool,
|
||||
folder: Option<&str>,
|
||||
entry_type: Option<&str>,
|
||||
name: Option<&str>,
|
||||
tags: &[String],
|
||||
query: Option<&str>,
|
||||
user_id: Option<Uuid>,
|
||||
) -> Result<Vec<Entry>> {
|
||||
let params = SearchParams {
|
||||
folder,
|
||||
entry_type,
|
||||
name,
|
||||
tags,
|
||||
query,
|
||||
sort: "name",
|
||||
limit: FETCH_ALL_LIMIT,
|
||||
offset: 0,
|
||||
user_id,
|
||||
};
|
||||
list_entries(pool, params).await
|
||||
}
|
||||
|
||||
async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<Entry>> {
|
||||
let (where_clause, idx) = entry_where_clause_and_next_idx(a);
|
||||
|
||||
let order = match a.sort {
|
||||
"updated" => "updated_at DESC",
|
||||
"created" => "created_at DESC",
|
||||
@@ -122,14 +168,7 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
|
||||
};
|
||||
|
||||
let limit_idx = idx;
|
||||
idx += 1;
|
||||
let offset_idx = idx;
|
||||
|
||||
let where_clause = if conditions.is_empty() {
|
||||
String::new()
|
||||
} else {
|
||||
format!("WHERE {}", conditions.join(" AND "))
|
||||
};
|
||||
let offset_idx = idx + 1;
|
||||
|
||||
let sql = format!(
|
||||
"SELECT id, user_id, folder, type, name, notes, tags, metadata, version, \
|
||||
@@ -138,7 +177,6 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
|
||||
);
|
||||
|
||||
let mut q = sqlx::query_as::<_, EntryRaw>(&sql);
|
||||
|
||||
if let Some(uid) = a.user_id {
|
||||
q = q.bind(uid);
|
||||
}
|
||||
@@ -172,8 +210,12 @@ pub async fn fetch_secret_schemas(
|
||||
if entry_ids.is_empty() {
|
||||
return Ok(HashMap::new());
|
||||
}
|
||||
let fields: Vec<SecretField> = sqlx::query_as(
|
||||
"SELECT * FROM secrets WHERE entry_id = ANY($1) ORDER BY entry_id, field_name",
|
||||
let fields: Vec<EntrySecretRow> = sqlx::query_as(
|
||||
"SELECT es.entry_id, s.id, s.user_id, s.name, s.type, s.encrypted, s.version, s.created_at, s.updated_at \
|
||||
FROM entry_secrets es \
|
||||
JOIN secrets s ON s.id = es.secret_id \
|
||||
WHERE es.entry_id = ANY($1) \
|
||||
ORDER BY es.entry_id, es.sort_order, s.name",
|
||||
)
|
||||
.bind(entry_ids)
|
||||
.fetch_all(pool)
|
||||
@@ -181,7 +223,8 @@ pub async fn fetch_secret_schemas(
|
||||
|
||||
let mut map: HashMap<Uuid, Vec<SecretField>> = HashMap::new();
|
||||
for f in fields {
|
||||
map.entry(f.entry_id).or_default().push(f);
|
||||
let entry_id = f.entry_id;
|
||||
map.entry(entry_id).or_default().push(f.secret());
|
||||
}
|
||||
Ok(map)
|
||||
}
|
||||
@@ -194,8 +237,12 @@ pub async fn fetch_secrets_for_entries(
|
||||
if entry_ids.is_empty() {
|
||||
return Ok(HashMap::new());
|
||||
}
|
||||
let fields: Vec<SecretField> = sqlx::query_as(
|
||||
"SELECT * FROM secrets WHERE entry_id = ANY($1) ORDER BY entry_id, field_name",
|
||||
let fields: Vec<EntrySecretRow> = sqlx::query_as(
|
||||
"SELECT es.entry_id, s.id, s.user_id, s.name, s.type, s.encrypted, s.version, s.created_at, s.updated_at \
|
||||
FROM entry_secrets es \
|
||||
JOIN secrets s ON s.id = es.secret_id \
|
||||
WHERE es.entry_id = ANY($1) \
|
||||
ORDER BY es.entry_id, es.sort_order, s.name",
|
||||
)
|
||||
.bind(entry_ids)
|
||||
.fetch_all(pool)
|
||||
@@ -203,7 +250,8 @@ pub async fn fetch_secrets_for_entries(
|
||||
|
||||
let mut map: HashMap<Uuid, Vec<SecretField>> = HashMap::new();
|
||||
for f in fields {
|
||||
map.entry(f.entry_id).or_default().push(f);
|
||||
let entry_id = f.entry_id;
|
||||
map.entry(entry_id).or_default().push(f.secret());
|
||||
}
|
||||
Ok(map)
|
||||
}
|
||||
@@ -307,3 +355,32 @@ impl From<EntryRaw> for Entry {
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(sqlx::FromRow)]
|
||||
struct EntrySecretRow {
|
||||
entry_id: Uuid,
|
||||
id: Uuid,
|
||||
user_id: Option<Uuid>,
|
||||
name: String,
|
||||
#[sqlx(rename = "type")]
|
||||
secret_type: String,
|
||||
encrypted: Vec<u8>,
|
||||
version: i64,
|
||||
created_at: chrono::DateTime<chrono::Utc>,
|
||||
updated_at: chrono::DateTime<chrono::Utc>,
|
||||
}
|
||||
|
||||
impl EntrySecretRow {
|
||||
fn secret(self) -> SecretField {
|
||||
SecretField {
|
||||
id: self.id,
|
||||
user_id: self.user_id,
|
||||
name: self.name,
|
||||
secret_type: self.secret_type,
|
||||
encrypted: self.encrypted,
|
||||
version: self.version,
|
||||
created_at: self.created_at,
|
||||
updated_at: self.updated_at,
|
||||
}
|
||||
}
|
||||
}
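
Review note — a hedged usage sketch of the new `list_entries` / `count_entries` pair (filter values and the page size are illustrative; this mirrors how the web handler later in this diff consumes them):

```rust
// Sketch only: total via count_entries (same filters, no paging),
// then one page of rows via list_entries.
let params = SearchParams {
    folder: Some("refining"),
    entry_type: None,
    name: None,
    tags: &[],
    query: None,
    sort: "updated",
    limit: 50,
    offset: 0,
    user_id: Some(user_id),
};
let total = count_entries(&pool, &params).await?;
let page = list_entries(&pool, params).await?;
assert!(page.len() as i64 <= total);
```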

@@ -5,10 +5,10 @@ use uuid::Uuid;

use crate::crypto;
use crate::db;
use crate::models::EntryRow;
use crate::models::{EntryRow, EntryWriteRow};
use crate::service::add::{
    collect_field_paths, collect_key_paths, flatten_json_fields, insert_path, parse_key_path,
    parse_kv, remove_path,
    collect_field_paths, collect_key_paths, flatten_json_fields, infer_secret_type, insert_path,
    parse_key_path, parse_kv, remove_path,
};

#[derive(Debug, serde::Serialize)]
@@ -173,8 +173,6 @@ pub async fn run(
        );
    }

    let new_version = row.version + 1;

    for entry in params.secret_entries {
        let (path, field_value) = parse_kv(entry)?;
        let flat = flatten_json_fields("", &{
@@ -192,7 +190,10 @@ pub async fn run(
                encrypted: Vec<u8>,
            }
            let ef: Option<ExistingField> = sqlx::query_as(
                "SELECT id, encrypted FROM secrets WHERE entry_id = $1 AND field_name = $2",
                "SELECT s.id, s.encrypted \
                 FROM entry_secrets es \
                 JOIN secrets s ON s.id = es.secret_id \
                 WHERE es.entry_id = $1 AND s.name = $2",
            )
            .bind(row.id)
            .bind(field_name)
@@ -203,10 +204,8 @@ pub async fn run(
                && let Err(e) = db::snapshot_secret_history(
                    &mut tx,
                    db::SecretSnapshotParams {
                        entry_id: row.id,
                        secret_id: ef.id,
                        entry_version: row.version,
                        field_name,
                        name: field_name,
                        encrypted: &ef.encrypted,
                        action: "update",
                    },
@@ -216,16 +215,30 @@ pub async fn run(
                tracing::warn!(error = %e, "failed to snapshot secret field history");
            }

            if let Some(ef) = ef {
                sqlx::query(
                    "INSERT INTO secrets (entry_id, field_name, encrypted) VALUES ($1, $2, $3) \
                     ON CONFLICT (entry_id, field_name) DO UPDATE SET \
                     encrypted = EXCLUDED.encrypted, version = secrets.version + 1, updated_at = NOW()",
                    "UPDATE secrets SET encrypted = $1, version = version + 1, updated_at = NOW() WHERE id = $2",
                )
                .bind(row.id)
                .bind(field_name)
                .bind(&encrypted)
                .bind(ef.id)
                .execute(&mut *tx)
                .await?;
            } else {
                let secret_id: Uuid = sqlx::query_scalar(
                    "INSERT INTO secrets (user_id, name, type, encrypted) VALUES ($1, $2, $3, $4) RETURNING id",
                )
                .bind(params.user_id)
                .bind(field_name)
                .bind(infer_secret_type(field_name))
                .bind(&encrypted)
                .fetch_one(&mut *tx)
                .await?;
                sqlx::query("INSERT INTO entry_secrets (entry_id, secret_id) VALUES ($1, $2)")
                    .bind(row.id)
                    .bind(secret_id)
                    .execute(&mut *tx)
                    .await?;
            }
        }
    }

@@ -239,7 +252,10 @@ pub async fn run(
            encrypted: Vec<u8>,
        }
        let field: Option<FieldToDelete> = sqlx::query_as(
            "SELECT id, encrypted FROM secrets WHERE entry_id = $1 AND field_name = $2",
            "SELECT s.id, s.encrypted \
             FROM entry_secrets es \
             JOIN secrets s ON s.id = es.secret_id \
             WHERE es.entry_id = $1 AND s.name = $2",
        )
        .bind(row.id)
        .bind(&field_name)
@@ -250,10 +266,8 @@ pub async fn run(
            if let Err(e) = db::snapshot_secret_history(
                &mut tx,
                db::SecretSnapshotParams {
                    entry_id: row.id,
                    secret_id: f.id,
                    entry_version: new_version,
                    field_name: &field_name,
                    name: &field_name,
                    encrypted: &f.encrypted,
                    action: "delete",
                },
@@ -262,7 +276,16 @@ pub async fn run(
            {
                tracing::warn!(error = %e, "failed to snapshot secret field history before delete");
            }
            sqlx::query("DELETE FROM secrets WHERE id = $1")
            sqlx::query("DELETE FROM entry_secrets WHERE entry_id = $1 AND secret_id = $2")
                .bind(row.id)
                .bind(f.id)
                .execute(&mut *tx)
                .await?;
            sqlx::query(
                "DELETE FROM secrets s \
                 WHERE s.id = $1 \
                 AND NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = s.id)",
            )
            .bind(f.id)
            .execute(&mut *tx)
            .await?;
@@ -306,3 +329,118 @@ pub async fn run(
        remove_secrets: remove_secret_keys,
    })
}

/// Update non-sensitive entry columns by primary key (multi-tenant: `user_id` must match).
/// Does not read or modify `secrets` rows.
pub struct UpdateEntryFieldsByIdParams<'a> {
    pub folder: &'a str,
    pub entry_type: &'a str,
    pub name: &'a str,
    pub notes: &'a str,
    pub tags: &'a [String],
    pub metadata: &'a serde_json::Value,
}

pub async fn update_fields_by_id(
    pool: &PgPool,
    entry_id: Uuid,
    user_id: Uuid,
    params: UpdateEntryFieldsByIdParams<'_>,
) -> Result<()> {
    if params.folder.len() > 128 {
        anyhow::bail!("folder must be at most 128 characters");
    }
    if params.entry_type.len() > 64 {
        anyhow::bail!("type must be at most 64 characters");
    }
    if params.name.len() > 256 {
        anyhow::bail!("name must be at most 256 characters");
    }

    let mut tx = pool.begin().await?;

    let row: Option<EntryWriteRow> = sqlx::query_as(
        "SELECT id, version, folder, type, name, tags, metadata, notes FROM entries \
         WHERE id = $1 AND user_id = $2 FOR UPDATE",
    )
    .bind(entry_id)
    .bind(user_id)
    .fetch_optional(&mut *tx)
    .await?;

    let row = match row {
        Some(r) => r,
        None => {
            tx.rollback().await?;
            anyhow::bail!("Entry not found");
        }
    };

    if let Err(e) = db::snapshot_entry_history(
        &mut tx,
        db::EntrySnapshotParams {
            entry_id: row.id,
            user_id: Some(user_id),
            folder: &row.folder,
            entry_type: &row.entry_type,
            name: &row.name,
            version: row.version,
            action: "update",
            tags: &row.tags,
            metadata: &row.metadata,
        },
    )
    .await
    {
        tracing::warn!(error = %e, "failed to snapshot entry history before web update");
    }

    let res = sqlx::query(
        "UPDATE entries SET folder = $1, type = $2, name = $3, notes = $4, tags = $5, metadata = $6, \
         version = version + 1, updated_at = NOW() \
         WHERE id = $7 AND version = $8",
    )
    .bind(params.folder)
    .bind(params.entry_type)
    .bind(params.name)
    .bind(params.notes)
    .bind(params.tags)
    .bind(params.metadata)
    .bind(row.id)
    .bind(row.version)
    .execute(&mut *tx)
    .await
    .map_err(|e| {
        if let sqlx::Error::Database(ref d) = e
            && d.code().as_deref() == Some("23505")
        {
            return anyhow::anyhow!(
                "An entry with this folder and name already exists for your account."
            );
        }
        e.into()
    })?;

    if res.rows_affected() == 0 {
        tx.rollback().await?;
        anyhow::bail!("Concurrent modification detected. Please refresh and try again.");
    }

    crate::audit::log_tx(
        &mut tx,
        Some(user_id),
        "update",
        params.folder,
        params.entry_type,
        params.name,
        serde_json::json!({
            "source": "web",
            "entry_id": entry_id,
            "fields": ["folder", "type", "name", "notes", "tags", "metadata"],
        }),
    )
    .await;

    tx.commit().await?;
    Ok(())
}

@@ -1,6 +1,6 @@
[package]
name = "secrets-mcp"
version = "0.3.2"
version = "0.3.9"
edition.workspace = true

[[bin]]

@@ -21,7 +21,7 @@ use tower_sessions_sqlx_store_chrono::PostgresStore;
use tracing_subscriber::EnvFilter;
use tracing_subscriber::fmt::time::FormatTime;

use secrets_core::config::resolve_db_url;
use secrets_core::config::resolve_db_config;
use secrets_core::db::{create_pool, migrate};

use crate::oauth::OAuthConfig;
@@ -40,6 +40,14 @@ fn load_env_var(name: &str) -> Option<String> {
    std::env::var(name).ok().filter(|s| !s.is_empty())
}

/// Pretty-print bind address in logs (`127.0.0.1` → `localhost`); actual socket bind unchanged.
fn listen_addr_log_display(bind_addr: &str) -> String {
    bind_addr
        .strip_prefix("127.0.0.1:")
        .map(|port| format!("localhost:{port}"))
        .unwrap_or_else(|| bind_addr.to_string())
}
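
Review note — a minimal sanity check for `listen_addr_log_display` above (the test module itself is a sketch, not part of this diff; behavior follows the function's doc comment):

```rust
#[cfg(test)]
mod tests {
    use super::listen_addr_log_display;

    #[test]
    fn localhost_display_only_rewrites_loopback() {
        // Loopback binds are prettified for the log line only.
        assert_eq!(listen_addr_log_display("127.0.0.1:9315"), "localhost:9315");
        // Non-loopback binds are logged verbatim.
        assert_eq!(listen_addr_log_display("0.0.0.0:9315"), "0.0.0.0:9315");
    }
}
```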

fn load_oauth_config(prefix: &str, base_url: &str, path: &str) -> Option<OAuthConfig> {
    let client_id = load_env_var(&format!("{}_CLIENT_ID", prefix))?;
    let client_secret = load_env_var(&format!("{}_CLIENT_SECRET", prefix))?;
@@ -78,9 +86,9 @@ async fn main() -> Result<()> {
        .init();

    // ── Database ──────────────────────────────────────────────────────────────
    let db_url = resolve_db_url("")
    let db_config = resolve_db_config("")
        .context("Database not configured. Set SECRETS_DATABASE_URL environment variable.")?;
    let pool = create_pool(&db_url)
    let pool = create_pool(&db_config)
        .await
        .context("failed to connect to database")?;
    migrate(&pool)
@@ -168,7 +176,10 @@ async fn main() -> Result<()> {
        .await
        .with_context(|| format!("failed to bind to {}", bind_addr))?;

    tracing::info!("Secrets MCP Server listening on http://{}", bind_addr);
    tracing::info!(
        "Secrets MCP Server listening on http://{}",
        listen_addr_log_display(&bind_addr)
    );
    tracing::info!("MCP endpoint: {}/mcp", base_url);

    axum::serve(

@@ -225,12 +225,18 @@ struct AddInput {
        description = "Metadata fields as a JSON object {\"key\": value}. Merged with 'meta' if both provided."
    )]
    meta_obj: Option<Map<String, Value>>,
    #[schemars(description = "Secret fields as 'key=value' strings")]
    #[schemars(
        description = "Secret fields as 'key=value' strings. Reminder: non-sensitive endpoint/address fields should go to metadata.address instead of secrets."
    )]
    secrets: Option<Vec<String>>,
    #[schemars(
        description = "Secret fields as a JSON object {\"key\": \"value\"}. Merged with 'secrets' if both provided."
        description = "Secret fields as a JSON object {\"key\": \"value\"}. Merged with 'secrets' if both provided. Reminder: non-sensitive endpoint/address fields should go to metadata.address."
    )]
    secrets_obj: Option<Map<String, Value>>,
    #[schemars(
        description = "Link existing secrets by secret name. Names must resolve uniquely under current user."
    )]
    link_secret_names: Option<Vec<String>>,
}

#[derive(Debug, Deserialize, JsonSchema)]
@@ -259,10 +265,12 @@ struct UpdateInput {
    meta_obj: Option<Map<String, Value>>,
    #[schemars(description = "Metadata field keys to remove")]
    remove_meta: Option<Vec<String>>,
    #[schemars(description = "Secret fields to update/add as 'key=value' strings")]
    #[schemars(
        description = "Secret fields to update/add as 'key=value' strings. Reminder: non-sensitive endpoint/address fields should go to metadata.address instead of secrets."
    )]
    secrets: Option<Vec<String>>,
    #[schemars(
        description = "Secret fields to update/add as a JSON object {\"key\": \"value\"}. Merged with 'secrets' if both provided."
        description = "Secret fields to update/add as a JSON object {\"key\": \"value\"}. Merged with 'secrets' if both provided. Reminder: non-sensitive endpoint/address fields should go to metadata.address."
    )]
    secrets_obj: Option<Map<String, Value>>,
    #[schemars(description = "Secret field keys to remove")]
@@ -429,10 +437,20 @@ impl SecretsService {
            .entries
            .iter()
            .map(|e| {
                let schema: Vec<&str> = result
                let schema: Vec<serde_json::Value> = result
                    .secret_schemas
                    .get(&e.id)
                    .map(|f| f.iter().map(|s| s.field_name.as_str()).collect())
                    .map(|f| {
                        f.iter()
                            .map(|s| {
                                serde_json::json!({
                                    "id": s.id,
                                    "name": s.name,
                                    "type": s.secret_type,
                                })
                            })
                            .collect()
                    })
                    .unwrap_or_default();
                serde_json::json!({
                    "id": e.id,
@@ -517,10 +535,20 @@ impl SecretsService {
                "updated_at": e.updated_at.format("%Y-%m-%dT%H:%M:%SZ").to_string(),
            })
        } else {
            let schema: Vec<&str> = result
            let schema: Vec<serde_json::Value> = result
                .secret_schemas
                .get(&e.id)
                .map(|f| f.iter().map(|s| s.field_name.as_str()).collect())
                .map(|f| {
                    f.iter()
                        .map(|s| {
                            serde_json::json!({
                                "id": s.id,
                                "name": s.name,
                                "type": s.secret_type,
                            })
                        })
                        .collect()
                })
                .unwrap_or_default();
            serde_json::json!({
                "id": e.id,
@@ -639,6 +667,7 @@ impl SecretsService {
        if let Some(obj) = input.secrets_obj {
            secrets.extend(map_to_kv_strings(obj));
        }
        let link_secret_names = input.link_secret_names.unwrap_or_default();
        let folder = input.folder.as_deref().unwrap_or("");
        let entry_type = input.entry_type.as_deref().unwrap_or("");
        let notes = input.notes.as_deref().unwrap_or("");
@@ -653,6 +682,7 @@ impl SecretsService {
                tags: &tags,
                meta_entries: &meta,
                secret_entries: &secrets,
                link_secret_names: &link_secret_names,
                user_id: Some(user_id),
            },
            &user_key,
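
Review note — a hedged example of `secrets_add` tool arguments exercising the fields above (`meta_obj`, `secrets_obj`, `link_secret_names`). The top-level keys such as `folder` / `name` are inferred from the handler code in this hunk, and all values are illustrative:

```json
{
  "folder": "refining",
  "name": "postgres-prod",
  "meta_obj": { "address": "db.refining.ltd:5432" },
  "secrets_obj": { "password": "<secret>" },
  "link_secret_names": ["refining/shared-pem"]
}
```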

@@ -8,9 +8,10 @@ use axum::{
    extract::{ConnectInfo, Path, Query, State},
    http::{HeaderMap, StatusCode, header},
    response::{Html, IntoResponse, Redirect, Response},
    routing::{get, post},
    routing::{get, patch, post},
};
use serde::{Deserialize, Serialize};
use serde_json::json;
use tower_sessions::Session;
use uuid::Uuid;

@@ -19,6 +20,9 @@ use secrets_core::crypto::hex;
use secrets_core::service::{
    api_key::{ensure_api_key, regenerate_api_key},
    audit_log::list_for_user,
    delete::delete_by_id,
    search::{SearchParams, count_entries, fetch_secret_schemas, list_entries},
    update::{UpdateEntryFieldsByIdParams, update_fields_by_id},
    user::{
        OAuthProfile, bind_oauth_account, find_or_create_user, get_user_by_id,
        unbind_oauth_account, update_user_key_setup,
@@ -78,6 +82,51 @@ struct AuditEntryView {
    detail: String,
}

#[derive(Template)]
#[template(path = "entries.html")]
struct EntriesPageTemplate {
    user_name: String,
    user_email: String,
    entries: Vec<EntryListItemView>,
    total_count: i64,
    shown_count: usize,
    limit: u32,
    filter_folder: String,
    filter_type: String,
    version: &'static str,
}

/// Non-sensitive fields only (no `secrets` / ciphertext).
struct EntryListItemView {
    id: String,
    folder: String,
    entry_type: String,
    name: String,
    notes: String,
    tags: String,
    metadata: String,
    secrets: Vec<SecretSummaryView>,
    /// RFC3339 UTC for `<time datetime>`; localized in entries.html.
    updated_at_iso: String,
}

struct SecretSummaryView {
    id: String,
    name: String,
    secret_type: String,
}

/// Cap for HTML list (avoids loading unbounded rows into memory).
const ENTRIES_PAGE_LIMIT: u32 = 5_000;

#[derive(Deserialize)]
struct EntriesQuery {
    folder: Option<String>,
    /// URL query key is `type` (maps to DB column `entries.type`).
    #[serde(rename = "type")]
    entry_type: Option<String>,
}

// ── App state helpers ─────────────────────────────────────────────────────────

fn google_cfg(state: &AppState) -> Option<&OAuthConfig> {
@@ -149,6 +198,7 @@ pub fn web_router() -> Router<AppState> {
        .route("/auth/google/callback", get(auth_google_callback))
        .route("/auth/logout", post(auth_logout))
        .route("/dashboard", get(dashboard))
        .route("/entries", get(entries_page))
        .route("/audit", get(audit_page))
        .route("/account/bind/google", get(account_bind_google))
        .route(
@@ -160,6 +210,14 @@ pub fn web_router() -> Router<AppState> {
        .route("/api/key-setup", post(api_key_setup))
        .route("/api/apikey", get(api_apikey_get))
        .route("/api/apikey/regenerate", post(api_apikey_regenerate))
        .route(
            "/api/entries/{id}",
            patch(api_entry_patch).delete(api_entry_delete),
        )
        .route(
            "/api/entries/{entry_id}/secrets/{secret_id}",
            axum::routing::delete(api_entry_secret_unlink),
        )
}

fn text_asset_response(content: &'static str, content_type: &'static str) -> Response {
@@ -478,6 +536,109 @@ async fn dashboard(
    render_template(tmpl)
}

async fn entries_page(
    State(state): State<AppState>,
    session: Session,
    Query(q): Query<EntriesQuery>,
) -> Result<Response, StatusCode> {
    let Some(user_id) = current_user_id(&session).await else {
        return Ok(Redirect::to("/login").into_response());
    };

    let user = match get_user_by_id(&state.pool, user_id).await.map_err(|e| {
        tracing::error!(error = %e, %user_id, "failed to load user for entries page");
        StatusCode::INTERNAL_SERVER_ERROR
    })? {
        Some(u) => u,
        None => return Ok(Redirect::to("/login").into_response()),
    };

    let folder_filter = q
        .folder
        .as_ref()
        .map(|s| s.trim())
        .filter(|s| !s.is_empty())
        .map(|s| s.to_string());
    let type_filter = q
        .entry_type
        .as_ref()
        .map(|s| s.trim())
        .filter(|s| !s.is_empty())
        .map(|s| s.to_string());

    let params = SearchParams {
        folder: folder_filter.as_deref(),
        entry_type: type_filter.as_deref(),
        name: None,
        tags: &[],
        query: None,
        sort: "updated",
        limit: ENTRIES_PAGE_LIMIT,
        offset: 0,
        user_id: Some(user_id),
    };

    let total_count = count_entries(&state.pool, &params).await.map_err(|e| {
        tracing::error!(error = %e, "failed to count entries for web");
        StatusCode::INTERNAL_SERVER_ERROR
    })?;

    let rows = list_entries(&state.pool, params).await.map_err(|e| {
        tracing::error!(error = %e, "failed to load entries list for web");
        StatusCode::INTERNAL_SERVER_ERROR
    })?;
    let shown_count = rows.len();
    let entry_ids: Vec<Uuid> = rows.iter().map(|e| e.id).collect();
    let secret_schemas = fetch_secret_schemas(&state.pool, &entry_ids)
        .await
        .map_err(|e| {
            tracing::error!(error = %e, "failed to load secret schema list for web");
            StatusCode::INTERNAL_SERVER_ERROR
        })?;

    let entries = rows
        .into_iter()
        .map(|e| EntryListItemView {
            id: e.id.to_string(),
            folder: e.folder,
            entry_type: e.entry_type,
            name: e.name,
            notes: e.notes,
            tags: e.tags.join(", "),
            metadata: serde_json::to_string_pretty(&e.metadata)
                .unwrap_or_else(|_| "{}".to_string()),
            secrets: secret_schemas
                .get(&e.id)
                .map(|fields| {
                    fields
                        .iter()
                        .map(|f| SecretSummaryView {
                            id: f.id.to_string(),
                            name: f.name.clone(),
                            secret_type: f.secret_type.clone(),
                        })
                        .collect()
                })
                .unwrap_or_default(),
            updated_at_iso: e.updated_at.to_rfc3339_opts(SecondsFormat::Secs, true),
        })
        .collect();

    let tmpl = EntriesPageTemplate {
        user_name: user.name.clone(),
        user_email: user.email.clone().unwrap_or_default(),
        entries,
        total_count,
        shown_count,
        limit: ENTRIES_PAGE_LIMIT,
        filter_folder: folder_filter.unwrap_or_default(),
        filter_type: type_filter.unwrap_or_default(),
        version: env!("CARGO_PKG_VERSION"),
    };

    render_template(tmpl)
}

async fn audit_page(
    State(state): State<AppState>,
    session: Session,
@@ -751,6 +912,223 @@ async fn api_apikey_regenerate(
    Ok(Json(ApiKeyResponse { api_key }))
}

// ── Entry management (Web UI, non-sensitive fields only) ───────────────────────

#[derive(Deserialize)]
struct EntryPatchBody {
    folder: String,
    #[serde(rename = "type")]
    entry_type: String,
    name: String,
    notes: String,
    tags: Vec<String>,
    metadata: serde_json::Value,
}

type EntryApiError = (StatusCode, Json<serde_json::Value>);

fn map_entry_mutation_err(e: anyhow::Error) -> EntryApiError {
    let msg = e.to_string();
    if msg.contains("Entry not found") {
        return (
            StatusCode::NOT_FOUND,
            Json(json!({ "error": "条目不存在或无权访问" })),
        );
    }
    if msg.contains("already exists") {
        return (
            StatusCode::CONFLICT,
            Json(json!({ "error": "该账号下已存在相同 folder + name 的条目" })),
        );
    }
    if msg.contains("Concurrent modification") {
        return (
            StatusCode::CONFLICT,
            Json(json!({ "error": "条目已被修改,请刷新后重试" })),
        );
    }
    if msg.contains("must be at most") {
        return (StatusCode::BAD_REQUEST, Json(json!({ "error": msg })));
    }
    tracing::error!(error = %e, "entry mutation failed");
    (
        StatusCode::INTERNAL_SERVER_ERROR,
        Json(json!({ "error": "操作失败,请稍后重试" })),
    )
}

async fn api_entry_patch(
    State(state): State<AppState>,
    session: Session,
    Path(entry_id): Path<Uuid>,
    Json(body): Json<EntryPatchBody>,
) -> Result<Json<serde_json::Value>, EntryApiError> {
    let user_id = current_user_id(&session)
        .await
        .ok_or((StatusCode::UNAUTHORIZED, Json(json!({ "error": "未登录" }))))?;

    let folder = body.folder.trim();
    let entry_type = body.entry_type.trim();
    let name = body.name.trim();
    let notes = body.notes.trim();

    if name.is_empty() {
        return Err((
            StatusCode::BAD_REQUEST,
            Json(json!({ "error": "name 不能为空" })),
        ));
    }

    let tags: Vec<String> = body
        .tags
        .into_iter()
        .map(|t| t.trim().to_string())
        .filter(|t| !t.is_empty())
        .collect();

    if !body.metadata.is_object() {
        return Err((
            StatusCode::BAD_REQUEST,
            Json(json!({ "error": "metadata 必须是 JSON 对象" })),
        ));
    }

    update_fields_by_id(
        &state.pool,
        entry_id,
        user_id,
        UpdateEntryFieldsByIdParams {
            folder,
            entry_type,
            name,
            notes,
            tags: &tags,
            metadata: &body.metadata,
        },
    )
    .await
    .map_err(map_entry_mutation_err)?;

    Ok(Json(json!({ "ok": true })))
}

async fn api_entry_delete(
    State(state): State<AppState>,
    session: Session,
    Path(entry_id): Path<Uuid>,
) -> Result<Json<serde_json::Value>, EntryApiError> {
    let user_id = current_user_id(&session)
        .await
        .ok_or((StatusCode::UNAUTHORIZED, Json(json!({ "error": "未登录" }))))?;

    let result = delete_by_id(&state.pool, entry_id, user_id)
        .await
        .map_err(map_entry_mutation_err)?;

    Ok(Json(json!({
        "ok": true,
        "migrated": result.migrated,
    })))
}

async fn api_entry_secret_unlink(
    State(state): State<AppState>,
    session: Session,
    Path((entry_id, secret_id)): Path<(Uuid, Uuid)>,
) -> Result<Json<serde_json::Value>, EntryApiError> {
    #[derive(sqlx::FromRow)]
    struct EntryAuditRow {
        folder: String,
        #[sqlx(rename = "type")]
        entry_type: String,
        name: String,
    }

    let user_id = current_user_id(&session)
        .await
        .ok_or((StatusCode::UNAUTHORIZED, Json(json!({ "error": "未登录" }))))?;

    let mut tx = state
        .pool
        .begin()
        .await
        .map_err(|e| map_entry_mutation_err(e.into()))?;

    let entry_row: Option<EntryAuditRow> =
        sqlx::query_as("SELECT folder, type, name FROM entries WHERE id = $1 AND user_id = $2")
            .bind(entry_id)
            .bind(user_id)
            .fetch_optional(&mut *tx)
            .await
            .map_err(|e| map_entry_mutation_err(e.into()))?;

    let Some(entry_row) = entry_row else {
        tx.rollback()
            .await
            .map_err(|e| map_entry_mutation_err(e.into()))?;
        return Err((
            StatusCode::NOT_FOUND,
            Json(json!({ "error": "条目不存在或无权访问" })),
        ));
    };

    let deleted = sqlx::query("DELETE FROM entry_secrets WHERE entry_id = $1 AND secret_id = $2")
        .bind(entry_id)
        .bind(secret_id)
        .execute(&mut *tx)
        .await
        .map_err(|e| map_entry_mutation_err(e.into()))?
        .rows_affected();

    if deleted == 0 {
        tx.rollback()
            .await
            .map_err(|e| map_entry_mutation_err(e.into()))?;
        return Err((
            StatusCode::NOT_FOUND,
            Json(json!({ "error": "关联不存在" })),
        ));
    }

    let secret_deleted = sqlx::query(
        "DELETE FROM secrets s \
         WHERE s.id = $1 \
         AND NOT EXISTS (SELECT 1 FROM entry_secrets es WHERE es.secret_id = s.id)",
    )
    .bind(secret_id)
    .execute(&mut *tx)
    .await
    .map_err(|e| map_entry_mutation_err(e.into()))?
    .rows_affected()
        > 0;

    secrets_core::audit::log_tx(
        &mut tx,
        Some(user_id),
        "unlink_secret",
        &entry_row.folder,
        &entry_row.entry_type,
        &entry_row.name,
        json!({
            "source": "web",
            "entry_id": entry_id,
            "secret_id": secret_id,
            "deleted_secret": secret_deleted,
        }),
    )
    .await;

    tx.commit()
        .await
        .map_err(|e| map_entry_mutation_err(e.into()))?;

    Ok(Json(json!({
        "ok": true,
        "deleted_relation": true,
        "deleted_secret": secret_deleted,
    })))
}
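
Review note — hedged `curl` sketches for the three new endpoints registered above (the `$BASE_URL` / `$ENTRY_ID` / `$SECRET_ID` variables and session-cookie handling via `-b cookies.txt` are assumptions):

```bash
# PATCH non-sensitive fields (route: /api/entries/{id}); values illustrative.
curl -X PATCH "$BASE_URL/api/entries/$ENTRY_ID" \
  -b cookies.txt -H 'Content-Type: application/json' \
  -d '{"folder":"refining","type":"server","name":"aliyun","notes":"","tags":["prod"],"metadata":{"address":"db.refining.ltd"}}'

# Delete an entry; the response's "migrated" field lists key_ref redirections.
curl -X DELETE "$BASE_URL/api/entries/$ENTRY_ID" -b cookies.txt

# Unlink one secret from an entry; orphaned secrets are garbage-collected.
curl -X DELETE "$BASE_URL/api/entries/$ENTRY_ID/secrets/$SECRET_ID" -b cookies.txt
```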

// ── OAuth / Well-known ────────────────────────────────────────────────────────

/// RFC 9728 — OAuth 2.0 Protected Resource Metadata.

@@ -92,6 +92,7 @@
<a href="/dashboard" class="sidebar-logo"><span>secrets</span></a>
<nav class="sidebar-menu">
    <a href="/dashboard" class="sidebar-link">MCP</a>
    <a href="/entries" class="sidebar-link">条目</a>
    <a href="/audit" class="sidebar-link active">审计</a>
</nav>
</aside>

@@ -174,6 +174,7 @@
<a href="/dashboard" class="sidebar-logo"><span>secrets</span></a>
<nav class="sidebar-menu">
    <a href="/dashboard" class="sidebar-link active">MCP</a>
    <a href="/entries" class="sidebar-link">条目</a>
    <a href="/audit" class="sidebar-link">审计</a>
</nav>
</aside>

490 crates/secrets-mcp/templates/entries.html Normal file
@@ -0,0 +1,490 @@
<!DOCTYPE html>
<html lang="zh-CN">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="icon" href="/favicon.svg?v={{ version }}" type="image/svg+xml">
<title>Secrets — 条目</title>
<style>
/* @import must precede all other rules to take effect. */
@import url('https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@400;600&family=Inter:wght@400;500;600&display=swap');
*, *::before, *::after { box-sizing: border-box; margin: 0; padding: 0; }
:root {
  --bg: #0d1117; --surface: #161b22; --surface2: #21262d;
  --border: #30363d; --text: #e6edf3; --text-muted: #8b949e;
  --accent: #58a6ff; --accent-hover: #79b8ff;
}
body { background: var(--bg); color: var(--text); font-family: 'Inter', sans-serif; min-height: 100vh; }
.layout { display: flex; min-height: 100vh; }
.sidebar {
  width: 220px; flex-shrink: 0; background: var(--surface); border-right: 1px solid var(--border);
  padding: 24px 16px; display: flex; flex-direction: column; gap: 20px;
}
.sidebar-logo { font-family: 'JetBrains Mono', monospace; font-size: 16px; font-weight: 600;
  color: var(--text); text-decoration: none; padding: 0 10px; }
.sidebar-logo span { color: var(--accent); }
.sidebar-menu { display: flex; flex-direction: column; gap: 6px; }
.sidebar-link {
  padding: 10px 12px; border-radius: 8px; color: var(--text-muted); text-decoration: none;
  border: 1px solid transparent; font-size: 13px; font-weight: 500;
}
.sidebar-link:hover { background: var(--surface2); color: var(--text); }
.sidebar-link.active {
  background: rgba(88,166,255,0.12); color: var(--text); border-color: rgba(88,166,255,0.35);
}
.content-shell { flex: 1; min-width: 0; display: flex; flex-direction: column; }
.topbar {
  background: var(--surface); border-bottom: 1px solid var(--border); padding: 0 24px;
  display: flex; align-items: center; gap: 12px; min-height: 52px;
}
.topbar-spacer { flex: 1; }
.nav-user { font-size: 13px; color: var(--text-muted); }
.btn-sign-out {
  padding: 5px 12px; border-radius: 6px; border: 1px solid var(--border);
  background: none; color: var(--text); font-size: 12px; text-decoration: none; cursor: pointer;
}
.btn-sign-out:hover { background: var(--surface2); }
.main { padding: 32px 24px 40px; flex: 1; }
.card { background: var(--surface); border: 1px solid var(--border); border-radius: 12px;
  padding: 24px; width: 100%; max-width: 1480px; margin: 0 auto; }
.card-title { font-size: 20px; font-weight: 600; margin-bottom: 8px; }
.card-subtitle { color: var(--text-muted); font-size: 13px; margin-bottom: 20px; }
.filter-bar {
  display: flex; flex-wrap: wrap; align-items: flex-end; gap: 12px 16px;
  margin-bottom: 20px; padding: 16px; background: var(--bg); border: 1px solid var(--border);
  border-radius: 10px;
}
.filter-field { display: flex; flex-direction: column; gap: 6px; min-width: 140px; flex: 1; }
.filter-field label { font-size: 12px; color: var(--text-muted); font-weight: 500; }
.filter-field input {
  background: var(--surface); border: 1px solid var(--border); border-radius: 6px;
  color: var(--text); padding: 8px 10px; font-size: 13px; font-family: 'JetBrains Mono', monospace;
  outline: none; width: 100%;
}
.filter-field input:focus { border-color: var(--accent); }
.filter-actions { display: flex; flex-wrap: wrap; align-items: center; gap: 8px; }
.btn-filter {
  padding: 8px 16px; border-radius: 6px; border: none; background: var(--accent); color: #0d1117;
  font-size: 13px; font-weight: 600; cursor: pointer;
}
.btn-filter:hover { background: var(--accent-hover); }
.btn-clear {
  padding: 8px 14px; border-radius: 6px; border: 1px solid var(--border); background: transparent;
  color: var(--text-muted); font-size: 13px; text-decoration: none; cursor: pointer;
}
.btn-clear:hover { background: var(--surface2); color: var(--text); }
.empty { color: var(--text-muted); font-size: 14px; padding: 20px 0; }
.table-wrap {
  overflow: auto;
  border: 1px solid var(--border);
  border-radius: 10px;
  background: var(--bg);
  max-height: 72vh;
}
table {
  width: max-content;
  min-width: 1240px;
  border-collapse: separate;
  border-spacing: 0;
}
th, td { text-align: left; vertical-align: top; padding: 12px 10px; border-top: 1px solid var(--border); }
th {
  color: var(--text-muted);
  font-size: 12px;
  font-weight: 600;
  white-space: nowrap;
  position: sticky;
  top: 0;
  z-index: 2;
  background: var(--surface);
}
td { font-size: 13px; line-height: 1.45; }
tbody tr:nth-child(2n) td { background: rgba(255, 255, 255, 0.01); }
.mono { font-family: 'JetBrains Mono', monospace; }
.col-updated { min-width: 168px; }
.col-folder { min-width: 128px; }
.col-type { min-width: 108px; }
.col-name { min-width: 180px; max-width: 260px; }
.col-tags { min-width: 160px; max-width: 220px; }
.col-actions { min-width: 132px; }
.cell-name, .cell-tags-val {
  overflow-wrap: anywhere;
  word-break: break-word;
}
.cell-notes, .cell-meta { min-width: 260px; max-width: 360px; }
.notes-scroll {
  max-height: 120px;
  overflow: auto;
  white-space: pre-wrap;
  word-break: break-word;
  padding: 8px;
  background: var(--bg);
  border: 1px solid var(--border);
  border-radius: 8px;
  font-size: 12px;
}
.detail {
  background: var(--bg); border: 1px solid var(--border); border-radius: 8px;
  padding: 10px; white-space: pre-wrap; word-break: break-word; font-size: 12px;
  max-width: 360px; max-height: 120px; overflow: auto;
}
.col-actions { white-space: nowrap; }
.row-actions { display: flex; flex-wrap: wrap; gap: 6px; }
.col-secrets { min-width: 300px; max-width: 420px; }
.secret-list { display: flex; flex-wrap: wrap; gap: 6px; max-width: 400px; }
.secret-chip {
  display: inline-flex;
  align-items: center;
  gap: 6px;
  border: 1px solid var(--border);
  border-radius: 999px;
  padding: 3px 8px;
  font-size: 11px;
  background: var(--surface2);
  font-family: 'JetBrains Mono', monospace;
  max-width: 100%;
  min-width: 0;
}
.secret-name {
  min-width: 0;
  overflow: hidden;
  text-overflow: ellipsis;
  white-space: nowrap;
}
.secret-type {
  color: var(--text-muted);
  border-left: 1px solid var(--border);
  padding-left: 6px;
}
.btn-unlink-secret {
  border: none;
  background: transparent;
  color: #f85149;
  cursor: pointer;
  font-size: 12px;
  line-height: 1;
  padding: 0;
}
.btn-row {
  padding: 4px 10px; border-radius: 6px; font-size: 12px; cursor: pointer;
  border: 1px solid var(--border); background: var(--surface2); color: var(--text-muted);
  font-family: inherit;
}
.btn-row:hover { color: var(--text); border-color: var(--text-muted); }
.btn-row.danger:hover { border-color: #f85149; color: #f85149; }
.modal-overlay {
  position: fixed; inset: 0; background: rgba(1, 4, 9, 0.65); z-index: 200;
  display: flex; align-items: center; justify-content: center; padding: 16px;
}
.modal-overlay[hidden] { display: none !important; }
.modal {
  background: var(--surface); border: 1px solid var(--border); border-radius: 12px;
  padding: 22px; width: 100%; max-width: 520px; max-height: 90vh; overflow: auto;
  box-shadow: 0 16px 48px rgba(0,0,0,0.45);
}
.modal-title { font-size: 16px; font-weight: 600; margin-bottom: 14px; }
.modal-field { margin-bottom: 12px; }
.modal-field label { display: block; font-size: 12px; color: var(--text-muted); margin-bottom: 5px; }
.modal-field input, .modal-field textarea {
  width: 100%; background: var(--bg); border: 1px solid var(--border); border-radius: 6px;
  color: var(--text); padding: 8px 10px; font-size: 13px; font-family: 'JetBrains Mono', monospace;
  outline: none;
}
.modal-field textarea { min-height: 72px; resize: vertical; }
.modal-field textarea.metadata-edit { min-height: 140px; }
.modal-error { color: #f85149; font-size: 12px; margin-bottom: 10px; display: none; }
.modal-error.visible { display: block; }
.modal-footer { display: flex; flex-wrap: wrap; gap: 8px; justify-content: flex-end; margin-top: 16px; }
.btn-modal { padding: 8px 16px; border-radius: 6px; font-size: 13px; cursor: pointer; font-family: inherit; border: 1px solid var(--border); background: transparent; color: var(--text); }
.btn-modal.primary { background: var(--accent); color: #0d1117; border-color: transparent; font-weight: 600; }
.btn-modal.primary:hover { background: var(--accent-hover); }
.btn-modal.danger { border-color: #f85149; color: #f85149; }
@media (max-width: 900px) {
  .layout { flex-direction: column; }
  .sidebar {
    width: 100%; border-right: none; border-bottom: 1px solid var(--border);
    padding: 16px; gap: 14px;
  }
  .sidebar-menu { flex-direction: row; flex-wrap: wrap; }
  .sidebar-link { flex: 1; text-align: center; min-width: 72px; }
  .main { padding: 20px 12px 28px; }
  .card { padding: 16px; }
  .topbar { padding: 12px 16px; flex-wrap: wrap; }
  .table-wrap { max-height: none; border: none; background: transparent; }
  table, thead, tbody, th, td, tr { display: block; min-width: 0; width: 100%; }
  thead { display: none; }
  tr { border-top: 1px solid var(--border); padding: 12px 0; }
  td { border-top: none; padding: 6px 0; max-width: none; }
  td::before {
    display: block; color: var(--text-muted); font-size: 11px;
    margin-bottom: 4px; text-transform: uppercase;
  }
  td.col-updated::before { content: "更新"; }
  td.col-folder::before { content: "Folder"; }
  td.col-type::before { content: "Type"; }
  td.col-name::before { content: "Name"; }
  td.col-notes::before { content: "Notes"; }
  td.col-tags::before { content: "Tags"; }
  td.col-meta::before { content: "Metadata"; }
  td.col-secrets::before { content: "Secrets"; }
  td.col-actions::before { content: "操作"; }
  .detail, .notes-scroll, .secret-list { max-width: none; }
}
</style>
</head>
<body>
<div class="layout">
  <aside class="sidebar">
    <a href="/dashboard" class="sidebar-logo"><span>secrets</span></a>
    <nav class="sidebar-menu">
      <a href="/dashboard" class="sidebar-link">MCP</a>
      <a href="/entries" class="sidebar-link active">条目</a>
      <a href="/audit" class="sidebar-link">审计</a>
    </nav>
  </aside>

  <div class="content-shell">
    <div class="topbar">
      <span class="topbar-spacer"></span>
      <span class="nav-user">{{ user_name }}{% if !user_email.is_empty() %} · {{ user_email }}{% endif %}</span>
      <form action="/auth/logout" method="post" style="display:inline">
        <button type="submit" class="btn-sign-out">退出</button>
      </form>
    </div>

    <main class="main">
      <section class="card">
        <div class="card-title">我的条目</div>
        <div class="card-subtitle">在当前筛选条件下,共 <strong>{{ total_count }}</strong> 条记录;本页显示 <strong>{{ shown_count }}</strong> 条(按更新时间降序,单页最多 {{ limit }} 条)。不含密文字段。时间为浏览器本地时区。提示:非敏感地址类字段(如 address / endpoint / url)建议放在 Metadata(例如 <code>metadata.address</code>),仅密码/令牌等放 Secrets。</div>

        <form class="filter-bar" method="get" action="/entries">
          <div class="filter-field">
            <label for="filter-folder">Folder(精确匹配)</label>
            <input id="filter-folder" name="folder" type="text" value="{{ filter_folder }}" placeholder="例如 refining" autocomplete="off">
          </div>
          <div class="filter-field">
            <label for="filter-type">Type(精确匹配)</label>
            <input id="filter-type" name="type" type="text" value="{{ filter_type }}" placeholder="例如 server" autocomplete="off">
          </div>
          <div class="filter-actions">
            <button type="submit" class="btn-filter">筛选</button>
            <a href="/entries" class="btn-clear">清空</a>
          </div>
        </form>

        {% if entries.is_empty() %}
        <div class="empty">暂无条目。</div>
        {% else %}
        <div class="table-wrap">
          <table>
            <thead>
              <tr>
                <th>更新</th>
                <th>Folder</th>
                <th>Type</th>
                <th>Name</th>
                <th>Notes</th>
                <th>Tags</th>
                <th>Metadata</th>
                <th>Secrets</th>
                <th>操作</th>
              </tr>
            </thead>
            <tbody>
              {% for entry in entries %}
              <tr data-entry-id="{{ entry.id }}">
                <td class="col-updated mono"><time class="entry-local-time" datetime="{{ entry.updated_at_iso }}">{{ entry.updated_at_iso }}</time></td>
                <td class="col-folder mono cell-folder">{{ entry.folder }}</td>
                <td class="col-type mono cell-type">{{ entry.entry_type }}</td>
                <td class="col-name mono cell-name">{{ entry.name }}</td>
                <td class="col-notes cell-notes">{% if !entry.notes.is_empty() %}<div class="notes-scroll cell-notes-val">{{ entry.notes }}</div>{% endif %}</td>
                <td class="col-tags mono cell-tags-val">{{ entry.tags }}</td>
                <td class="col-meta cell-meta"><pre class="detail cell-meta-val">{{ entry.metadata }}</pre></td>
                <td class="col-secrets">
                  <div class="secret-list">
                    {% for s in entry.secrets %}
                    <span class="secret-chip">
                      <span class="secret-name" title="{{ s.name }}">{{ s.name }}</span>
                      <span class="secret-type">{{ s.secret_type }}</span>
                      <button type="button" class="btn-unlink-secret" data-secret-id="{{ s.id }}" data-secret-name="{{ s.name }}" title="解除关联">x</button>
                    </span>
                    {% endfor %}
                  </div>
                </td>
                <td class="col-actions">
                  <div class="row-actions">
                    <button type="button" class="btn-row btn-edit">编辑</button>
                    <button type="button" class="btn-row danger btn-del">删除</button>
                  </div>
                </td>
              </tr>
              {% endfor %}
            </tbody>
          </table>
        </div>
        {% endif %}
      </section>
    </main>
  </div>
</div>

<div id="edit-overlay" class="modal-overlay" hidden>
  <div class="modal" role="dialog" aria-modal="true" aria-labelledby="edit-title">
    <div class="modal-title" id="edit-title">编辑条目</div>
    <div id="edit-error" class="modal-error"></div>
    <div class="modal-field"><label for="edit-folder">Folder</label><input id="edit-folder" type="text" autocomplete="off"></div>
    <div class="modal-field"><label for="edit-type">Type</label><input id="edit-type" type="text" autocomplete="off"></div>
    <div class="modal-field"><label for="edit-name">Name</label><input id="edit-name" type="text" autocomplete="off"></div>
    <div class="modal-field"><label for="edit-notes">Notes</label><textarea id="edit-notes"></textarea></div>
    <div class="modal-field"><label for="edit-tags">Tags(逗号分隔)</label><input id="edit-tags" type="text" autocomplete="off"></div>
    <div class="modal-field"><label for="edit-metadata">Metadata(JSON 对象)</label><textarea id="edit-metadata" class="metadata-edit"></textarea></div>
    <div class="modal-footer">
      <button type="button" class="btn-modal" id="edit-cancel">取消</button>
      <button type="button" class="btn-modal primary" id="edit-save">保存</button>
    </div>
  </div>
</div>
<script>
(function () {
  document.querySelectorAll('time.entry-local-time[datetime]').forEach(function (el) {
    var raw = el.getAttribute('datetime');
    var d = raw ? new Date(raw) : null;
    if (d && !isNaN(d.getTime())) {
      el.textContent = d.toLocaleString(undefined, { dateStyle: 'medium', timeStyle: 'medium' });
      el.title = raw + ' (UTC)';
    }
  });

  var editOverlay = document.getElementById('edit-overlay');
  var editError = document.getElementById('edit-error');
  var editFolder = document.getElementById('edit-folder');
  var editType = document.getElementById('edit-type');
  var editName = document.getElementById('edit-name');
  var editNotes = document.getElementById('edit-notes');
  var editTags = document.getElementById('edit-tags');
  var editMetadata = document.getElementById('edit-metadata');
  var currentEntryId = null;

  function showEditErr(msg) {
    editError.textContent = msg || '';
    editError.classList.toggle('visible', !!msg);
  }

  function openEdit(tr) {
    var id = tr.getAttribute('data-entry-id');
    if (!id) return;
    currentEntryId = id;
    showEditErr('');
    editFolder.value = tr.querySelector('.cell-folder') ? tr.querySelector('.cell-folder').textContent.trim() : '';
    editType.value = tr.querySelector('.cell-type') ? tr.querySelector('.cell-type').textContent.trim() : '';
    editName.value = tr.querySelector('.cell-name') ? tr.querySelector('.cell-name').textContent.trim() : '';
    editNotes.value = tr.querySelector('.cell-notes-val') ? tr.querySelector('.cell-notes-val').textContent : '';
    var tagsText = tr.querySelector('.cell-tags-val') ? tr.querySelector('.cell-tags-val').textContent.trim() : '';
    editTags.value = tagsText;
    var metaPre = tr.querySelector('.cell-meta-val');
    editMetadata.value = metaPre ? metaPre.textContent : '{}';
    editOverlay.hidden = false;
  }

  function closeEdit() {
    editOverlay.hidden = true;
    currentEntryId = null;
    showEditErr('');
  }

  document.getElementById('edit-cancel').addEventListener('click', closeEdit);
  editOverlay.addEventListener('click', function (e) {
    if (e.target === editOverlay) closeEdit();
  });

  document.getElementById('edit-save').addEventListener('click', function () {
    if (!currentEntryId) return;
    var meta;
    try {
      meta = JSON.parse(editMetadata.value);
    } catch (err) {
      showEditErr('Metadata 不是合法 JSON');
      return;
    }
    if (meta === null || typeof meta !== 'object' || Array.isArray(meta)) {
      showEditErr('Metadata 必须是 JSON 对象');
      return;
    }
    var tags = editTags.value.split(',').map(function (s) { return s.trim(); }).filter(Boolean);
    var body = {
      folder: editFolder.value,
      type: editType.value,
      name: editName.value.trim(),
      notes: editNotes.value,
      tags: tags,
      metadata: meta
    };
    showEditErr('');
    fetch('/api/entries/' + encodeURIComponent(currentEntryId), {
      method: 'PATCH',
      headers: { 'Content-Type': 'application/json' },
      credentials: 'same-origin',
      body: JSON.stringify(body)
    }).then(function (r) {
      return r.json().then(function (data) {
        if (!r.ok) throw new Error(data.error || ('HTTP ' + r.status));
        return data;
      });
    }).then(function () {
      closeEdit();
      window.location.reload();
    }).catch(function (e) {
      showEditErr(e.message || String(e));
    });
  });

  document.querySelectorAll('tr[data-entry-id]').forEach(function (tr) {
    tr.querySelector('.btn-edit').addEventListener('click', function () { openEdit(tr); });
    tr.querySelector('.btn-del').addEventListener('click', function () {
      var id = tr.getAttribute('data-entry-id');
      var nameEl = tr.querySelector('.cell-name');
      var name = nameEl ? nameEl.textContent.trim() : '';
      if (!id) return;
      if (!confirm('确定删除条目「' + name + '」?')) return;
      fetch('/api/entries/' + encodeURIComponent(id), { method: 'DELETE', credentials: 'same-origin' })
        .then(function (r) {
          return r.json().then(function (data) {
            if (!r.ok) throw new Error(data.error || ('HTTP ' + r.status));
            return data;
          });
        })
        .then(function (data) {
          if (data && Array.isArray(data.migrated) && data.migrated.length > 0) {
            alert('已自动迁移共享 key 引用:' + data.migrated.length + ' 个条目完成重定向。');
          }
          window.location.reload();
        })
        .catch(function (e) { alert(e.message || String(e)); });
    });

    tr.querySelectorAll('.btn-unlink-secret').forEach(function (btn) {
      btn.addEventListener('click', function () {
        var entryId = tr.getAttribute('data-entry-id');
        var secretId = btn.getAttribute('data-secret-id');
        var secretName = btn.getAttribute('data-secret-name') || '';
        if (!entryId || !secretId) return;
        if (!confirm('确定解除 secret 关联「' + secretName + '」?')) return;
        fetch('/api/entries/' + encodeURIComponent(entryId) + '/secrets/' + encodeURIComponent(secretId), {
          method: 'DELETE',
          credentials: 'same-origin'
        }).then(function (r) {
          return r.json().then(function (data) {
            if (!r.ok) throw new Error(data.error || ('HTTP ' + r.status));
            return data;
          });
        }).then(function () {
          window.location.reload();
        }).catch(function (e) {
          alert(e.message || String(e));
        });
      });
    });
  });
})();
</script>
</body>
</html>

@@ -3,7 +3,13 @@

# ─── 数据库 ───────────────────────────────────────────────────────────
# Web 会话(tower-sessions)与业务数据共用此库;启动时会自动 migrate 会话表,无需额外环境变量。
SECRETS_DATABASE_URL=postgres://postgres:PASSWORD@HOST:PORT/secrets-mcp
SECRETS_DATABASE_URL=postgres://postgres:PASSWORD@db.refining.ltd:5432/secrets-mcp
# 强烈建议生产使用 verify-full(至少 verify-ca)
SECRETS_DATABASE_SSL_MODE=verify-full
# 私有 CA 或自建链路时填写 CA 根证书路径;使用公共受信 CA 可留空
# SECRETS_DATABASE_SSL_ROOT_CERT=/etc/secrets/pg-ca.crt
# 当设为 prod/production 时,服务会拒绝弱 TLS 模式(prefer/disable/allow/require)
SECRETS_ENV=production

# ─── 服务地址 ─────────────────────────────────────────────────────────
# 内网监听地址(Cloudflare / Nginx 反代时填内网端口)

92
deploy/postgres-tls-hardening.md
Normal file
@@ -0,0 +1,92 @@
# PostgreSQL TLS Hardening Runbook

This runbook applies to:

- PostgreSQL server: `47.117.131.22` (`db.refining.ltd`)
- `secrets-mcp` app server: `47.238.146.244` (`secrets.refining.app`)

## 1) Issue certificate for `db.refining.ltd` (Let's Encrypt + Cloudflare DNS-01)

Install `acme.sh` on the PostgreSQL server and use a Cloudflare API token with DNS edit permission for the target zone.

```bash
curl https://get.acme.sh | sh -s email=ops@refining.ltd
export CF_Token="your_cloudflare_dns_token"
export CF_Zone_ID="your_zone_id"
~/.acme.sh/acme.sh --issue --dns dns_cf -d db.refining.ltd --keylength ec-256
```

Install the cert/key into a PostgreSQL-readable path:

```bash
sudo mkdir -p /etc/postgresql/tls
sudo ~/.acme.sh/acme.sh --install-cert -d db.refining.ltd --ecc \
  --fullchain-file /etc/postgresql/tls/fullchain.pem \
  --key-file /etc/postgresql/tls/privkey.pem \
  --reloadcmd "systemctl reload postgresql || systemctl restart postgresql"
sudo chown -R postgres:postgres /etc/postgresql/tls
sudo chmod 600 /etc/postgresql/tls/privkey.pem
sudo chmod 644 /etc/postgresql/tls/fullchain.pem
```

## 2) Configure PostgreSQL TLS and access rules

In `postgresql.conf`:

```conf
ssl = on
ssl_cert_file = '/etc/postgresql/tls/fullchain.pem'
ssl_key_file = '/etc/postgresql/tls/privkey.pem'
```

In `pg_hba.conf`, allow app traffic via TLS only (example):

```conf
hostssl secrets-mcp postgres 47.238.146.244/32 scram-sha-256
```

Keep a safe admin path (a `local` socket or a restricted source CIDR) before removing the old plaintext `host` rules.
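
Before reloading, it can help to confirm that the edited `pg_hba.conf` still parses. A minimal sketch (assuming PostgreSQL 10+ and superuser access over the local socket) using the built-in `pg_hba_file_rules` view, which reads the file on disk:

```bash
# Any row with a non-NULL "error" column is a rule PostgreSQL cannot parse.
sudo -u postgres psql -c \
  "SELECT line_number, type, database, user_name, address, auth_method, error
   FROM pg_hba_file_rules;"
```
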
Reload PostgreSQL:

```bash
sudo systemctl reload postgresql
```

## 3) Verify server-side TLS

```bash
openssl s_client -starttls postgres -connect db.refining.ltd:5432 -servername db.refining.ltd
```

The handshake should succeed and the certificate should match `db.refining.ltd`.
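
To inspect the presented certificate's subject, validity dates, and SAN directly (a sketch; `-starttls postgres` and `-ext` need OpenSSL 1.1.1 or newer):

```bash
openssl s_client -starttls postgres -connect db.refining.ltd:5432 \
  -servername db.refining.ltd </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates -ext subjectAltName
```
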
## 4) Update `secrets-mcp` app server env

Use environment values like:

```bash
SECRETS_DATABASE_URL=postgres://postgres:***@db.refining.ltd:5432/secrets-mcp
SECRETS_DATABASE_SSL_MODE=verify-full
SECRETS_ENV=production
```

If you use a private CA instead of a public CA, also set:

```bash
SECRETS_DATABASE_SSL_ROOT_CERT=/etc/secrets/pg-ca.crt
```

Restart `secrets-mcp` after updating env.
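
For example, if the app runs under systemd (the unit name `secrets-mcp` is an assumption; substitute your own):

```bash
sudo systemctl restart secrets-mcp
journalctl -u secrets-mcp -n 50 --no-pager   # confirm a clean start and a successful DB connection
```
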
## 5) Verify from app server

Run positive and negative checks:

- Positive: the app starts, migrations pass, and the dashboard + MCP API work.
- Negative (each should fail to connect; see the sketch below):
  - wrong hostname -> connection fails
  - wrong CA file -> connection fails
  - TLS disabled on the DB -> connection fails

This ensures no silent downgrade to weak TLS in production.
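
A minimal sketch of the negative/positive pair from the app server (password elided; pass `sslrootcert` explicitly if libpq's default root store is not set up). With `verify-full`, connecting by raw IP should fail because the certificate SAN only lists `db.refining.ltd`:

```bash
# Expected to FAIL: hostname verification rejects the bare IP.
psql "postgres://postgres@47.117.131.22:5432/secrets-mcp?sslmode=verify-full" -c 'SELECT 1;' \
  && echo "UNEXPECTED: verification did not fail" \
  || echo "OK: raw-IP connection rejected"

# Expected to SUCCEED: matching hostname, verified chain.
psql "postgres://postgres@db.refining.ltd:5432/secrets-mcp?sslmode=verify-full" -c 'SELECT 1;'
```
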
126
migrations/001_nn_schema.sql
Normal file
@@ -0,0 +1,126 @@
-- Entry-Secret N:N migration (manual SQL)
-- Safe to re-run: uses IF EXISTS/IF NOT EXISTS guards.
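-- Usage (sketch): apply manually with psql, stopping on the first error
-- (scripts/migrate-db-prod-to-nn-test.sh runs the same command):
--   psql "$TARGET_DATABASE_URL" -v ON_ERROR_STOP=1 -f migrations/001_nn_schema.sql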

BEGIN;

-- 1) secrets: add new columns
ALTER TABLE secrets
  ADD COLUMN IF NOT EXISTS user_id UUID REFERENCES users(id) ON DELETE SET NULL;
ALTER TABLE secrets
  ADD COLUMN IF NOT EXISTS type VARCHAR(64) NOT NULL DEFAULT 'text';

-- 2) rename field_name -> name (idempotent)
DO $$ BEGIN
  IF EXISTS (
    SELECT 1
    FROM information_schema.columns
    WHERE table_name = 'secrets' AND column_name = 'field_name'
  ) THEN
    ALTER TABLE secrets RENAME COLUMN field_name TO name;
  END IF;
END $$;

-- 3) create join table
CREATE TABLE IF NOT EXISTS entry_secrets (
  entry_id UUID NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
  secret_id UUID NOT NULL REFERENCES secrets(id) ON DELETE CASCADE,
  sort_order INT NOT NULL DEFAULT 0,
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  PRIMARY KEY (entry_id, secret_id)
);
CREATE INDEX IF NOT EXISTS idx_entry_secrets_secret_id ON entry_secrets(secret_id);

-- 4) backfill user_id and relationship from old secrets.entry_id
DO $$ BEGIN
  IF EXISTS (
    SELECT 1
    FROM information_schema.columns
    WHERE table_name = 'secrets' AND column_name = 'entry_id'
  ) THEN
    UPDATE secrets s
    SET user_id = e.user_id
    FROM entries e
    WHERE s.entry_id = e.id AND s.user_id IS NULL;

    INSERT INTO entry_secrets(entry_id, secret_id, sort_order)
    SELECT entry_id, id, 0
    FROM secrets
    WHERE entry_id IS NOT NULL
    ON CONFLICT DO NOTHING;
  END IF;
END $$;

-- 5) backfill secret types
UPDATE secrets SET type = 'pem' WHERE name IN ('ssh_key');
UPDATE secrets SET type = 'password' WHERE name IN ('password');
UPDATE secrets SET type = 'phone' WHERE name LIKE 'phone%';
UPDATE secrets SET type = 'url' WHERE name IN ('webhook_url', 'address');
UPDATE secrets
SET type = 'token'
WHERE name IN (
  'access_key_id',
  'access_key_secret',
  'global_api_key',
  'api_key',
  'secret_key',
  'personal_access_token',
  'runner_token',
  'GOOGLE_CLIENT_ID',
  'GOOGLE_CLIENT_SECRET'
);

-- 6) drop old entry_id path
ALTER TABLE secrets DROP CONSTRAINT IF EXISTS secrets_entry_id_fkey;
DROP INDEX IF EXISTS idx_secrets_entry_id;
ALTER TABLE secrets DROP CONSTRAINT IF EXISTS secrets_entry_id_field_name_key;
ALTER TABLE secrets DROP CONSTRAINT IF EXISTS secrets_entry_id_name_key;
ALTER TABLE secrets DROP COLUMN IF EXISTS entry_id;

-- 7) add indexes for new access paths
CREATE INDEX IF NOT EXISTS idx_secrets_user_id
  ON secrets(user_id) WHERE user_id IS NOT NULL;
DO $$
DECLARE
  duplicate_samples TEXT;
BEGIN
  SELECT string_agg(
    format('user_id=%s, name=%s, count=%s', t.user_id, t.name, t.cnt),
    E'\n'
  )
  INTO duplicate_samples
  FROM (
    SELECT user_id::TEXT AS user_id, name, COUNT(*) AS cnt
    FROM secrets
    WHERE user_id IS NOT NULL
    GROUP BY user_id, name
    HAVING COUNT(*) > 1
    ORDER BY cnt DESC, user_id, name
    LIMIT 20
  ) t;

  IF duplicate_samples IS NOT NULL THEN
    RAISE EXCEPTION
      'Cannot enforce unique constraint on secrets(user_id, name). Duplicates found:%',
      E'\n' || duplicate_samples
      USING HINT = 'Please deduplicate conflicting rows, then rerun migration.';
  END IF;
END $$;
CREATE UNIQUE INDEX IF NOT EXISTS idx_secrets_unique_user_name
  ON secrets(user_id, name) WHERE user_id IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_secrets_name ON secrets(name);
CREATE INDEX IF NOT EXISTS idx_secrets_type ON secrets(type);

-- 8) secrets_history: rename and remove entry-scoped columns
DO $$ BEGIN
  IF EXISTS (
    SELECT 1
    FROM information_schema.columns
    WHERE table_name = 'secrets_history' AND column_name = 'field_name'
  ) THEN
    ALTER TABLE secrets_history RENAME COLUMN field_name TO name;
  END IF;
END $$;
ALTER TABLE secrets_history DROP COLUMN IF EXISTS entry_id;
ALTER TABLE secrets_history DROP COLUMN IF EXISTS entry_version;

COMMIT;
67
migrations/002_data_cleanup.sql
Normal file
@@ -0,0 +1,67 @@
-- Metadata cleanup migration (manual SQL)
-- Keep tags/type as dedicated columns; remove duplicated metadata keys.

BEGIN;

-- 1) Promote metadata.type -> entries.type when present.
UPDATE entries
SET type = metadata->>'type'
WHERE metadata->>'type' IS NOT NULL
  AND metadata->>'type' <> '';

-- 2) Remove metadata.type.
UPDATE entries
SET metadata = metadata - 'type'
WHERE metadata ? 'type';

-- 3) Remove metadata.environment (duplicated by tags prod/dev).
UPDATE entries
SET metadata = metadata - 'environment'
WHERE metadata ? 'environment';

-- 4) Remove metadata.account when equal to folder.
UPDATE entries
SET metadata = metadata - 'account'
WHERE metadata->>'account' = folder;

-- 5) Normalize manufacturer -> provider.
UPDATE entries
SET metadata = (metadata - 'manufacturer')
    || jsonb_build_object('provider', metadata->>'manufacturer')
WHERE metadata ? 'manufacturer'
  AND NOT metadata ? 'provider';

UPDATE entries
SET metadata = metadata - 'manufacturer'
WHERE metadata ? 'manufacturer'
  AND metadata ? 'provider';

-- 6) Drop ssh_key_format (moved to secrets.type).
UPDATE entries
SET metadata = metadata - 'ssh_key_format'
WHERE metadata ? 'ssh_key_format';

-- 7) Remove display_name when duplicated by name.
UPDATE entries
SET metadata = metadata - 'display_name'
WHERE metadata->>'display_name' = name;

-- 8) Condense server_* metadata into server_ref.
UPDATE entries
SET metadata = metadata
    - 'server_account'
    - 'server_hostname'
    - 'server_location'
    - 'server_public_ip'
    || CASE
         WHEN metadata ? 'server_entry_name'
         THEN jsonb_build_object('server_ref', metadata->>'server_entry_name')
         ELSE '{}'::jsonb
       END
WHERE metadata ? 'server_entry_name' OR metadata ? 'server_account';

UPDATE entries
SET metadata = metadata - 'server_entry_name'
WHERE metadata ? 'server_entry_name';

COMMIT;
81
scripts/migrate-db-prod-to-nn-test.sh
Executable file
@@ -0,0 +1,81 @@
#!/usr/bin/env bash
# Migrate PostgreSQL data from secrets-mcp-prod to secrets-nn-test.
#
# Prereqs: pg_dump and pg_restore (PostgreSQL client tools) on PATH.
# TLS: Use the same connection parameters as your MCP / app (e.g. sslmode=verify-full
# and PGSSLROOTCERT if needed). If local psql fails with "certificate verify failed",
# run this script from a host that trusts the server CA, or set PGSSLROOTCERT.
#
# Usage:
#   export SOURCE_DATABASE_URL='postgres://USER:PASS@host:5432/secrets-mcp-prod?sslmode=verify-full'
#   export TARGET_DATABASE_URL='postgres://USER:PASS@host:5432/secrets-nn-test?sslmode=verify-full'
#   ./scripts/migrate-db-prod-to-nn-test.sh
#
# Options (env):
#   BACKUP_TARGET_FIRST=1   # default: dump target to tmp/secrets-nn-test-before-<timestamp>.dump before restore
#   RUN_NN_SQL=1            # default: run migrations/001_nn_schema.sql then 002_data_cleanup.sql on target after restore
#   SKIP_TARGET_BACKUP=1    # skip target backup
#
# WARNINGS:
# - pg_restore with --clean --if-exists drops objects that exist in the dump; the target DB is replaced
#   to match the logical content of the source dump (same as a typical full restore).
# - Optionally keep a manual dump of the target before proceeding.
# - 001_nn_schema.sql will fail if secrets has duplicate (user_id, name) rows after backfill; fix the data first.

set -euo pipefail

ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
cd "$ROOT"

SOURCE_URL="${SOURCE_DATABASE_URL:-}"
TARGET_URL="${TARGET_DATABASE_URL:-}"

if [[ -z "$SOURCE_URL" || -z "$TARGET_URL" ]]; then
  echo "Set SOURCE_DATABASE_URL and TARGET_DATABASE_URL (postgres URLs)." >&2
  exit 1
fi

if ! command -v pg_dump >/dev/null || ! command -v pg_restore >/dev/null; then
  echo "pg_dump and pg_restore are required." >&2
  exit 1
fi

TS="$(date +%Y%m%d%H%M%S)"
DUMP_FILE="${DUMP_FILE:-$ROOT/tmp/secrets-mcp-prod-${TS}.dump}"
mkdir -p "$(dirname "$DUMP_FILE")"

if [[ "${EXCLUDE_TOWER_SESSIONS:-}" == "1" ]]; then
  echo "==> Excluding schema tower_sessions from dump"
  pg_dump "$SOURCE_URL" -Fc --no-owner --no-acl --exclude-schema=tower_sessions -f "$DUMP_FILE"
else
  echo "==> Dumping source (custom format) -> $DUMP_FILE"
  pg_dump "$SOURCE_URL" -Fc --no-owner --no-acl -f "$DUMP_FILE"
fi

if [[ "${SKIP_TARGET_BACKUP:-}" != "1" && "${BACKUP_TARGET_FIRST:-1}" == "1" ]]; then
  BACKUP_FILE="$ROOT/tmp/secrets-nn-test-before-${TS}.dump"
  echo "==> Backing up target -> $BACKUP_FILE"
  pg_dump "$TARGET_URL" -Fc --no-owner --no-acl -f "$BACKUP_FILE" || {
    echo "Target backup failed (empty DB is OK). Continuing." >&2
  }
fi

echo "==> Restoring into target (--clean --if-exists)"
pg_restore -d "$TARGET_URL" --no-owner --no-acl --clean --if-exists --verbose "$DUMP_FILE"

if [[ "${RUN_NN_SQL:-1}" == "1" ]]; then
  if [[ ! -f "$ROOT/migrations/001_nn_schema.sql" ]]; then
    echo "migrations/001_nn_schema.sql not found; skipping NN SQL." >&2
  else
    echo "==> Applying migrations/001_nn_schema.sql on target"
    psql "$TARGET_URL" -v ON_ERROR_STOP=1 -f "$ROOT/migrations/001_nn_schema.sql"
  fi
  if [[ -f "$ROOT/migrations/002_data_cleanup.sql" ]]; then
    echo "==> Applying migrations/002_data_cleanup.sql on target"
    psql "$TARGET_URL" -v ON_ERROR_STOP=1 -f "$ROOT/migrations/002_data_cleanup.sql"
  fi
fi

echo "==> Done. Suggested verification:"
echo "  psql \"\$TARGET_DATABASE_URL\" -c \"SELECT COUNT(*) FROM entries; SELECT COUNT(*) FROM secrets; SELECT COUNT(*) FROM entry_secrets;\""
echo "  ./scripts/release-check.sh  # optional app-side sanity"