diff --git a/AGENTS.md b/AGENTS.md
index 33540b8..67bec0a 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -29,7 +29,8 @@ secrets/
 - **Suggested database name**: `secrets-mcp` (a dedicated instance, distinct from historical database names).
 - **Connection**: environment variable **`SECRETS_DATABASE_URL`** (this branch has no local config-file path).
-- **Tables**: `entries` (with `user_id`), `secrets`, `entries_history`, `secrets_history`, `audit_log`, `users`, `oauth_accounts`; **auto-migrate** on first connection.
+- **Tables**: `entries` (with `user_id`), `secrets`, `entries_history`, `secrets_history`, `audit_log`, `users`, `oauth_accounts`; **auto-migrate** on first connection (`migrate` in `secrets-core`).
+- **Web sessions**: the **same database URL** as above; on startup `secrets-mcp` **auto-migrates** the tower-sessions PostgreSQL store (session tables live in the same instance as the business tables; no second connection string is needed).
 
 ### Table schema (excerpt)
 
@@ -37,15 +38,18 @@
 entries (
   id          UUID PRIMARY KEY DEFAULT uuidv7(),
   user_id     UUID,              -- multi-tenant: NULL = legacy row; non-NULL = owning user
-  namespace   VARCHAR(64) NOT NULL,
-  kind        VARCHAR(64) NOT NULL,
+  folder      VARCHAR(128) NOT NULL DEFAULT '',
+  type        VARCHAR(64) NOT NULL DEFAULT '',
   name        VARCHAR(256) NOT NULL,
+  notes       TEXT NOT NULL DEFAULT '',
   tags        TEXT[] NOT NULL DEFAULT '{}',
   metadata    JSONB NOT NULL DEFAULT '{}',
   version     BIGINT NOT NULL DEFAULT 1,
   created_at  TIMESTAMPTZ NOT NULL DEFAULT NOW(),
   updated_at  TIMESTAMPTZ NOT NULL DEFAULT NOW()
 )
+-- Uniqueness: UNIQUE(user_id, folder, name) WHERE user_id IS NOT NULL;
+-- UNIQUE(folder, name) WHERE user_id IS NULL (single-tenant legacy)
 ```
 
 ```sql
@@ -82,22 +86,31 @@
 oauth_accounts (
   user_id      UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
   provider     VARCHAR(32) NOT NULL,
   provider_id  VARCHAR(256) NOT NULL,
-  ...
+  email        VARCHAR(256),
+  name         VARCHAR(256),
+  avatar_url   TEXT,
+  created_at   TIMESTAMPTZ NOT NULL DEFAULT NOW(),
   UNIQUE(provider, provider_id)
 )
+-- There is also a unique index UNIQUE(user_id, provider) (idx_oauth_accounts_user_provider
+-- in the migration): at most one linked account per provider per user.
 ```
 
 ### audit_log / history
 
-Consistent with the migration script: `audit_log`, `entries_history`, and `secrets_history` support auditing and time-travel restore; field definitions live in the `migrate` SQL in `crates/secrets-core/src/db.rs`. For ordinary business events in `audit_log`, `namespace/kind/name` give the entry coordinates; login events always use `namespace='auth'`, in which case `kind/name` identify the authentication target rather than an entry.
+Consistent with the migration script: `audit_log`, `entries_history`, and `secrets_history` support auditing and time-travel restore; field definitions live in the `migrate` SQL in `crates/secrets-core/src/db.rs`. `audit_log` carries an optional **`user_id`** (identifies the actor in multi-tenant mode; nullable for legacy data). Ordinary business events in `audit_log` use **`folder` / `type` / `name`** as the entry coordinates; login events always use **`folder='auth'`**, in which case `type`/`name` identify the authentication target rather than an entry.
+
+### MCP disambiguation (AI calls)
+
+Tools that locate an entry by `name` (`get` / `update` / single-entry `delete` / `history` / `rollback`): if exactly one entry matches for the user, the call executes directly; if several match (same `name`, different `folder`), an error is returned asking the caller to supply `folder`. `secrets_delete` applies the same disambiguation rule for `dry_run=true` as for a real delete.
 
 ### Field responsibilities
 
 | Field | Meaning | Example |
 |-------|---------|---------|
-| `namespace` | isolation space | `refining` |
-| `kind` | record type | `server`, `service`, `key` |
-| `name` | identifier | `gitea`, `i-example0…` |
+| `folder` | isolation space (part of the unique key) | `refining` |
+| `type` | soft category (not part of the unique key) | `server`, `service`, `key`, `person` |
+| `name` | identifier | `gitea`, `aliyun` |
+| `notes` | non-sensitive description | free text |
 | `tags` | tags | `["aliyun","prod"]` |
 | `metadata` | plaintext description | `ip`, `url`, `key_ref` |
 | `secrets.field_name` | encrypted field name (plaintext) | `token`, `ssh_key` |
 
@@ -105,7 +118,7 @@
 
 ### PEM sharing (`key_ref`)
 
-Store a shared PEM as an entry with `kind=key`; other records point at that key's `name` via `metadata.key_ref`. After the key record is updated, referrers pick up the new key through the service layer's resolve-and-merge logic (see `secrets_core::service`).
+Store a shared PEM as an entry with **`type=key`**; other records point at that key's `name` via `metadata.key_ref`. After the key record is updated, referrers pick up the new key through the service layer's resolve-and-merge logic (see `secrets_core::service`).
 
 ## Code conventions
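Reviewer note on the MCP disambiguation contract documented above: a minimal, self-contained sketch of the 0/1/many rule. `Resolution` and `resolve` are illustrative names, not part of the codebase.

```rust
/// Outcome of locating an entry by `name` with an optional `folder` hint,
/// mirroring AGENTS.md: 0 matches → not found, 1 → direct hit,
/// 2+ → ask the caller to supply `folder`.
enum Resolution {
    NotFound,
    Unique { folder: String },
    Ambiguous { folders: Vec<String> },
}

/// `candidates` are the folders that contain an entry with the requested name.
fn resolve(folder_hint: Option<&str>, candidates: &[&str]) -> Resolution {
    let matching: Vec<String> = candidates
        .iter()
        .filter(|f| folder_hint.map_or(true, |hint| hint == **f))
        .map(|f| f.to_string())
        .collect();
    match matching.len() {
        0 => Resolution::NotFound,
        1 => Resolution::Unique { folder: matching.into_iter().next().unwrap() },
        _ => Resolution::Ambiguous { folders: matching },
    }
}

fn main() {
    // Same name in two folders: without a hint the call must be rejected.
    match resolve(None, &["refining", "ricnsmart"]) {
        Resolution::Ambiguous { folders } => println!("ambiguous, folders: {folders:?}"),
        _ => unreachable!(),
    }
    // A folder hint makes the lookup unique.
    assert!(matches!(
        resolve(Some("refining"), &["refining", "ricnsmart"]),
        Resolution::Unique { .. }
    ));
}
```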
diff --git a/Cargo.lock b/Cargo.lock
index 8f8495d..714ff2c 100644
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -1968,7 +1968,7 @@
 [[package]]
 name = "secrets-mcp"
-version = "0.2.2"
+version = "0.3.0"
 dependencies = [
  "anyhow",
  "askama",
diff --git a/README.md b/README.md
index 61144e7..ee23dbf 100644
--- a/README.md
+++ b/README.md
@@ -28,7 +28,15 @@
 cargo run -p secrets-mcp
 ```
 
 - **Web**: `BASE_URL` (login, dashboard, setting the passphrase, creating API keys).
-- **MCP**: Streamable HTTP base `{BASE_URL}/mcp`; requires the `Authorization: Bearer <api_key>` + `X-Encryption-Key: <key>` request headers.
+- **MCP**: Streamable HTTP base `{BASE_URL}/mcp`; requires the `Authorization: Bearer <api_key>` + `X-Encryption-Key: <key>` request headers (tools that read ciphertext must send the key header).
+
+## MCP and the AI workflow (v0.3+)
+
+Entries are logically unique per user by **`(folder, name)`** (database unique index: `user_id + folder + name`). The same name can exist once per folder (e.g. `refining/aliyun` and `ricnsmart/aliyun`).
+
+- **`secrets_search`**: discover entries (filter by query / folder / type / name); the encryption header is not required.
+- **`secrets_get` / `secrets_update` / `secrets_delete` (by name) / `secrets_history` / `secrets_rollback`**: `name` alone hits directly when it is globally unique; if several entries share the name, a disambiguation error is returned and **`folder`** must be added to the arguments.
+- **`secrets_delete`**: `dry_run=true` follows the same disambiguation rule as a real delete: a unique match previews one entry; multiple matches raise an error asking for `folder`.
 
 ## Encryption architecture (hybrid E2EE)
 
@@ -122,13 +130,14 @@ flowchart LR
 
 ## Data model
 
-Main table **`entries`** (`namespace`, `kind`, `name`, `tags`, `metadata`; with `user_id` in multi-tenant mode) + child table **`secrets`** (one encrypted field per row: `field_name`, `encrypted`). There are also `entries_history`, `secrets_history`, `audit_log`, plus **`users`** (with `key_salt`, `key_check`, `key_params`, `api_key`) and **`oauth_accounts`**. Tables are migrated automatically on first connection.
+Main table **`entries`** (`folder`, `type`, `name`, `notes`, `tags`, `metadata`; with `user_id` in multi-tenant mode) + child table **`secrets`** (one encrypted field per row: `field_name`, `encrypted`). **Uniqueness**: `UNIQUE(user_id, folder, name)` (rows with NULL `user_id` are legacy rows, unique on `(folder, name)`). There are also `entries_history`, `secrets_history`, `audit_log`, plus **`users`** (with `key_salt`, `key_check`, `key_params`, `api_key`) and **`oauth_accounts`**. Tables are migrated automatically on first connection (`migrate` in `secrets-core`); existing databases can apply [`scripts/migrate-v0.3.0.sql`](scripts/migrate-v0.3.0.sql) for the column renames and index rebuild. **Web login sessions** (tower-sessions) use the same `SECRETS_DATABASE_URL`; the session store is migrated at process startup (see `PostgresStore::migrate` in `secrets-mcp`), so no extra environment variable is needed.
 
 | Location | Field | Description |
 |----------|-------|-------------|
-| entries | namespace | top-level isolation, e.g. `refining`, `ricnsmart` |
-| entries | kind | `server`, `service`, `key`, etc. (extensible) |
-| entries | name | human-readable identifier |
+| entries | folder | organization/isolation space, e.g. `refining`, `ricnsmart`; part of the unique key |
+| entries | type | soft category, e.g. `server`, `service`, `key`, `person` (extensible; not part of the unique key) |
+| entries | name | human-readable identifier; unique per user together with `folder` |
+| entries | notes | non-sensitive descriptive text |
 | entries | metadata | plaintext JSON (ip, url, `key_ref`, …) |
 | secrets | field_name | plaintext field name, handy for schema display |
 | secrets | encrypted | AES-GCM ciphertext (nonce included) |
@@ -138,15 +147,15 @@ flowchart LR
 
 ### PEM sharing (`key_ref`)
 
-The same PEM can be referenced by multiple `server` records: store the PEM as a `kind=key` entry and write the key's name into the server entries' `metadata.key_ref`; on rotation, only the key record needs updating.
+The same PEM can be referenced by multiple `server` (and other) records: store the PEM as an entry with **`type=key`** and write that key entry's `name` into the other entries' `metadata.key_ref`; on rotation, only the key record needs updating.
 
 ## Audit log
 
-`add`, `update`, `delete` and other write operations are recorded in **`audit_log`** (action type, target, summary; no secret plaintext).
-Business entry events use the `[namespace/kind] name` semantics; login events use `namespace='auth'`, where `kind/name` identify the authentication target (e.g. `oauth/google`), not a secrets entry.
+`add`, `update`, `delete` and other write operations are recorded in **`audit_log`** (action type, target, summary; no secret plaintext). In multi-tenant mode **`user_id`** is recorded (nullable, for legacy-row compatibility).
+Business entry events use **`folder` / `type` / `name`**; login events use **`folder='auth'`**, where `type`/`name` identify the authentication target (e.g. `oauth` / `google`), not a secrets entry.
 
 ```sql
-SELECT action, namespace, kind, name, detail, created_at
+SELECT action, folder, type, name, detail, user_id, created_at
 FROM audit_log
 ORDER BY created_at DESC
 LIMIT 20;
 ```
 
@@ -159,6 +168,7 @@
 Cargo.toml
 crates/secrets-core/   # db / crypto / models / audit / service
 crates/secrets-mcp/    # MCP HTTP, Web, OAuth, API keys
 scripts/
+  migrate-v0.3.0.sql   # optional: manual SQL migration (namespace/kind → folder/type; unique key includes folder)
 deploy/                # systemd, .env examples
 ```
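Reviewer aside on the workflow bullets above: a sketch of what the tool arguments look like end to end. The payloads are hypothetical examples matching the documented behavior, not captured traffic (assumes `serde_json`).

```rust
use serde_json::json;

fn main() {
    // secrets_search: discovery only; no X-Encryption-Key header needed.
    let search = json!({ "query": "aliyun", "type": "server" });

    // secrets_get: a globally unique name resolves directly…
    let get_unique = json!({ "name": "gitea", "field": "token" });

    // …a duplicated name (refining/aliyun vs ricnsmart/aliyun) needs `folder`.
    let get_disambiguated = json!({ "name": "aliyun", "folder": "refining" });

    // secrets_delete: dry_run previews under the same disambiguation rule.
    let delete_preview = json!({ "name": "aliyun", "folder": "refining", "dry_run": true });

    for payload in [search, get_unique, get_disambiguated, delete_preview] {
        println!("{payload}");
    }
}
```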
diff --git a/crates/secrets-core/src/audit.rs b/crates/secrets-core/src/audit.rs
index f12eec8..bccc671 100644
--- a/crates/secrets-core/src/audit.rs
+++ b/crates/secrets-core/src/audit.rs
@@ -3,7 +3,7 @@ use sqlx::{PgPool, Postgres, Transaction};
 use uuid::Uuid;
 
 pub const ACTION_LOGIN: &str = "login";
-pub const NAMESPACE_AUTH: &str = "auth";
+pub const FOLDER_AUTH: &str = "auth";
 
 fn login_detail(provider: &str, client_ip: Option<&str>, user_agent: Option<&str>) -> Value {
     json!({
@@ -16,7 +16,7 @@ fn login_detail(provider: &str, client_ip: Option<&str>, user_agent: Option<&str
 /// Write a login audit entry without requiring an explicit transaction.
 pub async fn log_login(
     pool: &PgPool,
-    kind: &str,
+    entry_type: &str,
     provider: &str,
     user_id: Uuid,
     client_ip: Option<&str>,
@@ -24,22 +24,22 @@ pub async fn log_login(
 ) {
     let detail = login_detail(provider, client_ip, user_agent);
     let result: Result<_, sqlx::Error> = sqlx::query(
-        "INSERT INTO audit_log (user_id, action, namespace, kind, name, detail) \
+        "INSERT INTO audit_log (user_id, action, folder, type, name, detail) \
          VALUES ($1, $2, $3, $4, $5, $6)",
     )
    .bind(user_id)
    .bind(ACTION_LOGIN)
-    .bind(NAMESPACE_AUTH)
-    .bind(kind)
+    .bind(FOLDER_AUTH)
+    .bind(entry_type)
    .bind(provider)
    .bind(&detail)
    .execute(pool)
    .await;
 
     if let Err(e) = result {
-        tracing::warn!(error = %e, kind, provider, "failed to write login audit log");
+        tracing::warn!(error = %e, entry_type, provider, "failed to write login audit log");
     } else {
-        tracing::debug!(kind, provider, ?user_id, "login audit logged");
+        tracing::debug!(entry_type, provider, ?user_id, "login audit logged");
     }
 }
 
@@ -48,19 +48,19 @@ pub async fn log_tx(
     tx: &mut Transaction<'_, Postgres>,
     user_id: Option<Uuid>,
     action: &str,
-    namespace: &str,
-    kind: &str,
+    folder: &str,
+    entry_type: &str,
     name: &str,
     detail: Value,
 ) {
     let result: Result<_, sqlx::Error> = sqlx::query(
-        "INSERT INTO audit_log (user_id, action, namespace, kind, name, detail) \
+        "INSERT INTO audit_log (user_id, action, folder, type, name, detail) \
          VALUES ($1, $2, $3, $4, $5, $6)",
     )
    .bind(user_id)
    .bind(action)
-    .bind(namespace)
-    .bind(kind)
+    .bind(folder)
+    .bind(entry_type)
    .bind(name)
    .bind(&detail)
    .execute(&mut **tx)
@@ -69,7 +69,7 @@ pub async fn log_tx(
     if let Err(e) = result {
         tracing::warn!(error = %e, "failed to write audit log");
     } else {
-        tracing::debug!(action, namespace, kind, name, "audit logged");
+        tracing::debug!(action, folder, entry_type, name, "audit logged");
     }
 }
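Side note on `FOLDER_AUTH` above: login rows reuse the entry columns, so `folder/type/name` become `'auth'` plus the auth target. A minimal sketch of the resulting row shape (assumes `serde_json`; the struct and helper are illustrative only):

```rust
use serde_json::{json, Value};

/// Illustrative shape of an audit_log row produced by log_login:
/// folder is pinned to "auth", type/name carry the auth target.
struct AuditRow {
    action: &'static str,
    folder: &'static str,
    entry_type: String, // e.g. "oauth"
    name: String,       // e.g. "google"
    detail: Value,
}

fn login_row(entry_type: &str, provider: &str, client_ip: Option<&str>) -> AuditRow {
    AuditRow {
        action: "login",
        folder: "auth", // FOLDER_AUTH
        entry_type: entry_type.to_string(),
        name: provider.to_string(),
        detail: json!({ "provider": provider, "client_ip": client_ip }),
    }
}

fn main() {
    let row = login_row("oauth", "google", Some("203.0.113.7"));
    assert_eq!(row.folder, "auth");
    println!("{} {}/{} {}", row.action, row.entry_type, row.name, row.detail);
}
```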
diff --git a/crates/secrets-core/src/db.rs b/crates/secrets-core/src/db.rs
index 4ec576b..0e76be8 100644
--- a/crates/secrets-core/src/db.rs
+++ b/crates/secrets-core/src/db.rs
@@ -22,9 +22,10 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
         CREATE TABLE IF NOT EXISTS entries (
             id UUID PRIMARY KEY DEFAULT uuidv7(),
             user_id UUID,
-            namespace VARCHAR(64) NOT NULL,
-            kind VARCHAR(64) NOT NULL,
+            folder VARCHAR(128) NOT NULL DEFAULT '',
+            type VARCHAR(64) NOT NULL DEFAULT '',
             name VARCHAR(256) NOT NULL,
+            notes TEXT NOT NULL DEFAULT '',
             tags TEXT[] NOT NULL DEFAULT '{}',
             metadata JSONB NOT NULL DEFAULT '{}',
             version BIGINT NOT NULL DEFAULT 1,
@@ -34,19 +35,19 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
 
         -- Legacy unique constraint without user_id (single-user mode)
         CREATE UNIQUE INDEX IF NOT EXISTS idx_entries_unique_legacy
-            ON entries(namespace, kind, name)
+            ON entries(folder, name)
             WHERE user_id IS NULL;
 
         -- Multi-user unique constraint
         CREATE UNIQUE INDEX IF NOT EXISTS idx_entries_unique_user
-            ON entries(user_id, namespace, kind, name)
+            ON entries(user_id, folder, name)
             WHERE user_id IS NOT NULL;
 
-        CREATE INDEX IF NOT EXISTS idx_entries_namespace ON entries(namespace);
-        CREATE INDEX IF NOT EXISTS idx_entries_kind ON entries(kind);
-        CREATE INDEX IF NOT EXISTS idx_entries_user_id ON entries(user_id) WHERE user_id IS NOT NULL;
-        CREATE INDEX IF NOT EXISTS idx_entries_tags ON entries USING GIN(tags);
-        CREATE INDEX IF NOT EXISTS idx_entries_metadata ON entries USING GIN(metadata jsonb_path_ops);
+        CREATE INDEX IF NOT EXISTS idx_entries_folder ON entries(folder) WHERE folder <> '';
+        CREATE INDEX IF NOT EXISTS idx_entries_type ON entries(type) WHERE type <> '';
+        CREATE INDEX IF NOT EXISTS idx_entries_user_id ON entries(user_id) WHERE user_id IS NOT NULL;
+        CREATE INDEX IF NOT EXISTS idx_entries_tags ON entries USING GIN(tags);
+        CREATE INDEX IF NOT EXISTS idx_entries_metadata ON entries USING GIN(metadata jsonb_path_ops);
 
         -- ── secrets: one row per encrypted field ─────────────────────────────────
         CREATE TABLE IF NOT EXISTS secrets (
@@ -67,23 +68,23 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
             id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
             user_id UUID,
             action VARCHAR(32) NOT NULL,
-            namespace VARCHAR(64) NOT NULL,
-            kind VARCHAR(64) NOT NULL,
+            folder VARCHAR(128) NOT NULL DEFAULT '',
+            type VARCHAR(64) NOT NULL DEFAULT '',
             name VARCHAR(256) NOT NULL,
             detail JSONB NOT NULL DEFAULT '{}',
             created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
         );
-        CREATE INDEX IF NOT EXISTS idx_audit_log_created ON audit_log(created_at DESC);
-        CREATE INDEX IF NOT EXISTS idx_audit_log_ns_kind ON audit_log(namespace, kind);
-        CREATE INDEX IF NOT EXISTS idx_audit_log_user_id ON audit_log(user_id) WHERE user_id IS NOT NULL;
+        CREATE INDEX IF NOT EXISTS idx_audit_log_created ON audit_log(created_at DESC);
+        CREATE INDEX IF NOT EXISTS idx_audit_log_folder_type ON audit_log(folder, type);
+        CREATE INDEX IF NOT EXISTS idx_audit_log_user_id ON audit_log(user_id) WHERE user_id IS NOT NULL;
 
         -- ── entries_history ──────────────────────────────────────────────────────
         CREATE TABLE IF NOT EXISTS entries_history (
             id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
             entry_id UUID NOT NULL,
-            namespace VARCHAR(64) NOT NULL,
-            kind VARCHAR(64) NOT NULL,
+            folder VARCHAR(128) NOT NULL DEFAULT '',
+            type VARCHAR(64) NOT NULL DEFAULT '',
             name VARCHAR(256) NOT NULL,
             version BIGINT NOT NULL,
             action VARCHAR(16) NOT NULL,
@@ -94,8 +95,8 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
         CREATE INDEX IF NOT EXISTS idx_entries_history_entry_id
             ON entries_history(entry_id, version DESC);
-        CREATE INDEX IF NOT EXISTS idx_entries_history_ns_kind_name
-            ON entries_history(namespace, kind, name, version DESC);
+        CREATE INDEX IF NOT EXISTS idx_entries_history_folder_type_name
+            ON entries_history(folder, type, name, version DESC);
 
         -- Backfill: add user_id to entries_history for multi-tenant isolation
         ALTER TABLE entries_history ADD COLUMN IF NOT EXISTS user_id UUID;
@@ -103,6 +104,9 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
             ON entries_history(user_id) WHERE user_id IS NOT NULL;
         ALTER TABLE entries_history DROP COLUMN IF EXISTS actor;
 
+        -- Backfill: add notes to entries if not present (fresh installs already have it)
+        ALTER TABLE entries ADD COLUMN IF NOT EXISTS notes TEXT NOT NULL DEFAULT '';
+
         -- ── secrets_history: field-level snapshot ────────────────────────────────
         CREATE TABLE IF NOT EXISTS secrets_history (
             id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
@@ -123,9 +127,6 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
         -- Drop redundant actor column (derivable via entries_history JOIN)
         ALTER TABLE secrets_history DROP COLUMN IF EXISTS actor;
 
-        -- Drop redundant actor column; user_id already identifies the business user
-        ALTER TABLE audit_log DROP COLUMN IF EXISTS actor;
-
         -- ── users ────────────────────────────────────────────────────────────────
         CREATE TABLE IF NOT EXISTS users (
             id UUID PRIMARY KEY DEFAULT uuidv7(),
@@ -191,12 +192,179 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
     )
     .execute(pool)
     .await?;
+    migrate_schema(pool).await?;
     restore_plaintext_api_keys(pool).await?;
     tracing::debug!("migrations complete");
     Ok(())
 }
 
+/// Idempotent schema migration: rename namespace→folder, kind→type in existing databases.
+async fn migrate_schema(pool: &PgPool) -> Result<()> {
+    sqlx::raw_sql(
+        r#"
+        -- ── entries: rename namespace→folder, kind→type ──────────────────────────
+        DO $$ BEGIN
+            IF EXISTS (
+                SELECT 1 FROM information_schema.columns
+                WHERE table_name = 'entries' AND column_name = 'namespace'
+            ) THEN
+                ALTER TABLE entries RENAME COLUMN namespace TO folder;
+            END IF;
+        END $$;
+
+        DO $$ BEGIN
+            IF EXISTS (
+                SELECT 1 FROM information_schema.columns
+                WHERE table_name = 'entries' AND column_name = 'kind'
+            ) THEN
+                ALTER TABLE entries RENAME COLUMN kind TO type;
+            END IF;
+        END $$;
+
+        -- ── audit_log: rename namespace→folder, kind→type ────────────────────────
+        DO $$ BEGIN
+            IF EXISTS (
+                SELECT 1 FROM information_schema.columns
+                WHERE table_name = 'audit_log' AND column_name = 'namespace'
+            ) THEN
+                ALTER TABLE audit_log RENAME COLUMN namespace TO folder;
+            END IF;
+        END $$;
+
+        DO $$ BEGIN
+            IF EXISTS (
+                SELECT 1 FROM information_schema.columns
+                WHERE table_name = 'audit_log' AND column_name = 'kind'
+            ) THEN
+                ALTER TABLE audit_log RENAME COLUMN kind TO type;
+            END IF;
+        END $$;
+
+        -- ── entries_history: rename namespace→folder, kind→type ──────────────────
+        DO $$ BEGIN
+            IF EXISTS (
+                SELECT 1 FROM information_schema.columns
+                WHERE table_name = 'entries_history' AND column_name = 'namespace'
+            ) THEN
+                ALTER TABLE entries_history RENAME COLUMN namespace TO folder;
+            END IF;
+        END $$;
+
+        DO $$ BEGIN
+            IF EXISTS (
+                SELECT 1 FROM information_schema.columns
+                WHERE table_name = 'entries_history' AND column_name = 'kind'
+            ) THEN
+                ALTER TABLE entries_history RENAME COLUMN kind TO type;
+            END IF;
+        END $$;
+
+        -- ── Set empty defaults for new folder/type columns ────────────────────────
+        DO $$ BEGIN
+            IF EXISTS (
+                SELECT 1 FROM information_schema.columns
+                WHERE table_name = 'entries' AND column_name = 'folder'
+            ) THEN
+                UPDATE entries SET folder = '' WHERE folder IS NULL;
+                ALTER TABLE entries ALTER COLUMN folder SET NOT NULL;
+                ALTER TABLE entries ALTER COLUMN folder SET DEFAULT '';
+            END IF;
+        END $$;
+
+        DO $$ BEGIN
+            IF EXISTS (
+                SELECT 1 FROM information_schema.columns
+                WHERE table_name = 'entries' AND column_name = 'type'
+            ) THEN
+                UPDATE entries SET type = '' WHERE type IS NULL;
+                ALTER TABLE entries ALTER COLUMN type SET NOT NULL;
+                ALTER TABLE entries ALTER COLUMN type SET DEFAULT '';
+            END IF;
+        END $$;
+
+        DO $$ BEGIN
+            IF EXISTS (
+                SELECT 1 FROM information_schema.columns
+                WHERE table_name = 'audit_log' AND column_name = 'folder'
+            ) THEN
+                UPDATE audit_log SET folder = '' WHERE folder IS NULL;
+                ALTER TABLE audit_log ALTER COLUMN folder SET NOT NULL;
+                ALTER TABLE audit_log ALTER COLUMN folder SET DEFAULT '';
+            END IF;
+        END $$;
+        DO $$ BEGIN
+            IF EXISTS (
+                SELECT 1 FROM information_schema.columns
+                WHERE table_name = 'audit_log' AND column_name = 'type'
+            ) THEN
+                UPDATE audit_log SET type = '' WHERE type IS NULL;
+                ALTER TABLE audit_log ALTER COLUMN type SET NOT NULL;
+                ALTER TABLE audit_log ALTER COLUMN type SET DEFAULT '';
+            END IF;
+        END $$;
+
+        DO $$ BEGIN
+            IF EXISTS (
+                SELECT 1 FROM information_schema.columns
+                WHERE table_name = 'entries_history' AND column_name = 'folder'
+            ) THEN
+                UPDATE entries_history SET folder = '' WHERE folder IS NULL;
+                ALTER TABLE entries_history ALTER COLUMN folder SET NOT NULL;
+                ALTER TABLE entries_history ALTER COLUMN folder SET DEFAULT '';
+            END IF;
+        END $$;
+
+        DO $$ BEGIN
+            IF EXISTS (
+                SELECT 1 FROM information_schema.columns
+                WHERE table_name = 'entries_history' AND column_name = 'type'
+            ) THEN
+                UPDATE entries_history SET type = '' WHERE type IS NULL;
+                ALTER TABLE entries_history ALTER COLUMN type SET NOT NULL;
+                ALTER TABLE entries_history ALTER COLUMN type SET DEFAULT '';
+            END IF;
+        END $$;
+
+        -- ── Rebuild unique indexes on entries: folder is now part of the key ──────
+        -- (user_id, folder, name) allows same name in different folders.
+        DROP INDEX IF EXISTS idx_entries_unique_legacy;
+        DROP INDEX IF EXISTS idx_entries_unique_user;
+
+        CREATE UNIQUE INDEX IF NOT EXISTS idx_entries_unique_legacy
+            ON entries(folder, name)
+            WHERE user_id IS NULL;
+
+        CREATE UNIQUE INDEX IF NOT EXISTS idx_entries_unique_user
+            ON entries(user_id, folder, name)
+            WHERE user_id IS NOT NULL;
+
+        -- ── Replace old namespace/kind indexes ────────────────────────────────────
+        DROP INDEX IF EXISTS idx_entries_namespace;
+        DROP INDEX IF EXISTS idx_entries_kind;
+        DROP INDEX IF EXISTS idx_audit_log_ns_kind;
+        DROP INDEX IF EXISTS idx_entries_history_ns_kind_name;
+
+        CREATE INDEX IF NOT EXISTS idx_entries_folder
+            ON entries(folder) WHERE folder <> '';
+        CREATE INDEX IF NOT EXISTS idx_entries_type
+            ON entries(type) WHERE type <> '';
+        CREATE INDEX IF NOT EXISTS idx_audit_log_folder_type
+            ON audit_log(folder, type);
+        CREATE INDEX IF NOT EXISTS idx_entries_history_folder_type_name
+            ON entries_history(folder, type, name, version DESC);
+
+        -- ── Drop legacy actor columns ──────────────────────────────────────────────
+        ALTER TABLE secrets_history DROP COLUMN IF EXISTS actor;
+        ALTER TABLE audit_log DROP COLUMN IF EXISTS actor;
+        "#,
+    )
+    .execute(pool)
+    .await?;
+    Ok(())
+}
+
 async fn restore_plaintext_api_keys(pool: &PgPool) -> Result<()> {
     let has_users_api_key: bool = sqlx::query_scalar(
         "SELECT EXISTS (
@@ -265,8 +433,8 @@ async fn restore_plaintext_api_keys(pool: &PgPool) -> Result<()> {
 pub struct EntrySnapshotParams<'a> {
     pub entry_id: uuid::Uuid,
     pub user_id: Option<uuid::Uuid>,
-    pub namespace: &'a str,
-    pub kind: &'a str,
+    pub folder: &'a str,
+    pub entry_type: &'a str,
     pub name: &'a str,
     pub version: i64,
     pub action: &'a str,
@@ -280,12 +448,12 @@ pub async fn snapshot_entry_history(
 ) -> Result<()> {
     sqlx::query(
         "INSERT INTO entries_history \
-         (entry_id, namespace, kind, name, version, action, tags, metadata, user_id) \
+         (entry_id, folder, type, name, version, action, tags, metadata, user_id) \
          VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9)",
     )
    .bind(p.entry_id)
-    .bind(p.namespace)
-    .bind(p.kind)
+    .bind(p.folder)
+    .bind(p.entry_type)
    .bind(p.name)
    .bind(p.version)
    .bind(p.action)
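Note on `migrate_schema` above: each step is guarded so a re-run finds nothing to do. A toy model of that idempotence, with plain Rust standing in for the `information_schema` check (names illustrative):

```rust
use std::collections::HashSet;

/// Rename `from` to `to` only if `from` still exists, mirroring the
/// IF EXISTS (SELECT 1 FROM information_schema.columns ...) guard.
fn rename_column(columns: &mut HashSet<String>, from: &str, to: &str) {
    if columns.remove(from) {
        columns.insert(to.to_string());
    }
}

fn main() {
    let mut columns: HashSet<String> =
        ["namespace", "kind", "name"].iter().map(|s| s.to_string()).collect();

    // First run performs the rename…
    rename_column(&mut columns, "namespace", "folder");
    rename_column(&mut columns, "kind", "type");
    // …a second run is a no-op, so the migration is safely re-entrant.
    rename_column(&mut columns, "namespace", "folder");

    assert!(columns.contains("folder") && columns.contains("type"));
    assert!(!columns.contains("namespace"));
}
```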
diff --git a/crates/secrets-core/src/models.rs b/crates/secrets-core/src/models.rs
index 9f44e4f..055bad9 100644
--- a/crates/secrets-core/src/models.rs
+++ b/crates/secrets-core/src/models.rs
@@ -4,15 +4,18 @@ use serde_json::Value;
 use std::collections::BTreeMap;
 use uuid::Uuid;
 
-/// A top-level entry (server, service, key, …).
+/// A top-level entry (server, service, key, person, …).
 /// Sensitive fields are stored separately in `secrets`.
 #[derive(Debug, Serialize, Deserialize, sqlx::FromRow)]
 pub struct Entry {
     pub id: Uuid,
     pub user_id: Option<Uuid>,
-    pub namespace: String,
-    pub kind: String,
+    pub folder: String,
+    #[serde(rename = "type")]
+    #[sqlx(rename = "type")]
+    pub entry_type: String,
     pub name: String,
+    pub notes: String,
     pub tags: Vec<String>,
     pub metadata: Value,
     pub version: i64,
@@ -40,8 +43,12 @@ pub struct SecretField {
 pub struct EntryRow {
     pub id: Uuid,
     pub version: i64,
+    pub folder: String,
+    #[sqlx(rename = "type")]
+    pub entry_type: String,
     pub tags: Vec<String>,
     pub metadata: Value,
+    pub notes: String,
 }
 
 /// Minimal secret field row fetched before snapshots or cascade deletes.
@@ -128,10 +135,14 @@ pub struct ExportData {
 /// A single entry with decrypted secrets for export/import.
 #[derive(Debug, Serialize, Deserialize)]
 pub struct ExportEntry {
-    pub namespace: String,
-    pub kind: String,
     pub name: String,
     #[serde(default)]
+    pub folder: String,
+    #[serde(default, rename = "type")]
+    pub entry_type: String,
+    #[serde(default)]
+    pub notes: String,
+    #[serde(default)]
     pub tags: Vec<String>,
     #[serde(default)]
     pub metadata: Value,
@@ -181,8 +192,10 @@ pub struct AuditLogEntry {
     pub id: i64,
     pub user_id: Option<Uuid>,
     pub action: String,
-    pub namespace: String,
-    pub kind: String,
+    pub folder: String,
+    #[serde(rename = "type")]
+    #[sqlx(rename = "type")]
+    pub entry_type: String,
     pub name: String,
     pub detail: Value,
     pub created_at: DateTime<Utc>,
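Note on the rename attributes in `models.rs`: `type` is a Rust keyword, so the field is named `entry_type` in code and mapped back to `type` on the wire and in SQL. A small sketch of the JSON side (hypothetical `Demo` struct; assumes `serde` with the derive feature and `serde_json`):

```rust
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
struct Demo {
    folder: String,
    // Serialized as "type": the Rust keyword forces a different field name.
    #[serde(rename = "type")]
    entry_type: String,
    name: String,
}

fn main() -> Result<(), serde_json::Error> {
    let e: Demo =
        serde_json::from_str(r#"{"folder":"refining","type":"server","name":"gitea"}"#)?;
    assert_eq!(e.entry_type, "server");
    // Round-trips back to a "type" key, matching the MCP tool schemas.
    println!("{}", serde_json::to_string(&e)?);
    Ok(())
}
```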
diff --git a/crates/secrets-core/src/service/add.rs b/crates/secrets-core/src/service/add.rs
index d3206c0..cfc754e 100644
--- a/crates/secrets-core/src/service/add.rs
+++ b/crates/secrets-core/src/service/add.rs
@@ -159,18 +159,20 @@ pub fn flatten_json_fields(prefix: &str, value: &Value) -> Vec<(String, Value)>
 
 #[derive(Debug, serde::Serialize)]
 pub struct AddResult {
-    pub namespace: String,
-    pub kind: String,
     pub name: String,
+    pub folder: String,
+    #[serde(rename = "type")]
+    pub entry_type: String,
     pub tags: Vec<String>,
     pub meta_keys: Vec<String>,
     pub secret_keys: Vec<String>,
 }
 
 pub struct AddParams<'a> {
-    pub namespace: &'a str,
-    pub kind: &'a str,
     pub name: &'a str,
+    pub folder: &'a str,
+    pub entry_type: &'a str,
+    pub notes: &'a str,
     pub tags: &'a [String],
     pub meta_entries: &'a [String],
     pub secret_entries: &'a [String],
@@ -186,25 +188,23 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) -> Result<AddResult> {
     let mut tx = pool.begin().await?;
 
-    // Fetch existing entry (user-scoped or global depending on user_id)
+    // Fetch existing entry by (user_id, folder, name) — the natural unique key
     let existing: Option<EntryRow> = if let Some(uid) = params.user_id {
         sqlx::query_as(
-            "SELECT id, version, tags, metadata FROM entries \
-             WHERE user_id = $1 AND namespace = $2 AND kind = $3 AND name = $4",
+            "SELECT id, version, folder, type, tags, metadata, notes FROM entries \
+             WHERE user_id = $1 AND folder = $2 AND name = $3",
         )
        .bind(uid)
-        .bind(params.namespace)
-        .bind(params.kind)
+        .bind(params.folder)
        .bind(params.name)
        .fetch_optional(&mut *tx)
        .await?
     } else {
         sqlx::query_as(
-            "SELECT id, version, tags, metadata FROM entries \
-             WHERE user_id IS NULL AND namespace = $1 AND kind = $2 AND name = $3",
+            "SELECT id, version, folder, type, tags, metadata, notes FROM entries \
+             WHERE user_id IS NULL AND folder = $1 AND name = $2",
         )
-        .bind(params.namespace)
-        .bind(params.kind)
+        .bind(params.folder)
        .bind(params.name)
        .fetch_optional(&mut *tx)
        .await?
@@ -216,8 +216,8 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) -> Result<AddResult> {
             db::EntrySnapshotParams {
                 entry_id: ex.id,
                 user_id: params.user_id,
-                namespace: params.namespace,
-                kind: params.kind,
+                folder: params.folder,
+                entry_type: params.entry_type,
                 name: params.name,
                 version: ex.version,
                 action: "add",
@@ -232,10 +232,13 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) -> Result<AddResult> {
     let entry_id: Uuid = if let Some(uid) = params.user_id {
         sqlx::query_scalar(
-            r#"INSERT INTO entries (user_id, namespace, kind, name, tags, metadata, version, updated_at)
-               VALUES ($1, $2, $3, $4, $5, $6, 1, NOW())
-               ON CONFLICT (user_id, namespace, kind, name) WHERE user_id IS NOT NULL
+            r#"INSERT INTO entries (user_id, folder, type, name, notes, tags, metadata, version, updated_at)
+               VALUES ($1, $2, $3, $4, $5, $6, $7, 1, NOW())
+               ON CONFLICT (user_id, folder, name) WHERE user_id IS NOT NULL
                DO UPDATE SET
+                   folder = EXCLUDED.folder,
+                   type = EXCLUDED.type,
+                   notes = EXCLUDED.notes,
                    tags = EXCLUDED.tags,
                    metadata = EXCLUDED.metadata,
                    version = entries.version + 1,
@@ -243,28 +246,33 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) -> Result<AddResult> {
               RETURNING id"#,
         )
        .bind(uid)
-        .bind(params.namespace)
-        .bind(params.kind)
+        .bind(params.folder)
+        .bind(params.entry_type)
        .bind(params.name)
+        .bind(params.notes)
        .bind(params.tags)
        .bind(&metadata)
        .fetch_one(&mut *tx)
        .await?
     } else {
         sqlx::query_scalar(
-            r#"INSERT INTO entries (namespace, kind, name, tags, metadata, version, updated_at)
-               VALUES ($1, $2, $3, $4, $5, 1, NOW())
-               ON CONFLICT (namespace, kind, name) WHERE user_id IS NULL
+            r#"INSERT INTO entries (folder, type, name, notes, tags, metadata, version, updated_at)
+               VALUES ($1, $2, $3, $4, $5, $6, 1, NOW())
+               ON CONFLICT (folder, name) WHERE user_id IS NULL
                DO UPDATE SET
+                   folder = EXCLUDED.folder,
+                   type = EXCLUDED.type,
+                   notes = EXCLUDED.notes,
                    tags = EXCLUDED.tags,
                    metadata = EXCLUDED.metadata,
                    version = entries.version + 1,
                    updated_at = NOW()
               RETURNING id"#,
         )
-        .bind(params.namespace)
-        .bind(params.kind)
+        .bind(params.folder)
+        .bind(params.entry_type)
        .bind(params.name)
+        .bind(params.notes)
        .bind(params.tags)
        .bind(&metadata)
        .fetch_one(&mut *tx)
@@ -282,8 +290,8 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) -> Result<AddResult> {
             db::EntrySnapshotParams {
                 entry_id,
                 user_id: params.user_id,
-                namespace: params.namespace,
-                kind: params.kind,
+                folder: params.folder,
+                entry_type: params.entry_type,
                 name: params.name,
                 version: new_entry_version,
                 action: "create",
@@ -348,8 +356,8 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) -> Result<AddResult> {
         &mut tx,
         params.user_id,
         "add",
-        params.namespace,
-        params.kind,
+        params.folder,
+        params.entry_type,
         params.name,
         serde_json::json!({
             "tags": params.tags,
@@ -362,9 +370,9 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) -> Result<AddResult> {
     tx.commit().await?;
 
     Ok(AddResult {
-        namespace: params.namespace.to_string(),
-        kind: params.kind.to_string(),
         name: params.name.to_string(),
+        folder: params.folder.to_string(),
+        entry_type: params.entry_type.to_string(),
         tags: params.tags.to_vec(),
         meta_keys,
         secret_keys,
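The upsert above leans on the partial unique indexes: `ON CONFLICT (user_id, folder, name) WHERE user_id IS NOT NULL` targets `idx_entries_unique_user`, and the conflict arm bumps `version` instead of failing. A toy in-memory model of that insert-or-bump behavior (names are illustrative):

```rust
use std::collections::HashMap;

/// Key mirrors idx_entries_unique_user: (user_id, folder, name).
type Key = (u32, String, String);

struct Row {
    metadata: String,
    version: i64,
}

/// Insert-or-update with a version bump, like the ON CONFLICT ... DO UPDATE arm.
fn upsert(table: &mut HashMap<Key, Row>, key: Key, metadata: &str) -> i64 {
    let row = table
        .entry(key)
        .and_modify(|r| {
            r.metadata = metadata.to_string();
            r.version += 1; // version = entries.version + 1
        })
        .or_insert(Row { metadata: metadata.to_string(), version: 1 });
    row.version
}

fn main() {
    let mut t = HashMap::new();
    let k = (1, "refining".to_string(), "aliyun".to_string());
    assert_eq!(upsert(&mut t, k.clone(), r#"{"ip":"10.0.0.1"}"#), 1);
    // Re-adding the same (user, folder, name) updates in place and bumps version.
    assert_eq!(upsert(&mut t, k, r#"{"ip":"10.0.0.2"}"#), 2);
}
```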
diff --git a/crates/secrets-core/src/service/audit_log.rs b/crates/secrets-core/src/service/audit_log.rs
index 4acc00c..52a59dd 100644
--- a/crates/secrets-core/src/service/audit_log.rs
+++ b/crates/secrets-core/src/service/audit_log.rs
@@ -8,7 +8,7 @@ pub async fn list_for_user(pool: &PgPool, user_id: Uuid, limit: i64) -> Result<Vec<AuditLogEntry>> {
     sqlx::query_as(
-        "SELECT id, user_id, action, namespace, kind, name, detail, created_at \
+        "SELECT id, user_id, action, folder, type, name, detail, created_at \
          FROM audit_log WHERE user_id = $1 ORDER BY created_at DESC LIMIT $2",
     )
diff --git a/crates/secrets-core/src/service/delete.rs b/crates/secrets-core/src/service/delete.rs
--- a/crates/secrets-core/src/service/delete.rs
+++ b/crates/secrets-core/src/service/delete.rs
@@ @@
 pub struct DeleteParams<'a> {
-    pub namespace: &'a str,
-    pub kind: Option<&'a str>,
+    /// If set, delete a single entry by name.
     pub name: Option<&'a str>,
+    /// Folder filter for bulk delete.
+    pub folder: Option<&'a str>,
+    /// Type filter for bulk delete.
+    pub entry_type: Option<&'a str>,
     pub dry_run: bool,
     pub user_id: Option<Uuid>,
 }
 
 pub async fn run(pool: &PgPool, params: DeleteParams<'_>) -> Result<DeleteResult> {
     match params.name {
-        Some(name) => {
-            let kind = params
-                .kind
-                .ok_or_else(|| anyhow::anyhow!("--kind is required when --name is specified"))?;
-            delete_one(
-                pool,
-                params.namespace,
-                kind,
-                name,
-                params.dry_run,
-                params.user_id,
-            )
-            .await
-        }
+        Some(name) => delete_one(pool, name, params.folder, params.dry_run, params.user_id).await,
         None => {
+            if params.folder.is_none() && params.entry_type.is_none() {
+                anyhow::bail!(
+                    "Bulk delete requires at least one of: name, folder, or type filter."
+                );
+            }
             delete_bulk(
                 pool,
-                params.namespace,
-                params.kind,
+                params.folder,
+                params.entry_type,
                 params.dry_run,
                 params.user_id,
             )
             .await
         }
     }
 }
@@ -58,93 +54,169 @@ pub async fn run(pool: &PgPool, params: DeleteParams<'_>) -> Result<DeleteResult> {
 async fn delete_one(
     pool: &PgPool,
-    namespace: &str,
-    kind: &str,
     name: &str,
+    folder: Option<&str>,
     dry_run: bool,
     user_id: Option<Uuid>,
 ) -> Result<DeleteResult> {
     if dry_run {
-        let exists: bool = if let Some(uid) = user_id {
-            sqlx::query_scalar(
-                "SELECT EXISTS(SELECT 1 FROM entries \
-                 WHERE user_id = $1 AND namespace = $2 AND kind = $3 AND name = $4)",
+        // Dry-run uses the same disambiguation logic as actual delete:
+        //   - 0 matches  → nothing to delete
+        //   - 1 match    → show what would be deleted (with correct folder/type)
+        //   - 2+ matches → disambiguation error (same as non-dry-run)
+        #[derive(sqlx::FromRow)]
+        struct DryRunRow {
+            folder: String,
+            #[sqlx(rename = "type")]
+            entry_type: String,
+        }
+
+        let rows: Vec<DryRunRow> = if let Some(uid) = user_id {
+            if let Some(f) = folder {
+                sqlx::query_as(
+                    "SELECT folder, type FROM entries WHERE user_id = $1 AND folder = $2 AND name = $3",
+                )
+                .bind(uid)
+                .bind(f)
+                .bind(name)
+                .fetch_all(pool)
+                .await?
+            } else {
+                sqlx::query_as("SELECT folder, type FROM entries WHERE user_id = $1 AND name = $2")
+                    .bind(uid)
+                    .bind(name)
+                    .fetch_all(pool)
+                    .await?
+            }
+        } else if let Some(f) = folder {
+            sqlx::query_as(
+                "SELECT folder, type FROM entries WHERE user_id IS NULL AND folder = $1 AND name = $2",
             )
-            .bind(uid)
-            .bind(namespace)
-            .bind(kind)
+            .bind(f)
            .bind(name)
-            .fetch_one(pool)
+            .fetch_all(pool)
            .await?
         } else {
-            sqlx::query_scalar(
-                "SELECT EXISTS(SELECT 1 FROM entries \
-                 WHERE user_id IS NULL AND namespace = $1 AND kind = $2 AND name = $3)",
-            )
-            .bind(namespace)
-            .bind(kind)
-            .bind(name)
-            .fetch_one(pool)
-            .await?
+            sqlx::query_as("SELECT folder, type FROM entries WHERE user_id IS NULL AND name = $1")
+                .bind(name)
+                .fetch_all(pool)
+                .await?
         };
 
-        let deleted = if exists {
-            vec![DeletedEntry {
-                namespace: namespace.to_string(),
-                kind: kind.to_string(),
-                name: name.to_string(),
-            }]
-        } else {
-            vec![]
+        return match rows.len() {
+            0 => Ok(DeleteResult {
+                deleted: vec![],
+                dry_run: true,
+            }),
+            1 => {
+                let row = rows.into_iter().next().unwrap();
+                Ok(DeleteResult {
+                    deleted: vec![DeletedEntry {
+                        name: name.to_string(),
+                        folder: row.folder,
+                        entry_type: row.entry_type,
+                    }],
+                    dry_run: true,
+                })
+            }
+            _ => {
+                let folders: Vec<&str> = rows.iter().map(|r| r.folder.as_str()).collect();
+                anyhow::bail!(
+                    "Ambiguous: {} entries named '{}' found in folders: [{}]. \
+                     Specify 'folder' to disambiguate.",
+                    rows.len(),
+                    name,
+                    folders.join(", ")
+                )
+            }
         };
-        return Ok(DeleteResult {
-            deleted,
-            dry_run: true,
-        });
     }
 
     let mut tx = pool.begin().await?;
 
-    let row: Option<EntryRow> = if let Some(uid) = user_id {
+    // Fetch matching rows with FOR UPDATE; use folder when provided to resolve ambiguity.
+    let rows: Vec<EntryRow> = if let Some(uid) = user_id {
+        if let Some(f) = folder {
+            sqlx::query_as(
+                "SELECT id, version, folder, type, tags, metadata, notes FROM entries \
+                 WHERE user_id = $1 AND folder = $2 AND name = $3 FOR UPDATE",
+            )
+            .bind(uid)
+            .bind(f)
+            .bind(name)
+            .fetch_all(&mut *tx)
+            .await?
+        } else {
+            sqlx::query_as(
+                "SELECT id, version, folder, type, tags, metadata, notes FROM entries \
+                 WHERE user_id = $1 AND name = $2 FOR UPDATE",
+            )
+            .bind(uid)
+            .bind(name)
+            .fetch_all(&mut *tx)
+            .await?
+        }
+    } else if let Some(f) = folder {
         sqlx::query_as(
-            "SELECT id, version, tags, metadata FROM entries \
-             WHERE user_id = $1 AND namespace = $2 AND kind = $3 AND name = $4 FOR UPDATE",
+            "SELECT id, version, folder, type, tags, metadata, notes FROM entries \
+             WHERE user_id IS NULL AND folder = $1 AND name = $2 FOR UPDATE",
         )
-        .bind(uid)
-        .bind(namespace)
-        .bind(kind)
+        .bind(f)
        .bind(name)
-        .fetch_optional(&mut *tx)
+        .fetch_all(&mut *tx)
        .await?
     } else {
         sqlx::query_as(
-            "SELECT id, version, tags, metadata FROM entries \
-             WHERE user_id IS NULL AND namespace = $1 AND kind = $2 AND name = $3 FOR UPDATE",
+            "SELECT id, version, folder, type, tags, metadata, notes FROM entries \
+             WHERE user_id IS NULL AND name = $1 FOR UPDATE",
         )
-        .bind(namespace)
-        .bind(kind)
        .bind(name)
-        .fetch_optional(&mut *tx)
+        .fetch_all(&mut *tx)
        .await?
     };
 
-    let Some(row) = row else {
-        tx.rollback().await?;
-        return Ok(DeleteResult {
-            deleted: vec![],
-            dry_run: false,
-        });
+    let row = match rows.len() {
+        0 => {
+            tx.rollback().await?;
+            return Ok(DeleteResult {
+                deleted: vec![],
+                dry_run: false,
+            });
+        }
+        1 => rows.into_iter().next().unwrap(),
+        _ => {
+            tx.rollback().await?;
+            let folders: Vec<&str> = rows.iter().map(|r| r.folder.as_str()).collect();
+            anyhow::bail!(
+                "Ambiguous: {} entries named '{}' found in folders: [{}]. \
+                 Specify 'folder' to disambiguate.",
+                rows.len(),
+                name,
+                folders.join(", ")
+            )
+        }
     };
 
-    snapshot_and_delete(&mut tx, namespace, kind, name, &row, user_id).await?;
-    crate::audit::log_tx(&mut tx, user_id, "delete", namespace, kind, name, json!({})).await;
+    let folder = row.folder.clone();
+    let entry_type = row.entry_type.clone();
+    snapshot_and_delete(&mut tx, &folder, &entry_type, name, &row, user_id).await?;
+    crate::audit::log_tx(
+        &mut tx,
+        user_id,
+        "delete",
+        &folder,
+        &entry_type,
+        name,
+        json!({}),
+    )
+    .await;
     tx.commit().await?;
 
     Ok(DeleteResult {
         deleted: vec![DeletedEntry {
-            namespace: namespace.to_string(),
-            kind: kind.to_string(),
             name: name.to_string(),
+            folder,
+            entry_type,
         }],
         dry_run: false,
     })
 }
@@ -152,8 +224,8 @@ async fn delete_one(
 
 async fn delete_bulk(
     pool: &PgPool,
-    namespace: &str,
-    kind: Option<&str>,
+    folder: Option<&str>,
+    entry_type: Option<&str>,
     dry_run: bool,
     user_id: Option<Uuid>,
 ) -> Result<DeleteResult> {
@@ -161,62 +233,57 @@ async fn delete_bulk(
     #[derive(sqlx::FromRow)]
     struct FullEntryRow {
         id: Uuid,
         version: i64,
-        kind: String,
+        folder: String,
+        #[sqlx(rename = "type")]
+        entry_type: String,
         name: String,
         metadata: serde_json::Value,
         tags: Vec<String>,
+        notes: String,
     }
 
-    let rows: Vec<FullEntryRow> = match (user_id, kind) {
-        (Some(uid), Some(k)) => {
-            sqlx::query_as(
-                "SELECT id, version, kind, name, metadata, tags FROM entries \
-                 WHERE user_id = $1 AND namespace = $2 AND kind = $3 ORDER BY name",
-            )
-            .bind(uid)
-            .bind(namespace)
-            .bind(k)
-            .fetch_all(pool)
-            .await?
-        }
-        (Some(uid), None) => {
-            sqlx::query_as(
-                "SELECT id, version, kind, name, metadata, tags FROM entries \
-                 WHERE user_id = $1 AND namespace = $2 ORDER BY kind, name",
-            )
-            .bind(uid)
-            .bind(namespace)
-            .fetch_all(pool)
-            .await?
-        }
-        (None, Some(k)) => {
-            sqlx::query_as(
-                "SELECT id, version, kind, name, metadata, tags FROM entries \
-                 WHERE user_id IS NULL AND namespace = $1 AND kind = $2 ORDER BY name",
-            )
-            .bind(namespace)
-            .bind(k)
-            .fetch_all(pool)
-            .await?
-        }
-        (None, None) => {
-            sqlx::query_as(
-                "SELECT id, version, kind, name, metadata, tags FROM entries \
-                 WHERE user_id IS NULL AND namespace = $1 ORDER BY kind, name",
-            )
-            .bind(namespace)
-            .fetch_all(pool)
-            .await?
-        }
-    };
+    let mut conditions: Vec<String> = Vec::new();
+    let mut idx: i32 = 1;
+
+    if user_id.is_some() {
+        conditions.push(format!("user_id = ${}", idx));
+        idx += 1;
+    } else {
+        conditions.push("user_id IS NULL".to_string());
+    }
+    if folder.is_some() {
+        conditions.push(format!("folder = ${}", idx));
+        idx += 1;
+    }
+    if entry_type.is_some() {
+        conditions.push(format!("type = ${}", idx));
+    }
+
+    let where_clause = format!("WHERE {}", conditions.join(" AND "));
+    let sql = format!(
+        "SELECT id, version, folder, type, name, metadata, tags, notes \
+         FROM entries {where_clause} ORDER BY type, name"
+    );
+
+    let mut q = sqlx::query_as::<_, FullEntryRow>(&sql);
+    if let Some(uid) = user_id {
+        q = q.bind(uid);
+    }
+    if let Some(f) = folder {
+        q = q.bind(f);
+    }
+    if let Some(t) = entry_type {
+        q = q.bind(t);
+    }
+    let rows = q.fetch_all(pool).await?;
 
     if dry_run {
         let deleted = rows
             .iter()
             .map(|r| DeletedEntry {
-                namespace: namespace.to_string(),
-                kind: r.kind.clone(),
                 name: r.name.clone(),
+                folder: r.folder.clone(),
+                entry_type: r.entry_type.clone(),
             })
             .collect();
         return Ok(DeleteResult {
@@ -230,29 +297,37 @@ async fn delete_bulk(
         let entry_row = EntryRow {
             id: row.id,
             version: row.version,
+            folder: row.folder.clone(),
+            entry_type: row.entry_type.clone(),
             tags: row.tags.clone(),
             metadata: row.metadata.clone(),
+            notes: row.notes.clone(),
         };
         let mut tx = pool.begin().await?;
         snapshot_and_delete(
-            &mut tx, namespace, &row.kind, &row.name, &entry_row, user_id,
+            &mut tx,
+            &row.folder,
+            &row.entry_type,
+            &row.name,
+            &entry_row,
+            user_id,
         )
         .await?;
         crate::audit::log_tx(
             &mut tx,
             user_id,
             "delete",
-            namespace,
-            &row.kind,
+            &row.folder,
+            &row.entry_type,
             &row.name,
             json!({"bulk": true}),
         )
        .await;
         tx.commit().await?;
         deleted.push(DeletedEntry {
-            namespace: namespace.to_string(),
-            kind: row.kind.clone(),
             name: row.name.clone(),
+            folder: row.folder.clone(),
+            entry_type: row.entry_type.clone(),
         });
     }
 
@@ -264,8 +339,8 @@ async fn delete_bulk(
 async fn snapshot_and_delete(
     tx: &mut sqlx::Transaction<'_, sqlx::Postgres>,
-    namespace: &str,
-    kind: &str,
+    folder: &str,
+    entry_type: &str,
     name: &str,
     row: &EntryRow,
     user_id: Option<Uuid>,
@@ -275,8 +350,8 @@ async fn snapshot_and_delete(
         db::EntrySnapshotParams {
             entry_id: row.id,
             user_id,
-            namespace,
-            kind,
+            folder,
+            entry_type,
             name,
             version: row.version,
             action: "delete",
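On the dynamic WHERE builder that replaces the four-arm match above: every optional filter claims the next `$n` positional parameter, and the binds must later be applied in exactly the same order. A standalone sketch of that bookkeeping:

```rust
/// Sketch of the $n placeholder bookkeeping used by delete_bulk: conditions
/// and binds must stay in lockstep, or parameters silently shift.
fn build_where(has_user: bool, folder: Option<&str>, entry_type: Option<&str>) -> String {
    let mut conditions: Vec<String> = Vec::new();
    let mut idx = 1;
    if has_user {
        conditions.push(format!("user_id = ${idx}"));
        idx += 1;
    } else {
        conditions.push("user_id IS NULL".to_string());
    }
    if folder.is_some() {
        conditions.push(format!("folder = ${idx}"));
        idx += 1;
    }
    if entry_type.is_some() {
        conditions.push(format!("type = ${idx}"));
    }
    format!("WHERE {}", conditions.join(" AND "))
}

fn main() {
    assert_eq!(
        build_where(true, Some("refining"), None),
        "WHERE user_id = $1 AND folder = $2"
    );
    // With no user scope, the first placeholder belongs to the next filter.
    assert_eq!(
        build_where(false, None, Some("server")),
        "WHERE user_id IS NULL AND type = $1"
    );
}
```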
diff --git a/crates/secrets-core/src/service/env_map.rs b/crates/secrets-core/src/service/env_map.rs
index 86adccf..ff90712 100644
--- a/crates/secrets-core/src/service/env_map.rs
+++ b/crates/secrets-core/src/service/env_map.rs
@@ -12,8 +12,8 @@ use crate::service::search::{fetch_entries, fetch_secrets_for_entries};
 #[allow(clippy::too_many_arguments)]
 pub async fn build_env_map(
     pool: &PgPool,
-    namespace: Option<&str>,
-    kind: Option<&str>,
+    folder: Option<&str>,
+    entry_type: Option<&str>,
     name: Option<&str>,
     tags: &[String],
     only_fields: &[String],
@@ -21,7 +21,7 @@ pub async fn build_env_map(
     master_key: &[u8; 32],
     user_id: Option<Uuid>,
 ) -> Result<HashMap<String, String>> {
-    let entries = fetch_entries(pool, namespace, kind, name, tags, None, user_id).await?;
+    let entries = fetch_entries(pool, folder, entry_type, name, tags, None, user_id).await?;
 
     let mut combined: HashMap<String, String> = HashMap::new();
 
@@ -68,16 +68,8 @@ async fn build_entry_env_map(
     // Resolve key_ref
     if let Some(key_ref) = entry.metadata.get("key_ref").and_then(|v| v.as_str()) {
-        let key_entries = fetch_entries(
-            pool,
-            Some(&entry.namespace),
-            Some("key"),
-            Some(key_ref),
-            &[],
-            None,
-            None,
-        )
-        .await?;
+        let key_entries =
+            fetch_entries(pool, None, Some("key"), Some(key_ref), &[], None, None).await?;
 
         if let Some(key_entry) = key_entries.first() {
             let key_ids = vec![key_entry.id];
diff --git a/crates/secrets-core/src/service/export.rs b/crates/secrets-core/src/service/export.rs
index b7bc9eb..5d5fd3b 100644
--- a/crates/secrets-core/src/service/export.rs
+++ b/crates/secrets-core/src/service/export.rs
@@ -9,8 +9,8 @@ use crate::models::{ExportData, ExportEntry, ExportFormat};
 use crate::service::search::{fetch_entries, fetch_secrets_for_entries};
 
 pub struct ExportParams<'a> {
-    pub namespace: Option<&'a str>,
-    pub kind: Option<&'a str>,
+    pub folder: Option<&'a str>,
+    pub entry_type: Option<&'a str>,
     pub name: Option<&'a str>,
     pub tags: &'a [String],
     pub query: Option<&'a str>,
@@ -25,8 +25,8 @@ pub async fn export(
     let entries = fetch_entries(
         pool,
-        params.namespace,
-        params.kind,
+        params.folder,
+        params.entry_type,
         params.name,
         params.tags,
         params.query,
@@ -62,9 +62,10 @@ pub async fn export(
         };
 
         export_entries.push(ExportEntry {
-            namespace: entry.namespace.clone(),
-            kind: entry.kind.clone(),
             name: entry.name.clone(),
+            folder: entry.folder.clone(),
+            entry_type: entry.entry_type.clone(),
+            notes: entry.notes.clone(),
             tags: entry.tags.clone(),
             metadata: entry.metadata.clone(),
             secrets,
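On the `key_ref` change in `env_map.rs` above: the lookup now searches `type=key` entries by name across folders instead of pinning the referencing entry's namespace. A minimal sketch of the metadata-side resolution (assumes `serde_json`; `merge_key_fields` and its precedence are illustrative assumptions, not the service-layer implementation):

```rust
use serde_json::{json, Value};

/// Pull the shared-key reference out of an entry's plaintext metadata,
/// as env_map.rs does before fetching the type=key entry by that name.
fn key_ref_of(metadata: &Value) -> Option<&str> {
    metadata.get("key_ref").and_then(|v| v.as_str())
}

/// Illustrative merge: fields of the referencing entry win over fields
/// inherited from the shared key entry.
fn merge_key_fields(
    own: Vec<(String, String)>,
    key: Vec<(String, String)>,
) -> Vec<(String, String)> {
    let mut merged = key;
    merged.retain(|(k, _)| !own.iter().any(|(ok, _)| ok == k));
    merged.extend(own);
    merged
}

fn main() {
    let meta = json!({ "ip": "10.0.0.1", "key_ref": "deploy-key" });
    assert_eq!(key_ref_of(&meta), Some("deploy-key"));

    let merged = merge_key_fields(
        vec![("token".into(), "abc".into())],
        vec![("ssh_key".into(), "PEM…".into())],
    );
    assert_eq!(merged.len(), 2);
}
```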
diff --git a/crates/secrets-core/src/service/get_secret.rs b/crates/secrets-core/src/service/get_secret.rs
index 7ddec64..48dfcd1 100644
--- a/crates/secrets-core/src/service/get_secret.rs
+++ b/crates/secrets-core/src/service/get_secret.rs
@@ -5,31 +5,19 @@ use std::collections::HashMap;
 use uuid::Uuid;
 
 use crate::crypto;
-use crate::service::search::{fetch_entries, fetch_secrets_for_entries};
+use crate::service::search::{fetch_secrets_for_entries, resolve_entry};
 
 /// Decrypt a single named field from an entry.
+/// `folder` is optional; if omitted and multiple entries share the name, an error is returned.
 pub async fn get_secret_field(
     pool: &PgPool,
-    namespace: &str,
-    kind: &str,
     name: &str,
+    folder: Option<&str>,
     field_name: &str,
     master_key: &[u8; 32],
     user_id: Option<Uuid>,
 ) -> Result<Value> {
-    let entries = fetch_entries(
-        pool,
-        Some(namespace),
-        Some(kind),
-        Some(name),
-        &[],
-        None,
-        user_id,
-    )
-    .await?;
-    let entry = entries
-        .first()
-        .ok_or_else(|| anyhow::anyhow!("Not found: [{}/{}] {}", namespace, kind, name))?;
+    let entry = resolve_entry(pool, name, folder, user_id).await?;
 
     let entry_ids = vec![entry.id];
     let secrets_map = fetch_secrets_for_entries(pool, &entry_ids).await?;
@@ -44,27 +32,15 @@ pub async fn get_secret_field(
 }
 
 /// Decrypt all secret fields from an entry. Returns a map field_name → decrypted Value.
+/// `folder` is optional; if omitted and multiple entries share the name, an error is returned.
 pub async fn get_all_secrets(
     pool: &PgPool,
-    namespace: &str,
-    kind: &str,
     name: &str,
+    folder: Option<&str>,
     master_key: &[u8; 32],
     user_id: Option<Uuid>,
 ) -> Result<HashMap<String, Value>> {
-    let entries = fetch_entries(
-        pool,
-        Some(namespace),
-        Some(kind),
-        Some(name),
-        &[],
-        None,
-        user_id,
-    )
-    .await?;
-    let entry = entries
-        .first()
-        .ok_or_else(|| anyhow::anyhow!("Not found: [{}/{}] {}", namespace, kind, name))?;
+    let entry = resolve_entry(pool, name, folder, user_id).await?;
 
     let entry_ids = vec![entry.id];
     let secrets_map = fetch_secrets_for_entries(pool, &entry_ids).await?;
diff --git a/crates/secrets-core/src/service/history.rs b/crates/secrets-core/src/service/history.rs
index ac2b31e..97514b8 100644
--- a/crates/secrets-core/src/service/history.rs
+++ b/crates/secrets-core/src/service/history.rs
@@ -3,6 +3,8 @@ use serde_json::Value;
 use sqlx::PgPool;
 use uuid::Uuid;
 
+use crate::service::search::resolve_entry;
+
 #[derive(Debug, serde::Serialize)]
 pub struct HistoryEntry {
     pub version: i64,
@@ -10,11 +12,12 @@ pub struct HistoryEntry {
     pub created_at: String,
 }
 
+/// Return version history for the entry identified by `name`.
+/// `folder` is optional; if omitted and multiple entries share the name, an error is returned.
 pub async fn run(
     pool: &PgPool,
-    namespace: &str,
-    kind: &str,
     name: &str,
+    folder: Option<&str>,
     limit: u32,
     user_id: Option<Uuid>,
 ) -> Result<Vec<HistoryEntry>> {
@@ -25,32 +28,16 @@ pub async fn run(
     #[derive(sqlx::FromRow)]
     struct Row {
         version: i64,
         action: String,
         created_at: chrono::DateTime<chrono::Utc>,
     }
 
-    let rows: Vec<Row> = if let Some(uid) = user_id {
-        sqlx::query_as(
-            "SELECT version, action, created_at FROM entries_history \
-             WHERE namespace = $1 AND kind = $2 AND name = $3 AND user_id = $4 \
-             ORDER BY id DESC LIMIT $5",
-        )
-        .bind(namespace)
-        .bind(kind)
-        .bind(name)
-        .bind(uid)
-        .bind(limit as i64)
-        .fetch_all(pool)
-        .await?
-    } else {
-        sqlx::query_as(
-            "SELECT version, action, created_at FROM entries_history \
-             WHERE namespace = $1 AND kind = $2 AND name = $3 AND user_id IS NULL \
-             ORDER BY id DESC LIMIT $4",
-        )
-        .bind(namespace)
-        .bind(kind)
-        .bind(name)
-        .bind(limit as i64)
-        .fetch_all(pool)
-        .await?
-    };
+    let entry = resolve_entry(pool, name, folder, user_id).await?;
+
+    let rows: Vec<Row> = sqlx::query_as(
+        "SELECT version, action, created_at FROM entries_history \
+         WHERE entry_id = $1 ORDER BY id DESC LIMIT $2",
+    )
+    .bind(entry.id)
+    .bind(limit as i64)
+    .fetch_all(pool)
+    .await?;
 
     Ok(rows
         .into_iter()
@@ -64,12 +51,11 @@ pub async fn run(
 
 pub async fn run_json(
     pool: &PgPool,
-    namespace: &str,
-    kind: &str,
     name: &str,
+    folder: Option<&str>,
     limit: u32,
     user_id: Option<Uuid>,
 ) -> Result<Value> {
-    let entries = run(pool, namespace, kind, name, limit, user_id).await?;
+    let entries = run(pool, name, folder, limit, user_id).await?;
     Ok(serde_json::to_value(entries)?)
 }
diff --git a/crates/secrets-core/src/service/import.rs b/crates/secrets-core/src/service/import.rs
index dbb890d..6b723bf 100644
--- a/crates/secrets-core/src/service/import.rs
+++ b/crates/secrets-core/src/service/import.rs
@@ -47,10 +47,9 @@ pub async fn run(
     for entry in &data.entries {
         let exists: bool = sqlx::query_scalar(
             "SELECT EXISTS(SELECT 1 FROM entries \
-             WHERE namespace = $1 AND kind = $2 AND name = $3 AND user_id IS NOT DISTINCT FROM $4)",
+             WHERE folder = $1 AND name = $2 AND user_id IS NOT DISTINCT FROM $3)",
         )
-        .bind(&entry.namespace)
-        .bind(&entry.kind)
+        .bind(&entry.folder)
        .bind(&entry.name)
        .bind(params.user_id)
        .fetch_one(pool)
@@ -59,9 +58,7 @@ pub async fn run(
 
         if exists && !params.force {
             return Err(anyhow::anyhow!(
-                "Import aborted: conflict on [{}/{}/{}]",
-                entry.namespace,
-                entry.kind,
+                "Import aborted: conflict on '{}'",
                 entry.name
             ));
         }
@@ -81,9 +78,10 @@ pub async fn run(
         match add_run(
             pool,
             AddParams {
-                namespace: &entry.namespace,
-                kind: &entry.kind,
                 name: &entry.name,
+                folder: &entry.folder,
+                entry_type: &entry.entry_type,
+                notes: &entry.notes,
                 tags: &entry.tags,
                 meta_entries: &meta_entries,
                 secret_entries: &secret_entries,
@@ -98,8 +96,6 @@ pub async fn run(
             }
             Err(e) => {
                 tracing::error!(
-                    namespace = entry.namespace,
-                    kind = entry.kind,
                     name = entry.name,
                     error = %e,
                     "failed to import entry"
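The import conflict check above compares `user_id` with `IS NOT DISTINCT FROM`, which treats two NULLs as equal, exactly the semantics of `Option::eq` in Rust. A tiny illustration of why one query can scope both legacy (NULL) and tenant rows:

```rust
/// SQL `a IS NOT DISTINCT FROM b` is NULL-safe equality: NULL matches NULL,
/// whereas plain `=` would yield NULL (treated as no match).
fn is_not_distinct_from<T: PartialEq>(a: Option<T>, b: Option<T>) -> bool {
    a == b
}

fn main() {
    assert!(is_not_distinct_from::<u32>(None, None)); // legacy rows match the legacy scope
    assert!(is_not_distinct_from(Some(7), Some(7))); // tenant rows match their user
    assert!(!is_not_distinct_from(Some(7), None)); // plain `=` would never be true here either
}
```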
diff --git a/crates/secrets-core/src/service/rollback.rs b/crates/secrets-core/src/service/rollback.rs
index 56d6605..d562e7d 100644
--- a/crates/secrets-core/src/service/rollback.rs
+++ b/crates/secrets-core/src/service/rollback.rs
@@ -8,17 +8,19 @@ use crate::db;
 
 #[derive(Debug, serde::Serialize)]
 pub struct RollbackResult {
-    pub namespace: String,
-    pub kind: String,
     pub name: String,
+    pub folder: String,
+    #[serde(rename = "type")]
+    pub entry_type: String,
     pub restored_version: i64,
 }
 
+/// Roll back entry `name` to `to_version` (or the most recent snapshot if None).
+/// `folder` is optional; if omitted and multiple entries share the name, an error is returned.
 pub async fn run(
     pool: &PgPool,
-    namespace: &str,
-    kind: &str,
     name: &str,
+    folder: Option<&str>,
     to_version: Option<i64>,
     master_key: &[u8; 32],
     user_id: Option<Uuid>,
@@ -26,69 +28,122 @@ pub async fn run(
     #[derive(sqlx::FromRow)]
     struct EntryHistoryRow {
         entry_id: Uuid,
+        folder: String,
+        #[sqlx(rename = "type")]
+        entry_type: String,
         version: i64,
         action: String,
         tags: Vec<String>,
         metadata: Value,
     }
 
-    let snap: Option<EntryHistoryRow> = if let Some(ver) = to_version {
-        if let Some(uid) = user_id {
-            sqlx::query_as(
-                "SELECT entry_id, version, action, tags, metadata FROM entries_history \
-                 WHERE namespace = $1 AND kind = $2 AND name = $3 AND version = $4 \
-                 AND user_id = $5 ORDER BY id DESC LIMIT 1",
+    // Disambiguate: find the unique entry_id for (name, folder).
+    // Query entries_history by entry_id once we know it; first resolve via name + optional folder.
+    let entry_id: Option<Uuid> = if let Some(uid) = user_id {
+        if let Some(f) = folder {
+            sqlx::query_scalar(
+                "SELECT DISTINCT entry_id FROM entries_history \
+                 WHERE name = $1 AND folder = $2 AND user_id = $3 LIMIT 1",
             )
-            .bind(namespace)
-            .bind(kind)
            .bind(name)
-            .bind(ver)
+            .bind(f)
            .bind(uid)
            .fetch_optional(pool)
            .await?
         } else {
-            sqlx::query_as(
-                "SELECT entry_id, version, action, tags, metadata FROM entries_history \
-                 WHERE namespace = $1 AND kind = $2 AND name = $3 AND version = $4 \
-                 AND user_id IS NULL ORDER BY id DESC LIMIT 1",
+            let ids: Vec<Uuid> = sqlx::query_scalar(
+                "SELECT DISTINCT entry_id FROM entries_history \
+                 WHERE name = $1 AND user_id = $2",
             )
-            .bind(namespace)
-            .bind(kind)
            .bind(name)
-            .bind(ver)
-            .fetch_optional(pool)
-            .await?
+            .bind(uid)
+            .fetch_all(pool)
+            .await?;
+            match ids.len() {
+                0 => None,
+                1 => Some(ids[0]),
+                _ => {
+                    let folders: Vec<String> = sqlx::query_scalar(
+                        "SELECT DISTINCT folder FROM entries_history \
+                         WHERE name = $1 AND user_id = $2",
+                    )
+                    .bind(name)
+                    .bind(uid)
+                    .fetch_all(pool)
+                    .await?;
+                    anyhow::bail!(
+                        "Ambiguous: entries named '{}' exist in folders: [{}]. \
+                         Specify 'folder' to disambiguate.",
+                        name,
+                        folders.join(", ")
+                    )
+                }
+            }
         }
-    } else if let Some(uid) = user_id {
-        sqlx::query_as(
-            "SELECT entry_id, version, action, tags, metadata FROM entries_history \
-             WHERE namespace = $1 AND kind = $2 AND name = $3 \
-             AND user_id = $4 ORDER BY id DESC LIMIT 1",
+    } else if let Some(f) = folder {
+        sqlx::query_scalar(
+            "SELECT DISTINCT entry_id FROM entries_history \
+             WHERE name = $1 AND folder = $2 AND user_id IS NULL LIMIT 1",
         )
-        .bind(namespace)
-        .bind(kind)
        .bind(name)
-        .bind(uid)
+        .bind(f)
+        .fetch_optional(pool)
+        .await?
+    } else {
+        let ids: Vec<Uuid> = sqlx::query_scalar(
+            "SELECT DISTINCT entry_id FROM entries_history \
+             WHERE name = $1 AND user_id IS NULL",
+        )
+        .bind(name)
+        .fetch_all(pool)
+        .await?;
+        match ids.len() {
+            0 => None,
+            1 => Some(ids[0]),
+            _ => {
+                let folders: Vec<String> = sqlx::query_scalar(
+                    "SELECT DISTINCT folder FROM entries_history \
+                     WHERE name = $1 AND user_id IS NULL",
+                )
+                .bind(name)
+                .fetch_all(pool)
+                .await?;
+                anyhow::bail!(
+                    "Ambiguous: entries named '{}' exist in folders: [{}]. \
+                     Specify 'folder' to disambiguate.",
+                    name,
+                    folders.join(", ")
+                )
+            }
+        }
+    };
+
+    let entry_id = entry_id.ok_or_else(|| anyhow::anyhow!("No history found for '{}'", name))?;
+
+    let snap: Option<EntryHistoryRow> = if let Some(ver) = to_version {
+        sqlx::query_as(
+            "SELECT entry_id, folder, type, version, action, tags, metadata \
+             FROM entries_history \
+             WHERE entry_id = $1 AND version = $2 ORDER BY id DESC LIMIT 1",
+        )
+        .bind(entry_id)
+        .bind(ver)
        .fetch_optional(pool)
        .await?
     } else {
         sqlx::query_as(
-            "SELECT entry_id, version, action, tags, metadata FROM entries_history \
-             WHERE namespace = $1 AND kind = $2 AND name = $3 \
-             AND user_id IS NULL ORDER BY id DESC LIMIT 1",
+            "SELECT entry_id, folder, type, version, action, tags, metadata \
+             FROM entries_history \
+             WHERE entry_id = $1 ORDER BY id DESC LIMIT 1",
         )
-        .bind(namespace)
-        .bind(kind)
-        .bind(name)
+        .bind(entry_id)
        .fetch_optional(pool)
        .await?
     };
 
     let snap = snap.ok_or_else(|| {
         anyhow::anyhow!(
-            "No history found for [{}/{}] {}{}.",
-            namespace,
-            kind,
+            "No history found for '{}'{}.",
             name,
             to_version
                 .map(|v| format!(" at version {}", v))
@@ -130,43 +185,32 @@ pub async fn run(
     struct LiveEntry {
         id: Uuid,
         version: i64,
+        folder: String,
+        #[sqlx(rename = "type")]
+        entry_type: String,
         tags: Vec<String>,
         metadata: Value,
+        #[allow(dead_code)]
+        notes: String,
     }
 
-    // Query live entry with correct user_id scoping to avoid PK conflicts
-    let live: Option<LiveEntry> = if let Some(uid) = user_id {
-        sqlx::query_as(
-            "SELECT id, version, tags, metadata FROM entries \
-             WHERE user_id = $1 AND namespace = $2 AND kind = $3 AND name = $4 FOR UPDATE",
-        )
-        .bind(uid)
-        .bind(namespace)
-        .bind(kind)
-        .bind(name)
-        .fetch_optional(&mut *tx)
-        .await?
-    } else {
-        sqlx::query_as(
-            "SELECT id, version, tags, metadata FROM entries \
-             WHERE user_id IS NULL AND namespace = $1 AND kind = $2 AND name = $3 FOR UPDATE",
-        )
-        .bind(namespace)
-        .bind(kind)
-        .bind(name)
-        .fetch_optional(&mut *tx)
-        .await?
-    };
+    // Lock the live entry if it exists (matched by entry_id for precision).
+    let live: Option<LiveEntry> = sqlx::query_as(
+        "SELECT id, version, folder, type, tags, metadata, notes FROM entries \
+         WHERE id = $1 FOR UPDATE",
+    )
+    .bind(entry_id)
+    .fetch_optional(&mut *tx)
+    .await?;
 
-    let entry_id = if let Some(ref lr) = live {
-        // Snapshot current state before overwriting
+    let live_entry_id = if let Some(ref lr) = live {
         if let Err(e) = db::snapshot_entry_history(
             &mut tx,
             db::EntrySnapshotParams {
                 entry_id: lr.id,
                 user_id,
-                namespace,
-                kind,
+                folder: &lr.folder,
+                entry_type: &lr.entry_type,
                 name,
                 version: lr.version,
                 action: "rollback",
@@ -209,7 +253,6 @@ pub async fn run(
             }
         }
 
-        // Update the existing row in-place to preserve its primary key and user_id
         sqlx::query(
             "UPDATE entries SET tags = $1, metadata = $2, version = version + 1, \
              updated_at = NOW() WHERE id = $3",
@@ -222,16 +265,15 @@ pub async fn run(
 
         lr.id
     } else {
-        // No live entry — insert a fresh one with a new UUID
         if let Some(uid) = user_id {
             sqlx::query_scalar(
                 "INSERT INTO entries \
-                 (user_id, namespace, kind, name, tags, metadata, version, updated_at) \
-                 VALUES ($1, $2, $3, $4, $5, $6, $7, NOW()) RETURNING id",
+                 (user_id, folder, type, name, notes, tags, metadata, version, updated_at) \
+                 VALUES ($1, $2, $3, $4, '', $5, $6, $7, NOW()) RETURNING id",
             )
            .bind(uid)
-            .bind(namespace)
-            .bind(kind)
+            .bind(&snap.folder)
+            .bind(&snap.entry_type)
            .bind(name)
            .bind(&snap.tags)
            .bind(&snap.metadata)
@@ -241,11 +283,11 @@ pub async fn run(
         } else {
             sqlx::query_scalar(
                 "INSERT INTO entries \
-                 (namespace, kind, name, tags, metadata, version, updated_at) \
-                 VALUES ($1, $2, $3, $4, $5, $6, NOW()) RETURNING id",
+                 (folder, type, name, notes, tags, metadata, version, updated_at) \
+                 VALUES ($1, $2, $3, '', $4, $5, $6, NOW()) RETURNING id",
             )
-            .bind(namespace)
-            .bind(kind)
+            .bind(&snap.folder)
+            .bind(&snap.entry_type)
            .bind(name)
            .bind(&snap.tags)
            .bind(&snap.metadata)
@@ -256,7 +298,7 @@ pub async fn run(
     };
 
     sqlx::query("DELETE FROM secrets WHERE entry_id = $1")
-        .bind(entry_id)
+        .bind(live_entry_id)
        .execute(&mut *tx)
        .await?;
 
@@ -265,7 +307,7 @@ pub async fn run(
             continue;
         }
         sqlx::query("INSERT INTO secrets (entry_id, field_name, encrypted) VALUES ($1, $2, $3)")
-            .bind(entry_id)
+            .bind(live_entry_id)
            .bind(&f.field_name)
            .bind(&f.encrypted)
            .execute(&mut *tx)
@@ -276,8 +318,8 @@ pub async fn run(
         &mut tx,
         user_id,
         "rollback",
-        namespace,
-        kind,
+        &snap.folder,
+        &snap.entry_type,
         name,
         serde_json::json!({
             "restored_version": snap.version,
@@ -289,9 +331,9 @@ pub async fn run(
     tx.commit().await?;
 
     Ok(RollbackResult {
-        namespace: namespace.to_string(),
-        kind: kind.to_string(),
         name: name.to_string(),
+        folder: snap.folder,
+        entry_type: snap.entry_type,
         restored_version: snap.version,
     })
 }
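Summary sketch of the reworked rollback flow above (a pure model; names hypothetical): history rows are keyed by `entry_id`, the live row is snapshotted with action `rollback` before being overwritten, and the live update path bumps `version + 1` rather than reusing the old version number, so the version timeline never goes backwards:

```rust
#[derive(Clone)]
struct Snapshot {
    version: i64,
    metadata: String,
}

struct Live {
    version: i64,
    metadata: String,
}

/// Roll the live row back to `target`, keeping version strictly increasing so
/// optimistic concurrency checks keep working after a rollback.
fn rollback(live: &mut Live, history: &mut Vec<Snapshot>, target: &Snapshot) {
    // Snapshot the current state first (action = "rollback" in entries_history).
    history.push(Snapshot { version: live.version, metadata: live.metadata.clone() });
    live.metadata = target.metadata.clone();
    live.version += 1; // UPDATE ... SET version = version + 1
}

fn main() {
    let mut history = vec![Snapshot { version: 1, metadata: "v1".into() }];
    let mut live = Live { version: 3, metadata: "v3".into() };
    let target = history[0].clone();
    rollback(&mut live, &mut history, &target);
    assert_eq!(live.metadata, "v1");
    assert_eq!(live.version, 4);
    assert_eq!(history.len(), 2);
}
```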
diff --git a/crates/secrets-core/src/service/search.rs b/crates/secrets-core/src/service/search.rs
index c788b10..bebb8ff 100644
--- a/crates/secrets-core/src/service/search.rs
+++ b/crates/secrets-core/src/service/search.rs
@@ -9,8 +9,8 @@ use crate::models::{Entry, SecretField};
 pub const FETCH_ALL_LIMIT: u32 = 100_000;
 
 pub struct SearchParams<'a> {
-    pub namespace: Option<&'a str>,
-    pub kind: Option<&'a str>,
+    pub folder: Option<&'a str>,
+    pub entry_type: Option<&'a str>,
     pub name: Option<&'a str>,
     pub tags: &'a [String],
     pub query: Option<&'a str>,
@@ -44,16 +44,16 @@ pub async fn run(pool: &PgPool, params: SearchParams<'_>) -> Result<Value> {
 pub async fn fetch_entries(
     pool: &PgPool,
-    namespace: Option<&str>,
-    kind: Option<&str>,
+    folder: Option<&str>,
+    entry_type: Option<&str>,
     name: Option<&str>,
     tags: &[String],
     query: Option<&str>,
     user_id: Option<Uuid>,
 ) -> Result<Vec<Entry>> {
     let params = SearchParams {
-        namespace,
-        kind,
+        folder,
+        entry_type,
         name,
         tags,
         query,
@@ -77,12 +77,12 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<Entry>> {
-    if let Some(v) = a.namespace {
-        conditions.push(format!("namespace = ${}", idx));
+    if let Some(v) = a.folder {
+        conditions.push(format!("folder = ${}", idx));
         binds.push(v.to_string());
         idx += 1;
     }
-    if let Some(v) = a.kind {
-        conditions.push(format!("kind = ${}", idx));
+    if let Some(v) = a.entry_type {
+        conditions.push(format!("type = ${}", idx));
         binds.push(v.to_string());
         idx += 1;
     }
@@ @@
+/// Resolve a single entry by `name`, optionally scoped to `folder`.
+pub async fn resolve_entry(
+    pool: &PgPool,
+    name: &str,
+    folder: Option<&str>,
+    user_id: Option<Uuid>,
+) -> Result<Entry> {
+    let entries = fetch_entries(pool, folder, None, Some(name), &[], None, user_id).await?;
+    match entries.len() {
+        0 => {
+            if let Some(f) = folder {
+                anyhow::bail!("Not found: '{}' in folder '{}'", name, f)
+            } else {
+                anyhow::bail!("Not found: '{}'", name)
+            }
+        }
+        1 => Ok(entries.into_iter().next().unwrap()),
+        _ => {
+            let folders: Vec<&str> = entries.iter().map(|e| e.folder.as_str()).collect();
+            anyhow::bail!(
+                "Ambiguous: {} entries named '{}' found in folders: [{}]. \
+                 Specify 'folder' to disambiguate.",
+                entries.len(),
+                name,
+                folders.join(", ")
+            )
+        }
+    }
+}
+
+// ── Internal raw row (because user_id is nullable in DB) ─────────────────────
 #[derive(sqlx::FromRow)]
 struct EntryRaw {
     id: Uuid,
     user_id: Option<Uuid>,
-    namespace: String,
-    kind: String,
+    folder: String,
+    #[sqlx(rename = "type")]
+    entry_type: String,
     name: String,
+    notes: String,
     tags: Vec<String>,
     metadata: Value,
     version: i64,
@@ -228,9 +265,10 @@ impl From<EntryRaw> for Entry {
         Entry {
             id: r.id,
             user_id: r.user_id,
-            namespace: r.namespace,
-            kind: r.kind,
+            folder: r.folder,
+            entry_type: r.entry_type,
             name: r.name,
+            notes: r.notes,
             tags: r.tags,
             metadata: r.metadata,
             version: r.version,
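`resolve_entry` above is the single chokepoint for the name-to-entry lookup, and the tools share its error text so an AI caller can recover mechanically. A sketch of that message contract as a pure function (illustrative names, not the service code):

```rust
/// Mirrors resolve_entry's three outcomes as plain strings, so the
/// MCP-facing error text stays uniform across get/update/delete/history/rollback.
fn lookup_message(name: &str, folders: &[&str]) -> String {
    match folders.len() {
        0 => format!("Not found: '{name}'"),
        1 => format!("resolved '{name}' in folder '{}'", folders[0]),
        n => format!(
            "Ambiguous: {n} entries named '{name}' found in folders: [{}]. \
             Specify 'folder' to disambiguate.",
            folders.join(", ")
        ),
    }
}

fn main() {
    println!("{}", lookup_message("aliyun", &[]));
    println!("{}", lookup_message("aliyun", &["refining"]));
    println!("{}", lookup_message("aliyun", &["refining", "ricnsmart"]));
}
```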
diff --git a/crates/secrets-core/src/service/update.rs b/crates/secrets-core/src/service/update.rs
index d2ea926..914465e 100644
--- a/crates/secrets-core/src/service/update.rs
+++ b/crates/secrets-core/src/service/update.rs
@@ -13,9 +13,10 @@ use crate::service::add::{
 #[derive(Debug, serde::Serialize)]
 pub struct UpdateResult {
-    pub namespace: String,
-    pub kind: String,
     pub name: String,
+    pub folder: String,
+    #[serde(rename = "type")]
+    pub entry_type: String,
     pub add_tags: Vec<String>,
     pub remove_tags: Vec<String>,
     pub meta_keys: Vec<String>,
@@ -25,9 +26,10 @@
 pub struct UpdateParams<'a> {
-    pub namespace: &'a str,
-    pub kind: &'a str,
     pub name: &'a str,
+    /// Optional folder for disambiguation when multiple entries share the same name.
+    pub folder: Option<&'a str>,
+    pub notes: Option<&'a str>,
     pub add_tags: &'a [String],
     pub remove_tags: &'a [String],
     pub meta_entries: &'a [String],
@@ -44,45 +46,76 @@ pub async fn run(
 ) -> Result<UpdateResult> {
     let mut tx = pool.begin().await?;

-    let row: Option<EntryRow> = if let Some(uid) = params.user_id {
+    // Fetch matching rows with FOR UPDATE; use folder when provided to resolve ambiguity.
+    let rows: Vec<EntryRow> = if let Some(uid) = params.user_id {
+        if let Some(folder) = params.folder {
+            sqlx::query_as(
+                "SELECT id, version, folder, type, tags, metadata, notes FROM entries \
+                 WHERE user_id = $1 AND folder = $2 AND name = $3 FOR UPDATE",
+            )
+            .bind(uid)
+            .bind(folder)
+            .bind(params.name)
+            .fetch_all(&mut *tx)
+            .await?
+        } else {
+            sqlx::query_as(
+                "SELECT id, version, folder, type, tags, metadata, notes FROM entries \
+                 WHERE user_id = $1 AND name = $2 FOR UPDATE",
+            )
+            .bind(uid)
+            .bind(params.name)
+            .fetch_all(&mut *tx)
+            .await?
+        }
+    } else if let Some(folder) = params.folder {
         sqlx::query_as(
-            "SELECT id, version, tags, metadata FROM entries \
-             WHERE user_id = $1 AND namespace = $2 AND kind = $3 AND name = $4 FOR UPDATE",
+            "SELECT id, version, folder, type, tags, metadata, notes FROM entries \
+             WHERE user_id IS NULL AND folder = $1 AND name = $2 FOR UPDATE",
         )
-        .bind(uid)
-        .bind(params.namespace)
-        .bind(params.kind)
+        .bind(folder)
         .bind(params.name)
-        .fetch_optional(&mut *tx)
+        .fetch_all(&mut *tx)
         .await?
     } else {
         sqlx::query_as(
-            "SELECT id, version, tags, metadata FROM entries \
-             WHERE user_id IS NULL AND namespace = $1 AND kind = $2 AND name = $3 FOR UPDATE",
+            "SELECT id, version, folder, type, tags, metadata, notes FROM entries \
+             WHERE user_id IS NULL AND name = $1 FOR UPDATE",
         )
-        .bind(params.namespace)
-        .bind(params.kind)
         .bind(params.name)
-        .fetch_optional(&mut *tx)
+        .fetch_all(&mut *tx)
         .await?
     };

-    let row = row.ok_or_else(|| {
-        anyhow::anyhow!(
-            "Not found: [{}/{}] {}. Use `add` to create it first.",
-            params.namespace,
-            params.kind,
-            params.name
-        )
-    })?;
+    let row = match rows.len() {
+        0 => {
+            tx.rollback().await?;
+            anyhow::bail!(
+                "Not found: '{}'. Use `add` to create it first.",
+                params.name
+            )
+        }
+        1 => rows.into_iter().next().unwrap(),
+        _ => {
+            tx.rollback().await?;
+            let folders: Vec<&str> = rows.iter().map(|r| r.folder.as_str()).collect();
+            anyhow::bail!(
+                "Ambiguous: {} entries named '{}' found in folders: [{}]. \
+                 Specify 'folder' to disambiguate.",
+                rows.len(),
+                params.name,
+                folders.join(", ")
+            )
+        }
+    };

     if let Err(e) = db::snapshot_entry_history(
         &mut tx,
         db::EntrySnapshotParams {
             entry_id: row.id,
             user_id: params.user_id,
-            namespace: params.namespace,
-            kind: params.kind,
+            folder: &row.folder,
+            entry_type: &row.entry_type,
             name: params.name,
             version: row.version,
             action: "update",
@@ -117,12 +150,16 @@ pub async fn run(
     }
     let metadata = Value::Object(meta_map);

+    let new_notes = params.notes.unwrap_or(&row.notes);
+
     let result = sqlx::query(
-        "UPDATE entries SET tags = $1, metadata = $2, version = version + 1, updated_at = NOW() \
-         WHERE id = $3 AND version = $4",
+        "UPDATE entries SET tags = $1, metadata = $2, notes = $3, \
+         version = version + 1, updated_at = NOW() \
+         WHERE id = $4 AND version = $5",
     )
     .bind(&tags)
     .bind(&metadata)
+    .bind(new_notes)
     .bind(row.id)
     .bind(row.version)
     .execute(&mut *tx)
@@ -131,9 +168,7 @@ pub async fn run(
     if result.rows_affected() == 0 {
         tx.rollback().await?;
         anyhow::bail!(
-            "Concurrent modification detected for [{}/{}] {}. Please retry.",
-            params.namespace,
-            params.kind,
+            "Concurrent modification detected for '{}'. Please retry.",
             params.name
         );
     }
@@ -243,8 +278,8 @@ pub async fn run(
         &mut tx,
         params.user_id,
         "update",
-        params.namespace,
-        params.kind,
+        &row.folder,
+        &row.entry_type,
         params.name,
         serde_json::json!({
             "add_tags": params.add_tags,
@@ -260,9 +295,9 @@ pub async fn run(
     tx.commit().await?;

     Ok(UpdateResult {
-        namespace: params.namespace.to_string(),
-        kind: params.kind.to_string(),
         name: params.name.to_string(),
+        folder: row.folder.clone(),
+        entry_type: row.entry_type.clone(),
         add_tags: params.add_tags.to_vec(),
         remove_tags: params.remove_tags.to_vec(),
         meta_keys,
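The `WHERE id = $4 AND version = $5` guard above is plain optimistic locking: a concurrent writer bumps `version`, so a stale UPDATE matches zero rows and the transaction rolls back instead of clobbering. The pattern in isolation, as a sketch; the table and column names follow the schema in this diff, but the helper itself is illustrative:

```rust
// Sketch: the optimistic-concurrency guard used by update.rs above.
// Returns false when someone else already bumped `version`, meaning the
// caller should re-read the row and retry.
async fn bump_if_unchanged(
    pool: &sqlx::PgPool,
    id: uuid::Uuid,
    expected_version: i64,
) -> anyhow::Result<bool> {
    let result = sqlx::query(
        "UPDATE entries SET version = version + 1, updated_at = NOW() \
         WHERE id = $1 AND version = $2",
    )
    .bind(id)
    .bind(expected_version)
    .execute(pool)
    .await?;
    Ok(result.rows_affected() == 1)
}
```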
Please retry.", - params.namespace, - params.kind, + "Concurrent modification detected for '{}'. Please retry.", params.name ); } @@ -243,8 +278,8 @@ pub async fn run( &mut tx, params.user_id, "update", - params.namespace, - params.kind, + "", + "", params.name, serde_json::json!({ "add_tags": params.add_tags, @@ -260,9 +295,9 @@ pub async fn run( tx.commit().await?; Ok(UpdateResult { - namespace: params.namespace.to_string(), - kind: params.kind.to_string(), name: params.name.to_string(), + folder: row.folder.clone(), + entry_type: row.entry_type.clone(), add_tags: params.add_tags.to_vec(), remove_tags: params.remove_tags.to_vec(), meta_keys, diff --git a/crates/secrets-mcp/Cargo.toml b/crates/secrets-mcp/Cargo.toml index 33c44da..5809db7 100644 --- a/crates/secrets-mcp/Cargo.toml +++ b/crates/secrets-mcp/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "secrets-mcp" -version = "0.2.2" +version = "0.3.0" edition.workspace = true [[bin]] diff --git a/crates/secrets-mcp/src/tools.rs b/crates/secrets-mcp/src/tools.rs index d6ec5a9..ac6eaf3 100644 --- a/crates/secrets-mcp/src/tools.rs +++ b/crates/secrets-mcp/src/tools.rs @@ -155,17 +155,18 @@ impl SecretsService { #[derive(Debug, Deserialize, JsonSchema)] struct SearchInput { - #[schemars(description = "Namespace filter (e.g. 'refining', 'ricnsmart')")] - namespace: Option, - #[schemars(description = "Kind filter (e.g. 'server', 'service', 'key')")] - kind: Option, - #[schemars(description = "Exact record name")] + #[schemars(description = "Fuzzy search across name, folder, type, notes, tags, metadata")] + query: Option, + #[schemars(description = "Folder filter (e.g. 'refining', 'personal', 'family')")] + folder: Option, + #[schemars(description = "Type filter (e.g. 'server', 'service', 'person', 'key')")] + #[serde(rename = "type")] + entry_type: Option, + #[schemars(description = "Exact name to match")] name: Option, #[schemars(description = "Tag filters (all must match)")] tags: Option>, - #[schemars(description = "Fuzzy search across name, namespace, kind, tags, metadata")] - query: Option, - #[schemars(description = "Return only summary fields (name/tags/desc/updated_at)")] + #[schemars(description = "Return only summary fields (name/tags/notes/updated_at)")] summary: Option, #[schemars(description = "Sort order: 'name' (default), 'updated', 'created'")] sort: Option, @@ -177,24 +178,29 @@ struct SearchInput { #[derive(Debug, Deserialize, JsonSchema)] struct GetSecretInput { - #[schemars(description = "Namespace of the entry")] - namespace: String, - #[schemars(description = "Kind of the entry (e.g. 'server', 'service')")] - kind: String, #[schemars(description = "Name of the entry")] name: String, + #[schemars( + description = "Folder for disambiguation when multiple entries share the same name (optional)" + )] + folder: Option, #[schemars(description = "Specific field to retrieve. If omitted, returns all fields.")] field: Option, } #[derive(Debug, Deserialize, JsonSchema)] struct AddInput { - #[schemars(description = "Namespace")] - namespace: String, - #[schemars(description = "Kind (e.g. 'server', 'service', 'key')")] - kind: String, - #[schemars(description = "Unique name within namespace+kind")] + #[schemars(description = "Unique name for this entry")] name: String, + #[schemars(description = "Folder for organization (optional, e.g. 'personal', 'refining')")] + folder: Option, + #[schemars( + description = "Type/category of this entry (optional, e.g. 
@@ -205,12 +211,14 @@ struct AddInput {
 #[derive(Debug, Deserialize, JsonSchema)]
 struct UpdateInput {
-    #[schemars(description = "Namespace")]
-    namespace: String,
-    #[schemars(description = "Kind")]
-    kind: String,
-    #[schemars(description = "Name")]
+    #[schemars(description = "Name of the entry to update")]
     name: String,
+    #[schemars(
+        description = "Folder for disambiguation when multiple entries share the same name (optional)"
+    )]
+    folder: Option<String>,
+    #[schemars(description = "Update the notes field")]
+    notes: Option<String>,
     #[schemars(description = "Tags to add")]
     add_tags: Option<Vec<String>>,
     #[schemars(description = "Tags to remove")]
@@ -227,46 +235,49 @@ struct UpdateInput {
 #[derive(Debug, Deserialize, JsonSchema)]
 struct DeleteInput {
-    #[schemars(description = "Namespace")]
-    namespace: String,
-    #[schemars(description = "Kind filter (required for single delete)")]
-    kind: Option<String>,
-    #[schemars(description = "Exact name to delete. Omit for bulk delete by namespace+kind.")]
+    #[schemars(description = "Name of the entry to delete (single delete). \
+                              Omit to bulk delete by folder/type filters.")]
     name: Option<String>,
+    #[schemars(description = "Folder filter for bulk delete")]
+    folder: Option<String>,
+    #[schemars(description = "Type filter for bulk delete")]
+    #[serde(rename = "type")]
+    entry_type: Option<String>,
     #[schemars(description = "Preview deletions without writing")]
     dry_run: Option<bool>,
 }

 #[derive(Debug, Deserialize, JsonSchema)]
 struct HistoryInput {
-    #[schemars(description = "Namespace")]
-    namespace: String,
-    #[schemars(description = "Kind")]
-    kind: String,
-    #[schemars(description = "Name")]
+    #[schemars(description = "Name of the entry")]
     name: String,
+    #[schemars(
+        description = "Folder for disambiguation when multiple entries share the same name (optional)"
+    )]
+    folder: Option<String>,
     #[schemars(description = "Max history entries to return (default 20)")]
     limit: Option<i64>,
 }

 #[derive(Debug, Deserialize, JsonSchema)]
 struct RollbackInput {
-    #[schemars(description = "Namespace")]
-    namespace: String,
-    #[schemars(description = "Kind")]
-    kind: String,
-    #[schemars(description = "Name")]
+    #[schemars(description = "Name of the entry")]
     name: String,
+    #[schemars(
+        description = "Folder for disambiguation when multiple entries share the same name (optional)"
+    )]
+    folder: Option<String>,
     #[schemars(description = "Target version number. Omit to restore the most recent snapshot.")]
     to_version: Option<i64>,
 }
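`DeleteInput` now drives two modes: omitting `name` switches to bulk mode, and `dry_run` previews either mode without writing. A sketch of both payloads, with illustrative values:

```rust
// Sketch: the two DeleteInput shapes defined above.
fn example_delete_payloads() -> (serde_json::Value, serde_json::Value) {
    // Single delete by name; `folder` disambiguates if the name is reused.
    let single = serde_json::json!({ "name": "gitea", "folder": "refining" });
    // Bulk delete by filters, previewed first with dry_run.
    let bulk = serde_json::json!({ "folder": "refining", "type": "server", "dry_run": true });
    (single, bulk)
}
```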

 #[derive(Debug, Deserialize, JsonSchema)]
 struct ExportInput {
-    #[schemars(description = "Namespace filter")]
-    namespace: Option<String>,
-    #[schemars(description = "Kind filter")]
-    kind: Option<String>,
+    #[schemars(description = "Folder filter")]
+    folder: Option<String>,
+    #[schemars(description = "Type filter")]
+    #[serde(rename = "type")]
+    entry_type: Option<String>,
     #[schemars(description = "Exact name filter")]
     name: Option<String>,
     #[schemars(description = "Tag filters")]
     tags: Option<Vec<String>>,
@@ -279,10 +290,11 @@ struct ExportInput {
 #[derive(Debug, Deserialize, JsonSchema)]
 struct EnvMapInput {
-    #[schemars(description = "Namespace filter")]
-    namespace: Option<String>,
-    #[schemars(description = "Kind filter")]
-    kind: Option<String>,
+    #[schemars(description = "Folder filter")]
+    folder: Option<String>,
+    #[schemars(description = "Type filter")]
+    #[serde(rename = "type")]
+    entry_type: Option<String>,
     #[schemars(description = "Exact name filter")]
     name: Option<String>,
     #[schemars(description = "Tag filters")]
@@ -316,8 +328,8 @@ impl SecretsService {
         tracing::info!(
             tool = "secrets_search",
             ?user_id,
-            namespace = input.namespace.as_deref(),
-            kind = input.kind.as_deref(),
+            folder = input.folder.as_deref(),
+            entry_type = input.entry_type.as_deref(),
             name = input.name.as_deref(),
             query = input.query.as_deref(),
             "tool call start",
         );
@@ -326,8 +338,8 @@ impl SecretsService {
         let result = svc_search(
             &self.pool,
             SearchParams {
-                namespace: input.namespace.as_deref(),
-                kind: input.kind.as_deref(),
+                folder: input.folder.as_deref(),
+                entry_type: input.entry_type.as_deref(),
                 name: input.name.as_deref(),
                 tags: &tags,
                 query: input.query.as_deref(),
@@ -347,12 +359,11 @@ impl SecretsService {
             .map(|e| {
                 if summary {
                     serde_json::json!({
-                        "namespace": e.namespace,
-                        "kind": e.kind,
                         "name": e.name,
+                        "folder": e.folder,
+                        "type": e.entry_type,
                         "tags": e.tags,
-                        "desc": e.metadata.get("desc").or_else(|| e.metadata.get("url"))
-                            .and_then(|v| v.as_str()).unwrap_or(""),
+                        "notes": e.notes,
                         "updated_at": e.updated_at.format("%Y-%m-%dT%H:%M:%SZ").to_string(),
                     })
                 } else {
@@ -363,9 +374,10 @@ impl SecretsService {
                         .unwrap_or_default();
                     serde_json::json!({
                         "id": e.id,
-                        "namespace": e.namespace,
-                        "kind": e.kind,
                         "name": e.name,
+                        "folder": e.folder,
+                        "type": e.entry_type,
+                        "notes": e.notes,
                         "tags": e.tags,
                         "metadata": e.metadata,
                         "secret_fields": schema,
@@ -408,8 +420,6 @@ impl SecretsService {
         tracing::info!(
             tool = "secrets_get",
             ?user_id,
-            namespace = %input.namespace,
-            kind = %input.kind,
             name = %input.name,
             field = input.field.as_deref(),
             "tool call start",
         );
@@ -418,9 +428,8 @@ impl SecretsService {
         if let Some(field_name) = &input.field {
             let value = get_secret_field(
                 &self.pool,
-                &input.namespace,
-                &input.kind,
                 &input.name,
+                input.folder.as_deref(),
                 field_name,
                 &user_key,
                 Some(user_id),
@@ -440,9 +449,8 @@ impl SecretsService {
         } else {
             let secrets = get_all_secrets(
                 &self.pool,
-                &input.namespace,
-                &input.kind,
                 &input.name,
+                input.folder.as_deref(),
                 &user_key,
                 Some(user_id),
             )
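End to end, the folder-optional lookup behaves like this: if two folders hold an entry with the same name, the first payload below surfaces the resolver's "Ambiguous: ... Specify 'folder' to disambiguate." error, and the second pins one entry. Sketch with illustrative values:

```rust
// Sketch: GetSecretInput without and with the disambiguating folder.
fn example_get_payloads() -> (serde_json::Value, serde_json::Value) {
    let ambiguous = serde_json::json!({ "name": "postgres", "field": "password" });
    let pinned = serde_json::json!({
        "name": "postgres",
        "folder": "refining",
        "field": "password",
    });
    (ambiguous, pinned)
}
```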
@@ -478,22 +486,26 @@ impl SecretsService {
         tracing::info!(
             tool = "secrets_add",
             ?user_id,
-            namespace = %input.namespace,
-            kind = %input.kind,
             name = %input.name,
+            folder = input.folder.as_deref(),
+            entry_type = input.entry_type.as_deref(),
             "tool call start",
         );

         let tags = input.tags.unwrap_or_default();
         let meta = input.meta.unwrap_or_default();
         let secrets = input.secrets.unwrap_or_default();
+        let folder = input.folder.as_deref().unwrap_or("");
+        let entry_type = input.entry_type.as_deref().unwrap_or("");
+        let notes = input.notes.as_deref().unwrap_or("");

         let result = svc_add(
             &self.pool,
             AddParams {
-                namespace: &input.namespace,
-                kind: &input.kind,
                 name: &input.name,
+                folder,
+                entry_type,
+                notes,
                 tags: &tags,
                 meta_entries: &meta,
                 secret_entries: &secrets,
@@ -507,8 +519,6 @@ impl SecretsService {
         tracing::info!(
             tool = "secrets_add",
             ?user_id,
-            namespace = %input.namespace,
-            kind = %input.kind,
             name = %input.name,
             elapsed_ms = t.elapsed().as_millis(),
             "tool call ok",
@@ -532,8 +542,6 @@ impl SecretsService {
         tracing::info!(
             tool = "secrets_update",
             ?user_id,
-            namespace = %input.namespace,
-            kind = %input.kind,
             name = %input.name,
             "tool call start",
         );
@@ -548,9 +556,9 @@ impl SecretsService {
         let result = svc_update(
             &self.pool,
             UpdateParams {
-                namespace: &input.namespace,
-                kind: &input.kind,
                 name: &input.name,
+                folder: input.folder.as_deref(),
+                notes: input.notes.as_deref(),
                 add_tags: &add_tags,
                 remove_tags: &remove_tags,
                 meta_entries: &meta,
@@ -567,8 +575,6 @@ impl SecretsService {
         tracing::info!(
             tool = "secrets_update",
             ?user_id,
-            namespace = %input.namespace,
-            kind = %input.kind,
             name = %input.name,
             elapsed_ms = t.elapsed().as_millis(),
             "tool call ok",
@@ -578,8 +584,8 @@ impl SecretsService {
     }

     #[tool(
-        description = "Delete one entry (specify namespace+kind+name) or bulk delete all \
-                       entries matching namespace+kind. Use dry_run=true to preview.",
+        description = "Delete one entry by name, or bulk delete entries matching folder and/or type. \
+                       Use dry_run=true to preview.",
         annotations(title = "Delete Secret Entry", destructive_hint = true)
     )]
     async fn secrets_delete(
@@ -592,9 +598,9 @@ impl SecretsService {
         tracing::info!(
             tool = "secrets_delete",
             ?user_id,
-            namespace = %input.namespace,
-            kind = input.kind.as_deref(),
             name = input.name.as_deref(),
+            folder = input.folder.as_deref(),
+            entry_type = input.entry_type.as_deref(),
             dry_run = input.dry_run.unwrap_or(false),
             "tool call start",
         );
@@ -602,9 +608,9 @@ impl SecretsService {
         let result = svc_delete(
             &self.pool,
             DeleteParams {
-                namespace: &input.namespace,
-                kind: input.kind.as_deref(),
                 name: input.name.as_deref(),
+                folder: input.folder.as_deref(),
+                entry_type: input.entry_type.as_deref(),
                 dry_run: input.dry_run.unwrap_or(false),
                 user_id,
             },
@@ -615,7 +621,6 @@ impl SecretsService {
         tracing::info!(
             tool = "secrets_delete",
             ?user_id,
-            namespace = %input.namespace,
             elapsed_ms = t.elapsed().as_millis(),
             "tool call ok",
         );
@@ -642,17 +647,14 @@ impl SecretsService {
         tracing::info!(
             tool = "secrets_history",
             ?user_id,
-            namespace = %input.namespace,
-            kind = %input.kind,
             name = %input.name,
             "tool call start",
         );

         let result = svc_history(
             &self.pool,
-            &input.namespace,
-            &input.kind,
             &input.name,
+            input.folder.as_deref(),
             input.limit.unwrap_or(20),
             user_id,
         )
@@ -684,8 +686,6 @@ impl SecretsService {
         tracing::info!(
             tool = "secrets_rollback",
             ?user_id,
-            namespace = %input.namespace,
-            kind = %input.kind,
             name = %input.name,
             to_version = input.to_version,
             "tool call start",
         );
@@ -693,9 +693,8 @@ impl SecretsService {
         let result = svc_rollback(
             &self.pool,
-            &input.namespace,
-            &input.kind,
             &input.name,
+            input.folder.as_deref(),
             input.to_version,
             &user_key,
             Some(user_id),
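For the update path, note the asymmetry: `notes` replaces the whole stored text when present (per `params.notes.unwrap_or(&row.notes)` in update.rs), while tags are add/remove deltas. An `UpdateInput` payload sketch with illustrative values:

```rust
// Sketch: an UpdateInput payload exercising the new fields.
fn example_update_payload() -> serde_json::Value {
    serde_json::json!({
        "name": "gitea",
        "folder": "refining",              // optional; needed only on name clashes
        "notes": "migrated to new host",   // replaces the stored notes wholesale
        "add_tags": ["prod"],
        "remove_tags": ["staging"],
    })
}
```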
@@ -734,8 +733,8 @@ impl SecretsService {
         tracing::info!(
             tool = "secrets_export",
             ?user_id,
-            namespace = input.namespace.as_deref(),
-            kind = input.kind.as_deref(),
+            folder = input.folder.as_deref(),
+            entry_type = input.entry_type.as_deref(),
             format,
             "tool call start",
         );
@@ -743,8 +742,8 @@ impl SecretsService {
         let data = svc_export(
             &self.pool,
             ExportParams {
-                namespace: input.namespace.as_deref(),
-                kind: input.kind.as_deref(),
+                folder: input.folder.as_deref(),
+                entry_type: input.entry_type.as_deref(),
                 name: input.name.as_deref(),
                 tags: &tags,
                 query: input.query.as_deref(),
@@ -800,16 +799,16 @@ impl SecretsService {
         tracing::info!(
             tool = "secrets_env_map",
             ?user_id,
-            namespace = input.namespace.as_deref(),
-            kind = input.kind.as_deref(),
+            folder = input.folder.as_deref(),
+            entry_type = input.entry_type.as_deref(),
             prefix = input.prefix.as_deref().unwrap_or(""),
             "tool call start",
         );

         let env_map = secrets_core::service::env_map::build_env_map(
             &self.pool,
-            input.namespace.as_deref(),
-            input.kind.as_deref(),
+            input.folder.as_deref(),
+            input.entry_type.as_deref(),
             input.name.as_deref(),
             &tags,
             &only_fields,
diff --git a/crates/secrets-mcp/src/web.rs b/crates/secrets-mcp/src/web.rs
index be2f451..6bd99d2 100644
--- a/crates/secrets-mcp/src/web.rs
+++ b/crates/secrets-mcp/src/web.rs
@@ -506,7 +506,7 @@ async fn audit_page(
         .map(|row| AuditEntryView {
             created_at_iso: row.created_at.to_rfc3339_opts(SecondsFormat::Secs, true),
             action: row.action,
-            target: format_audit_target(&row.namespace, &row.kind, &row.name),
+            target: format_audit_target(&row.folder, &row.entry_type, &row.name),
             detail: serde_json::to_string_pretty(&row.detail).unwrap_or_else(|_| "{}".to_string()),
         })
         .collect();
@@ -783,11 +783,15 @@ fn render_template<T: Template>(tmpl: T) -> Result<Response> {
     Ok(Html(html).into_response())
 }

-fn format_audit_target(namespace: &str, kind: &str, name: &str) -> String {
-    // Auth events reuse kind/name as a provider-scoped target, not an entry identity.
-    if namespace == "auth" {
-        format!("{}/{}", kind, name)
+fn format_audit_target(folder: &str, entry_type: &str, name: &str) -> String {
+    // Auth events (folder="auth") use entry_type/name as provider-scoped target.
+    if folder == "auth" {
+        format!("{}/{}", entry_type, name)
+    } else if !folder.is_empty() && !entry_type.is_empty() {
+        format!("[{}/{}] {}", folder, entry_type, name)
+    } else if !folder.is_empty() {
+        format!("[{}] {}", folder, name)
     } else {
-        format!("[{}/{}] {}", namespace, kind, name)
+        name.to_string()
     }
 }
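The rewritten `format_audit_target` has four output shapes; a test sketch pinning them down (not part of the patch, but checkable against the function body above):

```rust
#[cfg(test)]
mod format_audit_target_sketch {
    use super::format_audit_target;

    #[test]
    fn covers_all_branches() {
        // Auth events: provider-scoped target, no brackets.
        assert_eq!(format_audit_target("auth", "github", "alice"), "github/alice");
        // Full coordinates.
        assert_eq!(format_audit_target("refining", "server", "gitea"), "[refining/server] gitea");
        // Folder only (type empty).
        assert_eq!(format_audit_target("refining", "", "gitea"), "[refining] gitea");
        // Neither set; note a type without a folder also falls through here.
        assert_eq!(format_audit_target("", "server", "gitea"), "gitea");
    }
}
```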
diff --git a/scripts/migrate-v0.3.0.sql b/scripts/migrate-v0.3.0.sql
new file mode 100644
index 0000000..d1cb4b9
--- /dev/null
+++ b/scripts/migrate-v0.3.0.sql
@@ -0,0 +1,194 @@
+-- ============================================================================
+-- migrate-v0.3.0.sql
+-- Schema migration from v0.2.x → v0.3.0
+--
+-- Changes:
+--   • entries:          namespace → folder, kind → type; add notes column
+--   • audit_log:        namespace → folder, kind → type; drop legacy actor column
+--   • entries_history:  namespace → folder, kind → type; add user_id; drop actor
+--   • secrets_history:  drop legacy actor column
+--   • Unique index:     (user_id, name) → (user_id, folder, name)
+--     Same name in different folders is now allowed; no rename needed.
+--
+-- Safe to run multiple times (fully idempotent).
+-- Preserves all data in users, entries, secrets.
+-- ============================================================================
+
+BEGIN;
+
+-- ── entries: rename namespace→folder, kind→type ──────────────────────────────
+DO $$ BEGIN
+  IF EXISTS (
+    SELECT 1 FROM information_schema.columns
+    WHERE table_name = 'entries' AND column_name = 'namespace'
+  ) THEN
+    ALTER TABLE entries RENAME COLUMN namespace TO folder;
+  END IF;
+END $$;
+
+DO $$ BEGIN
+  IF EXISTS (
+    SELECT 1 FROM information_schema.columns
+    WHERE table_name = 'entries' AND column_name = 'kind'
+  ) THEN
+    ALTER TABLE entries RENAME COLUMN kind TO type;
+  END IF;
+END $$;
+
+-- Set NOT NULL + default for folder/type in entries
+DO $$ BEGIN
+  IF EXISTS (
+    SELECT 1 FROM information_schema.columns
+    WHERE table_name = 'entries' AND column_name = 'folder'
+  ) THEN
+    UPDATE entries SET folder = '' WHERE folder IS NULL;
+    ALTER TABLE entries ALTER COLUMN folder SET NOT NULL;
+    ALTER TABLE entries ALTER COLUMN folder SET DEFAULT '';
+  END IF;
+END $$;
+
+DO $$ BEGIN
+  IF EXISTS (
+    SELECT 1 FROM information_schema.columns
+    WHERE table_name = 'entries' AND column_name = 'type'
+  ) THEN
+    UPDATE entries SET type = '' WHERE type IS NULL;
+    ALTER TABLE entries ALTER COLUMN type SET NOT NULL;
+    ALTER TABLE entries ALTER COLUMN type SET DEFAULT '';
+  END IF;
+END $$;
+
+-- Add notes column to entries if missing
+ALTER TABLE entries ADD COLUMN IF NOT EXISTS notes TEXT NOT NULL DEFAULT '';
+
+-- ── audit_log: rename namespace→folder, kind→type ────────────────────────────
+DO $$ BEGIN
+  IF EXISTS (
+    SELECT 1 FROM information_schema.columns
+    WHERE table_name = 'audit_log' AND column_name = 'namespace'
+  ) THEN
+    ALTER TABLE audit_log RENAME COLUMN namespace TO folder;
+  END IF;
+END $$;
+
+DO $$ BEGIN
+  IF EXISTS (
+    SELECT 1 FROM information_schema.columns
+    WHERE table_name = 'audit_log' AND column_name = 'kind'
+  ) THEN
+    ALTER TABLE audit_log RENAME COLUMN kind TO type;
+  END IF;
+END $$;
+
+DO $$ BEGIN
+  IF EXISTS (
+    SELECT 1 FROM information_schema.columns
+    WHERE table_name = 'audit_log' AND column_name = 'folder'
+  ) THEN
+    UPDATE audit_log SET folder = '' WHERE folder IS NULL;
+    ALTER TABLE audit_log ALTER COLUMN folder SET NOT NULL;
+    ALTER TABLE audit_log ALTER COLUMN folder SET DEFAULT '';
+  END IF;
+END $$;
+
+DO $$ BEGIN
+  IF EXISTS (
+    SELECT 1 FROM information_schema.columns
+    WHERE table_name = 'audit_log' AND column_name = 'type'
+  ) THEN
+    UPDATE audit_log SET type = '' WHERE type IS NULL;
+    ALTER TABLE audit_log ALTER COLUMN type SET NOT NULL;
+    ALTER TABLE audit_log ALTER COLUMN type SET DEFAULT '';
+  END IF;
+END $$;
+
+ALTER TABLE audit_log DROP COLUMN IF EXISTS actor;
+
+-- ── entries_history: rename namespace→folder, kind→type; add user_id ─────────
+DO $$ BEGIN
+  IF EXISTS (
+    SELECT 1 FROM information_schema.columns
+    WHERE table_name = 'entries_history' AND column_name = 'namespace'
+  ) THEN
+    ALTER TABLE entries_history RENAME COLUMN namespace TO folder;
+  END IF;
+END $$;
+
+DO $$ BEGIN
+  IF EXISTS (
+    SELECT 1 FROM information_schema.columns
+    WHERE table_name = 'entries_history' AND column_name = 'kind'
+  ) THEN
+    ALTER TABLE entries_history RENAME COLUMN kind TO type;
+  END IF;
+END $$;
+
+DO $$ BEGIN
+  IF EXISTS (
+    SELECT 1 FROM information_schema.columns
+    WHERE table_name = 'entries_history' AND column_name = 'folder'
+  ) THEN
+    UPDATE entries_history SET folder = '' WHERE folder IS NULL;
+    ALTER TABLE entries_history ALTER COLUMN folder SET NOT NULL;
+    ALTER TABLE entries_history ALTER COLUMN folder SET DEFAULT '';
+  END IF;
+END $$;
+DO $$ BEGIN
+  IF EXISTS (
+    SELECT 1 FROM information_schema.columns
+    WHERE table_name = 'entries_history' AND column_name = 'type'
+  ) THEN
+    UPDATE entries_history SET type = '' WHERE type IS NULL;
+    ALTER TABLE entries_history ALTER COLUMN type SET NOT NULL;
+    ALTER TABLE entries_history ALTER COLUMN type SET DEFAULT '';
+  END IF;
+END $$;
+
+ALTER TABLE entries_history ADD COLUMN IF NOT EXISTS user_id UUID;
+ALTER TABLE entries_history DROP COLUMN IF EXISTS actor;
+
+-- ── secrets_history: drop actor column ───────────────────────────────────────
+ALTER TABLE secrets_history DROP COLUMN IF EXISTS actor;
+
+-- ── Rebuild unique indexes: (user_id, folder, name) ──────────────────────────
+-- Note: folder is now part of the key, so same name in different folders is
+-- naturally distinct — no rename of existing rows needed.
+DROP INDEX IF EXISTS idx_entries_unique_legacy;
+DROP INDEX IF EXISTS idx_entries_unique_user;
+
+CREATE UNIQUE INDEX IF NOT EXISTS idx_entries_unique_legacy
+  ON entries(folder, name)
+  WHERE user_id IS NULL;
+
+CREATE UNIQUE INDEX IF NOT EXISTS idx_entries_unique_user
+  ON entries(user_id, folder, name)
+  WHERE user_id IS NOT NULL;
+
+-- ── Replace old namespace/kind indexes with folder/type ──────────────────────
+DROP INDEX IF EXISTS idx_entries_namespace;
+DROP INDEX IF EXISTS idx_entries_kind;
+DROP INDEX IF EXISTS idx_audit_log_ns_kind;
+DROP INDEX IF EXISTS idx_entries_history_ns_kind_name;
+
+CREATE INDEX IF NOT EXISTS idx_entries_folder
+  ON entries(folder) WHERE folder <> '';
+CREATE INDEX IF NOT EXISTS idx_entries_type
+  ON entries(type) WHERE type <> '';
+CREATE INDEX IF NOT EXISTS idx_entries_user_id
+  ON entries(user_id) WHERE user_id IS NOT NULL;
+CREATE INDEX IF NOT EXISTS idx_audit_log_folder_type
+  ON audit_log(folder, type);
+CREATE INDEX IF NOT EXISTS idx_entries_history_folder_type_name
+  ON entries_history(folder, type, name, version DESC);
+CREATE INDEX IF NOT EXISTS idx_entries_history_user_id
+  ON entries_history(user_id) WHERE user_id IS NOT NULL;
+
+COMMIT;
+
+-- ── Verification queries (run these manually to confirm) ─────────────────────
+-- SELECT column_name, data_type FROM information_schema.columns
+--   WHERE table_name = 'entries' ORDER BY ordinal_position;
+-- SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 'entries';
+-- SELECT COUNT(*) FROM entries;
+-- SELECT COUNT(*) FROM users;
+-- SELECT COUNT(*) FROM secrets;
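The script can also be applied from Rust rather than psql. A sketch using sqlx's multi-statement `raw_sql` executor; this assumes sqlx 0.7+ with the `postgres` feature and the `SECRETS_DATABASE_URL` convention used by this repo, and the script path is the file added in this diff:

```rust
use sqlx::postgres::PgPoolOptions;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let url = std::env::var("SECRETS_DATABASE_URL")?;
    let pool = PgPoolOptions::new().max_connections(1).connect(&url).await?;

    // The script carries its own BEGIN/COMMIT and is idempotent, so
    // re-running it is safe.
    let sql = std::fs::read_to_string("scripts/migrate-v0.3.0.sql")?;
    sqlx::raw_sql(&sql).execute(&pool).await?;

    // One of the verification queries suggested at the end of the script.
    let entries: i64 = sqlx::query_scalar("SELECT COUNT(*) FROM entries")
        .fetch_one(&pool)
        .await?;
    println!("entries after migration: {entries}");
    Ok(())
}
```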