Release secrets-mcp 0.3.0: folder/type schema and MCP folder disambiguation
- Rename namespace/kind to folder/type on entries, audit_log, and history tables; add notes. Unique key is (user_id, folder, name).
- Service layer and MCP tools support name-first lookup with optional folder when multiple entries share the same name.
- secrets_delete dry_run uses the same disambiguation as real deletes.
- Add scripts/migrate-v0.3.0.sql for manual DB migration. Refresh README and AGENTS.md.

Made-with: Cursor
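As a quick illustration of the new unique key, here is a sketch against the 0.3.0 schema shown in this diff (the user UUID and entry names are made-up examples):

```sql
-- Same name in two different folders: both inserts succeed for one user.
INSERT INTO entries (user_id, folder, type, name)
VALUES ('018f0000-0000-7000-8000-000000000001', 'refining',  'service', 'aliyun');
INSERT INTO entries (user_id, folder, type, name)
VALUES ('018f0000-0000-7000-8000-000000000001', 'ricnsmart', 'service', 'aliyun');

-- Same (user_id, folder, name) again: rejected by idx_entries_unique_user,
-- even though the type differs (type is not part of the key).
INSERT INTO entries (user_id, folder, type, name)
VALUES ('018f0000-0000-7000-8000-000000000001', 'refining', 'server', 'aliyun');
-- ERROR: duplicate key value violates unique constraint
```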
AGENTS.md (31 lines changed)
@@ -29,7 +29,8 @@ secrets/
 - **Suggested database name**: `secrets-mcp` (a dedicated instance, distinct from legacy database names).
 - **Connection**: environment variable **`SECRETS_DATABASE_URL`** (this branch has no local config-file path).
-- **Tables**: `entries` (with `user_id`), `secrets`, `entries_history`, `secrets_history`, `audit_log`, `users`, `oauth_accounts`; auto-migrated on first connection.
+- **Tables**: `entries` (with `user_id`), `secrets`, `entries_history`, `secrets_history`, `audit_log`, `users`, `oauth_accounts`; auto-migrated on first connection (`migrate` in `secrets-core`).
+- **Web sessions**: same database URL as above; at startup `secrets-mcp` auto-migrates the tower-sessions PostgreSQL store (session tables coexist with the business tables in this instance, so no second connection string is needed).

 ### Table schema (excerpt)

@@ -37,15 +38,18 @@ secrets/
 entries (
   id UUID PRIMARY KEY DEFAULT uuidv7(),
   user_id UUID,  -- multi-tenant: NULL = legacy row; non-NULL = owning user
-  namespace VARCHAR(64) NOT NULL,
-  kind VARCHAR(64) NOT NULL,
+  folder VARCHAR(128) NOT NULL DEFAULT '',
+  type VARCHAR(64) NOT NULL DEFAULT '',
   name VARCHAR(256) NOT NULL,
+  notes TEXT NOT NULL DEFAULT '',
   tags TEXT[] NOT NULL DEFAULT '{}',
   metadata JSONB NOT NULL DEFAULT '{}',
   version BIGINT NOT NULL DEFAULT 1,
   created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
   updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
 )
+-- unique: UNIQUE(user_id, folder, name) WHERE user_id IS NOT NULL;
+-- UNIQUE(folder, name) WHERE user_id IS NULL (single-tenant legacy)
 ```

 ```sql
@@ -82,22 +86,31 @@ oauth_accounts (
   user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
   provider VARCHAR(32) NOT NULL,
   provider_id VARCHAR(256) NOT NULL,
-  ...
+  email VARCHAR(256),
+  name VARCHAR(256),
+  avatar_url TEXT,
+  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
   UNIQUE(provider, provider_id)
 )
+-- There is also a unique index UNIQUE(user_id, provider) (idx_oauth_accounts_user_provider in the migration): at most one link per provider per user.
 ```

 ### audit_log / history

-Consistent with the migration script: `audit_log`, `entries_history`, and `secrets_history` serve auditing and time-travel restore; field definitions live in the `migrate` SQL in `crates/secrets-core/src/db.rs`. For regular business events in `audit_log`, `namespace/kind/name` map to the entry coordinates; login events always use `namespace='auth'`, in which case `kind/name` identify the authentication target rather than an entry.
+Consistent with the migration script: `audit_log`, `entries_history`, and `secrets_history` serve auditing and time-travel restore; field definitions live in the `migrate` SQL in `crates/secrets-core/src/db.rs`. `audit_log` carries an optional **`user_id`** (identifies the actor under multi-tenancy; nullable for legacy data). Regular business events in `audit_log` use **`folder` / `type` / `name`** as the entry coordinates; login events always use **`folder='auth'`**, in which case `type`/`name` identify the authentication target rather than an entry.

+### MCP disambiguation (AI calls)
+
+Tools that locate an entry by `name` (`get` / `update` / single-entry `delete` / `history` / `rollback`): if exactly one entry matches under the user, execute directly; if several match (same `name`, different `folder`), return an error prompting the caller to supply `folder`. `secrets_delete` with `dry_run=true` uses the same disambiguation rule as a real delete.

 ### Field responsibilities

 | Field | Meaning | Example |
 |------|------|------|
-| `namespace` | Isolation space | `refining` |
-| `kind` | Record type | `server`, `service`, `key` |
-| `name` | Identifier | `gitea`, `i-example0…` |
+| `folder` | Isolation space (part of the unique key) | `refining` |
+| `type` | Soft category (not part of the unique key) | `server`, `service`, `key`, `person` |
+| `name` | Identifier | `gitea`, `aliyun` |
+| `notes` | Non-sensitive description | free text |
 | `tags` | Tags | `["aliyun","prod"]` |
 | `metadata` | Plaintext description | `ip`, `url`, `key_ref` |
 | `secrets.field_name` | Encrypted-field name (plaintext) | `token`, `ssh_key` |
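As an illustration of the disambiguation rule above (a sketch, not the literal implementation — the Rust service code later in this diff runs the equivalent queries via sqlx), the name-first lookup reduces to:

```sql
-- Candidate folders for a bare name under one user ($1 = user_id, $2 = name).
SELECT folder, type FROM entries WHERE user_id = $1 AND name = $2;
-- 0 rows  -> not found
-- 1 row   -> proceed with that entry
-- 2+ rows -> disambiguation error: the caller must also pass folder
```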
@@ -105,7 +118,7 @@ oauth_accounts (

 ### PEM sharing (`key_ref`)

-Store a shared PEM as an entry with `kind=key`; other records point to that key's `name` via `metadata.key_ref`. After the key record is updated, referrers pick up the new key through the service layer's resolve-and-merge logic (see `secrets_core::service`).
+Store a shared PEM as an entry with **`type=key`**; other records point to that key's `name` via `metadata.key_ref`. After the key record is updated, referrers pick up the new key through the service layer's resolve-and-merge logic (see `secrets_core::service`).

 ## Coding conventions

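A sketch of what the `key_ref` indirection looks like at the data level (entry names here are made up; note that per `service/env.rs` in this commit the key entry is found by `type='key'` and `name` regardless of folder, and the service layer additionally decrypts and merges the secret fields, which plain SQL cannot show):

```sql
-- A server entry pointing at a shared key by name.
INSERT INTO entries (folder, type, name, metadata)
VALUES ('refining', 'server', 'gitea', '{"key_ref": "deploy-pem"}');

-- Resolving the reference: find the key entry it points to.
SELECT k.id
FROM entries s
JOIN entries k
  ON k.type = 'key' AND k.name = s.metadata->>'key_ref'
WHERE s.folder = 'refining' AND s.name = 'gitea';
```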
Cargo.lock (2 lines changed, generated)
@@ -1968,7 +1968,7 @@ dependencies = [

 [[package]]
 name = "secrets-mcp"
-version = "0.2.2"
+version = "0.3.0"
 dependencies = [
  "anyhow",
  "askama",
README.md (28 lines changed)
@@ -28,7 +28,15 @@ cargo run -p secrets-mcp
 ```

 - **Web**: `BASE_URL` (login, dashboard, setting the passphrase, creating API keys).
-- **MCP**: Streamable HTTP base `{BASE_URL}/mcp`; requires the `Authorization: Bearer <api_key>` + `X-Encryption-Key: <hex>` request headers.
+- **MCP**: Streamable HTTP base `{BASE_URL}/mcp`; requires the `Authorization: Bearer <api_key>` + `X-Encryption-Key: <hex>` request headers (tools that read ciphertext must send the key).
+
+## MCP and AI workflow (v0.3+)
+
+Entries are logically unique per user by **`(folder, name)`** (database unique index: `user_id + folder + name`). The same name may exist once per folder (e.g. `refining/aliyun` and `ricnsmart/aliyun`).
+
+- **`secrets_search`**: discover entries (filter by query / folder / type / name); does not require the encryption header.
+- **`secrets_get` / `secrets_update` / `secrets_delete` (by name) / `secrets_history` / `secrets_rollback`**: a bare `name` that is globally unique hits directly; if several entries share the name, a disambiguation error is returned and **`folder`** must be added to the arguments.
+- **`secrets_delete`**: with `dry_run=true` the same disambiguation rule applies as for a real delete: a unique match previews one entry; multiple matches fail and require `folder`.

 ## Encryption architecture (hybrid E2EE)

@@ -122,13 +130,14 @@ flowchart LR

 ## Data model

-Main table **`entries`** (`namespace`, `kind`, `name`, `tags`, `metadata`; with `user_id` under multi-tenancy) plus child table **`secrets`** (one encrypted field per row: `field_name`, `encrypted`). There are also `entries_history`, `secrets_history`, and `audit_log`, plus **`users`** (with `key_salt`, `key_check`, `key_params`, `api_key`) and **`oauth_accounts`**. Tables are auto-migrated on first connection.
+Main table **`entries`** (`folder`, `type`, `name`, `notes`, `tags`, `metadata`; with `user_id` under multi-tenancy) plus child table **`secrets`** (one encrypted field per row: `field_name`, `encrypted`). **Uniqueness**: `UNIQUE(user_id, folder, name)` (legacy rows with NULL `user_id` are unique by `(folder, name)`). There are also `entries_history`, `secrets_history`, and `audit_log`, plus **`users`** (with `key_salt`, `key_check`, `key_params`, `api_key`) and **`oauth_accounts`**. Tables are auto-migrated on first connection (`migrate` in `secrets-core`); existing databases can apply [`scripts/migrate-v0.3.0.sql`](scripts/migrate-v0.3.0.sql) for the column renames and index rebuilds. **Web login sessions** (tower-sessions) use the same `SECRETS_DATABASE_URL`; the process migrates the session store at startup (see `PostgresStore::migrate` in `secrets-mcp`), so no extra environment variable is needed.

 | Table | Field | Description |
 |------|------|------|
-| entries | namespace | Top-level isolation, e.g. `refining`, `ricnsmart` |
-| entries | kind | `server`, `service`, `key`, etc. (extensible) |
-| entries | name | Human-readable identifier |
+| entries | folder | Organization/isolation space, e.g. `refining`, `ricnsmart`; part of the unique key |
+| entries | type | Soft category, e.g. `server`, `service`, `key`, `person` (extensible; not part of the unique key) |
+| entries | name | Human-readable identifier; unique per user together with `folder` |
+| entries | notes | Non-sensitive descriptive text |
 | entries | metadata | Plaintext JSON (`ip`, `url`, `key_ref`, …) |
 | secrets | field_name | Plaintext field name, for schema display |
 | secrets | encrypted | AES-GCM ciphertext (nonce included) |
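The manual path referenced above condenses to the same statements the auto-migration runs (see `migrate_schema` in `crates/secrets-core/src/db.rs` later in this diff); a sketch for the `entries` table (the same renames apply to `audit_log` and `entries_history`):

```sql
-- Rename the coordinate columns.
ALTER TABLE entries RENAME COLUMN namespace TO folder;
ALTER TABLE entries RENAME COLUMN kind TO type;

-- Rebuild the unique index so folder (not type) is part of the key.
DROP INDEX IF EXISTS idx_entries_unique_user;
CREATE UNIQUE INDEX idx_entries_unique_user
    ON entries(user_id, folder, name)
    WHERE user_id IS NOT NULL;
```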
@@ -138,15 +147,15 @@ flowchart LR

 ### PEM sharing (`key_ref`)

-The same PEM can be referenced by multiple `server` records: store the PEM as an entry with `kind=key` and write the key's name into the server entries' `metadata.key_ref`; on rotation, only the key record needs updating.
+The same PEM can be referenced by multiple records (`server` and others): store the PEM as an entry with **`type=key`** and write that key entry's `name` into the other entries' `metadata.key_ref`; on rotation, only the key record needs updating.

 ## Audit log

-Write operations such as `add`, `update`, and `delete` are recorded in **`audit_log`** (action type, target, summary; no secret plaintext).
-Business entry events use `[namespace/kind] name` semantics; login events use `namespace='auth'`, where `kind/name` identify the authentication target (e.g. `oauth/google`), not a secrets entry.
+Write operations such as `add`, `update`, and `delete` are recorded in **`audit_log`** (action type, target, summary; no secret plaintext). Under multi-tenancy a **`user_id`** may be recorded (nullable, for legacy-row compatibility).
+Business entry events use **`folder` / `type` / `name`**; login events use **`folder='auth'`**, where `type`/`name` identify the authentication target (e.g. `oauth` / `google`), not a secrets entry.

 ```sql
-SELECT action, namespace, kind, name, detail, created_at
+SELECT action, folder, type, name, detail, user_id, created_at
 FROM audit_log
 ORDER BY created_at DESC
 LIMIT 20;
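Because `folder='auth'` is reserved for login events, the log splits cleanly into two views; a sketch (assuming no business folder is literally named `auth`):

```sql
-- Login events only: type/name are the auth target, e.g. ('oauth', 'google').
SELECT created_at, user_id, type AS auth_method, name AS auth_target, detail
FROM audit_log
WHERE action = 'login' AND folder = 'auth'
ORDER BY created_at DESC;

-- Business entry events only: folder/type/name are entry coordinates.
SELECT created_at, user_id, action, folder, type, name
FROM audit_log
WHERE folder <> 'auth'
ORDER BY created_at DESC;
```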
@@ -159,6 +168,7 @@ Cargo.toml
 crates/secrets-core/   # db / crypto / models / audit / service
 crates/secrets-mcp/    # MCP HTTP, Web, OAuth, API keys
 scripts/
+  migrate-v0.3.0.sql   # optional: manual SQL migration (namespace/kind → folder/type; unique key includes folder)
 deploy/                # systemd, .env examples
 ```

@@ -3,7 +3,7 @@ use sqlx::{PgPool, Postgres, Transaction};
 use uuid::Uuid;

 pub const ACTION_LOGIN: &str = "login";
-pub const NAMESPACE_AUTH: &str = "auth";
+pub const FOLDER_AUTH: &str = "auth";

 fn login_detail(provider: &str, client_ip: Option<&str>, user_agent: Option<&str>) -> Value {
     json!({
@@ -16,7 +16,7 @@ fn login_detail(provider: &str, client_ip: Option<&str>, user_agent: Option<&str
 /// Write a login audit entry without requiring an explicit transaction.
 pub async fn log_login(
     pool: &PgPool,
-    kind: &str,
+    entry_type: &str,
     provider: &str,
     user_id: Uuid,
     client_ip: Option<&str>,
@@ -24,22 +24,22 @@ pub async fn log_login(
 ) {
     let detail = login_detail(provider, client_ip, user_agent);
     let result: Result<_, sqlx::Error> = sqlx::query(
-        "INSERT INTO audit_log (user_id, action, namespace, kind, name, detail) \
+        "INSERT INTO audit_log (user_id, action, folder, type, name, detail) \
         VALUES ($1, $2, $3, $4, $5, $6)",
     )
     .bind(user_id)
     .bind(ACTION_LOGIN)
-    .bind(NAMESPACE_AUTH)
-    .bind(kind)
+    .bind(FOLDER_AUTH)
+    .bind(entry_type)
     .bind(provider)
     .bind(&detail)
     .execute(pool)
     .await;

     if let Err(e) = result {
-        tracing::warn!(error = %e, kind, provider, "failed to write login audit log");
+        tracing::warn!(error = %e, entry_type, provider, "failed to write login audit log");
     } else {
-        tracing::debug!(kind, provider, ?user_id, "login audit logged");
+        tracing::debug!(entry_type, provider, ?user_id, "login audit logged");
     }
 }

@@ -48,19 +48,19 @@ pub async fn log_tx(
     tx: &mut Transaction<'_, Postgres>,
     user_id: Option<Uuid>,
     action: &str,
-    namespace: &str,
-    kind: &str,
+    folder: &str,
+    entry_type: &str,
     name: &str,
     detail: Value,
 ) {
     let result: Result<_, sqlx::Error> = sqlx::query(
-        "INSERT INTO audit_log (user_id, action, namespace, kind, name, detail) \
+        "INSERT INTO audit_log (user_id, action, folder, type, name, detail) \
         VALUES ($1, $2, $3, $4, $5, $6)",
     )
     .bind(user_id)
     .bind(action)
-    .bind(namespace)
-    .bind(kind)
+    .bind(folder)
+    .bind(entry_type)
     .bind(name)
     .bind(&detail)
     .execute(&mut **tx)
@@ -69,7 +69,7 @@ pub async fn log_tx(
     if let Err(e) = result {
         tracing::warn!(error = %e, "failed to write audit log");
     } else {
-        tracing::debug!(action, namespace, kind, name, "audit logged");
+        tracing::debug!(action, folder, entry_type, name, "audit logged");
     }
 }
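Concretely, a Google OAuth login recorded through `log_login` lands in `audit_log` shaped roughly like this (a sketch: the bind order comes from the code above, but the exact keys inside `detail` are an assumption, since `login_detail`'s body is cut off in this diff):

```sql
INSERT INTO audit_log (user_id, action, folder, type, name, detail)
VALUES (
    '018f0000-0000-7000-8000-000000000001',  -- example user id
    'login',   -- ACTION_LOGIN
    'auth',    -- FOLDER_AUTH: marks an auth event, not an entry
    'oauth',   -- type: the authentication method
    'google',  -- name: the provider
    '{"provider": "google", "client_ip": "203.0.113.7", "user_agent": "…"}'
);
```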
@@ -22,9 +22,10 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
         CREATE TABLE IF NOT EXISTS entries (
             id UUID PRIMARY KEY DEFAULT uuidv7(),
             user_id UUID,
-            namespace VARCHAR(64) NOT NULL,
-            kind VARCHAR(64) NOT NULL,
+            folder VARCHAR(128) NOT NULL DEFAULT '',
+            type VARCHAR(64) NOT NULL DEFAULT '',
             name VARCHAR(256) NOT NULL,
+            notes TEXT NOT NULL DEFAULT '',
             tags TEXT[] NOT NULL DEFAULT '{}',
             metadata JSONB NOT NULL DEFAULT '{}',
             version BIGINT NOT NULL DEFAULT 1,
@@ -34,19 +35,19 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {

         -- Legacy unique constraint without user_id (single-user mode)
         CREATE UNIQUE INDEX IF NOT EXISTS idx_entries_unique_legacy
-            ON entries(namespace, kind, name)
+            ON entries(folder, name)
             WHERE user_id IS NULL;

         -- Multi-user unique constraint
         CREATE UNIQUE INDEX IF NOT EXISTS idx_entries_unique_user
-            ON entries(user_id, namespace, kind, name)
+            ON entries(user_id, folder, name)
             WHERE user_id IS NOT NULL;

-        CREATE INDEX IF NOT EXISTS idx_entries_namespace ON entries(namespace);
-        CREATE INDEX IF NOT EXISTS idx_entries_kind ON entries(kind);
+        CREATE INDEX IF NOT EXISTS idx_entries_folder ON entries(folder) WHERE folder <> '';
+        CREATE INDEX IF NOT EXISTS idx_entries_type ON entries(type) WHERE type <> '';
         CREATE INDEX IF NOT EXISTS idx_entries_user_id ON entries(user_id) WHERE user_id IS NOT NULL;
         CREATE INDEX IF NOT EXISTS idx_entries_tags ON entries USING GIN(tags);
         CREATE INDEX IF NOT EXISTS idx_entries_metadata ON entries USING GIN(metadata jsonb_path_ops);

         -- ── secrets: one row per encrypted field ─────────────────────────────────
         CREATE TABLE IF NOT EXISTS secrets (
@@ -67,23 +68,23 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
             id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
             user_id UUID,
             action VARCHAR(32) NOT NULL,
-            namespace VARCHAR(64) NOT NULL,
-            kind VARCHAR(64) NOT NULL,
+            folder VARCHAR(128) NOT NULL DEFAULT '',
+            type VARCHAR(64) NOT NULL DEFAULT '',
             name VARCHAR(256) NOT NULL,
             detail JSONB NOT NULL DEFAULT '{}',
             created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
         );

         CREATE INDEX IF NOT EXISTS idx_audit_log_created ON audit_log(created_at DESC);
-        CREATE INDEX IF NOT EXISTS idx_audit_log_ns_kind ON audit_log(namespace, kind);
+        CREATE INDEX IF NOT EXISTS idx_audit_log_folder_type ON audit_log(folder, type);
         CREATE INDEX IF NOT EXISTS idx_audit_log_user_id ON audit_log(user_id) WHERE user_id IS NOT NULL;

         -- ── entries_history ───────────────────────────────────────────────────────
         CREATE TABLE IF NOT EXISTS entries_history (
             id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
             entry_id UUID NOT NULL,
-            namespace VARCHAR(64) NOT NULL,
-            kind VARCHAR(64) NOT NULL,
+            folder VARCHAR(128) NOT NULL DEFAULT '',
+            type VARCHAR(64) NOT NULL DEFAULT '',
             name VARCHAR(256) NOT NULL,
             version BIGINT NOT NULL,
             action VARCHAR(16) NOT NULL,
@@ -94,8 +95,8 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {

         CREATE INDEX IF NOT EXISTS idx_entries_history_entry_id
             ON entries_history(entry_id, version DESC);
-        CREATE INDEX IF NOT EXISTS idx_entries_history_ns_kind_name
-            ON entries_history(namespace, kind, name, version DESC);
+        CREATE INDEX IF NOT EXISTS idx_entries_history_folder_type_name
+            ON entries_history(folder, type, name, version DESC);

         -- Backfill: add user_id to entries_history for multi-tenant isolation
         ALTER TABLE entries_history ADD COLUMN IF NOT EXISTS user_id UUID;
@@ -103,6 +104,9 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
             ON entries_history(user_id) WHERE user_id IS NOT NULL;
         ALTER TABLE entries_history DROP COLUMN IF EXISTS actor;

+        -- Backfill: add notes to entries if not present (fresh installs already have it)
+        ALTER TABLE entries ADD COLUMN IF NOT EXISTS notes TEXT NOT NULL DEFAULT '';
+
         -- ── secrets_history: field-level snapshot ────────────────────────────────
         CREATE TABLE IF NOT EXISTS secrets_history (
             id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
@@ -123,9 +127,6 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
         -- Drop redundant actor column (derivable via entries_history JOIN)
         ALTER TABLE secrets_history DROP COLUMN IF EXISTS actor;

-        -- Drop redundant actor column; user_id already identifies the business user
-        ALTER TABLE audit_log DROP COLUMN IF EXISTS actor;
-
         -- ── users ─────────────────────────────────────────────────────────────────
         CREATE TABLE IF NOT EXISTS users (
             id UUID PRIMARY KEY DEFAULT uuidv7(),
@@ -191,12 +192,179 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
     )
     .execute(pool)
     .await?;
+    migrate_schema(pool).await?;
     restore_plaintext_api_keys(pool).await?;

     tracing::debug!("migrations complete");
     Ok(())
 }

+/// Idempotent schema migration: rename namespace→folder, kind→type in existing databases.
+async fn migrate_schema(pool: &PgPool) -> Result<()> {
+    sqlx::raw_sql(
+        r#"
+        -- ── entries: rename namespace→folder, kind→type ──────────────────────────
+        DO $$ BEGIN
+            IF EXISTS (
+                SELECT 1 FROM information_schema.columns
+                WHERE table_name = 'entries' AND column_name = 'namespace'
+            ) THEN
+                ALTER TABLE entries RENAME COLUMN namespace TO folder;
+            END IF;
+        END $$;
+
+        DO $$ BEGIN
+            IF EXISTS (
+                SELECT 1 FROM information_schema.columns
+                WHERE table_name = 'entries' AND column_name = 'kind'
+            ) THEN
+                ALTER TABLE entries RENAME COLUMN kind TO type;
+            END IF;
+        END $$;
+
+        -- ── audit_log: rename namespace→folder, kind→type ────────────────────────
+        DO $$ BEGIN
+            IF EXISTS (
+                SELECT 1 FROM information_schema.columns
+                WHERE table_name = 'audit_log' AND column_name = 'namespace'
+            ) THEN
+                ALTER TABLE audit_log RENAME COLUMN namespace TO folder;
+            END IF;
+        END $$;
+
+        DO $$ BEGIN
+            IF EXISTS (
+                SELECT 1 FROM information_schema.columns
+                WHERE table_name = 'audit_log' AND column_name = 'kind'
+            ) THEN
+                ALTER TABLE audit_log RENAME COLUMN kind TO type;
+            END IF;
+        END $$;
+
+        -- ── entries_history: rename namespace→folder, kind→type ──────────────────
+        DO $$ BEGIN
+            IF EXISTS (
+                SELECT 1 FROM information_schema.columns
+                WHERE table_name = 'entries_history' AND column_name = 'namespace'
+            ) THEN
+                ALTER TABLE entries_history RENAME COLUMN namespace TO folder;
+            END IF;
+        END $$;
+
+        DO $$ BEGIN
+            IF EXISTS (
+                SELECT 1 FROM information_schema.columns
+                WHERE table_name = 'entries_history' AND column_name = 'kind'
+            ) THEN
+                ALTER TABLE entries_history RENAME COLUMN kind TO type;
+            END IF;
+        END $$;
+
+        -- ── Set empty defaults for new folder/type columns ────────────────────────
+        DO $$ BEGIN
+            IF EXISTS (
+                SELECT 1 FROM information_schema.columns
+                WHERE table_name = 'entries' AND column_name = 'folder'
+            ) THEN
+                UPDATE entries SET folder = '' WHERE folder IS NULL;
+                ALTER TABLE entries ALTER COLUMN folder SET NOT NULL;
+                ALTER TABLE entries ALTER COLUMN folder SET DEFAULT '';
+            END IF;
+        END $$;
+
+        DO $$ BEGIN
+            IF EXISTS (
+                SELECT 1 FROM information_schema.columns
+                WHERE table_name = 'entries' AND column_name = 'type'
+            ) THEN
+                UPDATE entries SET type = '' WHERE type IS NULL;
+                ALTER TABLE entries ALTER COLUMN type SET NOT NULL;
+                ALTER TABLE entries ALTER COLUMN type SET DEFAULT '';
+            END IF;
+        END $$;
+
+        DO $$ BEGIN
+            IF EXISTS (
+                SELECT 1 FROM information_schema.columns
+                WHERE table_name = 'audit_log' AND column_name = 'folder'
+            ) THEN
+                UPDATE audit_log SET folder = '' WHERE folder IS NULL;
+                ALTER TABLE audit_log ALTER COLUMN folder SET NOT NULL;
+                ALTER TABLE audit_log ALTER COLUMN folder SET DEFAULT '';
+            END IF;
+        END $$;
+
+        DO $$ BEGIN
+            IF EXISTS (
+                SELECT 1 FROM information_schema.columns
+                WHERE table_name = 'audit_log' AND column_name = 'type'
+            ) THEN
+                UPDATE audit_log SET type = '' WHERE type IS NULL;
+                ALTER TABLE audit_log ALTER COLUMN type SET NOT NULL;
+                ALTER TABLE audit_log ALTER COLUMN type SET DEFAULT '';
+            END IF;
+        END $$;
+
+        DO $$ BEGIN
+            IF EXISTS (
+                SELECT 1 FROM information_schema.columns
+                WHERE table_name = 'entries_history' AND column_name = 'folder'
+            ) THEN
+                UPDATE entries_history SET folder = '' WHERE folder IS NULL;
+                ALTER TABLE entries_history ALTER COLUMN folder SET NOT NULL;
+                ALTER TABLE entries_history ALTER COLUMN folder SET DEFAULT '';
+            END IF;
+        END $$;
+
+        DO $$ BEGIN
+            IF EXISTS (
+                SELECT 1 FROM information_schema.columns
+                WHERE table_name = 'entries_history' AND column_name = 'type'
+            ) THEN
+                UPDATE entries_history SET type = '' WHERE type IS NULL;
+                ALTER TABLE entries_history ALTER COLUMN type SET NOT NULL;
+                ALTER TABLE entries_history ALTER COLUMN type SET DEFAULT '';
+            END IF;
+        END $$;
+
+        -- ── Rebuild unique indexes on entries: folder is now part of the key ────────
+        -- (user_id, folder, name) allows same name in different folders.
+        DROP INDEX IF EXISTS idx_entries_unique_legacy;
+        DROP INDEX IF EXISTS idx_entries_unique_user;
+
+        CREATE UNIQUE INDEX IF NOT EXISTS idx_entries_unique_legacy
+            ON entries(folder, name)
+            WHERE user_id IS NULL;
+
+        CREATE UNIQUE INDEX IF NOT EXISTS idx_entries_unique_user
+            ON entries(user_id, folder, name)
+            WHERE user_id IS NOT NULL;
+
+        -- ── Replace old namespace/kind indexes ────────────────────────────────────
+        DROP INDEX IF EXISTS idx_entries_namespace;
+        DROP INDEX IF EXISTS idx_entries_kind;
+        DROP INDEX IF EXISTS idx_audit_log_ns_kind;
+        DROP INDEX IF EXISTS idx_entries_history_ns_kind_name;
+
+        CREATE INDEX IF NOT EXISTS idx_entries_folder
+            ON entries(folder) WHERE folder <> '';
+        CREATE INDEX IF NOT EXISTS idx_entries_type
+            ON entries(type) WHERE type <> '';
+        CREATE INDEX IF NOT EXISTS idx_audit_log_folder_type
+            ON audit_log(folder, type);
+        CREATE INDEX IF NOT EXISTS idx_entries_history_folder_type_name
+            ON entries_history(folder, type, name, version DESC);
+
+        -- ── Drop legacy actor columns ─────────────────────────────────────────────
+        ALTER TABLE secrets_history DROP COLUMN IF EXISTS actor;
+        ALTER TABLE audit_log DROP COLUMN IF EXISTS actor;
+        "#,
+    )
+    .execute(pool)
+    .await?;
+    Ok(())
+}
+
 async fn restore_plaintext_api_keys(pool: &PgPool) -> Result<()> {
     let has_users_api_key: bool = sqlx::query_scalar(
         "SELECT EXISTS (
@@ -265,8 +433,8 @@ async fn restore_plaintext_api_keys(pool: &PgPool) -> Result<()> {
 pub struct EntrySnapshotParams<'a> {
     pub entry_id: uuid::Uuid,
     pub user_id: Option<uuid::Uuid>,
-    pub namespace: &'a str,
-    pub kind: &'a str,
+    pub folder: &'a str,
+    pub entry_type: &'a str,
     pub name: &'a str,
     pub version: i64,
     pub action: &'a str,
@@ -280,12 +448,12 @@ pub async fn snapshot_entry_history(
 ) -> Result<()> {
     sqlx::query(
         "INSERT INTO entries_history \
-        (entry_id, namespace, kind, name, version, action, tags, metadata, user_id) \
+        (entry_id, folder, type, name, version, action, tags, metadata, user_id) \
         VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9)",
     )
     .bind(p.entry_id)
-    .bind(p.namespace)
-    .bind(p.kind)
+    .bind(p.folder)
+    .bind(p.entry_type)
    .bind(p.name)
    .bind(p.version)
    .bind(p.action)
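After running `migrate` (or the manual script), the renames are easy to confirm; a sketch:

```sql
-- Expect 'folder' and 'type' (and no 'namespace'/'kind') on each migrated table.
SELECT table_name, column_name
FROM information_schema.columns
WHERE table_name IN ('entries', 'audit_log', 'entries_history')
  AND column_name IN ('namespace', 'kind', 'folder', 'type', 'notes')
ORDER BY table_name, column_name;
```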
@@ -4,15 +4,18 @@ use serde_json::Value;
 use std::collections::BTreeMap;
 use uuid::Uuid;

-/// A top-level entry (server, service, key, …).
+/// A top-level entry (server, service, key, person, …).
 /// Sensitive fields are stored separately in `secrets`.
 #[derive(Debug, Serialize, Deserialize, sqlx::FromRow)]
 pub struct Entry {
     pub id: Uuid,
     pub user_id: Option<Uuid>,
-    pub namespace: String,
-    pub kind: String,
+    pub folder: String,
+    #[serde(rename = "type")]
+    #[sqlx(rename = "type")]
+    pub entry_type: String,
     pub name: String,
+    pub notes: String,
     pub tags: Vec<String>,
     pub metadata: Value,
     pub version: i64,
@@ -40,8 +43,12 @@ pub struct SecretField {
 pub struct EntryRow {
     pub id: Uuid,
     pub version: i64,
+    pub folder: String,
+    #[sqlx(rename = "type")]
+    pub entry_type: String,
     pub tags: Vec<String>,
     pub metadata: Value,
+    pub notes: String,
 }

 /// Minimal secret field row fetched before snapshots or cascade deletes.
@@ -128,10 +135,14 @@ pub struct ExportData {
 /// A single entry with decrypted secrets for export/import.
 #[derive(Debug, Serialize, Deserialize)]
 pub struct ExportEntry {
-    pub namespace: String,
-    pub kind: String,
     pub name: String,
     #[serde(default)]
+    pub folder: String,
+    #[serde(default, rename = "type")]
+    pub entry_type: String,
+    #[serde(default)]
+    pub notes: String,
+    #[serde(default)]
     pub tags: Vec<String>,
     #[serde(default)]
     pub metadata: Value,
@@ -181,8 +192,10 @@ pub struct AuditLogEntry {
     pub id: i64,
     pub user_id: Option<Uuid>,
     pub action: String,
-    pub namespace: String,
-    pub kind: String,
+    pub folder: String,
+    #[serde(rename = "type")]
+    #[sqlx(rename = "type")]
+    pub entry_type: String,
     pub name: String,
     pub detail: Value,
     pub created_at: DateTime<Utc>,
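The rename attributes exist because `type` is a reserved keyword in Rust, so the structs expose the column as `entry_type` and map it back with `#[serde(rename = "type")]` / `#[sqlx(rename = "type")]`. On the SQL side no quoting is needed (`type` is not a reserved word in PostgreSQL), so queries in this commit select it directly, for example:

```sql
-- The row shape these structs are hydrated from (query taken from this diff).
SELECT id, version, folder, type, tags, metadata, notes
FROM entries
WHERE user_id = $1 AND folder = $2 AND name = $3;
```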
@@ -159,18 +159,20 @@ pub fn flatten_json_fields(prefix: &str, value: &Value) -> Vec<(String, Value)>

 #[derive(Debug, serde::Serialize)]
 pub struct AddResult {
-    pub namespace: String,
-    pub kind: String,
     pub name: String,
+    pub folder: String,
+    #[serde(rename = "type")]
+    pub entry_type: String,
     pub tags: Vec<String>,
     pub meta_keys: Vec<String>,
     pub secret_keys: Vec<String>,
 }

 pub struct AddParams<'a> {
-    pub namespace: &'a str,
-    pub kind: &'a str,
     pub name: &'a str,
+    pub folder: &'a str,
+    pub entry_type: &'a str,
+    pub notes: &'a str,
     pub tags: &'a [String],
     pub meta_entries: &'a [String],
     pub secret_entries: &'a [String],
@@ -186,25 +188,23 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->

     let mut tx = pool.begin().await?;

-    // Fetch existing entry (user-scoped or global depending on user_id)
+    // Fetch existing entry by (user_id, folder, name) — the natural unique key
     let existing: Option<EntryRow> = if let Some(uid) = params.user_id {
         sqlx::query_as(
-            "SELECT id, version, tags, metadata FROM entries \
-            WHERE user_id = $1 AND namespace = $2 AND kind = $3 AND name = $4",
+            "SELECT id, version, folder, type, tags, metadata, notes FROM entries \
+            WHERE user_id = $1 AND folder = $2 AND name = $3",
         )
         .bind(uid)
-        .bind(params.namespace)
-        .bind(params.kind)
+        .bind(params.folder)
         .bind(params.name)
         .fetch_optional(&mut *tx)
         .await?
     } else {
         sqlx::query_as(
-            "SELECT id, version, tags, metadata FROM entries \
-            WHERE user_id IS NULL AND namespace = $1 AND kind = $2 AND name = $3",
+            "SELECT id, version, folder, type, tags, metadata, notes FROM entries \
+            WHERE user_id IS NULL AND folder = $1 AND name = $2",
         )
-        .bind(params.namespace)
-        .bind(params.kind)
+        .bind(params.folder)
         .bind(params.name)
         .fetch_optional(&mut *tx)
         .await?
@@ -216,8 +216,8 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
         db::EntrySnapshotParams {
             entry_id: ex.id,
             user_id: params.user_id,
-            namespace: params.namespace,
-            kind: params.kind,
+            folder: params.folder,
+            entry_type: params.entry_type,
             name: params.name,
             version: ex.version,
             action: "add",
@@ -232,10 +232,13 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->

     let entry_id: Uuid = if let Some(uid) = params.user_id {
         sqlx::query_scalar(
-            r#"INSERT INTO entries (user_id, namespace, kind, name, tags, metadata, version, updated_at)
-            VALUES ($1, $2, $3, $4, $5, $6, 1, NOW())
-            ON CONFLICT (user_id, namespace, kind, name) WHERE user_id IS NOT NULL
+            r#"INSERT INTO entries (user_id, folder, type, name, notes, tags, metadata, version, updated_at)
+            VALUES ($1, $2, $3, $4, $5, $6, $7, 1, NOW())
+            ON CONFLICT (user_id, folder, name) WHERE user_id IS NOT NULL
             DO UPDATE SET
+                folder = EXCLUDED.folder,
+                type = EXCLUDED.type,
+                notes = EXCLUDED.notes,
                 tags = EXCLUDED.tags,
                 metadata = EXCLUDED.metadata,
                 version = entries.version + 1,
@@ -243,28 +246,33 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
             RETURNING id"#,
         )
         .bind(uid)
-        .bind(params.namespace)
-        .bind(params.kind)
+        .bind(params.folder)
+        .bind(params.entry_type)
         .bind(params.name)
+        .bind(params.notes)
         .bind(params.tags)
         .bind(&metadata)
         .fetch_one(&mut *tx)
         .await?
     } else {
         sqlx::query_scalar(
-            r#"INSERT INTO entries (namespace, kind, name, tags, metadata, version, updated_at)
-            VALUES ($1, $2, $3, $4, $5, 1, NOW())
-            ON CONFLICT (namespace, kind, name) WHERE user_id IS NULL
+            r#"INSERT INTO entries (folder, type, name, notes, tags, metadata, version, updated_at)
+            VALUES ($1, $2, $3, $4, $5, $6, 1, NOW())
+            ON CONFLICT (folder, name) WHERE user_id IS NULL
             DO UPDATE SET
+                folder = EXCLUDED.folder,
+                type = EXCLUDED.type,
+                notes = EXCLUDED.notes,
                 tags = EXCLUDED.tags,
                 metadata = EXCLUDED.metadata,
                 version = entries.version + 1,
                 updated_at = NOW()
             RETURNING id"#,
         )
-        .bind(params.namespace)
-        .bind(params.kind)
+        .bind(params.folder)
+        .bind(params.entry_type)
         .bind(params.name)
+        .bind(params.notes)
         .bind(params.tags)
         .bind(&metadata)
         .fetch_one(&mut *tx)
@@ -282,8 +290,8 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
         db::EntrySnapshotParams {
             entry_id,
             user_id: params.user_id,
-            namespace: params.namespace,
-            kind: params.kind,
+            folder: params.folder,
+            entry_type: params.entry_type,
             name: params.name,
             version: new_entry_version,
             action: "create",
@@ -348,8 +356,8 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
         &mut tx,
         params.user_id,
         "add",
-        params.namespace,
-        params.kind,
+        params.folder,
+        params.entry_type,
         params.name,
         serde_json::json!({
             "tags": params.tags,
@@ -362,9 +370,9 @@ pub async fn run(pool: &PgPool, params: AddParams<'_>, master_key: &[u8; 32]) ->
     tx.commit().await?;

     Ok(AddResult {
-        namespace: params.namespace.to_string(),
-        kind: params.kind.to_string(),
         name: params.name.to_string(),
+        folder: params.folder.to_string(),
+        entry_type: params.entry_type.to_string(),
         tags: params.tags.to_vec(),
         meta_keys,
         secret_keys,
@@ -8,7 +8,7 @@ pub async fn list_for_user(pool: &PgPool, user_id: Uuid, limit: i64) -> Result<V
     let limit = limit.clamp(1, 200);

     let rows = sqlx::query_as(
-        "SELECT id, user_id, action, namespace, kind, name, detail, created_at \
+        "SELECT id, user_id, action, folder, type, name, detail, created_at \
         FROM audit_log \
         WHERE user_id = $1 \
         ORDER BY created_at DESC, id DESC \
@@ -8,9 +8,10 @@ use crate::models::{EntryRow, SecretFieldRow};

 #[derive(Debug, serde::Serialize)]
 pub struct DeletedEntry {
-    pub namespace: String,
-    pub kind: String,
     pub name: String,
+    pub folder: String,
+    #[serde(rename = "type")]
+    pub entry_type: String,
 }

 #[derive(Debug, serde::Serialize)]
@@ -20,34 +21,29 @@ pub struct DeleteResult {
 }

 pub struct DeleteParams<'a> {
-    pub namespace: &'a str,
-    pub kind: Option<&'a str>,
+    /// If set, delete a single entry by name.
     pub name: Option<&'a str>,
+    /// Folder filter for bulk delete.
+    pub folder: Option<&'a str>,
+    /// Type filter for bulk delete.
+    pub entry_type: Option<&'a str>,
     pub dry_run: bool,
     pub user_id: Option<Uuid>,
 }

 pub async fn run(pool: &PgPool, params: DeleteParams<'_>) -> Result<DeleteResult> {
     match params.name {
-        Some(name) => {
-            let kind = params
-                .kind
-                .ok_or_else(|| anyhow::anyhow!("--kind is required when --name is specified"))?;
-            delete_one(
-                pool,
-                params.namespace,
-                kind,
-                name,
-                params.dry_run,
-                params.user_id,
-            )
-            .await
-        }
+        Some(name) => delete_one(pool, name, params.folder, params.dry_run, params.user_id).await,
         None => {
+            if params.folder.is_none() && params.entry_type.is_none() {
+                anyhow::bail!(
+                    "Bulk delete requires at least one of: name, folder, or type filter."
+                );
+            }
             delete_bulk(
                 pool,
-                params.namespace,
-                params.kind,
+                params.folder,
+                params.entry_type,
                 params.dry_run,
                 params.user_id,
             )
@@ -58,93 +54,169 @@ pub async fn run(pool: &PgPool, params: DeleteParams<'_>) -> Result<DeleteResult

 async fn delete_one(
     pool: &PgPool,
-    namespace: &str,
-    kind: &str,
     name: &str,
+    folder: Option<&str>,
     dry_run: bool,
     user_id: Option<Uuid>,
 ) -> Result<DeleteResult> {
     if dry_run {
-        let exists: bool = if let Some(uid) = user_id {
-            sqlx::query_scalar(
-                "SELECT EXISTS(SELECT 1 FROM entries \
-                WHERE user_id = $1 AND namespace = $2 AND kind = $3 AND name = $4)",
+        // Dry-run uses the same disambiguation logic as actual delete:
+        // - 0 matches → nothing to delete
+        // - 1 match → show what would be deleted (with correct folder/type)
+        // - 2+ matches → disambiguation error (same as non-dry-run)
+        #[derive(sqlx::FromRow)]
+        struct DryRunRow {
+            folder: String,
+            #[sqlx(rename = "type")]
+            entry_type: String,
+        }
+
+        let rows: Vec<DryRunRow> = if let Some(uid) = user_id {
+            if let Some(f) = folder {
+                sqlx::query_as(
+                    "SELECT folder, type FROM entries WHERE user_id = $1 AND folder = $2 AND name = $3",
+                )
+                .bind(uid)
+                .bind(f)
+                .bind(name)
+                .fetch_all(pool)
+                .await?
+            } else {
+                sqlx::query_as("SELECT folder, type FROM entries WHERE user_id = $1 AND name = $2")
+                    .bind(uid)
+                    .bind(name)
+                    .fetch_all(pool)
+                    .await?
+            }
+        } else if let Some(f) = folder {
+            sqlx::query_as(
+                "SELECT folder, type FROM entries WHERE user_id IS NULL AND folder = $1 AND name = $2",
             )
-            .bind(uid)
-            .bind(namespace)
-            .bind(kind)
+            .bind(f)
             .bind(name)
-            .fetch_one(pool)
+            .fetch_all(pool)
             .await?
         } else {
-            sqlx::query_scalar(
-                "SELECT EXISTS(SELECT 1 FROM entries \
-                WHERE user_id IS NULL AND namespace = $1 AND kind = $2 AND name = $3)",
-            )
-            .bind(namespace)
-            .bind(kind)
-            .bind(name)
-            .fetch_one(pool)
-            .await?
+            sqlx::query_as("SELECT folder, type FROM entries WHERE user_id IS NULL AND name = $1")
+                .bind(name)
+                .fetch_all(pool)
+                .await?
         };

-        let deleted = if exists {
-            vec![DeletedEntry {
-                namespace: namespace.to_string(),
-                kind: kind.to_string(),
-                name: name.to_string(),
-            }]
-        } else {
-            vec![]
+        return match rows.len() {
+            0 => Ok(DeleteResult {
+                deleted: vec![],
+                dry_run: true,
+            }),
+            1 => {
+                let row = rows.into_iter().next().unwrap();
+                Ok(DeleteResult {
+                    deleted: vec![DeletedEntry {
+                        name: name.to_string(),
+                        folder: row.folder,
+                        entry_type: row.entry_type,
+                    }],
+                    dry_run: true,
+                })
+            }
+            _ => {
+                let folders: Vec<&str> = rows.iter().map(|r| r.folder.as_str()).collect();
+                anyhow::bail!(
+                    "Ambiguous: {} entries named '{}' found in folders: [{}]. \
+                    Specify 'folder' to disambiguate.",
+                    rows.len(),
+                    name,
+                    folders.join(", ")
+                )
+            }
         };
-        return Ok(DeleteResult {
-            deleted,
-            dry_run: true,
-        });
     }

     let mut tx = pool.begin().await?;

-    let row: Option<EntryRow> = if let Some(uid) = user_id {
+    // Fetch matching rows with FOR UPDATE; use folder when provided to resolve ambiguity.
+    let rows: Vec<EntryRow> = if let Some(uid) = user_id {
+        if let Some(f) = folder {
+            sqlx::query_as(
+                "SELECT id, version, folder, type, tags, metadata, notes FROM entries \
+                WHERE user_id = $1 AND folder = $2 AND name = $3 FOR UPDATE",
+            )
+            .bind(uid)
+            .bind(f)
+            .bind(name)
+            .fetch_all(&mut *tx)
+            .await?
+        } else {
+            sqlx::query_as(
+                "SELECT id, version, folder, type, tags, metadata, notes FROM entries \
+                WHERE user_id = $1 AND name = $2 FOR UPDATE",
+            )
+            .bind(uid)
+            .bind(name)
+            .fetch_all(&mut *tx)
+            .await?
+        }
+    } else if let Some(f) = folder {
         sqlx::query_as(
-            "SELECT id, version, tags, metadata FROM entries \
-            WHERE user_id = $1 AND namespace = $2 AND kind = $3 AND name = $4 FOR UPDATE",
+            "SELECT id, version, folder, type, tags, metadata, notes FROM entries \
+            WHERE user_id IS NULL AND folder = $1 AND name = $2 FOR UPDATE",
         )
-        .bind(uid)
-        .bind(namespace)
-        .bind(kind)
+        .bind(f)
         .bind(name)
-        .fetch_optional(&mut *tx)
+        .fetch_all(&mut *tx)
         .await?
     } else {
         sqlx::query_as(
-            "SELECT id, version, tags, metadata FROM entries \
-            WHERE user_id IS NULL AND namespace = $1 AND kind = $2 AND name = $3 FOR UPDATE",
+            "SELECT id, version, folder, type, tags, metadata, notes FROM entries \
+            WHERE user_id IS NULL AND name = $1 FOR UPDATE",
         )
-        .bind(namespace)
-        .bind(kind)
         .bind(name)
-        .fetch_optional(&mut *tx)
+        .fetch_all(&mut *tx)
         .await?
     };

-    let Some(row) = row else {
-        tx.rollback().await?;
-        return Ok(DeleteResult {
-            deleted: vec![],
-            dry_run: false,
-        });
+    let row = match rows.len() {
+        0 => {
+            tx.rollback().await?;
+            return Ok(DeleteResult {
+                deleted: vec![],
+                dry_run: false,
+            });
+        }
+        1 => rows.into_iter().next().unwrap(),
+        _ => {
+            tx.rollback().await?;
+            let folders: Vec<&str> = rows.iter().map(|r| r.folder.as_str()).collect();
+            anyhow::bail!(
+                "Ambiguous: {} entries named '{}' found in folders: [{}]. \
+                Specify 'folder' to disambiguate.",
+                rows.len(),
+                name,
+                folders.join(", ")
+            )
+        }
     };

-    snapshot_and_delete(&mut tx, namespace, kind, name, &row, user_id).await?;
-    crate::audit::log_tx(&mut tx, user_id, "delete", namespace, kind, name, json!({})).await;
+    let folder = row.folder.clone();
+    let entry_type = row.entry_type.clone();
+    snapshot_and_delete(&mut tx, &folder, &entry_type, name, &row, user_id).await?;
+    crate::audit::log_tx(
+        &mut tx,
+        user_id,
+        "delete",
+        &folder,
+        &entry_type,
+        name,
+        json!({}),
+    )
+    .await;
     tx.commit().await?;

     Ok(DeleteResult {
         deleted: vec![DeletedEntry {
-            namespace: namespace.to_string(),
-            kind: kind.to_string(),
             name: name.to_string(),
+            folder,
+            entry_type,
         }],
         dry_run: false,
     })
@@ -152,8 +224,8 @@ async fn delete_one(

 async fn delete_bulk(
     pool: &PgPool,
-    namespace: &str,
-    kind: Option<&str>,
+    folder: Option<&str>,
+    entry_type: Option<&str>,
     dry_run: bool,
     user_id: Option<Uuid>,
 ) -> Result<DeleteResult> {
@@ -161,62 +233,57 @@ async fn delete_bulk(
     struct FullEntryRow {
         id: Uuid,
         version: i64,
-        kind: String,
+        folder: String,
+        #[sqlx(rename = "type")]
+        entry_type: String,
         name: String,
         metadata: serde_json::Value,
         tags: Vec<String>,
+        notes: String,
     }

-    let rows: Vec<FullEntryRow> = match (user_id, kind) {
-        (Some(uid), Some(k)) => {
-            sqlx::query_as(
-                "SELECT id, version, kind, name, metadata, tags FROM entries \
-                WHERE user_id = $1 AND namespace = $2 AND kind = $3 ORDER BY name",
-            )
-            .bind(uid)
-            .bind(namespace)
-            .bind(k)
-            .fetch_all(pool)
-            .await?
-        }
-        (Some(uid), None) => {
-            sqlx::query_as(
-                "SELECT id, version, kind, name, metadata, tags FROM entries \
-                WHERE user_id = $1 AND namespace = $2 ORDER BY kind, name",
-            )
-            .bind(uid)
-            .bind(namespace)
-            .fetch_all(pool)
-            .await?
-        }
-        (None, Some(k)) => {
-            sqlx::query_as(
-                "SELECT id, version, kind, name, metadata, tags FROM entries \
-                WHERE user_id IS NULL AND namespace = $1 AND kind = $2 ORDER BY name",
-            )
-            .bind(namespace)
-            .bind(k)
-            .fetch_all(pool)
-            .await?
-        }
-        (None, None) => {
-            sqlx::query_as(
-                "SELECT id, version, kind, name, metadata, tags FROM entries \
-                WHERE user_id IS NULL AND namespace = $1 ORDER BY kind, name",
-            )
-            .bind(namespace)
-            .fetch_all(pool)
-            .await?
-        }
-    };
+    let mut conditions: Vec<String> = Vec::new();
+    let mut idx: i32 = 1;
+
+    if user_id.is_some() {
+        conditions.push(format!("user_id = ${}", idx));
+        idx += 1;
+    } else {
+        conditions.push("user_id IS NULL".to_string());
+    }
+    if folder.is_some() {
+        conditions.push(format!("folder = ${}", idx));
+        idx += 1;
+    }
+    if entry_type.is_some() {
+        conditions.push(format!("type = ${}", idx));
+    }
+
+    let where_clause = format!("WHERE {}", conditions.join(" AND "));
+    let sql = format!(
+        "SELECT id, version, folder, type, name, metadata, tags, notes \
+        FROM entries {where_clause} ORDER BY type, name"
+    );
+
+    let mut q = sqlx::query_as::<_, FullEntryRow>(&sql);
+    if let Some(uid) = user_id {
+        q = q.bind(uid);
+    }
+    if let Some(f) = folder {
+        q = q.bind(f);
+    }
+    if let Some(t) = entry_type {
+        q = q.bind(t);
+    }
+    let rows = q.fetch_all(pool).await?;

     if dry_run {
         let deleted = rows
             .iter()
             .map(|r| DeletedEntry {
-                namespace: namespace.to_string(),
-                kind: r.kind.clone(),
                 name: r.name.clone(),
+                folder: r.folder.clone(),
+                entry_type: r.entry_type.clone(),
             })
             .collect();
         return Ok(DeleteResult {
@@ -230,29 +297,37 @@ async fn delete_bulk(
         let entry_row = EntryRow {
             id: row.id,
             version: row.version,
+            folder: row.folder.clone(),
+            entry_type: row.entry_type.clone(),
             tags: row.tags.clone(),
             metadata: row.metadata.clone(),
+            notes: row.notes.clone(),
         };
         let mut tx = pool.begin().await?;
         snapshot_and_delete(
-            &mut tx, namespace, &row.kind, &row.name, &entry_row, user_id,
+            &mut tx,
+            &row.folder,
+            &row.entry_type,
+            &row.name,
+            &entry_row,
+            user_id,
         )
         .await?;
         crate::audit::log_tx(
             &mut tx,
             user_id,
             "delete",
-            namespace,
-            &row.kind,
+            &row.folder,
+            &row.entry_type,
             &row.name,
             json!({"bulk": true}),
         )
         .await;
         tx.commit().await?;
         deleted.push(DeletedEntry {
-            namespace: namespace.to_string(),
-            kind: row.kind.clone(),
             name: row.name.clone(),
+            folder: row.folder.clone(),
+            entry_type: row.entry_type.clone(),
         });
     }

@@ -264,8 +339,8 @@ async fn delete_bulk(

 async fn snapshot_and_delete(
     tx: &mut sqlx::Transaction<'_, sqlx::Postgres>,
-    namespace: &str,
-    kind: &str,
+    folder: &str,
+    entry_type: &str,
     name: &str,
     row: &EntryRow,
     user_id: Option<Uuid>,
@@ -275,8 +350,8 @@ async fn snapshot_and_delete(
     db::EntrySnapshotParams {
         entry_id: row.id,
         user_id,
-        namespace,
-        kind,
+        folder,
+        entry_type,
         name,
         version: row.version,
         action: "delete",
|
|||||||
```diff
@@ -12,8 +12,8 @@ use crate::service::search::{fetch_entries, fetch_secrets_for_entries};
 #[allow(clippy::too_many_arguments)]
 pub async fn build_env_map(
     pool: &PgPool,
-    namespace: Option<&str>,
-    kind: Option<&str>,
+    folder: Option<&str>,
+    entry_type: Option<&str>,
     name: Option<&str>,
     tags: &[String],
     only_fields: &[String],
@@ -21,7 +21,7 @@ pub async fn build_env_map(
     master_key: &[u8; 32],
     user_id: Option<Uuid>,
 ) -> Result<HashMap<String, String>> {
-    let entries = fetch_entries(pool, namespace, kind, name, tags, None, user_id).await?;
+    let entries = fetch_entries(pool, folder, entry_type, name, tags, None, user_id).await?;

     let mut combined: HashMap<String, String> = HashMap::new();

@@ -68,16 +68,8 @@ async fn build_entry_env_map(

     // Resolve key_ref
     if let Some(key_ref) = entry.metadata.get("key_ref").and_then(|v| v.as_str()) {
-        let key_entries = fetch_entries(
-            pool,
-            Some(&entry.namespace),
-            Some("key"),
-            Some(key_ref),
-            &[],
-            None,
-            None,
-        )
-        .await?;
+        let key_entries =
+            fetch_entries(pool, None, Some("key"), Some(key_ref), &[], None, None).await?;

         if let Some(key_entry) = key_entries.first() {
             let key_ids = vec![key_entry.id];
```
```diff
@@ -9,8 +9,8 @@ use crate::models::{ExportData, ExportEntry, ExportFormat};
 use crate::service::search::{fetch_entries, fetch_secrets_for_entries};

 pub struct ExportParams<'a> {
-    pub namespace: Option<&'a str>,
-    pub kind: Option<&'a str>,
+    pub folder: Option<&'a str>,
+    pub entry_type: Option<&'a str>,
     pub name: Option<&'a str>,
     pub tags: &'a [String],
     pub query: Option<&'a str>,
@@ -25,8 +25,8 @@ pub async fn export(
 ) -> Result<ExportData> {
     let entries = fetch_entries(
         pool,
-        params.namespace,
-        params.kind,
+        params.folder,
+        params.entry_type,
         params.name,
         params.tags,
         params.query,
@@ -62,9 +62,10 @@ pub async fn export(
         };

         export_entries.push(ExportEntry {
-            namespace: entry.namespace.clone(),
-            kind: entry.kind.clone(),
             name: entry.name.clone(),
+            folder: entry.folder.clone(),
+            entry_type: entry.entry_type.clone(),
+            notes: entry.notes.clone(),
             tags: entry.tags.clone(),
             metadata: entry.metadata.clone(),
             secrets,
```
```diff
@@ -5,31 +5,19 @@ use std::collections::HashMap;
 use uuid::Uuid;

 use crate::crypto;
-use crate::service::search::{fetch_entries, fetch_secrets_for_entries};
+use crate::service::search::{fetch_secrets_for_entries, resolve_entry};

 /// Decrypt a single named field from an entry.
+/// `folder` is optional; if omitted and multiple entries share the name, an error is returned.
 pub async fn get_secret_field(
     pool: &PgPool,
-    namespace: &str,
-    kind: &str,
     name: &str,
+    folder: Option<&str>,
     field_name: &str,
     master_key: &[u8; 32],
     user_id: Option<Uuid>,
 ) -> Result<Value> {
-    let entries = fetch_entries(
-        pool,
-        Some(namespace),
-        Some(kind),
-        Some(name),
-        &[],
-        None,
-        user_id,
-    )
-    .await?;
-    let entry = entries
-        .first()
-        .ok_or_else(|| anyhow::anyhow!("Not found: [{}/{}] {}", namespace, kind, name))?;
+    let entry = resolve_entry(pool, name, folder, user_id).await?;

     let entry_ids = vec![entry.id];
     let secrets_map = fetch_secrets_for_entries(pool, &entry_ids).await?;
@@ -44,27 +32,15 @@ pub async fn get_secret_field(
 }

 /// Decrypt all secret fields from an entry. Returns a map field_name → decrypted Value.
+/// `folder` is optional; if omitted and multiple entries share the name, an error is returned.
 pub async fn get_all_secrets(
     pool: &PgPool,
-    namespace: &str,
-    kind: &str,
     name: &str,
+    folder: Option<&str>,
     master_key: &[u8; 32],
     user_id: Option<Uuid>,
 ) -> Result<HashMap<String, Value>> {
-    let entries = fetch_entries(
-        pool,
-        Some(namespace),
-        Some(kind),
-        Some(name),
-        &[],
-        None,
-        user_id,
-    )
-    .await?;
-    let entry = entries
-        .first()
-        .ok_or_else(|| anyhow::anyhow!("Not found: [{}/{}] {}", namespace, kind, name))?;
+    let entry = resolve_entry(pool, name, folder, user_id).await?;

     let entry_ids = vec![entry.id];
     let secrets_map = fetch_secrets_for_entries(pool, &entry_ids).await?;
```
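A quick sketch (not part of this commit) of the new name-first call shape for the getters. The entry name `prod-db` is invented, and the module path `crate::service::get` is an assumption for illustration; the signatures follow the hunk above.

```rust
// Assumed module path — purely illustrative.
use crate::service::get::get_secret_field;

async fn demo(pool: &sqlx::PgPool, master_key: &[u8; 32]) -> anyhow::Result<()> {
    // Unique name: no folder needed.
    let v = get_secret_field(pool, "prod-db", None, "password", master_key, None).await?;
    println!("password = {v}");

    // Same name in several folders: pass `folder` to pick one, otherwise the
    // call fails with the "Ambiguous" error listing the candidate folders.
    let v = get_secret_field(pool, "prod-db", Some("refining"), "password", master_key, None).await?;
    println!("password = {v}");
    Ok(())
}
```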
```diff
@@ -3,6 +3,8 @@ use serde_json::Value;
 use sqlx::PgPool;
 use uuid::Uuid;

+use crate::service::search::resolve_entry;
+
 #[derive(Debug, serde::Serialize)]
 pub struct HistoryEntry {
     pub version: i64,
@@ -10,11 +12,12 @@ pub struct HistoryEntry {
     pub created_at: String,
 }

+/// Return version history for the entry identified by `name`.
+/// `folder` is optional; if omitted and multiple entries share the name, an error is returned.
 pub async fn run(
     pool: &PgPool,
-    namespace: &str,
-    kind: &str,
     name: &str,
+    folder: Option<&str>,
     limit: u32,
     user_id: Option<Uuid>,
 ) -> Result<Vec<HistoryEntry>> {
@@ -25,32 +28,16 @@ pub async fn run(
         created_at: chrono::DateTime<chrono::Utc>,
     }

-    let rows: Vec<Row> = if let Some(uid) = user_id {
-        sqlx::query_as(
-            "SELECT version, action, created_at FROM entries_history \
-             WHERE namespace = $1 AND kind = $2 AND name = $3 AND user_id = $4 \
-             ORDER BY id DESC LIMIT $5",
-        )
-        .bind(namespace)
-        .bind(kind)
-        .bind(name)
-        .bind(uid)
-        .bind(limit as i64)
-        .fetch_all(pool)
-        .await?
-    } else {
-        sqlx::query_as(
-            "SELECT version, action, created_at FROM entries_history \
-             WHERE namespace = $1 AND kind = $2 AND name = $3 AND user_id IS NULL \
-             ORDER BY id DESC LIMIT $4",
-        )
-        .bind(namespace)
-        .bind(kind)
-        .bind(name)
-        .bind(limit as i64)
-        .fetch_all(pool)
-        .await?
-    };
+    let entry = resolve_entry(pool, name, folder, user_id).await?;
+
+    let rows: Vec<Row> = sqlx::query_as(
+        "SELECT version, action, created_at FROM entries_history \
+         WHERE entry_id = $1 ORDER BY id DESC LIMIT $2",
+    )
+    .bind(entry.id)
+    .bind(limit as i64)
+    .fetch_all(pool)
+    .await?;

     Ok(rows
         .into_iter()
@@ -64,12 +51,11 @@ pub async fn run(

 pub async fn run_json(
     pool: &PgPool,
-    namespace: &str,
-    kind: &str,
     name: &str,
+    folder: Option<&str>,
     limit: u32,
     user_id: Option<Uuid>,
 ) -> Result<Value> {
-    let entries = run(pool, namespace, kind, name, limit, user_id).await?;
+    let entries = run(pool, name, folder, limit, user_id).await?;
     Ok(serde_json::to_value(entries)?)
 }
```
```diff
@@ -47,10 +47,9 @@ pub async fn run(
     for entry in &data.entries {
         let exists: bool = sqlx::query_scalar(
             "SELECT EXISTS(SELECT 1 FROM entries \
-             WHERE namespace = $1 AND kind = $2 AND name = $3 AND user_id IS NOT DISTINCT FROM $4)",
+             WHERE folder = $1 AND name = $2 AND user_id IS NOT DISTINCT FROM $3)",
         )
-        .bind(&entry.namespace)
-        .bind(&entry.kind)
+        .bind(&entry.folder)
         .bind(&entry.name)
         .bind(params.user_id)
         .fetch_one(pool)
@@ -59,9 +58,7 @@ pub async fn run(

         if exists && !params.force {
             return Err(anyhow::anyhow!(
-                "Import aborted: conflict on [{}/{}/{}]",
-                entry.namespace,
-                entry.kind,
+                "Import aborted: conflict on '{}'",
                 entry.name
             ));
         }
@@ -81,9 +78,10 @@ pub async fn run(
         match add_run(
             pool,
             AddParams {
-                namespace: &entry.namespace,
-                kind: &entry.kind,
                 name: &entry.name,
+                folder: &entry.folder,
+                entry_type: &entry.entry_type,
+                notes: &entry.notes,
                 tags: &entry.tags,
                 meta_entries: &meta_entries,
                 secret_entries: &secret_entries,
@@ -98,8 +96,6 @@ pub async fn run(
             }
             Err(e) => {
                 tracing::error!(
-                    namespace = entry.namespace,
-                    kind = entry.kind,
                     name = entry.name,
                     error = %e,
                     "failed to import entry"
```
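A quick note (not part of this commit): the conflict check above uses `IS NOT DISTINCT FROM` rather than `=` so that legacy rows with `user_id IS NULL` still collide with a user-less import — plain `=` yields NULL (not true) when either side is NULL. A small Rust model of that NULL-safe predicate, for reference only:

```rust
// NULL-safe equality, as `IS NOT DISTINCT FROM` behaves in PostgreSQL.
// Note: for Option<T: PartialEq> this is exactly `a == b`; it is spelled
// out here only to make the three cases explicit.
fn not_distinct_from<T: PartialEq>(a: &Option<T>, b: &Option<T>) -> bool {
    match (a, b) {
        (None, None) => true,         // NULL IS NOT DISTINCT FROM NULL → true
        (Some(x), Some(y)) => x == y, // both non-NULL → ordinary equality
        _ => false,                   // one NULL, one value → false
    }
}

fn main() {
    assert!(not_distinct_from(&None::<u64>, &None));
    assert!(not_distinct_from(&Some(7), &Some(7)));
    assert!(!not_distinct_from(&Some(7), &None));
}
```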
```diff
@@ -8,17 +8,19 @@ use crate::db;

 #[derive(Debug, serde::Serialize)]
 pub struct RollbackResult {
-    pub namespace: String,
-    pub kind: String,
     pub name: String,
+    pub folder: String,
+    #[serde(rename = "type")]
+    pub entry_type: String,
     pub restored_version: i64,
 }

+/// Roll back entry `name` to `to_version` (or the most recent snapshot if None).
+/// `folder` is optional; if omitted and multiple entries share the name, an error is returned.
 pub async fn run(
     pool: &PgPool,
-    namespace: &str,
-    kind: &str,
     name: &str,
+    folder: Option<&str>,
     to_version: Option<i64>,
     master_key: &[u8; 32],
     user_id: Option<Uuid>,
@@ -26,69 +28,122 @@ pub async fn run(
     #[derive(sqlx::FromRow)]
     struct EntryHistoryRow {
         entry_id: Uuid,
+        folder: String,
+        #[sqlx(rename = "type")]
+        entry_type: String,
         version: i64,
         action: String,
         tags: Vec<String>,
         metadata: Value,
     }

-    let snap: Option<EntryHistoryRow> = if let Some(ver) = to_version {
-        if let Some(uid) = user_id {
-            sqlx::query_as(
-                "SELECT entry_id, version, action, tags, metadata FROM entries_history \
-                 WHERE namespace = $1 AND kind = $2 AND name = $3 AND version = $4 \
-                 AND user_id = $5 ORDER BY id DESC LIMIT 1",
+    // Disambiguate: find the unique entry_id for (name, folder).
+    // Query entries_history by entry_id once we know it; first resolve via name + optional folder.
+    let entry_id: Option<Uuid> = if let Some(uid) = user_id {
+        if let Some(f) = folder {
+            sqlx::query_scalar(
+                "SELECT DISTINCT entry_id FROM entries_history \
+                 WHERE name = $1 AND folder = $2 AND user_id = $3 LIMIT 1",
             )
-            .bind(namespace)
-            .bind(kind)
             .bind(name)
-            .bind(ver)
+            .bind(f)
             .bind(uid)
             .fetch_optional(pool)
             .await?
         } else {
-            sqlx::query_as(
-                "SELECT entry_id, version, action, tags, metadata FROM entries_history \
-                 WHERE namespace = $1 AND kind = $2 AND name = $3 AND version = $4 \
-                 AND user_id IS NULL ORDER BY id DESC LIMIT 1",
+            let ids: Vec<Uuid> = sqlx::query_scalar(
+                "SELECT DISTINCT entry_id FROM entries_history \
+                 WHERE name = $1 AND user_id = $2",
             )
-            .bind(namespace)
-            .bind(kind)
             .bind(name)
-            .bind(ver)
-            .fetch_optional(pool)
-            .await?
+            .bind(uid)
+            .fetch_all(pool)
+            .await?;
+            match ids.len() {
+                0 => None,
+                1 => Some(ids[0]),
+                _ => {
+                    let folders: Vec<String> = sqlx::query_scalar(
+                        "SELECT DISTINCT folder FROM entries_history \
+                         WHERE name = $1 AND user_id = $2",
+                    )
+                    .bind(name)
+                    .bind(uid)
+                    .fetch_all(pool)
+                    .await?;
+                    anyhow::bail!(
+                        "Ambiguous: entries named '{}' exist in folders: [{}]. \
+                         Specify 'folder' to disambiguate.",
+                        name,
+                        folders.join(", ")
+                    )
+                }
+            }
         }
-    } else if let Some(uid) = user_id {
-        sqlx::query_as(
-            "SELECT entry_id, version, action, tags, metadata FROM entries_history \
-             WHERE namespace = $1 AND kind = $2 AND name = $3 \
-             AND user_id = $4 ORDER BY id DESC LIMIT 1",
+    } else if let Some(f) = folder {
+        sqlx::query_scalar(
+            "SELECT DISTINCT entry_id FROM entries_history \
+             WHERE name = $1 AND folder = $2 AND user_id IS NULL LIMIT 1",
         )
-        .bind(namespace)
-        .bind(kind)
         .bind(name)
-        .bind(uid)
+        .bind(f)
+        .fetch_optional(pool)
+        .await?
+    } else {
+        let ids: Vec<Uuid> = sqlx::query_scalar(
+            "SELECT DISTINCT entry_id FROM entries_history \
+             WHERE name = $1 AND user_id IS NULL",
+        )
+        .bind(name)
+        .fetch_all(pool)
+        .await?;
+        match ids.len() {
+            0 => None,
+            1 => Some(ids[0]),
+            _ => {
+                let folders: Vec<String> = sqlx::query_scalar(
+                    "SELECT DISTINCT folder FROM entries_history \
+                     WHERE name = $1 AND user_id IS NULL",
+                )
+                .bind(name)
+                .fetch_all(pool)
+                .await?;
+                anyhow::bail!(
+                    "Ambiguous: entries named '{}' exist in folders: [{}]. \
+                     Specify 'folder' to disambiguate.",
+                    name,
+                    folders.join(", ")
+                )
+            }
+        }
+    };
+
+    let entry_id = entry_id.ok_or_else(|| anyhow::anyhow!("No history found for '{}'", name))?;
+
+    let snap: Option<EntryHistoryRow> = if let Some(ver) = to_version {
+        sqlx::query_as(
+            "SELECT entry_id, folder, type, version, action, tags, metadata \
+             FROM entries_history \
+             WHERE entry_id = $1 AND version = $2 ORDER BY id DESC LIMIT 1",
+        )
+        .bind(entry_id)
+        .bind(ver)
         .fetch_optional(pool)
         .await?
     } else {
         sqlx::query_as(
-            "SELECT entry_id, version, action, tags, metadata FROM entries_history \
-             WHERE namespace = $1 AND kind = $2 AND name = $3 \
-             AND user_id IS NULL ORDER BY id DESC LIMIT 1",
+            "SELECT entry_id, folder, type, version, action, tags, metadata \
+             FROM entries_history \
+             WHERE entry_id = $1 ORDER BY id DESC LIMIT 1",
         )
-        .bind(namespace)
-        .bind(kind)
-        .bind(name)
+        .bind(entry_id)
         .fetch_optional(pool)
         .await?
     };

     let snap = snap.ok_or_else(|| {
         anyhow::anyhow!(
-            "No history found for [{}/{}] {}{}.",
-            namespace,
-            kind,
+            "No history found for '{}'{}.",
             name,
             to_version
                 .map(|v| format!(" at version {}", v))
@@ -130,43 +185,32 @@ pub async fn run(
     struct LiveEntry {
         id: Uuid,
         version: i64,
+        folder: String,
+        #[sqlx(rename = "type")]
+        entry_type: String,
         tags: Vec<String>,
         metadata: Value,
+        #[allow(dead_code)]
+        notes: String,
     }

-    // Query live entry with correct user_id scoping to avoid PK conflicts
-    let live: Option<LiveEntry> = if let Some(uid) = user_id {
-        sqlx::query_as(
-            "SELECT id, version, tags, metadata FROM entries \
-             WHERE user_id = $1 AND namespace = $2 AND kind = $3 AND name = $4 FOR UPDATE",
-        )
-        .bind(uid)
-        .bind(namespace)
-        .bind(kind)
-        .bind(name)
-        .fetch_optional(&mut *tx)
-        .await?
-    } else {
-        sqlx::query_as(
-            "SELECT id, version, tags, metadata FROM entries \
-             WHERE user_id IS NULL AND namespace = $1 AND kind = $2 AND name = $3 FOR UPDATE",
-        )
-        .bind(namespace)
-        .bind(kind)
-        .bind(name)
-        .fetch_optional(&mut *tx)
-        .await?
-    };
+    // Lock the live entry if it exists (matched by entry_id for precision).
+    let live: Option<LiveEntry> = sqlx::query_as(
+        "SELECT id, version, folder, type, tags, metadata, notes FROM entries \
+         WHERE id = $1 FOR UPDATE",
+    )
+    .bind(entry_id)
+    .fetch_optional(&mut *tx)
+    .await?;

-    let entry_id = if let Some(ref lr) = live {
-        // Snapshot current state before overwriting
+    let live_entry_id = if let Some(ref lr) = live {
         if let Err(e) = db::snapshot_entry_history(
             &mut tx,
             db::EntrySnapshotParams {
                 entry_id: lr.id,
                 user_id,
-                namespace,
-                kind,
+                folder: &lr.folder,
+                entry_type: &lr.entry_type,
                 name,
                 version: lr.version,
                 action: "rollback",
@@ -209,7 +253,6 @@ pub async fn run(
             }
         }

-        // Update the existing row in-place to preserve its primary key and user_id
         sqlx::query(
             "UPDATE entries SET tags = $1, metadata = $2, version = version + 1, \
              updated_at = NOW() WHERE id = $3",
@@ -222,16 +265,15 @@ pub async fn run(

         lr.id
     } else {
-        // No live entry — insert a fresh one with a new UUID
         if let Some(uid) = user_id {
             sqlx::query_scalar(
                 "INSERT INTO entries \
-                 (user_id, namespace, kind, name, tags, metadata, version, updated_at) \
-                 VALUES ($1, $2, $3, $4, $5, $6, $7, NOW()) RETURNING id",
+                 (user_id, folder, type, name, notes, tags, metadata, version, updated_at) \
+                 VALUES ($1, $2, $3, $4, '', $5, $6, $7, NOW()) RETURNING id",
             )
             .bind(uid)
-            .bind(namespace)
-            .bind(kind)
+            .bind(&snap.folder)
+            .bind(&snap.entry_type)
             .bind(name)
             .bind(&snap.tags)
             .bind(&snap.metadata)
@@ -241,11 +283,11 @@ pub async fn run(
         } else {
             sqlx::query_scalar(
                 "INSERT INTO entries \
-                 (namespace, kind, name, tags, metadata, version, updated_at) \
-                 VALUES ($1, $2, $3, $4, $5, $6, NOW()) RETURNING id",
+                 (folder, type, name, notes, tags, metadata, version, updated_at) \
+                 VALUES ($1, $2, $3, '', $4, $5, $6, NOW()) RETURNING id",
             )
-            .bind(namespace)
-            .bind(kind)
+            .bind(&snap.folder)
+            .bind(&snap.entry_type)
             .bind(name)
             .bind(&snap.tags)
             .bind(&snap.metadata)
@@ -256,7 +298,7 @@ pub async fn run(
     };

     sqlx::query("DELETE FROM secrets WHERE entry_id = $1")
-        .bind(entry_id)
+        .bind(live_entry_id)
         .execute(&mut *tx)
         .await?;

@@ -265,7 +307,7 @@ pub async fn run(
             continue;
         }
         sqlx::query("INSERT INTO secrets (entry_id, field_name, encrypted) VALUES ($1, $2, $3)")
-            .bind(entry_id)
+            .bind(live_entry_id)
             .bind(&f.field_name)
             .bind(&f.encrypted)
             .execute(&mut *tx)
@@ -276,8 +318,8 @@ pub async fn run(
         &mut tx,
         user_id,
         "rollback",
-        namespace,
-        kind,
+        &snap.folder,
+        &snap.entry_type,
         name,
         serde_json::json!({
             "restored_version": snap.version,
@@ -289,9 +331,9 @@ pub async fn run(
     tx.commit().await?;

     Ok(RollbackResult {
-        namespace: namespace.to_string(),
-        kind: kind.to_string(),
         name: name.to_string(),
+        folder: snap.folder,
+        entry_type: snap.entry_type,
         restored_version: snap.version,
     })
 }
```
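A quick sketch (not part of this commit) of the reworked rollback call surface. The module path `crate::service::rollback` and the entry names are assumptions for illustration; the argument order (`name`, optional `folder`, optional `to_version`) follows the hunk above.

```rust
// Assumed module path — purely illustrative.
use crate::service::rollback;

async fn demo(pool: &sqlx::PgPool, master_key: &[u8; 32]) -> anyhow::Result<()> {
    // Restore the latest snapshot of a uniquely named entry: no folder needed.
    let r = rollback::run(pool, "prod-db", None, None, master_key, None).await?;
    println!("restored '{}' (folder '{}') to v{}", r.name, r.folder, r.restored_version);

    // If 'prod-db' has history in several folders, the call above bails with
    // the "Ambiguous ... Specify 'folder'" error; pin folder (and a version):
    let r = rollback::run(pool, "prod-db", Some("refining"), Some(3), master_key, None).await?;
    println!("restored v{}", r.restored_version);
    Ok(())
}
```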
```diff
@@ -9,8 +9,8 @@ use crate::models::{Entry, SecretField};
 pub const FETCH_ALL_LIMIT: u32 = 100_000;

 pub struct SearchParams<'a> {
-    pub namespace: Option<&'a str>,
-    pub kind: Option<&'a str>,
+    pub folder: Option<&'a str>,
+    pub entry_type: Option<&'a str>,
     pub name: Option<&'a str>,
     pub tags: &'a [String],
     pub query: Option<&'a str>,
@@ -44,16 +44,16 @@ pub async fn run(pool: &PgPool, params: SearchParams<'_>) -> Result<SearchResult
 /// Fetch entries matching the given filters — returns all matching entries up to FETCH_ALL_LIMIT.
 pub async fn fetch_entries(
     pool: &PgPool,
-    namespace: Option<&str>,
-    kind: Option<&str>,
+    folder: Option<&str>,
+    entry_type: Option<&str>,
     name: Option<&str>,
     tags: &[String],
     query: Option<&str>,
     user_id: Option<Uuid>,
 ) -> Result<Vec<Entry>> {
     let params = SearchParams {
-        namespace,
-        kind,
+        folder,
+        entry_type,
         name,
         tags,
         query,
@@ -77,12 +77,12 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
         conditions.push("user_id IS NULL".to_string());
     }

-    if a.namespace.is_some() {
-        conditions.push(format!("namespace = ${}", idx));
+    if a.folder.is_some() {
+        conditions.push(format!("folder = ${}", idx));
         idx += 1;
     }
-    if a.kind.is_some() {
-        conditions.push(format!("kind = ${}", idx));
+    if a.entry_type.is_some() {
+        conditions.push(format!("type = ${}", idx));
         idx += 1;
     }
     if a.name.is_some() {
@@ -106,8 +106,9 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
     }
     if a.query.is_some() {
         conditions.push(format!(
-            "(name ILIKE ${i} ESCAPE '\\' OR namespace ILIKE ${i} ESCAPE '\\' \
-             OR kind ILIKE ${i} ESCAPE '\\' OR metadata::text ILIKE ${i} ESCAPE '\\' \
+            "(name ILIKE ${i} ESCAPE '\\' OR folder ILIKE ${i} ESCAPE '\\' \
+             OR type ILIKE ${i} ESCAPE '\\' OR notes ILIKE ${i} ESCAPE '\\' \
+             OR metadata::text ILIKE ${i} ESCAPE '\\' \
              OR EXISTS (SELECT 1 FROM unnest(tags) t WHERE t ILIKE ${i} ESCAPE '\\'))",
             i = idx
         ));
@@ -131,8 +132,8 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
     };

     let sql = format!(
-        "SELECT id, user_id, \
-         namespace, kind, name, tags, metadata, version, created_at, updated_at \
+        "SELECT id, user_id, folder, type, name, notes, tags, metadata, version, \
+         created_at, updated_at \
          FROM entries {where_clause} ORDER BY {order} LIMIT ${limit_idx} OFFSET ${offset_idx}"
     );

@@ -141,10 +142,10 @@ async fn fetch_entries_paged(pool: &PgPool, a: &SearchParams<'_>) -> Result<Vec<
     if let Some(uid) = a.user_id {
         q = q.bind(uid);
     }
-    if let Some(v) = a.namespace {
+    if let Some(v) = a.folder {
         q = q.bind(v);
     }
-    if let Some(v) = a.kind {
+    if let Some(v) = a.entry_type {
         q = q.bind(v);
     }
     if let Some(v) = a.name {
@@ -207,15 +208,51 @@ pub async fn fetch_secrets_for_entries(
     Ok(map)
 }

-// ── Internal raw row (because user_id is nullable in DB) ─────────────────────
+/// Resolve exactly one entry by name, with optional folder for disambiguation.
+///
+/// - If `folder` is provided: exact `(folder, name)` match.
+/// - If `folder` is None and exactly one entry matches: returns it.
+/// - If `folder` is None and multiple entries match: returns an error listing
+///   the folders and asking the caller to specify one.
+pub async fn resolve_entry(
+    pool: &PgPool,
+    name: &str,
+    folder: Option<&str>,
+    user_id: Option<Uuid>,
+) -> Result<crate::models::Entry> {
+    let entries = fetch_entries(pool, folder, None, Some(name), &[], None, user_id).await?;
+    match entries.len() {
+        0 => {
+            if let Some(f) = folder {
+                anyhow::bail!("Not found: '{}' in folder '{}'", name, f)
+            } else {
+                anyhow::bail!("Not found: '{}'", name)
+            }
+        }
+        1 => Ok(entries.into_iter().next().unwrap()),
+        _ => {
+            let folders: Vec<&str> = entries.iter().map(|e| e.folder.as_str()).collect();
+            anyhow::bail!(
+                "Ambiguous: {} entries named '{}' found in folders: [{}]. \
+                 Specify 'folder' to disambiguate.",
+                entries.len(),
+                name,
+                folders.join(", ")
+            )
+        }
+    }
+}
+
+// ── Internal raw row (because user_id is nullable in DB) ─────────────────────
 #[derive(sqlx::FromRow)]
 struct EntryRaw {
     id: Uuid,
     user_id: Option<Uuid>,
-    namespace: String,
-    kind: String,
+    folder: String,
+    #[sqlx(rename = "type")]
+    entry_type: String,
     name: String,
+    notes: String,
     tags: Vec<String>,
     metadata: Value,
     version: i64,
@@ -228,9 +265,10 @@ impl From<EntryRaw> for Entry {
         Entry {
             id: r.id,
             user_id: r.user_id,
-            namespace: r.namespace,
-            kind: r.kind,
+            folder: r.folder,
+            entry_type: r.entry_type,
             name: r.name,
+            notes: r.notes,
             tags: r.tags,
             metadata: r.metadata,
             version: r.version,
```
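A quick sketch (not part of this commit) of how `resolve_entry` behaves from a caller's perspective. The import path is the real one introduced above; the entry names are invented.

```rust
use crate::service::search::resolve_entry;

async fn demo(pool: &sqlx::PgPool) -> anyhow::Result<()> {
    // Unique name: folder can be omitted.
    let e = resolve_entry(pool, "github-token", None, None).await?;
    println!("found '{}' in folder '{}'", e.name, e.folder);

    // 'prod-db' exists in both 'refining' and 'personal': without a folder the
    // call bails with
    //   Ambiguous: 2 entries named 'prod-db' found in folders: [refining, personal]. ...
    let e = resolve_entry(pool, "prod-db", Some("refining"), None).await?;
    println!("id = {}", e.id);
    Ok(())
}
```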
```diff
@@ -13,9 +13,10 @@ use crate::service::add::{

 #[derive(Debug, serde::Serialize)]
 pub struct UpdateResult {
-    pub namespace: String,
-    pub kind: String,
     pub name: String,
+    pub folder: String,
+    #[serde(rename = "type")]
+    pub entry_type: String,
     pub add_tags: Vec<String>,
     pub remove_tags: Vec<String>,
     pub meta_keys: Vec<String>,
@@ -25,9 +26,10 @@ pub struct UpdateResult {
 }

 pub struct UpdateParams<'a> {
-    pub namespace: &'a str,
-    pub kind: &'a str,
     pub name: &'a str,
+    /// Optional folder for disambiguation when multiple entries share the same name.
+    pub folder: Option<&'a str>,
+    pub notes: Option<&'a str>,
     pub add_tags: &'a [String],
     pub remove_tags: &'a [String],
     pub meta_entries: &'a [String],
@@ -44,45 +46,76 @@ pub async fn run(
 ) -> Result<UpdateResult> {
     let mut tx = pool.begin().await?;

-    let row: Option<EntryRow> = if let Some(uid) = params.user_id {
+    // Fetch matching rows with FOR UPDATE; use folder when provided to resolve ambiguity.
+    let rows: Vec<EntryRow> = if let Some(uid) = params.user_id {
+        if let Some(folder) = params.folder {
+            sqlx::query_as(
+                "SELECT id, version, folder, type, tags, metadata, notes FROM entries \
+                 WHERE user_id = $1 AND folder = $2 AND name = $3 FOR UPDATE",
+            )
+            .bind(uid)
+            .bind(folder)
+            .bind(params.name)
+            .fetch_all(&mut *tx)
+            .await?
+        } else {
+            sqlx::query_as(
+                "SELECT id, version, folder, type, tags, metadata, notes FROM entries \
+                 WHERE user_id = $1 AND name = $2 FOR UPDATE",
+            )
+            .bind(uid)
+            .bind(params.name)
+            .fetch_all(&mut *tx)
+            .await?
+        }
+    } else if let Some(folder) = params.folder {
         sqlx::query_as(
-            "SELECT id, version, tags, metadata FROM entries \
-             WHERE user_id = $1 AND namespace = $2 AND kind = $3 AND name = $4 FOR UPDATE",
+            "SELECT id, version, folder, type, tags, metadata, notes FROM entries \
+             WHERE user_id IS NULL AND folder = $1 AND name = $2 FOR UPDATE",
         )
-        .bind(uid)
-        .bind(params.namespace)
-        .bind(params.kind)
+        .bind(folder)
         .bind(params.name)
-        .fetch_optional(&mut *tx)
+        .fetch_all(&mut *tx)
         .await?
     } else {
         sqlx::query_as(
-            "SELECT id, version, tags, metadata FROM entries \
-             WHERE user_id IS NULL AND namespace = $1 AND kind = $2 AND name = $3 FOR UPDATE",
+            "SELECT id, version, folder, type, tags, metadata, notes FROM entries \
+             WHERE user_id IS NULL AND name = $1 FOR UPDATE",
        )
-        .bind(params.namespace)
-        .bind(params.kind)
         .bind(params.name)
-        .fetch_optional(&mut *tx)
+        .fetch_all(&mut *tx)
         .await?
     };

-    let row = row.ok_or_else(|| {
-        anyhow::anyhow!(
-            "Not found: [{}/{}] {}. Use `add` to create it first.",
-            params.namespace,
-            params.kind,
-            params.name
-        )
-    })?;
+    let row = match rows.len() {
+        0 => {
+            tx.rollback().await?;
+            anyhow::bail!(
+                "Not found: '{}'. Use `add` to create it first.",
+                params.name
+            )
+        }
+        1 => rows.into_iter().next().unwrap(),
+        _ => {
+            tx.rollback().await?;
+            let folders: Vec<&str> = rows.iter().map(|r| r.folder.as_str()).collect();
+            anyhow::bail!(
+                "Ambiguous: {} entries named '{}' found in folders: [{}]. \
+                 Specify 'folder' to disambiguate.",
+                rows.len(),
+                params.name,
+                folders.join(", ")
+            )
+        }
+    };

     if let Err(e) = db::snapshot_entry_history(
         &mut tx,
         db::EntrySnapshotParams {
             entry_id: row.id,
             user_id: params.user_id,
-            namespace: params.namespace,
-            kind: params.kind,
+            folder: &row.folder,
+            entry_type: &row.entry_type,
             name: params.name,
             version: row.version,
             action: "update",
@@ -117,12 +150,16 @@ pub async fn run(
     }
     let metadata = Value::Object(meta_map);

+    let new_notes = params.notes.unwrap_or(&row.notes);
+
     let result = sqlx::query(
-        "UPDATE entries SET tags = $1, metadata = $2, version = version + 1, updated_at = NOW() \
-         WHERE id = $3 AND version = $4",
+        "UPDATE entries SET tags = $1, metadata = $2, notes = $3, \
+         version = version + 1, updated_at = NOW() \
+         WHERE id = $4 AND version = $5",
     )
     .bind(&tags)
     .bind(&metadata)
+    .bind(new_notes)
     .bind(row.id)
     .bind(row.version)
     .execute(&mut *tx)
@@ -131,9 +168,7 @@ pub async fn run(
     if result.rows_affected() == 0 {
         tx.rollback().await?;
         anyhow::bail!(
-            "Concurrent modification detected for [{}/{}] {}. Please retry.",
-            params.namespace,
-            params.kind,
+            "Concurrent modification detected for '{}'. Please retry.",
             params.name
         );
     }
@@ -243,8 +278,8 @@ pub async fn run(
         &mut tx,
         params.user_id,
         "update",
-        params.namespace,
-        params.kind,
+        "",
+        "",
         params.name,
         serde_json::json!({
             "add_tags": params.add_tags,
@@ -260,9 +295,9 @@ pub async fn run(
     tx.commit().await?;

     Ok(UpdateResult {
-        namespace: params.namespace.to_string(),
-        kind: params.kind.to_string(),
         name: params.name.to_string(),
+        folder: row.folder.clone(),
+        entry_type: row.entry_type.clone(),
         add_tags: params.add_tags.to_vec(),
         remove_tags: params.remove_tags.to_vec(),
         meta_keys,
```
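A quick note (not part of this commit): the version-guarded UPDATE above is classic optimistic locking — `rows_affected() == 0` means a concurrent writer bumped `version` first and the service bails with "Concurrent modification detected". A hedged sketch of the retry wrapper a caller might add; the helper is illustrative, not in the crate, and matching the conflict by error text is brittle but keeps the sketch short.

```rust
use std::future::Future;

// Retry an optimistic-lock failure a bounded number of times; any other
// error is returned immediately.
async fn with_retry<T, F, Fut>(mut op: F, attempts: usize) -> anyhow::Result<T>
where
    F: FnMut() -> Fut,
    Fut: Future<Output = anyhow::Result<T>>,
{
    let mut last_err = None;
    for _ in 0..attempts {
        match op().await {
            Ok(v) => return Ok(v),
            // Retry only the optimistic-lock conflict; bubble up everything else.
            Err(e) if e.to_string().contains("Concurrent modification") => last_err = Some(e),
            Err(e) => return Err(e),
        }
    }
    Err(last_err.unwrap_or_else(|| anyhow::anyhow!("no attempts made")))
}
```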
```diff
@@ -1,6 +1,6 @@
 [package]
 name = "secrets-mcp"
-version = "0.2.2"
+version = "0.3.0"
 edition.workspace = true

 [[bin]]
```
```diff
@@ -155,17 +155,18 @@ impl SecretsService {

 #[derive(Debug, Deserialize, JsonSchema)]
 struct SearchInput {
-    #[schemars(description = "Namespace filter (e.g. 'refining', 'ricnsmart')")]
-    namespace: Option<String>,
-    #[schemars(description = "Kind filter (e.g. 'server', 'service', 'key')")]
-    kind: Option<String>,
-    #[schemars(description = "Exact record name")]
+    #[schemars(description = "Fuzzy search across name, folder, type, notes, tags, metadata")]
+    query: Option<String>,
+    #[schemars(description = "Folder filter (e.g. 'refining', 'personal', 'family')")]
+    folder: Option<String>,
+    #[schemars(description = "Type filter (e.g. 'server', 'service', 'person', 'key')")]
+    #[serde(rename = "type")]
+    entry_type: Option<String>,
+    #[schemars(description = "Exact name to match")]
     name: Option<String>,
     #[schemars(description = "Tag filters (all must match)")]
     tags: Option<Vec<String>>,
-    #[schemars(description = "Fuzzy search across name, namespace, kind, tags, metadata")]
-    query: Option<String>,
-    #[schemars(description = "Return only summary fields (name/tags/desc/updated_at)")]
+    #[schemars(description = "Return only summary fields (name/tags/notes/updated_at)")]
     summary: Option<bool>,
     #[schemars(description = "Sort order: 'name' (default), 'updated', 'created'")]
     sort: Option<String>,
@@ -177,24 +178,29 @@ struct SearchInput {

 #[derive(Debug, Deserialize, JsonSchema)]
 struct GetSecretInput {
-    #[schemars(description = "Namespace of the entry")]
-    namespace: String,
-    #[schemars(description = "Kind of the entry (e.g. 'server', 'service')")]
-    kind: String,
     #[schemars(description = "Name of the entry")]
     name: String,
+    #[schemars(
+        description = "Folder for disambiguation when multiple entries share the same name (optional)"
+    )]
+    folder: Option<String>,
     #[schemars(description = "Specific field to retrieve. If omitted, returns all fields.")]
     field: Option<String>,
 }

 #[derive(Debug, Deserialize, JsonSchema)]
 struct AddInput {
-    #[schemars(description = "Namespace")]
-    namespace: String,
-    #[schemars(description = "Kind (e.g. 'server', 'service', 'key')")]
-    kind: String,
-    #[schemars(description = "Unique name within namespace+kind")]
+    #[schemars(description = "Unique name for this entry")]
     name: String,
+    #[schemars(description = "Folder for organization (optional, e.g. 'personal', 'refining')")]
+    folder: Option<String>,
+    #[schemars(
+        description = "Type/category of this entry (optional, e.g. 'server', 'person', 'key')"
+    )]
+    #[serde(rename = "type")]
+    entry_type: Option<String>,
+    #[schemars(description = "Free-text notes for this entry (optional)")]
+    notes: Option<String>,
     #[schemars(description = "Tags for this entry")]
     tags: Option<Vec<String>>,
     #[schemars(description = "Metadata fields as 'key=value' or 'key:=json' strings")]
@@ -205,12 +211,14 @@ struct AddInput {

 #[derive(Debug, Deserialize, JsonSchema)]
 struct UpdateInput {
-    #[schemars(description = "Namespace")]
-    namespace: String,
-    #[schemars(description = "Kind")]
-    kind: String,
-    #[schemars(description = "Name")]
+    #[schemars(description = "Name of the entry to update")]
     name: String,
+    #[schemars(
+        description = "Folder for disambiguation when multiple entries share the same name (optional)"
+    )]
+    folder: Option<String>,
+    #[schemars(description = "Update the notes field")]
+    notes: Option<String>,
     #[schemars(description = "Tags to add")]
     add_tags: Option<Vec<String>>,
     #[schemars(description = "Tags to remove")]
@@ -227,46 +235,49 @@ struct UpdateInput {

 #[derive(Debug, Deserialize, JsonSchema)]
 struct DeleteInput {
-    #[schemars(description = "Namespace")]
-    namespace: String,
-    #[schemars(description = "Kind filter (required for single delete)")]
-    kind: Option<String>,
-    #[schemars(description = "Exact name to delete. Omit for bulk delete by namespace+kind.")]
+    #[schemars(description = "Name of the entry to delete (single delete). \
+                              Omit to bulk delete by folder/type filters.")]
     name: Option<String>,
+    #[schemars(description = "Folder filter for bulk delete")]
+    folder: Option<String>,
+    #[schemars(description = "Type filter for bulk delete")]
+    #[serde(rename = "type")]
+    entry_type: Option<String>,
     #[schemars(description = "Preview deletions without writing")]
     dry_run: Option<bool>,
 }

 #[derive(Debug, Deserialize, JsonSchema)]
 struct HistoryInput {
-    #[schemars(description = "Namespace")]
-    namespace: String,
-    #[schemars(description = "Kind")]
-    kind: String,
-    #[schemars(description = "Name")]
+    #[schemars(description = "Name of the entry")]
     name: String,
+    #[schemars(
+        description = "Folder for disambiguation when multiple entries share the same name (optional)"
+    )]
+    folder: Option<String>,
     #[schemars(description = "Max history entries to return (default 20)")]
     limit: Option<u32>,
 }

 #[derive(Debug, Deserialize, JsonSchema)]
 struct RollbackInput {
-    #[schemars(description = "Namespace")]
-    namespace: String,
-    #[schemars(description = "Kind")]
-    kind: String,
-    #[schemars(description = "Name")]
+    #[schemars(description = "Name of the entry")]
     name: String,
+    #[schemars(
+        description = "Folder for disambiguation when multiple entries share the same name (optional)"
+    )]
+    folder: Option<String>,
     #[schemars(description = "Target version number. Omit to restore the most recent snapshot.")]
     to_version: Option<i64>,
 }

 #[derive(Debug, Deserialize, JsonSchema)]
 struct ExportInput {
-    #[schemars(description = "Namespace filter")]
-    namespace: Option<String>,
-    #[schemars(description = "Kind filter")]
-    kind: Option<String>,
+    #[schemars(description = "Folder filter")]
+    folder: Option<String>,
+    #[schemars(description = "Type filter")]
+    #[serde(rename = "type")]
+    entry_type: Option<String>,
     #[schemars(description = "Exact name filter")]
     name: Option<String>,
     #[schemars(description = "Tag filters")]
```
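A quick sketch (not part of this commit) of what the reshaped tool inputs look like on the wire, built here with `serde_json` to match the schemars structs above. Entry names are invented; for `secrets_get`, only `name` is required and `folder` exists purely for disambiguation.

```rust
use serde_json::json;

fn main() {
    // Hypothetical secrets_get argument payloads under the 0.3.0 input schema.
    let unambiguous = json!({ "name": "github-token" });

    // When 'prod-db' exists in more than one folder, the tool returns the
    // "Ambiguous ... Specify 'folder'" error until the caller pins it down:
    let disambiguated = json!({
        "name": "prod-db",
        "folder": "refining",
        "field": "password"
    });

    println!("{unambiguous}\n{disambiguated}");
}
```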
@@ -279,10 +290,11 @@ struct ExportInput {
|
|||||||
|
|
||||||
#[derive(Debug, Deserialize, JsonSchema)]
|
#[derive(Debug, Deserialize, JsonSchema)]
|
||||||
struct EnvMapInput {
|
struct EnvMapInput {
|
||||||
#[schemars(description = "Namespace filter")]
|
#[schemars(description = "Folder filter")]
|
||||||
namespace: Option<String>,
|
folder: Option<String>,
|
||||||
#[schemars(description = "Kind filter")]
|
#[schemars(description = "Type filter")]
|
||||||
kind: Option<String>,
|
#[serde(rename = "type")]
|
||||||
|
entry_type: Option<String>,
|
||||||
#[schemars(description = "Exact name filter")]
|
#[schemars(description = "Exact name filter")]
|
||||||
name: Option<String>,
|
name: Option<String>,
|
||||||
#[schemars(description = "Tag filters")]
|
#[schemars(description = "Tag filters")]
|
||||||
@@ -316,8 +328,8 @@ impl SecretsService {
|
|||||||
tracing::info!(
|
tracing::info!(
|
||||||
tool = "secrets_search",
|
tool = "secrets_search",
|
||||||
?user_id,
|
?user_id,
|
||||||
namespace = input.namespace.as_deref(),
|
folder = input.folder.as_deref(),
|
||||||
kind = input.kind.as_deref(),
|
entry_type = input.entry_type.as_deref(),
|
||||||
name = input.name.as_deref(),
|
name = input.name.as_deref(),
|
||||||
query = input.query.as_deref(),
|
query = input.query.as_deref(),
|
||||||
"tool call start",
|
"tool call start",
|
||||||
@@ -326,8 +338,8 @@ impl SecretsService {
|
|||||||
let result = svc_search(
|
let result = svc_search(
|
||||||
&self.pool,
|
&self.pool,
|
||||||
SearchParams {
|
SearchParams {
|
||||||
namespace: input.namespace.as_deref(),
|
folder: input.folder.as_deref(),
|
||||||
kind: input.kind.as_deref(),
|
entry_type: input.entry_type.as_deref(),
|
||||||
name: input.name.as_deref(),
|
name: input.name.as_deref(),
|
||||||
tags: &tags,
|
tags: &tags,
|
||||||
query: input.query.as_deref(),
|
query: input.query.as_deref(),
|
||||||
@@ -347,12 +359,11 @@ impl SecretsService {
|
|||||||
.map(|e| {
|
.map(|e| {
|
||||||
if summary {
|
if summary {
|
||||||
serde_json::json!({
|
serde_json::json!({
|
||||||
"namespace": e.namespace,
|
|
||||||
"kind": e.kind,
|
|
||||||
"name": e.name,
|
"name": e.name,
|
||||||
|
"folder": e.folder,
|
||||||
|
"type": e.entry_type,
|
||||||
"tags": e.tags,
|
"tags": e.tags,
|
||||||
"desc": e.metadata.get("desc").or_else(|| e.metadata.get("url"))
|
"notes": e.notes,
|
||||||
.and_then(|v| v.as_str()).unwrap_or(""),
|
|
||||||
"updated_at": e.updated_at.format("%Y-%m-%dT%H:%M:%SZ").to_string(),
|
"updated_at": e.updated_at.format("%Y-%m-%dT%H:%M:%SZ").to_string(),
|
||||||
})
|
})
|
||||||
} else {
|
} else {
|
||||||
@@ -363,9 +374,10 @@ impl SecretsService {
|
|||||||
.unwrap_or_default();
|
.unwrap_or_default();
|
||||||
serde_json::json!({
|
serde_json::json!({
|
||||||
"id": e.id,
|
"id": e.id,
|
||||||
"namespace": e.namespace,
|
|
||||||
"kind": e.kind,
|
|
||||||
"name": e.name,
|
"name": e.name,
|
||||||
|
"folder": e.folder,
|
||||||
|
"type": e.entry_type,
|
||||||
|
"notes": e.notes,
|
||||||
"tags": e.tags,
|
"tags": e.tags,
|
||||||
"metadata": e.metadata,
|
"metadata": e.metadata,
|
||||||
"secret_fields": schema,
|
"secret_fields": schema,
|
||||||
@@ -408,8 +420,6 @@ impl SecretsService {
|
|||||||
tracing::info!(
|
tracing::info!(
|
||||||
tool = "secrets_get",
|
tool = "secrets_get",
|
||||||
?user_id,
|
?user_id,
|
||||||
namespace = %input.namespace,
|
|
||||||
kind = %input.kind,
|
|
||||||
name = %input.name,
|
name = %input.name,
|
||||||
field = input.field.as_deref(),
|
field = input.field.as_deref(),
|
||||||
"tool call start",
|
"tool call start",
|
||||||
@@ -418,9 +428,8 @@ impl SecretsService {
|
|||||||
if let Some(field_name) = &input.field {
|
if let Some(field_name) = &input.field {
|
||||||
let value = get_secret_field(
|
let value = get_secret_field(
|
||||||
&self.pool,
|
&self.pool,
|
||||||
&input.namespace,
|
|
||||||
&input.kind,
|
|
||||||
&input.name,
|
&input.name,
|
||||||
|
input.folder.as_deref(),
|
||||||
field_name,
|
field_name,
|
||||||
&user_key,
|
&user_key,
|
||||||
Some(user_id),
|
Some(user_id),
|
||||||
@@ -440,9 +449,8 @@ impl SecretsService {
|
|||||||
} else {
|
} else {
|
||||||
let secrets = get_all_secrets(
|
let secrets = get_all_secrets(
|
||||||
&self.pool,
|
&self.pool,
|
||||||
&input.namespace,
|
|
||||||
&input.kind,
|
|
||||||
&input.name,
|
&input.name,
|
||||||
|
input.folder.as_deref(),
|
||||||
&user_key,
|
&user_key,
|
||||||
Some(user_id),
|
Some(user_id),
|
||||||
)
|
)
|
||||||
@@ -478,22 +486,26 @@ impl SecretsService {
         tracing::info!(
             tool = "secrets_add",
             ?user_id,
-            namespace = %input.namespace,
-            kind = %input.kind,
             name = %input.name,
+            folder = input.folder.as_deref(),
+            entry_type = input.entry_type.as_deref(),
             "tool call start",
         );

         let tags = input.tags.unwrap_or_default();
         let meta = input.meta.unwrap_or_default();
         let secrets = input.secrets.unwrap_or_default();
+        let folder = input.folder.as_deref().unwrap_or("");
+        let entry_type = input.entry_type.as_deref().unwrap_or("");
+        let notes = input.notes.as_deref().unwrap_or("");

         let result = svc_add(
             &self.pool,
             AddParams {
-                namespace: &input.namespace,
-                kind: &input.kind,
                 name: &input.name,
+                folder,
+                entry_type,
+                notes,
                 tags: &tags,
                 meta_entries: &meta,
                 secret_entries: &secrets,
@@ -507,8 +519,6 @@ impl SecretsService {
         tracing::info!(
             tool = "secrets_add",
             ?user_id,
-            namespace = %input.namespace,
-            kind = %input.kind,
             name = %input.name,
             elapsed_ms = t.elapsed().as_millis(),
             "tool call ok",
@@ -532,8 +542,6 @@ impl SecretsService {
         tracing::info!(
             tool = "secrets_update",
             ?user_id,
-            namespace = %input.namespace,
-            kind = %input.kind,
             name = %input.name,
             "tool call start",
         );
@@ -548,9 +556,9 @@ impl SecretsService {
         let result = svc_update(
             &self.pool,
             UpdateParams {
-                namespace: &input.namespace,
-                kind: &input.kind,
                 name: &input.name,
+                folder: input.folder.as_deref(),
+                notes: input.notes.as_deref(),
                 add_tags: &add_tags,
                 remove_tags: &remove_tags,
                 meta_entries: &meta,
@@ -567,8 +575,6 @@ impl SecretsService {
         tracing::info!(
             tool = "secrets_update",
             ?user_id,
-            namespace = %input.namespace,
-            kind = %input.kind,
             name = %input.name,
             elapsed_ms = t.elapsed().as_millis(),
             "tool call ok",
@@ -578,8 +584,8 @@ impl SecretsService {
     }

     #[tool(
-        description = "Delete one entry (specify namespace+kind+name) or bulk delete all \
-                       entries matching namespace+kind. Use dry_run=true to preview.",
+        description = "Delete one entry by name, or bulk delete entries matching folder and/or type. \
+                       Use dry_run=true to preview.",
         annotations(title = "Delete Secret Entry", destructive_hint = true)
     )]
     async fn secrets_delete(
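The rewritten description captures the new contract: a bare `name` targets a single entry, `folder` and/or `entry_type` scope bulk deletes or disambiguate duplicate names, and `dry_run=true` previews through the same resolution as a real delete. A caller-side sketch against the `DeleteParams` shape in the next hunk; the comments about ambiguity handling are inferred from the release notes, not quoted from the source:

```rust
// Sketch: name-first delete with optional folder disambiguation.
let preview = svc_delete(
    &pool,
    DeleteParams {
        name: Some("github"),   // target by name alone
        folder: None,           // only needed when the name is duplicated
        entry_type: None,
        dry_run: true,          // dry run resolves targets like a real delete
        user_id,
    },
)
.await?;

// If "github" exists in several folders, retry with an explicit folder:
let _deleted = svc_delete(
    &pool,
    DeleteParams {
        name: Some("github"),
        folder: Some("work"),
        entry_type: None,
        dry_run: false,
        user_id,
    },
)
.await?;
```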
@@ -592,9 +598,9 @@ impl SecretsService {
         tracing::info!(
             tool = "secrets_delete",
             ?user_id,
-            namespace = %input.namespace,
-            kind = input.kind.as_deref(),
             name = input.name.as_deref(),
+            folder = input.folder.as_deref(),
+            entry_type = input.entry_type.as_deref(),
             dry_run = input.dry_run.unwrap_or(false),
             "tool call start",
         );
@@ -602,9 +608,9 @@ impl SecretsService {
         let result = svc_delete(
             &self.pool,
             DeleteParams {
-                namespace: &input.namespace,
-                kind: input.kind.as_deref(),
                 name: input.name.as_deref(),
+                folder: input.folder.as_deref(),
+                entry_type: input.entry_type.as_deref(),
                 dry_run: input.dry_run.unwrap_or(false),
                 user_id,
             },
@@ -615,7 +621,6 @@ impl SecretsService {
         tracing::info!(
             tool = "secrets_delete",
             ?user_id,
-            namespace = %input.namespace,
             elapsed_ms = t.elapsed().as_millis(),
             "tool call ok",
         );
@@ -642,17 +647,14 @@ impl SecretsService {
         tracing::info!(
             tool = "secrets_history",
             ?user_id,
-            namespace = %input.namespace,
-            kind = %input.kind,
             name = %input.name,
             "tool call start",
         );

         let result = svc_history(
             &self.pool,
-            &input.namespace,
-            &input.kind,
             &input.name,
+            input.folder.as_deref(),
             input.limit.unwrap_or(20),
             user_id,
         )
@@ -684,8 +686,6 @@ impl SecretsService {
         tracing::info!(
             tool = "secrets_rollback",
             ?user_id,
-            namespace = %input.namespace,
-            kind = %input.kind,
             name = %input.name,
             to_version = input.to_version,
             "tool call start",
@@ -693,9 +693,8 @@ impl SecretsService {

         let result = svc_rollback(
             &self.pool,
-            &input.namespace,
-            &input.kind,
             &input.name,
+            input.folder.as_deref(),
             input.to_version,
             &user_key,
             Some(user_id),
@@ -734,8 +733,8 @@ impl SecretsService {
         tracing::info!(
             tool = "secrets_export",
             ?user_id,
-            namespace = input.namespace.as_deref(),
-            kind = input.kind.as_deref(),
+            folder = input.folder.as_deref(),
+            entry_type = input.entry_type.as_deref(),
             format,
             "tool call start",
         );
@@ -743,8 +742,8 @@ impl SecretsService {
         let data = svc_export(
             &self.pool,
             ExportParams {
-                namespace: input.namespace.as_deref(),
-                kind: input.kind.as_deref(),
+                folder: input.folder.as_deref(),
+                entry_type: input.entry_type.as_deref(),
                 name: input.name.as_deref(),
                 tags: &tags,
                 query: input.query.as_deref(),
@@ -800,16 +799,16 @@ impl SecretsService {
         tracing::info!(
             tool = "secrets_env_map",
             ?user_id,
-            namespace = input.namespace.as_deref(),
-            kind = input.kind.as_deref(),
+            folder = input.folder.as_deref(),
+            entry_type = input.entry_type.as_deref(),
             prefix = input.prefix.as_deref().unwrap_or(""),
             "tool call start",
         );

         let env_map = secrets_core::service::env_map::build_env_map(
             &self.pool,
-            input.namespace.as_deref(),
-            input.kind.as_deref(),
+            input.folder.as_deref(),
+            input.entry_type.as_deref(),
             input.name.as_deref(),
             &tags,
             &only_fields,
@@ -506,7 +506,7 @@ async fn audit_page(
         .map(|row| AuditEntryView {
             created_at_iso: row.created_at.to_rfc3339_opts(SecondsFormat::Secs, true),
             action: row.action,
-            target: format_audit_target(&row.namespace, &row.kind, &row.name),
+            target: format_audit_target(&row.folder, &row.entry_type, &row.name),
             detail: serde_json::to_string_pretty(&row.detail).unwrap_or_else(|_| "{}".to_string()),
         })
         .collect();
@@ -783,11 +783,15 @@ fn render_template<T: Template>(tmpl: T) -> Result<Response, StatusCode> {
     Ok(Html(html).into_response())
 }

-fn format_audit_target(namespace: &str, kind: &str, name: &str) -> String {
-    // Auth events reuse kind/name as a provider-scoped target, not an entry identity.
-    if namespace == "auth" {
-        format!("{}/{}", kind, name)
+fn format_audit_target(folder: &str, entry_type: &str, name: &str) -> String {
+    // Auth events (folder="auth") use entry_type/name as provider-scoped target.
+    if folder == "auth" {
+        format!("{}/{}", entry_type, name)
+    } else if !folder.is_empty() && !entry_type.is_empty() {
+        format!("[{}/{}] {}", folder, entry_type, name)
+    } else if !folder.is_empty() {
+        format!("[{}] {}", folder, name)
     } else {
-        format!("[{}/{}] {}", namespace, kind, name)
+        name.to_string()
     }
 }
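Derived directly from the branches above, the formatter's output shapes are (test scaffolding is mine):

```rust
#[test]
fn audit_target_shapes() {
    // One assertion per branch of format_audit_target above.
    assert_eq!(format_audit_target("auth", "github", "alice"), "github/alice");
    assert_eq!(format_audit_target("work", "api_key", "gh"), "[work/api_key] gh");
    assert_eq!(format_audit_target("work", "", "gh"), "[work] gh");
    // An empty folder falls through to the bare name, even if type is set.
    assert_eq!(format_audit_target("", "api_key", "gh"), "gh");
}
```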
194  scripts/migrate-v0.3.0.sql  Normal file
@@ -0,0 +1,194 @@
-- ============================================================================
-- migrate-v0.3.0.sql
-- Schema migration from v0.2.x → v0.3.0
--
-- Changes:
--   • entries: namespace → folder, kind → type; add notes column
--   • audit_log: namespace → folder, kind → type
--   • entries_history: namespace → folder, kind → type; add user_id column
--   • Unique index: (user_id, name) → (user_id, folder, name)
--     Same name in different folders is now allowed; no rename needed.
--
-- Safe to run multiple times (fully idempotent).
-- Preserves all data in users, entries, secrets.
-- ============================================================================

BEGIN;

-- ── entries: rename namespace→folder, kind→type ──────────────────────────────
DO $$ BEGIN
  IF EXISTS (
    SELECT 1 FROM information_schema.columns
    WHERE table_name = 'entries' AND column_name = 'namespace'
  ) THEN
    ALTER TABLE entries RENAME COLUMN namespace TO folder;
  END IF;
END $$;

DO $$ BEGIN
  IF EXISTS (
    SELECT 1 FROM information_schema.columns
    WHERE table_name = 'entries' AND column_name = 'kind'
  ) THEN
    ALTER TABLE entries RENAME COLUMN kind TO type;
  END IF;
END $$;

-- Set NOT NULL + default for folder/type in entries
DO $$ BEGIN
  IF EXISTS (
    SELECT 1 FROM information_schema.columns
    WHERE table_name = 'entries' AND column_name = 'folder'
  ) THEN
    UPDATE entries SET folder = '' WHERE folder IS NULL;
    ALTER TABLE entries ALTER COLUMN folder SET NOT NULL;
    ALTER TABLE entries ALTER COLUMN folder SET DEFAULT '';
  END IF;
END $$;

DO $$ BEGIN
  IF EXISTS (
    SELECT 1 FROM information_schema.columns
    WHERE table_name = 'entries' AND column_name = 'type'
  ) THEN
    UPDATE entries SET type = '' WHERE type IS NULL;
    ALTER TABLE entries ALTER COLUMN type SET NOT NULL;
    ALTER TABLE entries ALTER COLUMN type SET DEFAULT '';
  END IF;
END $$;

-- Add notes column to entries if missing
ALTER TABLE entries ADD COLUMN IF NOT EXISTS notes TEXT NOT NULL DEFAULT '';

-- ── audit_log: rename namespace→folder, kind→type ────────────────────────────
DO $$ BEGIN
  IF EXISTS (
    SELECT 1 FROM information_schema.columns
    WHERE table_name = 'audit_log' AND column_name = 'namespace'
  ) THEN
    ALTER TABLE audit_log RENAME COLUMN namespace TO folder;
  END IF;
END $$;

DO $$ BEGIN
  IF EXISTS (
    SELECT 1 FROM information_schema.columns
    WHERE table_name = 'audit_log' AND column_name = 'kind'
  ) THEN
    ALTER TABLE audit_log RENAME COLUMN kind TO type;
  END IF;
END $$;

DO $$ BEGIN
  IF EXISTS (
    SELECT 1 FROM information_schema.columns
    WHERE table_name = 'audit_log' AND column_name = 'folder'
  ) THEN
    UPDATE audit_log SET folder = '' WHERE folder IS NULL;
    ALTER TABLE audit_log ALTER COLUMN folder SET NOT NULL;
    ALTER TABLE audit_log ALTER COLUMN folder SET DEFAULT '';
  END IF;
END $$;

DO $$ BEGIN
  IF EXISTS (
    SELECT 1 FROM information_schema.columns
    WHERE table_name = 'audit_log' AND column_name = 'type'
  ) THEN
    UPDATE audit_log SET type = '' WHERE type IS NULL;
    ALTER TABLE audit_log ALTER COLUMN type SET NOT NULL;
    ALTER TABLE audit_log ALTER COLUMN type SET DEFAULT '';
  END IF;
END $$;

ALTER TABLE audit_log DROP COLUMN IF EXISTS actor;

-- ── entries_history: rename namespace→folder, kind→type; add user_id ─────────
DO $$ BEGIN
  IF EXISTS (
    SELECT 1 FROM information_schema.columns
    WHERE table_name = 'entries_history' AND column_name = 'namespace'
  ) THEN
    ALTER TABLE entries_history RENAME COLUMN namespace TO folder;
  END IF;
END $$;

DO $$ BEGIN
  IF EXISTS (
    SELECT 1 FROM information_schema.columns
    WHERE table_name = 'entries_history' AND column_name = 'kind'
  ) THEN
    ALTER TABLE entries_history RENAME COLUMN kind TO type;
  END IF;
END $$;

DO $$ BEGIN
  IF EXISTS (
    SELECT 1 FROM information_schema.columns
    WHERE table_name = 'entries_history' AND column_name = 'folder'
  ) THEN
    UPDATE entries_history SET folder = '' WHERE folder IS NULL;
    ALTER TABLE entries_history ALTER COLUMN folder SET NOT NULL;
    ALTER TABLE entries_history ALTER COLUMN folder SET DEFAULT '';
  END IF;
END $$;

DO $$ BEGIN
  IF EXISTS (
    SELECT 1 FROM information_schema.columns
    WHERE table_name = 'entries_history' AND column_name = 'type'
  ) THEN
    UPDATE entries_history SET type = '' WHERE type IS NULL;
    ALTER TABLE entries_history ALTER COLUMN type SET NOT NULL;
    ALTER TABLE entries_history ALTER COLUMN type SET DEFAULT '';
  END IF;
END $$;

ALTER TABLE entries_history ADD COLUMN IF NOT EXISTS user_id UUID;
ALTER TABLE entries_history DROP COLUMN IF EXISTS actor;

-- ── secrets_history: drop actor column ───────────────────────────────────────
ALTER TABLE secrets_history DROP COLUMN IF EXISTS actor;

-- ── Rebuild unique indexes: (user_id, folder, name) ──────────────────────────
-- Note: folder is now part of the key, so same name in different folders is
-- naturally distinct — no rename of existing rows needed.
DROP INDEX IF EXISTS idx_entries_unique_legacy;
DROP INDEX IF EXISTS idx_entries_unique_user;

CREATE UNIQUE INDEX IF NOT EXISTS idx_entries_unique_legacy
  ON entries(folder, name)
  WHERE user_id IS NULL;

CREATE UNIQUE INDEX IF NOT EXISTS idx_entries_unique_user
  ON entries(user_id, folder, name)
  WHERE user_id IS NOT NULL;

-- ── Replace old namespace/kind indexes with folder/type ──────────────────────
DROP INDEX IF EXISTS idx_entries_namespace;
DROP INDEX IF EXISTS idx_entries_kind;
DROP INDEX IF EXISTS idx_audit_log_ns_kind;
DROP INDEX IF EXISTS idx_entries_history_ns_kind_name;

CREATE INDEX IF NOT EXISTS idx_entries_folder
  ON entries(folder) WHERE folder <> '';
CREATE INDEX IF NOT EXISTS idx_entries_type
  ON entries(type) WHERE type <> '';
CREATE INDEX IF NOT EXISTS idx_entries_user_id
  ON entries(user_id) WHERE user_id IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_audit_log_folder_type
  ON audit_log(folder, type);
CREATE INDEX IF NOT EXISTS idx_entries_history_folder_type_name
  ON entries_history(folder, type, name, version DESC);
CREATE INDEX IF NOT EXISTS idx_entries_history_user_id
  ON entries_history(user_id) WHERE user_id IS NOT NULL;

COMMIT;

-- ── Verification queries (run these manually to confirm) ─────────────────────
-- SELECT column_name, data_type FROM information_schema.columns
--   WHERE table_name = 'entries' ORDER BY ordinal_position;
-- SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 'entries';
-- SELECT COUNT(*) FROM entries;
-- SELECT COUNT(*) FROM users;
-- SELECT COUNT(*) FROM secrets;
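A minimal sketch that mechanizes the verification comments above after running the script. It assumes sqlx (suggested by the service's `PgPool`, not confirmed here), and the `verify_v0_3_0` helper name is mine:

```rust
use sqlx::PgPool;

// Sketch: automated form of the manual verification queries above.
async fn verify_v0_3_0(pool: &PgPool) -> Result<(), sqlx::Error> {
    // entries must expose the renamed/added columns.
    let cols: Vec<String> = sqlx::query_scalar(
        "SELECT column_name::text FROM information_schema.columns
         WHERE table_name = 'entries'",
    )
    .fetch_all(pool)
    .await?;
    for required in ["folder", "type", "notes"] {
        assert!(cols.iter().any(|c| c == required), "entries.{required} missing");
    }

    // The per-user unique key must now include folder.
    let def: Option<String> = sqlx::query_scalar(
        "SELECT indexdef FROM pg_indexes WHERE indexname = 'idx_entries_unique_user'",
    )
    .fetch_optional(pool)
    .await?;
    assert!(def.map_or(false, |d| d.contains("folder")), "unique index not rebuilt");

    // Compare counts against the numbers recorded before migrating.
    let entries: i64 = sqlx::query_scalar("SELECT COUNT(*) FROM entries")
        .fetch_one(pool)
        .await?;
    println!("entries after migration: {entries}");
    Ok(())
}
```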