Compare commits

...

8 Commits

Author SHA1 Message Date
voson
854720f10c chore: remove field_type and value_len from secrets schema
Some checks failed
Secrets CLI - Build & Release / Version & Release (push) Successful in 3s
Secrets CLI - Build & Release / Quality checks (fmt / clippy / test) (push) Successful in 2m34s
Secrets CLI - Build & Release / Build (macOS aarch64 + x86_64) (push) Successful in 1m3s
Secrets CLI - Build & Release / Build (x86_64-unknown-linux-musl) (push) Successful in 1m15s
Secrets CLI - Build & Release / Draft Release (push) Has been cancelled
Secrets CLI - Build & Release / Build (x86_64-pc-windows-msvc) (push) Has been cancelled
- Drop field_type, value_len from secrets and secrets_history tables
- Remove infer_field_type, compute_value_len from add.rs
- Simplify search output to field names only
- Update AGENTS.md, README.md documentation

Bump version to 0.9.4

Made-with: Cursor
2026-03-19 16:48:23 +08:00
voson
62a1df316b docs: README adds delete batch-deletion and --dry-run examples
Some checks failed
Secrets CLI - Build & Release / Version & Release (push) Successful in 3s
Secrets CLI - Build & Release / Quality checks (fmt / clippy / test) (push) Successful in 2m30s
Secrets CLI - Build & Release / Build (macOS aarch64 + x86_64) (push) Successful in 1m1s
Secrets CLI - Build & Release / Build (x86_64-unknown-linux-musl) (push) Successful in 1m17s
Secrets CLI - Build & Release / Draft Release (push) Has been cancelled
Secrets CLI - Build & Release / Build (x86_64-pc-windows-msvc) (push) Has been cancelled
Made-with: Cursor
2026-03-19 16:32:20 +08:00
voson
d0796e9c9a feat: delete supports batch deletion; --name is now optional
When --name is omitted, all matching records in the namespace (+ optional --kind) are deleted in batch;
--dry-run preview is supported; history is snapshotted and an audit log entry is written before deletion.
The standalone delete-ns subcommand is removed and merged into the unified delete entry point.
AGENTS.md updated; version bumped to 0.9.3.

Made-with: Cursor
2026-03-19 16:31:18 +08:00
voson
66b6417faa feat: open-source preparation and build-time upgrade URL configuration
- upgrade: SECRETS_UPGRADE_URL now prefers build time (option_env!); CI injects it automatically
- upgrade: runtime fallback (.env/export) supported; dotenvy added to load .env
- Generalized examples: IPs / instance IDs / domains / key names replaced with sample values (10.0.0.1, example.com, etc.)
- tasks.json: the file-secret test now uses test-fixtures/example-key.pem
- Docs updated: AGENTS.md, README.md

Made-with: Cursor
2026-03-19 16:08:27 +08:00
voson
56a28e8cf7 refactor: remove redundancy, unify the design, bump 0.9.1
Some checks failed
Secrets CLI - Build & Release / Version & Release (push) Successful in 3s
Secrets CLI - Build & Release / Quality checks (fmt / clippy / test) (push) Successful in 2m46s
Secrets CLI - Build & Release / Build (macOS aarch64 + x86_64) (push) Successful in 1m27s
Secrets CLI - Build & Release / Build (x86_64-unknown-linux-musl) (push) Successful in 2m0s
Secrets CLI - Build & Release / Draft Release (push) Has been cancelled
Secrets CLI - Build & Release / Build (x86_64-pc-windows-msvc) (push) Has been cancelled
- Extract EntryRow/SecretFieldRow into models.rs
- Extract shared current_actor() and print_json() helpers
- ExportFormat::from_extension reuses from_str
- fetch_entries defaults to a 100k limit (export/inject/run no longer truncate)
- Split history into its own history.rs module
- delete now takes a DeleteArgs struct
- config_dir now returns Result; Argon2id parameters extracted into constants
- Cargo dependencies use ^ prefixes; tokio features trimmed
- Update the AGENTS.md project structure

Made-with: Cursor
2026-03-19 15:46:57 +08:00
voson
12aec6675a feat: add export/import commands for batch backup (JSON/TOML/YAML)
Some checks failed
Secrets CLI - Build & Release / Draft Release (push) Has been cancelled
Secrets CLI - Build & Release / Build (x86_64-pc-windows-msvc) (push) Has been cancelled
Secrets CLI - Build & Release / Version & Release (push) Successful in 3s
Secrets CLI - Build & Release / Quality checks (fmt / clippy / test) (push) Successful in 2m14s
Secrets CLI - Build & Release / Build (macOS aarch64 + x86_64) (push) Successful in 1m3s
Secrets CLI - Build & Release / Build (x86_64-unknown-linux-musl) (push) Successful in 1m15s
- export: filter by namespace/kind/name/tag/query, decrypt secrets, write to file or stdout
- import: parse file, conflict check (error by default, --force to overwrite), --dry-run preview
- Add ExportFormat enum, ExportData/ExportEntry in models.rs with TOML↔JSON conversion
- Bump version to 0.9.0

Made-with: Cursor
2026-03-19 15:29:26 +08:00
voson
e1cd6e736c refactor: split into entries + secrets tables, search shows the field schema, key_ref PEM sharing
Some checks failed
Secrets CLI - Build & Release / Quality checks (fmt / clippy / test) (push) Successful in 1m57s
Secrets CLI - Build & Release / Version & Release (push) Successful in 3s
Secrets CLI - Build & Release / Build (macOS aarch64 + x86_64) (push) Successful in 51s
Secrets CLI - Build & Release / Build (x86_64-unknown-linux-musl) (push) Successful in 1m6s
Secrets CLI - Build & Release / Draft Release (push) Has been cancelled
Secrets CLI - Build & Release / Build (x86_64-pc-windows-msvc) (push) Has been cancelled
- Split the secrets table into entries (main table) + secrets (one row per field)
- search shows secrets field names, types, and lengths without the master_key
- inject/run support metadata.key_ref references to kind=key records; PEM rotation is O(1)
- entries_history + secrets_history field-level history; rollback restores by version
- Remove migration DROP statements; migrate is idempotent
- v0.8.0

Made-with: Cursor
2026-03-19 15:18:12 +08:00
voson
0a5317e477 feat: remove -o env from search command
Some checks failed
Secrets CLI - Build & Release / Version & Release (push) Successful in 3s
Secrets CLI - Build & Release / Quality checks (fmt / clippy / test) (push) Successful in 1m58s
Secrets CLI - Build & Release / Build (macOS aarch64 + x86_64) (push) Successful in 1m1s
Secrets CLI - Build & Release / Build (x86_64-unknown-linux-musl) (push) Successful in 1m2s
Secrets CLI - Build & Release / Draft Release (push) Has been cancelled
Secrets CLI - Build & Release / Build (x86_64-pc-windows-msvc) (push) Has been cancelled
- Remove OutputMode::Env from output.rs
- Remove env output branch and shell_quote from search.rs
- Update docs (AGENTS.md, README.md, main.rs help)

Bump version to 0.7.5

Made-with: Cursor
2026-03-19 14:33:38 +08:00
26 changed files with 2205 additions and 686 deletions


@@ -17,6 +17,7 @@ permissions:
 env:
   BINARY_NAME: secrets
+  SECRETS_UPGRADE_URL: ${{ github.server_url }}/api/v1/repos/${{ github.repository }}/releases/latest
   CARGO_INCREMENTAL: 0
   CARGO_NET_RETRY: 10
   CARGO_TERM_COLOR: always

.vscode/tasks.json (vendored, 2 changes)

@@ -142,7 +142,7 @@
     {
       "label": "test: add with file secret",
       "type": "shell",
-      "command": "echo '--- add key from file ---' && ./target/debug/secrets add -n test --kind key --name test-key --tag test -s content=@./refining/keys/Vultr && echo '--- verify metadata ---' && ./target/debug/secrets search -n test --kind key && echo '--- verify inject ---' && ./target/debug/secrets inject -n test --kind key --name test-key && echo '--- cleanup ---' && ./target/debug/secrets delete -n test --kind key --name test-key",
+      "command": "echo '--- add key from file ---' && ./target/debug/secrets add -n test --kind key --name test-key --tag test -s content=@./test-fixtures/example-key.pem && echo '--- verify metadata ---' && ./target/debug/secrets search -n test --kind key && echo '--- verify inject ---' && ./target/debug/secrets inject -n test --kind key --name test-key && echo '--- cleanup ---' && ./target/debug/secrets delete -n test --kind key --name test-key",
       "dependsOn": "build"
     }
   ]

AGENTS.md (267 changes)

@@ -7,7 +7,7 @@
 3. If a tag for the current version already exists, first bump `version` in `Cargo.toml`, then run `cargo build` to sync `Cargo.lock`, and only then commit.
 4. Before committing, prefer `./scripts/release-check.sh`; the script checks for duplicate versions and runs `cargo fmt -- --check && cargo clippy --locked -- -D warnings && cargo test --locked`
-Cross-device secrets and config management CLI that stores server info and service credentials for the refining / ricnsmart projects in PostgreSQL 18, giving AI tools readable context. Sensitive data (the `encrypted` column) is encrypted with AES-256-GCM; the master key is derived from a master password via Argon2id and stored in the platform secure store (macOS Keychain / Windows Credential Manager / Linux keyutils)
+Cross-device secrets and config management CLI that stores server info and service credentials in PostgreSQL 18, giving AI tools readable context. Each encrypted field is stored as its own row (the `secrets` child table) with field name, type, and length kept in plaintext; the master key is derived from a master password via Argon2id and stored in the platform secure store (macOS Keychain / Windows Credential Manager / Linux keyutils)
 ## Project structure
@@ -17,20 +17,23 @@ secrets/
   main.rs      # CLI entry: clap command definitions, auto-migrate, global --verbose flag
   output.rs    # OutputMode enum + TTY detection (TTY→text, non-TTY→json-compact)
   config.rs    # config read/write: ~/.config/secrets/config.toml (database_url)
-  db.rs        # PgPool creation + table/index DDL (idempotent; includes audit_log + kv_config + secrets_history)
+  db.rs        # PgPool creation + table/index DDL (DROP+CREATE; all tables)
   crypto.rs    # AES-256-GCM encrypt/decrypt, Argon2id derivation, OS keychain
-  models.rs    # Secret struct (sqlx::FromRow + serde, with version field)
-  audit.rs     # audit writes: log_tx (in-transaction) / log (kept as fallback)
+  models.rs    # Entry + SecretField structs (sqlx::FromRow + serde)
+  audit.rs     # audit writes: log_tx (in-transaction)
   commands/
     init.rs         # init command: master key initialization (once per device)
-    add.rs          # add command: upsert, transactional, with history snapshot; supports key:=json typed values and nested-path writes
+    add.rs          # add command: upsert entries + per-field writes to secrets, with history snapshot
    config.rs       # config command: set-db / show / path, persists database_url
-    search.rs       # search command: multi-criteria query; exposes fetch_rows / build_env_map
-    delete.rs       # delete command: transactional, with history snapshot
-    update.rs       # update command: incremental updates, CAS concurrency protection, with history snapshot
-    rollback.rs     # rollback / history commands: version rollback and history view
-    run.rs          # inject / run commands: temporary env var injection
+    search.rs       # search command: multi-criteria query, shows the secrets field schema (no master_key needed)
+    delete.rs       # delete command: transactional, CASCADE-deletes secrets, with history snapshot
+    update.rs       # update command: incremental updates, row-level UPSERT/DELETE on secrets, CAS concurrency protection
+    rollback.rs     # rollback command: restore entry + secrets by entry_version
+    history.rs      # history command: list entry change history
+    run.rs          # inject / run commands: per-field decryption + key_ref resolution
     upgrade.rs      # upgrade command: check, verify digest, download the latest version, self-replace the binary
+    export_cmd.rs   # export command: batch-export records as JSON/TOML/YAML (includes decrypted plaintext)
+    import_cmd.rs   # import command: batch-import records, conflict detection, dry-run, re-encrypt on write
 scripts/
   release-check.sh        # pre-release check for duplicate version/tag, runs fmt/clippy/test
   setup-gitea-actions.sh  # configure Gitea Actions variables and Secrets
@@ -44,19 +47,18 @@ secrets/
 - **Host**: `<host>:<port>`
 - **Database**: `secrets`
 - **Connection string**: `postgres://postgres:<password>@<host>:<port>/secrets`
-- **Tables**: `secrets` (main) + `audit_log` (audit) + `kv_config` (Argon2 salt etc.); auto-created on first connect (auto-migrate)
+- **Tables**: `entries` (main) + `secrets` (encrypted-field child table) + `entries_history` + `secrets_history` + `audit_log` + `kv_config`; auto-created on first connect (auto-migrate)
 ### Table schema
 ```sql
-secrets (
+entries (
   id UUID PRIMARY KEY DEFAULT uuidv7(),   -- PG18 time-ordered UUID
   namespace VARCHAR(64) NOT NULL,         -- first-level isolation: "refining" | "ricnsmart"
-  kind VARCHAR(64) NOT NULL,              -- type: "server" | "service" (extensible)
+  kind VARCHAR(64) NOT NULL,              -- type: "server" | "service" | "key" (extensible)
   name VARCHAR(256) NOT NULL,             -- human-readable identifier
   tags TEXT[] NOT NULL DEFAULT '{}',      -- flexible tags: ["aliyun","hongkong"]
   metadata JSONB NOT NULL DEFAULT '{}',   -- plaintext description: ip, desc, domains, location...
-  encrypted BYTEA NOT NULL DEFAULT '\x',  -- AES-256-GCM ciphertext: nonce(12B)||ciphertext+tag
   version BIGINT NOT NULL DEFAULT 1,      -- optimistic-lock version, incremented on every write
   created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
   updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
@@ -65,26 +67,22 @@ secrets (
 ```
 ```sql
-secrets_history (
-  id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
-  secret_id UUID NOT NULL,                -- matching secrets.id
-  namespace VARCHAR(64) NOT NULL,
-  kind VARCHAR(64) NOT NULL,
-  name VARCHAR(256) NOT NULL,
-  version BIGINT NOT NULL,                -- version at snapshot time
-  action VARCHAR(16) NOT NULL,            -- 'add' | 'update' | 'delete' | 'rollback'
-  tags TEXT[] NOT NULL DEFAULT '{}',
-  metadata JSONB NOT NULL DEFAULT '{}',
-  encrypted BYTEA NOT NULL DEFAULT '\x',  -- encrypted ciphertext at snapshot time
-  actor VARCHAR(128) NOT NULL DEFAULT '',
-  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
+secrets (
+  id UUID PRIMARY KEY DEFAULT uuidv7(),
+  entry_id UUID NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
+  field_name VARCHAR(256) NOT NULL,       -- plaintext field name: "username", "token", "ssh_key"
+  encrypted BYTEA NOT NULL DEFAULT '\x',  -- only the value is encrypted: nonce(12B)||ciphertext+tag
+  version BIGINT NOT NULL DEFAULT 1,
+  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
+  updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
+  UNIQUE(entry_id, field_name)
 )
 ```
 ```sql
 kv_config (
   key TEXT PRIMARY KEY,   -- e.g. 'argon2_salt'
   value BYTEA NOT NULL    -- Argon2id salt, generated on first device init
 )
 ```
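The `nonce(12B)||ciphertext+tag` blob layout noted in the `secrets.encrypted` comment can be sketched as a plain byte split before decryption. This is an illustrative stand-alone sketch (the constants and function name are assumptions, not the project's crypto.rs):

```python
# Illustrative sketch of the secrets.encrypted BYTEA layout:
# nonce (12 bytes) || AES-256-GCM ciphertext+tag (the GCM tag is 16 bytes).
NONCE_LEN = 12
TAG_LEN = 16

def split_blob(blob: bytes) -> tuple[bytes, bytes]:
    """Split a stored blob into (nonce, ciphertext_with_tag)."""
    if len(blob) < NONCE_LEN + TAG_LEN:
        raise ValueError("blob too short to contain nonce and tag")
    return blob[:NONCE_LEN], blob[NONCE_LEN:]

# A 12-byte nonce followed by a tag-only ciphertext (empty plaintext).
blob = bytes(12) + bytes(range(16))
nonce, ct = split_blob(blob)
assert len(nonce) == 12 and len(ct) == 16
```

A decryptor would pass `nonce` and `ct` to AES-256-GCM; storing the nonce inline keeps each field row self-contained.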
@@ -93,26 +91,81 @@ kv_config (
 ```sql
 audit_log (
   id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
-  action VARCHAR(32) NOT NULL,             -- 'add' | 'update' | 'delete'
+  action VARCHAR(32) NOT NULL,             -- 'add' | 'update' | 'delete' | 'rollback'
   namespace VARCHAR(64) NOT NULL,
   kind VARCHAR(64) NOT NULL,
   name VARCHAR(256) NOT NULL,
   detail JSONB NOT NULL DEFAULT '{}',      -- change summary: tags/meta keys/secret keys (no values)
   actor VARCHAR(128) NOT NULL DEFAULT '',  -- operator ($USER env var)
   created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
 )
 ```
+### entries_history schema
+```sql
+entries_history (
+  id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
+  entry_id UUID NOT NULL,
+  namespace VARCHAR(64) NOT NULL,
+  kind VARCHAR(64) NOT NULL,
+  name VARCHAR(256) NOT NULL,
+  version BIGINT NOT NULL,       -- version at snapshot time
+  action VARCHAR(16) NOT NULL,   -- 'add' | 'update' | 'delete' | 'rollback'
+  tags TEXT[] NOT NULL DEFAULT '{}',
+  metadata JSONB NOT NULL DEFAULT '{}',
+  actor VARCHAR(128) NOT NULL DEFAULT '',
+  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
+)
+```
+### secrets_history schema
+```sql
+secrets_history (
+  id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
+  entry_id UUID NOT NULL,
+  secret_id UUID NOT NULL,        -- matching secrets.id
+  entry_version BIGINT NOT NULL,  -- version of the matching entries_history row
+  field_name VARCHAR(256) NOT NULL,
+  encrypted BYTEA NOT NULL DEFAULT '\x',
+  action VARCHAR(16) NOT NULL,    -- 'add' | 'update' | 'delete' | 'rollback'
+  actor VARCHAR(128) NOT NULL DEFAULT '',
+  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
+)
+```
 ### Field responsibilities
 | Field | What it stores | Example |
 |------|--------|------|
 | `namespace` | project/team isolation | `refining`, `ricnsmart` |
-| `kind` | record type | `server`, `service` |
-| `name` | unique identifier | `i-uf63f2uookgs5uxmrdyc`, `gitea` |
+| `kind` | record type | `server`, `service`, `key` |
+| `name` | unique identifier | `i-example0abcd1234efgh`, `gitea` |
 | `tags` | multi-dimensional classification tags | `["aliyun","hongkong","ricn"]` |
-| `metadata` | plaintext non-sensitive info | `{"ip":"47.243.154.187","desc":"Grafana","domains":["..."]}` |
-| `encrypted` | sensitive credentials, AES-256-GCM encrypted | binary ciphertext, decrypting to `{"ssh_key":"...","password":"..."}` |
+| `metadata` | plaintext non-sensitive info | `{"ip":"192.0.2.1","desc":"Grafana","key_ref":"my-shared-key"}` |
+| `secrets.field_name` | encrypted-field name (plaintext) | `"username"`, `"token"`, `"ssh_key"` |
+| `secrets.encrypted` | only the encrypted value itself | AES-256-GCM ciphertext |
+### PEM sharing mechanism (key_ref)
+When one PEM is shared by multiple servers, store the PEM as a standalone `kind=key` record and let servers reference it via `metadata.key_ref`:
+```bash
+# 1. Store the shared PEM
+secrets add -n refining --kind key --name my-shared-key \
+  --tag aliyun --tag hongkong \
+  -s content=@./keys/my-shared-key.pem
+# 2. Servers reference it via metadata.key_ref (inject/run automatically merge the key's secrets)
+secrets add -n refining --kind server --name i-example0xyz789 \
+  -m ip=192.0.2.1 -m key_ref=my-shared-key \
+  -s username=ecs-user
+# 3. Rotation only updates the key record; every referencing server picks it up automatically
+secrets update -n refining --kind key --name my-shared-key \
+  -s content=@./keys/new-key.pem
+```
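The merge behavior in step 2 (inject/run combine a server's own secret fields with those of the referenced `kind=key` record) can be sketched as follows. The function, record shapes, and the precedence of the entry's own fields are illustrative assumptions, not the CLI's internals:

```python
# Hypothetical sketch of key_ref resolution: the fields of the kind=key record
# named by metadata.key_ref are merged with the entry's own secret fields.
def resolve_secrets(entry: dict, key_entries: dict) -> dict:
    merged = {}
    key_ref = entry.get("metadata", {}).get("key_ref")
    if key_ref and key_ref in key_entries:
        merged.update(key_entries[key_ref]["secrets"])  # shared PEM fields first
    merged.update(entry["secrets"])  # assumed: the entry's own fields win on conflict
    return merged

server = {"metadata": {"key_ref": "my-shared-key"}, "secrets": {"username": "ecs-user"}}
keys = {"my-shared-key": {"secrets": {"content": "-----BEGIN PRIVATE KEY-----..."}}}
assert set(resolve_secrets(server, keys)) == {"username", "content"}
```

Because the reference is resolved at read time, rotating the key record is O(1): no referencing server row needs to change.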
 ## Database configuration
@@ -153,7 +206,6 @@ secrets init   # prompts for the master password; Argon2id derives the master key and stores it in the OS keychain
 - TTY (run directly in a terminal) → default `text`
 - non-TTY (pipe/redirect/AI invocation) → automatic `json-compact`
 - explicit `-o json` → pretty-printed JSON
-- explicit `-o env` → KEY=VALUE (sourceable)
 ---
@@ -173,16 +225,16 @@ secrets init
 # Parameter reference (with typical values)
 # -n / --namespace   refining | ricnsmart
 # --kind             server | service
-# --name             gitea | i-uf63f2uookgs5uxmrdyc | mqtt
+# --name             gitea | i-example0abcd1234efgh | mqtt
 # --tag              aliyun | hongkong | production
 # -q / --query       mqtt | grafana | gitea (fuzzy match on name/namespace/kind/tags/metadata)
-# --show-secrets     deprecated; search no longer shows secrets directly
+# secrets schema     search shows secrets field names, types, and lengths by default (no master_key needed)
 # -f / --field       metadata.ip | metadata.url | metadata.default_org
 # --summary          value-less flag, returns a summary only (name/tags/desc/updated_at)
 # --limit            20 | 50 (default 50)
 # --offset           0 | 10 | 20 (pagination offset)
 # --sort             name (default) | updated | created
-# -o / --output      text | json | json-compact | env
+# -o / --output      text | json | json-compact
 # Discovery overview (recommended starting point)
 secrets search --summary --limit 20
@@ -191,7 +243,7 @@ secrets search --sort updated --limit 10 --summary
 # Pinpoint a single record
 secrets search -n refining --kind service --name gitea
-secrets search -n refining --kind server --name i-uf63f2uookgs5uxmrdyc
+secrets search -n refining --kind server --name i-example0abcd1234efgh
 # Pinpoint and fetch the full record (secrets stay as encrypted placeholders)
 secrets search -n refining --kind service --name gitea -o json
@@ -208,7 +260,7 @@ secrets run -n refining --kind service --name gitea -- printenv
 # Fuzzy keyword search
 secrets search -q mqtt
 secrets search -q grafana
-secrets search -q 47.117
+secrets search -q 192.0.2
 # Filter by criteria
 secrets search -n refining --kind service
@@ -222,10 +274,6 @@ secrets search -n refining --summary --limit 10 --offset 10
 # Pipe / AI invocation (non-TTY → automatic json-compact)
 secrets search -n refining --kind service | jq '.[].name'
-# Export metadata to an env file (single record)
-secrets search -n refining --kind service --name gitea -o env \
-    > ~/.config/gitea/config.env
 ```
 ---
@@ -236,31 +284,31 @@ secrets search -n refining --kind service --name gitea -o env \
 # Parameter reference (with typical values)
 # -n / --namespace   refining | ricnsmart
 # --kind             server | service
-# --name             gitea | i-uf63f2uookgs5uxmrdyc
+# --name             gitea | i-example0abcd1234efgh
 # --tag              aliyun | hongkong (repeatable)
-# -m / --meta        ip=47.117.131.22 | desc="Aliyun ECS" | url=https://... | tls:cert@./cert.pem (repeatable)
+# -m / --meta        ip=10.0.0.1 | desc="ECS" | url=https://... | tls:cert@./cert.pem (repeatable)
 # -s / --secret      token=<value> | ssh_key=@./key.pem | password=secret123 | credentials:content@./key.pem (repeatable)
 # Add a server
-secrets add -n refining --kind server --name i-uf63f2uookgs5uxmrdyc \
+secrets add -n refining --kind server --name i-example0abcd1234efgh \
   --tag aliyun --tag shanghai \
-  -m ip=47.117.131.22 -m desc="Aliyun Shanghai ECS" \
-  -s username=root -s ssh_key=@./keys/voson_shanghai_e.pem
+  -m ip=10.0.0.1 -m desc="Aliyun Shanghai ECS" \
+  -s username=root -s ssh_key=@./keys/deploy-key.pem
 # Add service credentials
 secrets add -n refining --kind service --name gitea \
   --tag gitea \
-  -m url=https://gitea.refining.dev -m default_org=refining -m username=voson \
+  -m url=https://code.example.com -m default_org=refining -m username=voson \
   -s token=<token> -s runner_token=<runner_token>
 # Read a token from a file
 secrets add -n ricnsmart --kind service --name mqtt \
-  -m host=mqtt.ricnsmart.com -m port=1883 \
+  -m host=mqtt.example.com -m port=1883 \
   -s password=@./mqtt_password.txt
 # Write a multi-line file directly into a nested secret field
-secrets add -n refining --kind server --name i-uf63f2uookgs5uxmrdyc \
-  -s credentials:content@./keys/voson_shanghai_e.pem
+secrets add -n refining --kind server --name i-example0abcd1234efgh \
+  -s credentials:content@./keys/deploy-key.pem
 # Store non-string types with typed values (key:=<json>)
 secrets add -n refining --kind service --name prometheus \
@@ -280,7 +328,7 @@ secrets add -n refining --kind service --name prometheus \
 # Parameter reference (with typical values)
 # -n / --namespace   refining | ricnsmart
 # --kind             server | service
-# --name             gitea | i-uf63f2uookgs5uxmrdyc
+# --name             gitea | i-example0abcd1234efgh
 # --add-tag          production | backup (leaves existing tags intact, repeatable)
 # --remove-tag       staging | deprecated (repeatable)
 # -m / --meta        ip=10.0.0.1 | desc="new description" | credentials:username=root (add or overwrite, repeatable)
@@ -289,7 +337,7 @@ secrets add -n refining --kind service --name prometheus \
 # --remove-secret    old_password | deprecated_key | credentials:content (delete secret fields, repeatable)
 # Update a single metadata field
-secrets update -n refining --kind server --name i-uf63f2uookgs5uxmrdyc \
+secrets update -n refining --kind server --name i-example0abcd1234efgh \
   -m ip=10.0.0.1
 # Rotate a token
@@ -306,11 +354,11 @@ secrets update -n refining --kind service --name mqtt \
   --remove-meta old_port --remove-secret old_password
 # Update a nested secret field from a file
-secrets update -n refining --kind server --name i-uf63f2uookgs5uxmrdyc \
-  -s credentials:content@./keys/voson_shanghai_e.pem
+secrets update -n refining --kind server --name i-example0abcd1234efgh \
+  -s credentials:content@./keys/deploy-key.pem
 # Delete a nested field
-secrets update -n refining --kind server --name i-uf63f2uookgs5uxmrdyc \
+secrets update -n refining --kind server --name i-example0abcd1234efgh \
   --remove-secret credentials:content
 # Remove a tag
@@ -319,19 +367,34 @@ secrets update -n refining --kind service --name gitea --remove-tag staging
 ---
-### delete — delete records
+### delete — delete records (single precise delete and batch delete)
+Deletion automatically snapshots the entry and all associated secret fields to the history tables and writes an audit log entry; records can be restored with `rollback`.
 ```bash
 # Parameter reference (with typical values)
-# -n / --namespace   refining | ricnsmart
-# --kind             server | service
-# --name             gitea | i-uf63f2uookgs5uxmrdyc (must match exactly)
+# -n / --namespace   refining | ricnsmart (required)
+# --kind             server | service (required with --name; optional for batch)
+# --name             gitea | i-example0abcd1234efgh (exact match; omit for batch deletion)
+# --dry-run          preview the records to delete without writing (batch mode only)
+# -o / --output      text | json | json-compact
-# Delete service credentials
+# Precisely delete a single record (--kind required)
 secrets delete -n refining --kind service --name legacy-mqtt
-# Delete a server record
 secrets delete -n ricnsmart --kind server --name i-old-server-id
+# Preview a batch delete (no database writes)
+secrets delete -n refining --dry-run
+secrets delete -n ricnsmart --kind server --dry-run
+# Batch-delete every record in a namespace
+secrets delete -n ricnsmart
+# Batch-delete all records of a kind within a namespace
+secrets delete -n ricnsmart --kind server
+# JSON output
+secrets delete -n refining --kind service -o json
 ```
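The batch-delete scoping described above (omit `--name` to match every record in the namespace, optionally narrowed by `--kind`) amounts to a simple filter; this is an illustrative sketch with hypothetical record dicts, not the CLI's SQL:

```python
# Illustrative sketch of batch-delete scoping: when name is None (--name
# omitted), every record in the namespace matches; kind narrows further.
def select_for_delete(entries, namespace, kind=None, name=None):
    return [
        e for e in entries
        if e["namespace"] == namespace
        and (kind is None or e["kind"] == kind)
        and (name is None or e["name"] == name)
    ]

entries = [
    {"namespace": "ricnsmart", "kind": "server", "name": "a"},
    {"namespace": "ricnsmart", "kind": "service", "name": "b"},
    {"namespace": "refining", "kind": "server", "name": "c"},
]
assert len(select_for_delete(entries, "ricnsmart")) == 2
assert len(select_for_delete(entries, "ricnsmart", kind="server")) == 1
```

With `--dry-run`, only this selection step runs and the matches are printed; the snapshot and delete writes are skipped.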
 ---
@@ -430,7 +493,9 @@ secrets run -n refining --kind service --name gitea -- printenv
 ### upgrade — self-update the CLI binary
-Downloads the latest version from the Gitea Release, verifies the matching `.sha256` digest, and replaces the current binary; no database connection or master key needed.
+Downloads the latest version from the release server, verifies the matching `.sha256` digest, and replaces the current binary; no database connection or master key needed.
+**Configuration**: `SECRETS_UPGRADE_URL` is required. Prefer setting it at **build time** (`SECRETS_UPGRADE_URL=https://... cargo build`; CI injects it automatically), or at **runtime**: put it in `.env` or `export` it before running.
 ```bash
 # Check for a newer version (no download)
@@ -442,6 +507,75 @@ secrets upgrade
 ---
+### export — batch-export records
+Exports matching records (including decrypted plaintext secrets) to a file or stdout. Supports JSON, TOML, and YAML; the file format is inferred from the extension. With `--no-secrets`, no master key is needed.
+```bash
+# Parameter reference
+# -n / --namespace   refining | ricnsmart
+# --kind             server | service
+# --name             gitea | i-example0abcd1234efgh
+# --tag              aliyun | production (repeatable)
+# -q / --query       fuzzy keyword
+# --file <path>      output file path; format inferred from the extension (.json / .toml / .yaml / .yml)
+# --format           json | toml | yaml (explicit format; required when writing to stdout)
+# --no-secrets       do not export secrets (no master key needed)
+# Full export to a JSON file
+secrets export --file backup.json
+# Export a namespace as TOML
+secrets export -n refining --file refining.toml
+# Export a kind as YAML
+secrets export -n refining --kind service --file services.yaml
+# Export filtered by tag
+secrets export --tag production --file prod.json
+# Export by fuzzy keyword
+secrets export -q mqtt --file mqtt.json
+# Export the schema only (no secrets, no master key needed)
+secrets export --no-secrets --file schema.json
+# Write to stdout (--format required)
+secrets export -n refining --format yaml
+secrets export --format json | jq '.'
+```
+---
+### import — batch-import records
+Reads records from an export file and writes them to the database, re-encrypting secrets automatically. Supports JSON, TOML, and YAML; the file format is inferred from the extension.
+```bash
+# Parameter reference
+# <file>          required, input file path (format inferred from the extension)
+# --force         overwrite existing records on conflict (default: error and stop)
+# --dry-run       preview the operations without writing to the database
+# -o / --output   text | json | json-compact
+# Import a JSON file (errors if a record already exists)
+secrets import backup.json
+# Import a TOML file, overwriting on conflict
+secrets import --force refining.toml
+# Import a YAML file, overwriting on conflict
+secrets import --force services.yaml
+# Preview the operations (no writes)
+secrets import --dry-run backup.json
+# Import summary as JSON
+secrets import backup.json -o json
+```
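The import conflict rules above (error by default, `--force` to overwrite, `--dry-run` to preview) can be sketched as a plain decision function. The function name and record shapes are illustrative, not the CLI's internals:

```python
# Illustrative sketch of import conflict handling: an existing
# (namespace, kind, name) triple is an error unless force=True (--force);
# with --dry-run the caller computes this plan but skips the actual writes.
def plan_import(incoming, existing, force=False):
    plan = []
    for record in incoming:
        key = (record["namespace"], record["kind"], record["name"])
        if key in existing:
            if not force:
                raise ValueError(f"conflict: {key} already exists (use --force)")
            plan.append(("overwrite", key))
        else:
            plan.append(("create", key))
    return plan

existing = {("refining", "service", "gitea")}
incoming = [
    {"namespace": "refining", "kind": "service", "name": "gitea"},
    {"namespace": "refining", "kind": "service", "name": "mqtt"},
]
assert plan_import(incoming, existing, force=True) == [
    ("overwrite", ("refining", "service", "gitea")),
    ("create", ("refining", "service", "mqtt")),
]
```

Failing fast on the first conflict (rather than partially importing) keeps a default import all-or-nothing.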
 ---
 ### config — configuration management (no master key needed)
 ```bash
@@ -541,5 +675,6 @@ cargo fmt -- --check && cargo clippy -- -D warnings && cargo test
 |------|------|
 | `RUST_LOG` | log level, e.g. `secrets=debug` or `secrets=trace` (default warn) |
 | `USER` | source of the audit-log actor field (set by the shell, normally no manual setup) |
+| `SECRETS_UPGRADE_URL` | Release API address for upgrade; set at build time (cargo build) or at runtime (.env/export) |
 The database connection is persisted to `~/.config/secrets/config.toml` via `secrets config set-db`; environment variables are not supported.

Cargo.lock (generated, 24 changes)

@@ -1836,7 +1836,7 @@ checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"
 [[package]]
 name = "secrets"
-version = "0.7.4"
+version = "0.9.4"
 dependencies = [
  "aes-gcm",
  "anyhow",
@@ -1844,6 +1844,7 @@ dependencies = [
  "chrono",
  "clap",
  "dirs",
+ "dotenvy",
  "flate2",
  "keyring",
  "rand 0.10.0",
@@ -1853,6 +1854,7 @@ dependencies = [
  "semver",
  "serde",
  "serde_json",
+ "serde_yaml",
  "sha2",
  "sqlx",
  "tar",
@@ -1982,6 +1984,19 @@ dependencies = [
  "serde",
 ]
+[[package]]
+name = "serde_yaml"
+version = "0.9.34+deprecated"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "6a8b1a1a2ebf674015cc02edccce75287f1a0130d394307b36743c2f5d504b47"
+dependencies = [
+ "indexmap",
+ "itoa",
+ "ryu",
+ "serde",
+ "unsafe-libyaml",
+]
 [[package]]
 name = "sha1"
 version = "0.10.6"
@@ -2434,7 +2449,6 @@ dependencies = [
  "bytes",
  "libc",
  "mio",
- "parking_lot",
  "pin-project-lite",
  "signal-hook-registry",
  "socket2",
@@ -2681,6 +2695,12 @@ dependencies = [
  "subtle",
 ]
+[[package]]
+name = "unsafe-libyaml"
+version = "0.2.11"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "673aac59facbab8a9007c7f6108d11f63b603f7cabff99fabf650fea5c32b861"
 [[package]]
 name = "untrusted"
 version = "0.9.0"

Cargo.toml

@@ -1,31 +1,33 @@
 [package]
 name = "secrets"
-version = "0.7.4"
+version = "0.9.4"
 edition = "2024"
 [dependencies]
-aes-gcm = "0.10.3"
-anyhow = "1.0.102"
-argon2 = { version = "0.5.3", features = ["std"] }
-chrono = { version = "0.4.44", features = ["serde"] }
-clap = { version = "4.6.0", features = ["derive"] }
-dirs = "6.0.0"
-flate2 = "1.1.9"
-keyring = { version = "3.6.3", features = ["apple-native", "windows-native", "linux-native"] }
-rand = "0.10.0"
-reqwest = { version = "0.12", default-features = false, features = ["rustls-tls", "json"] }
-rpassword = "7.4.0"
-self-replace = "1.5.0"
-semver = "1.0.27"
-serde = { version = "1.0.228", features = ["derive"] }
-serde_json = "1.0.149"
-sha2 = "0.10.9"
-sqlx = { version = "0.8.6", features = ["runtime-tokio", "tls-rustls", "postgres", "uuid", "json", "chrono"] }
-tar = "0.4.44"
-tempfile = "3.19"
-tokio = { version = "1.50.0", features = ["full"] }
-toml = "1.0.7"
-tracing = "0.1"
-tracing-subscriber = { version = "0.3", features = ["env-filter"] }
-uuid = { version = "1.22.0", features = ["serde"] }
-zip = { version = "8.2.0", default-features = false, features = ["deflate"] }
+aes-gcm = "^0.10.3"
+anyhow = "^1.0.102"
+argon2 = { version = "^0.5.3", features = ["std"] }
+chrono = { version = "^0.4.44", features = ["serde"] }
+clap = { version = "^4.6.0", features = ["derive"] }
+dirs = "^6.0.0"
+dotenvy = "^0.15"
+flate2 = "^1.1.9"
+keyring = { version = "^3.6.3", features = ["apple-native", "windows-native", "linux-native"] }
+rand = "^0.10.0"
+reqwest = { version = "^0.12", default-features = false, features = ["rustls-tls", "json"] }
+rpassword = "^7.4.0"
+self-replace = "^1.5.0"
+semver = "^1.0.27"
+serde = { version = "^1.0.228", features = ["derive"] }
+serde_json = "^1.0.149"
+serde_yaml = "^0.9"
+sha2 = "^0.10.9"
+sqlx = { version = "^0.8.6", features = ["runtime-tokio", "tls-rustls", "postgres", "uuid", "json", "chrono"] }
+tar = "^0.4.44"
+tempfile = "^3.19"
+tokio = { version = "^1.50.0", features = ["rt-multi-thread", "macros", "fs", "io-util", "process", "signal"] }
+toml = "^1.0.7"
+tracing = "^0.1"
+tracing-subscriber = { version = "^0.3", features = ["env-filter"] }
+uuid = { version = "^1.22.0", features = ["serde"] }
+zip = { version = "^8.2.0", default-features = false, features = ["deflate"] }

README.md

@@ -2,7 +2,7 @@
Cross-device secret and configuration management CLI, built on Rust + PostgreSQL 18.
Server information and service credentials live in one database, where local tools and AI can read them as context. Each sensitive field is stored as its own row (the `secrets` child table); field names are kept in plaintext so AI tools can understand the schema, while only the values themselves are encrypted with AES-256-GCM. The master key is derived from the master password via Argon2id and stored in the OS keychain.

## Installation

@@ -54,7 +54,7 @@ secrets search --sort updated --limit 10 --summary
# Exact lookup (namespace + kind + name triple)
secrets search -n refining --kind service --name gitea

# Fetch the full record (secrets field names only; no master_key needed)
secrets search -n refining --kind service --name gitea -o json

# Extract a single metadata field value directly (shortest path)
@@ -69,14 +69,13 @@ secrets inject -n refining --kind service --name gitea
secrets run -n refining --kind service --name gitea -- printenv
```

`search` shows metadata and the field names of secrets, never the secret values themselves; use `inject` / `run` when you need the values.

### Output formats

| Scenario | Recommended command |
|------|----------|
| AI parsing / pipelines | `-o json` or `-o json-compact` |
| Injecting secrets into environment variables | `inject` / `run` |
| Human reading | default `text` (auto-enabled on a TTY) |
| Non-TTY (pipes/redirects) | automatic `json-compact` |

@@ -87,10 +86,6 @@ secrets run -n refining --kind service --name gitea -- printenv
# Pipe straight into jq (non-TTY auto-selects json-compact)
secrets search -n refining --kind service | jq '.[].name'

# When you need the secret values, use inject / run
secrets inject -n refining --kind service --name gitea > ~/.config/gitea/secrets.env
secrets run -n refining --kind service --name gitea -- ./deploy.sh

@@ -108,6 +103,8 @@ secrets update --help
secrets delete --help
secrets config --help
secrets upgrade --help   # check for and install a newer CLI version
secrets export --help    # bulk export (JSON/TOML/YAML)
secrets import --help    # bulk import (JSON/TOML/YAML)

# ── search ──────────────────────────────────────────────────────────────────
secrets search --summary --limit 20                          # discovery overview
@@ -116,14 +113,14 @@ secrets search -n refining --kind service --name gitea   # exact lookup
secrets search -q mqtt                                       # fuzzy keyword search
secrets search --tag hongkong                                # filter by tag
secrets search -n refining --kind service --name gitea -f metadata.url   # extract a metadata field
secrets search -n refining --kind service --name gitea -o json           # full record (with secrets schema)
secrets search --sort updated --limit 10 --summary           # recently changed
secrets search -n refining --summary --limit 10 --offset 10  # pagination

# ── add ──────────────────────────────────────────────────────────────────────
secrets add -n refining --kind server --name my-server \
  --tag aliyun --tag shanghai \
  -m ip=10.0.0.1 -m desc="Example ECS" \
  -s username=root -s ssh_key=@./keys/server.pem

# Write a multi-line file straight into a nested secret field
@@ -139,7 +136,7 @@ secrets add -n refining --kind service --name deploy-bot \
secrets add -n refining --kind service --name gitea \
  --tag gitea \
  -m url=https://code.example.com -m default_org=myorg \
  -s token=<token>

# ── update ───────────────────────────────────────────────────────────────────
@@ -149,7 +146,10 @@ secrets update -n refining --kind service --name mqtt --remove-meta old_port --r
secrets update -n refining --kind server --name my-server --remove-secret credentials:content

# ── delete ───────────────────────────────────────────────────────────────────
secrets delete -n refining --kind service --name legacy-mqtt   # delete one exact record (--kind required)
secrets delete -n refining --dry-run                           # preview a bulk delete (no writes)
secrets delete -n ricnsmart                                    # bulk-delete an entire namespace
secrets delete -n ricnsmart --kind server                      # bulk-delete a specific kind

# ── init ─────────────────────────────────────────────────────────────────────
secrets init   # master key setup (once per device; master password at least 8 chars, derived key stored in the keychain)
@@ -161,7 +161,21 @@ secrets config path   # print the config file path

# ── upgrade ──────────────────────────────────────────────────────────────────
secrets upgrade --check   # only check whether a newer version exists
secrets upgrade           # download, verify SHA-256, and install the latest release (self-hostable via SECRETS_UPGRADE_URL)

# ── export ────────────────────────────────────────────────────────────────────
secrets export --file backup.json                           # full export to JSON
secrets export -n refining --file refining.toml             # export a namespace as TOML
secrets export -n refining --kind service --file svc.yaml   # export a kind as YAML
secrets export --tag production --file prod.json            # filter by tag
secrets export -q mqtt --file mqtt.json                     # fuzzy-search export
secrets export --no-secrets --file schema.json              # schema only (no master key needed)
secrets export -n refining --format yaml                    # write to stdout in an explicit format

# ── import ────────────────────────────────────────────────────────────────────
secrets import backup.json            # import (errors on conflict)
secrets import --force refining.toml  # overwrite existing records on conflict
secrets import --dry-run backup.yaml  # preview the planned operations (no writes)

# ── debug ────────────────────────────────────────────────────────────────────
secrets --verbose search -q mqtt
@@ -170,18 +184,21 @@ RUST_LOG=secrets=trace secrets search

## Data model

A main `entries` table (namespace, kind, name, tags, metadata) plus a `secrets` child table (one row per encrypted field: field_name, encrypted). Tables are created automatically on first connection, together with `audit_log`, `entries_history`, `secrets_history`, and related tables.

| Location | Field | Description |
|------|------|------|
| entries | namespace | top-level isolation, e.g. `refining`, `ricnsmart` |
| entries | kind | record type, e.g. `server`, `service`, `key` (freely extensible) |
| entries | name | human-readable unique identifier |
| entries | tags | multi-dimensional labels, e.g. `["aliyun","hongkong"]` |
| entries | metadata | plaintext description (ip, desc, domains, key_ref, etc.) |
| secrets | field_name | plaintext (visible in search, so AI can infer which variables `inject` will produce) |
| secrets | encrypted | only the value itself, AES-256-GCM |

`-m` / `--meta` writes into `metadata`; `-s` / `--secret` writes individual rows in the `secrets` table. Supported forms: `key=value`, `key=@file`, `key:=<json>`, plus `credentials:content@./key.pem` for writing a file into a nested field; removal supports `--remove-secret credentials:content`. Encryption and decryption use the master key (set via `secrets init`).

**Sharing a PEM**: when one PEM is shared by several servers, store it as a `kind=key` record and have the servers reference it via `metadata.key_ref`; rotation then only takes updating that single key record, and every reference picks the new value up automatically. See [AGENTS.md](AGENTS.md) for details.
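The key_ref pattern can be sketched with the commands documented above (a minimal sketch: `shared-ops`, the file paths, and the exact value format of `key_ref` are hypothetical examples here — consult AGENTS.md for the real reference syntax):

```bash
# Store the shared PEM once, as its own kind=key record
secrets add -n refining --kind key --name shared-ops \
  -s content=@./keys/shared-ops.pem

# Servers reference the key record via metadata.key_ref instead of embedding the PEM
secrets add -n refining --kind server --name web-1 \
  -m ip=10.0.0.1 -m key_ref=shared-ops \
  -s username=root

# Rotation: update the single key record; every referencing server picks it up
secrets update -n refining --kind key --name shared-ops \
  -s content=@./keys/shared-ops-rotated.pem
```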
### `-m` / `--meta` JSON syntax cheat sheet

@@ -189,12 +206,12 @@ RUST_LOG=secrets=trace secrets search
| Target value | Example | Actually stored |
|------|------|------|
| Plain string | `-m url=https://code.example.com` | `"https://code.example.com"` |
| File contents as string | `-m notes=@./service-notes.txt` | `"..."` |
| Boolean | `-m enabled:=true` | `true` |
| Number | `-m port:=3000` | `3000` |
| `null` | `-m deprecated_at:=null` | `null` |
| Array | `-m domains:='["code.example.com","git.example.com"]'` | `["code.example.com","git.example.com"]` |
| Object | `-m tls:='{"enabled":true,"redirect_http":true}'` | `{"enabled":true,"redirect_http":true}` |
| Nested path + JSON | `-m deploy:strategy:='{"type":"rolling","batch":2}'` | `{"deploy":{"strategy":{"type":"rolling","batch":2}}}` |

@@ -209,10 +226,10 @@ RUST_LOG=secrets=trace secrets search
```bash
secrets add -n refining --kind service --name gitea \
  -m url=https://code.example.com \
  -m port:=3000 \
  -m enabled:=true \
  -m domains:='["code.example.com","git.example.com"]' \
  -m tls:='{"enabled":true,"redirect_http":true}'
```

@@ -285,18 +302,22 @@ src/
main.rs        # CLI entry point (clap), with after_help examples per subcommand
output.rs      # OutputMode enum + TTY detection
config.rs      # config read/write (~/.config/secrets/config.toml)
db.rs          # connection pool + auto-migrate (entries + secrets + entries_history + secrets_history + audit_log + kv_config)
crypto.rs      # AES-256-GCM encrypt/decrypt, Argon2id derivation, OS keychain
models.rs      # Entry + SecretField structs
audit.rs       # audit log writes (audit_log table)
commands/
  init.rs      # master key initialization (first run / new device)
  add.rs       # upserts entries + secrets rows, supports -o json
  config.rs    # config set-db/show/path
  search.rs    # multi-criteria queries, shows the secrets schema, -f/-o/--summary/--limit/--offset/--sort
  delete.rs    # delete (CASCADE also removes secrets)
  update.rs    # incremental updates (tags/metadata + row-level UPSERT/DELETE of secrets)
  rollback.rs  # rollback / history (restore by entry_version)
  run.rs       # inject / run (per-field decryption + key_ref resolution)
  upgrade.rs   # self-update from Gitea Releases
  export_cmd.rs # export: bulk export (JSON/TOML/YAML, includes decrypted plaintext)
  import_cmd.rs # import: bulk import (conflict detection, dry-run, re-encrypted on write)
scripts/
  setup-gitea-actions.sh # configure Gitea Actions variables and secrets
```


@@ -1,6 +1,11 @@
use serde_json::Value;
use sqlx::{Postgres, Transaction};

/// Return the current OS user as the audit actor (falls back to empty string).
pub fn current_actor() -> String {
    std::env::var("USER").unwrap_or_default()
}

/// Write an audit entry within an existing transaction.
pub async fn log_tx(
    tx: &mut Transaction<'_, Postgres>,
@@ -10,7 +15,7 @@ pub async fn log_tx(
    name: &str,
    detail: Value,
) {
    let actor = current_actor();
    let result: Result<_, sqlx::Error> = sqlx::query(
        "INSERT INTO audit_log (action, namespace, kind, name, detail, actor) \
         VALUES ($1, $2, $3, $4, $5, $6)",


@@ -5,7 +5,10 @@ use std::fs;
use crate::crypto;
use crate::db;
use crate::models::EntryRow;
use crate::output::{OutputMode, print_json};

// ── Key/value parsing helpers (shared with update.rs) ───────────────────────

/// Parse secret / metadata entries into a nested key path and JSON value.
/// - `key=value` → stores the literal string `value`
@@ -158,6 +161,30 @@ pub(crate) fn remove_path(map: &mut Map<String, Value>, path: &[String]) -> Resu
    Ok(removed)
}

/// Flatten a (potentially nested) JSON object into dot-separated field entries.
/// e.g. `{"credentials": {"type": "ssh", "content": "..."}}` →
/// `[("credentials.type", "ssh"), ("credentials.content", "...")]`
/// Top-level non-object values are emitted directly.
pub(crate) fn flatten_json_fields(prefix: &str, value: &Value) -> Vec<(String, Value)> {
    match value {
        Value::Object(map) => {
            let mut out = Vec::new();
            for (k, v) in map {
                let full_key = if prefix.is_empty() {
                    k.clone()
                } else {
                    format!("{}.{}", prefix, k)
                };
                out.extend(flatten_json_fields(&full_key, v));
            }
            out
        }
        other => vec![(prefix.to_string(), other.clone())],
    }
}

// ── Add command ──────────────────────────────────────────────────────────────

pub struct AddArgs<'a> {
    pub namespace: &'a str,
    pub kind: &'a str,
@@ -171,26 +198,17 @@ pub struct AddArgs<'a> {
pub async fn run(pool: &PgPool, args: AddArgs<'_>, master_key: &[u8; 32]) -> Result<()> {
    let metadata = build_json(args.meta_entries)?;
    let secret_json = build_json(args.secret_entries)?;

    tracing::debug!(args.namespace, args.kind, args.name, "upserting entry");
    let meta_keys = collect_key_paths(args.meta_entries)?;
    let secret_keys = collect_key_paths(args.secret_entries)?;

    let mut tx = pool.begin().await?;

    // Upsert the entry row (tags + metadata).
    let existing: Option<EntryRow> = sqlx::query_as(
        "SELECT id, version, tags, metadata FROM entries \
         WHERE namespace = $1 AND kind = $2 AND name = $3",
    )
    .bind(args.namespace)
@@ -199,11 +217,12 @@ pub async fn run(pool: &PgPool, args: AddArgs<'_>, master_key: &[u8; 32]) -> Res
    .fetch_optional(&mut *tx)
    .await?;

    // Snapshot the current entry state before overwriting.
    if let Some(ref ex) = existing
        && let Err(e) = db::snapshot_entry_history(
            &mut tx,
            db::EntrySnapshotParams {
                entry_id: ex.id,
                namespace: args.namespace,
                kind: args.kind,
                name: args.name,
@@ -211,25 +230,24 @@ pub async fn run(pool: &PgPool, args: AddArgs<'_>, master_key: &[u8; 32]) -> Res
                action: "add",
                tags: &ex.tags,
                metadata: &ex.metadata,
            },
        )
        .await
    {
        tracing::warn!(error = %e, "failed to snapshot entry history before upsert");
    }

    let entry_id: uuid::Uuid = sqlx::query_scalar(
        r#"
        INSERT INTO entries (namespace, kind, name, tags, metadata, version, updated_at)
        VALUES ($1, $2, $3, $4, $5, 1, NOW())
        ON CONFLICT (namespace, kind, name)
        DO UPDATE SET
            tags = EXCLUDED.tags,
            metadata = EXCLUDED.metadata,
            version = entries.version + 1,
            updated_at = NOW()
        RETURNING id
        "#,
    )
    .bind(args.namespace)
@@ -237,10 +255,71 @@ pub async fn run(pool: &PgPool, args: AddArgs<'_>, master_key: &[u8; 32]) -> Res
    .bind(args.name)
    .bind(args.tags)
    .bind(&metadata)
    .fetch_one(&mut *tx)
    .await?;

    let new_entry_version: i64 = sqlx::query_scalar("SELECT version FROM entries WHERE id = $1")
        .bind(entry_id)
        .fetch_one(&mut *tx)
        .await?;

    // Snapshot existing secret fields before replacing.
    if existing.is_some() {
        #[derive(sqlx::FromRow)]
        struct ExistingField {
            id: uuid::Uuid,
            field_name: String,
            encrypted: Vec<u8>,
        }
        let existing_fields: Vec<ExistingField> = sqlx::query_as(
            "SELECT id, field_name, encrypted \
             FROM secrets WHERE entry_id = $1",
        )
        .bind(entry_id)
        .fetch_all(&mut *tx)
        .await?;
        for f in &existing_fields {
            if let Err(e) = db::snapshot_secret_history(
                &mut tx,
                db::SecretSnapshotParams {
                    entry_id,
                    secret_id: f.id,
                    entry_version: new_entry_version - 1,
                    field_name: &f.field_name,
                    encrypted: &f.encrypted,
                    action: "add",
                },
            )
            .await
            {
                tracing::warn!(error = %e, "failed to snapshot secret field history");
            }
        }
        // Delete existing secret fields so we can re-insert the full set.
        sqlx::query("DELETE FROM secrets WHERE entry_id = $1")
            .bind(entry_id)
            .execute(&mut *tx)
            .await?;
    }

    // Insert new secret fields.
    let flat_fields = flatten_json_fields("", &secret_json);
    for (field_name, field_value) in &flat_fields {
        let encrypted = crypto::encrypt_json(master_key, field_value)?;
        sqlx::query(
            "INSERT INTO secrets (entry_id, field_name, encrypted) \
             VALUES ($1, $2, $3)",
        )
        .bind(entry_id)
        .bind(field_name)
        .bind(&encrypted)
        .execute(&mut *tx)
        .await?;
    }

    crate::audit::log_tx(
        &mut tx,
        "add",
@@ -268,11 +347,8 @@ pub async fn run(pool: &PgPool, args: AddArgs<'_>, master_key: &[u8; 32]) -> Res
    });

    match args.output {
        OutputMode::Json | OutputMode::JsonCompact => {
            print_json(&result_json, &args.output)?;
        }
        _ => {
            println!("Added: [{}/{}] {}", args.namespace, args.kind, args.name);
@@ -293,7 +369,7 @@ pub async fn run(pool: &PgPool, args: AddArgs<'_>, master_key: &[u8; 32]) -> Res
#[cfg(test)]
mod tests {
    use super::{build_json, flatten_json_fields, key_path_to_string, parse_kv, remove_path};
    use serde_json::Value;
    use std::fs;
    use std::path::PathBuf;
@@ -363,4 +439,21 @@ mod tests {
        assert!(removed);
        assert_eq!(value, serde_json::json!({ "username": "root" }));
    }

    #[test]
    fn flatten_json_fields_nested() {
        let v = serde_json::json!({
            "username": "root",
            "credentials": {
                "type": "ssh",
                "content": "pem-data"
            }
        });
        let mut fields = flatten_json_fields("", &v);
        fields.sort_by(|a, b| a.0.cmp(&b.0));
        assert_eq!(fields[0].0, "credentials.content");
        assert_eq!(fields[1].0, "credentials.type");
        assert_eq!(fields[2].0, "username");
    }
}


@@ -15,7 +15,7 @@ pub async fn run(action: crate::ConfigAction) -> Result<()> {
                database_url: Some(url.clone()),
            };
            config::save_config(&cfg)?;
            println!("Database URL saved to: {}", config_path()?.display());
            println!("  {}", mask_password(&url));
        }
        crate::ConfigAction::Show => {
@@ -23,7 +23,7 @@ pub async fn run(action: crate::ConfigAction) -> Result<()> {
            match cfg.database_url {
                Some(url) => {
                    println!("database_url = {}", mask_password(&url));
                    println!("config file: {}", config_path()?.display());
                }
                None => {
                    println!("Database URL not configured.");
@@ -32,7 +32,7 @@ pub async fn run(action: crate::ConfigAction) -> Result<()> {
            }
        }
        crate::ConfigAction::Path => {
            println!("{}", config_path()?.display());
        }
    }
    Ok(())


@@ -1,33 +1,64 @@
use anyhow::Result;
use serde_json::json;
use sqlx::PgPool;
use uuid::Uuid;

use crate::db;
use crate::models::{EntryRow, SecretFieldRow};
use crate::output::{OutputMode, print_json};

pub struct DeleteArgs<'a> {
    pub namespace: &'a str,
    /// Kind filter. Required when --name is given; optional for bulk deletes.
    pub kind: Option<&'a str>,
    /// Exact record name. When None, bulk-delete all matching records.
    pub name: Option<&'a str>,
    /// Preview without writing to the database (bulk mode only).
    pub dry_run: bool,
    pub output: OutputMode,
}

// ── Internal row type used for bulk queries ────────────────────────────────
#[derive(Debug, sqlx::FromRow)]
struct FullEntryRow {
    pub id: Uuid,
    pub version: i64,
    pub kind: String,
    pub name: String,
    pub metadata: serde_json::Value,
    pub tags: Vec<String>,
}

// ── Entry point ────────────────────────────────────────────────────────────
pub async fn run(pool: &PgPool, args: DeleteArgs<'_>) -> Result<()> {
    match args.name {
        Some(name) => {
            let kind = args
                .kind
                .ok_or_else(|| anyhow::anyhow!("--kind is required when --name is specified"))?;
            delete_one(pool, args.namespace, kind, name, args.output).await
        }
        None => delete_bulk(pool, args.namespace, args.kind, args.dry_run, args.output).await,
    }
}

// ── Single-record delete (original behaviour) ─────────────────────────────
async fn delete_one(
    pool: &PgPool,
    namespace: &str,
    kind: &str,
    name: &str,
    output: OutputMode,
) -> Result<()> {
    tracing::debug!(namespace, kind, name, "deleting entry");
    let mut tx = pool.begin().await?;

    let row: Option<EntryRow> = sqlx::query_as(
        "SELECT id, version, tags, metadata FROM entries \
         WHERE namespace = $1 AND kind = $2 AND name = $3 \
         FOR UPDATE",
    )
@@ -39,30 +70,178 @@
    let Some(row) = row else {
        tx.rollback().await?;
        tracing::warn!(namespace, kind, name, "entry not found for deletion");
        let v = json!({"action":"not_found","namespace":namespace,"kind":kind,"name":name});
        match output {
            OutputMode::Text => println!("Not found: [{}/{}] {}", namespace, kind, name),
            ref mode => print_json(&v, mode)?,
        }
        return Ok(());
    };

    snapshot_and_delete(&mut tx, namespace, kind, name, &row).await?;

    crate::audit::log_tx(&mut tx, "delete", namespace, kind, name, json!({})).await;
    tx.commit().await?;

    let v = json!({"action":"deleted","namespace":namespace,"kind":kind,"name":name});
    match output {
        OutputMode::Text => println!("Deleted: [{}/{}] {}", namespace, kind, name),
        ref mode => print_json(&v, mode)?,
    }
    Ok(())
}

// ── Bulk delete by namespace (+ optional kind filter) ─────────────────────
async fn delete_bulk(
    pool: &PgPool,
    namespace: &str,
    kind: Option<&str>,
    dry_run: bool,
    output: OutputMode,
) -> Result<()> {
    tracing::debug!(namespace, ?kind, dry_run, "bulk-deleting entries");

    let rows: Vec<FullEntryRow> = if let Some(k) = kind {
        sqlx::query_as(
            "SELECT id, version, kind, name, metadata, tags FROM entries \
             WHERE namespace = $1 AND kind = $2 \
             ORDER BY name",
        )
        .bind(namespace)
        .bind(k)
        .fetch_all(pool)
        .await?
    } else {
        sqlx::query_as(
            "SELECT id, version, kind, name, metadata, tags FROM entries \
             WHERE namespace = $1 \
             ORDER BY kind, name",
        )
        .bind(namespace)
        .fetch_all(pool)
        .await?
    };

    if rows.is_empty() {
        let v = json!({
            "action": "noop",
            "namespace": namespace,
            "kind": kind,
            "deleted": 0,
            "dry_run": dry_run
        });
        match output {
            OutputMode::Text => println!(
                "No records found in namespace \"{}\"{}.",
                namespace,
                kind.map(|k| format!(" with kind \"{}\"", k))
                    .unwrap_or_default()
            ),
            ref mode => print_json(&v, mode)?,
        }
        return Ok(());
    }

    if dry_run {
        let count = rows.len();
        match output {
            OutputMode::Text => {
                println!(
                    "dry-run: would delete {} record(s) in namespace \"{}\":",
                    count, namespace
                );
                for r in &rows {
                    println!("  [{}/{}] {}", namespace, r.kind, r.name);
                }
            }
            ref mode => {
                let items: Vec<_> = rows
                    .iter()
                    .map(|r| json!({"namespace": namespace, "kind": r.kind, "name": r.name}))
                    .collect();
                print_json(
                    &json!({
                        "action": "dry_run",
                        "namespace": namespace,
                        "kind": kind,
                        "would_delete": count,
                        "entries": items
                    }),
                    mode,
                )?;
            }
        }
        return Ok(());
    }

    let mut deleted = Vec::with_capacity(rows.len());
    for row in &rows {
        let entry_row = EntryRow {
            id: row.id,
            version: row.version,
            tags: row.tags.clone(),
            metadata: row.metadata.clone(),
        };
        let mut tx = pool.begin().await?;
        snapshot_and_delete(&mut tx, namespace, &row.kind, &row.name, &entry_row).await?;
        crate::audit::log_tx(
            &mut tx,
            "delete",
            namespace,
            &row.kind,
            &row.name,
            json!({"bulk": true}),
        )
        .await;
        tx.commit().await?;
        deleted.push(json!({"namespace": namespace, "kind": row.kind, "name": row.name}));
        tracing::info!(namespace, kind = %row.kind, name = %row.name, "bulk deleted");
    }

    let count = deleted.len();
    match output {
        OutputMode::Text => {
            for item in &deleted {
                println!(
                    "Deleted: [{}/{}] {}",
                    item["namespace"].as_str().unwrap_or(""),
                    item["kind"].as_str().unwrap_or(""),
                    item["name"].as_str().unwrap_or("")
                );
            }
            println!("Total: {} record(s) deleted.", count);
        }
        ref mode => print_json(
            &json!({
                "action": "deleted",
                "namespace": namespace,
                "kind": kind,
                "deleted": count,
                "entries": deleted
            }),
            mode,
        )?,
    }
    Ok(())
}

// ── Shared helper: snapshot history then DELETE ────────────────────────────
async fn snapshot_and_delete(
    tx: &mut sqlx::Transaction<'_, sqlx::Postgres>,
    namespace: &str,
    kind: &str,
    name: &str,
    row: &EntryRow,
) -> Result<()> {
    if let Err(e) = db::snapshot_entry_history(
        tx,
        db::EntrySnapshotParams {
            entry_id: row.id,
            namespace,
            kind,
            name,
@@ -70,38 +249,43 @@
            action: "delete",
            tags: &row.tags,
            metadata: &row.metadata,
        },
    )
    .await
    {
        tracing::warn!(error = %e, "failed to snapshot entry history before delete");
    }

    let fields: Vec<SecretFieldRow> = sqlx::query_as(
        "SELECT id, field_name, encrypted \
         FROM secrets WHERE entry_id = $1",
    )
    .bind(row.id)
    .fetch_all(&mut **tx)
    .await?;
    for f in &fields {
        if let Err(e) = db::snapshot_secret_history(
            tx,
            db::SecretSnapshotParams {
                entry_id: row.id,
                secret_id: f.id,
                entry_version: row.version,
                field_name: &f.field_name,
                encrypted: &f.encrypted,
                action: "delete",
            },
        )
        .await
        {
            tracing::warn!(error = %e, "failed to snapshot secret history before delete");
        }
    }

    sqlx::query("DELETE FROM entries WHERE id = $1")
        .bind(row.id)
        .execute(&mut **tx)
        .await?;
    Ok(())
}

src/commands/export_cmd.rs (new file)

@@ -0,0 +1,109 @@
use anyhow::Result;
use sqlx::PgPool;
use std::collections::BTreeMap;
use std::io::Write;
use crate::commands::search::{fetch_entries, fetch_secrets_for_entries};
use crate::crypto;
use crate::models::{ExportData, ExportEntry, ExportFormat};
pub struct ExportArgs<'a> {
pub namespace: Option<&'a str>,
pub kind: Option<&'a str>,
pub name: Option<&'a str>,
pub tags: &'a [String],
pub query: Option<&'a str>,
/// Output file path. None means write to stdout.
pub file: Option<&'a str>,
/// Explicit format override (e.g. from --format flag).
pub format: Option<&'a str>,
/// When true, secrets are omitted and master_key is not used.
pub no_secrets: bool,
}
pub async fn run(pool: &PgPool, args: ExportArgs<'_>, master_key: Option<&[u8; 32]>) -> Result<()> {
// Determine output format: --format > file extension > default JSON.
let format = if let Some(fmt_str) = args.format {
ExportFormat::from_str(fmt_str)?
} else if let Some(path) = args.file {
ExportFormat::from_extension(path).unwrap_or(ExportFormat::Json)
} else {
ExportFormat::Json
};
let entries = fetch_entries(
pool,
args.namespace,
args.kind,
args.name,
args.tags,
args.query,
)
.await?;
let entry_ids: Vec<uuid::Uuid> = entries.iter().map(|e| e.id).collect();
let secrets_map = if !args.no_secrets && !entry_ids.is_empty() {
fetch_secrets_for_entries(pool, &entry_ids).await?
} else {
std::collections::HashMap::new()
};
let key = if !args.no_secrets { master_key } else { None };
let mut export_entries: Vec<ExportEntry> = Vec::with_capacity(entries.len());
for entry in &entries {
let secrets = if args.no_secrets {
None
} else {
let fields = secrets_map.get(&entry.id).map(Vec::as_slice).unwrap_or(&[]);
if fields.is_empty() {
Some(BTreeMap::new())
} else {
let mk =
key.ok_or_else(|| anyhow::anyhow!("master key required to decrypt secrets"))?;
let mut map = BTreeMap::new();
for f in fields {
let decrypted = crypto::decrypt_json(mk, &f.encrypted)?;
map.insert(f.field_name.clone(), decrypted);
}
Some(map)
}
};
export_entries.push(ExportEntry {
namespace: entry.namespace.clone(),
kind: entry.kind.clone(),
name: entry.name.clone(),
tags: entry.tags.clone(),
metadata: entry.metadata.clone(),
secrets,
});
}
let data = ExportData {
version: 1,
exported_at: chrono::Utc::now().format("%Y-%m-%dT%H:%M:%SZ").to_string(),
entries: export_entries,
};
let serialized = format.serialize(&data)?;
if let Some(path) = args.file {
std::fs::write(path, &serialized)?;
println!(
"Exported {} record(s) to {} ({:?})",
data.entries.len(),
path,
format
);
} else {
std::io::stdout().write_all(serialized.as_bytes())?;
// Ensure trailing newline on stdout.
if !serialized.ends_with('\n') {
println!();
}
}
Ok(())
}
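The export command resolves its output format with the precedence `--format` flag > output-file extension > JSON default. A minimal std-only sketch of that precedence logic (`resolve_format` and the string format names are illustrative stand-ins, not the crate's actual `ExportFormat` API; the supported extensions are assumed for the example):

```rust
// Hypothetical stand-in for ExportFormat resolution: the explicit flag wins,
// then a recognized file extension, then the JSON default.
fn resolve_format(flag: Option<&str>, file: Option<&str>) -> String {
    if let Some(f) = flag {
        return f.to_string();
    }
    if let Some(path) = file {
        if let Some((_, ext)) = path.rsplit_once('.') {
            // Assumed extension set, for illustration only.
            if matches!(ext, "json" | "yaml" | "yml" | "toml") {
                return ext.to_string();
            }
        }
    }
    "json".to_string() // default
}

fn main() {
    assert_eq!(resolve_format(Some("yaml"), Some("out.json")), "yaml");
    assert_eq!(resolve_format(None, Some("out.toml")), "toml");
    assert_eq!(resolve_format(None, Some("out")), "json");
    assert_eq!(resolve_format(None, None), "json");
}
```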

src/commands/history.rs Normal file

@@ -0,0 +1,78 @@
use anyhow::Result;
use serde_json::{Value, json};
use sqlx::{FromRow, PgPool};
use crate::output::{OutputMode, format_local_time, print_json};
pub struct HistoryArgs<'a> {
pub namespace: &'a str,
pub kind: &'a str,
pub name: &'a str,
pub limit: u32,
pub output: OutputMode,
}
/// List history entries for an entry.
pub async fn run(pool: &PgPool, args: HistoryArgs<'_>) -> Result<()> {
#[derive(FromRow)]
struct HistorySummary {
version: i64,
action: String,
actor: String,
created_at: chrono::DateTime<chrono::Utc>,
}
let rows: Vec<HistorySummary> = sqlx::query_as(
"SELECT version, action, actor, created_at FROM entries_history \
WHERE namespace = $1 AND kind = $2 AND name = $3 \
ORDER BY id DESC LIMIT $4",
)
.bind(args.namespace)
.bind(args.kind)
.bind(args.name)
.bind(args.limit as i64)
.fetch_all(pool)
.await?;
match args.output {
OutputMode::Json | OutputMode::JsonCompact => {
let arr: Vec<Value> = rows
.iter()
.map(|r| {
json!({
"version": r.version,
"action": r.action,
"actor": r.actor,
"created_at": r.created_at.format("%Y-%m-%dT%H:%M:%SZ").to_string(),
})
})
.collect();
print_json(&Value::Array(arr), &args.output)?;
}
_ => {
if rows.is_empty() {
println!(
"No history found for [{}/{}] {}.",
args.namespace, args.kind, args.name
);
return Ok(());
}
println!(
"History for [{}/{}] {}:",
args.namespace, args.kind, args.name
);
for r in &rows {
println!(
" v{:<4} {:8} {} {}",
r.version,
r.action,
r.actor,
format_local_time(r.created_at)
);
}
println!(" (use `secrets rollback --to-version <N>` to restore)");
}
}
Ok(())
}
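The text branch above aligns its columns with Rust width specifiers. A quick self-contained illustration of how `{:<4}` and `{:8}` pad (integers need the explicit `<` to left-align; strings left-align by default):

```rust
fn main() {
    // {:<4} left-aligns the version in 4 columns; {:8} pads the action to 8.
    let line = format!(" v{:<4} {:8} {}", 12, "update", "alice");
    assert_eq!(line, " v12   update   alice");
}
```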

src/commands/import_cmd.rs Normal file

@@ -0,0 +1,217 @@
use anyhow::Result;
use serde_json::Value;
use sqlx::PgPool;
use std::collections::BTreeMap;
use crate::commands::add::{self, AddArgs};
use crate::models::ExportFormat;
use crate::output::{OutputMode, print_json};
pub struct ImportArgs<'a> {
pub file: &'a str,
/// Overwrite existing records when there is a conflict (upsert).
/// Without this flag, the import aborts on the first conflict.
/// A future `--skip` flag could allow silently skipping conflicts and continuing.
pub force: bool,
/// Check and preview operations without writing to the database.
pub dry_run: bool,
pub output: OutputMode,
}
pub async fn run(pool: &PgPool, args: ImportArgs<'_>, master_key: &[u8; 32]) -> Result<()> {
let format = ExportFormat::from_extension(args.file)?;
let content = std::fs::read_to_string(args.file)
.map_err(|e| anyhow::anyhow!("Cannot read file '{}': {}", args.file, e))?;
let data = format.deserialize(&content)?;
if data.version != 1 {
anyhow::bail!(
"Unsupported export version {}. Only version 1 is supported.",
data.version
);
}
let total = data.entries.len();
let mut inserted = 0usize;
let mut skipped = 0usize;
let mut failed = 0usize;
for entry in &data.entries {
// Check if record already exists.
let exists: bool = sqlx::query_scalar(
"SELECT EXISTS(SELECT 1 FROM entries \
WHERE namespace = $1 AND kind = $2 AND name = $3)",
)
.bind(&entry.namespace)
.bind(&entry.kind)
.bind(&entry.name)
.fetch_one(pool)
.await
.unwrap_or(false);
if exists && !args.force {
let v = serde_json::json!({
"action": "conflict",
"namespace": entry.namespace,
"kind": entry.kind,
"name": entry.name,
});
match args.output {
OutputMode::Text => eprintln!(
"[{}/{}/{}] conflict — record already exists (use --force to overwrite)",
entry.namespace, entry.kind, entry.name
),
ref mode => {
// Write conflict notice to stderr so it does not mix with summary JSON.
eprint!(
"{}",
if *mode == OutputMode::Json {
serde_json::to_string_pretty(&v)?
} else {
serde_json::to_string(&v)?
}
);
eprintln!();
}
}
return Err(anyhow::anyhow!(
"Import aborted: conflict on [{}/{}/{}]",
entry.namespace,
entry.kind,
entry.name
));
}
let action = if exists { "upsert" } else { "insert" };
if args.dry_run {
let v = serde_json::json!({
"action": action,
"namespace": entry.namespace,
"kind": entry.kind,
"name": entry.name,
"dry_run": true,
});
match args.output {
OutputMode::Text => println!(
"[dry-run] {} [{}/{}/{}]",
action, entry.namespace, entry.kind, entry.name
),
ref mode => print_json(&v, mode)?,
}
if exists {
skipped += 1;
} else {
inserted += 1;
}
continue;
}
// Build secret_entries: convert BTreeMap<String, Value> to Vec<String> ("key:=json")
let secret_entries = build_secret_entries(entry.secrets.as_ref());
// Build meta_entries from metadata JSON object.
let meta_entries = build_meta_entries(&entry.metadata);
match add::run(
pool,
AddArgs {
namespace: &entry.namespace,
kind: &entry.kind,
name: &entry.name,
tags: &entry.tags,
meta_entries: &meta_entries,
secret_entries: &secret_entries,
output: OutputMode::Text,
},
master_key,
)
.await
{
Ok(()) => {
let v = serde_json::json!({
"action": action,
"namespace": entry.namespace,
"kind": entry.kind,
"name": entry.name,
});
match args.output {
OutputMode::Text => println!(
"Imported [{}/{}/{}]",
entry.namespace, entry.kind, entry.name
),
ref mode => print_json(&v, mode)?,
}
inserted += 1;
}
Err(e) => {
eprintln!(
"Error importing [{}/{}/{}]: {}",
entry.namespace, entry.kind, entry.name, e
);
failed += 1;
}
}
}
let summary = serde_json::json!({
"total": total,
"inserted": inserted,
"skipped": skipped,
"failed": failed,
"dry_run": args.dry_run,
});
match args.output {
OutputMode::Text => {
if args.dry_run {
println!(
"\n[dry-run] {} total: {} would insert, {} would skip, {} would fail",
total, inserted, skipped, failed
);
} else {
println!(
"\nImport done: {} total — {} inserted, {} skipped, {} failed",
total, inserted, skipped, failed
);
}
}
ref mode => print_json(&summary, mode)?,
}
if failed > 0 {
anyhow::bail!("{} record(s) failed to import", failed);
}
Ok(())
}
/// Convert metadata JSON object into Vec<String> of "key:=json_value" entries.
fn build_meta_entries(metadata: &Value) -> Vec<String> {
let mut entries = Vec::new();
if let Some(obj) = metadata.as_object() {
for (k, v) in obj {
entries.push(value_to_kv_entry(k, v));
}
}
entries
}
/// Convert a BTreeMap<String, Value> (secrets) into Vec<String> of "key:=json_value" entries.
fn build_secret_entries(secrets: Option<&BTreeMap<String, Value>>) -> Vec<String> {
let mut entries = Vec::new();
if let Some(map) = secrets {
for (k, v) in map {
entries.push(value_to_kv_entry(k, v));
}
}
entries
}
/// Convert a key/value pair to a CLI-style entry string.
/// Strings use `key=value`; everything else uses `key:=<json>`.
fn value_to_kv_entry(key: &str, value: &Value) -> String {
match value {
Value::String(s) => format!("{}={}", key, s),
other => format!("{}:={}", key, other),
}
}
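The `key=value` vs `key:=<json>` convention from `value_to_kv_entry` can be sketched without the crate's types. A std-only sketch, where `Val` is a hand-rolled stand-in for `serde_json::Value` (an assumption to keep the example self-contained):

```rust
// Minimal sketch of the entry convention above: plain strings become
// `key=value`, anything else becomes `key:=<json>` with the raw JSON text.
enum Val {
    Str(String),
    Json(String), // already-serialized JSON (number, bool, object, ...)
}

fn value_to_kv_entry(key: &str, value: &Val) -> String {
    match value {
        Val::Str(s) => format!("{}={}", key, s),
        Val::Json(j) => format!("{}:={}", key, j),
    }
}

fn main() {
    assert_eq!(value_to_kv_entry("user", &Val::Str("alice".into())), "user=alice");
    assert_eq!(value_to_kv_entry("port", &Val::Json("5432".into())), "port:=5432");
    assert_eq!(
        value_to_kv_entry("opts", &Val::Json(r#"{"tls":true}"#.into())),
        r#"opts:={"tls":true}"#
    );
}
```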


@@ -1,6 +1,9 @@
 pub mod add;
 pub mod config;
 pub mod delete;
+pub mod export_cmd;
+pub mod history;
+pub mod import_cmd;
 pub mod init;
 pub mod rollback;
 pub mod run;


@@ -3,32 +3,34 @@ use serde_json::{Value, json};
 use sqlx::{FromRow, PgPool};
 use uuid::Uuid;
 
-use crate::output::{OutputMode, format_local_time};
-
-#[derive(FromRow)]
-struct HistoryRow {
-    secret_id: Uuid,
-    version: i64,
-    action: String,
-    tags: Vec<String>,
-    metadata: Value,
-    encrypted: Vec<u8>,
-}
+use crate::crypto;
+use crate::db;
+use crate::output::{OutputMode, print_json};
 
 pub struct RollbackArgs<'a> {
     pub namespace: &'a str,
     pub kind: &'a str,
     pub name: &'a str,
-    /// Target version to restore. None → restore the most recent history entry.
+    /// Target entry version to restore. None → restore the most recent history entry.
     pub to_version: Option<i64>,
     pub output: OutputMode,
 }
 
 pub async fn run(pool: &PgPool, args: RollbackArgs<'_>, master_key: &[u8; 32]) -> Result<()> {
-    let snap: Option<HistoryRow> = if let Some(ver) = args.to_version {
+    // ── Find the target entry history snapshot ────────────────────────────────
+    #[derive(FromRow)]
+    struct EntryHistoryRow {
+        entry_id: Uuid,
+        version: i64,
+        action: String,
+        tags: Vec<String>,
+        metadata: Value,
+    }
+    let snap: Option<EntryHistoryRow> = if let Some(ver) = args.to_version {
         sqlx::query_as(
-            "SELECT secret_id, version, action, tags, metadata, encrypted \
-             FROM secrets_history \
+            "SELECT entry_id, version, action, tags, metadata \
+             FROM entries_history \
              WHERE namespace = $1 AND kind = $2 AND name = $3 AND version = $4 \
              ORDER BY id DESC LIMIT 1",
         )
@@ -40,8 +42,8 @@ pub async fn run(pool: &PgPool, args: RollbackArgs<'_>, master_key: &[u8; 32]) -
         .await?
     } else {
         sqlx::query_as(
-            "SELECT secret_id, version, action, tags, metadata, encrypted \
-             FROM secrets_history \
+            "SELECT entry_id, version, action, tags, metadata \
+             FROM entries_history \
              WHERE namespace = $1 AND kind = $2 AND name = $3 \
              ORDER BY id DESC LIMIT 1",
         )
@@ -64,25 +66,51 @@ pub async fn run(pool: &PgPool, args: RollbackArgs<'_>, master_key: &[u8; 32]) -
         )
     })?;
 
-    // Validate encrypted blob is non-trivial (re-encrypt guard).
-    if !snap.encrypted.is_empty() {
-        // Probe decrypt to ensure the blob is valid before restoring.
-        crate::crypto::decrypt_json(master_key, &snap.encrypted)?;
-    }
+    // ── Find the matching secret field snapshots ──────────────────────────────
+    #[derive(FromRow)]
+    struct SecretHistoryRow {
+        secret_id: Uuid,
+        field_name: String,
+        encrypted: Vec<u8>,
+        action: String,
+    }
+    let field_snaps: Vec<SecretHistoryRow> = sqlx::query_as(
+        "SELECT secret_id, field_name, encrypted, action \
+         FROM secrets_history \
+         WHERE entry_id = $1 AND entry_version = $2 \
+         ORDER BY field_name",
+    )
+    .bind(snap.entry_id)
+    .bind(snap.version)
+    .fetch_all(pool)
+    .await?;
+
+    // Validate: try decrypting all encrypted fields before writing anything.
+    for f in &field_snaps {
+        if f.action != "delete" && !f.encrypted.is_empty() {
+            crypto::decrypt_json(master_key, &f.encrypted).map_err(|e| {
+                anyhow::anyhow!(
+                    "Cannot decrypt snapshot for field '{}': {}",
+                    f.field_name,
+                    e
+                )
+            })?;
+        }
+    }
 
     let mut tx = pool.begin().await?;
 
-    // Snapshot current live row (if it exists) before overwriting.
+    // ── Snapshot the current live state before overwriting ────────────────────
     #[derive(sqlx::FromRow)]
-    struct LiveRow {
+    struct LiveEntry {
         id: Uuid,
         version: i64,
         tags: Vec<String>,
         metadata: Value,
-        encrypted: Vec<u8>,
     }
-    let live: Option<LiveRow> = sqlx::query_as(
-        "SELECT id, version, tags, metadata, encrypted FROM secrets \
+    let live: Option<LiveEntry> = sqlx::query_as(
+        "SELECT id, version, tags, metadata FROM entries \
          WHERE namespace = $1 AND kind = $2 AND name = $3 FOR UPDATE",
     )
    .bind(args.namespace)
@@ -91,11 +119,11 @@ pub async fn run(pool: &PgPool, args: RollbackArgs<'_>, master_key: &[u8; 32]) -
     .fetch_optional(&mut *tx)
     .await?;
 
-    if let Some(lr) = live
-        && let Err(e) = crate::db::snapshot_history(
-            &mut tx,
-            crate::db::SnapshotParams {
-                secret_id: lr.id,
+    if let Some(ref lr) = live {
+        if let Err(e) = db::snapshot_entry_history(
+            &mut tx,
+            db::EntrySnapshotParams {
+                entry_id: lr.id,
                 namespace: args.namespace,
                 kind: args.kind,
                 name: args.name,
@@ -103,35 +131,96 @@ pub async fn run(pool: &PgPool, args: RollbackArgs<'_>, master_key: &[u8; 32]) -
                 action: "rollback",
                 tags: &lr.tags,
                 metadata: &lr.metadata,
-                encrypted: &lr.encrypted,
             },
         )
         .await
         {
-            tracing::warn!(error = %e, "failed to snapshot current row before rollback");
+            tracing::warn!(error = %e, "failed to snapshot entry before rollback");
+        }
+
+        // Snapshot existing secret fields.
+        #[derive(sqlx::FromRow)]
+        struct LiveField {
+            id: Uuid,
+            field_name: String,
+            encrypted: Vec<u8>,
+        }
+        let live_fields: Vec<LiveField> = sqlx::query_as(
+            "SELECT id, field_name, encrypted \
+             FROM secrets WHERE entry_id = $1",
+        )
+        .bind(lr.id)
+        .fetch_all(&mut *tx)
+        .await?;
+        for f in &live_fields {
+            if let Err(e) = db::snapshot_secret_history(
+                &mut tx,
+                db::SecretSnapshotParams {
+                    entry_id: lr.id,
+                    secret_id: f.id,
+                    entry_version: lr.version,
+                    field_name: &f.field_name,
+                    encrypted: &f.encrypted,
+                    action: "rollback",
+                },
+            )
+            .await
+            {
+                tracing::warn!(error = %e, "failed to snapshot secret field before rollback");
+            }
+        }
     }
 
+    // ── Restore entry row ─────────────────────────────────────────────────────
     sqlx::query(
-        "INSERT INTO secrets (id, namespace, kind, name, tags, metadata, encrypted, version, updated_at) \
-         VALUES ($1, $2, $3, $4, $5, $6, $7, $8, NOW()) \
+        "INSERT INTO entries (id, namespace, kind, name, tags, metadata, version, updated_at) \
+         VALUES ($1, $2, $3, $4, $5, $6, $7, NOW()) \
          ON CONFLICT (namespace, kind, name) DO UPDATE SET \
          tags = EXCLUDED.tags, \
          metadata = EXCLUDED.metadata, \
-         encrypted = EXCLUDED.encrypted, \
-         version = secrets.version + 1, \
+         version = entries.version + 1, \
          updated_at = NOW()",
     )
-    .bind(snap.secret_id)
+    .bind(snap.entry_id)
     .bind(args.namespace)
     .bind(args.kind)
     .bind(args.name)
     .bind(&snap.tags)
     .bind(&snap.metadata)
-    .bind(&snap.encrypted)
     .bind(snap.version)
     .execute(&mut *tx)
     .await?;
 
+    // ── Restore secret fields ─────────────────────────────────────────────────
+    // Delete all current fields and re-insert from snapshot
+    // (only non-deleted fields from the snapshot are restored).
+    sqlx::query("DELETE FROM secrets WHERE entry_id = $1")
+        .bind(snap.entry_id)
+        .execute(&mut *tx)
+        .await?;
+    for f in &field_snaps {
+        if f.action == "delete" {
+            // Field was deleted at this snapshot point — don't restore it.
+            continue;
+        }
+        sqlx::query(
+            "INSERT INTO secrets (id, entry_id, field_name, encrypted) \
+             VALUES ($1, $2, $3, $4) \
+             ON CONFLICT (entry_id, field_name) DO UPDATE SET \
+             encrypted = EXCLUDED.encrypted, \
+             version = secrets.version + 1, \
+             updated_at = NOW()",
+        )
+        .bind(f.secret_id)
+        .bind(snap.entry_id)
+        .bind(&f.field_name)
+        .bind(&f.encrypted)
+        .execute(&mut *tx)
+        .await?;
+    }
 
     crate::audit::log_tx(
         &mut tx,
         "rollback",
@@ -156,83 +245,11 @@ pub async fn run(pool: &PgPool, args: RollbackArgs<'_>, master_key: &[u8; 32]) -
     });
 
     match args.output {
-        OutputMode::Json => println!("{}", serde_json::to_string_pretty(&result_json)?),
-        OutputMode::JsonCompact => println!("{}", serde_json::to_string(&result_json)?),
-        _ => println!(
+        OutputMode::Text => println!(
             "Rolled back: [{}/{}] {} → version {}",
             args.namespace, args.kind, args.name, snap.version
         ),
-    }
-    Ok(())
-}
-
-/// List history entries for a record.
-pub async fn list_history(
-    pool: &PgPool,
-    namespace: &str,
-    kind: &str,
-    name: &str,
-    limit: u32,
-    output: OutputMode,
-) -> Result<()> {
-    #[derive(FromRow)]
-    struct HistorySummary {
-        version: i64,
-        action: String,
-        actor: String,
-        created_at: chrono::DateTime<chrono::Utc>,
-    }
-
-    let rows: Vec<HistorySummary> = sqlx::query_as(
-        "SELECT version, action, actor, created_at FROM secrets_history \
-         WHERE namespace = $1 AND kind = $2 AND name = $3 \
-         ORDER BY id DESC LIMIT $4",
-    )
-    .bind(namespace)
-    .bind(kind)
-    .bind(name)
-    .bind(limit as i64)
-    .fetch_all(pool)
-    .await?;
-
-    match output {
-        OutputMode::Json | OutputMode::JsonCompact => {
-            let arr: Vec<Value> = rows
-                .iter()
-                .map(|r| {
-                    json!({
-                        "version": r.version,
-                        "action": r.action,
-                        "actor": r.actor,
-                        "created_at": r.created_at.format("%Y-%m-%dT%H:%M:%SZ").to_string(),
-                    })
-                })
-                .collect();
-            let out = if output == OutputMode::Json {
-                serde_json::to_string_pretty(&arr)?
-            } else {
-                serde_json::to_string(&arr)?
-            };
-            println!("{}", out);
-        }
-        _ => {
-            if rows.is_empty() {
-                println!("No history found for [{}/{}] {}.", namespace, kind, name);
-                return Ok(());
-            }
-            println!("History for [{}/{}] {}:", namespace, kind, name);
-            for r in &rows {
-                println!(
-                    "  v{:<4} {:8} {} {}",
-                    r.version,
-                    r.action,
-                    r.actor,
-                    format_local_time(r.created_at)
-                );
-            }
-            println!("  (use `secrets rollback --to-version <N>` to restore)");
-        }
+        ref mode => print_json(&result_json, mode)?,
     }
     Ok(())


@@ -3,7 +3,7 @@ use serde_json::Value;
 use sqlx::PgPool;
 use std::collections::HashMap;
 
-use crate::commands::search::build_injected_env_map;
+use crate::commands::search::{build_injected_env_map, fetch_entries, fetch_secrets_for_entries};
 use crate::output::OutputMode;
 
 pub struct InjectArgs<'a> {
@@ -11,7 +11,6 @@ pub struct InjectArgs<'a> {
     pub kind: Option<&'a str>,
     pub name: Option<&'a str>,
     pub tags: &'a [String],
-    /// Prefix to prepend to every variable name. Empty string means no prefix.
     pub prefix: &'a str,
     pub output: OutputMode,
 }
@@ -22,12 +21,10 @@ pub struct RunArgs<'a> {
     pub name: Option<&'a str>,
     pub tags: &'a [String],
     pub prefix: &'a str,
-    /// The command and its arguments to execute with injected secrets.
     pub command: &'a [String],
 }
 
-/// Fetch secrets matching the filter and build a flat env map.
-/// Metadata and secret fields are merged; naming: `<PREFIX_><NAME>_<KEY>` (uppercased).
+/// Fetch entries matching the filter and build a flat env map (metadata + decrypted secrets).
 pub async fn collect_env_map(
     pool: &PgPool,
     namespace: Option<&str>,
@@ -42,13 +39,19 @@ pub async fn collect_env_map(
             "At least one filter (--namespace, --kind, --name, or --tag) is required for inject/run"
         );
     }
-    let rows = crate::commands::search::fetch_rows(pool, namespace, kind, name, tags, None).await?;
-    if rows.is_empty() {
+    let entries = fetch_entries(pool, namespace, kind, name, tags, None).await?;
+    if entries.is_empty() {
         anyhow::bail!("No records matched the given filters.");
     }
+    let entry_ids: Vec<uuid::Uuid> = entries.iter().map(|e| e.id).collect();
+    let fields_map = fetch_secrets_for_entries(pool, &entry_ids).await?;
+
     let mut map = HashMap::new();
-    for row in &rows {
-        let row_map = build_injected_env_map(row, prefix, master_key)?;
+    for entry in &entries {
+        let empty = vec![];
+        let fields = fields_map.get(&entry.id).unwrap_or(&empty);
+        let row_map = build_injected_env_map(pool, entry, prefix, master_key, fields).await?;
         for (k, v) in row_map {
             map.insert(k, v);
         }
@@ -56,7 +59,7 @@ pub async fn collect_env_map(
     Ok(map)
 }
 
-/// `inject` command: print env vars to stdout (suitable for `eval $(...)` or export).
+/// `inject` command: print env vars to stdout.
 pub async fn run_inject(pool: &PgPool, args: InjectArgs<'_>, master_key: &[u8; 32]) -> Result<()> {
     let env_map = collect_env_map(
         pool,
@@ -85,7 +88,6 @@ pub async fn run_inject(pool: &PgPool, args: InjectArgs<'_>, master_key: &[u8; 3
             println!("{}", serde_json::to_string(&Value::Object(obj))?);
         }
         _ => {
-            // Shell-safe KEY=VALUE output, one per line.
             let mut pairs: Vec<(String, String)> = env_map.into_iter().collect();
             pairs.sort_by(|a, b| a.0.cmp(&b.0));
             for (k, v) in pairs {
@@ -136,8 +138,6 @@ pub async fn run_exec(pool: &PgPool, args: RunArgs<'_>, master_key: &[u8; 32]) -
     Ok(())
 }
 
-/// Quote a value for safe shell output. Wraps the value in single quotes,
-/// escaping any single quotes within the value.
 fn shell_quote(s: &str) -> String {
     format!("'{}'", s.replace('\'', "'\\''"))
 }
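The quoting scheme in `shell_quote` is the standard POSIX single-quote trick: wrap the whole value in single quotes and rewrite every embedded `'` as `'\''` (close the quote, emit an escaped literal quote, reopen). A standalone copy with its behavior spelled out:

```rust
// Same quoting scheme as shell_quote above: wrap in single quotes and
// rewrite every embedded ' as '\'' (close, escaped quote, reopen).
fn shell_quote(s: &str) -> String {
    format!("'{}'", s.replace('\'', "'\\''"))
}

fn main() {
    assert_eq!(shell_quote("plain"), "'plain'");
    assert_eq!(shell_quote("it's"), "'it'\\''s'");
}
```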


@@ -4,7 +4,7 @@ use sqlx::PgPool;
use std::collections::HashMap; use std::collections::HashMap;
use crate::crypto; use crate::crypto;
use crate::models::Secret; use crate::models::{Entry, SecretField};
use crate::output::{OutputMode, format_local_time}; use crate::output::{OutputMode, format_local_time};
pub struct SearchArgs<'a> { pub struct SearchArgs<'a> {
@@ -13,7 +13,6 @@ pub struct SearchArgs<'a> {
pub name: Option<&'a str>, pub name: Option<&'a str>,
pub tags: &'a [String], pub tags: &'a [String],
pub query: Option<&'a str>, pub query: Option<&'a str>,
pub show_secrets: bool,
pub fields: &'a [String], pub fields: &'a [String],
pub summary: bool, pub summary: bool,
pub limit: u32, pub limit: u32,
@@ -23,9 +22,9 @@ pub struct SearchArgs<'a> {
} }
pub async fn run(pool: &PgPool, args: SearchArgs<'_>) -> Result<()> { pub async fn run(pool: &PgPool, args: SearchArgs<'_>) -> Result<()> {
validate_safe_search_args(args.show_secrets, args.fields)?; validate_safe_search_args(args.fields)?;
let rows = fetch_rows_paged( let rows = fetch_entries_paged(
pool, pool,
PagedFetchArgs { PagedFetchArgs {
namespace: args.namespace, namespace: args.namespace,
@@ -40,14 +39,25 @@ pub async fn run(pool: &PgPool, args: SearchArgs<'_>) -> Result<()> {
) )
.await?; .await?;
// -f/--field: extract specific field values directly // -f/--field: extract specific metadata field values directly
if !args.fields.is_empty() { if !args.fields.is_empty() {
return print_fields(&rows, args.fields); return print_fields(&rows, args.fields);
} }
// Fetch secret schemas for all returned entries (no master key needed).
let entry_ids: Vec<uuid::Uuid> = rows.iter().map(|r| r.id).collect();
let schema_map = if !args.summary && !entry_ids.is_empty() {
fetch_secret_schemas(pool, &entry_ids).await?
} else {
HashMap::new()
};
match args.output { match args.output {
OutputMode::Json | OutputMode::JsonCompact => { OutputMode::Json | OutputMode::JsonCompact => {
let arr: Vec<Value> = rows.iter().map(|r| to_json(r, args.summary)).collect(); let arr: Vec<Value> = rows
.iter()
.map(|r| to_json(r, args.summary, schema_map.get(&r.id).map(Vec::as_slice)))
.collect();
let out = if args.output == OutputMode::Json { let out = if args.output == OutputMode::Json {
serde_json::to_string_pretty(&arr)? serde_json::to_string_pretty(&arr)?
} else { } else {
@@ -55,31 +65,17 @@ pub async fn run(pool: &PgPool, args: SearchArgs<'_>) -> Result<()> {
}; };
println!("{}", out); println!("{}", out);
} }
OutputMode::Env => {
if rows.len() > 1 {
anyhow::bail!(
"env output requires exactly one record; got {}. Add more filters.",
rows.len()
);
}
if let Some(row) = rows.first() {
let map = build_metadata_env_map(row, "");
let mut pairs: Vec<(String, String)> = map.into_iter().collect();
pairs.sort_by(|a, b| a.0.cmp(&b.0));
for (k, v) in pairs {
println!("{}={}", k, shell_quote(&v));
}
} else {
eprintln!("No records found.");
}
}
OutputMode::Text => { OutputMode::Text => {
if rows.is_empty() { if rows.is_empty() {
println!("No records found."); println!("No records found.");
return Ok(()); return Ok(());
} }
for row in &rows { for row in &rows {
print_text(row, args.summary)?; print_text(
row,
args.summary,
schema_map.get(&row.id).map(Vec::as_slice),
)?;
} }
println!("{} record(s) found.", rows.len()); println!("{} record(s) found.", rows.len());
if rows.len() == args.limit as usize { if rows.len() == args.limit as usize {
@@ -95,20 +91,13 @@ pub async fn run(pool: &PgPool, args: SearchArgs<'_>) -> Result<()> {
Ok(()) Ok(())
} }
fn validate_safe_search_args(show_secrets: bool, fields: &[String]) -> Result<()> { fn validate_safe_search_args(fields: &[String]) -> Result<()> {
if show_secrets {
anyhow::bail!(
"`search` no longer reveals secrets. Use `secrets inject` or `secrets run` instead."
);
}
if let Some(field) = fields.iter().find(|field| is_secret_field(field)) { if let Some(field) = fields.iter().find(|field| is_secret_field(field)) {
anyhow::bail!( anyhow::bail!(
"Field '{}' is sensitive. `search -f` only supports metadata.* fields; use `secrets inject` or `secrets run` for secrets.", "Field '{}' is sensitive. `search -f` only supports metadata.* fields; use `secrets inject` or `secrets run` for secrets.",
field field
); );
} }
Ok(()) Ok(())
} }
@@ -119,32 +108,8 @@ fn is_secret_field(field: &str) -> bool {
) )
} }
/// Fetch rows with simple equality/tag filters (no pagination). Used by inject/run. // ── Entry fetching ────────────────────────────────────────────────────────────
pub async fn fetch_rows(
pool: &PgPool,
namespace: Option<&str>,
kind: Option<&str>,
name: Option<&str>,
tags: &[String],
query: Option<&str>,
) -> Result<Vec<Secret>> {
fetch_rows_paged(
pool,
PagedFetchArgs {
namespace,
kind,
name,
tags,
query,
sort: "name",
limit: 200,
offset: 0,
},
)
.await
}
/// Arguments for the internal paged fetch. Grouped to avoid too-many-arguments lint.
struct PagedFetchArgs<'a> { struct PagedFetchArgs<'a> {
namespace: Option<&'a str>, namespace: Option<&'a str>,
kind: Option<&'a str>, kind: Option<&'a str>,
@@ -156,7 +121,50 @@ struct PagedFetchArgs<'a> {
offset: u32, offset: u32,
} }
async fn fetch_rows_paged(pool: &PgPool, a: PagedFetchArgs<'_>) -> Result<Vec<Secret>> { /// A very large limit used when callers need all matching records (export, inject, run).
/// Postgres will stop scanning when this many rows are found; adjust if needed.
pub const FETCH_ALL_LIMIT: u32 = 100_000;
/// Fetch entries matching the given filters (used by search, inject, run).
/// `limit` caps the result set; pass `FETCH_ALL_LIMIT` when you need all matching records.
pub async fn fetch_entries(
pool: &PgPool,
namespace: Option<&str>,
kind: Option<&str>,
name: Option<&str>,
tags: &[String],
query: Option<&str>,
) -> Result<Vec<Entry>> {
fetch_entries_with_limit(pool, namespace, kind, name, tags, query, FETCH_ALL_LIMIT).await
}
/// Like `fetch_entries` but with an explicit limit. Used internally by `search`.
pub(crate) async fn fetch_entries_with_limit(
pool: &PgPool,
namespace: Option<&str>,
kind: Option<&str>,
name: Option<&str>,
tags: &[String],
query: Option<&str>,
limit: u32,
) -> Result<Vec<Entry>> {
fetch_entries_paged(
pool,
PagedFetchArgs {
namespace,
kind,
name,
tags,
query,
sort: "name",
limit,
offset: 0,
},
)
.await
}
async fn fetch_entries_paged(pool: &PgPool, a: PagedFetchArgs<'_>) -> Result<Vec<Entry>> {
let mut conditions: Vec<String> = Vec::new(); let mut conditions: Vec<String> = Vec::new();
let mut idx: i32 = 1; let mut idx: i32 = 1;
@@ -205,7 +213,7 @@ async fn fetch_rows_paged(pool: &PgPool, a: PagedFetchArgs<'_>) -> Result<Vec<Se
}; };
let sql = format!( let sql = format!(
"SELECT * FROM secrets {} ORDER BY {} LIMIT ${} OFFSET ${}", "SELECT * FROM entries {} ORDER BY {} LIMIT ${} OFFSET ${}",
where_clause, where_clause,
order, order,
idx, idx,
@@ -214,7 +222,7 @@ async fn fetch_rows_paged(pool: &PgPool, a: PagedFetchArgs<'_>) -> Result<Vec<Se
tracing::debug!(sql, "executing search query"); tracing::debug!(sql, "executing search query");
let mut q = sqlx::query_as::<_, Secret>(&sql); let mut q = sqlx::query_as::<_, Entry>(&sql);
if let Some(v) = a.namespace { if let Some(v) = a.namespace {
q = q.bind(v); q = q.bind(v);
} }
@@ -237,12 +245,62 @@ async fn fetch_rows_paged(pool: &PgPool, a: PagedFetchArgs<'_>) -> Result<Vec<Se
} }
q = q.bind(a.limit as i64).bind(a.offset as i64); q = q.bind(a.limit as i64).bind(a.offset as i64);
let rows = q.fetch_all(pool).await?; Ok(q.fetch_all(pool).await?)
Ok(rows)
} }
fn env_prefix(row: &Secret, prefix: &str) -> String { // ── Secret schema fetching (no master key) ───────────────────────────────────
let name_part = row.name.to_uppercase().replace(['-', '.', ' '], "_");
/// Fetch secret field names for a set of entry ids.
/// Returns a map from entry_id to list of SecretField.
async fn fetch_secret_schemas(
pool: &PgPool,
entry_ids: &[uuid::Uuid],
) -> Result<HashMap<uuid::Uuid, Vec<SecretField>>> {
if entry_ids.is_empty() {
return Ok(HashMap::new());
}
let fields: Vec<SecretField> = sqlx::query_as(
"SELECT * FROM secrets WHERE entry_id = ANY($1) ORDER BY entry_id, field_name",
)
.bind(entry_ids)
.fetch_all(pool)
.await?;
let mut map: HashMap<uuid::Uuid, Vec<SecretField>> = HashMap::new();
for f in fields {
map.entry(f.entry_id).or_default().push(f);
}
Ok(map)
}
/// Fetch all secret fields (including encrypted bytes) for a set of entry ids.
pub async fn fetch_secrets_for_entries(
pool: &PgPool,
entry_ids: &[uuid::Uuid],
) -> Result<HashMap<uuid::Uuid, Vec<SecretField>>> {
if entry_ids.is_empty() {
return Ok(HashMap::new());
}
let fields: Vec<SecretField> = sqlx::query_as(
"SELECT * FROM secrets WHERE entry_id = ANY($1) ORDER BY entry_id, field_name",
)
.bind(entry_ids)
.fetch_all(pool)
.await?;
let mut map: HashMap<uuid::Uuid, Vec<SecretField>> = HashMap::new();
for f in fields {
map.entry(f.entry_id).or_default().push(f);
}
Ok(map)
}
// ── Display helpers ───────────────────────────────────────────────────────────
fn env_prefix(entry: &Entry, prefix: &str) -> String {
let name_part = entry.name.to_uppercase().replace(['-', '.', ' '], "_");
if prefix.is_empty() { if prefix.is_empty() {
name_part name_part
} else { } else {
@@ -254,15 +312,12 @@ fn env_prefix(row: &Secret, prefix: &str) -> String {
} }
} }
/// Build a flat `KEY=VALUE` map from metadata only. /// Build a flat KEY=VALUE map from metadata only (no master key required).
/// Variable names: `<PREFIX><NAME>_<FIELD>` (all uppercased, hyphens/dots → underscores). pub fn build_metadata_env_map(entry: &Entry, prefix: &str) -> HashMap<String, String> {
/// If `prefix` is empty, the name segment alone is used as the prefix. let effective_prefix = env_prefix(entry, prefix);
pub fn build_metadata_env_map(row: &Secret, prefix: &str) -> HashMap<String, String> {
let effective_prefix = env_prefix(row, prefix);
let mut map = HashMap::new(); let mut map = HashMap::new();
if let Some(meta) = row.metadata.as_object() { if let Some(meta) = entry.metadata.as_object() {
for (k, v) in meta { for (k, v) in meta {
let key = format!( let key = format!(
"{}_{}", "{}_{}",
@@ -272,43 +327,68 @@ pub fn build_metadata_env_map(row: &Secret, prefix: &str) -> HashMap<String, Str
map.insert(key, json_value_to_env_string(v)); map.insert(key, json_value_to_env_string(v));
} }
} }
map map
} }
/// Build a flat KEY=VALUE map from metadata + decrypted secret fields.
/// Resolves key_ref: if metadata.key_ref is set, merges secret fields from that key entry.
pub async fn build_injected_env_map(
pool: &PgPool,
entry: &Entry,
prefix: &str,
master_key: &[u8; 32],
fields: &[SecretField],
) -> Result<HashMap<String, String>> {
let effective_prefix = env_prefix(entry, prefix);
let mut map = build_metadata_env_map(entry, prefix);
// Decrypt each secret field and add to env map.
for f in fields {
let decrypted = crypto::decrypt_json(master_key, &f.encrypted)?;
let key = format!(
"{}_{}",
effective_prefix,
f.field_name.to_uppercase().replace(['-', '.'], "_")
);
map.insert(key, json_value_to_env_string(&decrypted));
}
// Resolve key_ref: merge secrets from the referenced key entry.
if let Some(key_ref) = entry.metadata.get("key_ref").and_then(|v| v.as_str()) {
let key_entries = fetch_entries(
pool,
Some(&entry.namespace),
Some("key"),
Some(key_ref),
&[],
None,
)
.await?;
if let Some(key_entry) = key_entries.first() {
let key_ids = vec![key_entry.id];
let key_fields_map = fetch_secrets_for_entries(pool, &key_ids).await?;
let empty = vec![];
let key_fields = key_fields_map.get(&key_entry.id).unwrap_or(&empty);
let key_prefix = env_prefix(key_entry, prefix);
for f in key_fields {
let decrypted = crypto::decrypt_json(master_key, &f.encrypted)?;
let key_var = format!(
"{}_{}", "{}_{}",
effective_prefix, key_prefix,
k.to_uppercase().replace(['-', '.'], "_") f.field_name.to_uppercase().replace(['-', '.'], "_")
); );
map.insert(key, json_value_to_env_string(v)); map.insert(key_var, json_value_to_env_string(&decrypted));
} }
} else {
tracing::warn!(key_ref, "key_ref target not found");
} }
} }
Ok(map) Ok(map)
} }
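The `key_ref` indirection above boils down to: keep the entry's own variables, and if metadata names a referenced key entry, merge that entry's fields under the referenced entry's prefix, warning (not failing) when the target is missing. A stdlib-only sketch, with a `HashMap` standing in for the database and illustrative names:

```rust
use std::collections::HashMap;

// Minimal sketch of the key_ref merge. `key_store` maps a referenced key
// name to its (field, value) pairs; the real code fetches and decrypts
// these from the `secrets` table.
fn merge_key_ref(
    mut env: HashMap<String, String>,
    key_ref: Option<&str>,
    key_store: &HashMap<&str, Vec<(&str, &str)>>,
) -> HashMap<String, String> {
    if let Some(name) = key_ref {
        match key_store.get(name) {
            Some(fields) => {
                // The merged vars use the referenced entry's own prefix.
                let prefix = name.to_uppercase().replace(['-', '.', ' '], "_");
                for (field, value) in fields {
                    env.insert(
                        format!("{}_{}", prefix, field.to_uppercase()),
                        value.to_string(),
                    );
                }
            }
            // Missing target is a warning, not an error.
            None => eprintln!("warn: key_ref target '{}' not found", name),
        }
    }
    env
}

fn main() {
    let mut store = HashMap::new();
    store.insert("api.key", vec![("token", "abc123")]);
    let env = merge_key_ref(HashMap::new(), Some("api.key"), &store);
    assert_eq!(env.get("API_KEY_TOKEN").map(String::as_str), Some("abc123"));
    println!("ok");
}
```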
/// Quote a value for safe shell / env output. Wraps in single quotes,
/// escaping any single quotes within the value.
fn shell_quote(s: &str) -> String {
format!("'{}'", s.replace('\'', "'\\''"))
}
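The quoting rule above is the standard POSIX-shell trick: close the single-quoted string, emit an escaped quote, reopen. It can be verified directly:

```rust
// POSIX-shell single-quoting: `'` cannot appear inside single quotes, so
// each one becomes '\'' (close quote, escaped quote, reopen quote).
// The shell reads 'it'\''s' back as: it's
fn shell_quote(s: &str) -> String {
    format!("'{}'", s.replace('\'', "'\\''"))
}

fn main() {
    assert_eq!(shell_quote("plain"), "'plain'");
    assert_eq!(shell_quote("it's"), "'it'\\''s'");
    println!("ok");
}
```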
/// Convert a JSON value to its string representation suitable for env vars.
fn json_value_to_env_string(v: &Value) -> String {
match v {
Value::String(s) => s.clone(),
@@ -317,81 +397,96 @@ fn json_value_to_env_string(v: &Value) -> String {
}
}
fn to_json(entry: &Entry, summary: bool, schema: Option<&[SecretField]>) -> Value {
if summary {
let desc = entry
.metadata
.get("desc")
.or_else(|| entry.metadata.get("url"))
.and_then(|v| v.as_str())
.unwrap_or("")
.to_string();
return json!({
"namespace": entry.namespace,
"kind": entry.kind,
"name": entry.name,
"tags": entry.tags,
"desc": desc,
"updated_at": entry.updated_at.format("%Y-%m-%dT%H:%M:%SZ").to_string(),
});
}
let secrets_val: Value = match schema {
Some(fields) if !fields.is_empty() => {
let schema_arr: Vec<Value> = fields
.iter()
.map(|f| {
json!({
"field_name": f.field_name,
})
})
.collect();
Value::Array(schema_arr)
}
_ => Value::Array(vec![]),
};
json!({
"id": entry.id,
"namespace": entry.namespace,
"kind": entry.kind,
"name": entry.name,
"tags": entry.tags,
"metadata": entry.metadata,
"secrets": secrets_val,
"version": entry.version,
"created_at": entry.created_at.format("%Y-%m-%dT%H:%M:%SZ").to_string(),
"updated_at": entry.updated_at.format("%Y-%m-%dT%H:%M:%SZ").to_string(),
})
}
fn print_text(entry: &Entry, summary: bool, schema: Option<&[SecretField]>) -> Result<()> {
println!("[{}/{}] {}", entry.namespace, entry.kind, entry.name);
if summary {
let desc = entry
.metadata
.get("desc")
.or_else(|| entry.metadata.get("url"))
.and_then(|v| v.as_str())
.unwrap_or("-");
if !entry.tags.is_empty() {
println!("  tags: [{}]", entry.tags.join(", "));
}
println!("  desc: {}", desc);
println!("  updated: {}", format_local_time(entry.updated_at));
} else {
println!("  id: {}", entry.id);
if !entry.tags.is_empty() {
println!("  tags: [{}]", entry.tags.join(", "));
}
if entry.metadata.as_object().is_some_and(|m| !m.is_empty()) {
println!(
"  metadata: {}",
serde_json::to_string_pretty(&entry.metadata)?
);
}
match schema {
Some(fields) if !fields.is_empty() => {
let schema_str: Vec<String> = fields.iter().map(|f| f.field_name.clone()).collect();
println!("  secrets: {}", schema_str.join(", "));
println!("  (use `secrets inject` or `secrets run` to get values)");
}
_ => {}
}
println!("  version: {}", entry.version);
println!("  created: {}", format_local_time(entry.created_at));
}
println!();
Ok(())
}
/// Extract one or more metadata field paths like `metadata.url`.
fn print_fields(rows: &[Entry], fields: &[String]) -> Result<()> {
for row in rows {
for field in fields {
let val = extract_field(row, field)?;
@@ -401,13 +496,13 @@ fn print_fields(rows: &[Secret], fields: &[String]) -> Result<()> {
Ok(())
}
fn extract_field(entry: &Entry, field: &str) -> Result<String> {
let (section, key) = field
.split_once('.')
.ok_or_else(|| anyhow::anyhow!("Invalid field path '{}'. Use metadata.<key>.", field))?;
let obj = match section {
"metadata" | "meta" => &entry.metadata,
other => anyhow::bail!("Unknown field section '{}'. Use 'metadata'.", other),
};
@@ -421,9 +516,9 @@ fn extract_field(row: &Secret, field: &str) -> Result<String> {
anyhow::anyhow!(
"Field '{}' not found in record [{}/{}/{}]",
field,
entry.namespace,
entry.kind,
entry.name
)
})
}
@@ -435,45 +530,49 @@ mod tests {
use serde_json::json;
use uuid::Uuid;
fn sample_entry() -> Entry {
Entry {
id: Uuid::nil(),
namespace: "refining".to_string(),
kind: "service".to_string(),
name: "gitea.main".to_string(),
tags: vec!["prod".to_string()],
metadata: json!({"url": "https://code.example.com", "enabled": true}),
version: 1,
created_at: Utc::now(),
updated_at: Utc::now(),
}
}

fn sample_fields() -> Vec<SecretField> {
let key = [0x42u8; 32];
let enc = crypto::encrypt_json(&key, &json!("abc123")).unwrap();
vec![SecretField {
id: Uuid::nil(),
entry_id: Uuid::nil(),
field_name: "token".to_string(),
encrypted: enc,
version: 1,
created_at: Utc::now(),
updated_at: Utc::now(),
}]
}
#[test]
fn rejects_secret_field_extraction() {
let fields = vec!["secret.token".to_string()];
let err = validate_safe_search_args(&fields).unwrap_err();
assert!(err.to_string().contains("sensitive"));
}
#[test]
fn metadata_env_map_excludes_secret_values() {
let entry = sample_entry();
let map = build_metadata_env_map(&entry, "");
assert_eq!(
map.get("GITEA_MAIN_URL").map(String::as_str),
Some("https://code.example.com")
);
assert_eq!(
map.get("GITEA_MAIN_ENABLED").map(String::as_str),
@@ -483,14 +582,21 @@ mod tests {
}
#[test]
fn to_json_full_includes_secrets_schema() {
let entry = sample_entry();
let fields = sample_fields();
let v = to_json(&entry, false, Some(&fields));
let secrets = v.get("secrets").unwrap().as_array().unwrap();
assert_eq!(secrets.len(), 1);
assert_eq!(secrets[0]["field_name"], "token");
}

#[test]
fn to_json_summary_omits_secrets_schema() {
let entry = sample_entry();
let fields = sample_fields();
let v = to_json(&entry, true, Some(&fields));
assert!(v.get("secrets").is_none());
}
}

View File

@@ -1,23 +1,16 @@
use anyhow::Result;
use serde_json::{Map, Value, json};
use sqlx::PgPool;
use uuid::Uuid;
use super::add::{
collect_field_paths, collect_key_paths, flatten_json_fields, insert_path, parse_key_path,
parse_kv, remove_path,
};

use crate::crypto;
use crate::db;
use crate::models::EntryRow;
use crate::output::{OutputMode, print_json};
pub struct UpdateArgs<'a> {
pub namespace: &'a str,
@@ -35,9 +28,9 @@ pub struct UpdateArgs<'a> {
pub async fn run(pool: &PgPool, args: UpdateArgs<'_>, master_key: &[u8; 32]) -> Result<()> {
let mut tx = pool.begin().await?;
let row: Option<EntryRow> = sqlx::query_as(
"SELECT id, version, tags, metadata \
FROM entries \
WHERE namespace = $1 AND kind = $2 AND name = $3 \
FOR UPDATE",
)
@@ -56,11 +49,11 @@ pub async fn run(pool: &PgPool, args: UpdateArgs<'_>, master_key: &[u8; 32]) ->
)
})?;
// Snapshot current entry state before modifying.
if let Err(e) = db::snapshot_entry_history(
&mut tx,
db::EntrySnapshotParams {
entry_id: row.id,
namespace: args.namespace,
kind: args.kind,
name: args.name,
@@ -68,15 +61,14 @@ pub async fn run(pool: &PgPool, args: UpdateArgs<'_>, master_key: &[u8; 32]) ->
action: "update",
tags: &row.tags,
metadata: &row.metadata,
},
)
.await
{
tracing::warn!(error = %e, "failed to snapshot entry history before update");
}
// ── Merge tags ────────────────────────────────────────────────────────────
let mut tags: Vec<String> = row.tags;
for t in args.add_tags {
if !tags.contains(t) {
@@ -85,7 +77,7 @@ pub async fn run(pool: &PgPool, args: UpdateArgs<'_>, master_key: &[u8; 32]) ->
}
tags.retain(|t| !args.remove_tags.contains(t));

// ── Merge metadata ────────────────────────────────────────────────────────
let mut meta_map: Map<String, Value> = match row.metadata {
Value::Object(m) => m,
_ => Map::new(),
@@ -100,43 +92,14 @@ pub async fn run(pool: &PgPool, args: UpdateArgs<'_>, master_key: &[u8; 32]) ->
}
let metadata = Value::Object(meta_map);
// CAS update of the entry row.
let result = sqlx::query(
"UPDATE entries \
SET tags = $1, metadata = $2, version = version + 1, updated_at = NOW() \
WHERE id = $3 AND version = $4",
)
.bind(&tags)
.bind(&metadata)
.bind(row.id)
.bind(row.version)
.execute(&mut *tx)
@@ -152,6 +115,116 @@ pub async fn run(pool: &PgPool, args: UpdateArgs<'_>, master_key: &[u8; 32]) ->
);
}
let new_version = row.version + 1;
// ── Update secret fields ──────────────────────────────────────────────────
for entry in args.secret_entries {
let (path, field_value) = parse_kv(entry)?;
// For nested paths (e.g. credentials:type), flatten into dot-separated names
// and treat the sub-value as the individual field to store.
let flat = flatten_json_fields("", &{
let mut m = Map::new();
insert_path(&mut m, &path, field_value)?;
Value::Object(m)
});
for (field_name, fv) in &flat {
let encrypted = crypto::encrypt_json(master_key, fv)?;
// Snapshot existing field before replacing.
#[derive(sqlx::FromRow)]
struct ExistingField {
id: Uuid,
encrypted: Vec<u8>,
}
let existing_field: Option<ExistingField> = sqlx::query_as(
"SELECT id, encrypted \
FROM secrets WHERE entry_id = $1 AND field_name = $2",
)
.bind(row.id)
.bind(field_name)
.fetch_optional(&mut *tx)
.await?;
if let Some(ef) = &existing_field
&& let Err(e) = db::snapshot_secret_history(
&mut tx,
db::SecretSnapshotParams {
entry_id: row.id,
secret_id: ef.id,
entry_version: row.version,
field_name,
encrypted: &ef.encrypted,
action: "update",
},
)
.await
{
tracing::warn!(error = %e, "failed to snapshot secret field history");
}
sqlx::query(
"INSERT INTO secrets (entry_id, field_name, encrypted) \
VALUES ($1, $2, $3) \
ON CONFLICT (entry_id, field_name) DO UPDATE SET \
encrypted = EXCLUDED.encrypted, \
version = secrets.version + 1, \
updated_at = NOW()",
)
.bind(row.id)
.bind(field_name)
.bind(&encrypted)
.execute(&mut *tx)
.await?;
}
}
// ── Remove secret fields ──────────────────────────────────────────────────
for key in args.remove_secrets {
let path = parse_key_path(key)?;
// Dot-join the path to match flattened field_name storage.
let field_name = path.join(".");
// Snapshot before delete.
#[derive(sqlx::FromRow)]
struct FieldToDelete {
id: Uuid,
encrypted: Vec<u8>,
}
let field: Option<FieldToDelete> = sqlx::query_as(
"SELECT id, encrypted \
FROM secrets WHERE entry_id = $1 AND field_name = $2",
)
.bind(row.id)
.bind(&field_name)
.fetch_optional(&mut *tx)
.await?;
if let Some(f) = field {
if let Err(e) = db::snapshot_secret_history(
&mut tx,
db::SecretSnapshotParams {
entry_id: row.id,
secret_id: f.id,
entry_version: new_version,
field_name: &field_name,
encrypted: &f.encrypted,
action: "delete",
},
)
.await
{
tracing::warn!(error = %e, "failed to snapshot secret field history before delete");
}
sqlx::query("DELETE FROM secrets WHERE id = $1")
.bind(f.id)
.execute(&mut *tx)
.await?;
}
}
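Both the update and delete paths above depend on the same convention: a nested path entered on the command line (e.g. `credentials:type`) is flattened into a dot-separated field name (`credentials.type`) before it is looked up or stored in the `secrets` table. A stdlib-only sketch of that flattening, with a tiny `Val` enum standing in for `serde_json::Value` (names here are illustrative):

```rust
// Stdlib-only sketch of the flattening used when storing secret fields:
// nested objects collapse into dot-separated field names.
enum Val {
    Str(String),
    Obj(Vec<(String, Val)>),
}

fn flatten(prefix: &str, v: &Val, out: &mut Vec<(String, String)>) {
    match v {
        // A leaf value is stored under the accumulated dot-path.
        Val::Str(s) => out.push((prefix.to_string(), s.clone())),
        Val::Obj(entries) => {
            for (k, child) in entries {
                let name = if prefix.is_empty() {
                    k.clone()
                } else {
                    format!("{}.{}", prefix, k)
                };
                flatten(&name, child, out);
            }
        }
    }
}

fn main() {
    // { "credentials": { "type": "ssh" } } → field "credentials.type"
    let v = Val::Obj(vec![(
        "credentials".to_string(),
        Val::Obj(vec![("type".to_string(), Val::Str("ssh".to_string()))]),
    )]);
    let mut out = Vec::new();
    flatten("", &v, &mut out);
    assert_eq!(out, vec![("credentials.type".to_string(), "ssh".to_string())]);
    println!("ok");
}
```

This mirrors why the delete path can simply dot-join the parsed key path to find the stored `field_name`.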
let meta_keys = collect_key_paths(args.meta_entries)?;
let remove_meta_keys = collect_field_paths(args.remove_meta)?;
let secret_keys = collect_key_paths(args.secret_entries)?;
@@ -190,11 +263,8 @@ pub async fn run(pool: &PgPool, args: UpdateArgs<'_>, master_key: &[u8; 32]) ->
});
match args.output {
OutputMode::Json | OutputMode::JsonCompact => {
print_json(&result_json, &args.output)?;
}
_ => {
println!("Updated: [{}/{}] {}", args.namespace, args.kind, args.name);

View File

@@ -5,10 +5,26 @@ use sha2::{Digest, Sha256};
use std::io::{Cursor, Read, Write};
use std::time::Duration;
const CURRENT_VERSION: &str = env!("CARGO_PKG_VERSION");
/// Build-time config via `option_env!("SECRETS_UPGRADE_URL")`. Set during `cargo build`, e.g.:
/// SECRETS_UPGRADE_URL=https://... cargo build --release
const BUILD_UPGRADE_URL: Option<&'static str> = option_env!("SECRETS_UPGRADE_URL");
fn upgrade_api_url() -> Result<String> {
if let Some(url) = BUILD_UPGRADE_URL.filter(|s| !s.trim().is_empty()) {
return Ok(url.to_string());
}
let url = std::env::var("SECRETS_UPGRADE_URL").context(
"SECRETS_UPGRADE_URL is not set at build or runtime. Set it when building: \
SECRETS_UPGRADE_URL=https://... cargo build, or export before running secrets upgrade.",
)?;
if url.trim().is_empty() {
anyhow::bail!("SECRETS_UPGRADE_URL is empty.");
}
Ok(url)
}
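The resolution order in `upgrade_api_url` is: a build-time value baked in via `option_env!` wins; otherwise fall back to the runtime environment; blank or whitespace-only values count as unset. Because `option_env!` is fixed at compile time, the precedence logic can be sketched with plain parameters instead (`resolve_upgrade_url` and the URLs are illustrative):

```rust
// Sketch of the resolution order: build-time value first, then a runtime
// lookup, with blank values treated as unset.
fn resolve_upgrade_url(
    build_time: Option<&str>,
    runtime: impl Fn() -> Option<String>,
) -> Result<String, String> {
    if let Some(url) = build_time.filter(|s| !s.trim().is_empty()) {
        return Ok(url.to_string());
    }
    match runtime() {
        Some(url) if !url.trim().is_empty() => Ok(url),
        _ => Err("SECRETS_UPGRADE_URL is not set at build or runtime".to_string()),
    }
}

fn main() {
    let rt = || Some("https://runtime.example.com".to_string());
    // Build-time value takes precedence over the runtime environment.
    assert_eq!(
        resolve_upgrade_url(Some("https://build.example.com"), rt),
        Ok("https://build.example.com".to_string())
    );
    // A blank build-time value falls through to the runtime lookup.
    assert_eq!(
        resolve_upgrade_url(Some("  "), rt),
        Ok("https://runtime.example.com".to_string())
    );
    assert!(resolve_upgrade_url(None, || None).is_err());
    println!("ok");
}
```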
#[derive(Debug, Deserialize)]
struct Release {
tag_name: String,
@@ -186,13 +202,14 @@ pub async fn run(check_only: bool) -> Result<()> {
.build()
.context("failed to build HTTP client")?;
let api_url = upgrade_api_url()?;
let release: Release = client
.get(&api_url)
.send()
.await
.context("failed to fetch release info")?
.error_for_status()
.context("release API returned an error")?
.json()
.await
.context("failed to parse release JSON")?;

View File

@@ -8,19 +8,23 @@ pub struct Config {
pub database_url: Option<String>,
}
pub fn config_dir() -> Result<PathBuf> {
let dir = dirs::config_dir()
.or_else(|| dirs::home_dir().map(|h| h.join(".config")))
.context(
"Cannot determine config directory: \
neither XDG_CONFIG_HOME nor HOME is set",
)?
.join("secrets");
Ok(dir)
}
pub fn config_path() -> Result<PathBuf> {
Ok(config_dir()?.join("config.toml"))
}
pub fn load_config() -> Result<Config> {
let path = config_path()?;
if !path.exists() {
return Ok(Config::default());
}
@@ -32,11 +36,11 @@ pub fn load_config() -> Result<Config> {
}
pub fn save_config(config: &Config) -> Result<()> {
let dir = config_dir()?;
fs::create_dir_all(&dir)
.with_context(|| format!("failed to create config dir: {}", dir.display()))?;
let path = dir.join("config.toml");
let content = toml::to_string_pretty(config).context("failed to serialize config")?;
fs::write(&path, &content)
.with_context(|| format!("failed to write config file: {}", path.display()))?;

View File

@@ -10,12 +10,24 @@ const KEYRING_SERVICE: &str = "secrets-cli";
const KEYRING_USER: &str = "master-key";
const NONCE_LEN: usize = 12;
// Argon2id parameters — OWASP recommended (m=64 MiB, t=3 iterations, p=4 threads, key=32 B)
const ARGON2_M_COST: u32 = 65_536;
const ARGON2_T_COST: u32 = 3;
const ARGON2_P_COST: u32 = 4;
const ARGON2_KEY_LEN: usize = 32;
// ─── Argon2id key derivation ─────────────────────────────────────────────────

/// Derive a 32-byte Master Key from a password and salt using Argon2id.
/// Parameters: m=65536 KiB (64 MiB), t=3, p=4 — OWASP recommended.
pub fn derive_master_key(password: &str, salt: &[u8]) -> Result<[u8; 32]> {
let params = Params::new(
ARGON2_M_COST,
ARGON2_T_COST,
ARGON2_P_COST,
Some(ARGON2_KEY_LEN),
)
.context("invalid Argon2id params")?;
let argon2 = Argon2::new(argon2::Algorithm::Argon2id, Version::V0x13, params);
let mut key = [0u8; 32];
argon2

160
src/db.rs
View File

@@ -1,7 +1,10 @@
use anyhow::Result;
use serde_json::Value;
use sqlx::PgPool;
use sqlx::postgres::PgPoolOptions;

use crate::audit::current_actor;

pub async fn create_pool(database_url: &str) -> Result<PgPool> {
tracing::debug!("connecting to database");
let pool = PgPoolOptions::new()
@@ -17,61 +20,46 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
tracing::debug!("running migrations");
sqlx::raw_sql(
r#"
-- ── entries: top-level entities (server, service, key, …) ──────────────
CREATE TABLE IF NOT EXISTS entries (
id UUID PRIMARY KEY DEFAULT uuidv7(),
namespace VARCHAR(64) NOT NULL,
kind VARCHAR(64) NOT NULL,
name VARCHAR(256) NOT NULL,
tags TEXT[] NOT NULL DEFAULT '{}',
metadata JSONB NOT NULL DEFAULT '{}',
version BIGINT NOT NULL DEFAULT 1,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
UNIQUE(namespace, kind, name)
);
CREATE INDEX IF NOT EXISTS idx_entries_namespace ON entries(namespace);
CREATE INDEX IF NOT EXISTS idx_entries_kind ON entries(kind);
CREATE INDEX IF NOT EXISTS idx_entries_tags ON entries USING GIN(tags);
CREATE INDEX IF NOT EXISTS idx_entries_metadata ON entries USING GIN(metadata jsonb_path_ops);

-- ── secrets: one row per encrypted field, plaintext schema metadata ────
CREATE TABLE IF NOT EXISTS secrets (
id UUID PRIMARY KEY DEFAULT uuidv7(),
entry_id UUID NOT NULL REFERENCES entries(id) ON DELETE CASCADE,
field_name VARCHAR(256) NOT NULL,
encrypted BYTEA NOT NULL DEFAULT '\x',
version BIGINT NOT NULL DEFAULT 1,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
UNIQUE(entry_id, field_name)
);
CREATE INDEX IF NOT EXISTS idx_secrets_entry_id ON secrets(entry_id);

-- ── kv_config: global key-value store (Argon2id salt, etc.) ────────────
CREATE TABLE IF NOT EXISTS kv_config (
key TEXT PRIMARY KEY,
value BYTEA NOT NULL
);

-- ── audit_log: append-only operation log ────────────────────────────────
CREATE TABLE IF NOT EXISTS audit_log (
id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
action VARCHAR(32) NOT NULL,
@@ -83,14 +71,13 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE INDEX IF NOT EXISTS idx_audit_log_created ON audit_log(created_at DESC);
CREATE INDEX IF NOT EXISTS idx_audit_log_ns_kind ON audit_log(namespace, kind);
-- ── entries_history: entry-level snapshot (tags + metadata) ─────────────
CREATE TABLE IF NOT EXISTS entries_history (
id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
entry_id UUID NOT NULL,
namespace VARCHAR(64) NOT NULL,
kind VARCHAR(64) NOT NULL,
name VARCHAR(256) NOT NULL,
@@ -98,13 +85,32 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
action VARCHAR(16) NOT NULL,
tags TEXT[] NOT NULL DEFAULT '{}',
metadata JSONB NOT NULL DEFAULT '{}',
actor VARCHAR(128) NOT NULL DEFAULT '',
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE INDEX IF NOT EXISTS idx_entries_history_entry_id
ON entries_history(entry_id, version DESC);
CREATE INDEX IF NOT EXISTS idx_entries_history_ns_kind_name
ON entries_history(namespace, kind, name, version DESC);

-- ── secrets_history: field-level snapshot ───────────────────────────────
CREATE TABLE IF NOT EXISTS secrets_history (
id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
entry_id UUID NOT NULL,
secret_id UUID NOT NULL,
entry_version BIGINT NOT NULL,
field_name VARCHAR(256) NOT NULL,
encrypted BYTEA NOT NULL DEFAULT '\x',
action VARCHAR(16) NOT NULL,
actor VARCHAR(128) NOT NULL DEFAULT '',
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE INDEX IF NOT EXISTS idx_secrets_history_entry_id
ON secrets_history(entry_id, entry_version DESC);
CREATE INDEX IF NOT EXISTS idx_secrets_history_secret_id
ON secrets_history(secret_id);
"#,
)
.execute(pool)
@@ -113,33 +119,31 @@ pub async fn migrate(pool: &PgPool) -> Result<()> {
Ok(())
}
// ── Entry-level history snapshot ────────────────────────────────────────────

pub struct EntrySnapshotParams<'a> {
pub entry_id: uuid::Uuid,
pub namespace: &'a str,
pub kind: &'a str,
pub name: &'a str,
pub version: i64,
pub action: &'a str,
pub tags: &'a [String],
pub metadata: &'a Value,
}
/// Snapshot a secrets row into `secrets_history` before a write operation. /// Snapshot an entry row into `entries_history` before a write operation.
/// `action` is one of "add", "update", "delete". pub async fn snapshot_entry_history(
/// Failures are non-fatal (caller should warn).
pub async fn snapshot_history(
tx: &mut sqlx::Transaction<'_, sqlx::Postgres>, tx: &mut sqlx::Transaction<'_, sqlx::Postgres>,
p: SnapshotParams<'_>, p: EntrySnapshotParams<'_>,
) -> Result<()> { ) -> Result<()> {
let actor = std::env::var("USER").unwrap_or_default(); let actor = current_actor();
sqlx::query( sqlx::query(
"INSERT INTO secrets_history \ "INSERT INTO entries_history \
(secret_id, namespace, kind, name, version, action, tags, metadata, encrypted, actor) \ (entry_id, namespace, kind, name, version, action, tags, metadata, actor) \
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)", VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9)",
) )
.bind(p.secret_id) .bind(p.entry_id)
.bind(p.namespace) .bind(p.namespace)
.bind(p.kind) .bind(p.kind)
.bind(p.name) .bind(p.name)
@@ -147,15 +151,49 @@ pub async fn snapshot_history(
.bind(p.action) .bind(p.action)
.bind(p.tags) .bind(p.tags)
.bind(p.metadata) .bind(p.metadata)
.bind(p.encrypted)
.bind(&actor) .bind(&actor)
.execute(&mut **tx) .execute(&mut **tx)
.await?; .await?;
Ok(()) Ok(())
} }
// ── Secret field-level history snapshot ─────────────────────────────────────
pub struct SecretSnapshotParams<'a> {
pub entry_id: uuid::Uuid,
pub secret_id: uuid::Uuid,
pub entry_version: i64,
pub field_name: &'a str,
pub encrypted: &'a [u8],
pub action: &'a str,
}
/// Snapshot a single secret field into `secrets_history`.
pub async fn snapshot_secret_history(
tx: &mut sqlx::Transaction<'_, sqlx::Postgres>,
p: SecretSnapshotParams<'_>,
) -> Result<()> {
let actor = current_actor();
sqlx::query(
"INSERT INTO secrets_history \
(entry_id, secret_id, entry_version, field_name, encrypted, action, actor) \
VALUES ($1, $2, $3, $4, $5, $6, $7)",
)
.bind(p.entry_id)
.bind(p.secret_id)
.bind(p.entry_version)
.bind(p.field_name)
.bind(p.encrypted)
.bind(p.action)
.bind(&actor)
.execute(&mut **tx)
.await?;
Ok(())
}
// ── Argon2 salt helpers ──────────────────────────────────────────────────────
/// Load the Argon2id salt from the database.
/// Returns None if not yet initialized.
pub async fn load_argon2_salt(pool: &PgPool) -> Result<Option<Vec<u8>>> {
let row: Option<(Vec<u8>,)> =
sqlx::query_as("SELECT value FROM kv_config WHERE key = 'argon2_salt'")
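Both snapshot helpers above call `current_actor()`, which sits outside this hunk. Judging from the inline `std::env::var("USER")` lookup it replaced, a minimal sketch (an assumption, not the actual implementation) could be:

```rust
/// Hypothetical sketch of current_actor(): resolve the audit actor from the
/// environment, mirroring the `std::env::var("USER")` call it replaced.
fn current_actor() -> String {
    std::env::var("USER").unwrap_or_default()
}
```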


@@ -7,6 +7,11 @@ mod models;
mod output;
use anyhow::Result;
/// Load .env from current or parent directories (best-effort, no error if missing).
fn load_dotenv() {
let _ = dotenvy::dotenv();
}
use clap::{Parser, Subcommand};
use tracing_subscriber::EnvFilter;
@@ -76,25 +81,25 @@ EXAMPLES:
# Add a server
secrets add -n refining --kind server --name my-server \\
--tag aliyun --tag shanghai \\
-m ip=10.0.0.1 -m desc=\"Example ECS\" \\
-s username=root -s ssh_key=@./keys/server.pem
# Add a service credential
secrets add -n refining --kind service --name gitea \\
--tag gitea \\
-m url=https://code.example.com -m default_org=myorg \\
-s token=<token>
# Add typed JSON metadata
secrets add -n refining --kind service --name gitea \\
-m port:=3000 \\
-m enabled:=true \\
-m domains:='[\"code.example.com\",\"git.example.com\"]' \\
-m tls:='{\"enabled\":true,\"redirect_http\":true}'
# Add with token read from a file
secrets add -n ricnsmart --kind service --name mqtt \\
-m host=mqtt.example.com -m port=1883 \\
-s password=@./mqtt_password.txt
# Add typed JSON secrets
@@ -106,7 +111,13 @@ EXAMPLES:
# Write a multiline file into a nested secret field
secrets add -n refining --kind server --name my-server \\
-s credentials:content@./keys/server.pem
# Shared PEM (key_ref): store key once, reference from multiple servers
secrets add -n refining --kind key --name my-shared-key \\
--tag aliyun -s content=@./keys/shared.pem
secrets add -n refining --kind server --name i-abc123 \\
-m ip=10.0.0.1 -m key_ref=my-shared-key -s username=ecs-user")]
Add {
/// Namespace, e.g. refining, ricnsmart
#[arg(short, long)]
@@ -114,19 +125,20 @@ EXAMPLES:
/// Kind of record: server, service, key, ...
#[arg(long)]
kind: String,
/// Human-readable unique name, e.g. gitea, i-example0abcd1234efgh
#[arg(long)]
name: String,
/// Tag for categorization (repeatable), e.g. --tag aliyun --tag hongkong
#[arg(long = "tag")]
tags: Vec<String>,
/// Plaintext metadata: key=value, key:=<json>, key=@file, or nested:path@file.
/// Use key_ref=<name> to reference a shared key entry (kind=key); inject/run merge its secrets.
#[arg(long = "meta", short = 'm')]
meta: Vec<String>,
/// Secret entry: key=value, key:=<json>, key=@file, or nested:path@file
#[arg(long = "secret", short = 's')]
secrets: Vec<String>,
/// Output format: text (default on TTY), json, json-compact
#[arg(short, long = "output")]
output: Option<String>,
},
@@ -135,7 +147,7 @@ EXAMPLES:
///
/// Supports fuzzy search (-q), exact lookup (--name), field extraction (-f),
/// summary view (--summary), pagination (--limit / --offset), and structured
/// output (-o json / json-compact). When stdout is not a TTY, output
/// defaults to json-compact automatically.
#[command(after_help = "EXAMPLES:
# Discover all records (summary, safe default limit)
@@ -157,9 +169,6 @@ EXAMPLES:
secrets search -n refining --kind service --name gitea \\
-f metadata.url -f metadata.default_org
# Inject decrypted secrets only when needed
secrets inject -n refining --kind service --name gitea
secrets run -n refining --kind service --name gitea -- printenv
@@ -180,7 +189,7 @@ EXAMPLES:
/// Filter by kind, e.g. server, service
#[arg(long)]
kind: Option<String>,
/// Exact name filter, e.g. gitea, i-example0abcd1234efgh
#[arg(long)]
name: Option<String>,
/// Filter by tag, e.g. --tag aliyun (repeatable for AND intersection)
@@ -189,9 +198,6 @@ EXAMPLES:
/// Fuzzy keyword (matches name, namespace, kind, tags, metadata text)
#[arg(short, long)]
query: Option<String>,
/// Extract metadata field value(s) directly: metadata.<key> (repeatable)
#[arg(short = 'f', long = "field")]
fields: Vec<String>,
@@ -207,28 +213,44 @@ EXAMPLES:
/// Sort order: name (default), updated, created
#[arg(long, default_value = "name")]
sort: String,
/// Output format: text (default on TTY), json, json-compact
#[arg(short, long = "output")]
output: Option<String>,
},
/// Delete one record precisely, or bulk-delete by namespace.
///
/// With --name: deletes exactly that record (--kind also required).
/// Without --name: bulk-deletes all records matching namespace + optional --kind.
/// Use --dry-run to preview bulk deletes before committing.
#[command(after_help = "EXAMPLES:
# Delete a single record (exact match)
secrets delete -n refining --kind service --name legacy-mqtt
# Preview what a bulk delete would remove (no writes)
secrets delete -n refining --dry-run
# Bulk-delete all records in a namespace
secrets delete -n ricnsmart
# Bulk-delete only server records in a namespace
secrets delete -n ricnsmart --kind server
# JSON output
secrets delete -n refining --kind service -o json")]
Delete {
/// Namespace, e.g. refining
#[arg(short, long)]
namespace: String,
/// Kind filter, e.g. server, service (required with --name; optional for bulk)
#[arg(long)]
kind: Option<String>,
/// Exact name of the record to delete (omit for bulk delete)
#[arg(long)]
name: Option<String>,
/// Preview what would be deleted without making any changes (bulk mode only)
#[arg(long)]
dry_run: bool,
/// Output format: text (default on TTY), json, json-compact
#[arg(short, long = "output")]
output: Option<String>,
@@ -272,7 +294,11 @@ EXAMPLES:
# Update nested typed JSON fields
secrets update -n refining --kind service --name deploy-bot \\
-s auth:config:='{\"issuer\":\"gitea\",\"rotate\":true}' \\
-s auth:retry:=5
# Rotate shared PEM (all servers with key_ref=my-shared-key get the new key)
secrets update -n refining --kind key --name my-shared-key \\
-s content=@./keys/new-shared.pem")]
Update {
/// Namespace, e.g. refining, ricnsmart
#[arg(short, long)]
@@ -289,7 +315,8 @@ EXAMPLES:
/// Remove a tag (repeatable)
#[arg(long = "remove-tag")]
remove_tags: Vec<String>,
/// Set or overwrite a metadata field: key=value, key:=<json>, key=@file, or nested:path@file.
/// Use key_ref=<name> to reference a shared key entry (kind=key).
#[arg(long = "meta", short = 'm')]
meta: Vec<String>,
/// Delete a metadata field by key or nested path, e.g. old_port or credentials:content
@@ -379,7 +406,9 @@ EXAMPLES:
secrets inject -n refining --kind service --name gitea -o json
# Eval into current shell (use with caution)
eval $(secrets inject -n refining --kind service --name gitea)
# For entries with metadata.key_ref, referenced key's secrets are merged automatically")]
Inject {
#[arg(short, long)]
namespace: Option<String>,
@@ -409,7 +438,9 @@ EXAMPLES:
secrets run --tag production -- env | grep GITEA
# With prefix
secrets run -n refining --kind service --name gitea --prefix GITEA -- printenv
# metadata.key_ref entries get key secrets merged (e.g. server + shared PEM)")]
Run {
#[arg(short, long)]
namespace: Option<String>,
@@ -429,8 +460,8 @@ EXAMPLES:
/// Check for a newer version and update the binary in-place.
///
/// Downloads the latest release and replaces the current binary. No database connection or master key required.
/// Release URL defaults to the upstream server; override via SECRETS_UPGRADE_URL for self-hosted or forked deployments.
#[command(after_help = "EXAMPLES:
# Check for updates only (no download)
secrets upgrade --check
@@ -442,6 +473,83 @@ EXAMPLES:
#[arg(long)]
check: bool,
},
/// Export records to a file (JSON, TOML, or YAML).
///
/// Decrypts and exports all matched records. Requires master key unless --no-secrets is used.
#[command(after_help = "EXAMPLES:
# Export everything to JSON
secrets export --file backup.json
# Export a specific namespace to TOML
secrets export -n refining --file refining.toml
# Export a specific kind
secrets export -n refining --kind service --file services.yaml
# Export by tag
secrets export --tag production --file prod.json
# Export schema only (no decryption needed)
secrets export --no-secrets --file schema.json
# Print to stdout in YAML
secrets export -n refining --format yaml")]
Export {
/// Filter by namespace
#[arg(short, long)]
namespace: Option<String>,
/// Filter by kind, e.g. server, service
#[arg(long)]
kind: Option<String>,
/// Exact name filter
#[arg(long)]
name: Option<String>,
/// Filter by tag (repeatable)
#[arg(long)]
tag: Vec<String>,
/// Fuzzy keyword search
#[arg(short, long)]
query: Option<String>,
/// Output file path (format inferred from extension: .json / .toml / .yaml / .yml)
#[arg(long)]
file: Option<String>,
/// Explicit format: json, toml, or yaml (overrides file extension; required for stdout)
#[arg(long)]
format: Option<String>,
/// Omit secrets from output (no master key required)
#[arg(long)]
no_secrets: bool,
},
/// Import records from a file (JSON, TOML, or YAML).
///
/// Reads an export file and inserts or updates entries. Requires master key to re-encrypt secrets.
#[command(after_help = "EXAMPLES:
# Import a JSON backup (conflict = error by default)
secrets import backup.json
# Import and overwrite existing records
secrets import --force refining.toml
# Preview what would be imported (no writes)
secrets import --dry-run backup.yaml
# JSON output for the import summary
secrets import backup.json -o json")]
Import {
/// Input file path (format inferred from extension: .json / .toml / .yaml / .yml)
file: String,
/// Overwrite existing records on conflict (default: error and abort)
#[arg(long)]
force: bool,
/// Preview operations without writing to the database
#[arg(long)]
dry_run: bool,
/// Output format: text (default on TTY), json, json-compact
#[arg(short, long = "output")]
output: Option<String>,
},
}
#[derive(Subcommand)]
@@ -459,6 +567,7 @@ enum ConfigAction {
#[tokio::main]
async fn main() -> Result<()> {
load_dotenv();
let cli = Cli::parse();
let filter = if cli.verbose {
@@ -531,7 +640,6 @@ async fn main() -> Result<()> {
name,
tag,
query,
fields,
summary,
limit,
@@ -549,7 +657,6 @@ async fn main() -> Result<()> {
name: name.as_deref(),
tags: &tag,
query: query.as_deref(),
fields: &fields,
summary,
limit,
@@ -565,12 +672,23 @@ async fn main() -> Result<()> {
namespace,
kind,
name,
dry_run,
output,
} => {
let _span =
tracing::info_span!("cmd", command = "delete", %namespace, ?kind, ?name).entered();
let out = resolve_output_mode(output.as_deref())?;
commands::delete::run(
&pool,
commands::delete::DeleteArgs {
namespace: &namespace,
kind: kind.as_deref(),
name: name.as_deref(),
dry_run,
output: out,
},
)
.await?;
}
Commands::Update {
@@ -616,7 +734,17 @@ async fn main() -> Result<()> {
output,
} => {
let out = resolve_output_mode(output.as_deref())?;
commands::history::run(
&pool,
commands::history::HistoryArgs {
namespace: &namespace,
kind: &kind,
name: &name,
limit,
output: out,
},
)
.await?;
}
Commands::Rollback {
@@ -690,6 +818,61 @@ async fn main() -> Result<()> {
)
.await?;
}
Commands::Export {
namespace,
kind,
name,
tag,
query,
file,
format,
no_secrets,
} => {
let master_key = if no_secrets {
None
} else {
Some(crypto::load_master_key()?)
};
let _span = tracing::info_span!("cmd", command = "export").entered();
commands::export_cmd::run(
&pool,
commands::export_cmd::ExportArgs {
namespace: namespace.as_deref(),
kind: kind.as_deref(),
name: name.as_deref(),
tags: &tag,
query: query.as_deref(),
file: file.as_deref(),
format: format.as_deref(),
no_secrets,
},
master_key.as_ref(),
)
.await?;
}
Commands::Import {
file,
force,
dry_run,
output,
} => {
let master_key = crypto::load_master_key()?;
let _span = tracing::info_span!("cmd", command = "import").entered();
let out = resolve_output_mode(output.as_deref())?;
commands::import_cmd::run(
&pool,
commands::import_cmd::ImportArgs {
file: &file,
force,
dry_run,
output: out,
},
&master_key,
)
.await?;
}
}
Ok(())
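The delete dispatch above branches on whether `--name` was given; the documented contract for the `Delete` command (exact match needs `--kind` too, bulk mode takes an optional kind filter) can be sketched standalone. `resolve_delete_mode` is a hypothetical helper, not a function in this codebase:

```rust
#[derive(Debug, PartialEq)]
enum DeleteMode {
    Exact, // --name given: delete exactly one record (--kind required too)
    Bulk,  // no --name: delete everything matching namespace + optional kind
}

// Hypothetical mode resolution mirroring the documented CLI contract.
fn resolve_delete_mode(kind: Option<&str>, name: Option<&str>) -> Result<DeleteMode, String> {
    match (kind, name) {
        (Some(_), Some(_)) => Ok(DeleteMode::Exact),
        (None, Some(_)) => Err("--kind is required with --name".to_string()),
        (_, None) => Ok(DeleteMode::Bulk),
    }
}
```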


@@ -1,20 +1,211 @@
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::collections::BTreeMap;
use uuid::Uuid;
/// A top-level entry (server, service, key, …).
/// Sensitive fields are stored separately in `secrets`.
#[derive(Debug, Serialize, Deserialize, sqlx::FromRow)]
pub struct Entry {
pub id: Uuid,
pub namespace: String,
pub kind: String,
pub name: String,
pub tags: Vec<String>,
pub metadata: Value,
pub version: i64,
pub created_at: DateTime<Utc>,
pub updated_at: DateTime<Utc>,
}
/// A single encrypted field belonging to an Entry.
#[derive(Debug, Serialize, Deserialize, sqlx::FromRow)]
pub struct SecretField {
pub id: Uuid,
pub entry_id: Uuid,
pub field_name: String,
/// AES-256-GCM ciphertext: nonce(12B) || ciphertext+tag
/// Decrypt with crypto::decrypt_json() before use.
pub encrypted: Vec<u8>,
pub version: i64,
pub created_at: DateTime<Utc>,
pub updated_at: DateTime<Utc>,
}
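The `nonce(12B) || ciphertext+tag` layout noted on `encrypted` can be illustrated with a small splitting helper (a sketch; the real decryption lives in `crypto::decrypt_json`). The 16-byte tag length is the standard AES-GCM tag size, assumed here rather than stated in the source:

```rust
const NONCE_LEN: usize = 12; // per the layout comment: nonce(12B) || ciphertext+tag
const TAG_LEN: usize = 16;   // standard AES-GCM authentication tag (assumption)

/// Split a stored blob into (nonce, ciphertext+tag); None if impossibly short.
fn split_encrypted(blob: &[u8]) -> Option<(&[u8], &[u8])> {
    if blob.len() < NONCE_LEN + TAG_LEN {
        return None;
    }
    Some(blob.split_at(NONCE_LEN))
}
```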
// ── Internal query row types (shared across commands) ─────────────────────────
/// Minimal entry row fetched for write operations (add / update / delete / rollback).
#[derive(Debug, sqlx::FromRow)]
pub struct EntryRow {
pub id: Uuid,
pub version: i64,
pub tags: Vec<String>,
pub metadata: Value,
}
/// Minimal secret field row fetched before snapshots or cascade deletes.
#[derive(Debug, sqlx::FromRow)]
pub struct SecretFieldRow {
pub id: Uuid,
pub field_name: String,
pub encrypted: Vec<u8>,
}
// ── Export / Import types ──────────────────────────────────────────────────────
/// Supported file formats for export/import.
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum ExportFormat {
Json,
Toml,
Yaml,
}
impl ExportFormat {
/// Infer format from file extension (.json / .toml / .yaml / .yml).
pub fn from_extension(path: &str) -> anyhow::Result<Self> {
let ext = path.rsplit('.').next().unwrap_or("").to_lowercase();
Self::from_str(&ext).map_err(|_| {
anyhow::anyhow!(
"Cannot infer format from extension '.{}'. Use --format json|toml|yaml",
ext
)
})
}
/// Parse from --format CLI value.
pub fn from_str(s: &str) -> anyhow::Result<Self> {
match s.to_lowercase().as_str() {
"json" => Ok(Self::Json),
"toml" => Ok(Self::Toml),
"yaml" | "yml" => Ok(Self::Yaml),
other => anyhow::bail!("Unknown format '{}'. Expected: json, toml, or yaml", other),
}
}
/// Serialize ExportData to a string in this format.
pub fn serialize(&self, data: &ExportData) -> anyhow::Result<String> {
match self {
Self::Json => Ok(serde_json::to_string_pretty(data)?),
Self::Toml => {
let toml_val = json_to_toml_value(&serde_json::to_value(data)?)?;
toml::to_string_pretty(&toml_val)
.map_err(|e| anyhow::anyhow!("TOML serialization failed: {}", e))
}
Self::Yaml => serde_yaml::to_string(data)
.map_err(|e| anyhow::anyhow!("YAML serialization failed: {}", e)),
}
}
/// Deserialize ExportData from a string in this format.
pub fn deserialize(&self, content: &str) -> anyhow::Result<ExportData> {
match self {
Self::Json => Ok(serde_json::from_str(content)?),
Self::Toml => {
let toml_val: toml::Value = toml::from_str(content)
.map_err(|e| anyhow::anyhow!("TOML parse error: {}", e))?;
let json_val = toml_to_json_value(&toml_val);
Ok(serde_json::from_value(json_val)?)
}
Self::Yaml => serde_yaml::from_str(content)
.map_err(|e| anyhow::anyhow!("YAML parse error: {}", e)),
}
}
}
/// Top-level structure for export/import files.
#[derive(Debug, Serialize, Deserialize)]
pub struct ExportData {
pub version: u32,
pub exported_at: String,
pub entries: Vec<ExportEntry>,
}
/// A single entry with decrypted secrets for export/import.
#[derive(Debug, Serialize, Deserialize)]
pub struct ExportEntry {
pub namespace: String,
pub kind: String,
pub name: String,
#[serde(default)]
pub tags: Vec<String>,
#[serde(default)]
pub metadata: Value,
/// Decrypted secret fields. None means no secrets in this export (--no-secrets).
#[serde(default, skip_serializing_if = "Option::is_none")]
pub secrets: Option<BTreeMap<String, Value>>,
}
// ── TOML ↔ JSON value conversion ──────────────────────────────────────────────
/// Convert a serde_json Value to a toml Value.
/// `null` values are filtered out (TOML does not support null).
/// Mixed-type arrays are serialised as JSON strings.
pub fn json_to_toml_value(v: &Value) -> anyhow::Result<toml::Value> {
match v {
Value::Null => anyhow::bail!("TOML does not support null values"),
Value::Bool(b) => Ok(toml::Value::Boolean(*b)),
Value::Number(n) => {
if let Some(i) = n.as_i64() {
Ok(toml::Value::Integer(i))
} else if let Some(f) = n.as_f64() {
Ok(toml::Value::Float(f))
} else {
anyhow::bail!("unsupported number: {}", n)
}
}
Value::String(s) => Ok(toml::Value::String(s.clone())),
Value::Array(arr) => {
let items: anyhow::Result<Vec<toml::Value>> =
arr.iter().map(json_to_toml_value).collect();
match items {
Ok(vals) => Ok(toml::Value::Array(vals)),
Err(e) => {
tracing::debug!(error = %e, "mixed-type array; falling back to JSON string");
Ok(toml::Value::String(serde_json::to_string(v)?))
}
}
}
Value::Object(map) => {
let mut toml_map = toml::map::Map::new();
for (k, val) in map {
if val.is_null() {
// Skip null entries
continue;
}
match json_to_toml_value(val) {
Ok(tv) => {
toml_map.insert(k.clone(), tv);
}
Err(e) => {
tracing::debug!(key = %k, error = %e, "field not representable in TOML; falling back to JSON string");
toml_map
.insert(k.clone(), toml::Value::String(serde_json::to_string(val)?));
}
}
}
Ok(toml::Value::Table(toml_map))
}
}
}
/// Convert a toml Value back to a serde_json Value.
pub fn toml_to_json_value(v: &toml::Value) -> Value {
match v {
toml::Value::Boolean(b) => Value::Bool(*b),
toml::Value::Integer(i) => Value::Number((*i).into()),
toml::Value::Float(f) => serde_json::Number::from_f64(*f)
.map(Value::Number)
.unwrap_or(Value::Null),
toml::Value::String(s) => Value::String(s.clone()),
toml::Value::Datetime(dt) => Value::String(dt.to_string()),
toml::Value::Array(arr) => Value::Array(arr.iter().map(toml_to_json_value).collect()),
toml::Value::Table(map) => {
let obj: serde_json::Map<String, Value> = map
.iter()
.map(|(k, v)| (k.clone(), toml_to_json_value(v)))
.collect();
Value::Object(obj)
}
}
}
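`ExportFormat::from_extension` above keys off the text after the last `.`; the rule can be exercised in isolation (a standalone sketch, not the actual type):

```rust
// Standalone sketch of the extension-inference rule: everything after the
// last '.', lowercased. A name with no '.' simply fails to match.
fn infer_format(path: &str) -> Option<&'static str> {
    match path.rsplit('.').next().unwrap_or("").to_lowercase().as_str() {
        "json" => Some("json"),
        "toml" => Some("toml"),
        "yaml" | "yml" => Some("yaml"),
        _ => None,
    }
}
```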


@@ -12,8 +12,6 @@ pub enum OutputMode {
Json,
/// Single-line JSON (default when stdout is NOT a TTY, e.g. piped to jq)
JsonCompact,
}
impl FromStr for OutputMode {
@@ -24,9 +22,8 @@ impl FromStr for OutputMode {
"text" => Ok(Self::Text),
"json" => Ok(Self::Json),
"json-compact" => Ok(Self::JsonCompact),
other => Err(anyhow::anyhow!(
"Unknown output format '{}'. Valid: text, json, json-compact",
other
)),
}
@@ -53,3 +50,16 @@ pub fn format_local_time(dt: DateTime<Utc>) -> String {
.format("%Y-%m-%d %H:%M:%S %:z")
.to_string()
}
/// Print a JSON value to stdout in the requested output mode.
/// - `Json` → pretty-printed
/// - `JsonCompact` → single line
/// - `Text` → no-op (caller is responsible for the text branch)
pub fn print_json(value: &serde_json::Value, mode: &OutputMode) -> anyhow::Result<()> {
match mode {
OutputMode::Json => println!("{}", serde_json::to_string_pretty(value)?),
OutputMode::JsonCompact => println!("{}", serde_json::to_string(value)?),
OutputMode::Text => {}
}
Ok(())
}
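The output-mode resolution described in the command docs (explicit flag wins; otherwise `text` on a TTY, `json-compact` when piped) can be sketched without the CLI plumbing; `is_tty` stands in for the real TTY detection:

```rust
// Sketch of output-mode resolution per the documented defaults.
fn resolve_mode(explicit: Option<&str>, is_tty: bool) -> Result<&'static str, String> {
    match explicit {
        Some("text") => Ok("text"),
        Some("json") => Ok("json"),
        Some("json-compact") => Ok("json-compact"),
        Some(other) => Err(format!(
            "Unknown output format '{}'. Valid: text, json, json-compact",
            other
        )),
        None if is_tty => Ok("text"),
        None => Ok("json-compact"),
    }
}
```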


@@ -0,0 +1,3 @@
-----BEGIN EXAMPLE KEY PLACEHOLDER-----
This file is for local dev/testing. Replace with a real key when needed.
-----END EXAMPLE KEY PLACEHOLDER-----