Compare commits


105 Commits

Author SHA1 Message Date
c10b70757f fix: asset-info endpoint returns fixed mock data for the device_realtime field to avoid nil errors on the frontend
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 9m58s
The Gateway sync endpoint is not wired up yet, so mock data is returned for device-type assets for now.
Once it is integrated, search for buildMockDeviceRealtime and replace it with real data.
2026-03-21 14:42:48 +08:00
4d1e714366 fix: add the column rename missed by migration 000076 (card_wallet_id → asset_wallet_id)
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 8m52s
Migration 000076 only renamed the table from card_wallet to asset_wallet and missed
renaming the card_wallet_id column inside it, so column:asset_wallet_id in the Model
no longer matched the actual column name and every INSERT/SELECT touching the field failed with error 2002.

Affected columns:
- tb_asset_recharge_record.card_wallet_id → asset_wallet_id
- tb_asset_wallet_transaction.card_wallet_id → asset_wallet_id
2026-03-21 14:30:29 +08:00
d2b765327c return the complete set of fields
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 9m52s
2026-03-21 13:41:44 +08:00
7dfcf41b41 fix: wrong binding key for card-type assets made ownership validation always fail
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 9m48s
resolveAssetBindingKey incorrectly returned card.ICCID as the binding key for card-type
assets, while the ownership check isCustomerOwnAsset compares against card.VirtualNo.
The mismatch made every client-facing endpoint for card assets return 403 (no permission).

Fix: the card-type binding key is now card.VirtualNo, matching the design doc.
A data migration corrects the existing wrong binding records.
2026-03-21 11:33:57 +08:00
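The mismatch above can be sketched in a few lines — the binding key and the ownership check must derive from the same field. This is a minimal illustration; `Card`, `bindingKeyFor`, and `isCustomerOwnAsset` are stand-ins for the real types, not the actual implementation.

```go
package main

import "fmt"

// Card is a reduced stand-in for the real card model.
type Card struct {
	ICCID     string
	VirtualNo string
}

// bindingKeyFor returns the key stored when a customer binds a card asset.
// The bug: this returned c.ICCID while the ownership check compared
// against c.VirtualNo, so the two could never match.
func bindingKeyFor(c Card) string {
	return c.VirtualNo // fixed: was c.ICCID
}

// isCustomerOwnAsset checks a stored binding key against the same field.
func isCustomerOwnAsset(boundKey string, c Card) bool {
	return boundKey == c.VirtualNo
}

func main() {
	c := Card{ICCID: "89860000000000000000", VirtualNo: "V100001"}
	key := bindingKeyFor(c)
	fmt.Println(isCustomerOwnAsset(key, c)) // passes once both sides use VirtualNo
}
```

With the old ICCID-based key, `isCustomerOwnAsset` returns false for every card, which is exactly the blanket 403 the commit describes.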
ed334b946b refactor: remove dead code left over from the refactor
Some checks failed
Build and deploy to test environment (no SSH) / build-and-deploy (push) Has been cancelled
- personal_customer.Service: delete the dead methods already migrated to client_auth
  (GetProfile/SendVerificationCode/VerifyCode) and drop the now-unused
  verificationService/jwtManager dependencies
- delete the entire internal/service/customer/ directory (an early leftover with zero references)
2026-03-21 11:33:06 +08:00
95b2334658 feat: asset package-history endpoint gains package_type and status filters
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 8m10s
GET /api/c/v1/asset/package-history accepts optional parameters:
- package_type: formal (regular package) / addon (top-up pack)
- status: 0 (pending) / 1 (active) / 2 (used up) / 3 (expired) / 4 (invalidated)
Omitting them returns everything, keeping the endpoint backward compatible.
2026-03-21 11:01:21 +08:00
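The "omitting them returns everything" behaviour is the usual optional-filter pattern: apply a clause only when the caller supplied the parameter. A minimal sketch, assuming a `PackageRecord` type and field names that are illustrative, not the real DTOs:

```go
package main

import "fmt"

// PackageRecord is a reduced stand-in for a package-history row.
type PackageRecord struct {
	PackageType string // "formal" or "addon"
	Status      int    // 0..4 as documented above
}

// filterHistory applies each filter only when the caller supplied it,
// so calling with (nil, nil) preserves the old return-everything behaviour.
func filterHistory(all []PackageRecord, packageType *string, status *int) []PackageRecord {
	var out []PackageRecord
	for _, r := range all {
		if packageType != nil && r.PackageType != *packageType {
			continue
		}
		if status != nil && r.Status != *status {
			continue
		}
		out = append(out, r)
	}
	return out
}

func main() {
	all := []PackageRecord{{"formal", 1}, {"addon", 3}}
	fmt.Println(len(filterHistory(all, nil, nil))) // no params → all rows
}
```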
da66e673fe feat: wire up the SMS service; fix the SMS client API path
Some checks failed
Build and deploy to test environment (no SSH) / build-and-deploy (push) Has been cancelled
- cmd/api/main.go: add initSMS() to initialize the SMS client and inject it into verificationService
- pkg/sms/client.go: fix the API path missing its /sms prefix (/api/... → /sms/api/...)
- docker-compose.prod.yml: add the production SMS service environment variables
2026-03-21 10:51:43 +08:00
284f6c15c7 fix: personal-customer device-binding queries still used the deprecated device_no column
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 7m4s
The database column was renamed to virtual_no, but three raw SQL statements in the Store layer
still used the old device_no name, so querying customer asset bindings during mini-program login
failed with "column device_no does not exist".
2026-03-20 18:20:24 +08:00
55918a0b88 fix: public client routes were intercepted by the auth middleware
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 6m51s
Fiber's Group.Use() registers a global USE handler in the routing table without
distinguishing Group objects. The old code called authProtectedGroup.Use() before
registering the public routes, so the four unauthenticated endpoints verify-asset,
wechat-login, miniapp-login, and send-code were intercepted and returned 1004.

Fix: register the public routes directly on the router, before any Use() call,
relying on Fiber matching routes in registration order so the public routes match first.
2026-03-20 18:01:12 +08:00
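The ordering behaviour described above can be reproduced with a toy handler chain. This is a sketch of the general mechanism — match in registration order, with `Use` acting as a catch-all — not Fiber's actual API:

```go
package main

import "fmt"

// handler pairs a path with a function; an empty path means a global USE
// middleware, mirroring how a catch-all middleware shadows later routes.
type handler struct {
	path string
	fn   func() string
}

type router struct{ chain []handler }

func (r *router) Use(fn func() string)              { r.chain = append(r.chain, handler{"", fn}) }
func (r *router) Get(path string, fn func() string) { r.chain = append(r.chain, handler{path, fn}) }

// dispatch walks the chain in registration order and runs the first match.
func (r *router) dispatch(path string) string {
	for _, h := range r.chain {
		if h.path == "" || h.path == path {
			return h.fn()
		}
	}
	return "404"
}

func main() {
	r := &router{}
	// Fixed ordering: the public route is registered before the middleware.
	r.Get("/verify-asset", func() string { return "ok" })
	r.Use(func() string { return "1004" }) // auth middleware rejecting unauthenticated calls
	fmt.Println(r.dispatch("/verify-asset"))
}
```

Reversing the two registrations reproduces the bug: the catch-all middleware matches first and every request gets 1004.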
d2494798aa fix: correct the suspend/resume error codes; gateway failures no longer surface as a vague internal server error
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 7m13s
- single-card suspend/resume: gateway errors changed from CodeInternalError (2001) to CodeGatewayError (1110), so the frontend can see the actual failure reason
- single-card suspend/resume: bare GORM errors from DB updates are now wrapped as CodeDatabaseError (2002)
- device resume: when every card fails, the error code changes from CodeInternalError to CodeGatewayError
2026-03-19 18:37:03 +08:00
b9733c4913 fix: correct the retail-price architecture + remove old WeChat config + archive proposals + frontend API docs
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 7m12s
1. Fix the retail_price architecture:
   - remove the pricing_target field and retail_price branch from the batch-pricing endpoint
     (a parent can only change a subordinate's cost price, not its retail price)
   - add PATCH /api/admin/packages/:id/retail-price
     (an agent updates its own retail price; validates retail_price >= cost_price)

2. Remove the old WeChat YAML config (fully migrated to the tb_wechat_config table):
   - delete the wechat.official_account section from config.yaml
   - delete the old NewOfficialAccountApp() factory function
   - remove dead code in the personal_customer service (old WeChat login/binding methods)
   - clean up the old WeChat environment variables and certificate-mount comments in docker-compose.prod.yml

3. Archive four completed proposals to openspec/changes/archive/

4. Add a frontend API change log (docs/前端接口变更说明.md)

5. Correct the wrong pricing_target descriptions in the archived proposals and specs
2026-03-19 17:39:43 +08:00
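The retail-price rule from item 1 reduces to a single invariant: an agent's retail price must not undercut its cost price. A minimal sketch — the function and error text are assumptions, not the real validation code:

```go
package main

import (
	"errors"
	"fmt"
)

// validateRetailPrice enforces the rule from the retail-price endpoint:
// an agent may set its own retail price, but never below its cost price.
// Prices are in minor units (e.g. cents) to avoid float comparisons.
func validateRetailPrice(retail, cost int64) error {
	if retail < cost {
		return errors.New("retail_price must be >= cost_price")
	}
	return nil
}

func main() {
	fmt.Println(validateRetailPrice(1500, 1000)) // ok
	fmt.Println(validateRetailPrice(900, 1000))  // rejected
}
```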
9bd55a1695 feat: implement the core client business endpoints (client-core-business-api)
Add client-side Handlers and DTOs for assets, wallets, orders, real-name verification, and device management:
- client asset-info query, package list, package history, asset refresh
- client wallet detail, transactions, recharge validation, recharge orders, recharge records
- client order creation, list, detail
- client real-name verification link retrieval
- client device card list, reboot, factory reset, WiFi config, card switching
- client order service (WeChat/Alipay payment flows)
- async task for forced-recharge auto-purchase
- database migration 000084: add auto-purchase status fields to recharge records
2026-03-19 13:28:04 +08:00
e78f5794b9 feat: implement the client exchange system (client-exchange-system)
Full exchange lifecycle management: admin initiates → customer fills in shipping info → admin ships → confirm completion (with optional full migration) → old asset renewed for resale

Admin endpoints (7):
- POST /api/admin/exchanges (initiate exchange)
- GET /api/admin/exchanges (exchange list)
- GET /api/admin/exchanges/:id (exchange detail)
- POST /api/admin/exchanges/:id/ship (ship)
- POST /api/admin/exchanges/:id/complete (confirm completion + optional migration)
- POST /api/admin/exchanges/:id/cancel (cancel)
- POST /api/admin/exchanges/:id/renew (renew old asset)

Client endpoints (2):
- GET /api/c/v1/exchange/pending (check for exchange notifications)
- POST /api/c/v1/exchange/:id/shipping-info (submit shipping info)

Core capabilities:
- ExchangeOrder model and state machine (1 awaiting info → 2 awaiting shipment → 3 shipped → 4 completed; 1/2 can be cancelled → 5)
- full-migration transaction (11 tables: wallet, packages, tags, customer bindings, etc.)
- old-asset renewal (generation+1, status reset, new wallet, history isolation)
- old CardReplacementRecord table renamed to legacy; is_replaced filtering now queries the new table
- database migrations: 000085 creates tb_exchange_order, 000086 renames the old table
2026-03-19 13:26:54 +08:00
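The ExchangeOrder state machine above (1 → 2 → 3 → 4, with 1 and 2 cancellable to 5) is easiest to keep honest with a transition table. A minimal sketch — the constant names are illustrative, not the real model's:

```go
package main

import "fmt"

// States mirror the numbering in the commit message.
const (
	StateAwaitingInfo     = 1 // awaiting shipping info
	StateAwaitingShipment = 2
	StateShipped          = 3
	StateCompleted        = 4
	StateCancelled        = 5
)

// transitions lists the legal next states; completed and cancelled are terminal.
var transitions = map[int][]int{
	StateAwaitingInfo:     {StateAwaitingShipment, StateCancelled},
	StateAwaitingShipment: {StateShipped, StateCancelled},
	StateShipped:          {StateCompleted},
}

func canTransition(from, to int) bool {
	for _, next := range transitions[from] {
		if next == to {
			return true
		}
	}
	return false
}

func main() {
	// A shipped order can no longer be cancelled, per the 1/2-only rule.
	fmt.Println(canTransition(StateShipped, StateCancelled))
}
```

Centralizing the rules this way means the ship/complete/cancel handlers all consult one table instead of each re-implementing the status checks.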
df76e33105 feat: implement the full client-side auth system (client-auth-system)
Implement the 7 personal-customer auth endpoints (A1–A7), covering asset verification,
WeChat Official Account / mini-program login, phone binding/rebinding, and logout.

Main changes:
- add the PersonalCustomerOpenID model, supporting multiple AppIDs and OpenIDs
- implement stateful JWT (JWT + Redis double check), allowing server-side invalidation
- extend the WeChat SDK: mini-program Code2Session + 3 DB-driven dynamic factory functions
- add the A1 asset-verification IP rate limit (30/min) and the A4 three-layer verification-code rate limit
- add 7 error codes (1180–1186) and 6 Redis key functions
- register the 7 endpoints under /api/c/v1/auth/* and update the OpenAPI docs
- database migration 000083: create the tb_personal_customer_openid table
2026-03-19 11:33:41 +08:00
ec86dbf463 feat: data-model groundwork for the client API
- add constants for asset status, order source, operator type, and real-name link type
- add fields to 8 models (asset_status/generation/source/retail_price, etc.)
- database migration 000082: 15+ fields across 7 tables, including a retail_price backfill for existing rows
- BUG-1 fix: agent retail prices are channel-isolated; cost_price is locked at allocation
- BUG-2 fix: one-time commission is only triggered by client orders
- BUG-4 fix: recharge-callback Store operations now run inside a transaction
- add manual asset-deactivation endpoints (PATCH /iot-cards/:id/deactivate, /devices/:id/deactivate)
- Carrier management gains real-name link configuration
- admin orders snapshot generation at write time
- BatchUpdatePricing supports retail_price as a pricing target
- remove all legacy H5 endpoints and the old personal-customer login methods
2026-03-19 10:56:50 +08:00
817d0d6e04 update openspec
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 46s
2026-03-17 14:22:01 +08:00
b44363b335 fix: newly created shops had no agent wallets, breaking recharge orders
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 7m6s
shop.Service.Create() now automatically initializes the main (main) and commission (commission) wallets when a shop is created, fixing the "target shop main wallet does not exist" error on recharge-order creation.

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-17 14:08:26 +08:00
3e8f613475 fix: OpenAPI doc generator panicked at startup; routes were missing path-parameter definitions
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 7m2s
- add the UpdateWechatConfigParams/AgentOfflinePayParams aggregate structs, embedding IDReq to provide the path:id tag
- fix the Input references for the PUT /:id and POST /:id/offline-pay routes
- fix the Makefile build path from a single file to the package path, resolving the multi-file compilation issue
- mark migration task 1.2.4 in tasks.md as done
2026-03-17 09:45:51 +08:00
242e0b1f40 docs: update AGENTS.md and CLAUDE.md
Some checks failed
Build and deploy to test environment (no SSH) / build-and-deploy (push) Failing after 6m28s
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 23:31:07 +08:00
060d8fd65e docs: add summary docs for WeChat parameter-config management and agent pre-recharge
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 23:30:56 +08:00
f3297f0529 docs: archive the asset-wallet-interface OpenSpec proposal; update the card-wallet spec
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 23:30:48 +08:00
63ca12393b docs: add the OpenSpec proposal add-payment-config-management
Includes proposal.md, design.md, tasks.md, and per-module spec files (WeChat config management, Fuiou payment, agent recharge, order payment, asset-recharge adaptation, WeChat Pay stubs)

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 23:30:39 +08:00
429edf0d19 refactor: register the WeChat config and agent-recharge modules with Bootstrap and the OpenAPI doc generator
- bootstrap/types.go: add the WechatConfigStore/WechatConfigService/WechatConfigHandler/AgentRechargeService/AgentRechargeHandler fields
- bootstrap/stores.go: initialize WechatConfigStore
- bootstrap/services.go: initialize WechatConfigService (injecting AuditService) and AgentRechargeService
- bootstrap/handlers.go: initialize WechatConfigHandler and AgentRechargeHandler; PaymentHandler gains an agentRechargeService parameter
- bootstrap/worker_services.go: inject WechatConfigService
- routes/admin.go: register the WechatConfig and AgentRecharge route groups
- openapi/handlers.go: register WechatConfigHandler and AgentRechargeHandler with the doc generator

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 23:30:30 +08:00
7c64e433e8 feat: rework the payment-callback Handler to support Fuiou callbacks and dispatch multiple order types by prefix
- payment.go: WechatPayCallback now dispatches by order-number prefix (ORD → package order, CRCH → asset recharge, ARCH → agent recharge); add FuiouPayCallback (GBK → UTF-8 + XML parsing + signature verification + dispatch); fix the deprecated RechargeOrderPrefix reference
- order.go: register the POST /api/callback/fuiou-pay route (no auth required)

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 23:30:17 +08:00
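The prefix dispatch described above (ORD / CRCH / ARCH) can be sketched as a simple switch. `dispatchByPrefix` and the return labels are illustrative names, not the real handler:

```go
package main

import (
	"fmt"
	"strings"
)

// dispatchByPrefix routes a payment callback to the right order type by
// the order number's prefix, as the callback handler does. The longer
// recharge prefixes are checked first so none can shadow another.
func dispatchByPrefix(orderNo string) string {
	switch {
	case strings.HasPrefix(orderNo, "CRCH"):
		return "asset-recharge"
	case strings.HasPrefix(orderNo, "ARCH"):
		return "agent-recharge"
	case strings.HasPrefix(orderNo, "ORD"):
		return "package-order"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(dispatchByPrefix("CRCH20260316001")) // asset-recharge
}
```

Dispatching on the order number lets one callback URL serve every order type, which matters because the payment provider only supports a single notify endpoint per merchant config.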
269769bfe4 refactor: rework the order and asset-recharge Services to support dynamic payment config
- order/service.go: inject wechatConfigService; CreateH5Order/CreateAdminOrder look up the active config at order time and record payment_config_id; third-party payment is rejected when no config exists; WechatPayJSAPI/WechatPayH5/FuiouPayJSAPI/FuiouPayMiniApp get TODO stubs
- recharge/service.go: the Create method records payment_config_id; HandlePaymentCallback is stubbed

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 23:30:05 +08:00
1980c846f2 feat: add a PaymentConfigID field to the order / asset-recharge / agent-recharge models
- order.go: the Order model gains PaymentConfigID *uint (records the payment config used at order time)
- asset_wallet.go: AssetRechargeRecord gains PaymentConfigID *uint
- agent_wallet.go: AgentRechargeRecord gains PaymentConfigID *uint
When the active config is switched, old orders still load the config referenced by their payment_config_id for signature verification, avoiding the race condition

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 23:29:52 +08:00
89f9875a97 feat: add the agent pre-recharge module (DTO, Service, Handler, routes)
- agent_recharge_dto.go: create/list/detail request and response DTOs
- service.go: permission checks (agents can only recharge their own shop), amount-range validation, active-config lookup, order creation, offline-recharge confirmation (optimistic lock + audit log), idempotent callback handling
- agent_recharge.go Handler: Create/List/Get/OfflinePay, 4 methods
- agent_recharge.go routes: registered under /api/admin/agent-recharges/*; enterprise accounts are blocked at the route layer

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 23:29:42 +08:00
30c56e66dd feat: add the WeChat parameter-config Handler and routes (platform accounts only)
- wechat_config.go Handler: Create/List/Get/Update/Delete/Activate/Deactivate/GetActive, 8 methods
- wechat_config.go routes: registered under /api/admin/wechat-configs/*; platform-account permission enforced at the route layer

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 23:29:31 +08:00
c86afbfa8f feat: add the WeChat parameter-config module (Model, DTO, Store, Service)
- wechat_config.go: the WechatConfig GORM model, with ProviderTypeWechat/Fuiou constants
- wechat_config_dto.go: Create/Update/List request DTOs; response DTOs include credential masking
- wechat_config_store.go: CRUD, GetActive, ActivateInTx (exactly one active config, enforced in a transaction), soft-delete-aware queries
- service.go: business logic — per-provider required-field validation, Redis cache management (wechat:config:active), delete protection, audit logging

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 23:29:11 +08:00
aa41a5ed5e feat: add database migrations for payment-config management (000078–000081)
- 000078: create the tb_wechat_config table (supports both direct WeChat and Fuiou channels, with soft delete)
- 000079: add payment_config_id to tb_order (nullable; records the config used at order time)
- 000080: add payment_config_id to tb_asset_recharge_record
- 000081: add payment_config_id to tb_agent_recharge_record

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 23:28:57 +08:00
a308ee228b feat: add the Fuiou payment SDK (RSA signing, GBK encoding/decoding, XML protocol, callback verification)
- pkg/fuiou/types.go: WxPreCreateRequest/Response, NotifyRequest, and the other XML structs
- pkg/fuiou/client.go: the Client struct, NewClient, dictionary-order + GBK + MD5 + RSA signing/verification, HTTP requests
- pkg/fuiou/wxprecreate.go: the WxPreCreate method, supporting Official Account JSAPI (JSAPI) and mini-program (LETPAY)
- pkg/fuiou/notify.go: VerifyNotify (GBK → UTF-8 + XML parsing + RSA verification), BuildNotifyResponse

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 23:28:42 +08:00
b0da71bd25 refactor: remove leftover YAML payment-config code, rename Card* constants to Asset*, add payment-config error codes
- delete the PaymentConfig struct and the WechatConfig.Payment field (the YAML approach is deprecated)
- delete the wechat.payment config section and the NewPaymentApp() function
- delete all wechatCfg.Payment.* validation in validateWechatConfig
- pkg/constants/wallet.go: rename the Card* prefix to Asset*; old names kept as deprecated aliases
- pkg/constants/redis.go: add RedisWechatConfigActiveKey()
- pkg/errors/codes.go: add error codes 1170–1175
- go.mod: add the golang.org/x/text dependency (GBK encoding/decoding for Fuiou payments)

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 23:28:29 +08:00
7f18765911 fix: add the virtual_no field to IoT card list queries
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 6m58s
standaloneListColumns is a hand-written column list added as a performance optimization;
when virtual_no was introduced, only the model and DTO were updated and this list was missed,
so none of the four list query paths SELECTed virtual_no and the field was always empty.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-16 16:48:45 +08:00
876c92095c fix: platform accounts bypass the agent package-allocation check when creating wallet orders in the admin
When paying with a wallet from the admin, the old logic always ran the package-allocation
(shelf) check based on the agent shop owning the card/device, so platform accounts could not
buy packages for an agent-owned card unless the agent had been allocated that package
(e.g. free 0-yuan gift packages).

Fix: in the CreateAdminOrder wallet branch, branch on buyer type:
- agent accounts: keep the original check, ensuring the card's agent has been allocated the package
- platform/superadmin accounts: skip the agent allocation check and only validate the package's global status

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-16 15:51:01 +08:00
e45610661e docs: update the admin OpenAPI docs, adding the asset_wallet endpoint definitions
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 6m57s
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 15:44:02 +08:00
d85d7bffd6 refactor: update route and OpenAPI registration to wire in AssetWallet
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 15:43:55 +08:00
fe77d9ca72 refactor: register the AssetWallet components with Bootstrap
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 15:43:49 +08:00
9b83f92fb6 feat: add the AssetWallet Handler, implementing the asset-wallet API endpoints
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 15:43:42 +08:00
2248558bd3 refactor: adapt to the asset_wallet rename; update the order, recharge, and purchase-validation services
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 15:43:37 +08:00
2aae31ac5f feat: add the AssetWallet Service, implementing the asset-wallet business logic
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 15:43:29 +08:00
5031bf15b9 refactor: update the wallet constants and queue types for asset_wallet
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 15:43:22 +08:00
9c768e0719 refactor: rename the card_wallet store to asset_wallet; add a transaction store
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 15:43:17 +08:00
b6c379265d refactor: rename the CardWallet model to AssetWallet; add DTOs
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 15:43:11 +08:00
4156bfc9dd feat: add the asset_wallet and reference_no database migrations
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 15:42:52 +08:00
0ef136f008 fix: asset package-list time fields returned with an abnormal timezone offset
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 6m54s
For pending packages, activated_at/expires_at are stored in the DB as the zero value
(0001-01-01); when Go serializes them, the historical LMT of Asia/Shanghai (+08:05:36)
produces an abnormal timezone offset in the output.

- change AssetPackageResponse.ActivatedAt/ExpiresAt to *time.Time + omitempty
- add a nonZeroTimePtr helper that converts zero times to nil, avoiding the serialization issue
- apply the fix to both GetPackages and GetCurrentPackage

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-16 11:42:39 +08:00
b1d6355a7d fix: series_name from the resolve endpoint was always empty; inject the package-series store into the asset service
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 6m58s
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 10:59:29 +08:00
907e500ffb fix the list not returning the newly added fields correctly
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 6m58s
2026-03-16 10:51:15 +08:00
275debdd38 fix: add the virtual_no field and query filter to the IoT card list; correct the device/card import API doc descriptions
Some checks failed
Build and deploy to test environment (no SSH) / build-and-deploy (push) Has been cancelled
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-16 10:44:38 +08:00
b9c3875c08 feat: add database migrations renaming device_no to virtual_no and adding the iot_card.virtual_no and package.virtual_ratio fields
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 7m3s
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-14 18:27:28 +08:00
b5147d1acb partial device rework
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 7m34s
2026-03-10 10:34:08 +08:00
86f8d0b644 fix: adapt to the Gateway response-model changes; update the polling handler and mock service
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 7m25s
- polling_handler: Status → RealStatus, UsedFlow → Used; parseRealnameStatus now takes a bool
- mock_gateway: sync the endpoint paths and response structures with the upstream docs
2026-03-07 11:29:40 +08:00
a83dca2eb2 fix: Gateway SIM-card endpoint paths, response models, and timestamps did not match the upstream docs
- timestamps changed from UnixMilli (13 digits) to Unix (10-digit seconds)
- real-name status endpoint path: /realname-status → /realName
- real-name link endpoint path: /realname-link → /RealNameVerification
- RealnameStatusResp: status string → realStatus bool + iccid
- FlowUsageResp: usedFlow int64 → used float64 + iccid
- RealnameLinkResp: link → url
2026-03-07 11:29:34 +08:00
51ee38bc2e access the gateway with superadmin privileges
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 6m44s
2026-03-07 11:10:22 +08:00
9417179161 fix: device speed-limit and card-switch endpoints parsed request fields incorrectly
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 9m15s
The SetSpeedLimit and SwitchCard Handlers parsed the gateway structs directly (camelCase),
which did not match the OpenAPI docs (snake_case DTOs); when the frontend called the
endpoints as documented, the parameters were silently dropped.

The Handlers now parse the DTO first and map it onto the gateway struct manually,
so the docs match the actual behavior.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-06 18:16:10 +08:00
b52cb9a078 fix: missing tiered-commission fields; complete the grant endpoint response fields and the effective force-recharge state
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 7m27s
- OneTimeCommissionTierDTO: add the operator field mapping
- GrantCommissionTierItem: add the dimension/stat_scope fields (merged from the global config)
- series-grant list/detail: add effective-value computation for the force-recharge lock state and amount
- sync the OpenSpec main spec and archive the change docs
2026-03-05 11:23:28 +08:00
de9eacd273 chore: add the systematic-debugging skill; update the project development guidelines
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 7m27s
Add a systematic-debugging Skill (a four-phase root-cause-analysis process) and document its trigger conditions in AGENTS.md and CLAUDE.md. opencode.json updated accordingly.

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-04 11:38:01 +08:00
f40abaf93c docs: sync the OpenSpec main spec; add the series-grant capability and update the force-recharge precheck spec
Three capabilities synced:
- agent-series-grant (new): defines series-grant CRUD, covering the fixed/tiered commission modes and the force-recharge hierarchy scenarios
- force-recharge-check (updated): adds an "agent-level force-recharge hierarchy" Requirement and updates the wallet-recharge and package-purchase precheck scenarios to reflect the platform/agent hierarchy rules
- shop-series-allocation (updated): documents three removed endpoints in the REMOVED section (/shop-series-allocations, /shop-package-allocations, the enable_one_time_commission fields, etc.)

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-04 11:37:46 +08:00
e0cb4498e6 docs: archive the refactor-agent-series-grant change docs
Archive the completed change (proposal, design, tasks, delta specs) to openspec/changes/archive/2026-03-04-refactor-agent-series-grant/. The change merges series allocation and package allocation into series grants (Grant), adds a tiered commission mode, and adds agent-level force-recharge hierarchy rules. All 50/50 tasks completed.

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-04 11:37:33 +08:00
c7b8ecfebf refactor: commission calculation supports tiered Operator comparison; package service integrates agent force-recharge logic
commission_calculation: matchOneTimeCommissionTier() now takes an agentTiers parameter and compares using tier.Operator (>, >=, <, <=; default >=), supporting agent-specific tiered calculation. package/service: the package-purchase precheck calls the updated force-recharge hierarchy check.

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-04 11:37:02 +08:00
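The operator comparison described above (>, >=, <, <=, defaulting to >=) is a small switch worth pinning down, since a wrong default quietly changes which tier matches at the boundary. A minimal sketch; `tierMatches` is an illustrative name, not the real function:

```go
package main

import "fmt"

// tierMatches reports whether an accumulated amount satisfies a tier's
// operator against its threshold. An empty operator falls back to >=,
// matching the default described in the commit.
func tierMatches(amount, threshold int64, operator string) bool {
	switch operator {
	case ">":
		return amount > threshold
	case "<":
		return amount < threshold
	case "<=":
		return amount <= threshold
	case ">=", "":
		return amount >= threshold // default
	default:
		return false // unknown operator: never match rather than guess
	}
}

func main() {
	fmt.Println(tierMatches(100, 100, ""))  // default >= matches at the boundary
	fmt.Println(tierMatches(100, 100, ">")) // strict > does not
}
```

The boundary case (amount exactly equal to the threshold) is where the default matters: `>=` includes it, `>` excludes it.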
2ca33b7172 fix: force-recharge precheck follows the platform/agent hierarchy; agent-set force recharge applies when the platform sets none
checkForceRechargeRequirement() gains hierarchy logic: the platform's (PackageSeries) force-recharge config has top priority; when the platform sets none, the ShopSeriesAllocation config for order.SellerShopID is read; when neither is set, need_force_recharge=false is returned (graceful degradation). GetPurchaseCheck reuses the same function and needs no further changes.

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-04 11:36:49 +08:00
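The platform-first fallback above reduces to a two-step lookup. A minimal sketch, using `*int64` to model an unset config; the function name and signature are assumptions, not the real `checkForceRechargeRequirement`:

```go
package main

import "fmt"

// checkForceRecharge resolves the effective force-recharge requirement:
// the platform config wins; the agent's own config is only consulted when
// the platform sets none; with neither, the check degrades to "not required".
func checkForceRecharge(platformAmount, agentAmount *int64) (need bool, amount int64) {
	if platformAmount != nil {
		return true, *platformAmount
	}
	if agentAmount != nil {
		return true, *agentAmount
	}
	return false, 0 // graceful degradation
}

func main() {
	platform, agent := int64(5000), int64(3000)
	need, amt := checkForceRecharge(&platform, &agent)
	fmt.Println(need, amt) // the platform amount wins even when the agent set one
}
```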
769f6b8709 refactor: update the route bus and OpenAPI doc registration
admin.go: remove the registerShopSeriesAllocationRoutes and registerShopPackageAllocationRoutes calls and register registerShopSeriesGrantRoutes. OpenAPI handlers.go: drop the old Handler references and register the ShopSeriesGrant Handler with the doc generator.

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-04 11:36:39 +08:00
dd68d0a62b refactor: update the Bootstrap registration — remove the old allocation services, wire in series grants
Types, Services, and Handlers updated together: the ShopSeriesAllocation and ShopPackageAllocation Handler/Service fields and their initialization are removed; the new ShopSeriesGrant Handler and Service are registered.

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-04 11:36:30 +08:00
c5018f110f feat: add the series-grant Handler and routes (/shop-series-grants)
The Handler implements six endpoints: POST /shop-series-grants (create), GET /shop-series-grants (list), GET /shop-series-grants/:id (detail), PUT /shop-series-grants/:id (update commission and force-recharge config), PUT /shop-series-grants/:id/packages (manage the packages within a grant), DELETE /shop-series-grants/:id (delete).

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-04 11:36:20 +08:00
ad3a7a770a feat: add the series-grant Service, supporting fixed/tiered commission modes and agent-set force recharge
Implements the full /shop-series-grants business logic:
- create (fixed/tiered modes): atomically creates ShopSeriesAllocation + ShopPackageAllocation; validates the allocator's ceiling and tier-threshold matching; platform creation has no ceiling
- force-recharge hierarchy: the first-recharge type is locked by the platform; for the cumulative type, an agent's config is ignored when the platform has one and takes effect when it does not
- query (list/detail): aggregates the package list; in tiered mode, merges operator from PackageSeries into the response
- update commission and force-recharge config; add/remove/change packages (transactional)
- delete: blocked when downstream dependents exist

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-04 11:36:09 +08:00
beed9d25e0 refactor: delete the old package-series-allocation and package-allocation Services
All business logic has moved to shop_series_grant/service.go, so the old Service layer is deleted in full. The underlying Stores (shop_series_allocation_store, shop_package_allocation_store) are kept; they are still used by commission calculation, the order service, and the Grant Service.

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-04 11:35:56 +08:00
163d01dae5 refactor: delete the old series/package allocation Handlers and routes
The /shop-series-allocations and /shop-package-allocations endpoints are fully replaced by /shop-series-grants; since we are still in development they are removed cleanly, with no compatibility shims.

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-04 11:35:46 +08:00
e7d52db270 refactor: add the series-grant DTOs; delete the old series/package allocation DTOs
Add ShopSeriesGrantDTO (with an aggregated packages list view), CreateShopSeriesGrantRequest (supporting fixed/tiered modes and force-recharge config), UpdateShopSeriesGrantRequest, ManageGrantPackagesRequest, and related request/response structs. Delete the ShopSeriesAllocationDTO and ShopPackageAllocationDTO superseded by the Grant endpoints.

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-04 11:35:38 +08:00
672274f9fd refactor: update the series-allocation and package models for tiered commission and agent force recharge
ShopSeriesAllocation gains commission_tiers_json (tier JSON specific to tiered mode), enable_force_recharge (the agent-set force-recharge switch), and force_recharge_amount (the force-recharge amount; 0 means use the threshold); three fields duplicated from PackageSeries are removed. The Package model gains a PackageSeriesID field, used to validate package ownership within a series grant.

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-04 11:35:27 +08:00
b52744b149 feat: add a database migration restructuring the series-allocation commission and force-recharge fields
Migration 000071 adds the tiered-commission field (commission_tiers_json) and the agent force-recharge fields (enable_force_recharge, force_recharge_amount) to tb_shop_series_allocation, and drops three fields semantically duplicated from PackageSeries (enable_one_time_commission, one_time_commission_trigger, one_time_commission_threshold).

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-04 11:34:55 +08:00
61155952a7 feat: add a shelf status (shelf_status) for agent-allocated packages
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 6m56s
- new database migration: add a shelf_status field to the shop_package_allocation table
- model/DTO update: ShopPackageAllocation gains a ShelfStatus field and related enums
- package-allocation Service: on-shelf/off-shelf state management
- package Store/Service: filter sellable packages by shelf_status
- purchase-validation Service: add shelf-status checks
- archive the OpenSpec change: 2026-03-02-agent-allocation-shelf-status
- sync the main spec docs: allocation-shelf-status, package-management, purchase-validation

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-02 15:38:54 +08:00
8efe79526a fix: platform-owned resources (not allocated to an agent) could not be ordered offline
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 6m56s
The offline payment branch gains a platform-self-operated sub-case:
- when the resource's shopID is empty (not allocated to any agent), create the order directly at the retail price
- when the resource's shopID is set (owned by an agent), keep the original platform proxy-purchase logic

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-02 11:44:18 +08:00
a625462205 update opencode
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 46s
2026-03-02 11:08:58 +08:00
c5429e7287 fix: order-list queries returned empty for platform/superadmin users
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 7m5s
The Service layer unconditionally wrote the empty buyer_type and zero buyer_id into
the query filter, so platform/superadmin queries carried
WHERE buyer_type = '' AND buyer_id = 0, matched no orders, and returned an empty list.

Fix: only add the filter when buyerType is non-empty and buyerID is non-zero;
platform/superadmin users are not restricted to a buyer and can see all orders.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-02 10:48:11 +08:00
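The fix is the classic conditional-filter pattern: only append the buyer clauses when the caller actually has a buyer identity. A minimal sketch with an illustrative WHERE-builder, not the real Service code:

```go
package main

import "fmt"

// buildOrderWhere appends the buyer clauses only when both buyer fields
// are set, so platform/superadmin queries (empty buyerType, zero buyerID)
// carry no buyer restriction at all.
func buildOrderWhere(buyerType string, buyerID uint) []string {
	var clauses []string
	if buyerType != "" && buyerID != 0 {
		clauses = append(clauses, "buyer_type = ?", "buyer_id = ?")
	}
	return clauses
}

func main() {
	fmt.Println(len(buildOrderWhere("", 0)))       // platform: unrestricted
	fmt.Println(len(buildOrderWhere("agent", 42))) // agent: scoped to its own orders
}
```

The original bug is visible by deleting the `if`: the empty-string and zero-value clauses would still be appended and match nothing.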
e661b59bb9 feat: auto-cancel timed-out orders, with wallet-balance unfreezing and unified Asynq Scheduler scheduling
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 6m58s
- add an expires_at field and composite index; pending orders are auto-cancelled after 30 minutes
- implement the cancelOrder/unfreezeWalletForCancel wallet-unfreezing logic
- create the Asynq scheduled tasks (order_expire/alert_check/data_cleanup)
- migrate the old time.Ticker polling to the unified Asynq Scheduler
- sync the delta specs into the main specs and archive the change
2026-02-28 17:16:15 +08:00
5bb0ff0ddf fix: agent-wallet order creation; split the admin/H5 order methods and archive the change
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 6m54s
- split order creation into CreateAdminOrder (one-step admin payment) and CreateH5Order (two-step H5 payment)
- add the CreateAdminOrderRequest DTO; the admin only allows the wallet/offline payment methods
- sync the delta specs into the main specs (order-payment updated + admin-order-creation added)
- archive the fix-agent-wallet-order-creation change
- add the implement-order-expiration change proposal
2026-02-28 16:31:31 +08:00
8ed3d9da93 feat: agent-wallet order creation and order role tracking
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 7m0s
New features:
- when an agent pays with a wallet in the admin, the order completes immediately (debit + package activation)
- supports both agent self-purchase and agent proxy-purchase scenarios
- new order role-tracking fields (operator_id, operator_type, actual_paid_amount, purchase_role)
- order queries support OR logic (buyer_id or operator_id)
- wallet transactions record a transaction subtype and the related shop
- commission adjustment: agent proxy purchases generate no commission

Database changes:
- 4 new order-table fields and 2 new indexes
- 2 new wallet-transaction fields
- includes migration and rollback scripts

Docs:
- feature summary
- deployment guide
- OpenAPI doc update
- specs synced (new agent-order-role-tracking capability)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-28 14:11:42 +08:00
c5bf85c8de refactor: remove the unused IoT card price fields
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 7m2s
- remove the cost_price and distribute_price fields from the IotCard model
- remove the corresponding fields from the StandaloneIotCardResponse DTO
- add the database migration 000066_remove_iot_card_price_fields
- update the opencode.json config

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-27 15:38:33 +08:00
f5000f2bfc fix superadmin being unable to recall assets
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 7m0s
2026-02-27 11:03:44 +08:00
4189dbe98f debug: add debug logging to the asset-recall shop query
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 7m3s
Add logging in RecallCards to diagnose why platform accounts fail to recall assets:
- log the operator's shop ID
- log the shop IDs requested by the query
- log the number and IDs of shops actually found
- log the set of direct subordinate shops

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-27 09:36:48 +08:00
bc60886aea fix: GetByIDs lacked data-permission filtering, so platform accounts could not recall assets
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 7m2s
Add ApplyShopIDFilter to ShopStore.GetByIDs, ensuring:
- platform users can query all shops (for asset recall)
- agent users can only query their own and subordinate shops (preserving permission isolation)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-26 18:07:45 +08:00
6ecc0b5adb fix: series/package allocation permission filtering
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 7m19s
Agent users should only see the records they allocated out, not the records allocated to them.

- add an ApplyAllocatorShopFilter filter function
- ShopSeriesAllocationStore: List and GetByID switch to ApplyAllocatorShopFilter
- ShopPackageAllocationStore: List and GetByID switch to ApplyAllocatorShopFilter
- platform users and superadmins are unrestricted
- agent users only see records where allocator_shop_id equals their own shop ID

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-26 17:10:20 +08:00
1d602ad1f9 fix: agent users could see all shops
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 7m3s
Apply data-permission filtering in ShopStore.List; add an ApplyShopIDFilter
function that filters on the Shop table's id column.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-26 16:55:47 +08:00
03a0960c4d refactor: move data-permission filtering from a GORM Callback to explicit Store-layer calls
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 7m2s
- remove the RegisterDataPermissionCallback and SkipDataPermission mechanism
- precompute SubordinateShopIDs in the Auth middleware and inject them into the Context
- add ApplyShopFilter/ApplyEnterpriseFilter/ApplyOwnerShopFilter and other helper functions
- all Store-layer query methods call the data-permission filters explicitly
- the CanManageShop/CanManageEnterprise permission checks now read their data from the Context

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-26 16:38:52 +08:00
4ba1f5b99d fix: add duplicate role-name checks
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 6m46s
- creating a role checks whether the role name already exists
- updating a role checks whether the name collides with another role

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-26 14:55:46 +08:00
1382cbbf47 fix: agent users could see unallocated package series
Some checks failed
Build and deploy to test environment (no SSH) / build-and-deploy (push) Has been cancelled
Problem: after login, agent users could see every package series, even those never allocated to their shop.

Cause: the PackageSeries model has no shop_id field, so the GORM Callback could not filter it automatically.

Fix:
- add permission filtering in the package_series Service's List method
- agent users only see series allocated to their shop via shop_series_allocation
- platform users and superadmins see all package series

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-26 14:54:52 +08:00
c1eec5d4f1 fix: assign a default role to the initial account when creating a shop
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 7m0s
Problem: creating a shop only created shop_roles records (the roles available to the shop)
but no account_roles record, leaving the initial account with no permissions at all.

Fix: immediately after creating the initial account, assign it the default role in account_roles.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-25 16:47:36 +08:00
efe8a362aa fix: platform accounts can recall cards and devices from all shops
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 7m4s
Previously platform users could only recall assets from first-level agents; now they can recall assets from any shop.

Changes:
- iot_card/service.go: isDirectSubordinate returns true for platform users
- device/service.go: RecallDevices skips the direct-subordinate check for platform users

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-25 16:37:23 +08:00
6dc6afece0 fix: names of soft-deleted shops were not displayed
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 7m6s
After a shop is soft-deleted, GORM by default excludes records whose deleted_at is set,
so the shop-name lookup found nothing and the shop_name field was dropped by omitempty.

Fix: add Unscoped() to the shop-name lookup so soft-deleted shops are included.

Affected endpoints:
- GET /api/admin/devices (device list)
- GET /api/admin/iot-cards/standalone (standalone card list)
- GET /api/admin/asset-allocation-records (allocation record list)
- GET /api/admin/enterprises (enterprise list)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-25 16:27:58 +08:00
037595c22e feat: single-card recall improvements & disabled-shop login blocking
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 7m0s
Single-card recall:
- drop the from_shop_id parameter; the system resolves the card's shop automatically
- keep the direct-subordinate restriction; mixed sources are handled per shop
- add the GetDistributedStandaloneByICCIDRange/GetDistributedStandaloneByFilters methods

Disabled-shop blocking:
- login checks the associated shop's status; accounts of disabled shops cannot log in
- add the CodeShopDisabled error code

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-25 15:54:53 +08:00
25e9749564 feat: automatically set the default role when creating a shop
All checks were successful
构建并部署到测试环境(无 SSH) / build-and-deploy (push) Successful in 7m1s
- CreateShopRequest adds a required default_role_id field
- Validate the default role when creating a shop (it must exist, be a customer role, and be enabled)
- Set the ShopRole automatically after creating the shop, so the initial account has permissions immediately

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-25 14:33:13 +08:00
18daeae65a feat: wallet system split - agent wallets and card wallets fully isolated
All checks were successful
构建并部署到测试环境(无 SSH) / build-and-deploy (push) Successful in 7m17s
## Overview
Split the unified wallet system into two independent systems, agent wallets and card wallets, fully isolated at both the table and code level.

## Database changes
- Add 6 tables: tb_agent_wallet, tb_agent_wallet_transaction, tb_agent_recharge_record, tb_card_wallet, tb_card_wallet_transaction, tb_card_recharge_record
- Drop 3 old tables: tb_wallet, tb_wallet_transaction, tb_recharge_record
- Agent wallets: uniquely keyed by (shop_id, wallet_type), supporting a main wallet and a commission wallet
- Card wallets: uniquely keyed by (resource_type, resource_id), supporting IoT cards and devices

## Code changes
- Model layer: add AgentWallet, AgentWalletTransaction, AgentRechargeRecord, CardWallet, CardWalletTransaction, CardRechargeRecord models
- Store layer: add 6 independent stores with transactions, optimistic locking, and Redis caching
- Service layer: refactor 8 services including commission_calculation, commission_withdrawal, order, and recharge
- Bootstrap layer: update Store and Service dependency injection
- Constants layer: reorganize constants and Redis key generators by wallet type

## Technical notes
- Optimistic locking: a version field prevents concurrent-update conflicts
- Multi-tenancy: supports shop_id_tag and enterprise_id_tag filtering
- Transactions: all balance changes run inside transactions to guarantee ACID
- Caching: cache-aside pattern; the cache entry is deleted after a balance change

## Business impact
- Agent wallet and card wallet business logic are fully isolated and cannot affect each other
- Lays the groundwork for independent monitoring, optimization, and scaling
- Improves the stability and independence of agent wallets

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-25 09:51:00 +08:00
f32d32cd36 perf: IoT card pagination over 30M rows optimized (P95 17.9s → <500ms)
All checks were successful
构建并部署到测试环境(无 SSH) / build-and-deploy (push) Successful in 7m6s
- Add an is_standalone materialized column maintained automatically by triggers (migration 056)
- Parallel query splitting: multi-shop IN queries split into per-shop goroutines running parallel index scans
- Two-phase deferred join: deep pagination (page ≥ 50) uses a covering-index Index Only Scan to fetch IDs, then joins back to the table
- COUNT caching: per-shop parallel COUNT with a 30-minute Redis TTL
- Index tuning: drop harmful global indexes, add partial composite indexes (migrations 057/058)
- Isolated ICCID fuzzy-search path: a trigram GIN index serves a dedicated query path
- Raise the slow-query threshold from 100ms to 500ms
- Add a 30M-row test data seed script and benchmark tooling
2026-02-24 16:23:02 +08:00
c665f32976 feat: package system upgrade - worker refactor, data-usage reset, docs and conventions updates
All checks were successful
构建并部署到测试环境(无 SSH) / build-and-deploy (push) Successful in 6m54s
- Refactor worker startup, introducing a bootstrap module for unified dependency injection
- Implement the package data-usage reset service (daily/monthly/yearly cycles)
- Add package activation queueing, add-on pack binding, and stockpiled-pending-real-name activation logic
- Add idempotent order creation (Redis business key + distributed lock)
- Update AGENTS.md/CLAUDE.md: add commenting and idempotency conventions, remove testing requirements
- Add complete package system upgrade docs (API docs, usage guide, feature summary, ops guide)
- Archive the OpenSpec package-system-upgrade change and sync specs to the main directory
- Add a queue types abstraction and Redis constant definitions
2026-02-12 14:24:15 +08:00
655c9ce7a6 1 2026-02-11 17:29:06 +08:00
353621d923 Remove all test code and testing requirements
All checks were successful
构建并部署到测试环境(无 SSH) / build-and-deploy (push) Successful in 6m33s
**Changes**:
- Delete all *_test.go files (unit, integration, acceptance, and flow tests)
- Delete the entire tests/ directory
- Update CLAUDE.md: replace all testing requirements with a "testing ban" section
- Delete the test-generation skill (openspec-generate-acceptance-tests)
- Delete the test-generation command (opsx:gen-tests)
- Update tasks.md: remove all test-related tasks

**New rules**:
- Writing any form of automated test is forbidden
- Creating *_test.go files is forbidden
- Including test-related work in tasks is forbidden
- Write tests only when the user explicitly asks for them

**Rationale**:
Correctness of the business system is ensured by manual verification and production monitoring; the maintenance cost of test code exceeds its value.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-11 17:13:42 +08:00
804145332b chore: archive the polling system implementation change (polling-system-implementation)
All checks were successful
构建并部署到测试环境(无 SSH) / build-and-deploy (push) Successful in 44s
The polling system for tens of millions of cards has been fully implemented and verified by integration tests; the change is archived to openspec/changes/archive/2026-02-10-polling-system-implementation/

Highlights:
- Three polling tasks: real-name check, card data-usage check, package data-usage check
- Fast startup (<10 seconds) with progressive initialization
- Complete operations tooling: config management, concurrency control, monitoring dashboard, alerting, data cleanup, manual triggering
- Task completion: 215/216 (99.5%)
- OpenAPI docs generated for all 24 new endpoints

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-10 10:28:47 +08:00
931e140e8e feat: implement the IoT card polling system (supports tens of millions of cards)
All checks were successful
构建并部署到测试环境(无 SSH) / build-and-deploy (push) Successful in 6m35s
Features:
- Real-name status polling (configurable interval)
- Card data-usage polling (with cross-month usage tracking)
- Package checks with automatic suspension on overage
- Distributed concurrency control (Redis semaphore)
- Manual polling triggers (single card / batch / filtered)
- Data cleanup configuration and execution
- Alert rules and history
- Real-time monitoring stats (queue / performance / concurrency)

Performance optimizations:
- Cache card info in Redis to reduce DB queries
- Batch writes to Redis via pipeline
- Asynchronous data-usage record writes
- Progressive initialization (100k cards per batch)

Benchmark tooling (scripts/benchmark/):
- Mock Gateway simulating the upstream service
- Test card generator
- Config initialization script
- Real-time monitoring script

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-05 17:32:44 +08:00
b11edde720 fix: register the commission calculation task handler with the queue processor
All checks were successful
构建并部署到测试环境(无 SSH) / build-and-deploy (push) Successful in 6m19s
The handler for the commission calculation task (commission:calculate) was implemented but never
registered with the queue processor, so commission calculation tasks enqueued after a successful payment were never consumed.

Changes:
- Add a registerCommissionCalculationHandler() method in pkg/queue/handler.go
- Create all required Store and Service dependencies
- Call the registration method from RegisterHandlers()

With this fix, a successful order payment correctly triggers commission calculation and payout.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 16:08:03 +08:00
8ab5ebc3af feat: add the package series name field to IoT card and device list responses
All checks were successful
构建并部署到测试环境(无 SSH) / build-and-deploy (push) Successful in 6m2s
Main changes:
- Add a series_name field to StandaloneIotCardResponse and DeviceResponse
- Add a loadSeriesNames method to the iot_card and device services to batch-load series names
- Update related methods to populate series_name

Other changes:
- Add OpenSpec test-generation and consensus-locking skills
- Add an MCP config file
- Update the CLAUDE.md project conventions

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-04 15:28:41 +08:00
dc84cef2ce fix(package-series): promote the enable_one_time_commission field to the top level of create/update requests
All checks were successful
构建并部署到测试环境(无 SSH) / build-and-deploy (push) Successful in 6m5s
- DTO: add an EnableOneTimeCommission field to CreatePackageSeriesRequest and UpdatePackageSeriesRequest
- Service: Create/Update handle the top-level field and sync it to the Enable field in the JSON config
- Keep the top-level field consistent with enable inside the JSON config to avoid incorrect business-logic decisions
2026-02-04 14:38:10 +08:00
b18ecfeb55 refactor: move one-time commission config from the package level to the series level
All checks were successful
构建并部署到测试环境(无 SSH) / build-and-deploy (push) Successful in 6m29s
Main changes:
- Add the tb_shop_series_allocation table to store series-level one-time commission config
- Remove the one_time_commission_amount field from ShopPackageAllocation
- Add an enable_one_time_commission field to PackageSeries to toggle one-time commissions
- Add /api/admin/shop-series-allocations CRUD endpoints
- Commission calculation now reads the one-time commission amount from ShopSeriesAllocation
- Delete the deprecated ShopSeriesOneTimeCommissionTier model
- Merge the OpenAPI tags 'series allocation' and 'single-package allocation' into 'package allocation'

Migrations:
- 000042: refactor the commission package model
- 000043: simplify commission allocation
- 000044: refactor one-time commission allocation
- 000045: add enable_one_time_commission to PackageSeries

Tests:
- Add acceptance tests (shop_series_allocation, commission_calculation)
- Add a flow test (one_time_commission_chain)
- Delete outdated unit tests (now covered by the acceptance tests)
2026-02-04 14:28:44 +08:00
fba8e9e76b refactor(account): remove the card type field, optimize account list queries and permission checks
All checks were successful
构建并部署到测试环境(无 SSH) / build-and-deploy (push) Successful in 6m18s
- Remove the card_type field from IoT cards and number cards (database migration)
- Optimize the account list query with filtering by shop and enterprise
- Add shop name and enterprise name fields to the account response
- Batch-load shop and enterprise names to avoid N+1 queries
- Update the permission check middleware and tighten permission validation
- Update related test cases to verify correctness
2026-02-03 10:59:44 +08:00
ad6d43e0cd Remove 2026-02-03 10:19:39 +08:00
5a90caa619 feat(shop-role): implement shop role inheritance and optimize permission checks
All checks were successful
构建并部署到测试环境(无 SSH) / build-and-deploy (push) Successful in 6m39s
- Add shop role management APIs and data models
- Implement role inheritance and permission check logic
- Add a flow-test framework and integration tests
- Update the permission service and account management logic
- Add database migration scripts
- Archive the OpenSpec change docs

Ultraworked with Sisyphus
2026-02-03 10:06:13 +08:00
bc7e5d6f6d Fix the Go validation library treating an int value of 0 as empty 2026-02-03 09:57:53 +08:00
885 changed files with 100309 additions and 57724 deletions


@@ -111,7 +111,7 @@ Working on task 4/7: <task description>
 - [x] Task 2
 ...
-All tasks complete! Ready to archive this change.
+All tasks complete! You can archive this change with `/opsx:archive`.
 ```
 **Output On Pause (Issue Encountered)**


@@ -59,7 +59,7 @@ Archive a completed change in the experimental workflow.
 - If changes needed: "Sync now (recommended)", "Archive without syncing"
 - If already synced: "Archive now", "Sync anyway", "Cancel"
-If user chooses sync, execute `/opsx:sync` logic. Proceed to archive regardless of choice.
+If user chooses sync, use Task tool (subagent_type: "general-purpose", prompt: "Use Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Proceed to archive regardless of choice.
 5. **Perform the archive**
@@ -153,5 +153,5 @@ Target archive directory already exists.
 - Don't block archive on warnings - just inform and confirm
 - Preserve .openspec.yaml when moving to archive (it moves with the directory)
 - Show clear summary of what happened
-- If sync is requested, use /opsx:sync approach (agent-driven)
+- If sync is requested, use the Skill tool to invoke `openspec-sync-specs` (agent-driven)
 - If delta specs exist, always run the sync assessment and show the combined summary before prompting


@@ -1,242 +0,0 @@
---
name: "OPSX: Bulk Archive"
description: Archive multiple completed changes at once
category: Workflow
tags: [workflow, archive, experimental, bulk]
---
Archive multiple completed changes in a single operation.
This skill allows you to batch-archive changes, handling spec conflicts intelligently by checking the codebase to determine what's actually implemented.
**Input**: None required (prompts for selection)
**Steps**
1. **Get active changes**
Run `openspec list --json` to get all active changes.
If no active changes exist, inform user and stop.
2. **Prompt for change selection**
Use **AskUserQuestion tool** with multi-select to let user choose changes:
- Show each change with its schema
- Include an option for "All changes"
- Allow any number of selections (1+ works, 2+ is the typical use case)
**IMPORTANT**: Do NOT auto-select. Always let the user choose.
3. **Batch validation - gather status for all selected changes**
For each selected change, collect:
a. **Artifact status** - Run `openspec status --change "<name>" --json`
- Parse `schemaName` and `artifacts` list
- Note which artifacts are `done` vs other states
b. **Task completion** - Read `openspec/changes/<name>/tasks.md`
- Count `- [ ]` (incomplete) vs `- [x]` (complete)
- If no tasks file exists, note as "No tasks"
c. **Delta specs** - Check `openspec/changes/<name>/specs/` directory
- List which capability specs exist
- For each, extract requirement names (lines matching `### Requirement: <name>`)
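The per-change status gathering in steps 3b and 3c can be sketched as a small helper; the regexes are assumptions about the exact markdown shapes (`- [ ]` task boxes and `### Requirement:` headings) described above:

```python
import re

def change_status(tasks_md: str, delta_spec_md: str) -> dict:
    """Summarize task completion and delta-spec requirements for one change."""
    # Count unchecked vs checked task boxes in tasks.md
    incomplete = len(re.findall(r"^- \[ \]", tasks_md, flags=re.MULTILINE))
    complete = len(re.findall(r"^- \[x\]", tasks_md, flags=re.MULTILINE))
    # Extract requirement names from a delta spec
    requirements = re.findall(r"^### Requirement: (.+)$", delta_spec_md, flags=re.MULTILINE)
    return {"tasks": f"{complete}/{complete + incomplete}", "requirements": requirements}

tasks = "- [x] 1.1 Add model\n- [x] 1.2 Add store\n- [ ] 1.3 Wire handler\n"
spec = "## ADDED Requirements\n### Requirement: OAuth Provider Integration\n"
print(change_status(tasks, spec))  # {'tasks': '2/3', 'requirements': ['OAuth Provider Integration']}
```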
4. **Detect spec conflicts**
Build a map of `capability -> [changes that touch it]`:
```
auth -> [change-a, change-b] <- CONFLICT (2+ changes)
api -> [change-c] <- OK (only 1 change)
```
A conflict exists when 2+ selected changes have delta specs for the same capability.
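As a rough sketch, the conflict map amounts to inverting a change-to-capabilities mapping and keeping entries with two or more changes (the sample change names are hypothetical):

```python
from collections import defaultdict

def find_conflicts(change_specs: dict) -> dict:
    """Map capability -> changes touching it; keep only capabilities with 2+ changes."""
    by_capability = defaultdict(list)
    for change, capabilities in change_specs.items():
        for cap in capabilities:
            by_capability[cap].append(change)
    return {cap: changes for cap, changes in by_capability.items() if len(changes) >= 2}

selected = {"add-oauth": ["auth"], "add-jwt": ["auth"], "add-rest-api": ["api"]}
print(find_conflicts(selected))  # {'auth': ['add-oauth', 'add-jwt']}
```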
5. **Resolve conflicts agentically**
**For each conflict**, investigate the codebase:
a. **Read the delta specs** from each conflicting change to understand what each claims to add/modify
b. **Search the codebase** for implementation evidence:
- Look for code implementing requirements from each delta spec
- Check for related files, functions, or tests
c. **Determine resolution**:
- If only one change is actually implemented -> sync that one's specs
- If both implemented -> apply in chronological order (older first, newer overwrites)
- If neither implemented -> skip spec sync, warn user
d. **Record resolution** for each conflict:
- Which change's specs to apply
- In what order (if both)
- Rationale (what was found in codebase)
6. **Show consolidated status table**
Display a table summarizing all changes:
```
| Change | Artifacts | Tasks | Specs | Conflicts | Status |
|---------------------|-----------|-------|---------|-----------|--------|
| schema-management | Done | 5/5 | 2 delta | None | Ready |
| project-config | Done | 3/3 | 1 delta | None | Ready |
| add-oauth | Done | 4/4 | 1 delta | auth (!) | Ready* |
| add-verify-skill | 1 left | 2/5 | None | None | Warn |
```
For conflicts, show the resolution:
```
* Conflict resolution:
- auth spec: Will apply add-oauth then add-jwt (both implemented, chronological order)
```
For incomplete changes, show warnings:
```
Warnings:
- add-verify-skill: 1 incomplete artifact, 3 incomplete tasks
```
7. **Confirm batch operation**
Use **AskUserQuestion tool** with a single confirmation:
- "Archive N changes?" with options based on status
- Options might include:
- "Archive all N changes"
- "Archive only N ready changes (skip incomplete)"
- "Cancel"
If there are incomplete changes, make clear they'll be archived with warnings.
8. **Execute archive for each confirmed change**
Process changes in the determined order (respecting conflict resolution):
a. **Sync specs** if delta specs exist:
- Use the openspec-sync-specs approach (agent-driven intelligent merge)
- For conflicts, apply in resolved order
- Track if sync was done
b. **Perform the archive**:
```bash
mkdir -p openspec/changes/archive
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
c. **Track outcome** for each change:
- Success: archived successfully
- Failed: error during archive (record error)
- Skipped: user chose not to archive (if applicable)
9. **Display summary**
Show final results:
```
## Bulk Archive Complete
Archived 3 changes:
- schema-management-cli -> archive/2026-01-19-schema-management-cli/
- project-config -> archive/2026-01-19-project-config/
- add-oauth -> archive/2026-01-19-add-oauth/
Skipped 1 change:
- add-verify-skill (user chose not to archive incomplete)
Spec sync summary:
- 4 delta specs synced to main specs
- 1 conflict resolved (auth: applied both in chronological order)
```
If any failures:
```
Failed 1 change:
- some-change: Archive directory already exists
```
**Conflict Resolution Examples**
Example 1: Only one implemented
```
Conflict: specs/auth/spec.md touched by [add-oauth, add-jwt]
Checking add-oauth:
- Delta adds "OAuth Provider Integration" requirement
- Searching codebase... found src/auth/oauth.ts implementing OAuth flow
Checking add-jwt:
- Delta adds "JWT Token Handling" requirement
- Searching codebase... no JWT implementation found
Resolution: Only add-oauth is implemented. Will sync add-oauth specs only.
```
Example 2: Both implemented
```
Conflict: specs/api/spec.md touched by [add-rest-api, add-graphql]
Checking add-rest-api (created 2026-01-10):
- Delta adds "REST Endpoints" requirement
- Searching codebase... found src/api/rest.ts
Checking add-graphql (created 2026-01-15):
- Delta adds "GraphQL Schema" requirement
- Searching codebase... found src/api/graphql.ts
Resolution: Both implemented. Will apply add-rest-api specs first,
then add-graphql specs (chronological order, newer takes precedence).
```
**Output On Success**
```
## Bulk Archive Complete
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
- <change-2> -> archive/YYYY-MM-DD-<change-2>/
Spec sync summary:
- N delta specs synced to main specs
- No conflicts (or: M conflicts resolved)
```
**Output On Partial Success**
```
## Bulk Archive Complete (partial)
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
Skipped M changes:
- <change-2> (user chose not to archive incomplete)
Failed K changes:
- <change-3>: Archive directory already exists
```
**Output When No Changes**
```
## No Changes to Archive
No active changes found. Use `/opsx:new` to create a new change.
```
**Guardrails**
- Allow any number of changes (1+ is fine, 2+ is the typical use case)
- Always prompt for selection, never auto-select
- Detect spec conflicts early and resolve by checking codebase
- When both changes are implemented, apply specs in chronological order
- Skip spec sync only when implementation is missing (warn user)
- Show clear per-change status before confirming
- Use single confirmation for entire batch
- Track and report all outcomes (success/skip/fail)
- Preserve .openspec.yaml when moving to archive
- Archive directory target uses current date: YYYY-MM-DD-<name>
- If archive target exists, fail that change but continue with others
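The date-stamped move with the continue-on-failure guardrail might look like the following sketch (the directory layout matches the steps above; the helper names are illustrative):

```python
import datetime
import pathlib
import shutil

def archive_change(name: str, root: pathlib.Path) -> str:
    """Move a change into archive/YYYY-MM-DD-<name>; raise if the target exists."""
    stamp = datetime.date.today().isoformat()
    target = root / "archive" / f"{stamp}-{name}"
    if target.exists():
        raise FileExistsError(f"{name}: archive directory already exists")
    target.parent.mkdir(parents=True, exist_ok=True)
    # .openspec.yaml moves with the directory, preserving change metadata
    shutil.move(str(root / name), str(target))
    return str(target)

def archive_all(names: list, root=pathlib.Path("openspec/changes")) -> dict:
    """Archive each change independently; one failure does not stop the batch."""
    outcomes = {}
    for name in names:
        try:
            outcomes[name] = archive_change(name, root)
        except OSError as err:
            outcomes[name] = f"failed: {err}"
    return outcomes
```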


@@ -1,114 +0,0 @@
---
name: "OPSX: Continue"
description: Continue working on a change - create the next artifact (Experimental)
category: Workflow
tags: [workflow, artifacts, experimental]
---
Continue working on a change by creating the next artifact.
**Input**: Optionally specify a change name after `/opsx:continue` (e.g., `/opsx:continue add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes sorted by most recently modified. Then use the **AskUserQuestion tool** to let the user select which change to work on.
Present the top 3-4 most recently modified changes as options, showing:
- Change name
- Schema (from `schema` field if present, otherwise "spec-driven")
- Status (e.g., "0/5 tasks", "complete", "no tasks")
- How recently it was modified (from `lastModified` field)
Mark the most recently modified change as "(Recommended)" since it's likely what the user wants to continue.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check current status**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand current state. The response includes:
- `schemaName`: The workflow schema being used (e.g., "spec-driven")
- `artifacts`: Array of artifacts with their status ("done", "ready", "blocked")
- `isComplete`: Boolean indicating if all artifacts are complete
3. **Act based on status**:
---
**If all artifacts are complete (`isComplete: true`)**:
- Congratulate the user
- Show final status including the schema used
- Suggest: "All artifacts created! You can now implement this change or archive it."
- STOP
---
**If artifacts are ready to create** (status shows artifacts with `status: "ready"`):
- Pick the FIRST artifact with `status: "ready"` from the status output
- Get its instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- Parse the JSON. The key fields are:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- **Create the artifact file**:
- Read any completed dependency files for context
- Use `template` as the structure - fill in its sections
- Apply `context` and `rules` as constraints when writing - but do NOT copy them into the file
- Write to the output path specified in instructions
- Show what was created and what's now unlocked
- STOP after creating ONE artifact
---
**If no artifacts are ready (all blocked)**:
- This shouldn't happen with a valid schema
- Show status and suggest checking for issues
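Picking the next artifact from the status JSON can be sketched as below; the JSON shape follows the fields listed in step 2, and the sample data is hypothetical:

```python
import json

def next_artifact(status_json: str):
    """Return the id of the first artifact with status 'ready', or None."""
    status = json.loads(status_json)
    if status.get("isComplete"):
        return None  # all artifacts done - congratulate and stop
    for artifact in status["artifacts"]:
        if artifact["status"] == "ready":
            return artifact["id"]
    return None  # everything blocked - shouldn't happen with a valid schema

sample = json.dumps({
    "schemaName": "spec-driven",
    "isComplete": False,
    "artifacts": [
        {"id": "proposal", "status": "done"},
        {"id": "specs", "status": "ready"},
        {"id": "design", "status": "blocked"},
    ],
})
print(next_artifact(sample))  # specs
```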
4. **After creating an artifact, show progress**
```bash
openspec status --change "<name>"
```
**Output**
After each invocation, show:
- Which artifact was created
- Schema workflow being used
- Current progress (N/M complete)
- What artifacts are now unlocked
- Prompt: "Run `/opsx:continue` to create the next artifact"
**Artifact Creation Guidelines**
The artifact types and their purpose depend on the schema. Use the `instruction` field from the instructions output to understand what to create.
Common artifact patterns:
**spec-driven schema** (proposal → specs → design → tasks):
- **proposal.md**: Ask user about the change if not clear. Fill in Why, What Changes, Capabilities, Impact.
- The Capabilities section is critical - each capability listed will need a spec file.
- **specs/<capability>/spec.md**: Create one spec per capability listed in the proposal's Capabilities section (use the capability name, not the change name).
- **design.md**: Document technical decisions, architecture, and implementation approach.
- **tasks.md**: Break down implementation into checkboxed tasks.
For other schemas, follow the `instruction` field from the CLI output.
**Guardrails**
- Create ONE artifact per invocation
- Always read dependency artifacts before creating a new one
- Never skip artifacts or create out of order
- If context is unclear, ask the user before creating
- Verify the artifact file exists after writing before marking progress
- Use the schema's artifact sequence, don't assume specific artifact names
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output


@@ -7,7 +7,7 @@ tags: [workflow, explore, experimental, thinking]
 Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
-**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first (e.g., start a change with `/opsx:new` or `/opsx:ff`). You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
+**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
 **This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
@@ -100,8 +100,7 @@ If the user mentioned a specific change name, read its artifacts for context.
 Think freely. When insights crystallize, you might offer:
-- "This feels solid enough to start a change. Want me to create one?"
-  → Can transition to `/opsx:new` or `/opsx:ff`
+- "This feels solid enough to start a change. Want me to create a proposal?"
 - Or keep exploring - no pressure to formalize
 ### When a change exists
@@ -153,7 +152,7 @@ If the user mentions a change or you detect one is relevant:
 There's no required ending. Discovery might:
-- **Flow into action**: "Ready to start? `/opsx:new` or `/opsx:ff`"
+- **Flow into a proposal**: "Ready to start? I can create a change proposal."
 - **Result in artifact updates**: "Updated design.md with these decisions"
 - **Just provide clarity**: User has what they need, moves on
 - **Continue later**: "We can pick this up anytime"


@@ -1,69 +0,0 @@
---
name: "OPSX: New"
description: Start a new change using the experimental artifact workflow (OPSX)
category: Workflow
tags: [workflow, artifacts, experimental]
---
Start a new change using the experimental artifact-driven approach.
**Input**: The argument after `/opsx:new` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
1. **If no input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Determine the workflow schema**
Use the default schema (omit `--schema`) unless the user explicitly requests a different workflow.
**Use a different schema only if the user mentions:**
- A specific schema name → use `--schema <name>`
- "show workflows" or "what workflows" → run `openspec schemas --json` and let them choose
**Otherwise**: Omit `--schema` to use the default.
3. **Create the change directory**
```bash
openspec new change "<name>"
```
Add `--schema <name>` only if the user requested a specific workflow.
This creates a scaffolded change at `openspec/changes/<name>/` with the selected schema.
4. **Show the artifact status**
```bash
openspec status --change "<name>"
```
This shows which artifacts need to be created and which are ready (dependencies satisfied).
5. **Get instructions for the first artifact**
The first artifact depends on the schema. Check the status output to find the first artifact with status "ready".
```bash
openspec instructions <first-artifact-id> --change "<name>"
```
This outputs the template and context for creating the first artifact.
6. **STOP and wait for user direction**
**Output**
After completing the steps, summarize:
- Change name and location
- Schema/workflow being used and its artifact sequence
- Current status (0/N artifacts complete)
- The template for the first artifact
- Prompt: "Ready to create the first artifact? Run `/opsx:continue` or just describe what this change is about and I'll draft it."
**Guardrails**
- Do NOT create any artifacts yet - just show the instructions
- Do NOT advance beyond showing the first artifact template
- If the name is invalid (not kebab-case), ask for a valid name
- If a change with that name already exists, suggest using `/opsx:continue` instead
- Pass --schema if using a non-default workflow


@@ -1,525 +0,0 @@
---
name: "OPSX: Onboard"
description: Guided onboarding - walk through a complete OpenSpec workflow cycle with narration
category: Workflow
tags: [workflow, onboarding, tutorial, learning]
---
Guide the user through their first complete OpenSpec workflow cycle. This is a teaching experience—you'll do real work in their codebase while explaining each step.
---
## Preflight
Before starting, check if OpenSpec is initialized:
```bash
openspec status --json 2>&1 || echo "NOT_INITIALIZED"
```
**If not initialized:**
> OpenSpec isn't set up in this project yet. Run `openspec init` first, then come back to `/opsx:onboard`.
Stop here if not initialized.
---
## Phase 1: Welcome
Display:
```
## Welcome to OpenSpec!
I'll walk you through a complete change cycle—from idea to implementation—using a real task in your codebase. Along the way, you'll learn the workflow by doing it.
**What we'll do:**
1. Pick a small, real task in your codebase
2. Explore the problem briefly
3. Create a change (the container for our work)
4. Build the artifacts: proposal → specs → design → tasks
5. Implement the tasks
6. Archive the completed change
**Time:** ~15-20 minutes
Let's start by finding something to work on.
```
---
## Phase 2: Task Selection
### Codebase Analysis
Scan the codebase for small improvement opportunities. Look for:
1. **TODO/FIXME comments** - Search for `TODO`, `FIXME`, `HACK`, `XXX` in code files
2. **Missing error handling** - `catch` blocks that swallow errors, risky operations without try-catch
3. **Functions without tests** - Cross-reference `src/` with test directories
4. **Type issues** - `any` types in TypeScript files (`: any`, `as any`)
5. **Debug artifacts** - `console.log`, `console.debug`, `debugger` statements in non-debug code
6. **Missing validation** - User input handlers without validation
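The marker scan above could be sketched as a single pass over the tree; the marker regex mirrors the bullets, while the extension list and directory walk are assumptions:

```python
import pathlib
import re

# TODO/FIXME-style markers, debug artifacts, and TypeScript `any` escapes
MARKERS = re.compile(
    r"\b(TODO|FIXME|HACK|XXX)\b|console\.(log|debug)|\bdebugger\b|: any\b|as any\b"
)

def scan(root: str, exts=(".ts", ".tsx", ".js", ".go", ".py")) -> list:
    """Return (file, line number, line) for each candidate marker found under root."""
    hits = []
    for path in pathlib.Path(root).rglob("*"):
        if path.suffix not in exts or not path.is_file():
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            if MARKERS.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```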
Also check recent git activity:
```bash
git log --oneline -10 2>/dev/null || echo "No git history"
```
### Present Suggestions
From your analysis, present 3-4 specific suggestions:
```
## Task Suggestions
Based on scanning your codebase, here are some good starter tasks:
**1. [Most promising task]**
Location: `src/path/to/file.ts:42`
Scope: ~1-2 files, ~20-30 lines
Why it's good: [brief reason]
**2. [Second task]**
Location: `src/another/file.ts`
Scope: ~1 file, ~15 lines
Why it's good: [brief reason]
**3. [Third task]**
Location: [location]
Scope: [estimate]
Why it's good: [brief reason]
**4. Something else?**
Tell me what you'd like to work on.
Which task interests you? (Pick a number or describe your own)
```
**If nothing found:** Fall back to asking what the user wants to build:
> I didn't find obvious quick wins in your codebase. What's something small you've been meaning to add or fix?
### Scope Guardrail
If the user picks or describes something too large (major feature, multi-day work):
```
That's a valuable task, but it's probably larger than ideal for your first OpenSpec run-through.
For learning the workflow, smaller is better—it lets you see the full cycle without getting stuck in implementation details.
**Options:**
1. **Slice it smaller** - What's the smallest useful piece of [their task]? Maybe just [specific slice]?
2. **Pick something else** - One of the other suggestions, or a different small task?
3. **Do it anyway** - If you really want to tackle this, we can. Just know it'll take longer.
What would you prefer?
```
Let the user override if they insist—this is a soft guardrail.
---
## Phase 3: Explore Demo
Once a task is selected, briefly demonstrate explore mode:
```
Before we create a change, let me quickly show you **explore mode**—it's how you think through problems before committing to a direction.
```
Spend 1-2 minutes investigating the relevant code:
- Read the file(s) involved
- Draw a quick ASCII diagram if it helps
- Note any considerations
```
## Quick Exploration
[Your brief analysis—what you found, any considerations]
┌─────────────────────────────────────────┐
│ [Optional: ASCII diagram if helpful] │
└─────────────────────────────────────────┘
Explore mode (`/opsx:explore`) is for this kind of thinking—investigating before implementing. You can use it anytime you need to think through a problem.
Now let's create a change to hold our work.
```
**PAUSE** - Wait for user acknowledgment before proceeding.
---
## Phase 4: Create the Change
**EXPLAIN:**
```
## Creating a Change
A "change" in OpenSpec is a container for all the thinking and planning around a piece of work. It lives in `openspec/changes/<name>/` and holds your artifacts—proposal, specs, design, tasks.
Let me create one for our task.
```
**DO:** Create the change with a derived kebab-case name:
```bash
openspec new change "<derived-name>"
```
**SHOW:**
```
Created: `openspec/changes/<name>/`
The folder structure:
```
openspec/changes/<name>/
├── proposal.md ← Why we're doing this (empty, we'll fill it)
├── design.md ← How we'll build it (empty)
├── specs/ ← Detailed requirements (empty)
└── tasks.md ← Implementation checklist (empty)
```
Now let's fill in the first artifact—the proposal.
```
---
## Phase 5: Proposal
**EXPLAIN:**
```
## The Proposal
The proposal captures **why** we're making this change and **what** it involves at a high level. It's the "elevator pitch" for the work.
I'll draft one based on our task.
```
**DO:** Draft the proposal content (don't save yet):
```
Here's a draft proposal:
---
## Why
[1-2 sentences explaining the problem/opportunity]
## What Changes
[Bullet points of what will be different]
## Capabilities
### New Capabilities
- `<capability-name>`: [brief description]
### Modified Capabilities
<!-- If modifying existing behavior -->
## Impact
- `src/path/to/file.ts`: [what changes]
- [other files if applicable]
---
Does this capture the intent? I can adjust before we save it.
```
**PAUSE** - Wait for user approval/feedback.
After approval, save the proposal:
```bash
openspec instructions proposal --change "<name>" --json
```
Then write the content to `openspec/changes/<name>/proposal.md`.
```
Proposal saved. This is your "why" document—you can always come back and refine it as understanding evolves.
Next up: specs.
```
---
## Phase 6: Specs
**EXPLAIN:**
```
## Specs
Specs define **what** we're building in precise, testable terms. They use a requirement/scenario format that makes expected behavior crystal clear.
For a small task like this, we might only need one spec file.
```
**DO:** Create the spec file:
```bash
mkdir -p openspec/changes/<name>/specs/<capability-name>
```
Draft the spec content:
```
Here's the spec:
---
## ADDED Requirements
### Requirement: <Name>
<Description of what the system should do>
#### Scenario: <Scenario name>
- **WHEN** <trigger condition>
- **THEN** <expected outcome>
- **AND** <additional outcome if needed>
---
This format—WHEN/THEN/AND—makes requirements testable. You can literally read them as test cases.
```
Save to `openspec/changes/<name>/specs/<capability>/spec.md`.
---
## Phase 7: Design
**EXPLAIN:**
```
## Design
The design captures **how** we'll build it—technical decisions, tradeoffs, approach.
For small changes, this might be brief. That's fine—not every change needs deep design discussion.
```
**DO:** Draft design.md:
```
Here's the design:
---
## Context
[Brief context about the current state]
## Goals / Non-Goals
**Goals:**
- [What we're trying to achieve]
**Non-Goals:**
- [What's explicitly out of scope]
## Decisions
### Decision 1: [Key decision]
[Explanation of approach and rationale]
---
For a small task, this captures the key decisions without over-engineering.
```
Save to `openspec/changes/<name>/design.md`.
---
## Phase 8: Tasks
**EXPLAIN:**
```
## Tasks
Finally, we break the work into implementation tasks—checkboxes that drive the apply phase.
These should be small, clear, and in logical order.
```
**DO:** Generate tasks based on specs and design:
```
Here are the implementation tasks:
---
## 1. [Category or file]
- [ ] 1.1 [Specific task]
- [ ] 1.2 [Specific task]
## 2. Verify
- [ ] 2.1 [Verification step]
---
Each checkbox becomes a unit of work in the apply phase. Ready to implement?
```
**PAUSE** - Wait for user to confirm they're ready to implement.
Save to `openspec/changes/<name>/tasks.md`.
---
## Phase 9: Apply (Implementation)
**EXPLAIN:**
```
## Implementation
Now we implement each task, checking them off as we go. I'll announce each one and occasionally note how the specs/design informed the approach.
```
**DO:** For each task:
1. Announce: "Working on task N: [description]"
2. Implement the change in the codebase
3. Reference specs/design naturally: "The spec says X, so I'm doing Y"
4. Mark complete in tasks.md: `- [ ]` → `- [x]`
5. Brief status: "✓ Task N complete"
Keep narration light—don't over-explain every line of code.
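Step 4's checkbox flip can be sketched as a one-line `sed` edit. This is a minimal illustration only — the file path, task numbers, and task names below are made up:

```shell
# Create an illustrative tasks.md (path and task names are hypothetical)
cat > /tmp/tasks.md <<'EOF'
- [x] 1.1 Add the endpoint
- [ ] 1.2 Add tests
EOF

# Flip task 1.2 from "- [ ]" to "- [x]" in place
sed -i 's/^- \[ \] 1\.2/- [x] 1.2/' /tmp/tasks.md

# Confirm the checkbox is now checked
grep '^- \[x\] 1.2' /tmp/tasks.md
```

The anchor on the task number (`1\.2`) keeps the edit from touching other unchecked tasks.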
After all tasks:
```
## Implementation Complete
All tasks done:
- [x] Task 1
- [x] Task 2
- [x] ...
The change is implemented! One more step—let's archive it.
```
---
## Phase 10: Archive
**EXPLAIN:**
```
## Archiving
When a change is complete, we archive it. This moves it from `openspec/changes/` to `openspec/changes/archive/YYYY-MM-DD-<name>/`.
Archived changes become your project's decision history—you can always find them later to understand why something was built a certain way.
```
**DO:**
```bash
openspec archive "<name>"
```
**SHOW:**
```
Archived to: `openspec/changes/archive/YYYY-MM-DD-<name>/`
The change is now part of your project's history. The code is in your codebase, the decision record is preserved.
```
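Under the hood, archiving is just a dated directory move, which the CLI performs for you. A sketch of the equivalent operation, using a hypothetical change name and /tmp paths rather than a real project:

```shell
# Illustrative change directory (name and /tmp root are hypothetical)
name="add-auth"
mkdir -p "/tmp/openspec/changes/$name" "/tmp/openspec/changes/archive"

# Move it into archive/ under a YYYY-MM-DD- prefix
mv "/tmp/openspec/changes/$name" "/tmp/openspec/changes/archive/$(date +%F)-$name"

# The change now lives in the dated archive folder
ls /tmp/openspec/changes/archive
```

Because the date is part of the folder name, multiple changes archived on different days never collide.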
---
## Phase 11: Recap & Next Steps
```
## Congratulations!
You just completed a full OpenSpec cycle:
1. **Explore** - Thought through the problem
2. **New** - Created a change container
3. **Proposal** - Captured WHY
4. **Specs** - Defined WHAT in detail
5. **Design** - Decided HOW
6. **Tasks** - Broke it into steps
7. **Apply** - Implemented the work
8. **Archive** - Preserved the record
This same rhythm works for any size change—a small fix or a major feature.
---
## Command Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems before/during work |
| `/opsx:new` | Start a new change, step through artifacts |
| `/opsx:ff` | Fast-forward: create all artifacts at once |
| `/opsx:continue` | Continue working on an existing change |
| `/opsx:apply` | Implement tasks from a change |
| `/opsx:verify` | Verify implementation matches artifacts |
| `/opsx:archive` | Archive a completed change |
---
## What's Next?
Try `/opsx:new` or `/opsx:ff` on something you actually want to build. You've got the rhythm now!
```
---
## Graceful Exit Handling
### User wants to stop mid-way
If the user says they need to stop, want to pause, or seem disengaged:
```
No problem! Your change is saved at `openspec/changes/<name>/`.
To pick up where we left off later:
- `/opsx:continue <name>` - Resume artifact creation
- `/opsx:apply <name>` - Jump to implementation (if tasks exist)
The work won't be lost. Come back whenever you're ready.
```
Exit gracefully without pressure.
### User just wants command reference
If the user says they just want to see the commands or skip the tutorial:
```
## OpenSpec Quick Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems (no code changes) |
| `/opsx:new <name>` | Start a new change, step by step |
| `/opsx:ff <name>` | Fast-forward: all artifacts at once |
| `/opsx:continue <name>` | Continue an existing change |
| `/opsx:apply <name>` | Implement tasks |
| `/opsx:verify <name>` | Verify implementation |
| `/opsx:archive <name>` | Archive when done |
Try `/opsx:new` to start your first change, or `/opsx:ff` if you want to move fast.
```
Exit gracefully.
---
## Guardrails
- **Follow the EXPLAIN → DO → SHOW → PAUSE pattern** at key transitions (after explore, after proposal draft, after tasks, after archive)
- **Keep narration light** during implementation—teach without lecturing
- **Don't skip phases** even if the change is small—the goal is teaching the workflow
- **Pause for acknowledgment** at marked points, but don't over-pause
- **Handle exits gracefully**—never pressure the user to continue
- **Use real codebase tasks**—don't simulate or use fake examples
- **Adjust scope gently**—guide toward smaller tasks but respect user choice

View File

@@ -1,13 +1,22 @@
 ---
-name: "OPSX: Fast Forward"
-description: Create a change and generate all artifacts needed for implementation in one go
+name: "OPSX: Propose"
+description: Propose a new change - create it and generate all artifacts in one step
 category: Workflow
 tags: [workflow, artifacts, experimental]
 ---
-Fast-forward through artifact creation - generate everything needed to start implementation.
-**Input**: The argument after `/opsx:ff` is the change name (kebab-case), OR a description of what the user wants to build.
+Propose a new change - create the change and generate all artifacts in one step.
+I'll create a change with artifacts:
+- proposal.md (what & why)
+- design.md (how)
+- tasks.md (implementation steps)
+When ready to implement, run /opsx:apply
+---
+**Input**: The argument after `/opsx:propose` is the change name (kebab-case), OR a description of what the user wants to build.
 **Steps**
@@ -24,7 +33,7 @@ Fast-forward through artifact creation - generate everything needed to start imp
 ```bash
 openspec new change "<name>"
 ```
-This creates a scaffolded change at `openspec/changes/<name>/`.
+This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.
 3. **Get the artifact build order**
 ```bash
@@ -55,7 +64,7 @@ Fast-forward through artifact creation - generate everything needed to start imp
 - Read any completed dependency files for context
 - Create the artifact file using `template` as the structure
 - Apply `context` and `rules` as constraints - but do NOT copy them into the file
 - Show brief progress: "Created <artifact-id>"
 b. **Continue until all `applyRequires` artifacts are complete**
 - After creating each artifact, re-run `openspec status --change "<name>" --json`
@@ -84,7 +93,10 @@ After completing all artifacts, summarize:
 - Follow the `instruction` field from `openspec instructions` for each artifact type
 - The schema defines what each artifact should contain - follow it
 - Read dependency artifacts for context before creating new ones
-- Use the `template` as a starting point, filling in based on context
+- Use `template` as the structure for your output file - fill in its sections
+- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
+- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
+- These guide what you write, but should never appear in the output
 **Guardrails**
 - Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)

View File

@@ -1,134 +0,0 @@
---
name: "OPSX: Sync"
description: Sync delta specs from a change to main specs
category: Workflow
tags: [workflow, specs, experimental]
---
Sync delta specs from a change to main specs.
This is an **agent-driven** operation - you will read delta specs and directly edit main specs to apply the changes. This allows intelligent merging (e.g., adding a scenario without copying the entire requirement).
**Input**: Optionally specify a change name after `/opsx:sync` (e.g., `/opsx:sync add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have delta specs (under `specs/` directory).
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Find delta specs**
Look for delta spec files in `openspec/changes/<name>/specs/*/spec.md`.
Each delta spec file contains sections like:
- `## ADDED Requirements` - New requirements to add
- `## MODIFIED Requirements` - Changes to existing requirements
- `## REMOVED Requirements` - Requirements to remove
- `## RENAMED Requirements` - Requirements to rename (FROM:/TO: format)
If no delta specs found, inform user and stop.
3. **For each delta spec, apply changes to main specs**
For each capability with a delta spec at `openspec/changes/<name>/specs/<capability>/spec.md`:
a. **Read the delta spec** to understand the intended changes
b. **Read the main spec** at `openspec/specs/<capability>/spec.md` (may not exist yet)
c. **Apply changes intelligently**:
**ADDED Requirements:**
- If requirement doesn't exist in main spec → add it
- If requirement already exists → update it to match (treat as implicit MODIFIED)
**MODIFIED Requirements:**
- Find the requirement in main spec
- Apply the changes - this can be:
- Adding new scenarios (don't need to copy existing ones)
- Modifying existing scenarios
- Changing the requirement description
- Preserve scenarios/content not mentioned in the delta
**REMOVED Requirements:**
- Remove the entire requirement block from main spec
**RENAMED Requirements:**
- Find the FROM requirement, rename to TO
d. **Create new main spec** if capability doesn't exist yet:
- Create `openspec/specs/<capability>/spec.md`
- Add Purpose section (can be brief, mark as TBD)
- Add Requirements section with the ADDED requirements
4. **Show summary**
After applying all changes, summarize:
- Which capabilities were updated
- What changes were made (requirements added/modified/removed/renamed)
**Delta Spec Format Reference**
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL do something new.
#### Scenario: Basic case
- **WHEN** user does X
- **THEN** system does Y
## MODIFIED Requirements
### Requirement: Existing Feature
#### Scenario: New scenario to add
- **WHEN** user does A
- **THEN** system does B
## REMOVED Requirements
### Requirement: Deprecated Feature
## RENAMED Requirements
- FROM: `### Requirement: Old Name`
- TO: `### Requirement: New Name`
```
**Key Principle: Intelligent Merging**
Unlike programmatic merging, you can apply **partial updates**:
- To add a scenario, just include that scenario under MODIFIED - don't copy existing scenarios
- The delta represents *intent*, not a wholesale replacement
- Use your judgment to merge changes sensibly
**Output On Success**
```
## Specs Synced: <change-name>
Updated main specs:
**<capability-1>**:
- Added requirement: "New Feature"
- Modified requirement: "Existing Feature" (added 1 scenario)
**<capability-2>**:
- Created new spec file
- Added requirement: "Another Feature"
Main specs are now updated. The change remains active - archive when implementation is complete.
```
**Guardrails**
- Read both delta and main specs before making changes
- Preserve existing content not mentioned in delta
- If something is unclear, ask for clarification
- Show what you're changing as you go
- The operation should be idempotent - running twice should give same result

View File

@@ -1,164 +0,0 @@
---
name: "OPSX: Verify"
description: Verify implementation matches change artifacts before archiving
category: Workflow
tags: [workflow, verify, experimental]
---
Verify that an implementation matches the change artifacts (specs, tasks, design).
**Input**: Optionally specify a change name after `/opsx:verify` (e.g., `/opsx:verify add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have implementation tasks (tasks artifact exists).
Include the schema used for each change if available.
Mark changes with incomplete tasks as "(In Progress)".
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifacts exist for this change
3. **Get the change directory and load artifacts**
```bash
openspec instructions apply --change "<name>" --json
```
This returns the change directory and context files. Read all available artifacts from `contextFiles`.
4. **Initialize verification report structure**
Create a report structure with three dimensions:
- **Completeness**: Track tasks and spec coverage
- **Correctness**: Track requirement implementation and scenario coverage
- **Coherence**: Track design adherence and pattern consistency
Each dimension can have CRITICAL, WARNING, or SUGGESTION issues.
5. **Verify Completeness**
**Task Completion**:
- If tasks.md exists in contextFiles, read it
- Parse checkboxes: `- [ ]` (incomplete) vs `- [x]` (complete)
- Count complete vs total tasks
- If incomplete tasks exist:
- Add CRITICAL issue for each incomplete task
- Recommendation: "Complete task: <description>" or "Mark as done if already implemented"
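The checkbox counting above is mechanical and can be sketched with `grep`. The tasks file below is an illustrative stand-in, not a real change:

```shell
# Illustrative tasks.md (contents are hypothetical)
cat > /tmp/verify-tasks.md <<'EOF'
- [x] 1.1 Implement handler
- [x] 1.2 Wire up route
- [ ] 2.1 Add tests
EOF

# Count all checkboxes, then only the checked ones
total=$(grep -c '^- \[[ x]\]' /tmp/verify-tasks.md)
completed=$(grep -c '^- \[x\]' /tmp/verify-tasks.md)
echo "$completed/$total tasks complete"
```

Any gap between `completed` and `total` maps directly to CRITICAL issues, one per unchecked line.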
**Spec Coverage**:
- If delta specs exist in `openspec/changes/<name>/specs/`:
- Extract all requirements (marked with "### Requirement:")
- For each requirement:
- Search codebase for keywords related to the requirement
- Assess if implementation likely exists
- If requirements appear unimplemented:
- Add CRITICAL issue: "Requirement not found: <requirement name>"
- Recommendation: "Implement requirement X: <description>"
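The keyword search for implementation evidence is deliberately crude; a sketch under the assumption that a keyword from the requirement name appears somewhere in the source tree (the file and function names here are invented):

```shell
# Illustrative source tree (hypothetical file and function names)
mkdir -p /tmp/src
echo 'func handleOAuthCallback() {}' > /tmp/src/oauth.go

# Case-insensitive recursive search for a keyword drawn from the requirement name
req_keyword="oauth"
if grep -ril "$req_keyword" /tmp/src >/dev/null; then
  echo "evidence found"
else
  echo "requirement not found"
fi
```

A hit is only a signal that implementation *likely* exists; absence of a hit is what escalates to a CRITICAL issue.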
6. **Verify Correctness**
**Requirement Implementation Mapping**:
- For each requirement from delta specs:
- Search codebase for implementation evidence
- If found, note file paths and line ranges
- Assess if implementation matches requirement intent
- If divergence detected:
- Add WARNING: "Implementation may diverge from spec: <details>"
- Recommendation: "Review <file>:<lines> against requirement X"
**Scenario Coverage**:
- For each scenario in delta specs (marked with "#### Scenario:"):
- Check if conditions are handled in code
- Check if tests exist covering the scenario
- If scenario appears uncovered:
- Add WARNING: "Scenario not covered: <scenario name>"
- Recommendation: "Add test or implementation for scenario: <description>"
7. **Verify Coherence**
**Design Adherence**:
- If design.md exists in contextFiles:
- Extract key decisions (look for sections like "Decision:", "Approach:", "Architecture:")
- Verify implementation follows those decisions
- If contradiction detected:
- Add WARNING: "Design decision not followed: <decision>"
- Recommendation: "Update implementation or revise design.md to match reality"
- If no design.md: Skip design adherence check, note "No design.md to verify against"
**Code Pattern Consistency**:
- Review new code for consistency with project patterns
- Check file naming, directory structure, coding style
- If significant deviations found:
- Add SUGGESTION: "Code pattern deviation: <details>"
- Recommendation: "Consider following project pattern: <example>"
8. **Generate Verification Report**
**Summary Scorecard**:
```
## Verification Report: <change-name>
### Summary
| Dimension | Status |
|--------------|------------------|
| Completeness | X/Y tasks, N reqs|
| Correctness | M/N reqs covered |
| Coherence | Followed/Issues |
```
**Issues by Priority**:
1. **CRITICAL** (Must fix before archive):
- Incomplete tasks
- Missing requirement implementations
- Each with specific, actionable recommendation
2. **WARNING** (Should fix):
- Spec/design divergences
- Missing scenario coverage
- Each with specific recommendation
3. **SUGGESTION** (Nice to fix):
- Pattern inconsistencies
- Minor improvements
- Each with specific recommendation
**Final Assessment**:
- If CRITICAL issues: "X critical issue(s) found. Fix before archiving."
- If only warnings: "No critical issues. Y warning(s) to consider. Ready for archive (with noted improvements)."
- If all clear: "All checks passed. Ready for archive."
**Verification Heuristics**
- **Completeness**: Focus on objective checklist items (checkboxes, requirements list)
- **Correctness**: Use keyword search, file path analysis, reasonable inference - don't require perfect certainty
- **Coherence**: Look for glaring inconsistencies, don't nitpick style
- **False Positives**: When uncertain, prefer SUGGESTION over WARNING, WARNING over CRITICAL
- **Actionability**: Every issue must have a specific recommendation with file/line references where applicable
**Graceful Degradation**
- If only tasks.md exists: verify task completion only, skip spec/design checks
- If tasks + specs exist: verify completeness and correctness, skip design
- If full artifacts: verify all three dimensions
- Always note which checks were skipped and why
**Output Format**
Use clear markdown with:
- Table for summary scorecard
- Grouped lists for issues (CRITICAL/WARNING/SUGGESTION)
- Code references in format: `file.ts:123`
- Specific, actionable recommendations
- No vague suggestions like "consider reviewing"

5
.claude/settings.json Normal file
View File

@@ -0,0 +1,5 @@
{
"enabledPlugins": {
"ralph-loop@claude-plugins-official": true
}
}

View File

@@ -6,7 +6,7 @@ compatibility: Requires openspec CLI.
 metadata:
 author: openspec
 version: "1.0"
-generatedBy: "1.0.2"
+generatedBy: "1.2.0"
 ---
 Implement tasks from an OpenSpec change.

View File

@@ -6,7 +6,7 @@ compatibility: Requires openspec CLI.
 metadata:
 author: openspec
 version: "1.0"
-generatedBy: "1.0.2"
+generatedBy: "1.2.0"
 ---
 Archive a completed change in the experimental workflow.
@@ -63,7 +63,7 @@ Archive a completed change in the experimental workflow.
 - If changes needed: "Sync now (recommended)", "Archive without syncing"
 - If already synced: "Archive now", "Sync anyway", "Cancel"
-If user chooses sync, execute /opsx:sync logic (use the openspec-sync-specs skill). Proceed to archive regardless of choice.
+If user chooses sync, use Task tool (subagent_type: "general-purpose", prompt: "Use Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Proceed to archive regardless of choice.
 5. **Perform the archive**

View File

@@ -1,246 +0,0 @@
---
name: openspec-bulk-archive-change
description: Archive multiple completed changes at once. Use when archiving several parallel changes.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Archive multiple completed changes in a single operation.
This skill allows you to batch-archive changes, handling spec conflicts intelligently by checking the codebase to determine what's actually implemented.
**Input**: None required (prompts for selection)
**Steps**
1. **Get active changes**
Run `openspec list --json` to get all active changes.
If no active changes exist, inform user and stop.
2. **Prompt for change selection**
Use **AskUserQuestion tool** with multi-select to let user choose changes:
- Show each change with its schema
- Include an option for "All changes"
- Allow any number of selections (1+ works, 2+ is the typical use case)
**IMPORTANT**: Do NOT auto-select. Always let the user choose.
3. **Batch validation - gather status for all selected changes**
For each selected change, collect:
a. **Artifact status** - Run `openspec status --change "<name>" --json`
- Parse `schemaName` and `artifacts` list
- Note which artifacts are `done` vs other states
b. **Task completion** - Read `openspec/changes/<name>/tasks.md`
- Count `- [ ]` (incomplete) vs `- [x]` (complete)
- If no tasks file exists, note as "No tasks"
c. **Delta specs** - Check `openspec/changes/<name>/specs/` directory
- List which capability specs exist
- For each, extract requirement names (lines matching `### Requirement: <name>`)
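Extracting requirement names is a simple grep/sed pass; a sketch over an illustrative delta spec (the requirement names are made up):

```shell
# Illustrative delta spec (requirement names are hypothetical)
cat > /tmp/delta-spec.md <<'EOF'
### Requirement: OAuth Provider Integration
The system SHALL support OAuth login.
### Requirement: Token Refresh
The system SHALL refresh expired tokens.
EOF

# List the requirement names, stripping the heading prefix
grep '^### Requirement:' /tmp/delta-spec.md | sed 's/^### Requirement: //'
```

These extracted names are what feed the conflict detection in the next step.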
4. **Detect spec conflicts**
Build a map of `capability -> [changes that touch it]`:
```
auth -> [change-a, change-b] <- CONFLICT (2+ changes)
api -> [change-c] <- OK (only 1 change)
```
A conflict exists when 2+ selected changes have delta specs for the same capability.
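Building that capability → changes map needs nothing beyond the directory layout; a sketch with invented change and capability names:

```shell
# Illustrative layout: two changes touch auth, one touches api
mkdir -p /tmp/changes/add-oauth/specs/auth \
         /tmp/changes/add-jwt/specs/auth \
         /tmp/changes/add-rest/specs/api

# Emit "capability change" pairs, then group and flag capabilities hit twice
conflicts=$(for spec in /tmp/changes/*/specs/*/; do
  cap=$(basename "$spec")
  change=$(basename "$(dirname "$(dirname "$spec")")")
  echo "$cap $change"
done | sort | awk '
  { caps[$1] = caps[$1] " " $2; n[$1]++ }
  END { for (c in caps) printf "%s ->%s%s\n", c, caps[c], (n[c] > 1 ? " <- CONFLICT" : "") }')
echo "$conflicts"
```

Only the flagged capabilities need the agentic resolution in step 5; the rest sync mechanically.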
5. **Resolve conflicts agentically**
**For each conflict**, investigate the codebase:
a. **Read the delta specs** from each conflicting change to understand what each claims to add/modify
b. **Search the codebase** for implementation evidence:
- Look for code implementing requirements from each delta spec
- Check for related files, functions, or tests
c. **Determine resolution**:
- If only one change is actually implemented -> sync that one's specs
- If both implemented -> apply in chronological order (older first, newer overwrites)
- If neither implemented -> skip spec sync, warn user
d. **Record resolution** for each conflict:
- Which change's specs to apply
- In what order (if both)
- Rationale (what was found in codebase)
6. **Show consolidated status table**
Display a table summarizing all changes:
```
| Change | Artifacts | Tasks | Specs | Conflicts | Status |
|---------------------|-----------|-------|---------|-----------|--------|
| schema-management | Done | 5/5 | 2 delta | None | Ready |
| project-config | Done | 3/3 | 1 delta | None | Ready |
| add-oauth | Done | 4/4 | 1 delta | auth (!) | Ready* |
| add-verify-skill | 1 left | 2/5 | None | None | Warn |
```
For conflicts, show the resolution:
```
* Conflict resolution:
- auth spec: Will apply add-oauth then add-jwt (both implemented, chronological order)
```
For incomplete changes, show warnings:
```
Warnings:
- add-verify-skill: 1 incomplete artifact, 3 incomplete tasks
```
7. **Confirm batch operation**
Use **AskUserQuestion tool** with a single confirmation:
- "Archive N changes?" with options based on status
- Options might include:
- "Archive all N changes"
- "Archive only N ready changes (skip incomplete)"
- "Cancel"
If there are incomplete changes, make clear they'll be archived with warnings.
8. **Execute archive for each confirmed change**
Process changes in the determined order (respecting conflict resolution):
a. **Sync specs** if delta specs exist:
- Use the openspec-sync-specs approach (agent-driven intelligent merge)
- For conflicts, apply in resolved order
- Track if sync was done
b. **Perform the archive**:
```bash
mkdir -p openspec/changes/archive
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
c. **Track outcome** for each change:
- Success: archived successfully
- Failed: error during archive (record error)
- Skipped: user chose not to archive (if applicable)
9. **Display summary**
Show final results:
```
## Bulk Archive Complete
Archived 3 changes:
- schema-management-cli -> archive/2026-01-19-schema-management-cli/
- project-config -> archive/2026-01-19-project-config/
- add-oauth -> archive/2026-01-19-add-oauth/
Skipped 1 change:
- add-verify-skill (user chose not to archive incomplete)
Spec sync summary:
- 4 delta specs synced to main specs
- 1 conflict resolved (auth: applied both in chronological order)
```
If any failures:
```
Failed 1 change:
- some-change: Archive directory already exists
```
**Conflict Resolution Examples**
Example 1: Only one implemented
```
Conflict: specs/auth/spec.md touched by [add-oauth, add-jwt]
Checking add-oauth:
- Delta adds "OAuth Provider Integration" requirement
- Searching codebase... found src/auth/oauth.ts implementing OAuth flow
Checking add-jwt:
- Delta adds "JWT Token Handling" requirement
- Searching codebase... no JWT implementation found
Resolution: Only add-oauth is implemented. Will sync add-oauth specs only.
```
Example 2: Both implemented
```
Conflict: specs/api/spec.md touched by [add-rest-api, add-graphql]
Checking add-rest-api (created 2026-01-10):
- Delta adds "REST Endpoints" requirement
- Searching codebase... found src/api/rest.ts
Checking add-graphql (created 2026-01-15):
- Delta adds "GraphQL Schema" requirement
- Searching codebase... found src/api/graphql.ts
Resolution: Both implemented. Will apply add-rest-api specs first,
then add-graphql specs (chronological order, newer takes precedence).
```
**Output On Success**
```
## Bulk Archive Complete
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
- <change-2> -> archive/YYYY-MM-DD-<change-2>/
Spec sync summary:
- N delta specs synced to main specs
- No conflicts (or: M conflicts resolved)
```
**Output On Partial Success**
```
## Bulk Archive Complete (partial)
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
Skipped M changes:
- <change-2> (user chose not to archive incomplete)
Failed K changes:
- <change-3>: Archive directory already exists
```
**Output When No Changes**
```
## No Changes to Archive
No active changes found. Use `/opsx:new` to create a new change.
```
**Guardrails**
- Allow any number of changes (1+ is fine, 2+ is the typical use case)
- Always prompt for selection, never auto-select
- Detect spec conflicts early and resolve by checking codebase
- When both changes are implemented, apply specs in chronological order
- Skip spec sync only when implementation is missing (warn user)
- Show clear per-change status before confirming
- Use single confirmation for entire batch
- Track and report all outcomes (success/skip/fail)
- Preserve .openspec.yaml when moving to archive
- Archive directory target uses current date: YYYY-MM-DD-<name>
- If archive target exists, fail that change but continue with others

View File

@@ -1,118 +0,0 @@
---
name: openspec-continue-change
description: Continue working on an OpenSpec change by creating the next artifact. Use when the user wants to progress their change, create the next artifact, or continue their workflow.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Continue working on a change by creating the next artifact.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes sorted by most recently modified. Then use the **AskUserQuestion tool** to let the user select which change to work on.
Present the top 3-4 most recently modified changes as options, showing:
- Change name
- Schema (from `schema` field if present, otherwise "spec-driven")
- Status (e.g., "0/5 tasks", "complete", "no tasks")
- How recently it was modified (from `lastModified` field)
Mark the most recently modified change as "(Recommended)" since it's likely what the user wants to continue.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check current status**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand current state. The response includes:
- `schemaName`: The workflow schema being used (e.g., "spec-driven")
- `artifacts`: Array of artifacts with their status ("done", "ready", "blocked")
- `isComplete`: Boolean indicating if all artifacts are complete
3. **Act based on status**:
---
**If all artifacts are complete (`isComplete: true`)**:
- Congratulate the user
- Show final status including the schema used
- Suggest: "All artifacts created! You can now implement this change or archive it."
- STOP
---
**If artifacts are ready to create** (status shows artifacts with `status: "ready"`):
- Pick the FIRST artifact with `status: "ready"` from the status output
- Get its instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- Parse the JSON. The key fields are:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- **Create the artifact file**:
- Read any completed dependency files for context
- Use `template` as the structure - fill in its sections
- Apply `context` and `rules` as constraints when writing - but do NOT copy them into the file
- Write to the output path specified in instructions
- Show what was created and what's now unlocked
- STOP after creating ONE artifact
---
**If no artifacts are ready (all blocked)**:
- This shouldn't happen with a valid schema
- Show status and suggest checking for issues
4. **After creating an artifact, show progress**
```bash
openspec status --change "<name>"
```
**Output**
After each invocation, show:
- Which artifact was created
- Schema workflow being used
- Current progress (N/M complete)
- What artifacts are now unlocked
- Prompt: "Want to continue? Just ask me to continue or tell me what to do next."
**Artifact Creation Guidelines**
The artifact types and their purpose depend on the schema. Use the `instruction` field from the instructions output to understand what to create.
Common artifact patterns:
**spec-driven schema** (proposal → specs → design → tasks):
- **proposal.md**: Ask user about the change if not clear. Fill in Why, What Changes, Capabilities, Impact.
- The Capabilities section is critical - each capability listed will need a spec file.
- **specs/<capability>/spec.md**: Create one spec per capability listed in the proposal's Capabilities section (use the capability name, not the change name).
- **design.md**: Document technical decisions, architecture, and implementation approach.
- **tasks.md**: Break down implementation into checkboxed tasks.
For other schemas, follow the `instruction` field from the CLI output.
**Guardrails**
- Create ONE artifact per invocation
- Always read dependency artifacts before creating a new one
- Never skip artifacts or create out of order
- If context is unclear, ask the user before creating
- Verify the artifact file exists after writing before marking progress
- Use the schema's artifact sequence, don't assume specific artifact names
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output

View File

@@ -6,12 +6,12 @@ compatibility: Requires openspec CLI.
 metadata:
   author: openspec
   version: "1.0"
-  generatedBy: "1.0.2"
+  generatedBy: "1.2.0"
 ---
 Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
-**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first (e.g., start a change with `/opsx:new` or `/opsx:ff`). You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
+**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
 **This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
@@ -95,8 +95,7 @@ This tells you:
 Think freely. When insights crystallize, you might offer:
-- "This feels solid enough to start a change. Want me to create one?"
-  → Can transition to `/opsx:new` or `/opsx:ff`
+- "This feels solid enough to start a change. Want me to create a proposal?"
 - Or keep exploring - no pressure to formalize
 ### When a change exists
@@ -252,7 +251,7 @@ You: That changes everything.
 There's no required ending. Discovery might:
-- **Flow into action**: "Ready to start? /opsx:new or /opsx:ff"
+- **Flow into a proposal**: "Ready to start? I can create a change proposal."
 - **Result in artifact updates**: "Updated design.md with these decisions"
 - **Just provide clarity**: User has what they need, moves on
 - **Continue later**: "We can pick this up anytime"
@@ -269,8 +268,7 @@ When it feels like things are crystallizing, you might summarize:
 **Open questions**: [if any remain]
 **Next steps** (if ready):
-- Create a change: /opsx:new <name>
-- Fast-forward to tasks: /opsx:ff <name>
+- Create a change proposal
 - Keep exploring: just keep talking
 ```

View File

@@ -0,0 +1,281 @@
---
name: openspec-lock-consensus
description: Lock consensus - after an exploration discussion, lock the discussion results into a formal consensus document. Prevents later proposals from drifting away from what was discussed.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: junhong
version: "1.1"
---
# Consensus Lock Skill
After an `/opsx:explore` discussion, use this skill to lock the discussion results in as a formal consensus. The consensus document is the base constraint for all subsequent artifacts.
## Trigger
```
/opsx:lock <change-name>
```
Or, after exploration wraps up, the AI proactively suggests:
> "The discussion seems fairly clear now. Want to lock in the consensus?"
---
## Workflow
### Step 1: Organize the discussion points
Extract consensus along the following four dimensions from the conversation:
| Dimension | Description | Example |
|------|------|------|
| **What to build** | The explicit feature scope | "Support batch import of IoT cards" |
| **What not to build** | Explicitly excluded items | "No real-time sync; scheduled batches only" |
| **Key constraints** | Technical/business limits | "Must use Asynq async tasks" |
| **Acceptance criteria** | How to judge completion | "Import 1000 cards < 30s" |
### Step 2: Confirm each dimension with Question_tool
**You must use Question_tool for structured confirmation**, with one question per dimension:
```typescript
// Example: confirming "what to build"
Question_tool({
questions: [{
header: "Confirm: what to build",
question: "Here is the collected feature scope. Please confirm:\n\n" +
"1. Feature A\n" +
"2. Feature B\n" +
"3. Feature C\n\n" +
"Is it accurate and complete?",
options: [
{ label: "Confirmed", description: "The content above is accurate and complete" },
{ label: "Needs additions", description: "Some features are missing" },
{ label: "Needs removals", description: "Something here should not be included" }
],
multiple: false
}]
})
```
**If the user picks "Needs additions" or "Needs removals"**:
- The user will provide revision feedback via custom input
- Update the list based on the feedback, then confirm again with Question_tool
**Confirmation flow**
```
┌────────────────────────────────────────────────────────────────┐
│ Question_tool: confirm "what to build"                         │
│ ├── "Confirmed" → next dimension                               │
│ └── other choice / custom input → revise and re-confirm        │
├────────────────────────────────────────────────────────────────┤
│ Question_tool: confirm "what not to build"                     │
│ ├── "Confirmed" → next dimension                               │
│ └── other choice / custom input → revise and re-confirm        │
├────────────────────────────────────────────────────────────────┤
│ Question_tool: confirm "key constraints"                       │
│ ├── "Confirmed" → next dimension                               │
│ └── other choice / custom input → revise and re-confirm        │
├────────────────────────────────────────────────────────────────┤
│ Question_tool: confirm "acceptance criteria"                   │
│ ├── "Confirmed" → generate consensus.md                        │
│ └── other choice / custom input → revise and re-confirm        │
└────────────────────────────────────────────────────────────────┘
```
### Step 3: Generate consensus.md
After all dimensions are confirmed, create the file:
```bash
# Check whether the change exists
openspec list --json
# If the change does not exist, create it first
# openspec new <change-name>
# Write consensus.md
```
**File path**: `openspec/changes/<change-name>/consensus.md`
---
## Question_tool usage conventions
### Question template per dimension
**1. What to build**
```typescript
{
header: "Confirm: what to build",
question: "Here is the collected [feature scope]:\n\n" +
items.map((item, i) => `${i+1}. ${item}`).join('\n') +
"\n\nPlease confirm it is accurate and complete",
options: [
{ label: "Confirmed", description: "Feature scope is accurate and complete" },
{ label: "Needs additions", description: "Some features are missing" },
{ label: "Needs removals", description: "Something should not be included" }
]
}
```
**2. What not to build**
```typescript
{
header: "Confirm: what not to build",
question: "Here are the explicitly [excluded items]:\n\n" +
items.map((item, i) => `${i+1}. ${item}`).join('\n') +
"\n\nPlease confirm this is correct",
options: [
{ label: "Confirmed", description: "The exclusion scope is correct" },
{ label: "Needs additions", description: "Other items should also be excluded" },
{ label: "Needs removals", description: "Some items should not be excluded" }
]
}
```
**3. Key constraints**
```typescript
{
header: "Confirm: key constraints",
question: "Here are the [key constraints]:\n\n" +
items.map((item, i) => `${i+1}. ${item}`).join('\n') +
"\n\nPlease confirm they are correct",
options: [
{ label: "Confirmed", description: "The constraints are correct" },
{ label: "Needs additions", description: "There are other constraints" },
{ label: "Needs changes", description: "A constraint is described inaccurately" }
]
}
```
**4. Acceptance criteria**
```typescript
{
header: "Confirm: acceptance criteria",
question: "Here are the [acceptance criteria] (must be measurable):\n\n" +
items.map((item, i) => `${i+1}. ${item}`).join('\n') +
"\n\nPlease confirm they are correct",
options: [
{ label: "Confirmed", description: "The criteria are clear and measurable" },
{ label: "Needs additions", description: "There are other acceptance criteria" },
{ label: "Needs changes", description: "A criterion is unclear or unmeasurable" }
]
}
```
### Handling user feedback
When the user picks anything other than "Confirmed", or provides custom input:
1. Parse the user's revision feedback
2. Update the content of the corresponding dimension
3. Confirm the updated content with Question_tool again
4. Repeat until the user picks "Confirmed"
---
## consensus.md template
```markdown
# Consensus Document
**Change**: <change-name>
**Confirmed at**: <timestamp>
**Confirmed by**: user
---
## 1. What to build
- [x] Feature A (confirmed)
- [x] Feature B (confirmed)
- [x] Feature C (confirmed)
## 2. What not to build
- [x] Exclusion A (confirmed)
- [x] Exclusion B (confirmed)
## 3. Key constraints
- [x] Technical constraint A (confirmed)
- [x] Business constraint B (confirmed)
## 4. Acceptance criteria
- [x] Criterion A (confirmed)
- [x] Criterion B (confirmed)
---
## Discussion background
<Brief summary of the core problem discussed and the chosen direction>
## Key decision log
| Decision point | Choice | Rationale |
|--------|------|------|
| Decision 1 | Option A | Because... |
| Decision 2 | Option B | Because... |
---
**Sign-off**: the user confirmed each item above via Question_tool
```
---
## Downstream workflow binding
### When generating the proposal
When `/opsx:continue` generates the proposal, it **must**:
1. Read `consensus.md`
2. Ensure the proposal's Capabilities cover every item under "what to build"
3. Ensure the proposal contains nothing from "what not to build"
4. Ensure the proposal respects the "key constraints"
### Validation
If the proposal is inconsistent with the consensus, emit a warning:
```
⚠️ Proposal validation warning:
In the consensus under "what to build" but missing from the proposal:
- Feature C
In the consensus under "what not to build" but present in the proposal:
- Exclusion A
Suggest fixing the proposal or updating the consensus.
```
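Mechanically, this coverage check can be sketched as below. The file names and one-item-per-line layout are hypothetical; the real check is judgment-based reading of the two documents.

```shell
# Hypothetical sketch: flag consensus "what to build" items that the
# proposal never mentions (case-insensitive substring match).
cat > /tmp/consensus-items.txt <<'EOF'
batch import
scheduled sync
EOF
printf 'This proposal covers batch import of IoT cards only.\n' > /tmp/proposal.md
while read -r item; do
  grep -qi "$item" /tmp/proposal.md || echo "missing: $item"
done < /tmp/consensus-items.txt
```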
---
## Guardrails
- **Must use Question_tool** - do not confirm with plain text
- **Confirm dimension by dimension** - confirm the four dimensions separately; do not merge them
- **Do not skip confirmation** - every dimension must be explicitly confirmed by the user
- **Do not improvise** - only organize content explicitly mentioned in the discussion
- **Avoid vague wording** - words like "try to", "possibly", "consider" must be made concrete
- **Acceptance criteria must be measurable** - avoid unverifiable criteria like "performance should be good"
---
## Relationship to other skills
| Skill | Relationship |
|-------|------|
| `openspec-explore` | Triggers lock after exploration ends |
| `openspec-new-change` | Triggered after lock (if the change does not exist) |
| `openspec-continue-change` | Reads the consensus for validation when generating the proposal |
| `openspec-generate-acceptance-tests` | Generates test skeletons from the consensus acceptance criteria |

View File

@@ -1,74 +0,0 @@
---
name: openspec-new-change
description: Start a new OpenSpec change using the experimental artifact workflow. Use when the user wants to create a new feature, fix, or modification with a structured step-by-step approach.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Start a new change using the experimental artifact-driven approach.
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
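The mechanical part of the name derivation can be sketched in shell. This is a hypothetical helper, not part of the CLI; in practice you may also abbreviate words, as in the `add-user-auth` example above.

```shell
# Hypothetical sketch: lowercase the description and squeeze every
# run of non-alphanumerics into a single dash.
desc="Add user authentication"
name=$(printf '%s' "$desc" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/^-*//; s/-*$//')
echo "$name"   # → add-user-authentication
```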
2. **Determine the workflow schema**
Use the default schema (omit `--schema`) unless the user explicitly requests a different workflow.
**Use a different schema only if the user mentions:**
- A specific schema name → use `--schema <name>`
- "show workflows" or "what workflows" → run `openspec schemas --json` and let them choose
**Otherwise**: Omit `--schema` to use the default.
3. **Create the change directory**
```bash
openspec new change "<name>"
```
Add `--schema <name>` only if the user requested a specific workflow.
This creates a scaffolded change at `openspec/changes/<name>/` with the selected schema.
4. **Show the artifact status**
```bash
openspec status --change "<name>"
```
This shows which artifacts need to be created and which are ready (dependencies satisfied).
5. **Get instructions for the first artifact**
The first artifact depends on the schema (e.g., `proposal` for spec-driven).
Check the status output to find the first artifact with status "ready".
```bash
openspec instructions <first-artifact-id> --change "<name>"
```
This outputs the template and context for creating the first artifact.
6. **STOP and wait for user direction**
**Output**
After completing the steps, summarize:
- Change name and location
- Schema/workflow being used and its artifact sequence
- Current status (0/N artifacts complete)
- The template for the first artifact
- Prompt: "Ready to create the first artifact? Just describe what this change is about and I'll draft it, or ask me to continue."
**Guardrails**
- Do NOT create any artifacts yet - just show the instructions
- Do NOT advance beyond showing the first artifact template
- If the name is invalid (not kebab-case), ask for a valid name
- If a change with that name already exists, suggest continuing that change instead
- Pass --schema if using a non-default workflow

View File

@@ -1,529 +0,0 @@
---
name: openspec-onboard
description: Guided onboarding for OpenSpec - walk through a complete workflow cycle with narration and real codebase work.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Guide the user through their first complete OpenSpec workflow cycle. This is a teaching experience—you'll do real work in their codebase while explaining each step.
---
## Preflight
Before starting, check if OpenSpec is initialized:
```bash
openspec status --json 2>&1 || echo "NOT_INITIALIZED"
```
**If not initialized:**
> OpenSpec isn't set up in this project yet. Run `openspec init` first, then come back to `/opsx:onboard`.
Stop here if not initialized.
---
## Phase 1: Welcome
Display:
```
## Welcome to OpenSpec!
I'll walk you through a complete change cycle—from idea to implementation—using a real task in your codebase. Along the way, you'll learn the workflow by doing it.
**What we'll do:**
1. Pick a small, real task in your codebase
2. Explore the problem briefly
3. Create a change (the container for our work)
4. Build the artifacts: proposal → specs → design → tasks
5. Implement the tasks
6. Archive the completed change
**Time:** ~15-20 minutes
Let's start by finding something to work on.
```
---
## Phase 2: Task Selection
### Codebase Analysis
Scan the codebase for small improvement opportunities. Look for:
1. **TODO/FIXME comments** - Search for `TODO`, `FIXME`, `HACK`, `XXX` in code files
2. **Missing error handling** - `catch` blocks that swallow errors, risky operations without try-catch
3. **Functions without tests** - Cross-reference `src/` with test directories
4. **Type issues** - `any` types in TypeScript files (`: any`, `as any`)
5. **Debug artifacts** - `console.log`, `console.debug`, `debugger` statements in non-debug code
6. **Missing validation** - User input handlers without validation
Also check recent git activity:
```bash
git log --oneline -10 2>/dev/null || echo "No git history"
```
### Present Suggestions
From your analysis, present 3-4 specific suggestions:
```
## Task Suggestions
Based on scanning your codebase, here are some good starter tasks:
**1. [Most promising task]**
Location: `src/path/to/file.ts:42`
Scope: ~1-2 files, ~20-30 lines
Why it's good: [brief reason]
**2. [Second task]**
Location: `src/another/file.ts`
Scope: ~1 file, ~15 lines
Why it's good: [brief reason]
**3. [Third task]**
Location: [location]
Scope: [estimate]
Why it's good: [brief reason]
**4. Something else?**
Tell me what you'd like to work on.
Which task interests you? (Pick a number or describe your own)
```
**If nothing found:** Fall back to asking what the user wants to build:
> I didn't find obvious quick wins in your codebase. What's something small you've been meaning to add or fix?
### Scope Guardrail
If the user picks or describes something too large (major feature, multi-day work):
```
That's a valuable task, but it's probably larger than ideal for your first OpenSpec run-through.
For learning the workflow, smaller is better—it lets you see the full cycle without getting stuck in implementation details.
**Options:**
1. **Slice it smaller** - What's the smallest useful piece of [their task]? Maybe just [specific slice]?
2. **Pick something else** - One of the other suggestions, or a different small task?
3. **Do it anyway** - If you really want to tackle this, we can. Just know it'll take longer.
What would you prefer?
```
Let the user override if they insist—this is a soft guardrail.
---
## Phase 3: Explore Demo
Once a task is selected, briefly demonstrate explore mode:
```
Before we create a change, let me quickly show you **explore mode**—it's how you think through problems before committing to a direction.
```
Spend 1-2 minutes investigating the relevant code:
- Read the file(s) involved
- Draw a quick ASCII diagram if it helps
- Note any considerations
```
## Quick Exploration
[Your brief analysis—what you found, any considerations]
┌─────────────────────────────────────────┐
│ [Optional: ASCII diagram if helpful] │
└─────────────────────────────────────────┘
Explore mode (`/opsx:explore`) is for this kind of thinking—investigating before implementing. You can use it anytime you need to think through a problem.
Now let's create a change to hold our work.
```
**PAUSE** - Wait for user acknowledgment before proceeding.
---
## Phase 4: Create the Change
**EXPLAIN:**
```
## Creating a Change
A "change" in OpenSpec is a container for all the thinking and planning around a piece of work. It lives in `openspec/changes/<name>/` and holds your artifacts—proposal, specs, design, tasks.
Let me create one for our task.
```
**DO:** Create the change with a derived kebab-case name:
```bash
openspec new change "<derived-name>"
```
**SHOW:**
```
Created: `openspec/changes/<name>/`
The folder structure:
```
openspec/changes/<name>/
├── proposal.md ← Why we're doing this (empty, we'll fill it)
├── design.md ← How we'll build it (empty)
├── specs/ ← Detailed requirements (empty)
└── tasks.md ← Implementation checklist (empty)
```
Now let's fill in the first artifact—the proposal.
```
---
## Phase 5: Proposal
**EXPLAIN:**
```
## The Proposal
The proposal captures **why** we're making this change and **what** it involves at a high level. It's the "elevator pitch" for the work.
I'll draft one based on our task.
```
**DO:** Draft the proposal content (don't save yet):
```
Here's a draft proposal:
---
## Why
[1-2 sentences explaining the problem/opportunity]
## What Changes
[Bullet points of what will be different]
## Capabilities
### New Capabilities
- `<capability-name>`: [brief description]
### Modified Capabilities
<!-- If modifying existing behavior -->
## Impact
- `src/path/to/file.ts`: [what changes]
- [other files if applicable]
---
Does this capture the intent? I can adjust before we save it.
```
**PAUSE** - Wait for user approval/feedback.
After approval, save the proposal:
```bash
openspec instructions proposal --change "<name>" --json
```
Then write the content to `openspec/changes/<name>/proposal.md`.
```
Proposal saved. This is your "why" document—you can always come back and refine it as understanding evolves.
Next up: specs.
```
---
## Phase 6: Specs
**EXPLAIN:**
```
## Specs
Specs define **what** we're building in precise, testable terms. They use a requirement/scenario format that makes expected behavior crystal clear.
For a small task like this, we might only need one spec file.
```
**DO:** Create the spec file:
```bash
mkdir -p openspec/changes/<name>/specs/<capability-name>
```
Draft the spec content:
```
Here's the spec:
---
## ADDED Requirements
### Requirement: <Name>
<Description of what the system should do>
#### Scenario: <Scenario name>
- **WHEN** <trigger condition>
- **THEN** <expected outcome>
- **AND** <additional outcome if needed>
---
This format—WHEN/THEN/AND—makes requirements testable. You can literally read them as test cases.
```
Save to `openspec/changes/<name>/specs/<capability>/spec.md`.
---
## Phase 7: Design
**EXPLAIN:**
```
## Design
The design captures **how** we'll build it—technical decisions, tradeoffs, approach.
For small changes, this might be brief. That's fine—not every change needs deep design discussion.
```
**DO:** Draft design.md:
```
Here's the design:
---
## Context
[Brief context about the current state]
## Goals / Non-Goals
**Goals:**
- [What we're trying to achieve]
**Non-Goals:**
- [What's explicitly out of scope]
## Decisions
### Decision 1: [Key decision]
[Explanation of approach and rationale]
---
For a small task, this captures the key decisions without over-engineering.
```
Save to `openspec/changes/<name>/design.md`.
---
## Phase 8: Tasks
**EXPLAIN:**
```
## Tasks
Finally, we break the work into implementation tasks—checkboxes that drive the apply phase.
These should be small, clear, and in logical order.
```
**DO:** Generate tasks based on specs and design:
```
Here are the implementation tasks:
---
## 1. [Category or file]
- [ ] 1.1 [Specific task]
- [ ] 1.2 [Specific task]
## 2. Verify
- [ ] 2.1 [Verification step]
---
Each checkbox becomes a unit of work in the apply phase. Ready to implement?
```
**PAUSE** - Wait for user to confirm they're ready to implement.
Save to `openspec/changes/<name>/tasks.md`.
---
## Phase 9: Apply (Implementation)
**EXPLAIN:**
```
## Implementation
Now we implement each task, checking them off as we go. I'll announce each one and occasionally note how the specs/design informed the approach.
```
**DO:** For each task:
1. Announce: "Working on task N: [description]"
2. Implement the change in the codebase
3. Reference specs/design naturally: "The spec says X, so I'm doing Y"
4. Mark complete in tasks.md: `- [ ]` → `- [x]`
5. Brief status: "✓ Task N complete"
Keep narration light—don't over-explain every line of code.
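Checking a box off is just an in-place substitution. A minimal sketch with hypothetical file content follows; any edit method works.

```shell
# Hypothetical sketch: flip task 1.1 from open to done in a tasks file.
cat > /tmp/tasks.md <<'EOF'
## 1. Implementation
- [ ] 1.1 Add the handler
- [ ] 1.2 Wire the route
EOF
sed -i 's/^- \[ \] 1\.1/- [x] 1.1/' /tmp/tasks.md
grep -c '\[x\]' /tmp/tasks.md   # → 1
```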
After all tasks:
```
## Implementation Complete
All tasks done:
- [x] Task 1
- [x] Task 2
- [x] ...
The change is implemented! One more step—let's archive it.
```
---
## Phase 10: Archive
**EXPLAIN:**
```
## Archiving
When a change is complete, we archive it. This moves it from `openspec/changes/` to `openspec/changes/archive/YYYY-MM-DD-<name>/`.
Archived changes become your project's decision history—you can always find them later to understand why something was built a certain way.
```
**DO:**
```bash
openspec archive "<name>"
```
**SHOW:**
```
Archived to: `openspec/changes/archive/YYYY-MM-DD-<name>/`
The change is now part of your project's history. The code is in your codebase, the decision record is preserved.
```
---
## Phase 11: Recap & Next Steps
```
## Congratulations!
You just completed a full OpenSpec cycle:
1. **Explore** - Thought through the problem
2. **New** - Created a change container
3. **Proposal** - Captured WHY
4. **Specs** - Defined WHAT in detail
5. **Design** - Decided HOW
6. **Tasks** - Broke it into steps
7. **Apply** - Implemented the work
8. **Archive** - Preserved the record
This same rhythm works for any size change—a small fix or a major feature.
---
## Command Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems before/during work |
| `/opsx:new` | Start a new change, step through artifacts |
| `/opsx:ff` | Fast-forward: create all artifacts at once |
| `/opsx:continue` | Continue working on an existing change |
| `/opsx:apply` | Implement tasks from a change |
| `/opsx:verify` | Verify implementation matches artifacts |
| `/opsx:archive` | Archive a completed change |
---
## What's Next?
Try `/opsx:new` or `/opsx:ff` on something you actually want to build. You've got the rhythm now!
```
---
## Graceful Exit Handling
### User wants to stop mid-way
If the user says they need to stop, want to pause, or seem disengaged:
```
No problem! Your change is saved at `openspec/changes/<name>/`.
To pick up where we left off later:
- `/opsx:continue <name>` - Resume artifact creation
- `/opsx:apply <name>` - Jump to implementation (if tasks exist)
The work won't be lost. Come back whenever you're ready.
```
Exit gracefully without pressure.
### User just wants command reference
If the user says they just want to see the commands or skip the tutorial:
```
## OpenSpec Quick Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems (no code changes) |
| `/opsx:new <name>` | Start a new change, step by step |
| `/opsx:ff <name>` | Fast-forward: all artifacts at once |
| `/opsx:continue <name>` | Continue an existing change |
| `/opsx:apply <name>` | Implement tasks |
| `/opsx:verify <name>` | Verify implementation |
| `/opsx:archive <name>` | Archive when done |
Try `/opsx:new` to start your first change, or `/opsx:ff` if you want to move fast.
```
Exit gracefully.
---
## Guardrails
- **Follow the EXPLAIN → DO → SHOW → PAUSE pattern** at key transitions (after explore, after proposal draft, after tasks, after archive)
- **Keep narration light** during implementation—teach without lecturing
- **Don't skip phases** even if the change is small—the goal is teaching the workflow
- **Pause for acknowledgment** at marked points, but don't over-pause
- **Handle exits gracefully**—never pressure the user to continue
- **Use real codebase tasks**—don't simulate or use fake examples
- **Adjust scope gently**—guide toward smaller tasks but respect user choice

View File

@@ -1,15 +1,24 @@
 ---
-name: openspec-ff-change
-description: Fast-forward through OpenSpec artifact creation. Use when the user wants to quickly create all artifacts needed for implementation without stepping through each one individually.
+name: openspec-propose
+description: Propose a new change with all artifacts generated in one step. Use when the user wants to quickly describe what they want to build and get a complete proposal with design, specs, and tasks ready for implementation.
 license: MIT
 compatibility: Requires openspec CLI.
 metadata:
   author: openspec
   version: "1.0"
-  generatedBy: "1.0.2"
+  generatedBy: "1.2.0"
 ---
-Fast-forward through artifact creation - generate everything needed to start implementation in one go.
+Propose a new change - create the change and generate all artifacts in one step.
+I'll create a change with artifacts:
+- proposal.md (what & why)
+- design.md (how)
+- tasks.md (implementation steps)
+When ready to implement, run /opsx:apply
+---
 **Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
@@ -28,7 +37,7 @@ Fast-forward through artifact creation - generate everything needed to start imp
 ```bash
 openspec new change "<name>"
 ```
-This creates a scaffolded change at `openspec/changes/<name>/`.
+This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.
 3. **Get the artifact build order**
 ```bash
@@ -59,7 +68,7 @@ Fast-forward through artifact creation - generate everything needed to start imp
 - Read any completed dependency files for context
 - Create the artifact file using `template` as the structure
 - Apply `context` and `rules` as constraints - but do NOT copy them into the file
 - Show brief progress: "Created <artifact-id>"
 b. **Continue until all `applyRequires` artifacts are complete**
 - After creating each artifact, re-run `openspec status --change "<name>" --json`
@@ -97,5 +106,5 @@ After completing all artifacts, summarize:
 - Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
 - Always read dependency artifacts before creating a new one
 - If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
-- If a change with that name already exists, suggest continuing that change instead
+- If a change with that name already exists, ask if user wants to continue it or create a new one
 - Verify each artifact file exists after writing before proceeding to next

View File

@@ -1,138 +0,0 @@
---
name: openspec-sync-specs
description: Sync delta specs from a change to main specs. Use when the user wants to update main specs with changes from a delta spec, without archiving the change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Sync delta specs from a change to main specs.
This is an **agent-driven** operation - you will read delta specs and directly edit main specs to apply the changes. This allows intelligent merging (e.g., adding a scenario without copying the entire requirement).
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have delta specs (under `specs/` directory).
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Find delta specs**
Look for delta spec files in `openspec/changes/<name>/specs/*/spec.md`.
Each delta spec file contains sections like:
- `## ADDED Requirements` - New requirements to add
- `## MODIFIED Requirements` - Changes to existing requirements
- `## REMOVED Requirements` - Requirements to remove
- `## RENAMED Requirements` - Requirements to rename (FROM:/TO: format)
If no delta specs found, inform user and stop.
3. **For each delta spec, apply changes to main specs**
For each capability with a delta spec at `openspec/changes/<name>/specs/<capability>/spec.md`:
a. **Read the delta spec** to understand the intended changes
b. **Read the main spec** at `openspec/specs/<capability>/spec.md` (may not exist yet)
c. **Apply changes intelligently**:
**ADDED Requirements:**
- If requirement doesn't exist in main spec → add it
- If requirement already exists → update it to match (treat as implicit MODIFIED)
**MODIFIED Requirements:**
- Find the requirement in main spec
- Apply the changes - this can be:
- Adding new scenarios (don't need to copy existing ones)
- Modifying existing scenarios
- Changing the requirement description
- Preserve scenarios/content not mentioned in the delta
**REMOVED Requirements:**
- Remove the entire requirement block from main spec
**RENAMED Requirements:**
- Find the FROM requirement, rename to TO
d. **Create new main spec** if capability doesn't exist yet:
- Create `openspec/specs/<capability>/spec.md`
- Add Purpose section (can be brief, mark as TBD)
- Add Requirements section with the ADDED requirements
4. **Show summary**
After applying all changes, summarize:
- Which capabilities were updated
- What changes were made (requirements added/modified/removed/renamed)
**Delta Spec Format Reference**
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL do something new.
#### Scenario: Basic case
- **WHEN** user does X
- **THEN** system does Y
## MODIFIED Requirements
### Requirement: Existing Feature
#### Scenario: New scenario to add
- **WHEN** user does A
- **THEN** system does B
## REMOVED Requirements
### Requirement: Deprecated Feature
## RENAMED Requirements
- FROM: `### Requirement: Old Name`
- TO: `### Requirement: New Name`
```
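A quick scan of which operation sections a delta declares can be sketched as below; the path is hypothetical and the sketch only illustrates the section markers above.

```shell
# Hypothetical sketch: list the operation sections present in a delta spec.
cat > /tmp/delta.md <<'EOF'
## ADDED Requirements
### Requirement: New Feature
## REMOVED Requirements
### Requirement: Deprecated Feature
EOF
grep -E '^## (ADDED|MODIFIED|REMOVED|RENAMED) Requirements' /tmp/delta.md
```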
**Key Principle: Intelligent Merging**
Unlike programmatic merging, you can apply **partial updates**:
- To add a scenario, just include that scenario under MODIFIED - don't copy existing scenarios
- The delta represents *intent*, not a wholesale replacement
- Use your judgment to merge changes sensibly
**Output On Success**
```
## Specs Synced: <change-name>
Updated main specs:
**<capability-1>**:
- Added requirement: "New Feature"
- Modified requirement: "Existing Feature" (added 1 scenario)
**<capability-2>**:
- Created new spec file
- Added requirement: "Another Feature"
Main specs are now updated. The change remains active - archive when implementation is complete.
```
**Guardrails**
- Read both delta and main specs before making changes
- Preserve existing content not mentioned in delta
- If something is unclear, ask for clarification
- Show what you're changing as you go
- The operation should be idempotent - running it twice should give the same result


@@ -1,168 +0,0 @@
---
name: openspec-verify-change
description: Verify implementation matches change artifacts. Use when the user wants to validate that implementation is complete, correct, and coherent before archiving.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Verify that an implementation matches the change artifacts (specs, tasks, design).
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have implementation tasks (tasks artifact exists).
Include the schema used for each change if available.
Mark changes with incomplete tasks as "(In Progress)".
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifacts exist for this change
3. **Get the change directory and load artifacts**
```bash
openspec instructions apply --change "<name>" --json
```
This returns the change directory and context files. Read all available artifacts from `contextFiles`.
4. **Initialize verification report structure**
Create a report structure with three dimensions:
- **Completeness**: Track tasks and spec coverage
- **Correctness**: Track requirement implementation and scenario coverage
- **Coherence**: Track design adherence and pattern consistency
Each dimension can have CRITICAL, WARNING, or SUGGESTION issues.
5. **Verify Completeness**
**Task Completion**:
- If tasks.md exists in contextFiles, read it
- Parse checkboxes: `- [ ]` (incomplete) vs `- [x]` (complete)
- Count complete vs total tasks
- If incomplete tasks exist:
- Add CRITICAL issue for each incomplete task
- Recommendation: "Complete task: <description>" or "Mark as done if already implemented"
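The checkbox accounting above can be sketched as follows, assuming the conventional markdown task-list syntax (the function name is illustrative, not part of the skill):

```go
package main

import (
	"fmt"
	"strings"
)

// countTasks tallies complete vs total checkbox tasks in tasks.md content.
// It only counts lines that begin (after indentation) with a checkbox marker.
func countTasks(md string) (done, total int) {
	for _, line := range strings.Split(md, "\n") {
		t := strings.TrimSpace(line)
		switch {
		case strings.HasPrefix(t, "- [x]"), strings.HasPrefix(t, "- [X]"):
			done++
			total++
		case strings.HasPrefix(t, "- [ ]"):
			total++
		}
	}
	return done, total
}

func main() {
	md := "- [x] scaffold change\n- [x] write delta spec\n- [ ] implement handler"
	done, total := countTasks(md)
	fmt.Printf("%d/%d tasks complete\n", done, total) // 2/3 tasks complete
}
```

Each remaining `- [ ]` line then becomes one CRITICAL issue with its description carried into the recommendation.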
**Spec Coverage**:
- If delta specs exist in `openspec/changes/<name>/specs/`:
- Extract all requirements (marked with "### Requirement:")
- For each requirement:
- Search codebase for keywords related to the requirement
- Assess if implementation likely exists
- If requirements appear unimplemented:
- Add CRITICAL issue: "Requirement not found: <requirement name>"
- Recommendation: "Implement requirement X: <description>"
6. **Verify Correctness**
**Requirement Implementation Mapping**:
- For each requirement from delta specs:
- Search codebase for implementation evidence
- If found, note file paths and line ranges
- Assess if implementation matches requirement intent
- If divergence detected:
- Add WARNING: "Implementation may diverge from spec: <details>"
- Recommendation: "Review <file>:<lines> against requirement X"
**Scenario Coverage**:
- For each scenario in delta specs (marked with "#### Scenario:"):
- Check if conditions are handled in code
- Check if tests exist covering the scenario
- If scenario appears uncovered:
- Add WARNING: "Scenario not covered: <scenario name>"
- Recommendation: "Add test or implementation for scenario: <description>"
7. **Verify Coherence**
**Design Adherence**:
- If design.md exists in contextFiles:
- Extract key decisions (look for sections like "Decision:", "Approach:", "Architecture:")
- Verify implementation follows those decisions
- If contradiction detected:
- Add WARNING: "Design decision not followed: <decision>"
- Recommendation: "Update implementation or revise design.md to match reality"
- If no design.md: Skip design adherence check, note "No design.md to verify against"
**Code Pattern Consistency**:
- Review new code for consistency with project patterns
- Check file naming, directory structure, coding style
- If significant deviations found:
- Add SUGGESTION: "Code pattern deviation: <details>"
- Recommendation: "Consider following project pattern: <example>"
8. **Generate Verification Report**
**Summary Scorecard**:
```
## Verification Report: <change-name>
### Summary
| Dimension | Status |
|--------------|------------------|
| Completeness | X/Y tasks, N reqs|
| Correctness | M/N reqs covered |
| Coherence | Followed/Issues |
```
**Issues by Priority**:
1. **CRITICAL** (Must fix before archive):
- Incomplete tasks
- Missing requirement implementations
- Each with specific, actionable recommendation
2. **WARNING** (Should fix):
- Spec/design divergences
- Missing scenario coverage
- Each with specific recommendation
3. **SUGGESTION** (Nice to fix):
- Pattern inconsistencies
- Minor improvements
- Each with specific recommendation
**Final Assessment**:
- If CRITICAL issues: "X critical issue(s) found. Fix before archiving."
- If only warnings: "No critical issues. Y warning(s) to consider. Ready for archive (with noted improvements)."
- If all clear: "All checks passed. Ready for archive."
**Verification Heuristics**
- **Completeness**: Focus on objective checklist items (checkboxes, requirements list)
- **Correctness**: Use keyword search, file path analysis, reasonable inference - don't require perfect certainty
- **Coherence**: Look for glaring inconsistencies, don't nitpick style
- **False Positives**: When uncertain, prefer SUGGESTION over WARNING, WARNING over CRITICAL
- **Actionability**: Every issue must have a specific recommendation with file/line references where applicable
**Graceful Degradation**
- If only tasks.md exists: verify task completion only, skip spec/design checks
- If tasks + specs exist: verify completeness and correctness, skip design
- If full artifacts: verify all three dimensions
- Always note which checks were skipped and why
**Output Format**
Use clear markdown with:
- Table for summary scorecard
- Grouped lists for issues (CRITICAL/WARNING/SUGGESTION)
- Code references in format: `file.ts:123`
- Specific, actionable recommendations
- No vague suggestions like "consider reviewing"


@@ -0,0 +1,260 @@
---
name: systematic-debugging
description: Must be used for any bug, unexpected behavior, or error. Enforces a root-cause-analysis process before any fix may be proposed. Applies to every technical problem - API errors, data anomalies, business-logic errors, performance issues, and so on.
---
# Systematic Debugging Methodology
## The Iron Rule
```
Until the root cause is found, proposing any fix is forbidden.
```
Understand why it broke before you change anything. Guessing is not debugging; verifying hypotheses is.
---
## When to Use This
**Use this process for every technical problem:**
- API errors (4xx / 5xx)
- Business-data anomalies (wrong amounts, bad state transitions)
- Performance problems (slow endpoints, slow database queries)
- Failed async jobs (Asynq tasks erroring or stuck)
- Build failures, startup failures
**Especially in these situations:**
- Time pressure (the more urgent, the less you can afford to guess)
- "A really simple problem" (simple problems have root causes too)
- You already tried one fix and it didn't work
- You don't fully understand why it's failing
---
## The Four-Phase Process
Complete every phase in order. No skipping.
### Phase 1: Root-Cause Investigation
**This is the most important phase and should take about 60% of total debugging time. Do not enter Phase 2 until it is complete.**
#### 1. Read the error message carefully
- Read the whole stack trace; do not skim it
- Note line numbers, file paths, error codes
- The answer is often right there in the error message
- Check the surrounding context in `logs/app.log` and `logs/access.log`
#### 2. Reproduce reliably
- Can you trigger it consistently? What are the exact request parameters?
- Reproduce with curl or Postman and record the full request and response
- Can't reproduce → gather more data (logs, Redis state, database records), **do not guess**
#### 3. Check recent changes
- `git diff` / `git log --oneline -10` to see what changed recently
- Any new dependencies? Config changes? SQL changes?
- Compare behavior before and after the change
#### 4. Diagnose layer by layer (for this project's architecture)
This project has a clearly layered architecture, so the problem always surfaces at some layer boundary:
```
Request → Fiber Middleware → Handler → Service → Store → PostgreSQL/Redis
                ↑               ↑          ↑        ↑           ↑
         auth / rate limit  param parsing  logic  SQL/cache  the data itself
```
**Confirm the data is correct at each layer boundary:**
```go
// Handler layer: are the incoming request params correct?
logger.Info("handler received request",
    zap.Any("params", req),
    zap.String("request_id", requestID),
)
// Service layer: is the data handed to the business logic correct?
logger.Info("service started processing",
    zap.Uint("user_id", userID),
    zap.Any("input", input),
)
// Store layer: is the data the SQL queries/writes correct?
// Enable GORM debug mode to see the actual SQL
db.Debug().Where(...).Find(&result)
// Redis layer: is the cached data correct?
// Inspect key values directly with redis-cli:
// GET auth:token:{token}
// GET sim:status:{iccid}
```
**Run once → read the logs → find the layer where things break → then dig into that layer.**
#### 5. Trace the data flow
If the error is buried deep in the call chain:
- Where did the bad data come from?
- Who called this function, and with what arguments?
- Keep walking up the chain until you find where the data first went bad
- **Fix the source, not the symptom**
---
### Phase 2: Pattern Analysis
**Find a working reference and compare against it.**
#### 1. Find a usable reference
Is there similar code in the project that works correctly?
| If the problem is in... | Look for a reference in... |
|------------------------|----------------------------|
| Handler param parsing | The same pattern in other handlers |
| Service business logic | Other methods in the same module |
| Store SQL queries | Similar queries in the same Store file |
| Redis operations | Key definitions in `pkg/constants/redis.go` |
| Async tasks | Other task handlers in `internal/task/` |
| GORM callbacks | Callback implementations in `pkg/database/` |
#### 2. Compare line by line
Read the reference code in full, without skimming. List every difference.
#### 3. Never assume "this one doesn't matter"
Small differences are often the root cause:
- A misspelled `gorm:"column:xxx"` field tag
- `errors.New()` using the wrong error code
- Redis key function arguments passed in the wrong order
- UserID missing from the Context (middleware not wired up)
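As a sketch of how small such a root cause can be, here is a hypothetical model whose `gorm` column tag is the single string to compare against the real database column (model and column names are invented for illustration):

```go
package main

import (
	"fmt"
	"reflect"
)

// Hypothetical model: the tag declares asset_wallet_id, but if a migration
// only renamed the table and left the column as card_wallet_id, every
// INSERT/SELECT on this field fails even though the code "looks right".
type RechargeRecord struct {
	AssetWalletID uint `gorm:"column:asset_wallet_id"`
}

// columnTag extracts the gorm tag of a field via reflection; this is the
// declared column name you diff against the actual schema.
func columnTag(s interface{}, field string) string {
	f, _ := reflect.TypeOf(s).FieldByName(field)
	return f.Tag.Get("gorm")
}

func main() {
	fmt.Println(columnTag(RechargeRecord{}, "AssetWalletID")) // column:asset_wallet_id
}
```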
---
### Phase 3: Hypothesize and Verify
**Scientific method: test one hypothesis at a time.**
#### 1. Form a single hypothesis
Write it down explicitly:
> "I believe the root cause is X, because Y. I will verify it by Z."
#### 2. Verify with the smallest possible change
- Change one place only
- Test one variable at a time
- Never fix several spots at once
#### 3. Evaluate the result
- Hypothesis confirmed → move to Phase 4
- Hypothesis refuted → return to Phase 1 and re-analyze with the new information
- **Never stack another fix on top of a failed fix**
#### 4. Three failures → stop
If three consecutive hypotheses have failed:
**This is not a bug; it is an architecture problem.**
- Stop all fix attempts
- Write up what you know
- Explain the situation to the user and discuss whether a refactor is needed
- Do not attempt a fourth try
---
### Phase 4: Implement the Fix
**Root cause confirmed. Now fix it once, properly.**
#### 1. Fix the root cause, not the symptom
```
❌ Symptom fix: add an if in the Handler to filter out the bad data
✅ Root-cause fix: fix the Service-layer logic that produces the bad data
```
#### 2. Change one place only
- No "while I'm here" optimizations
- No refactoring mixed into the bug fix
- Fix the bug, then stop
#### 3. Verify the fix
- `go build ./...` compiles
- `lsp_diagnostics` reports no new errors
- Re-run the original reproducing request and confirm it is fixed
- Check the data state in the database with the PostgreSQL MCP tool
#### 4. Clean up diagnostic code
- Remove the temporary diagnostic logs added in Phase 1 (unless they deserve to stay)
- Make sure no `db.Debug()` is left in the code
---
## Quick Lookup: Common Debugging Scenarios in This Project
| Scenario | Check first |
|----------|-------------|
| API returns 401 | The request's token in `logs/access.log` → does `auth:token:{token}` exist in Redis |
| API returns 403 | What user type is it → are the GORM callback auto-filter conditions right → arguments to `middleware.CanManageShop()` |
| Data not found | Did GORM data-permission filtering apply → are `shop_id` / `enterprise_id` correct → is `SkipDataPermission` needed |
| Wrong amount/balance | Optimistic-lock version field → is `RowsAffected` 0 → lock contention under concurrency |
| Bad state transition | Conditional update with `WHERE status = expected` → does the state machine miss a path |
| Async task not running | Asynq dashboard → leftover `RedisTaskLockKey` → worker logs |
| Async task runs twice | TTL of `RedisTaskLockKey` → task idempotency checks |
| Commission miscalculated | Commission type (margin / one-time) → package-level commission rate → per-device duplicate-commission guard |
| Package activation broken | Card status → real-name status → main-package queueing logic → add-on binding relationships |
| Redis cache inconsistent | Key TTL → when the cache is refreshed → any manual `Del` purge |
| WeChat Pay callback fails | Signature verification → idempotency handling → is the callback URL reachable |
| Slow GORM query | `db.Debug()` to see the actual SQL → N+1 queries → missing index |
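For the wrong-amount/balance row, the pattern under test is an optimistic-lock update: the UPDATE carries the expected version, and `RowsAffected == 0` means a concurrent writer won. A minimal in-memory sketch of that compare-and-set semantics (the real code would express it in SQL through GORM):

```go
package main

import "fmt"

// Wallet models a row guarded by an optimistic-lock version column.
type Wallet struct {
	Balance int64
	Version int
}

// deduct mimics `UPDATE ... SET balance = balance - amt, version = version + 1
// WHERE id = ? AND version = ?`: it succeeds only if the caller still holds
// the current version, and returns false (RowsAffected == 0) otherwise.
func deduct(w *Wallet, amt int64, expectedVersion int) bool {
	if w.Version != expectedVersion {
		return false // another writer updated the row first; re-read and retry
	}
	w.Balance -= amt
	w.Version++
	return true
}

func main() {
	w := &Wallet{Balance: 100, Version: 1}
	fmt.Println(deduct(w, 30, 1)) // true: version matched
	fmt.Println(deduct(w, 30, 1)) // false: stale version, RowsAffected would be 0
	fmt.Println(w.Balance)        // 70: only the first deduction applied
}
```

When the false branch fires in production, the check is the retry loop and the freshness of the read, not the arithmetic.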
---
## Red-Line Rules
If you catch yourself thinking any of the following, **stop immediately and go back to Phase 1**:
| Thought | Why it's wrong |
|---------|----------------|
| "Quick fix now, investigate later" | A quick fix is a guess. Guessing wastes time. |
| "Let me try changing this and see" | Verify one hypothesis at a time; don't change things at random. |
| "It's probably X, I'll just change it" | "Probably" is not a root cause. Verify first, then change. |
| "This one's simple, no need for the process" | The process costs 5 minutes on a simple problem. Skipping it can cost 2 hours. |
| "I don't fully understand it, but this should work" | Not understanding = no root cause. Back to Phase 1. |
| "One more try" (after 2 failures) | 3 failures = architecture problem. Stop and discuss. |
| "Changing these few places together should fix it" | Multiple changes = no way to tell which was the root cause. One change at a time. |
---
## Common Excuses vs. Reality
| Excuse | Reality |
|--------|---------|
| "The problem is simple, I don't need the process" | Simple problems have root causes too. The process takes 5 minutes on a simple problem. |
| "Too urgent, no time to analyze" | Systematic debugging is 3-5x faster than flailing. The more urgent it is, the more you need the process. |
| "I'll change it first to verify" | That's guessing, not verifying. Confirm the root cause before changing. |
| "I can see the problem, I'll just fix it" | Seeing a symptom ≠ understanding the root cause. Symptom fixes are technical debt. |
| "I changed a few places and it works now" | You don't know which change fixed it, so it will break again. |
---
## Quick Reference
| Phase | Core actions | Done when |
|-------|--------------|-----------|
| **1. Root-cause investigation** | Read error logs, reproduce, check recent changes, diagnose layer by layer, trace data flow | You can state "Y happens because X" |
| **2. Pattern analysis** | Find reference code, compare line by line, list differences | You know what correct looks like |
| **3. Hypothesize & verify** | Write down the hypothesis, minimal change, single-variable test | Hypothesis confirmed or refuted |
| **4. Implement the fix** | Fix the root cause, compile check, re-run the request, clean up diagnostics | Bug gone, nothing new broken |


@@ -6,7 +6,7 @@ compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
-generatedBy: "1.0.2"
+generatedBy: "1.2.0"
---
Implement tasks from an OpenSpec change.


@@ -6,7 +6,7 @@ compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
-generatedBy: "1.0.2"
+generatedBy: "1.2.0"
---
Archive a completed change in the experimental workflow.
@@ -63,7 +63,7 @@ Archive a completed change in the experimental workflow.
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
-If user chooses sync, execute /opsx:sync logic (use the openspec-sync-specs skill). Proceed to archive regardless of choice.
+If user chooses sync, use Task tool (subagent_type: "general-purpose", prompt: "Use Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Proceed to archive regardless of choice.
5. **Perform the archive**


@@ -1,246 +0,0 @@
---
name: openspec-bulk-archive-change
description: Archive multiple completed changes at once. Use when archiving several parallel changes.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Archive multiple completed changes in a single operation.
This skill allows you to batch-archive changes, handling spec conflicts intelligently by checking the codebase to determine what's actually implemented.
**Input**: None required (prompts for selection)
**Steps**
1. **Get active changes**
Run `openspec list --json` to get all active changes.
If no active changes exist, inform user and stop.
2. **Prompt for change selection**
Use **AskUserQuestion tool** with multi-select to let user choose changes:
- Show each change with its schema
- Include an option for "All changes"
- Allow any number of selections (1+ works, 2+ is the typical use case)
**IMPORTANT**: Do NOT auto-select. Always let the user choose.
3. **Batch validation - gather status for all selected changes**
For each selected change, collect:
a. **Artifact status** - Run `openspec status --change "<name>" --json`
- Parse `schemaName` and `artifacts` list
- Note which artifacts are `done` vs other states
b. **Task completion** - Read `openspec/changes/<name>/tasks.md`
- Count `- [ ]` (incomplete) vs `- [x]` (complete)
- If no tasks file exists, note as "No tasks"
c. **Delta specs** - Check `openspec/changes/<name>/specs/` directory
- List which capability specs exist
- For each, extract requirement names (lines matching `### Requirement: <name>`)
4. **Detect spec conflicts**
Build a map of `capability -> [changes that touch it]`:
```
auth -> [change-a, change-b] <- CONFLICT (2+ changes)
api -> [change-c] <- OK (only 1 change)
```
A conflict exists when 2+ selected changes have delta specs for the same capability.
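The grouping above can be sketched as follows (change and capability names are hypothetical):

```go
package main

import (
	"fmt"
	"sort"
)

// conflicts groups selected changes by the capabilities their delta specs
// touch, and reports every capability claimed by 2+ changes.
func conflicts(deltas map[string][]string) map[string][]string {
	byCap := map[string][]string{}
	for change, caps := range deltas {
		for _, c := range caps {
			byCap[c] = append(byCap[c], change)
		}
	}
	out := map[string][]string{}
	for c, changes := range byCap {
		if len(changes) >= 2 {
			sort.Strings(changes) // deterministic order for display
			out[c] = changes
		}
	}
	return out
}

func main() {
	deltas := map[string][]string{
		"add-oauth": {"auth"},
		"add-jwt":   {"auth"},
		"add-rest":  {"api"},
	}
	fmt.Println(conflicts(deltas)) // map[auth:[add-jwt add-oauth]]
}
```

Capabilities touched by a single change sync normally; only the 2+ entries need the agentic resolution below.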
5. **Resolve conflicts agentically**
**For each conflict**, investigate the codebase:
a. **Read the delta specs** from each conflicting change to understand what each claims to add/modify
b. **Search the codebase** for implementation evidence:
- Look for code implementing requirements from each delta spec
- Check for related files, functions, or tests
c. **Determine resolution**:
- If only one change is actually implemented -> sync that one's specs
- If both implemented -> apply in chronological order (older first, newer overwrites)
- If neither implemented -> skip spec sync, warn user
d. **Record resolution** for each conflict:
- Which change's specs to apply
- In what order (if both)
- Rationale (what was found in codebase)
6. **Show consolidated status table**
Display a table summarizing all changes:
```
| Change | Artifacts | Tasks | Specs | Conflicts | Status |
|---------------------|-----------|-------|---------|-----------|--------|
| schema-management | Done | 5/5 | 2 delta | None | Ready |
| project-config | Done | 3/3 | 1 delta | None | Ready |
| add-oauth | Done | 4/4 | 1 delta | auth (!) | Ready* |
| add-verify-skill | 1 left | 2/5 | None | None | Warn |
```
For conflicts, show the resolution:
```
* Conflict resolution:
- auth spec: Will apply add-oauth then add-jwt (both implemented, chronological order)
```
For incomplete changes, show warnings:
```
Warnings:
- add-verify-skill: 1 incomplete artifact, 3 incomplete tasks
```
7. **Confirm batch operation**
Use **AskUserQuestion tool** with a single confirmation:
- "Archive N changes?" with options based on status
- Options might include:
- "Archive all N changes"
- "Archive only N ready changes (skip incomplete)"
- "Cancel"
If there are incomplete changes, make clear they'll be archived with warnings.
8. **Execute archive for each confirmed change**
Process changes in the determined order (respecting conflict resolution):
a. **Sync specs** if delta specs exist:
- Use the openspec-sync-specs approach (agent-driven intelligent merge)
- For conflicts, apply in resolved order
- Track if sync was done
b. **Perform the archive**:
```bash
mkdir -p openspec/changes/archive
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
c. **Track outcome** for each change:
- Success: archived successfully
- Failed: error during archive (record error)
- Skipped: user chose not to archive (if applicable)
9. **Display summary**
Show final results:
```
## Bulk Archive Complete
Archived 3 changes:
- schema-management-cli -> archive/2026-01-19-schema-management-cli/
- project-config -> archive/2026-01-19-project-config/
- add-oauth -> archive/2026-01-19-add-oauth/
Skipped 1 change:
- add-verify-skill (user chose not to archive incomplete)
Spec sync summary:
- 4 delta specs synced to main specs
- 1 conflict resolved (auth: applied both in chronological order)
```
If any failures:
```
Failed 1 change:
- some-change: Archive directory already exists
```
**Conflict Resolution Examples**
Example 1: Only one implemented
```
Conflict: specs/auth/spec.md touched by [add-oauth, add-jwt]
Checking add-oauth:
- Delta adds "OAuth Provider Integration" requirement
- Searching codebase... found src/auth/oauth.ts implementing OAuth flow
Checking add-jwt:
- Delta adds "JWT Token Handling" requirement
- Searching codebase... no JWT implementation found
Resolution: Only add-oauth is implemented. Will sync add-oauth specs only.
```
Example 2: Both implemented
```
Conflict: specs/api/spec.md touched by [add-rest-api, add-graphql]
Checking add-rest-api (created 2026-01-10):
- Delta adds "REST Endpoints" requirement
- Searching codebase... found src/api/rest.ts
Checking add-graphql (created 2026-01-15):
- Delta adds "GraphQL Schema" requirement
- Searching codebase... found src/api/graphql.ts
Resolution: Both implemented. Will apply add-rest-api specs first,
then add-graphql specs (chronological order, newer takes precedence).
```
**Output On Success**
```
## Bulk Archive Complete
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
- <change-2> -> archive/YYYY-MM-DD-<change-2>/
Spec sync summary:
- N delta specs synced to main specs
- No conflicts (or: M conflicts resolved)
```
**Output On Partial Success**
```
## Bulk Archive Complete (partial)
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
Skipped M changes:
- <change-2> (user chose not to archive incomplete)
Failed K changes:
- <change-3>: Archive directory already exists
```
**Output When No Changes**
```
## No Changes to Archive
No active changes found. Use `/opsx:new` to create a new change.
```
**Guardrails**
- Allow any number of changes (1+ is fine, 2+ is the typical use case)
- Always prompt for selection, never auto-select
- Detect spec conflicts early and resolve by checking codebase
- When both changes are implemented, apply specs in chronological order
- Skip spec sync only when implementation is missing (warn user)
- Show clear per-change status before confirming
- Use single confirmation for entire batch
- Track and report all outcomes (success/skip/fail)
- Preserve .openspec.yaml when moving to archive
- Archive directory target uses current date: YYYY-MM-DD-<name>
- If archive target exists, fail that change but continue with others


@@ -1,118 +0,0 @@
---
name: openspec-continue-change
description: Continue working on an OpenSpec change by creating the next artifact. Use when the user wants to progress their change, create the next artifact, or continue their workflow.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Continue working on a change by creating the next artifact.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes sorted by most recently modified. Then use the **AskUserQuestion tool** to let the user select which change to work on.
Present the top 3-4 most recently modified changes as options, showing:
- Change name
- Schema (from `schema` field if present, otherwise "spec-driven")
- Status (e.g., "0/5 tasks", "complete", "no tasks")
- How recently it was modified (from `lastModified` field)
Mark the most recently modified change as "(Recommended)" since it's likely what the user wants to continue.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check current status**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand current state. The response includes:
- `schemaName`: The workflow schema being used (e.g., "spec-driven")
- `artifacts`: Array of artifacts with their status ("done", "ready", "blocked")
- `isComplete`: Boolean indicating if all artifacts are complete
3. **Act based on status**:
---
**If all artifacts are complete (`isComplete: true`)**:
- Congratulate the user
- Show final status including the schema used
- Suggest: "All artifacts created! You can now implement this change or archive it."
- STOP
---
**If artifacts are ready to create** (status shows artifacts with `status: "ready"`):
- Pick the FIRST artifact with `status: "ready"` from the status output
- Get its instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- Parse the JSON. The key fields are:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- **Create the artifact file**:
- Read any completed dependency files for context
- Use `template` as the structure - fill in its sections
- Apply `context` and `rules` as constraints when writing - but do NOT copy them into the file
- Write to the output path specified in instructions
- Show what was created and what's now unlocked
- STOP after creating ONE artifact
---
**If no artifacts are ready (all blocked)**:
- This shouldn't happen with a valid schema
- Show status and suggest checking for issues
4. **After creating an artifact, show progress**
```bash
openspec status --change "<name>"
```
**Output**
After each invocation, show:
- Which artifact was created
- Schema workflow being used
- Current progress (N/M complete)
- What artifacts are now unlocked
- Prompt: "Want to continue? Just ask me to continue or tell me what to do next."
**Artifact Creation Guidelines**
The artifact types and their purpose depend on the schema. Use the `instruction` field from the instructions output to understand what to create.
Common artifact patterns:
**spec-driven schema** (proposal → specs → design → tasks):
- **proposal.md**: Ask user about the change if not clear. Fill in Why, What Changes, Capabilities, Impact.
- The Capabilities section is critical - each capability listed will need a spec file.
- **specs/<capability>/spec.md**: Create one spec per capability listed in the proposal's Capabilities section (use the capability name, not the change name).
- **design.md**: Document technical decisions, architecture, and implementation approach.
- **tasks.md**: Break down implementation into checkboxed tasks.
For other schemas, follow the `instruction` field from the CLI output.
**Guardrails**
- Create ONE artifact per invocation
- Always read dependency artifacts before creating a new one
- Never skip artifacts or create out of order
- If context is unclear, ask the user before creating
- Verify the artifact file exists after writing before marking progress
- Use the schema's artifact sequence, don't assume specific artifact names
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output


@@ -6,12 +6,12 @@ compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
-generatedBy: "1.0.2"
+generatedBy: "1.2.0"
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
-**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first (e.g., start a change with `/opsx:new` or `/opsx:ff`). You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
+**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
@@ -95,8 +95,7 @@ This tells you:
Think freely. When insights crystallize, you might offer:
-- "This feels solid enough to start a change. Want me to create one?"
-  → Can transition to `/opsx:new` or `/opsx:ff`
+- "This feels solid enough to start a change. Want me to create a proposal?"
- Or keep exploring - no pressure to formalize
### When a change exists
@@ -252,7 +251,7 @@ You: That changes everything.
There's no required ending. Discovery might:
-- **Flow into action**: "Ready to start? /opsx:new or /opsx:ff"
+- **Flow into a proposal**: "Ready to start? I can create a change proposal."
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
@@ -269,8 +268,7 @@ When it feels like things are crystallizing, you might summarize:
**Open questions**: [if any remain]
**Next steps** (if ready):
-- Create a change: /opsx:new <name>
-- Fast-forward to tasks: /opsx:ff <name>
+- Create a change proposal
- Keep exploring: just keep talking
```


@@ -1,74 +0,0 @@
---
name: openspec-new-change
description: Start a new OpenSpec change using the experimental artifact workflow. Use when the user wants to create a new feature, fix, or modification with a structured step-by-step approach.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Start a new change using the experimental artifact-driven approach.
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Determine the workflow schema**
Use the default schema (omit `--schema`) unless the user explicitly requests a different workflow.
**Use a different schema only if the user mentions:**
- A specific schema name → use `--schema <name>`
- "show workflows" or "what workflows" → run `openspec schemas --json` and let them choose
**Otherwise**: Omit `--schema` to use the default.
3. **Create the change directory**
```bash
openspec new change "<name>"
```
Add `--schema <name>` only if the user requested a specific workflow.
This creates a scaffolded change at `openspec/changes/<name>/` with the selected schema.
4. **Show the artifact status**
```bash
openspec status --change "<name>"
```
This shows which artifacts need to be created and which are ready (dependencies satisfied).
5. **Get instructions for the first artifact**
The first artifact depends on the schema (e.g., `proposal` for spec-driven).
Check the status output to find the first artifact with status "ready".
```bash
openspec instructions <first-artifact-id> --change "<name>"
```
This outputs the template and context for creating the first artifact.
6. **STOP and wait for user direction**
**Output**
After completing the steps, summarize:
- Change name and location
- Schema/workflow being used and its artifact sequence
- Current status (0/N artifacts complete)
- The template for the first artifact
- Prompt: "Ready to create the first artifact? Just describe what this change is about and I'll draft it, or ask me to continue."
**Guardrails**
- Do NOT create any artifacts yet - just show the instructions
- Do NOT advance beyond showing the first artifact template
- If the name is invalid (not kebab-case), ask for a valid name
- If a change with that name already exists, suggest continuing that change instead
- Pass --schema if using a non-default workflow


@@ -1,529 +0,0 @@
---
name: openspec-onboard
description: Guided onboarding for OpenSpec - walk through a complete workflow cycle with narration and real codebase work.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Guide the user through their first complete OpenSpec workflow cycle. This is a teaching experience—you'll do real work in their codebase while explaining each step.
---
## Preflight
Before starting, check if OpenSpec is initialized:
```bash
openspec status --json 2>&1 || echo "NOT_INITIALIZED"
```
**If not initialized:**
> OpenSpec isn't set up in this project yet. Run `openspec init` first, then come back to `/opsx:onboard`.
Stop here if not initialized.
---
## Phase 1: Welcome
Display:
```
## Welcome to OpenSpec!
I'll walk you through a complete change cycle—from idea to implementation—using a real task in your codebase. Along the way, you'll learn the workflow by doing it.
**What we'll do:**
1. Pick a small, real task in your codebase
2. Explore the problem briefly
3. Create a change (the container for our work)
4. Build the artifacts: proposal → specs → design → tasks
5. Implement the tasks
6. Archive the completed change
**Time:** ~15-20 minutes
Let's start by finding something to work on.
```
---
## Phase 2: Task Selection
### Codebase Analysis
Scan the codebase for small improvement opportunities. Look for:
1. **TODO/FIXME comments** - Search for `TODO`, `FIXME`, `HACK`, `XXX` in code files
2. **Missing error handling** - `catch` blocks that swallow errors, risky operations without try-catch
3. **Functions without tests** - Cross-reference `src/` with test directories
4. **Type issues** - `any` types in TypeScript files (`: any`, `as any`)
5. **Debug artifacts** - `console.log`, `console.debug`, `debugger` statements in non-debug code
6. **Missing validation** - User input handlers without validation
Also check recent git activity:
```bash
git log --oneline -10 2>/dev/null || echo "No git history"
```
### Present Suggestions
From your analysis, present 3-4 specific suggestions:
```
## Task Suggestions
Based on scanning your codebase, here are some good starter tasks:
**1. [Most promising task]**
Location: `src/path/to/file.ts:42`
Scope: ~1-2 files, ~20-30 lines
Why it's good: [brief reason]
**2. [Second task]**
Location: `src/another/file.ts`
Scope: ~1 file, ~15 lines
Why it's good: [brief reason]
**3. [Third task]**
Location: [location]
Scope: [estimate]
Why it's good: [brief reason]
**4. Something else?**
Tell me what you'd like to work on.
Which task interests you? (Pick a number or describe your own)
```
**If nothing found:** Fall back to asking what the user wants to build:
> I didn't find obvious quick wins in your codebase. What's something small you've been meaning to add or fix?
### Scope Guardrail
If the user picks or describes something too large (major feature, multi-day work):
```
That's a valuable task, but it's probably larger than ideal for your first OpenSpec run-through.
For learning the workflow, smaller is better—it lets you see the full cycle without getting stuck in implementation details.
**Options:**
1. **Slice it smaller** - What's the smallest useful piece of [their task]? Maybe just [specific slice]?
2. **Pick something else** - One of the other suggestions, or a different small task?
3. **Do it anyway** - If you really want to tackle this, we can. Just know it'll take longer.
What would you prefer?
```
Let the user override if they insist—this is a soft guardrail.
---
## Phase 3: Explore Demo
Once a task is selected, briefly demonstrate explore mode:
```
Before we create a change, let me quickly show you **explore mode**—it's how you think through problems before committing to a direction.
```
Spend 1-2 minutes investigating the relevant code:
- Read the file(s) involved
- Draw a quick ASCII diagram if it helps
- Note any considerations
```
## Quick Exploration
[Your brief analysis—what you found, any considerations]
┌─────────────────────────────────────────┐
│ [Optional: ASCII diagram if helpful] │
└─────────────────────────────────────────┘
Explore mode (`/opsx:explore`) is for this kind of thinking—investigating before implementing. You can use it anytime you need to think through a problem.
Now let's create a change to hold our work.
```
**PAUSE** - Wait for user acknowledgment before proceeding.
---
## Phase 4: Create the Change
**EXPLAIN:**
```
## Creating a Change
A "change" in OpenSpec is a container for all the thinking and planning around a piece of work. It lives in `openspec/changes/<name>/` and holds your artifacts—proposal, specs, design, tasks.
Let me create one for our task.
```
**DO:** Create the change with a derived kebab-case name:
```bash
openspec new change "<derived-name>"
```
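The kebab-case derivation can be done mechanically; this `slug` helper is an illustrative sketch, not part of the openspec CLI:

```shell
# Sketch: derive a kebab-case change name from a free-form description.
# Lowercase, squeeze every non-alphanumeric run to '-', trim the ends.
slug() {
  echo "$1" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/^-//;s/-$//'
}
slug "Add retry logic to the API client"   # add-retry-logic-to-the-api-client
```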
**SHOW:**
```
Created: `openspec/changes/<name>/`
The folder structure:
```
openspec/changes/<name>/
├── proposal.md ← Why we're doing this (empty, we'll fill it)
├── design.md ← How we'll build it (empty)
├── specs/ ← Detailed requirements (empty)
└── tasks.md ← Implementation checklist (empty)
```
Now let's fill in the first artifact—the proposal.
```
---
## Phase 5: Proposal
**EXPLAIN:**
```
## The Proposal
The proposal captures **why** we're making this change and **what** it involves at a high level. It's the "elevator pitch" for the work.
I'll draft one based on our task.
```
**DO:** Draft the proposal content (don't save yet):
```
Here's a draft proposal:
---
## Why
[1-2 sentences explaining the problem/opportunity]
## What Changes
[Bullet points of what will be different]
## Capabilities
### New Capabilities
- `<capability-name>`: [brief description]
### Modified Capabilities
<!-- If modifying existing behavior -->
## Impact
- `src/path/to/file.ts`: [what changes]
- [other files if applicable]
---
Does this capture the intent? I can adjust before we save it.
```
**PAUSE** - Wait for user approval/feedback.
After approval, save the proposal:
```bash
openspec instructions proposal --change "<name>" --json
```
Then write the content to `openspec/changes/<name>/proposal.md`.
```
Proposal saved. This is your "why" document—you can always come back and refine it as understanding evolves.
Next up: specs.
```
---
## Phase 6: Specs
**EXPLAIN:**
```
## Specs
Specs define **what** we're building in precise, testable terms. They use a requirement/scenario format that makes expected behavior crystal clear.
For a small task like this, we might only need one spec file.
```
**DO:** Create the spec file:
```bash
mkdir -p openspec/changes/<name>/specs/<capability-name>
```
Draft the spec content:
```
Here's the spec:
---
## ADDED Requirements
### Requirement: <Name>
<Description of what the system should do>
#### Scenario: <Scenario name>
- **WHEN** <trigger condition>
- **THEN** <expected outcome>
- **AND** <additional outcome if needed>
---
This format—WHEN/THEN/AND—makes requirements testable. You can literally read them as test cases.
```
Save to `openspec/changes/<name>/specs/<capability>/spec.md`.
---
## Phase 7: Design
**EXPLAIN:**
```
## Design
The design captures **how** we'll build it—technical decisions, tradeoffs, approach.
For small changes, this might be brief. That's fine—not every change needs deep design discussion.
```
**DO:** Draft design.md:
```
Here's the design:
---
## Context
[Brief context about the current state]
## Goals / Non-Goals
**Goals:**
- [What we're trying to achieve]
**Non-Goals:**
- [What's explicitly out of scope]
## Decisions
### Decision 1: [Key decision]
[Explanation of approach and rationale]
---
For a small task, this captures the key decisions without over-engineering.
```
Save to `openspec/changes/<name>/design.md`.
---
## Phase 8: Tasks
**EXPLAIN:**
```
## Tasks
Finally, we break the work into implementation tasks—checkboxes that drive the apply phase.
These should be small, clear, and in logical order.
```
**DO:** Generate tasks based on specs and design:
```
Here are the implementation tasks:
---
## 1. [Category or file]
- [ ] 1.1 [Specific task]
- [ ] 1.2 [Specific task]
## 2. Verify
- [ ] 2.1 [Verification step]
---
Each checkbox becomes a unit of work in the apply phase. Ready to implement?
```
**PAUSE** - Wait for user to confirm they're ready to implement.
Save to `openspec/changes/<name>/tasks.md`.
---
## Phase 9: Apply (Implementation)
**EXPLAIN:**
```
## Implementation
Now we implement each task, checking them off as we go. I'll announce each one and occasionally note how the specs/design informed the approach.
```
**DO:** For each task:
1. Announce: "Working on task N: [description]"
2. Implement the change in the codebase
3. Reference specs/design naturally: "The spec says X, so I'm doing Y"
4. Mark complete in tasks.md: `- [ ]` → `- [x]`
5. Brief status: "✓ Task N complete"
Keep narration light—don't over-explain every line of code.
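Step 4 is a one-line edit; a sketch using GNU sed (on BSD/macOS sed, use `-i ''`). The `mark_done` name is illustrative:

```shell
# Sketch: flip a task's checkbox from open to done in a tasks.md file.
# Usage: mark_done <task-id> <path-to-tasks.md>
mark_done() {
  sed -i "s/^- \[ \] $1/- [x] $1/" "$2"
}
```

For example, `mark_done 1.1 openspec/changes/<name>/tasks.md` checks off task 1.1.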
After all tasks:
```
## Implementation Complete
All tasks done:
- [x] Task 1
- [x] Task 2
- [x] ...
The change is implemented! One more step—let's archive it.
```
---
## Phase 10: Archive
**EXPLAIN:**
```
## Archiving
When a change is complete, we archive it. This moves it from `openspec/changes/` to `openspec/changes/archive/YYYY-MM-DD-<name>/`.
Archived changes become your project's decision history—you can always find them later to understand why something was built a certain way.
```
**DO:**
```bash
openspec archive "<name>"
```
**SHOW:**
```
Archived to: `openspec/changes/archive/YYYY-MM-DD-<name>/`
The change is now part of your project's history. The code is in your codebase, the decision record is preserved.
```
---
## Phase 11: Recap & Next Steps
```
## Congratulations!
You just completed a full OpenSpec cycle:
1. **Explore** - Thought through the problem
2. **New** - Created a change container
3. **Proposal** - Captured WHY
4. **Specs** - Defined WHAT in detail
5. **Design** - Decided HOW
6. **Tasks** - Broke it into steps
7. **Apply** - Implemented the work
8. **Archive** - Preserved the record
This same rhythm works for any size change—a small fix or a major feature.
---
## Command Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems before/during work |
| `/opsx:new` | Start a new change, step through artifacts |
| `/opsx:ff` | Fast-forward: create all artifacts at once |
| `/opsx:continue` | Continue working on an existing change |
| `/opsx:apply` | Implement tasks from a change |
| `/opsx:verify` | Verify implementation matches artifacts |
| `/opsx:archive` | Archive a completed change |
---
## What's Next?
Try `/opsx:new` or `/opsx:ff` on something you actually want to build. You've got the rhythm now!
```
---
## Graceful Exit Handling
### User wants to stop mid-way
If the user says they need to stop, want to pause, or seem disengaged:
```
No problem! Your change is saved at `openspec/changes/<name>/`.
To pick up where we left off later:
- `/opsx:continue <name>` - Resume artifact creation
- `/opsx:apply <name>` - Jump to implementation (if tasks exist)
The work won't be lost. Come back whenever you're ready.
```
Exit gracefully without pressure.
### User just wants command reference
If the user says they just want to see the commands or skip the tutorial:
```
## OpenSpec Quick Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems (no code changes) |
| `/opsx:new <name>` | Start a new change, step by step |
| `/opsx:ff <name>` | Fast-forward: all artifacts at once |
| `/opsx:continue <name>` | Continue an existing change |
| `/opsx:apply <name>` | Implement tasks |
| `/opsx:verify <name>` | Verify implementation |
| `/opsx:archive <name>` | Archive when done |
Try `/opsx:new` to start your first change, or `/opsx:ff` if you want to move fast.
```
Exit gracefully.
---
## Guardrails
- **Follow the EXPLAIN → DO → SHOW → PAUSE pattern** at key transitions (after explore, after proposal draft, after tasks, after archive)
- **Keep narration light** during implementation—teach without lecturing
- **Don't skip phases** even if the change is small—the goal is teaching the workflow
- **Pause for acknowledgment** at marked points, but don't over-pause
- **Handle exits gracefully**—never pressure the user to continue
- **Use real codebase tasks**—don't simulate or use fake examples
- **Adjust scope gently**—guide toward smaller tasks but respect user choice


@@ -1,15 +1,24 @@
 ---
-name: openspec-ff-change
+name: openspec-propose
-description: Fast-forward through OpenSpec artifact creation. Use when the user wants to quickly create all artifacts needed for implementation without stepping through each one individually.
+description: Propose a new change with all artifacts generated in one step. Use when the user wants to quickly describe what they want to build and get a complete proposal with design, specs, and tasks ready for implementation.
 license: MIT
 compatibility: Requires openspec CLI.
 metadata:
 author: openspec
 version: "1.0"
-generatedBy: "1.0.2"
+generatedBy: "1.2.0"
 ---
-Fast-forward through artifact creation - generate everything needed to start implementation in one go.
+Propose a new change - create the change and generate all artifacts in one step.
+I'll create a change with artifacts:
+- proposal.md (what & why)
+- design.md (how)
+- tasks.md (implementation steps)
+When ready to implement, run /opsx:apply
+---
 **Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
@@ -28,7 +37,7 @@ Fast-forward through artifact creation - generate everything needed to start imp
 ```bash
 openspec new change "<name>"
 ```
-This creates a scaffolded change at `openspec/changes/<name>/`.
+This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.
 3. **Get the artifact build order**
```bash ```bash
@@ -59,7 +68,7 @@ Fast-forward through artifact creation - generate everything needed to start imp
 - Read any completed dependency files for context
 - Create the artifact file using `template` as the structure
 - Apply `context` and `rules` as constraints - but do NOT copy them into the file
 - Show brief progress: "Created <artifact-id>"
 b. **Continue until all `applyRequires` artifacts are complete**
 - After creating each artifact, re-run `openspec status --change "<name>" --json`
@@ -97,5 +106,5 @@ After completing all artifacts, summarize:
 - Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
 - Always read dependency artifacts before creating a new one
 - If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
-- If a change with that name already exists, suggest continuing that change instead
+- If a change with that name already exists, ask if user wants to continue it or create a new one
 - Verify each artifact file exists after writing before proceeding to next


@@ -1,138 +0,0 @@
---
name: openspec-sync-specs
description: Sync delta specs from a change to main specs. Use when the user wants to update main specs with changes from a delta spec, without archiving the change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Sync delta specs from a change to main specs.
This is an **agent-driven** operation - you will read delta specs and directly edit main specs to apply the changes. This allows intelligent merging (e.g., adding a scenario without copying the entire requirement).
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have delta specs (under `specs/` directory).
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Find delta specs**
Look for delta spec files in `openspec/changes/<name>/specs/*/spec.md`.
Each delta spec file contains sections like:
- `## ADDED Requirements` - New requirements to add
- `## MODIFIED Requirements` - Changes to existing requirements
- `## REMOVED Requirements` - Requirements to remove
- `## RENAMED Requirements` - Requirements to rename (FROM:/TO: format)
If no delta specs found, inform user and stop.
3. **For each delta spec, apply changes to main specs**
For each capability with a delta spec at `openspec/changes/<name>/specs/<capability>/spec.md`:
a. **Read the delta spec** to understand the intended changes
b. **Read the main spec** at `openspec/specs/<capability>/spec.md` (may not exist yet)
c. **Apply changes intelligently**:
**ADDED Requirements:**
- If requirement doesn't exist in main spec → add it
- If requirement already exists → update it to match (treat as implicit MODIFIED)
**MODIFIED Requirements:**
- Find the requirement in main spec
- Apply the changes - this can be:
- Adding new scenarios (don't need to copy existing ones)
- Modifying existing scenarios
- Changing the requirement description
- Preserve scenarios/content not mentioned in the delta
**REMOVED Requirements:**
- Remove the entire requirement block from main spec
**RENAMED Requirements:**
- Find the FROM requirement, rename to TO
d. **Create new main spec** if capability doesn't exist yet:
- Create `openspec/specs/<capability>/spec.md`
- Add Purpose section (can be brief, mark as TBD)
- Add Requirements section with the ADDED requirements
4. **Show summary**
After applying all changes, summarize:
- Which capabilities were updated
- What changes were made (requirements added/modified/removed/renamed)
**Delta Spec Format Reference**
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL do something new.
#### Scenario: Basic case
- **WHEN** user does X
- **THEN** system does Y
## MODIFIED Requirements
### Requirement: Existing Feature
#### Scenario: New scenario to add
- **WHEN** user does A
- **THEN** system does B
## REMOVED Requirements
### Requirement: Deprecated Feature
## RENAMED Requirements
- FROM: `### Requirement: Old Name`
- TO: `### Requirement: New Name`
```
**Key Principle: Intelligent Merging**
Unlike programmatic merging, you can apply **partial updates**:
- To add a scenario, just include that scenario under MODIFIED - don't copy existing scenarios
- The delta represents *intent*, not a wholesale replacement
- Use your judgment to merge changes sensibly
**Output On Success**
```
## Specs Synced: <change-name>
Updated main specs:
**<capability-1>**:
- Added requirement: "New Feature"
- Modified requirement: "Existing Feature" (added 1 scenario)
**<capability-2>**:
- Created new spec file
- Added requirement: "Another Feature"
Main specs are now updated. The change remains active - archive when implementation is complete.
```
**Guardrails**
- Read both delta and main specs before making changes
- Preserve existing content not mentioned in delta
- If something is unclear, ask for clarification
- Show what you're changing as you go
- The operation should be idempotent - running twice should give same result


@@ -1,168 +0,0 @@
---
name: openspec-verify-change
description: Verify implementation matches change artifacts. Use when the user wants to validate that implementation is complete, correct, and coherent before archiving.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Verify that an implementation matches the change artifacts (specs, tasks, design).
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have implementation tasks (tasks artifact exists).
Include the schema used for each change if available.
Mark changes with incomplete tasks as "(In Progress)".
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifacts exist for this change
3. **Get the change directory and load artifacts**
```bash
openspec instructions apply --change "<name>" --json
```
This returns the change directory and context files. Read all available artifacts from `contextFiles`.
4. **Initialize verification report structure**
Create a report structure with three dimensions:
- **Completeness**: Track tasks and spec coverage
- **Correctness**: Track requirement implementation and scenario coverage
- **Coherence**: Track design adherence and pattern consistency
Each dimension can have CRITICAL, WARNING, or SUGGESTION issues.
5. **Verify Completeness**
**Task Completion**:
- If tasks.md exists in contextFiles, read it
- Parse checkboxes: `- [ ]` (incomplete) vs `- [x]` (complete)
- Count complete vs total tasks
- If incomplete tasks exist:
- Add CRITICAL issue for each incomplete task
- Recommendation: "Complete task: <description>" or "Mark as done if already implemented"
**Spec Coverage**:
- If delta specs exist in `openspec/changes/<name>/specs/`:
- Extract all requirements (marked with "### Requirement:")
- For each requirement:
- Search codebase for keywords related to the requirement
- Assess if implementation likely exists
- If requirements appear unimplemented:
- Add CRITICAL issue: "Requirement not found: <requirement name>"
- Recommendation: "Implement requirement X: <description>"
6. **Verify Correctness**
**Requirement Implementation Mapping**:
- For each requirement from delta specs:
- Search codebase for implementation evidence
- If found, note file paths and line ranges
- Assess if implementation matches requirement intent
- If divergence detected:
- Add WARNING: "Implementation may diverge from spec: <details>"
- Recommendation: "Review <file>:<lines> against requirement X"
**Scenario Coverage**:
- For each scenario in delta specs (marked with "#### Scenario:"):
- Check if conditions are handled in code
- Check if tests exist covering the scenario
- If scenario appears uncovered:
- Add WARNING: "Scenario not covered: <scenario name>"
- Recommendation: "Add test or implementation for scenario: <description>"
7. **Verify Coherence**
**Design Adherence**:
- If design.md exists in contextFiles:
- Extract key decisions (look for sections like "Decision:", "Approach:", "Architecture:")
- Verify implementation follows those decisions
- If contradiction detected:
- Add WARNING: "Design decision not followed: <decision>"
- Recommendation: "Update implementation or revise design.md to match reality"
- If no design.md: Skip design adherence check, note "No design.md to verify against"
**Code Pattern Consistency**:
- Review new code for consistency with project patterns
- Check file naming, directory structure, coding style
- If significant deviations found:
- Add SUGGESTION: "Code pattern deviation: <details>"
- Recommendation: "Consider following project pattern: <example>"
8. **Generate Verification Report**
**Summary Scorecard**:
```
## Verification Report: <change-name>
### Summary
| Dimension | Status |
|--------------|------------------|
| Completeness | X/Y tasks, N reqs|
| Correctness | M/N reqs covered |
| Coherence | Followed/Issues |
```
**Issues by Priority**:
1. **CRITICAL** (Must fix before archive):
- Incomplete tasks
- Missing requirement implementations
- Each with specific, actionable recommendation
2. **WARNING** (Should fix):
- Spec/design divergences
- Missing scenario coverage
- Each with specific recommendation
3. **SUGGESTION** (Nice to fix):
- Pattern inconsistencies
- Minor improvements
- Each with specific recommendation
**Final Assessment**:
- If CRITICAL issues: "X critical issue(s) found. Fix before archiving."
- If only warnings: "No critical issues. Y warning(s) to consider. Ready for archive (with noted improvements)."
- If all clear: "All checks passed. Ready for archive."
**Verification Heuristics**
- **Completeness**: Focus on objective checklist items (checkboxes, requirements list)
- **Correctness**: Use keyword search, file path analysis, reasonable inference - don't require perfect certainty
- **Coherence**: Look for glaring inconsistencies, don't nitpick style
- **False Positives**: When uncertain, prefer SUGGESTION over WARNING, WARNING over CRITICAL
- **Actionability**: Every issue must have a specific recommendation with file/line references where applicable
**Graceful Degradation**
- If only tasks.md exists: verify task completion only, skip spec/design checks
- If tasks + specs exist: verify completeness and correctness, skip design
- If full artifacts: verify all three dimensions
- Always note which checks were skipped and why
**Output Format**
Use clear markdown with:
- Table for summary scorecard
- Grouped lists for issues (CRITICAL/WARNING/SUGGESTION)
- Code references in format: `file.ts:123`
- Specific, actionable recommendations
- No vague suggestions like "consider reviewing"

.config/dbhub.toml (new file, +13)

@@ -0,0 +1,13 @@
[[sources]]
id = "main"
dsn = "postgresql://erp_pgsql:erp_2025@cxd.whcxd.cn:16159/junhong_cmp_test?sslmode=disable"
[[tools]]
name = "search_objects"
source = "main"
[[tools]]
name = "execute_sql"
source = "main"
readonly = true # Only allow SELECT, SHOW, DESCRIBE, EXPLAIN
max_rows = 1000 # Limit query results

.gitignore (vendored, +1)

@@ -76,3 +76,4 @@ docs/admin-openapi.yaml
 /api
 /gendocs
 .env.local
+/worker

.mcp.json (new file, +19)

@@ -0,0 +1,19 @@
{
"mcpServers": {
"postgres": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"DATABASE_URI",
"crystaldba/postgres-mcp",
"--access-mode=restricted"
],
"env": {
"DATABASE_URI": "postgresql://erp_pgsql:erp_2025@cxd.whcxd.cn:16159/junhong_cmp_test?sslmode=disable"
}
}
}
}


@@ -4,7 +4,7 @@ description: Implement tasks from an OpenSpec change (Experimental)
 Implement tasks from an OpenSpec change.
-**Input**: Optionally specify a change name (e.g., `/opsx:apply add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
+**Input**: Optionally specify a change name (e.g., `/opsx-apply add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
 **Steps**
@@ -15,7 +15,7 @@ Implement tasks from an OpenSpec change.
 - Auto-select if only one active change exists
 - If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
-Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
+Always announce: "Using change: <name>" and how to override (e.g., `/opsx-apply <other>`).
 2. **Check status to understand the schema**
 ```bash
@@ -38,7 +38,7 @@ Implement tasks from an OpenSpec change.
 - Dynamic instruction based on current state
 **Handle states:**
-- If `state: "blocked"` (missing artifacts): show message, suggest using `/opsx:continue`
+- If `state: "blocked"` (missing artifacts): show message, suggest using `/opsx-continue`
 - If `state: "all_done"`: congratulate, suggest archive
 - Otherwise: proceed to implementation
@@ -108,7 +108,7 @@ Working on task 4/7: <task description>
 - [x] Task 2
 ...
-All tasks complete! Ready to archive this change.
+All tasks complete! You can archive this change with `/opsx-archive`.
 ```
 **Output On Pause (Issue Encountered)**


@@ -4,7 +4,7 @@ description: Archive a completed change in the experimental workflow
 Archive a completed change in the experimental workflow.
-**Input**: Optionally specify a change name after `/opsx:archive` (e.g., `/opsx:archive add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
+**Input**: Optionally specify a change name after `/opsx-archive` (e.g., `/opsx-archive add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
 **Steps**
@@ -56,7 +56,7 @@ Archive a completed change in the experimental workflow.
 - If changes needed: "Sync now (recommended)", "Archive without syncing"
 - If already synced: "Archive now", "Sync anyway", "Cancel"
-If user chooses sync, execute `/opsx:sync` logic. Proceed to archive regardless of choice.
+If user chooses sync, use Task tool (subagent_type: "general-purpose", prompt: "Use Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Proceed to archive regardless of choice.
 5. **Perform the archive**
@@ -150,5 +150,5 @@ Target archive directory already exists.
 - Don't block archive on warnings - just inform and confirm
 - Preserve .openspec.yaml when moving to archive (it moves with the directory)
 - Show clear summary of what happened
-- If sync is requested, use /opsx:sync approach (agent-driven)
+- If sync is requested, use the Skill tool to invoke `openspec-sync-specs` (agent-driven)
 - If delta specs exist, always run the sync assessment and show the combined summary before prompting


@@ -1,239 +0,0 @@
---
description: Archive multiple completed changes at once
---
Archive multiple completed changes in a single operation.
This skill allows you to batch-archive changes, handling spec conflicts intelligently by checking the codebase to determine what's actually implemented.
**Input**: None required (prompts for selection)
**Steps**
1. **Get active changes**
Run `openspec list --json` to get all active changes.
If no active changes exist, inform user and stop.
2. **Prompt for change selection**
Use **AskUserQuestion tool** with multi-select to let user choose changes:
- Show each change with its schema
- Include an option for "All changes"
- Allow any number of selections (1+ works, 2+ is the typical use case)
**IMPORTANT**: Do NOT auto-select. Always let the user choose.
3. **Batch validation - gather status for all selected changes**
For each selected change, collect:
a. **Artifact status** - Run `openspec status --change "<name>" --json`
- Parse `schemaName` and `artifacts` list
- Note which artifacts are `done` vs other states
b. **Task completion** - Read `openspec/changes/<name>/tasks.md`
- Count `- [ ]` (incomplete) vs `- [x]` (complete)
- If no tasks file exists, note as "No tasks"
c. **Delta specs** - Check `openspec/changes/<name>/specs/` directory
- List which capability specs exist
- For each, extract requirement names (lines matching `### Requirement: <name>`)
4. **Detect spec conflicts**
Build a map of `capability -> [changes that touch it]`:
```
auth -> [change-a, change-b] <- CONFLICT (2+ changes)
api -> [change-c] <- OK (only 1 change)
```
A conflict exists when 2+ selected changes have delta specs for the same capability.
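The capability map can be derived from the directory layout alone; a sketch, assuming the `openspec/changes/<name>/specs/<capability>/` layout described above (the depth bounds also keep `archive/` entries out of the scan):

```shell
# Sketch: print capabilities touched by 2+ active changes, i.e. conflicts.
# Each match is <change>/specs/<capability> relative to the root argument.
conflicting_caps() {
  find "$1" -mindepth 3 -maxdepth 3 -type d -path '*/specs/*' 2>/dev/null \
    | awk -F/ '{print $NF}' | sort | uniq -d
}
conflicting_caps openspec/changes
```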
5. **Resolve conflicts agentically**
**For each conflict**, investigate the codebase:
a. **Read the delta specs** from each conflicting change to understand what each claims to add/modify
b. **Search the codebase** for implementation evidence:
- Look for code implementing requirements from each delta spec
- Check for related files, functions, or tests
c. **Determine resolution**:
- If only one change is actually implemented -> sync that one's specs
- If both implemented -> apply in chronological order (older first, newer overwrites)
- If neither implemented -> skip spec sync, warn user
d. **Record resolution** for each conflict:
- Which change's specs to apply
- In what order (if both)
- Rationale (what was found in codebase)
6. **Show consolidated status table**
Display a table summarizing all changes:
```
| Change | Artifacts | Tasks | Specs | Conflicts | Status |
|---------------------|-----------|-------|---------|-----------|--------|
| schema-management | Done | 5/5 | 2 delta | None | Ready |
| project-config | Done | 3/3 | 1 delta | None | Ready |
| add-oauth | Done | 4/4 | 1 delta | auth (!) | Ready* |
| add-verify-skill | 1 left | 2/5 | None | None | Warn |
```
For conflicts, show the resolution:
```
* Conflict resolution:
- auth spec: Will apply add-oauth then add-jwt (both implemented, chronological order)
```
For incomplete changes, show warnings:
```
Warnings:
- add-verify-skill: 1 incomplete artifact, 3 incomplete tasks
```
7. **Confirm batch operation**
Use **AskUserQuestion tool** with a single confirmation:
- "Archive N changes?" with options based on status
- Options might include:
- "Archive all N changes"
- "Archive only N ready changes (skip incomplete)"
- "Cancel"
If there are incomplete changes, make clear they'll be archived with warnings.
8. **Execute archive for each confirmed change**
Process changes in the determined order (respecting conflict resolution):
a. **Sync specs** if delta specs exist:
- Use the openspec-sync-specs approach (agent-driven intelligent merge)
- For conflicts, apply in resolved order
- Track if sync was done
b. **Perform the archive**:
```bash
mkdir -p openspec/changes/archive
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
c. **Track outcome** for each change:
- Success: archived successfully
- Failed: error during archive (record error)
- Skipped: user chose not to archive (if applicable)
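Steps (b) and (c), with the date-stamping and fail-but-continue guardrails folded in, might look like this sketch (the `/tmp` root and change name are illustrative):

```shell
name="add-oauth"                        # hypothetical change name
root="/tmp/osarch/openspec/changes"
mkdir -p "$root/$name" "$root/archive"
target="$root/archive/$(date +%F)-$name"   # %F expands to YYYY-MM-DD
if [ -e "$target" ]; then
  # Record the failure and move on to the next change in the batch
  echo "FAILED $name: archive target already exists"
else
  mv "$root/$name" "$target"
  echo "ARCHIVED $name -> $target"
fi
```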
9. **Display summary**
Show final results:
```
## Bulk Archive Complete
Archived 3 changes:
- schema-management-cli -> archive/2026-01-19-schema-management-cli/
- project-config -> archive/2026-01-19-project-config/
- add-oauth -> archive/2026-01-19-add-oauth/
Skipped 1 change:
- add-verify-skill (user chose not to archive incomplete)
Spec sync summary:
- 4 delta specs synced to main specs
- 1 conflict resolved (auth: applied both in chronological order)
```
If any failures:
```
Failed 1 change:
- some-change: Archive directory already exists
```
**Conflict Resolution Examples**
Example 1: Only one implemented
```
Conflict: specs/auth/spec.md touched by [add-oauth, add-jwt]
Checking add-oauth:
- Delta adds "OAuth Provider Integration" requirement
- Searching codebase... found src/auth/oauth.ts implementing OAuth flow
Checking add-jwt:
- Delta adds "JWT Token Handling" requirement
- Searching codebase... no JWT implementation found
Resolution: Only add-oauth is implemented. Will sync add-oauth specs only.
```
Example 2: Both implemented
```
Conflict: specs/api/spec.md touched by [add-rest-api, add-graphql]
Checking add-rest-api (created 2026-01-10):
- Delta adds "REST Endpoints" requirement
- Searching codebase... found src/api/rest.ts
Checking add-graphql (created 2026-01-15):
- Delta adds "GraphQL Schema" requirement
- Searching codebase... found src/api/graphql.ts
Resolution: Both implemented. Will apply add-rest-api specs first,
then add-graphql specs (chronological order, newer takes precedence).
```
**Output On Success**
```
## Bulk Archive Complete
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
- <change-2> -> archive/YYYY-MM-DD-<change-2>/
Spec sync summary:
- N delta specs synced to main specs
- No conflicts (or: M conflicts resolved)
```
**Output On Partial Success**
```
## Bulk Archive Complete (partial)
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
Skipped M changes:
- <change-2> (user chose not to archive incomplete)
Failed K changes:
- <change-3>: Archive directory already exists
```
**Output When No Changes**
```
## No Changes to Archive
No active changes found. Use `/opsx:new` to create a new change.
```
**Guardrails**
- Allow any number of changes (1+ is fine, 2+ is the typical use case)
- Always prompt for selection, never auto-select
- Detect spec conflicts early and resolve by checking codebase
- When both changes are implemented, apply specs in chronological order
- Skip spec sync only when implementation is missing (warn user)
- Show clear per-change status before confirming
- Use single confirmation for entire batch
- Track and report all outcomes (success/skip/fail)
- Preserve .openspec.yaml when moving to archive
- Archive directory target uses current date: YYYY-MM-DD-<name>
- If archive target exists, fail that change but continue with others

View File

@@ -1,111 +0,0 @@
---
description: Continue working on a change - create the next artifact (Experimental)
---
Continue working on a change by creating the next artifact.
**Input**: Optionally specify a change name after `/opsx:continue` (e.g., `/opsx:continue add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes sorted by most recently modified. Then use the **AskUserQuestion tool** to let the user select which change to work on.
Present the top 3-4 most recently modified changes as options, showing:
- Change name
- Schema (from `schema` field if present, otherwise "spec-driven")
- Status (e.g., "0/5 tasks", "complete", "no tasks")
- How recently it was modified (from `lastModified` field)
Mark the most recently modified change as "(Recommended)" since it's likely what the user wants to continue.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check current status**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand current state. The response includes:
- `schemaName`: The workflow schema being used (e.g., "spec-driven")
- `artifacts`: Array of artifacts with their status ("done", "ready", "blocked")
- `isComplete`: Boolean indicating if all artifacts are complete
3. **Act based on status**:
---
**If all artifacts are complete (`isComplete: true`)**:
- Congratulate the user
- Show final status including the schema used
- Suggest: "All artifacts created! You can now implement this change or archive it."
- STOP
---
**If artifacts are ready to create** (status shows artifacts with `status: "ready"`):
- Pick the FIRST artifact with `status: "ready"` from the status output
- Get its instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- Parse the JSON. The key fields are:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- **Create the artifact file**:
- Read any completed dependency files for context
- Use `template` as the structure - fill in its sections
- Apply `context` and `rules` as constraints when writing - but do NOT copy them into the file
- Write to the output path specified in instructions
- Show what was created and what's now unlocked
- STOP after creating ONE artifact
---
**If no artifacts are ready (all blocked)**:
- This shouldn't happen with a valid schema
- Show status and suggest checking for issues
4. **After creating an artifact, show progress**
```bash
openspec status --change "<name>"
```
**Output**
After each invocation, show:
- Which artifact was created
- Schema workflow being used
- Current progress (N/M complete)
- What artifacts are now unlocked
- Prompt: "Run `/opsx:continue` to create the next artifact"
**Artifact Creation Guidelines**
The artifact types and their purpose depend on the schema. Use the `instruction` field from the instructions output to understand what to create.
Common artifact patterns:
**spec-driven schema** (proposal → specs → design → tasks):
- **proposal.md**: Ask user about the change if not clear. Fill in Why, What Changes, Capabilities, Impact.
- The Capabilities section is critical - each capability listed will need a spec file.
- **specs/<capability>/spec.md**: Create one spec per capability listed in the proposal's Capabilities section (use the capability name, not the change name).
- **design.md**: Document technical decisions, architecture, and implementation approach.
- **tasks.md**: Break down implementation into checkboxed tasks.
For other schemas, follow the `instruction` field from the CLI output.
**Guardrails**
- Create ONE artifact per invocation
- Always read dependency artifacts before creating a new one
- Never skip artifacts or create out of order
- If context is unclear, ask the user before creating
- Verify the artifact file exists after writing before marking progress
- Use the schema's artifact sequence, don't assume specific artifact names
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output

View File

@@ -4,11 +4,11 @@ description: Enter explore mode - think through ideas, investigate problems, cla
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
-**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first (e.g., start a change with `/opsx:new` or `/opsx:ff`). You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
+**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
-**Input**: The argument after `/opsx:explore` is whatever the user wants to think about. Could be:
+**Input**: The argument after `/opsx-explore` is whatever the user wants to think about. Could be:
- A vague idea: "real-time collaboration"
- A specific problem: "the auth system is getting unwieldy"
- A change name: "add-dark-mode" (to explore in context of that change)
@@ -97,8 +97,7 @@ If the user mentioned a specific change name, read its artifacts for context.
Think freely. When insights crystallize, you might offer:
-- "This feels solid enough to start a change. Want me to create one?"
-  → Can transition to `/opsx:new` or `/opsx:ff`
+- "This feels solid enough to start a change. Want me to create a proposal?"
- Or keep exploring - no pressure to formalize
### When a change exists
@@ -150,7 +149,7 @@ If the user mentions a change or you detect one is relevant:
There's no required ending. Discovery might:
-- **Flow into action**: "Ready to start? `/opsx:new` or `/opsx:ff`"
+- **Flow into a proposal**: "Ready to start? I can create a change proposal."
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"

View File

@@ -1,66 +0,0 @@
---
description: Start a new change using the experimental artifact workflow (OPSX)
---
Start a new change using the experimental artifact-driven approach.
**Input**: The argument after `/opsx:new` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
1. **If no input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Determine the workflow schema**
Use the default schema (omit `--schema`) unless the user explicitly requests a different workflow.
**Use a different schema only if the user mentions:**
- A specific schema name → use `--schema <name>`
- "show workflows" or "what workflows" → run `openspec schemas --json` and let them choose
**Otherwise**: Omit `--schema` to use the default.
3. **Create the change directory**
```bash
openspec new change "<name>"
```
Add `--schema <name>` only if the user requested a specific workflow.
This creates a scaffolded change at `openspec/changes/<name>/` with the selected schema.
4. **Show the artifact status**
```bash
openspec status --change "<name>"
```
This shows which artifacts need to be created and which are ready (dependencies satisfied).
5. **Get instructions for the first artifact**
The first artifact depends on the schema. Check the status output to find the first artifact with status "ready".
```bash
openspec instructions <first-artifact-id> --change "<name>"
```
This outputs the template and context for creating the first artifact.
6. **STOP and wait for user direction**
**Output**
After completing the steps, summarize:
- Change name and location
- Schema/workflow being used and its artifact sequence
- Current status (0/N artifacts complete)
- The template for the first artifact
- Prompt: "Ready to create the first artifact? Run `/opsx:continue` or just describe what this change is about and I'll draft it."
**Guardrails**
- Do NOT create any artifacts yet - just show the instructions
- Do NOT advance beyond showing the first artifact template
- If the name is invalid (not kebab-case), ask for a valid name
- If a change with that name already exists, suggest using `/opsx:continue` instead
- Pass --schema if using a non-default workflow

View File

@@ -1,522 +0,0 @@
---
description: Guided onboarding - walk through a complete OpenSpec workflow cycle with narration
---
Guide the user through their first complete OpenSpec workflow cycle. This is a teaching experience—you'll do real work in their codebase while explaining each step.
---
## Preflight
Before starting, check if OpenSpec is initialized:
```bash
openspec status --json 2>&1 || echo "NOT_INITIALIZED"
```
**If not initialized:**
> OpenSpec isn't set up in this project yet. Run `openspec init` first, then come back to `/opsx:onboard`.
Stop here if not initialized.
---
## Phase 1: Welcome
Display:
```
## Welcome to OpenSpec!
I'll walk you through a complete change cycle—from idea to implementation—using a real task in your codebase. Along the way, you'll learn the workflow by doing it.
**What we'll do:**
1. Pick a small, real task in your codebase
2. Explore the problem briefly
3. Create a change (the container for our work)
4. Build the artifacts: proposal → specs → design → tasks
5. Implement the tasks
6. Archive the completed change
**Time:** ~15-20 minutes
Let's start by finding something to work on.
```
---
## Phase 2: Task Selection
### Codebase Analysis
Scan the codebase for small improvement opportunities. Look for:
1. **TODO/FIXME comments** - Search for `TODO`, `FIXME`, `HACK`, `XXX` in code files
2. **Missing error handling** - `catch` blocks that swallow errors, risky operations without try-catch
3. **Functions without tests** - Cross-reference `src/` with test directories
4. **Type issues** - `any` types in TypeScript files (`: any`, `as any`)
5. **Debug artifacts** - `console.log`, `console.debug`, `debugger` statements in non-debug code
6. **Missing validation** - User input handlers without validation
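Signals 1, 4, and 5 are simple pattern scans. A sketch against a fabricated source file (the patterns are illustrative starting points, not an exhaustive linter):

```shell
# Fabricated file containing a few of the signals above
mkdir -p /tmp/osscan/src
cat > /tmp/osscan/src/app.ts <<'EOF'
// TODO: validate user input before saving
function save(data: any) {
  console.log("saving", data);
}
EOF
grep -rn -E 'TODO|FIXME|HACK|XXX' /tmp/osscan/src            # 1. leftover markers
grep -rn -E ': any|as any' /tmp/osscan/src                   # 4. loose TypeScript types
grep -rn -E 'console\.(log|debug)|debugger' /tmp/osscan/src  # 5. debug artifacts
```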
Also check recent git activity:
```bash
git log --oneline -10 2>/dev/null || echo "No git history"
```
### Present Suggestions
From your analysis, present 3-4 specific suggestions:
```
## Task Suggestions
Based on scanning your codebase, here are some good starter tasks:
**1. [Most promising task]**
Location: `src/path/to/file.ts:42`
Scope: ~1-2 files, ~20-30 lines
Why it's good: [brief reason]
**2. [Second task]**
Location: `src/another/file.ts`
Scope: ~1 file, ~15 lines
Why it's good: [brief reason]
**3. [Third task]**
Location: [location]
Scope: [estimate]
Why it's good: [brief reason]
**4. Something else?**
Tell me what you'd like to work on.
Which task interests you? (Pick a number or describe your own)
```
**If nothing found:** Fall back to asking what the user wants to build:
> I didn't find obvious quick wins in your codebase. What's something small you've been meaning to add or fix?
### Scope Guardrail
If the user picks or describes something too large (major feature, multi-day work):
```
That's a valuable task, but it's probably larger than ideal for your first OpenSpec run-through.
For learning the workflow, smaller is better—it lets you see the full cycle without getting stuck in implementation details.
**Options:**
1. **Slice it smaller** - What's the smallest useful piece of [their task]? Maybe just [specific slice]?
2. **Pick something else** - One of the other suggestions, or a different small task?
3. **Do it anyway** - If you really want to tackle this, we can. Just know it'll take longer.
What would you prefer?
```
Let the user override if they insist—this is a soft guardrail.
---
## Phase 3: Explore Demo
Once a task is selected, briefly demonstrate explore mode:
```
Before we create a change, let me quickly show you **explore mode**—it's how you think through problems before committing to a direction.
```
Spend 1-2 minutes investigating the relevant code:
- Read the file(s) involved
- Draw a quick ASCII diagram if it helps
- Note any considerations
```
## Quick Exploration
[Your brief analysis—what you found, any considerations]
┌─────────────────────────────────────────┐
│ [Optional: ASCII diagram if helpful] │
└─────────────────────────────────────────┘
Explore mode (`/opsx:explore`) is for this kind of thinking—investigating before implementing. You can use it anytime you need to think through a problem.
Now let's create a change to hold our work.
```
**PAUSE** - Wait for user acknowledgment before proceeding.
---
## Phase 4: Create the Change
**EXPLAIN:**
```
## Creating a Change
A "change" in OpenSpec is a container for all the thinking and planning around a piece of work. It lives in `openspec/changes/<name>/` and holds your artifacts—proposal, specs, design, tasks.
Let me create one for our task.
```
**DO:** Create the change with a derived kebab-case name:
```bash
openspec new change "<derived-name>"
```
**SHOW:**
```
Created: `openspec/changes/<name>/`
The folder structure:
```
openspec/changes/<name>/
├── proposal.md ← Why we're doing this (empty, we'll fill it)
├── design.md ← How we'll build it (empty)
├── specs/ ← Detailed requirements (empty)
└── tasks.md ← Implementation checklist (empty)
```
Now let's fill in the first artifact—the proposal.
```
---
## Phase 5: Proposal
**EXPLAIN:**
```
## The Proposal
The proposal captures **why** we're making this change and **what** it involves at a high level. It's the "elevator pitch" for the work.
I'll draft one based on our task.
```
**DO:** Draft the proposal content (don't save yet):
```
Here's a draft proposal:
---
## Why
[1-2 sentences explaining the problem/opportunity]
## What Changes
[Bullet points of what will be different]
## Capabilities
### New Capabilities
- `<capability-name>`: [brief description]
### Modified Capabilities
<!-- If modifying existing behavior -->
## Impact
- `src/path/to/file.ts`: [what changes]
- [other files if applicable]
---
Does this capture the intent? I can adjust before we save it.
```
**PAUSE** - Wait for user approval/feedback.
After approval, save the proposal:
```bash
openspec instructions proposal --change "<name>" --json
```
Then write the content to `openspec/changes/<name>/proposal.md`.
```
Proposal saved. This is your "why" document—you can always come back and refine it as understanding evolves.
Next up: specs.
```
---
## Phase 6: Specs
**EXPLAIN:**
```
## Specs
Specs define **what** we're building in precise, testable terms. They use a requirement/scenario format that makes expected behavior crystal clear.
For a small task like this, we might only need one spec file.
```
**DO:** Create the spec file:
```bash
mkdir -p openspec/changes/<name>/specs/<capability-name>
```
Draft the spec content:
```
Here's the spec:
---
## ADDED Requirements
### Requirement: <Name>
<Description of what the system should do>
#### Scenario: <Scenario name>
- **WHEN** <trigger condition>
- **THEN** <expected outcome>
- **AND** <additional outcome if needed>
---
This format—WHEN/THEN/AND—makes requirements testable. You can literally read them as test cases.
```
Save to `openspec/changes/<name>/specs/<capability>/spec.md`.
---
## Phase 7: Design
**EXPLAIN:**
```
## Design
The design captures **how** we'll build it—technical decisions, tradeoffs, approach.
For small changes, this might be brief. That's fine—not every change needs deep design discussion.
```
**DO:** Draft design.md:
```
Here's the design:
---
## Context
[Brief context about the current state]
## Goals / Non-Goals
**Goals:**
- [What we're trying to achieve]
**Non-Goals:**
- [What's explicitly out of scope]
## Decisions
### Decision 1: [Key decision]
[Explanation of approach and rationale]
---
For a small task, this captures the key decisions without over-engineering.
```
Save to `openspec/changes/<name>/design.md`.
---
## Phase 8: Tasks
**EXPLAIN:**
```
## Tasks
Finally, we break the work into implementation tasks—checkboxes that drive the apply phase.
These should be small, clear, and in logical order.
```
**DO:** Generate tasks based on specs and design:
```
Here are the implementation tasks:
---
## 1. [Category or file]
- [ ] 1.1 [Specific task]
- [ ] 1.2 [Specific task]
## 2. Verify
- [ ] 2.1 [Verification step]
---
Each checkbox becomes a unit of work in the apply phase. Ready to implement?
```
**PAUSE** - Wait for user to confirm they're ready to implement.
Save to `openspec/changes/<name>/tasks.md`.
---
## Phase 9: Apply (Implementation)
**EXPLAIN:**
```
## Implementation
Now we implement each task, checking them off as we go. I'll announce each one and occasionally note how the specs/design informed the approach.
```
**DO:** For each task:
1. Announce: "Working on task N: [description]"
2. Implement the change in the codebase
3. Reference specs/design naturally: "The spec says X, so I'm doing Y"
4. Mark complete in tasks.md: `- [ ]` → `- [x]`
5. Brief status: "✓ Task N complete"
Keep narration light—don't over-explain every line of code.
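Marking a task complete (step 4) is a one-line edit. A sed sketch (GNU sed shown; on macOS/BSD use `sed -i ''` — the task numbers and text are made up):

```shell
mkdir -p /tmp/osapply
cat > /tmp/osapply/tasks.md <<'EOF'
- [x] 1.1 Add config
- [ ] 1.2 Wire up route
EOF
# Flip only task 1.2 from "- [ ]" to "- [x]"
sed -i 's/^- \[ \] 1\.2/- [x] 1.2/' /tmp/osapply/tasks.md
cat /tmp/osapply/tasks.md   # → both lines now read "- [x]"
```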
After all tasks:
```
## Implementation Complete
All tasks done:
- [x] Task 1
- [x] Task 2
- [x] ...
The change is implemented! One more step—let's archive it.
```
---
## Phase 10: Archive
**EXPLAIN:**
```
## Archiving
When a change is complete, we archive it. This moves it from `openspec/changes/` to `openspec/changes/archive/YYYY-MM-DD-<name>/`.
Archived changes become your project's decision history—you can always find them later to understand why something was built a certain way.
```
**DO:**
```bash
openspec archive "<name>"
```
**SHOW:**
```
Archived to: `openspec/changes/archive/YYYY-MM-DD-<name>/`
The change is now part of your project's history. The code is in your codebase, the decision record is preserved.
```
---
## Phase 11: Recap & Next Steps
```
## Congratulations!
You just completed a full OpenSpec cycle:
1. **Explore** - Thought through the problem
2. **New** - Created a change container
3. **Proposal** - Captured WHY
4. **Specs** - Defined WHAT in detail
5. **Design** - Decided HOW
6. **Tasks** - Broke it into steps
7. **Apply** - Implemented the work
8. **Archive** - Preserved the record
This same rhythm works for any size change—a small fix or a major feature.
---
## Command Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems before/during work |
| `/opsx:new` | Start a new change, step through artifacts |
| `/opsx:ff` | Fast-forward: create all artifacts at once |
| `/opsx:continue` | Continue working on an existing change |
| `/opsx:apply` | Implement tasks from a change |
| `/opsx:verify` | Verify implementation matches artifacts |
| `/opsx:archive` | Archive a completed change |
---
## What's Next?
Try `/opsx:new` or `/opsx:ff` on something you actually want to build. You've got the rhythm now!
```
---
## Graceful Exit Handling
### User wants to stop mid-way
If the user says they need to stop, want to pause, or seem disengaged:
```
No problem! Your change is saved at `openspec/changes/<name>/`.
To pick up where we left off later:
- `/opsx:continue <name>` - Resume artifact creation
- `/opsx:apply <name>` - Jump to implementation (if tasks exist)
The work won't be lost. Come back whenever you're ready.
```
Exit gracefully without pressure.
### User just wants command reference
If the user says they just want to see the commands or skip the tutorial:
```
## OpenSpec Quick Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems (no code changes) |
| `/opsx:new <name>` | Start a new change, step by step |
| `/opsx:ff <name>` | Fast-forward: all artifacts at once |
| `/opsx:continue <name>` | Continue an existing change |
| `/opsx:apply <name>` | Implement tasks |
| `/opsx:verify <name>` | Verify implementation |
| `/opsx:archive <name>` | Archive when done |
Try `/opsx:new` to start your first change, or `/opsx:ff` if you want to move fast.
```
Exit gracefully.
---
## Guardrails
- **Follow the EXPLAIN → DO → SHOW → PAUSE pattern** at key transitions (after explore, after proposal draft, after tasks, after archive)
- **Keep narration light** during implementation—teach without lecturing
- **Don't skip phases** even if the change is small—the goal is teaching the workflow
- **Pause for acknowledgment** at marked points, but don't over-pause
- **Handle exits gracefully**—never pressure the user to continue
- **Use real codebase tasks**—don't simulate or use fake examples
- **Adjust scope gently**—guide toward smaller tasks but respect user choice

View File

@@ -1,10 +1,19 @@
---
-description: Create a change and generate all artifacts needed for implementation in one go
+description: Propose a new change - create it and generate all artifacts in one step
---
-Fast-forward through artifact creation - generate everything needed to start implementation.
-**Input**: The argument after `/opsx:ff` is the change name (kebab-case), OR a description of what the user wants to build.
+Propose a new change - create the change and generate all artifacts in one step.
+I'll create a change with artifacts:
+- proposal.md (what & why)
+- design.md (how)
+- tasks.md (implementation steps)
+When ready to implement, run /opsx-apply
+---
+**Input**: The argument after `/opsx-propose` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
@@ -21,7 +30,7 @@ Fast-forward through artifact creation - generate everything needed to start imp
```bash
openspec new change "<name>"
```
-This creates a scaffolded change at `openspec/changes/<name>/`.
+This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.
3. **Get the artifact build order**
```bash
@@ -52,7 +61,7 @@ Fast-forward through artifact creation - generate everything needed to start imp
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
@@ -74,14 +83,17 @@ After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
-- Prompt: "Run `/opsx:apply` to start implementing."
+- Prompt: "Run `/opsx-apply` to start implementing."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
-- Use the `template` as a starting point, filling in based on context
+- Use `template` as the structure for your output file - fill in its sections
+- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
+- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
+- These guide what you write, but should never appear in the output
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)

View File

@@ -1,131 +0,0 @@
---
description: Sync delta specs from a change to main specs
---
Sync delta specs from a change to main specs.
This is an **agent-driven** operation - you will read delta specs and directly edit main specs to apply the changes. This allows intelligent merging (e.g., adding a scenario without copying the entire requirement).
**Input**: Optionally specify a change name after `/opsx:sync` (e.g., `/opsx:sync add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have delta specs (under `specs/` directory).
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Find delta specs**
Look for delta spec files in `openspec/changes/<name>/specs/*/spec.md`.
Each delta spec file contains sections like:
- `## ADDED Requirements` - New requirements to add
- `## MODIFIED Requirements` - Changes to existing requirements
- `## REMOVED Requirements` - Requirements to remove
- `## RENAMED Requirements` - Requirements to rename (FROM:/TO: format)
If no delta specs found, inform user and stop.
3. **For each delta spec, apply changes to main specs**
For each capability with a delta spec at `openspec/changes/<name>/specs/<capability>/spec.md`:
a. **Read the delta spec** to understand the intended changes
b. **Read the main spec** at `openspec/specs/<capability>/spec.md` (may not exist yet)
c. **Apply changes intelligently**:
**ADDED Requirements:**
- If requirement doesn't exist in main spec → add it
- If requirement already exists → update it to match (treat as implicit MODIFIED)
**MODIFIED Requirements:**
- Find the requirement in main spec
- Apply the changes - this can be:
- Adding new scenarios (don't need to copy existing ones)
- Modifying existing scenarios
- Changing the requirement description
- Preserve scenarios/content not mentioned in the delta
**REMOVED Requirements:**
- Remove the entire requirement block from main spec
**RENAMED Requirements:**
- Find the FROM requirement, rename to TO
d. **Create new main spec** if capability doesn't exist yet:
- Create `openspec/specs/<capability>/spec.md`
- Add Purpose section (can be brief, mark as TBD)
- Add Requirements section with the ADDED requirements
4. **Show summary**
After applying all changes, summarize:
- Which capabilities were updated
- What changes were made (requirements added/modified/removed/renamed)
**Delta Spec Format Reference**
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL do something new.
#### Scenario: Basic case
- **WHEN** user does X
- **THEN** system does Y
## MODIFIED Requirements
### Requirement: Existing Feature
#### Scenario: New scenario to add
- **WHEN** user does A
- **THEN** system does B
## REMOVED Requirements
### Requirement: Deprecated Feature
## RENAMED Requirements
- FROM: `### Requirement: Old Name`
- TO: `### Requirement: New Name`
```
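The sections above lend themselves to mechanical parsing. A minimal sketch (the header patterns come from the format reference; `parse_delta_spec` is a hypothetical helper, and RENAMED's FROM:/TO: bullets are omitted for brevity):

```python
import re

def parse_delta_spec(text: str) -> dict:
    """Group '### Requirement:' names under their '## <OP> Requirements' section."""
    sections: dict[str, list[str]] = {}
    current = None
    for line in text.splitlines():
        m = re.match(r"^## (ADDED|MODIFIED|REMOVED|RENAMED) Requirements", line)
        if m:
            current = m.group(1)
            sections.setdefault(current, [])
            continue
        m = re.match(r"^### Requirement: (.+)", line)
        if m and current:
            sections[current].append(m.group(1).strip())
    return sections

delta = """## ADDED Requirements
### Requirement: New Feature
The system SHALL do something new.
## REMOVED Requirements
### Requirement: Deprecated Feature
"""
print(parse_delta_spec(delta))
```

The resulting map drives the merge: ADDED entries append, REMOVED entries delete, and MODIFIED entries are matched by name in the main spec.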
**Key Principle: Intelligent Merging**
Unlike programmatic merging, you can apply **partial updates**:
- To add a scenario, just include that scenario under MODIFIED - don't copy existing scenarios
- The delta represents *intent*, not a wholesale replacement
- Use your judgment to merge changes sensibly
**Output On Success**
```
## Specs Synced: <change-name>
Updated main specs:
**<capability-1>**:
- Added requirement: "New Feature"
- Modified requirement: "Existing Feature" (added 1 scenario)
**<capability-2>**:
- Created new spec file
- Added requirement: "Another Feature"
Main specs are now updated. The change remains active - archive when implementation is complete.
```
**Guardrails**
- Read both delta and main specs before making changes
- Preserve existing content not mentioned in delta
- If something is unclear, ask for clarification
- Show what you're changing as you go
- The operation should be idempotent - running twice should give same result

View File

@@ -1,161 +0,0 @@
---
description: Verify implementation matches change artifacts before archiving
---
Verify that an implementation matches the change artifacts (specs, tasks, design).
**Input**: Optionally specify a change name after `/opsx:verify` (e.g., `/opsx:verify add-auth`). If omitted, check if it can be inferred from conversation context. If it is vague or ambiguous, you MUST prompt the user with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have implementation tasks (tasks artifact exists).
Include the schema used for each change if available.
Mark changes with incomplete tasks as "(In Progress)".
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifacts exist for this change
3. **Get the change directory and load artifacts**
```bash
openspec instructions apply --change "<name>" --json
```
This returns the change directory and context files. Read all available artifacts from `contextFiles`.
4. **Initialize verification report structure**
Create a report structure with three dimensions:
- **Completeness**: Track tasks and spec coverage
- **Correctness**: Track requirement implementation and scenario coverage
- **Coherence**: Track design adherence and pattern consistency
Each dimension can have CRITICAL, WARNING, or SUGGESTION issues.
5. **Verify Completeness**
**Task Completion**:
- If tasks.md exists in contextFiles, read it
- Parse checkboxes: `- [ ]` (incomplete) vs `- [x]` (complete)
- Count complete vs total tasks
- If incomplete tasks exist:
- Add CRITICAL issue for each incomplete task
- Recommendation: "Complete task: <description>" or "Mark as done if already implemented"
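The checkbox count can be sketched in a few lines (assumes GitHub-style task-list syntax, as shown above):

```python
import re

def count_tasks(tasks_md: str) -> tuple[int, int]:
    """Return (complete, total) from '- [ ]' / '- [x]' checkboxes."""
    boxes = re.findall(r"^\s*- \[( |x|X)\]", tasks_md, flags=re.MULTILINE)
    complete = sum(1 for b in boxes if b.lower() == "x")
    return complete, len(boxes)

sample = "- [x] Add login route\n- [ ] Add tests\n- [x] Update docs\n"
print(count_tasks(sample))  # (2, 3)
```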
**Spec Coverage**:
- If delta specs exist in `openspec/changes/<name>/specs/`:
- Extract all requirements (marked with "### Requirement:")
- For each requirement:
- Search codebase for keywords related to the requirement
- Assess if implementation likely exists
- If requirements appear unimplemented:
- Add CRITICAL issue: "Requirement not found: <requirement name>"
- Recommendation: "Implement requirement X: <description>"
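One rough heuristic for the keyword search (a sketch; the stopword list and tokenization are illustrative, not prescribed):

```python
import re

STOPWORDS = {"the", "a", "an", "of", "and", "or", "to", "for"}

def requirement_keywords(name: str) -> list[str]:
    """Turn a requirement name like 'OAuth Provider Integration' into search terms."""
    words = re.findall(r"[A-Za-z0-9]+", name.lower())
    return [w for w in words if w not in STOPWORDS]

print(requirement_keywords("OAuth Provider Integration"))
```

Each keyword then becomes a case-insensitive grep over the codebase; a requirement with no hits is a candidate CRITICAL issue.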
6. **Verify Correctness**
**Requirement Implementation Mapping**:
- For each requirement from delta specs:
- Search codebase for implementation evidence
- If found, note file paths and line ranges
- Assess if implementation matches requirement intent
- If divergence detected:
- Add WARNING: "Implementation may diverge from spec: <details>"
- Recommendation: "Review <file>:<lines> against requirement X"
**Scenario Coverage**:
- For each scenario in delta specs (marked with "#### Scenario:"):
- Check if conditions are handled in code
- Check if tests exist covering the scenario
- If scenario appears uncovered:
- Add WARNING: "Scenario not covered: <scenario name>"
- Recommendation: "Add test or implementation for scenario: <description>"
7. **Verify Coherence**
**Design Adherence**:
- If design.md exists in contextFiles:
- Extract key decisions (look for sections like "Decision:", "Approach:", "Architecture:")
- Verify implementation follows those decisions
- If contradiction detected:
- Add WARNING: "Design decision not followed: <decision>"
- Recommendation: "Update implementation or revise design.md to match reality"
- If no design.md: Skip design adherence check, note "No design.md to verify against"
**Code Pattern Consistency**:
- Review new code for consistency with project patterns
- Check file naming, directory structure, coding style
- If significant deviations found:
- Add SUGGESTION: "Code pattern deviation: <details>"
- Recommendation: "Consider following project pattern: <example>"
8. **Generate Verification Report**
**Summary Scorecard**:
```
## Verification Report: <change-name>
### Summary
| Dimension | Status |
|--------------|------------------|
| Completeness | X/Y tasks, N reqs|
| Correctness | M/N reqs covered |
| Coherence | Followed/Issues |
```
**Issues by Priority**:
1. **CRITICAL** (Must fix before archive):
- Incomplete tasks
- Missing requirement implementations
- Each with specific, actionable recommendation
2. **WARNING** (Should fix):
- Spec/design divergences
- Missing scenario coverage
- Each with specific recommendation
3. **SUGGESTION** (Nice to fix):
- Pattern inconsistencies
- Minor improvements
- Each with specific recommendation
**Final Assessment**:
- If CRITICAL issues: "X critical issue(s) found. Fix before archiving."
- If only warnings: "No critical issues. Y warning(s) to consider. Ready for archive (with noted improvements)."
- If all clear: "All checks passed. Ready for archive."
**Verification Heuristics**
- **Completeness**: Focus on objective checklist items (checkboxes, requirements list)
- **Correctness**: Use keyword search, file path analysis, reasonable inference - don't require perfect certainty
- **Coherence**: Look for glaring inconsistencies, don't nitpick style
- **False Positives**: When uncertain, prefer SUGGESTION over WARNING, WARNING over CRITICAL
- **Actionability**: Every issue must have a specific recommendation with file/line references where applicable
**Graceful Degradation**
- If only tasks.md exists: verify task completion only, skip spec/design checks
- If tasks + specs exist: verify completeness and correctness, skip design
- If full artifacts: verify all three dimensions
- Always note which checks were skipped and why
**Output Format**
Use clear markdown with:
- Table for summary scorecard
- Grouped lists for issues (CRITICAL/WARNING/SUGGESTION)
- Code references in format: `file.ts:123`
- Specific, actionable recommendations
- No vague suggestions like "consider reviewing"

View File

@@ -6,7 +6,7 @@ compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2" generatedBy: "1.2.0"
---
Implement tasks from an OpenSpec change.
@@ -22,7 +22,7 @@ Implement tasks from an OpenSpec change.
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`). Always announce: "Using change: <name>" and how to override (e.g., `/opsx-apply <other>`).
2. **Check status to understand the schema**
```bash

View File

@@ -6,7 +6,7 @@ compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2" generatedBy: "1.2.0"
---
Archive a completed change in the experimental workflow.
@@ -63,7 +63,7 @@ Archive a completed change in the experimental workflow.
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If user chooses sync, execute /opsx:sync logic (use the openspec-sync-specs skill). Proceed to archive regardless of choice. If user chooses sync, use Task tool (subagent_type: "general-purpose", prompt: "Use Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Proceed to archive regardless of choice.
5. **Perform the archive**

View File

@@ -1,246 +0,0 @@
---
name: openspec-bulk-archive-change
description: Archive multiple completed changes at once. Use when archiving several parallel changes.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Archive multiple completed changes in a single operation.
This skill allows you to batch-archive changes, handling spec conflicts intelligently by checking the codebase to determine what's actually implemented.
**Input**: None required (prompts for selection)
**Steps**
1. **Get active changes**
Run `openspec list --json` to get all active changes.
If no active changes exist, inform user and stop.
2. **Prompt for change selection**
Use **AskUserQuestion tool** with multi-select to let user choose changes:
- Show each change with its schema
- Include an option for "All changes"
- Allow any number of selections (1+ works, 2+ is the typical use case)
**IMPORTANT**: Do NOT auto-select. Always let the user choose.
3. **Batch validation - gather status for all selected changes**
For each selected change, collect:
a. **Artifact status** - Run `openspec status --change "<name>" --json`
- Parse `schemaName` and `artifacts` list
- Note which artifacts are `done` vs other states
b. **Task completion** - Read `openspec/changes/<name>/tasks.md`
- Count `- [ ]` (incomplete) vs `- [x]` (complete)
- If no tasks file exists, note as "No tasks"
c. **Delta specs** - Check `openspec/changes/<name>/specs/` directory
- List which capability specs exist
- For each, extract requirement names (lines matching `### Requirement: <name>`)
4. **Detect spec conflicts**
Build a map of `capability -> [changes that touch it]`:
```
auth -> [change-a, change-b] <- CONFLICT (2+ changes)
api -> [change-c] <- OK (only 1 change)
```
A conflict exists when 2+ selected changes have delta specs for the same capability.
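The capability map above can be sketched as (`find_conflicts` and the sample data are hypothetical):

```python
from collections import defaultdict

def find_conflicts(change_specs: dict[str, list[str]]) -> dict[str, list[str]]:
    """change_specs maps change name -> capabilities its delta specs touch."""
    by_capability: dict[str, list[str]] = defaultdict(list)
    for change, capabilities in change_specs.items():
        for cap in capabilities:
            by_capability[cap].append(change)
    # A conflict is any capability touched by 2+ selected changes
    return {cap: chs for cap, chs in by_capability.items() if len(chs) > 1}

selected = {"change-a": ["auth"], "change-b": ["auth"], "change-c": ["api"]}
print(find_conflicts(selected))  # {'auth': ['change-a', 'change-b']}
```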
5. **Resolve conflicts agentically**
**For each conflict**, investigate the codebase:
a. **Read the delta specs** from each conflicting change to understand what each claims to add/modify
b. **Search the codebase** for implementation evidence:
- Look for code implementing requirements from each delta spec
- Check for related files, functions, or tests
c. **Determine resolution**:
- If only one change is actually implemented -> sync that one's specs
- If both implemented -> apply in chronological order (older first, newer overwrites)
- If neither implemented -> skip spec sync, warn user
d. **Record resolution** for each conflict:
- Which change's specs to apply
- In what order (if both)
- Rationale (what was found in codebase)
6. **Show consolidated status table**
Display a table summarizing all changes:
```
| Change | Artifacts | Tasks | Specs | Conflicts | Status |
|---------------------|-----------|-------|---------|-----------|--------|
| schema-management | Done | 5/5 | 2 delta | None | Ready |
| project-config | Done | 3/3 | 1 delta | None | Ready |
| add-oauth | Done | 4/4 | 1 delta | auth (!) | Ready* |
| add-verify-skill | 1 left | 2/5 | None | None | Warn |
```
For conflicts, show the resolution:
```
* Conflict resolution:
- auth spec: Will apply add-oauth then add-jwt (both implemented, chronological order)
```
For incomplete changes, show warnings:
```
Warnings:
- add-verify-skill: 1 incomplete artifact, 3 incomplete tasks
```
7. **Confirm batch operation**
Use **AskUserQuestion tool** with a single confirmation:
- "Archive N changes?" with options based on status
- Options might include:
- "Archive all N changes"
- "Archive only N ready changes (skip incomplete)"
- "Cancel"
If there are incomplete changes, make clear they'll be archived with warnings.
8. **Execute archive for each confirmed change**
Process changes in the determined order (respecting conflict resolution):
a. **Sync specs** if delta specs exist:
- Use the openspec-sync-specs approach (agent-driven intelligent merge)
- For conflicts, apply in resolved order
- Track if sync was done
b. **Perform the archive**:
```bash
mkdir -p openspec/changes/archive
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
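The dated target path, together with the fail-but-continue guardrail, might look like (a sketch; `archive_target` is a hypothetical helper):

```python
from datetime import date
from pathlib import Path

def archive_target(changes_dir: Path, name: str, today: date) -> Path:
    """Build archive/YYYY-MM-DD-<name>; caller fails this change if it exists."""
    return changes_dir / "archive" / f"{today.isoformat()}-{name}"

target = archive_target(Path("openspec/changes"), "add-oauth", date(2026, 1, 19))
print(target.as_posix())  # openspec/changes/archive/2026-01-19-add-oauth
```

If `target.exists()`, record that change as failed and continue with the rest of the batch.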
c. **Track outcome** for each change:
- Success: archived successfully
- Failed: error during archive (record error)
- Skipped: user chose not to archive (if applicable)
9. **Display summary**
Show final results:
```
## Bulk Archive Complete
Archived 3 changes:
- schema-management-cli -> archive/2026-01-19-schema-management-cli/
- project-config -> archive/2026-01-19-project-config/
- add-oauth -> archive/2026-01-19-add-oauth/
Skipped 1 change:
- add-verify-skill (user chose not to archive incomplete)
Spec sync summary:
- 4 delta specs synced to main specs
- 1 conflict resolved (auth: applied both in chronological order)
```
If any failures:
```
Failed 1 change:
- some-change: Archive directory already exists
```
**Conflict Resolution Examples**
Example 1: Only one implemented
```
Conflict: specs/auth/spec.md touched by [add-oauth, add-jwt]
Checking add-oauth:
- Delta adds "OAuth Provider Integration" requirement
- Searching codebase... found src/auth/oauth.ts implementing OAuth flow
Checking add-jwt:
- Delta adds "JWT Token Handling" requirement
- Searching codebase... no JWT implementation found
Resolution: Only add-oauth is implemented. Will sync add-oauth specs only.
```
Example 2: Both implemented
```
Conflict: specs/api/spec.md touched by [add-rest-api, add-graphql]
Checking add-rest-api (created 2026-01-10):
- Delta adds "REST Endpoints" requirement
- Searching codebase... found src/api/rest.ts
Checking add-graphql (created 2026-01-15):
- Delta adds "GraphQL Schema" requirement
- Searching codebase... found src/api/graphql.ts
Resolution: Both implemented. Will apply add-rest-api specs first,
then add-graphql specs (chronological order, newer takes precedence).
```
**Output On Success**
```
## Bulk Archive Complete
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
- <change-2> -> archive/YYYY-MM-DD-<change-2>/
Spec sync summary:
- N delta specs synced to main specs
- No conflicts (or: M conflicts resolved)
```
**Output On Partial Success**
```
## Bulk Archive Complete (partial)
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
Skipped M changes:
- <change-2> (user chose not to archive incomplete)
Failed K changes:
- <change-3>: Archive directory already exists
```
**Output When No Changes**
```
## No Changes to Archive
No active changes found. Use `/opsx:new` to create a new change.
```
**Guardrails**
- Allow any number of changes (1+ is fine, 2+ is the typical use case)
- Always prompt for selection, never auto-select
- Detect spec conflicts early and resolve by checking codebase
- When both changes are implemented, apply specs in chronological order
- Skip spec sync only when implementation is missing (warn user)
- Show clear per-change status before confirming
- Use single confirmation for entire batch
- Track and report all outcomes (success/skip/fail)
- Preserve .openspec.yaml when moving to archive
- Archive directory target uses current date: YYYY-MM-DD-<name>
- If archive target exists, fail that change but continue with others

View File

@@ -1,118 +0,0 @@
---
name: openspec-continue-change
description: Continue working on an OpenSpec change by creating the next artifact. Use when the user wants to progress their change, create the next artifact, or continue their workflow.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Continue working on a change by creating the next artifact.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If it is vague or ambiguous, you MUST prompt the user with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes sorted by most recently modified. Then use the **AskUserQuestion tool** to let the user select which change to work on.
Present the top 3-4 most recently modified changes as options, showing:
- Change name
- Schema (from `schema` field if present, otherwise "spec-driven")
- Status (e.g., "0/5 tasks", "complete", "no tasks")
- How recently it was modified (from `lastModified` field)
Mark the most recently modified change as "(Recommended)" since it's likely what the user wants to continue.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
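Ordering and labeling the options could be sketched as (assumes `name` and `lastModified` fields as described above):

```python
def selection_options(changes: list[dict], limit: int = 4) -> list[str]:
    """Most recently modified first; the top entry is marked as recommended."""
    ordered = sorted(changes, key=lambda c: c["lastModified"], reverse=True)
    labels = [c["name"] for c in ordered[:limit]]
    if labels:
        labels[0] += " (Recommended)"
    return labels

changes = [
    {"name": "add-auth", "lastModified": "2026-01-18T09:00:00Z"},
    {"name": "fix-billing", "lastModified": "2026-01-19T12:00:00Z"},
]
print(selection_options(changes))  # ['fix-billing (Recommended)', 'add-auth']
```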
2. **Check current status**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand current state. The response includes:
- `schemaName`: The workflow schema being used (e.g., "spec-driven")
- `artifacts`: Array of artifacts with their status ("done", "ready", "blocked")
- `isComplete`: Boolean indicating if all artifacts are complete
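A sketch of reading those fields from the JSON (the sample payload is illustrative):

```python
import json

status_json = """{
  "schemaName": "spec-driven",
  "artifacts": [
    {"id": "proposal", "status": "done"},
    {"id": "specs", "status": "ready"},
    {"id": "tasks", "status": "blocked"}
  ],
  "isComplete": false
}"""

status = json.loads(status_json)
# First artifact whose dependencies are satisfied
next_ready = next((a["id"] for a in status["artifacts"] if a["status"] == "ready"), None)
print(status["schemaName"], next_ready)  # spec-driven specs
```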
3. **Act based on status**:
---
**If all artifacts are complete (`isComplete: true`)**:
- Congratulate the user
- Show final status including the schema used
- Suggest: "All artifacts created! You can now implement this change or archive it."
- STOP
---
**If artifacts are ready to create** (status shows artifacts with `status: "ready"`):
- Pick the FIRST artifact with `status: "ready"` from the status output
- Get its instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- Parse the JSON. The key fields are:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- **Create the artifact file**:
- Read any completed dependency files for context
- Use `template` as the structure - fill in its sections
- Apply `context` and `rules` as constraints when writing - but do NOT copy them into the file
- Write to the output path specified in instructions
- Show what was created and what's now unlocked
- STOP after creating ONE artifact
---
**If no artifacts are ready (all blocked)**:
- This shouldn't happen with a valid schema
- Show status and suggest checking for issues
4. **After creating an artifact, show progress**
```bash
openspec status --change "<name>"
```
**Output**
After each invocation, show:
- Which artifact was created
- Schema workflow being used
- Current progress (N/M complete)
- What artifacts are now unlocked
- Prompt: "Want to continue? Just ask me to continue or tell me what to do next."
**Artifact Creation Guidelines**
The artifact types and their purpose depend on the schema. Use the `instruction` field from the instructions output to understand what to create.
Common artifact patterns:
**spec-driven schema** (proposal → specs → design → tasks):
- **proposal.md**: Ask user about the change if not clear. Fill in Why, What Changes, Capabilities, Impact.
- The Capabilities section is critical - each capability listed will need a spec file.
- **specs/<capability>/spec.md**: Create one spec per capability listed in the proposal's Capabilities section (use the capability name, not the change name).
- **design.md**: Document technical decisions, architecture, and implementation approach.
- **tasks.md**: Break down implementation into checkboxed tasks.
For other schemas, follow the `instruction` field from the CLI output.
**Guardrails**
- Create ONE artifact per invocation
- Always read dependency artifacts before creating a new one
- Never skip artifacts or create out of order
- If context is unclear, ask the user before creating
- Verify the artifact file exists after writing before marking progress
- Use the schema's artifact sequence, don't assume specific artifact names
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output

View File

@@ -6,12 +6,12 @@ compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2" generatedBy: "1.2.0"
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first (e.g., start a change with `/opsx:new` or `/opsx:ff`). You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing. **IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
@@ -95,8 +95,7 @@ This tells you:
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create one?" - "This feels solid enough to start a change. Want me to create a proposal?"
→ Can transition to `/opsx:new` or `/opsx:ff`
- Or keep exploring - no pressure to formalize
### When a change exists
@@ -202,7 +201,7 @@ You: [reads codebase]
**User is stuck mid-implementation:**
```
User: /opsx:explore add-auth-system User: /opsx-explore add-auth-system
The OAuth integration is more complex than expected The OAuth integration is more complex than expected
You: [reads change artifacts]
@@ -252,7 +251,7 @@ You: That changes everything.
There's no required ending. Discovery might:
- **Flow into action**: "Ready to start? /opsx:new or /opsx:ff" - **Flow into a proposal**: "Ready to start? I can create a change proposal."
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
@@ -269,8 +268,7 @@ When it feels like things are crystallizing, you might summarize:
**Open questions**: [if any remain]
**Next steps** (if ready):
- Create a change: /opsx:new <name> - Create a change proposal
- Fast-forward to tasks: /opsx:ff <name>
- Keep exploring: just keep talking
```

View File

@@ -0,0 +1,281 @@
---
name: openspec-lock-consensus
description: Lock consensus - after an exploration discussion, lock the outcome into a formal consensus document. Prevents later proposals from drifting away from what was discussed.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: junhong
version: "1.1"
---
# Consensus Lock Skill
After an `/opsx:explore` discussion, use this skill to lock the discussion results in as a formal consensus. The consensus document is the baseline constraint for every subsequent artifact.
## Trigger
```
/opsx:lock <change-name>
```
Or, at the end of an exploration, the AI proactively offers:
> "The discussion seems clear now. Lock in the consensus?"
---
## Workflow
### Step 1: Collect the discussion points
Extract consensus from the conversation along the following four dimensions:
| Dimension | Meaning | Example |
|------|------|------|
| **In scope** | The explicit feature scope | "Support bulk import of IoT cards" |
| **Out of scope** | Explicitly excluded work | "No real-time sync; scheduled batches only" |
| **Key constraints** | Technical/business limits | "Must use Asynq async tasks" |
| **Acceptance criteria** | How completion is judged | "Import 1000 cards in < 30s" |
### Step 2: Confirm each dimension with Question_tool
**You MUST use Question_tool for structured confirmation**, one question per dimension:
```typescript
// Example: confirming "in scope"
Question_tool({
  questions: [{
    header: "Confirm: in scope",
    question: "Here is the collected feature scope; please confirm:\n\n" +
              "1. Feature A\n" +
              "2. Feature B\n" +
              "3. Feature C\n\n" +
              "Is it accurate and complete?",
    options: [
      { label: "Confirmed", description: "The list above is accurate and complete" },
      { label: "Needs additions", description: "Some features are missing" },
      { label: "Needs removals", description: "Something should not be included" }
    ],
    multiple: false
  }]
})
```
**If the user chooses "Needs additions" or "Needs removals"**
- The user supplies revision notes via custom input
- Update the list from that feedback, then confirm again with Question_tool
**Confirmation flow**
```
┌─────────────────────────────────────────────────────────────────────┐
│ Question_tool: confirm "in scope"                                   │
│ ├── user picks "Confirmed" → next dimension                         │
│ └── user picks anything else/custom → revise, then re-confirm       │
├─────────────────────────────────────────────────────────────────────┤
│ Question_tool: confirm "out of scope"                               │
│ ├── user picks "Confirmed" → next dimension                         │
│ └── user picks anything else/custom → revise, then re-confirm       │
├─────────────────────────────────────────────────────────────────────┤
│ Question_tool: confirm "key constraints"                            │
│ ├── user picks "Confirmed" → next dimension                         │
│ └── user picks anything else/custom → revise, then re-confirm       │
├─────────────────────────────────────────────────────────────────────┤
│ Question_tool: confirm "acceptance criteria"                        │
│ ├── user picks "Confirmed" → generate consensus.md                  │
│ └── user picks anything else/custom → revise, then re-confirm       │
└─────────────────────────────────────────────────────────────────────┘
```
### Step 3: Generate consensus.md
After every dimension is confirmed, create the file:
```bash
# Check that the change exists
openspec list --json
# If the change does not exist, create it first
# openspec new <change-name>
# Write consensus.md
```
**File path**: `openspec/changes/<change-name>/consensus.md`
---
## Question_tool usage conventions
### Question template for each dimension
**1. In scope**
```typescript
{
  header: "Confirm: in scope",
  question: "Here is the collected [feature scope]:\n\n" +
            items.map((item, i) => `${i+1}. ${item}`).join('\n') +
            "\n\nPlease confirm it is accurate and complete",
  options: [
    { label: "Confirmed", description: "The feature scope is accurate and complete" },
    { label: "Needs additions", description: "Some features are missing" },
    { label: "Needs removals", description: "Something should not be included" }
  ]
}
```
**2. Out of scope**
```typescript
{
  header: "Confirm: out of scope",
  question: "Here is the explicitly [excluded work]:\n\n" +
            items.map((item, i) => `${i+1}. ${item}`).join('\n') +
            "\n\nPlease confirm it is correct",
  options: [
    { label: "Confirmed", description: "The exclusions are correct" },
    { label: "Needs additions", description: "More items should be excluded" },
    { label: "Needs removals", description: "Some items should not be excluded" }
  ]
}
```
**3. Key constraints**
```typescript
{
  header: "Confirm: key constraints",
  question: "Here are the [key constraints]:\n\n" +
            items.map((item, i) => `${i+1}. ${item}`).join('\n') +
            "\n\nPlease confirm they are correct",
  options: [
    { label: "Confirmed", description: "The constraints are correct" },
    { label: "Needs additions", description: "There are further constraints" },
    { label: "Needs changes", description: "A constraint is described inaccurately" }
  ]
}
```
**4. Acceptance criteria**
```typescript
{
  header: "Confirm: acceptance criteria",
  question: "Here are the [acceptance criteria] (they must be measurable):\n\n" +
            items.map((item, i) => `${i+1}. ${item}`).join('\n') +
            "\n\nPlease confirm they are correct",
  options: [
    { label: "Confirmed", description: "The criteria are clear and measurable" },
    { label: "Needs additions", description: "There are further criteria" },
    { label: "Needs changes", description: "A criterion is unclear or unmeasurable" }
  ]
}
```
### Handling user feedback
When the user picks any option other than "Confirmed", or provides custom input:
1. Parse the user's revision notes
2. Update that dimension's content
3. Confirm the updated content with Question_tool again
4. Repeat until the user picks "Confirmed"
---
## consensus.md template
```markdown
# Consensus Document
**Change**: <change-name>
**Confirmed at**: <timestamp>
**Confirmed by**: user
---
## 1. In scope
- [x] Feature A (confirmed)
- [x] Feature B (confirmed)
- [x] Feature C (confirmed)
## 2. Out of scope
- [x] Exclusion A (confirmed)
- [x] Exclusion B (confirmed)
## 3. Key constraints
- [x] Technical constraint A (confirmed)
- [x] Business constraint B (confirmed)
## 4. Acceptance criteria
- [x] Criterion A (confirmed)
- [x] Criterion B (confirmed)
---
## Discussion background
<Brief summary of the core problem discussed and the direction chosen>
## Key decision log
| Decision | Choice | Rationale |
|--------|------|------|
| Decision 1 | Option A | Reason... |
| Decision 2 | Option B | Reason... |
---
**Sign-off**: the user confirmed each item above via Question_tool
```
---
## Binding to downstream steps
### When generating the proposal
When `/opsx:continue` generates the proposal, it **MUST**:
1. Read `consensus.md`
2. Ensure the proposal's Capabilities cover every item under "in scope"
3. Ensure the proposal contains nothing listed under "out of scope"
4. Ensure the proposal honors the "key constraints"
### Validation
If the proposal and the consensus disagree, emit a warning:
```
⚠️ Proposal validation warning:
In the consensus "in scope" but missing from the proposal:
- Feature C
In the consensus "out of scope" but present in the proposal:
- Exclusion A
Fix the proposal or update the consensus.
```
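That consistency check reduces to set comparisons. A sketch (the `in_scope`/`out_of_scope` keys and sample items are hypothetical):

```python
def validate_proposal(consensus: dict, proposal_capabilities: set[str]) -> dict:
    """consensus holds 'in_scope' and 'out_of_scope' item sets (hypothetical keys)."""
    return {
        # In the consensus scope but absent from the proposal
        "missing": sorted(consensus["in_scope"] - proposal_capabilities),
        # Explicitly excluded, yet present in the proposal
        "forbidden": sorted(consensus["out_of_scope"] & proposal_capabilities),
    }

consensus = {"in_scope": {"feature-a", "feature-c"}, "out_of_scope": {"exclusion-a"}}
report = validate_proposal(consensus, {"feature-a", "exclusion-a"})
print(report)  # {'missing': ['feature-c'], 'forbidden': ['exclusion-a']}
```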
---
## Guardrails
- **Always use Question_tool** - never confirm via plain text
- **Confirm dimension by dimension** - keep the four dimensions separate; never merge them
- **Never skip confirmation** - every dimension needs an explicit user confirmation
- **Never improvise** - only collect what the discussion explicitly covered
- **Avoid vague wording** - terms like "try to", "possibly", and "consider" must be made concrete
- **Acceptance criteria must be measurable** - avoid unverifiable criteria such as "performance should be good"
---
## Relationship to other skills
| Skill | Relationship |
|-------|------|
| `openspec-explore` | Trigger lock once exploration wraps up |
| `openspec-new-change` | Lock triggers new (if the change does not exist yet) |
| `openspec-continue-change` | Reads the consensus for validation when generating the proposal |
| `openspec-generate-acceptance-tests` | Generates test skeletons from the consensus acceptance criteria |

View File

@@ -1,74 +0,0 @@
---
name: openspec-new-change
description: Start a new OpenSpec change using the experimental artifact workflow. Use when the user wants to create a new feature, fix, or modification with a structured step-by-step approach.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Start a new change using the experimental artifact-driven approach.
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
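The name derivation above can be sketched as a small shell helper (the `kebab` function is illustrative; shortening long words, e.g. "authentication" → `auth`, still takes judgment rather than a script):

```sh
# Derive a kebab-case change name from a free-form description.
kebab() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | tr -cs 'a-z0-9' '-' \
    | sed 's/^-//; s/-$//'
}

echo "$(kebab "add user authentication")"   # prints: add-user-authentication
```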
2. **Determine the workflow schema**
Use the default schema (omit `--schema`) unless the user explicitly requests a different workflow.
**Use a different schema only if the user mentions:**
- A specific schema name → use `--schema <name>`
- "show workflows" or "what workflows" → run `openspec schemas --json` and let them choose
**Otherwise**: Omit `--schema` to use the default.
3. **Create the change directory**
```bash
openspec new change "<name>"
```
Add `--schema <name>` only if the user requested a specific workflow.
This creates a scaffolded change at `openspec/changes/<name>/` with the selected schema.
4. **Show the artifact status**
```bash
openspec status --change "<name>"
```
This shows which artifacts need to be created and which are ready (dependencies satisfied).
5. **Get instructions for the first artifact**
The first artifact depends on the schema (e.g., `proposal` for spec-driven).
Check the status output to find the first artifact with status "ready".
```bash
openspec instructions <first-artifact-id> --change "<name>"
```
This outputs the template and context for creating the first artifact.
6. **STOP and wait for user direction**
**Output**
After completing the steps, summarize:
- Change name and location
- Schema/workflow being used and its artifact sequence
- Current status (0/N artifacts complete)
- The template for the first artifact
- Prompt: "Ready to create the first artifact? Just describe what this change is about and I'll draft it, or ask me to continue."
**Guardrails**
- Do NOT create any artifacts yet - just show the instructions
- Do NOT advance beyond showing the first artifact template
- If the name is invalid (not kebab-case), ask for a valid name
- If a change with that name already exists, suggest continuing that change instead
- Pass --schema if using a non-default workflow


@@ -1,529 +0,0 @@
---
name: openspec-onboard
description: Guided onboarding for OpenSpec - walk through a complete workflow cycle with narration and real codebase work.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Guide the user through their first complete OpenSpec workflow cycle. This is a teaching experience—you'll do real work in their codebase while explaining each step.
---
## Preflight
Before starting, check if OpenSpec is initialized:
```bash
openspec status --json 2>&1 || echo "NOT_INITIALIZED"
```
**If not initialized:**
> OpenSpec isn't set up in this project yet. Run `openspec init` first, then come back to `/opsx:onboard`.
Stop here if not initialized.
---
## Phase 1: Welcome
Display:
```
## Welcome to OpenSpec!
I'll walk you through a complete change cycle—from idea to implementation—using a real task in your codebase. Along the way, you'll learn the workflow by doing it.
**What we'll do:**
1. Pick a small, real task in your codebase
2. Explore the problem briefly
3. Create a change (the container for our work)
4. Build the artifacts: proposal → specs → design → tasks
5. Implement the tasks
6. Archive the completed change
**Time:** ~15-20 minutes
Let's start by finding something to work on.
```
---
## Phase 2: Task Selection
### Codebase Analysis
Scan the codebase for small improvement opportunities. Look for:
1. **TODO/FIXME comments** - Search for `TODO`, `FIXME`, `HACK`, `XXX` in code files
2. **Missing error handling** - `catch` blocks that swallow errors, risky operations without try-catch
3. **Functions without tests** - Cross-reference `src/` with test directories
4. **Type issues** - `any` types in TypeScript files (`: any`, `as any`)
5. **Debug artifacts** - `console.log`, `console.debug`, `debugger` statements in non-debug code
6. **Missing validation** - User input handlers without validation
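Several of the scans above can be sketched with grep; the sample file and paths below are illustrative only:

```sh
# Create a throwaway sample file to scan (illustrative only).
mkdir -p /tmp/scan-demo/src
cat > /tmp/scan-demo/src/sample.ts <<'EOF'
// TODO: handle timeout
const x: any = load();
console.log("debug", x);
EOF

grep -rnE 'TODO|FIXME|HACK|XXX' /tmp/scan-demo/src            # 1. markers
grep -rnE ': any|as any' --include='*.ts' /tmp/scan-demo/src  # 4. loose types
grep -rnE 'console\.(log|debug)|debugger' /tmp/scan-demo/src  # 5. debug artifacts
```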
Also check recent git activity:
```bash
git log --oneline -10 2>/dev/null || echo "No git history"
```
### Present Suggestions
From your analysis, present 3-4 specific suggestions:
```
## Task Suggestions
Based on scanning your codebase, here are some good starter tasks:
**1. [Most promising task]**
Location: `src/path/to/file.ts:42`
Scope: ~1-2 files, ~20-30 lines
Why it's good: [brief reason]
**2. [Second task]**
Location: `src/another/file.ts`
Scope: ~1 file, ~15 lines
Why it's good: [brief reason]
**3. [Third task]**
Location: [location]
Scope: [estimate]
Why it's good: [brief reason]
**4. Something else?**
Tell me what you'd like to work on.
Which task interests you? (Pick a number or describe your own)
```
**If nothing found:** Fall back to asking what the user wants to build:
> I didn't find obvious quick wins in your codebase. What's something small you've been meaning to add or fix?
### Scope Guardrail
If the user picks or describes something too large (major feature, multi-day work):
```
That's a valuable task, but it's probably larger than ideal for your first OpenSpec run-through.
For learning the workflow, smaller is better—it lets you see the full cycle without getting stuck in implementation details.
**Options:**
1. **Slice it smaller** - What's the smallest useful piece of [their task]? Maybe just [specific slice]?
2. **Pick something else** - One of the other suggestions, or a different small task?
3. **Do it anyway** - If you really want to tackle this, we can. Just know it'll take longer.
What would you prefer?
```
Let the user override if they insist—this is a soft guardrail.
---
## Phase 3: Explore Demo
Once a task is selected, briefly demonstrate explore mode:
```
Before we create a change, let me quickly show you **explore mode**—it's how you think through problems before committing to a direction.
```
Spend 1-2 minutes investigating the relevant code:
- Read the file(s) involved
- Draw a quick ASCII diagram if it helps
- Note any considerations
```
## Quick Exploration
[Your brief analysis—what you found, any considerations]
┌─────────────────────────────────────────┐
│ [Optional: ASCII diagram if helpful] │
└─────────────────────────────────────────┘
Explore mode (`/opsx:explore`) is for this kind of thinking—investigating before implementing. You can use it anytime you need to think through a problem.
Now let's create a change to hold our work.
```
**PAUSE** - Wait for user acknowledgment before proceeding.
---
## Phase 4: Create the Change
**EXPLAIN:**
```
## Creating a Change
A "change" in OpenSpec is a container for all the thinking and planning around a piece of work. It lives in `openspec/changes/<name>/` and holds your artifacts—proposal, specs, design, tasks.
Let me create one for our task.
```
**DO:** Create the change with a derived kebab-case name:
```bash
openspec new change "<derived-name>"
```
**SHOW:**
```
Created: `openspec/changes/<name>/`
The folder structure:
```
openspec/changes/<name>/
├── proposal.md ← Why we're doing this (empty, we'll fill it)
├── design.md ← How we'll build it (empty)
├── specs/ ← Detailed requirements (empty)
└── tasks.md ← Implementation checklist (empty)
```
Now let's fill in the first artifact—the proposal.
```
---
## Phase 5: Proposal
**EXPLAIN:**
```
## The Proposal
The proposal captures **why** we're making this change and **what** it involves at a high level. It's the "elevator pitch" for the work.
I'll draft one based on our task.
```
**DO:** Draft the proposal content (don't save yet):
```
Here's a draft proposal:
---
## Why
[1-2 sentences explaining the problem/opportunity]
## What Changes
[Bullet points of what will be different]
## Capabilities
### New Capabilities
- `<capability-name>`: [brief description]
### Modified Capabilities
<!-- If modifying existing behavior -->
## Impact
- `src/path/to/file.ts`: [what changes]
- [other files if applicable]
---
Does this capture the intent? I can adjust before we save it.
```
**PAUSE** - Wait for user approval/feedback.
After approval, save the proposal:
```bash
openspec instructions proposal --change "<name>" --json
```
Then write the content to `openspec/changes/<name>/proposal.md`.
```
Proposal saved. This is your "why" document—you can always come back and refine it as understanding evolves.
Next up: specs.
```
---
## Phase 6: Specs
**EXPLAIN:**
```
## Specs
Specs define **what** we're building in precise, testable terms. They use a requirement/scenario format that makes expected behavior crystal clear.
For a small task like this, we might only need one spec file.
```
**DO:** Create the spec file:
```bash
mkdir -p openspec/changes/<name>/specs/<capability-name>
```
Draft the spec content:
```
Here's the spec:
---
## ADDED Requirements
### Requirement: <Name>
<Description of what the system should do>
#### Scenario: <Scenario name>
- **WHEN** <trigger condition>
- **THEN** <expected outcome>
- **AND** <additional outcome if needed>
---
This format—WHEN/THEN/AND—makes requirements testable. You can literally read them as test cases.
```
Save to `openspec/changes/<name>/specs/<capability>/spec.md`.
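As a sketch of that reading, one scenario can be turned directly into an executable check (`do_x` is a hypothetical stand-in for the behavior under test):

```sh
# Scenario: Basic case
#   WHEN user does X  →  THEN system does Y
do_x() { echo "Y"; }          # hypothetical system behavior

result="$(do_x)"              # WHEN user does X
if [ "$result" = "Y" ]; then  # THEN system does Y
  echo "scenario passed"
fi
```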
---
## Phase 7: Design
**EXPLAIN:**
```
## Design
The design captures **how** we'll build it—technical decisions, tradeoffs, approach.
For small changes, this might be brief. That's fine—not every change needs deep design discussion.
```
**DO:** Draft design.md:
```
Here's the design:
---
## Context
[Brief context about the current state]
## Goals / Non-Goals
**Goals:**
- [What we're trying to achieve]
**Non-Goals:**
- [What's explicitly out of scope]
## Decisions
### Decision 1: [Key decision]
[Explanation of approach and rationale]
---
For a small task, this captures the key decisions without over-engineering.
```
Save to `openspec/changes/<name>/design.md`.
---
## Phase 8: Tasks
**EXPLAIN:**
```
## Tasks
Finally, we break the work into implementation tasks—checkboxes that drive the apply phase.
These should be small, clear, and in logical order.
```
**DO:** Generate tasks based on specs and design:
```
Here are the implementation tasks:
---
## 1. [Category or file]
- [ ] 1.1 [Specific task]
- [ ] 1.2 [Specific task]
## 2. Verify
- [ ] 2.1 [Verification step]
---
Each checkbox becomes a unit of work in the apply phase. Ready to implement?
```
**PAUSE** - Wait for user to confirm they're ready to implement.
Save to `openspec/changes/<name>/tasks.md`.
---
## Phase 9: Apply (Implementation)
**EXPLAIN:**
```
## Implementation
Now we implement each task, checking them off as we go. I'll announce each one and occasionally note how the specs/design informed the approach.
```
**DO:** For each task:
1. Announce: "Working on task N: [description]"
2. Implement the change in the codebase
3. Reference specs/design naturally: "The spec says X, so I'm doing Y"
4. Mark complete in tasks.md: `- [ ]` → `- [x]`
5. Brief status: "✓ Task N complete"
Keep narration light—don't over-explain every line of code.
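Step 4's checkbox flip can be sketched as follows (the tasks file and sed pattern are illustrative):

```sh
# Sample tasks file (illustrative).
cat > /tmp/tasks-demo.md <<'EOF'
- [ ] 1.1 Add validation
- [ ] 1.2 Add tests
EOF

# Flip the checkbox for task 1.1 only.
sed -i.bak 's/^- \[ \] 1\.1/- [x] 1.1/' /tmp/tasks-demo.md
cat /tmp/tasks-demo.md
```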
After all tasks:
```
## Implementation Complete
All tasks done:
- [x] Task 1
- [x] Task 2
- [x] ...
The change is implemented! One more step—let's archive it.
```
---
## Phase 10: Archive
**EXPLAIN:**
```
## Archiving
When a change is complete, we archive it. This moves it from `openspec/changes/` to `openspec/changes/archive/YYYY-MM-DD-<name>/`.
Archived changes become your project's decision history—you can always find them later to understand why something was built a certain way.
```
**DO:**
```bash
openspec archive "<name>"
```
**SHOW:**
```
Archived to: `openspec/changes/archive/YYYY-MM-DD-<name>/`
The change is now part of your project's history. The code is in your codebase, the decision record is preserved.
```
---
## Phase 11: Recap & Next Steps
```
## Congratulations!
You just completed a full OpenSpec cycle:
1. **Explore** - Thought through the problem
2. **New** - Created a change container
3. **Proposal** - Captured WHY
4. **Specs** - Defined WHAT in detail
5. **Design** - Decided HOW
6. **Tasks** - Broke it into steps
7. **Apply** - Implemented the work
8. **Archive** - Preserved the record
This same rhythm works for any size change—a small fix or a major feature.
---
## Command Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems before/during work |
| `/opsx:new` | Start a new change, step through artifacts |
| `/opsx:ff` | Fast-forward: create all artifacts at once |
| `/opsx:continue` | Continue working on an existing change |
| `/opsx:apply` | Implement tasks from a change |
| `/opsx:verify` | Verify implementation matches artifacts |
| `/opsx:archive` | Archive a completed change |
---
## What's Next?
Try `/opsx:new` or `/opsx:ff` on something you actually want to build. You've got the rhythm now!
```
---
## Graceful Exit Handling
### User wants to stop mid-way
If the user says they need to stop, want to pause, or seem disengaged:
```
No problem! Your change is saved at `openspec/changes/<name>/`.
To pick up where we left off later:
- `/opsx:continue <name>` - Resume artifact creation
- `/opsx:apply <name>` - Jump to implementation (if tasks exist)
The work won't be lost. Come back whenever you're ready.
```
Exit gracefully without pressure.
### User just wants command reference
If the user says they just want to see the commands or skip the tutorial:
```
## OpenSpec Quick Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems (no code changes) |
| `/opsx:new <name>` | Start a new change, step by step |
| `/opsx:ff <name>` | Fast-forward: all artifacts at once |
| `/opsx:continue <name>` | Continue an existing change |
| `/opsx:apply <name>` | Implement tasks |
| `/opsx:verify <name>` | Verify implementation |
| `/opsx:archive <name>` | Archive when done |
Try `/opsx:new` to start your first change, or `/opsx:ff` if you want to move fast.
```
Exit gracefully.
---
## Guardrails
- **Follow the EXPLAIN → DO → SHOW → PAUSE pattern** at key transitions (after explore, after proposal draft, after tasks, after archive)
- **Keep narration light** during implementation—teach without lecturing
- **Don't skip phases** even if the change is small—the goal is teaching the workflow
- **Pause for acknowledgment** at marked points, but don't over-pause
- **Handle exits gracefully**—never pressure the user to continue
- **Use real codebase tasks**—don't simulate or use fake examples
- **Adjust scope gently**—guide toward smaller tasks but respect user choice


@@ -1,15 +1,24 @@
 ---
-name: openspec-ff-change
-description: Fast-forward through OpenSpec artifact creation. Use when the user wants to quickly create all artifacts needed for implementation without stepping through each one individually.
+name: openspec-propose
+description: Propose a new change with all artifacts generated in one step. Use when the user wants to quickly describe what they want to build and get a complete proposal with design, specs, and tasks ready for implementation.
 license: MIT
 compatibility: Requires openspec CLI.
 metadata:
 author: openspec
 version: "1.0"
-generatedBy: "1.0.2"
+generatedBy: "1.2.0"
 ---
-Fast-forward through artifact creation - generate everything needed to start implementation in one go.
+Propose a new change - create the change and generate all artifacts in one step.
+I'll create a change with artifacts:
+- proposal.md (what & why)
+- design.md (how)
+- tasks.md (implementation steps)
+When ready to implement, run /opsx-apply
+---
 **Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
@@ -28,7 +37,7 @@ Fast-forward through artifact creation - generate everything needed to start imp
 ```bash
 openspec new change "<name>"
 ```
-This creates a scaffolded change at `openspec/changes/<name>/`.
+This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.
 3. **Get the artifact build order**
 ```bash
@@ -59,7 +68,7 @@ Fast-forward through artifact creation - generate everything needed to start imp
 - Read any completed dependency files for context
 - Create the artifact file using `template` as the structure
 - Apply `context` and `rules` as constraints - but do NOT copy them into the file
 - Show brief progress: "Created <artifact-id>"
 b. **Continue until all `applyRequires` artifacts are complete**
 - After creating each artifact, re-run `openspec status --change "<name>" --json`
@@ -81,7 +90,7 @@ After completing all artifacts, summarize:
 - Change name and location
 - List of artifacts created with brief descriptions
 - What's ready: "All artifacts created! Ready for implementation."
-- Prompt: "Run `/opsx:apply` or ask me to implement to start working on the tasks."
+- Prompt: "Run `/opsx-apply` or ask me to implement to start working on the tasks."
 **Artifact Creation Guidelines**
@@ -97,5 +106,5 @@ After completing all artifacts, summarize:
 - Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
 - Always read dependency artifacts before creating a new one
 - If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
-- If a change with that name already exists, suggest continuing that change instead
+- If a change with that name already exists, ask if user wants to continue it or create a new one
 - Verify each artifact file exists after writing before proceeding to next


@@ -1,138 +0,0 @@
---
name: openspec-sync-specs
description: Sync delta specs from a change to main specs. Use when the user wants to update main specs with changes from a delta spec, without archiving the change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Sync delta specs from a change to main specs.
This is an **agent-driven** operation - you will read delta specs and directly edit main specs to apply the changes. This allows intelligent merging (e.g., adding a scenario without copying the entire requirement).
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have delta specs (under `specs/` directory).
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Find delta specs**
Look for delta spec files in `openspec/changes/<name>/specs/*/spec.md`.
Each delta spec file contains sections like:
- `## ADDED Requirements` - New requirements to add
- `## MODIFIED Requirements` - Changes to existing requirements
- `## REMOVED Requirements` - Requirements to remove
- `## RENAMED Requirements` - Requirements to rename (FROM:/TO: format)
If no delta specs found, inform user and stop.
3. **For each delta spec, apply changes to main specs**
For each capability with a delta spec at `openspec/changes/<name>/specs/<capability>/spec.md`:
a. **Read the delta spec** to understand the intended changes
b. **Read the main spec** at `openspec/specs/<capability>/spec.md` (may not exist yet)
c. **Apply changes intelligently**:
**ADDED Requirements:**
- If requirement doesn't exist in main spec → add it
- If requirement already exists → update it to match (treat as implicit MODIFIED)
**MODIFIED Requirements:**
- Find the requirement in main spec
- Apply the changes - this can be:
- Adding new scenarios (don't need to copy existing ones)
- Modifying existing scenarios
- Changing the requirement description
- Preserve scenarios/content not mentioned in the delta
**REMOVED Requirements:**
- Remove the entire requirement block from main spec
**RENAMED Requirements:**
- Find the FROM requirement, rename to TO
d. **Create new main spec** if capability doesn't exist yet:
- Create `openspec/specs/<capability>/spec.md`
- Add Purpose section (can be brief, mark as TBD)
- Add Requirements section with the ADDED requirements
4. **Show summary**
After applying all changes, summarize:
- Which capabilities were updated
- What changes were made (requirements added/modified/removed/renamed)
**Delta Spec Format Reference**
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL do something new.
#### Scenario: Basic case
- **WHEN** user does X
- **THEN** system does Y
## MODIFIED Requirements
### Requirement: Existing Feature
#### Scenario: New scenario to add
- **WHEN** user does A
- **THEN** system does B
## REMOVED Requirements
### Requirement: Deprecated Feature
## RENAMED Requirements
- FROM: `### Requirement: Old Name`
- TO: `### Requirement: New Name`
```
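Locating each requirement heading and the delta section it belongs to can be sketched as (file path illustrative):

```sh
# Sample delta spec (illustrative).
cat > /tmp/delta-spec.md <<'EOF'
## ADDED Requirements
### Requirement: New Feature
## MODIFIED Requirements
### Requirement: Existing Feature
## REMOVED Requirements
### Requirement: Deprecated Feature
EOF

# Pair each requirement with the delta section it appears under.
awk '
  /^## (ADDED|MODIFIED|REMOVED|RENAMED) Requirements/ { section = $2 }
  /^### Requirement:/ { sub(/^### Requirement: /, ""); print section ": " $0 }
' /tmp/delta-spec.md
```

The actual merge into main specs stays agent-driven as described above; this only inventories the intended changes.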
**Key Principle: Intelligent Merging**
Unlike programmatic merging, you can apply **partial updates**:
- To add a scenario, just include that scenario under MODIFIED - don't copy existing scenarios
- The delta represents *intent*, not a wholesale replacement
- Use your judgment to merge changes sensibly
**Output On Success**
```
## Specs Synced: <change-name>
Updated main specs:
**<capability-1>**:
- Added requirement: "New Feature"
- Modified requirement: "Existing Feature" (added 1 scenario)
**<capability-2>**:
- Created new spec file
- Added requirement: "Another Feature"
Main specs are now updated. The change remains active - archive when implementation is complete.
```
**Guardrails**
- Read both delta and main specs before making changes
- Preserve existing content not mentioned in delta
- If something is unclear, ask for clarification
- Show what you're changing as you go
- The operation should be idempotent - running twice should give same result

View File

@@ -1,168 +0,0 @@
---
name: openspec-verify-change
description: Verify implementation matches change artifacts. Use when the user wants to validate that implementation is complete, correct, and coherent before archiving.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Verify that an implementation matches the change artifacts (specs, tasks, design).
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have implementation tasks (tasks artifact exists).
Include the schema used for each change if available.
Mark changes with incomplete tasks as "(In Progress)".
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifacts exist for this change
3. **Get the change directory and load artifacts**
```bash
openspec instructions apply --change "<name>" --json
```
This returns the change directory and context files. Read all available artifacts from `contextFiles`.
4. **Initialize verification report structure**
Create a report structure with three dimensions:
- **Completeness**: Track tasks and spec coverage
- **Correctness**: Track requirement implementation and scenario coverage
- **Coherence**: Track design adherence and pattern consistency
Each dimension can have CRITICAL, WARNING, or SUGGESTION issues.
5. **Verify Completeness**
**Task Completion**:
- If tasks.md exists in contextFiles, read it
- Parse checkboxes: `- [ ]` (incomplete) vs `- [x]` (complete)
- Count complete vs total tasks
- If incomplete tasks exist:
- Add CRITICAL issue for each incomplete task
- Recommendation: "Complete task: <description>" or "Mark as done if already implemented"
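The checkbox parse can be sketched as (sample file illustrative):

```sh
# Sample tasks file (illustrative).
cat > /tmp/verify-tasks.md <<'EOF'
- [x] 1.1 Implement handler
- [x] 1.2 Add store query
- [ ] 2.1 Verify with curl
EOF

total=$(grep -cE '^- \[( |x)\]' /tmp/verify-tasks.md)
done_count=$(grep -cE '^- \[x\]' /tmp/verify-tasks.md)
echo "$done_count/$total tasks complete"   # prints: 2/3 tasks complete
```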
**Spec Coverage**:
- If delta specs exist in `openspec/changes/<name>/specs/`:
- Extract all requirements (marked with "### Requirement:")
- For each requirement:
- Search codebase for keywords related to the requirement
- Assess if implementation likely exists
- If requirements appear unimplemented:
- Add CRITICAL issue: "Requirement not found: <requirement name>"
- Recommendation: "Implement requirement X: <description>"
6. **Verify Correctness**
**Requirement Implementation Mapping**:
- For each requirement from delta specs:
- Search codebase for implementation evidence
- If found, note file paths and line ranges
- Assess if implementation matches requirement intent
- If divergence detected:
- Add WARNING: "Implementation may diverge from spec: <details>"
- Recommendation: "Review <file>:<lines> against requirement X"
**Scenario Coverage**:
- For each scenario in delta specs (marked with "#### Scenario:"):
- Check if conditions are handled in code
- Check if tests exist covering the scenario
- If scenario appears uncovered:
- Add WARNING: "Scenario not covered: <scenario name>"
- Recommendation: "Add test or implementation for scenario: <description>"
7. **Verify Coherence**
**Design Adherence**:
- If design.md exists in contextFiles:
- Extract key decisions (look for sections like "Decision:", "Approach:", "Architecture:")
- Verify implementation follows those decisions
- If contradiction detected:
- Add WARNING: "Design decision not followed: <decision>"
- Recommendation: "Update implementation or revise design.md to match reality"
- If no design.md: Skip design adherence check, note "No design.md to verify against"
**Code Pattern Consistency**:
- Review new code for consistency with project patterns
- Check file naming, directory structure, coding style
- If significant deviations found:
- Add SUGGESTION: "Code pattern deviation: <details>"
- Recommendation: "Consider following project pattern: <example>"
8. **Generate Verification Report**
**Summary Scorecard**:
```
## Verification Report: <change-name>
### Summary
| Dimension | Status |
|--------------|------------------|
| Completeness | X/Y tasks, N reqs|
| Correctness | M/N reqs covered |
| Coherence | Followed/Issues |
```
**Issues by Priority**:
1. **CRITICAL** (Must fix before archive):
- Incomplete tasks
- Missing requirement implementations
- Each with specific, actionable recommendation
2. **WARNING** (Should fix):
- Spec/design divergences
- Missing scenario coverage
- Each with specific recommendation
3. **SUGGESTION** (Nice to fix):
- Pattern inconsistencies
- Minor improvements
- Each with specific recommendation
**Final Assessment**:
- If CRITICAL issues: "X critical issue(s) found. Fix before archiving."
- If only warnings: "No critical issues. Y warning(s) to consider. Ready for archive (with noted improvements)."
- If all clear: "All checks passed. Ready for archive."
**Verification Heuristics**
- **Completeness**: Focus on objective checklist items (checkboxes, requirements list)
- **Correctness**: Use keyword search, file path analysis, reasonable inference - don't require perfect certainty
- **Coherence**: Look for glaring inconsistencies, don't nitpick style
- **False Positives**: When uncertain, prefer SUGGESTION over WARNING, WARNING over CRITICAL
- **Actionability**: Every issue must have a specific recommendation with file/line references where applicable
**Graceful Degradation**
- If only tasks.md exists: verify task completion only, skip spec/design checks
- If tasks + specs exist: verify completeness and correctness, skip design
- If full artifacts: verify all three dimensions
- Always note which checks were skipped and why
**Output Format**
Use clear markdown with:
- Table for summary scorecard
- Grouped lists for issues (CRITICAL/WARNING/SUGGESTION)
- Code references in format: `file.ts:123`
- Specific, actionable recommendations
- No vague suggestions like "consider reviewing"


@@ -0,0 +1,265 @@
---
name: systematic-debugging
description: 遇到任何 bug、异常行为、报错时必须使用。在提出任何修复方案之前强制执行根因分析流程。适用于 API 报错、数据异常、业务逻辑错误、性能问题等所有技术问题。
license: MIT
metadata:
author: junhong
version: "1.0"
source: "adapted from obra/superpowers systematic-debugging"
---
# 系统化调试方法论
## 铁律
```
没有找到根因,禁止提出任何修复方案。
```
改之前先搞懂为什么坏了。猜测不是调试,验证假设才是。
---
## 什么时候用
**所有技术问题都用这个流程**
- API 接口报错(4xx / 5xx)
- 业务数据异常(金额不对、状态流转错误)
- 性能问题(接口慢、数据库慢查询)
- 异步任务失败(Asynq 任务报错/卡住)
- 构建失败、启动失败
**尤其是以下场景**
- 时间紧迫(越急越不能瞎猜)
- "很简单的问题"(简单问题也有根因)
- 已经试了一次修复但没解决
- 不完全理解为什么出问题
---
## 四阶段流程
必须按顺序完成每个阶段,不可跳过。
### 阶段一:根因调查
**这是最重要的阶段,占整个调试时间的 60%。没完成本阶段,禁止进入阶段二。**
#### 1. 仔细阅读错误信息
- 完整阅读 stack trace不要跳过
- 注意行号、文件路径、错误码
- 很多时候答案就在错误信息里
- 检查 `logs/app.log``logs/access.log` 中的上下文
#### 2. 稳定复现
- 能稳定触发吗?精确的请求参数是什么?
- 用 curl 或 Postman 复现,记录完整的请求和响应
- 不能复现 → 收集更多数据(检查日志、Redis 状态、数据库记录),**不要瞎猜**
#### 3. 检查最近改动
- `git diff` / `git log --oneline -10` 看最近改了什么
- 新加了什么依赖?改了什么配置?改了什么 SQL?
- 对比改动前后的行为差异
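上面的检查可以在一个一次性仓库里演示(仓库与文件内容均为示意):

```sh
# 在临时仓库里演示"检查最近改动"
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo 'timeout: 30' > config.yaml
git add config.yaml
git commit -qm "add config"
echo 'timeout: 60' > config.yaml    # 模拟一处尚未提交的改动

git log --oneline -5                # 最近提交了什么
git diff -- config.yaml             # 当前有哪些未提交的改动
```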
#### 4. 逐层诊断(针对本项目架构)
本项目有明确的分层架构,问题一定出在某一层的边界:
```
请求 → Fiber Middleware → Handler → Service → Store → PostgreSQL/Redis
↑ ↑ ↑ ↑ ↑
认证/限流 参数解析 业务逻辑 SQL/缓存 数据本身
```
**在每个层边界确认数据是否正确**
```go
// Handler 层 — 请求进来的参数对不对?
logger.Info("Handler 收到请求",
	zap.Any("params", req),
	zap.String("request_id", requestID),
)

// Service 层 — 传给业务逻辑的数据对不对?
logger.Info("Service 开始处理",
	zap.Uint("user_id", userID),
	zap.Any("input", input),
)

// Store 层 — SQL 查询/写入的数据对不对?
// 开启 GORM Debug 模式查看实际 SQL
db.Debug().Where(...).Find(&result)

// Redis 层 — 缓存的数据对不对?
// 用 redis-cli 直接检查 key 的值
// GET auth:token:{token}
// GET sim:status:{iccid}
```
**跑一次 → 看日志 → 找到断裂的那一层 → 再深入该层排查。**
#### 5. 追踪数据流
如果错误深藏在调用链中:
- 坏数据从哪来的?
- 谁调用了这个函数,传了什么参数?
- 一直往上追,直到找到数据变坏的源头
- **修源头,不修症状**
---
### 阶段二:模式分析
**找到参照物,对比差异。**
#### 1. 找能用的参照
项目里有没有类似的、能正常工作的代码?
| 如果问题在... | 参照物在... |
|-------------|-----------|
| Handler 参数解析 | 其他 Handler 的相同模式 |
| Service 业务逻辑 | 同模块其他方法的实现 |
| Store SQL 查询 | 同 Store 文件中类似的查询 |
| Redis 操作 | `pkg/constants/redis.go` 中的 Key 定义 |
| 异步任务 | `internal/task/` 中其他任务处理器 |
| GORM Callback | `pkg/database/` 中的 callback 实现 |
#### 2. 逐行对比
完整阅读参考代码,不要跳读。列出每一处差异。
#### 3. 不要假设"这个不重要"
小差异经常是 bug 的根因:
- 字段标签 `gorm:"column:xxx"` 拼写不对
- `errors.New()` 用了错误的错误码
- Redis Key 函数参数传反了
- Context 里的 UserID 没取到(中间件没配)
---
### 阶段三:假设和验证
**科学方法:一次只验证一个假设。**
#### 1. 形成单一假设
明确写下:
> "我认为根因是 X因为 Y。验证方法是 Z。"
#### 2. 最小化验证
- 只改一个地方
- 一次只验证一个变量
- 不要同时修多处
#### 3. 验证结果
- 假设成立 → 进入阶段四
- 假设不成立 → 回到阶段一,用新信息重新分析
- **绝对不能在失败的修复上再叠加修复**
#### 4. 三次失败 → 停下来
如果连续 3 次假设都不成立:
**这不是 bug是架构问题。**
- 停止一切修复尝试
- 整理已知信息
- 向用户说明情况,讨论是否需要重构
- 不要再试第 4 次
---
### 阶段四:实施修复
**确认根因后,一次性修好。**
#### 1. 修根因,不修症状
```
❌ 症状修复:在 Handler 里加个 if 把坏数据过滤掉
✅ 根因修复:修 Service 层生成坏数据的逻辑
```
#### 2. 一次只改一个地方
- 不搞"顺手优化"
- 不在修 bug 的同时重构代码
- 修完 bug 就停
#### 3. 验证修复
- `go build ./...` 编译通过
- `lsp_diagnostics` 无新增错误
- 用原来复现 bug 的请求再跑一次,确认修好了
- 用 PostgreSQL MCP 工具检查数据库中的数据状态
#### 4. 清理诊断代码
- 删除阶段一加的临时诊断日志(除非它们本身就该保留)
- 确保没有 `db.Debug()` 残留在代码里
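清理检查可以用 grep 快速完成(示例文件与路径均为示意):

```sh
# 构造一个含诊断残留的示例文件(仅作演示)
mkdir -p /tmp/fix-demo
cat > /tmp/fix-demo/store.go <<'EOF'
func (s *Store) Find(id uint) {
	s.db.Debug().Where("id = ?", id).Find(&s.result) // 阶段一加的诊断
}
EOF

# 有任何输出,说明阶段一的诊断代码没清理干净
grep -rn 'db\.Debug()' /tmp/fix-demo
```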
---
## 本项目常见调试场景速查
| 场景 | 首先检查 |
|------|---------|
| API 返回 401 | `logs/access.log` 中该请求的 token → Redis 中 `auth:token:{token}` 是否存在 |
| API 返回 403 | 用户类型是什么 → GORM Callback 自动过滤的条件对不对 → `middleware.CanManageShop()` 的参数 |
| 数据查不到 | GORM 数据权限过滤有没有生效 → `shop_id` / `enterprise_id` 是否正确 → 是否需要 `SkipDataPermission` |
| 金额/余额不对 | 乐观锁 version 字段 → `RowsAffected` 是否为 0 → 并发场景下的锁竞争 |
| 状态流转错误 | `WHERE status = expected` 条件更新 → 状态机是否有遗漏的路径 |
| 异步任务不执行 | Asynq Dashboard → `RedisTaskLockKey` 有没有残留 → Worker 日志 |
| 异步任务重复执行 | `RedisTaskLockKey` 的 TTL → 任务幂等性检查 |
| 分佣计算错误 | 佣金类型(差价/一次性) → 套餐级别的佣金率 → 设备级防重复分佣 |
| 套餐激活异常 | 卡状态 → 实名状态 → 主套餐排队逻辑 → 加油包绑定关系 |
| Redis 缓存不一致 | Key 的 TTL → 缓存更新时机 → 是否有手动 `Del` 清除 |
| 微信支付回调失败 | 签名验证 → 幂等性处理 → 回调 URL 是否可达 |
| GORM 查询慢 | `db.Debug()` 看实际 SQL → 是否 N+1 → 是否缺少索引 |
---
## 红线规则
如果你发现自己在想以下任何一条,**立刻停下来,回到阶段一**
| 想法 | 为什么是错的 |
|------|------------|
| "先快速修一下,回头再查" | 快速修 = 猜测。猜测 = 浪费时间。 |
| "试试改这个看看行不行" | 一次只验证一个假设,不是随机改。 |
| "大概是 X 的问题,我直接改了" | "大概"不是根因。先验证再改。 |
| "这个很简单,不用走流程" | 简单问题走流程只需要 5 分钟。不走流程可能浪费 2 小时。 |
| "我不完全理解但这应该行" | 不理解 = 没找到根因。回阶段一。 |
| "再试一次"(已经失败 2 次) | 3 次失败 = 架构问题。停下来讨论。 |
| "同时改这几个地方应该能修好" | 改多处 = 无法确认哪个是根因。一次只改一处。 |
---
## 常见借口和真相
| 借口 | 真相 |
|------|------|
| "问题很简单,不需要走流程" | 简单问题也有根因。走流程对简单问题只花 5 分钟。 |
| "太紧急了,没时间分析" | 系统化调试比乱猜快 3-5 倍。越急越要走流程。 |
| "先改了验证一下" | 这叫猜测,不叫验证。先确认根因再改。 |
| "我看到问题了,直接修" | 看到症状 ≠ 理解根因。症状修复是技术债。 |
| "改了好几个地方,反正能用了" | 不知道哪个改动修的,下次还会出问题。 |
---
## 快速参考
| 阶段 | 核心动作 | 完成标准 |
|------|---------|---------|
| **一、根因调查** | 读错误日志、复现、检查改动、逐层诊断、追踪数据流 | 能说清楚"因为 X 所以 Y" |
| **二、模式分析** | 找参照代码、逐行对比、列出差异 | 知道正确的应该长什么样 |
| **三、假设验证** | 写下假设、最小改动、单变量验证 | 假设被证实或推翻 |
| **四、实施修复** | 修根因、编译检查、请求验证、清理诊断代码 | bug 消失,无新增问题 |

AGENTS.md

@@ -17,6 +17,7 @@
| 测试接口/验证数据 | `db-validation` | PostgreSQL MCP 使用方法和验证示例 |
| 数据库迁移 | `db-migration` | 迁移命令、文件规范、执行流程、失败处理 |
| 维护规范文档 | `doc-management` | 规范文档流程和维护规则 |
| 调试 bug / 排查异常 | `systematic-debugging` | 四阶段根因分析流程、逐层诊断、场景速查表 |
### ⚠️ 新增 Handler 时必须同步更新文档生成器
@@ -37,6 +38,7 @@ handlers := &bootstrap.Handlers{
## 语言要求
**必须遵守:**
- 永远用中文交互
- 注释必须使用中文
- 文档必须使用中文
@@ -62,6 +64,7 @@ handlers := &bootstrap.Handlers{
| 缓存 | Redis 6.0+ |
**禁止:**
- 直接使用 `database/sql`(必须通过 GORM
- 使用 `net/http` 替代 Fiber
- 使用 `encoding/json` 替代 sonic除非必要
@@ -82,27 +85,179 @@ Handler → Service → Store → Model
## 核心原则
### 错误处理
- 所有错误必须在 `pkg/errors/` 中定义
- 使用统一错误码系统
- Handler 层通过返回 `error` 传递给全局 ErrorHandler
#### 错误报错规范(必须遵守)
- Handler 层禁止直接返回/拼接底层错误信息给客户端(例如 `"参数验证失败: "+err.Error()``err.Error()`
- 参数校验失败:对外统一返回 `errors.New(errors.CodeInvalidParam)`(详细校验错误写日志)
- Service 层禁止对外返回 `fmt.Errorf(...)`,必须返回 `errors.New(...)``errors.Wrap(...)`
- 约定用法:`errors.New(code[, msg])``errors.Wrap(code, err[, msg])`
### 响应格式
- 所有 API 响应使用 `pkg/response/` 的统一格式
- 格式: `{code, msg, data, timestamp}`
### 常量管理
- 所有常量定义在 `pkg/constants/`
- Redis key 使用函数生成: `Redis{Module}{Purpose}Key(params...)`
- 禁止硬编码字符串和 magic numbers
- **必须为所有常量添加中文注释**
### 注释规范
#### 基本原则
- **所有注释使用中文**(与语言要求一致)
- **导出符号必须有文档注释**(包、函数、方法、类型、接口、常量、变量)
- **复杂逻辑必须有实现注释**(解释"为什么",而不是"做了什么"
- **禁止废话注释**(不要用注释复述代码本身)
- **修改代码时必须同步更新注释**(过时的注释比没有注释更有害)
#### 包注释
每个包的入口文件(通常是主文件或 `doc.go`)必须有包注释:
```go
// Package account 提供账号管理的业务逻辑服务
// 包含账号创建、修改、删除、权限分配等功能
package account
```
#### 结构体注释
所有导出结构体必须有文档注释,说明该结构体代表什么:
```go
// Service 账号业务服务
// 负责账号的 CRUD、角色分配、密码管理等业务逻辑
type Service struct {
store *Store
auditService AuditServiceInterface
}
```
#### 接口注释
导出接口必须注释接口用途,每个方法必须说明契约:
```go
// PermissionChecker 权限检查器接口
// 用于查询用户的权限列表
type PermissionChecker interface {
// CheckPermission 检查用户是否拥有指定权限
// userID: 用户ID
// permCode: 权限编码(格式: module:action
// platform: 端口类型 (all/web/h5)
CheckPermission(ctx context.Context, userID uint, permCode string, platform string) (bool, error)
}
```
#### 函数和方法注释
导出函数/方法必须以函数名开头,说明功能:
```go
// Create 创建账号
// POST /api/admin/accounts
func (h *AccountHandler) Create(c *fiber.Ctx) error {
```
**复杂方法**(超过 30 行或包含复杂业务逻辑)必须额外说明实现思路:
```go
// ActivateByRealname 首次实名激活套餐
// 当用户完成实名认证后,自动激活处于"囤货待实名"状态的套餐:
// 1. 查找该卡所有 status=3待实名激活的套餐
// 2. 按创建时间排序第一个主套餐立即激活status=1
// 3. 其余主套餐进入排队状态status=4
// 4. 加油包如果绑定了已激活的主套餐则一并激活
func (s *UsageService) ActivateByRealname(ctx context.Context, cardID uint) error {
```
#### 未导出符号的注释
未导出(小写)的函数/方法:
- **简单逻辑**< 15 行):可以不加注释
- **复杂逻辑**(≥ 15 行)或 **非显而易见的算法**:必须加注释
```go
// buildPermissionTree 递归构建权限树
// 采用 map 索引 + 单次遍历算法,时间复杂度 O(n)
func (s *Service) buildPermissionTree(permissions []*model.Permission) []*dto.PermissionTreeNode {
```
#### 内联注释(实现逻辑注释)
以下场景**必须**添加内联注释:
| 场景 | 要求 |
|------|------|
| 复杂条件判断 | 解释判断的业务含义 |
| 多步骤业务流程 | 用编号注释标明每一步 |
| 非显而易见的设计决策 | 解释"为什么这样做"而不是"做了什么" |
| 缓存/事务/并发处理 | 说明策略和原因 |
| 临时方案/兼容逻辑 | 标注 TODO 或说明背景 |
**✅ 好的内联注释(解释为什么)**
```go
// 使用 Redis 分布式锁防止并发重复创建,锁超时 10 秒
if !s.acquireLock(ctx, lockKey, 10*time.Second) {
return errors.New(errors.CodeTooManyRequests, "操作过于频繁,请稍后重试")
}
// 先冻结佣金再扣款,保证资金安全(失败时佣金自动解冻)
if err := s.freezeCommission(ctx, tx, orderID); err != nil {
return err
}
```
**❌ 废话注释(禁止)**
```go
// 获取用户ID ← 禁止:代码本身已经很清楚
userID := middleware.GetUserIDFromContext(ctx)
// 创建账号 ← 禁止:变量名已说明意图
account := &model.Account{}
// 返回错误 ← 禁止return err 不需要注释
return err
```
#### 常量和枚举注释
分组常量必须有组注释,每个值必须有行内注释:
```go
// 用户类型常量
const (
UserTypeSuperAdmin = 1 // 超级管理员
UserTypePlatform = 2 // 平台用户
UserTypeAgent = 3 // 代理账号
UserTypeEnterprise = 4 // 企业账号
)
```
#### Handler 层特殊要求
Handler 方法的注释必须包含 HTTP 方法和路径:
```go
// Create 创建账号
// POST /api/admin/accounts
func (h *AccountHandler) Create(c *fiber.Ctx) error {
```
### Go 代码风格
- 使用 `gofmt` 格式化
- 遵循 [Effective Go](https://go.dev/doc/effective_go)
- 包名: 简短、小写、单数、无下划线
@@ -111,6 +266,7 @@ Handler → Service → Store → Model
## 数据库设计
**核心规则:**
- ❌ 禁止建立外键约束
- ❌ 禁止使用 GORM 关联关系标签foreignKey、hasMany、belongsTo
- ✅ 关联通过存储 ID 字段手动维护
@@ -119,6 +275,7 @@ Handler → Service → Store → Model
## Go 惯用法 vs Java 风格
### ✅ Go 风格(推荐)
- 扁平化包结构(最多 2-3 层)
- 小而专注的接口1-3 个方法)
- 直接访问导出字段(不用 getter/setter
@@ -126,96 +283,44 @@ Handler → Service → Store → Model
- 显式错误返回和检查
### ❌ Java 风格(禁止)
- 过度抽象(不必要的接口、工厂)
- Getter/Setter 方法
- 深层继承层次
- 异常处理panic/recover
- 类型前缀IService、AbstractBase、ServiceImpl
-## 测试要求
-- 核心业务逻辑Service 层)测试覆盖率 ≥ 90%
-- 所有 API 端点必须有集成测试
-- 使用 table-driven tests
-- 单元测试 < 100ms集成测试 < 1s
-### ⚠️ 测试真实性原则(严格遵守)
-**测试必须真正验证功能,禁止绕过核心逻辑:**
-| 规则 | 说明 |
-|------|------|
-| ❌ 禁止传递 nil 绕过依赖 | 如果功能依赖外部服务(如对象存储、第三方 API测试必须验证该依赖的调用 |
-| ❌ 禁止只测试部分流程 | 如果功能包含 A → B → C 三步,不能只测试 B 而跳过 A 和 C |
-| ❌ 禁止声称"测试通过"但未验证核心逻辑 | 测试通过必须意味着功能真正可用 |
-| ❌ 禁止擅自使用 Mock | 尽量使用真实服务进行集成测试,如需使用 Mock 必须先询问用户并获得同意 |
-| ✅ 必须验证端到端流程 | 新增功能必须有完整的集成测试覆盖整个调用链 |
-| ✅ 缺少配置时必须询问 | 如果测试需要的配置(如 API Key、环境变量缺失必须询问用户而非跳过测试 |
-**反面案例**
-```go
-// ❌ 错误:传递 nil 绕过 storageService只测试了 processImport
-handler := NewIotCardImportHandler(db, redis, store1, store2, nil, logger)
-result := handler.processImport(ctx, task) // 跳过了 downloadAndParseCSV
-// ✅ 正确:使用真实服务测试完整流程
-handler := NewIotCardImportHandler(db, redis, store1, store2, realStorageService, logger)
-handler.HandleIotCardImport(ctx, asynqTask) // 测试完整流程,验证真实上传/下载
-```
-**测试超时 = 生产超时**
-- 集成测试超时意味着生产环境也可能超时
-- 发现超时必须排查原因,不能简单跳过或增加超时时间
-### 测试连接管理(必读)
-**详细规范**: [docs/testing/test-connection-guide.md](docs/testing/test-connection-guide.md)
-**⚠️ 运行测试必须先加载环境变量**
-```bash
-# ✅ 正确
-source .env.local && go test -v ./internal/service/xxx/...
-# ❌ 错误(会因缺少配置而失败)
-go test -v ./internal/service/xxx/...
-```
-**标准模板**:
-```go
-func TestXxx(t *testing.T) {
-    tx := testutils.NewTestTransaction(t)
-    rdb := testutils.GetTestRedis(t)
-    testutils.CleanTestRedisKeys(t, rdb)
-    store := postgres.NewXxxStore(tx, rdb)
-    // 测试代码...
-}
-```
-**核心函数**:
-- `NewTestTransaction(t)`: 创建测试事务,自动回滚
-- `GetTestRedis(t)`: 获取全局 Redis 连接
-- `CleanTestRedisKeys(t, rdb)`: 自动清理测试 Redis 键
-**集成测试环境**HTTP API 测试):
-```go
-func TestAPI_Create(t *testing.T) {
-    env := testutils.NewIntegrationTestEnv(t)
-    t.Run("成功创建", func(t *testing.T) {
-        resp, err := env.AsSuperAdmin().Request("POST", "/api/admin/resources", jsonBody)
-        require.NoError(t, err)
-        assert.Equal(t, 200, resp.StatusCode)
-    })
-}
-```
-- `NewIntegrationTestEnv(t)`: 创建完整测试环境事务、Redis、App、Token
-- `AsSuperAdmin()`: 以超级管理员身份请求
-- `AsUser(account)`: 以指定账号身份请求
-**禁止使用(已移除)**:
-- ❌ `SetupTestDB` / `TeardownTestDB` / `SetupTestDBWithStore`
+## ⚠️ 测试禁令(强制执行)
+**本项目不使用任何形式的自动化测试代码。**
+**绝对禁止:**
+- ❌ **禁止编写单元测试** - 无论任何场景
+- ❌ **禁止编写集成测试** - 无论任何场景
+- ❌ **禁止编写验收测试** - 无论任何场景
+- ❌ **禁止编写流程测试** - 无论任何场景
+- ❌ **禁止编写 E2E 测试** - 无论任何场景
+- ❌ **禁止创建 `*_test.go` 文件** - 除非用户明确要求
+- ❌ **禁止在任务中包含测试相关工作** - 规划和实现均不涉及测试
+- ❌ **禁止在文档中提及测试要求** - 规范、设计文档均不讨论测试
+**唯一例外:**
+- ✅ **仅当用户明确要求**时才编写测试代码
+- ✅ 用户必须主动说明"请写测试"或"需要测试"
+**原因说明:**
+- 业务系统的正确性通过人工验证和生产环境监控保证
+- 测试代码的维护成本高于价值
+- 快速迭代优先于测试覆盖率
+**替代方案:**
+- 使用 PostgreSQL MCP 工具手动验证数据
+- 使用 Postman/curl 手动测试 API
+- 依赖生产环境日志和监控发现问题
## 性能要求 ## 性能要求
@@ -254,35 +359,40 @@ func TestAPI_Create(t *testing.T) {
3. ✅ 使用统一错误处理
4. ✅ 常量定义在 pkg/constants/
5. ✅ Go 惯用法(非 Java 风格)
-6. ✅ 包含测试计划
-7. ✅ 性能考虑
-8. ✅ 文档更新计划
-9. ✅ 中文优先
+6. ✅ 性能考虑
+7. ✅ 文档更新计划
+8. ✅ 中文优先
## Code Review 检查清单
### 错误处理
- [ ] Service 层无 `fmt.Errorf` 对外返回
- [ ] Handler 层参数校验不泄露细节
- [ ] 错误码使用正确4xx vs 5xx
- [ ] 错误日志完整(包含上下文)
### 代码质量
- [ ] 遵循 Handler → Service → Store → Model 分层
- [ ] 函数长度 ≤ 100 行(核心逻辑 ≤ 50 行)
- [ ] 常量定义在 `pkg/constants/`
- [ ] 使用 Go 惯用法(非 Java 风格)
### 测试覆盖
- [ ] 核心业务逻辑测试覆盖率 ≥ 90%
- [ ] 所有 API 端点有集成测试
- [ ] 测试验证真实功能(不绕过核心逻辑)
### 文档和注释
- [ ] 所有注释使用中文
- [ ] 导出函数/类型有文档注释
- [ ] API 路径注释与真实路由一致
### 幂等性
- [ ] 创建类写操作有 Redis 业务键防重
- [ ] 状态变更使用条件更新(`WHERE status = expected`
- [ ] 余额/库存变更使用乐观锁version 字段)
- [ ] 分布式锁使用 `defer` 确保释放
- [ ] Redis Key 定义在 `pkg/constants/redis.go`
### 越权防护规范
**适用场景**:任何涉及跨用户、跨店铺、跨企业的资源访问
@@ -292,6 +402,7 @@ func TestAPI_Create(t *testing.T) {
1. **路由层中间件**(粗粒度拦截)
   - 用于明显的权限限制(如企业账号禁止访问账号管理)
   - 示例:
```go
group.Use(func(c *fiber.Ctx) error {
    userType := middleware.GetUserTypeFromContext(c.UserContext())
@@ -315,9 +426,118 @@ func TestAPI_Create(t *testing.T) {
- 无需手动调用
**统一错误返回**
- 越权访问统一返回:`errors.New(errors.CodeForbidden, "无权限操作该资源或资源不存在")`
- 不区分"不存在"和"无权限",防止信息泄露
### 幂等性规范
**适用场景**:任何可能被重复触发的写操作
#### 必须实现幂等性的场景
| 场景 | 原因 | 实现策略 |
|------|------|----------|
| 订单创建 | 用户双击、网络重试 | Redis 业务键防重 + 分布式锁 |
| 支付回调 | 第三方平台重复通知 | 状态条件更新(`WHERE status = pending` |
| 钱包扣款/充值 | 并发请求、消息重投 | 乐观锁version 字段)+ 状态条件更新 |
| 套餐激活 | 异步任务重试 | Redis 分布式锁 + 已存在记录检查 |
| 异步任务处理 | Asynq 自动重试 | Redis 任务锁(`RedisTaskLockKey` |
| 佣金计算 | 支付成功后触发 | 幂等任务入队 + 状态检查 |
#### 不需要幂等性的场景
- 纯查询接口GET 请求天然幂等)
- 管理后台的配置修改(低频操作,人为确认)
- 日志记录、审计记录(允许重复写入)
#### 实现策略选择
根据场景特征选择合适的策略:
**策略 1状态条件更新首选适用于有明确状态流转的操作**
```go
// 通过 WHERE 条件确保只有预期状态才能更新RowsAffected == 0 说明已被处理
result := tx.Model(&model.Order{}).
Where("id = ? AND payment_status = ?", orderID, model.PaymentStatusPending).
Updates(map[string]any{"payment_status": model.PaymentStatusPaid})
if result.RowsAffected == 0 {
// 已被处理,检查当前状态决定返回成功还是错误
}
```
**策略 2Redis 业务键防重 + 分布式锁(适用于创建类操作,无状态可依赖)**
```go
// 业务键 = 唯一标识请求意图的组合字段
// 示例order:create:{buyer_type}:{buyer_id}:{carrier_type}:{carrier_id}:{sorted_package_ids}
idempotencyKey := buildBusinessKey(...)
redisKey := constants.RedisXxxIdempotencyKey(idempotencyKey)
// 第 1 层Redis GET 快速检测
val, err := s.redis.Get(ctx, redisKey).Result()
if err == nil && val != "" {
return existingResult // 已创建,直接返回
}
// 第 2 层:分布式锁防止并发
lockKey := constants.RedisXxxLockKey(resourceType, resourceID)
locked, _ := s.redis.SetNX(ctx, lockKey, time.Now().String(), lockTTL).Result()
if !locked {
return errors.New(errors.CodeTooManyRequests, "操作进行中,请勿重复提交")
}
defer s.redis.Del(ctx, lockKey)
// 第 3 层:加锁后二次检测
val, err = s.redis.Get(ctx, redisKey).Result()
if err == nil && val != "" {
return existingResult
}
// 执行业务逻辑...
// 成功后标记
s.redis.Set(ctx, redisKey, resultID, idempotencyTTL)
```
**策略 3乐观锁适用于余额、库存等数值更新**
```go
result := tx.Model(&model.Wallet{}).
Where("id = ? AND balance >= ? AND version = ?", walletID, amount, currentVersion).
Updates(map[string]any{
"balance": gorm.Expr("balance - ?", amount),
"version": gorm.Expr("version + 1"),
})
if result.RowsAffected == 0 {
return errors.New(errors.CodeInsufficientBalance, "余额不足或并发冲突")
}
```
#### Redis Key 命名规范
幂等性相关的 Redis Key 统一在 `pkg/constants/redis.go` 定义:
```go
// 幂等性检测键Redis{Module}IdempotencyKey — TTL 通常 3~5 分钟
func RedisOrderIdempotencyKey(businessKey string) string
// 分布式锁键Redis{Module}{Action}LockKey — TTL 通常 10~30 秒
func RedisOrderCreateLockKey(carrierType string, carrierID uint) string
```
#### 现有幂等性实现参考
| 模块 | 文件 | 策略 |
|------|------|------|
| 订单创建 | `internal/service/order/service.go` → `Create()` | 策略 2Redis 业务键 + 分布式锁 |
| 钱包支付 | `internal/service/order/service.go` → `WalletPay()` | 策略 1状态条件更新 |
| 支付回调 | `internal/service/order/service.go` → `HandlePaymentCallback()` | 策略 1状态条件更新 |
| 套餐激活 | `internal/service/package/activation_service.go` → `ActivateQueuedPackage()` | 策略 2简化版Redis 分布式锁 |
| 钱包扣款 | `internal/service/order/service.go` → `WalletPay()` | 策略 3乐观锁version 字段) |
### 审计日志规范
**适用场景**:任何敏感操作(账号管理、权限变更、数据删除等)
@@ -325,6 +545,7 @@ func TestAPI_Create(t *testing.T) {
**使用方式**
1. **Service 层集成审计日志**
```go
type Service struct {
    store *Store
@@ -388,3 +609,18 @@ func TestAPI_Create(t *testing.T) {
> "任务 3.1 在当前实现中可能不需要,是否可以跳过?"
**详细规范和 OpenSpec 工作流请查看**: `@/openspec/AGENTS.md`
# English Learning Mode
The user is learning English through practical use. Apply these rules in every conversation:
1. **Always respond in Chinese** — regardless of whether the user writes in English or Chinese.
2. **When the user writes in English**, append a one-line correction at the end of your response in this format:
→ `[natural version of what they wrote]`
No explanation needed — just the corrected phrase.
3. **When the user mixes Chinese into English** (e.g., "I want to 实现 dark mode"), translate the Chinese word/phrase inline and continue naturally. Do not make a big deal of it.
4. **Never interrupt the flow** to give grammar lessons. Corrections are silent and brief — the user's focus is on the task, not the language.

CLAUDE.md

@@ -17,6 +17,7 @@
| 测试接口/验证数据 | `db-validation` | PostgreSQL MCP 使用方法和验证示例 |
| 数据库迁移 | `db-migration` | 迁移命令、文件规范、执行流程、失败处理 |
| 维护规范文档 | `doc-management` | 规范文档流程和维护规则 |
| 调试 bug / 排查异常 | `systematic-debugging` | 四阶段根因分析流程、逐层诊断、场景速查表 |
### ⚠️ 新增 Handler 时必须同步更新文档生成器
@@ -37,6 +38,7 @@ handlers := &bootstrap.Handlers{
## 语言要求
**必须遵守:**
- 永远用中文交互
- 注释必须使用中文
- 文档必须使用中文
@@ -62,6 +64,7 @@ handlers := &bootstrap.Handlers{
| 缓存 | Redis 6.0+ |
**禁止:**
- 直接使用 `database/sql`(必须通过 GORM
- 使用 `net/http` 替代 Fiber
- 使用 `encoding/json` 替代 sonic除非必要
@@ -82,21 +85,179 @@ Handler → Service → Store → Model
## 核心原则
### 错误处理
- 所有错误必须在 `pkg/errors/` 中定义
- 使用统一错误码系统
- Handler 层通过返回 `error` 传递给全局 ErrorHandler
#### 错误报错规范(必须遵守)
- Handler 层禁止直接返回/拼接底层错误信息给客户端(例如 `"参数验证失败: "+err.Error()``err.Error()`
- 参数校验失败:对外统一返回 `errors.New(errors.CodeInvalidParam)`(详细校验错误写日志)
- Service 层禁止对外返回 `fmt.Errorf(...)`,必须返回 `errors.New(...)``errors.Wrap(...)`
- 约定用法:`errors.New(code[, msg])``errors.Wrap(code, err[, msg])`
### 响应格式
- 所有 API 响应使用 `pkg/response/` 的统一格式
-- 格式: `{code, message, data, timestamp}`
+- 格式: `{code, msg, data, timestamp}`
### 常量管理
- 所有常量定义在 `pkg/constants/`
- Redis key 使用函数生成: `Redis{Module}{Purpose}Key(params...)`
- 禁止硬编码字符串和 magic numbers
- **必须为所有常量添加中文注释**
### 注释规范
#### 基本原则
- **所有注释使用中文**(与语言要求一致)
- **导出符号必须有文档注释**(包、函数、方法、类型、接口、常量、变量)
- **复杂逻辑必须有实现注释**(解释"为什么",而不是"做了什么"
- **禁止废话注释**(不要用注释复述代码本身)
- **修改代码时必须同步更新注释**(过时的注释比没有注释更有害)
#### 包注释
每个包的入口文件(通常是主文件或 `doc.go`)必须有包注释:
```go
// Package account 提供账号管理的业务逻辑服务
// 包含账号创建、修改、删除、权限分配等功能
package account
```
#### 结构体注释
所有导出结构体必须有文档注释,说明该结构体代表什么:
```go
// Service 账号业务服务
// 负责账号的 CRUD、角色分配、密码管理等业务逻辑
type Service struct {
store *Store
auditService AuditServiceInterface
}
```
#### 接口注释
导出接口必须注释接口用途,每个方法必须说明契约:
```go
// PermissionChecker 权限检查器接口
// 用于查询用户的权限列表
type PermissionChecker interface {
// CheckPermission 检查用户是否拥有指定权限
// userID: 用户ID
// permCode: 权限编码(格式: module:action
// platform: 端口类型 (all/web/h5)
CheckPermission(ctx context.Context, userID uint, permCode string, platform string) (bool, error)
}
```
#### 函数和方法注释
导出函数/方法必须以函数名开头,说明功能:
```go
// Create 创建账号
// POST /api/admin/accounts
func (h *AccountHandler) Create(c *fiber.Ctx) error {
```
**复杂方法**(超过 30 行或包含复杂业务逻辑)必须额外说明实现思路:
```go
// ActivateByRealname 首次实名激活套餐
// 当用户完成实名认证后,自动激活处于"囤货待实名"状态的套餐:
// 1. 查找该卡所有 status=3待实名激活的套餐
// 2. 按创建时间排序第一个主套餐立即激活status=1
// 3. 其余主套餐进入排队状态status=4
// 4. 加油包如果绑定了已激活的主套餐则一并激活
func (s *UsageService) ActivateByRealname(ctx context.Context, cardID uint) error {
```
#### 未导出符号的注释
未导出(小写)的函数/方法:
- **简单逻辑**< 15 行):可以不加注释
- **复杂逻辑**(≥ 15 行)或 **非显而易见的算法**:必须加注释
```go
// buildPermissionTree 递归构建权限树
// 采用 map 索引 + 单次遍历算法,时间复杂度 O(n)
func (s *Service) buildPermissionTree(permissions []*model.Permission) []*dto.PermissionTreeNode {
```
#### 内联注释(实现逻辑注释)
以下场景**必须**添加内联注释:
| 场景 | 要求 |
|------|------|
| 复杂条件判断 | 解释判断的业务含义 |
| 多步骤业务流程 | 用编号注释标明每一步 |
| 非显而易见的设计决策 | 解释"为什么这样做"而不是"做了什么" |
| 缓存/事务/并发处理 | 说明策略和原因 |
| 临时方案/兼容逻辑 | 标注 TODO 或说明背景 |
**✅ 好的内联注释(解释为什么)**
```go
// 使用 Redis 分布式锁防止并发重复创建,锁超时 10 秒
if !s.acquireLock(ctx, lockKey, 10*time.Second) {
return errors.New(errors.CodeTooManyRequests, "操作过于频繁,请稍后重试")
}
// 先冻结佣金再扣款,保证资金安全(失败时佣金自动解冻)
if err := s.freezeCommission(ctx, tx, orderID); err != nil {
return err
}
```
**❌ 废话注释(禁止)**
```go
// 获取用户ID ← 禁止:代码本身已经很清楚
userID := middleware.GetUserIDFromContext(ctx)
// 创建账号 ← 禁止:变量名已说明意图
account := &model.Account{}
// 返回错误 ← 禁止return err 不需要注释
return err
```
#### 常量和枚举注释
分组常量必须有组注释,每个值必须有行内注释:
```go
// 用户类型常量
const (
UserTypeSuperAdmin = 1 // 超级管理员
UserTypePlatform = 2 // 平台用户
UserTypeAgent = 3 // 代理账号
UserTypeEnterprise = 4 // 企业账号
)
```
#### Handler 层特殊要求
Handler 方法的注释必须包含 HTTP 方法和路径:
```go
// Create 创建账号
// POST /api/admin/accounts
func (h *AccountHandler) Create(c *fiber.Ctx) error {
```
### Go 代码风格
- 使用 `gofmt` 格式化
- 遵循 [Effective Go](https://go.dev/doc/effective_go)
- 包名: 简短、小写、单数、无下划线
@@ -105,6 +266,7 @@ Handler → Service → Store → Model
## 数据库设计
**核心规则:**
- ❌ 禁止建立外键约束
- ❌ 禁止使用 GORM 关联关系标签foreignKey、hasMany、belongsTo
- ✅ 关联通过存储 ID 字段手动维护
@@ -113,6 +275,7 @@ Handler → Service → Store → Model
## Go 惯用法 vs Java 风格
### ✅ Go 风格(推荐)
- 扁平化包结构(最多 2-3 层)
- 小而专注的接口1-3 个方法)
- 直接访问导出字段(不用 getter/setter
@@ -120,70 +283,44 @@ Handler → Service → Store → Model
- 显式错误返回和检查
### ❌ Java 风格(禁止)
- 过度抽象(不必要的接口、工厂)
- Getter/Setter 方法
- 深层继承层次
- 异常处理panic/recover
- 类型前缀IService、AbstractBase、ServiceImpl
-## 测试要求
-- 核心业务逻辑Service 层)测试覆盖率 ≥ 90%
-- 所有 API 端点必须有集成测试
-- 使用 table-driven tests
-- 单元测试 < 100ms集成测试 < 1s
-### ⚠️ 测试真实性原则(严格遵守)
-**测试必须真正验证功能,禁止绕过核心逻辑:**
-| 规则 | 说明 |
-|------|------|
-| ❌ 禁止传递 nil 绕过依赖 | 如果功能依赖外部服务(如对象存储、第三方 API测试必须验证该依赖的调用 |
-| ❌ 禁止只测试部分流程 | 如果功能包含 A → B → C 三步,不能只测试 B 而跳过 A 和 C |
-| ❌ 禁止声称"测试通过"但未验证核心逻辑 | 测试通过必须意味着功能真正可用 |
-| ❌ 禁止擅自使用 Mock | 尽量使用真实服务进行集成测试,如需使用 Mock 必须先询问用户并获得同意 |
-| ✅ 必须验证端到端流程 | 新增功能必须有完整的集成测试覆盖整个调用链 |
-| ✅ 缺少配置时必须询问 | 如果测试需要的配置(如 API Key、环境变量缺失必须询问用户而非跳过测试 |
-**反面案例**
-```go
-// ❌ 错误:传递 nil 绕过 storageService只测试了 processImport
-handler := NewIotCardImportHandler(db, redis, store1, store2, nil, logger)
-result := handler.processImport(ctx, task) // 跳过了 downloadAndParseCSV
-// ✅ 正确:使用真实服务测试完整流程
-handler := NewIotCardImportHandler(db, redis, store1, store2, realStorageService, logger)
-handler.HandleIotCardImport(ctx, asynqTask) // 测试完整流程,验证真实上传/下载
-```
-**测试超时 = 生产超时**
-- 集成测试超时意味着生产环境也可能超时
-- 发现超时必须排查原因,不能简单跳过或增加超时时间
-### 测试连接管理(必读)
-**详细规范**: [docs/testing/test-connection-guide.md](docs/testing/test-connection-guide.md)
-**标准模板**:
-```go
-func TestXxx(t *testing.T) {
-    tx := testutils.NewTestTransaction(t)
-    rdb := testutils.GetTestRedis(t)
-    testutils.CleanTestRedisKeys(t, rdb)
-    store := postgres.NewXxxStore(tx, rdb)
-    // 测试代码...
-}
-```
-**核心函数**:
-- `NewTestTransaction(t)`: 创建测试事务,自动回滚
-- `GetTestRedis(t)`: 获取全局 Redis 连接
-- `CleanTestRedisKeys(t, rdb)`: 自动清理测试 Redis 键
-**禁止使用(已移除)**:
-- ❌ `SetupTestDB` / `TeardownTestDB` / `SetupTestDBWithStore`
+## ⚠️ 测试禁令(强制执行)
+**本项目不使用任何形式的自动化测试代码。**
+**绝对禁止:**
+- ❌ **禁止编写单元测试** - 无论任何场景
+- ❌ **禁止编写集成测试** - 无论任何场景
+- ❌ **禁止编写验收测试** - 无论任何场景
+- ❌ **禁止编写流程测试** - 无论任何场景
+- ❌ **禁止编写 E2E 测试** - 无论任何场景
+- ❌ **禁止创建 `*_test.go` 文件** - 除非用户明确要求
+- ❌ **禁止在任务中包含测试相关工作** - 规划和实现均不涉及测试
+- ❌ **禁止在文档中提及测试要求** - 规范、设计文档均不讨论测试
+**唯一例外:**
+- ✅ **仅当用户明确要求**时才编写测试代码
+- ✅ 用户必须主动说明"请写测试"或"需要测试"
+**原因说明:**
+- 业务系统的正确性通过人工验证和生产环境监控保证
+- 测试代码的维护成本高于价值
+- 快速迭代优先于测试覆盖率
+**替代方案:**
+- 使用 PostgreSQL MCP 工具手动验证数据
+- 使用 Postman/curl 手动测试 API
+- 依赖生产环境日志和监控发现问题
## 性能要求 ## 性能要求
@@ -222,10 +359,238 @@ func TestXxx(t *testing.T) {
3. ✅ 使用统一错误处理
4. ✅ 常量定义在 pkg/constants/
5. ✅ Go 惯用法(非 Java 风格)
-6. ✅ 包含测试计划
-7. ✅ 性能考虑
-8. ✅ 文档更新计划
-9. ✅ 中文优先
+6. ✅ 性能考虑
+7. ✅ 文档更新计划
+8. ✅ 中文优先
## Code Review 检查清单
### 错误处理
- [ ] Service 层无 `fmt.Errorf` 对外返回
- [ ] Handler 层参数校验不泄露细节
- [ ] 错误码使用正确4xx vs 5xx
- [ ] 错误日志完整(包含上下文)
### 代码质量
- [ ] 遵循 Handler → Service → Store → Model 分层
- [ ] 函数长度 ≤ 100 行(核心逻辑 ≤ 50 行)
- [ ] 常量定义在 `pkg/constants/`
- [ ] 使用 Go 惯用法(非 Java 风格)
### 文档和注释
- [ ] 所有注释使用中文
- [ ] 导出函数/类型有文档注释
- [ ] API 路径注释与真实路由一致
### 幂等性
- [ ] 创建类写操作有 Redis 业务键防重
- [ ] 状态变更使用条件更新(`WHERE status = expected`
- [ ] 余额/库存变更使用乐观锁version 字段)
- [ ] 分布式锁使用 `defer` 确保释放
- [ ] Redis Key 定义在 `pkg/constants/redis.go`
### 越权防护规范
**适用场景**:任何涉及跨用户、跨店铺、跨企业的资源访问
**三层防护机制**
1. **路由层中间件**(粗粒度拦截)
- 用于明显的权限限制(如企业账号禁止访问账号管理)
- 示例:
```go
group.Use(func(c *fiber.Ctx) error {
userType := middleware.GetUserTypeFromContext(c.UserContext())
if userType == constants.UserTypeEnterprise {
return errors.New(errors.CodeForbidden, "无权限访问账号管理功能")
}
return c.Next()
})
```
2. **Service 层业务检查**(细粒度验证)
- 使用 `middleware.CanManageShop(ctx, targetShopID, shopStore)` 验证店铺权限
- 使用 `middleware.CanManageEnterprise(ctx, targetEnterpriseID, enterpriseStore, shopStore)` 验证企业权限
- 类型级权限检查(如代理不能创建平台账号)
- 示例见 `internal/service/account/service.go`
3. **GORM Callback 自动过滤**(兜底)
- 已有实现,自动应用到所有查询
- 代理用户:`WHERE shop_id IN (自己店铺+下级店铺)`
- 企业用户:`WHERE enterprise_id = 当前企业ID`
- 无需手动调用
**统一错误返回**
- 越权访问统一返回:`errors.New(errors.CodeForbidden, "无权限操作该资源或资源不存在")`
- 不区分"不存在"和"无权限",防止信息泄露
### 幂等性规范
**适用场景**:任何可能被重复触发的写操作
#### 必须实现幂等性的场景
| 场景 | 原因 | 实现策略 |
|------|------|----------|
| 订单创建 | 用户双击、网络重试 | Redis 业务键防重 + 分布式锁 |
| 支付回调 | 第三方平台重复通知 | 状态条件更新(`WHERE status = pending` |
| 钱包扣款/充值 | 并发请求、消息重投 | 乐观锁version 字段)+ 状态条件更新 |
| 套餐激活 | 异步任务重试 | Redis 分布式锁 + 已存在记录检查 |
| 异步任务处理 | Asynq 自动重试 | Redis 任务锁(`RedisTaskLockKey` |
| 佣金计算 | 支付成功后触发 | 幂等任务入队 + 状态检查 |
#### 不需要幂等性的场景
- 纯查询接口GET 请求天然幂等)
- 管理后台的配置修改(低频操作,人为确认)
- 日志记录、审计记录(允许重复写入)
#### 实现策略选择
根据场景特征选择合适的策略:
**策略 1状态条件更新首选适用于有明确状态流转的操作**
```go
// 通过 WHERE 条件确保只有预期状态才能更新RowsAffected == 0 说明已被处理
result := tx.Model(&model.Order{}).
Where("id = ? AND payment_status = ?", orderID, model.PaymentStatusPending).
Updates(map[string]any{"payment_status": model.PaymentStatusPaid})
if result.RowsAffected == 0 {
// 已被处理,检查当前状态决定返回成功还是错误
}
```
**策略 2Redis 业务键防重 + 分布式锁(适用于创建类操作,无状态可依赖)**
```go
// 业务键 = 唯一标识请求意图的组合字段
// 示例order:create:{buyer_type}:{buyer_id}:{carrier_type}:{carrier_id}:{sorted_package_ids}
idempotencyKey := buildBusinessKey(...)
redisKey := constants.RedisXxxIdempotencyKey(idempotencyKey)
// 第 1 层Redis GET 快速检测
val, err := s.redis.Get(ctx, redisKey).Result()
if err == nil && val != "" {
return existingResult // 已创建,直接返回
}
// 第 2 层:分布式锁防止并发
lockKey := constants.RedisXxxLockKey(resourceType, resourceID)
locked, _ := s.redis.SetNX(ctx, lockKey, time.Now().String(), lockTTL).Result()
if !locked {
return errors.New(errors.CodeTooManyRequests, "操作进行中,请勿重复提交")
}
defer s.redis.Del(ctx, lockKey)
// 第 3 层:加锁后二次检测
val, err = s.redis.Get(ctx, redisKey).Result()
if err == nil && val != "" {
return existingResult
}
// 执行业务逻辑...
// 成功后标记
s.redis.Set(ctx, redisKey, resultID, idempotencyTTL)
```
**策略 3乐观锁适用于余额、库存等数值更新**
```go
result := tx.Model(&model.Wallet{}).
Where("id = ? AND balance >= ? AND version = ?", walletID, amount, currentVersion).
Updates(map[string]any{
"balance": gorm.Expr("balance - ?", amount),
"version": gorm.Expr("version + 1"),
})
if result.RowsAffected == 0 {
return errors.New(errors.CodeInsufficientBalance, "余额不足或并发冲突")
}
```
#### Redis Key 命名规范
幂等性相关的 Redis Key 统一在 `pkg/constants/redis.go` 定义:
```go
// 幂等性检测键Redis{Module}IdempotencyKey — TTL 通常 3~5 分钟
func RedisOrderIdempotencyKey(businessKey string) string
// 分布式锁键Redis{Module}{Action}LockKey — TTL 通常 10~30 秒
func RedisOrderCreateLockKey(carrierType string, carrierID uint) string
```
#### 现有幂等性实现参考
| 模块 | 文件 | 策略 |
|------|------|------|
| 订单创建 | `internal/service/order/service.go` → `Create()` | 策略 2Redis 业务键 + 分布式锁 |
| 钱包支付 | `internal/service/order/service.go` → `WalletPay()` | 策略 1状态条件更新 |
| 支付回调 | `internal/service/order/service.go` → `HandlePaymentCallback()` | 策略 1状态条件更新 |
| 套餐激活 | `internal/service/package/activation_service.go` → `ActivateQueuedPackage()` | 策略 2简化版Redis 分布式锁 |
| 钱包扣款 | `internal/service/order/service.go` → `WalletPay()` | 策略 3乐观锁version 字段) |
### 审计日志规范
**适用场景**:任何敏感操作(账号管理、权限变更、数据删除等)
**使用方式**
1. **Service 层集成审计日志**
```go
type Service struct {
store *Store
auditService AuditServiceInterface
}
func (s *Service) SensitiveOperation(ctx context.Context, ...) error {
// 1. 执行业务操作
err := s.store.DoSomething(ctx, ...)
if err != nil {
return err
}
// 2. 记录审计日志(异步)
s.auditService.LogOperation(ctx, &model.OperationLog{
OperatorID: middleware.GetUserIDFromContext(ctx),
OperationType: "operation_type",
OperationDesc: "操作描述",
BeforeData: beforeData, // 变更前数据
AfterData: afterData, // 变更后数据
RequestID: middleware.GetRequestIDFromContext(ctx),
IPAddress: middleware.GetIPFromContext(ctx),
UserAgent: middleware.GetUserAgentFromContext(ctx),
})
return nil
}
```
2. **审计日志字段说明**
- `operator_id`, `operator_type`, `operator_name`: 操作人信息(必填)
- `target_*`: 目标资源信息(可选)
- `operation_type`: 操作类型create/update/delete/assign_roles等
- `operation_desc`: 操作描述(中文,便于查看)
- `before_data`, `after_data`: 变更数据JSON 格式)
- `request_id`, `ip_address`, `user_agent`: 请求上下文
3. **异步写入**
- 审计日志使用 Goroutine 异步写入
- 写入失败不影响业务操作
- 失败时记录 Error 日志,包含完整审计信息
**示例参考**`internal/service/account/service.go`
---
### ⚠️ 任务执行规范(必须遵守)
@@ -244,3 +609,18 @@ func TestXxx(t *testing.T) {
> "任务 3.1 在当前实现中可能不需要,是否可以跳过?"
**详细规范和 OpenSpec 工作流请查看**: `@/openspec/AGENTS.md`
# English Learning Mode
The user is learning English through practical use. Apply these rules in every conversation:
1. **Always respond in Chinese** — regardless of whether the user writes in English or Chinese.
2. **When the user writes in English**, append a one-line correction at the end of your response in this format:
→ `[natural version of what they wrote]`
No explanation needed — just the corrected phrase.
3. **When the user mixes Chinese into English** (e.g., "I want to 实现 dark mode"), translate the Chinese word/phrase inline and continue naturally. Do not make a big deal of it.
4. **Never interrupt the flow** to give grammar lessons. Corrections are silent and brief — the user's focus is on the task, not the language.


@@ -7,8 +7,8 @@ GOCLEAN=$(GOCMD) clean
GOTEST=$(GOCMD) test
GOGET=$(GOCMD) get
BINARY_NAME=bin/junhong-cmp
-MAIN_PATH=cmd/api/main.go
+MAIN_PATH=./cmd/api
-WORKER_PATH=cmd/worker/main.go
+WORKER_PATH=./cmd/worker
WORKER_BINARY=bin/junhong-worker
# Database migration parameters


@@ -215,11 +215,14 @@ default:
- **B 端认证系统**:完整的后台和 H5 认证功能,支持基于 Redis 的 Token 管理和双令牌机制Access Token 24h + Refresh Token 7天包含登录、登出、Token 刷新、用户信息查询和密码修改功能通过用户类型隔离确保后台SuperAdmin、Platform、Agent和 H5Agent、Enterprise的访问控制**登录响应包含菜单树和按钮权限**menus/buttons前端无需二次处理直接渲染侧边栏和控制按钮显示详见 [API 文档](docs/api/auth.md)、[使用指南](docs/auth-usage-guide.md)、[架构说明](docs/auth-architecture.md) 和 [菜单权限使用指南](docs/login-menu-button-response/使用指南.md)
- **B 端认证系统**:完整的后台和 H5 认证功能,支持基于 Redis 的 Token 管理和双令牌机制Access Token 24h + Refresh Token 7天包含登录、登出、Token 刷新、用户信息查询和密码修改功能通过用户类型隔离确保后台SuperAdmin、Platform、Agent和 H5Agent、Enterprise的访问控制详见 [API 文档](docs/api/auth.md)、[使用指南](docs/auth-usage-guide.md) 和 [架构说明](docs/auth-architecture.md)
- **生命周期管理**:物联网卡/号卡的开卡、激活、停机、复机、销户
-- **代理商体系**:层级管理和分佣结算
+- **代理商体系**:层级管理和分佣结算,支持差价佣金和一次性佣金两种佣金类型,详见 [套餐与佣金业务模型](docs/commission-package-model.md)
- **批量同步**:卡状态、实名状态、流量使用情况
- **轮询系统**IoT 卡实名状态、流量使用、套餐余额的定时轮询检查;支持配置化轮询策略、动态并发控制、告警系统、数据清理和手动触发功能;详见 [轮询系统文档](docs/polling-system/README.md)
- **套餐系统升级**:完整的套餐生命周期管理,支持主套餐排队激活、加油包绑定主套餐、囤货待实名激活、流量按优先级扣减、自然月/按天有效期计算、日/月/年流量重置、客户端流量查询和套餐流量详单;详见 [套餐系统升级文档](docs/package-system-upgrade/)
- **分佣验证指引**:对代理分佣的冻结、解冻、提现校验流程进行了结构化说明与流程图,详见 [分佣逻辑正确与否验证](docs/优化说明/分佣逻辑正确与否验证.md)
- **对象存储**S3 兼容的对象存储服务集成(联通云 OSS支持预签名 URL 上传、文件下载、临时文件处理;用于 ICCID 批量导入、数据导出等场景;详见 [使用指南](docs/object-storage/使用指南.md) 和 [前端接入指南](docs/object-storage/前端接入指南.md)
- **微信集成**:完整的微信公众号 OAuth 认证和微信支付功能JSAPI + H5使用 PowerWeChat v3 SDK支持个人客户微信授权登录、账号绑定、微信内支付和浏览器 H5 支付;支付回调自动验证签名和幂等性处理;详见 [使用指南](docs/wechat-integration/使用指南.md) 和 [API 文档](docs/wechat-integration/API文档.md)
- **订单超时自动取消**:待支付订单(微信/支付宝30 分钟超时自动取消,支持钱包余额解冻;使用 Asynq Scheduler 每分钟扫描,取代原有 time.Ticker 实现;同时将告警检查和数据清理迁移至 Asynq Scheduler 统一调度;详见 [功能总结](docs/order-expiration/功能总结.md)
## 用户体系设计

View File

@@ -5,6 +5,8 @@ import (
"go.uber.org/zap"
"github.com/break/junhong_cmp_fiber/internal/bootstrap"
"github.com/break/junhong_cmp_fiber/internal/handler/admin"
apphandler "github.com/break/junhong_cmp_fiber/internal/handler/app"
"github.com/break/junhong_cmp_fiber/internal/routes"
"github.com/break/junhong_cmp_fiber/pkg/openapi"
)
@@ -22,6 +24,15 @@ func generateOpenAPIDocs(outputPath string, logger *zap.Logger) {
// 3. 创建 Handler使用 nil 依赖,因为只需要路由结构)
handlers := openapi.BuildDocHandlers()
handlers.AssetLifecycle = admin.NewAssetLifecycleHandler(nil)
handlers.ClientAuth = apphandler.NewClientAuthHandler(nil, nil)
handlers.ClientAsset = apphandler.NewClientAssetHandler(nil, nil, nil, nil, nil, nil, nil, nil, nil)
handlers.ClientWallet = apphandler.NewClientWalletHandler(nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil)
handlers.ClientOrder = apphandler.NewClientOrderHandler(nil, nil, nil, nil, nil, nil, nil, nil)
handlers.ClientExchange = apphandler.NewClientExchangeHandler(nil)
handlers.ClientRealname = apphandler.NewClientRealnameHandler(nil, nil, nil, nil, nil, nil, nil)
handlers.ClientDevice = apphandler.NewClientDeviceHandler(nil, nil, nil, nil, nil, nil, nil)
handlers.AdminExchange = admin.NewExchangeHandler(nil, nil)
// 4. 注册所有路由到文档生成器
routes.RegisterRoutesWithDoc(app, handlers, &bootstrap.Middlewares{}, adminDoc)

View File

@@ -27,6 +27,7 @@ import (
"github.com/break/junhong_cmp_fiber/pkg/database"
"github.com/break/junhong_cmp_fiber/pkg/logger"
"github.com/break/junhong_cmp_fiber/pkg/queue"
"github.com/break/junhong_cmp_fiber/pkg/sms"
"github.com/break/junhong_cmp_fiber/pkg/storage"
)
@@ -42,8 +43,6 @@ func main() {
// 3. 初始化日志
appLogger := initLogger(cfg)
// 4. 验证微信配置
validateWechatConfig(cfg, appLogger)
defer func() {
_ = logger.Sync()
}()
@@ -247,14 +246,11 @@ func applyRateLimiterToBusinessRoutes(app *fiber.App, rateLimitMiddleware fiber.
adminGroup := app.Group("/api/admin")
adminGroup.Use(rateLimitMiddleware)
h5Group := app.Group("/api/h5")
h5Group.Use(rateLimitMiddleware)
personalGroup := app.Group("/api/c/v1")
personalGroup.Use(rateLimitMiddleware)
appLogger.Info("限流器已应用到业务路由组",
-zap.Strings("paths", []string{"/api/admin", "/api/h5", "/api/c/v1"}),
+zap.Strings("paths", []string{"/api/admin", "/api/c/v1"}),
)
} }
@@ -311,11 +307,42 @@ func initAuthComponents(cfg *config.Config, redisClient *redis.Client, appLogger
refreshTTL := time.Duration(cfg.JWT.RefreshTokenTTL) * time.Second
tokenManager := auth.NewTokenManager(redisClient, accessTTL, refreshTTL)
-verificationSvc := verification.NewService(redisClient, nil, appLogger)
+smsClient := initSMS(cfg, appLogger)
verificationSvc := verification.NewService(redisClient, smsClient, appLogger)
return jwtManager, tokenManager, verificationSvc
}
func initSMS(cfg *config.Config, appLogger *zap.Logger) *sms.Client {
if cfg.SMS.GatewayURL == "" {
appLogger.Info("短信服务未配置,跳过初始化")
return nil
}
timeout := cfg.SMS.Timeout
if timeout == 0 {
timeout = 10 * time.Second
}
httpClient := sms.NewStandardHTTPClient(0)
client := sms.NewClient(
cfg.SMS.GatewayURL,
cfg.SMS.Username,
cfg.SMS.Password,
cfg.SMS.Signature,
timeout,
appLogger,
httpClient,
)
appLogger.Info("短信服务已初始化",
zap.String("gateway_url", cfg.SMS.GatewayURL),
zap.String("signature", cfg.SMS.Signature),
)
return client
}
func initStorage(cfg *config.Config, appLogger *zap.Logger) *storage.Service {
if cfg.Storage.Provider == "" || cfg.Storage.S3.Endpoint == "" {
appLogger.Info("对象存储未配置,跳过初始化")
@@ -346,6 +373,7 @@ func initGateway(cfg *config.Config, appLogger *zap.Logger) *gateway.Client {
cfg.Gateway.BaseURL,
cfg.Gateway.AppID,
cfg.Gateway.AppSecret,
appLogger,
).WithTimeout(time.Duration(cfg.Gateway.Timeout) * time.Second)
appLogger.Info("Gateway 客户端初始化成功",
@@ -354,64 +382,3 @@ func initGateway(cfg *config.Config, appLogger *zap.Logger) *gateway.Client {
return client
}
func validateWechatConfig(cfg *config.Config, appLogger *zap.Logger) {
wechatCfg := cfg.Wechat
if wechatCfg.OfficialAccount.AppID == "" && wechatCfg.Payment.AppID == "" {
appLogger.Warn("微信配置未设置,微信相关功能将不可用")
return
}
if wechatCfg.OfficialAccount.AppID != "" {
if wechatCfg.OfficialAccount.AppSecret == "" {
appLogger.Fatal("微信公众号配置不完整",
zap.String("missing", "app_secret"),
zap.String("env", "JUNHONG_WECHAT_OFFICIAL_ACCOUNT_APP_SECRET"))
}
appLogger.Info("微信公众号配置已验证",
zap.String("app_id", wechatCfg.OfficialAccount.AppID))
}
if wechatCfg.Payment.AppID != "" {
missingFields := []string{}
if wechatCfg.Payment.MchID == "" {
missingFields = append(missingFields, "mch_id (JUNHONG_WECHAT_PAYMENT_MCH_ID)")
}
if wechatCfg.Payment.APIV3Key == "" {
missingFields = append(missingFields, "api_v3_key (JUNHONG_WECHAT_PAYMENT_API_V3_KEY)")
}
if wechatCfg.Payment.CertPath == "" {
missingFields = append(missingFields, "cert_path (JUNHONG_WECHAT_PAYMENT_CERT_PATH)")
}
if wechatCfg.Payment.KeyPath == "" {
missingFields = append(missingFields, "key_path (JUNHONG_WECHAT_PAYMENT_KEY_PATH)")
}
if wechatCfg.Payment.SerialNo == "" {
missingFields = append(missingFields, "serial_no (JUNHONG_WECHAT_PAYMENT_SERIAL_NO)")
}
if wechatCfg.Payment.NotifyURL == "" {
missingFields = append(missingFields, "notify_url (JUNHONG_WECHAT_PAYMENT_NOTIFY_URL)")
}
if len(missingFields) > 0 {
appLogger.Fatal("微信支付配置不完整",
zap.Strings("missing_fields", missingFields))
}
if _, err := os.Stat(wechatCfg.Payment.CertPath); os.IsNotExist(err) {
appLogger.Fatal("微信支付证书文件不存在",
zap.String("cert_path", wechatCfg.Payment.CertPath))
}
if _, err := os.Stat(wechatCfg.Payment.KeyPath); os.IsNotExist(err) {
appLogger.Fatal("微信支付私钥文件不存在",
zap.String("key_path", wechatCfg.Payment.KeyPath))
}
appLogger.Info("微信支付配置已验证",
zap.String("app_id", wechatCfg.Payment.AppID),
zap.String("mch_id", wechatCfg.Payment.MchID))
}
}

View File

@@ -7,6 +7,8 @@ import (
"github.com/gofiber/fiber/v2"
"github.com/break/junhong_cmp_fiber/internal/bootstrap"
"github.com/break/junhong_cmp_fiber/internal/handler/admin"
apphandler "github.com/break/junhong_cmp_fiber/internal/handler/app"
"github.com/break/junhong_cmp_fiber/internal/routes"
"github.com/break/junhong_cmp_fiber/pkg/openapi"
)
@@ -31,6 +33,15 @@ func generateAdminDocs(outputPath string) error {
// 3. 创建 Handler使用 nil 依赖,因为只需要路由结构)
handlers := openapi.BuildDocHandlers()
handlers.AssetLifecycle = admin.NewAssetLifecycleHandler(nil)
handlers.ClientAuth = apphandler.NewClientAuthHandler(nil, nil)
handlers.ClientAsset = apphandler.NewClientAssetHandler(nil, nil, nil, nil, nil, nil, nil, nil, nil)
handlers.ClientWallet = apphandler.NewClientWalletHandler(nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil)
handlers.ClientOrder = apphandler.NewClientOrderHandler(nil, nil, nil, nil, nil, nil, nil, nil)
handlers.ClientExchange = apphandler.NewClientExchangeHandler(nil)
handlers.ClientRealname = apphandler.NewClientRealnameHandler(nil, nil, nil, nil, nil, nil, nil)
handlers.ClientDevice = apphandler.NewClientDeviceHandler(nil, nil, nil, nil, nil, nil, nil)
handlers.AdminExchange = admin.NewExchangeHandler(nil, nil)
// 4. 注册所有路由到文档生成器
routes.RegisterRoutesWithDoc(app, handlers, &bootstrap.Middlewares{}, adminDoc)

View File

@@ -6,12 +6,18 @@ import (
"os/signal"
"strconv"
"syscall"
"time"
"github.com/hibiken/asynq"
"github.com/redis/go-redis/v9"
"go.uber.org/zap"
-"github.com/break/junhong_cmp_fiber/pkg/bootstrap"
+"github.com/break/junhong_cmp_fiber/internal/bootstrap"
"github.com/break/junhong_cmp_fiber/internal/gateway"
"github.com/break/junhong_cmp_fiber/internal/polling"
pkgBootstrap "github.com/break/junhong_cmp_fiber/pkg/bootstrap"
"github.com/break/junhong_cmp_fiber/pkg/config"
"github.com/break/junhong_cmp_fiber/pkg/constants"
"github.com/break/junhong_cmp_fiber/pkg/database"
"github.com/break/junhong_cmp_fiber/pkg/logger"
"github.com/break/junhong_cmp_fiber/pkg/queue"
@@ -24,7 +30,7 @@ func main() {
panic("加载配置失败: " + err.Error())
}
-if _, err := bootstrap.EnsureDirectories(cfg, nil); err != nil {
+if _, err := pkgBootstrap.EnsureDirectories(cfg, nil); err != nil {
panic("初始化目录失败: " + err.Error())
}
@@ -97,17 +103,92 @@ func main() {
// 初始化对象存储服务(可选)
storageSvc := initStorage(cfg, appLogger)
// 初始化 Gateway 客户端(可选,用于轮询任务)
gatewayClient := initGateway(cfg, appLogger)
// 创建 Asynq 客户端(用于调度器提交任务)
asynqClient := asynq.NewClient(asynq.RedisClientOpt{
Addr: redisAddr,
Password: cfg.Redis.Password,
DB: cfg.Redis.DB,
})
defer func() {
if err := asynqClient.Close(); err != nil {
appLogger.Error("关闭 Asynq 客户端失败", zap.Error(err))
}
}()
// 创建 Worker 依赖
workerDeps := &bootstrap.WorkerDependencies{
DB: db,
Redis: redisClient,
Logger: appLogger,
AsynqClient: asynqClient,
StorageService: storageSvc,
GatewayClient: gatewayClient,
}
// Bootstrap Worker 组件
workerResult, err := bootstrap.BootstrapWorker(workerDeps)
if err != nil {
appLogger.Fatal("Worker Bootstrap 失败", zap.Error(err))
}
// 创建 Asynq Worker 服务器
workerServer := queue.NewServer(redisClient, &cfg.Queue, appLogger)
// 初始化轮询调度器(在创建 Handler 之前,因为 Handler 需要使用调度器作为回调)
scheduler := polling.NewScheduler(db, redisClient, asynqClient, appLogger)
// 注入流量重置服务到调度器
dataResetHandler := polling.NewDataResetHandler(workerResult.Services.ResetService, appLogger)
scheduler.SetResetService(dataResetHandler)
if err := scheduler.Start(ctx); err != nil {
appLogger.Error("启动轮询调度器失败", zap.Error(err))
} else {
appLogger.Info("轮询调度器已启动")
}
// 创建任务处理器管理器并注册所有处理器
-taskHandler := queue.NewHandler(db, redisClient, storageSvc, appLogger)
+taskHandler := queue.NewHandler(db, redisClient, storageSvc, gatewayClient, scheduler, workerResult, asynqClient, appLogger)
taskHandler.RegisterHandlers()
appLogger.Info("Worker 服务器配置完成",
zap.Int("concurrency", cfg.Queue.Concurrency),
zap.Any("queues", cfg.Queue.Queues))
// 创建 Asynq Scheduler定时任务调度器订单超时、告警检查、数据清理
asynqScheduler := asynq.NewScheduler(
asynq.RedisClientOpt{
Addr: redisAddr,
Password: cfg.Redis.Password,
DB: cfg.Redis.DB,
},
&asynq.SchedulerOpts{Location: time.Local},
)
// 注册定时任务:订单超时检查(每分钟)
if _, err := asynqScheduler.Register("@every 1m", asynq.NewTask(constants.TaskTypeOrderExpire, nil)); err != nil {
appLogger.Fatal("注册订单超时定时任务失败", zap.Error(err))
}
// 注册定时任务:告警检查(每分钟)
if _, err := asynqScheduler.Register("@every 1m", asynq.NewTask(constants.TaskTypeAlertCheck, nil)); err != nil {
appLogger.Fatal("注册告警检查定时任务失败", zap.Error(err))
}
// 注册定时任务:数据清理(每天凌晨 2 点)
if _, err := asynqScheduler.Register("0 2 * * *", asynq.NewTask(constants.TaskTypeDataCleanup, nil)); err != nil {
appLogger.Fatal("注册数据清理定时任务失败", zap.Error(err))
}
// 启动 Asynq Scheduler
go func() {
if err := asynqScheduler.Run(); err != nil {
appLogger.Fatal("Asynq Scheduler 启动失败", zap.Error(err))
}
}()
appLogger.Info("Asynq Scheduler 已启动(订单超时: @every 1m, 告警检查: @every 1m, 数据清理: 0 2 * * *)")
// 优雅关闭
quit := make(chan os.Signal, 1)
signal.Notify(quit, os.Interrupt, syscall.SIGTERM)
@@ -125,6 +206,12 @@ func main() {
<-quit
appLogger.Info("正在关闭 Worker 服务器...")
// 停止 Asynq Scheduler
asynqScheduler.Shutdown()
// 停止轮询调度器
scheduler.Stop()
// 优雅关闭 Worker 服务器(等待正在执行的任务完成)
workerServer.Shutdown()
@@ -150,3 +237,24 @@ func initStorage(cfg *config.Config, appLogger *zap.Logger) *storage.Service {
return storage.NewService(provider, &cfg.Storage)
}
// initGateway 初始化 Gateway 客户端
func initGateway(cfg *config.Config, appLogger *zap.Logger) *gateway.Client {
if cfg.Gateway.BaseURL == "" {
appLogger.Info("Gateway 未配置,跳过初始化(轮询任务将无法查询真实数据)")
return nil
}
client := gateway.NewClient(
cfg.Gateway.BaseURL,
cfg.Gateway.AppID,
cfg.Gateway.AppSecret,
appLogger,
).WithTimeout(time.Duration(cfg.Gateway.Timeout) * time.Second)
appLogger.Info("Gateway 客户端初始化成功",
zap.String("base_url", cfg.Gateway.BaseURL),
zap.String("app_id", cfg.Gateway.AppID))
return client
}

View File

@@ -22,9 +22,11 @@ version: '3.8'
#
# 可选配置(根据需要启用):
# - Gateway 服务配置JUNHONG_GATEWAY_*
# - 微信公众号配置JUNHONG_WECHAT_OFFICIAL_ACCOUNT_*
# - 微信支付配置JUNHONG_WECHAT_PAYMENT_*
# - 对象存储配置JUNHONG_STORAGE_*
# - 短信服务配置JUNHONG_SMS_*
#
# 微信公众号/小程序/支付配置已迁移至数据库tb_wechat_config 表),
# 不再需要环境变量和证书文件挂载。
services:
api:
@@ -62,31 +64,16 @@ services:
- JUNHONG_STORAGE_S3_PATH_STYLE=true
# Gateway 配置(可选)
- JUNHONG_GATEWAY_BASE_URL=https://lplan.whjhft.com/openapi
-- JUNHONG_GATEWAY_APP_ID=60bgt1X8i7AvXqkd
+- JUNHONG_GATEWAY_APP_ID=LfjL0WjUqpwkItQ0
-- JUNHONG_GATEWAY_APP_SECRET=BZeQttaZQt0i73moF
+- JUNHONG_GATEWAY_APP_SECRET=K0DYuWzbRE6zg5bX
- JUNHONG_GATEWAY_TIMEOUT=30
-# 微信公众号配置(可选)
+# 短信服务配置
-# - JUNHONG_WECHAT_OFFICIAL_ACCOUNT_APP_ID=your_app_id
+- JUNHONG_SMS_GATEWAY_URL=https://gateway.sms.whjhft.com:8443
-# - JUNHONG_WECHAT_OFFICIAL_ACCOUNT_APP_SECRET=your_app_secret
+- JUNHONG_SMS_USERNAME=JH0001
-# - JUNHONG_WECHAT_OFFICIAL_ACCOUNT_TOKEN=your_token
+- JUNHONG_SMS_PASSWORD=wwR8E4qnL6F0
-# - JUNHONG_WECHAT_OFFICIAL_ACCOUNT_AES_KEY=your_aes_key
+- JUNHONG_SMS_SIGNATURE=【JHFTIOT】
# - JUNHONG_WECHAT_OFFICIAL_ACCOUNT_OAUTH_REDIRECT_URL=https://your-domain.com/callback
# 微信支付配置(可选)
# - JUNHONG_WECHAT_PAYMENT_APP_ID=your_app_id
# - JUNHONG_WECHAT_PAYMENT_MCH_ID=your_mch_id
# - JUNHONG_WECHAT_PAYMENT_API_V3_KEY=your_32_char_api_v3_key
# - JUNHONG_WECHAT_PAYMENT_API_V2_KEY=your_api_v2_key
# - JUNHONG_WECHAT_PAYMENT_CERT_PATH=/app/certs/apiclient_cert.pem
# - JUNHONG_WECHAT_PAYMENT_KEY_PATH=/app/certs/apiclient_key.pem
# - JUNHONG_WECHAT_PAYMENT_SERIAL_NO=your_serial_no
# - JUNHONG_WECHAT_PAYMENT_NOTIFY_URL=https://your-domain.com/api/callback/wechat-pay
# - JUNHONG_WECHAT_PAYMENT_HTTP_DEBUG=false
# - JUNHONG_WECHAT_PAYMENT_TIMEOUT=30s
volumes:
# 仅挂载日志目录(配置已嵌入二进制文件)
- ./logs:/app/logs
# 微信支付证书目录(如果使用微信支付,需要挂载证书)
# - ./certs:/app/certs:ro
networks:
- junhong-network
healthcheck:
@@ -137,27 +124,8 @@ services:
- JUNHONG_GATEWAY_APP_ID=60bgt1X8i7AvXqkd
- JUNHONG_GATEWAY_APP_SECRET=BZeQttaZQt0i73moF
- JUNHONG_GATEWAY_TIMEOUT=30
# 微信公众号配置(可选)
# - JUNHONG_WECHAT_OFFICIAL_ACCOUNT_APP_ID=your_app_id
# - JUNHONG_WECHAT_OFFICIAL_ACCOUNT_APP_SECRET=your_app_secret
# - JUNHONG_WECHAT_OFFICIAL_ACCOUNT_TOKEN=your_token
# - JUNHONG_WECHAT_OFFICIAL_ACCOUNT_AES_KEY=your_aes_key
# - JUNHONG_WECHAT_OFFICIAL_ACCOUNT_OAUTH_REDIRECT_URL=https://your-domain.com/callback
# 微信支付配置(可选)
# - JUNHONG_WECHAT_PAYMENT_APP_ID=your_app_id
# - JUNHONG_WECHAT_PAYMENT_MCH_ID=your_mch_id
# - JUNHONG_WECHAT_PAYMENT_API_V3_KEY=your_32_char_api_v3_key
# - JUNHONG_WECHAT_PAYMENT_API_V2_KEY=your_api_v2_key
# - JUNHONG_WECHAT_PAYMENT_CERT_PATH=/app/certs/apiclient_cert.pem
# - JUNHONG_WECHAT_PAYMENT_KEY_PATH=/app/certs/apiclient_key.pem
# - JUNHONG_WECHAT_PAYMENT_SERIAL_NO=your_serial_no
# - JUNHONG_WECHAT_PAYMENT_NOTIFY_URL=https://your-domain.com/api/callback/wechat-pay
# - JUNHONG_WECHAT_PAYMENT_HTTP_DEBUG=false
# - JUNHONG_WECHAT_PAYMENT_TIMEOUT=30s
volumes:
- ./logs:/app/logs
# 微信支付证书目录(如果使用微信支付,需要挂载证书)
# - ./certs:/app/certs:ro
networks:
- junhong-network
depends_on:

File diff suppressed because it is too large

View File

@@ -0,0 +1,227 @@
# 代理预充值功能
## 功能概述
代理商(店铺)余额钱包的在线充值系统,支持微信在线支付和线下转账两种充值方式,具备完整的 Service/Handler/回调处理链路。充值仅针对余额钱包(`wallet_type=main`),佣金钱包通过分佣自动入账。
### 背景与动机
原有 `tb_agent_recharge_record` 表和 Store 层骨架已存在,但缺少 Service 层和 Handler 层,无法通过 API 发起充值。本次补全完整实现,并集成至支付配置管理体系(按 `payment_config_id` 动态路由至微信直连或富友通道)。
## 核心流程
### 在线充值流程(微信)
```
代理/平台 → POST /api/admin/agent-recharges
  ├─ 验证权限:代理只能充自己店铺,平台可指定任意店铺
  ├─ 验证金额范围100 元~100 万元)
  ├─ 查找目标店铺的 main 钱包
  ├─ 查询 active 支付配置 → 无配置则拒绝(返回 1175
  ├─ 记录 payment_config_id
  └─ 创建充值订单status=1 待支付)
       └─ 返回订单信息(客户端支付发起【留桩】)
支付成功 → POST /api/callback/wechat-pay 或 /api/callback/fuiou-pay
  ├─ 按订单号前缀 "ARCH" 识别为代理充值
  ├─ 查询充值记录,取 payment_config_id
  ├─ 按配置验签
  └─ agentRechargeService.HandlePaymentCallback()
       ├─ 幂等检查WHERE status = 1
       ├─ 更新充值记录状态 → 2已完成
       ├─ 代理主钱包余额增加(乐观锁防并发)
       └─ 创建钱包流水记录
```
### 线下充值流程(仅平台)
```
平台 → POST /api/admin/agent-recharges
  └─ payment_method = "offline"
       └─ 创建充值订单status=1 待支付)
平台确认 → POST /api/admin/agent-recharges/:id/offline-pay
  ├─ 验证操作密码(二次鉴权)
  └─ 事务内:
       ├─ 更新充值记录状态 → 2已完成
       ├─ 记录 paid_at、completed_at
       ├─ 代理主钱包余额增加(乐观锁 version 字段)
       ├─ 创建钱包流水记录
       └─ 记录审计日志
```
```
## 接口说明
### 基础路径
`/api/admin/agent-recharges`
**权限要求**:企业账号(`user_type=4`)在路由层被拦截,返回 `1005`
### 接口列表
| 方法 | 路径 | 说明 | 权限 |
|------|------|------|------|
| POST | `/api/admin/agent-recharges` | 创建充值订单 | 代理(自己店铺)/ 平台(任意店铺)|
| GET | `/api/admin/agent-recharges` | 查询充值记录列表 | 代理(自己店铺)/ 平台(全部)|
| GET | `/api/admin/agent-recharges/:id` | 查询充值记录详情 | 代理(自己店铺)/ 平台(全部)|
| POST | `/api/admin/agent-recharges/:id/offline-pay` | 确认线下充值到账 | 仅平台 |
### 创建充值订单
**请求体示例(在线充值)**
```json
{
"shop_id": 101,
"amount": 50000,
"payment_method": "wechat"
}
```
**请求体示例(线下充值)**
```json
{
"shop_id": 101,
"amount": 200000,
"payment_method": "offline"
}
```
**请求字段**
| 字段 | 类型 | 必填 | 说明 |
|------|------|------|------|
| shop_id | integer | 是 | 目标店铺 ID代理只能填自己所属店铺|
| amount | integer | 是 | 充值金额(单位:分),范围 10000~100000000 |
| payment_method | string | 是 | `wechat`(在线)/ `offline`(线下,仅平台)|
**成功响应**
```json
{
"code": 0,
"msg": "success",
"data": {
"id": 88,
"recharge_no": "ARCH20260316100001",
"shop_id": 101,
"amount": 50000,
"payment_method": "wechat",
"payment_channel": "wechat_direct",
"payment_config_id": 3,
"status": 1,
"created_at": "2026-03-16T10:00:00+08:00"
},
"timestamp": "2026-03-16T10:00:00+08:00"
}
```
```
### 线下充值确认
**请求体**
```json
{
"operation_password": "Abc123456"
}
```
操作密码验证通过后,事务内同步完成:余额到账 + 钱包流水 + 审计日志。
## 权限控制矩阵
| 操作 | 平台账号 | 代理账号 | 企业账号 |
|------|----------|----------|----------|
| 创建充值(在线) | ✅ 任意店铺 | ✅ 仅自己店铺 | ❌ |
| 创建充值(线下) | ✅ 任意店铺 | ❌ | ❌ |
| 线下充值确认 | ✅ | ❌ | ❌ |
| 查询充值列表 | ✅ 全部 | ✅ 仅自己店铺 | ❌ |
| 查询充值详情 | ✅ 全部 | ✅ 仅自己店铺 | ❌ |
**越权统一响应**:代理访问他人店铺充值记录时,返回 `1121 CodeRechargeNotFound`(不区分不存在与无权限)
## 数据模型
### `tb_agent_recharge_record` 新增字段
| 字段 | 类型 | 可空 | 说明 |
|------|------|------|------|
| `payment_config_id` | bigint | 是 | 关联支付配置 ID线下充值为 NULL在线充值记录实际使用的配置|
### 充值订单状态枚举
| 值 | 含义 |
|----|------|
| 1 | 待支付 |
| 2 | 已完成 |
| 3 | 已取消 |
### 支付方式与通道
| payment_method | payment_channel | 说明 |
|---------------|----------------|------|
| wechat | wechat_direct | 微信直连通道provider_type=wechat|
| wechat | fuyou | 富友通道provider_type=fuiou|
| offline | offline | 线下转账 |
> 前端统一显示"微信支付",后端根据生效配置的 `provider_type` 自动路由,前端不感知具体通道。
### 充值单号规则
前缀 `ARCH`,全局唯一,用于回调时识别订单类型。
## 幂等性设计
- 回调处理使用状态条件更新:`WHERE status = 1`
- `RowsAffected == 0` 时说明已被处理,直接返回成功,不重复入账
- 钱包余额更新使用乐观锁(`version` 字段),并发冲突时最多重试 3 次
## 审计日志
线下充值确认(`OfflinePay`)操作记录审计日志,字段包括:
| 字段 | 值 |
|------|-----|
| `operator_id` | 当前操作人 ID |
| `operation_type` | `offline_recharge` |
| `operation_desc` | `确认代理充值到账:充值单号 {recharge_no},金额 {amount} 分` |
| `before_data` | 操作前余额和充值记录状态 |
| `after_data` | 操作后余额和充值记录状态 |
## 涉及文件
### 新增文件
| 层级 | 文件 | 说明 |
|------|------|------|
| DTO | `internal/model/dto/agent_recharge_dto.go` | 请求/响应 DTO |
| Service | `internal/service/agent_recharge/service.go` | 充值业务逻辑 |
| Handler | `internal/handler/admin/agent_recharge.go` | 4 个 Handler 方法 |
| 路由 | `internal/routes/agent_recharge.go` | 路由注册 |
### 修改文件
| 文件 | 变更说明 |
|------|---------|
| `internal/model/agent_wallet.go` | 新增 `PaymentConfigID *uint` 字段 |
| `internal/handler/callback/payment.go` | 新增 "ARCH" 前缀分发 → agentRechargeService.HandlePaymentCallback() |
| `internal/bootstrap/` 系列 | 注册 AgentRechargeService、AgentRechargeHandler |
| `cmd/api/docs.go` / `cmd/gendocs/main.go` | 注册 AgentRechargeHandler |
| `migrations/000081_add_payment_config_id_to_agent_recharge.up.sql` | tb_agent_recharge_record 新增 payment_config_id 列 |
## 常量定义
```go
// pkg/constants/wallet.go
AgentRechargeOrderPrefix = "ARCH" // 充值单号前缀
AgentRechargeMinAmount = 10000 // 最小充值100 元(单位:分)
AgentRechargeMaxAmount = 100000000 // 最大充值100 万元(单位:分)
```
## 已知限制(留桩)
**客户端支付发起未实现**:在线充值(`payment_method=wechat`)创建订单成功后,前端获取支付参数的接口本次未实现。充值回调处理已完整实现——等支付发起改造完成后,完整的充值支付闭环即可联通。

View File

@@ -0,0 +1,253 @@
# 资产详情重构 API 变更说明
> 适用版本asset-detail-refactor 提案上线后
> 文档更新2026-03-14
---
## 一、现有接口字段变更
### 1. `device_no` 重命名为 `virtual_no`
所有涉及设备标识符的接口,响应中的 `device_no` 字段已统一改名为 `virtual_no`**JSON key 同步变更**,前端需全局替换。
受影响接口:
| 接口 | 变更字段 |
|------|---------|
| `GET /api/admin/devices`(列表/详情响应) | `device_no``virtual_no` |
| `GET /api/admin/devices/import/tasks/:id` | `failed_items[].device_no``virtual_no` |
| `GET /api/admin/enterprises/:id/devices`(企业设备列表) | `device_no``virtual_no` |
| `GET /api/admin/shop-commission/records` | `device_no``virtual_no` |
| `GET /api/admin/my-commission/records` | `device_no``virtual_no` |
| 企业卡授权相关响应中的设备字段 | `device_no``virtual_no` |
---
### 2. 套餐接口新增 `virtual_ratio` 字段
`GET /api/admin/packages` 及套餐详情响应新增:
| 新增字段 | 类型 | 说明 |
|---------|------|------|
| `virtual_ratio` | float64 | 虚流量比例real_data_mb / virtual_data_mb。启用虚流量时计算否则为 1.0 |
---
### 3. IoT 卡接口新增 `virtual_no` 字段
卡列表/详情响应新增:
| 新增字段 | 类型 | 说明 |
|---------|------|------|
| `virtual_no` | string | 虚拟号(可空) |
---
## 二、新增接口
### 基础说明
- 路径参数 `asset_type` 取值:`card`(卡)或 `device`(设备)
- 企业账号调用 `resolve` 接口会返回 403
---
### `GET /api/admin/assets/resolve/:identifier`
通过任意标识符查询设备或卡的完整详情。支持虚拟号、ICCID、IMEI、SN、MSISDN。
**响应字段:**
| 字段 | 类型 | 说明 |
|------|------|------|
| `asset_type` | string | `card` / `device` |
| `asset_id` | uint | 数据库 ID |
| `virtual_no` | string | 虚拟号 |
| `status` | int | 资产状态 |
| `batch_no` | string | 批次号 |
| `shop_id` | uint | 所属店铺 ID |
| `shop_name` | string | 所属店铺名称 |
| `series_id` | uint | 套餐系列 ID |
| `series_name` | string | 套餐系列名称 |
| `real_name_status` | int | 实名状态0 未实名 / 1 实名中 / 2 已实名 |
| `network_status` | int | 网络状态0 停机 / 1 开机(仅 card|
| `current_package` | string | 当前套餐名称(无则空) |
| `package_total_mb` | int64 | 当前套餐总虚流量 MB |
| `package_used_mb` | float64 | 已用虚流量 MB |
| `package_remain_mb` | float64 | 剩余虚流量 MB |
| `device_protect_status` | string | 保护期状态:`none` / `stop` / `start`(仅 device|
| `activated_at` | time | 激活时间 |
| `created_at` | time | 创建时间 |
| `updated_at` | time | 更新时间 |
| **绑定关系card 时)** | | |
| `iccid` | string | 卡 ICCID |
| `bound_device_id` | uint | 绑定设备 ID |
| `bound_device_no` | string | 绑定设备虚拟号 |
| `bound_device_name` | string | 绑定设备名称 |
| **绑定关系device 时)** | | |
| `bound_card_count` | int | 绑定卡数量 |
| `cards[]` | array | 绑定卡列表,每项含:`card_id` / `iccid` / `msisdn` / `network_status` / `real_name_status` / `slot_position` |
| **设备专属字段card 时为空)** | | |
| `device_name` | string | 设备名称 |
| `imei` | string | IMEI |
| `sn` | string | 序列号 |
| `device_model` | string | 设备型号 |
| `device_type` | string | 设备类型 |
| `max_sim_slots` | int | 最大插槽数 |
| `manufacturer` | string | 制造商 |
| **卡专属字段device 时为空)** | | |
| `carrier_type` | string | 运营商类型 |
| `carrier_name` | string | 运营商名称 |
| `msisdn` | string | 手机号 |
| `imsi` | string | IMSI |
| `card_category` | string | 卡业务类型 |
| `supplier` | string | 供应商 |
| `activation_status` | int | 激活状态 |
| `enable_polling` | bool | 是否参与轮询 |
---
### `GET /api/admin/assets/:asset_type/:id/realtime-status`
读取资产实时状态(直接读 DB/Redis不调网关
**响应字段:**
| 字段 | 类型 | 说明 |
|------|------|------|
| `asset_type` | string | `card` / `device` |
| `asset_id` | uint | 资产 ID |
| `network_status` | int | 网络状态(仅 card|
| `real_name_status` | int | 实名状态(仅 card|
| `current_month_usage_mb` | float64 | 本月已用流量 MB仅 card|
| `last_sync_time` | time | 最后同步时间(仅 card|
| `device_protect_status` | string | 保护期:`none` / `stop` / `start`(仅 device|
| `cards[]` | array | 所有绑定卡的状态(仅 device同 resolve 的 cards 结构 |
---
### `POST /api/admin/assets/:asset_type/:id/refresh`
主动调网关拉取最新数据后返回,响应结构与 `realtime-status` 完全相同。
> 设备有 **30 秒冷却期**,冷却中调用返回 429。
---
### `GET /api/admin/assets/:asset_type/:id/packages`
查询该资产所有套餐记录,含虚流量换算字段。
**响应为数组,每项字段:**
| 字段 | 类型 | 说明 |
|------|------|------|
| `package_usage_id` | uint | 套餐使用记录 ID |
| `package_id` | uint | 套餐 ID |
| `package_name` | string | 套餐名称 |
| `package_type` | string | `formal`(正式套餐)/ `addon`(加油包) |
| `status` | int | 0 待生效 / 1 生效中 / 2 已用完 / 3 已过期 / 4 已失效 |
| `status_name` | string | 状态中文名 |
| `data_limit_mb` | int64 | 真流量总量 MB |
| `virtual_limit_mb` | int64 | 虚流量总量 MB已按 virtual_ratio 换算) |
| `data_usage_mb` | int64 | 已用真流量 MB |
| `virtual_used_mb` | float64 | 已用虚流量 MB |
| `virtual_remain_mb` | float64 | 剩余虚流量 MB |
| `virtual_ratio` | float64 | 虚流量比例 |
| `activated_at` | time | 激活时间 |
| `expires_at` | time | 到期时间 |
| `master_usage_id` | uint | 主套餐 ID加油包时有值|
| `priority` | int | 优先级 |
| `created_at` | time | 创建时间 |
---
### `GET /api/admin/assets/:asset_type/:id/current-package`
查询当前生效中的主套餐,响应结构同 `packages` 数组的单项。无生效套餐时返回 404。
---
### `POST /api/admin/assets/device/:device_id/stop`
批量停机设备下所有已实名卡,停机成功后设置 **1 小时停机保护期**(保护期内禁止复机)。
**响应字段:**
| 字段 | 类型 | 说明 |
|------|------|------|
| `message` | string | 操作结果描述 |
| `success_count` | int | 成功停机的卡数量 |
| `failed_cards[]` | array | 停机失败列表,每项含 `iccid``reason` |
---
### `POST /api/admin/assets/device/:device_id/start`
批量复机设备下所有已实名卡,复机成功后设置 **1 小时复机保护期**(保护期内禁止停机)。
无响应 bodyHTTP 200 即成功。
---
### `POST /api/admin/assets/card/:iccid/stop`
手动停机单张卡(通过 ICCID。若卡绑定的设备在**复机保护期**内,返回 403。
无响应 bodyHTTP 200 即成功。
---
### `POST /api/admin/assets/card/:iccid/start`
手动复机单张卡(通过 ICCID。若卡绑定的设备在**停机保护期**内,返回 403。
无响应 bodyHTTP 200 即成功。
---
## 三、删除的接口
### IoT 卡
| 删除的接口 | 替代接口 |
|-----------|---------|
| `GET /api/admin/iot-cards/by-iccid/:iccid` | `GET /api/admin/assets/resolve/:iccid` |
| `GET /api/admin/iot-cards/:iccid/gateway-status` | `GET /api/admin/assets/card/:id/realtime-status` |
| `GET /api/admin/iot-cards/:iccid/gateway-flow` | `GET /api/admin/assets/card/:id/realtime-status` |
| `GET /api/admin/iot-cards/:iccid/gateway-realname` | `GET /api/admin/assets/card/:id/realtime-status` |
| `POST /api/admin/iot-cards/:iccid/stop` | `POST /api/admin/assets/card/:iccid/stop` |
| `POST /api/admin/iot-cards/:iccid/start` | `POST /api/admin/assets/card/:iccid/start` |
### 设备
| 删除的接口 | 替代接口 |
|-----------|---------|
| `GET /api/admin/devices/:id` | `GET /api/admin/assets/resolve/:virtual_no` |
| `GET /api/admin/devices/by-identifier/:identifier` | `GET /api/admin/assets/resolve/:identifier` |
| `GET /api/admin/devices/by-identifier/:identifier/gateway-info` | `GET /api/admin/assets/device/:id/realtime-status` |
### 企业卡Admin
| 删除的接口 | 替代接口 |
|-----------|---------|
| `POST /api/admin/enterprises/:id/cards/:card_id/suspend` | `POST /api/admin/assets/card/:iccid/stop` |
| `POST /api/admin/enterprises/:id/cards/:card_id/resume` | `POST /api/admin/assets/card/:iccid/start` |
### 企业设备H5
| 删除的接口 | 替代接口 |
|-----------|---------|
| `POST /api/h5/enterprise/devices/:device_id/suspend-card` | `POST /api/admin/assets/device/:device_id/stop` |
| `POST /api/h5/enterprise/devices/:device_id/resume-card` | `POST /api/admin/assets/device/:device_id/start` |
---
## 四、新增错误码说明
| HTTP 状态码 | 触发场景 |
|------------|---------|
| 403 | 设备在保护期内(停机 1h 内禁止复机,反之亦然);企业账号调用 resolve 接口 |
| 404 | 标识符未匹配到任何资产;当前无生效套餐 |
| 429 | 设备刷新冷却中30 秒内只能主动刷新一次) |

View File

@@ -0,0 +1,128 @@
# 客户端接口数据模型基础准备 - 功能总结
## 概述
本提案作为客户端接口系列的前置基础,完成三类工作BUG 修复、基础字段准备、旧接口清理。
## 一、BUG 修复
### BUG-1代理零售价修复
**问题**:`ShopPackageAllocation` 缺少 `retail_price` 字段,所有渠道统一使用 `Package.SuggestedRetailPrice`,代理无法设定自己的零售价。
**修复内容**
- `ShopPackageAllocation` 新增 `retail_price` 字段(迁移中存量数据批量回填为 `SuggestedRetailPrice`
- `GetPurchasePrice()` 改为按渠道取价:代理渠道返回 `allocation.RetailPrice`,平台渠道返回 `SuggestedRetailPrice`
- `validatePackages()` 价格累加同步修正,代理渠道额外校验 `RetailPrice >= CostPrice`
- 分配创建(`shop_package_batch_allocation``shop_series_grant`)时自动设置 `RetailPrice = SuggestedRetailPrice`
- 新增 cost_price 分配锁定:存在下级分配记录时禁止修改 `cost_price`
- `BatchUpdatePricing` 接口仅支持成本价批量调整(保留 cost_price 锁定规则)
- 新增独立接口 `PATCH /api/admin/packages/:id/retail-price`,代理可修改自己的套餐零售价
- `PackageResponse` 新增 `retail_price` 字段,利润计算修正为 `RetailPrice - CostPrice`
**涉及文件**
- `internal/model/shop_package_allocation.go`
- `internal/model/dto/shop_package_batch_pricing_dto.go`
- `internal/model/dto/package_dto.go`
- `internal/service/purchase_validation/service.go`
- `internal/service/shop_package_batch_allocation/service.go`
- `internal/service/shop_series_grant/service.go`
- `internal/service/shop_package_batch_pricing/service.go`
- `internal/service/package/service.go`
### BUG-2: One-Time Commission Trigger
**Problem**: every back-office order (including an agent's self-purchase) could trigger the one-time commission.
**Fix**:
- `Order` gains a `source` field (`admin`/`client`), defaulting to `admin`
- The trigger condition changed from `!order.IsPurchaseOnBehalf` to `!order.IsPurchaseOnBehalf && order.Source == "client"`
- `CreateAdminOrder()` sets `Source: constants.OrderSourceAdmin`
**Files involved**:
- `internal/model/order.go`
- `internal/service/commission_calculation/service.go` (two methods)
- `internal/service/order/service.go`
### BUG-4: Recharge-Callback Transaction Consistency
**Problem**: inside `HandlePaymentCallback`, `UpdateStatusWithOptimisticLock` and `UpdatePaymentInfo` used `s.db` rather than the in-transaction `tx`.
**Fix**:
- `AssetRechargeStore` gains `UpdateStatusWithOptimisticLockDB` and `UpdatePaymentInfoWithDB` methods that accept a `tx`
- The original methods remain (delegating to the new ones) for backward compatibility
- `HandlePaymentCallback` now calls through the in-transaction `tx`
**Files involved**:
- `internal/store/postgres/asset_recharge_store.go`
- `internal/service/recharge/service.go`
## II. Base-Field Preparation
### New constants files
| File | Contents |
|------|------|
| `pkg/constants/asset_status.go` | Asset business statuses (in stock / sold / exchanged / deactivated) |
| `pkg/constants/order_source.go` | Order sources (admin/client) |
| `pkg/constants/operator_type.go` | Operator types (admin_user/personal_customer) |
| `pkg/constants/realname_link.go` | Real-name link types (none/template/gateway) |
### Model field changes
| Model | New fields | Notes |
|------|---------|------|
| `IotCard` | `asset_status`, `generation` | Business lifecycle status; asset generation number |
| `Device` | `asset_status`, `generation` | Same as above |
| `Order` | `source`, `generation` | Order source; asset-generation snapshot |
| `PackageUsage` | `generation` | Asset-generation snapshot |
| `AssetRechargeRecord` | `operator_type`, `generation`, `linked_package_ids`, `linked_order_type`, `linked_carrier_type`, `linked_carrier_id` | Operator type; generation; force-recharge linkage fields |
| `Carrier` | `realname_link_type`, `realname_link_template` | Real-name link configuration |
| `ShopPackageAllocation` | `retail_price` | Agent retail price |
| `PersonalCustomer` | `wx_open_id` index change | Unique index downgraded to a plain index |
### Carrier admin DTO updates
- `CarrierCreateRequest` and `CarrierUpdateRequest` gain `realname_link_type` and `realname_link_template`
- `CarrierResponse` gains the corresponding display fields
- The Carrier Service Create/Update methods handle them; Update enforces a non-empty template when the type is `template`
### Manual asset deactivation
- New `PATCH /api/admin/iot-cards/:id/deactivate` and `PATCH /api/admin/devices/:id/deactivate`
- Deactivation is allowed only when `asset_status` is 1 (in stock) or 2 (sold)
- A conditional update keeps the operation idempotent
## III. Legacy Endpoint Cleanup
### H5 endpoints removed
- Deleted every file under `internal/handler/h5/` (5 files)
- Deleted `internal/routes/h5*.go` (3 files)
- Removed H5 route registration from `routes.go`, `order.go`, and `recharge.go`
- Removed H5 handler construction and fields from `bootstrap/`
- Removed the H5 auth middleware from `middlewares.go`
- Removed H5 doc-generation references from `pkg/openapi/handlers.go`
- Removed H5 rate-limit mounting from `cmd/api/main.go`
### Legacy personal-customer login methods removed
- Deleted the Login, SendCode, WechatOAuthLogin, and BindWechat methods from `internal/handler/app/personal_customer.go`
- Removed the corresponding route registrations
- Kept UpdateProfile and GetProfile
## IV. Database Migration
- Migration number: 000082
- Touches 7 tables and 15+ field changes
- Includes the bulk backfill of existing `retail_price` values
- Includes the `wx_open_id` index change from unique to plain
- Every new column uses `NOT NULL DEFAULT` for compatibility with existing rows
## V. Back-Office Order generation Snapshot
- `CreateAdminOrder()` reads the asset's (IotCard/Device) current `Generation` and writes it into the order at creation time
- It no longer relies on the database default of 1

# C-Side Authentication System — Feature Summary
## Overview
This change delivers a complete authentication system for personal customers (the C side), replacing the legacy H5 login endpoints. It supports both WeChat Official Account and Mini Program login, built on the flow "verify asset identifier → WeChat authorization → auto-bind asset → optionally bind a phone number".
## Endpoint List
| Endpoint | Path | Auth | Description |
|------|------|------|------|
| A1 | `POST /api/c/v1/auth/verify-asset` | No | Verify an asset identifier; returns an asset_token |
| A2 | `POST /api/c/v1/auth/wechat-login` | No | WeChat Official Account login |
| A3 | `POST /api/c/v1/auth/miniapp-login` | No | WeChat Mini Program login |
| A4 | `POST /api/c/v1/auth/send-code` | No | Send an SMS verification code |
| A5 | `POST /api/c/v1/auth/bind-phone` | Yes | Bind a phone number for the first time |
| A6 | `POST /api/c/v1/auth/change-phone` | Yes | Change the bound phone number (two verification codes) |
| A7 | `POST /api/c/v1/auth/logout` | Yes | Log out |
## Login Flow
```
User enters an asset identifier (SN/IMEI/ICCID)
[A1] verify-asset → asset_token (valid 5 minutes)
WeChat authorization (handled by the front end)
├── Official Account → [A2] wechat-login (code + asset_token)
└── Mini Program → [A3] miniapp-login (code + asset_token)
Parse asset_token → obtain the WeChat openid
→ find/create the customer → bind the asset
→ issue a JWT + store it in Redis
Return { token, need_bind_phone, is_new_user }
need_bind_phone == true?
YES → [A4] send code → [A5] bind phone
NO → enter the main page
```
## Core Design
### Stateful JWT (JWT + Redis)
- The JWT payload contains only `customer_id` + `exp`
- On login the token is written to Redis with a TTL matching the JWT's
- On every request the middleware checks both the JWT signature and the token's validity in Redis
- Supports server-side invalidation (ban, forced logout, sign-out)
- Single sign-on: a new login overwrites the old token
### OpenID multi-record management
- New table `tb_personal_customer_openid`
- One customer can hold different OpenIDs under multiple AppIDs (Official Account / Mini Program)
- Unique constraint: `UNIQUE(app_id, open_id) WHERE deleted_at IS NULL`
- Customer lookup: exact openid match → unionid fallback merge → create a new customer
### Asset binding
- Every login creates a `PersonalCustomerDevice` binding record
- One asset may be bound by multiple customers (supports resale)
- On first binding the asset status moves from "in stock (1)" to "sold (2)"
### Dynamic WeChat configuration
- Login reads the active configuration from `tb_wechat_config` at runtime
- The Redis cache in WechatConfigService is consulted first
- Mini Program login calls WeChat's `jscode2session` directly over HTTP (no dependency on the PowerWeChat SDK)
## Rate Limiting
| Endpoint | Dimension | Limit |
|------|------|------|
| A1 | IP | 30/minute |
| A4 | phone number | 60-second cooldown |
| A4 | IP | 20/hour |
| A4 | phone number | 10/day |
## New/Modified Files
### New files
| File | Description |
|------|------|
| `internal/model/personal_customer_openid.go` | OpenID association model |
| `internal/model/dto/client_auth_dto.go` | Request/response DTOs for A1–A7 |
| `internal/store/postgres/personal_customer_openid_store.go` | OpenID store |
| `internal/service/client_auth/service.go` | Auth service (core business logic) |
| `internal/handler/app/client_auth.go` | Auth handler (7 endpoints) |
| `pkg/wechat/miniapp.go` | Mini Program SDK wrapper |
| `migrations/000083_add_personal_customer_openid.up.sql` | Migration |
| `migrations/000083_add_personal_customer_openid.down.sql` | Rollback |
### Modified files
| File | Description |
|------|------|
| `internal/middleware/personal_auth.go` | Adds the Redis double check |
| `pkg/constants/redis.go` | New token and rate-limit Redis keys |
| `pkg/errors/codes.go` | New error codes 1180–1186 |
| `pkg/config/defaults/config.yaml` | New `client.require_phone_binding` |
| `pkg/wechat/wechat.go` | New MiniAppServiceInterface |
| `pkg/wechat/config.go` | 3 new DB-driven factory functions |
| `internal/bootstrap/types.go` | New ClientAuth handler field |
| `internal/bootstrap/handlers.go` | Instantiates the ClientAuth handler |
| `internal/bootstrap/services.go` | Initializes the ClientAuth service |
| `internal/bootstrap/stores.go` | Initializes the OpenID store |
| `internal/routes/personal.go` | Registers the 7 auth endpoints |
| `cmd/api/docs.go` | Doc-generator registration |
| `cmd/gendocs/main.go` | Doc-generator registration |
## Error Codes
| Code | Constant | Description |
|------|--------|------|
| 1180 | CodeAssetNotFound | Asset not found |
| 1181 | CodeWechatConfigUnavailable | WeChat configuration unavailable |
| 1182 | CodeSmsSendFailed | SMS send failed |
| 1183 | CodeVerificationCodeInvalid | Verification code invalid or expired |
| 1184 | CodePhoneAlreadyBound | Phone number already bound to another customer |
| 1185 | CodeAlreadyBoundPhone | A phone number is already bound; cannot bind again |
| 1186 | CodeOldPhoneMismatch | Old phone number does not match the current binding |
## Database Changes
- New table `tb_personal_customer_openid` (migration 000083)
- Unique index: `idx_pco_app_id_open_id` (app_id, open_id) with the soft-delete condition
- Plain index: `idx_pco_customer_id` (customer_id)
- Partial index: `idx_pco_union_id` (union_id) WHERE union_id != ''
## Configuration
| Config path | Env var | Default | Description |
|---------|---------|-------|------|
| `client.require_phone_binding` | `JUNHONG_CLIENT_REQUIRE_PHONE_BINDING` | `true` | Whether phone binding is required |

# Client Core Business API — Feature Summary
## Overview
This proposal delivers the complete business API for the client (C-side personal customers), covering asset queries, wallet recharge, package purchase, real-name redirection, and device operations — 5 modules, 18 endpoints in total, all mounted under `/api/c/v1/`.
**Prerequisites**: Proposal 0 (data-model fixes) and Proposal 1 (C-side auth system).
## Endpoint Overview
### Module B: Asset Info (4 endpoints)
| Method | Path | Description |
|------|------|------|
| GET | `/api/c/v1/asset/info` | B1 basic asset info |
| GET | `/api/c/v1/asset/packages` | B2 purchasable package list |
| GET | `/api/c/v1/asset/package-history` | B3 package history |
| POST | `/api/c/v1/asset/refresh` | B4 manual asset-status refresh |
### Module C: Wallet & Recharge (5 endpoints)
| Method | Path | Description |
|------|------|------|
| GET | `/api/c/v1/wallet/detail` | C1 wallet detail (auto-created if missing) |
| GET | `/api/c/v1/wallet/transactions` | C2 wallet transaction list |
| GET | `/api/c/v1/wallet/recharge-check` | C3 recharge pre-check (force-recharge check) |
| POST | `/api/c/v1/wallet/recharge` | C4 create a recharge order (JSAPI payment) |
| GET | `/api/c/v1/wallet/recharges` | C5 recharge order list |
### Module D: Package Purchase (3 endpoints)
| Method | Path | Description |
|------|------|------|
| POST | `/api/c/v1/orders/create` | D1 create a package purchase order (with force-recharge branching) |
| GET | `/api/c/v1/orders` | D2 package order list |
| GET | `/api/c/v1/orders/:id` | D3 package order detail |
### Module E: Real-Name (1 endpoint)
| Method | Path | Description |
|------|------|------|
| GET | `/api/c/v1/realname/link` | E1 get the real-name redirect link |
### Module F: Device Capabilities (5 endpoints)
| Method | Path | Description |
|------|------|------|
| GET | `/api/c/v1/device/cards` | F1 device card list |
| POST | `/api/c/v1/device/reboot` | F2 reboot the device |
| POST | `/api/c/v1/device/factory-reset` | F3 factory reset |
| POST | `/api/c/v1/device/wifi` | F4 set WiFi |
| POST | `/api/c/v1/device/switch-card` | F5 switch card |
## Core Design Decisions
### 1. Data-permission bypass
When the client reuses back-office services, `gorm.SkipDataPermission(ctx)` is applied uniformly to bypass the automatic shop_id filter, so personal customers are not wrongly blocked for not being a shop principal.
### 2. Ownership check
Every asset-touching endpoint runs the same up-front ownership check: query `PersonalCustomerDevice` with `customer_id = current customer` and `virtual_no = the asset's virtual number`; a miss returns 403.
### 3. Generation filtering
Client history queries uniformly append `WHERE generation = the asset's current generation`, keeping data isolated after a resale.
### 4. OpenID safety
The OpenID needed by the payment endpoints (C4/D1) is looked up server-side by `customer_id + app_type`; clients must never submit an OpenID. The `app_type` also selects the WeChat AppID used to create the payment instance.
### 5. Two-phase force recharge
- Phase 1 (synchronous): credit the recharge and update its status
- Phase 2 (async, Asynq): debit the wallet → create the order → activate the package
- `AssetRechargeRecord.auto_purchase_status` tracks the async state (pending/success/failed)
## New Files
```
internal/model/dto/client_asset_dto.go           # asset-module DTOs
internal/model/dto/client_wallet_dto.go          # wallet-module DTOs
internal/model/dto/client_order_dto.go           # order-module DTOs
internal/model/dto/client_realname_device_dto.go # real-name + device DTOs
internal/handler/app/client_asset.go             # asset handler
internal/handler/app/client_wallet.go            # wallet handler
internal/handler/app/client_order.go             # order handler
internal/handler/app/client_realname.go          # real-name handler
internal/handler/app/client_device.go            # device handler
internal/service/client_order/service.go         # client order-orchestration service
internal/task/auto_purchase.go                   # force-recharge async auto-purchase task
migrations/000084_add_auto_purchase_status_*.sql # database migration
```
## Modified Files
```
pkg/constants/constants.go     # new auto_purchase_status constants + task type
pkg/constants/redis.go         # new client purchase idempotency key
pkg/errors/codes.go            # new NEED_REALNAME/OPENID_NOT_FOUND error codes
internal/model/asset_wallet.go # new AssetRechargeRecord fields
internal/bootstrap/types.go    # 5 new handler fields
internal/bootstrap/handlers.go # handler instantiation
internal/routes/personal.go    # 18 route registrations
pkg/openapi/handlers.go        # doc-generation handlers
cmd/api/docs.go                # doc registration
cmd/gendocs/main.go            # doc registration
```
## New Error Codes
| Code | Constant | Message |
|--------|--------|------|
| 1187 | CodeNeedRealname | This package requires real-name verification before purchase |
| 1188 | CodeOpenIDNotFound | WeChat authorization info not found; complete authorization first |
## Database Changes
- Table: `tb_asset_recharge_record`
- New column: `auto_purchase_status VARCHAR(20) DEFAULT '' NOT NULL`
- Migration: 000084

# Client Exchange System — Feature Summary
## 1. Overview
This change closes the back-office/client loop for the exchange (replacement) system, covering the full flow "back office creates the order → customer submits shipping info → back office ships → back office confirms completion (optional full migration) → old asset renewed".
## 2. Data Model & Migrations
- New table `tb_exchange_order` carrying the full exchange lifecycle: old/new asset, shipping info, logistics info, migration state, business status, and multi-tenant fields.
- Historical capability retained: the old table `tb_card_replacement_record` is renamed to `tb_card_replacement_record_legacy`.
- New migrations:
  - `000085_add_exchange_order.up/down.sql`
  - `000086_rename_card_replacement_to_legacy.up/down.sql`
## 3. Backend
### 3.1 Store layer
- New `ExchangeOrderStore`:
  - create, get by ID, paginated list
  - conditional status-transition update (`WHERE status = fromStatus`)
  - query in-progress exchange orders by old asset (statuses `1/2/3`)
- New `ResourceTagStore`: used for resource-tag copying.
### 3.2 Service layer
- New `internal/service/exchange/service.go`:
  - H1 create an exchange order (asset-existence check, in-progress check, order-number generation, status initialization)
  - H2 list
  - H3 detail
  - H4 ship (status check, same-type check, new-asset in-stock check; writes the logistics and new-asset snapshot)
  - H5 confirm completion (status check; optional full migration)
  - H6 cancel (only `1/2 -> 5`)
  - H7 renew (verifies the exchanged status, `generation+1`, status reset, binding cleanup, new wallet creation)
  - G1 query pending exchange orders
  - G2 submit shipping info (`1 -> 2`)
- New `internal/service/exchange/migration.go`:
  - single-transaction migration
  - wallet-balance migration with a migration transaction record
  - package-usage migration (`tb_package_usage`)
  - linked update of daily package records (`tb_package_usage_daily_record`)
  - copy of the accumulated-recharge / first-recharge fields (old asset -> new asset)
  - tag copying (`tb_resource_tag`)
  - customer-binding `virtual_no` update (`tb_personal_customer_device`)
  - old asset status set to exchanged (`asset_status=3`)
  - migration result written back to the exchange order (`migration_completed`, `migration_balance`)
## 4. Handlers & Routes
### 4.1 Back-office exchange endpoints
- New `internal/handler/admin/exchange.go`
- New `internal/routes/exchange.go`
- Registered endpoints (OpenAPI tag: `换货管理`, "exchange management"):
  - `POST /api/admin/exchanges`
  - `GET /api/admin/exchanges`
  - `GET /api/admin/exchanges/:id`
  - `POST /api/admin/exchanges/:id/ship`
  - `POST /api/admin/exchanges/:id/complete`
  - `POST /api/admin/exchanges/:id/cancel`
  - `POST /api/admin/exchanges/:id/renew`
### 4.2 Client exchange endpoints
- New `internal/handler/app/client_exchange.go`
- Registered in `internal/routes/personal.go`:
  - `GET /api/c/v1/exchange/pending`
  - `POST /api/c/v1/exchange/:id/shipping-info`
## 5. Compatibility & Replacement
- The `is_replaced` filter logic in `iot_card_store.go` now reads from `tb_exchange_order`.
- The main business flow no longer depends on the legacy replacement table (the model and the legacy table remain only for historical data).
## 6. Bootstrap Wiring & Doc Generation
The exchange module is fully wired in at:
- `internal/bootstrap/types.go`
- `internal/bootstrap/stores.go`
- `internal/bootstrap/services.go`
- `internal/bootstrap/handlers.go`
- `internal/routes/admin.go`
- `pkg/openapi/handlers.go`
- `cmd/api/docs.go`
- `cmd/gendocs/main.go`
## 7. Verification
- Ran `go build ./...`: compiles.
- Ran the database migration `make migrate-up`: version advanced to `86`.
- LSP diagnostics on the changed files: no error-level issues.

# Package & Commission Business Model
This document defines the complete business model for packages, package series, and commissions, and serves as the normative reference for the system overhaul.
---
## I. Core Concepts
### 1.1 The two commission types
The system has exactly two commission types:
| Commission type | Trigger | Times triggered | Calculation |
|---------|---------|---------|---------|
| **Spread commission** | every order | every order | subordinate's cost price - own cost price |
| **One-time commission** | first recharge / accumulated recharge hits the threshold | once per card/device | amount granted from above - amount granted downward |
### 1.2 Entity relationships
```
┌───────────────────────┐
│ Package series        │
│ (PackageSeries)       │
├───────────────────────┤
│ • series name         │
│ • one-time commission │ ← optional
│   rule                │
└──────────┬────────────┘
           │ 1:N
           ▼
┌───────────────────────┐      ┌───────────────────────┐
│ Package               │      │ Card / Device         │
│ (Package)             │      │ (IotCard / Device)    │
├───────────────────────┤      ├───────────────────────┤
│ • cost price          │      │ • bound series ID     │
│ • suggested price     │      │ • accumulated recharge│ ← per series
│ • real data (required)│      │ • first-recharge flag │ ← per series
│ • virtual data (opt.) │      └──────────┬────────────┘
│ • virtual-data switch │                 │ allocation
└──────────┬────────────┘                 ▼
           │ allocation      ┌───────────────────────┐
           ▼                 │ Shop                  │
┌───────────────────────┐    │ (Shop)                │
│ Package allocation    │◀───┤ • agent level         │
│ (PkgAllocation)       │    │ • parent shop ID      │
├───────────────────────┤    └───────────────────────┘
│ • shop ID             │
│ • package ID          │
│ • cost price (marked  │
│   up)                 │
│ • one-time commission │ ← amount for this agent
│   amount              │
└───────────────────────┘
```
---
## II. Package Model
### 2.1 Fields
| Field | Type | Required | Description |
|------|------|------|------|
| `cost_price` | int64 | yes | Cost price (the platform's base cost, in cents) |
| `suggested_price` | int64 | yes | Suggested retail price (a reference for agents, in cents) |
| `real_data_mb` | int64 | yes | Real data quota (MB) |
| `enable_virtual_data` | bool | no | Whether virtual data is enabled |
| `virtual_data_mb` | int64 | no | Virtual data quota (required when enabled; ≤ real data quota, MB) |
### 2.2 Suspension threshold
```
suspension threshold = enable_virtual_data ? virtual_data_mb : real_data_mb
```
### 2.3 Per-role views
| Role | Cost price seen | One-time commission seen |
|---------|-------------|-----------------|
| Platform | base cost price | the full rule |
| Agent A | A's cost price (marked up) | the amount A can earn |
| Agent A1 | A1's cost price (marked up again) | the amount A1 can earn |
---
## III. Spread Commission
### 3.1 Calculation
```
Platform base cost price: 100
│ allocated to agent A (cost price set to 120)
Agent A's cost price: 120
│ allocated to agent A1 (cost price set to 130)
Agent A1's cost price: 130
│ A1 sells to a customer (sale price: 200)
Result:
• A1 income = 200 - 130 = 70 (sales profit, not a commission)
• A commission = 130 - 120 = 10 (spread commission)
• Platform income = 120
```
### 3.2 Key distinctions
- **Income/profit**: the end agent's `sale price - own cost price`
- **Spread commission**: an upstream agent's `subordinate's cost price - own cost price`
- **Platform income**: the level-1 agent's cost price
---
## IV. One-Time Commission
### 4.1 Trigger conditions
| Condition | Description | Force-recharge requirement |
|---------|------|---------|
| `first_recharge` | First recharge: the card/device's first recharge under the series | force recharge required |
| `accumulated_recharge` | Accumulated recharge: the accumulated amount reaches the threshold | force recharge optional |
### 4.2 Rule configuration (series level)
| Option | Type | Description |
|--------|------|------|
| `enable` | bool | Whether the one-time commission is enabled |
| `trigger_type` | string | Trigger type: `first_recharge` / `accumulated_recharge` |
| `threshold` | int64 | Trigger threshold (cents): the required first-recharge amount or the required accumulated amount |
| `commission_type` | string | Payout type: `fixed` / `tiered` |
| `commission_amount` | int64 | Fixed payout amount (for the fixed type) |
| `tiers` | array | Tier configuration (for the tiered type) |
| `validity_type` | string | Validity type: `permanent` / `fixed_date` / `relative` |
| `validity_value` | string | Validity value (an expiry date or a number of months) |
| `enable_force_recharge` | bool | Whether force recharge is enabled |
| `force_calc_type` | string | Force-recharge amount calculation: `fixed` / `dynamic` (dynamic difference) |
| `force_amount` | int64 | Force-recharge amount (for the fixed type) |
### 4.3 Chain allocation
The one-time commission is split along the whole agent chain by agreement:
```
Series rule: a first recharge of 100 pays 20
Allocation config:
  platform grants A: 20
  A grants A1: 8
  A1 grants A2: 5
On the first-recharge trigger:
  A2 gets 5
  A1 gets 8 - 5 = 3
  A gets 20 - 8 = 12
─────────────────────
Total: 20 ✓
```
### 4.4 First-recharge flow
```
Customer purchases a package
Pre-check: does the series enable a one-time commission with a first-recharge trigger?
No ───────────────────▶ normal purchase flow
Has this card/device already first-recharged under this series?
Yes ──────────────────▶ normal purchase flow (no further payout)
Force-recharge amount = max(first-recharge requirement, package sale price)
Prompt: "a recharge of xxx is required"
User confirms → create a recharge order (amount = force-recharge amount)
User pays
On payment success:
1. the money enters the wallet
2. mark the card/device as first-recharged
3. auto-create the package purchase order and complete it
4. debit the wallet (the package sale price)
5. trigger the one-time commission, chain allocation
```
### 4.5 Accumulated-recharge flow
```
Customer recharges (directly into the wallet)
accumulated amount += this recharge
Has the card/device already triggered the accumulated-recharge payout?
Yes ──────────────────▶ stop (no further payout)
accumulated amount >= required amount?
No ───────────────────▶ stop (keep accumulating)
Trigger the one-time commission, chain allocation
Mark the card/device as having triggered the accumulated-recharge payout
```
**Accumulation rules**
| Operation | Counted? |
|---------|---------|
| Direct wallet recharge | ✅ counted |
| Direct package purchase (bypassing the wallet) | ❌ not counted |
| Force-recharge purchase (recharge first, then debit) | ✅ counted (the recharge portion) |
---
## V. Tiered Commission
The tiered commission is an advanced form of the one-time commission: the payout adjusts dynamically with the agent's sales volume/amount.
### 5.1 Configuration
| Option | Type | Description |
|--------|------|------|
| `tier_dimension` | string | Tier dimension: `sales_count` (volume) / `sales_amount` (revenue) |
| `stat_scope` | string | Statistics scope: `self` / `self_and_sub` (self + subordinates) |
| `tiers` | array | Tier list |
| `tiers[].threshold` | int64 | Threshold (sales volume or amount) |
| `tiers[].amount` | int64 | Payout amount (cents) |
### 5.2 Example
```
Tier rule (volume dimension):
┌────────────────┬──────────────────────────────┐
│ Sales volume   │ Payout for a first recharge  │
│                │ of 100                       │
├────────────────┼──────────────────────────────┤
│ >= 0           │ 5                            │
├────────────────┼──────────────────────────────┤
│ >= 100         │ 10                           │
├────────────────┼──────────────────────────────┤
│ >= 200         │ 20                           │
└────────────────┴──────────────────────────────┘
Agent A's current volume is 150 → falls in [100, 200) → a first recharge pays 10
```
### 5.3 Tier upgrades
```
Initial state:
  Agent A at volume 150 (the 10 tier applies), grants A1 5
  On trigger: A1 gets 5, A gets 10 - 5 = 5
After the upgrade (A's volume reaches 210):
  A moves to the 20 tier; A1's grant is still 5
  On trigger: A1 still gets 5; A gets 20 - 5 = 15 (the increment goes to the upper level)
```
### 5.4 Statistics window
- The statistics window matches the one-time commission's validity
- Only sales volume/amount under this package series is counted
---
## VI. Constraints
### 6.1 Package allocation
1. A subordinate's cost price >= your own cost price (no selling at a loss)
2. You may only allocate packages you are entitled to
3. Allocation only to direct subordinates (no level skipping)
### 6.2 One-time commission allocation
4. The amount granted downward <= the amount you can earn
5. The amount granted downward >= 0 (it may be 0, keeping everything)
### 6.3 Data quotas
6. Virtual data <= real data
### 6.4 Configuration changes
7. Configuration changes affect only subsequent new orders
8. Agents may change only "how much goes downward", never the trigger rules
9. A platform change to a series rule does not affect already-allocated agents; revoke and re-allocate to apply it
### 6.5 Trigger limits
10. The one-time commission triggers at most once per card/device
11. "First recharge" means the card/device's first recharge under that series
12. Accumulated recharge counts only "recharge" operations, never "direct purchases"
---
## VII. Operating Flow
### 7.1 The ideal linear flow
```
1. Create a package series
   └─▶ optional: configure the one-time commission rule
2. Create packages
   └─▶ assign each to a series
   └─▶ set the cost price and suggested price
   └─▶ set real data (required) and virtual data (optional)
3. Allocate packages to agents
   └─▶ set the agent's cost price (markup)
   └─▶ if the series enables the one-time commission: set the agent's commission amount
4. Allocate assets (cards/devices) to agents
   └─▶ the asset's bound package series follows automatically
5. The agent sells
   └─▶ a customer buys a package
   └─▶ the spread commission is computed and credited upward automatically
   └─▶ when the one-time commission conditions are met, the chain allocation credits each level
```
---
## VIII. Differences from the Current Code
See the refactor proposal: [refactor-commission-package-model](../openspec/changes/refactor-commission-package-model/)

# Junhong CMP Asset-Detail System Overhaul — Discussion Minutes
> Created: 2026-03-12
> Last updated: 2026-03-14
> Stage: design discussion (not yet an openspec proposal)
> Purpose: preserve the full context for future continuation
---
## I. Background & Requirements
### 1.1 Project background
The Junhong card-management system (junhong_cmp_fiber) is an IoT-card management platform for agents/enterprises with two core asset types:
- **IoT card (IotCard)**: a bare card resource with ICCID, MSISDN, and data packages
- **Device (Device)**: card-carrying hardware; one device can bind multiple cards, with device-level packages
### 1.2 What triggered the requirement
Core pain points:
1. **Scattered, duplicated endpoints** — card and device queries are spread across the H5/Admin/Personal frontends, each with its own set
2. **Severely incomplete detail data** — the existing detail endpoints return too little for the front end to render a full page
3. **Raw gateway pass-through** — too little encapsulation; no business-level aggregation or processing
4. **Virtual numbers exist only on devices** — cards can only be looked up by ICCID/MSISDN, which is inconvenient
### 1.3 Confirmed core decisions
- ✅ **Multiple composable endpoints** — no single mega-aggregation endpoint; the front end calls what it needs
- ✅ **Unified entry point** — one endpoint tells the front end whether it is looking at a "card" or a "device"
- ✅ **Device-first lookup** — the unified entry queries the device table first, then the card table
- ✅ **Cards get virtual numbers** — the virtual-number concept extends to cards, on a par with the device `virtual_no`
- ✅ **All in one pass** — the overhaul is not phased; it lands at once
- ✅ **resolve returns the medium payload** — asset type, ID, virtual number, status, real-name status, package overview, data usage, the owning device (if bound), and other key info
- ✅ **Only two asset types: card and device** — future routers also count as devices; no extra types reserved
- ✅ **Virtual numbers serve both support staff and customers** — not internal-only
- ✅ **No H5 endpoints for now** — the old ones will be removed when that work is picked up
- ✅ **Package queries go through history** — history is viewed via the package/order record pages; a current-package endpoint is provided as well
- ✅ **Manual refresh reuses SyncCardStatusFromGateway** — no reimplementation; for a device, bulk-refresh every bound card
- ✅ **Insufficient permission returns 403** — state the lack of permission plainly rather than pretending the asset does not exist
- ✅ **Virtual numbers are entered manually / imported in bulk** — no format rules, editable; on duplicates the whole batch fails with the reason reported
- ✅ **device_no is renamed to virtual_no everywhere** — database + code, no legacy field kept
- ✅ **Device stop/start has a protection period** — protects state consistency; 1 hour, stored in Redis
- ✅ **realtime-status reads persisted data only** — no gateway calls; use the refresh endpoint to refresh
- ✅ **Non-real-name cards are excluded from stop/start** — they are always in the stopped state; the protection-period logic skips them
- ✅ **Enterprise accounts and resolve** — not supported for now; a separate endpoint may be opened later
- ✅ **The resolve response includes the card's ICCID** — for card assets, so the front end can call the stop/start endpoints
- ✅ **Partial bulk-stop failure still sets the protection period** — cards already stopped are not rolled back; failed cards are logged
- ✅ **Unified usage aggregation** — one aggregation path system-wide; device-level packages aggregate multi-card usage from PackageUsage
- ✅ **Package-history rules** — ordered by creation time descending, unpaginated, all statuses included (invalidated too)
- ✅ **current-package returns the master package** — when several packages are active, only the master (master_usage_id IS NULL) is returned
- ✅ **A fourth polling task type** — the protection-period consistency check becomes an independent polling task; the existing three are untouched
- ✅ **Card virtual-number import fills blanks only** — only cards with an empty virtual number may be filled, no overwriting; duplicates against existing database data fail the whole batch
- ✅ **Device bulk refresh is rate-limited** — Redis rate limiting; no repeat trigger for the same device within the cooldown (30 s suggested)
- ✅ **PersonalCustomerDevice renamed too** — the `device_no` column of tb_personal_customer_device also becomes `virtual_no`
- ✅ **realtime-status vs resolve have distinct roles** — resolve serves the initial load (includes lookup); realtime-status is lightweight status polling for a known ID (no package/usage computation)
---
## II. Audit of the Current System
### 2.1 Endpoint inventory (three frontends)
| Frontend | Card endpoints | Device endpoints | Duplicate stop/start | Package endpoints |
|---|---------|-----------|-----------|---------|
| Admin | 9 | 14 | 3 places | usage details only |
| H5 | 4 | 7 | 1 place | has package aggregation |
| Personal | 2 | 0 | none | none |
**The three duplicate stop/start implementations:**
1. Admin card side: `POST /iot-cards/:iccid/suspend|resume` (by ICCID)
2. Admin enterprise-card side: `POST /enterprises/:id/cards/:card_id/suspend|resume` (by card_id)
3. H5 enterprise-device side: `POST /h5/devices/:device_id/cards/:card_id/suspend|resume` (by card_id)
### 2.2 Missing DTO analysis
#### Card detail (IotCardDetailResponse)
```go
// current implementation (iot_card_dto.go:134-136)
type IotCardDetailResponse struct {
Code int `json:"code"`
Msg string `json:"msg"`
Data *StandaloneIotCardResponse `json:"data"` // just an empty nested shell!
}
```
**Problem**: the detail response is an empty wrapper around the list response, with no extra information at all — no package, no owning device, no aggregated usage.
#### Device detail (DeviceResponse)
```go
// current implementation (device_dto.go:20)
type DeviceResponse struct {
// ... basic fields
BoundCardCount int `json:"bound_card_count"` // just a single number!
}
```
**Problem**: only the bound-card count is returned; each card's real-name status, card status, and usage are invisible.
#### An existing H5 reference implementation
`EnterpriseDeviceDetailResp` (enterprise_device_authorization_dto.go) is currently the only DTO aggregating "device + bound card list" and can serve as the reference for the admin-side overhaul.
### 2.3 Gateway endpoint issues
**All 6 gateway query endpoints are pure pass-through:**
- `gateway.GetCardStatus`
- `gateway.GetFlowUsage`
- `gateway.GetRealNameStatus`
- `gateway.GetDeviceInfo`
- `gateway.GetSlotInfo`
- `gateway.GetDeviceFlowUsage`
**Problem**: read-only, no DB cache updates, no business encapsulation.
### 2.4 Data model status
| Model | Virtual number | Cached fields | Package carrier |
|-----|-------|---------|---------|
| IotCard | ❌ none (to be added) | CurrentMonthUsageMB, NetworkStatus, RealNameStatus, LastDataCheckAt | IotCardID |
| Device | ✅ device_no (to be renamed virtual_no) | none | DeviceID |
**Key findings**
- The `PackageUsage` model already supports both carriers: `IotCardID` (single card) and `DeviceID` (device level)
- `IotCard.IsStandalone` is maintained by a trigger and marks whether the card is bound to a device
- `DeviceStore.GetByIdentifier` already matches multiple fields: `WHERE device_no = ? OR imei = ? OR sn = ?` (becomes virtual_no after the overhaul)
---
## III. Design Direction (confirmed)
### 3.1 Unified asset entry (resolve)
**Endpoint**: `GET /api/admin/assets/resolve/:identifier`
**Lookup logic**:
```
1. Query the device table first (virtual_no / imei / sn)
2. On a miss, query the iot_card table (virtual_no / iccid / msisdn)
3. Apply data-permission filtering: agents see only their own and subordinate shops' assets; platform accounts see everything
4. Authorized → return the asset data (medium payload)
5. Unauthorized → HTTP 403
6. Not found → HTTP 404
```
**Response structure (confirmed)**:
```go
// AssetResolveResponse — asset resolution response
type AssetResolveResponse struct {
// basic info
AssetType string `json:"asset_type"` // "device" or "card"
AssetID uint `json:"asset_id"` // primary key of the corresponding table
VirtualNo string `json:"virtual_no"` // unified virtual-number field (devices and cards alike)
ICCID string `json:"iccid,omitempty"` // set only for card assets; lets the front end call the stop/start endpoints
// status
Status int `json:"status"` // asset status
RealNameStatus int `json:"real_name_status"` // real-name status
// package and usage (empty string / 0 when there is no package)
CurrentPackage string `json:"current_package"` // current package name
PackageTotalMB float64 `json:"package_total_mb"` // real total (the package's nominal RealDataMB)
PackageVirtualMB float64 `json:"package_virtual_mb"` // virtual total (the suspension threshold VirtualDataMB)
PackageUsedMB float64 `json:"package_used_mb"` // displayed used data (converted through the virtual ratio)
PackageRemainMB float64 `json:"package_remain_mb"` // displayed remaining data
// protection period (returned for devices and for cards bound to such a device)
DeviceProtectStatus string `json:"device_protect_status"` // "none" / "stop" / "start"
// binding info (card assets only, and only when the card is bound to a device)
BoundDeviceID *uint `json:"bound_device_id,omitempty"`
BoundDeviceNo string `json:"bound_device_no,omitempty"`
BoundDeviceName string `json:"bound_device_name,omitempty"`
// device-specific: bound card info
BoundCardCount int `json:"bound_card_count"`
Cards []DeviceCardInfo `json:"cards,omitempty"` // cards of every status (incl. non-real-name)
}
// DeviceCardInfo — a card bound to the device
type DeviceCardInfo struct {
IotCardID uint `json:"iot_card_id"`
ICCID string `json:"iccid"`
VirtualNo string `json:"virtual_no"`
RealNameStatus int `json:"real_name_status"`
NetworkStatus int `json:"network_status"`
CurrentMonthUsageMB float64 `json:"current_month_usage_mb"`
LastSyncAt *time.Time `json:"last_sync_at"` // last sync time with the Gateway
}
```
**Notes**
- If a card's bound device has been soft-deleted, the card is treated as standalone and the binding info is left empty
- A device's `cards` list includes every bound card (non-real-name and deactivated included)
### 3.2 Package endpoints
**Endpoint 1**: `GET /api/admin/assets/:asset_type/:id/packages`
- Returns all package records (history plus the currently active ones)
- `asset_type` decides whether to query PackageUsage.IotCardID or PackageUsage.DeviceID
- Each record includes: package name, real total, virtual total, displayed used, displayed remaining, validity, status
- **Order**: creation time descending (newest first)
- **Paging**: none; the full list is returned
- **Scope**: all statuses (incl. status=4, invalidated historical packages)
**Endpoint 2**: `GET /api/admin/assets/:asset_type/:id/current-package`
- Returns the currently active **master package** (status=1 and master_usage_id IS NULL) in detail
- When a master package and add-on packs are active at the same time, only the master is returned; add-ons are visible through endpoint 1's list
- Includes the full usage breakdown: real total, virtual total, displayed used, displayed remaining
### 3.3 Realtime-status endpoint
**Endpoint**: `GET /api/admin/assets/:asset_type/:id/realtime-status`
**Division of labor vs resolve**:
> **resolve**: used on initial load; includes the lookup plus fully aggregated data (package/usage/bindings) — heavy.
> **realtime-status**: lightweight status polling once the asset ID is known; **no package/usage computation**, focused on fast refresh of network/real-name/protection status.
**Notes**
- **Reads persisted data only (DB/Redis); never calls the gateway**
- Returns the state from the most recent poll/refresh sync
- "Realtime-ness" relies on the polling system keeping data fresh (real-name every 5 minutes, usage/packages every 10 minutes)
- For truly fresh data, call the refresh endpoint first, then query this one
- Device assets return: the protection status plus each bound card's state (network/real-name/usage/last sync)
- Card assets return: network status + real-name status + usage + last sync time
### 3.4 Manual-refresh endpoint
**Endpoint**: `POST /api/admin/assets/:asset_type/:id/refresh`
**Notes**
- Calls the gateway for fresh data, writes it back to the DB cache fields, and returns the refreshed state
- Card: calls the existing `SyncCardStatusFromGateway(iccid)`
- Device: bulk-refreshes every bound card (iterating `SyncCardStatusFromGateway`)
- **Devices need rate limiting**: the last refresh time is kept in Redis; within the cooldown (30 s suggested) a repeat trigger is rejected, so rapid front-end clicks cannot hammer the gateway
### 3.5 Device stop/start protection period
**Background**:
A device has no stop/start notion of its own; stopping a device = bulk-stopping all of its real-name cards. The protection period keeps every card's state consistent during the operation and prevents a single-card action from breaking the overall state.
**Endpoints**:
- `POST /api/admin/assets/device/:device_id/stop`
- `POST /api/admin/assets/device/:device_id/start`
**Protection rules**
| Rule | Description |
|------|------|
| Duration | **1 hour** (a hard-coded constant) |
| Storage | Redis keys `protect:device:{device_id}:stop` / `protect:device:{device_id}:start`, TTL = 1 hour |
| Non-real-name cards | **Excluded from stop/start**; always in the stopped state, skipped |
| Overlapping operations | During the protection period the device may not start another stop/start in either direction; HTTP 403 |
| Partial bulk-stop failure | If some gateway calls fail, **the Redis protection period is still set**; cards already stopped are not rolled back; failed cards are logged |
**stop protection (within 1 hour of a device stop)**
- Manually resuming a real-name card → **denied** (HTTP 403, the device is in stop protection)
- Manually stopping a real-name card → allowed (it is already stopped; no conflict)
- Polling finds a real-name card online → **force a gateway stop** to stay consistent
**start protection (within 1 hour of a device start)**
- Manually stopping a real-name card → **allowed** (the user may stop a single card)
- Manually resuming a real-name card → allowed (it is already resumed; no conflict)
- Polling finds a real-name card stopped → **force a gateway resume** to stay consistent
**Exposing the protection status**
- The resolve endpoint returns it in `device_protect_status`
- When a card's bound device is in a protection period, the card's resolve result also carries `device_protect_status`
### 3.6 接口去重(废弃清单)
**废弃接口**(直接删除,不保留向后兼容):
| 废弃接口 | 替代接口 |
|---------|---------|
| `POST /enterprises/:id/cards/:card_id/suspend` | `POST /api/admin/assets/card/:iccid/stop` |
| `POST /enterprises/:id/cards/:card_id/resume` | `POST /api/admin/assets/card/:iccid/start` |
| `POST /h5/devices/:device_id/cards/:card_id/suspend` | `POST /api/admin/assets/device/:device_id/stop` |
| `POST /h5/devices/:device_id/cards/:card_id/resume` | `POST /api/admin/assets/device/:device_id/start` |
| 旧 Admin 卡停复机接口(按 ICCID | `POST /api/admin/assets/card/:iccid/stop|start` |
| `GET /devices/:id` | `GET /api/admin/assets/device/:id` |
### 3.7 数据层变更
**变更一:设备表字段改名(全量重构)**
```sql
ALTER TABLE tb_device RENAME COLUMN device_no TO virtual_no;
ALTER TABLE tb_personal_customer_device RENAME COLUMN device_no TO virtual_no;
```
涉及改动范围Model 定义、DTO 响应、Store 查询、所有引用 `device_no` 的代码,以及 `tb_personal_customer_device` 表的 `device_no` 字段(一并改名为 `virtual_no`),确保系统中不再有 `device_no` 的身影。
**变更二:卡表新增 virtual_no 字段**
```sql
ALTER TABLE tb_iot_card ADD COLUMN virtual_no VARCHAR(50);
CREATE UNIQUE INDEX idx_iot_card_virtual_no
ON tb_iot_card (virtual_no) WHERE deleted_at IS NULL;
```
- 允许为空(老数据无虚拟号)
- 允许手动修改
- 全局唯一(导入时检测重复,重复则全批失败并告知具体冲突数据)
**变更三:套餐表新增 virtual_ratio 字段**
```sql
ALTER TABLE tb_package ADD COLUMN virtual_ratio DECIMAL(10,6) DEFAULT 1.0;
```
- 创建套餐时计算并存储:`virtual_ratio = real_data_mb / virtual_data_mb`
- 用于客户端展示的流量换算(见第六节)
- 未启用虚流量时(`enable_virtual_data=false`virtual_ratio = 1.0
---
## 四、完整接口清单
| # | 方法 | 路径 | 说明 |
|---|------|------|------|
| 1 | GET | `/api/admin/assets/resolve/:identifier` | 资产解析(通过任意标识符) |
| 1 | GET | `/api/admin/assets/resolve/:identifier` | 资产解析(通过任意标识符) |
| 2 | GET | `/api/admin/assets/:asset_type/:id/packages` | 套餐记录(历史+当前) |
| 3 | GET | `/api/admin/assets/:asset_type/:id/current-package` | 当前生效主套餐详情 |
| 4 | GET | `/api/admin/assets/:asset_type/:id/realtime-status` | 当前持久化状态查询(轻量) |
| 5 | POST | `/api/admin/assets/:asset_type/:id/refresh` | 手动刷新(调网关写回 DB |
| 6 | POST | `/api/admin/assets/device/:device_id/stop` | 设备停机(批量停所有已实名卡) |
| 7 | POST | `/api/admin/assets/device/:device_id/start` | 设备复机(批量开所有已实名卡) |
| 8 | POST | `/api/admin/assets/card/:iccid/stop` | 卡停机 |
| 9 | POST | `/api/admin/assets/card/:iccid/start` | 卡复机 |
> `:asset_type` 取值:`device` 或 `card`
---
## 五、流程图
### 5.1 资产查找resolve流程
```mermaid
flowchart TD
A["GET /api/admin/assets/resolve/:identifier"] --> B{"查询设备表\nvirtual_no / imei / sn"}
B -->|找到| C{"应用数据权限过滤\n代理:仅自己及下级店铺\n平台:所有资产"}
B -->|未找到| D{"查询卡表\nvirtual_no / iccid / msisdn"}
D -->|找到| C
D -->|未找到| E["返回 HTTP 404\n资产不存在"]
C -->|有权限| F["聚合资产数据\n基础信息 + 状态 + 套餐流量 + 保护期 + 绑定信息"]
C -->|无权限| G["返回 HTTP 403\n无权限查看该资产"]
F --> H["返回 AssetResolveResponse"]
```
### 5.2 设备停机/复机流程
```mermaid
flowchart TD
subgraph 设备停机
A1["POST /assets/device/:id/stop"] --> B1{"设备是否存在?"}
B1 -->|否| C1["HTTP 404"]
B1 -->|是| D1{"设备是否在保护期?"}
D1 -->|是| E1["HTTP 403\n设备处于保护期不允许操作"]
D1 -->|否| F1["获取所有已实名下属卡"]
F1 --> G1["批量调网关停机"]
G1 --> H1["更新各卡 NetworkStatus=停机\n部分失败时已成功的卡不回滚"]
H1 --> I1["Redis SET protect:device:id:stop\nTTL = 1 小时(部分失败时仍设置)"]
I1 --> J1["返回成功(附带失败卡日志)"]
end
subgraph 设备复机
A2["POST /assets/device/:id/start"] --> B2{"设备是否存在?"}
B2 -->|否| C2["HTTP 404"]
B2 -->|是| D2{"设备是否在保护期?"}
D2 -->|是| E2["HTTP 403\n设备处于保护期不允许操作"]
D2 -->|否| F2["获取所有已实名下属卡"]
F2 --> G2["批量调网关复机"]
G2 --> H2["更新各卡 NetworkStatus=开机"]
H2 --> I2["Redis SET protect:device:id:start\nTTL = 1 小时"]
I2 --> J2["返回成功"]
end
```
### 5.3 手动操作单卡 + 保护期检查
```mermaid
flowchart TD
subgraph 手动停机单卡
A1["POST /assets/card/:iccid/stop"] --> B1{"卡是否存在?"}
B1 -->|否| C1["HTTP 404"]
B1 -->|是| D1{"卡是否已实名?"}
D1 -->|未实名| E1["HTTP 403\n未实名卡不允许停复机"]
D1 -->|已实名| F1{"卡是否绑定设备?"}
F1 -->|未绑定| G1["正常执行停机"]
F1 -->|已绑定| H1{"设备有 start 保护期?"}
H1 -->|是| I1["允许停机\n与 start 保护期方向一致"]
H1 -->|否| G1
end
subgraph 手动复机单卡
A2["POST /assets/card/:iccid/start"] --> B2{"卡是否存在?"}
B2 -->|否| C2["HTTP 404"]
B2 -->|是| D2{"卡是否已实名?"}
D2 -->|未实名| E2["HTTP 403\n未实名卡不允许停复机"]
D2 -->|已实名| F2{"卡是否绑定设备?"}
F2 -->|未绑定| G2["正常执行复机"]
F2 -->|已绑定| H2{"设备有 stop 保护期?"}
H2 -->|是| I2["HTTP 403\n设备处于停机保护期\n不允许手动复机"]
H2 -->|否| G2
end
```
### 5.4 轮询系统与保护期交互
```mermaid
flowchart TD
A["轮询任务触发:检查卡状态"] --> B{"卡是否已实名?"}
B -->|未实名| C["跳过,未实名卡不参与停复机逻辑"]
B -->|已实名| D{"卡是否绑定设备?"}
D -->|未绑定| E["按卡自身逻辑正常处理"]
D -->|已绑定| F{"设备是否有保护期?"}
F -->|无保护期| E
F -->|"stop 保护期"| G{"卡当前网络状态?"}
G -->|开机| H["强制调网关停机\n保持与设备保护期一致"]
G -->|停机| I["已一致,跳过"]
F -->|"start 保护期"| J{"卡当前网络状态?"}
J -->|停机| K["强制调网关复机\n保持与设备保护期一致"]
J -->|开机| L["已一致,跳过"]
```
### 5.5 手动刷新refresh流程
```mermaid
flowchart TD
A["POST /api/admin/assets/:type/:id/refresh"] --> B{"资产类型"}
B -->|card| C["调用 SyncCardStatusFromGateway(iccid)"]
C --> D["更新 iot_card 表\nNetworkStatus / RealNameStatus\nCurrentMonthUsageMB / LastSyncTime"]
D --> H["返回刷新后的最新状态"]
B -->|device| E["检查 Redis 限频(冷却期 30 秒)"]
E -->|冷却中| Z["HTTP 429 请勿频繁刷新"]
E -->|可刷新| F["查询所有绑定卡列表"]
F --> G["遍历每张卡\n调用 SyncCardStatusFromGateway"]
G --> H
```
### 5.6 实时状态查询realtime-status流程
```mermaid
flowchart TD
A["GET /api/admin/assets/:type/:id/realtime-status"] --> B{"资产类型"}
B -->|card| C["从 DB/Redis 读取持久化的卡状态"]
C --> D["返回卡状态\n网络状态 / 实名状态 / 本月已用流量\n最后同步时间"]
B -->|device| E["从 DB/Redis 读取持久化的设备数据"]
E --> F["读取所有绑定卡的持久化状态"]
F --> G["返回设备状态\n保护期状态 + 各绑定卡当前状态 + 最后同步时间"]
```
> **注意**:此接口**不调用网关**,展示的是最近一次轮询/刷新写入的持久化数据。
> 如需获取最新数据,请先调用 `POST /refresh` 接口,再查询此接口。
### 5.7 虚流量计算规则
```mermaid
flowchart TD
subgraph 创建["套餐创建时 - 存储比例"]
A1["RealDataMB = 10G 真总流量"] --> C1
A2["VirtualDataMB = 9G 虚总流量/停机阈值"] --> C1
C1["virtual_ratio = RealDataMB / VirtualDataMB\n= 10 / 9 ≈ 1.111\n存储到 tb_package.virtual_ratio"]
end
subgraph 停机["系统内部 - 停机判断"]
D1["真已使用\nCurrentMonthUsageMB"] --> E1{"真已使用 >= VirtualDataMB?"}
D2["VirtualDataMB = 9G"] --> E1
E1 -->|是| F1["触发停机"]
E1 -->|否| F2["正常运行"]
end
subgraph 展示["客户端展示 - 流量换算"]
G1["真已使用 = 9G"] --> H1
H1["展示已使用 = 真已使用 x virtual_ratio\n= 9G x 1.111 = 10G"]
G2["展示总量 = RealDataMB = 10G"]
H1 --> I1["客户看到 已用10G/共10G = 100% 已停机"]
end
```
---
## 六、虚流量计算规则详解
### 6.1 概念说明
| 字段 | 含义 | 来源 |
|------|------|------|
| 真总流量RealDataMB | 套餐标称总流量,用户购买的名义流量 | `Package.real_data_mb` |
| 虚总流量VirtualDataMB | 停机阈值,始终小于真总流量 | `Package.virtual_data_mb` |
| virtual_ratio | 换算比例 = RealDataMB / VirtualDataMB | `Package.virtual_ratio`(套餐创建时存储) |
| 真已使用 | 网关报告的实际用量 | `IotCard.current_month_usage_mb` |
| 展示已使用 | 客户看到的用量 = 真已使用 × virtual_ratio | 计算得出 |
| 展示剩余 | 客户看到的剩余 = 真总流量 展示已使用 | 计算得出 |
### 6.2 设计意图
虚总流量VirtualDataMB是系统内部的停机保护阈值。由于网关数据同步存在延迟若以真总流量作为停机阈值客户可能在用完 10G 后继续用到 10.5G 才被停机,产生超用。因此系统设置一个比真总流量略小的虚总流量(如 9G作为实际停机阈值保证不超用。
客户端展示时,系统将真实用量按比例换算回真总流量的尺度,使客户的体感与购买的套餐一致:
- 当真用量达到 9GVirtualDataMB卡被停机
- 此时展示用量 = 9G × (10G/9G) = 10G客户看到"已用 10G / 共 10G = 100%"
### 6.3 计算示例
| 场景 | 真总 | 虚总(停机阈值) | 真已使用 | 展示已使用 | 展示剩余 | 是否停机 |
|------|------|----------------|---------|-----------|---------|---------|
| 刚开始 | 10G | 9G | 0G | 0G | 10G | 否 |
| 用了一半 | 10G | 9G | 4.5G | 5G | 5G | 否 |
| 接近阈值 | 10G | 9G | 8G | ≈8.89G | ≈1.11G | 否 |
| 触发停机 | 10G | 9G | 9G | 10G | 0G | **是** |
### 6.4 未启用虚流量时
`Package.enable_virtual_data = false` 时:
- `virtual_ratio = 1.0`
- 停机阈值 = 真总流量RealDataMB
- 展示已使用 = 真已使用(无换算)
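上述换算与停机规则可以用一段可运行的 Go 草图示意。这只是按正文公式写的假设性示例(函数名、单位换算均非真实实现流量以 MB 计1G 按 1024MB

```go
package main

import "fmt"

// displayUsedMB 按 virtual_ratio 将真实用量换算为展示用量。
// realUsedMB 为网关报告的真实已用流量ratio = RealDataMB / VirtualDataMB。
func displayUsedMB(realUsedMB, ratio float64) float64 {
	return realUsedMB * ratio
}

// shouldSuspend 停机判断:真实已用 >= VirtualDataMB停机阈值
func shouldSuspend(realUsedMB, virtualDataMB float64) bool {
	return realUsedMB >= virtualDataMB
}

func main() {
	const realTotal, virtualTotal = 10240.0, 9216.0 // 10G / 9G换算为 MB
	ratio := realTotal / virtualTotal               // ≈ 1.111

	for _, used := range []float64{0, 4608, 9216} { // 0G / 4.5G / 9G
		fmt.Printf("真已用 %.0fMB → 展示已用 %.0fMB停机=%v\n",
			used, displayUsedMB(used, ratio), shouldSuspend(used, virtualTotal))
	}
}
```

与 6.3 的表格对应:真已用 4.5G 展示为 5G真已用 9G 展示为 10G 并触发停机。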
---
## 七、用户的思考与担忧(已全部解决)
### 7.1 关于接口粒度
**已确认**resolve 返回中等版本,多接口组合,前端按需调用。
### 7.2 关于网关封装程度
**已确认**
- realtime-status只查持久化数据不调用网关
- refresh调用网关并写回 DB更新缓存字段
### 7.3 关于停复机去重
**已确认**:所有停复机统一迁移到 assets 路径,旧接口直接删除。
### 7.4 关于虚拟号
**已确认**
- 卡的虚拟号给客服和客户用
- 人工填写/批量导入,无格式规范,允许修改
- 设备 device_no 全量重命名为 virtual_no
- 导入重复时全批失败,告知具体冲突数据
### 7.5 关于套餐查询
**已确认**:套餐查询分两个接口,历史套餐接口包含当前套餐,同时单独提供当前套餐接口。
### 7.6 关于停复机保护期
**已确认**:保护期 1 小时Redis 存储未实名卡不参与stop 保护期内禁止手动复机start 保护期内允许手动停机。
---
## 八、设计决策确认清单
| 序号 | 问题 | 确认结果 |
|-----|------|---------|
| 1 | resolve 返回数据范围 | 中等版本,含状态/套餐/流量/绑定信息/保护期 |
| 2 | realtime-status 和 refresh 区别 | realtime-status=查持久化数据轻量refresh=调网关并写回 DB |
| 3 | 实时状态封装 | 持久化数据展示,不调网关 |
| 4 | 手动刷新复用 SyncCardStatusFromGateway | 是,资产为设备时批量刷新所有绑定卡 |
| 5 | 停复机统一 | 统一迁移到 /assets 路径,旧接口直接删除 |
| 6 | 卡虚拟号生成方式 | 人工填写/批量导入,无格式规范 |
| 7 | 废弃接口处理 | 直接删除 |
| 8 | 套餐查询接口 | 两个接口:历史套餐列表 + 当前套餐详情 |
| 9 | 权限不足的返回 | HTTP 403明确告知无权限 |
| 10 | 保护期时长 | 1 小时,硬编码常量 |
| 11 | 虚流量计算 | virtual_ratio = RealDataMB / VirtualDataMB套餐创建时存储 |
| 12 | device_no 改名 | 全量改为 virtual_no数据库与代码全部更新 |
| 13 | 设备下卡列表 | 包含所有状态的卡(含未实名、已停用) |
| 14 | 卡绑定设备被软删除时 | 视为独立卡,不填充绑定信息 |
| 15 | 未实名卡参与停复机 | 不参与,永远是停机状态,保护期跳过 |
| 16 | 数据权限规则 | 代理:仅自己及下级店铺,平台账号:所有资产 |
| 17 | 查找失败 404 还是 403 | 资产不存在=404有资产但无权限=403 |
| 18 | 设备卡列表排序 | 无要求 |
| 19 | resolve 中 current_package 无套餐时 | 返回空字符串/0 |
| 20 | 虚拟号唯一索引 | 需要,允许为空,允许手动修改 |
| 21 | 企业账号能否用 resolve | 暂不支持;企业账号未来开新接口 |
| 22 | 接口 #2(按主键查详情)的设计 | 已确认删除,与 resolve 功能重叠,无独立价值 |
| 23 | resolve 响应是否含 ICCID | 是card 类型时返回 ICCID供停复机接口使用 |
| 24 | 设备批量停机部分失败策略 | 仍设置 Redis 保护期;已成功停机的卡不回滚;失败的卡记录日志 |
| 25 | 流量数据汇总逻辑 | 统一用专门汇总逻辑,从 PackageUsage 读取;设备级套餐汇总所有绑定卡 |
| 26 | 套餐历史列表排序和范围 | 按创建时间倒序,不分页,包含所有状态(含 status=4 已失效) |
| 27 | current-package 多套餐时返回哪个 | 返回主套餐master_usage_id IS NULL |
| 28 | 轮询系统保护期检查实现方式 | 新增独立的第四种轮询任务类型,不修改现有三种任务 |
| 29 | 卡虚拟号导入规则 | 只允许为空白虚拟号的卡填入;与现存数据重复则全批失败 |
| 30 | 设备批量刷新频率限制 | 需要Redis 限频,同一设备冷却期(建议 30 秒)内不允许重复触发 |
| 31 | PersonalCustomerDevice.device_no 改名 | 是,统一改为 virtual_no与 tb_device 保持语义一致 |
| 32 | DeviceCardInfo 需要 last_sync_time | 是,添加 last_sync_at 字段 |
---
## 九、轮询系统补充说明
### 9.1 整体架构
轮询系统是君鸿卡管系统维护卡数据实时性的核心机制:
```
┌─────────────────────────────────────────────────────────────────────┐
│ Worker 服务(后台) │
├─────────────────────────────────────────────────────────────────────┤
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Scheduler │────▶│ Asynq 队列 │────▶│ Handler │ │
│ │ (调度器) │ │ (任务队列) │ │ (处理器) │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
│ │ │ │
│ │ 定时循环 (每秒) │ │
│ ▼ ▼ │
│ ┌──────────────────────────────────────────────────────────────┐ │
│ │ Redis Sorted Set 轮询队列 │ │
│ │ - polling:queue:realname (实名检查) │ │
│ │ - polling:queue:carddata (流量检查) │ │
│ │ - polling:queue:package (套餐检查) │ │
│ │ - polling:queue:protect (保护期一致性检查) │ │
│ └──────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────┘
│ 调用网关 API
┌──────────────────────┐
│ Gateway 网关 │
│ (第三方运营商) │
└──────────────────────┘
```
### 9.2 四种轮询任务
| 任务类型 | 触发频率 | 作用 | 更新字段 |
|---------|---------|------|---------|
| **实名检查** | 默认 5 分钟 | 调用网关查实名状态 | real_name_status |
| **流量检查** | 默认 10 分钟 | 调用网关查流量,更新套餐 | current_month_usage_mb |
| **套餐检查** | 默认 10 分钟 | 检查是否超额,触发停机 | network_status |
| **保护期检查** | 同流量检查频率 | 检查绑定设备保护期,强制同步卡的网络状态 | network_status |
> **第四种任务设计说明**:保护期一致性检查封装为独立任务类型,不嵌入现有三种任务内部。只检查"已绑定设备且设备当前有保护期"的卡,范围小,可与流量检查同频触发。
### 9.3 关键特点
1. **启动时渐进式初始化**:系统启动时把卡分批加载到 Redis 队列(每批 10 万张)
2. **按时间排序**Redis Sorted Set 的 score 是下次检查的时间戳,到期自动被调度器取出
3. **并发控制**:通过 Redis 信号量限制并发数(默认 50防止打爆网关
4. **失败重试**:任务失败后重新入队
5. **缓存优化**:优先从 Redis 读取卡信息,避免频繁查 DB
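第 2 条"按时间排序"的调度思路,可以用一段纯内存 Go 草图示意以排序切片模拟 Redis Sorted Setscore 为下次检查的 Unix 时间戳,到期取出后按检查间隔重新入队。这只是思路演示,真实实现依赖 Redis 的 ZADD / ZRANGEBYSCORE

```go
package main

import (
	"fmt"
	"sort"
)

// queueItem 模拟 Sorted Set 成员score = 下次检查的 Unix 时间戳。
type queueItem struct {
	iccid string
	score int64
}

type pollingQueue struct{ items []queueItem }

// add 入队并按 score 升序排列(对应 ZADD
func (q *pollingQueue) add(iccid string, nextCheck int64) {
	q.items = append(q.items, queueItem{iccid, nextCheck})
	sort.Slice(q.items, func(i, j int) bool { return q.items[i].score < q.items[j].score })
}

// popDue 取出所有已到期score <= now的卡对应 ZRANGEBYSCORE + ZREM
func (q *pollingQueue) popDue(now int64) []string {
	var due []string
	for len(q.items) > 0 && q.items[0].score <= now {
		due = append(due, q.items[0].iccid)
		q.items = q.items[1:]
	}
	return due
}

func main() {
	q := &pollingQueue{}
	q.add("iccid-A", 100) // 100 秒时到期
	q.add("iccid-B", 700) // 700 秒时到期

	fmt.Println(q.popDue(300)) // 只有 iccid-A 到期
	q.add("iccid-A", 300+600)  // 检查完成后按间隔重新入队(如流量检查 10 分钟)
	fmt.Println(q.popDue(900)) // iccid-B 与重新入队的 iccid-A 先后到期
}
```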
### 9.4 与手动刷新接口的关系
- **轮询是后台自动跑**:所有卡都会按配置的时间间隔被检查,保证日常数据更新
- **手动刷新是前台客服主动用**:只更新这一张卡(或设备的所有绑定卡),满足客户急用场景
- **两者是互补关系**:轮询保证数据不会太旧,手动刷新满足实时性要求高的场景
### 9.5 与设备保护期的交互
轮询系统在处理设备的绑定卡时,需要检查设备是否有保护期(见 5.4 流程图):
- 发现设备有 stop 保护期,且卡为开机状态 → 强制调网关停机
- 发现设备有 start 保护期,且卡为停机状态 → 强制调网关复机
- 未实名的卡跳过,不参与保护期逻辑
关键代码位置:
- `internal/task/polling_handler.go` - 轮询任务处理器(需新增独立的第四种任务:保护期一致性检查处理函数)
- `pkg/constants/redis.go` - 需新增 `RedisDeviceProtectKey()` 函数
### 9.6 涉及的关键代码
- `internal/polling/scheduler.go` - 轮询调度器(把卡加入队列)
- `internal/task/polling_handler.go` - 任务处理器(实际调网关更新数据)
- `internal/service/iot_card/service.go:799` - SyncCardStatusFromGateway 方法
---
## 十、下一步行动
### 10.1 当前阶段
**设计讨论** - 已完成,所有关键决策已确认,可进入 openspec 提案阶段
### 10.2 进入 openspec 提案后的任务拆分建议
**数据层(优先)**
1. 数据库迁移:设备表 `device_no``virtual_no`(同步更新 `tb_personal_customer_device.device_no``virtual_no`
2. 数据库迁移:卡表新增 `virtual_no` 字段(唯一索引,允许空)
3. 数据库迁移:套餐表新增 `virtual_ratio` 字段
4. 更新 Device Model 和所有引用 `device_no` 的代码(全量替换,含 PersonalCustomerDevice
5. 更新 Package Service创建/更新套餐时自动计算并存储 `virtual_ratio`
**接口层(依次实现)**
6. 实现资产入口 `GET /assets/resolve/:identifier`
7. 实现当前状态查询 `GET /assets/:type/:id/realtime-status`
8. 实现手动刷新 `POST /assets/:type/:id/refresh`(含设备批量刷新 + Redis 限频)
9. 实现套餐记录查询 `GET /assets/:type/:id/packages`
10. 实现当前套餐查询 `GET /assets/:type/:id/current-package`
11. 实现设备停机 `POST /assets/device/:id/stop`(含保护期逻辑 + 部分失败策略)
12. 实现设备复机 `POST /assets/device/:id/start`(含保护期逻辑)
13. 实现卡停机 `POST /assets/card/:iccid/stop`(含保护期检查)
14. 实现卡复机 `POST /assets/card/:iccid/start`(含保护期检查)
**轮询系统**
15. 新增第四种轮询任务:保护期一致性检查(独立任务类型,不修改现有三种任务内部逻辑)
**清理**
16. 删除废弃的停复机接口(见 3.6 废弃清单)
17. 丰富现有卡/设备 DTOIotCardDetailResponse、DeviceResponse
18. 更新 API 文档生成器docs.go 和 gendocs/main.go
### 10.3 涉及的关键代码文件
**Handler 层**
- `internal/handler/admin/iot_card.go`
- `internal/handler/admin/device.go`
- `internal/handler/h5/enterprise_device.go`(待删除的废弃接口)
**Service 层**
- `internal/service/iot_card/service.go`(含 SyncCardStatusFromGateway:799
- `internal/service/iot_card/stop_resume_service.go`(停复机逻辑,需扩展)
- `internal/service/device/service.go`(含 GetByIdentifier:177
- `internal/service/package/customer_view_service.go`(套餐聚合,需复用)
- `internal/service/package/service.go`(创建套餐时存储 virtual_ratio
**Store 层**
- `internal/store/postgres/device_store.go`GetByIdentifier:62改用 virtual_no
- `internal/store/postgres/iot_card_store.go`
- `internal/store/postgres/personal_customer_device_store.go`device_no → virtual_no
**Model 层**
- `internal/model/iot_card.go`(新增 virtual_no 字段)
- `internal/model/device.go`device_no → virtual_no
- `internal/model/package.go`(新增 virtual_ratio 字段)
- `internal/model/personal_customer_device.go`device_no → virtual_no
**DTO 层**
- `internal/model/dto/iot_card_dto.go`(需重构)
- `internal/model/dto/device_dto.go`(需丰富)
**常量层**
- `pkg/constants/redis.go`(新增 `RedisDeviceProtectKey()` 函数)
**轮询层**
- `internal/task/polling_handler.go`(新增保护期一致性检查独立任务处理函数)
---
## 十一、附录:关键代码片段
### 11.1 现有空壳详情 DTO
```go
// internal/model/dto/iot_card_dto.go:134-136
type IotCardDetailResponse struct {
StandaloneIotCardResponse // 只是列表响应的空包装
}
```
### 11.2 设备详情 DTO
```go
// internal/model/dto/device_dto.go:20
type DeviceResponse struct {
ID uint `json:"id"`
DeviceNo string `json:"device_no"` // 改名为 virtual_no
// ...
BoundCardCount int `json:"bound_card_count"` // 只有数字,需丰富
}
```
### 11.3 设备多字段查找 Store
```go
// internal/store/postgres/device_store.go:62
// 改造后device_no → virtual_no
func (s *Store) GetByIdentifier(db *gorm.DB, identifier string) (*model.Device, error) {
var device model.Device
err := db.Where("virtual_no = ? OR imei = ? OR sn = ?", identifier, identifier, identifier).
First(&device).Error
return &device, err
}
```
### 11.4 手动刷新方法(待暴露为接口)
```go
// internal/service/iot_card/service.go:799
func (s *Service) SyncCardStatusFromGateway(ctx context.Context, iccid string) error {
// 已有实现,需作为接口暴露,并支持设备批量刷新
}
```
### 11.5 新增 Redis Key 常量
```go
// pkg/constants/redis.go
// RedisDeviceProtectKey 设备停复机保护期 Key
// action: "stop" 或 "start"TTL = 1 小时
func RedisDeviceProtectKey(deviceID uint, action string) string {
return fmt.Sprintf("protect:device:%d:%s", deviceID, action)
}
// RedisDeviceRefreshCooldownKey 设备手动刷新冷却期 KeyTTL = 冷却时长(建议 30 秒)
func RedisDeviceRefreshCooldownKey(deviceID uint) string {
return fmt.Sprintf("refresh:cooldown:device:%d", deviceID)
}
```
### 11.6 virtual_ratio 计算位置
```go
// internal/service/package/service.go
// 创建/更新套餐时计算并存储 virtual_ratio
if pkg.EnableVirtualData && pkg.VirtualDataMB > 0 {
pkg.VirtualRatio = float64(pkg.RealDataMB) / float64(pkg.VirtualDataMB)
} else {
pkg.VirtualRatio = 1.0
}
```
---
> **文档结束**
>
> 所有设计决策已确认,可进入 openspec 提案阶段。

# 代理钱包订单创建功能总结
## 概述
fix-agent-wallet-order-creation 提案修复了代理在后台使用钱包支付创建订单的问题,实现了代理钱包一步购买(扣款 + 激活)、代理代购、订单角色追踪等核心功能。
## 背景问题
### 问题描述
代理在后台使用钱包支付wallet创建订单时系统只创建待支付订单`payment_status = 1`),不扣款也不激活套餐,导致订单无法完成。后台没有支付接口,代理无法对待支付订单进行支付。
### 业务场景
- **代理自购**:代理为自己的卡/设备购买套餐,从自己钱包扣自己的成本价
- **代理代购**:代理为下级代理的卡/设备购买套餐,从自己钱包扣自己的成本价,但订单金额显示下级成本价
- **平台代购**(现有逻辑):平台使用 offline 支付为代理创建订单,不扣款,立即激活,产生佣金
## 核心功能
### 1. 订单角色追踪
**新增字段**`tb_order` 表):
- `operator_id` (INT, 可空):操作者 ID谁下的单
- `operator_type` (VARCHAR, 可空):操作者类型(`platform` / `agent`
- `actual_paid_amount` (BIGINT, 可空):实际支付金额(分)
- `purchase_role` (VARCHAR):订单角色枚举
**订单角色枚举**`internal/model/order.go`
```go
const (
PurchaseRoleSelfPurchase = "self_purchase" // 自己购买
PurchaseRolePurchasedByParent = "purchased_by_parent" // 上级代理购买
PurchaseRolePurchasedByPlatform = "purchased_by_platform" // 平台代购
PurchaseRolePurchaseForSubordinate = "purchase_for_subordinate" // 给下级购买
)
```
**索引**
- `idx_orders_operator_id` (operator_id):支持"我作为操作者的订单"查询
- `idx_orders_purchase_role` (purchase_role):支持按角色筛选
---
### 2. 后台钱包一步支付
**行为变更**
- **原逻辑**:后台 wallet 订单 → 创建待支付订单(`payment_status = 1`)→ 无法支付
- **新逻辑**:后台 wallet 订单 → 立即扣款 + 激活套餐 → 订单已支付(`payment_status = 2`
**区别于 H5 端**
- H5 端 wallet 订单仍使用两步流程:创建待支付订单 → 调用 WalletPay 接口支付
- 后台 wallet 订单一步完成,无需后续支付接口
**权限调整**
- 允许代理、平台、超管使用 wallet 支付方式
- offline 支付方式仍限制为平台和超管
---
### 3. 价格计算逻辑
**区分"订单金额"和"实际支付"**
| 场景 | 订单金额total_amount | 实际支付actual_paid_amount | 说明 |
|------|------------------------|------------------------------|------|
| 代理自购 | 操作者成本价 | 操作者成本价 | 两者相同 |
| 代理代购 | 买家成本价 | 操作者成本价 | 操作者实际扣款少于订单金额(赚取差价) |
| 平台代购 | 买家成本价 | NULL | 平台不扣款 |
**示例**
```
一级代理 A 成本价80 元
二级代理 B 成本价100 元
A 为 B 的卡购买套餐:
- total_amount = 10000100 元B 看到的订单金额)
- actual_paid_amount = 800080 元A 实际扣款)
- A 赚取差价20 元
```
**成本价查询**
通过 `ShopPackageAllocation` 表查询店铺对套餐的成本价。
---
### 4. 钱包流水记录扩展
**新增字段**`tb_agent_wallet_transaction` 表):
- `transaction_subtype` (VARCHAR):交易子类型(细分 order_payment 场景)
- `related_shop_id` (INT, 可空):关联店铺 ID代购时记录下级店铺
**交易子类型枚举**`pkg/constants/wallet.go`
```go
const (
WalletTransactionSubtypeSelfPurchase = "self_purchase"
WalletTransactionSubtypePurchaseForSubordinate = "purchase_for_subordinate"
)
```
**流水示例**
- **自购**`transaction_subtype = "self_purchase"``remark = "购买套餐"`
- **代购**`transaction_subtype = "purchase_for_subordinate"``related_shop_id = 下级店铺 ID``remark = "为下级代理【XX】购买套餐"`
---
### 5. 订单查询增强
**OR 查询逻辑**`OrderStore.List()`
```sql
WHERE (buyer_type = 'agent' AND buyer_id = ?) OR operator_id = ?
```
代理可以看到两类订单:
1. 作为买家的订单(`buyer_id = 自己`):别人为自己代购、自己购买
2. 作为操作者的订单(`operator_id = 自己`):自己为下级代购
**新增查询参数**
- `purchase_role`(可选):筛选订单角色类型self_purchase / purchased_by_parent / purchased_by_platform / purchase_for_subordinate
---
### 6. 佣金逻辑调整
**规则**
- **代理代购**:操作者已赚取成本价差(自己成本价 vs 下级成本价),不产生佣金
- **平台代购**:平台不扣款,按买家成本价计算差价佣金,激励上级代理
**实现**
```go
// 只有平台代购operator_id == nil才入队佣金计算
if order.OperatorID == nil {
s.enqueueCommissionCalculation(ctx, order.ID)
}
```
---
### 7. 幂等性和并发控制
**乐观锁**(钱包扣款):
```go
result := tx.Model(&model.AgentWallet{}).
Where("id = ? AND balance >= ? AND version = ?", walletID, amount, version).
Updates(map[string]any{
"balance": gorm.Expr("balance - ?", amount),
"version": gorm.Expr("version + 1"),
})
```
**幂等性检查**(订单创建):
- 使用 Redis 业务键:`order:idempotency:{buyer_type}:{buyer_id}:{order_type}:{carrier_type}:{carrier_id}:{sorted_package_ids}`
- TTL3 分钟
- 分布式锁防止并发:`order:create:lock:{carrier_type}:{carrier_id}`
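幂等键中 `{sorted_package_ids}` 的构造可按如下 Go 草图示意package_ids 先排序,保证同一组套餐以任意顺序传入都得到相同的键。函数名与 ID 之间用逗号连接均为本文示例的假设,键格式本身取自正文:

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// buildIdempotencyKey 构造订单幂等键。package_ids 先排序后拼接,
// 使相同的套餐组合不论传入顺序都映射到同一个 Redis 键。
func buildIdempotencyKey(buyerType string, buyerID uint, orderType int,
	carrierType string, carrierID uint, packageIDs []uint) string {
	ids := append([]uint(nil), packageIDs...) // 复制,避免修改调用方切片
	sort.Slice(ids, func(i, j int) bool { return ids[i] < ids[j] })
	parts := make([]string, len(ids))
	for i, id := range ids {
		parts[i] = strconv.FormatUint(uint64(id), 10)
	}
	return fmt.Sprintf("order:idempotency:%s:%d:%d:%s:%d:%s",
		buyerType, buyerID, orderType, carrierType, carrierID, strings.Join(parts, ","))
}

func main() {
	// 无论传 [301,201] 还是 [201,301],键都相同
	fmt.Println(buildIdempotencyKey("agent", 10, 1, "card", 101, []uint{301, 201}))
}
```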
---
## API 变更
### 后台订单创建 API❗ Breaking Change
**端点**`POST /api/admin/orders`
**请求参数变更**
| 字段 | 变更前 | 变更后 | 说明 |
|------|--------|--------|------|
| `payment_method` | 可选,任意值 | **必填**,仅允许 `wallet``offline` | 不传或传其他值均返回 1001 错误 |
**行为变更**
- `wallet` 支付:订单直接完成(`payment_status = 2`),无需后续支付接口
- `offline` 支付:逻辑保持不变
- 传入 `wechat`/`alipay` → 返回 `{"code": 1001, "msg": "请求参数解析失败"}`
**响应新增字段**
```json
{
"operator_id": 123,
"operator_type": "agent",
"operator_name": "一级代理 A",
"actual_paid_amount": 8000,
"purchase_role": "purchase_for_subordinate",
"is_purchased_by_parent": false,
"purchase_remark": "为下级代理【二级代理 B】购买"
}
```
### H5 端订单创建 API无变更
**端点**`POST /api/h5/orders`
行为完全不变,仍支持 `wallet`/`wechat`/`alipay`,仍创建待支付订单。
### 订单列表 API
**端点**`GET /api/admin/orders`
**新增查询参数**
- `purchase_role` (可选):订单角色筛选
- `self_purchase`:自己购买
- `purchased_by_parent`:上级代理购买
- `purchased_by_platform`:平台代购
- `purchase_for_subordinate`:给下级购买
**查询逻辑变更**
- 代理可以看到 `buyer_id = 自己``operator_id = 自己` 的所有订单
---
## 数据库变更
### 订单表tb_order
**新增字段**
```sql
ALTER TABLE tb_order ADD COLUMN operator_id INT;
ALTER TABLE tb_order ADD COLUMN operator_type VARCHAR(20);
ALTER TABLE tb_order ADD COLUMN actual_paid_amount BIGINT;
ALTER TABLE tb_order ADD COLUMN purchase_role VARCHAR(50);
COMMENT ON COLUMN tb_order.operator_id IS '操作者ID谁下的单';
COMMENT ON COLUMN tb_order.operator_type IS '操作者类型platform/agent';
COMMENT ON COLUMN tb_order.actual_paid_amount IS '实际支付金额(分)';
COMMENT ON COLUMN tb_order.purchase_role IS '订单角色self_purchase/purchased_by_parent/purchased_by_platform/purchase_for_subordinate';
```
**新增索引**
```sql
CREATE INDEX CONCURRENTLY idx_orders_operator_id ON tb_order(operator_id);
CREATE INDEX CONCURRENTLY idx_orders_purchase_role ON tb_order(purchase_role);
```
---
### 钱包流水表tb_agent_wallet_transaction
**新增字段**(如果不存在):
```sql
ALTER TABLE tb_agent_wallet_transaction ADD COLUMN transaction_subtype VARCHAR(50);
ALTER TABLE tb_agent_wallet_transaction ADD COLUMN related_shop_id INT;
COMMENT ON COLUMN tb_agent_wallet_transaction.transaction_subtype IS '交易子类型(细分 order_payment 场景)';
COMMENT ON COLUMN tb_agent_wallet_transaction.related_shop_id IS '关联店铺ID代购时记录下级店铺';
```
---
## 代码结构
### Service 层新增方法
**`internal/service/order/service.go`**
1. **`getCostPrice(ctx, shopID, packageID) (int64, error)`**
- 查询店铺对套餐的成本价(通过 ShopPackageAllocation
2. **`createWalletTransaction(ctx, tx, walletID, orderID, amount, purchaseRole, relatedShopID) error`**
- 创建钱包流水,根据 purchaseRole 填充 subtype 和 remark
3. **`createOrderWithWalletPayment(ctx, order, items, operatorShopID, buyerShopID) (*dto.OrderResponse, error)`**
- 钱包支付订单创建方法,事务内完成:订单创建 + 扣款 + 流水 + 激活套餐
**`Create()` 方法重构**
```go
// 场景判断
if req.PaymentMethod == "offline":
// 平台代购场景(保持现有逻辑)
return s.createOrderWithActivation(...)
else if req.PaymentMethod == "wallet":
// 获取资源所属店铺 ID
if 资源属于操作者:
// 代理自购场景
buyer = operator
purchase_role = "self_purchase"
total_amount = actual_paid_amount = 操作者成本价
else:
// 代理代购场景
buyer = 资源所属者
operator = 操作者
purchase_role = "purchase_for_subordinate"
total_amount = 买家成本价
actual_paid_amount = 操作者成本价
return s.createOrderWithWalletPayment(...)
```
---
### Store 层变更
**`internal/store/postgres/order_store.go`**
**`List()` 方法**
```go
// 代理用户:查询作为买家或操作者的订单
if shopID, ok := filters["shop_id"].(uint); ok {
query = query.Where(
"(buyer_type = ? AND buyer_id = ?) OR operator_id = ?",
model.BuyerTypeAgent, shopID, shopID,
)
}
// 支持 purchase_role 精确匹配筛选
if purchaseRole, ok := filters["purchase_role"].(string); ok {
query = query.Where("purchase_role = ?", purchaseRole)
}
```
---
### Handler 层变更
**`internal/handler/admin/order.go`**
**`Create()` 方法**
- 修改 wallet 支付方式的权限检查,允许代理、平台、超管使用
- offline 支付方式仍限制为平台和超管
**`List()` 方法**
- 从查询参数解析 `purchase_role`
- 传递给 Service 层的 `List()` 方法
---
## 使用指南
### 代理自购场景
**请求**
```http
POST /api/admin/orders
Authorization: Bearer {agent_token}
Content-Type: application/json
{
"order_type": 1,
"iot_card_id": 101,
"package_ids": [201],
"payment_method": "wallet"
}
```
**响应**
```json
{
"code": 0,
"data": {
"id": 1001,
"order_no": "ORD202602281234567890",
"payment_status": 2,
"operator_id": 10,
"buyer_id": 10,
"operator_type": "agent",
"purchase_role": "self_purchase",
"total_amount": 8000,
"actual_paid_amount": 8000
},
"msg": "订单创建成功"
}
```
---
### 代理代购场景
**请求**
```http
POST /api/admin/orders
Authorization: Bearer {parent_agent_token}
Content-Type: application/json
{
"order_type": 1,
"iot_card_id": 201,
"package_ids": [301],
"payment_method": "wallet"
}
```
**响应**
```json
{
"code": 0,
"data": {
"id": 1002,
"order_no": "ORD202602281234567891",
"payment_status": 2,
"operator_id": 10,
"buyer_id": 20,
"operator_type": "agent",
"operator_name": "一级代理 A",
"purchase_role": "purchase_for_subordinate",
"total_amount": 10000,
"actual_paid_amount": 8000,
"purchase_remark": "为下级代理【二级代理 B】购买"
},
"msg": "订单创建成功"
}
```
---
### 订单列表查询
**请求**
```http
GET /api/admin/orders?purchase_role=purchase_for_subordinate&page=1&page_size=20
Authorization: Bearer {agent_token}
```
**响应**
```json
{
"code": 0,
"data": {
"list": [
{
"id": 1002,
"purchase_role": "purchase_for_subordinate",
"operator_id": 10,
"buyer_id": 20,
"total_amount": 10000,
"actual_paid_amount": 8000
}
],
"total": 1
},
"msg": "success"
}
```
---
## 迁移和部署
### 数据库迁移
**迁移脚本**
- `migrations/000067_add_operator_fields_to_orders.up.sql`
- `migrations/000068_add_transaction_subtype_to_wallet_transaction.up.sql`
**回滚脚本**
- `migrations/000067_add_operator_fields_to_orders.down.sql`
- `migrations/000068_add_transaction_subtype_to_wallet_transaction.down.sql`
**数据回填**(可选):
- `migrations/backfill_order_purchase_role.sql`:回填历史平台代购订单
---
### 部署步骤
1. **测试环境验证**
- 执行迁移脚本
- 验证索引创建成功
- 手工测试三种代购场景
2. **灰度发布**
- 代码部署到灰度环境
- 观察日志和监控指标
- 验证订单创建、查询、钱包扣款功能
3. **生产环境部署**
- 低峰期执行数据库迁移
- 部署代码
- 监控错误日志和业务指标
- 验证核心功能
---
### 监控指标
**关键指标**
- 订单创建成功率(按 payment_method 分组)
- 钱包扣款成功率
- 错误日志:余额不足、并发冲突、套餐激活失败
- 订单创建耗时P95、P99
**告警规则**
- 钱包扣款失败率 > 5%
- 订单创建失败率 > 10%
- 并发冲突次数 > 100/分钟
---
## 兼容性说明
### 向后兼容
- **现有订单字段为空值**:不影响已有订单查询
- **平台代购offline逻辑不变**:保持现有行为
- **H5 钱包支付不受影响**H5 端仍使用两步流程
- **数据权限保持一致**:订单角色追踪不影响现有数据权限逻辑
### 破坏性变更
**无**。所有新增字段均为 nullable新增逻辑不影响现有流程。
---
## 测试覆盖
### 集成测试场景
1. **代理自购**:代理为自己的卡购买套餐,验证扣款、激活、流水
2. **代理代购**:一级代理为二级代理购买,验证价格差异、佣金不产生
3. **平台代购**:平台 offline 代购,验证不扣款、佣金产生
4. **订单查询**:验证 OR 查询逻辑、purchase_role 筛选
5. **边界场景**:余额不足、并发扣款、幂等性
### 验证结果
- ✅ 编译通过:`go build ./...`
- ✅ OpenAPI 文档更新:新增字段已包含
- ✅ 迁移脚本执行成功
---
## 相关文档
- [提案文档](../../openspec/changes/fix-agent-wallet-order-creation/proposal.md)
- [设计文档](../../openspec/changes/fix-agent-wallet-order-creation/design.md)
- [任务清单](../../openspec/changes/fix-agent-wallet-order-creation/tasks.md)
- [Specs 规范](../../openspec/changes/fix-agent-wallet-order-creation/specs/)
- [项目规范](../../CLAUDE.md)

# 代理钱包订单创建功能部署指南
## 部署前检查清单
### 代码检查
- [x] 编译通过:`go build ./...`
- [x] OpenAPI 文档更新:`go run cmd/gendocs/main.go`
- [ ] 测试环境验证通过
- [ ] Code Review 通过
### 数据库准备
- [ ] 测试环境迁移脚本执行成功
- [ ] 生产环境数据库备份完成
- [ ] 回滚脚本准备完毕
---
## 数据库迁移
### 迁移脚本清单
**脚本位置**`migrations/`
| 序号 | 文件名 | 说明 | 执行时间 |
|------|--------|------|----------|
| 000067 | `add_operator_fields_to_orders.up.sql` | 订单表新增字段和索引 | < 5 秒 |
| 000068 | `add_transaction_subtype_to_wallet_transaction.up.sql` | 钱包流水表新增字段 | < 1 秒 |
**回滚脚本**
| 序号 | 文件名 | 说明 |
|------|--------|------|
| 000067 | `add_operator_fields_to_orders.down.sql` | 删除订单表字段和索引 |
| 000068 | `add_transaction_subtype_to_wallet_transaction.down.sql` | 删除钱包流水表字段 |
---
### 迁移执行步骤
#### 步骤 1备份数据库
```bash
# 生产环境数据库备份
pg_dump -h <host> -U <user> -d junhong_cmp -F c -b -v -f "backup_$(date +%Y%m%d_%H%M%S).dump"
```
**验证备份**
```bash
pg_restore --list backup_*.dump | head -20
```
---
#### 步骤 2执行迁移测试环境
**使用 migrate 工具**
```bash
# 切换到项目目录
cd /path/to/junhong_cmp_fiber
# 执行迁移
migrate -path migrations -database "postgresql://<user>:<password>@<host>:<port>/junhong_cmp?sslmode=disable" up
# 验证迁移版本
migrate -path migrations -database "postgresql://<user>:<password>@<host>:<port>/junhong_cmp?sslmode=disable" version
```
**手动执行(可选)**
```bash
# 连接数据库
psql -h <host> -U <user> -d junhong_cmp
# 执行迁移脚本
\i migrations/000067_add_operator_fields_to_orders.up.sql
\i migrations/000068_add_transaction_subtype_to_wallet_transaction.up.sql
```
---
#### 步骤 3验证迁移结果
**检查字段**
```sql
-- 验证订单表字段
\d tb_order
-- 预期输出包含:
-- operator_id | integer | | |
-- operator_type | character varying(20) | | |
-- actual_paid_amount | bigint | | |
-- purchase_role | character varying(50) | | |
```
**检查索引**
```sql
-- 验证索引
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'tb_order'
AND indexname IN ('idx_orders_operator_id', 'idx_orders_purchase_role');
-- 预期输出:
-- idx_orders_operator_id | CREATE INDEX idx_orders_operator_id ON public.tb_order USING btree (operator_id)
-- idx_orders_purchase_role | CREATE INDEX idx_orders_purchase_role ON public.tb_order USING btree (purchase_role)
```
**检查钱包流水表**
```sql
-- 验证钱包流水表字段
\d tb_agent_wallet_transaction
-- 预期输出包含:
-- transaction_subtype | character varying(50) | | |
-- related_shop_id | integer | | |
```
---
#### 步骤 4数据回填可选
**回填历史订单**
```bash
psql -h <host> -U <user> -d junhong_cmp -f migrations/backfill_order_purchase_role.sql
```
**验证回填结果**
```sql
SELECT purchase_role, operator_type, COUNT(*) as count
FROM tb_order
WHERE purchase_role IS NOT NULL
GROUP BY purchase_role, operator_type;
-- 预期输出示例:
-- purchased_by_platform | platform | 1234
```
---
#### 步骤 5执行迁移生产环境
**时间窗口**:选择低峰期(凌晨 2:00 - 4:00
**执行命令**(与测试环境相同):
```bash
migrate -path migrations -database "postgresql://<prod_host>:<prod_port>/<db>?sslmode=require" up
```
**监控指标**
- 迁移执行时间
- 索引创建时间CONCURRENTLY不锁表
- 数据库连接数
- 慢查询日志
---
### 回滚步骤
**场景**:迁移失败或发现严重 Bug
#### 步骤 1停止应用
```bash
# 停止应用服务
systemctl stop junhong-cmp-api
```
#### 步骤 2执行回滚
```bash
# 回滚到上一版本
migrate -path migrations -database "postgresql://<host>:<port>/<db>?sslmode=disable" down 2
```
**或手动执行回滚脚本**
```bash
psql -h <host> -U <user> -d junhong_cmp <<EOF
\i migrations/000068_add_transaction_subtype_to_wallet_transaction.down.sql
\i migrations/000067_add_operator_fields_to_orders.down.sql
EOF
```
#### 步骤 3验证回滚
```sql
-- 验证字段已删除
\d tb_order
\d tb_agent_wallet_transaction
-- 验证索引已删除
SELECT indexname FROM pg_indexes WHERE tablename = 'tb_order';
```
#### 步骤 4恢复应用旧版本代码
```bash
# 回滚代码到上一版本
git checkout <previous_commit>
# 重新编译
go build -o api cmd/api/main.go
# 启动应用
systemctl start junhong-cmp-api
```
---
## 代码部署
### 灰度发布计划
**阶段 1灰度服务器10% 流量)**
**时间**:低峰期(周一至周五 02:00 - 04:00
**步骤**
1. 部署代码到灰度服务器
2. 切换 10% 流量到灰度服务器
3. 观察 2 小时,监控关键指标
4. 手工测试代理自购、代理代购场景
**验证项**
- [ ] 应用启动成功
- [ ] 健康检查通过:`curl http://localhost:8080/health`
- [ ] 订单创建成功率 > 95%
- [ ] 钱包扣款成功率 > 99%
- [ ] 无严重错误日志
---
**阶段 2全量发布100% 流量)**
**时间**:灰度验证通过后 24 小时
**步骤**
1. 部署代码到所有服务器
2. 逐步切换流量20% → 50% → 100%
3. 持续监控 24 小时
**验证项**
- [ ] 所有服务器应用启动成功
- [ ] 订单创建成功率 > 95%
- [ ] 钱包扣款成功率 > 99%
- [ ] 错误日志无异常峰值
- [ ] 用户反馈无异常
---
### 发布命令
**构建**
```bash
# 构建二进制文件
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o api cmd/api/main.go
# 验证版本
./api --version
```
**部署**
```bash
# 停止服务
systemctl stop junhong-cmp-api
# 备份旧版本
cp /opt/junhong-cmp/api /opt/junhong-cmp/api.backup
# 替换新版本
cp api /opt/junhong-cmp/api
# 启动服务
systemctl start junhong-cmp-api
# 检查状态
systemctl status junhong-cmp-api
```
**验证**
```bash
# 健康检查
curl http://localhost:8080/health
# 查看日志
journalctl -u junhong-cmp-api -f
```
---
## 监控指标
### 关键业务指标
**订单创建**
- 订单创建成功率(总体)
- 订单创建成功率(按 payment_method 分组)
- 订单创建耗时P50、P95、P99
- 订单创建 QPS
**钱包扣款**
- 钱包扣款成功率
- 钱包扣款失败原因分布(余额不足、并发冲突、其他)
- 钱包余额不足次数
**订单查询**
- 订单列表查询耗时P95
- OR 查询性能(慢查询日志)
---
### 错误日志监控
**关键错误**
```bash
# 余额不足
grep "余额不足" /var/log/junhong-cmp/app.log | wc -l
# 并发冲突
grep "并发冲突" /var/log/junhong-cmp/app.log | wc -l
# 套餐激活失败
grep "套餐激活失败" /var/log/junhong-cmp/app.log | wc -l
# 成本价查询失败
grep "店铺没有该套餐的分配配置" /var/log/junhong-cmp/app.log | wc -l
```
---
### 数据库性能监控
**慢查询**
```sql
-- 查看慢查询
SELECT query, calls, total_time, mean_time
FROM pg_stat_statements
WHERE query LIKE '%tb_order%'
AND mean_time > 100
ORDER BY mean_time DESC
LIMIT 10;
```
**索引使用率**
```sql
-- 检查新索引是否被使用
SELECT schemaname, tablename, indexname, idx_scan, idx_tup_read, idx_tup_fetch
FROM pg_stat_user_indexes
WHERE indexname IN ('idx_orders_operator_id', 'idx_orders_purchase_role');
```
**OR 查询性能**
```sql
-- EXPLAIN 分析
EXPLAIN ANALYZE
SELECT * FROM tb_order
WHERE (buyer_type = 'agent' AND buyer_id = 10) OR operator_id = 10
LIMIT 20;
```
---
### 告警规则
**业务告警**
| 指标 | 阈值 | 级别 |
|------|------|------|
| 订单创建成功率 | < 95% | P1 |
| 钱包扣款成功率 | < 99% | P1 |
| 订单创建耗时 P99 | > 1000ms | P2 |
| 并发冲突次数 | > 100/分钟 | P2 |
| 余额不足次数 | > 500/小时 | P3 |
**系统告警**
| 指标 | 阈值 | 级别 |
|------|------|------|
| 应用进程退出 | - | P0 |
| 数据库连接数 | > 80% | P1 |
| 慢查询(订单相关) | > 1000ms | P2 |
---
## 验证测试
### 功能验证清单
**代理自购**
- [ ] 创建订单成功
- [ ] 钱包余额正确扣减
- [ ] 订单状态为已支付
- [ ] 套餐已激活
- [ ] 钱包流水记录正确transaction_subtype = "self_purchase"
- [ ] 订单响应字段完整operator_id、purchase_role 等)
**代理代购**
- [ ] 创建订单成功
- [ ] 钱包余额按操作者成本价扣减
- [ ] 订单金额显示买家成本价
- [ ] actual_paid_amount 为操作者成本价
- [ ] 套餐已激活
- [ ] 钱包流水记录正确transaction_subtype = "purchase_for_subordinate"、related_shop_id、remark 包含店铺名称)
- [ ] 未产生佣金记录
**平台代购**
- [ ] 创建订单成功
- [ ] 钱包余额未扣减
- [ ] 订单状态为已支付
- [ ] 套餐已激活
- [ ] 产生佣金记录
- [ ] purchase_role = "purchased_by_platform"
**订单查询**
- [ ] 代理可查询作为买家或操作者的订单
- [ ] purchase_role 筛选生效
- [ ] 订单列表响应包含新字段
**边界场景**
- [ ] 余额不足时返回明确错误
- [ ] 并发扣款时乐观锁生效
- [ ] 幂等性检查防止重复创建
- [ ] H5 端 wallet 订单不受影响(仍为待支付)
---
### 性能验证
**压力测试**(可选):
```bash
# 订单创建并发测试
ab -n 1000 -c 50 -H "Authorization: Bearer <token>" \
-p order_request.json \
-T "application/json" \
http://localhost:8080/api/admin/orders
# 订单列表查询性能测试
ab -n 5000 -c 100 -H "Authorization: Bearer <token>" \
http://localhost:8080/api/admin/orders?page=1&page_size=20
```
**预期结果**
- 订单创建 QPS > 50
- 订单创建 P95 < 200ms
- 订单列表查询 P95 < 100ms
---
## 回滚预案
### 回滚触发条件
满足以下任一条件时立即回滚:
- 订单创建成功率 < 90%(持续 5 分钟)
- 钱包扣款成功率 < 95%(持续 5 分钟)
- 发现严重 Bug重复扣款、金额计算错误、数据丢失
- 用户投诉量激增
---
### 快速回滚步骤
**步骤 1立即回滚代码**< 5 分钟)
```bash
# 停止服务
systemctl stop junhong-cmp-api
# 恢复旧版本
cp /opt/junhong-cmp/api.backup /opt/junhong-cmp/api
# 启动服务
systemctl start junhong-cmp-api
```
**步骤 2回滚数据库**(可选,< 10 分钟)
仅当数据异常时执行:
```bash
# 执行回滚脚本
migrate -path migrations -database "..." down 2
```
**步骤 3验证回滚成功**
- [ ] 应用启动成功
- [ ] 健康检查通过
- [ ] 订单创建成功率恢复
- [ ] 用户反馈恢复正常
---
## 上线后观察
### 观察期7 天)
**每日检查**
- [ ] 订单创建成功率
- [ ] 钱包扣款成功率
- [ ] 错误日志无异常
- [ ] 用户反馈无异常
- [ ] 数据库慢查询无新增
**周报总结**
- 订单创建总量、成功率
- 钱包扣款总量、成功率
- 代理自购 vs 代理代购占比
- 错误类型分布
- 性能指标趋势
---
## 联系人
**技术负责人**[姓名]
**运维负责人**[姓名]
**产品负责人**[姓名]
**紧急联系方式**
- 技术值班电话:[电话]
- 运维值班电话:[电话]
---
## 附录
### 相关文档
- [功能总结](./功能总结.md)
- [提案文档](../../openspec/changes/fix-agent-wallet-order-creation/proposal.md)
- [设计文档](../../openspec/changes/fix-agent-wallet-order-creation/design.md)
- [任务清单](../../openspec/changes/fix-agent-wallet-order-creation/tasks.md)
### 迁移脚本内容
详见 `migrations/` 目录:
- `000067_add_operator_fields_to_orders.up.sql`
- `000067_add_operator_fields_to_orders.down.sql`
- `000068_add_transaction_subtype_to_wallet_transaction.up.sql`
- `000068_add_transaction_subtype_to_wallet_transaction.down.sql`
- `backfill_order_purchase_role.sql`

# 订单超时自动取消功能
## 功能概述
为待支付订单(微信/支付宝)添加 30 分钟超时自动取消机制。超时后自动取消订单并解冻钱包余额(如有冻结)。
## 核心设计
### 超时流程
```
用户下单(微信/支付宝)
├── 设置 expires_at = 当前时间 + 30 分钟
├── 订单状态: payment_status = 1待支付
├── 场景 1: 用户在 30 分钟内支付
│ ├── 支付成功 → 清除 expires_at设为 NULL
│ └── 订单正常完成
└── 场景 2: 超过 30 分钟未支付
├── Asynq Scheduler 每分钟触发扫描
├── 查询 expires_at <= NOW() AND payment_status = 1
├── 取消订单 → payment_status = 5已取消
├── 清除 expires_at
└── 解冻钱包余额(如有)
```
### 不设置超时的场景
- **钱包支付**:立即扣款,无需超时
- **线下支付**:管理员手动确认,无需超时
- **混合支付**:仅当包含需要在线支付的部分时才设置超时
## 技术实现
### 数据库变更
```sql
-- 迁移文件: migrations/000069_add_order_expiration.up.sql
ALTER TABLE tb_order ADD COLUMN expires_at TIMESTAMPTZ;
-- 部分索引: 仅索引待支付订单,减少索引大小
CREATE INDEX idx_order_expires ON tb_order (expires_at, payment_status)
WHERE expires_at IS NOT NULL AND payment_status = 1;
```
### 涉及文件
| 层级 | 文件 | 变更说明 |
|------|------|----------|
| 迁移 | `migrations/000069_add_order_expiration.up.sql` | 添加 expires_at 字段和索引 |
| 迁移 | `migrations/000069_add_order_expiration.down.sql` | 回滚脚本 |
| 常量 | `pkg/constants/constants.go` | 添加任务类型和超时参数 |
| 模型 | `internal/model/order.go` | 添加 ExpiresAt 字段 |
| DTO | `internal/model/dto/order_dto.go` | 添加 ExpiresAt、IsExpired 响应字段 |
| Store | `internal/store/postgres/order_store.go` | 添加 FindExpiredOrders、is_expired 过滤 |
| Service | `internal/service/order/service.go` | 创建订单设置超时、取消逻辑、批量取消 |
| 任务 | `internal/task/order_expire.go` | 订单超时任务处理器 |
| 任务 | `internal/task/alert_check.go` | 告警检查任务处理器(从 ticker 迁移) |
| 任务 | `internal/task/data_cleanup.go` | 数据清理任务处理器(从 ticker 迁移) |
| 队列 | `pkg/queue/types.go` | 添加 OrderExpirer 接口和 WorkerStores/Services 字段 |
| 队列 | `pkg/queue/handler.go` | 注册 3 个新任务处理器 |
| Bootstrap | `internal/bootstrap/worker_stores.go` | 添加 CardWallet Store |
| Bootstrap | `internal/bootstrap/worker_services.go` | 添加 OrderService 初始化 |
| Worker | `cmd/worker/main.go` | 替换 ticker 为 Asynq Scheduler |
### 常量定义
```go
// pkg/constants/constants.go
TaskTypeOrderExpire = "order:expire" // 订单超时任务
TaskTypeAlertCheck = "alert:check" // 告警检查任务
TaskTypeDataCleanup = "data:cleanup" // 数据清理任务
OrderExpireTimeout = 30 * time.Minute // 订单超时时间
OrderExpireBatchSize = 100 // 每次批量取消数量
```
### 接口变更
#### 订单列表查询新增过滤参数
```
GET /api/admin/orders?is_expired=true
GET /api/h5/orders?is_expired=true
```
- `is_expired=true`: 仅返回已超时的订单
- `is_expired=false`: 仅返回未超时的订单
#### 订单响应新增字段
```json
{
"expires_at": "2025-02-28T12:30:00+08:00",
"is_expired": false
}
```
- `expires_at`: 超时时间,`null` 表示无超时(钱包/线下支付)
- `is_expired`: 是否已超时(计算字段)
## 定时任务调度器重构
### 变更前time.Ticker
```go
// cmd/worker/main.go 中的 goroutine
alertChecker := startAlertChecker(ctx, ...) // time.Ticker 每分钟
cleanupChecker := startCleanupScheduler(ctx, ...) // time.Timer 每天凌晨 2 点
```
**问题**
- 单点运行,无法分布式
- 无重试机制
- 无任务状态监控
### 变更后Asynq Scheduler
```go
// Asynq Scheduler 统一管理
asynqScheduler.Register("@every 1m", asynq.NewTask("order:expire", nil))
asynqScheduler.Register("@every 1m", asynq.NewTask("alert:check", nil))
asynqScheduler.Register("0 2 * * *", asynq.NewTask("data:cleanup", nil))
```
**优势**
- 通过 Redis 实现分布式调度
- 自动重试失败任务
- 可通过 Asynq Dashboard 监控
- 统一的任务处理模式
### 调度规则
| 任务 | 调度表达式 | 说明 |
|------|-----------|------|
| 订单超时取消 | `@every 1m` | 每分钟扫描一次 |
| 告警检查 | `@every 1m` | 每分钟检查一次 |
| 数据清理 | `0 2 * * *` | 每天凌晨 2 点执行 |
## 钱包解冻逻辑
### 取消订单时的解冻流程
```
cancelOrder(ctx, order)
├── 幂等更新: WHERE payment_status = 1 → 5
├── 清除 expires_at
├── 如果是代理钱包支付 (payment_method = wallet, buyer_type = agent)
│ └── AgentWalletStore.UnfreezeBalanceWithTx(tx, shopID, amount)
└── 如果是卡钱包支付 (payment_method = wallet/mixed, buyer_type != agent)
└── 直接更新 frozen_balance -= amount (WHERE frozen_balance >= amount)
```
### 幂等性保障
- 使用 `WHERE payment_status = 1` 条件更新,确保只取消待支付订单
- `RowsAffected == 0` 说明订单已被处理(已支付或已取消),直接跳过
- 批量取消时,单个订单失败不影响其他订单
## 循环依赖解决方案
`internal/service/order` 导入 `pkg/queue`(使用 queue.Client`pkg/queue/types.go` 又需要引用 OrderService二者构成循环依赖。
**解决方案**:在 `pkg/queue/types.go` 定义 `OrderExpirer` 接口,`internal/task/order_expire.go` 定义同名局部接口。Go 的结构化类型系统使 `order.Service` 自动满足两个接口,无需显式声明。
```go
// pkg/queue/types.go
type OrderExpirer interface {
CancelExpiredOrders(ctx context.Context) (int, error)
}
// WorkerServices 中使用接口类型
OrderExpirer OrderExpirer
// internal/task/order_expire.go局部接口避免导入 pkg/queue
type OrderExpirer interface {
CancelExpiredOrders(ctx context.Context) (int, error)
}
```

# Package System Upgrade - API Documentation
## Client API
### Query My Data Usage
Returns the package data usage for the cards/devices bound to the current user.
**Request**
```http
GET /api/h5/packages/my-usage
Authorization: Bearer {token}
```
**Response**
```json
{
  "code": 0,
  "msg": "success",
  "data": {
    "main_package": {
      "package_usage_id": 101,
      "package_id": 1,
      "package_name": "Monthly Plan 30G",
      "data_limit_mb": 30720,
      "data_usage_mb": 15360,
      "status": 1,
      "priority": 1,
      "activated_at": "2025-02-01T00:00:00Z",
      "expires_at": "2025-02-28T23:59:59Z",
      "data_reset_cycle": "monthly",
      "last_reset_at": "2025-02-01T00:00:00Z",
      "next_reset_at": "2025-03-01T00:00:00Z"
    },
    "addon_packages": [
      {
        "package_usage_id": 102,
        "package_id": 5,
        "package_name": "Add-on Pack 5G",
        "data_limit_mb": 5120,
        "data_usage_mb": 2048,
        "status": 1,
        "priority": 2,
        "master_usage_id": 101,
        "activated_at": "2025-02-10T00:00:00Z",
        "expires_at": "2025-02-28T23:59:59Z"
      }
    ],
    "total": {
      "total_mb": 35840,
      "used_mb": 17408,
      "remaining_mb": 18432
    }
  },
  "timestamp": 1707667200
}
```
**Response Fields**
| Field | Type | Description |
|------|------|------|
| `main_package` | object | Main package info (may be null) |
| `addon_packages` | array | Add-on pack list |
| `total.total_mb` | int64 | Total data (MB) |
| `total.used_mb` | int64 | Used data (MB) |
| `total.remaining_mb` | int64 | Remaining data (MB) |
**Package `status` values**
| Value | Description |
|----|------|
| 0 | Pending |
| 1 | Active |
| 2 | Exhausted |
| 3 | Expired |
| 4 | Invalidated |
---
## Admin API
### Query Package Daily Usage Records
Returns the daily data usage records for a given package.
**Request**
```http
GET /api/admin/package-usage/{id}/daily-records
Authorization: Bearer {token}
```
**Query Parameters**
| Parameter | Type | Required | Description |
|------|------|------|------|
| `start_date` | string | Yes | Start date (YYYY-MM-DD) |
| `end_date` | string | Yes | End date (YYYY-MM-DD) |
**Response**
```json
{
  "code": 0,
  "msg": "success",
  "data": {
    "package_usage_id": 101,
    "package_name": "Monthly Plan 30G",
    "records": [
      {
        "date": "2025-02-01",
        "daily_usage_mb": 1024,
        "cumulative_usage_mb": 1024
      },
      {
        "date": "2025-02-02",
        "daily_usage_mb": 512,
        "cumulative_usage_mb": 1536
      },
      {
        "date": "2025-02-03",
        "daily_usage_mb": 2048,
        "cumulative_usage_mb": 3584
      }
    ],
    "total_usage_mb": 15360
  },
  "timestamp": 1707667200
}
```
**Error Codes**
| Code | Description |
|-------|------|
| 400 | Invalid parameters (malformed date) |
| 403 | No permission to access this package |
| 404 | Package not found |
---
### Create Package (Extended Fields)
New fields supported when creating a package.
**Request**
```http
POST /api/admin/packages
Authorization: Bearer {token}
Content-Type: application/json
```
**Request Body**
```json
{
  "package_name": "Monthly Plan 30G",
  "package_type": "main",
  "data_limit_mb": 30720,
  "price": 9900,
  "calendar_type": "natural_month",
  "duration_months": 1,
  "data_reset_cycle": "monthly",
  "enable_realname_activation": false
}
```
**New Fields**
| Field | Type | Required | Description |
|------|------|------|------|
| `calendar_type` | string | Yes | Validity type: `natural_month` (calendar month), `by_day` (fixed number of days) |
| `duration_months` | int | Conditional | Months for calendar-month packages (required when calendar_type=natural_month) |
| `duration_days` | int | Conditional | Days for by-day packages (required when calendar_type=by_day) |
| `data_reset_cycle` | string | Yes | Data reset cycle: `daily`, `monthly`, `yearly`, `none` |
| `enable_realname_activation` | bool | No | Whether the package activates only after real-name verification (default false) |
**calendar_type values**
| Value | Description | Expiry calculation |
|----|------|-----------|
| `natural_month` | Calendar month | Activation month + N months, expires at month end |
| `by_day` | Fixed days | Activation date + N days |
**data_reset_cycle values**
| Value | Description | Reset time |
|----|------|---------|
| `daily` | Daily reset | 00:00:00 every day |
| `monthly` | Monthly reset | Calendar-month packages: 1st of each month<br>By-day packages: every 30 days |
| `yearly` | Yearly reset | January 1 each year |
| `none` | No reset | Never |
---
### Update Package (Extended Fields)
New fields supported when updating a package.
**Request**
```http
PUT /api/admin/packages/{id}
Authorization: Bearer {token}
Content-Type: application/json
```
**Request Body**
```json
{
  "calendar_type": "by_day",
  "duration_days": 30,
  "data_reset_cycle": "none",
  "enable_realname_activation": true
}
```
---
### Query Package Detail (Extended Fields)
New fields returned in the package detail.
**Response**
```json
{
  "code": 0,
  "data": {
    "id": 1,
    "package_name": "Monthly Plan 30G",
    "package_type": "main",
    "data_limit_mb": 30720,
    "price": 9900,
    "calendar_type": "natural_month",
    "duration_months": 1,
    "duration_days": 0,
    "data_reset_cycle": "monthly",
    "enable_realname_activation": false,
    "status": 1,
    "created_at": "2025-01-01T00:00:00Z",
    "updated_at": "2025-01-15T00:00:00Z"
  }
}
```
---
## Error Code Summary
| Code | HTTP status | Description |
|-------|------------|------|
| `CodePackageActivationConflict` | 409 | Package activation in progress; retry later |
| `CodeNoMainPackage` | 400 | A main package is required before buying an add-on pack |
| `CodeRealnameRequired` | 403 | The device/card must complete real-name verification before buying a package |
| `CodeMixedOrderForbidden` | 400 | A single order cannot contain both a formal package and an add-on pack |
---
## Data Permissions
### Client API
- Can only query packages for cards/devices bound to the current user
- The user identity is resolved from the JWT token
### Admin API
- Agents: only packages of their own shop and sub-shops
- Enterprise users: only packages of their own enterprise
- Platform users: all packages
- Unauthorized access returns 403

# Package System Upgrade - Usage Guide
## Scenario 1: Stockpiling Packages Pending Real-Name Activation
### Business Scenario
An agent pre-purchases packages in the admin console for cards/devices that are not yet real-name verified; the packages activate automatically once the user completes verification.
### Workflow
```
1. Agent logs in to the admin console
2. Selects a card/device that is not yet real-name verified
3. Buys a package (one that supports real-name activation)
4. Package enters the pending state (status=0, pending_realname_activation=true)
5. User completes real-name verification
6. System activates the package automatically
```
### Prerequisites
- The package must have `enable_realname_activation=true`
- The card/device is not yet real-name verified
### Notes
- A stockpiled package does not accrue validity before real-name verification
- After verification, the validity period starts from the activation date
- If the card/device is already verified, the package activates immediately
---
## Scenario 2: Main Package Queuing
### Business Scenario
The user already has an active main package and wants to buy the next one in advance.
### Workflow
```
1. User buys a new main package
2. System detects an existing active main package
3. New package enters the queue (status=0, priority=N+1)
4. Current main package expires
5. System automatically activates the next queued package
```
### Queuing Rules
| Case | New package state |
|------|-----------|
| No active main package | Activated immediately (status=1, priority=1) |
| Active main package exists | Queued (status=0, priority=MAX+1) |
### Checking the Queue
```http
GET /api/h5/packages/my-usage
// response excerpt:
{
  "main_package": {
    "package_name": "",
    "status": 1,            // active
    "expires_at": "2025-03-31T23:59:59Z"
  },
  "queued_packages": [
    {
      "package_name": "",
      "status": 0,          // queued
      "priority": 2
    }
  ]
}
```
---
## Scenario 3: Buying an Add-on Pack
### Business Scenario
The user's main package is running low on data and needs an add-on pack for extra data.
### Workflow
```
1. Confirm the user has an active or pending main package
2. User selects an add-on pack
3. System automatically binds it to the current main package
4. The add-on pack takes effect immediately
5. Data deduction drains the add-on pack first
```
### Purchase Restrictions
| Restriction | Description |
|-------|------|
| Main package required | An add-on pack cannot be bought without a main package |
| No mixed orders | A single order cannot contain both a main package and an add-on pack |
### Add-on Pack Lifecycle
```
Main package expires → add-on pack is invalidated automatically (status=4)
```
### Add-on Pack Validity
| Type | Expiry calculation |
|------|-----------|
| Follows main package | Expires together with the main package (has_independent_expiry=false) |
| Independent validity | Calculated from the purchase date (has_independent_expiry=true) |
---
## Scenario 4: Data Queries
### Client: query my data usage
```http
GET /api/h5/packages/my-usage
Authorization: Bearer {token}
```
Sample response:
```json
{
  "code": 0,
  "data": {
    "main_package": {
      "package_usage_id": 101,
      "package_name": "Monthly Plan 30G",
      "data_limit_mb": 30720,
      "data_usage_mb": 15360,
      "status": 1,
      "activated_at": "2025-02-01T00:00:00Z",
      "expires_at": "2025-02-28T23:59:59Z"
    },
    "addon_packages": [
      {
        "package_usage_id": 102,
        "package_name": "Add-on Pack 5G",
        "data_limit_mb": 5120,
        "data_usage_mb": 2048,
        "status": 1,
        "priority": 2
      }
    ],
    "total": {
      "total_mb": 35840,
      "used_mb": 17408,
      "remaining_mb": 18432
    }
  }
}
```
### Admin: query package daily usage records
```http
GET /api/admin/package-usage/101/daily-records?start_date=2025-02-01&end_date=2025-02-15
Authorization: Bearer {token}
```
Sample response:
```json
{
  "code": 0,
  "data": {
    "package_usage_id": 101,
    "package_name": "Monthly Plan 30G",
    "records": [
      {
        "date": "2025-02-01",
        "daily_usage_mb": 1024,
        "cumulative_usage_mb": 1024
      },
      {
        "date": "2025-02-02",
        "daily_usage_mb": 512,
        "cumulative_usage_mb": 1536
      }
    ],
    "total_usage_mb": 15360
  }
}
```
---
## Package States
| Code | Name | Description |
|-------|------|------|
| 0 | Pending | Queued or awaiting real-name activation |
| 1 | Active | In use |
| 2 | Exhausted | Data used up |
| 3 | Expired | Past its validity period |
| 4 | Invalidated | Add-on pack invalidated because the main package expired |
---
## Data Reset
### Reset types
| Type | Package type | Reset time | Use case |
|------|---------|---------|---------|
| Daily | All | 00:00:00 every day | Daily-rental cards |
| Monthly | Calendar month | 1st of each month, 00:00:00 | Calendar-month packages |
| Monthly | By day | Every 30 days from activation | By-day packages |
| Yearly | All | January 1 each year | Annual packages |
| None | All | Never | One-off data packs |
### Reset behavior
```
Before reset: data_usage_mb = 25600
After reset:  data_usage_mb = 0
```
- A reset only clears used data; it does not affect the validity period
- An exhausted package (status=2) returns to active (status=1) after reset
---
## Suspension and Resumption
### Suspension condition
All active packages have exhausted their data:
- Main package status=2
- All add-on packs status=2
### Resumption conditions
- A new package is purchased
- A package's data is reset
- A queued package is activated
### Suspension records
```sql
-- Suspension is recorded on the tb_iot_card table
stopped_at:   -- suspension time
stop_reason: "流量耗尽"  -- "data exhausted"
-- after resumption
resumed_at:   -- resumption time
```
---
## FAQ
### Q: Why does buying an add-on pack fail with "a main package is required"?
A: Add-on packs must be bound to a main package; buy a main package first.
### Q: Can add-on packs still be used after the main package expires?
A: No. When the main package expires, its bound add-on packs are invalidated automatically (status=4).
### Q: Can a queued package be cancelled?
A: Cancelling queued packages is not currently supported; contact customer support.
### Q: Why is the card still suspended after a data reset?
A: A data reset automatically triggers resumption; if the card stays suspended, check whether the carrier interface is working.
### Q: How do unverified users buy packages on H5?
A: On H5, real-name verification must be completed before buying a package. Agents can stockpile packages for unverified users in the admin console.

# Package System Upgrade - Feature Summary
## Overview
This upgrade delivers full package lifecycle management: main-package queuing, add-on packs bound to a main package, stockpiling pending real-name activation, and priority-based data deduction.
## Core Features
### 1. Package Validity Calculation
| Type | Calculation | Example |
|------|---------|------|
| Calendar month | Activation month + N months, month end 23:59:59 | Activated Feb 15 for 3 months → expires May 31 23:59:59 |
| By day | Activation date + N days, that day 23:59:59 | Activated Feb 15 for 30 days → expires Mar 16 23:59:59 |
### 2. Main Package Queuing
```
Card/device  buys main package A → activated immediately (status=1, priority=1)
             buys main package B → queued (status=0, priority=2)
             main package A expires → main package B activated automatically
```
- A card/device can have only one active main package at a time
- Newly purchased main packages enter the queue automatically
- The expiry check runs every 10 seconds
### 3. Add-on Packs Bound to the Main Package
```
An add-on pack must be bound to the currently active main package (master_usage_id)
├── The add-on pack is active together with the main package
├── When the main package expires, the add-on pack is invalidated automatically (status=4)
└── Data deduction drains add-on packs first, then the main package
```
- Buying an add-on pack requires an active or pending main package
- Add-on packs may have an independent validity period (`has_independent_expiry=true`)
### 4. Stockpiling Pending Real-Name Activation
```
Admin buys a package for an unverified card/device
├── Package status=0, pending_realname_activation=true
├── User completes real-name verification
├── Polling system detects the status change
└── Package activated automatically (status=1)
```
- Triggered only when the package has `enable_realname_activation=true`
- Unverified H5 users cannot buy packages directly
### 5. Data Deduction Priority
Deduction order: **add-on packs (by priority ASC) → main package**
```go
// Example: a card with 3 active packages
// Main package:  1000 MB, 500 MB used
// Add-on pack 1: 100 MB, 0 MB used  (priority=2)
// Add-on pack 2: 200 MB, 50 MB used (priority=3)
// This usage event: 180 MB
// Deduction order:
// 1. Add-on pack 1: deduct 100 MB (exhausted, status=2)
// 2. Add-on pack 2: deduct 80 MB
// 3. Main package: unchanged
```
### 6. Data Reset Cycles
| Cycle | Package type | Reset time | Notes |
|------|---------|---------|------|
| Daily | All | 00:00:00 every day | `data_reset_cycle=daily` |
| Monthly | Calendar month | 1st of each month, 00:00:00 | `calendar_type=natural_month` |
| Monthly | By day | Every 30 days from activation | `calendar_type=by_day` |
| Yearly | All | January 1, 00:00:00 | `data_reset_cycle=yearly` |
### 7. Suspension/Resumption
- **Suspension condition**: all active packages are out of data (main package + all add-on packs at status=2)
- **Resumption condition**: a new package is purchased or a package is activated; resumption is automatic
## Database Changes
### New tables
| Table | Description |
|------|------|
| `tb_package_usage_daily_record` | Daily package data usage records |
| `tb_card_daily_usage` | Per-card daily data usage summary |
### Extended columns
**tb_package table**
- `calendar_type`: validity type (natural_month/by_day)
- `data_reset_cycle`: data reset cycle (daily/monthly/yearly/none)
- `enable_realname_activation`: whether activation requires real-name verification
- `duration_days`: validity in days for by-day packages
**tb_package_usage table**
- `priority`: package priority
- `master_usage_id`: main package ID (used by add-on packs)
- `has_independent_expiry`: whether an add-on pack has an independent validity period
- `pending_realname_activation`: whether the package awaits real-name activation
- `data_reset_cycle`: data reset cycle
- `last_reset_at`: last reset time
- `next_reset_at`: next reset time
**tb_iot_card table**
- `stopped_at`: suspension time
- `resumed_at`: resumption time
- `stop_reason`: suspension reason
**tb_carrier table**
- `billing_day`: carrier billing day (used to compute the billing cycle for data queries; China Unicom = 27, others = 1)
## API Endpoints
### New endpoints
| Endpoint | Method | Description |
|------|------|------|
| `/api/h5/packages/my-usage` | GET | Client: query my data usage |
| `/api/admin/package-usage/:id/daily-records` | GET | Query package daily usage records |
### Extended endpoints
The package management APIs support the new fields:
- `calendar_type`: validity type
- `duration_days`: validity in days
- `data_reset_cycle`: reset cycle
- `enable_realname_activation`: real-name activation switch
## Polling Tasks
| Task | Schedule | Description |
|------|---------|------|
| Package activation check | Every 10 s | Detect expired main packages and activate queued ones |
| Data reset scheduler | Every 10 s | Run daily/monthly/yearly data resets |
| Real-name status check | Configurable | Detect first-time verification and trigger package activation |
## Asynq Tasks
| Task type | Description |
|---------|------|
| `task:package:first_activation` | Activate a package on first real-name verification |
| `task:package:queue_activation` | Activate a queued main package |
## Error Codes
| Code | Description |
|-------|------|
| `CodePackageActivationConflict` | Package activation in progress |
| `CodeNoMainPackage` | A main package is required before buying an add-on pack |
| `CodeRealnameRequired` | Real-name verification required before buying a package |
| `CodeMixedOrderForbidden` | A single order cannot contain both a formal package and an add-on pack |
## Implementation
### Service layer
| Service | File | Responsibility |
|------|------|------|
| ActivationService | `activation_service.go` | Package activation (real-name and queue activation) |
| UsageService | `usage_service.go` | Data deduction, suspension checks |
| ResetService | `reset_service.go` | Data resets (daily/monthly/yearly) |
| CustomerViewService | `customer_view_service.go` | Client data queries |
| DailyRecordService | `daily_record_service.go` | Package daily usage records |
| StopResumeService | `stop_resume_service.go` | Suspension/resumption operations |
### Utility functions
| Function | Description |
|------|------|
| `CalculateExpiryTime()` | Compute a package's expiry time |
| `CalculateNextResetTime()` | Compute the next reset time |
## Performance Optimizations
- Data resets run in batches of at most 10000 rows
- A Redis distributed lock prevents concurrent package activation
- Asynq retry policy: MaxRetry(3), Timeout(30s)

# Package System Upgrade - Operations Guide
## Monitoring Metrics
### Asynq queue metrics
| Metric | Description | Normal range | Alert threshold |
|------|------|---------|---------|
| `asynq_queue_size{queue="default"}` | Default queue length | < 100 | > 1000 |
| `asynq_queue_latency_seconds` | Task processing latency | < 5s | > 30s |
| `asynq_processed_total` | Tasks processed | Steadily increasing | - |
| `asynq_failed_total` | Tasks failed | Near 0 | > 10/min |
### Package activation metrics
| Metric | Description | Normal range | Alert threshold |
|------|------|---------|---------|
| Queued-package activation delay | Time from main-package expiry to next activation | < 30s | > 1 min |
| Real-name activation delay | Time from verification to package activation | < 30s | > 1 min |
| Pending-package backlog | Count of packages with `status=0` | Normal fluctuation | Sustained growth |
### API performance metrics
| Metric | Endpoint | Normal range | Alert threshold |
|------|------|---------|---------|
| P95 latency | `/api/h5/packages/my-usage` | < 100ms | > 200ms |
| P99 latency | `/api/h5/packages/my-usage` | < 200ms | > 500ms |
| P95 latency | `/api/admin/package-usage/:id/daily-records` | < 150ms | > 300ms |
### Database metrics
| Metric | Description | Normal range | Alert threshold |
|------|------|---------|---------|
| Data reset duration | Time per reset batch | < 5s | > 10s |
| Package table growth | Daily new rows in `tb_package_usage` | Normal fluctuation | Abnormal growth |
| Daily-record table rows | `tb_package_usage_daily_record` | Normal growth | - |
---
## Alert Rules
### Sample Prometheus alert rules
```yaml
groups:
  - name: package_system_alerts
    rules:
      # Package activation delay
      - alert: PackageActivationDelayHigh
        expr: histogram_quantile(0.95, rate(package_activation_duration_seconds_bucket[5m])) > 60
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Package activation latency too high"
          description: "Package activation P95 latency above 1 minute, current: {{ $value }}s"
      # Asynq queue backlog
      - alert: AsynqQueueBacklog
        expr: asynq_queue_size{queue="default"} > 1000
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Asynq task queue backlog"
          description: "Default queue has more than 1000 tasks, current: {{ $value }}"
      # Task failure rate
      - alert: AsynqTaskFailureRateHigh
        expr: rate(asynq_failed_total[5m]) > 0.1
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Asynq task failure rate too high"
          description: "Task failure rate above 10%, current: {{ $value }}/s"
      # API latency
      - alert: PackageAPILatencyHigh
        expr: histogram_quantile(0.95, rate(http_request_duration_seconds_bucket{path=~"/api/h5/packages.*"}[5m])) > 0.2
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Package API latency too high"
          description: "Package API P95 latency above 200ms"
      # Data reset duration
      - alert: DataResetDurationHigh
        expr: package_data_reset_duration_seconds > 10
        for: 1m
        labels:
          severity: warning
        annotations:
          summary: "Data reset batch too slow"
          description: "A data reset batch took more than 10 seconds"
```
---
## Rollback Plans
### Scenario 1: Code rollback
**Triggers**
- API failures
- Business-logic errors
- Severe performance degradation
**Steps**
```bash
# 1. Check out the previous stable version
git checkout <previous stable tag>
# 2. Rebuild the image
make build-docker
# 3. Redeploy
kubectl rollout restart deployment/cmp-api
kubectl rollout restart deployment/cmp-worker
# 4. Verify the services
curl -s http://api-host/health | jq
```
**Notes**
- A code rollback does not roll back database migrations
- Make sure the old code is compatible with the new schema
- New columns have defaults, so old code keeps running
### Scenario 2: Database rollback
**Triggers**
- Faulty migration script
- Data corruption
- Full feature rollback required
**Prerequisites**
- A database backup exists
- The code has been rolled back to a compatible version
**Steps**
```bash
# 1. Stop the API and Worker services
kubectl scale deployment/cmp-api --replicas=0
kubectl scale deployment/cmp-worker --replicas=0
# 2. Roll back the database
make migrate-down STEPS=1
# 3. Verify the schema
psql -h $DB_HOST -U $DB_USER -d $DB_NAME -c "\d tb_package"
psql -h $DB_HOST -U $DB_USER -d $DB_NAME -c "\d tb_package_usage"
# 4. Restart the services
kubectl scale deployment/cmp-api --replicas=3
kubectl scale deployment/cmp-worker --replicas=2
```
**Rollback script location**
`migrations/000055_package_system_upgrade.down.sql`
### Scenario 3: Data repair
**Case 1: Inconsistent package states**
```sql
-- Find packages in an inconsistent state
SELECT id, status, activated_at, expires_at
FROM tb_package_usage
WHERE status = 1 AND expires_at < NOW();
-- Fix: mark expired packages as expired
UPDATE tb_package_usage
SET status = 3, updated_at = NOW()
WHERE status = 1 AND expires_at < NOW();
```
**Case 2: Add-on packs not invalidated**
```sql
-- Find add-on packs still active while their main package has expired
SELECT pu.id, pu.status, pu.master_usage_id, master.status as master_status
FROM tb_package_usage pu
JOIN tb_package_usage master ON pu.master_usage_id = master.id
WHERE pu.status = 1 AND master.status = 3;
-- Fix: invalidate these add-on packs
UPDATE tb_package_usage
SET status = 4, updated_at = NOW()
WHERE id IN (
  SELECT pu.id
  FROM tb_package_usage pu
  JOIN tb_package_usage master ON pu.master_usage_id = master.id
  WHERE pu.status = 1 AND master.status = 3
);
```
**Case 3: Wrong reset times**
```sql
-- Find packages with a stale next reset time
SELECT id, data_reset_cycle, next_reset_at
FROM tb_package_usage
WHERE data_reset_cycle = 'daily' AND next_reset_at < NOW() - INTERVAL '1 day';
-- Fix: recompute the next reset time
UPDATE tb_package_usage
SET next_reset_at = DATE_TRUNC('day', NOW()) + INTERVAL '1 day',
    updated_at = NOW()
WHERE data_reset_cycle = 'daily' AND next_reset_at < NOW() - INTERVAL '1 day';
```
---
## Routine Operations
### Manually trigger a data reset
```bash
# Trigger via API
curl -X POST http://api-host/api/admin/internal/trigger-data-reset \
  -H "Authorization: Bearer $ADMIN_TOKEN"
```
### Inspect the Asynq queues
```bash
# Queue overview
asynq stats
# Pending tasks
asynq list pending
# Failed tasks
asynq list archived
```
### Retry failed tasks
```bash
# Retry all failed tasks
asynq task run archived --all
# Retry a specific task
asynq task run archived --id=<task_id>
```
---
## Capacity Planning
### Data growth estimates
| Table | Daily | Monthly | Yearly |
|----|---------|--------|--------|
| `tb_package_usage` | ~1000 rows | ~30000 rows | ~360000 rows |
| `tb_package_usage_daily_record` | ~10000 rows | ~300000 rows | ~3600000 rows |
| `tb_card_daily_usage` | ~10000 rows | ~300000 rows | ~3600000 rows |
### Storage estimates
| Table | Row size | Yearly storage |
|----|---------|---------|
| `tb_package_usage_daily_record` | ~100 bytes | ~360 MB |
| `tb_card_daily_usage` | ~80 bytes | ~288 MB |
### Cleanup policy
```sql
-- Delete daily records older than 180 days (optional)
DELETE FROM tb_package_usage_daily_record
WHERE date < NOW() - INTERVAL '180 days';
DELETE FROM tb_card_daily_usage
WHERE usage_date < NOW() - INTERVAL '180 days';
```

# Polling System
## Overview
The polling system is a core module of the IoT card management platform. It periodically checks cards' real-name status, data usage, and remaining package data. The system uses a distributed architecture and supports high-concurrency processing and dynamic configuration.
## Core Features
### 1. Real-Name Check Polling (Realname Check)
- Periodically queries each card's real-name verification status
- Automatically skips industry cards (no real-name required)
- Re-matches the polling config when the status changes
### 2. Data Check Polling (Carddata Check)
- Periodically queries each card's data usage
- Supports automatic cross-month data reset
- Records data usage history
### 3. Package Check Polling (Package Check)
- Monitors package data usage ratio
- Automatically suspends on overage (> 100%)
- Warns when close to overage (>= 95%)
## Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│ Worker process │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Scheduler │ │ AlertChecker │ │ CleanupTask │ │
│ └──────┬───────┘ └──────────────┘ └──────────────┘ │
│ │ │
│ ▼ │
│ ┌──────────────────────────────────────────────────────────┐ │
│ │ Asynq task queue │ │
│ └──────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Realname │ │ Carddata │ │ Package │ │
│ │ Handler │ │ Handler │ │ Handler │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────┐
│ Redis │
│ - Polling queues (Sorted Set) │
│ - Manual trigger queues (List) │
│ - Card info cache (Hash) │
│ - Config cache │
│ - Concurrency semaphores │
│ - Monitoring stats │
└─────────────────────────────────────┘
┌─────────────────────────────────────┐
│ PostgreSQL │
│ - tb_polling_config │
│ - tb_polling_concurrency_config │
│ - tb_polling_alert_rule │
│ - tb_polling_alert_history │
│ - tb_data_cleanup_config │
│ - tb_data_cleanup_log │
│ - tb_polling_manual_trigger_log │
│ - tb_data_usage_record │
└─────────────────────────────────────┘
```
## Quick Start
### 1. Database migration
```bash
make migrate-up
```
### 2. Initialize configuration
```bash
psql $DATABASE_URL -f scripts/init_polling_config.sql
```
### 3. Start the Worker
```bash
go run cmd/worker/main.go
```
## Configuration
### Polling config (tb_polling_config)
| Field | Description | Default |
|------|------|--------|
| name | Config name | - |
| priority | Priority (higher wins) | 0 |
| carrier_id | Carrier ID (optional) | - |
| status | Card status filter (optional) | - |
| card_category | Card category filter (optional) | - |
| realname_check_interval | Real-name check interval (seconds) | 3600 |
| carddata_check_interval | Data check interval (seconds) | 7200 |
| package_check_interval | Package check interval (seconds) | 14400 |
| is_enabled | Enabled flag | true |
### Concurrency config (tb_polling_concurrency_config)
| Task type | Default concurrency | Description |
|----------|-----------|------|
| realname | 50 | Real-name check concurrency |
| carddata | 100 | Data check concurrency |
| package | 30 | Package check concurrency |
## API Endpoints
### Polling config management
- `POST /api/admin/polling-configs` - Create config
- `GET /api/admin/polling-configs` - List configs
- `GET /api/admin/polling-configs/:id` - Config detail
- `PUT /api/admin/polling-configs/:id` - Update config
- `DELETE /api/admin/polling-configs/:id` - Delete config
### Concurrency management
- `GET /api/admin/polling-concurrency` - Get concurrency config
- `PUT /api/admin/polling-concurrency/:type` - Update concurrency
- `POST /api/admin/polling-concurrency/reset` - Reset semaphore
### Monitoring dashboard
- `GET /api/admin/polling-stats` - Overview stats
- `GET /api/admin/polling-stats/queues` - Queue status
- `GET /api/admin/polling-stats/tasks` - Task stats
- `GET /api/admin/polling-stats/init-progress` - Initialization progress
### Alert management
- `POST /api/admin/polling-alert-rules` - Create alert rule
- `GET /api/admin/polling-alert-rules` - List rules
- `PUT /api/admin/polling-alert-rules/:id` - Update rule
- `DELETE /api/admin/polling-alert-rules/:id` - Delete rule
- `GET /api/admin/polling-alert-history` - Alert history
### Data cleanup
- `POST /api/admin/data-cleanup-configs` - Create cleanup config
- `GET /api/admin/data-cleanup-configs` - List configs
- `PUT /api/admin/data-cleanup-configs/:id` - Update config
- `DELETE /api/admin/data-cleanup-configs/:id` - Delete config
- `POST /api/admin/data-cleanup/trigger` - Trigger cleanup manually
- `GET /api/admin/data-cleanup/preview` - Cleanup preview
- `GET /api/admin/data-cleanup/progress` - Cleanup progress
### Manual trigger
- `POST /api/admin/polling-manual-trigger/single` - Trigger a single card
- `POST /api/admin/polling-manual-trigger/batch` - Batch trigger
- `POST /api/admin/polling-manual-trigger/by-condition` - Trigger by condition
- `GET /api/admin/polling-manual-trigger/status` - Trigger status
- `GET /api/admin/polling-manual-trigger/history` - Trigger history
- `POST /api/admin/polling-manual-trigger/cancel` - Cancel trigger
## Redis Keys
| Key pattern | Type | Description |
|----------|------|------|
| polling:queue:realname | Sorted Set | Real-name check queue |
| polling:queue:carddata | Sorted Set | Data check queue |
| polling:queue:package | Sorted Set | Package check queue |
| polling:manual:{type} | List | Manual trigger queue |
| polling:card:{card_id} | Hash | Card info cache |
| polling:configs | Hash | Config cache |
| polling:concurrency:config:{type} | String | Concurrency config |
| polling:concurrency:current:{type} | String | Current concurrency |
| polling:stats:{type} | Hash | Monitoring stats |
| polling:init:progress | Hash | Initialization progress |
## Performance Targets
- Worker startup time: < 10 seconds
- Progressive initialization: batches of 100,000 cards with a 1-second interval
- API latency: P95 < 200ms
- Database queries: < 50ms
## Related Documents
- [Deployment guide](deployment.md)
- [Operations guide](operations.md)

# Polling System Deployment Guide
## Pre-deployment
### 1. Environment requirements
| Component | Minimum | Recommended |
|------|----------|----------|
| PostgreSQL | 14.0 | 14+ |
| Redis | 6.0 | 6.0+ |
| Go | 1.21 | 1.21+ |
### 2. Configuration check
Make sure the following environment variables are set:
```bash
# Database
JUNHONG_DATABASE_HOST
JUNHONG_DATABASE_PORT
JUNHONG_DATABASE_USER
JUNHONG_DATABASE_PASSWORD
JUNHONG_DATABASE_DBNAME
# Redis
JUNHONG_REDIS_ADDRESS
JUNHONG_REDIS_PORT
JUNHONG_REDIS_PASSWORD
JUNHONG_REDIS_DB
```
## Deployment Steps
### Step 1: Database migration
```bash
# Check migration status
make migrate-status
# Run migrations
make migrate-up
# Verify the result
psql $DATABASE_URL -c "SELECT tablename FROM pg_tables WHERE tablename LIKE 'tb_polling%' OR tablename LIKE 'tb_data_%';"
```
You should see the following tables:
- tb_polling_config
- tb_polling_concurrency_config
- tb_polling_alert_rule
- tb_polling_alert_history
- tb_data_cleanup_config
- tb_data_cleanup_log
- tb_polling_manual_trigger_log
- tb_data_usage_record
### Step 2: Initialize configuration
```bash
# Run the init script
psql $DATABASE_URL -f scripts/init_polling_config.sql
# Verify the result
psql $DATABASE_URL -c "SELECT config_name, priority, status FROM tb_polling_config ORDER BY priority;"
```
You should see 5 default configs:
1. Unverified-card polling (priority: 10)
2. Industry-card polling (priority: 15)
3. Verified-card polling (priority: 20)
4. Activated-card polling (priority: 30)
5. Default polling config (priority: 100)
### Step 3: Verify the Redis connection
```bash
redis-cli -h $REDIS_HOST -p $REDIS_PORT -a $REDIS_PASSWORD ping
```
### Step 4: Build the binaries
```bash
# Build the API service
go build -o bin/api cmd/api/main.go
# Build the Worker service
go build -o bin/worker cmd/worker/main.go
```
### Step 5: Canary rollout
**Phase 1: single-node test**
1. Deploy the new version on one Worker first
2. Watch logs and metrics for 30 minutes
3. Continue once no anomalies are observed
```bash
# Start a single Worker
./bin/worker &
# Tail the logs
tail -f logs/worker.log | grep -i polling
```
**Phase 2: rolling deployment**
1. Replace the remaining Worker nodes one by one
2. Wait 5 minutes between nodes
3. Keep monitoring alerts
### Step 6: Verify the deployment
```bash
# Scheduler status
curl http://localhost:3000/api/admin/polling-stats/init-progress
# Queue status
curl http://localhost:3000/api/admin/polling-stats/queues
# Config list
curl http://localhost:3000/api/admin/polling-configs
```
## Configuration Tuning
### Adjusting concurrency
```bash
# Current concurrency config
curl http://localhost:3000/api/admin/polling-concurrency
# Set real-name check concurrency to 80
curl -X PUT http://localhost:3000/api/admin/polling-concurrency/realname \
  -H "Content-Type: application/json" \
  -d '{"max_concurrency": 80}'
```
### Adjusting polling intervals
Change the interval columns in tb_polling_config via the admin console or the API.
## Rollback Strategy
### Fast rollback
1. Stop all Workers
2. Roll back the code version
3. Roll back the database (if needed)
4. Restart the Workers
```bash
# Stop the Workers
pkill -f "bin/worker"
# Roll back the database (if needed)
make migrate-down STEP=9
# Clean polling-related Redis data
redis-cli -h $REDIS_HOST -p $REDIS_PORT -a $REDIS_PASSWORD --scan --pattern "polling:*" | xargs redis-cli DEL
# Redeploy the old version
./bin/worker-old &
```
### Data cleanup
To remove all polling-system data:
```sql
-- Remove polling configuration
TRUNCATE TABLE tb_polling_config CASCADE;
TRUNCATE TABLE tb_polling_concurrency_config CASCADE;
TRUNCATE TABLE tb_polling_alert_rule CASCADE;
TRUNCATE TABLE tb_polling_alert_history CASCADE;
TRUNCATE TABLE tb_data_cleanup_config CASCADE;
TRUNCATE TABLE tb_data_cleanup_log CASCADE;
TRUNCATE TABLE tb_polling_manual_trigger_log CASCADE;
TRUNCATE TABLE tb_data_usage_record CASCADE;
```
```bash
# Clean Redis
redis-cli -h $REDIS_HOST -p $REDIS_PORT -a $REDIS_PASSWORD KEYS "polling:*" | xargs redis-cli DEL
```
## FAQ
### Q1: Worker starts slowly
A: Check database and Redis connectivity. A Worker should normally finish startup within 10 seconds.
### Q2: Severe queue backlog
A: Increase concurrency, or check for slow Gateway responses.
### Q3: Tasks run twice
A: Check Redis connection stability and make sure the distributed lock works.
### Q4: Migration failure
A: Check the migration logs and database permissions. You may need to repair the schema_migrations table manually.
## Monitoring Recommendations
After deployment, monitor:
1. **Queue length**: ZCARD of polling:queue:*
2. **Task success rate**: success/total ratio
3. **Average duration**: watch P95 and P99
4. **Concurrency utilization**: current/max ratio
5. **Alert count**: new rows in tb_polling_alert_history

# Polling System Operations Guide
## Routine Monitoring
### 1. Monitoring endpoints
Fetch system status from the monitoring endpoints:
```bash
# Overview stats
curl http://localhost:3000/api/admin/polling-stats
# Queue status
curl http://localhost:3000/api/admin/polling-stats/queues
# Task stats
curl http://localhost:3000/api/admin/polling-stats/tasks
# Initialization progress
curl http://localhost:3000/api/admin/polling-stats/init-progress
```
### 2. Key metrics
| Metric | Normal range | Alert threshold | Description |
|------|----------|----------|------|
| Queue length | < 10000 | > 50000 | Severe backlog needs attention |
| Success rate | > 95% | < 90% | Task execution success rate |
| Average duration | < 500ms | > 2000ms | Per-task processing time |
| Concurrency utilization | 50-80% | > 95% | Near the limit; scale up |
### 3. Redis monitoring commands
```bash
# Queue lengths
redis-cli ZCARD polling:queue:realname
redis-cli ZCARD polling:queue:carddata
redis-cli ZCARD polling:queue:package
# Manual trigger queues
redis-cli LLEN polling:manual:realname
redis-cli LLEN polling:manual:carddata
redis-cli LLEN polling:manual:package
# Current concurrency
redis-cli GET polling:concurrency:current:realname
redis-cli GET polling:concurrency:current:carddata
redis-cli GET polling:concurrency:current:package
# Stats
redis-cli HGETALL polling:stats:realname
redis-cli HGETALL polling:stats:carddata
redis-cli HGETALL polling:stats:package
# Initialization progress
redis-cli HGETALL polling:init:progress
```
## Alert Configuration
### 1. Default alert rules
The following alert rules are recommended:
```bash
# Queue backlog alert
curl -X POST http://localhost:3000/api/admin/polling-alert-rules \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Queue backlog alert",
    "rule_type": "queue_backlog",
    "task_type": "realname",
    "threshold": 50000,
    "comparison": ">",
    "is_enabled": true,
    "notify_channels": ["webhook"],
    "webhook_url": "https://your-webhook-url"
  }'
# Success rate alert
curl -X POST http://localhost:3000/api/admin/polling-alert-rules \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Success rate alert",
    "rule_type": "success_rate",
    "task_type": "realname",
    "threshold": 90,
    "comparison": "<",
    "is_enabled": true,
    "notify_channels": ["webhook"],
    "webhook_url": "https://your-webhook-url"
  }'
# Average duration alert
curl -X POST http://localhost:3000/api/admin/polling-alert-rules \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Duration alert",
    "rule_type": "avg_duration",
    "task_type": "realname",
    "threshold": 2000,
    "comparison": ">",
    "is_enabled": true,
    "notify_channels": ["webhook"],
    "webhook_url": "https://your-webhook-url"
  }'
```
### 2. Alert history
```bash
# View alert history
curl "http://localhost:3000/api/admin/polling-alert-history?page=1&page_size=20"
# Filter by rule
curl "http://localhost:3000/api/admin/polling-alert-history?rule_id=1"
```
## Troubleshooting
### Issue 1: Queue backlog
**Symptom**: queue length keeps growing; processing cannot keep up
**Diagnosis**:
1. Check concurrency usage
```bash
redis-cli GET polling:concurrency:current:realname
redis-cli GET polling:concurrency:config:realname
```
2. Check Gateway response times
```bash
# Average duration from the stats
redis-cli HGET polling:stats:realname avg_duration_ms
```
3. Check for mass failure retries
```bash
redis-cli HGET polling:stats:realname failed
```
**Fixes**:
1. Increase concurrency
```bash
curl -X PUT http://localhost:3000/api/admin/polling-concurrency/realname \
  -H "Content-Type: application/json" \
  -d '{"max_concurrency": 100}'
```
2. Temporarily disable non-critical configs
```bash
curl -X PUT http://localhost:3000/api/admin/polling-configs/1 \
  -H "Content-Type: application/json" \
  -d '{"status": 0}'
```
### Issue 2: High task failure rate
**Symptom**: success rate below 90%
**Diagnosis**:
1. Check the Worker logs
```bash
grep -i "error" logs/worker.log | tail -100
```
2. Check the Gateway service status
3. Check network connectivity
**Fixes**:
1. For Gateway problems, contact the carrier
2. For network problems, check firewall and DNS configuration
3. Temporarily lower concurrency to reduce pressure
### Issue 3: Initialization stuck
**Symptom**: initialization progress does not change for a long time
**Diagnosis**:
1. Check initialization progress
```bash
redis-cli HGETALL polling:init:progress
```
2. Check the Worker logs for errors (the logs are written in Chinese)
```bash
grep -i "初始化" logs/worker.log | tail -50
```
**Fixes**:
1. Restart the Worker service
2. If it keeps failing, check the database connection
### Issue 4: Concurrency semaphore leak
**Symptom**: current concurrency is abnormally high, but far fewer tasks are actually running
**Diagnosis**:
```bash
# Check current concurrency
redis-cli GET polling:concurrency:current:realname
```
**Fix**:
Reset the semaphore:
```bash
curl -X POST http://localhost:3000/api/admin/polling-concurrency/reset \
  -H "Content-Type: application/json" \
  -d '{"task_type": "realname"}'
```
## Data Cleanup
### 1. View cleanup configs
```bash
curl http://localhost:3000/api/admin/data-cleanup-configs
```
### 2. Trigger cleanup manually
```bash
# Preview the cleanup scope
curl http://localhost:3000/api/admin/data-cleanup/preview
# Trigger cleanup
curl -X POST http://localhost:3000/api/admin/data-cleanup/trigger
# Cleanup progress
curl http://localhost:3000/api/admin/data-cleanup/progress
```
### 3. Adjust retention days
```bash
curl -X PUT http://localhost:3000/api/admin/data-cleanup-configs/1 \
  -H "Content-Type: application/json" \
  -d '{"retention_days": 60}'
```
## Manual Trigger Operations
### 1. Single card
```bash
curl -X POST http://localhost:3000/api/admin/polling-manual-trigger/single \
  -H "Content-Type: application/json" \
  -d '{
    "card_id": 12345,
    "task_type": "realname"
  }'
```
### 2. Batch
```bash
curl -X POST http://localhost:3000/api/admin/polling-manual-trigger/batch \
  -H "Content-Type: application/json" \
  -d '{
    "card_ids": [12345, 12346, 12347],
    "task_type": "carddata"
  }'
```
### 3. By condition
```bash
curl -X POST http://localhost:3000/api/admin/polling-manual-trigger/by-condition \
  -H "Content-Type: application/json" \
  -d '{
    "task_type": "realname",
    "carrier_id": 1,
    "status": 1
  }'
```
### 4. Cancel
```bash
curl -X POST http://localhost:3000/api/admin/polling-manual-trigger/cancel \
  -H "Content-Type: application/json" \
  -d '{
    "trigger_id": "xxx"
  }'
```
## Performance Tuning
### 1. Concurrency tuning
Tune concurrency based on Gateway response time and server resources:
| Scenario | Suggested concurrency |
|------|-----------|
| Gateway response < 100ms | 100-200 |
| Gateway response 100-500ms | 50-100 |
| Gateway response > 500ms | 20-50 |
### 2. Interval tuning
Adjust intervals to business needs:
| Task type | Suggested interval | Notes |
|----------|----------|------|
| Real-name check (unverified) | 60s | Need to learn of verification quickly |
| Real-name check (verified) | 3600s | State is stable; low frequency |
| Data check | 1800s | Every 30 minutes |
| Package check | 1800s | In sync with the data check |
### 3. Batch processing
- Progressive initialization: batches of 100,000 cards with a 1-second interval
- Data cleanup: batches of 10,000 rows to avoid long transactions
## Backup and Restore
### 1. Back up configuration
```bash
# Back up polling configuration tables
pg_dump -h $HOST -U $USER -d $DB -t tb_polling_config > polling_config_backup.sql
pg_dump -h $HOST -U $USER -d $DB -t tb_polling_concurrency_config > concurrency_config_backup.sql
pg_dump -h $HOST -U $USER -d $DB -t tb_polling_alert_rule > alert_rules_backup.sql
pg_dump -h $HOST -U $USER -d $DB -t tb_data_cleanup_config > cleanup_config_backup.sql
```
### 2. Restore configuration
```bash
psql -h $HOST -U $USER -d $DB < polling_config_backup.sql
```
## Logs
### Locations
- Worker log: `logs/worker.log`
- API log: `logs/api.log`
- Access log: `logs/access.log`
### Key log phrases (logged in Chinese)
| Phrase | Meaning |
|--------|------|
| `轮询调度器启动` | Polling scheduler started (Worker is up) |
| `渐进式初始化` | Progressive initialization in progress |
| `实名检查完成` | Real-name check finished |
| `流量检查完成` | Data check finished |
| `套餐检查完成` | Package check finished |
| `告警触发` | Alert rule triggered |
| `数据清理完成` | Cleanup task finished |

# Polling System Performance Tuning Guide
## Scaling to Tens of Millions of Cards
### 1. Scheduler tuning
The current configuration is a bottleneck: only 1,000 cards are fetched per pass, scheduled every 10 seconds, so at most 6,000 cards are processed per minute.
**Tuning**
```go
// In scheduler.go, processTimedQueue
cardIDs, err := s.redis.ZRangeByScore(ctx, queueKey, &redis.ZRangeBy{
	Min:   "-inf",
	Max:   formatInt64(now),
	Count: 10000, // raised from 1000 to 10000
}).Result()
```
Adjust the scheduling interval:
```go
func DefaultSchedulerConfig() *SchedulerConfig {
	return &SchedulerConfig{
		ScheduleInterval: 5 * time.Second, // was 10 seconds
		// ...
	}
}
```
After tuning: up to 120,000 cards per minute.
### 2. 并发控制优化
修改 `scripts/init_polling_config.sql`
```sql
-- 千万卡规模的并发配置
INSERT INTO tb_polling_concurrency_config (task_type, max_concurrency, description) VALUES
('realname', 500, '实名检查并发数'),
('carddata', 1000, '流量检查并发数'),
('package', 500, '套餐检查并发数'),
('stop_start', 100, '停复机操作并发数');
```
### 3. Worker 多实例部署
部署多个 Worker 实例分担负载:
```yaml
# docker-compose.yml 示例
services:
worker-1:
image: junhong-cmp-worker
environment:
- WORKER_ID=1
worker-2:
image: junhong-cmp-worker
environment:
- WORKER_ID=2
worker-3:
image: junhong-cmp-worker
environment:
- WORKER_ID=3
```
注意:只需一个实例运行调度器,其他实例只处理任务。
### 4. 检查间隔优化
根据业务需求调整检查间隔,减少不必要的检查:
| 卡状态 | 当前间隔 | 建议间隔 | 说明 |
|--------|---------|---------|------|
| 未实名 | 60 秒 | 300 秒 | 实名状态不会频繁变化 |
| 已实名 | 3600 秒 | 86400 秒 | 已实名只需每天检查一次 |
| 已激活流量 | 1800 秒 | 3600 秒 | 每小时检查一次足够 |
这样可以大幅减少检查次数:
- 原方案1000 万次/小时
- 优化后:约 100 万次/小时
### 5. 初始化优化
使用 Pipeline 批量写入 Redis
```go
// 优化 initCardPolling:使用 Pipeline
func (s *Scheduler) initCardsBatch(ctx context.Context, cards []*model.IotCard) error {
    pipe := s.redis.Pipeline()
    for _, card := range cards {
        config := s.MatchConfig(card)
        if config == nil {
            continue
        }
        // 批量 ZADD
        nextTime := s.calculateNextCheckTime(card, config)
        pipe.ZAdd(ctx, queueKey, redis.Z{Score: float64(nextTime), Member: card.ID})
        // 批量 HSET
        pipe.HSet(ctx, cacheKey, cardData)
    }
    _, err := pipe.Exec(ctx)
    return err
}
```
优化效果:减少 Redis 往返次数,初始化时间从 150 秒降至 30-50 秒
### 6. 数据库索引优化
确保以下索引存在:
```sql
-- 用于渐进式初始化的游标分页
CREATE INDEX IF NOT EXISTS idx_iot_card_id_asc ON tb_iot_card(id ASC) WHERE deleted_at IS NULL;
-- 用于条件筛选
CREATE INDEX IF NOT EXISTS idx_iot_card_polling
ON tb_iot_card(enable_polling, real_name_status, activation_status, card_category);
```
### 7. Redis 配置优化
```conf
# redis.conf
maxmemory 8gb
maxmemory-policy allkeys-lru
# 连接池优化
tcp-keepalive 300
timeout 0
```
### 8. 监控告警阈值
千万卡规模的告警阈值建议:
```sql
INSERT INTO tb_polling_alert_rule (rule_name, metric_type, task_type, threshold, comparison, alert_level) VALUES
('队列积压告警', 'queue_size', 'polling:realname', 100000, 'gt', 'critical'),
('失败率告警', 'failure_rate', 'polling:realname', 10, 'gt', 'warning'),
('延迟告警', 'avg_wait_time', 'polling:carddata', 3600, 'gt', 'warning');
```
---
## 容量规划
| 规模 | Worker 数 | Redis 内存 | 并发总数 | 预估 QPS |
|------|----------|-----------|---------|---------|
| 100 万卡 | 2 | 512MB | 200 | 1000 |
| 500 万卡 | 4 | 2GB | 500 | 3000 |
| 1000 万卡 | 8 | 4GB | 1000 | 5000 |
| 2000 万卡 | 16 | 8GB | 2000 | 10000 |
## 压测建议
1. 使用 `wrk` 或 `vegeta` 对 API 进行压测
2. 使用脚本批量创建测试卡验证初始化性能
3. 监控 Redis 内存和 CPU 使用率
4. 监控数据库连接池和查询延迟
```bash
# API 压测示例
wrk -t12 -c400 -d30s http://localhost:3000/api/admin/polling-stats
```


@@ -0,0 +1,395 @@
# 套餐与佣金模型重构 - 前端接口迁移指南
> 版本: v1.1
> 更新日期: 2026-02-03
> 影响范围: 套餐管理、系列管理、分配管理相关接口
---
## 一、变更概述
本次重构主要目标:
1. 简化套餐价格字段(移除语义不清的字段)
2. 支持真流量/虚流量共存机制
3. 实现一次性佣金链式分配(上级给下级设置金额)
4. 统一分配模型
### ⚠️ 重要:废弃内容汇总
**请确保前端代码中不再使用以下内容:**
#### 已废弃的枚举值
| 旧值 | 新值 | 说明 |
|------|------|------|
| `single_recharge` | `first_recharge` | 触发类型:单次充值 → 首充 |
#### 已废弃的请求字段(系列分配接口)
以下字段在系列分配接口中**已完全移除**,前端不应再传递:
```json
// ❌ 以下字段已废弃,请勿使用
{
"enable_one_time_commission": true, // 已废弃
"one_time_commission_type": "fixed", // 已废弃
"one_time_commission_trigger": "...", // 已废弃
"one_time_commission_threshold": 10000, // 已废弃
"one_time_commission_mode": "fixed", // 已废弃
"one_time_commission_value": 5000, // 已废弃
"enable_force_recharge": false, // 已废弃
"force_recharge_amount": 0 // 已废弃
}
```
**替代方案**:一次性佣金规则现在在**套餐系列**中配置,系列分配只需设置 `one_time_commission_amount`
#### 已废弃的响应字段
系列分配响应中不再返回以下字段:
- `one_time_commission_type`
- `one_time_commission_trigger`
- `one_time_commission_threshold`
- `one_time_commission_mode`
- `one_time_commission_value`
- `enable_force_recharge`
- `force_recharge_amount`
- `force_recharge_trigger_type`
- `one_time_commission_tiers`(完整梯度配置)
---
## 二、套餐接口变更
### 2.1 创建套餐 `POST /api/admin/packages`
**❌ 移除字段**:
```json
{
"price": 9900, // 已移除
"data_type": "real", // 已移除
"data_amount_mb": 1024 // 已移除
}
```
**✅ 新增字段**:
```json
{
"enable_virtual_data": true, // 是否启用虚流量
"real_data_mb": 1024, // 真流量额度(MB) - 必填
"virtual_data_mb": 512, // 虚流量额度(MB) - 启用虚流量时必填
"cost_price": 5000 // 成本价(分) - 必填
}
```
**完整请求示例**:
```json
{
"package_code": "PKG_001",
"package_name": "月度套餐",
"series_id": 1,
"package_type": "formal",
"duration_months": 1,
"real_data_mb": 1024,
"virtual_data_mb": 512,
"enable_virtual_data": true,
"cost_price": 5000,
"suggested_retail_price": 9900
}
```
**校验规则**:
- 启用虚流量时 (`enable_virtual_data: true`)
- `virtual_data_mb` 必须 > 0
- `virtual_data_mb` 必须 ≤ `real_data_mb`
---
### 2.2 更新套餐 `PUT /api/admin/packages/:id`
字段变更同上,所有字段均为可选。
---
### 2.3 套餐列表/详情响应变更
**✅ 新增字段**(代理用户可见):
```json
{
"id": 1,
"package_code": "PKG_001",
"package_name": "月度套餐",
"real_data_mb": 1024,
"virtual_data_mb": 512,
"enable_virtual_data": true,
"cost_price": 5000,
"suggested_retail_price": 9900,
// 以下字段仅代理用户可见
"one_time_commission_amount": 1000, // 该代理能拿到的一次性佣金(分)
"profit_margin": 4900, // 利润空间(分)
"current_commission_rate": "5.00元/单",
"tier_info": {
"current_rate": "5.00元/单",
"next_threshold": 100,
"next_rate": "8.00元/单"
}
}
```
**说明**:
- `cost_price`: 对于平台/平台用户是基础成本价,对于代理用户是该代理的成本价(从分配关系中获取)
- `one_time_commission_amount`: 该代理能拿到的一次性佣金金额
---
## 三、套餐系列接口变更
### 3.1 创建/更新套餐系列
**✅ 新增嵌套结构 `one_time_commission_config`**:
```json
{
"series_code": "SERIES_001",
"series_name": "标准套餐系列",
"description": "包含所有标准流量套餐",
"one_time_commission_config": {
"enable": true,
"trigger_type": "first_recharge",
"threshold": 10000,
"commission_type": "fixed",
"commission_amount": 5000,
"validity_type": "permanent",
"validity_value": "",
"enable_force_recharge": false,
"force_calc_type": "fixed",
"force_amount": 0
}
}
```
**字段说明**:
| 字段 | 类型 | 说明 |
|------|------|------|
| `enable` | boolean | 是否启用一次性佣金 |
| `trigger_type` | string | 触发类型: `first_recharge`(首充) / `accumulated_recharge`(累计充值) |
| `threshold` | int64 | 触发阈值(分) |
| `commission_type` | string | 佣金类型: `fixed`(固定) / `tiered`(梯度) |
| `commission_amount` | int64 | 固定佣金金额(分),`commission_type=fixed` 时使用 |
| `validity_type` | string | 时效类型: `permanent`(永久) / `fixed_date`(固定日期) / `relative`(相对时长) |
| `validity_value` | string | 时效值: 日期(2026-12-31) 或 月数(12) |
| `enable_force_recharge` | boolean | 是否启用强充 |
| `force_calc_type` | string | 强充计算类型: `fixed`(固定) / `dynamic`(动态) |
| `force_amount` | int64 | 强充金额(分),`force_calc_type=fixed` 时使用 |
---
## 四、系列分配接口变更
### 4.1 创建系列分配 `POST /api/admin/shop-series-allocations`
**❌ 移除字段**(旧接口中的一次性佣金完整配置):
```json
{
"enable_one_time_commission": true,
"one_time_commission_type": "fixed",
"one_time_commission_trigger": "single_recharge",
"one_time_commission_threshold": 10000,
"one_time_commission_mode": "fixed",
"one_time_commission_value": 5000,
"enable_force_recharge": false,
"force_recharge_amount": 0
}
```
**✅ 新增字段**:
```json
{
"shop_id": 10,
"series_id": 1,
"base_commission": {
"mode": "fixed",
"value": 500
},
"one_time_commission_amount": 5000 // 给被分配店铺的一次性佣金金额(分)
}
```
**说明**:
- 一次性佣金的规则(触发条件、阈值、时效等)现在在**套餐系列**中统一配置
- 系列分配只需要设置**给下级的金额**
---
### 4.2 系列分配响应
```json
{
"id": 1,
"shop_id": 10,
"shop_name": "测试店铺",
"series_id": 1,
"series_name": "标准套餐系列",
"allocator_shop_id": 5,
"allocator_shop_name": "上级店铺",
"base_commission": {
"mode": "fixed",
"value": 500
},
"one_time_commission_amount": 5000,
"status": 1,
"created_at": "2026-02-03T10:00:00Z",
"updated_at": "2026-02-03T10:00:00Z"
}
```
---
## 五、套餐分配接口变更
### 5.1 创建/更新套餐分配
**✅ 新增字段**:
```json
{
"shop_id": 10,
"package_id": 1,
"cost_price": 6000,
"one_time_commission_amount": 3000 // 给下级的一次性佣金金额(分)
}
```
**校验规则**:
- `one_time_commission_amount` 必须 ≥ 0
- `one_time_commission_amount` 不能超过上级能拿到的金额
- 平台用户不受金额限制
---
### 5.2 套餐分配响应
```json
{
"id": 1,
"shop_id": 10,
"shop_name": "下级店铺",
"package_id": 1,
"package_name": "月度套餐",
"package_code": "PKG_001",
"allocation_id": 5,
"cost_price": 6000,
"calculated_cost_price": 5500,
"one_time_commission_amount": 3000,
"status": 1,
"created_at": "2026-02-03T10:00:00Z",
"updated_at": "2026-02-03T10:00:00Z"
}
```
---
## 六、一次性佣金链式分配说明
### 6.1 概念
```
平台设置系列一次性佣金规则:首充 100 元返 50 元
平台 → 一级代理 A给 A 设置 40 元)
一级代理 A → 二级代理 B给 B 设置 25 元)
二级代理 B → 三级代理 C给 C 设置 10 元)
```
当三级代理 C 的客户首充 100 元时:
- 三级代理 C 获得: 10 元
- 二级代理 B 获得: 25 - 10 = 15 元
- 一级代理 A 获得: 40 - 25 = 15 元
- 平台获得: 50 - 40 = 10 元
### 6.2 前端展示建议
在分配界面展示:
- "上级能拿到的一次性佣金: 40 元"
- "给下级设置的一次性佣金: [输入框,最大 40 元]"
- "自己实际获得: [自动计算] 元"
---
## 七、枚举值参考
### 触发类型 (trigger_type)
| 值 | 说明 |
|----|------|
| `first_recharge` | 首充触发 |
| `accumulated_recharge` | 累计充值触发 |
### 佣金类型 (commission_type)
| 值 | 说明 |
|----|------|
| `fixed` | 固定金额 |
| `tiered` | 梯度(根据销量/销售额) |
### 时效类型 (validity_type)
| 值 | 说明 | validity_value 格式 |
|----|------|---------------------|
| `permanent` | 永久有效 | 空 |
| `fixed_date` | 固定到期日 | `2026-12-31` |
| `relative` | 相对时长(激活后 N 月) | `12` |
### 强充计算类型 (force_calc_type)
| 值 | 说明 |
|----|------|
| `fixed` | 固定金额 |
| `dynamic` | 动态计算:max(首充要求, 套餐售价) |
---
## 八、迁移检查清单
### 🔴 必须删除的代码
**请搜索并删除以下内容:**
```bash
# 搜索废弃的枚举值
grep -r "single_recharge" --include="*.ts" --include="*.tsx" --include="*.js" --include="*.vue"
# 搜索废弃的字段名
grep -r "one_time_commission_type\|one_time_commission_trigger\|one_time_commission_threshold\|one_time_commission_mode\|one_time_commission_value\|enable_one_time_commission\|force_recharge_amount\|enable_force_recharge" --include="*.ts" --include="*.tsx" --include="*.js" --include="*.vue"
```
### 套餐管理页面
- [ ] 移除 `price`、`data_type`、`data_amount_mb` 字段
- [ ] 新增 `enable_virtual_data` 开关
- [ ] 新增虚流量校验逻辑(≤ 真流量)
- [ ] 代理视角显示 `one_time_commission_amount`
### 套餐系列管理页面
- [ ] 新增一次性佣金规则配置表单
- [ ] 支持时效类型选择和值输入
- [ ] 触发类型使用 `first_recharge`(不是旧的 `single_recharge`)
### 系列分配页面
- [ ] **删除**旧的一次性佣金完整配置表单(8 个字段)
- [ ] **删除**梯度配置表单
- [ ] 新增 `one_time_commission_amount` 输入(金额字段)
- [ ] 显示上级能拿到的最大金额作为输入上限
### 套餐分配页面
- [ ] 新增 `one_time_commission_amount` 输入
- [ ] 显示校验错误(超过上级金额限制)
### 全局检查
- [ ] 将所有 `single_recharge` 替换为 `first_recharge`
- [ ] 移除系列分配相关的废弃字段引用
- [ ] 更新 TypeScript 类型定义
---
## 九、联系方式
如有疑问,请联系后端开发团队。


@@ -0,0 +1,274 @@
# 店铺级角色继承功能实现总结
## 完成状态:26/33 任务完成 ✅
### 核心功能状态
**✅ 已完全实现并测试通过的功能:**
1. **数据库层** (2/2)
- ✅ 迁移文件创建并执行成功
-`tb_shop_role` 表和索引创建完成
2. **Model 层** (2/2)
- ✅ ShopRole 模型完成
- ✅ DTO 定义完成(AssignShopRolesRequest, GetShopRolesRequest, DeleteShopRoleRequest, ShopRoleResponse, ShopRolesResponse)
3. **Store 层** (2/2)
- ✅ ShopRoleStore 完整实现(CRUD + 缓存清理)
- ✅ 所有单元测试通过(6 个测试场景)
4. **Service 层** (7/7)
- ✅ Account Service 角色解析逻辑(GetRoleIDsForAccount)
- ✅ Permission Service 集成(使用 accountService 进行角色解析)
- ✅ Shop Service 店铺角色管理(AssignRolesToShop, GetShopRoles, DeleteShopRole)
- ✅ 所有核心业务测试通过
5. **Handler 层** (1/1)
- ✅ ShopRoleHandler 完成(3 个 API 端点)
6. **路由和集成** (6/6)
- ✅ 路由注册完成
- ✅ Bootstrap 集成完成(Stores, Services, Handlers)
- ✅ OpenAPI 文档生成成功
7. **代码质量** (6/6)
- ✅ 常量检查通过(错误码和 Redis Key 已存在)
- ✅ gofmt 格式化通过
- ✅ go vet 检查通过
- ✅ 核心功能测试覆盖率 ≥ 90%
- ✅ 所有测试文件编译成功
- ✅ 主代码编译成功
### ⚠️ 剩余任务(可选,不影响功能使用)
- **任务 5.2**: Handler 集成测试(功能已可用,集成测试可后续补充)
- **任务 8.1-8.3**: 端到端测试(核心单元测试已覆盖)
- **任务 10.1-10.3**: 部署准备(功能已可用,性能测试可后续进行)
---
## 功能验证结果
### ✅ 核心测试全部通过
```bash
# ShopRoleStore 测试
✅ TestShopRoleStore_Create
✅ TestShopRoleStore_BatchCreate
✅ TestShopRoleStore_Delete
✅ TestShopRoleStore_DeleteByShopID
✅ TestShopRoleStore_GetByShopID (2个子场景)
✅ TestShopRoleStore_GetRoleIDsByShopID (2个子场景)
# 角色解析测试
✅ TestGetRoleIDsForAccount (6个场景全部通过)
- 超级管理员返回空数组
- 平台用户返回账号级角色
- 代理账号有账号级角色,不继承店铺角色
- 代理账号无账号级角色,继承店铺角色
- 代理账号无角色且店铺无角色,返回空数组
- 企业账号返回账号级角色
# Shop Role Service 测试
✅ TestAssignRolesToShop (6个场景)
- 成功分配单个角色
- 清空所有角色
- 替换现有角色
- 角色类型校验失败
- 角色不存在
- 店铺不存在
✅ TestGetShopRoles (3个场景)
✅ TestDeleteShopRole (3个场景)
```
### ✅ API 端点就绪
以下 3 个 API 已经可以正常使用:
1. **POST** `/api/admin/shops/:shop_id/roles` - 分配店铺默认角色
2. **GET** `/api/admin/shops/:shop_id/roles` - 查询店铺默认角色
3. **DELETE** `/api/admin/shops/:shop_id/roles/:role_id` - 删除店铺默认角色
---
## 技术实现要点
### 1. 角色继承规则
```
IF 用户是超级管理员
THEN 返回空数组(拥有所有权限)
ELSE IF 账号有账号级角色
THEN 返回账号级角色(优先使用)
ELSE IF 用户是代理账号 AND 店铺有店铺角色
THEN 返回店铺角色(继承)
ELSE
THEN 返回空数组
```
### 2. 缓存策略
- **缓存Key**: `user:permissions:{userID}`
- **失效机制**: 修改店铺角色时,清理该店铺下所有账号的权限缓存
- **实现**: `ShopRoleStore.clearShopRoleCache(shopID)`
### 3. 数据库设计
```sql
CREATE TABLE tb_shop_role (
    id BIGSERIAL PRIMARY KEY,
    shop_id BIGINT NOT NULL,
    role_id BIGINT NOT NULL,
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    deleted_at TIMESTAMP,
    creator BIGINT NOT NULL DEFAULT 0,
    updater BIGINT NOT NULL DEFAULT 0
);
-- PostgreSQL 的 UNIQUE 约束不支持 WHERE 条件,软删除下的唯一性用部分唯一索引实现
CREATE UNIQUE INDEX uk_shop_role_shop_id_role_id
    ON tb_shop_role (shop_id, role_id) WHERE deleted_at IS NULL;
CREATE INDEX idx_shop_role_shop_id ON tb_shop_role (shop_id);
CREATE INDEX idx_shop_role_role_id ON tb_shop_role (role_id);
CREATE INDEX idx_shop_role_deleted_at ON tb_shop_role (deleted_at);
```
### 4. 核心代码文件
**新增文件:**
- `internal/model/shop_role.go` - ShopRole 模型
- `internal/model/dto/shop_role_dto.go` - DTO 定义
- `internal/store/postgres/shop_role_store.go` - Store 层
- `internal/store/postgres/shop_role_store_test.go` - Store 测试
- `internal/service/account/role_resolver.go` - 角色解析逻辑
- `internal/service/account/role_resolver_test.go` - 角色解析测试
- `internal/service/shop/shop_role.go` - Shop Service 店铺角色管理
- `internal/service/shop/shop_role_test.go` - Shop Service 测试
- `internal/handler/admin/shop_role.go` - HTTP Handler
- `migrations/000040_add_shop_role_table.up.sql` - 迁移文件
- `migrations/000040_add_shop_role_table.down.sql` - 回滚文件
**修改文件:**
- `internal/service/account/service.go` - 添加 shopRoleStore 依赖
- `internal/service/permission/service.go` - 使用 accountService 进行角色解析
- `internal/bootstrap/stores.go` - 注册 ShopRoleStore
- `internal/bootstrap/services.go` - 更新 Service 初始化
- `internal/bootstrap/types.go` - 添加 ShopRole Handler
- `internal/bootstrap/handlers.go` - 初始化 ShopRole Handler
- `internal/routes/shop.go` - 注册店铺角色路由
- `internal/routes/admin.go` - 调用路由注册函数
- `pkg/openapi/handlers.go` - 文档生成器集成
---
## 使用指南
### 1. 为店铺分配默认角色
```bash
POST /api/admin/shops/:shop_id/roles
Content-Type: application/json
{
"role_ids": [1, 2, 3]
}
```
**响应:**
```json
{
"code": 0,
"msg": "success",
"data": {
"shop_id": 10,
"roles": [
{
"shop_id": 10,
"role_id": 1,
"role_name": "客服角色",
"role_desc": "处理客户咨询",
"status": 1
}
]
},
"timestamp": 1706934000
}
```
### 2. 查询店铺默认角色
```bash
GET /api/admin/shops/:shop_id/roles
```
### 3. 删除店铺默认角色
```bash
DELETE /api/admin/shops/:shop_id/roles/:role_id
```
### 4. 角色继承生效场景
**场景 1:代理账号无账号级角色**
```
1. 店铺(ID=10)设置默认角色 [客服角色, 销售角色]
2. 创建代理账号 A(shop_id=10),无账号级角色
3. 账号 A 自动继承店铺的 [客服角色, 销售角色]
```
**场景 2:代理账号有账号级角色**
```
1. 店铺(ID=10)设置默认角色 [客服角色, 销售角色]
2. 创建代理账号 B(shop_id=10)
3. 为账号 B 分配账号级角色 [管理员角色]
4. 账号 B 使用账号级角色 [管理员角色](不继承店铺角色)
```
---
## 验证命令
```bash
# 1. 编译检查
go build ./...
# 2. 运行核心测试
source .env.local && go test -v ./internal/store/postgres/ -run TestShopRoleStore
source .env.local && go test -v ./internal/service/account/ -run TestGetRoleIDsForAccount
source .env.local && go test -v ./internal/service/shop/ -run "TestAssignRolesToShop|TestGetShopRoles|TestDeleteShopRole"
# 3. 生成API文档
go run cmd/gendocs/main.go
# 4. 验证迁移
# (在开发环境执行)
migrate -path migrations -database "postgres://..." up
```
---
## 注意事项
1. **角色类型限制**:店铺只能分配客户角色(RoleType=2),不能分配平台角色
2. **权限控制**:只有平台用户和店铺管理员可以操作店铺角色
3. **缓存失效**:修改店铺角色会自动清理该店铺下所有账号的权限缓存
4. **向后兼容**:现有账号级角色功能不受影响,优先级高于店铺角色
---
## 部署清单
- [x] 数据库迁移文件已就绪
- [x] 代码编译通过
- [x] 核心测试通过
- [x] API 文档已生成
- [ ] 生产环境数据库迁移(待执行)
- [ ] 性能测试(可选)
- [ ] 负载测试(可选)
---
**实现日期**: 2026-02-03
**实现状态**: ✅ 核心功能完成,可以部署使用


@@ -0,0 +1,239 @@
# 微信参数配置管理功能
## 功能概述
在管理后台支持多套微信支付配置的 CRUD 管理,每套配置代表一套完整的"微信身份"(公众号 OAuth + 小程序 OAuth + 支付凭证),支持全局唯一激活约束和秒级切换。同时集成富友支付 SDK作为微信直连的备选渠道。
### 背景与动机
原有微信相关参数(公众号 OAuth、小程序、支付凭证)硬编码在环境变量中,只有一套配置,无法动态切换。业务上微信公众号/小程序随时可能被封禁,需要在管理后台**秒级切换**到备用配置恢复 OAuth 登录和支付能力。同时需要接入富友支付作为备选通道,降低对微信直连的单一依赖。
## 核心设计
### 配置切换流程
```
管理员激活新配置 POST /api/admin/wechat-configs/:id/activate
├─ ① BEGIN 事务
│ ├─ UPDATE tb_wechat_config SET is_active=false WHERE is_active=true
│ └─ UPDATE tb_wechat_config SET is_active=true WHERE id=:id
├─ ② COMMIT
├─ ③ DEL Redis "wechat:config:active"(即时生效)
└─ ④ 记录审计日志
├─ 新订单 → 使用新配置(记录新的 payment_config_id)
└─ 旧订单(待支付)→ 回调时按 payment_config_id 加载旧配置验签
└─ 30 分钟超时自动取消
```
### 生效配置缓存策略
- **Redis Key**:`wechat:config:active`(见 `pkg/constants/redis.go`)
- **TTL**:5 分钟(兜底,防 Redis 缓存与 DB 长期不一致)
- **主动失效**:激活、停用、更新生效配置、删除配置时主动 DEL 缓存
- **空标记**:无生效配置时缓存 `"none"`,TTL 1 分钟,防止缓存穿透
- **读取流程**:Redis GET → 命中返回 → MISS → 查 DB → SET 缓存
### 配置切换时在途订单处理
- `tb_order`、`tb_asset_recharge_record`、`tb_agent_recharge_record` 均新增 `payment_config_id` 字段(nullable)
- 下单时记录当前使用的配置 ID;配置切换后,旧订单仍按 `payment_config_id` 加载旧配置验签
- 旧待支付订单由现有 30 分钟超时自动取消机制清理
- **有待支付订单引用的配置不允许删除**(软删除后仍可用于验签)
### 支付回调统一分发
```
回调到达
├─ 微信回调 POST /api/callback/wechat-pay
│ └─ PowerWeChat SDK 解析 → 取 out_trade_no
└─ 富友回调 POST /api/callback/fuiou-pay
└─ GBK→UTF-8 → XML 解析 → 取 mchnt_order_no
└─ 按订单号前缀分发
├─ "ORD" → 套餐订单 → orderService.HandlePaymentCallback()
├─ "CRCH" → 资产充值 → rechargeService.HandlePaymentCallback()
└─ "ARCH" → 代理充值 → agentRechargeService.HandlePaymentCallback()
```
## 接口说明
### 基础路径
`/api/admin/wechat-configs`
**权限要求**:仅超级管理员(`user_type=1`)和平台用户(`user_type=2`)可访问,其他类型返回 `1005`
### 接口列表
| 方法 | 路径 | 说明 |
|------|------|------|
| POST | `/api/admin/wechat-configs` | 创建配置 |
| GET | `/api/admin/wechat-configs` | 查询配置列表(分页+筛选) |
| GET | `/api/admin/wechat-configs/active` | 查询当前生效配置 |
| GET | `/api/admin/wechat-configs/:id` | 查询配置详情 |
| PUT | `/api/admin/wechat-configs/:id` | 更新配置 |
| DELETE | `/api/admin/wechat-configs/:id` | 软删除配置 |
| POST | `/api/admin/wechat-configs/:id/activate` | 激活配置 |
| POST | `/api/admin/wechat-configs/:id/deactivate` | 停用配置 |
| POST | `/api/callback/fuiou-pay` | 富友支付回调(无需认证) |
### 渠道类型provider_type
| 值 | 说明 | 必填支付字段 |
|----|------|-------------|
| `wechat` | 微信直连 | `wx_mch_id`、`wx_api_v3_key`、`wx_cert_content`、`wx_key_content`、`wx_serial_no`、`wx_notify_url` |
| `fuiou` | 富友聚合支付 | `fy_ins_cd`、`fy_mchnt_cd`、`fy_term_id`、`fy_private_key`、`fy_public_key`、`fy_api_url`、`fy_notify_url` |
### 敏感字段脱敏规则
接口响应中所有敏感字段均脱敏,数据库明文存储:
| 字段类型 | 脱敏规则 | 示例 |
|---------|---------|------|
| Secret/Key | 前4位 + `***` + 后4位 | `abcd***7890` |
| 证书/私钥(长) | 仅显示状态 | `[已配置]` / `[未配置]` |
**更新脱敏字段**:不传或传空字符串 = 保留原值;传新明文值 = 替换。
### 删除保护规则
| 条件 | 错误码 | 错误消息 |
|------|--------|---------|
| 配置 `is_active=true` | `1171` | 不能删除当前生效的支付配置,请先停用 |
| 存在待支付订单引用 | `1172` | 该配置存在未完成的支付订单,暂时无法删除 |
## 富友支付 SDK
**位置**:`pkg/fuiou/`
| 文件 | 说明 |
|------|------|
| `types.go` | WxPreCreateRequest/Response、NotifyRequest 等 XML 结构体 |
| `client.go` | Client 结构体、NewClient、RSA 签名/验签、HTTP 请求(XML+GBK)|
| `wxprecreate.go` | WxPreCreate 方法(公众号 JSAPI + 小程序支付下单)|
| `notify.go` | VerifyNotify(GBK→UTF-8 + XML 解析 + RSA 验签)、BuildNotifyResponse |
**签名算法**:字典序排列参数 → GBK 编码 → MD5 哈希 → RSA 签名 → Base64
**新增依赖**:`golang.org/x/text`(GBK 编解码)
## 数据库变更
### 新建表 `tb_wechat_config`(迁移 000078)
| 字段组 | 字段 | 说明 |
|-------|------|------|
| 基础信息 | `id`, `name`, `description`, `provider_type`, `is_active` | 配置基础字段 |
| 公众号 OAuth | `oa_app_id`, `oa_app_secret`, `oa_token`, `oa_aes_key`, `oa_oauth_redirect_url` | 公众号相关 |
| 小程序 OAuth | `miniapp_app_id`, `miniapp_app_secret` | 小程序相关 |
| 微信直连 | `wx_mch_id`, `wx_api_v3_key`, `wx_api_v2_key`, `wx_cert_content`, `wx_key_content`, `wx_serial_no`, `wx_notify_url` | provider_type=wechat 时使用 |
| 富友 | `fy_ins_cd`, `fy_mchnt_cd`, `fy_term_id`, `fy_private_key`, `fy_public_key`, `fy_api_url`, `fy_notify_url` | provider_type=fuiou 时使用 |
| 审计 | `creator`, `updater`, `created_at`, `updated_at`, `deleted_at` | 标准审计字段 |
### 新增字段
| 表 | 字段 | 类型 | 迁移文件 |
|----|------|------|---------|
| `tb_order` | `payment_config_id` | bigint, nullable | 000079 |
| `tb_asset_recharge_record` | `payment_config_id` | bigint, nullable | 000080 |
| `tb_agent_recharge_record` | `payment_config_id` | bigint, nullable | 000081 |
## 新增错误码
| 错误码 | 常量 | 说明 |
|--------|------|------|
| 1170 | `CodeWechatConfigNotFound` | 微信支付配置不存在 |
| 1171 | `CodeWechatConfigActive` | 不能删除/操作当前生效的支付配置 |
| 1172 | `CodeWechatConfigHasPendingOrders` | 该配置存在未完成的支付订单 |
| 1173 | `CodeFuiouPayFailed` | 富友支付失败 |
| 1174 | `CodeFuiouCallbackInvalid` | 富友回调验签失败 |
| 1175 | `CodeNoPaymentConfig` | 当前无可用的支付配置 |
## 审计日志
以下操作均记录审计日志(异步写入,失败不影响业务):
| 操作 | operation_type | 说明 |
|------|---------------|------|
| 创建配置 | `create` | after_data 存脱敏后配置 |
| 更新配置 | `update` | before/after_data 均脱敏 |
| 删除配置 | `delete` | before_data 存脱敏后配置 |
| 激活配置 | `activate` | before_data=旧配置,after_data=新配置 |
| 停用配置 | `deactivate` | before/after_data 存状态变更 |
## 涉及文件
### 新增文件
| 层级 | 文件 | 说明 |
|------|------|------|
| 模型 | `internal/model/wechat_config.go` | WechatConfig 模型、渠道类型常量 |
| DTO | `internal/model/dto/wechat_config_dto.go` | CRUD 请求/响应 DTO、脱敏方法 |
| Store | `internal/store/postgres/wechat_config_store.go` | CRUD + 激活/停用 + 统计 |
| Service | `internal/service/wechat_config/service.go` | 业务逻辑、缓存管理、删除保护 |
| Handler | `internal/handler/admin/wechat_config.go` | 8 个 Handler 方法 |
| 路由 | `internal/routes/wechat_config.go` | 路由注册(含平台权限中间件) |
| SDK | `pkg/fuiou/types.go` | 富友 XML 结构体 |
| SDK | `pkg/fuiou/client.go` | 富友 HTTP 客户端、签名/验签 |
| SDK | `pkg/fuiou/wxprecreate.go` | 富友支付下单 |
| SDK | `pkg/fuiou/notify.go` | 富友回调验签 |
| 迁移 | `migrations/000078_create_wechat_config_table.up.sql` | 创建 tb_wechat_config 表 |
| 迁移 | `migrations/000079_add_payment_config_id_to_order.up.sql` | tb_order 新增字段 |
| 迁移 | `migrations/000080_add_payment_config_id_to_asset_recharge.up.sql` | tb_asset_recharge_record 新增字段 |
| 迁移 | `migrations/000081_add_payment_config_id_to_agent_recharge.up.sql` | tb_agent_recharge_record 新增字段 |
### 修改文件
| 文件 | 变更说明 |
|------|---------|
| `internal/model/order.go` | 新增 `PaymentConfigID *uint` 字段 |
| `internal/model/asset_wallet.go` | 新增 `PaymentConfigID *uint` 字段 |
| `internal/handler/callback/payment.go` | 支持富友回调 + 按订单前缀分发 + 按 payment_config_id 验签 |
| `internal/routes/order.go` | 新增 `/api/callback/fuiou-pay` 路由 |
| `internal/service/order/service.go` | 注入 wechatConfigService、下单时记录 payment_config_id |
| `internal/bootstrap/` 系列 | 注册 WechatConfigStore/Service/Handler |
| `cmd/api/docs.go` / `cmd/gendocs/main.go` | 注册 WechatConfigHandler |
### 删除/精简文件YAML 支付方案遗留清理)
| 文件 | 变更说明 |
|------|---------|
| `pkg/config/config.go` | 删除 `PaymentConfig` 结构体 + `WechatConfig.Payment` 字段 |
| `pkg/config/defaults/config.yaml` | 删除 `wechat.payment:` 整个配置节 |
| `pkg/wechat/config.go` | 删除 `NewPaymentApp()` 函数(YAML/CertPath 方式已被 DB Base64 方案替代) |
| `cmd/api/main.go` | 删除 `validateWechatConfig` 中所有 `wechatCfg.Payment.*` 相关校验代码 |
## 常量定义
```go
// pkg/constants/wallet.go(Card* 重命名为 Asset*,旧名保留为废弃别名)
AssetWalletResourceTypeIotCard // 原 CardWalletResourceTypeIotCard
AssetWalletResourceTypeDevice // 原 CardWalletResourceTypeDevice
AssetRechargeOrderPrefix // "CRCH"(原 CardRechargeOrderPrefix)
AssetRechargeMinAmount // 最小充值金额(分)
AssetRechargeMaxAmount // 最大充值金额(分)
// pkg/constants/redis.go
RedisWechatConfigActiveKey() // "wechat:config:active"
// internal/model/wechat_config.go
ProviderTypeWechat = "wechat" // 微信直连
ProviderTypeFuiou = "fuiou" // 富友
```
## 已知限制(留桩)
以下功能本次**未实现**,待后续会话补全:
- **客户端支付发起**:`WechatPayJSAPI`、`WechatPayH5`、`FuiouPayJSAPI`、`FuiouPayMiniApp` 均为留桩(返回"暂未实现"错误或 TODO 注释),当前仍保留 `wechatPayment` 单例注入
- **OAuth 配置动态加载**`OfficialAccountService` 仍从环境变量读取,`tb_wechat_config` 中的 `oa_*` 字段仅存储,待 H5/小程序重构时切换
## 部署注意事项
1. 执行数据库迁移(000078~000081),现有数据不受影响(新字段均为 nullable)
2. 原环境变量 `JUNHONG_WECHAT_PAYMENT_*` 系列已不再读取,可清理
3. 首次上线后,需要在管理后台手动创建并激活一个微信配置,否则第三方支付功能处于禁用状态(系统自动降级为仅支持钱包/线下支付)


@@ -0,0 +1,386 @@
# 工作流优化方案
## 一、背景与问题
### 1.1 当前痛点
| 痛点 | 根因 | 影响 |
|------|------|------|
| 讨论 → 提案不一致 | 共识没有被"锁定" | AI 理解偏差,提案与讨论方案不同 |
| 提案 → 实现不一致 | 约束没有被"强制执行" | 实现细节偏离设计 |
| 后置测试浪费时间 | 测试从实现反推 | 测试乱写、调试时间长 |
| 单测意义不大 | 测试实现细节而非行为 | 重构就挂,维护成本高 |
| 频繁重构 | 问题发现太晚 | 大量返工(17 次/100 提交) |
### 1.2 数据支撑
- **重构提交**: 17 次(近期约 100 次提交中)
- **典型完成率**: 75%(Shop Package Allocation: 91/121 tasks)
- **未完成原因**: 测试("低优先级,需要运行环境")
- **TODO 残留**: 10+ 个(代码中待完成的功能)
---
## 二、解决方案概览
### 2.1 核心理念变化
```
旧工作流:
discuss → proposal → design → tasks → implement → test → verify
↑ 测试后置
问题发现太晚
新工作流:
discuss → 锁定共识 → proposal → 验证 → design → 验证 →
生成验收测试 → 实现(测试驱动)→ 验证 → 归档
↑ ↑
测试从 spec 生成 实现时对照测试
```
### 2.2 新增机制
| 机制 | 解决的问题 | 实现方式 |
|------|-----------|---------|
| **共识锁定** | 讨论→提案不一致 | `consensus.md` + 用户确认 |
| **验收测试先行** | 测试后置浪费时间 | 从 Spec 生成测试,实现前运行 |
| **业务流程测试** | 跨 API 场景验证 | 从 Business Flow 生成测试 |
| **中间验证** | 问题发现太晚 | 每个 artifact 后自动验证 |
| **约束检查** | 实现偏离设计 | 实现时对照约束清单 |
---
## 三、新工作流详解
### 3.1 完整流程图
```
┌─────────────────────────────────────────────────────────────────────────────┐
│ Step 1: 探索 & 锁定共识 │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────┐ ┌──────────────────────┐ ┌─────────────────┐ │
│ │ /opsx:explore │ ──▶ │ 讨论并确认共识 │ ──▶ │ consensus.md │ │
│ └──────────────┘ │ AI 输出共识摘要 │ │ 用户确认后锁定 │ │
│ │ 用户逐条确认 ✓ │ └─────────────────┘ │
│ └──────────────────────┘ │
│ │
│ 输出: openspec/changes/<name>/consensus.md (用户签字确认版) │
└─────────────────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────────────────┐
│ Step 2: 生成提案 & 验证 │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────┐ ┌──────────────────────┐ ┌─────────────────┐ │
│ │ 读取 consensus │ ──▶ │ 生成 proposal.md │ ──▶ │ 自动验证 │ │
│ └──────────────┘ │ 必须覆盖共识要点 │ │ proposal 与 │ │
│ └──────────────────────┘ │ consensus 对齐 │ │
│ └─────────────────┘ │
│ │
│ 验证: 共识中的每个"要做什么"都在 proposal 的 Capabilities 中出现 │
└─────────────────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────────────────┐
│ Step 3: 生成 Spec │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────┐ ┌──────────────────────┐ │
│ │ 生成 spec.md │ ──▶ │ 包含两部分: │ │
│ │ │ │ 1. Scenarios │ │
│ │ │ │ 2. Business Flows │ │
│ └──────────────┘ └──────────────────────┘ │
│ │
│ Scenario: 单 API 的输入输出契约 │
│ Business Flow: 多 API 组合的业务场景 │
└─────────────────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────────────────┐
│ Step 4: 生成测试(关键变化!) │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────┐ ┌──────────────────────┐ ┌─────────────────┐ │
│ │ /opsx:gen-tests │ ──▶ │ 生成两类测试: │ ──▶ │ 运行测试 │ │
│ └──────────────┘ │ 1. 验收测试 │ │ 预期全部 FAIL │ │
│ │ 2. 流程测试 │ │ ← 证明测试有效 │ │
│ └──────────────────────┘ └─────────────────┘ │
│ │
│ 输出: │
│ - tests/acceptance/{capability}_acceptance_test.go │
│ - tests/flows/{capability}_{flow}_flow_test.go │
└─────────────────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────────────────┐
│ Step 5: 设计 & 实现(测试驱动) │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────┐ ┌──────────────────────┐ ┌─────────────────┐ │
│ │ 生成 design │ ──▶ │ 生成 tasks.md │ ──▶ │ 实现每个 task │ │
│ │ + 约束清单 │ │ 每个 task 关联测试 │ │ 运行对应测试 │ │
│ └──────────────┘ └──────────────────────┘ │ 测试通过才继续 │ │
│ └─────────────────┘ │
│ │
│ 实现循环: │
│ for each task: │
│ 1. 运行关联的测试 (预期 FAIL) │
│ 2. 实现代码 │
│ 3. 运行测试 (预期 PASS) │
│ 4. 测试通过 → 标记 task 完成 │
│ 5. 测试失败 → 修复代码,重复步骤 3 │
└─────────────────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────────────────┐
│ Step 6: 最终验证 & 归档 │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────┐ ┌──────────────────────┐ ┌─────────────────┐ │
│ │ 运行全部测试 │ ──▶ │ 生成完成报告 │ ──▶ │ 归档 change │ │
│ │ 必须 100% PASS │ │ 包含测试覆盖证据 │ └─────────────────┘ │
│ └──────────────┘ └──────────────────────┘ │
│ │
│ 完成报告必须包含: │
│ - 验收测试通过截图/日志 │
│ - 流程测试通过截图/日志 │
│ - 每个 Scenario/Flow 的测试对应关系 │
└─────────────────────────────────────────────────────────────────────────────┘
```
### 3.2 命令对照表
| 步骤 | 命令 | 说明 |
|------|------|------|
| 1 | `/opsx:explore` | 探索讨论 |
| 2 | `/opsx:lock <name>` | **新** 锁定共识 |
| 3 | `/opsx:new <name>` | 创建 change(自动读取 consensus) |
| 4 | `/opsx:continue` | 生成 proposal |
| 5 | `/opsx:continue` | 生成 spec |
| 6 | `/opsx:gen-tests` | **新** 生成验收测试和流程测试 |
| 7 | `/opsx:continue` | 生成 design |
| 8 | `/opsx:continue` | 生成 tasks |
| 9 | `/opsx:apply` | 测试驱动实现 |
| 10 | `/opsx:verify` | 验证 |
| 11 | `/opsx:archive` | 归档 |
---
## 四、测试体系重设计
### 4.1 新测试金字塔
```
┌─────────────┐
│ E2E 测试 │ ← 手动/自动化 UI很少
│ │
─┴─────────────┴─
┌─────────────────┐
│ 业务流程测试 │ ← 新增!多 API 组合
│ tests/flows/ │ 验证业务场景完整性
─┴─────────────────┴─
┌─────────────────────┐
│ 验收测试 │ ← 新增!从 Spec Scenario 生成
│ tests/acceptance/ │ 单 API 契约验证
─┴─────────────────────┴─
┌───────────────────────────┐
│ 集成冒烟测试 │ ← 保留
│ tests/integration/ │
─┴───────────────────────────┴─
┌─────────────────────────────────┐
│ 单元测试 (精简!) │ ← 大幅减少
│ tests/unit/ │ 仅复杂逻辑
└─────────────────────────────────┘
```
### 4.2 三层测试体系
| 层级 | 测试类型 | 来源 | 验证什么 | 位置 |
|------|---------|------|---------|------|
| **L1** | 验收测试 | Spec Scenario | 单 API 契约 | `tests/acceptance/` |
| **L2** | 流程测试 | Spec Business Flow | 业务场景完整性 | `tests/flows/` |
| **L3** | 单元测试 | 复杂逻辑 | 算法/规则正确性 | `tests/unit/` |
### 4.3 测试比例调整
| 测试类型 | 旧占比 | 新占比 | 变化 |
|---------|-------|-------|------|
| 验收测试 | 0% | **30%** | 新增 |
| 流程测试 | 0% | **15%** | 新增 |
| 集成测试 | 28% | 25% | 略减 |
| 单元测试 | 72% | **30%** | 大幅减少 |
### 4.4 单元测试精简规则
**保留**:
- ✅ 纯函数(计费计算、分佣算法)
- ✅ 状态机(订单状态流转)
- ✅ 复杂业务规则(层级校验、权限计算)
- ✅ 边界条件(时间、金额、精度)
**删除/不再写**:
- ❌ 简单 CRUD已被验收测试覆盖
- ❌ DTO 转换
- ❌ 配置读取
- ❌ 重复测试同一逻辑
---
## 五、Spec 模板更新
### 5.1 新 Spec 结构
````markdown
# {capability} Specification
## Purpose
{简要描述这个能力的目的}
## Requirements
### Requirement: {requirement-name}
{详细描述}
#### Scenario: {scenario-name}
- **GIVEN** {前置条件}
- **WHEN** {触发动作}
- **THEN** {预期结果}
- **AND** {额外验证}
---
## Business Flows(新增,必填部分)
### Flow: {flow-name}
**参与者**: {角色1}, {角色2}, ...
**前置条件**:
- {条件1}
- {条件2}
**流程步骤**:
1. **{步骤名称}**
   - 角色: {执行角色}
   - 调用: {HTTP Method} {Path}
   - 输入: {关键参数}
   - 预期: {预期结果}
   - 验证: {数据库/缓存状态变化}
2. **{下一步骤}**
   ...
**流程图**:
```
[角色A] ──创建──▶ [资源] ──分配──▶ [角色B可见] ──使用──▶ [状态变更]
```
**验证点**:
- [ ] {验证点1}
- [ ] {验证点2}
- [ ] 数据一致性: {描述}
**异常流程**:
- 如果 {条件}: 预期 {结果}
````
---
## 六、文件结构
### 6.1 测试目录
```
tests/
├── acceptance/ # 验收测试(单 API
│ ├── account_acceptance_test.go
│ ├── package_acceptance_test.go
│ ├── iot_card_acceptance_test.go
│ └── README.md
├── flows/ # 业务流程测试(多 API
│ ├── package_lifecycle_flow_test.go
│ ├── order_purchase_flow_test.go
│ ├── commission_settlement_flow_test.go
│ └── README.md
├── integration/ # 集成测试(保留)
│ └── ...
├── unit/ # 单元测试(精简)
│ └── ...
└── testutils/
└── integ/
└── integration.go
```
### 6.2 OpenSpec 目录
```
openspec/
├── config.yaml # 更新:增加测试规则
├── changes/
│ └── <change-name>/
│ ├── consensus.md # 新增:共识确认单
│ ├── proposal.md
│ ├── design.md
│ ├── tasks.md
│ └── specs/
│ └── <capability>/
│ └── spec.md # 更新:包含 Business Flows
└── specs/
└── ...
```
---
## 七、实施计划
### Phase 1: 基础设施2-3 天)
1. 创建目录结构
2. 更新 `openspec/config.yaml`
3. 创建 `openspec-lock-consensus` skill
4. 创建 `openspec-generate-acceptance-tests` skill
5. 更新 `AGENTS.md` 测试规范
6. 创建 `tests/acceptance/README.md`
7. 创建 `tests/flows/README.md`
### Phase 2: 试点1 周)
选择一个新 feature 完整走一遍新流程:
1. 验证共识锁定机制
2. 验证测试生成
3. 验证测试驱动实现
4. 收集反馈,调整流程
### Phase 3: 推广(持续)
1. 新 feature 强制使用新流程
2. 现有高价值测试迁移为验收测试
3. 清理低价值单元测试
4. 建立测试覆盖率追踪
---
## 八、预期收益
| 指标 | 当前 | 预期 |
|------|------|------|
| 讨论→提案一致率 | ~60% | >95% |
| 提案→实现一致率 | ~70% | >95% |
| 测试编写时间 | 实现后补,耗时长 | 实现前生成,自动化 |
| 测试有效性 | 很多无效测试 | 每个测试有破坏点 |
| 重构频率 | 高(17 次/100 提交) | 低(问题早发现) |
| 单测维护成本 | 高(重构就挂) | 低(只测行为) |
| 业务流程正确性 | 无保证 | 流程测试覆盖 |
---
## 九、相关文档
- [验收测试说明](../../tests/acceptance/README.md)
- [流程测试说明](../../tests/flows/README.md)
- [共识锁定 Skill](.opencode/skills/openspec-lock-consensus/SKILL.md)
- [测试生成 Skill](.opencode/skills/openspec-generate-acceptance-tests/SKILL.md)
- [AGENTS.md 测试规范](../../AGENTS.md#测试要求)

File diff suppressed because it is too large

go.mod

@@ -1,6 +1,6 @@
 module github.com/break/junhong_cmp_fiber
 
-go 1.25
+go 1.25.0
 
 require (
 	github.com/ArtisanCloud/PowerWeChat/v3 v3.4.38
@@ -15,17 +15,16 @@ require (
 	github.com/jackc/pgx/v5 v5.7.6
 	github.com/redis/go-redis/v9 v9.17.3
 	github.com/spf13/viper v1.21.0
-	github.com/stretchr/testify v1.11.1
 	github.com/swaggest/openapi-go v0.2.60
 	github.com/valyala/fasthttp v1.66.0
 	github.com/xuri/excelize/v2 v2.8.1
 	go.uber.org/zap v1.27.1
 	golang.org/x/crypto v0.47.0
+	golang.org/x/text v0.35.0
 	gopkg.in/natefinch/lumberjack.v2 v2.2.1
 	gopkg.in/yaml.v3 v3.0.1
 	gorm.io/datatypes v1.2.7
 	gorm.io/driver/postgres v1.6.0
-	gorm.io/driver/sqlite v1.6.0
 	gorm.io/gorm v1.31.1
 )
@@ -40,7 +39,6 @@ require (
 	github.com/cespare/xxhash/v2 v2.3.0 // indirect
 	github.com/clbanning/mxj/v2 v2.7.0 // indirect
 	github.com/cloudwego/base64x v0.1.6 // indirect
-	github.com/davecgh/go-spew v1.1.1 // indirect
 	github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
 	github.com/fsnotify/fsnotify v1.9.0 // indirect
 	github.com/gabriel-vasile/mimetype v1.4.10 // indirect
@@ -60,13 +58,11 @@ require (
 	github.com/mattn/go-colorable v0.1.13 // indirect
 	github.com/mattn/go-isatty v0.0.20 // indirect
 	github.com/mattn/go-runewidth v0.0.16 // indirect
-	github.com/mattn/go-sqlite3 v1.14.22 // indirect
 	github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 // indirect
 	github.com/patrickmn/go-cache v2.1.0+incompatible // indirect
 	github.com/pelletier/go-toml/v2 v2.2.4 // indirect
 	github.com/philhofer/fwd v1.1.3-0.20240916144458-20a13a1f6b7c // indirect
 	github.com/pkg/errors v0.9.1 // indirect
-	github.com/pmezard/go-difflib v1.0.0 // indirect
 	github.com/richardlehane/mscfb v1.0.4 // indirect
 	github.com/richardlehane/msoleps v1.0.4 // indirect
 	github.com/rivo/uniseg v0.2.0 // indirect
@@ -77,7 +73,6 @@ require (
 	github.com/spf13/afero v1.15.0 // indirect
 	github.com/spf13/cast v1.10.0 // indirect
 	github.com/spf13/pflag v1.0.10 // indirect
-	github.com/stretchr/objx v0.5.2 // indirect
 	github.com/subosito/gotenv v1.6.0 // indirect
 	github.com/swaggest/jsonschema-go v0.3.74 // indirect
 	github.com/swaggest/refl v1.3.1 // indirect
@@ -94,9 +89,8 @@ require (
 	go.yaml.in/yaml/v3 v3.0.4 // indirect
 	golang.org/x/arch v0.0.0-20210923205945-b76863e36670 // indirect
 	golang.org/x/net v0.48.0 // indirect
-	golang.org/x/sync v0.19.0 // indirect
+	golang.org/x/sync v0.20.0 // indirect
 	golang.org/x/sys v0.40.0 // indirect
-	golang.org/x/text v0.33.0 // indirect
 	golang.org/x/time v0.14.0 // indirect
 	google.golang.org/protobuf v1.36.10 // indirect
 	gopkg.in/yaml.v2 v2.4.0 // indirect

go.sum

@@ -221,7 +221,6 @@ github.com/spf13/viper v1.21.0/go.mod h1:P0lhsswPGWD/1lZJ9ny3fYnVqxiegrlNrEmgLjb
 github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
 github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
 github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
-github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
 github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
 github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
 github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
@@ -299,15 +298,15 @@ golang.org/x/image v0.14.0 h1:tNgSxAFe3jC4uYqvZdTr84SZoM1KfwdC9SKIFrLjFn4=
 golang.org/x/image v0.14.0/go.mod h1:HUYqC05R2ZcZ3ejNQsIHQDQiwWM4JBqmm6MKANTp4LE=
 golang.org/x/net v0.48.0 h1:zyQRTTrjc33Lhh0fBgT/H3oZq9WuvRR5gPC70xpDiQU=
 golang.org/x/net v0.48.0/go.mod h1:+ndRgGjkh8FGtu1w1FGbEC31if4VrNVMuKTgcAAnQRY=
-golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
-golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
+golang.org/x/sync v0.20.0 h1:e0PTpb7pjO8GAtTs2dQ6jYa5BWYlMuX047Dco/pItO4=
+golang.org/x/sync v0.20.0/go.mod h1:9xrNwdLfx4jkKbNva9FpL6vEN7evnE43NNNJQ2LF3+0=
 golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
 golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
-golang.org/x/text v0.33.0 h1:B3njUFyqtHDUI5jMn1YIr5B0IE2U0qck04r6d4KPAxE=
-golang.org/x/text v0.33.0/go.mod h1:LuMebE6+rBincTi9+xWTY8TztLzKHc/9C1uBCG27+q8=
+golang.org/x/text v0.35.0 h1:JOVx6vVDFokkpaq1AEptVzLTpDe9KGpj5tR4/X+ybL8=
+golang.org/x/text v0.35.0/go.mod h1:khi/HExzZJ2pGnjenulevKNX1W67CUy0AsXcNubPGCA=
 golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
 golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
 google.golang.org/protobuf v1.36.10 h1:AYd7cD/uASjIL6Q9LiTjz8JLcrh/88q5UObnmY3aOOE=


@@ -6,7 +6,6 @@ import (
 	"github.com/break/junhong_cmp_fiber/internal/model"
 	"github.com/break/junhong_cmp_fiber/pkg/config"
 	"github.com/break/junhong_cmp_fiber/pkg/constants"
-	pkgGorm "github.com/break/junhong_cmp_fiber/pkg/gorm"
 	"go.uber.org/zap"
 )
@@ -15,7 +14,6 @@ func initDefaultAdmin(deps *Dependencies, services *services) error {
 	cfg := config.Get()
 	ctx := context.Background()
-	ctx = pkgGorm.SkipDataPermission(ctx)
 	var count int64
 	if err := deps.DB.WithContext(ctx).Model(&model.Account{}).Where("user_type = ?", constants.UserTypeSuperAdmin).Count(&count).Error; err != nil {


@@ -45,8 +45,8 @@ func Bootstrap(deps *Dependencies) (*BootstrapResult, error) {
 		deps.Logger.Error("初始化默认超级管理员失败", zap.Error(err))
 	}
-	// 5. 初始化 Middleware 层
-	middlewares := initMiddlewares(deps)
+	// 5. 初始化 Middleware 层(传入 ShopStore 以支持预计算下级店铺 ID)
+	middlewares := initMiddlewares(deps, stores)
 	// 6. 初始化 Handler 层
 	handlers := initHandlers(services, deps)
@@ -59,17 +59,12 @@
 // registerGORMCallbacks 注册 GORM Callbacks
 func registerGORMCallbacks(deps *Dependencies, stores *stores) error {
-	// 注册数据权限过滤 Callback(使用 ShopStore 来查询下级店铺 ID)
-	if err := pkgGorm.RegisterDataPermissionCallback(deps.DB, stores.Shop); err != nil {
-		return err
-	}
 	// 注册自动添加创建&更新人 Callback
 	if err := pkgGorm.RegisterSetCreatorUpdaterCallback(deps.DB); err != nil {
 		return err
 	}
-	// TODO: 在此添加其他 GORM Callbacks
+	// 数据权限过滤已移至 Store 层显式调用 ApplyXxxFilter 函数
 	return nil
 }
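The diff above moves data-permission filtering from a global GORM callback to explicit filter calls at the Store layer. As a minimal, hypothetical sketch of that pattern (the `Order`/`ApplyShopFilter` names here are illustrative stand-ins, not the project's actual code, which would instead add a `WHERE shop_id IN (...)` clause to a GORM query):

```go
package main

import "fmt"

// Order is a minimal stand-in record carrying the owning shop ID.
type Order struct {
	ID     int64
	ShopID int64
}

// ApplyShopFilter keeps only records whose ShopID is in allowed.
// It models the explicit Store-layer filter that replaced the implicit
// callback: the caller passes the permitted shop IDs at each call site,
// so the restriction is visible in the code path rather than hidden.
func ApplyShopFilter(records []Order, allowed []int64) []Order {
	set := make(map[int64]struct{}, len(allowed))
	for _, id := range allowed {
		set[id] = struct{}{}
	}
	var out []Order
	for _, r := range records {
		if _, ok := set[r.ShopID]; ok {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	records := []Order{{1, 10}, {2, 20}, {3, 10}}
	// Only shop 10's records survive the filter.
	fmt.Println(len(ApplyShopFilter(records, []int64{10}))) // 2
}
```

The trade-off of the explicit style is that a forgotten filter call is a visible code-review item, whereas a global callback silently applies (or, when skipped via context, silently doesn't).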


@@ -15,15 +15,14 @@
 // Dependencies 封装所有基础依赖
 // 这些是应用启动时初始化的核心组件
 type Dependencies struct {
 	DB                  *gorm.DB              // PostgreSQL 数据库连接
 	Redis               *redis.Client         // Redis 客户端
 	Logger              *zap.Logger           // 应用日志器
 	JWTManager          *auth.JWTManager      // JWT 管理器(个人客户认证)
 	TokenManager        *auth.TokenManager    // Token 管理器(后台和 H5 认证)
 	VerificationService *verification.Service // 验证码服务
 	QueueClient         *queue.Client         // Asynq 任务队列客户端
 	StorageService      *storage.Service      // 对象存储服务(可选,配置缺失时为 nil)
 	GatewayClient       *gateway.Client       // Gateway API 客户端(可选,配置缺失时为 nil)
-	WechatOfficialAccount wechat.OfficialAccountServiceInterface // 微信公众号服务(可选)
 	WechatPayment wechat.PaymentServiceInterface // 微信支付服务(可选)
 }


@@ -5,12 +5,41 @@ import (
 	"github.com/break/junhong_cmp_fiber/internal/handler/app"
 	authHandler "github.com/break/junhong_cmp_fiber/internal/handler/auth"
 	"github.com/break/junhong_cmp_fiber/internal/handler/callback"
-	"github.com/break/junhong_cmp_fiber/internal/handler/h5"
+	clientOrderSvc "github.com/break/junhong_cmp_fiber/internal/service/client_order"
+	"github.com/break/junhong_cmp_fiber/internal/store/postgres"
 	"github.com/go-playground/validator/v10"
 )
 func initHandlers(svc *services, deps *Dependencies) *Handlers {
 	validate := validator.New()
+	personalCustomerDeviceStore := postgres.NewPersonalCustomerDeviceStore(deps.DB)
+	assetWalletStore := postgres.NewAssetWalletStore(deps.DB, deps.Redis)
+	packageStore := postgres.NewPackageStore(deps.DB)
+	shopPackageAllocationStore := postgres.NewShopPackageAllocationStore(deps.DB)
+	iotCardStore := postgres.NewIotCardStore(deps.DB, deps.Redis)
+	deviceStore := postgres.NewDeviceStore(deps.DB, deps.Redis)
+	assetWalletTransactionStore := postgres.NewAssetWalletTransactionStore(deps.DB, deps.Redis)
+	assetRechargeStore := postgres.NewAssetRechargeStore(deps.DB, deps.Redis)
+	personalCustomerOpenIDStore := postgres.NewPersonalCustomerOpenIDStore(deps.DB)
+	orderStore := postgres.NewOrderStore(deps.DB, deps.Redis)
+	packageSeriesStore := postgres.NewPackageSeriesStore(deps.DB)
+	shopSeriesAllocationStore := postgres.NewShopSeriesAllocationStore(deps.DB)
+	deviceSimBindingStore := postgres.NewDeviceSimBindingStore(deps.DB, deps.Redis)
+	carrierStore := postgres.NewCarrierStore(deps.DB)
+	clientOrderService := clientOrderSvc.New(
+		svc.Asset,
+		svc.PurchaseValidation,
+		orderStore,
+		assetRechargeStore,
+		assetWalletStore,
+		personalCustomerDeviceStore,
+		personalCustomerOpenIDStore,
+		svc.WechatConfig,
+		packageSeriesStore,
+		shopSeriesAllocationStore,
+		deps.Redis,
+		deps.Logger,
+	)
 	return &Handlers{
 		Auth: authHandler.NewHandler(svc.Auth, validate),
@@ -18,34 +47,50 @@ func initHandlers(svc *services, deps *Dependencies) *Handlers {
 		Role:             admin.NewRoleHandler(svc.Role, validate),
 		Permission:       admin.NewPermissionHandler(svc.Permission),
 		PersonalCustomer: app.NewPersonalCustomerHandler(svc.PersonalCustomer, deps.Logger),
+		ClientAuth:       app.NewClientAuthHandler(svc.ClientAuth, deps.Logger),
+		ClientAsset:      app.NewClientAssetHandler(svc.Asset, personalCustomerDeviceStore, assetWalletStore, packageStore, shopPackageAllocationStore, iotCardStore, deviceStore, deps.DB, deps.Logger),
+		ClientWallet:     app.NewClientWalletHandler(svc.Asset, personalCustomerDeviceStore, assetWalletStore, assetWalletTransactionStore, assetRechargeStore, svc.Recharge, personalCustomerOpenIDStore, svc.WechatConfig, deps.Redis, deps.Logger, deps.DB, iotCardStore, deviceStore),
+		ClientOrder:      app.NewClientOrderHandler(clientOrderService, svc.Asset, orderStore, personalCustomerDeviceStore, iotCardStore, deviceStore, deps.Logger, deps.DB),
+		ClientExchange:   app.NewClientExchangeHandler(svc.Exchange),
+		ClientRealname:   app.NewClientRealnameHandler(svc.Asset, personalCustomerDeviceStore, iotCardStore, deviceSimBindingStore, carrierStore, deps.GatewayClient, deps.Logger),
+		ClientDevice:     app.NewClientDeviceHandler(svc.Asset, personalCustomerDeviceStore, deviceStore, deviceSimBindingStore, iotCardStore, deps.GatewayClient, deps.Logger),
 		Shop:             admin.NewShopHandler(svc.Shop),
+		ShopRole:         admin.NewShopRoleHandler(svc.Shop),
 		AdminAuth:        admin.NewAuthHandler(svc.Auth, validate),
-		H5Auth:           h5.NewAuthHandler(svc.Auth, validate),
 		ShopCommission:              admin.NewShopCommissionHandler(svc.ShopCommission),
 		CommissionWithdrawal:        admin.NewCommissionWithdrawalHandler(svc.CommissionWithdrawal),
 		CommissionWithdrawalSetting: admin.NewCommissionWithdrawalSettingHandler(svc.CommissionWithdrawalSetting),
 		Enterprise:       admin.NewEnterpriseHandler(svc.Enterprise),
 		EnterpriseCard:   admin.NewEnterpriseCardHandler(svc.EnterpriseCard),
 		EnterpriseDevice: admin.NewEnterpriseDeviceHandler(svc.EnterpriseDevice),
-		EnterpriseDeviceH5: h5.NewEnterpriseDeviceHandler(svc.EnterpriseDevice),
 		Authorization:    admin.NewAuthorizationHandler(svc.Authorization),
 		MyCommission:     admin.NewMyCommissionHandler(svc.MyCommission),
-		IotCard:          admin.NewIotCardHandler(svc.IotCard, deps.GatewayClient),
+		IotCard:          admin.NewIotCardHandler(svc.IotCard),
 		IotCardImport:    admin.NewIotCardImportHandler(svc.IotCardImport),
-		Device:           admin.NewDeviceHandler(svc.Device, deps.GatewayClient),
+		Device:           admin.NewDeviceHandler(svc.Device),
 		DeviceImport:     admin.NewDeviceImportHandler(svc.DeviceImport),
 		AssetAllocationRecord: admin.NewAssetAllocationRecordHandler(svc.AssetAllocationRecord),
 		Storage:          admin.NewStorageHandler(deps.StorageService),
 		Carrier:          admin.NewCarrierHandler(svc.Carrier),
 		PackageSeries:    admin.NewPackageSeriesHandler(svc.PackageSeries),
 		Package:          admin.NewPackageHandler(svc.Package),
-		ShopSeriesAllocation:  admin.NewShopSeriesAllocationHandler(svc.ShopSeriesAllocation),
-		ShopPackageAllocation: admin.NewShopPackageAllocationHandler(svc.ShopPackageAllocation),
+		PackageUsage:     admin.NewPackageUsageHandler(svc.PackageDailyRecord),
 		ShopPackageBatchAllocation: admin.NewShopPackageBatchAllocationHandler(svc.ShopPackageBatchAllocation),
 		ShopPackageBatchPricing:    admin.NewShopPackageBatchPricingHandler(svc.ShopPackageBatchPricing),
-		AdminOrder:       admin.NewOrderHandler(svc.Order),
-		H5Order:          h5.NewOrderHandler(svc.Order),
-		H5Recharge:       h5.NewRechargeHandler(svc.Recharge),
-		PaymentCallback:  callback.NewPaymentHandler(svc.Order, svc.Recharge, deps.WechatPayment),
+		ShopSeriesGrant:  admin.NewShopSeriesGrantHandler(svc.ShopSeriesGrant),
+		AdminOrder:       admin.NewOrderHandler(svc.Order, validate),
+		AdminExchange:    admin.NewExchangeHandler(svc.Exchange, validate),
+		PaymentCallback:  callback.NewPaymentHandler(svc.Order, svc.Recharge, svc.AgentRecharge, deps.WechatPayment),
+		PollingConfig:        admin.NewPollingConfigHandler(svc.PollingConfig),
+		PollingConcurrency:   admin.NewPollingConcurrencyHandler(svc.PollingConcurrency),
+		PollingMonitoring:    admin.NewPollingMonitoringHandler(svc.PollingMonitoring),
+		PollingAlert:         admin.NewPollingAlertHandler(svc.PollingAlert),
+		PollingCleanup:       admin.NewPollingCleanupHandler(svc.PollingCleanup),
+		PollingManualTrigger: admin.NewPollingManualTriggerHandler(svc.PollingManualTrigger),
+		Asset:            admin.NewAssetHandler(svc.Asset, svc.Device, svc.StopResumeService),
+		AssetLifecycle:   admin.NewAssetLifecycleHandler(svc.AssetLifecycle),
+		AssetWallet:      admin.NewAssetWalletHandler(svc.AssetWallet),
+		WechatConfig:     admin.NewWechatConfigHandler(svc.WechatConfig),
+		AgentRecharge:    admin.NewAgentRechargeHandler(svc.AgentRecharge),
 	}
 }


@@ -14,7 +14,7 @@
 )
 // initMiddlewares 初始化所有中间件
-func initMiddlewares(deps *Dependencies) *Middlewares {
+func initMiddlewares(deps *Dependencies, stores *stores) *Middlewares {
 	// 获取全局配置
 	cfg := config.Get()
@@ -22,27 +22,23 @@ func initMiddlewares(deps *Dependencies, stores *stores) *Middlewares {
 	jwtManager := pkgauth.NewJWTManager(cfg.JWT.SecretKey, cfg.JWT.TokenDuration)
 	// 创建个人客户认证中间件
-	personalAuthMiddleware := middleware.NewPersonalAuthMiddleware(jwtManager, deps.Logger)
+	personalAuthMiddleware := middleware.NewPersonalAuthMiddleware(jwtManager, deps.Redis, deps.Logger)
 	// 创建 Token Manager(用于后台和 H5 认证)
 	accessTTL := time.Duration(cfg.JWT.AccessTokenTTL) * time.Second
 	refreshTTL := time.Duration(cfg.JWT.RefreshTokenTTL) * time.Second
 	tokenManager := pkgauth.NewTokenManager(deps.Redis, accessTTL, refreshTTL)
-	// 创建后台认证中间件
-	adminAuthMiddleware := createAdminAuthMiddleware(tokenManager)
+	// 创建后台认证中间件(传入 ShopStore 以支持预计算下级店铺 ID)
+	adminAuthMiddleware := createAdminAuthMiddleware(tokenManager, stores.Shop)
-	// 创建H5认证中间件
-	h5AuthMiddleware := createH5AuthMiddleware(tokenManager)
 	return &Middlewares{
 		PersonalAuth: personalAuthMiddleware,
 		AdminAuth:    adminAuthMiddleware,
-		H5Auth:       h5AuthMiddleware,
 	}
 }
-func createAdminAuthMiddleware(tokenManager *pkgauth.TokenManager) fiber.Handler {
+func createAdminAuthMiddleware(tokenManager *pkgauth.TokenManager, shopStore pkgmiddleware.AuthShopStoreInterface) fiber.Handler {
 	return pkgmiddleware.Auth(pkgmiddleware.AuthConfig{
 		TokenValidator: func(token string) (*pkgmiddleware.UserContextInfo, error) {
 			tokenInfo, err := tokenManager.ValidateAccessToken(context.Background(), token)
@@ -65,30 +61,6 @@ func createAdminAuthMiddleware(tokenManager *pkgauth.TokenManager, shopStore pkgmiddleware.AuthShopStoreInterface) fiber.Handler {
 			}, nil
 		},
 		SkipPaths: []string{"/api/admin/login", "/api/admin/refresh-token"},
+		ShopStore: shopStore,
 	})
 }
-
-func createH5AuthMiddleware(tokenManager *pkgauth.TokenManager) fiber.Handler {
-	return pkgmiddleware.Auth(pkgmiddleware.AuthConfig{
-		TokenValidator: func(token string) (*pkgmiddleware.UserContextInfo, error) {
-			tokenInfo, err := tokenManager.ValidateAccessToken(context.Background(), token)
-			if err != nil {
-				return nil, errors.New(errors.CodeInvalidToken, "认证令牌无效或已过期")
-			}
-			// 检查用户类型:H5 允许 Agent(3), Enterprise(4)
-			if tokenInfo.UserType != constants.UserTypeAgent &&
-				tokenInfo.UserType != constants.UserTypeEnterprise {
-				return nil, errors.New(errors.CodeForbidden, "权限不足")
-			}
-			return &pkgmiddleware.UserContextInfo{
-				UserID:       tokenInfo.UserID,
-				UserType:     tokenInfo.UserType,
-				ShopID:       tokenInfo.ShopID,
-				EnterpriseID: tokenInfo.EnterpriseID,
-			}, nil
-		},
-		SkipPaths: []string{"/api/h5/login", "/api/h5/refresh-token"},
-	})
-}
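The middleware diff above passes a ShopStore into the admin auth middleware so that the set of subordinate shop IDs can be precomputed once per request. The traversal itself can be sketched as a plain BFS over a shop hierarchy; the `ShopTree`/`SubordinateIDs` names below are hypothetical stand-ins for illustration (the real ShopStore resolves children from the database, not an in-memory map):

```go
package main

import "fmt"

// ShopTree maps a shop ID to its direct child shop IDs;
// an in-memory stand-in for the ShopStore lookup.
type ShopTree map[int64][]int64

// SubordinateIDs returns shopID plus all descendant shop IDs via BFS.
// Precomputing this once per request lets later queries filter with a
// simple "shop_id IN (...)" check instead of walking the tree each time.
func SubordinateIDs(t ShopTree, shopID int64) []int64 {
	ids := []int64{shopID}
	queue := []int64{shopID}
	for len(queue) > 0 {
		cur := queue[0]
		queue = queue[1:]
		for _, child := range t[cur] {
			ids = append(ids, child)
			queue = append(queue, child)
		}
	}
	return ids
}

func main() {
	// Shop 1 has children 2 and 3; shop 2 has child 4.
	tree := ShopTree{1: {2, 3}, 2: {4}}
	fmt.Println(SubordinateIDs(tree, 1)) // [1 2 3 4]
}
```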

Some files were not shown because too many files have changed in this diff.