docs(constitution): 新增数据库设计原则(v2.4.0)

在项目宪章中新增第九条原则"数据库设计原则",明确禁止使用数据库外键约束和ORM关联标签。

主要变更:
- 新增原则IX:数据库设计原则(Database Design Principles)
- 强制要求:数据库表不得使用外键约束
- 强制要求:GORM模型不得使用ORM关联标签(foreignKey、hasMany等)
- 强制要求:表关系必须通过ID字段手动维护
- 强制要求:关联数据查询必须显式编写,避免ORM魔法
- 强制要求:时间字段由GORM处理,不使用数据库触发器

设计理念:
- 提升业务逻辑灵活性(无数据库约束限制)
- 优化高并发性能(无外键检查开销)
- 增强代码可读性(显式查询,无隐式预加载)
- 简化数据库架构和迁移流程
- 支持分布式和微服务场景

版本升级:2.3.0 → 2.4.0(MINOR)
2025-11-13 13:40:19 +08:00
parent ea0c6a8b16
commit 984ccccc63
63 changed files with 12099 additions and 83 deletions


@@ -0,0 +1,41 @@
# Specification Quality Checklist: 数据持久化与异步任务处理集成
**Purpose**: 在进入规划阶段前验证规格说明的完整性和质量
**Created**: 2025-11-12
**Feature**: [spec.md](../spec.md)
## Content Quality
- [x] 无实现细节(语言、框架、API)
- [x] 专注于用户价值和业务需求
- [x] 为非技术干系人编写
- [x] 所有必填部分已完成
## Requirement Completeness
- [x] 无[NEEDS CLARIFICATION]标记残留
- [x] 需求可测试且无歧义
- [x] 成功标准可衡量
- [x] 成功标准技术无关(无实现细节)
- [x] 所有验收场景已定义
- [x] 边界情况已识别
- [x] 范围边界清晰
- [x] 依赖和假设已识别
## Feature Readiness
- [x] 所有功能需求都有清晰的验收标准
- [x] 用户场景涵盖主要流程
- [x] 功能满足成功标准中定义的可衡量结果
- [x] 无实现细节泄漏到规格说明中
## Notes
所有检查项均已通过。规格说明完整且质量良好,可以进入下一阶段(`/speckit.clarify` 或 `/speckit.plan`)。
规格说明的主要优势:
- 用户故事按优先级清晰排序(P1核心数据持久化 → P2异步任务 → P3监控)
- 功能需求详细且可测试,涵盖了GORM、PostgreSQL和Asynq的核心能力
- 成功标准具体可衡量,包含响应时间、并发能力、可靠性等关键指标
- 边界情况考虑周全,包括连接池耗尽、死锁、主从切换等场景
- 技术需求完全遵循项目宪章(Constitution),确保架构一致性


@@ -0,0 +1,733 @@
openapi: 3.0.3
info:
title: 数据持久化与异步任务处理集成 API
description: |
GORM + PostgreSQL + Asynq 集成的数据持久化和异步任务处理功能 API 规范
**Feature**: 002-gorm-postgres-asynq
**Date**: 2025-11-12
## 核心功能
- 数据库连接管理和健康检查
- 异步任务提交和管理
- 数据 CRUD 操作(示例:用户管理)
## 技术栈
- Fiber (HTTP 框架)
- GORM (ORM)
- PostgreSQL (数据库)
- Asynq (任务队列)
- Redis (任务队列存储)
version: 1.0.0
contact:
name: API Support
email: support@example.com
servers:
- url: http://localhost:8080/api/v1
description: 开发环境
- url: http://staging.example.com/api/v1
description: 预发布环境
- url: https://api.example.com/api/v1
description: 生产环境
tags:
- name: Health
description: 健康检查和系统状态
- name: Users
description: 用户管理(数据库操作示例)
- name: Tasks
description: 异步任务管理
paths:
/health:
get:
tags:
- Health
summary: 健康检查
description: |
检查系统健康状态,包括数据库连接和 Redis 连接
**测试用例**:
- FR-011: 系统必须提供健康检查接口
- SC-010: 健康检查应在 1 秒内返回
operationId: healthCheck
responses:
'200':
description: 系统健康
content:
application/json:
schema:
type: object
properties:
status:
type: string
enum: [ok]
description: 系统整体状态
postgres:
type: string
enum: [up, down]
description: PostgreSQL 连接状态
redis:
type: string
enum: [up, down]
description: Redis 连接状态
example:
status: ok
postgres: up
redis: up
'503':
description: 服务降级或不可用
content:
application/json:
schema:
type: object
properties:
status:
type: string
enum: [degraded, unavailable]
postgres:
type: string
enum: [up, down]
redis:
type: string
enum: [up, down]
error:
type: string
description: 错误详情
example:
status: degraded
postgres: down
redis: up
error: "数据库连接失败"
/users:
post:
tags:
- Users
summary: 创建用户
description: |
创建新用户(演示数据库 CRUD 操作)
**测试用例**:
- FR-002: 支持标准 CRUD 操作
- FR-003: 支持数据库事务
- User Story 1 - Acceptance 1: 数据持久化
operationId: createUser
security:
- TokenAuth: []
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/CreateUserRequest'
responses:
'200':
description: 用户创建成功
content:
application/json:
schema:
allOf:
- $ref: '#/components/schemas/SuccessResponse'
- type: object
properties:
data:
$ref: '#/components/schemas/UserResponse'
'400':
$ref: '#/components/responses/BadRequest'
'401':
$ref: '#/components/responses/Unauthorized'
'409':
$ref: '#/components/responses/Conflict'
'500':
$ref: '#/components/responses/InternalServerError'
get:
tags:
- Users
summary: 用户列表
description: |
分页查询用户列表
**测试用例**:
- FR-002: 支持分页列表查询
- FR-005: 支持条件查询、分页、排序
- User Story 1 - Acceptance 5: 分页和排序
operationId: listUsers
security:
- TokenAuth: []
parameters:
- name: page
in: query
schema:
type: integer
default: 1
minimum: 1
description: 页码
- name: page_size
in: query
schema:
type: integer
default: 20
minimum: 1
maximum: 100
description: 每页条数(最大 100)
- name: status
in: query
schema:
type: string
enum: [active, inactive, suspended]
description: 用户状态过滤
responses:
'200':
description: 查询成功
content:
application/json:
schema:
allOf:
- $ref: '#/components/schemas/SuccessResponse'
- type: object
properties:
data:
$ref: '#/components/schemas/ListUsersResponse'
'401':
$ref: '#/components/responses/Unauthorized'
'500':
$ref: '#/components/responses/InternalServerError'
/users/{id}:
get:
tags:
- Users
summary: 获取用户详情
description: |
根据用户 ID 获取详细信息
**测试用例**:
- FR-002: 支持按 ID 查询
- User Story 1 - Acceptance 1: 数据检索
operationId: getUserById
security:
- TokenAuth: []
parameters:
- name: id
in: path
required: true
schema:
type: integer
minimum: 1
description: 用户 ID
responses:
'200':
description: 查询成功
content:
application/json:
schema:
allOf:
- $ref: '#/components/schemas/SuccessResponse'
- type: object
properties:
data:
$ref: '#/components/schemas/UserResponse'
'401':
$ref: '#/components/responses/Unauthorized'
'404':
$ref: '#/components/responses/NotFound'
'500':
$ref: '#/components/responses/InternalServerError'
put:
tags:
- Users
summary: 更新用户
description: |
更新用户信息
**测试用例**:
- FR-002: 支持更新操作
- User Story 1 - Acceptance 2: 数据更新
operationId: updateUser
security:
- TokenAuth: []
parameters:
- name: id
in: path
required: true
schema:
type: integer
minimum: 1
description: 用户 ID
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/UpdateUserRequest'
responses:
'200':
description: 更新成功
content:
application/json:
schema:
allOf:
- $ref: '#/components/schemas/SuccessResponse'
- type: object
properties:
data:
$ref: '#/components/schemas/UserResponse'
'400':
$ref: '#/components/responses/BadRequest'
'401':
$ref: '#/components/responses/Unauthorized'
'404':
$ref: '#/components/responses/NotFound'
'409':
$ref: '#/components/responses/Conflict'
'500':
$ref: '#/components/responses/InternalServerError'
delete:
tags:
- Users
summary: 删除用户
description: |
软删除用户(设置 deleted_at 字段)
**测试用例**:
- FR-002: 支持软删除操作
- User Story 1 - Acceptance 3: 数据删除
operationId: deleteUser
security:
- TokenAuth: []
parameters:
- name: id
in: path
required: true
schema:
type: integer
minimum: 1
description: 用户 ID
responses:
'200':
description: 删除成功
content:
application/json:
schema:
$ref: '#/components/schemas/SuccessResponse'
'401':
$ref: '#/components/responses/Unauthorized'
'404':
$ref: '#/components/responses/NotFound'
'500':
$ref: '#/components/responses/InternalServerError'
/tasks/email:
post:
tags:
- Tasks
summary: 提交邮件发送任务
description: |
将邮件发送任务提交到异步队列
**测试用例**:
- FR-006: 提交任务到异步队列
- FR-008: 任务重试机制
- User Story 2 - Acceptance 1: 任务提交
operationId: submitEmailTask
security:
- TokenAuth: []
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/EmailTaskRequest'
responses:
'200':
description: 任务已提交
content:
application/json:
schema:
allOf:
- $ref: '#/components/schemas/SuccessResponse'
- type: object
properties:
data:
$ref: '#/components/schemas/TaskResponse'
'400':
$ref: '#/components/responses/BadRequest'
'401':
$ref: '#/components/responses/Unauthorized'
'500':
$ref: '#/components/responses/InternalServerError'
/tasks/sync:
post:
tags:
- Tasks
summary: 提交数据同步任务
description: |
将数据同步任务提交到异步队列(支持优先级)
**测试用例**:
- FR-006: 提交任务到异步队列
- FR-009: 任务优先级支持
- User Story 2 - Acceptance 1: 任务提交
operationId: submitSyncTask
security:
- TokenAuth: []
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/SyncTaskRequest'
responses:
'200':
description: 任务已提交
content:
application/json:
schema:
allOf:
- $ref: '#/components/schemas/SuccessResponse'
- type: object
properties:
data:
$ref: '#/components/schemas/TaskResponse'
'400':
$ref: '#/components/responses/BadRequest'
'401':
$ref: '#/components/responses/Unauthorized'
'500':
$ref: '#/components/responses/InternalServerError'
components:
securitySchemes:
TokenAuth:
type: apiKey
in: header
name: token
description: 认证令牌
schemas:
# 通用响应
SuccessResponse:
type: object
required:
- code
- msg
- timestamp
properties:
code:
type: integer
enum: [0]
description: 响应码(0 表示成功)
msg:
type: string
example: success
description: 响应消息
data:
type: object
description: 响应数据(具体结构由各端点定义)
timestamp:
type: string
format: date-time
example: "2025-11-12T16:00:00+08:00"
description: 响应时间戳(ISO 8601 格式)
ErrorResponse:
type: object
required:
- code
- msg
- timestamp
properties:
code:
type: integer
description: 错误码(非 0)
example: 1001
msg:
type: string
description: 错误消息(中文)
example: "参数验证失败"
data:
type: object
nullable: true
description: 错误详情(可选)
timestamp:
type: string
format: date-time
example: "2025-11-12T16:00:00+08:00"
# 用户相关
CreateUserRequest:
type: object
required:
- username
- email
- password
properties:
username:
type: string
minLength: 3
maxLength: 50
pattern: '^[a-zA-Z0-9_]+$'
description: 用户名(3-50 个字母数字下划线)
example: testuser
email:
type: string
format: email
maxLength: 100
description: 邮箱地址
example: test@example.com
password:
type: string
format: password
minLength: 8
description: 密码(至少 8 个字符)
example: password123
UpdateUserRequest:
type: object
properties:
email:
type: string
format: email
maxLength: 100
description: 邮箱地址
example: newemail@example.com
status:
type: string
enum: [active, inactive, suspended]
description: 用户状态
UserResponse:
type: object
required:
- id
- username
- email
- status
- created_at
- updated_at
properties:
id:
type: integer
description: 用户 ID
example: 1
username:
type: string
description: 用户名
example: testuser
email:
type: string
description: 邮箱地址
example: test@example.com
status:
type: string
enum: [active, inactive, suspended]
description: 用户状态
example: active
created_at:
type: string
format: date-time
description: 创建时间
example: "2025-11-12T16:00:00+08:00"
updated_at:
type: string
format: date-time
description: 更新时间
example: "2025-11-12T16:00:00+08:00"
last_login_at:
type: string
format: date-time
nullable: true
description: 最后登录时间
example: "2025-11-12T16:30:00+08:00"
ListUsersResponse:
type: object
required:
- users
- page
- page_size
- total
- total_pages
properties:
users:
type: array
items:
$ref: '#/components/schemas/UserResponse'
description: 用户列表
page:
type: integer
description: 当前页码
example: 1
page_size:
type: integer
description: 每页条数
example: 20
total:
type: integer
format: int64
description: 总记录数
example: 100
total_pages:
type: integer
description: 总页数
example: 5
# 任务相关
EmailTaskRequest:
type: object
required:
- to
- subject
- body
properties:
to:
type: string
format: email
description: 收件人邮箱
example: user@example.com
subject:
type: string
maxLength: 200
description: 邮件主题
example: Welcome to our service
body:
type: string
description: 邮件正文
example: Thank you for signing up!
cc:
type: array
items:
type: string
format: email
description: 抄送列表
example: ["manager@example.com"]
priority:
type: string
enum: [critical, default, low]
default: default
description: 任务优先级
SyncTaskRequest:
type: object
required:
- sync_type
- start_date
- end_date
properties:
sync_type:
type: string
enum: [sim_status, flow_usage, real_name]
description: 同步类型
example: sim_status
start_date:
type: string
format: date
pattern: '^\d{4}-\d{2}-\d{2}$'
description: 开始日期(YYYY-MM-DD)
example: "2025-11-01"
end_date:
type: string
format: date
pattern: '^\d{4}-\d{2}-\d{2}$'
description: 结束日期(YYYY-MM-DD)
example: "2025-11-12"
batch_size:
type: integer
minimum: 1
maximum: 1000
default: 100
description: 批量大小
priority:
type: string
enum: [critical, default, low]
default: default
description: 任务优先级
TaskResponse:
type: object
required:
- task_id
- queue
properties:
task_id:
type: string
format: uuid
description: 任务唯一 ID
example: "550e8400-e29b-41d4-a716-446655440000"
queue:
type: string
enum: [critical, default, low]
description: 任务所在队列
example: default
estimated_time:
type: string
description: 预计执行时间
example: "within 5 minutes"
responses:
BadRequest:
description: 请求参数错误
content:
application/json:
schema:
$ref: '#/components/schemas/ErrorResponse'
example:
code: 1001
msg: "参数验证失败"
data: null
timestamp: "2025-11-12T16:00:00+08:00"
Unauthorized:
description: 未授权或令牌无效
content:
application/json:
schema:
$ref: '#/components/schemas/ErrorResponse'
example:
code: 1002
msg: "缺失认证令牌"
data: null
timestamp: "2025-11-12T16:00:00+08:00"
NotFound:
description: 资源不存在
content:
application/json:
schema:
$ref: '#/components/schemas/ErrorResponse'
example:
code: 1003
msg: "用户不存在"
data: null
timestamp: "2025-11-12T16:00:00+08:00"
Conflict:
description: 资源冲突(如用户名已存在)
content:
application/json:
schema:
$ref: '#/components/schemas/ErrorResponse'
example:
code: 1004
msg: "用户名已存在"
data: null
timestamp: "2025-11-12T16:00:00+08:00"
InternalServerError:
description: 服务器内部错误
content:
application/json:
schema:
$ref: '#/components/schemas/ErrorResponse'
example:
code: 5000
msg: "服务器内部错误"
data: null
timestamp: "2025-11-12T16:00:00+08:00"


@@ -0,0 +1,644 @@
# Data Model: 数据持久化与异步任务处理集成
**Feature**: 002-gorm-postgres-asynq
**Date**: 2025-11-12
**Purpose**: 定义数据模型、配置结构和系统实体
## 概述
本文档定义了数据持久化和异步任务处理功能的数据模型,包括配置结构、数据库实体示例和任务载荷结构。
---
## 1. 配置模型
### 1.1 数据库配置
```go
// pkg/config/config.go
// DatabaseConfig 数据库连接配置
type DatabaseConfig struct {
// 连接参数
Host string `mapstructure:"host"` // 数据库主机地址
Port int `mapstructure:"port"` // 数据库端口
User string `mapstructure:"user"` // 数据库用户名
Password string `mapstructure:"password"` // 数据库密码(明文存储)
DBName string `mapstructure:"dbname"` // 数据库名称
SSLMode string `mapstructure:"sslmode"` // SSL 模式(disable, require, verify-ca, verify-full)
// 连接池配置
MaxOpenConns int `mapstructure:"max_open_conns"` // 最大打开连接数(默认 25)
MaxIdleConns int `mapstructure:"max_idle_conns"` // 最大空闲连接数(默认 10)
ConnMaxLifetime time.Duration `mapstructure:"conn_max_lifetime"` // 连接最大生命周期(默认 5m)
}
```
**字段说明**
| 字段 | 类型 | 默认值 | 说明 |
|------|------|--------|------|
| Host | string | localhost | PostgreSQL 服务器地址 |
| Port | int | 5432 | PostgreSQL 服务器端口 |
| User | string | postgres | 数据库用户名 |
| Password | string | - | 数据库密码(明文存储在配置文件中) |
| DBName | string | junhong_cmp | 数据库名称 |
| SSLMode | string | disable | SSL 连接模式 |
| MaxOpenConns | int | 25 | 最大数据库连接数 |
| MaxIdleConns | int | 10 | 最大空闲连接数 |
| ConnMaxLifetime | duration | 5m | 连接最大存活时间 |
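上表的连接参数最终要拼装成 GORM 所需的 DSN。下面是一个最小示意(`buildDSN` 为示意函数名,非项目既有代码;连接池部分以注释给出,依赖 gorm.io/gorm 与 gorm.io/driver/postgres):

```go
package main

import "fmt"

// DatabaseConfig 数据库连接配置(与上文结构一致,仅保留 DSN 相关字段)
type DatabaseConfig struct {
	Host     string
	Port     int
	User     string
	Password string
	DBName   string
	SSLMode  string
}

// buildDSN 按 PostgreSQL 关键字格式拼装 DSN,
// 结果可传给 postgres.Open(dsn) 再交给 gorm.Open 使用
func buildDSN(c DatabaseConfig) string {
	return fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s sslmode=%s",
		c.Host, c.Port, c.User, c.Password, c.DBName, c.SSLMode)
}

func main() {
	cfg := DatabaseConfig{Host: "localhost", Port: 5432, User: "postgres",
		Password: "secret", DBName: "junhong_cmp", SSLMode: "disable"}
	fmt.Println(buildDSN(cfg))
	// 连接池设置示意(需要 gorm.io/gorm 与 gorm.io/driver/postgres):
	// db, err := gorm.Open(postgres.Open(buildDSN(cfg)), &gorm.Config{})
	// sqlDB, _ := db.DB()
	// sqlDB.SetMaxOpenConns(25)
	// sqlDB.SetMaxIdleConns(10)
	// sqlDB.SetConnMaxLifetime(5 * time.Minute)
}
```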
### 1.2 任务队列配置
```go
// pkg/config/config.go
// QueueConfig 任务队列配置
type QueueConfig struct {
// 并发配置
Concurrency int `mapstructure:"concurrency"` // Worker 并发数(默认 10)
// 队列优先级配置(队列名 -> 权重)
Queues map[string]int `mapstructure:"queues"` // 例如:{"critical": 6, "default": 3, "low": 1}
// 重试配置
RetryMax int `mapstructure:"retry_max"` // 最大重试次数(默认 5)
Timeout time.Duration `mapstructure:"timeout"` // 任务超时时间(默认 10m)
}
```
**队列优先级**
- `critical`: 关键任务(权重 6,约 60% 处理时间)
- `default`: 普通任务(权重 3,约 30% 处理时间)
- `low`: 低优先级任务(权重 1,约 10% 处理时间)
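权重调度的含义可以用一个小函数直观验证(`queueShare` 为示意函数,实际调度由 Asynq 按权重近似分配):

```go
package main

import "fmt"

// queueShare 根据队列权重计算各队列大致获得的处理时间占比,
// 对应 Asynq 的加权调度语义(权重 6:3:1 ≈ 60%/30%/10%)
func queueShare(queues map[string]int) map[string]float64 {
	total := 0
	for _, w := range queues {
		total += w
	}
	share := make(map[string]float64, len(queues))
	for name, w := range queues {
		share[name] = float64(w) / float64(total)
	}
	return share
}

func main() {
	s := queueShare(map[string]int{"critical": 6, "default": 3, "low": 1})
	fmt.Printf("critical=%.0f%% default=%.0f%% low=%.0f%%\n",
		s["critical"]*100, s["default"]*100, s["low"]*100)
	// Worker 侧实际配置示意(需要 github.com/hibiken/asynq):
	// srv := asynq.NewServer(asynq.RedisClientOpt{Addr: "localhost:6379"},
	//     asynq.Config{Concurrency: 10, Queues: map[string]int{"critical": 6, "default": 3, "low": 1}})
}
```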
### 1.3 完整配置结构
```go
// pkg/config/config.go
// Config 应用配置
type Config struct {
Server ServerConfig `mapstructure:"server"`
Logging LoggingConfig `mapstructure:"logging"`
Redis RedisConfig `mapstructure:"redis"`
Database DatabaseConfig `mapstructure:"database"` // 新增
Queue QueueConfig `mapstructure:"queue"` // 新增
Middleware MiddlewareConfig `mapstructure:"middleware"`
}
```
---
## 2. 数据库实体模型
### 2.1 基础模型Base Model
```go
// internal/model/base.go
import (
"time"
"gorm.io/gorm"
)
// BaseModel 基础模型,包含通用字段
type BaseModel struct {
ID uint `gorm:"primarykey" json:"id"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
DeletedAt gorm.DeletedAt `gorm:"index" json:"-"` // 软删除
}
```
**字段说明**
- `ID`: 自增主键
- `CreatedAt`: 创建时间(GORM 自动管理)
- `UpdatedAt`: 更新时间(GORM 自动管理)
- `DeletedAt`: 软删除时间(GORM 查询时自动过滤已删除记录)
### 2.2 示例实体:用户模型
```go
// internal/model/user.go
// User 用户实体
type User struct {
BaseModel
// 基本信息
Username string `gorm:"uniqueIndex;not null;size:50" json:"username"`
Email string `gorm:"uniqueIndex;not null;size:100" json:"email"`
Password string `gorm:"not null;size:255" json:"-"` // 不返回给客户端
// 状态字段
Status string `gorm:"not null;size:20;default:'active';index" json:"status"`
// 元数据
LastLoginAt *time.Time `json:"last_login_at,omitempty"`
}
// TableName 指定表名
func (User) TableName() string {
return "tb_user"
}
```
**索引策略**
- `username`: 唯一索引(快速查找和去重)
- `email`: 唯一索引(快速查找和去重)
- `status`: 普通索引(状态过滤查询)
- `deleted_at`: 自动索引(软删除过滤)
**验证规则**
- `username`: 长度 3-50 字符,字母数字下划线
- `email`: 标准邮箱格式
- `password`: 长度 >= 8 字符,bcrypt 哈希存储
- `status`: 枚举值(active, inactive, suspended)
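上述用户名规则可以用一个纯函数示意(正则取自 OpenAPI 契约中的 pattern;`validateUsername` 为示意函数,项目实际通过 validator 标签校验):

```go
package main

import (
	"fmt"
	"regexp"
)

// usernamePattern 对应验证规则:3-50 个字母数字下划线
var usernamePattern = regexp.MustCompile(`^[a-zA-Z0-9_]{3,50}$`)

// validateUsername 返回用户名是否符合验证规则(示意实现,
// 项目实际使用 `validate:"required,min=3,max=50,alphanum"` 标签)
func validateUsername(name string) bool {
	return usernamePattern.MatchString(name)
}

func main() {
	fmt.Println(validateUsername("testuser")) // true
	fmt.Println(validateUsername("ab"))       // false:不足 3 个字符
	fmt.Println(validateUsername("bad name")) // false:包含空格
}
```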
### 2.3 示例实体:订单模型(演示手动关联关系)
```go
// internal/model/order.go
// Order 订单实体
type Order struct {
BaseModel
// 业务唯一键
OrderID string `gorm:"uniqueIndex;not null;size:50" json:"order_id"`
// 关联关系(仅存储 ID,不使用 GORM 关联)
UserID uint `gorm:"not null;index" json:"user_id"`
// 订单信息
Amount int64 `gorm:"not null" json:"amount"` // 金额(分)
Status string `gorm:"not null;size:20;index" json:"status"`
Remark string `gorm:"size:500" json:"remark,omitempty"`
// 时间字段
PaidAt *time.Time `json:"paid_at,omitempty"`
CompletedAt *time.Time `json:"completed_at,omitempty"`
}
// TableName 指定表名
func (Order) TableName() string {
return "tb_order"
}
```
**关联关系说明**
- `UserID`: 存储关联用户的 ID(普通字段,无数据库外键约束)
- **无 ORM 关联**:遵循 Constitution Principle IX,不使用 `foreignKey`、`belongsTo` 等标签
- 关联数据查询在 Service 层手动实现(见下方示例)
**手动查询关联数据示例**
```go
// internal/service/order/service.go
// GetOrderWithUser 查询订单及关联的用户信息
func (s *Service) GetOrderWithUser(ctx context.Context, orderID uint) (*OrderDetail, error) {
// 1. 查询订单
order, err := s.store.Order.GetByID(ctx, orderID)
if err != nil {
return nil, fmt.Errorf("查询订单失败: %w", err)
}
// 2. 手动查询关联的用户
user, err := s.store.User.GetByID(ctx, order.UserID)
if err != nil {
return nil, fmt.Errorf("查询用户失败: %w", err)
}
// 3. 组装返回数据
return &OrderDetail{
Order: order,
User: user,
}, nil
}
// ListOrdersByUserID 查询指定用户的订单列表
func (s *Service) ListOrdersByUserID(ctx context.Context, userID uint, page, pageSize int) ([]*Order, int64, error) {
return s.store.Order.ListByUserID(ctx, userID, page, pageSize)
}
```
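列表场景下,逐条查询关联用户会产生 N+1 问题;手动关联的常见做法是先收集去重后的 ID,再用一条 IN 查询批量加载。下面是收集 ID 的示意(`collectUserIDs` 为示意函数,Order 字段已裁剪):

```go
package main

import "fmt"

// Order 仅保留示意所需字段
type Order struct {
	ID     uint
	UserID uint
}

// collectUserIDs 收集订单列表中去重后的用户 ID,
// 之后可用一条 IN 查询批量加载用户,避免 N+1 问题:
//   s.db.Where("id IN ?", ids).Find(&users)
func collectUserIDs(orders []*Order) []uint {
	seen := make(map[uint]struct{}, len(orders))
	ids := make([]uint, 0, len(orders))
	for _, o := range orders {
		if _, ok := seen[o.UserID]; ok {
			continue
		}
		seen[o.UserID] = struct{}{}
		ids = append(ids, o.UserID)
	}
	return ids
}

func main() {
	orders := []*Order{{ID: 1, UserID: 7}, {ID: 2, UserID: 7}, {ID: 3, UserID: 9}}
	fmt.Println(collectUserIDs(orders)) // [7 9]
}
```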
**状态流转**
```
pending → paid → processing → completed
pending → cancelled
```
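状态流转可以收敛为一张显式的流转表并在 Service 层校验(示意实现;此处假定仅允许从 pending 取消,实际允许的取消时机以业务为准):

```go
package main

import "fmt"

// 订单状态常量(与 pkg/constants 中定义一致)
const (
	OrderStatusPending    = "pending"
	OrderStatusPaid       = "paid"
	OrderStatusProcessing = "processing"
	OrderStatusCompleted  = "completed"
	OrderStatusCancelled  = "cancelled"
)

// validTransitions 允许的状态流转表:pending 可支付或取消,
// paid → processing → completed 为正常链路
var validTransitions = map[string][]string{
	OrderStatusPending:    {OrderStatusPaid, OrderStatusCancelled},
	OrderStatusPaid:       {OrderStatusProcessing},
	OrderStatusProcessing: {OrderStatusCompleted},
}

// CanTransition 判断订单能否从 from 流转到 to
func CanTransition(from, to string) bool {
	for _, next := range validTransitions[from] {
		if next == to {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(CanTransition(OrderStatusPending, OrderStatusPaid))   // true
	fmt.Println(CanTransition(OrderStatusCompleted, OrderStatusPaid)) // false
}
```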
---
## 3. 数据传输对象DTO
### 3.1 用户 DTO
```go
// internal/model/user_dto.go
// CreateUserRequest 创建用户请求
type CreateUserRequest struct {
Username string `json:"username" validate:"required,min=3,max=50,alphanum"`
Email string `json:"email" validate:"required,email"`
Password string `json:"password" validate:"required,min=8"`
}
// UpdateUserRequest 更新用户请求
type UpdateUserRequest struct {
Email *string `json:"email" validate:"omitempty,email"`
Status *string `json:"status" validate:"omitempty,oneof=active inactive suspended"`
}
// UserResponse 用户响应
type UserResponse struct {
ID uint `json:"id"`
Username string `json:"username"`
Email string `json:"email"`
Status string `json:"status"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
LastLoginAt *time.Time `json:"last_login_at,omitempty"`
}
// ListUsersResponse 用户列表响应
type ListUsersResponse struct {
Users []UserResponse `json:"users"`
Page int `json:"page"`
PageSize int `json:"page_size"`
Total int64 `json:"total"`
TotalPages int `json:"total_pages"`
}
```
---
## 4. 任务载荷模型
### 4.1 任务类型常量
```go
// pkg/constants/constants.go
const (
// 任务类型
TaskTypeEmailSend = "email:send" // 发送邮件
TaskTypeDataSync = "data:sync" // 数据同步
TaskTypeSIMStatusSync = "sim:status:sync" // SIM 卡状态同步
TaskTypeCommission = "commission:calculate" // 分佣计算
)
```
### 4.2 邮件任务载荷
```go
// internal/task/email.go
// EmailPayload 邮件任务载荷
type EmailPayload struct {
RequestID string `json:"request_id"` // 幂等性标识
To string `json:"to"` // 收件人
Subject string `json:"subject"` // 主题
Body string `json:"body"` // 正文
CC []string `json:"cc,omitempty"` // 抄送
Attachments []string `json:"attachments,omitempty"` // 附件路径
}
```
### 4.3 数据同步任务载荷
```go
// internal/task/sync.go
// DataSyncPayload 数据同步任务载荷
type DataSyncPayload struct {
RequestID string `json:"request_id"` // 幂等性标识
SyncType string `json:"sync_type"` // 同步类型(sim_status, flow_usage, real_name)
StartDate string `json:"start_date"` // 开始日期(YYYY-MM-DD)
EndDate string `json:"end_date"` // 结束日期(YYYY-MM-DD)
BatchSize int `json:"batch_size"` // 批量大小(默认 100)
}
```
### 4.4 SIM 卡状态同步载荷
```go
// internal/task/sim.go
// SIMStatusSyncPayload SIM 卡状态同步任务载荷
type SIMStatusSyncPayload struct {
RequestID string `json:"request_id"` // 幂等性标识
ICCIDs []string `json:"iccids"` // ICCID 列表
ForceSync bool `json:"force_sync"` // 强制同步(忽略缓存)
}
```
---
## 5. 数据库 SchemaSQL
### 5.1 初始化 Schema
```sql
-- migrations/000001_init_schema.up.sql
-- 用户表
CREATE TABLE IF NOT EXISTS tb_user (
id SERIAL PRIMARY KEY,
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
deleted_at TIMESTAMP,
-- 基本信息
username VARCHAR(50) NOT NULL,
email VARCHAR(100) NOT NULL,
password VARCHAR(255) NOT NULL,
-- 状态字段
status VARCHAR(20) NOT NULL DEFAULT 'active',
-- 元数据
last_login_at TIMESTAMP,
-- 唯一约束
CONSTRAINT uk_user_username UNIQUE (username),
CONSTRAINT uk_user_email UNIQUE (email)
);
-- 用户表索引
CREATE INDEX idx_user_deleted_at ON tb_user(deleted_at);
CREATE INDEX idx_user_status ON tb_user(status);
CREATE INDEX idx_user_created_at ON tb_user(created_at);
-- 订单表
CREATE TABLE IF NOT EXISTS tb_order (
id SERIAL PRIMARY KEY,
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
deleted_at TIMESTAMP,
-- 业务唯一键
order_id VARCHAR(50) NOT NULL,
-- 关联关系(注意:无数据库外键约束,在代码中管理)
user_id INTEGER NOT NULL,
-- 订单信息
amount BIGINT NOT NULL,
status VARCHAR(20) NOT NULL DEFAULT 'pending',
remark VARCHAR(500),
-- 时间字段
paid_at TIMESTAMP,
completed_at TIMESTAMP,
-- 唯一约束
CONSTRAINT uk_order_order_id UNIQUE (order_id)
);
-- 订单表索引
CREATE INDEX idx_order_deleted_at ON tb_order(deleted_at);
CREATE INDEX idx_order_user_id ON tb_order(user_id);
CREATE INDEX idx_order_status ON tb_order(status);
CREATE INDEX idx_order_created_at ON tb_order(created_at);
-- 添加注释
COMMENT ON TABLE tb_user IS '用户表';
COMMENT ON COLUMN tb_user.username IS '用户名(唯一)';
COMMENT ON COLUMN tb_user.email IS '邮箱(唯一)';
COMMENT ON COLUMN tb_user.password IS '密码(bcrypt 哈希)';
COMMENT ON COLUMN tb_user.status IS '用户状态(active, inactive, suspended)';
COMMENT ON COLUMN tb_user.deleted_at IS '软删除时间';
COMMENT ON TABLE tb_order IS '订单表';
COMMENT ON COLUMN tb_order.order_id IS '订单号(业务唯一键)';
COMMENT ON COLUMN tb_order.user_id IS '用户 ID(在代码中维护关联,无数据库外键)';
COMMENT ON COLUMN tb_order.amount IS '金额(分)';
COMMENT ON COLUMN tb_order.status IS '订单状态(pending, paid, processing, completed, cancelled)';
COMMENT ON COLUMN tb_order.deleted_at IS '软删除时间';
```
**重要说明**
- ✅ **无外键约束**:`user_id` 仅作为普通字段存储,无 `REFERENCES` 约束
- ✅ **无触发器**:`created_at` 和 `updated_at` 由 GORM 自动管理,无需数据库触发器
- ✅ **遵循 Constitution Principle IX**:表关系在代码层面手动维护
### 5.2 回滚 Schema
```sql
-- migrations/000001_init_schema.down.sql
-- 删除表(按依赖顺序倒序删除)
DROP TABLE IF EXISTS tb_order;
DROP TABLE IF EXISTS tb_user;
```
---
## 6. Redis 键结构
### 6.1 任务锁键
```go
// pkg/constants/redis.go
// RedisTaskLockKey 生成任务锁键
// 格式: task:lock:{request_id}
// 用途: 幂等性控制
// 过期时间: 24 小时
func RedisTaskLockKey(requestID string) string {
return fmt.Sprintf("task:lock:%s", requestID)
}
```
**使用示例**
```go
key := constants.RedisTaskLockKey("req-123456")
// 结果: "task:lock:req-123456"
```
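幂等锁的语义是“首次获取成功、重复获取失败”。下面用内存 map 示意这一语义(`TaskLocker` 为示意类型,实际实现基于 Redis SetNX 并带 24 小时过期):

```go
package main

import (
	"fmt"
	"sync"
)

// TaskLocker 用内存 map 模拟 Redis SetNX 的幂等锁语义。
// 实际实现应使用:
//   rdb.SetNX(ctx, constants.RedisTaskLockKey(requestID), 1, 24*time.Hour)
type TaskLocker struct {
	mu    sync.Mutex
	locks map[string]struct{}
}

// NewTaskLocker 创建锁管理器
func NewTaskLocker() *TaskLocker {
	return &TaskLocker{locks: make(map[string]struct{})}
}

// TryLock 尝试获取 request_id 对应的锁,返回是否为首次提交
func (l *TaskLocker) TryLock(requestID string) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	key := "task:lock:" + requestID
	if _, exists := l.locks[key]; exists {
		return false // 重复提交,直接丢弃或返回已有 task_id
	}
	l.locks[key] = struct{}{}
	return true
}

func main() {
	locker := NewTaskLocker()
	fmt.Println(locker.TryLock("req-123456")) // true:首次提交
	fmt.Println(locker.TryLock("req-123456")) // false:重复提交被拦截
}
```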
### 6.2 任务状态键
```go
// RedisTaskStatusKey 生成任务状态键
// 格式: task:status:{task_id}
// 用途: 存储任务执行状态
// 过期时间: 7 天
func RedisTaskStatusKey(taskID string) string {
return fmt.Sprintf("task:status:%s", taskID)
}
```
---
## 7. 常量定义
### 7.1 用户状态常量
```go
// pkg/constants/constants.go
const (
// 用户状态
UserStatusActive = "active" // 激活
UserStatusInactive = "inactive" // 未激活
UserStatusSuspended = "suspended" // 暂停
)
```
### 7.2 订单状态常量
```go
const (
// 订单状态
OrderStatusPending = "pending" // 待支付
OrderStatusPaid = "paid" // 已支付
OrderStatusProcessing = "processing" // 处理中
OrderStatusCompleted = "completed" // 已完成
OrderStatusCancelled = "cancelled" // 已取消
)
```
### 7.3 数据库配置常量
```go
const (
// 数据库连接池默认值
DefaultMaxOpenConns = 25
DefaultMaxIdleConns = 10
DefaultConnMaxLifetime = 5 * time.Minute
// 查询限制
DefaultPageSize = 20
MaxPageSize = 100
// 慢查询阈值
SlowQueryThreshold = 100 * time.Millisecond
)
```
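分页参数的缺省与上限钳制逻辑可以封装为一个小函数(`NormalizePagination` 为示意函数名,使用上面定义的常量):

```go
package main

import "fmt"

const (
	DefaultPageSize = 20
	MaxPageSize     = 100
)

// NormalizePagination 规范化分页参数:页码最小为 1,
// 每页条数缺省取 DefaultPageSize,上限钳制到 MaxPageSize
func NormalizePagination(page, pageSize int) (int, int) {
	if page < 1 {
		page = 1
	}
	if pageSize <= 0 {
		pageSize = DefaultPageSize
	}
	if pageSize > MaxPageSize {
		pageSize = MaxPageSize
	}
	return page, pageSize
}

func main() {
	p, ps := NormalizePagination(0, 500)
	fmt.Println(p, ps) // 1 100
}
```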
### 7.4 任务队列常量
```go
const (
// 队列名称
QueueCritical = "critical"
QueueDefault = "default"
QueueLow = "low"
// 默认重试配置
DefaultRetryMax = 5
DefaultTimeout = 10 * time.Minute
// 默认并发数
DefaultConcurrency = 10
)
```
---
## 8. 实体关系图ER Diagram
```
┌─────────────────┐
│ tb_user │
├─────────────────┤
│ id (PK) │
│ username (UQ) │
│ email (UQ) │
│ password │
│ status │
│ last_login_at │
│ created_at │
│ updated_at │
│ deleted_at │
└────────┬────────┘
│ 1:N (代码层面维护)
┌────────▼────────┐
│ tb_order │
├─────────────────┤
│ id (PK) │
│ order_id (UQ) │
│ user_id │ ← 存储关联 ID,无数据库外键
│ amount │
│ status │
│ remark │
│ paid_at │
│ completed_at │
│ created_at │
│ updated_at │
│ deleted_at │
└─────────────────┘
```
**关系说明**
- 一个用户可以有多个订单(1:N 关系)
- 订单通过 `user_id` 字段存储用户 ID**在代码层面维护关联**
- **无数据库外键约束**:遵循 Constitution Principle IX
- 关联查询在 Service 层手动实现(参见 2.3 节示例代码)
---
## 9. 数据验证规则
### 9.1 用户字段验证
| 字段 | 验证规则 | 错误消息 |
|------|----------|----------|
| username | required, min=3, max=50, alphanum | 用户名必填,3-50 个字母数字字符 |
| email | required, email | 邮箱必填且格式正确 |
| password | required, min=8 | 密码必填,至少 8 个字符 |
| status | oneof=active inactive suspended | 状态必须为 active, inactive, suspended 之一 |
### 9.2 订单字段验证
| 字段 | 验证规则 | 错误消息 |
|------|----------|----------|
| order_id | required, min=10, max=50 | 订单号必填,10-50 个字符 |
| user_id | required, gt=0 | 用户 ID 必填且大于 0 |
| amount | required, gte=0 | 金额必填且大于等于 0 |
| status | oneof=pending paid processing completed cancelled | 状态值无效 |
---
## 10. 数据迁移版本
| 版本 | 文件名 | 描述 | 日期 |
|------|--------|------|------|
| 1 | 000001_init_schema | 初始化用户表和订单表 | 2025-11-12 |
**添加新迁移**
```bash
# 创建新迁移文件
migrate create -ext sql -dir migrations -seq add_sim_table
# 生成文件:
# migrations/000002_add_sim_table.up.sql
# migrations/000002_add_sim_table.down.sql
```
---
## 总结
本数据模型定义了:
1. **配置模型**:数据库连接配置、任务队列配置
2. **实体模型**:基础模型、用户模型、订单模型(示例)
3. **DTO 模型**:请求/响应数据传输对象
4. **任务载荷**:各类异步任务的载荷结构
5. **数据库 Schema**SQL 迁移脚本
6. **Redis 键结构**:任务锁、任务状态等键生成函数
7. **常量定义**:状态枚举、默认配置值
8. **验证规则**:字段级别的数据验证规则
**设计原则**
- 遵循 GORM 约定BaseModel、软删除
- 遵循 Constitution 命名规范PascalCase 字段、snake_case 列名)
- 统一使用常量定义(避免硬编码)
- 支持软删除和审计字段created_at, updated_at
- 使用唯一约束、NOT NULL 等数据库约束保证数据完整性(不含外键约束,遵循原则 IX)


@@ -0,0 +1,195 @@
# Implementation Plan: 数据持久化与异步任务处理集成
**Branch**: `002-gorm-postgres-asynq` | **Date**: 2025-11-13 | **Spec**: [spec.md](./spec.md)
**Input**: Feature specification from `/specs/002-gorm-postgres-asynq/spec.md`
**Note**: This template is filled in by the `/speckit.plan` command. See `.specify/templates/commands/plan.md` for the execution workflow.
## Summary
本功能集成 GORM + PostgreSQL + Asynq,实现可靠的数据持久化和异步任务处理能力。系统支持标准 CRUD 操作、事务处理、数据库迁移管理、异步任务队列(支持重试、优先级、定时任务)、健康检查和优雅关闭。技术选型基于项目 Constitution 要求:使用 golang-migrate 管理数据库迁移(不使用 GORM AutoMigrate),通过 Redis 持久化任务状态确保故障恢复,所有任务处理逻辑设计为幂等操作。
## Technical Context
**Language/Version**: Go 1.25.4
**Primary Dependencies**: Fiber (HTTP 框架), GORM (ORM), Asynq (任务队列), Viper (配置), Zap (日志), golang-migrate (数据库迁移)
**Storage**: PostgreSQL 14+(主数据库), Redis 6.0+(任务队列存储)
**Testing**: Go 标准 testing 框架, testcontainers (集成测试)
**Target Platform**: Linux/macOS 服务器
**Project Type**: Backend API + Worker 服务(双进程架构)
**Performance Goals**: API 响应时间 P95 < 200ms, 数据库查询 < 50ms, 任务队列处理速率 100 tasks/s
**Constraints**: 数据库连接池最大 25 连接, Worker 默认并发 10, 任务超时 10 分钟, 慢查询阈值 100ms
**Scale/Scope**: 支持 1000+ 并发连接, 10000+ 待处理任务队列, 水平扩展 Worker 进程
## Constitution Check
*GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*
**Tech Stack Adherence**:
- [ ] Feature uses Fiber + GORM + Viper + Zap + Lumberjack.v2 + Validator + sonic JSON + Asynq + PostgreSQL
- [ ] No native calls bypass framework (no `database/sql`, `net/http`, `encoding/json` direct use)
- [ ] All HTTP operations use Fiber framework
- [ ] All database operations use GORM
- [ ] All async tasks use Asynq
- [ ] Uses Go official toolchain: `go fmt`, `go vet`, `golangci-lint`
- [ ] Uses Go Modules for dependency management
**Code Quality Standards**:
- [ ] Follows Handler → Service → Store → Model architecture
- [ ] Handler layer only handles HTTP, no business logic
- [ ] Service layer contains business logic with cross-module support
- [ ] Store layer manages all data access with transaction support
- [ ] Uses dependency injection via struct fields (not constructor patterns)
- [ ] Unified error codes in `pkg/errors/`
- [ ] Unified API responses via `pkg/response/`
- [ ] All constants defined in `pkg/constants/`
- [ ] All Redis keys managed via key generation functions (no hardcoded strings)
- [ ] **No hardcoded magic numbers or strings (3+ occurrences must be constants)**
- [ ] **Defined constants are used instead of hardcoding duplicate values**
- [ ] **Code comments prefer Chinese for readability (implementation comments in Chinese)**
- [ ] **Log messages use Chinese (Info/Warn/Error/Debug logs in Chinese)**
- [ ] **Error messages support Chinese (user-facing errors have Chinese messages)**
- [ ] All exported functions/types have Go-style doc comments
- [ ] Code formatted with `gofmt`
- [ ] Follows Effective Go and Go Code Review Comments
**Documentation Standards** (Constitution Principle VII):
- [ ] Feature summary docs placed in `docs/{feature-id}/` mirroring `specs/{feature-id}/`
- [ ] Summary doc filenames use Chinese (功能总结.md, 使用指南.md, etc.)
- [ ] Summary doc content uses Chinese
- [ ] README.md updated with brief Chinese summary (2-3 sentences)
- [ ] Documentation is concise for first-time contributors
**Go Idiomatic Design**:
- [ ] Package structure is flat (max 2-3 levels), organized by feature
- [ ] Interfaces are small (1-3 methods), defined at use site
- [ ] No Java-style patterns: no I-prefix, no Impl-suffix, no getters/setters
- [ ] Error handling is explicit (return errors, no panic/recover abuse)
- [ ] Uses composition over inheritance
- [ ] Uses goroutines and channels (not thread pools)
- [ ] Uses `context.Context` for cancellation and timeouts
- [ ] Naming follows Go conventions: short receivers, consistent abbreviations (URL, ID, HTTP)
- [ ] No Hungarian notation or type prefixes
- [ ] Simple constructors (New/NewXxx), no Builder pattern unless necessary
**Testing Standards**:
- [ ] Unit tests for all core business logic (Service layer)
- [ ] Integration tests for all API endpoints
- [ ] Tests use Go standard testing framework
- [ ] Test files named `*_test.go` in same directory
- [ ] Test functions use `Test` prefix, benchmarks use `Benchmark` prefix
- [ ] Table-driven tests for multiple test cases
- [ ] Test helpers marked with `t.Helper()`
- [ ] Tests are independent (no external service dependencies)
- [ ] Target coverage: 70%+ overall, 90%+ for core business
**User Experience Consistency**:
- [ ] All APIs use unified JSON response format
- [ ] Error responses include clear error codes and bilingual messages
- [ ] RESTful design principles followed
- [ ] Unified pagination parameters (page, page_size, total)
- [ ] Time fields use ISO 8601 format (RFC3339)
- [ ] Currency amounts use integers (cents) to avoid float precision issues
**Performance Requirements**:
- [ ] API response time (P95) < 200ms, (P99) < 500ms
- [ ] Batch operations use bulk queries/inserts
- [ ] All database queries have appropriate indexes
- [ ] List queries implement pagination (default 20, max 100)
- [ ] Non-realtime operations use async tasks
- [ ] Database and Redis connection pools properly configured
- [ ] Uses goroutines/channels for concurrency (not thread pools)
- [ ] Uses `context.Context` for timeout control
- [ ] Uses `sync.Pool` for frequently allocated objects
**Access Logging Standards** (Constitution Principle VIII):
- [ ] ALL HTTP requests logged to access.log without exception
- [ ] Request parameters (query + body) logged (limited to 50KB)
- [ ] Response parameters (body) logged (limited to 50KB)
- [ ] Logging happens via centralized Logger middleware (pkg/logger/Middleware())
- [ ] No middleware bypasses access logging (including auth failures, rate limits)
- [ ] Body truncation indicates "... (truncated)" when over 50KB limit
- [ ] Access log includes all required fields: method, path, query, status, duration_ms, request_id, ip, user_agent, user_id, request_body, response_body
## Project Structure
### Documentation (this feature)
**设计文档specs/ 目录)**:开发前的规划和设计
```text
specs/[###-feature]/
├── plan.md # This file (/speckit.plan command output)
├── research.md # Phase 0 output (/speckit.plan command)
├── data-model.md # Phase 1 output (/speckit.plan command)
├── quickstart.md # Phase 1 output (/speckit.plan command)
├── contracts/ # Phase 1 output (/speckit.plan command)
└── tasks.md # Phase 2 output (/speckit.tasks command - NOT created by /speckit.plan)
```
**总结文档docs/ 目录)**:开发完成后的总结和使用指南(遵循 Constitution Principle VII
```text
docs/[###-feature]/
├── 功能总结.md          # 功能概述、核心实现、技术要点MUST 使用中文命名和内容)
├── 使用指南.md          # 如何使用该功能的详细说明MUST 使用中文命名和内容)
└── 架构说明.md          # 架构设计和技术决策可选MUST 使用中文命名和内容)
```
**README.md 更新**:每次完成功能后 MUST 在 README.md 添加简短描述2-3 句话,中文)
### Source Code (repository root)
```text
cmd/
├── api/                  # API 服务入口(已存在)
└── worker/               # Worker 进程入口(本次新增)
internal/
├── handler/              # HTTP 处理层(已存在)
├── service/              # 业务逻辑层(已存在)
├── model/                # 数据模型(已存在)
├── store/
│   └── postgres/         # 数据访问层,基于 GORM本次新增
└── task/                 # 异步任务处理器(本次新增)
pkg/
├── database/             # PostgreSQL 连接初始化(本次新增)
└── queue/                # Asynq 客户端和服务端封装(本次新增)
migrations/               # 数据库迁移文件SQL本次新增
tests/
├── integration/          # 集成测试
└── unit/                 # 单元测试
```
**Structure Decision**: 采用 Backend API + Worker 双进程架构。项目已存在完整的 Fiber 后端结构cmd/api/、internal/handler/、internal/service/、internal/store/、internal/model/),本次功能在此基础上添加:
- `cmd/worker/`: Worker 进程入口
- `pkg/database/`: PostgreSQL 连接初始化
- `pkg/queue/`: Asynq 客户端和服务端封装
- `internal/task/`: 异步任务处理器
- `internal/store/postgres/`: 数据访问层(基于 GORM
- `migrations/`: 数据库迁移文件SQL
现有目录结构已符合 Constitution 分层架构要求Handler → Service → Store → Model本功能遵循该架构。
## Complexity Tracking
> **无宪法违规** - 本功能完全符合项目 Constitution 要求,无需例外说明。

# Quick Start Guide: 数据持久化与异步任务处理集成
**Feature**: 002-gorm-postgres-asynq
**Date**: 2025-11-12
**Purpose**: 快速开始指南和使用示例
## 概述
本指南帮助开发者快速搭建和使用 GORM + PostgreSQL + Asynq 集成的数据持久化和异步任务处理功能。
---
## 前置要求
### 系统要求
- Go 1.25.4+
- PostgreSQL 14+
- Redis 6.0+
- golang-migrate CLI 工具
### 安装依赖
```bash
# 安装 Go 依赖
go mod tidy
# 安装 golang-migratemacOS
brew install golang-migrate
# 安装 golang-migrateLinux
curl -L https://github.com/golang-migrate/migrate/releases/download/v4.15.2/migrate.linux-amd64.tar.gz | tar xvz
sudo mv migrate /usr/local/bin/
# 或使用 Go install
go install -tags 'postgres' github.com/golang-migrate/migrate/v4/cmd/migrate@latest
```
---
## 步骤 1: 启动 PostgreSQL
### 使用 Docker推荐
```bash
# 启动 PostgreSQL 容器
docker run --name postgres-dev \
-e POSTGRES_USER=postgres \
-e POSTGRES_PASSWORD=password \
-e POSTGRES_DB=junhong_cmp \
-p 5432:5432 \
-d postgres:14
# 验证运行状态
docker ps | grep postgres-dev
```
### 使用本地安装
```bash
# macOS
brew install postgresql@14
brew services start postgresql@14
# 创建数据库
createdb junhong_cmp
```
### 验证连接
```bash
# 测试连接
psql -h localhost -p 5432 -U postgres -d junhong_cmp
# 如果成功,会进入 PostgreSQL 命令行
# 输入 \q 退出
```
---
## 步骤 2: 启动 Redis
```bash
# 使用 Docker
docker run --name redis-dev \
-p 6379:6379 \
-d redis:7-alpine
# 或使用本地安装macOS
brew install redis
brew services start redis
# 验证 Redis
redis-cli ping
# 应返回: PONG
```
---
## 步骤 3: 配置数据库连接
编辑配置文件 `configs/config.yaml`,添加数据库和队列配置:
```yaml
# configs/config.yaml
# 数据库配置
database:
host: localhost
port: 5432
user: postgres
password: password # 开发环境明文存储,生产环境使用环境变量
dbname: junhong_cmp
  sslmode: disable       # 开发环境禁用 SSL生产环境使用 require
max_open_conns: 25
max_idle_conns: 10
conn_max_lifetime: 5m
# 任务队列配置
queue:
concurrency: 10 # Worker 并发数
queues: # 队列优先级(权重)
    critical: 6          # 关键任务60%
    default: 3           # 普通任务30%
    low: 1               # 低优先级10%
retry_max: 5 # 最大重试次数
timeout: 10m # 任务超时时间
```
---
## 步骤 4: 运行数据库迁移
### 方法 1: 使用迁移脚本(推荐)
```bash
# 赋予执行权限
chmod +x scripts/migrate.sh
# 向上迁移(应用所有迁移)
./scripts/migrate.sh up
# 查看当前版本
./scripts/migrate.sh version
# 回滚最后一次迁移
./scripts/migrate.sh down 1
# 创建新迁移
./scripts/migrate.sh create add_sim_table
```
### 方法 2: 直接使用 migrate CLI
```bash
# 设置数据库 URL
export DATABASE_URL="postgresql://postgres:password@localhost:5432/junhong_cmp?sslmode=disable"
# 向上迁移
migrate -path migrations -database "$DATABASE_URL" up
# 查看版本
migrate -path migrations -database "$DATABASE_URL" version
```
### 验证迁移成功
```bash
# 连接数据库
psql -h localhost -p 5432 -U postgres -d junhong_cmp
# 查看表
\dt
# 应该看到:
# tb_user
# tb_order
# schema_migrations由 golang-migrate 创建)
# 退出
\q
```
---
## 步骤 5: 启动 API 服务
```bash
# 从项目根目录运行
go run cmd/api/main.go
# 预期输出:
# {"level":"info","timestamp":"...","message":"PostgreSQL 连接成功","host":"localhost","port":5432}
# {"level":"info","timestamp":"...","message":"Redis 连接成功","addr":"localhost:6379"}
# {"level":"info","timestamp":"...","message":"服务启动成功","host":"0.0.0.0","port":8080}
```
### 验证 API 服务
```bash
# 测试健康检查
curl http://localhost:8080/health
# 预期响应:
# {
# "status": "ok",
# "postgres": "up",
# "redis": "up"
# }
```
---
## 步骤 6: 启动 Worker 服务
打开新的终端窗口:
```bash
# 从项目根目录运行
go run cmd/worker/main.go
# 预期输出:
# {"level":"info","timestamp":"...","message":"PostgreSQL 连接成功","host":"localhost","port":5432}
# {"level":"info","timestamp":"...","message":"Redis 连接成功","addr":"localhost:6379"}
# {"level":"info","timestamp":"...","message":"Worker 启动成功","concurrency":10}
```
---
## 使用示例
### 示例 1: 数据库 CRUD 操作
#### 创建用户
```bash
curl -X POST http://localhost:8080/api/v1/users \
-H "Content-Type: application/json" \
-H "token: valid_token_here" \
-d '{
"username": "testuser",
"email": "test@example.com",
"password": "password123"
}'
# 响应:
# {
# "code": 0,
# "msg": "success",
# "data": {
# "id": 1,
# "username": "testuser",
# "email": "test@example.com",
# "status": "active",
# "created_at": "2025-11-12T16:00:00+08:00",
# "updated_at": "2025-11-12T16:00:00+08:00"
# },
# "timestamp": "2025-11-12T16:00:00+08:00"
# }
```
#### 查询用户
```bash
curl http://localhost:8080/api/v1/users/1 \
-H "token: valid_token_here"
# 响应:
# {
# "code": 0,
# "msg": "success",
# "data": {
# "id": 1,
# "username": "testuser",
# "email": "test@example.com",
# "status": "active",
# ...
# }
# }
```
#### 更新用户
```bash
curl -X PUT http://localhost:8080/api/v1/users/1 \
-H "Content-Type: application/json" \
-H "token: valid_token_here" \
-d '{
"email": "newemail@example.com",
"status": "inactive"
}'
```
#### 列表查询(分页)
```bash
curl "http://localhost:8080/api/v1/users?page=1&page_size=20" \
-H "token: valid_token_here"
# 响应:
# {
# "code": 0,
# "msg": "success",
# "data": {
# "users": [...],
# "page": 1,
# "page_size": 20,
# "total": 100,
# "total_pages": 5
# }
# }
```
#### 删除用户(软删除)
```bash
curl -X DELETE http://localhost:8080/api/v1/users/1 \
-H "token: valid_token_here"
```
### 示例 2: 提交异步任务
#### 提交邮件发送任务
```bash
curl -X POST http://localhost:8080/api/v1/tasks/email \
-H "Content-Type: application/json" \
-H "token: valid_token_here" \
-d '{
"to": "user@example.com",
"subject": "Welcome",
"body": "Welcome to our service!"
}'
# 响应:
# {
# "code": 0,
# "msg": "任务已提交",
# "data": {
# "task_id": "550e8400-e29b-41d4-a716-446655440000",
# "queue": "default"
# }
# }
```
#### 提交数据同步任务(高优先级)
```bash
curl -X POST http://localhost:8080/api/v1/tasks/sync \
-H "Content-Type: application/json" \
-H "token: valid_token_here" \
-d '{
"sync_type": "sim_status",
"start_date": "2025-11-01",
"end_date": "2025-11-12",
"priority": "critical"
}'
```
### 示例 3: 直接在代码中使用数据库
```go
// internal/service/user/service.go
package user
import (
	"context"

	"github.com/break/junhong_cmp_fiber/internal/model"
	"github.com/break/junhong_cmp_fiber/internal/store/postgres"
	"github.com/break/junhong_cmp_fiber/pkg/constants"
	"github.com/break/junhong_cmp_fiber/pkg/errors" // 项目统一错误包,提供 Is/New/CodeNotFound示例假设
	"go.uber.org/zap"
	"golang.org/x/crypto/bcrypt"
	"gorm.io/gorm"
)

// validate 假设为包级 validator 实例go-playground/validator已在别处初始化
type Service struct {
store *postgres.Store
logger *zap.Logger
}
// CreateUser 创建用户
func (s *Service) CreateUser(ctx context.Context, req *model.CreateUserRequest) (*model.User, error) {
// 参数验证
if err := validate.Struct(req); err != nil {
return nil, err
}
// 密码哈希
hashedPassword, err := bcrypt.GenerateFromPassword([]byte(req.Password), bcrypt.DefaultCost)
if err != nil {
return nil, err
}
// 创建用户
user := &model.User{
Username: req.Username,
Email: req.Email,
Password: string(hashedPassword),
Status: constants.UserStatusActive,
}
if err := s.store.User.Create(ctx, user); err != nil {
s.logger.Error("创建用户失败",
zap.String("username", req.Username),
zap.Error(err))
return nil, err
}
s.logger.Info("用户创建成功",
zap.Uint("user_id", user.ID),
zap.String("username", user.Username))
return user, nil
}
// GetUserByID 根据 ID 获取用户
func (s *Service) GetUserByID(ctx context.Context, id uint) (*model.User, error) {
user, err := s.store.User.GetByID(ctx, id)
if err != nil {
if errors.Is(err, gorm.ErrRecordNotFound) {
return nil, errors.New(errors.CodeNotFound, "用户不存在")
}
return nil, err
}
return user, nil
}
```
### 示例 4: 在代码中提交异步任务
```go
// internal/service/email/service.go
package email
import (
	"context"
	"encoding/json"
	"fmt"

	"github.com/break/junhong_cmp_fiber/internal/task"
	"github.com/break/junhong_cmp_fiber/pkg/constants"
	"github.com/break/junhong_cmp_fiber/pkg/queue"
	"github.com/hibiken/asynq"
	"go.uber.org/zap"
)
type Service struct {
queueClient *queue.Client
logger *zap.Logger
}
// SendWelcomeEmail 发送欢迎邮件(异步)
func (s *Service) SendWelcomeEmail(ctx context.Context, userID uint, email string) error {
// 构造任务载荷
payload := &task.EmailPayload{
RequestID: fmt.Sprintf("welcome-%d", userID),
To: email,
Subject: "欢迎加入",
Body: "感谢您注册我们的服务!",
}
payloadBytes, err := json.Marshal(payload)
if err != nil {
return err
}
// 提交任务到队列
err = s.queueClient.EnqueueTask(
ctx,
constants.TaskTypeEmailSend,
payloadBytes,
asynq.Queue(constants.QueueDefault),
asynq.MaxRetry(constants.DefaultRetryMax),
)
if err != nil {
s.logger.Error("提交邮件任务失败",
zap.Uint("user_id", userID),
zap.String("email", email),
zap.Error(err))
return err
}
s.logger.Info("欢迎邮件任务已提交",
zap.Uint("user_id", userID),
zap.String("email", email))
return nil
}
```
### 示例 5: 事务处理
```go
// internal/service/order/service.go
package order
// CreateOrderWithUser 创建订单并更新用户统计(事务)
func (s *Service) CreateOrderWithUser(ctx context.Context, req *CreateOrderRequest) (*model.Order, error) {
var order *model.Order
// 使用事务
err := s.store.Transaction(ctx, func(tx *postgres.Store) error {
// 1. 创建订单
order = &model.Order{
OrderID: generateOrderID(),
UserID: req.UserID,
Amount: req.Amount,
Status: constants.OrderStatusPending,
}
if err := tx.Order.Create(ctx, order); err != nil {
return err
}
// 2. 更新用户订单计数
user, err := tx.User.GetByID(ctx, req.UserID)
if err != nil {
return err
}
user.OrderCount++
if err := tx.User.Update(ctx, user); err != nil {
return err
}
return nil // 提交事务
})
if err != nil {
s.logger.Error("创建订单失败",
zap.Uint("user_id", req.UserID),
zap.Error(err))
return nil, err
}
return order, nil
}
```
---
## 监控和调试
### 查看数据库数据
```bash
# 连接数据库
psql -h localhost -p 5432 -U postgres -d junhong_cmp
# 查询用户
SELECT * FROM tb_user;
# 查询订单
SELECT * FROM tb_order WHERE user_id = 1;
# 查看迁移历史
SELECT * FROM schema_migrations;
```
### 查看任务队列状态
#### 使用 asynqmonWeb UI
```bash
# 安装 asynqmon
go install github.com/hibiken/asynqmon@latest
# 启动监控面板
asynqmon --redis-addr=localhost:6379 --port=8081
# 访问 http://localhost:8081asynqmon 默认端口 8080与本项目 API 端口冲突,故用 --port 另行指定)
# 可以查看:
# - 队列统计
# - 任务状态pending, active, completed, failed
# - 重试历史
# - 失败任务详情
```
#### 使用 Redis CLI
```bash
# 查看所有队列
redis-cli KEYS "asynq:*"
# 查看 default 队列长度
redis-cli LLEN "asynq:{default}:pending"
# 查看任务详情(键格式为 asynq:{<队列名>}:t:<task_id>
redis-cli HGETALL "asynq:{default}:t:<task_id>"
```
### 查看日志
```bash
# 实时查看应用日志
tail -f logs/app.log | jq .
# 过滤错误日志
tail -f logs/app.log | jq 'select(.level == "error")'
# 查看访问日志
tail -f logs/access.log | jq .
# 过滤慢查询
tail -f logs/app.log | jq 'select(.duration_ms > 100)'
```
---
## 测试
### 单元测试
```bash
# 运行所有测试
go test ./...
# 运行特定包的测试
go test ./internal/store/postgres/...
# 带覆盖率
go test -cover ./...
# 详细输出
go test -v ./...
```
### 集成测试
```bash
# 运行集成测试(需要 PostgreSQL 和 Redis
go test -v ./tests/integration/...
# 单独测试数据库功能
go test -v ./tests/integration/database_test.go
# 单独测试任务队列
go test -v ./tests/integration/task_test.go
```
### 使用 Testcontainers推荐
集成测试会自动启动 PostgreSQL 和 Redis 容器:
```go
// tests/integration/database_test.go
func TestUserCRUD(t *testing.T) {
// 自动启动 PostgreSQL 容器
// 运行测试
// 自动清理容器
}
```
---
## 故障排查
### 问题 1: 数据库连接失败
**错误**: `dial tcp 127.0.0.1:5432: connect: connection refused`
**解决方案**:
```bash
# 检查 PostgreSQL 是否运行
docker ps | grep postgres
# 检查端口占用
lsof -i :5432
# 重启 PostgreSQL
docker restart postgres-dev
```
### 问题 2: 迁移失败
**错误**: `Dirty database version 1. Fix and force version.`
**解决方案**:
```bash
# 强制设置版本
migrate -path migrations -database "$DATABASE_URL" force 1
# 然后重新运行迁移
migrate -path migrations -database "$DATABASE_URL" up
```
### 问题 3: Worker 无法连接 Redis
**错误**: `dial tcp 127.0.0.1:6379: connect: connection refused`
**解决方案**:
```bash
# 检查 Redis 是否运行
docker ps | grep redis
# 测试连接
redis-cli ping
# 重启 Redis
docker restart redis-dev
```
### 问题 4: 任务一直重试
**原因**: 任务处理函数返回错误
**解决方案**:
1. 检查 Worker 日志:`tail -f logs/app.log | jq 'select(.level == "error")'`
2. 使用 asynqmon 查看失败详情
3. 检查任务幂等性实现
4. 验证 Redis 锁键是否正确设置
---
## 环境配置
### 开发环境
```bash
export CONFIG_ENV=dev
go run cmd/api/main.go
```
### 预发布环境
```bash
export CONFIG_ENV=staging
go run cmd/api/main.go
```
### 生产环境
```bash
export CONFIG_ENV=prod
export DB_PASSWORD=secure_password # 使用环境变量
go run cmd/api/main.go
```
---
## 性能调优建议
### 数据库连接池
根据服务器资源调整:
```yaml
database:
max_open_conns: 25 # 增大以支持更多并发
max_idle_conns: 10 # 保持足够的空闲连接
conn_max_lifetime: 5m # 定期回收连接
```
### Worker 并发数
根据任务类型调整:
```yaml
queue:
  concurrency: 20        # I/O 密集型CPU 核心数 × 2
  # concurrency: 8       # CPU 密集型CPU 核心数)
```
### 队列优先级
根据业务需求调整:
```yaml
queue:
queues:
critical: 8 # 提高关键任务权重
default: 2
low: 1
```
---
## 下一步
1. **添加业务模型**: 参考 `internal/model/user.go` 创建 SIM 卡、订单等业务实体
2. **实现业务逻辑**: 在 Service 层实现具体业务逻辑
3. **添加迁移文件**: 使用 `./scripts/migrate.sh create` 添加新表
4. **创建异步任务**: 参考 `internal/task/email.go` 创建新的任务处理器
5. **编写测试**: 为所有 Service 层业务逻辑编写单元测试
---
## 参考资料
- [GORM 官方文档](https://gorm.io/docs/)
- [Asynq 官方文档](https://github.com/hibiken/asynq)
- [golang-migrate 文档](https://github.com/golang-migrate/migrate)
- [PostgreSQL 文档](https://www.postgresql.org/docs/)
- [项目 Constitution](../../.specify/memory/constitution.md)
---
## 常见问题FAQ
**Q: 如何添加新的数据库表?**
A: 使用 `./scripts/migrate.sh create table_name` 创建迁移文件,编辑 SQL然后运行 `./scripts/migrate.sh up`
**Q: 任务失败后会怎样?**
A: 根据配置自动重试(默认 5 次,指数退避)。5 次后仍失败会进入死信队列,可在 asynqmon 中查看。
**Q: 如何保证任务幂等性?**
A: 使用 Redis 锁或数据库唯一约束。参考 `research.md` 中的幂等性设计模式。
**Q: 如何扩展 Worker**
A: 启动多个 Worker 进程(不同机器或容器),连接同一个 Redis。Asynq 自动负载均衡。
**Q: 数据库密码如何安全存储?**
A: 生产环境使用环境变量:`export DB_PASSWORD=xxx`,配置文件中使用 `${DB_PASSWORD}`
**Q: 如何监控任务执行情况?**
A: 使用 asynqmon Web UI 或通过 Redis CLI 查看队列状态。
---
## 总结
本指南涵盖了:
- ✅ 环境搭建PostgreSQL、Redis
- ✅ 数据库迁移
- ✅ 服务启动API + Worker
- ✅ CRUD 操作示例
- ✅ 异步任务提交和处理
- ✅ 事务处理
- ✅ 监控和调试
- ✅ 故障排查
- ✅ 性能调优
**推荐开发流程**
1. 设计数据模型 → 2. 创建迁移文件 → 3. 实现 Store 层 → 4. 实现 Service 层 → 5. 实现 Handler 层 → 6. 编写测试 → 7. 运行和验证

# Research: 数据持久化与异步任务处理集成
**Feature**: 002-gorm-postgres-asynq
**Date**: 2025-11-12
**Purpose**: 记录技术选型决策、最佳实践和架构考量
## 概述
本文档记录了 GORM + PostgreSQL + Asynq 集成的技术研究成果,包括技术选型理由、配置建议、最佳实践和常见陷阱。
---
## 1. GORM 与 PostgreSQL 集成
### 决策:选择 GORM 作为 ORM 框架
**理由**
- **官方支持**GORM 是 Go 生态系统中最流行的 ORM社区活跃文档完善
- **PostgreSQL 原生支持**:提供专门的 PostgreSQL 驱动和方言
- **功能完整**:支持复杂查询、关联关系、事务、钩子、软删除等
- **性能优秀**:支持预编译语句、批量操作、连接池管理
- **符合 Constitution**:项目技术栈要求使用 GORM
**替代方案**
- **sqlx**:更轻量,但功能不够完整,需要手写更多 SQL
- **ent**Facebook 开发,功能强大,但学习曲线陡峭,且不符合项目技术栈要求
### GORM 最佳实践
#### 1.1 连接初始化
```go
// pkg/database/postgres.go
import (
	"fmt"

	"go.uber.org/zap"
	"gorm.io/driver/postgres"
	"gorm.io/gorm"
	"gorm.io/gorm/logger"
	"gorm.io/gorm/schema"
)
func InitPostgres(cfg *config.DatabaseConfig, log *zap.Logger) (*gorm.DB, error) {
dsn := fmt.Sprintf(
"host=%s port=%d user=%s password=%s dbname=%s sslmode=%s",
cfg.Host, cfg.Port, cfg.User, cfg.Password, cfg.DBName, cfg.SSLMode,
)
// GORM 配置
gormConfig := &gorm.Config{
Logger: logger.Default.LogMode(logger.Silent), // 使用 Zap 替代 GORM 日志
NamingStrategy: schema.NamingStrategy{
TablePrefix: "tb_", // 表名前缀
SingularTable: true, // 使用单数表名
},
PrepareStmt: true, // 启用预编译语句缓存
}
db, err := gorm.Open(postgres.Open(dsn), gormConfig)
if err != nil {
return nil, fmt.Errorf("连接 PostgreSQL 失败: %w", err)
}
// 获取底层 sql.DB 进行连接池配置
sqlDB, err := db.DB()
if err != nil {
return nil, fmt.Errorf("获取 sql.DB 失败: %w", err)
}
// 连接池配置(参考 Constitution 性能要求)
	sqlDB.SetMaxOpenConns(cfg.MaxOpenConns)       // 最大连接数25
	sqlDB.SetMaxIdleConns(cfg.MaxIdleConns)       // 最大空闲连接10
	sqlDB.SetConnMaxLifetime(cfg.ConnMaxLifetime) // 连接最大生命周期5m
// 验证连接
if err := sqlDB.Ping(); err != nil {
return nil, fmt.Errorf("PostgreSQL 连接验证失败: %w", err)
}
log.Info("PostgreSQL 连接成功",
zap.String("host", cfg.Host),
zap.Int("port", cfg.Port),
zap.String("database", cfg.DBName))
return db, nil
}
```
#### 1.2 连接池配置建议
| 参数 | 推荐值 | 理由 |
|------|--------|------|
| MaxOpenConns | 25 | 平衡性能和资源,避免 PostgreSQL 连接耗尽 |
| MaxIdleConns | 10 | 保持足够的空闲连接以应对突发流量 |
| ConnMaxLifetime | 5m | 定期回收连接,避免长连接问题 |
**计算公式**
```
MaxOpenConns = (可用内存 / 每连接内存) * 安全系数
每连接内存 ≈ 10MBPostgreSQL 典型值)
安全系数 = 0.7(为其他进程预留资源)
```
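
按上述公式的一个计算草案(数值均为假设示例):

```go
package main

import "fmt"

// calcMaxOpenConns 按「可用内存 / 每连接内存 × 安全系数」估算最大连接数
func calcMaxOpenConns(availMemMB, perConnMB, safety float64) int {
	return int(availMemMB / perConnMB * safety)
}

func main() {
	// 假设可用内存 360MB每连接约 10MB安全系数 0.7
	fmt.Println(calcMaxOpenConns(360, 10, 0.7)) // 25
}
```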
#### 1.3 模型定义规范
```go
// internal/model/user.go
type User struct {
ID uint `gorm:"primarykey" json:"id"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
DeletedAt gorm.DeletedAt `gorm:"index" json:"-"` // 软删除
Username string `gorm:"uniqueIndex;not null;size:50" json:"username"`
Email string `gorm:"uniqueIndex;not null;size:100" json:"email"`
Status string `gorm:"not null;size:20;default:'active'" json:"status"`
	// 注意:按 Constitution 原则 IX禁止使用 ORM 关联标签foreignKey、hasMany 等)
	// 表关系通过 ID 字段手动维护,关联数据需显式查询
}
// TableName 指定表名(如果不使用默认命名)
func (User) TableName() string {
return "tb_user" // 遵循 NamingStrategy 的 TablePrefix
}
```
**命名规范**
- 字段名使用 PascalCaseGo 约定)
- 数据库列名自动转换为 snake_case
- 表名使用 `tb_` 前缀(可配置)
- JSON tag 使用 snake_case
#### 1.4 事务处理
```go
// internal/store/postgres/transaction.go
func (s *Store) Transaction(ctx context.Context, fn func(*Store) error) error {
return s.db.WithContext(ctx).Transaction(func(tx *gorm.DB) error {
// 创建事务内的 Store 实例
txStore := &Store{db: tx, logger: s.logger}
return fn(txStore)
})
}
// 使用示例
err := store.Transaction(ctx, func(tx *Store) error {
if err := tx.User.Create(ctx, user); err != nil {
return err // 自动回滚
}
if err := tx.Order.Create(ctx, order); err != nil {
return err // 自动回滚
}
return nil // 自动提交
})
```
**事务最佳实践**
- 使用 `context.Context` 传递超时和取消信号
- 事务内操作尽可能快(< 50ms避免长事务锁表
- 事务失败自动回滚,无需手动处理
- 避免事务嵌套GORM 使用 SavePoint 处理嵌套事务)
---
## 2. 数据库迁移golang-migrate
### 决策:使用 golang-migrate 而非 GORM AutoMigrate
**理由**
- **版本控制**:迁移文件版本化,可追溯数据库 schema 变更历史
- **可回滚**:每个迁移包含 up/down 脚本,支持安全回滚
- **生产安全**:明确的 SQL 语句,避免 AutoMigrate 的意外变更
- **团队协作**:迁移文件可 code review减少数据库变更风险
- **符合 Constitution**:项目规范要求使用外部迁移工具
**GORM AutoMigrate 的问题**
- 无法回滚
- 无法删除列(只能添加和修改)
- 不支持复杂的 schema 变更(如重命名列)
- 生产环境风险高
### golang-migrate 使用指南
#### 2.1 安装
```bash
# macOS
brew install golang-migrate
# Linux
curl -L https://github.com/golang-migrate/migrate/releases/download/v4.15.2/migrate.linux-amd64.tar.gz | tar xvz
sudo mv migrate /usr/local/bin/
# Go install
go install -tags 'postgres' github.com/golang-migrate/migrate/v4/cmd/migrate@latest
```
#### 2.2 创建迁移文件
```bash
# 创建新迁移
migrate create -ext sql -dir migrations -seq init_schema
# 生成文件:
# migrations/000001_init_schema.up.sql
# migrations/000001_init_schema.down.sql
```
#### 2.3 迁移文件示例
```sql
-- migrations/000001_init_schema.up.sql
CREATE TABLE IF NOT EXISTS tb_user (
id SERIAL PRIMARY KEY,
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
deleted_at TIMESTAMP,
username VARCHAR(50) NOT NULL UNIQUE,
email VARCHAR(100) NOT NULL UNIQUE,
status VARCHAR(20) NOT NULL DEFAULT 'active'
);
CREATE INDEX idx_user_deleted_at ON tb_user(deleted_at);
CREATE INDEX idx_user_status ON tb_user(status);
-- migrations/000001_init_schema.down.sql
DROP TABLE IF EXISTS tb_user;
```
#### 2.4 执行迁移
```bash
# 向上迁移(应用所有未执行的迁移)
migrate -path migrations -database "postgresql://user:password@localhost:5432/dbname?sslmode=disable" up
# 回滚最后一次迁移
migrate -path migrations -database "postgresql://user:password@localhost:5432/dbname?sslmode=disable" down 1
# 迁移到指定版本
migrate -path migrations -database "postgresql://user:password@localhost:5432/dbname?sslmode=disable" goto 3
# 强制设置版本(修复脏迁移)
migrate -path migrations -database "postgresql://user:password@localhost:5432/dbname?sslmode=disable" force 2
```
#### 2.5 迁移脚本封装
```bash
#!/bin/bash
# scripts/migrate.sh
set -e
DB_USER=${DB_USER:-"postgres"}
DB_PASSWORD=${DB_PASSWORD:-"password"}
DB_HOST=${DB_HOST:-"localhost"}
DB_PORT=${DB_PORT:-"5432"}
DB_NAME=${DB_NAME:-"junhong_cmp"}
DATABASE_URL="postgresql://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}?sslmode=disable"
case "$1" in
up)
migrate -path migrations -database "$DATABASE_URL" up
;;
down)
migrate -path migrations -database "$DATABASE_URL" down ${2:-1}
;;
create)
migrate create -ext sql -dir migrations -seq "$2"
;;
version)
migrate -path migrations -database "$DATABASE_URL" version
;;
*)
echo "Usage: $0 {up|down [n]|create <name>|version}"
exit 1
esac
```
---
## 3. Asynq 任务队列
### 决策:选择 Asynq 作为异步任务队列
**理由**
- **Redis 原生支持**:基于 Redis无需额外中间件
- **功能完整**:支持任务重试、优先级、定时任务、唯一性约束
- **高性能**:支持并发处理,可配置 worker 数量
- **可观测性**:提供 Web UI 监控面板asynqmon
- **符合 Constitution**:项目技术栈要求使用 Asynq
**替代方案**
- **Machinery**:功能类似,但社区活跃度不如 Asynq
- **RabbitMQ + amqp091-go**:更重量级,需要额外部署 RabbitMQ
- **Kafka**:适合大规模流处理,对本项目过于复杂
### Asynq 架构设计
#### 3.1 Client任务提交
```go
// pkg/queue/client.go
import (
"github.com/hibiken/asynq"
"github.com/redis/go-redis/v9"
)
type Client struct {
client *asynq.Client
logger *zap.Logger
}
func NewClient(rdb *redis.Client, logger *zap.Logger) *Client {
return &Client{
client: asynq.NewClient(asynq.RedisClientOpt{Addr: rdb.Options().Addr}),
logger: logger,
}
}
func (c *Client) EnqueueTask(ctx context.Context, taskType string, payload []byte, opts ...asynq.Option) error {
task := asynq.NewTask(taskType, payload, opts...)
info, err := c.client.EnqueueContext(ctx, task)
if err != nil {
c.logger.Error("任务入队失败",
zap.String("task_type", taskType),
zap.Error(err))
return err
}
c.logger.Info("任务入队成功",
zap.String("task_id", info.ID),
zap.String("queue", info.Queue))
return nil
}
```
#### 3.2 Server任务处理
```go
// pkg/queue/server.go
func NewServer(rdb *redis.Client, cfg *config.QueueConfig, logger *zap.Logger) *asynq.Server {
return asynq.NewServer(
asynq.RedisClientOpt{Addr: rdb.Options().Addr},
asynq.Config{
			Concurrency: cfg.Concurrency, // 并发数(默认 10
			Queues: map[string]int{
				"critical": 6, // 权重60%
				"default":  3, // 权重30%
				"low":      1, // 权重10%
},
ErrorHandler: asynq.ErrorHandlerFunc(func(ctx context.Context, task *asynq.Task, err error) {
logger.Error("任务执行失败",
zap.String("task_type", task.Type()),
zap.Error(err))
}),
Logger: &AsynqLogger{logger: logger}, // 自定义日志适配器
},
)
}
// cmd/worker/main.go
func main() {
// ... 初始化配置、日志、Redis
srv := queue.NewServer(rdb, cfg.Queue, logger)
mux := asynq.NewServeMux()
// 注册任务处理器
mux.HandleFunc(constants.TaskTypeEmailSend, task.HandleEmailSend)
mux.HandleFunc(constants.TaskTypeDataSync, task.HandleDataSync)
if err := srv.Run(mux); err != nil {
logger.Fatal("Worker 启动失败", zap.Error(err))
}
}
```
#### 3.3 任务处理器Handler
```go
// internal/task/email.go
func HandleEmailSend(ctx context.Context, t *asynq.Task) error {
var payload EmailPayload
if err := json.Unmarshal(t.Payload(), &payload); err != nil {
return fmt.Errorf("解析任务参数失败: %w", err)
}
// 幂等性检查(使用 Redis 或数据库)
key := constants.RedisTaskLockKey(payload.RequestID)
if exists, _ := rdb.Exists(ctx, key).Result(); exists > 0 {
logger.Info("任务已处理,跳过",
zap.String("request_id", payload.RequestID))
return nil // 返回 nil 表示成功,避免重试
}
// 执行任务
if err := sendEmail(ctx, payload); err != nil {
return fmt.Errorf("发送邮件失败: %w", err) // 返回错误触发重试
}
// 标记任务已完成(设置过期时间,避免内存泄漏)
rdb.SetEx(ctx, key, "1", 24*time.Hour)
logger.Info("邮件发送成功",
zap.String("to", payload.To),
zap.String("request_id", payload.RequestID))
return nil
}
```
### Asynq 配置建议
#### 3.4 重试策略
```go
// 默认重试策略:指数退避
task := asynq.NewTask(
constants.TaskTypeDataSync,
payload,
asynq.MaxRetry(5), // 最大重试 5 次
asynq.Timeout(10*time.Minute), // 任务超时 10 分钟
asynq.Queue("default"), // 队列名称
asynq.Retention(24*time.Hour), // 保留成功任务 24 小时
)
// 自定义重试延迟指数退避1s, 2s, 4s, 8s, 16s
// 注意RetryDelayFunc 是 Server 级配置asynq.Config 的字段),不是任务选项
srv := asynq.NewServer(redisOpt, asynq.Config{
	RetryDelayFunc: func(n int, e error, t *asynq.Task) time.Duration {
		return time.Duration(1<<uint(n)) * time.Second
	},
})
```
#### 3.5 并发配置
| 场景 | 并发数 | 理由 |
|------|--------|------|
| CPU 密集型任务 | CPU 核心数 | 避免上下文切换开销 |
| I/O 密集型任务 | CPU 核心数 × 2 | 充分利用等待时间 |
| 混合任务 | 10默认 | 平衡性能和资源 |
**水平扩展**
- 启动多个 Worker 进程(不同机器或容器)
- 所有 Worker 连接同一个 Redis
- Asynq 自动负载均衡
#### 3.6 监控与调试
```bash
# 安装 asynqmonWeb UI
go install github.com/hibiken/asynqmon@latest
# 启动监控面板
asynqmon --redis-addr=localhost:6379 --port=8081
# 访问 http://localhost:8081asynqmon 默认端口为 8080与 API 端口冲突时用 --port 指定)
# 查看任务状态、队列统计、失败任务、重试历史
```
---
## 4. 幂等性设计
### 4.1 为什么需要幂等性?
**场景**
- 系统重启时Asynq 自动重新排队未完成的任务
- 任务执行失败后自动重试
- 网络抖动导致任务重复提交
**风险**
- 重复发送邮件/短信
- 重复扣款/充值
- 重复创建订单
### 4.2 幂等性实现模式
#### 模式 1唯一键去重推荐
```go
func HandleOrderCreate(ctx context.Context, t *asynq.Task) error {
var payload OrderPayload
json.Unmarshal(t.Payload(), &payload)
// 使用业务唯一键(如订单号)去重
key := constants.RedisTaskLockKey(payload.OrderID)
	// SetNX仅当 key 不存在时设置
ok, err := rdb.SetNX(ctx, key, "1", 24*time.Hour).Result()
if err != nil {
return fmt.Errorf("Redis 操作失败: %w", err)
}
if !ok {
logger.Info("订单已创建,跳过",
zap.String("order_id", payload.OrderID))
return nil // 幂等返回
}
// 执行业务逻辑
if err := createOrder(ctx, payload); err != nil {
rdb.Del(ctx, key) // 失败时删除锁,允许重试
return err
}
return nil
}
```
#### 模式 2数据库唯一约束
```sql
CREATE TABLE tb_order (
id SERIAL PRIMARY KEY,
order_id VARCHAR(50) NOT NULL UNIQUE, -- 业务唯一键
status VARCHAR(20) NOT NULL,
created_at TIMESTAMP NOT NULL
);
```
```go
func createOrder(ctx context.Context, payload OrderPayload) error {
order := &model.Order{
OrderID: payload.OrderID,
Status: constants.OrderStatusPending,
}
	// GORM 插入,如果 order_id 重复则返回错误
	// 注:需在 gorm.Config 中开启 TranslateError: true驱动错误才会被转换为 gorm.ErrDuplicatedKey
	if err := db.WithContext(ctx).Create(order).Error; err != nil {
		if errors.Is(err, gorm.ErrDuplicatedKey) {
logger.Info("订单已存在,跳过", zap.String("order_id", payload.OrderID))
return nil // 幂等返回
}
return err
}
return nil
}
```
#### 模式 3状态机复杂业务
```go
func HandleOrderProcess(ctx context.Context, t *asynq.Task) error {
var payload OrderPayload
json.Unmarshal(t.Payload(), &payload)
// 加载订单
order, err := store.Order.GetByID(ctx, payload.OrderID)
if err != nil {
return err
}
// 状态检查:仅处理特定状态的订单
if order.Status != constants.OrderStatusPending {
logger.Info("订单状态不匹配,跳过",
zap.String("order_id", payload.OrderID),
zap.String("current_status", order.Status))
return nil // 幂等返回
}
// 状态转换
order.Status = constants.OrderStatusProcessing
if err := store.Order.Update(ctx, order); err != nil {
return err
}
// 执行业务逻辑
// ...
order.Status = constants.OrderStatusCompleted
return store.Order.Update(ctx, order)
}
```
---
## 5. 配置管理
### 5.1 数据库配置结构
```go
// pkg/config/config.go
type DatabaseConfig struct {
Host string `mapstructure:"host"`
Port int `mapstructure:"port"`
User string `mapstructure:"user"`
Password string `mapstructure:"password"` // 明文存储(按需求)
DBName string `mapstructure:"dbname"`
SSLMode string `mapstructure:"sslmode"`
MaxOpenConns int `mapstructure:"max_open_conns"`
MaxIdleConns int `mapstructure:"max_idle_conns"`
ConnMaxLifetime time.Duration `mapstructure:"conn_max_lifetime"`
}
type QueueConfig struct {
Concurrency int `mapstructure:"concurrency"`
Queues map[string]int `mapstructure:"queues"`
RetryMax int `mapstructure:"retry_max"`
Timeout time.Duration `mapstructure:"timeout"`
}
```
### 5.2 配置文件示例
```yaml
# configs/config.yaml
database:
host: localhost
port: 5432
user: postgres
password: password # 明文存储(生产环境建议使用环境变量)
dbname: junhong_cmp
sslmode: disable
max_open_conns: 25
max_idle_conns: 10
conn_max_lifetime: 5m
queue:
concurrency: 10
queues:
critical: 6
default: 3
low: 1
retry_max: 5
timeout: 10m
```
---
## 6. 性能优化建议
### 6.1 数据库查询优化
**索引策略**
- 为 WHERE、JOIN、ORDER BY 常用字段添加索引
- 复合索引按选择性从高到低排列
- 避免过多索引(影响写入性能)
```sql
-- 单列索引
CREATE INDEX idx_user_status ON tb_user(status);
-- 复合索引(状态 + 创建时间)
CREATE INDEX idx_user_status_created ON tb_user(status, created_at);
-- 部分索引(仅索引活跃用户)
CREATE INDEX idx_user_active ON tb_user(status) WHERE status = 'active';
```
**批量操作**
```go
// 避免 N+1 查询
// ❌ 错误:循环内逐条查询,产生 N+1 次数据库往返
for _, orderID := range orderIDs {
	var order Order
	_ = db.Where("id = ?", orderID).First(&order).Error
}
// ✅ 正确
var orders []Order
db.Where("id IN ?", orderIDs).Find(&orders)
// 批量插入
db.CreateInBatches(users, 100) // 每批 100 条
```
### 6.2 慢查询监控
```go
// GORM 慢查询日志(此处用 GORM 内置 logger 演示阈值配置;本项目实际应通过 Zap 适配器输出)
db.Logger = logger.New(
log.New(os.Stdout, "\r\n", log.LstdFlags),
logger.Config{
SlowThreshold: 100 * time.Millisecond, // 慢查询阈值
LogLevel: logger.Warn,
IgnoreRecordNotFoundError: true,
Colorful: false,
},
)
```
---
## 7. 故障处理与恢复
### 7.1 数据库连接失败
**重试策略**
```go
func InitPostgresWithRetry(cfg *config.DatabaseConfig, logger *zap.Logger) (*gorm.DB, error) {
maxRetries := 5
retryDelay := 2 * time.Second
for i := 0; i < maxRetries; i++ {
db, err := InitPostgres(cfg, logger)
if err == nil {
return db, nil
}
logger.Warn("数据库连接失败,重试中",
zap.Int("attempt", i+1),
zap.Int("max_retries", maxRetries),
zap.Error(err))
time.Sleep(retryDelay)
retryDelay *= 2 // 指数退避
}
return nil, fmt.Errorf("数据库连接失败,已重试 %d 次", maxRetries)
}
```
### 7.2 任务队列故障恢复
**Redis 断线重连**
- Asynq 自动处理 Redis 断线重连
- Worker 重启后自动从 Redis 恢复未完成任务
**脏任务清理**
```bash
# 使用 asynqmon 手动清理死信队列
# 或编写定时任务自动归档失败任务
```
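
除了在 asynqmon 中手动清理,也可以基于 asynq 的 Inspector 编写定时清理任务。下面是一个草案,假设使用其 `DeleteAllArchivedTasks(qname string) (int, error)` 方法;为便于测试,这里把该方法抽象成函数类型,实际使用时传入方法值 `inspector.DeleteAllArchivedTasks` 即可:

```go
package main

import "fmt"

// deleteArchivedFunc 与 (*asynq.Inspector).DeleteAllArchivedTasks 的签名一致
type deleteArchivedFunc func(qname string) (int, error)

// cleanArchived 遍历队列清理归档(死信)任务,返回删除总数,可由定时任务周期调用
func cleanArchived(del deleteArchivedFunc, queues []string) (int, error) {
	total := 0
	for _, q := range queues {
		n, err := del(q)
		if err != nil {
			return total, fmt.Errorf("清理队列 %s 失败: %w", q, err)
		}
		total += n
	}
	return total, nil
}

func main() {
	// 演示:用假实现模拟每个队列删除 2 条
	n, _ := cleanArchived(func(q string) (int, error) { return 2, nil },
		[]string{"critical", "default", "low"})
	fmt.Println(n) // 6
}
```

实际接入时以 `asynq.NewInspector(asynq.RedisClientOpt{Addr: "..."})` 构造 Inspector。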
---
## 8. 测试策略
### 8.1 数据库集成测试
```go
// tests/integration/database_test.go
func TestUserCRUD(t *testing.T) {
// 使用 testcontainers 启动 PostgreSQL
ctx := context.Background()
postgresContainer, err := postgres.RunContainer(ctx,
testcontainers.WithImage("postgres:14"),
postgres.WithDatabase("test_db"),
postgres.WithUsername("postgres"),
postgres.WithPassword("password"),
)
require.NoError(t, err)
defer postgresContainer.Terminate(ctx)
// 连接测试数据库
connStr, _ := postgresContainer.ConnectionString(ctx)
db, _ := gorm.Open(postgres.Open(connStr), &gorm.Config{})
// 运行迁移
db.AutoMigrate(&model.User{})
// 测试 CRUD
user := &model.User{Username: "test", Email: "test@example.com"}
assert.NoError(t, db.Create(user).Error)
var found model.User
assert.NoError(t, db.Where("username = ?", "test").First(&found).Error)
assert.Equal(t, "test@example.com", found.Email)
}
```
### 8.2 任务队列测试
```go
// tests/integration/task_test.go
func TestEmailTask(t *testing.T) {
	// 启动连接本地 Redis 的 Asynq Server集成测试需真实 Redis 实例)
srv := asynq.NewServer(
asynq.RedisClientOpt{Addr: "localhost:6379"},
asynq.Config{Concurrency: 1},
)
mux := asynq.NewServeMux()
mux.HandleFunc(constants.TaskTypeEmailSend, task.HandleEmailSend)
// 提交任务
client := asynq.NewClient(asynq.RedisClientOpt{Addr: "localhost:6379"})
payload, _ := json.Marshal(EmailPayload{To: "test@example.com"})
client.Enqueue(asynq.NewTask(constants.TaskTypeEmailSend, payload))
// 启动 worker 处理
go srv.Run(mux)
time.Sleep(2 * time.Second)
// 验证任务已处理
// ...
}
```
---
## 9. 安全考虑
### 9.1 SQL 注入防护
**✅ GORM 自动防护**
```go
// GORM 使用预编译语句,自动转义参数
db.Where("username = ?", userInput).First(&user)
```
**❌ 避免原始 SQL**
```go
// 危险SQL 注入风险
db.Raw("SELECT * FROM users WHERE username = '" + userInput + "'").Scan(&user)
// 安全:使用参数化查询
db.Raw("SELECT * FROM users WHERE username = ?", userInput).Scan(&user)
```
### 9.2 密码存储
```yaml
# configs/config.yaml
database:
password: ${DB_PASSWORD} # 从环境变量读取(生产环境推荐)
```
```bash
# .env 文件(不提交到 Git
export DB_PASSWORD=secret_password
```
---
## 10. 部署与运维
### 10.1 健康检查
```go
// internal/handler/health.go
func (h *Handler) HealthCheck(c *fiber.Ctx) error {
health := map[string]string{
"status": "ok",
}
// 检查 PostgreSQL
sqlDB, _ := h.db.DB()
if err := sqlDB.Ping(); err != nil {
health["postgres"] = "down"
health["status"] = "degraded"
} else {
health["postgres"] = "up"
}
// 检查 Redis任务队列
if err := h.rdb.Ping(c.Context()).Err(); err != nil {
health["redis"] = "down"
health["status"] = "degraded"
} else {
health["redis"] = "up"
}
statusCode := fiber.StatusOK
if health["status"] != "ok" {
statusCode = fiber.StatusServiceUnavailable
}
return c.Status(statusCode).JSON(health)
}
```
### 10.2 优雅关闭
```go
// cmd/worker/main.go
func main() {
// ... 初始化
srv := queue.NewServer(rdb, cfg.Queue, logger)
// 处理信号
quit := make(chan os.Signal, 1)
signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
go func() {
<-quit
logger.Info("收到关闭信号,开始优雅关闭")
// 停止接收新任务,等待现有任务完成(等待时长由 asynq.Config.ShutdownTimeout 配置,默认 8 秒)
srv.Shutdown()
}()
// 启动 Worker
if err := srv.Run(mux); err != nil {
logger.Fatal("Worker 运行失败", zap.Error(err))
}
}
```
---
## 总结
| 技术选型 | 关键决策 | 核心理由 |
|---------|----------|----------|
| **GORM** | 使用 GORM 而非 sqlx | 功能完整,符合项目技术栈 |
| **golang-migrate** | 使用外部迁移工具而非 AutoMigrate | 版本控制,可回滚,生产安全 |
| **Asynq** | 使用 Asynq 而非 Machinery | Redis 原生,功能完整,监控友好 |
| **连接池** | MaxOpenConns=25, MaxIdleConns=10 | 平衡性能和资源消耗 |
| **重试策略** | 最大 5 次,指数退避 | 避免雪崩,给系统恢复时间 |
| **幂等性** | Redis 去重 + 数据库唯一约束 | 防止重复执行,确保数据一致性 |
**下一步**Phase 1 设计与契约生成data-model.md、contracts/、quickstart.md


@@ -0,0 +1,194 @@
# Feature Specification: 数据持久化与异步任务处理集成
**Feature Branch**: `002-gorm-postgres-asynq`
**Created**: 2025-11-12
**Status**: Draft
**Input**: User description: "集成gorm、Postgresql数据库和asynq任务队列系统"
## Clarifications
### Session 2025-11-12
- Q: PostgreSQL连接应如何处理凭证管理和传输安全? → A: 凭证直接写在配置文件(config.yaml)中,明文存储
- Q: 任务失败后应该如何重试? → A: 最大重试5次,指数退避策略(1s、2s、4s、8s、16s)
- Q: 数据库表结构的创建和变更应该如何执行? → A: 完全不使用GORM迁移,使用外部迁移工具(如golang-migrate)管理SQL迁移文件
- Q: Worker进程应该如何配置并发任务处理? → A: 支持多个worker进程,每个进程可配置并发数(默认10);不同任务类型可配置独立的队列优先级
- Q: 系统重启时,正在执行中或排队中的任务应该如何处理? → A: 所有未完成任务(包括执行中)自动重新排队,重启后继续执行;任务处理逻辑需保证幂等性
### Session 2025-11-13
- Q: 数据库慢查询(>100ms)和任务执行状态应该如何进行监控和指标收集? → A: 仅记录日志文件,不收集指标
- Q: 当数据库连接池耗尽时,新的数据库请求应该如何处理? → A: 请求排队等待直到获得连接(带超时,如5秒)
- Q: 当数据库执行慢查询时,系统应该如何避免请求超时? → A: 使用context超时控制(如3秒),超时后取消查询
- Q: 当PostgreSQL主从切换时,系统应该如何感知并重新连接? → A: 依赖GORM的自动重连机制,连接失败时重试
- Q: 当并发事务产生死锁时,系统应该如何检测和恢复? → A: 依赖PostgreSQL自动检测,捕获死锁错误并重试(最多3次)
## User Scenarios & Testing *(mandatory)*
### User Story 1 - 可靠的数据存储与检索 (Priority: P1)
作为系统,需要能够可靠地持久化存储业务数据(如用户信息、业务记录等),并支持高效的数据查询和修改操作,确保数据的一致性和完整性。
**Why this priority**: 这是系统的核心基础能力,没有数据持久化就无法提供任何有意义的业务功能。所有后续功能都依赖于数据存储能力。
**Independent Test**: 可以通过创建、读取、更新、删除(CRUD)测试数据来独立验证。测试应包括基本的数据操作、事务提交、数据一致性验证等场景。
**Acceptance Scenarios**:
1. **Given** 系统接收到新的业务数据, **When** 执行数据保存操作, **Then** 数据应成功持久化到数据库,并可以被后续查询检索到
2. **Given** 需要修改已存在的数据, **When** 执行更新操作, **Then** 数据应被正确更新,且旧数据被新数据替换
3. **Given** 需要删除数据, **When** 执行删除操作, **Then** 数据应从数据库中移除,后续查询不应返回该数据
4. **Given** 多个数据操作需要原子性执行, **When** 在事务中执行这些操作, **Then** 要么全部成功提交,要么全部回滚,保证数据一致性
5. **Given** 执行数据查询, **When** 查询条件匹配多条记录, **Then** 系统应返回所有匹配的记录,支持分页和排序
---
### User Story 2 - 异步任务处理能力 (Priority: P2)
作为系统,需要能够将耗时的操作(如发送邮件、生成报表、数据同步等)放到后台异步执行,避免阻塞用户请求,提升用户体验和系统响应速度。
**Why this priority**: 许多业务操作需要较长时间完成,如果在用户请求中同步执行会导致超时和糟糕的用户体验。异步任务处理是提升系统性能和用户体验的关键。
**Independent Test**: 可以通过提交一个耗时任务(如模拟发送邮件),验证任务被成功加入队列,然后在后台完成执行,用户请求立即返回而不等待任务完成。
**Acceptance Scenarios**:
1. **Given** 系统需要执行一个耗时操作, **When** 将任务提交到任务队列, **Then** 任务应被成功加入队列,用户请求立即返回,不阻塞等待
2. **Given** 任务队列中有待处理的任务, **When** 后台工作进程运行, **Then** 任务应按顺序被取出并执行
3. **Given** 任务执行过程中发生错误, **When** 任务失败, **Then** 系统应记录错误信息,并根据配置进行重试
4. **Given** 任务需要定时执行, **When** 到达指定时间, **Then** 任务应自动触发执行
5. **Given** 需要查看任务执行状态, **When** 查询任务信息, **Then** 应能获取任务的当前状态(等待、执行中、成功、失败)和执行历史
---
### User Story 3 - 数据库连接管理与监控 (Priority: P3)
作为系统管理员,需要能够监控数据库连接状态、查询性能和任务队列健康度,及时发现和解决潜在问题,确保系统稳定运行。
**Why this priority**: 虽然不是核心业务功能,但对系统的稳定性和可维护性至关重要。良好的监控能力可以预防故障和提升运维效率。
**Independent Test**: 可以通过健康检查接口验证数据库连接状态和任务队列状态,模拟连接失败场景验证系统的容错能力。
**Acceptance Scenarios**:
1. **Given** 系统启动时, **When** 初始化数据库连接池, **Then** 应成功建立连接,并验证数据库可访问性
2. **Given** 数据库连接出现问题, **When** 检测到连接失败, **Then** 系统应记录错误日志,并尝试重新建立连接
3. **Given** 需要监控系统健康状态, **When** 调用健康检查接口, **Then** 应返回数据库和任务队列的当前状态(正常/异常)
4. **Given** 系统关闭时, **When** 执行清理操作, **Then** 应优雅地关闭数据库连接和任务队列,等待正在执行的任务完成
---
### Edge Cases
- 当数据库连接池耗尽时,新的数据库请求会排队等待可用连接,等待超时时间为5秒。超时后返回503 Service Unavailable错误,错误消息提示"数据库连接池繁忙,请稍后重试"
- 当任务队列积压过多任务(超过 10,000 个待处理任务或 Redis 内存使用超过 80%)时,系统应触发告警,并考虑暂停低优先级任务提交或扩展 Worker 进程数量
- 当数据库执行慢查询时,系统使用context.WithTimeout为每个数据库操作设置超时时间(默认3秒)。超时后自动取消查询并返回504 Gateway Timeout错误,错误消息提示"数据库查询超时,请优化查询条件或联系管理员"
- 当任务重复执行5次后仍然失败时,任务应被标记为"最终失败"状态,记录完整错误历史,并可选择发送告警通知或进入死信队列等待人工处理
- 当PostgreSQL主从切换时,系统依赖GORM的自动重连机制。当检测到连接失败或不可用时,GORM会自动尝试重新建立连接。失败的查询会返回数据库连接错误,应用层应在合理范围内进行重试(建议重试1-3次,每次间隔100ms)
- 当并发事务产生死锁时,PostgreSQL会自动检测并中止其中一个事务(返回SQLSTATE 40P01错误)。应用层捕获死锁错误后,应自动重试该事务(建议最多重试3次,每次间隔50-100ms随机延迟)。超过重试次数后,返回409 Conflict错误,提示"数据库操作冲突,请稍后重试"
- 当系统重启时,所有未完成的任务(包括排队中和执行中的任务)会利用Asynq的Redis持久化机制自动重新排队,重启后Worker进程会继续处理这些任务。所有任务处理逻辑必须设计为幂等操作,确保任务重复执行不会产生副作用或数据不一致
## Requirements *(mandatory)*
### Functional Requirements
- **FR-001**: 系统必须能够建立和管理与PostgreSQL数据库的连接池,支持配置最大连接数、空闲连接数等参数。数据库连接配置(包括主机地址、端口、用户名、密码、数据库名)存储在配置文件(config.yaml)中,明文形式保存。当连接池耗尽时,新请求排队等待可用连接(默认超时5秒),超时后返回503错误。系统依赖GORM的自动重连机制处理数据库连接失败或主从切换场景
- **FR-002**: 系统必须支持标准的CRUD操作(创建、读取、更新、删除),并提供统一的数据访问接口。接口应包括但不限于: Create(创建记录)、GetByID(按ID查询)、Update(更新记录)、Delete(软删除)、List(分页列表查询)等基础方法,所有 Store 层接口遵循一致的命名和参数约定(详见 data-model.md)
- **FR-003**: 系统必须支持数据库事务,包括事务的开始、提交、回滚操作,确保数据一致性。当发生死锁时(SQLSTATE 40P01),系统应捕获错误并自动重试事务(最多3次,每次间隔50-100ms随机延迟),超过重试次数后返回409错误
- **FR-004**: 系统必须支持数据库迁移,使用外部迁移工具(如golang-migrate)通过版本化的SQL迁移文件管理表结构的创建和变更,不使用GORM AutoMigrate功能。迁移文件应包含up/down脚本以支持正向迁移和回滚
- **FR-005**: 系统必须提供查询构建能力,支持条件查询、分页、排序、关联查询等常见操作。所有数据库查询必须使用context.WithTimeout设置超时时间(默认3秒),超时后自动取消查询并返回504错误
- **FR-006**: 系统必须能够将任务提交到异步任务队列,任务应包含任务类型、参数、优先级等信息
- **FR-007**: 系统必须提供后台工作进程,从任务队列中获取任务并执行。支持启动多个worker进程实例,每个进程可独立配置并发处理数(默认10个并发goroutine)。不同任务类型可配置到不同的队列,并设置队列优先级,实现资源隔离和灵活扩展。Worker 进程异常退出时,Asynq 会自动将执行中的任务标记为失败并重新排队;建议使用进程管理工具(如 systemd, supervisord)实现 Worker 自动重启
- **FR-008**: 系统必须支持任务重试机制,当任务执行失败时能够按配置的策略自动重试。默认最大重试5次,采用指数退避策略(重试间隔为1s、2s、4s、8s、16s),每个任务类型可独立配置重试参数
- **FR-009**: 系统必须支持任务优先级,高优先级任务应优先被处理
- **FR-010**: 系统必须能够记录任务执行历史和状态,包括开始时间、结束时间、执行结果、错误信息等。任务执行状态通过日志文件记录,不使用外部指标收集系统
- **FR-011**: 系统必须提供健康检查接口,能够验证数据库连接和任务队列的可用性
- **FR-012**: 系统必须支持定时任务,能够按照cron表达式或固定间隔调度任务执行
- **FR-013**: 系统必须记录慢查询日志,当数据库查询超过阈值(100ms)时记录详细信息用于优化。日志应包含 SQL 语句、执行时间、参数和上下文信息。监控采用日志文件方式,不使用 Prometheus 或其他指标收集系统
- **FR-014**: 系统必须支持配置化的数据库和任务队列参数,如连接字符串、最大重试次数、任务超时时间等
- **FR-015**: 系统必须在关闭时优雅地清理资源,关闭数据库连接并等待正在执行的任务完成
- **FR-016**: 系统必须支持任务持久化和故障恢复。利用Asynq基于Redis的持久化机制,确保系统重启或崩溃时未完成的任务不会丢失。所有任务处理函数必须设计为幂等操作,支持任务重新执行而不产生副作用
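FR-003 描述的死锁重试策略可以草拟为一个事务包装函数。以下为示意草稿:`errDeadlock` 用于模拟 PostgreSQL 的 SQLSTATE 40P01 错误,实际代码中应通过驱动返回的错误码(如 `pgconn.PgError.Code == "40P01"`)判断:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// errDeadlock 模拟 PostgreSQL 死锁错误(SQLSTATE 40P01),实际应按错误码识别
var errDeadlock = errors.New("deadlock detected (SQLSTATE 40P01)")

const maxDeadlockRetries = 3 // FR-003:最多重试 3 次

// withDeadlockRetry 执行事务函数,遇到死锁错误时按 50-100ms 随机延迟重试
func withDeadlockRetry(txFn func() error) error {
	var err error
	for i := 0; i < maxDeadlockRetries; i++ {
		if err = txFn(); err == nil || !errors.Is(err, errDeadlock) {
			return err // 成功或非死锁错误,直接返回
		}
		time.Sleep(time.Duration(50+rand.Intn(51)) * time.Millisecond)
	}
	return err // 超过重试次数,由上层转换为 409 错误
}

func main() {
	attempts := 0
	err := withDeadlockRetry(func() error {
		attempts++
		if attempts < 3 {
			return errDeadlock // 前两次模拟死锁
		}
		return nil
	})
	fmt.Println(attempts, err) // 3 <nil>
}
```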
### Technical Requirements (Constitution-Driven)
**Tech Stack Compliance**:
- [x] 所有数据库操作使用GORM (不直接使用 `database/sql`)
- [x] 数据库迁移使用golang-migrate (不使用GORM AutoMigrate)
- [x] 所有异步任务使用Asynq
- [x] 所有HTTP操作使用Fiber框架 (不使用 `net/http`)
- [x] 所有JSON操作使用sonic (不使用 `encoding/json`)
- [x] 所有日志使用Zap + Lumberjack.v2
- [x] 所有配置使用Viper
- [x] 使用Go官方工具链: `go fmt`, `go vet`, `golangci-lint`
**Architecture Requirements**:
- [x] 实现遵循 Handler → Service → Store → Model 分层架构
- [x] 依赖通过结构体字段注入(不使用构造函数模式)
- [x] 统一错误码定义在 `pkg/errors/`
- [x] 统一API响应通过 `pkg/response/`
- [x] 所有常量定义在 `pkg/constants/` (不使用魔法数字/字符串)
- [x] **不允许硬编码值: 3次及以上相同字面量必须定义为常量**
- [x] **已定义的常量必须使用(不允许重复硬编码)**
- [x] **代码注释优先使用中文(实现注释用中文)**
- [x] **日志消息使用中文(logger.Info/Warn/Error/Debug用中文)**
- [x] **错误消息支持中文(面向用户的错误有中文文本)**
- [x] 所有Redis键通过 `pkg/constants/` 键生成函数管理
- [x] 包结构扁平化,按功能组织(不按层级)
**Go Idiomatic Design Requirements**:
- [x] 不使用Java风格模式: 无getter/setter方法、无I-前缀接口、无Impl-后缀
- [x] 接口应小型化(1-3个方法),在使用处定义
- [x] 错误处理显式化(返回错误,不使用panic)
- [x] 使用组合(结构体嵌入)而非继承
- [x] 使用goroutines和channels处理并发
- [x] 命名遵循Go约定: `UserID` 不是 `userId`, `HTTPServer` 不是 `HttpServer`
- [x] 不使用匈牙利命名法或类型前缀
- [x] 代码结构简单直接
**API Design Requirements**:
- [x] 所有API遵循RESTful原则
- [x] 所有响应使用统一JSON格式,包含code/message/data/timestamp
- [x] 所有错误消息包含错误码和双语描述
- [x] 所有分页使用标准参数(page, page_size, total)
- [x] 所有时间字段使用ISO 8601格式(RFC3339)
- [x] 所有货币金额使用整数(分)
**Performance Requirements**:
- [x] API响应时间(P95) < 200ms
- [x] 数据库查询 < 50ms
- [x] 批量操作使用bulk查询
- [x] 列表查询实现分页(默认20条,最大100条)
- [x] 非实时操作委托给异步任务
- [x] 使用 `context.Context` 进行超时和取消控制
**Testing Requirements**:
- [x] Service层业务逻辑必须有单元测试
- [x] 所有API端点必须有集成测试
- [x] 所有异步任务处理函数必须有幂等性测试,验证重复执行的正确性
- [x] 测试使用Go标准testing框架,文件名为 `*_test.go`
- [x] 多测试用例使用表驱动测试
- [x] 测试相互独立,使用mocks/testcontainers
- [x] 目标覆盖率: 总体70%+, 核心业务逻辑90%+
### Key Entities
- **DatabaseConnection**: 代表与PostgreSQL数据库的连接,包含连接池配置、连接状态、健康检查等属性
- **DataModel**: 代表业务数据模型,通过ORM映射到数据库表,包含数据验证规则和关联关系
- **Task**: 代表异步任务,包含任务类型、任务参数、优先级、重试次数、执行状态等属性
- **TaskQueue**: 代表任务队列,管理任务的提交、调度、执行和状态跟踪
- **Worker**: 代表后台工作进程,从任务队列中获取任务并执行。每个Worker进程支持可配置的并发数(通过goroutine池实现),可以部署多个Worker进程实例实现水平扩展。不同Worker可订阅不同的任务队列,实现任务类型的资源隔离
## Success Criteria *(mandatory)*
### Measurable Outcomes
- **SC-001**: 数据库基本CRUD操作响应时间(P95)应小于50毫秒
- **SC-002**: 系统应支持至少1000个并发数据库请求(通过连接池复用连接),而不出现连接获取超时或连接池耗尽错误
- **SC-003**: 任务队列应能够处理每秒至少100个任务的提交速率
- **SC-004**: 异步任务从提交到开始执行的延迟(空闲情况下)应小于100毫秒
- **SC-005**: 数据持久化的可靠性应达到99.99%,即每10000次操作中失败不超过1次
- **SC-006**: 失败任务的自动重试成功率应达到90%以上
- **SC-007**: 系统启动时应在10秒内完成数据库连接和任务队列初始化
- **SC-008**: 数据库查询慢查询(超过100ms)的占比应小于1%
- **SC-009**: 系统关闭时应在30秒内优雅完成所有资源清理,不丢失正在执行的任务
- **SC-010**: 健康检查接口应在1秒内返回系统健康状态


@@ -0,0 +1,393 @@
# Tasks: 数据持久化与异步任务处理集成
**Feature**: 002-gorm-postgres-asynq
**Input**: Design documents from `/specs/002-gorm-postgres-asynq/`
**Prerequisites**: plan.md, spec.md, data-model.md, contracts/api.yaml, research.md, quickstart.md
**Organization**: Tasks are grouped by user story (US1: 数据存储与检索, US2: 异步任务处理, US3: 连接管理与监控) to enable independent implementation and testing.
## Format: `[ID] [P?] [Story] Description`
- **[P]**: Can run in parallel (different files, no dependencies)
- **[Story]**: Which user story this task belongs to (US1, US2, US3)
- Include exact file paths in descriptions
---
## Phase 1: Setup (Shared Infrastructure)
**Purpose**: Project initialization and basic structure (project already exists, validate/enhance)
- [ ] T001 Validate project structure matches plan.md (internal/, pkg/, cmd/, configs/, migrations/, tests/)
- [ ] T002 Validate Go dependencies for Fiber + GORM + Asynq + Viper + Zap + golang-migrate
- [ ] T003 [P] Validate unified error codes in pkg/errors/codes.go and pkg/errors/errors.go
- [ ] T004 [P] Validate unified API response in pkg/response/response.go
- [ ] T005 [P] Add database configuration constants in pkg/constants/constants.go (DefaultMaxOpenConns=25, DefaultMaxIdleConns=10, etc.)
- [ ] T006 [P] Add task queue constants in pkg/constants/constants.go (TaskTypeEmailSend, TaskTypeDataSync, QueueCritical, QueueDefault, etc.)
- [ ] T007 [P] Add user/order status constants in pkg/constants/constants.go (UserStatusActive, OrderStatusPending, etc.)
- [ ] T008 [P] Add Redis key generation functions in pkg/constants/redis.go (RedisTaskLockKey, RedisTaskStatusKey)
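T008 提到的 Redis 键生成函数可以按如下思路草拟(键前缀 `app` 与函数签名均为假设,以实际实现为准):

```go
package main

import "fmt"

// redisKeyPrefix 为假设的全局键前缀,实际值以 pkg/constants/ 中的定义为准
const redisKeyPrefix = "app"

// RedisTaskLockKey 生成任务幂等锁的键,形如 app:task:lock:<task_type>:<request_id>
func RedisTaskLockKey(taskType, requestID string) string {
	return fmt.Sprintf("%s:task:lock:%s:%s", redisKeyPrefix, taskType, requestID)
}

// RedisTaskStatusKey 生成任务状态键,形如 app:task:status:<task_id>
func RedisTaskStatusKey(taskID string) string {
	return fmt.Sprintf("%s:task:status:%s", redisKeyPrefix, taskID)
}

func main() {
	fmt.Println(RedisTaskLockKey("email:send", "req-123")) // app:task:lock:email:send:req-123
	fmt.Println(RedisTaskStatusKey("t-1"))                 // app:task:status:t-1
}
```

通过统一的键生成函数集中管理 Redis 键格式,可避免键名在各处硬编码,也方便后续调整前缀。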
---
## Phase 2: Foundational (Blocking Prerequisites)
**Purpose**: Core infrastructure that MUST be complete before ANY user story can be implemented
**⚠️ CRITICAL**: No user story work can begin until this phase is complete
- [ ] T009 Implement PostgreSQL connection initialization in pkg/database/postgres.go (GORM + connection pool)
- [ ] T010 Validate Redis connection initialization in pkg/database/redis.go (connection pool: PoolSize=10, MinIdleConns=5)
- [ ] T011 [P] Add DatabaseConfig to pkg/config/config.go (Host, Port, User, Password, MaxOpenConns, MaxIdleConns, ConnMaxLifetime)
- [ ] T012 [P] Add QueueConfig to pkg/config/config.go (Concurrency, Queues, RetryMax, Timeout)
- [ ] T013 [P] Update config.yaml files with database and queue configurations (config.dev.yaml, config.staging.yaml, config.prod.yaml)
- [ ] T014 Implement Asynq client initialization in pkg/queue/client.go (EnqueueTask with logging)
- [ ] T015 Implement Asynq server initialization in pkg/queue/server.go (with queue priorities and error handler)
- [ ] T016 Create base Store structure in internal/store/store.go with transaction support
- [ ] T017 Initialize postgres store in internal/store/postgres/store.go (embed UserStore, OrderStore)
- [ ] T018 Validate migrations directory structure (migrations/000001_init_schema.up.sql and .down.sql exist)
**Checkpoint**: Foundation ready - user story implementation can now begin in parallel
---
## Phase 3: User Story 1 - 可靠的数据存储与检索 (Priority: P1) 🎯 MVP
**Goal**: 实现可靠的数据持久化存储和高效的 CRUD 操作,确保数据一致性和完整性
**Independent Test**: 通过创建、读取、更新、删除用户和订单数据验证。包括基本 CRUD、事务提交、数据一致性验证等场景。
### Tests for User Story 1 (REQUIRED per Constitution)
> **NOTE: Write these tests FIRST, ensure they FAIL before implementation**
- [ ] T019 [P] [US1] Unit tests for User Store layer in tests/unit/store_test.go (Create, GetByID, Update, Delete, List)
- [ ] T020 [P] [US1] Unit tests for Order Store layer in tests/unit/store_test.go (Create, GetByID, Update, Delete, ListByUserID)
- [ ] T021 [P] [US1] Unit tests for User Service layer in tests/unit/service_test.go (business logic validation)
- [ ] T022 [P] [US1] Integration tests for User API endpoints in tests/integration/database_test.go (POST/GET/PUT/DELETE /users)
- [ ] T023 [P] [US1] Transaction rollback tests in tests/unit/store_test.go (verify atomic operations)
### Implementation for User Story 1
**Models & DTOs**:
- [ ] T024 [P] [US1] Validate BaseModel in internal/model/base.go (ID, CreatedAt, UpdatedAt, DeletedAt)
- [ ] T025 [P] [US1] Validate User model in internal/model/user.go with GORM tags (Username, Email, Password, Status)
- [ ] T026 [P] [US1] Validate Order model in internal/model/order.go with GORM tags (OrderID, UserID, Amount, Status)
- [ ] T027 [P] [US1] Validate User DTOs in internal/model/user_dto.go (CreateUserRequest, UpdateUserRequest, UserResponse, ListUsersResponse)
- [ ] T028 [P] [US1] Create Order DTOs in internal/model/order_dto.go (CreateOrderRequest, UpdateOrderRequest, OrderResponse, ListOrdersResponse)
**Store Layer (Data Access)**:
- [ ] T029 [US1] Implement UserStore in internal/store/postgres/user_store.go (Create, GetByID, Update, Delete, List with pagination)
- [ ] T030 [US1] Implement OrderStore in internal/store/postgres/order_store.go (Create, GetByID, Update, Delete, ListByUserID)
- [ ] T031 [US1] Add context timeout handling (3s default) and slow query logging (>100ms) in Store methods
**Service Layer (Business Logic)**:
- [ ] T032 [US1] Implement UserService in internal/service/user/service.go (CreateUser, GetUserByID, UpdateUser, DeleteUser, ListUsers)
- [ ] T033 [US1] Implement OrderService in internal/service/order/service.go (CreateOrder, GetOrderByID, UpdateOrder, DeleteOrder, ListOrdersByUserID)
- [ ] T034 [US1] Add password hashing (bcrypt) in UserService.CreateUser
- [ ] T035 [US1] Add validation logic in Service layer using Validator
- [ ] T036 [US1] Implement transaction example in OrderService (CreateOrderWithUser)
**Handler Layer (HTTP Endpoints)**:
- [ ] T037 [US1] Validate/enhance User Handler in internal/handler/user.go (Create, GetByID, Update, Delete, List endpoints)
- [ ] T038 [US1] Create Order Handler in internal/handler/order.go (Create, GetByID, Update, Delete, List endpoints)
- [ ] T039 [US1] Add request validation using Validator in handlers
- [ ] T040 [US1] Add unified error handling using pkg/errors/ and pkg/response/ in handlers
- [ ] T041 [US1] Add structured logging with Zap in handlers (log user_id, order_id, operation, duration)
- [ ] T042 [US1] Register User routes in cmd/api/main.go (POST/GET/PUT/DELETE /api/v1/users, /api/v1/users/:id)
- [ ] T043 [US1] Register Order routes in cmd/api/main.go (POST/GET/PUT/DELETE /api/v1/orders, /api/v1/orders/:id)
**Database Migrations**:
- [ ] T044 [US1] Validate migration 000001_init_schema.up.sql (tb_user and tb_order tables with indexes)
- [ ] T045 [US1] Validate migration 000001_init_schema.down.sql (DROP tables)
- [ ] T046 [US1] Test migration up/down with scripts/migrate.sh
**Checkpoint**: At this point, User Story 1 should be fully functional and testable independently
---
## Phase 4: User Story 2 - 异步任务处理能力 (Priority: P2)
**Goal**: 实现耗时操作的后台异步执行,避免阻塞用户请求,提升系统响应速度
**Independent Test**: 提交耗时任务(如发送邮件),验证任务被成功加入队列,用户请求立即返回,后台 Worker 完成任务执行。
### Tests for User Story 2 (REQUIRED per Constitution)
- [ ] T047 [P] [US2] Unit tests for Email task handler in tests/unit/task_handler_test.go (HandleEmailSend idempotency)
- [ ] T048 [P] [US2] Unit tests for Sync task handler in tests/unit/task_handler_test.go (HandleDataSync idempotency)
- [ ] T049 [P] [US2] Integration tests for task submission in tests/integration/task_test.go (EnqueueEmailTask, EnqueueSyncTask)
- [ ] T050 [P] [US2] Integration tests for task queue in tests/integration/task_test.go (verify Worker processes tasks)
### Implementation for User Story 2
**Task Payloads**:
- [ ] T051 [P] [US2] Validate EmailPayload in internal/task/email.go (RequestID, To, Subject, Body, CC, Attachments)
- [ ] T052 [P] [US2] Validate DataSyncPayload in internal/task/sync.go (RequestID, SyncType, StartDate, EndDate, BatchSize)
- [ ] T053 [P] [US2] Create SIMStatusSyncPayload in internal/task/sim.go (RequestID, ICCIDs, ForceSync)
**Task Handlers (Worker)**:
- [ ] T054 [US2] Implement HandleEmailSend in internal/task/email.go (with Redis idempotency lock and retry)
- [ ] T055 [US2] Implement HandleDataSync in internal/task/sync.go (with idempotency and batch processing)
- [ ] T056 [US2] Implement HandleSIMStatusSync in internal/task/sim.go (with idempotency)
- [ ] T057 [US2] Add structured logging in task handlers (task_id, task_type, request_id, duration)
- [ ] T058 [US2] Add error handling and retry logic in task handlers (max 5 retries, exponential backoff)
**Service Integration**:
- [ ] T059 [US2] Implement EmailService in internal/service/email/service.go (SendWelcomeEmail, EnqueueEmailTask)
- [ ] T060 [US2] Implement SyncService in internal/service/sync/service.go (EnqueueDataSyncTask, EnqueueSIMStatusSyncTask)
- [ ] T061 [US2] Add Queue Client dependency injection in Service constructors
**Handler Layer (Task Submission)**:
- [ ] T062 [US2] Validate/enhance Task Handler in internal/handler/task.go (SubmitEmailTask, SubmitSyncTask endpoints)
- [ ] T063 [US2] Add request validation for task payloads in handler
- [ ] T064 [US2] Add priority queue selection logic (critical/default/low) in handler
- [ ] T065 [US2] Register task routes in cmd/api/main.go (POST /api/v1/tasks/email, POST /api/v1/tasks/sync)
**Worker Process**:
- [ ] T066 [US2] Validate Worker main in cmd/worker/main.go (initialize Server, register handlers, graceful shutdown)
- [ ] T067 [US2] Register task handlers in Worker (HandleEmailSend, HandleDataSync, HandleSIMStatusSync)
- [ ] T068 [US2] Add signal handling for graceful shutdown in Worker (SIGINT, SIGTERM)
**Checkpoint**: At this point, User Stories 1 AND 2 should both work independently
---
## Phase 5: User Story 3 - 数据库连接管理与监控 (Priority: P3)
**Goal**: 监控数据库连接状态、查询性能和任务队列健康度,确保系统稳定运行
**Independent Test**: 通过健康检查接口验证数据库和 Redis 连接状态,模拟连接失败场景验证容错能力。
### Tests for User Story 3 (REQUIRED per Constitution)
- [ ] T069 [P] [US3] Integration tests for health check in tests/integration/health_test.go (GET /health returns 200 when healthy)
- [ ] T070 [P] [US3] Integration tests for degraded state in tests/integration/health_test.go (503 when database down)
- [ ] T071 [P] [US3] Unit tests for graceful shutdown in tests/unit/shutdown_test.go (verify connections closed)
### Implementation for User Story 3
**Health Check**:
- [ ] T072 [US3] Validate/enhance Health Handler in internal/handler/health.go (check PostgreSQL and Redis status)
- [ ] T073 [US3] Add database Ping check with timeout in Health Handler
- [ ] T074 [US3] Add Redis Ping check with timeout in Health Handler
- [ ] T075 [US3] Return appropriate status codes (200 ok, 503 degraded/unavailable)
- [ ] T076 [US3] Register health route in cmd/api/main.go (GET /health)
**Connection Management**:
- [ ] T077 [US3] Add connection pool monitoring in pkg/database/postgres.go (log Stats: OpenConnections, InUse, Idle)
- [ ] T078 [US3] Add connection retry logic in pkg/database/postgres.go (max 5 retries, exponential backoff)
- [ ] T079 [US3] Add slow query logging middleware in pkg/logger/middleware.go (log queries >100ms)
**Graceful Shutdown**:
- [ ] T080 [US3] Implement graceful shutdown in cmd/api/main.go (close DB, Redis, wait for requests, max 30s timeout)
- [ ] T081 [US3] Validate graceful shutdown in cmd/worker/main.go (stop accepting tasks, wait for completion, max 30s)
- [ ] T082 [US3] Add signal handling (SIGINT, SIGTERM) in both API and Worker processes
**Checkpoint**: All user stories should now be independently functional
---
## Phase 6: Polish & Quality Gates
**Purpose**: Improvements that affect multiple user stories and final quality checks
### Documentation (Constitution Principle VII - REQUIRED)
- [ ] T083 [P] Create feature summary doc in docs/002-gorm-postgres-asynq/功能总结.md (Chinese filename and content)
- [ ] T084 [P] Create usage guide in docs/002-gorm-postgres-asynq/使用指南.md (Chinese filename and content)
- [ ] T085 [P] Create architecture doc in docs/002-gorm-postgres-asynq/架构说明.md (Chinese filename and content)
- [ ] T086 Update README.md with brief feature description (2-3 sentences in Chinese)
### Code Quality
- [ ] T087 Code cleanup: Remove unused imports, variables, and functions
- [ ] T088 Code refactoring: Extract duplicate logic into helper functions
- [ ] T089 Performance optimization: Add database indexes for common queries (username, email, order_id, user_id, status)
- [ ] T090 Performance testing: Verify API response time P95 < 200ms, P99 < 500ms
- [ ] T091 [P] Additional unit tests to reach 70%+ overall coverage, 90%+ for Service layer
- [ ] T092 Security audit: Verify no SQL injection (GORM uses prepared statements)
- [ ] T093 Security audit: Verify password storage uses bcrypt hashing
- [ ] T094 Security audit: Verify sensitive data not logged (passwords, tokens)
- [ ] T095 Run quickstart.md validation (test all curl examples work)
### Quality Gates (Constitution Compliance)
- [ ] T096 Quality Gate: Run `go test ./...` (all tests pass)
- [ ] T097 Quality Gate: Run `gofmt -l .` (no formatting issues)
- [ ] T098 Quality Gate: Run `go vet ./...` (no issues)
- [ ] T099 Quality Gate: Run `golangci-lint run` (no critical issues)
- [ ] T100 Quality Gate: Verify test coverage with `go test -cover ./...` (70%+ overall, 90%+ Service)
- [ ] T101 Quality Gate: Check no TODO/FIXME remains (or documented in GitHub issues)
- [ ] T102 Quality Gate: Verify database migrations work (up and down)
- [ ] T103 Quality Gate: Verify API documentation in contracts/api.yaml matches implementation
- [ ] T104 Quality Gate: Verify no hardcoded constants (all use pkg/constants/)
- [ ] T105 Quality Gate: Verify no duplicate hardcoded values (3+ identical literals must be constants)
- [ ] T106 Quality Gate: Verify defined constants are used (no duplicate hardcoding)
- [ ] T107 Quality Gate: Verify code comments use Chinese (implementation comments in Chinese)
- [ ] T108 Quality Gate: Verify log messages use Chinese (logger.Info/Warn/Error/Debug in Chinese)
- [ ] T109 Quality Gate: Verify error messages support Chinese (user-facing errors have Chinese text)
- [ ] T110 Quality Gate: Verify no Java-style patterns (no getter/setter, no I-prefix, no Impl-suffix)
- [ ] T111 Quality Gate: Verify Go naming conventions (UserID not userId, HTTPServer not HttpServer)
- [ ] T112 Quality Gate: Verify error handling is explicit (no panic/recover in business logic)
- [ ] T113 Quality Gate: Verify uses goroutines/channels for concurrency (not thread pools)
- [ ] T114 Quality Gate: Verify no ORM associations (foreignKey, belongsTo tags - use manual joins)
- [ ] T115 Quality Gate: Verify feature docs created in docs/002-gorm-postgres-asynq/ with Chinese filenames
- [ ] T116 Quality Gate: Verify summary doc content uses Chinese
- [ ] T117 Quality Gate: Verify README.md updated with brief description
- [ ] T118 Quality Gate: Verify ALL HTTP requests logged to access.log (via pkg/logger/Middleware())
- [ ] T119 Quality Gate: Verify access log includes request/response bodies (limited to 50KB)
- [ ] T120 Quality Gate: Verify no middleware bypasses logging (test auth failures, rate limits)
- [ ] T121 Quality Gate: Verify access log has all required fields (method, path, status, duration_ms, request_id, ip, user_agent, request_body, response_body)
---
## Dependencies & Execution Order
### Phase Dependencies
- **Setup (Phase 1)**: No dependencies - can start immediately
- **Foundational (Phase 2)**: Depends on Setup completion - BLOCKS all user stories
- **User Stories (Phase 3-5)**: All depend on Foundational phase completion
- User Story 1 (P1): Can start after Foundational - No dependencies on other stories
- User Story 2 (P2): Can start after Foundational - Independent (may integrate with US1 for examples)
- User Story 3 (P3): Can start after Foundational - Independent
- **Polish (Phase 6)**: Depends on all user stories being complete
### User Story Independence
- **US1 (P1)**: Fully independent - can be tested and deployed alone (MVP)
- **US2 (P2)**: Fully independent - can be tested and deployed alone (may reference US1 models as examples)
- **US3 (P3)**: Fully independent - can be tested and deployed alone
### Within Each User Story
- Tests MUST be written and FAIL before implementation
- Models → Store → Service → Handler → Routes
- Core implementation before integration
- Story complete before moving to next priority
### Parallel Opportunities
**Phase 1 (Setup)**:
- T003, T004, T005, T006, T007, T008 can all run in parallel
**Phase 2 (Foundational)**:
- T011, T012, T013 (config) can run in parallel
- T014, T015 (queue client/server) can run in parallel after config
**Phase 3 (User Story 1)**:
- T019-T023 (all tests) can run in parallel
- T024-T028 (all models/DTOs) can run in parallel
- T029, T030 (Store implementations) can run in parallel after models
- T032, T033 (Service implementations) can run in parallel after Store
**Phase 4 (User Story 2)**:
- T047-T050 (all tests) can run in parallel
- T051-T053 (all payloads) can run in parallel
- T054-T056 (all task handlers) can run in parallel after payloads
- T059, T060 (Service implementations) can run in parallel after handlers
**Phase 5 (User Story 3)**:
- T069-T071 (all tests) can run in parallel
- T073, T074 (Ping checks) can run in parallel
- T077, T078, T079 (connection management) can run in parallel
**Phase 6 (Polish)**:
- T083-T085 (all docs) can run in parallel
- T096-T121 (quality gates) run sequentially but can be automated in CI
---
## Parallel Example: User Story 1
```bash
# Launch all tests together:
go test -v tests/unit/store_test.go & # T019, T020
go test -v tests/unit/service_test.go & # T021
go test -v tests/integration/database_test.go & # T022
wait
# Launch all models together:
Task: "Validate User model in internal/model/user.go" # T025
Task: "Validate Order model in internal/model/order.go" # T026
Task: "Validate User DTOs" # T027
Task: "Create Order DTOs" # T028
# Launch both Store implementations together:
Task: "Implement UserStore" # T029
Task: "Implement OrderStore" # T030
```
---
## Implementation Strategy
### MVP First (User Story 1 Only)
1. Complete Phase 1: Setup (T001-T008)
2. Complete Phase 2: Foundational (T009-T018) - CRITICAL
3. Complete Phase 3: User Story 1 (T019-T046)
4. **STOP and VALIDATE**: Test CRUD operations independently
5. Deploy/demo if ready
### Incremental Delivery
1. Setup + Foundational → Foundation ready
2. Add User Story 1 → Test independently → Deploy/Demo (MVP! 🎯)
3. Add User Story 2 → Test independently → Deploy/Demo
4. Add User Story 3 → Test independently → Deploy/Demo
5. Polish → Final quality checks → Production ready
### Parallel Team Strategy
With multiple developers:
1. Team completes Setup (Phase 1) + Foundational (Phase 2) together
2. Once Foundational is done:
- Developer A: User Story 1 (T019-T046)
- Developer B: User Story 2 (T047-T068)
- Developer C: User Story 3 (T069-T082)
3. Stories complete and integrate independently
4. Team reconvenes for Polish (Phase 6)
---
## Notes
- [P] tasks = different files, no dependencies, can run in parallel
- [Story] label (US1, US2, US3) maps task to specific user story for traceability
- Each user story is independently completable and testable
- Tests are written FIRST and should FAIL before implementation (TDD approach)
- Commit after each task or logical group
- Stop at any checkpoint to validate story independently
- Project structure already exists - tasks validate/enhance existing code where noted
- Avoid: vague tasks, same file conflicts, cross-story dependencies that break independence
---
## Task Count Summary
- **Total Tasks**: 121
- **Phase 1 (Setup)**: 8 tasks
- **Phase 2 (Foundational)**: 10 tasks
- **Phase 3 (User Story 1)**: 28 tasks (5 tests + 23 implementation)
- **Phase 4 (User Story 2)**: 22 tasks (4 tests + 18 implementation)
- **Phase 5 (User Story 3)**: 14 tasks (3 tests + 11 implementation)
- **Phase 6 (Polish)**: 39 tasks (4 docs + 35 quality gates)
**Parallel Opportunities**: ~40 tasks marked [P] can run in parallel within their phases
**Suggested MVP Scope**: Phase 1 + Phase 2 + Phase 3 (User Story 1) = 46 tasks