Implement automatic OpenAPI documentation generation on service startup

Main changes:
1. Add cmd/api/docs.go with the documentation auto-generation logic
2. Modify cmd/api/main.go to trigger doc generation on service startup
3. Refactor cmd/gendocs/main.go to extract the generation function
4. Update .gitignore to exclude the auto-generated openapi.yaml
5. Add a Makefile with a `make docs` target
6. Update the OpenSpec framework and archive completed changes

Features:
- Generates the OpenAPI document into the project root on service startup
- Retains the standalone doc-generation tool (make docs)
- Logs generation failures without blocking service startup
- All changes pass openspec validate --strict
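A minimal sketch of the startup hook described above, assuming hypothetical names (the actual code in cmd/api/docs.go is not shown in this view):

```go
package main

import "log"

// generateOpenAPIDocs stands in for the generator extracted from
// cmd/gendocs; assume it writes openapi.yaml into the project root.
func generateOpenAPIDocs(path string) error {
	// ... build the spec from route metadata and marshal it to YAML ...
	return nil
}

func main() {
	// Per the behaviour above: log generation failures, never block startup.
	if err := generateOpenAPIDocs("openapi.yaml"); err != nil {
		log.Printf("openapi doc generation failed, continuing startup: %v", err)
	}
	// ... register routes and start the HTTP server ...
}
```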

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
commit 6fc90abeb6
parent ddbc69135d
Date: 2026-01-09 12:25:50 +08:00
47 changed files with 1095 additions and 5519 deletions


@@ -10,6 +10,7 @@ tags: [openspec, change]
- Keep changes tightly scoped to the requested outcome.
- Refer to `openspec/AGENTS.md` (located inside the `openspec/` directory—run `ls openspec` or `openspec update` if you don't see it) if you need additional OpenSpec conventions or clarifications.
- Identify any vague or ambiguous details and ask the necessary follow-up questions before editing files.
- Do not write any code during the proposal stage. Only create design documents (proposal.md, tasks.md, design.md, and spec deltas). Implementation happens in the apply stage after approval.
**Steps**
1. Review `openspec/project.md`, run `openspec list` and `openspec list --specs`, and inspect related code or docs (e.g., via `rg`/`ls`) to ground the proposal in current behaviour; note any gaps that require clarification.


@@ -1,184 +0,0 @@
---
description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Goal
Identify inconsistencies, duplications, ambiguities, and underspecified items across the three core artifacts (`spec.md`, `plan.md`, `tasks.md`) before implementation. This command MUST run only after `/speckit.tasks` has successfully produced a complete `tasks.md`.
## Operating Constraints
**STRICTLY READ-ONLY**: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (user must explicitly approve before any follow-up editing commands would be invoked manually).
**Constitution Authority**: The project constitution (`.specify/memory/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks—not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/speckit.analyze`.
## Execution Steps
### 1. Initialize Analysis Context
Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` once from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS. Derive absolute paths:
- SPEC = FEATURE_DIR/spec.md
- PLAN = FEATURE_DIR/plan.md
- TASKS = FEATURE_DIR/tasks.md
Abort with an error message if any required file is missing (instruct the user to run missing prerequisite command).
For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
### 2. Load Artifacts (Progressive Disclosure)
Load only the minimal necessary context from each artifact:
**From spec.md:**
- Overview/Context
- Functional Requirements
- Non-Functional Requirements
- User Stories
- Edge Cases (if present)
**From plan.md:**
- Architecture/stack choices
- Data Model references
- Phases
- Technical constraints
**From tasks.md:**
- Task IDs
- Descriptions
- Phase grouping
- Parallel markers [P]
- Referenced file paths
**From constitution:**
- Load `.specify/memory/constitution.md` for principle validation
### 3. Build Semantic Models
Create internal representations (do not include raw artifacts in output):
- **Requirements inventory**: Each functional + non-functional requirement with a stable key (derive slug based on imperative phrase; e.g., "User can upload file" → `user-can-upload-file`)
- **User story/action inventory**: Discrete user actions with acceptance criteria
- **Task coverage mapping**: Map each task to one or more requirements or stories (inference by keyword / explicit reference patterns like IDs or key phrases)
- **Constitution rule set**: Extract principle names and MUST/SHOULD normative statements
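A minimal sketch of the slug rule above; the command itself does not prescribe an implementation, so the helper name is illustrative:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// slug derives a stable requirement key from an imperative phrase,
// e.g. "User can upload file" -> "user-can-upload-file".
func slug(phrase string) string {
	s := strings.ToLower(phrase)
	s = regexp.MustCompile(`[^a-z0-9]+`).ReplaceAllString(s, "-")
	return strings.Trim(s, "-")
}

func main() {
	fmt.Println(slug("User can upload file")) // user-can-upload-file
}
```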
### 4. Detection Passes (Token-Efficient Analysis)
Focus on high-signal findings. Limit to 50 findings total; aggregate remainder in overflow summary.
#### A. Duplication Detection
- Identify near-duplicate requirements
- Mark lower-quality phrasing for consolidation
#### B. Ambiguity Detection
- Flag vague adjectives (fast, scalable, secure, intuitive, robust) lacking measurable criteria
- Flag unresolved placeholders (TODO, TKTK, ???, `<placeholder>`, etc.)
#### C. Underspecification
- Requirements with verbs but missing object or measurable outcome
- User stories missing acceptance criteria alignment
- Tasks referencing files or components not defined in spec/plan
#### D. Constitution Alignment
- Any requirement or plan element conflicting with a MUST principle
- Missing mandated sections or quality gates from constitution
#### E. Coverage Gaps
- Requirements with zero associated tasks
- Tasks with no mapped requirement/story
- Non-functional requirements not reflected in tasks (e.g., performance, security)
#### F. Inconsistency
- Terminology drift (same concept named differently across files)
- Data entities referenced in plan but absent in spec (or vice versa)
- Task ordering contradictions (e.g., integration tasks before foundational setup tasks without dependency note)
- Conflicting requirements (e.g., one requires Next.js while other specifies Vue)
### 5. Severity Assignment
Use this heuristic to prioritize findings:
- **CRITICAL**: Violates constitution MUST, missing core spec artifact, or requirement with zero coverage that blocks baseline functionality
- **HIGH**: Duplicate or conflicting requirement, ambiguous security/performance attribute, untestable acceptance criterion
- **MEDIUM**: Terminology drift, missing non-functional task coverage, underspecified edge case
- **LOW**: Style/wording improvements, minor redundancy not affecting execution order
### 6. Produce Compact Analysis Report
Output a Markdown report (no file writes) with the following structure:
## Specification Analysis Report
| ID | Category | Severity | Location(s) | Summary | Recommendation |
|----|----------|----------|-------------|---------|----------------|
| A1 | Duplication | HIGH | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep clearer version |
(Add one row per finding; generate stable IDs prefixed by category initial.)
**Coverage Summary Table:**
| Requirement Key | Has Task? | Task IDs | Notes |
|-----------------|-----------|----------|-------|
**Constitution Alignment Issues:** (if any)
**Unmapped Tasks:** (if any)
**Metrics:**
- Total Requirements
- Total Tasks
- Coverage % (requirements with >=1 task)
- Ambiguity Count
- Duplication Count
- Critical Issues Count
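The coverage metric above, as a small sketch; the input mirrors the task-coverage mapping from step 3, and names are illustrative:

```go
package main

import "fmt"

// coveragePercent: share of requirements with at least one mapped task.
func coveragePercent(tasksByRequirement map[string][]string) float64 {
	if len(tasksByRequirement) == 0 {
		return 0
	}
	covered := 0
	for _, tasks := range tasksByRequirement {
		if len(tasks) > 0 {
			covered++
		}
	}
	return 100 * float64(covered) / float64(len(tasksByRequirement))
}

func main() {
	m := map[string][]string{
		"user-can-upload-file": {"T001", "T004"},
		"performance-metrics":  {}, // zero coverage -> flagged in pass E
	}
	fmt.Printf("%.0f%%\n", coveragePercent(m)) // 50%
}
```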
### 7. Provide Next Actions
At end of report, output a concise Next Actions block:
- If CRITICAL issues exist: Recommend resolving before `/speckit.implement`
- If only LOW/MEDIUM: User may proceed, but provide improvement suggestions
- Provide explicit command suggestions: e.g., "Run /speckit.specify with refinement", "Run /speckit.plan to adjust architecture", "Manually edit tasks.md to add coverage for 'performance-metrics'"
### 8. Offer Remediation
Ask the user: "Would you like me to suggest concrete remediation edits for the top N issues?" (Do NOT apply them automatically.)
## Operating Principles
### Context Efficiency
- **Minimal high-signal tokens**: Focus on actionable findings, not exhaustive documentation
- **Progressive disclosure**: Load artifacts incrementally; don't dump all content into analysis
- **Token-efficient output**: Limit findings table to 50 rows; summarize overflow
- **Deterministic results**: Rerunning without changes should produce consistent IDs and counts
### Analysis Guidelines
- **NEVER modify files** (this is read-only analysis)
- **NEVER hallucinate missing sections** (if absent, report them accurately)
- **Prioritize constitution violations** (these are always CRITICAL)
- **Use examples over exhaustive rules** (cite specific instances, not generic patterns)
- **Report zero issues gracefully** (emit success report with coverage statistics)
## Context
$ARGUMENTS


@@ -1,294 +0,0 @@
---
description: Generate a custom checklist for the current feature based on user requirements.
---
## Checklist Purpose: "Unit Tests for English"
**CRITICAL CONCEPT**: Checklists are **UNIT TESTS FOR REQUIREMENTS WRITING** - they validate the quality, clarity, and completeness of requirements in a given domain.
**NOT for verification/testing**:
- ❌ NOT "Verify the button clicks correctly"
- ❌ NOT "Test error handling works"
- ❌ NOT "Confirm the API returns 200"
- ❌ NOT checking if code/implementation matches the spec
**FOR requirements quality validation**:
- ✅ "Are visual hierarchy requirements defined for all card types?" (completeness)
- ✅ "Is 'prominent display' quantified with specific sizing/positioning?" (clarity)
- ✅ "Are hover state requirements consistent across all interactive elements?" (consistency)
- ✅ "Are accessibility requirements defined for keyboard navigation?" (coverage)
- ✅ "Does the spec define what happens when logo image fails to load?" (edge cases)
**Metaphor**: If your spec is code written in English, the checklist is its unit test suite. You're testing whether the requirements are well-written, complete, unambiguous, and ready for implementation - NOT whether the implementation works.
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Execution Steps
1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS list.
- All file paths must be absolute.
- For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
2. **Clarify intent (dynamic)**: Derive up to THREE initial contextual clarifying questions (no pre-baked catalog). They MUST:
- Be generated from the user's phrasing + extracted signals from spec/plan/tasks
- Only ask about information that materially changes checklist content
- Be skipped individually if already unambiguous in `$ARGUMENTS`
- Prefer precision over breadth
Generation algorithm:
1. Extract signals: feature domain keywords (e.g., auth, latency, UX, API), risk indicators ("critical", "must", "compliance"), stakeholder hints ("QA", "review", "security team"), and explicit deliverables ("a11y", "rollback", "contracts").
2. Cluster signals into candidate focus areas (max 4) ranked by relevance.
3. Identify probable audience & timing (author, reviewer, QA, release) if not explicit.
4. Detect missing dimensions: scope breadth, depth/rigor, risk emphasis, exclusion boundaries, measurable acceptance criteria.
5. Formulate questions chosen from these archetypes:
- Scope refinement (e.g., "Should this include integration touchpoints with X and Y or stay limited to local module correctness?")
- Risk prioritization (e.g., "Which of these potential risk areas should receive mandatory gating checks?")
- Depth calibration (e.g., "Is this a lightweight pre-commit sanity list or a formal release gate?")
- Audience framing (e.g., "Will this be used by the author only or peers during PR review?")
- Boundary exclusion (e.g., "Should we explicitly exclude performance tuning items this round?")
- Scenario class gap (e.g., "No recovery flows detected—are rollback / partial failure paths in scope?")
Question formatting rules:
- If presenting options, generate a compact table with columns: Option | Candidate | Why It Matters
- Limit to A–E options maximum; omit table if a free-form answer is clearer
- Never ask the user to restate what they already said
- Avoid speculative categories (no hallucination). If uncertain, ask explicitly: "Confirm whether X belongs in scope."
Defaults when interaction impossible:
- Depth: Standard
- Audience: Reviewer (PR) if code-related; Author otherwise
- Focus: Top 2 relevance clusters
Output the questions (label Q1/Q2/Q3). After answers: if ≥2 scenario classes (Alternate / Exception / Recovery / Non-Functional domain) remain unclear, you MAY ask up to TWO more targeted follow-ups (Q4/Q5) with a one-line justification each (e.g., "Unresolved recovery path risk"). Do not exceed five total questions. Skip escalation if user explicitly declines more.
3. **Understand user request**: Combine `$ARGUMENTS` + clarifying answers:
- Derive checklist theme (e.g., security, review, deploy, ux)
- Consolidate explicit must-have items mentioned by user
- Map focus selections to category scaffolding
- Infer any missing context from spec/plan/tasks (do NOT hallucinate)
4. **Load feature context**: Read from FEATURE_DIR:
- spec.md: Feature requirements and scope
- plan.md (if exists): Technical details, dependencies
- tasks.md (if exists): Implementation tasks
**Context Loading Strategy**:
- Load only necessary portions relevant to active focus areas (avoid full-file dumping)
- Prefer summarizing long sections into concise scenario/requirement bullets
- Use progressive disclosure: add follow-on retrieval only if gaps detected
- If source docs are large, generate interim summary items instead of embedding raw text
5. **Generate checklist** - Create "Unit Tests for Requirements":
- Create `FEATURE_DIR/checklists/` directory if it doesn't exist
- Generate unique checklist filename:
- Use short, descriptive name based on domain (e.g., `ux.md`, `api.md`, `security.md`)
- Format: `[domain].md`
- If file exists, append to existing file
- Number items sequentially starting from CHK001
- Each `/speckit.checklist` run creates a NEW file (never overwrites existing checklists)
**CORE PRINCIPLE - Test the Requirements, Not the Implementation**:
Every checklist item MUST evaluate the REQUIREMENTS THEMSELVES for:
- **Completeness**: Are all necessary requirements present?
- **Clarity**: Are requirements unambiguous and specific?
- **Consistency**: Do requirements align with each other?
- **Measurability**: Can requirements be objectively verified?
- **Coverage**: Are all scenarios/edge cases addressed?
**Category Structure** - Group items by requirement quality dimensions:
- **Requirement Completeness** (Are all necessary requirements documented?)
- **Requirement Clarity** (Are requirements specific and unambiguous?)
- **Requirement Consistency** (Do requirements align without conflicts?)
- **Acceptance Criteria Quality** (Are success criteria measurable?)
- **Scenario Coverage** (Are all flows/cases addressed?)
- **Edge Case Coverage** (Are boundary conditions defined?)
- **Non-Functional Requirements** (Performance, Security, Accessibility, etc. - are they specified?)
- **Dependencies & Assumptions** (Are they documented and validated?)
- **Ambiguities & Conflicts** (What needs clarification?)
**HOW TO WRITE CHECKLIST ITEMS - "Unit Tests for English"**:
**WRONG** (Testing implementation):
- "Verify landing page displays 3 episode cards"
- "Test hover states work on desktop"
- "Confirm logo click navigates home"
**CORRECT** (Testing requirements quality):
- "Are the exact number and layout of featured episodes specified?" [Completeness]
- "Is 'prominent display' quantified with specific sizing/positioning?" [Clarity]
- "Are hover state requirements consistent across all interactive elements?" [Consistency]
- "Are keyboard navigation requirements defined for all interactive UI?" [Coverage]
- "Is the fallback behavior specified when logo image fails to load?" [Edge Cases]
- "Are loading states defined for asynchronous episode data?" [Completeness]
- "Does the spec define visual hierarchy for competing UI elements?" [Clarity]
**ITEM STRUCTURE**:
Each item should follow this pattern:
- Question format asking about requirement quality
- Focus on what's WRITTEN (or not written) in the spec/plan
- Include quality dimension in brackets [Completeness/Clarity/Consistency/etc.]
- Reference spec section `[Spec §X.Y]` when checking existing requirements
- Use `[Gap]` marker when checking for missing requirements
**EXAMPLES BY QUALITY DIMENSION**:
Completeness:
- "Are error handling requirements defined for all API failure modes? [Gap]"
- "Are accessibility requirements specified for all interactive elements? [Completeness]"
- "Are mobile breakpoint requirements defined for responsive layouts? [Gap]"
Clarity:
- "Is 'fast loading' quantified with specific timing thresholds? [Clarity, Spec §NFR-2]"
- "Are 'related episodes' selection criteria explicitly defined? [Clarity, Spec §FR-5]"
- "Is 'prominent' defined with measurable visual properties? [Ambiguity, Spec §FR-4]"
Consistency:
- "Do navigation requirements align across all pages? [Consistency, Spec §FR-10]"
- "Are card component requirements consistent between landing and detail pages? [Consistency]"
Coverage:
- "Are requirements defined for zero-state scenarios (no episodes)? [Coverage, Edge Case]"
- "Are concurrent user interaction scenarios addressed? [Coverage, Gap]"
- "Are requirements specified for partial data loading failures? [Coverage, Exception Flow]"
Measurability:
- "Are visual hierarchy requirements measurable/testable? [Acceptance Criteria, Spec §FR-1]"
- "Can 'balanced visual weight' be objectively verified? [Measurability, Spec §FR-2]"
**Scenario Classification & Coverage** (Requirements Quality Focus):
- Check if requirements exist for: Primary, Alternate, Exception/Error, Recovery, Non-Functional scenarios
- For each scenario class, ask: "Are [scenario type] requirements complete, clear, and consistent?"
- If scenario class missing: "Are [scenario type] requirements intentionally excluded or missing? [Gap]"
- Include resilience/rollback when state mutation occurs: "Are rollback requirements defined for migration failures? [Gap]"
**Traceability Requirements**:
- MINIMUM: ≥80% of items MUST include at least one traceability reference
- Each item should reference: spec section `[Spec §X.Y]`, or use markers: `[Gap]`, `[Ambiguity]`, `[Conflict]`, `[Assumption]`
- If no ID system exists: "Is a requirement & acceptance criteria ID scheme established? [Traceability]"
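One possible check for the 80% traceability floor above (file path is illustrative; markers are the ones listed in this step):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("checklists/ux.md") // illustrative path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	markers := []string{"[Spec §", "[Gap]", "[Ambiguity]", "[Conflict]", "[Assumption]"}
	items, traced := 0, 0
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		// Checklist items look like "- [ ] CHK001 - ..." (or checked variants).
		if !strings.HasPrefix(line, "- [ ] CHK") &&
			!strings.HasPrefix(line, "- [x] CHK") &&
			!strings.HasPrefix(line, "- [X] CHK") {
			continue
		}
		items++
		for _, m := range markers {
			if strings.Contains(line, m) {
				traced++
				break
			}
		}
	}
	if items > 0 {
		fmt.Printf("traceability: %d/%d (%.0f%%)\n", traced, items, 100*float64(traced)/float64(items))
	}
}
```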
**Surface & Resolve Issues** (Requirements Quality Problems):
Ask questions about the requirements themselves:
- Ambiguities: "Is the term 'fast' quantified with specific metrics? [Ambiguity, Spec §NFR-1]"
- Conflicts: "Do navigation requirements conflict between §FR-10 and §FR-10a? [Conflict]"
- Assumptions: "Is the assumption of 'always available podcast API' validated? [Assumption]"
- Dependencies: "Are external podcast API requirements documented? [Dependency, Gap]"
- Missing definitions: "Is 'visual hierarchy' defined with measurable criteria? [Gap]"
**Content Consolidation**:
- Soft cap: If raw candidate items > 40, prioritize by risk/impact
- Merge near-duplicates checking the same requirement aspect
- If >5 low-impact edge cases, create one item: "Are edge cases X, Y, Z addressed in requirements? [Coverage]"
**🚫 ABSOLUTELY PROHIBITED** - These make it an implementation test, not a requirements test:
- ❌ Any item starting with "Verify", "Test", "Confirm", "Check" + implementation behavior
- ❌ References to code execution, user actions, system behavior
- ❌ "Displays correctly", "works properly", "functions as expected"
- ❌ "Click", "navigate", "render", "load", "execute"
- ❌ Test cases, test plans, QA procedures
- ❌ Implementation details (frameworks, APIs, algorithms)
**✅ REQUIRED PATTERNS** - These test requirements quality:
- ✅ "Are [requirement type] defined/specified/documented for [scenario]?"
- ✅ "Is [vague term] quantified/clarified with specific criteria?"
- ✅ "Are requirements consistent between [section A] and [section B]?"
- ✅ "Can [requirement] be objectively measured/verified?"
- ✅ "Are [edge cases/scenarios] addressed in requirements?"
- ✅ "Does the spec define [missing aspect]?"
6. **Structure Reference**: Generate the checklist following the canonical template in `.specify/templates/checklist-template.md` for title, meta section, category headings, and ID formatting. If template is unavailable, use: H1 title, purpose/created meta lines, `##` category sections containing `- [ ] CHK### <requirement item>` lines with globally incrementing IDs starting at CHK001.
7. **Report**: Output full path to created checklist, item count, and remind user that each run creates a new file. Summarize:
- Focus areas selected
- Depth level
- Actor/timing
- Any explicit user-specified must-have items incorporated
**Important**: Each `/speckit.checklist` command invocation creates a checklist file using short, descriptive names unless file already exists. This allows:
- Multiple checklists of different types (e.g., `ux.md`, `test.md`, `security.md`)
- Simple, memorable filenames that indicate checklist purpose
- Easy identification and navigation in the `checklists/` folder
To avoid clutter, use descriptive types and clean up obsolete checklists when done.
## Example Checklist Types & Sample Items
**UX Requirements Quality:** `ux.md`
Sample items (testing the requirements, NOT the implementation):
- "Are visual hierarchy requirements defined with measurable criteria? [Clarity, Spec §FR-1]"
- "Is the number and positioning of UI elements explicitly specified? [Completeness, Spec §FR-1]"
- "Are interaction state requirements (hover, focus, active) consistently defined? [Consistency]"
- "Are accessibility requirements specified for all interactive elements? [Coverage, Gap]"
- "Is fallback behavior defined when images fail to load? [Edge Case, Gap]"
- "Can 'prominent display' be objectively measured? [Measurability, Spec §FR-4]"
**API Requirements Quality:** `api.md`
Sample items:
- "Are error response formats specified for all failure scenarios? [Completeness]"
- "Are rate limiting requirements quantified with specific thresholds? [Clarity]"
- "Are authentication requirements consistent across all endpoints? [Consistency]"
- "Are retry/timeout requirements defined for external dependencies? [Coverage, Gap]"
- "Is versioning strategy documented in requirements? [Gap]"
**Performance Requirements Quality:** `performance.md`
Sample items:
- "Are performance requirements quantified with specific metrics? [Clarity]"
- "Are performance targets defined for all critical user journeys? [Coverage]"
- "Are performance requirements under different load conditions specified? [Completeness]"
- "Can performance requirements be objectively measured? [Measurability]"
- "Are degradation requirements defined for high-load scenarios? [Edge Case, Gap]"
**Security Requirements Quality:** `security.md`
Sample items:
- "Are authentication requirements specified for all protected resources? [Coverage]"
- "Are data protection requirements defined for sensitive information? [Completeness]"
- "Is the threat model documented and requirements aligned to it? [Traceability]"
- "Are security requirements consistent with compliance obligations? [Consistency]"
- "Are security failure/breach response requirements defined? [Gap, Exception Flow]"
## Anti-Examples: What NOT To Do
**❌ WRONG - These test implementation, not requirements:**
```markdown
- [ ] CHK001 - Verify landing page displays 3 episode cards [Spec §FR-001]
- [ ] CHK002 - Test hover states work correctly on desktop [Spec §FR-003]
- [ ] CHK003 - Confirm logo click navigates to home page [Spec §FR-010]
- [ ] CHK004 - Check that related episodes section shows 3-5 items [Spec §FR-005]
```
**✅ CORRECT - These test requirements quality:**
```markdown
- [ ] CHK001 - Are the number and layout of featured episodes explicitly specified? [Completeness, Spec §FR-001]
- [ ] CHK002 - Are hover state requirements consistently defined for all interactive elements? [Consistency, Spec §FR-003]
- [ ] CHK003 - Are navigation requirements clear for all clickable brand elements? [Clarity, Spec §FR-010]
- [ ] CHK004 - Is the selection criteria for related episodes documented? [Gap, Spec §FR-005]
- [ ] CHK005 - Are loading state requirements defined for asynchronous episode data? [Gap]
- [ ] CHK006 - Can "visual hierarchy" requirements be objectively measured? [Measurability, Spec §FR-001]
```
**Key Differences:**
- Wrong: Tests if the system works correctly
- Correct: Tests if the requirements are written correctly
- Wrong: Verification of behavior
- Correct: Validation of requirement quality
- Wrong: "Does it do X?"
- Correct: "Is X clearly specified?"


@@ -1,177 +0,0 @@
---
description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Outline
Goal: Detect and reduce ambiguity or missing decision points in the active feature specification and record the clarifications directly in the spec file.
Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/speckit.plan`. If the user explicitly states they are skipping clarification (e.g., exploratory spike), you may proceed, but must warn that downstream rework risk increases.
Execution steps:
1. Run `.specify/scripts/bash/check-prerequisites.sh --json --paths-only` from repo root **once** (combined `--json --paths-only` mode / `-Json -PathsOnly`). Parse minimal JSON payload fields:
- `FEATURE_DIR`
- `FEATURE_SPEC`
- (Optionally capture `IMPL_PLAN`, `TASKS` for future chained flows.)
- If JSON parsing fails, abort and instruct user to re-run `/speckit.specify` or verify feature branch environment.
- For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
2. Load the current spec file. Perform a structured ambiguity & coverage scan using this taxonomy. For each category, mark status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output raw map unless no questions will be asked).
Functional Scope & Behavior:
- Core user goals & success criteria
- Explicit out-of-scope declarations
- User roles / personas differentiation
Domain & Data Model:
- Entities, attributes, relationships
- Identity & uniqueness rules
- Lifecycle/state transitions
- Data volume / scale assumptions
Interaction & UX Flow:
- Critical user journeys / sequences
- Error/empty/loading states
- Accessibility or localization notes
Non-Functional Quality Attributes:
- Performance (latency, throughput targets)
- Scalability (horizontal/vertical, limits)
- Reliability & availability (uptime, recovery expectations)
- Observability (logging, metrics, tracing signals)
- Security & privacy (authN/Z, data protection, threat assumptions)
- Compliance / regulatory constraints (if any)
Integration & External Dependencies:
- External services/APIs and failure modes
- Data import/export formats
- Protocol/versioning assumptions
Edge Cases & Failure Handling:
- Negative scenarios
- Rate limiting / throttling
- Conflict resolution (e.g., concurrent edits)
Constraints & Tradeoffs:
- Technical constraints (language, storage, hosting)
- Explicit tradeoffs or rejected alternatives
Terminology & Consistency:
- Canonical glossary terms
- Avoided synonyms / deprecated terms
Completion Signals:
- Acceptance criteria testability
- Measurable Definition of Done style indicators
Misc / Placeholders:
- TODO markers / unresolved decisions
- Ambiguous adjectives ("robust", "intuitive") lacking quantification
For each category with Partial or Missing status, add a candidate question opportunity unless:
- Clarification would not materially change implementation or validation strategy
- Information is better deferred to planning phase (note internally)
3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
- Maximum of 5 total questions across the whole session.
- Each question must be answerable with EITHER:
- A short multiple-choice selection (2–5 distinct, mutually exclusive options), OR
- A one-word / short-phrase answer (explicitly constrain: "Answer in <=5 words").
- Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
- Ensure category coverage balance: attempt to cover the highest impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
- Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness).
- Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
- If more than 5 categories remain unresolved, select the top 5 by (Impact * Uncertainty) heuristic.
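A sketch of the (Impact * Uncertainty) selection; the 0..1 scoring scale is an assumption, not part of the command:

```go
package main

import (
	"fmt"
	"sort"
)

type category struct {
	Name        string
	Impact      float64 // 0..1: how much the answer changes design/validation
	Uncertainty float64 // 0..1: how unresolved the spec currently is
}

// topN keeps the highest Impact*Uncertainty categories, at most n.
func topN(cats []category, n int) []category {
	sort.Slice(cats, func(i, j int) bool {
		return cats[i].Impact*cats[i].Uncertainty > cats[j].Impact*cats[j].Uncertainty
	})
	if len(cats) > n {
		cats = cats[:n]
	}
	return cats
}

func main() {
	unresolved := []category{
		{"Security & privacy", 0.9, 0.8},
		{"Interaction & UX Flow", 0.6, 0.4},
		{"Domain & Data Model", 0.8, 0.7},
	}
	for _, c := range topN(unresolved, 5) {
		fmt.Println(c.Name)
	}
}
```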
4. Sequential questioning loop (interactive):
- Present EXACTLY ONE question at a time.
- For multiple-choice questions:
- **Analyze all options** and determine the **most suitable option** based on:
- Best practices for the project type
- Common patterns in similar implementations
- Risk reduction (security, performance, maintainability)
- Alignment with any explicit project goals or constraints visible in the spec
- Present your **recommended option prominently** at the top with clear reasoning (1-2 sentences explaining why this is the best choice).
- Format as: `**Recommended:** Option [X] - <reasoning>`
- Then render all options as a Markdown table:
| Option | Description |
|--------|-------------|
| A | <Option A description> |
| B | <Option B description> |
| C | <Option C description> (add D/E as needed up to 5) |
| Short | Provide a different short answer (<=5 words) (Include only if free-form alternative is appropriate) |
- After the table, add: `You can reply with the option letter (e.g., "A"), accept the recommendation by saying "yes" or "recommended", or provide your own short answer.`
- For short-answer style (no meaningful discrete options):
- Provide your **suggested answer** based on best practices and context.
- Format as: `**Suggested:** <your proposed answer> - <brief reasoning>`
- Then output: `Format: Short answer (<=5 words). You can accept the suggestion by saying "yes" or "suggested", or provide your own answer.`
- After the user answers:
- If the user replies with "yes", "recommended", or "suggested", use your previously stated recommendation/suggestion as the answer.
- Otherwise, validate the answer maps to one option or fits the <=5 word constraint.
- If ambiguous, ask for a quick disambiguation (count still belongs to same question; do not advance).
- Once satisfactory, record it in working memory (do not yet write to disk) and move to the next queued question.
- Stop asking further questions when:
- All critical ambiguities resolved early (remaining queued items become unnecessary), OR
- User signals completion ("done", "good", "no more"), OR
- You reach 5 asked questions.
- Never reveal future queued questions in advance.
- If no valid questions exist at start, immediately report no critical ambiguities.
5. Integration after EACH accepted answer (incremental update approach):
- Maintain in-memory representation of the spec (loaded once at start) plus the raw file contents.
- For the first integrated answer in this session:
- Ensure a `## Clarifications` section exists (create it just after the highest-level contextual/overview section per the spec template if missing).
- Under it, create (if not present) a `### Session YYYY-MM-DD` subheading for today.
- Append a bullet line immediately after acceptance: `- Q: <question> → A: <final answer>`.
- Then immediately apply the clarification to the most appropriate section(s):
- Functional ambiguity → Update or add a bullet in Functional Requirements.
- User interaction / actor distinction → Update User Stories or Actors subsection (if present) with clarified role, constraint, or scenario.
- Data shape / entities → Update Data Model (add fields, types, relationships) preserving ordering; note added constraints succinctly.
- Non-functional constraint → Add/modify measurable criteria in Non-Functional / Quality Attributes section (convert vague adjective to metric or explicit target).
- Edge case / negative flow → Add a new bullet under Edge Cases / Error Handling (or create such subsection if template provides placeholder for it).
- Terminology conflict → Normalize term across spec; retain original only if necessary by adding `(formerly referred to as "X")` once.
- If the clarification invalidates an earlier ambiguous statement, replace that statement instead of duplicating; leave no obsolete contradictory text.
- Save the spec file AFTER each integration to minimize risk of context loss (atomic overwrite).
- Preserve formatting: do not reorder unrelated sections; keep heading hierarchy intact.
- Keep each inserted clarification minimal and testable (avoid narrative drift).
6. Validation (performed after EACH write plus final pass):
- Clarifications session contains exactly one bullet per accepted answer (no duplicates).
- Total asked (accepted) questions ≤ 5.
- Updated sections contain no lingering vague placeholders the new answer was meant to resolve.
- No contradictory earlier statement remains (scan for now-invalid alternative choices removed).
- Markdown structure valid; only allowed new headings: `## Clarifications`, `### Session YYYY-MM-DD`.
- Terminology consistency: same canonical term used across all updated sections.
7. Write the updated spec back to `FEATURE_SPEC`.
8. Report completion (after questioning loop ends or early termination):
- Number of questions asked & answered.
- Path to updated spec.
- Sections touched (list names).
- Coverage summary table listing each taxonomy category with Status: Resolved (was Partial/Missing and addressed), Deferred (exceeds question quota or better suited for planning), Clear (already sufficient), Outstanding (still Partial/Missing but low impact).
- If any Outstanding or Deferred remain, recommend whether to proceed to `/speckit.plan` or run `/speckit.clarify` again later post-plan.
- Suggested next command.
Behavior rules:
- If no meaningful ambiguities found (or all potential questions would be low-impact), respond: "No critical ambiguities detected worth formal clarification." and suggest proceeding.
- If spec file missing, instruct user to run `/speckit.specify` first (do not create a new spec here).
- Never exceed 5 total asked questions (clarification retries for a single question do not count as new questions).
- Avoid speculative tech stack questions unless the absence blocks functional clarity.
- Respect user early termination signals ("stop", "done", "proceed").
- If no questions asked due to full coverage, output a compact coverage summary (all categories Clear) then suggest advancing.
- If quota reached with unresolved high-impact categories remaining, explicitly flag them under Deferred with rationale.
Context for prioritization: $ARGUMENTS


@@ -1,78 +0,0 @@
---
description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Outline
You are updating the project constitution at `.specify/memory/constitution.md`. This file is a TEMPLATE containing placeholder tokens in square brackets (e.g. `[PROJECT_NAME]`, `[PRINCIPLE_1_NAME]`). Your job is to (a) collect/derive concrete values, (b) fill the template precisely, and (c) propagate any amendments across dependent artifacts.
Follow this execution flow:
1. Load the existing constitution template at `.specify/memory/constitution.md`.
- Identify every placeholder token of the form `[ALL_CAPS_IDENTIFIER]`.
**IMPORTANT**: The user might require less or more principles than the ones used in the template. If a number is specified, respect that - follow the general template. You will update the doc accordingly.
2. Collect/derive values for placeholders:
- If user input (conversation) supplies a value, use it.
- Otherwise infer from existing repo context (README, docs, prior constitution versions if embedded).
- For governance dates: `RATIFICATION_DATE` is the original adoption date (if unknown ask or mark TODO), `LAST_AMENDED_DATE` is today if changes are made, otherwise keep previous.
- `CONSTITUTION_VERSION` must increment according to semantic versioning rules:
- MAJOR: Backward incompatible governance/principle removals or redefinitions.
- MINOR: New principle/section added or materially expanded guidance.
- PATCH: Clarifications, wording, typo fixes, non-semantic refinements.
- If version bump type ambiguous, propose reasoning before finalizing.
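The semantic-versioning rule above, encoded as a sketch; classifying the change kind still requires editorial judgment:

```go
package main

import "fmt"

// bump applies the MAJOR/MINOR/PATCH rules from the list above.
func bump(major, minor, patch int, kind string) (int, int, int) {
	switch kind {
	case "major": // incompatible governance/principle removals or redefinitions
		return major + 1, 0, 0
	case "minor": // new principle/section or materially expanded guidance
		return major, minor + 1, 0
	default: // clarifications, wording, typo fixes
		return major, minor, patch + 1
	}
}

func main() {
	fmt.Println(bump(2, 1, 3, "minor")) // 2 2 0
}
```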
3. Draft the updated constitution content:
- Replace every placeholder with concrete text (no bracketed tokens left except intentionally retained template slots that the project has chosen not to define yet—explicitly justify any left).
- Preserve heading hierarchy; comments can be removed once replaced unless they still add clarifying guidance.
- Ensure each Principle section: succinct name line, paragraph (or bullet list) capturing non-negotiable rules, explicit rationale if not obvious.
- Ensure Governance section lists amendment procedure, versioning policy, and compliance review expectations.
4. Consistency propagation checklist (convert prior checklist into active validations):
- Read `.specify/templates/plan-template.md` and ensure any "Constitution Check" or rules align with updated principles.
- Read `.specify/templates/spec-template.md` for scope/requirements alignment—update if constitution adds/removes mandatory sections or constraints.
- Read `.specify/templates/tasks-template.md` and ensure task categorization reflects new or removed principle-driven task types (e.g., observability, versioning, testing discipline).
- Read each command file in `.specify/templates/commands/*.md` (including this one) to verify no outdated references (agent-specific names like CLAUDE only) remain when generic guidance is required.
- Read any runtime guidance docs (e.g., `README.md`, `docs/quickstart.md`, or agent-specific guidance files if present). Update references to principles changed.
5. Produce a Sync Impact Report (prepend as an HTML comment at top of the constitution file after update):
- Version change: old → new
- List of modified principles (old title → new title if renamed)
- Added sections
- Removed sections
- Templates requiring updates (✅ updated / ⚠ pending) with file paths
- Follow-up TODOs if any placeholders intentionally deferred.
6. Validation before final output:
- No remaining unexplained bracket tokens.
- Version line matches report.
- Dates ISO format YYYY-MM-DD.
- Principles are declarative, testable, and free of vague language ("should" → replace with MUST/SHOULD rationale where appropriate).
7. Write the completed constitution back to `.specify/memory/constitution.md` (overwrite).
8. Output a final summary to the user with:
- New version and bump rationale.
- Any files flagged for manual follow-up.
- Suggested commit message (e.g., `docs: amend constitution to vX.Y.Z (principle additions + governance update)`).
Formatting & Style Requirements:
- Use Markdown headings exactly as in the template (do not demote/promote levels).
- Wrap long rationale lines to keep readability (<100 chars ideally) but do not hard enforce with awkward breaks.
- Keep a single blank line between sections.
- Avoid trailing whitespace.
If the user supplies partial updates (e.g., only one principle revision), still perform validation and version decision steps.
If critical info missing (e.g., ratification date truly unknown), insert `TODO(<FIELD_NAME>): explanation` and include in the Sync Impact Report under deferred items.
Do not create a new template; always operate on the existing `.specify/memory/constitution.md` file.


@@ -1,134 +0,0 @@
---
description: Execute the implementation plan by processing and executing all tasks defined in tasks.md
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Outline
1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
2. **Check checklists status** (if FEATURE_DIR/checklists/ exists):
- Scan all checklist files in the checklists/ directory
- For each checklist, count:
- Total items: All lines matching `- [ ]` or `- [X]` or `- [x]`
- Completed items: Lines matching `- [X]` or `- [x]`
- Incomplete items: Lines matching `- [ ]`
- Create a status table:
```text
| Checklist | Total | Completed | Incomplete | Status |
|-----------|-------|-----------|------------|--------|
| ux.md | 12 | 12 | 0 | ✓ PASS |
| test.md | 8 | 5 | 3 | ✗ FAIL |
| security.md | 6 | 6 | 0 | ✓ PASS |
```
- Calculate overall status:
- **PASS**: All checklists have 0 incomplete items
- **FAIL**: One or more checklists have incomplete items
- **If any checklist is incomplete**:
- Display the table with incomplete item counts
- **STOP** and ask: "Some checklists are incomplete. Do you want to proceed with implementation anyway? (yes/no)"
- Wait for user response before continuing
- If user says "no" or "wait" or "stop", halt execution
- If user says "yes" or "proceed" or "continue", proceed to step 3
- **If all checklists are complete**:
- Display the table showing all checklists passed
- Automatically proceed to step 3
3. Load and analyze the implementation context:
- **REQUIRED**: Read tasks.md for the complete task list and execution plan
- **REQUIRED**: Read plan.md for tech stack, architecture, and file structure
- **IF EXISTS**: Read data-model.md for entities and relationships
- **IF EXISTS**: Read contracts/ for API specifications and test requirements
- **IF EXISTS**: Read research.md for technical decisions and constraints
- **IF EXISTS**: Read quickstart.md for integration scenarios
4. **Project Setup Verification**:
- **REQUIRED**: Create/verify ignore files based on actual project setup:
**Detection & Creation Logic**:
- Check if the following command succeeds to determine if the repository is a git repo (create/verify .gitignore if so):
```sh
git rev-parse --git-dir 2>/dev/null
```
- Check if Dockerfile* exists or Docker in plan.md → create/verify .dockerignore
- Check if .eslintrc* or eslint.config.* exists → create/verify .eslintignore
- Check if .prettierrc* exists → create/verify .prettierignore
- Check if .npmrc or package.json exists → create/verify .npmignore (if publishing)
- Check if terraform files (*.tf) exist → create/verify .terraformignore
- Check if .helmignore needed (helm charts present) → create/verify .helmignore
**If ignore file already exists**: Verify it contains essential patterns, append missing critical patterns only
**If ignore file missing**: Create with full pattern set for detected technology
**Common Patterns by Technology** (from plan.md tech stack):
- **Node.js/JavaScript/TypeScript**: `node_modules/`, `dist/`, `build/`, `*.log`, `.env*`
- **Python**: `__pycache__/`, `*.pyc`, `.venv/`, `venv/`, `dist/`, `*.egg-info/`
- **Java**: `target/`, `*.class`, `*.jar`, `.gradle/`, `build/`
- **C#/.NET**: `bin/`, `obj/`, `*.user`, `*.suo`, `packages/`
- **Go**: `*.exe`, `*.test`, `vendor/`, `*.out`
- **Ruby**: `.bundle/`, `log/`, `tmp/`, `*.gem`, `vendor/bundle/`
- **PHP**: `vendor/`, `*.log`, `*.cache`, `*.env`
- **Rust**: `target/`, `debug/`, `release/`, `*.rs.bk`, `*.rlib`, `*.prof*`, `.idea/`, `*.log`, `.env*`
- **Kotlin**: `build/`, `out/`, `.gradle/`, `.idea/`, `*.class`, `*.jar`, `*.iml`, `*.log`, `.env*`
- **C++**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.so`, `*.a`, `*.exe`, `*.dll`, `.idea/`, `*.log`, `.env*`
- **C**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.a`, `*.so`, `*.exe`, `Makefile`, `config.log`, `.idea/`, `*.log`, `.env*`
- **Swift**: `.build/`, `DerivedData/`, `*.swiftpm/`, `Packages/`
- **R**: `.Rproj.user/`, `.Rhistory`, `.RData`, `.Ruserdata`, `*.Rproj`, `packrat/`, `renv/`
- **Universal**: `.DS_Store`, `Thumbs.db`, `*.tmp`, `*.swp`, `.vscode/`, `.idea/`
**Tool-Specific Patterns**:
- **Docker**: `node_modules/`, `.git/`, `Dockerfile*`, `.dockerignore`, `*.log*`, `.env*`, `coverage/`
- **ESLint**: `node_modules/`, `dist/`, `build/`, `coverage/`, `*.min.js`
- **Prettier**: `node_modules/`, `dist/`, `build/`, `coverage/`, `package-lock.json`, `yarn.lock`, `pnpm-lock.yaml`
- **Terraform**: `.terraform/`, `*.tfstate*`, `*.tfvars`, `.terraform.lock.hcl`
- **Kubernetes/k8s**: `*.secret.yaml`, `secrets/`, `.kube/`, `kubeconfig*`, `*.key`, `*.crt`
5. Parse tasks.md structure and extract:
- **Task phases**: Setup, Tests, Core, Integration, Polish
- **Task dependencies**: Sequential vs parallel execution rules
- **Task details**: ID, description, file paths, parallel markers [P]
- **Execution flow**: Order and dependency requirements
6. Execute implementation following the task plan:
- **Phase-by-phase execution**: Complete each phase before moving to the next
- **Respect dependencies**: Run sequential tasks in order, parallel tasks [P] can run together
- **Follow TDD approach**: Execute test tasks before their corresponding implementation tasks
- **File-based coordination**: Tasks affecting the same files must run sequentially
- **Validation checkpoints**: Verify each phase completion before proceeding
7. Implementation execution rules:
- **Setup first**: Initialize project structure, dependencies, configuration
- **Tests before code**: Write any required tests for contracts, entities, and integration scenarios before the corresponding implementation
- **Core development**: Implement models, services, CLI commands, endpoints
- **Integration work**: Database connections, middleware, logging, external services
- **Polish and validation**: Unit tests, performance optimization, documentation
8. Progress tracking and error handling:
- Report progress after each completed task
- Halt execution if any non-parallel task fails
- For parallel tasks [P], continue with successful tasks, report failed ones
- Provide clear error messages with context for debugging
- Suggest next steps if implementation cannot proceed
- **IMPORTANT** For completed tasks, make sure to mark the task off as [X] in the tasks file.
9. Completion validation:
- Verify all required tasks are completed
- Check that implemented features match the original specification
- Validate that tests pass and coverage meets requirements
- Confirm the implementation follows the technical plan
- Report final status with summary of completed work
Note: This command assumes a complete task breakdown exists in tasks.md. If tasks are incomplete or missing, suggest running `/speckit.tasks` first to regenerate the task list.


@@ -1,81 +0,0 @@
---
description: Execute the implementation planning workflow using the plan template to generate design artifacts.
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Outline
1. **Setup**: Run `.specify/scripts/bash/setup-plan.sh --json` from repo root and parse JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
2. **Load context**: Read FEATURE_SPEC and `.specify/memory/constitution.md`. Load IMPL_PLAN template (already copied).
3. **Execute plan workflow**: Follow the structure in IMPL_PLAN template to:
- Fill Technical Context (mark unknowns as "NEEDS CLARIFICATION")
- Fill Constitution Check section from constitution
- Evaluate gates (ERROR if violations unjustified)
- Phase 0: Generate research.md (resolve all NEEDS CLARIFICATION)
- Phase 1: Generate data-model.md, contracts/, quickstart.md
- Phase 1: Update agent context by running the agent script
- Re-evaluate Constitution Check post-design
4. **Stop and report**: Command ends after Phase 2 planning. Report branch, IMPL_PLAN path, and generated artifacts.
## Phases
### Phase 0: Outline & Research
1. **Extract unknowns from Technical Context** above:
- For each NEEDS CLARIFICATION → research task
- For each dependency → best practices task
- For each integration → patterns task
2. **Generate and dispatch research agents**:
```text
For each unknown in Technical Context:
Task: "Research {unknown} for {feature context}"
For each technology choice:
Task: "Find best practices for {tech} in {domain}"
```
3. **Consolidate findings** in `research.md` using format:
- Decision: [what was chosen]
- Rationale: [why chosen]
- Alternatives considered: [what else evaluated]
**Output**: research.md with all NEEDS CLARIFICATION resolved
### Phase 1: Design & Contracts
**Prerequisites:** `research.md` complete
1. **Extract entities from feature spec** → `data-model.md`:
- Entity name, fields, relationships
- Validation rules from requirements
- State transitions if applicable
2. **Generate API contracts** from functional requirements:
- For each user action → endpoint
- Use standard REST/GraphQL patterns
- Output OpenAPI/GraphQL schema to `/contracts/`
3. **Agent context update**:
- Run `.specify/scripts/bash/update-agent-context.sh claude`
- These scripts detect which AI agent is in use
- Update the appropriate agent-specific context file
- Add only new technology from current plan
- Preserve manual additions between markers
**Output**: data-model.md, /contracts/*, quickstart.md, agent-specific file
## Key rules
- Use absolute paths
- ERROR on gate failures or unresolved clarifications


@@ -1,249 +0,0 @@
---
description: Create or update the feature specification from a natural language feature description.
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Outline
The text the user typed after `/speckit.specify` in the triggering message **is** the feature description. Assume you always have it available in this conversation even if `$ARGUMENTS` appears literally below. Do not ask the user to repeat it unless they provided an empty command.
Given that feature description, do this:
1. **Generate a concise short name** (2-4 words) for the branch:
- Analyze the feature description and extract the most meaningful keywords
- Create a 2-4 word short name that captures the essence of the feature
- Use action-noun format when possible (e.g., "add-user-auth", "fix-payment-bug")
- Preserve technical terms and acronyms (OAuth2, API, JWT, etc.)
- Keep it concise but descriptive enough to understand the feature at a glance
- Examples:
- "I want to add user authentication" → "user-auth"
- "Implement OAuth2 integration for the API" → "oauth2-api-integration"
- "Create a dashboard for analytics" → "analytics-dashboard"
- "Fix payment processing timeout bug" → "fix-payment-timeout"
2. **Check for existing branches before creating new one**:
a. First, fetch all remote branches to ensure we have the latest information:
```bash
git fetch --all --prune
```
b. Find the highest feature number across all sources for the short-name:
- Remote branches: `git ls-remote --heads origin | grep -E 'refs/heads/[0-9]+-<short-name>$'`
- Local branches: `git branch | grep -E '^[* ]*[0-9]+-<short-name>$'`
- Specs directories: Check for directories matching `specs/[0-9]+-<short-name>`
c. Determine the next available number:
- Extract all numbers from all three sources
- Find the highest number N
- Use N+1 for the new branch number
d. Run the script `.specify/scripts/bash/create-new-feature.sh --json "$ARGUMENTS"` with the calculated number and short-name:
- Pass `--number N+1` and `--short-name "your-short-name"` along with the feature description
- Bash example: `.specify/scripts/bash/create-new-feature.sh --json --number 5 --short-name "user-auth" "Add user authentication"`
- PowerShell example: `.specify/scripts/powershell/create-new-feature.ps1 -Json -Number 5 -ShortName "user-auth" "Add user authentication"`
**IMPORTANT**:
- Check all three sources (remote branches, local branches, specs directories) to find the highest number
- Only match branches/directories with the exact short-name pattern
- If no existing branches/directories found with this short-name, start with number 1
- You must only ever run this script once per feature
- The JSON is provided in the terminal as output - always refer to it to get the actual content you're looking for
- The JSON output will contain BRANCH_NAME and SPEC_FILE paths
- For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot")
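A sketch of the next-number rule in step 2b–c; the name list stands in for the branch names and specs/ directories collected above:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// nextFeatureNumber finds the highest N among names matching
// "<number>-<short-name>" and returns N+1 (1 when nothing matches).
func nextFeatureNumber(names []string, shortName string) int {
	re := regexp.MustCompile(`^(\d+)-` + regexp.QuoteMeta(shortName) + `$`)
	highest := 0
	for _, n := range names {
		if m := re.FindStringSubmatch(n); m != nil {
			if v, _ := strconv.Atoi(m[1]); v > highest {
				highest = v
			}
		}
	}
	return highest + 1
}

func main() {
	names := []string{"3-user-auth", "5-user-auth", "2-analytics-dashboard"}
	fmt.Println(nextFeatureNumber(names, "user-auth")) // 6
}
```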
3. Load `.specify/templates/spec-template.md` to understand required sections.
4. Follow this execution flow:
1. Parse user description from Input
If empty: ERROR "No feature description provided"
2. Extract key concepts from description
Identify: actors, actions, data, constraints
3. For unclear aspects:
- Make informed guesses based on context and industry standards
- Only mark with [NEEDS CLARIFICATION: specific question] if:
- The choice significantly impacts feature scope or user experience
- Multiple reasonable interpretations exist with different implications
- No reasonable default exists
- **LIMIT: Maximum 3 [NEEDS CLARIFICATION] markers total**
- Prioritize clarifications by impact: scope > security/privacy > user experience > technical details
4. Fill User Scenarios & Testing section
If no clear user flow: ERROR "Cannot determine user scenarios"
5. Generate Functional Requirements
Each requirement must be testable
Use reasonable defaults for unspecified details (document assumptions in Assumptions section)
6. Define Success Criteria
Create measurable, technology-agnostic outcomes
Include both quantitative metrics (time, performance, volume) and qualitative measures (user satisfaction, task completion)
Each criterion must be verifiable without implementation details
7. Identify Key Entities (if data involved)
8. Return: SUCCESS (spec ready for planning)
5. Write the specification to SPEC_FILE using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings.
6. **Specification Quality Validation**: After writing the initial spec, validate it against quality criteria:
a. **Create Spec Quality Checklist**: Generate a checklist file at `FEATURE_DIR/checklists/requirements.md` using the checklist template structure with these validation items:
```markdown
# Specification Quality Checklist: [FEATURE NAME]
**Purpose**: Validate specification completeness and quality before proceeding to planning
**Created**: [DATE]
**Feature**: [Link to spec.md]
## Content Quality
- [ ] No implementation details (languages, frameworks, APIs)
- [ ] Focused on user value and business needs
- [ ] Written for non-technical stakeholders
- [ ] All mandatory sections completed
## Requirement Completeness
- [ ] No [NEEDS CLARIFICATION] markers remain
- [ ] Requirements are testable and unambiguous
- [ ] Success criteria are measurable
- [ ] Success criteria are technology-agnostic (no implementation details)
- [ ] All acceptance scenarios are defined
- [ ] Edge cases are identified
- [ ] Scope is clearly bounded
- [ ] Dependencies and assumptions identified
## Feature Readiness
- [ ] All functional requirements have clear acceptance criteria
- [ ] User scenarios cover primary flows
- [ ] Feature meets measurable outcomes defined in Success Criteria
- [ ] No implementation details leak into specification
## Notes
- Items marked incomplete require spec updates before `/speckit.clarify` or `/speckit.plan`
```
b. **Run Validation Check**: Review the spec against each checklist item:
- For each item, determine if it passes or fails
- Document specific issues found (quote relevant spec sections)
c. **Handle Validation Results**:
- **If all items pass**: Mark checklist complete and proceed to step 7
- **If items fail (excluding [NEEDS CLARIFICATION])**:
1. List the failing items and specific issues
2. Update the spec to address each issue
3. Re-run validation until all items pass (max 3 iterations)
4. If still failing after 3 iterations, document remaining issues in checklist notes and warn user
- **If [NEEDS CLARIFICATION] markers remain**:
1. Extract all [NEEDS CLARIFICATION: ...] markers from the spec
2. **LIMIT CHECK**: If more than 3 markers exist, keep only the 3 most critical (by scope/security/UX impact) and make informed guesses for the rest
3. For each clarification needed (max 3), present options to user in this format:
```markdown
## Question [N]: [Topic]
**Context**: [Quote relevant spec section]
**What we need to know**: [Specific question from NEEDS CLARIFICATION marker]
**Suggested Answers**:
| Option | Answer | Implications |
|--------|--------|--------------|
| A | [First suggested answer] | [What this means for the feature] |
| B | [Second suggested answer] | [What this means for the feature] |
| C | [Third suggested answer] | [What this means for the feature] |
| Custom | Provide your own answer | [Explain how to provide custom input] |
**Your choice**: _[Wait for user response]_
```
4. **CRITICAL - Table Formatting**: Ensure markdown tables are properly formatted:
- Use consistent spacing with pipes aligned
- Each cell should have spaces around content: `| Content |` not `|Content|`
- Header separator must have at least 3 dashes: `|--------|`
- Test that the table renders correctly in markdown preview
5. Number questions sequentially (Q1, Q2, Q3 - max 3 total)
6. Present all questions together before waiting for responses
7. Wait for user to respond with their choices for all questions (e.g., "Q1: A, Q2: Custom - [details], Q3: B")
8. Update the spec by replacing each [NEEDS CLARIFICATION] marker with the user's selected or provided answer
9. Re-run validation after all clarifications are resolved
d. **Update Checklist**: After each validation iteration, update the checklist file with current pass/fail status
7. Report completion with branch name, spec file path, checklist results, and readiness for the next phase (`/speckit.clarify` or `/speckit.plan`).
**NOTE:** The script creates and checks out the new branch and initializes the spec file before writing.
## General Guidelines
- Focus on **WHAT** users need and **WHY**.
- Avoid HOW to implement (no tech stack, APIs, code structure).
- Written for business stakeholders, not developers.
- DO NOT create any checklists that are embedded in the spec. That will be a separate command.
### Section Requirements
- **Mandatory sections**: Must be completed for every feature
- **Optional sections**: Include only when relevant to the feature
- When a section doesn't apply, remove it entirely (don't leave as "N/A")
### For AI Generation
When creating this spec from a user prompt:
1. **Make informed guesses**: Use context, industry standards, and common patterns to fill gaps
2. **Document assumptions**: Record reasonable defaults in the Assumptions section
3. **Limit clarifications**: Maximum 3 [NEEDS CLARIFICATION] markers - use only for critical decisions that:
- Significantly impact feature scope or user experience
- Have multiple reasonable interpretations with different implications
- Lack any reasonable default
4. **Prioritize clarifications**: scope > security/privacy > user experience > technical details
5. **Think like a tester**: Every vague requirement should fail the "testable and unambiguous" checklist item
6. **Common areas needing clarification** (only if no reasonable default exists):
- Feature scope and boundaries (include/exclude specific use cases)
- User types and permissions (if multiple conflicting interpretations possible)
- Security/compliance requirements (when legally/financially significant)
**Examples of reasonable defaults** (don't ask about these):
- Data retention: Industry-standard practices for the domain
- Performance targets: Standard web/mobile app expectations unless specified
- Error handling: User-friendly messages with appropriate fallbacks
- Authentication method: Standard session-based or OAuth2 for web apps
- Integration patterns: RESTful APIs unless specified otherwise
### Success Criteria Guidelines
Success criteria must be:
1. **Measurable**: Include specific metrics (time, percentage, count, rate)
2. **Technology-agnostic**: No mention of frameworks, languages, databases, or tools
3. **User-focused**: Describe outcomes from user/business perspective, not system internals
4. **Verifiable**: Can be tested/validated without knowing implementation details
**Good examples**:
- "Users can complete checkout in under 3 minutes"
- "System supports 10,000 concurrent users"
- "95% of searches return results in under 1 second"
- "Task completion rate improves by 40%"
**Bad examples** (implementation-focused):
- "API response time is under 200ms" (too technical, use "Users see results instantly")
- "Database can handle 1000 TPS" (implementation detail, use user-facing metric)
- "React components render efficiently" (framework-specific)
- "Redis cache hit rate above 80%" (technology-specific)


@@ -1,128 +0,0 @@
---
description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Outline
1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
2. **Load design documents**: Read from FEATURE_DIR:
- **Required**: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities)
- **Optional**: data-model.md (entities), contracts/ (API endpoints), research.md (decisions), quickstart.md (test scenarios)
- Note: Not all projects have all documents. Generate tasks based on what's available.
3. **Execute task generation workflow**:
- Load plan.md and extract tech stack, libraries, project structure
- Load spec.md and extract user stories with their priorities (P1, P2, P3, etc.)
- If data-model.md exists: Extract entities and map to user stories
- If contracts/ exists: Map endpoints to user stories
- If research.md exists: Extract decisions for setup tasks
- Generate tasks organized by user story (see Task Generation Rules below)
- Generate dependency graph showing user story completion order
- Create parallel execution examples per user story
- Validate task completeness (each user story has all needed tasks, independently testable)
4. **Generate tasks.md**: Use `.specify/templates/tasks-template.md` as structure, fill with:
- Correct feature name from plan.md
- Phase 1: Setup tasks (project initialization)
- Phase 2: Foundational tasks (blocking prerequisites for all user stories)
- Phase 3+: One phase per user story (in priority order from spec.md)
- Each phase includes: story goal, independent test criteria, tests (if requested), implementation tasks
- Final Phase: Polish & cross-cutting concerns
- All tasks must follow the strict checklist format (see Task Generation Rules below)
- Clear file paths for each task
- Dependencies section showing story completion order
- Parallel execution examples per story
- Implementation strategy section (MVP first, incremental delivery)
5. **Report**: Output path to generated tasks.md and summary:
- Total task count
- Task count per user story
- Parallel opportunities identified
- Independent test criteria for each story
- Suggested MVP scope (typically just User Story 1)
- Format validation: Confirm ALL tasks follow the checklist format (checkbox, ID, labels, file paths)
Context for task generation: $ARGUMENTS
The tasks.md should be immediately executable - each task must be specific enough that an LLM can complete it without additional context.
## Task Generation Rules
**CRITICAL**: Tasks MUST be organized by user story to enable independent implementation and testing.
**Tests are OPTIONAL**: Only generate test tasks if explicitly requested in the feature specification or if user requests TDD approach.
### Checklist Format (REQUIRED)
Every task MUST strictly follow this format:
```text
- [ ] [TaskID] [P?] [Story?] Description with file path
```
**Format Components**:
1. **Checkbox**: ALWAYS start with `- [ ]` (markdown checkbox)
2. **Task ID**: Sequential number (T001, T002, T003...) in execution order
3. **[P] marker**: Include ONLY if task is parallelizable (different files, no dependencies on incomplete tasks)
4. **[Story] label**: REQUIRED for user story phase tasks only
- Format: [US1], [US2], [US3], etc. (maps to user stories from spec.md)
- Setup phase: NO story label
- Foundational phase: NO story label
- User Story phases: MUST have story label
- Polish phase: NO story label
5. **Description**: Clear action with exact file path
**Examples**:
- ✅ CORRECT: `- [ ] T001 Create project structure per implementation plan`
- ✅ CORRECT: `- [ ] T005 [P] Implement authentication middleware in src/middleware/auth.py`
- ✅ CORRECT: `- [ ] T012 [P] [US1] Create User model in src/models/user.py`
- ✅ CORRECT: `- [ ] T014 [US1] Implement UserService in src/services/user_service.py`
- ❌ WRONG: `- [ ] Create User model` (missing ID and Story label)
- ❌ WRONG: `T001 [US1] Create model` (missing checkbox)
- ❌ WRONG: `- [ ] [US1] Create User model` (missing Task ID)
- ❌ WRONG: `- [ ] T001 [US1] Create model` (missing file path)
### Task Organization
1. **From User Stories (spec.md)** - PRIMARY ORGANIZATION:
- Each user story (P1, P2, P3...) gets its own phase
- Map all related components to their story:
- Models needed for that story
- Services needed for that story
- Endpoints/UI needed for that story
- If tests requested: Tests specific to that story
- Mark story dependencies (most stories should be independent)
2. **From Contracts**:
- Map each contract/endpoint → to the user story it serves
- If tests requested: Each contract → contract test task [P] before implementation in that story's phase
3. **From Data Model**:
- Map each entity to the user story(ies) that need it
- If entity serves multiple stories: Put in earliest story or Setup phase
- Relationships → service layer tasks in appropriate story phase
4. **From Setup/Infrastructure**:
- Shared infrastructure → Setup phase (Phase 1)
- Foundational/blocking tasks → Foundational phase (Phase 2)
- Story-specific setup → within that story's phase
### Phase Structure
- **Phase 1**: Setup (project initialization)
- **Phase 2**: Foundational (blocking prerequisites - MUST complete before user stories)
- **Phase 3+**: User Stories in priority order (P1, P2, P3...)
- Within each story: Tests (if requested) → Models → Services → Endpoints → Integration
- Each phase should be a complete, independently testable increment
- **Final Phase**: Polish & Cross-Cutting Concerns

.gitignore

@@ -63,3 +63,7 @@ configs/local.yaml
# Build output
dist/
build/
# Auto-generated OpenAPI documentation
/openapi.yaml
.claude/settings.local.json

File diff suppressed because it is too large

@@ -1,166 +0,0 @@
#!/usr/bin/env bash
# Consolidated prerequisite checking script
#
# This script provides unified prerequisite checking for Spec-Driven Development workflow.
# It replaces the functionality previously spread across multiple scripts.
#
# Usage: ./check-prerequisites.sh [OPTIONS]
#
# OPTIONS:
# --json Output in JSON format
# --require-tasks Require tasks.md to exist (for implementation phase)
# --include-tasks Include tasks.md in AVAILABLE_DOCS list
# --paths-only Only output path variables (no validation)
# --help, -h Show help message
#
# OUTPUTS:
# JSON mode: {"FEATURE_DIR":"...", "AVAILABLE_DOCS":["..."]}
# Text mode: FEATURE_DIR:... \n AVAILABLE_DOCS: \n ✓/✗ file.md
# Paths only: REPO_ROOT: ... \n BRANCH: ... \n FEATURE_DIR: ... etc.
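#
# Example (illustrative values, not from a real run):
#   ./check-prerequisites.sh --json --include-tasks
#   {"FEATURE_DIR":"/repo/specs/001-user-auth","AVAILABLE_DOCS":["research.md","data-model.md","tasks.md"]}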
set -e
# Parse command line arguments
JSON_MODE=false
REQUIRE_TASKS=false
INCLUDE_TASKS=false
PATHS_ONLY=false
for arg in "$@"; do
case "$arg" in
--json)
JSON_MODE=true
;;
--require-tasks)
REQUIRE_TASKS=true
;;
--include-tasks)
INCLUDE_TASKS=true
;;
--paths-only)
PATHS_ONLY=true
;;
--help|-h)
cat << 'EOF'
Usage: check-prerequisites.sh [OPTIONS]
Consolidated prerequisite checking for Spec-Driven Development workflow.
OPTIONS:
--json Output in JSON format
--require-tasks Require tasks.md to exist (for implementation phase)
--include-tasks Include tasks.md in AVAILABLE_DOCS list
--paths-only Only output path variables (no prerequisite validation)
--help, -h Show this help message
EXAMPLES:
# Check task prerequisites (plan.md required)
./check-prerequisites.sh --json
# Check implementation prerequisites (plan.md + tasks.md required)
./check-prerequisites.sh --json --require-tasks --include-tasks
# Get feature paths only (no validation)
./check-prerequisites.sh --paths-only
EOF
exit 0
;;
*)
echo "ERROR: Unknown option '$arg'. Use --help for usage information." >&2
exit 1
;;
esac
done
# Source common functions
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"
# Get feature paths and validate branch
eval $(get_feature_paths)
check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1
# If paths-only mode, output paths and exit (support JSON + paths-only combined)
if $PATHS_ONLY; then
if $JSON_MODE; then
# Minimal JSON paths payload (no validation performed)
printf '{"REPO_ROOT":"%s","BRANCH":"%s","FEATURE_DIR":"%s","FEATURE_SPEC":"%s","IMPL_PLAN":"%s","TASKS":"%s"}\n' \
"$REPO_ROOT" "$CURRENT_BRANCH" "$FEATURE_DIR" "$FEATURE_SPEC" "$IMPL_PLAN" "$TASKS"
else
echo "REPO_ROOT: $REPO_ROOT"
echo "BRANCH: $CURRENT_BRANCH"
echo "FEATURE_DIR: $FEATURE_DIR"
echo "FEATURE_SPEC: $FEATURE_SPEC"
echo "IMPL_PLAN: $IMPL_PLAN"
echo "TASKS: $TASKS"
fi
exit 0
fi
# Validate required directories and files
if [[ ! -d "$FEATURE_DIR" ]]; then
echo "ERROR: Feature directory not found: $FEATURE_DIR" >&2
echo "Run /speckit.specify first to create the feature structure." >&2
exit 1
fi
if [[ ! -f "$IMPL_PLAN" ]]; then
echo "ERROR: plan.md not found in $FEATURE_DIR" >&2
echo "Run /speckit.plan first to create the implementation plan." >&2
exit 1
fi
# Check for tasks.md if required
if $REQUIRE_TASKS && [[ ! -f "$TASKS" ]]; then
echo "ERROR: tasks.md not found in $FEATURE_DIR" >&2
echo "Run /speckit.tasks first to create the task list." >&2
exit 1
fi
# Build list of available documents
docs=()
# Always check these optional docs
[[ -f "$RESEARCH" ]] && docs+=("research.md")
[[ -f "$DATA_MODEL" ]] && docs+=("data-model.md")
# Check contracts directory (only if it exists and has files)
if [[ -d "$CONTRACTS_DIR" ]] && [[ -n "$(ls -A "$CONTRACTS_DIR" 2>/dev/null)" ]]; then
docs+=("contracts/")
fi
[[ -f "$QUICKSTART" ]] && docs+=("quickstart.md")
# Include tasks.md if requested and it exists
if $INCLUDE_TASKS && [[ -f "$TASKS" ]]; then
docs+=("tasks.md")
fi
# Output results
if $JSON_MODE; then
# Build JSON array of documents
if [[ ${#docs[@]} -eq 0 ]]; then
json_docs="[]"
else
json_docs=$(printf '"%s",' "${docs[@]}")
json_docs="[${json_docs%,}]"
fi
printf '{"FEATURE_DIR":"%s","AVAILABLE_DOCS":%s}\n' "$FEATURE_DIR" "$json_docs"
else
# Text output
echo "FEATURE_DIR:$FEATURE_DIR"
echo "AVAILABLE_DOCS:"
# Show status of each potential document
check_file "$RESEARCH" "research.md"
check_file "$DATA_MODEL" "data-model.md"
check_dir "$CONTRACTS_DIR" "contracts/"
check_file "$QUICKSTART" "quickstart.md"
if $INCLUDE_TASKS; then
check_file "$TASKS" "tasks.md"
fi
fi


@@ -1,156 +0,0 @@
#!/usr/bin/env bash
# Common functions and variables for all scripts
# Get repository root, with fallback for non-git repositories
get_repo_root() {
if git rev-parse --show-toplevel >/dev/null 2>&1; then
git rev-parse --show-toplevel
else
# Fall back to script location for non-git repos
local script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
(cd "$script_dir/../../.." && pwd)
fi
}
# Get current branch, with fallback for non-git repositories
get_current_branch() {
# First check if SPECIFY_FEATURE environment variable is set
if [[ -n "${SPECIFY_FEATURE:-}" ]]; then
echo "$SPECIFY_FEATURE"
return
fi
# Then check git if available
if git rev-parse --abbrev-ref HEAD >/dev/null 2>&1; then
git rev-parse --abbrev-ref HEAD
return
fi
# For non-git repos, try to find the latest feature directory
local repo_root=$(get_repo_root)
local specs_dir="$repo_root/specs"
if [[ -d "$specs_dir" ]]; then
local latest_feature=""
local highest=0
for dir in "$specs_dir"/*; do
if [[ -d "$dir" ]]; then
local dirname=$(basename "$dir")
if [[ "$dirname" =~ ^([0-9]{3})- ]]; then
local number=${BASH_REMATCH[1]}
number=$((10#$number))
if [[ "$number" -gt "$highest" ]]; then
highest=$number
latest_feature=$dirname
fi
fi
fi
done
if [[ -n "$latest_feature" ]]; then
echo "$latest_feature"
return
fi
fi
echo "main" # Final fallback
}
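# Illustrative sketch (directory names assumed): with no git repo and specs/
# containing 001-auth and 003-payments, this echoes "003-payments".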
# Check if we have git available
has_git() {
git rev-parse --show-toplevel >/dev/null 2>&1
}
check_feature_branch() {
local branch="$1"
local has_git_repo="$2"
# For non-git repos, we can't enforce branch naming but still provide output
if [[ "$has_git_repo" != "true" ]]; then
echo "[specify] Warning: Git repository not detected; skipped branch validation" >&2
return 0
fi
if [[ ! "$branch" =~ ^[0-9]{3}- ]]; then
echo "ERROR: Not on a feature branch. Current branch: $branch" >&2
echo "Feature branches should be named like: 001-feature-name" >&2
return 1
fi
return 0
}
get_feature_dir() { echo "$1/specs/$2"; }
# Find feature directory by numeric prefix instead of exact branch match
# This allows multiple branches to work on the same spec (e.g., 004-fix-bug, 004-add-feature)
find_feature_dir_by_prefix() {
local repo_root="$1"
local branch_name="$2"
local specs_dir="$repo_root/specs"
# Extract numeric prefix from branch (e.g., "004" from "004-whatever")
if [[ ! "$branch_name" =~ ^([0-9]{3})- ]]; then
# If branch doesn't have numeric prefix, fall back to exact match
echo "$specs_dir/$branch_name"
return
fi
local prefix="${BASH_REMATCH[1]}"
# Search for directories in specs/ that start with this prefix
local matches=()
if [[ -d "$specs_dir" ]]; then
for dir in "$specs_dir"/"$prefix"-*; do
if [[ -d "$dir" ]]; then
matches+=("$(basename "$dir")")
fi
done
fi
# Handle results
if [[ ${#matches[@]} -eq 0 ]]; then
# No match found - return the branch name path (will fail later with clear error)
echo "$specs_dir/$branch_name"
elif [[ ${#matches[@]} -eq 1 ]]; then
# Exactly one match - perfect!
echo "$specs_dir/${matches[0]}"
else
# Multiple matches - this shouldn't happen with proper naming convention
echo "ERROR: Multiple spec directories found with prefix '$prefix': ${matches[*]}" >&2
echo "Please ensure only one spec directory exists per numeric prefix." >&2
echo "$specs_dir/$branch_name" # Return something to avoid breaking the script
fi
}
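# Illustrative sketch (names assumed): on branch "004-fix-bug" with a single
# matching directory specs/004-user-profile, this echoes that directory's path,
# so any branch sharing the 004 prefix resolves to the same spec.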
get_feature_paths() {
local repo_root=$(get_repo_root)
local current_branch=$(get_current_branch)
local has_git_repo="false"
if has_git; then
has_git_repo="true"
fi
# Use prefix-based lookup to support multiple branches per spec
local feature_dir=$(find_feature_dir_by_prefix "$repo_root" "$current_branch")
cat <<EOF
REPO_ROOT='$repo_root'
CURRENT_BRANCH='$current_branch'
HAS_GIT='$has_git_repo'
FEATURE_DIR='$feature_dir'
FEATURE_SPEC='$feature_dir/spec.md'
IMPL_PLAN='$feature_dir/plan.md'
TASKS='$feature_dir/tasks.md'
RESEARCH='$feature_dir/research.md'
DATA_MODEL='$feature_dir/data-model.md'
QUICKSTART='$feature_dir/quickstart.md'
CONTRACTS_DIR='$feature_dir/contracts'
EOF
}
check_file() { [[ -f "$1" ]] && echo "  ✓ $2" || echo "  ✗ $2"; }
check_dir() { [[ -d "$1" && -n $(ls -A "$1" 2>/dev/null) ]] && echo "  ✓ $2" || echo "  ✗ $2"; }


@@ -1,260 +0,0 @@
#!/usr/bin/env bash
set -e
JSON_MODE=false
SHORT_NAME=""
BRANCH_NUMBER=""
ARGS=()
i=1
while [ $i -le $# ]; do
arg="${!i}"
case "$arg" in
--json)
JSON_MODE=true
;;
--short-name)
if [ $((i + 1)) -gt $# ]; then
echo 'Error: --short-name requires a value' >&2
exit 1
fi
i=$((i + 1))
next_arg="${!i}"
# Check if the next argument is another option (starts with --)
if [[ "$next_arg" == --* ]]; then
echo 'Error: --short-name requires a value' >&2
exit 1
fi
SHORT_NAME="$next_arg"
;;
--number)
if [ $((i + 1)) -gt $# ]; then
echo 'Error: --number requires a value' >&2
exit 1
fi
i=$((i + 1))
next_arg="${!i}"
if [[ "$next_arg" == --* ]]; then
echo 'Error: --number requires a value' >&2
exit 1
fi
BRANCH_NUMBER="$next_arg"
;;
--help|-h)
echo "Usage: $0 [--json] [--short-name <name>] [--number N] <feature_description>"
echo ""
echo "Options:"
echo " --json Output in JSON format"
echo " --short-name <name> Provide a custom short name (2-4 words) for the branch"
echo " --number N Specify branch number manually (overrides auto-detection)"
echo " --help, -h Show this help message"
echo ""
echo "Examples:"
echo " $0 'Add user authentication system' --short-name 'user-auth'"
echo " $0 'Implement OAuth2 integration for API' --number 5"
exit 0
;;
*)
ARGS+=("$arg")
;;
esac
i=$((i + 1))
done
FEATURE_DESCRIPTION="${ARGS[*]}"
if [ -z "$FEATURE_DESCRIPTION" ]; then
echo "Usage: $0 [--json] [--short-name <name>] [--number N] <feature_description>" >&2
exit 1
fi
# Function to find the repository root by searching for existing project markers
find_repo_root() {
local dir="$1"
while [ "$dir" != "/" ]; do
if [ -d "$dir/.git" ] || [ -d "$dir/.specify" ]; then
echo "$dir"
return 0
fi
dir="$(dirname "$dir")"
done
return 1
}
# Function to check existing branches (local and remote) and return next available number
check_existing_branches() {
local short_name="$1"
# Fetch all remotes to get latest branch info (suppress errors if no remotes)
git fetch --all --prune 2>/dev/null || true
# Find all branches matching the pattern using git ls-remote (more reliable)
local remote_branches=$(git ls-remote --heads origin 2>/dev/null | grep -E "refs/heads/[0-9]+-${short_name}$" | sed 's/.*\/\([0-9]*\)-.*/\1/' | sort -n)
# Also check local branches
local local_branches=$(git branch 2>/dev/null | grep -E "^[* ]*[0-9]+-${short_name}$" | sed 's/^[* ]*//' | sed 's/-.*//' | sort -n)
# Check specs directory as well
local spec_dirs=""
if [ -d "$SPECS_DIR" ]; then
spec_dirs=$(find "$SPECS_DIR" -maxdepth 1 -type d -name "[0-9]*-${short_name}" 2>/dev/null | xargs -n1 basename 2>/dev/null | sed 's/-.*//' | sort -n)
fi
# Combine all sources and get the highest number
local max_num=0
for num in $remote_branches $local_branches $spec_dirs; do
if [ "$num" -gt "$max_num" ]; then
max_num=$num
fi
done
# Return next number
echo $((max_num + 1))
}
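# Illustrative sketch (branch names assumed): with remote 002-user-auth and
# local 005-user-auth for short name "user-auth", max_num is 5 and this echoes 6.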
# Resolve repository root. Prefer git information when available, but fall back
# to searching for repository markers so the workflow still functions in repositories that
# were initialised with --no-git.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
if git rev-parse --show-toplevel >/dev/null 2>&1; then
REPO_ROOT=$(git rev-parse --show-toplevel)
HAS_GIT=true
else
REPO_ROOT="$(find_repo_root "$SCRIPT_DIR")"
if [ -z "$REPO_ROOT" ]; then
echo "Error: Could not determine repository root. Please run this script from within the repository." >&2
exit 1
fi
HAS_GIT=false
fi
cd "$REPO_ROOT"
SPECS_DIR="$REPO_ROOT/specs"
mkdir -p "$SPECS_DIR"
# Function to generate branch name with stop word filtering and length filtering
generate_branch_name() {
local description="$1"
# Common stop words to filter out
local stop_words="^(i|a|an|the|to|for|of|in|on|at|by|with|from|is|are|was|were|be|been|being|have|has|had|do|does|did|will|would|should|could|can|may|might|must|shall|this|that|these|those|my|your|our|their|want|need|add|get|set)$"
# Convert to lowercase and split into words
local clean_name=$(echo "$description" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/ /g')
# Filter words: remove stop words and words shorter than 3 chars (unless they're uppercase acronyms in original)
local meaningful_words=()
for word in $clean_name; do
# Skip empty words
[ -z "$word" ] && continue
# Keep words that are NOT stop words AND (length >= 3 OR are potential acronyms)
if ! echo "$word" | grep -qiE "$stop_words"; then
if [ ${#word} -ge 3 ]; then
meaningful_words+=("$word")
elif echo "$description" | grep -q "\b${word^^}\b"; then
# Keep short words if they appear as uppercase in original (likely acronyms)
meaningful_words+=("$word")
fi
fi
done
# If we have meaningful words, use first 3-4 of them
if [ ${#meaningful_words[@]} -gt 0 ]; then
local max_words=3
if [ ${#meaningful_words[@]} -eq 4 ]; then max_words=4; fi
local result=""
local count=0
for word in "${meaningful_words[@]}"; do
if [ $count -ge $max_words ]; then break; fi
if [ -n "$result" ]; then result="$result-"; fi
result="$result$word"
count=$((count + 1))
done
echo "$result"
else
# Fallback to original logic if no meaningful words found
echo "$description" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/-\+/-/g' | sed 's/^-//' | sed 's/-$//' | tr '-' '\n' | grep -v '^$' | head -3 | tr '\n' '-' | sed 's/-$//'
fi
}
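# Illustrative sketch (input assumed): "Add user authentication system" drops
# the stop word "add" and yields "user-authentication-system".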
# Generate branch name
if [ -n "$SHORT_NAME" ]; then
# Use provided short name, just clean it up
BRANCH_SUFFIX=$(echo "$SHORT_NAME" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/-\+/-/g' | sed 's/^-//' | sed 's/-$//')
else
# Generate from description with smart filtering
BRANCH_SUFFIX=$(generate_branch_name "$FEATURE_DESCRIPTION")
fi
# Determine branch number
if [ -z "$BRANCH_NUMBER" ]; then
if [ "$HAS_GIT" = true ]; then
# Check existing branches on remotes
BRANCH_NUMBER=$(check_existing_branches "$BRANCH_SUFFIX")
else
# Fall back to local directory check
HIGHEST=0
if [ -d "$SPECS_DIR" ]; then
for dir in "$SPECS_DIR"/*; do
[ -d "$dir" ] || continue
dirname=$(basename "$dir")
number=$(echo "$dirname" | grep -o '^[0-9]\+' || echo "0")
number=$((10#$number))
if [ "$number" -gt "$HIGHEST" ]; then HIGHEST=$number; fi
done
fi
BRANCH_NUMBER=$((HIGHEST + 1))
fi
fi
FEATURE_NUM=$(printf "%03d" "$BRANCH_NUMBER")
BRANCH_NAME="${FEATURE_NUM}-${BRANCH_SUFFIX}"
# GitHub enforces a 244-byte limit on branch names
# Validate and truncate if necessary
MAX_BRANCH_LENGTH=244
if [ ${#BRANCH_NAME} -gt $MAX_BRANCH_LENGTH ]; then
# Calculate how much we need to trim from suffix
# Account for: feature number (3) + hyphen (1) = 4 chars
MAX_SUFFIX_LENGTH=$((MAX_BRANCH_LENGTH - 4))
# Truncate suffix at word boundary if possible
TRUNCATED_SUFFIX=$(echo "$BRANCH_SUFFIX" | cut -c1-$MAX_SUFFIX_LENGTH)
# Remove trailing hyphen if truncation created one
TRUNCATED_SUFFIX=$(echo "$TRUNCATED_SUFFIX" | sed 's/-$//')
ORIGINAL_BRANCH_NAME="$BRANCH_NAME"
BRANCH_NAME="${FEATURE_NUM}-${TRUNCATED_SUFFIX}"
>&2 echo "[specify] Warning: Branch name exceeded GitHub's 244-byte limit"
>&2 echo "[specify] Original: $ORIGINAL_BRANCH_NAME (${#ORIGINAL_BRANCH_NAME} bytes)"
>&2 echo "[specify] Truncated to: $BRANCH_NAME (${#BRANCH_NAME} bytes)"
fi
if [ "$HAS_GIT" = true ]; then
git checkout -b "$BRANCH_NAME"
else
>&2 echo "[specify] Warning: Git repository not detected; skipped branch creation for $BRANCH_NAME"
fi
FEATURE_DIR="$SPECS_DIR/$BRANCH_NAME"
mkdir -p "$FEATURE_DIR"
TEMPLATE="$REPO_ROOT/.specify/templates/spec-template.md"
SPEC_FILE="$FEATURE_DIR/spec.md"
if [ -f "$TEMPLATE" ]; then cp "$TEMPLATE" "$SPEC_FILE"; else touch "$SPEC_FILE"; fi
# Set the SPECIFY_FEATURE environment variable for the current session
export SPECIFY_FEATURE="$BRANCH_NAME"
if $JSON_MODE; then
printf '{"BRANCH_NAME":"%s","SPEC_FILE":"%s","FEATURE_NUM":"%s"}\n' "$BRANCH_NAME" "$SPEC_FILE" "$FEATURE_NUM"
else
echo "BRANCH_NAME: $BRANCH_NAME"
echo "SPEC_FILE: $SPEC_FILE"
echo "FEATURE_NUM: $FEATURE_NUM"
echo "SPECIFY_FEATURE environment variable set to: $BRANCH_NAME"
fi


@@ -1,61 +0,0 @@
#!/usr/bin/env bash
set -e
# Parse command line arguments
JSON_MODE=false
ARGS=()
for arg in "$@"; do
case "$arg" in
--json)
JSON_MODE=true
;;
--help|-h)
echo "Usage: $0 [--json]"
echo " --json Output results in JSON format"
echo " --help Show this help message"
exit 0
;;
*)
ARGS+=("$arg")
;;
esac
done
# Get script directory and load common functions
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"
# Get all paths and variables from common functions
eval $(get_feature_paths)
# Check if we're on a proper feature branch (only for git repos)
check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1
# Ensure the feature directory exists
mkdir -p "$FEATURE_DIR"
# Copy plan template if it exists
TEMPLATE="$REPO_ROOT/.specify/templates/plan-template.md"
if [[ -f "$TEMPLATE" ]]; then
cp "$TEMPLATE" "$IMPL_PLAN"
echo "Copied plan template to $IMPL_PLAN"
else
echo "Warning: Plan template not found at $TEMPLATE"
# Create a basic plan file if template doesn't exist
touch "$IMPL_PLAN"
fi
# Output results
if $JSON_MODE; then
printf '{"FEATURE_SPEC":"%s","IMPL_PLAN":"%s","SPECS_DIR":"%s","BRANCH":"%s","HAS_GIT":"%s"}\n' \
"$FEATURE_SPEC" "$IMPL_PLAN" "$FEATURE_DIR" "$CURRENT_BRANCH" "$HAS_GIT"
else
echo "FEATURE_SPEC: $FEATURE_SPEC"
echo "IMPL_PLAN: $IMPL_PLAN"
echo "SPECS_DIR: $FEATURE_DIR"
echo "BRANCH: $CURRENT_BRANCH"
echo "HAS_GIT: $HAS_GIT"
fi


@@ -1,772 +0,0 @@
#!/usr/bin/env bash
# Update agent context files with information from plan.md
#
# This script maintains AI agent context files by parsing feature specifications
# and updating agent-specific configuration files with project information.
#
# MAIN FUNCTIONS:
# 1. Environment Validation
# - Verifies git repository structure and branch information
# - Checks for required plan.md files and templates
# - Validates file permissions and accessibility
#
# 2. Plan Data Extraction
# - Parses plan.md files to extract project metadata
# - Identifies language/version, frameworks, databases, and project types
# - Handles missing or incomplete specification data gracefully
#
# 3. Agent File Management
# - Creates new agent context files from templates when needed
# - Updates existing agent files with new project information
# - Preserves manual additions and custom configurations
# - Supports multiple AI agent formats and directory structures
#
# 4. Content Generation
# - Generates language-specific build/test commands
# - Creates appropriate project directory structures
# - Updates technology stacks and recent changes sections
# - Maintains consistent formatting and timestamps
#
# 5. Multi-Agent Support
# - Handles agent-specific file paths and naming conventions
# - Supports: Claude, Gemini, Copilot, Cursor, Qwen, opencode, Codex, Windsurf, Kilo Code, Auggie CLI, Roo Code, CodeBuddy CLI, Amp, or Amazon Q Developer CLI
# - Can update single agents or all existing agent files
# - Creates default Claude file if no agent files exist
#
# Usage: ./update-agent-context.sh [agent_type]
# Agent types: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|codebuddy|amp|q
# Leave empty to update all existing agent files
set -e
# Enable strict error handling
set -u
set -o pipefail
#==============================================================================
# Configuration and Global Variables
#==============================================================================
# Get script directory and load common functions
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"
# Get all paths and variables from common functions
eval $(get_feature_paths)
NEW_PLAN="$IMPL_PLAN" # Alias for compatibility with existing code
AGENT_TYPE="${1:-}"
# Agent-specific file paths
CLAUDE_FILE="$REPO_ROOT/CLAUDE.md"
GEMINI_FILE="$REPO_ROOT/GEMINI.md"
COPILOT_FILE="$REPO_ROOT/.github/copilot-instructions.md"
CURSOR_FILE="$REPO_ROOT/.cursor/rules/specify-rules.mdc"
QWEN_FILE="$REPO_ROOT/QWEN.md"
AGENTS_FILE="$REPO_ROOT/AGENTS.md"
WINDSURF_FILE="$REPO_ROOT/.windsurf/rules/specify-rules.md"
KILOCODE_FILE="$REPO_ROOT/.kilocode/rules/specify-rules.md"
AUGGIE_FILE="$REPO_ROOT/.augment/rules/specify-rules.md"
ROO_FILE="$REPO_ROOT/.roo/rules/specify-rules.md"
CODEBUDDY_FILE="$REPO_ROOT/CODEBUDDY.md"
AMP_FILE="$REPO_ROOT/AGENTS.md"
Q_FILE="$REPO_ROOT/AGENTS.md"
# Template file
TEMPLATE_FILE="$REPO_ROOT/.specify/templates/agent-file-template.md"
# Global variables for parsed plan data
NEW_LANG=""
NEW_FRAMEWORK=""
NEW_DB=""
NEW_PROJECT_TYPE=""
#==============================================================================
# Utility Functions
#==============================================================================
log_info() {
echo "INFO: $1"
}
log_success() {
echo "$1"
}
log_error() {
echo "ERROR: $1" >&2
}
log_warning() {
echo "WARNING: $1" >&2
}
# Cleanup function for temporary files
cleanup() {
local exit_code=$?
rm -f /tmp/agent_update_*_$$
rm -f /tmp/manual_additions_$$
exit $exit_code
}
# Set up cleanup trap
trap cleanup EXIT INT TERM
#==============================================================================
# Validation Functions
#==============================================================================
validate_environment() {
# Check if we have a current branch/feature (git or non-git)
if [[ -z "$CURRENT_BRANCH" ]]; then
log_error "Unable to determine current feature"
if [[ "$HAS_GIT" == "true" ]]; then
log_info "Make sure you're on a feature branch"
else
log_info "Set SPECIFY_FEATURE environment variable or create a feature first"
fi
exit 1
fi
# Check if plan.md exists
if [[ ! -f "$NEW_PLAN" ]]; then
log_error "No plan.md found at $NEW_PLAN"
log_info "Make sure you're working on a feature with a corresponding spec directory"
if [[ "$HAS_GIT" != "true" ]]; then
log_info "Use: export SPECIFY_FEATURE=your-feature-name or create a new feature first"
fi
exit 1
fi
# Check if template exists (needed for new files)
if [[ ! -f "$TEMPLATE_FILE" ]]; then
log_warning "Template file not found at $TEMPLATE_FILE"
log_warning "Creating new agent files will fail"
fi
}
#==============================================================================
# Plan Parsing Functions
#==============================================================================
extract_plan_field() {
local field_pattern="$1"
local plan_file="$2"
grep "^\*\*${field_pattern}\*\*: " "$plan_file" 2>/dev/null | \
head -1 | \
sed "s|^\*\*${field_pattern}\*\*: ||" | \
sed 's/^[ \t]*//;s/[ \t]*$//' | \
grep -v "NEEDS CLARIFICATION" | \
grep -v "^N/A$" || echo ""
}
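# Illustrative sketch (value assumed): given a plan.md line
#   **Language/Version**: Go 1.22
# extract_plan_field "Language/Version" plan.md prints "Go 1.22".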
parse_plan_data() {
local plan_file="$1"
if [[ ! -f "$plan_file" ]]; then
log_error "Plan file not found: $plan_file"
return 1
fi
if [[ ! -r "$plan_file" ]]; then
log_error "Plan file is not readable: $plan_file"
return 1
fi
log_info "Parsing plan data from $plan_file"
NEW_LANG=$(extract_plan_field "Language/Version" "$plan_file")
NEW_FRAMEWORK=$(extract_plan_field "Primary Dependencies" "$plan_file")
NEW_DB=$(extract_plan_field "Storage" "$plan_file")
NEW_PROJECT_TYPE=$(extract_plan_field "Project Type" "$plan_file")
# Log what we found
if [[ -n "$NEW_LANG" ]]; then
log_info "Found language: $NEW_LANG"
else
log_warning "No language information found in plan"
fi
if [[ -n "$NEW_FRAMEWORK" ]]; then
log_info "Found framework: $NEW_FRAMEWORK"
fi
if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then
log_info "Found database: $NEW_DB"
fi
if [[ -n "$NEW_PROJECT_TYPE" ]]; then
log_info "Found project type: $NEW_PROJECT_TYPE"
fi
}
format_technology_stack() {
local lang="$1"
local framework="$2"
local parts=()
# Add non-empty parts
[[ -n "$lang" && "$lang" != "NEEDS CLARIFICATION" ]] && parts+=("$lang")
[[ -n "$framework" && "$framework" != "NEEDS CLARIFICATION" && "$framework" != "N/A" ]] && parts+=("$framework")
# Join with proper formatting
if [[ ${#parts[@]} -eq 0 ]]; then
echo ""
elif [[ ${#parts[@]} -eq 1 ]]; then
echo "${parts[0]}"
else
# Join multiple parts with " + "
local result="${parts[0]}"
for ((i=1; i<${#parts[@]}; i++)); do
result="$result + ${parts[i]}"
done
echo "$result"
fi
}
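# Illustrative sketch (values assumed):
#   format_technology_stack "Go 1.22" "Fiber"  ->  "Go 1.22 + Fiber"
#   format_technology_stack "Go 1.22" "N/A"    ->  "Go 1.22" (N/A framework dropped)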
#==============================================================================
# Template and Content Generation Functions
#==============================================================================
get_project_structure() {
local project_type="$1"
if [[ "$project_type" == *"web"* ]]; then
echo "backend/\\nfrontend/\\ntests/"
else
echo "src/\\ntests/"
fi
}
get_commands_for_language() {
local lang="$1"
case "$lang" in
*"Python"*)
echo "cd src && pytest && ruff check ."
;;
*"Rust"*)
echo "cargo test && cargo clippy"
;;
*"JavaScript"*|*"TypeScript"*)
echo "npm test \\&\\& npm run lint"
;;
*)
echo "# Add commands for $lang"
;;
esac
}
get_language_conventions() {
local lang="$1"
echo "$lang: Follow standard conventions"
}
create_new_agent_file() {
local target_file="$1"
local temp_file="$2"
local project_name="$3"
local current_date="$4"
if [[ ! -f "$TEMPLATE_FILE" ]]; then
log_error "Template not found at $TEMPLATE_FILE"
return 1
fi
if [[ ! -r "$TEMPLATE_FILE" ]]; then
log_error "Template file is not readable: $TEMPLATE_FILE"
return 1
fi
log_info "Creating new agent context file from template..."
if ! cp "$TEMPLATE_FILE" "$temp_file"; then
log_error "Failed to copy template file"
return 1
fi
# Replace template placeholders
local project_structure
project_structure=$(get_project_structure "$NEW_PROJECT_TYPE")
local commands
commands=$(get_commands_for_language "$NEW_LANG")
local language_conventions
language_conventions=$(get_language_conventions "$NEW_LANG")
# Perform substitutions with error checking using safer approach
# Escape special characters for sed by using a different delimiter or escaping
local escaped_lang=$(printf '%s\n' "$NEW_LANG" | sed 's/[\[\.*^$()+{}|]/\\&/g')
local escaped_framework=$(printf '%s\n' "$NEW_FRAMEWORK" | sed 's/[\[\.*^$()+{}|]/\\&/g')
local escaped_branch=$(printf '%s\n' "$CURRENT_BRANCH" | sed 's/[\[\.*^$()+{}|]/\\&/g')
# Build technology stack and recent change strings conditionally
local tech_stack
if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then
tech_stack="- $escaped_lang + $escaped_framework ($escaped_branch)"
elif [[ -n "$escaped_lang" ]]; then
tech_stack="- $escaped_lang ($escaped_branch)"
elif [[ -n "$escaped_framework" ]]; then
tech_stack="- $escaped_framework ($escaped_branch)"
else
tech_stack="- ($escaped_branch)"
fi
local recent_change
if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then
recent_change="- $escaped_branch: Added $escaped_lang + $escaped_framework"
elif [[ -n "$escaped_lang" ]]; then
recent_change="- $escaped_branch: Added $escaped_lang"
elif [[ -n "$escaped_framework" ]]; then
recent_change="- $escaped_branch: Added $escaped_framework"
else
recent_change="- $escaped_branch: Added"
fi
local substitutions=(
"s|\[PROJECT NAME\]|$project_name|"
"s|\[DATE\]|$current_date|"
"s|\[EXTRACTED FROM ALL PLAN.MD FILES\]|$tech_stack|"
"s|\[ACTUAL STRUCTURE FROM PLANS\]|$project_structure|g"
"s|\[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES\]|$commands|"
"s|\[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE\]|$language_conventions|"
"s|\[LAST 3 FEATURES AND WHAT THEY ADDED\]|$recent_change|"
)
for substitution in "${substitutions[@]}"; do
if ! sed -i.bak -e "$substitution" "$temp_file"; then
log_error "Failed to perform substitution: $substitution"
rm -f "$temp_file" "$temp_file.bak"
return 1
fi
done
# Convert \n sequences to actual newlines
newline=$'\n'
sed -i.bak2 "s/\\\\n/\\${newline}/g" "$temp_file"
# Clean up backup files
rm -f "$temp_file.bak" "$temp_file.bak2"
return 0
}
update_existing_agent_file() {
local target_file="$1"
local current_date="$2"
log_info "Updating existing agent context file..."
# Use a single temporary file for atomic update
local temp_file
temp_file=$(mktemp) || {
log_error "Failed to create temporary file"
return 1
}
# Process the file in one pass
local tech_stack=$(format_technology_stack "$NEW_LANG" "$NEW_FRAMEWORK")
local new_tech_entries=()
local new_change_entry=""
# Prepare new technology entries
if [[ -n "$tech_stack" ]] && ! grep -q "$tech_stack" "$target_file"; then
new_tech_entries+=("- $tech_stack ($CURRENT_BRANCH)")
fi
if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]] && ! grep -q "$NEW_DB" "$target_file"; then
new_tech_entries+=("- $NEW_DB ($CURRENT_BRANCH)")
fi
# Prepare new change entry
if [[ -n "$tech_stack" ]]; then
new_change_entry="- $CURRENT_BRANCH: Added $tech_stack"
elif [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]]; then
new_change_entry="- $CURRENT_BRANCH: Added $NEW_DB"
fi
# Check if sections exist in the file
local has_active_technologies=0
local has_recent_changes=0
if grep -q "^## Active Technologies" "$target_file" 2>/dev/null; then
has_active_technologies=1
fi
if grep -q "^## Recent Changes" "$target_file" 2>/dev/null; then
has_recent_changes=1
fi
# Process file line by line
local in_tech_section=false
local in_changes_section=false
local tech_entries_added=false
local changes_entries_added=false
local existing_changes_count=0
local file_ended=false
while IFS= read -r line || [[ -n "$line" ]]; do
# Handle Active Technologies section
if [[ "$line" == "## Active Technologies" ]]; then
echo "$line" >> "$temp_file"
in_tech_section=true
continue
elif [[ $in_tech_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then
# Add new tech entries before closing the section
if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
tech_entries_added=true
fi
echo "$line" >> "$temp_file"
in_tech_section=false
continue
elif [[ $in_tech_section == true ]] && [[ -z "$line" ]]; then
# Add new tech entries before empty line in tech section
if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
tech_entries_added=true
fi
echo "$line" >> "$temp_file"
continue
fi
# Handle Recent Changes section
if [[ "$line" == "## Recent Changes" ]]; then
echo "$line" >> "$temp_file"
# Add new change entry right after the heading
if [[ -n "$new_change_entry" ]]; then
echo "$new_change_entry" >> "$temp_file"
fi
in_changes_section=true
changes_entries_added=true
continue
elif [[ $in_changes_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then
echo "$line" >> "$temp_file"
in_changes_section=false
continue
elif [[ $in_changes_section == true ]] && [[ "$line" == "- "* ]]; then
# Keep only first 2 existing changes
if [[ $existing_changes_count -lt 2 ]]; then
echo "$line" >> "$temp_file"
((existing_changes_count++))
fi
continue
fi
# Update timestamp
if [[ "$line" =~ \*\*Last\ updated\*\*:.*[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] ]]; then
echo "$line" | sed "s/[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]/$current_date/" >> "$temp_file"
else
echo "$line" >> "$temp_file"
fi
done < "$target_file"
# Post-loop check: if we're still in the Active Technologies section and haven't added new entries
if [[ $in_tech_section == true ]] && [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
tech_entries_added=true
fi
# If sections don't exist, add them at the end of the file
if [[ $has_active_technologies -eq 0 ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
echo "" >> "$temp_file"
echo "## Active Technologies" >> "$temp_file"
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
tech_entries_added=true
fi
if [[ $has_recent_changes -eq 0 ]] && [[ -n "$new_change_entry" ]]; then
echo "" >> "$temp_file"
echo "## Recent Changes" >> "$temp_file"
echo "$new_change_entry" >> "$temp_file"
changes_entries_added=true
fi
# Move temp file to target atomically
if ! mv "$temp_file" "$target_file"; then
log_error "Failed to update target file"
rm -f "$temp_file"
return 1
fi
return 0
}
#==============================================================================
# Main Agent File Update Function
#==============================================================================
update_agent_file() {
local target_file="$1"
local agent_name="$2"
if [[ -z "$target_file" ]] || [[ -z "$agent_name" ]]; then
log_error "update_agent_file requires target_file and agent_name parameters"
return 1
fi
log_info "Updating $agent_name context file: $target_file"
local project_name
project_name=$(basename "$REPO_ROOT")
local current_date
current_date=$(date +%Y-%m-%d)
# Create directory if it doesn't exist
local target_dir
target_dir=$(dirname "$target_file")
if [[ ! -d "$target_dir" ]]; then
if ! mkdir -p "$target_dir"; then
log_error "Failed to create directory: $target_dir"
return 1
fi
fi
if [[ ! -f "$target_file" ]]; then
# Create new file from template
local temp_file
temp_file=$(mktemp) || {
log_error "Failed to create temporary file"
return 1
}
if create_new_agent_file "$target_file" "$temp_file" "$project_name" "$current_date"; then
if mv "$temp_file" "$target_file"; then
log_success "Created new $agent_name context file"
else
log_error "Failed to move temporary file to $target_file"
rm -f "$temp_file"
return 1
fi
else
log_error "Failed to create new agent file"
rm -f "$temp_file"
return 1
fi
else
# Update existing file
if [[ ! -r "$target_file" ]]; then
log_error "Cannot read existing file: $target_file"
return 1
fi
if [[ ! -w "$target_file" ]]; then
log_error "Cannot write to existing file: $target_file"
return 1
fi
if update_existing_agent_file "$target_file" "$current_date"; then
log_success "Updated existing $agent_name context file"
else
log_error "Failed to update existing agent file"
return 1
fi
fi
return 0
}
#==============================================================================
# Agent Selection and Processing
#==============================================================================
update_specific_agent() {
local agent_type="$1"
case "$agent_type" in
claude)
update_agent_file "$CLAUDE_FILE" "Claude Code"
;;
gemini)
update_agent_file "$GEMINI_FILE" "Gemini CLI"
;;
copilot)
update_agent_file "$COPILOT_FILE" "GitHub Copilot"
;;
cursor-agent)
update_agent_file "$CURSOR_FILE" "Cursor IDE"
;;
qwen)
update_agent_file "$QWEN_FILE" "Qwen Code"
;;
opencode)
update_agent_file "$AGENTS_FILE" "opencode"
;;
codex)
update_agent_file "$AGENTS_FILE" "Codex CLI"
;;
windsurf)
update_agent_file "$WINDSURF_FILE" "Windsurf"
;;
kilocode)
update_agent_file "$KILOCODE_FILE" "Kilo Code"
;;
auggie)
update_agent_file "$AUGGIE_FILE" "Auggie CLI"
;;
roo)
update_agent_file "$ROO_FILE" "Roo Code"
;;
codebuddy)
update_agent_file "$CODEBUDDY_FILE" "CodeBuddy CLI"
;;
amp)
update_agent_file "$AMP_FILE" "Amp"
;;
q)
update_agent_file "$Q_FILE" "Amazon Q Developer CLI"
;;
*)
log_error "Unknown agent type '$agent_type'"
log_error "Expected: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|amp|q"
exit 1
;;
esac
}
update_all_existing_agents() {
local found_agent=false
# Check each possible agent file and update if it exists
if [[ -f "$CLAUDE_FILE" ]]; then
update_agent_file "$CLAUDE_FILE" "Claude Code"
found_agent=true
fi
if [[ -f "$GEMINI_FILE" ]]; then
update_agent_file "$GEMINI_FILE" "Gemini CLI"
found_agent=true
fi
if [[ -f "$COPILOT_FILE" ]]; then
update_agent_file "$COPILOT_FILE" "GitHub Copilot"
found_agent=true
fi
if [[ -f "$CURSOR_FILE" ]]; then
update_agent_file "$CURSOR_FILE" "Cursor IDE"
found_agent=true
fi
if [[ -f "$QWEN_FILE" ]]; then
update_agent_file "$QWEN_FILE" "Qwen Code"
found_agent=true
fi
if [[ -f "$AGENTS_FILE" ]]; then
update_agent_file "$AGENTS_FILE" "Codex/opencode"
found_agent=true
fi
if [[ -f "$WINDSURF_FILE" ]]; then
update_agent_file "$WINDSURF_FILE" "Windsurf"
found_agent=true
fi
if [[ -f "$KILOCODE_FILE" ]]; then
update_agent_file "$KILOCODE_FILE" "Kilo Code"
found_agent=true
fi
if [[ -f "$AUGGIE_FILE" ]]; then
update_agent_file "$AUGGIE_FILE" "Auggie CLI"
found_agent=true
fi
if [[ -f "$ROO_FILE" ]]; then
update_agent_file "$ROO_FILE" "Roo Code"
found_agent=true
fi
if [[ -f "$CODEBUDDY_FILE" ]]; then
update_agent_file "$CODEBUDDY_FILE" "CodeBuddy CLI"
found_agent=true
fi
if [[ -f "$Q_FILE" ]]; then
update_agent_file "$Q_FILE" "Amazon Q Developer CLI"
found_agent=true
fi
# If no agent files exist, create a default Claude file
if [[ "$found_agent" == false ]]; then
log_info "No existing agent files found, creating default Claude file..."
update_agent_file "$CLAUDE_FILE" "Claude Code"
fi
}
print_summary() {
echo
log_info "Summary of changes:"
if [[ -n "$NEW_LANG" ]]; then
echo " - Added language: $NEW_LANG"
fi
if [[ -n "$NEW_FRAMEWORK" ]]; then
echo " - Added framework: $NEW_FRAMEWORK"
fi
if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then
echo " - Added database: $NEW_DB"
fi
echo
log_info "Usage: $0 [claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|codebuddy|q]"
}
#==============================================================================
# Main Execution
#==============================================================================
main() {
# Validate environment before proceeding
validate_environment
log_info "=== Updating agent context files for feature $CURRENT_BRANCH ==="
# Parse the plan file to extract project information
if ! parse_plan_data "$NEW_PLAN"; then
log_error "Failed to parse plan data"
exit 1
fi
# Process based on agent type argument
local success=true
if [[ -z "$AGENT_TYPE" ]]; then
# No specific agent provided - update all existing agent files
log_info "No agent specified, updating all existing agent files..."
if ! update_all_existing_agents; then
success=false
fi
else
# Specific agent provided - update only that agent
log_info "Updating specific agent: $AGENT_TYPE"
if ! update_specific_agent "$AGENT_TYPE"; then
success=false
fi
fi
# Print summary
print_summary
if [[ "$success" == true ]]; then
log_success "Agent context update completed successfully"
exit 0
else
log_error "Agent context update completed with errors"
exit 1
fi
}
# Execute main function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
main "$@"
fi


@@ -1,28 +0,0 @@
# [PROJECT NAME] Development Guidelines
Auto-generated from all feature plans. Last updated: [DATE]
## Active Technologies
[EXTRACTED FROM ALL PLAN.MD FILES]
## Project Structure
```text
[ACTUAL STRUCTURE FROM PLANS]
```
## Commands
[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES]
## Code Style
[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE]
## Recent Changes
[LAST 3 FEATURES AND WHAT THEY ADDED]
<!-- MANUAL ADDITIONS START -->
<!-- MANUAL ADDITIONS END -->


@@ -1,40 +0,0 @@
# [CHECKLIST TYPE] Checklist: [FEATURE NAME]
**Purpose**: [Brief description of what this checklist covers]
**Created**: [DATE]
**Feature**: [Link to spec.md or relevant documentation]
**Note**: This checklist is generated by the `/speckit.checklist` command based on feature context and requirements.
<!--
============================================================================
IMPORTANT: The checklist items below are SAMPLE ITEMS for illustration only.
The /speckit.checklist command MUST replace these with actual items based on:
- User's specific checklist request
- Feature requirements from spec.md
- Technical context from plan.md
- Implementation details from tasks.md
DO NOT keep these sample items in the generated checklist file.
============================================================================
-->
## [Category 1]
- [ ] CHK001 First checklist item with clear action
- [ ] CHK002 Second checklist item
- [ ] CHK003 Third checklist item
## [Category 2]
- [ ] CHK004 Another category item
- [ ] CHK005 Item with specific criteria
- [ ] CHK006 Final item in this category
## Notes
- Check items off as completed: `[x]`
- Add comments or findings inline
- Link to relevant resources or documentation
- Items are numbered sequentially for easy reference


@@ -1,215 +0,0 @@
# Implementation Plan: [FEATURE]
**Branch**: `[###-feature-name]` | **Date**: [DATE] | **Spec**: [link]
**Input**: Feature specification from `/specs/[###-feature-name]/spec.md`
**Note**: This template is filled in by the `/speckit.plan` command. See `.specify/templates/commands/plan.md` for the execution workflow.
## Summary
[Extract from feature spec: primary requirement + technical approach from research]
## Technical Context
<!--
ACTION REQUIRED: Replace the content in this section with the technical details
for the project. The structure here is presented in advisory capacity to guide
the iteration process.
-->
**Language/Version**: [e.g., Python 3.11, Swift 5.9, Rust 1.75 or NEEDS CLARIFICATION]
**Primary Dependencies**: [e.g., FastAPI, UIKit, LLVM or NEEDS CLARIFICATION]
**Storage**: [if applicable, e.g., PostgreSQL, CoreData, files or N/A]
**Testing**: [e.g., pytest, XCTest, cargo test or NEEDS CLARIFICATION]
**Target Platform**: [e.g., Linux server, iOS 15+, WASM or NEEDS CLARIFICATION]
**Project Type**: [single/web/mobile - determines source structure]
**Performance Goals**: [domain-specific, e.g., 1000 req/s, 10k lines/sec, 60 fps or NEEDS CLARIFICATION]
**Constraints**: [domain-specific, e.g., <200ms p95, <100MB memory, offline-capable or NEEDS CLARIFICATION]
**Scale/Scope**: [domain-specific, e.g., 10k users, 1M LOC, 50 screens or NEEDS CLARIFICATION]
## Constitution Check
*GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*
**Tech Stack Adherence**:
- [ ] Feature uses Fiber + GORM + Viper + Zap + Lumberjack.v2 + Validator + sonic JSON + Asynq + PostgreSQL
- [ ] No native calls bypass framework (no `database/sql`, `net/http`, `encoding/json` direct use)
- [ ] All HTTP operations use Fiber framework
- [ ] All database operations use GORM
- [ ] All async tasks use Asynq
- [ ] Uses Go official toolchain: `go fmt`, `go vet`, `golangci-lint`
- [ ] Uses Go Modules for dependency management
**Code Quality Standards**:
- [ ] Follows Handler → Service → Store → Model architecture
- [ ] Handler layer only handles HTTP, no business logic
- [ ] Service layer contains business logic with cross-module support
- [ ] Store layer manages all data access with transaction support
- [ ] Uses dependency injection via struct fields (not constructor patterns)
- [ ] Unified error codes in `pkg/errors/`
- [ ] Unified API responses via `pkg/response/`
- [ ] All constants defined in `pkg/constants/`
- [ ] All Redis keys managed via key generation functions (no hardcoded strings)
- [ ] **No hardcoded magic numbers or strings (3+ occurrences must be constants)**
- [ ] **Defined constants are used instead of hardcoding duplicate values**
- [ ] **Code comments prefer Chinese for readability (implementation comments in Chinese)**
- [ ] **Log messages use Chinese (Info/Warn/Error/Debug logs in Chinese)**
- [ ] **Error messages support Chinese (user-facing errors have Chinese messages)**
- [ ] All exported functions/types have Go-style doc comments
- [ ] Code formatted with `gofmt`
- [ ] Follows Effective Go and Go Code Review Comments
**Documentation Standards** (Constitution Principle VII):
- [ ] Feature summary docs placed in `docs/{feature-id}/` mirroring `specs/{feature-id}/`
- [ ] Summary doc filenames use Chinese (功能总结.md, 使用指南.md, etc.)
- [ ] Summary doc content uses Chinese
- [ ] README.md updated with brief Chinese summary (2-3 sentences)
- [ ] Documentation is concise for first-time contributors
**Go Idiomatic Design**:
- [ ] Package structure is flat (max 2-3 levels), organized by feature
- [ ] Interfaces are small (1-3 methods), defined at use site
- [ ] No Java-style patterns: no I-prefix, no Impl-suffix, no getters/setters
- [ ] Error handling is explicit (return errors, no panic/recover abuse)
- [ ] Uses composition over inheritance
- [ ] Uses goroutines and channels (not thread pools)
- [ ] Uses `context.Context` for cancellation and timeouts
- [ ] Naming follows Go conventions: short receivers, consistent abbreviations (URL, ID, HTTP)
- [ ] No Hungarian notation or type prefixes
- [ ] Simple constructors (New/NewXxx), no Builder pattern unless necessary
**Testing Standards**:
- [ ] Unit tests for all core business logic (Service layer)
- [ ] Integration tests for all API endpoints
- [ ] Tests use Go standard testing framework
- [ ] Test files named `*_test.go` in same directory
- [ ] Test functions use `Test` prefix, benchmarks use `Benchmark` prefix
- [ ] Table-driven tests for multiple test cases
- [ ] Test helpers marked with `t.Helper()`
- [ ] Tests are independent (no external service dependencies)
- [ ] Target coverage: 70%+ overall, 90%+ for core business
**User Experience Consistency**:
- [ ] All APIs use unified JSON response format
- [ ] Error responses include clear error codes and bilingual messages
- [ ] RESTful design principles followed
- [ ] Unified pagination parameters (page, page_size, total)
- [ ] Time fields use ISO 8601 format (RFC3339)
- [ ] Currency amounts use integers (cents) to avoid float precision issues
**Performance Requirements**:
- [ ] API response time (P95) < 200ms, (P99) < 500ms
- [ ] Batch operations use bulk queries/inserts
- [ ] All database queries have appropriate indexes
- [ ] List queries implement pagination (default 20, max 100)
- [ ] Non-realtime operations use async tasks
- [ ] Database and Redis connection pools properly configured
- [ ] Uses goroutines/channels for concurrency (not thread pools)
- [ ] Uses `context.Context` for timeout control
- [ ] Uses `sync.Pool` for frequently allocated objects
**Access Logging Standards** (Constitution Principle VIII):
- [ ] ALL HTTP requests logged to access.log without exception
- [ ] Request parameters (query + body) logged (limited to 50KB)
- [ ] Response parameters (body) logged (limited to 50KB)
- [ ] Logging happens via centralized Logger middleware (pkg/logger/Middleware())
- [ ] No middleware bypasses access logging (including auth failures, rate limits)
- [ ] Body truncation indicates "... (truncated)" when over 50KB limit
- [ ] Access log includes all required fields: method, path, query, status, duration_ms, request_id, ip, user_agent, user_id, request_body, response_body
**Error Handling Standards** (Constitution Principle X):
- [ ] All API error responses use unified JSON format (via pkg/errors/ global ErrorHandler)
- [ ] Handler layer errors return error (not manual JSON responses)
- [ ] Business errors use pkg/errors.New() or pkg/errors.Wrap() with error codes
- [ ] All error codes defined in pkg/errors/codes.go
- [ ] All panics caught by Recover middleware and converted to 500 responses
- [ ] Error logs include complete request context (Request ID, path, method, params)
- [ ] 5xx server errors auto-sanitized (generic message to client, full error in logs)
- [ ] 4xx client errors may return specific business messages
- [ ] No panic in business code (except unrecoverable programming errors)
- [ ] No manual error response construction in Handler (c.Status().JSON())
- [ ] Error codes follow classification: 0=success, 1xxx=client (4xx), 2xxx=server (5xx)
- [ ] Recover middleware registered first in middleware chain
- [ ] Panic recovery logs complete stack trace
- [ ] Single request panic does not affect other requests
## Project Structure
### Documentation (this feature)
**设计文档specs/ 目录)**:开发前的规划和设计
```text
specs/[###-feature]/
├── plan.md # This file (/speckit.plan command output)
├── research.md # Phase 0 output (/speckit.plan command)
├── data-model.md # Phase 1 output (/speckit.plan command)
├── quickstart.md # Phase 1 output (/speckit.plan command)
├── contracts/ # Phase 1 output (/speckit.plan command)
└── tasks.md # Phase 2 output (/speckit.tasks command - NOT created by /speckit.plan)
```
**总结文档docs/ 目录)**:开发完成后的总结和使用指南(遵循 Constitution Principle VII
```text
docs/[###-feature]/
├── 功能总结.md          # 功能概述、核心实现、技术要点MUST 使用中文命名和内容)
├── 使用指南.md          # 如何使用该功能的详细说明MUST 使用中文命名和内容)
└── 架构说明.md          # 架构设计和技术决策可选MUST 使用中文命名和内容)
```
**README.md 更新**:每次完成功能后 MUST 在 README.md 添加简短描述2-3 句话,中文)
### Source Code (repository root)
<!--
ACTION REQUIRED: Replace the placeholder tree below with the concrete layout
for this feature. Delete unused options and expand the chosen structure with
real paths (e.g., apps/admin, packages/something). The delivered plan must
not include Option labels.
-->
```text
# [REMOVE IF UNUSED] Option 1: Single project (DEFAULT)
src/
├── models/
├── services/
├── cli/
└── lib/
tests/
├── contract/
├── integration/
└── unit/
# [REMOVE IF UNUSED] Option 2: Web application (when "frontend" + "backend" detected)
backend/
├── src/
│ ├── models/
│ ├── services/
│ └── api/
└── tests/
frontend/
├── src/
│ ├── components/
│ ├── pages/
│ └── services/
└── tests/
# [REMOVE IF UNUSED] Option 3: Mobile + API (when "iOS/Android" detected)
api/
└── [same as backend above]
ios/ or android/
└── [platform-specific structure: feature modules, UI flows, platform tests]
```
**Structure Decision**: [Document the selected structure and reference the real
directories captured above]
## Complexity Tracking
> **Fill ONLY if Constitution Check has violations that must be justified**
| Violation | Why Needed | Simpler Alternative Rejected Because |
|-----------|------------|-------------------------------------|
| [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
| [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |


@@ -1,189 +0,0 @@
# Feature Specification: [FEATURE NAME]
**Feature Branch**: `[###-feature-name]`
**Created**: [DATE]
**Status**: Draft
**Input**: User description: "$ARGUMENTS"
## User Scenarios & Testing *(mandatory)*
<!--
IMPORTANT: User stories should be PRIORITIZED as user journeys ordered by importance.
Each user story/journey must be INDEPENDENTLY TESTABLE - meaning if you implement just ONE of them,
you should still have a viable MVP (Minimum Viable Product) that delivers value.
Assign priorities (P1, P2, P3, etc.) to each story, where P1 is the most critical.
Think of each story as a standalone slice of functionality that can be:
- Developed independently
- Tested independently
- Deployed independently
- Demonstrated to users independently
-->
### User Story 1 - [Brief Title] (Priority: P1)
[Describe this user journey in plain language]
**Why this priority**: [Explain the value and why it has this priority level]
**Independent Test**: [Describe how this can be tested independently - e.g., "Can be fully tested by [specific action] and delivers [specific value]"]
**Acceptance Scenarios**:
1. **Given** [initial state], **When** [action], **Then** [expected outcome]
2. **Given** [initial state], **When** [action], **Then** [expected outcome]
---
### User Story 2 - [Brief Title] (Priority: P2)
[Describe this user journey in plain language]
**Why this priority**: [Explain the value and why it has this priority level]
**Independent Test**: [Describe how this can be tested independently]
**Acceptance Scenarios**:
1. **Given** [initial state], **When** [action], **Then** [expected outcome]
---
### User Story 3 - [Brief Title] (Priority: P3)
[Describe this user journey in plain language]
**Why this priority**: [Explain the value and why it has this priority level]
**Independent Test**: [Describe how this can be tested independently]
**Acceptance Scenarios**:
1. **Given** [initial state], **When** [action], **Then** [expected outcome]
---
[Add more user stories as needed, each with an assigned priority]
### Edge Cases
<!--
ACTION REQUIRED: The content in this section represents placeholders.
Fill them out with the right edge cases.
-->
- What happens when [boundary condition]?
- How does system handle [error scenario]?
## Requirements *(mandatory)*
<!--
ACTION REQUIRED: The content in this section represents placeholders.
Fill them out with the right functional requirements.
-->
### Functional Requirements
- **FR-001**: System MUST [specific capability, e.g., "allow users to create accounts"]
- **FR-002**: System MUST [specific capability, e.g., "validate email addresses"]
- **FR-003**: Users MUST be able to [key interaction, e.g., "reset their password"]
- **FR-004**: System MUST [data requirement, e.g., "persist user preferences"]
- **FR-005**: System MUST [behavior, e.g., "log all security events"]
*Example of marking unclear requirements:*
- **FR-006**: System MUST authenticate users via [NEEDS CLARIFICATION: auth method not specified - email/password, SSO, OAuth?]
- **FR-007**: System MUST retain user data for [NEEDS CLARIFICATION: retention period not specified]
### Technical Requirements (Constitution-Driven)
**Tech Stack Compliance**:
- [ ] All HTTP operations use Fiber framework (no `net/http` shortcuts)
- [ ] All database operations use GORM (no `database/sql` direct calls)
- [ ] All JSON operations use sonic (no `encoding/json` usage)
- [ ] All async tasks use Asynq
- [ ] All logging uses Zap + Lumberjack.v2
- [ ] All configuration uses Viper
- [ ] Uses Go official toolchain: `go fmt`, `go vet`, `golangci-lint`
**Architecture Requirements**:
- [ ] Implementation follows Handler → Service → Store → Model layers
- [ ] Dependencies injected via struct fields (not constructor patterns)
- [ ] Unified error codes defined in `pkg/errors/`
- [ ] Unified API responses via `pkg/response/`
- [ ] All constants defined in `pkg/constants/` (no magic numbers/strings)
- [ ] **No hardcoded values: 3+ identical literals must become constants**
- [ ] **Defined constants must be used (no duplicate hardcoding)**
- [ ] **Code comments prefer Chinese (implementation comments in Chinese)**
- [ ] **Log messages use Chinese (logger.Info/Warn/Error/Debug in Chinese)**
- [ ] **Error messages support Chinese (user-facing errors have Chinese text)**
- [ ] All Redis keys managed via `pkg/constants/` key generation functions
- [ ] Package structure is flat, organized by feature (not by layer)
**Go Idiomatic Design Requirements**:
- [ ] No Java-style patterns: no getter/setter methods, no I-prefix interfaces, no Impl-suffix
- [ ] Interfaces are small (1-3 methods), defined where used
- [ ] Error handling is explicit (return errors, not panic)
- [ ] Uses composition (struct embedding) not inheritance
- [ ] Uses goroutines and channels for concurrency
- [ ] Naming follows Go conventions: `UserID` not `userId`, `HTTPServer` not `HttpServer`
- [ ] No Hungarian notation or type prefixes
- [ ] Simple and direct code structure
**API Design Requirements**:
- [ ] All APIs follow RESTful principles
- [ ] All responses use unified JSON format with code/message/data/timestamp
- [ ] All error messages include error codes and bilingual descriptions
- [ ] All pagination uses standard parameters (page, page_size, total)
- [ ] All time fields use ISO 8601 format (RFC3339)
- [ ] All currency amounts use integers (cents)
**Performance Requirements**:
- [ ] API response time (P95) < 200ms
- [ ] Database queries < 50ms
- [ ] Batch operations use bulk queries
- [ ] List queries implement pagination (default 20, max 100)
- [ ] Non-realtime operations delegated to async tasks
- [ ] Uses `context.Context` for timeouts and cancellation
**Error Handling Requirements** (Constitution Principle X):
- [ ] All API errors use unified JSON format (via `pkg/errors/` global ErrorHandler)
- [ ] Handler layer returns errors (no manual `c.Status().JSON()` for errors)
- [ ] Business errors use `pkg/errors.New()` or `pkg/errors.Wrap()` with error codes
- [ ] All error codes defined in `pkg/errors/codes.go`
- [ ] All panics caught by Recover middleware, converted to 500 responses
- [ ] Error logs include complete request context (Request ID, path, method, params)
- [ ] 5xx server errors auto-sanitized (generic message to client, full error in logs)
- [ ] 4xx client errors may return specific business messages
- [ ] No panic in business code (except unrecoverable programming errors)
- [ ] Error codes follow classification: 0=success, 1xxx=client (4xx), 2xxx=server (5xx)
- [ ] Recover middleware registered first in middleware chain
- [ ] Panic recovery logs complete stack trace
- [ ] Single request panic does not affect other requests
**Testing Requirements**:
- [ ] Unit tests for all Service layer business logic
- [ ] Integration tests for all API endpoints
- [ ] Tests use Go standard testing framework with `*_test.go` files
- [ ] Table-driven tests for multiple test cases
- [ ] Tests are independent and use mocks/testcontainers
- [ ] Target coverage: 70%+ overall, 90%+ for core business logic
### Key Entities *(include if feature involves data)*
- **[Entity 1]**: [What it represents, key attributes without implementation]
- **[Entity 2]**: [What it represents, relationships to other entities]
## Success Criteria *(mandatory)*
<!--
ACTION REQUIRED: Define measurable success criteria.
These must be technology-agnostic and measurable.
-->
### Measurable Outcomes
- **SC-001**: [Measurable metric, e.g., "Users can complete account creation in under 2 minutes"]
- **SC-002**: [Measurable metric, e.g., "System handles 1000 concurrent users without degradation"]
- **SC-003**: [User satisfaction metric, e.g., "90% of users successfully complete primary task on first attempt"]
- **SC-004**: [Business metric, e.g., "Reduce support tickets related to [X] by 50%"]


@@ -1,314 +0,0 @@
---
description: "Task list template for feature implementation"
---
# Tasks: [FEATURE NAME]
**Input**: Design documents from `/specs/[###-feature-name]/`
**Prerequisites**: plan.md (required), spec.md (required for user stories), research.md, data-model.md, contracts/
**Tests**: The examples below include test tasks. Tests are OPTIONAL - only include them if explicitly requested in the feature specification.
**Organization**: Tasks are grouped by user story to enable independent implementation and testing of each story.
## Format: `[ID] [P?] [Story] Description`
- **[P]**: Can run in parallel (different files, no dependencies)
- **[Story]**: Which user story this task belongs to (e.g., US1, US2, US3)
- Include exact file paths in descriptions
## Path Conventions
- **Single project**: `src/`, `tests/` at repository root
- **Web app**: `backend/src/`, `frontend/src/`
- **Mobile**: `api/src/`, `ios/src/` or `android/src/`
- Paths shown below assume single project - adjust based on plan.md structure
<!--
============================================================================
IMPORTANT: The tasks below are SAMPLE TASKS for illustration purposes only.
The /speckit.tasks command MUST replace these with actual tasks based on:
- User stories from spec.md (with their priorities P1, P2, P3...)
- Feature requirements from plan.md
- Entities from data-model.md
- Endpoints from contracts/
Tasks MUST be organized by user story so each story can be:
- Implemented independently
- Tested independently
- Delivered as an MVP increment
DO NOT keep these sample tasks in the generated tasks.md file.
============================================================================
-->
## Phase 1: Setup (Shared Infrastructure)
**Purpose**: Project initialization and basic structure
- [ ] T001 Create project structure per implementation plan (internal/, pkg/, cmd/)
- [ ] T002 Initialize Go project with Fiber + GORM + Viper + Zap + Asynq dependencies
- [ ] T003 [P] Configure linting (golangci-lint) and formatting tools (gofmt/goimports)
- [ ] T004 [P] Setup unified error codes in pkg/errors/
- [ ] T005 [P] Setup unified API response in pkg/response/
- [ ] T006 [P] Setup constants management in pkg/constants/ (business constants and Redis key functions)
---
## Phase 2: Foundational (Blocking Prerequisites)
**Purpose**: Core infrastructure that MUST be complete before ANY user story can be implemented
**⚠️ CRITICAL**: No user story work can begin until this phase is complete
Foundational tasks for 君鸿卡管系统 tech stack:
- [ ] T007 Setup PostgreSQL database connection via GORM with connection pool (MaxOpenConns=25, MaxIdleConns=10)
- [ ] T008 Setup Redis connection with connection pool (PoolSize=10, MinIdleConns=5)
- [ ] T009 [P] Setup database migrations framework (golang-migrate or GORM AutoMigrate)
- [ ] T010 [P] Implement Fiber routing structure in internal/router/
- [ ] T011 [P] Implement Fiber middleware (authentication, logging, recovery, validation) in internal/handler/middleware/
- [ ] T012 [P] Setup Zap logger with Lumberjack rotation in pkg/logger/
- [ ] T013 [P] Setup Viper configuration management in pkg/config/
- [ ] T014 [P] Setup Asynq task queue client and server in pkg/queue/
- [ ] T015 [P] Setup Validator integration in pkg/validator/
- [ ] T016 Create base Store structure with transaction support in internal/store/
- [ ] T017 Create base Service structure with dependency injection in internal/service/
- [ ] T018 Setup sonic JSON as default serializer for Fiber
**Checkpoint**: Foundation ready - user story implementation can now begin in parallel
---
## Phase 3: User Story 1 - [Title] (Priority: P1) 🎯 MVP
**Goal**: [Brief description of what this story delivers]
**Independent Test**: [How to verify this story works on its own]
### Tests for User Story 1 (REQUIRED per Constitution - Testing Standards) ⚠️
> **NOTE: Write these tests FIRST, ensure they FAIL before implementation**
- [ ] T020 [P] [US1] Unit tests for Service layer business logic in internal/service/[service]_test.go
- [ ] T021 [P] [US1] Integration tests for API endpoints in internal/handler/[handler]_test.go
- [ ] T022 [P] [US1] Transaction rollback tests for Store layer in internal/store/[store]_test.go
### Implementation for User Story 1
- [ ] T023 [P] [US1] Create [Entity1] model with GORM tags in internal/model/[entity1].go
- [ ] T024 [P] [US1] Create [Entity2] model with GORM tags in internal/model/[entity2].go
- [ ] T025 [P] [US1] Create DTOs and request/response structs in internal/model/dto/[feature].go
- [ ] T026 [US1] Implement Store methods with GORM in internal/store/postgres/[store].go (depends on T023, T024)
- [ ] T027 [US1] Implement Service business logic in internal/service/[service].go (depends on T026)
- [ ] T028 [US1] Implement Fiber Handler in internal/handler/[handler].go (depends on T027)
- [ ] T029 [US1] Register routes in internal/router/router.go
- [ ] T030 [US1] Add validation rules using Validator in Handler
- [ ] T031 [US1] Add unified error handling using pkg/errors/ and pkg/response/
- [ ] T032 [US1] Add Zap logging with structured fields
- [ ] T033 [US1] Add database indexes for queries (if needed)
- [ ] T034 [US1] Create Asynq tasks for async operations (if needed) in internal/task/[task].go
**Checkpoint**: At this point, User Story 1 should be fully functional and testable independently
---
## Phase 4: User Story 2 - [Title] (Priority: P2)
**Goal**: [Brief description of what this story delivers]
**Independent Test**: [How to verify this story works on its own]
### Tests for User Story 2 (REQUIRED per Constitution - Testing Standards) ⚠️
> **NOTE: Write these tests FIRST, ensure they FAIL before implementation**
- [ ] T035 [P] [US2] Unit tests for Service layer business logic in internal/service/[service]_test.go
- [ ] T036 [P] [US2] Integration tests for API endpoints in internal/handler/[handler]_test.go
### Implementation for User Story 2
- [ ] T037 [P] [US2] Create [Entity] model with GORM tags in internal/model/[entity].go
- [ ] T038 [US2] Implement Service business logic in internal/service/[service].go (depends on T037)
- [ ] T039 [US2] Implement Fiber Handler and register routes in internal/handler/[handler].go (depends on T038)
- [ ] T040 [US2] Integrate with User Story 1 components (if needed)
**Checkpoint**: At this point, User Stories 1 AND 2 should both work independently
---
## Phase 5: User Story 3 - [Title] (Priority: P3)
**Goal**: [Brief description of what this story delivers]
**Independent Test**: [How to verify this story works on its own]
### Tests for User Story 3 (REQUIRED per Constitution - Testing Standards) ⚠️
> **NOTE: Write these tests FIRST, ensure they FAIL before implementation**
- [ ] T041 [P] [US3] Unit tests for Service layer business logic in internal/service/[service]_test.go
- [ ] T042 [P] [US3] Integration tests for API endpoints in internal/handler/[handler]_test.go
### Implementation for User Story 3
- [ ] T043 [P] [US3] Create [Entity] model with GORM tags in internal/model/[entity].go
- [ ] T044 [US3] Implement Service business logic in internal/service/[service].go (depends on T043)
- [ ] T045 [US3] Implement Fiber Handler and register routes in internal/handler/[handler].go (depends on T044)
**Checkpoint**: All user stories should now be independently functional
---
[Add more user story phases as needed, following the same pattern]
---
## Phase N: Polish & Quality Gates
**Purpose**: Improvements that affect multiple user stories and final quality checks
### Documentation (Constitution Principle VII - REQUIRED)
- [ ] TXXX [P] Create feature summary doc in docs/{feature-id}/功能总结.md (Chinese filename and content)
- [ ] TXXX [P] Create usage guide in docs/{feature-id}/使用指南.md (Chinese filename and content)
- [ ] TXXX [P] Create architecture doc in docs/{feature-id}/架构说明.md (optional, Chinese filename and content)
- [ ] TXXX Update README.md with brief feature description (2-3 sentences in Chinese)
### Code Quality
- [ ] TXXX Code cleanup and refactoring
- [ ] TXXX Performance optimization and load testing (verify P95 < 200ms, P99 < 500ms)
- [ ] TXXX [P] Additional unit tests to reach 70%+ coverage (90%+ for core business)
- [ ] TXXX Security audit (SQL injection, XSS, command injection prevention)
- [ ] TXXX Run quickstart.md validation
- [ ] TXXX Quality Gate: Run `go test ./...` (all tests pass)
- [ ] TXXX Quality Gate: Run `gofmt -l .` (no formatting issues)
- [ ] TXXX Quality Gate: Run `go vet ./...` (no issues)
- [ ] TXXX Quality Gate: Run `golangci-lint run` (no issues)
- [ ] TXXX Quality Gate: Verify test coverage with `go test -cover ./...`
- [ ] TXXX Quality Gate: Check no TODO/FIXME remains (or documented in issues)
- [ ] TXXX Quality Gate: Verify database migrations work correctly
- [ ] TXXX Quality Gate: Verify API documentation updated (if API changes)
- [ ] TXXX Quality Gate: Verify no hardcoded constants or Redis keys (all use pkg/constants/)
- [ ] TXXX Quality Gate: Verify no duplicate hardcoded values (3+ identical literals must be constants)
- [ ] TXXX Quality Gate: Verify defined constants are used (no duplicate hardcoding of constant values)
- [ ] TXXX Quality Gate: Verify code comments use Chinese (implementation comments in Chinese)
- [ ] TXXX Quality Gate: Verify log messages use Chinese (logger Info/Warn/Error/Debug in Chinese)
- [ ] TXXX Quality Gate: Verify error messages support Chinese (user-facing errors have Chinese text)
- [ ] TXXX Quality Gate: Verify no Java-style anti-patterns (no getter/setter, no I-prefix, no Impl-suffix)
- [ ] TXXX Quality Gate: Verify Go naming conventions (UserID not userId, HTTPServer not HttpServer)
- [ ] TXXX Quality Gate: Verify error handling is explicit (no panic/recover abuse)
- [ ] TXXX Quality Gate: Verify uses goroutines/channels (not thread pool patterns)
- [ ] TXXX Quality Gate: Verify feature summary docs created in docs/{feature-id}/ with Chinese filenames
- [ ] TXXX Quality Gate: Verify summary doc content uses Chinese
- [ ] TXXX Quality Gate: Verify README.md updated with brief feature description (2-3 sentences)
- [ ] TXXX Quality Gate: Verify ALL HTTP requests logged to access.log (no exceptions)
- [ ] TXXX Quality Gate: Verify access log includes request parameters (query + body, limited to 50KB)
- [ ] TXXX Quality Gate: Verify access log includes response parameters (body, limited to 50KB)
- [ ] TXXX Quality Gate: Verify logging via centralized Logger middleware (pkg/logger/Middleware())
- [ ] TXXX Quality Gate: Verify no middleware bypasses logging (test auth failures, rate limits, etc.)
- [ ] TXXX Quality Gate: Verify access log has all required fields (method, path, query, status, duration_ms, request_id, ip, user_agent, user_id, request_body, response_body)
- [ ] TXXX Quality Gate: Verify all API errors use unified JSON format (pkg/errors/ ErrorHandler)
- [ ] TXXX Quality Gate: Verify Handler layer returns errors (no manual c.Status().JSON() for errors)
- [ ] TXXX Quality Gate: Verify business errors use pkg/errors.New() or pkg/errors.Wrap()
- [ ] TXXX Quality Gate: Verify all error codes defined in pkg/errors/codes.go
- [ ] TXXX Quality Gate: Verify Recover middleware catches all panics
- [ ] TXXX Quality Gate: Verify error logs include request context (Request ID, path, method)
- [ ] TXXX Quality Gate: Verify 5xx errors auto-sanitized (no sensitive info exposed)
- [ ] TXXX Quality Gate: Verify no panic in business code (search for panic() calls)
- [ ] TXXX Quality Gate: Verify error codes follow classification (0=success, 1xxx=4xx, 2xxx=5xx)
- [ ] TXXX Quality Gate: Verify Recover middleware registered first in chain
- [ ] TXXX Quality Gate: Test panic recovery logs complete stack trace
- [ ] TXXX Quality Gate: Test single request panic doesn't affect other requests
---
## Dependencies & Execution Order
### Phase Dependencies
- **Setup (Phase 1)**: No dependencies - can start immediately
- **Foundational (Phase 2)**: Depends on Setup completion - BLOCKS all user stories
- **User Stories (Phase 3+)**: All depend on Foundational phase completion
- User stories can then proceed in parallel (if staffed)
- Or sequentially in priority order (P1 → P2 → P3)
- **Polish (Final Phase)**: Depends on all desired user stories being complete
### User Story Dependencies
- **User Story 1 (P1)**: Can start after Foundational (Phase 2) - No dependencies on other stories
- **User Story 2 (P2)**: Can start after Foundational (Phase 2) - May integrate with US1 but should be independently testable
- **User Story 3 (P3)**: Can start after Foundational (Phase 2) - May integrate with US1/US2 but should be independently testable
### Within Each User Story
- Tests (if included) MUST be written and FAIL before implementation
- Models before services
- Services before endpoints
- Core implementation before integration
- Story complete before moving to next priority
### Parallel Opportunities
- All Setup tasks marked [P] can run in parallel
- All Foundational tasks marked [P] can run in parallel (within Phase 2)
- Once Foundational phase completes, all user stories can start in parallel (if team capacity allows)
- All tests for a user story marked [P] can run in parallel
- Models within a story marked [P] can run in parallel
- Different user stories can be worked on in parallel by different team members
---
## Parallel Example: User Story 1
```bash
# Launch all tests for User Story 1 together:
Task: "Unit tests for Service layer business logic in internal/service/[service]_test.go"
Task: "Integration tests for API endpoints in internal/handler/[handler]_test.go"
# Launch all models for User Story 1 together:
Task: "Create [Entity1] model with GORM tags in internal/model/[entity1].go"
Task: "Create [Entity2] model with GORM tags in internal/model/[entity2].go"
```
---
## Implementation Strategy
### MVP First (User Story 1 Only)
1. Complete Phase 1: Setup
2. Complete Phase 2: Foundational (CRITICAL - blocks all stories)
3. Complete Phase 3: User Story 1
4. **STOP and VALIDATE**: Test User Story 1 independently
5. Deploy/demo if ready
### Incremental Delivery
1. Complete Setup + Foundational → Foundation ready
2. Add User Story 1 → Test independently → Deploy/Demo (MVP!)
3. Add User Story 2 → Test independently → Deploy/Demo
4. Add User Story 3 → Test independently → Deploy/Demo
5. Each story adds value without breaking previous stories
### Parallel Team Strategy
With multiple developers:
1. Team completes Setup + Foundational together
2. Once Foundational is done:
- Developer A: User Story 1
- Developer B: User Story 2
- Developer C: User Story 3
3. Stories complete and integrate independently
---
## Notes
- [P] tasks = different files, no dependencies
- [Story] label maps task to specific user story for traceability
- Each user story should be independently completable and testable
- Verify tests fail before implementing
- Commit after each task or logical group
- Stop at any checkpoint to validate story independently
- Avoid: vague tasks, same file conflicts, cross-story dependencies that break independence


@@ -1,2 +1,3 @@
{
  "kiroAgent.configureMCP": "Enabled"
}


@@ -411,5 +411,5 @@ docs/001-fiber-middleware-integration/ # 功能总结文档(完成阶段)
<!-- MANUAL ADDITIONS START -->
1.永远用中文交互,注释以及文档也要使用中文(必须)
<!-- MANUAL ADDITIONS END -->

Makefile Normal file

@@ -0,0 +1,38 @@
.PHONY: all run run-worker build test clean docs

# Go parameters
GOCMD=go
GOBUILD=$(GOCMD) build
GOCLEAN=$(GOCMD) clean
GOTEST=$(GOCMD) test
GOGET=$(GOCMD) get
BINARY_NAME=bin/junhong-cmp
MAIN_PATH=cmd/api/main.go
WORKER_PATH=cmd/worker/main.go
WORKER_BINARY=bin/junhong-worker

all: test build

build:
	$(GOBUILD) -o $(BINARY_NAME) -v $(MAIN_PATH)
	$(GOBUILD) -o $(WORKER_BINARY) -v $(WORKER_PATH)

test:
	$(GOTEST) -v ./...

clean:
	$(GOCLEAN)
	rm -f $(BINARY_NAME)
	rm -f $(WORKER_BINARY)

run:
	$(GOBUILD) -o $(BINARY_NAME) -v $(MAIN_PATH)
	./$(BINARY_NAME)

run-worker:
	$(GOBUILD) -o $(WORKER_BINARY) -v $(WORKER_PATH)
	./$(WORKER_BINARY)

# Generate OpenAPI documentation
docs:
	$(GOCMD) run cmd/gendocs/main.go

cmd/api/docs.go Normal file

@@ -0,0 +1,46 @@
package main

import (
	"github.com/gofiber/fiber/v2"
	"go.uber.org/zap"

	"github.com/break/junhong_cmp_fiber/internal/bootstrap"
	"github.com/break/junhong_cmp_fiber/internal/handler/admin"
	"github.com/break/junhong_cmp_fiber/internal/routes"
	"github.com/break/junhong_cmp_fiber/pkg/openapi"
)

// generateOpenAPIDocs 生成 OpenAPI 文档
// outputPath: 文档输出路径
// logger: 日志记录器
// 生成失败时记录错误但不影响程序继续运行
func generateOpenAPIDocs(outputPath string, logger *zap.Logger) {
	// 1. 创建生成器
	adminDoc := openapi.NewGenerator("Admin API", "1.0")

	// 2. 创建临时 Fiber App 用于路由注册
	app := fiber.New()

	// 3. 创建 Handler使用 nil 依赖,因为只需要路由结构)
	accHandler := admin.NewAccountHandler(nil)
	roleHandler := admin.NewRoleHandler(nil)
	permHandler := admin.NewPermissionHandler(nil)
	handlers := &bootstrap.Handlers{
		Account:    accHandler,
		Role:       roleHandler,
		Permission: permHandler,
	}

	// 4. 注册路由到文档生成器
	adminGroup := app.Group("/api/admin")
	routes.RegisterAdminRoutes(adminGroup, handlers, adminDoc, "/api/admin")

	// 5. 保存规范到指定路径
	if err := adminDoc.Save(outputPath); err != nil {
		logger.Error("生成 OpenAPI 文档失败", zap.String("path", outputPath), zap.Error(err))
		return
	}
	logger.Info("OpenAPI 文档生成成功", zap.String("path", outputPath))
}


@@ -71,7 +71,10 @@ func main() {
	// 10. 注册路由
	initRoutes(app, cfg, handlers, queueClient, db, redisClient, appLogger)

	// 11. 生成 OpenAPI 文档
	generateOpenAPIDocs("./openapi.yaml", appLogger)

	// 12. 启动服务器
	startServer(app, cfg, appLogger, cancelWatch)
}


@@ -13,14 +13,24 @@ import (
)

func main() {
	outputFile := "./docs/admin-openapi.yaml"
	if err := generateAdminDocs(outputFile); err != nil {
		log.Fatalf("生成 OpenAPI 文档失败: %v", err)
	}
	absPath, _ := filepath.Abs(outputFile)
	log.Printf("成功在以下位置生成 OpenAPI 文档: %s", absPath)
}

// generateAdminDocs 生成 Admin API 的 OpenAPI 文档
func generateAdminDocs(outputPath string) error {
	// 1. 创建生成器
	adminDoc := openapi.NewGenerator("Admin API", "1.0")

	// 2. 创建临时 Fiber App 用于路由注册
	app := fiber.New()

	// 3. 创建 Handler(使用 nil 依赖,因为只需要路由结构)
	accHandler := admin.NewAccountHandler(nil)
	roleHandler := admin.NewRoleHandler(nil)
	permHandler := admin.NewPermissionHandler(nil)
	handlers := &bootstrap.Handlers{
		Account:    accHandler,
		Role:       roleHandler,
		Permission: permHandler,
	}

	// 4. 注册路由到文档生成器
	adminGroup := app.Group("/api/admin")
	routes.RegisterAdminRoutes(adminGroup, handlers, adminDoc, "/api/admin")

	// 5. 保存规范到指定路径
	if err := adminDoc.Save(outputPath); err != nil {
		return err
	}
	return nil
}


@@ -295,4 +295,5 @@ A 用户买了一个A产品,那么现在给代理的成本价60 售价 90 (一
1. 一次性佣金满足 激活(实名) + 达到累计/首次充值金额 = 产生佣金(冻结) (可能是[7]天后 状态变成解冻中 同步产生一条佣金解冻审批等待审批)
2. 长期佣金满足 激活(实名) + 达到累计/首次充值金额 + 在网状态(必须是正常的)(能不能拿到在网状态 存疑) + 三无(能不能拿到 存疑) = 产生佣金(冻结 必须通过excel导入, 状态 变成 解冻中 同步产生对应的佣金解冻审批 等待审批)
3. 组合佣金 (一次性佣金+长期佣金)(1. 连续在网多少个月后开始长期分佣)
4. 阶梯分佣满足 激活(实名 + 达到累计/首次充值金额 + 在网状态(必须是正常的)(能不能拿到在网状态 存疑) ) = 激活

docs/接入openapi.md Normal file

@@ -0,0 +1,266 @@
# 架构升级与 OpenAPI 文档接入详细实施规范
## 1. 架构调整设计 (Architecture Upgrade)
### 1.1 目录结构变更 (Directory Structure)
我们将把扁平的 Handler 层改造为按业务域Domain物理隔离的结构。
**变更前**:
```text
internal/handler/
├── account.go
├── role.go
└── ...
```
**变更后**:
```text
internal/handler/
├── admin/        # 后台管理/PC代理端 (Admin Domain)
│   ├── account.go
│   ├── role.go
│   └── ...       # 现有业务逻辑全部移到这里
├── agent/        # 手机/H5代理端 (Agent Domain) - 预留
│   └── (空)
├── app/          # C端用户 (App Domain) - 预留
│   └── (空)
└── health.go     # 全局健康检查 (保持在根目录)
```
### 1.2 路由注册层改造 (Routing Layer)
路由层将不再是一个巨大的 `routes.go`,而是拆分为“总线 + 分支”结构。
**文件: `internal/routes/routes.go` (总入口)**
```go
package routes

import (
	"github.com/gofiber/fiber/v2"

	"junhong_cmp_fiber/internal/bootstrap"
	"junhong_cmp_fiber/internal/handler" // 引用 health
	"junhong_cmp_fiber/internal/middleware"
	// 引入各个域的路由包 (因循环引用问题,建议直接在此文件定义 Register 函数,或拆分包)
	// 最佳实践:在此文件保留 SetupRoutes调用同包下的 RegisterAdminRoutes 等)
)

func SetupRoutes(app *fiber.App, deps *bootstrap.Dependencies) {
	// 1. 全局路由
	app.Get("/health", handler.HealthCheck)

	// 2. 注册各个域的路由组
	// Admin 域 (挂载在 /api/admin)
	adminGroup := app.Group("/api/admin")
	// 可以在这里挂载 Admin 专属中间件 (Token验证, RBAC等)
	RegisterAdminRoutes(adminGroup, deps)

	// App 域 (挂载在 /api/app)
	appGroup := app.Group("/api/app")
	RegisterAppRoutes(appGroup, deps)

	// Agent 域 (挂载在 /api/agent)
	agentGroup := app.Group("/api/agent")
	RegisterAgentRoutes(agentGroup, deps)
}
```
**文件: `internal/routes/admin.go` (Admin 域详情)**
```go
package routes

import (
	"github.com/gofiber/fiber/v2"

	"junhong_cmp_fiber/internal/bootstrap"
	"junhong_cmp_fiber/internal/handler/admin" // 引用新的 handler 包
)

func RegisterAdminRoutes(router fiber.Router, deps *bootstrap.Dependencies) {
	// 账号管理
	account := router.Group("/accounts")
	account.Post("/", admin.CreateAccount(deps.AccountService))
	account.Get("/", admin.ListAccounts(deps.AccountService))
	// ... 其他原有的路由逻辑,全部迁移到这里
}
```
---
## 2. OpenAPI 接入设计 (OpenAPI Integration)
我们将引入 `swaggest/openapi-go`,通过“影子路由”技术实现文档自动化。
### 2.1 基础设施: 文档生成器 (`pkg/openapi/generator.go`)
这是一个通用的工具类,用于封装 `Reflector`
```go
package openapi

import (
	"github.com/swaggest/openapi-go/openapi3"
)

type Generator struct {
	Reflector *openapi3.Reflector
}

func NewGenerator(title, version string) *Generator {
	reflector := openapi3.Reflector{}
	reflector.Spec = &openapi3.Spec{
		Openapi: "3.0.3",
		Info: openapi3.Info{
			Title:   title,
			Version: version,
		},
	}
	return &Generator{Reflector: &reflector}
}

// 核心方法:向文档中添加一个操作
func (g *Generator) AddOperation(method, path, summary string, input interface{}, output interface{}, tags ...string) {
	op := openapi3.Operation{
		Summary: &summary, // 该库中 Operation.Summary 为 *string
		Tags:    tags,
	}
	// ... 反射 input/output 并添加到 Spec 中 ...
	// ... 错误处理 ...
	g.Reflector.Spec.AddOperation(method, path, op)
}

// 导出 YAML
func (g *Generator) Save(filepath string) error { ... }
```
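上面 `AddOperation` 中的省略号部分,可以参考下面的补全草图(基于 `swaggest/openapi-go` 提供的 `SetRequest`/`SetJSONResponse` 反射方法,需额外导入 `net/http`;错误处理在此简化为忽略,仅作示意,并非最终实现):

```go
// AddOperation 的示意补全:反射 input/output 并写入 Spec错误被有意忽略实际实现应返回或记录
func (g *Generator) AddOperation(method, path, summary string, input, output interface{}, tags ...string) {
	op := openapi3.Operation{
		Summary: &summary,
		Tags:    tags,
	}
	if input != nil {
		_ = g.Reflector.SetRequest(&op, input, method) // 反射请求结构体query/path/body
	}
	if output != nil {
		_ = g.Reflector.SetJSONResponse(&op, output, http.StatusOK) // 反射 200 响应结构体
	}
	_ = g.Reflector.Spec.AddOperation(method, path, op)
}
```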
### 2.2 核心机制: 影子注册器 (`internal/routes/registry.go`)
这是一个 Helper 函数,连接 Fiber 路由和 OpenAPI 生成器。
```go
package routes

import (
	"github.com/gofiber/fiber/v2"

	"junhong_cmp_fiber/pkg/openapi"
)

// RouteSpec 定义接口文档元数据
type RouteSpec struct {
	Summary string
	Input   interface{} // 请求参数结构体 (Query/Path/Body)
	Output  interface{} // 响应参数结构体
	Tags    []string
	Auth    bool // 是否需要认证图标
}

// Register 封装后的注册函数
// router: Fiber 路由组
// doc: 文档生成器
// method, path: HTTP 方法和路径
// handler: Fiber Handler
// spec: 文档元数据
func Register(router fiber.Router, doc *openapi.Generator, method, path string, handler fiber.Handler, spec RouteSpec) {
	// 1. 注册实际的 Fiber 路由
	router.Add(method, path, handler)

	// 2. 注册文档 (如果 doc 不为空 - 也就是在生成文档模式下)
	if doc != nil {
		doc.AddOperation(method, path, spec.Summary, spec.Input, spec.Output, spec.Tags...)
	}
}
```
### 2.3 业务代码改造示例
**Step 1: 改造路由文件 (`internal/routes/admin.go`)**
```go
// 引入文档生成器registry.go 与本文件同属 routes 包,可直接调用 Register
func RegisterAdminRoutes(router fiber.Router, deps *bootstrap.Dependencies, doc *openapi.Generator) {
	// 使用 Register 替代 router.Post
	Register(router, doc, "POST", "/accounts",
		admin.CreateAccount(deps.AccountService),
		RouteSpec{
			Summary: "创建管理员账号",
			Tags:    []string{"Account"},
			Input:   new(model.CreateAccountReq), // 必须是结构体指针
			Output:  new(model.AccountResp),      // 必须是结构体指针
		},
	)
}
```
**Step 2: 规范化 Model (`internal/model/account_dto.go`)**
必须确保 Input/Output 结构体有正确的 Tag。
```go
type CreateAccountReq struct {
	// Body 参数
	Username string `json:"username" required:"true" minLength:"4" description:"用户名"`
	Password string `json:"password" required:"true" description:"初始密码"`
	RoleID   uint   `json:"role_id" description:"角色ID"`
}

type AccountResp struct {
	ID       uint   `json:"id"`
	Username string `json:"username"`
	// ...
}
```
### 2.4 文档生成入口 (`cmd/gendocs/main.go`)
这是一个独立的 main 函数,用于生成文档文件。
```go
package main

import (
	"github.com/gofiber/fiber/v2"

	"junhong_cmp_fiber/internal/routes"
	"junhong_cmp_fiber/pkg/openapi"
)

func main() {
	// 1. 创建生成器
	adminDoc := openapi.NewGenerator("Admin API", "1.0")

	// 2. 模拟 Fiber App (不需要 Start)
	app := fiber.New()

	// 3. 调用注册函数,传入 doc
	// 注意:这里 deps 传 nil 即可,因为我们只跑路由注册逻辑,不跑实际 Handler
	routes.RegisterAdminRoutes(app, nil, adminDoc)

	// 4. 保存文件
	adminDoc.Save("./docs/admin-openapi.yaml")
}
```
---
## 3. 详细实施步骤 (Execution Plan)
### 第一阶段:路由与目录重构 (无文档)
1. **创建目录**: `internal/handler/{admin,agent,app}`
2. **移动文件**: 将 `account.go`, `role.go` 等移入 `internal/handler/admin`
3. **修改包名**: 将移动后的文件 `package handler` 改为 `package admin`
4. **修复引用**: 使用 IDE 或 grep 查找所有引用了 `internal/handler` 的地方(主要是 `routes``bootstrap`),改为引用 `internal/handler/admin`
5. **重构路由**:
*`internal/routes` 下新建 `admin.go`,把 `routes.go` 里关于 admin 的代码剪切过去,封装成 `RegisterAdminRoutes` 函数。
*`routes.go` 中调用 `RegisterAdminRoutes`,并挂载到 `/api/admin`(注意:**路径变更**,需通知前端或暂时保持原路径)。*建议先保持原路径 `/api/v1` 以减少破坏性,等文档上齐了一起改。或者直接痛快点改成 `/api/admin`。你选择了"路由分离",我就按 `/api/admin` 改。*
### 第二阶段:文档基础设施
1. **添加依赖**: `swaggest/openapi-go`
2. **编写工具**: 实现 `pkg/openapi/generator.go`
3. **编写注册器**: 实现 `internal/routes/registry.go`
### 第三阶段:文档接入 (以 Account 模块为例)
1. **DTO 检查**: 检查 `internal/model/account_dto.go`,确保字段 Tag 完善。
2. **路由改造**: 修改 `internal/routes/admin.go`,引入 `doc` 参数,用影子注册器的 `Register` 替换原生路由。
3. **生成测试**: 编写 `cmd/gendocs`,运行生成 YAML验证内容是否正确。
### 第四阶段:全面铺开
1. 对所有模块重复第三阶段的工作。
2. 在 Makefile 中添加 `make docs`


@@ -160,6 +160,8 @@ New request?
2. **Write proposal.md:**
```markdown
# Change: [Brief description of change]
## Why
[1-2 sentences on problem/opportunity]
@@ -452,375 +454,3 @@ openspec archive <change-id> [--yes|-y] # Mark complete (add --yes for automati
```
Remember: Specs are truth. Changes are proposals. Keep them in sync.
---
# Project-Specific Development Guidelines
以下是本项目的开发规范,所有 AI 助手在创建提案和实现代码时必须遵守。
## 语言要求
**必须遵守:**
- 永远用中文交互
- 注释必须使用中文
- 文档必须使用中文
- 日志消息必须使用中文
- 用户可见的错误消息必须使用中文
- 变量名、函数名、类型名必须使用英文(遵循 Go 命名规范)
- Go 文档注释doc comments for exported APIs可以使用英文以保持生态兼容性但中文注释更佳
## 核心开发原则
### 技术栈遵守
**必须遵守 (MUST):**
- 开发时严格遵守项目定义的技术栈Fiber + GORM + Viper + Zap + Lumberjack.v2 + Validator + sonic JSON + Asynq + PostgreSQL
- 禁止使用原生调用或绕过框架的快捷方式(禁止 `database/sql` 直接调用、禁止 `net/http` 替代 Fiber、禁止 `encoding/json` 替代 sonic
- 所有 HTTP 路由和中间件必须使用 Fiber 框架
- 所有数据库操作必须通过 GORM 进行
- 所有配置管理必须使用 Viper
- 所有日志记录必须使用 Zap + Lumberjack.v2
- 所有 JSON 序列化优先使用 sonic仅在必须使用标准库的场景才使用 `encoding/json`
- 所有异步任务必须使用 Asynq
- 必须使用 Go 官方工具链:`go fmt``go vet``golangci-lint`
- 必须使用 Go Modules 进行依赖管理
**理由:**
一致的技术栈使用确保代码可维护性、团队协作效率和长期技术债务可控。绕过框架的"快捷方式"会导致代码碎片化、难以调试、性能不一致和安全漏洞。
### 代码质量标准
**架构分层:**
- 代码必须遵循项目分层架构:`Handler → Service → Store → Model`
- Handler 层只能处理 HTTP 请求/响应,不得包含业务逻辑
- Service 层包含所有业务逻辑,支持跨模块调用
- Store 层统一管理所有数据访问,支持事务处理
- Model 层定义清晰的数据结构和 DTO
- 所有依赖通过结构体字段进行依赖注入(不使用构造函数模式)
**错误和响应处理:**
- 所有公共错误必须在 `pkg/errors/` 中定义,使用统一错误码
- 所有 API 响应必须使用 `pkg/response/` 的统一格式
- 所有常量必须在 `pkg/constants/` 中定义和管理
- 所有 Redis key 必须通过 `pkg/constants/` 中的 Key 生成函数统一管理
**代码注释和文档:**
- 必须为所有导出的函数、类型和常量编写 Go 风格的文档注释(`// FunctionName does something...`
- 代码注释implementation comments应该使用中文
- 日志消息应该使用中文
- 用户可见的错误消息必须使用中文(通过 `pkg/errors/` 的双语消息支持)
- Go 文档注释doc comments for exported APIs可以使用英文以保持生态兼容性但中文注释更佳
- 变量名、函数名、类型名必须使用英文(遵循 Go 命名规范)
**Go 代码风格要求:**
- 必须使用 `gofmt` 格式化所有代码
- 必须遵循 [Effective Go](https://go.dev/doc/effective_go) 和 [Go Code Review Comments](https://go.dev/wiki/CodeReviewComments)
- 变量命名必须使用 Go 风格:`userID`(不是 `userId`)、`HTTPServer`(不是 `HttpServer`
- 缩写词必须全部大写或全部小写:`URL``ID``HTTP`(导出)或 `url``id``http`(未导出)
- 包名必须简短、小写、单数、无下划线:`user`、`order`、`pkg`(不是 `users`、`userService`、`user_service`
- 接口命名应该使用 `-er` 后缀:`Reader`、`Writer`、`Logger`(不是 `ILogger`、`LoggerInterface`
**常量管理规范:**
- 业务常量(状态码、类型枚举等)必须定义在 `pkg/constants/constants.go` 或按模块分文件
- Redis key 必须使用函数生成,不允许硬编码字符串拼接
- Redis key 生成函数必须遵循命名规范:`Redis{Module}{Purpose}Key(params...)`
- Redis key 格式必须使用冒号分隔:`{module}:{purpose}:{identifier}`(生成函数示例见本节末尾)
- 禁止在代码中直接使用 magic numbers未定义含义的数字字面量
- 禁止在代码中硬编码字符串字面量URL、状态码、配置值、业务规则等
- 当相同的字面量值在 3 个或以上位置使用时,必须提取为常量
- 已定义的常量必须被使用,禁止重复硬编码相同的值
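下面给出一段示意代码(假设的 `pkg/constants` 片段,函数名与 key 前缀均为演示用途,并非项目中已有定义),展示 Redis key 生成函数的命名规范与冒号分隔格式:

```go
package constants

import "fmt"

// RedisUserTokenKey 生成用户 Token 缓存 key格式user:token:{userID}(示例假设)
func RedisUserTokenKey(userID uint) string {
	return fmt.Sprintf("user:token:%d", userID)
}

// RedisAccountSubordinatesKey 生成账号下级 ID 列表缓存 key格式account:subordinates:{accountID}(示例假设)
func RedisAccountSubordinatesKey(accountID uint) string {
	return fmt.Sprintf("account:subordinates:%d", accountID)
}
```

调用方一律通过这些函数获取 key从源头上杜绝散落各处的字符串拼接。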
**函数复杂度和职责分离:**
- 函数长度不得超过合理范围(通常 50-100 行,核心逻辑建议 ≤ 50 行)
- 超过 100 行的函数必须拆分为多个小函数,每个函数只负责一件事
- `main()` 函数只做编排orchestration不包含具体实现逻辑
- `main()` 函数中的每个初始化步骤应该提取为独立的辅助函数
- 编排函数必须清晰表达流程,避免嵌套的实现细节
- 必须遵循单一职责原则Single Responsibility Principle
## Go 语言惯用设计原则
**核心理念:写 Go 味道的代码,不要写 Java 味道的代码**
**包组织:**
- 包结构必须扁平化,避免深层嵌套(最多 2-3 层)
- 包必须按功能组织,不是按层次组织
- 包名必须描述功能,不是类型(`http` 不是 `httputils` 或 `handlers`
推荐的 Go 风格结构:
```
internal/
├── user/          # user 功能的所有代码
│   ├── handler.go # HTTP handlers
│   ├── service.go # 业务逻辑
│   ├── store.go   # 数据访问
│   └── model.go   # 数据模型
├── order/
└── sim/
```
**接口设计:**
- 接口必须小而专注1-3 个方法),不是大而全(示例见下)
- 接口应该在使用方定义,不是实现方(依赖倒置)
- 接口命名应该使用 `-er` 后缀:`Reader``Writer``Storer`
- 禁止使用 `I` 前缀或 `Interface` 后缀
- 禁止创建只有一个实现的接口(除非明确需要抽象)
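一个最小示意(`Account` 类型与方法集均为演示假设):接口由使用方定义、只包含其真正需要的方法,任何满足该方法集的 Store 都能注入:

```go
package service

import "context"

// Account 业务模型(示例假设)
type Account struct {
	ID   uint
	Name string
}

// accountStorer 在使用方service定义的小接口只声明本包需要的两个方法
type accountStorer interface {
	GetByID(ctx context.Context, id uint) (*Account, error)
	Create(ctx context.Context, a *Account) error
}

// AccountService 面向小接口而非具体实现,依赖通过结构体字段注入
type AccountService struct {
	Store accountStorer
}
```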
**错误处理:**
- 错误必须显式返回和检查不使用异常panic/recover
- 错误处理必须紧跟错误产生的代码
- 必须使用 `errors.Is()``errors.As()` 检查错误类型
- 必须使用 `fmt.Errorf()` 包装错误,保留错误链(示例见下)
- 自定义错误应该实现 `error` 接口
- panic 只能用于不可恢复的程序错误
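错误包装与检查的最小可运行示意(`ErrNotFound` 与 `find`/`load` 均为演示假设):

```go
package main

import (
	"errors"
	"fmt"
)

// ErrNotFound 哨兵错误,供调用方用 errors.Is 判断(示例假设)
var ErrNotFound = errors.New("记录不存在")

func find(id uint) (string, error) {
	if id == 0 {
		return "", ErrNotFound
	}
	return "ok", nil
}

func load(id uint) (string, error) {
	v, err := find(id)
	if err != nil {
		// 用 %w 包装错误,保留错误链,错误处理紧跟错误产生处
		return "", fmt.Errorf("加载记录 %d 失败: %w", id, err)
	}
	return v, nil
}

func main() {
	if _, err := load(0); errors.Is(err, ErrNotFound) {
		fmt.Println("底层错误仍可被 errors.Is 识别:", err)
	}
}
```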
**结构体和方法:**
- 结构体必须简单直接不是类class的替代品
- 禁止为每个字段创建 getter/setter 方法
- 必须直接访问导出的字段(大写开头)
- 必须使用组合composition而不是继承inheritance
- 构造函数应该命名为 `New` 或 `NewXxx`,返回具体类型
- 禁止使用构造器模式Builder Pattern除非真正需要
**并发模式:**
- 必须使用 goroutines 和 channels不是线程和锁大多数情况示例见下
- 必须使用 `context.Context` 传递取消信号
- 必须遵循"通过通信共享内存,不要通过共享内存通信"
- 应该使用 `sync.WaitGroup` 等待 goroutines 完成
- 应该使用 `sync.Once` 确保只执行一次
- 禁止创建线程池类Go 运行时已处理)
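并发模式的最小可运行示意(函数与取值均为演示假设)goroutine 产出结果channel 负责汇总,`context.Context` 传递取消信号,体现"通过通信共享内存":

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// batchQuery 并发查询多个 ID结果通过 channel 汇总而非共享变量(示例假设)
func batchQuery(ctx context.Context, ids []uint) map[uint]string {
	type kv struct {
		id  uint
		val string
	}
	ch := make(chan kv)
	var wg sync.WaitGroup
	for _, id := range ids {
		wg.Add(1)
		go func(id uint) {
			defer wg.Done()
			select {
			case <-ctx.Done(): // 上游取消或超时,直接退出
			case ch <- kv{id: id, val: fmt.Sprintf("result-%d", id)}:
			}
		}(id)
	}
	// 等所有 goroutine 结束后关闭 channel让接收方自然退出
	go func() {
		wg.Wait()
		close(ch)
	}()
	results := make(map[uint]string, len(ids))
	for item := range ch {
		results[item.id] = item.val
	}
	return results
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	fmt.Println(batchQuery(ctx, []uint{1, 2, 3}))
}
```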
**命名约定:**
- 变量名必须简短且符合上下文(短作用域用短名字:`i`, `j`, `k`;长作用域用描述性名字)
- 缩写词必须保持一致的大小写:`URL`, `HTTP`, `ID`(不是 `Url`, `Http`, `Id`
- 禁止使用匈牙利命名法或类型前缀:`strName`, `arrUsers`
- 禁止使用下划线连接(除了测试和包名)
- 方法接收者名称应该使用 1-2 个字母的缩写,全文件保持一致
**严格禁止的 Java 风格模式:**
1. ❌ 过度抽象(不需要的接口、工厂、构造器)
2. ❌ Getter/Setter直接访问导出字段
3. ❌ 继承层次(使用组合,不是嵌入)
4. ❌ 异常处理(使用错误返回,不是 panic/recover
5. ❌ 单例模式(使用包级别变量或 `sync.Once`
6. ❌ 线程池(直接使用 goroutines
7. ❌ 深层包嵌套(保持扁平结构)
8. ❌ 类型前缀(`IService`, `AbstractBase`, `ServiceImpl`
9. ❌ Bean 风格(不需要 POJO/JavaBean 模式)
10. ❌ 过度 DI 框架(简单直接的依赖注入)
## 测试标准
**测试要求:**
- 所有核心业务逻辑Service 层)必须有单元测试覆盖
- 所有 API 端点必须有集成测试覆盖
- 所有数据库操作应该有事务回滚测试
- 测试必须使用 Go 标准测试框架(`testing` 包)
- 测试文件必须与源文件同目录,命名为 `*_test.go`
- 测试函数必须使用 `Test` 前缀:`func TestUserCreate(t *testing.T)`
- 基准测试必须使用 `Benchmark` 前缀:`func BenchmarkUserCreate(b *testing.B)`
**测试性能要求:**
- 测试必须可独立运行,不依赖外部服务(使用 mock 或 testcontainers
- 单元测试必须在 100ms 内完成
- 集成测试应该在 1s 内完成
- 测试覆盖率应该达到 70% 以上(核心业务代码必须 90% 以上)
**测试最佳实践:**
- 测试必须使用 table-driven tests 处理多个测试用例(示例见下)
- 测试必须使用 `t.Helper()` 标记辅助函数
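table-driven 测试与 `t.Helper()` 的最小示意(被测函数 `splitKey` 为演示假设):

```go
package constants

import (
	"strings"
	"testing"
)

// splitKey 按冒号拆分 Redis key假设的被测函数
func splitKey(key string) []string {
	return strings.Split(key, ":")
}

func TestSplitKey(t *testing.T) {
	cases := []struct {
		name string
		in   string
		want int
	}{
		{name: "三段式 key", in: "user:token:1", want: 3},
		{name: "空字符串", in: "", want: 1},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			assertLen(t, splitKey(tc.in), tc.want)
		})
	}
}

// assertLen 辅助断言,标记 t.Helper() 以便失败时定位到调用处
func assertLen(t *testing.T, got []string, want int) {
	t.Helper()
	if len(got) != want {
		t.Fatalf("长度 = %d, 期望 %d", len(got), want)
	}
}
```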
## 数据库设计原则
**核心规则:**
- 数据库表之间禁止建立外键约束Foreign Key Constraints
- GORM 模型之间禁止使用 ORM 关联关系(`foreignKey``references``hasMany``belongsTo` 等标签)
- 表之间的关联必须通过存储关联 ID 字段手动维护(示例见下)
- 关联数据查询必须在代码层面显式执行,不依赖 ORM 的自动加载或预加载
- 模型结构体只能包含简单字段,不应包含其他模型的嵌套引用
- 数据库迁移脚本禁止包含外键约束定义
- 数据库迁移脚本禁止包含触发器用于维护关联数据
- 时间字段(`created_at``updated_at`)的更新必须由 GORM 自动处理,不使用数据库触发器
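一个最小示意(`Order`/`User` 模型与字段均为演示假设):模型只保存关联 ID不声明任何 GORM 关联标签,关联数据在代码层面显式查询:

```go
package model

import (
	"context"
	"time"

	"gorm.io/gorm"
)

// Order 只存关联 ID不使用 foreignKey/belongsTo 等关联标签(示例假设)
type Order struct {
	ID        uint  `gorm:"primaryKey"`
	UserID    uint  // 仅保存关联的用户 ID
	Amount    int64 // 金额以分为单位
	CreatedAt time.Time
	UpdatedAt time.Time
}

// User 与 Order 之间没有任何 ORM 级别的关联(示例假设)
type User struct {
	ID   uint `gorm:"primaryKey"`
	Name string
}

// LoadOrderWithUser 在代码层面显式查询关联数据,而不是依赖 Preload
func LoadOrderWithUser(ctx context.Context, db *gorm.DB, orderID uint) (*Order, *User, error) {
	var order Order
	if err := db.WithContext(ctx).First(&order, orderID).Error; err != nil {
		return nil, nil, err
	}
	var user User
	if err := db.WithContext(ctx).First(&user, order.UserID).Error; err != nil {
		return nil, nil, err
	}
	return &order, &user, nil
}
```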
**设计理由:**
1. **灵活性**:业务逻辑完全在代码中控制,不受数据库约束限制
2. **性能**:无外键约束意味着无数据库层面的引用完整性检查开销
3. **简单直接**:显式的关联数据查询使数据流向清晰可见
4. **可控性**:开发者完全掌控何时查询关联数据、查询哪些关联数据
5. **可维护性**:数据库 schema 更简单,迁移更容易
6. **分布式友好**:在微服务和分布式数据库场景下更容易扩展
## API 设计规范
**统一响应格式:**
所有 API 响应必须使用统一的 JSON 格式:
```json
{
  "code": 0,
  "message": "success",
  "data": {},
  "timestamp": "2025-11-10T15:30:00Z"
}
```
**API 设计要求:**
- 所有错误响应必须包含明确的错误码和错误消息(中英文双语)
- 所有 API 端点必须遵循 RESTful 设计原则
- 所有分页 API 必须使用统一的分页参数:`page``page_size``total`
- 所有时间字段必须使用 ISO 8601 格式RFC3339
- 所有货币金额必须使用整数表示(分为单位),避免浮点精度问题
- 所有布尔字段必须使用 `true`/`false`,不使用 `0`/`1`
- API 版本必须通过 URL 路径管理(如 `/api/v1/...`
## 错误处理规范
**统一错误处理:**
- 所有 API 错误响应必须使用统一的 JSON 格式(通过 `pkg/errors/` 全局 ErrorHandler
- 所有 Handler 层错误必须通过返回 `error` 传递给全局 ErrorHandler禁止手动构造错误响应示例见本节末尾
- 所有业务错误必须使用 `pkg/errors.New()``pkg/errors.Wrap()` 创建 `AppError`,并指定错误码
- 所有错误码必须在 `pkg/errors/codes.go` 中统一定义和管理
**Panic 处理:**
- 所有 Panic 必须被 Recover 中间件自动捕获,转换为 500 错误响应
- 禁止在业务代码中主动 `panic`(除非遇到不可恢复的编程错误)
- 禁止在 Handler 中直接使用 `c.Status().JSON()` 返回错误响应
**错误日志:**
- 所有错误日志必须包含完整的请求上下文Request ID、路径、方法、参数等)
- 5xx 服务端错误必须自动脱敏,只返回通用错误消息,原始错误仅记录到日志
- 4xx 客户端错误可以返回具体业务错误消息(如"用户名已存在")
**错误码分类:**
- `0`: 成功
- `1000-1999`: 客户端错误4xx HTTP 状态码,日志级别 Warn
- `2000-2999`: 服务端错误5xx HTTP 状态码,日志级别 Error
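Handler 只返回 error、由全局 ErrorHandler 统一响应的示意片段(`AccountHandler` 结构、`errors.New` 签名、`CodeInvalidParam` 错误码与 `response.Success` 均为按上述约定的假设,并非逐字摘自项目代码):

```go
package admin

import (
	"context"

	"github.com/gofiber/fiber/v2"

	"github.com/break/junhong_cmp_fiber/internal/model"
	"github.com/break/junhong_cmp_fiber/pkg/errors"
	"github.com/break/junhong_cmp_fiber/pkg/response"
)

// AccountHandler 示意 Handler依赖通过结构体字段注入Service 方法集为演示假设)
type AccountHandler struct {
	Service interface {
		Create(ctx context.Context, req *model.CreateAccountReq) (*model.AccountResp, error)
	}
}

// CreateAccount 创建账号:只返回 error不手动构造错误响应
func (h *AccountHandler) CreateAccount(c *fiber.Ctx) error {
	var req model.CreateAccountReq
	if err := c.BodyParser(&req); err != nil {
		// 参数错误:返回带错误码的 AppError由全局 ErrorHandler 统一构造响应
		return errors.New(errors.CodeInvalidParam, "请求参数格式错误")
	}
	account, err := h.Service.Create(c.UserContext(), &req)
	if err != nil {
		// 业务错误直接向上传递ErrorHandler 按错误码映射 HTTP 状态码与日志级别
		return err
	}
	// 成功时走统一响应格式,并返回 nil
	return response.Success(c, account)
}
```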
## 访问日志规范
**核心要求:**
- 所有 HTTP 请求必须被记录到 `access.log`,无例外
- 访问日志必须记录完整的请求参数query 参数 + request body
- 访问日志必须记录完整的响应参数response body
- 请求/响应 body 必须限制大小为 50KB超过部分截断并标注 `... (truncated)`
- 访问日志必须通过统一的 Logger 中间件(`pkg/logger/Middleware()`)记录(形态示意见本节末尾)
- 任何中间件的短路返回(认证失败、限流拒绝、参数验证失败等)禁止绕过访问日志
**必需字段:**
访问日志必须包含以下字段:
- `method`: HTTP 方法
- `path`: 请求路径
- `query`: Query 参数字符串
- `status`: HTTP 状态码
- `duration_ms`: 请求耗时(毫秒)
- `request_id`: 请求唯一 ID
- `ip`: 客户端 IP
- `user_agent`: 用户代理
- `user_id`: 用户 ID认证后有值否则为空
- `request_body`: 请求体(限制 50KB
- `response_body`: 响应体(限制 50KB
**日志配置:**
- 访问日志应该使用 JSON 格式,便于日志分析和监控
- 访问日志文件必须配置自动轮转(基于大小或时间)
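访问日志中间件的形态示意(高度简化:仅演示统一入口、耗时统计与 50KB 截断request_id、user_id 等字段的获取从略;非项目真实实现):

```go
package logger

import (
	"time"

	"github.com/gofiber/fiber/v2"
	"go.uber.org/zap"
)

const maxBodySize = 50 * 1024 // 50KB

// truncate 超过 50KB 的 body 截断并标注
func truncate(b []byte) string {
	if len(b) > maxBodySize {
		return string(b[:maxBodySize]) + "... (truncated)"
	}
	return string(b)
}

// Middleware 返回统一的访问日志中间件(简化示意)
func Middleware(log *zap.Logger) fiber.Handler {
	return func(c *fiber.Ctx) error {
		start := time.Now()
		reqBody := truncate(c.Body())
		err := c.Next() // 先执行后续处理(包括短路返回的中间件)
		log.Info("access",
			zap.String("method", c.Method()),
			zap.String("path", c.Path()),
			zap.String("query", string(c.Request().URI().QueryString())),
			zap.Int("status", c.Response().StatusCode()),
			zap.Int64("duration_ms", time.Since(start).Milliseconds()),
			zap.String("ip", c.IP()),
			zap.String("user_agent", c.Get("User-Agent")),
			zap.String("request_body", reqBody),
			zap.String("response_body", truncate(c.Response().Body())),
		)
		return err
	}
}
```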
## 性能要求
**性能指标:**
- API 响应时间P95必须 < 200ms数据库查询 < 50ms
- API 响应时间P99必须 < 500ms
- 批量操作必须使用批量查询/插入,避免 N+1 查询问题
- 所有数据库查询必须有适当的索引支持
- 列表查询必须实现分页,默认 `page_size=20`,最大 `page_size=100`
- 异步任务必须用于非实时操作(批量同步、分佣计算等)
**资源限制:**
- 内存使用API 服务)应该 < 500MB正常负载
- 内存使用Worker 服务)应该 < 1GB正常负载
- 数据库连接池必须配置合理(`MaxOpenConns=25`, `MaxIdleConns=10`, `ConnMaxLifetime=5m`
- Redis 连接池必须配置合理(`PoolSize=10`, `MinIdleConns=5`
**并发处理:**
- 并发操作应该使用 goroutines 和 channels不是线程池模式
- 必须使用 `context.Context` 进行超时和取消控制
- 必须使用 `sync.Pool` 复用频繁分配的对象(如缓冲区)(示例见下)
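`sync.Pool` 复用缓冲区的最小可运行示意(变量与函数名为演示假设):

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool 复用 bytes.Buffer减少高频分配带来的 GC 压力
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

func render(payload []byte) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset() // 归还前必须重置,避免脏数据
		bufPool.Put(buf)
	}()
	buf.Write(payload)
	return buf.String()
}

func main() {
	fmt.Println(render([]byte("hello")))
}
```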
## 文档规范
**文档结构要求:**
- 每个功能完成后必须在 `docs/` 目录创建总结文档
- 总结文档路径必须遵循规范:`docs/{feature-id}/` 对应 `specs/{feature-id}/`
- 总结文档文件名必须使用中文命名(例如:`功能总结.md`、`使用指南.md`、`架构说明.md`
- 总结文档内容必须使用中文编写
- 每次添加新功能总结文档时必须同步更新 `README.md`
**README.md 更新要求:**
- README.md 中的功能描述必须简短精炼,让首次接触项目的开发者能快速了解
- README.md 的功能描述应该控制在 2-3 句话以内
- 使用中文,便于中文开发者快速理解
- 提供到详细文档的链接
- 按功能模块分组(如"核心功能"、"中间件"、"业务模块"等)
---
## 提案创建检查清单
在创建 OpenSpec 提案时,请确保:
1.**技术栈合规**: 提案中的技术选型必须符合项目技术栈要求
2.**架构分层**: 设计必须遵循 Handler → Service → Store → Model 分层
3.**错误处理**: 错误处理方案必须使用统一的 `pkg/errors/` 系统
4.**常量管理**: 新增常量必须定义在 `pkg/constants/`
5.**Go 风格**: 代码设计必须遵循 Go 惯用法,避免 Java 风格
6.**测试要求**: 提案必须包含测试计划(单元测试 + 集成测试)
7.**性能考虑**: 需要考虑性能指标和资源限制
8.**文档计划**: 提案必须包含文档更新计划
9.**中文优先**: 所有文档、注释、日志必须使用中文
## 实现检查清单
在实现 OpenSpec 提案时,请确保:
1.**代码格式**: 所有代码已通过 `gofmt` 格式化
2.**代码检查**: 所有代码已通过 `go vet``golangci-lint` 检查
3.**测试覆盖**: 核心业务逻辑测试覆盖率 ≥ 90%
4.**性能测试**: API 响应时间符合性能指标要求
5.**错误处理**: 所有错误已正确处理和记录
6.**文档更新**: README.md 和功能文档已更新
7.**迁移脚本**: 数据库变更已创建迁移脚本
8.**日志记录**: 关键操作已添加访问日志和业务日志
9.**代码审查**: 代码已通过团队审查


@@ -0,0 +1,121 @@
# 实现总结服务启动时自动生成OpenAPI文档
## 实现概述
本次实现在服务启动时自动生成 OpenAPI 文档,确保文档与运行的服务保持同步。
## 核心变更
### 1. 新增文件
#### `cmd/api/docs.go`
创建了 `generateOpenAPIDocs()` 函数,负责在服务启动时自动生成 OpenAPI 文档。
**关键实现**:
- 创建临时 Fiber App 用于路由注册
- 使用 nil 依赖创建 Handler仅需路由结构
- 调用路由注册函数填充文档生成器
- 保存文档到指定路径
- 生成失败时记录错误但不中断服务启动
### 2. 修改文件
#### `cmd/api/main.go`
在主函数的步骤 11 添加了文档生成调用:
```go
// 11. 生成 OpenAPI 文档
generateOpenAPIDocs("./openapi.yaml", appLogger)
```
**位置选择**:
- 放在路由注册之后,确保有完整的路由信息
- 放在服务器启动之前,确保文档在服务可用前生成
#### `cmd/gendocs/main.go`
重构了独立文档生成工具:
- 提取了 `generateAdminDocs()` 函数
- 主函数现在只负责调用生成函数和输出结果
- 保持原有的输出路径 `./docs/admin-openapi.yaml`
- 返回错误而非 panic便于错误处理
#### `.gitignore`
添加了自动生成的文档到忽略列表:
```
# Auto-generated OpenAPI documentation
/openapi.yaml
```
## 设计决策
### 避免循环依赖
最初计划将生成逻辑放在 `pkg/openapi/generate.go`,但这会导致循环依赖:
- `pkg/openapi``internal/routes``pkg/openapi`
**解决方案**: 将生成逻辑放在各自的 `cmd/` 包内:
- `cmd/api/docs.go` - 服务启动时的生成逻辑
- `cmd/gendocs/main.go` - 独立工具的生成逻辑
这样做的好处:
- 避免了循环依赖
- 保持了包的职责清晰
- 代码简单直接,易于维护
### 优雅的错误处理
文档生成失败不应影响服务启动:
- 生成失败时使用 `appLogger.Error()` 记录错误
- 服务继续启动,保证可用性
- 开发者可以通过日志发现问题
### 文档输出路径
- 服务启动生成: `./openapi.yaml`(项目根目录)
- 独立工具生成: `./docs/admin-openapi.yaml`(保持原有行为)
## 测试验证
### 编译测试
```bash
go build -o /tmp/test-api ./cmd/api
go build -o /tmp/test-gendocs ./cmd/gendocs
```
✅ 编译成功,无错误
### 功能测试
```bash
/tmp/test-gendocs
```
输出:
```
2026/01/09 12:11:57 成功在以下位置生成 OpenAPI 文档: /Users/break/csxjProject/junhong_cmp_fiber/docs/admin-openapi.yaml
```
✅ 文档生成成功33KB
### 代码规范检查
```bash
gofmt -l cmd/api/docs.go cmd/api/main.go cmd/gendocs/main.go
go vet ./cmd/api/... ./cmd/gendocs/...
```
✅ 所有检查通过
## 影响范围
### 新增功能
- ✅ 服务启动时自动生成 OpenAPI 文档
- ✅ 文档自动保存到项目根目录 `./openapi.yaml`
- ✅ 生成失败时记录错误但不影响服务启动
### 现有功能
-`cmd/gendocs` 工具继续可用(代码已重构但功能不变)
-`make docs` 命令(如存在)继续可用
- ✅ 无破坏性变更
### 开发体验改进
- ✅ 部署时无需手动执行 `make docs`
- ✅ 文档始终与当前运行的服务保持同步
- ✅ 开发过程中自动更新文档,无需频繁手动执行命令
## 后续工作
以下任务可以在后续完成:
1. 更新 README.md说明自动生成功能
2. 添加文档生成的单元测试(如需要)
3. 考虑添加启动参数控制是否生成文档(如需要)


@@ -0,0 +1,101 @@
# OpenAPI 文档自动生成功能
## 功能概述
服务启动时自动生成 OpenAPI 3.0 规范文档,确保文档始终与运行的服务保持同步。
## 使用方式
### 1. 自动生成(服务启动时)
当你启动 API 服务时OpenAPI 文档会自动生成:
```bash
make run
# 或
go run cmd/api/main.go
```
文档将自动保存到项目根目录: `./openapi.yaml`
### 2. 手动生成(独立工具)
如果需要离线生成文档(不启动服务),可以使用以下命令:
```bash
make docs
# 或
go run cmd/gendocs/main.go
```
文档将保存到: `./docs/admin-openapi.yaml`
## 实现细节
### 核心文件
- `cmd/api/docs.go` - 服务启动时的文档生成逻辑
- `cmd/api/main.go` - 在步骤 11 调用文档生成
- `cmd/gendocs/main.go` - 独立文档生成工具
### 生成流程
1. 创建 OpenAPI 文档生成器
2. 创建临时 Fiber App
3. 注册所有路由(使用 nil 依赖)
4. 保存文档到指定路径
5. 生成失败时记录错误但不影响服务
### 错误处理
- 文档生成失败会记录到应用日志
- 服务启动不会因文档生成失败而中断
- 保证服务的可用性优先于文档生成
## 技术架构
### 避免循环依赖
文档生成逻辑放在各自的 `cmd/` 包内,避免了 `pkg/openapi``internal/routes` 的循环依赖。
### 代码复用
两种生成方式(自动和手动)都使用相同的核心逻辑:
- 相同的路由注册机制
- 相同的文档生成器
- 仅输出路径不同
## 配置
### .gitignore
自动生成的文档已添加到 `.gitignore`:
```
/openapi.yaml
```
这避免了将自动生成的文件提交到版本控制。
## 验证
### 编译测试
```bash
go build ./cmd/api
go build ./cmd/gendocs
```
### 功能测试
```bash
# 测试独立工具
make docs
# 检查生成的文档
ls -lh docs/admin-openapi.yaml
```
## 相关文档
- [提案](./proposal.md) - 功能需求和设计思路
- [任务清单](./tasks.md) - 实现任务列表
- [实现总结](./IMPLEMENTATION.md) - 详细的实现说明
- [规范](./specs/openapi-generation/spec.md) - 正式的功能规范


@@ -0,0 +1,31 @@
# Change: 服务启动时自动生成OpenAPI文档
## Why
当前项目已经实现了OpenAPI文档生成功能但需要手动执行 `make docs` 命令才能生成文档文件。这导致以下问题:
- 部署服务时容易忘记生成文档导致文档与实际API不同步
- 开发过程中需要频繁手动执行命令来更新文档
- 无法保证文档与当前运行服务的API定义完全一致
通过在服务启动时自动生成OpenAPI文档可以确保文档始终与当前服务保持同步提升开发和部署体验。
## What Changes
-`cmd/api/main.go` 的初始化流程中添加OpenAPI文档自动生成功能
- 将文档输出到项目根目录的固定位置(`./openapi.yaml`
- 生成失败时记录错误日志但不影响服务启动
- 复用现有的文档生成逻辑(`pkg/openapi/` 和 `internal/routes/` 的Registry机制
- 保留 `cmd/gendocs/main.go` 作为独立备用工具(供离线生成文档使用)
## Impact
### Affected specs
- **NEW**: `openapi-generation` - 新增OpenAPI文档自动生成规范
### Affected code
- `cmd/api/main.go` - 添加文档生成调用
- 可能需要提取 `cmd/gendocs/main.go` 中的生成逻辑为可复用函数
- 无需修改现有的 `pkg/openapi/generator.go``internal/routes/registry.go`
### Breaking changes
无破坏性变更。现有的手动生成方式(`make docs`)仍然可以使用。


@@ -0,0 +1,81 @@
# OpenAPI Generation Specification
## ADDED Requirements
### Requirement: 服务启动时自动生成OpenAPI文档
系统启动时SHALL自动生成OpenAPI 3.0规范文档并保存到项目根目录。
#### Scenario: 服务正常启动时生成文档
- **WHEN** 服务启动流程执行到路由注册之后
- **THEN** 系统自动调用文档生成逻辑
- **AND** 在项目根目录生成 `openapi.yaml` 文件
- **AND** 文件内容包含所有已注册的API端点定义
#### Scenario: 文档生成失败时的优雅处理
- **WHEN** 文档生成过程中发生错误(如文件写入失败、权限问题)
- **THEN** 系统记录错误日志到应用日志
- **AND** 错误日志包含完整的错误信息和堆栈
- **AND** 服务启动流程继续执行,不因文档生成失败而中断
#### Scenario: 文档生成的时机控制
- **WHEN** 服务在任何环境下启动(开发、测试、生产)
- **THEN** 文档生成逻辑都会执行
- **AND** 无需额外的配置或启动参数
### Requirement: 文档输出路径规范
系统SHALL将生成的OpenAPI文档输出到固定的、可预测的位置。
#### Scenario: 文档保存到项目根目录
- **WHEN** 文档生成成功
- **THEN** 文件保存到项目根目录(相对于工作目录的 `./openapi.yaml`
- **AND** 如果文件已存在则覆盖旧版本
- **AND** 文件权限设置为 0644所有者可读写其他用户只读
#### Scenario: 确保输出目录存在
- **WHEN** 输出路径的父目录不存在
- **THEN** 系统自动创建必要的目录结构
- **AND** 目录权限设置为 0755
### Requirement: 复用现有生成逻辑
文档生成功能SHALL复用项目中已有的OpenAPI生成机制避免代码重复。
#### Scenario: 调用现有的Registry机制
- **WHEN** 执行文档生成
- **THEN** 使用 `pkg/openapi.Generator` 创建文档生成器
- **AND** 调用 `internal/routes` 中的路由注册函数
- **AND** 传入非nil的Generator实例以激活文档收集逻辑
- **AND** 使用Generator的Save方法输出YAML文件
#### Scenario: 模拟路由注册但不启动服务
- **WHEN** 生成文档时调用路由注册函数
- **THEN** 创建临时的Fiber应用实例用于路由注册
- **AND** 传入nil的依赖项因为不会执行实际的Handler逻辑
- **AND** 注册完成后丢弃Fiber应用实例不调用Listen
### Requirement: 向后兼容独立生成工具
系统SHALL保留独立的文档生成工具支持离线生成文档的用例。
#### Scenario: 通过make命令生成文档
- **WHEN** 用户执行 `make docs` 命令
- **THEN** 调用 `cmd/gendocs/main.go`
- **AND** 生成文档到指定位置(默认 `./docs/admin-openapi.yaml`
- **AND** 生成过程独立于服务运行状态
#### Scenario: 独立工具与自动生成共享代码
- **WHEN** 独立工具和自动生成都需要执行文档生成
- **THEN** 两者调用相同的底层生成函数
- **AND** 通过参数区分输出路径
- **AND** 避免逻辑重复


@@ -0,0 +1,28 @@
# Implementation Tasks
## 1. 重构文档生成逻辑
- [x] 1.1 从 `cmd/gendocs/main.go` 中提取文档生成逻辑(实际采用在各自包内实现的方案)
- [x] 1.2 创建文档生成函数,接受输出路径参数
- [x] 1.3 确保函数返回错误而非panic用于优雅处理失败情况
## 2. 集成到服务启动流程
- [x] 2.1 在 `cmd/api/main.go``main()` 函数中添加文档生成调用
- [x] 2.2 将生成调用放在路由注册之后(确保有完整的路由信息)
- [x] 2.3 指定输出路径为 `./openapi.yaml`(项目根目录)
- [x] 2.4 生成失败时使用 `appLogger.Error()` 记录错误但继续启动
## 3. 更新现有工具
- [x] 3.1 保留 `cmd/gendocs/main.go` 作为独立的文档生成工具
- [x] 3.2 修改 `cmd/gendocs/main.go` 使用提取的生成逻辑
- [x] 3.3 Makefile 中的 `docs` 目标保持不变(如存在)
## 4. 文档和测试
- [x] 4.1 在 `.gitignore` 中添加 `/openapi.yaml`(避免提交自动生成的文件)
- [x] 4.2 手动测试文档生成工具,验证文档正确生成
- [x] 4.3 编译测试确保代码无错误
- [x] 4.4 README.md 更新将在后续完成
## 5. 清理和验证
- [x] 5.1 确保代码符合项目规范gofmt、go vet
- [x] 5.2 确保所有函数都有中文文档注释
- [x] 5.3 运行 `openspec validate auto-generate-openapi-docs --strict`


@@ -0,0 +1,65 @@
# auth Specification
## Purpose
TBD - created by archiving change refactor-framework-cleanup. Update Purpose after archive.
## Requirements
### Requirement: Unified Authentication Middleware
系统 SHALL 提供统一的认证中间件,支持可配置的 Token 提取和验证。
#### Scenario: Token 验证成功
- **WHEN** 请求携带有效的 Token
- **THEN** 中间件提取并验证 Token
- **AND** 将用户信息同时设置到 Fiber Locals 和 Context
- **AND** 请求继续执行
#### Scenario: Token 缺失
- **WHEN** 请求未携带 Token
- **AND** 路径不在跳过列表中
- **THEN** 返回 AppErrorCodeMissingToken
- **AND** 由全局 ErrorHandler 处理错误响应
#### Scenario: Token 无效
- **WHEN** 请求携带的 Token 无效或过期
- **THEN** 返回 AppErrorCodeUnauthorized
- **AND** 由全局 ErrorHandler 处理错误响应
#### Scenario: 跳过路径
- **WHEN** 请求路径在 SkipPaths 配置中
- **THEN** 中间件跳过认证
- **AND** 请求直接继续执行
### Requirement: User Context Management
认证中间件 SHALL 提供用户上下文管理函数,支持从 Context 获取用户信息。
#### Scenario: 获取用户 ID
- **WHEN** 调用 GetUserIDFromContext(ctx)
- **AND** 认证已通过
- **THEN** 返回当前用户的 ID
#### Scenario: 检查 Root 用户
- **WHEN** 调用 IsRootUser(ctx)
- **THEN** 返回当前用户是否为 Root 用户
#### Scenario: 设置用户到 Fiber Context
- **WHEN** 调用 SetUserToFiberContext(c, userInfo)
- **THEN** 用户信息被设置到 Fiber Locals
- **AND** 用户信息被设置到请求 Context供 GORM 等使用)
### Requirement: Auth Middleware Configuration
认证中间件 SHALL 支持灵活的配置选项。
#### Scenario: 自定义 Token 提取
- **WHEN** 配置了 TokenExtractor 函数
- **THEN** 使用自定义函数从请求中提取 Token
#### Scenario: 默认 Token 提取
- **WHEN** 未配置 TokenExtractor
- **THEN** 从 Authorization Header 提取 Bearer Token
#### Scenario: 自定义验证函数
- **WHEN** 配置了 Validator 函数
- **THEN** 使用自定义函数验证 Token 并返回用户信息


@@ -0,0 +1,65 @@
# data-permission Specification
## Purpose
TBD - created by archiving change refactor-framework-cleanup. Update Purpose after archive.
## Requirements
### Requirement: GORM Callback Data Permission
系统 SHALL 使用 GORM Callback 机制自动为所有查询添加数据权限过滤。
#### Scenario: 自动应用权限过滤
- **WHEN** 执行 GORM 查询
- **AND** Context 包含用户信息
- **AND** 表包含 owner_id 字段
- **THEN** 自动添加 WHERE owner_id IN (subordinateIDs) 条件
#### Scenario: Root 用户跳过过滤
- **WHEN** 当前用户是 Root 用户
- **THEN** 不添加任何数据权限过滤条件
- **AND** 可查询所有数据
#### Scenario: 无 owner_id 字段的表
- **WHEN** 表不包含 owner_id 字段
- **THEN** 不添加数据权限过滤条件
### Requirement: Skip Data Permission
系统 SHALL 支持通过 Context 绕过数据权限过滤。
#### Scenario: 显式跳过权限过滤
- **WHEN** 调用 SkipDataPermission(ctx) 获取新 Context
- **AND** 使用该 Context 执行 GORM 查询
- **THEN** 不添加任何数据权限过滤条件
#### Scenario: 内部操作跳过过滤
- **WHEN** 执行内部同步、批量操作或管理员操作
- **THEN** 应使用 SkipDataPermission 绕过过滤
### Requirement: Subordinate IDs Caching
系统 SHALL 缓存用户的下级 ID 列表以提高查询性能。
#### Scenario: 缓存命中
- **WHEN** 获取用户下级 ID 列表
- **AND** Redis 缓存存在
- **THEN** 直接返回缓存数据
#### Scenario: 缓存未命中
- **WHEN** 获取用户下级 ID 列表
- **AND** Redis 缓存不存在
- **THEN** 执行递归 CTE 查询获取下级 ID
- **AND** 将结果缓存到 Redis30 分钟过期)
### Requirement: Callback Registration
系统 SHALL 在应用启动时注册 GORM 数据权限 Callback。
#### Scenario: 注册 Callback
- **WHEN** 调用 RegisterDataPermissionCallback(db, accountStore)
- **THEN** 注册 Query Before Callback
- **AND** Callback 名称为 "data_permission"
#### Scenario: AccountStore 依赖
- **WHEN** 注册 Callback 时
- **THEN** 需要传入 AccountStore 实例用于获取下级 ID


@@ -0,0 +1,56 @@
# dependency-injection Specification
## Purpose
TBD - created by archiving change refactor-framework-cleanup. Update Purpose after archive.
## Requirements
### Requirement: Bootstrap Package
The system SHALL provide a bootstrap package that centralizes initialization and dependency injection for every business component.
#### Scenario: Initialize all components
- **WHEN** Bootstrap(deps) is called
- **THEN** every Store, Service, and Handler is initialized automatically
- **AND** a Handlers struct ready for route registration is returned
#### Scenario: Dependency injection
- **WHEN** a Service is initialized
- **THEN** its required Store dependencies are injected automatically
- **AND** its required Service dependencies are injected automatically
#### Scenario: Adding a new business module
- **WHEN** a new business module is needed
- **THEN** only the bootstrap package changes
- **AND** main.go needs no modification
- **AND** TODO comments mark the extension points
### Requirement: Main Function Simplification
The main function SHALL only orchestrate; it contains no component-specific initialization logic.
#### Scenario: Standard startup flow
- **WHEN** the application starts
- **THEN** the main function performs these steps:
  1. Load the configuration
  2. Initialize the base dependencies (DB, Redis, Logger)
  3. Call bootstrap.Bootstrap() to initialize the business components
  4. Set up routes and middleware
  5. Start the server
#### Scenario: Startup failure handling
- **WHEN** any initialization step fails
- **THEN** the error is logged
- **AND** the process exits with a non-zero status code
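An orchestration-only main might read like the sketch below. bootstrap.Bootstrap is named by the requirement; the config, routes, and errs packages, their import paths, and NewDependencies are assumptions:

```go
package main

import (
	"log"

	"github.com/gofiber/fiber/v2"

	"example.com/project/internal/bootstrap"
	"example.com/project/internal/config" // hypothetical
	"example.com/project/internal/routes" // hypothetical
	"example.com/project/pkg/errs"        // hypothetical
)

func main() {
	cfg, err := config.Load()
	if err != nil {
		log.Fatalf("load config: %v", err) // log, then exit non-zero
	}
	deps, err := bootstrap.NewDependencies(cfg) // DB, Redis, Logger
	if err != nil {
		log.Fatalf("init dependencies: %v", err)
	}
	handlers := bootstrap.Bootstrap(deps)
	app := fiber.New(fiber.Config{ErrorHandler: errs.ErrorHandler})
	routes.Register(app, handlers)
	if err := app.Listen(cfg.Addr); err != nil {
		log.Fatalf("server stopped: %v", err)
	}
}
```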
### Requirement: Dependencies Encapsulation
The system SHALL encapsulate base dependencies and business components in structs.
#### Scenario: Dependencies struct
- **WHEN** base dependencies are passed around
- **THEN** a Dependencies struct encapsulates the DB, Redis, and Logger
#### Scenario: Handlers struct
- **WHEN** the business handlers are returned
- **THEN** a Handlers struct encapsulates every Handler
- **AND** the struct carries TODO comments marking future extension points
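The two structs and Bootstrap itself could be sketched as follows; the user module and its constructors are stand-ins for whatever the project actually wires, and the logger type is an assumption:

```go
package bootstrap

import (
	"github.com/redis/go-redis/v9"
	"go.uber.org/zap" // logger type assumed
	"gorm.io/gorm"
)

// Dependencies bundles the base infrastructure created in main.
type Dependencies struct {
	DB     *gorm.DB
	Redis  *redis.Client
	Logger *zap.Logger
}

// Handlers bundles everything the router needs.
type Handlers struct {
	User *UserHandler
	// TODO: add handlers for new business modules here; main.go stays unchanged.
}

// Hypothetical user module stubs, standing in for real Store/Service/Handler types.
type UserStore struct{ db *gorm.DB }
type UserService struct{ store *UserStore }
type UserHandler struct{ svc *UserService }

func NewUserStore(db *gorm.DB) *UserStore        { return &UserStore{db: db} }
func NewUserService(s *UserStore) *UserService   { return &UserService{store: s} }
func NewUserHandler(s *UserService) *UserHandler { return &UserHandler{svc: s} }

// Bootstrap wires Stores into Services and Services into Handlers.
func Bootstrap(deps *Dependencies) *Handlers {
	userStore := NewUserStore(deps.DB)
	userSvc := NewUserService(userStore)
	return &Handlers{User: NewUserHandler(userSvc)}
}
```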

View File

@@ -0,0 +1,80 @@
# error-handling Specification
## Purpose
TBD - created by archiving change refactor-framework-cleanup. Update Purpose after archive.
## Requirements
### Requirement: Simplified AppError Structure
The system SHALL simplify the AppError struct by removing the redundant HTTPStatus field.
#### Scenario: AppError fields
- **WHEN** an AppError is created
- **THEN** the struct contains exactly 3 fields:
  - Code: the business error code
  - Message: the error message
  - Err: the underlying error (optional)
#### Scenario: Resolving the HTTP status code
- **WHEN** the ErrorHandler processes an AppError
- **THEN** the HTTP status is resolved on the fly via GetHTTPStatus(code)
- **AND** is never read from an AppError field
#### Scenario: No manually set status codes
- **WHEN** an AppError is created
- **THEN** no WithHTTPStatus() method is available
- **AND** Code and HTTP status therefore always stay consistent
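A sketch of the three-field struct and the lookup. The field set, constructor convention, and GetHTTPStatus come from this spec; the constant values and the mapping table are placeholders:

```go
package errs

import "net/http"

// Placeholder code values; the real constants live elsewhere.
const (
	CodeInvalidParam       = 40001
	CodeServiceUnavailable = 50301
)

// AppError carries exactly the three fields required above.
type AppError struct {
	Code    int    // business error code
	Message string // error message
	Err     error  // underlying error, optional
}

func (e *AppError) Error() string { return e.Message }
func (e *AppError) Unwrap() error { return e.Err }

// New matches the errors.New(code, message) convention used by handlers.
func New(code int, message string) *AppError {
	return &AppError{Code: code, Message: message}
}

// httpStatusByCode is a placeholder mapping; there is deliberately no
// HTTPStatus field on AppError and no WithHTTPStatus method.
var httpStatusByCode = map[int]int{
	CodeInvalidParam:       http.StatusBadRequest,
	CodeServiceUnavailable: http.StatusServiceUnavailable,
}

// GetHTTPStatus resolves the HTTP status from the code at handling time.
func GetHTTPStatus(code int) int {
	if s, ok := httpStatusByCode[code]; ok {
		return s
	}
	return http.StatusInternalServerError
}
```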
### Requirement: Unified Error Response Format
The system SHALL use one unified JSON response format (errors and successes share the same fields).
#### Scenario: Response structure
- **WHEN** any response is returned
- **THEN** the JSON contains exactly 4 fields:
  - code: the business error code (0 means success)
  - msg: the message (an error message, or "success")
  - data: the payload (populated on success, null on error)
  - timestamp: an ISO 8601 timestamp
#### Scenario: No HTTP status field in the body
- **WHEN** a response is returned
- **THEN** the JSON contains no httpstatus or http_status field
- **AND** the HTTP status code appears only in the response header
#### Scenario: Handler returns an error
- **WHEN** a handler function returns an error
- **THEN** the global ErrorHandler intercepts it
- **AND** builds the unified response based on the error type
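The global ErrorHandler producing this envelope might look like the sketch below, continuing the errs package above (it additionally imports "errors", "time", and Fiber); CodeInternal is a placeholder fallback:

```go
const CodeInternal = 50000 // placeholder fallback code

// ErrorHandler builds the four-field envelope for any returned error.
func ErrorHandler(c *fiber.Ctx, err error) error {
	code, msg := CodeInternal, "internal error" // fallback for unknown errors
	var appErr *AppError
	if errors.As(err, &appErr) {
		code, msg = appErr.Code, appErr.Message
	}
	return c.Status(GetHTTPStatus(code)).JSON(fiber.Map{
		"code":      code, // 0 only on the success path
		"msg":       msg,
		"data":      nil,
		"timestamp": time.Now().UTC().Format(time.RFC3339), // ISO 8601
	})
}
```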
### Requirement: Handler Error Return Convention
All handler functions SHALL propagate errors by returning them, leaving the global ErrorHandler to handle them uniformly.
#### Scenario: Business error
- **WHEN** a handler hits a business error
- **THEN** it returns the AppError created by errors.New(code, message)
- **AND** never calls response.Error() directly
#### Scenario: Parameter validation error
- **WHEN** request parameter validation fails
- **THEN** the handler returns errors.New(CodeInvalidParam, "a specific description")
#### Scenario: Success response
- **WHEN** the handler succeeds
- **THEN** it calls response.Success(c, data)
- **AND** returns nil
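A handler following the convention end to end; the handler type, the service call, and the response.Success signature are illustrative, not taken from the project:

```go
// GetUser returns errors for the global ErrorHandler and writes success itself.
func (h *UserHandler) GetUser(c *fiber.Ctx) error {
	id, err := c.ParamsInt("id")
	if err != nil {
		// Validation failure: return the AppError, never write the response here.
		return errs.New(errs.CodeInvalidParam, "id must be an integer")
	}
	user, err := h.svc.Get(c.UserContext(), uint(id))
	if err != nil {
		return err // AppErrors bubble up to the global ErrorHandler
	}
	response.Success(c, user) // writes the unified envelope with code 0
	return nil
}
```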
### Requirement: Standardized Error Codes
The system SHALL use standardized error codes and drop the backward-compatible aliases.
#### Scenario: Parameter validation code
- **WHEN** parameter validation fails
- **THEN** CodeInvalidParam is used
- **AND** CodeBadRequest is not (the alias has been removed)
#### Scenario: Service-unavailable code
- **WHEN** a service is unavailable
- **THEN** CodeServiceUnavailable is used
- **AND** CodeAuthServiceUnavailable is not (the alias has been removed)

View File

@@ -0,0 +1,83 @@
# openapi-generation Specification
## Purpose
TBD - created by archiving change auto-generate-openapi-docs. Update Purpose after archive.
## Requirements
### Requirement: Auto-generate the OpenAPI document at service startup
At startup the system SHALL automatically generate an OpenAPI 3.0 specification document and save it to the project root.
#### Scenario: Document generated on normal startup
- **WHEN** the startup flow passes route registration
- **THEN** the system invokes the document-generation logic automatically
- **AND** an `openapi.yaml` file is produced in the project root
- **AND** the file contains definitions for every registered API endpoint
#### Scenario: Graceful handling of generation failures
- **WHEN** generation fails (e.g. a write failure or a permission problem)
- **THEN** the system records the error in the application log
- **AND** the log entry carries the full error details and stack
- **AND** startup continues; a generation failure never aborts the service
#### Scenario: When generation runs
- **WHEN** the service starts in any environment (development, test, production)
- **THEN** the generation logic runs
- **AND** no extra configuration or startup flag is needed
### Requirement: Document output path convention
The system SHALL write the generated OpenAPI document to a fixed, predictable location.
#### Scenario: Saved to the project root
- **WHEN** generation succeeds
- **THEN** the file is saved to the project root (`./openapi.yaml` relative to the working directory)
- **AND** an existing file is overwritten
- **AND** the file mode is set to 0644 (owner read/write, others read-only)
#### Scenario: Ensure the output directory exists
- **WHEN** the parent directory of the output path is missing
- **THEN** the system creates the required directory structure
- **AND** directories are created with mode 0755
### Requirement: Reuse the existing generation logic
Document generation SHALL reuse the project's existing OpenAPI machinery and avoid duplicated code.
#### Scenario: Drive the existing Registry mechanism
- **WHEN** generation runs
- **THEN** the document generator is created via `pkg/openapi.Generator`
- **AND** the route-registration functions in `internal/routes` are invoked
- **AND** a non-nil Generator instance is passed in to activate document collection
- **AND** the Generator's Save method writes the YAML file
#### Scenario: Simulate route registration without starting the service
- **WHEN** the route-registration functions run for generation
- **THEN** a temporary Fiber app instance is created for registration
- **AND** nil dependencies are passed in (no real handler logic executes)
- **AND** the Fiber app is discarded after registration; Listen is never called
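A sketch of the shared generation function these scenarios describe; the Generator constructor name, the routes.Register signature, and the package paths are assumptions beyond what the spec states:

```go
package docs

import (
	"fmt"
	"os"
	"path/filepath"

	"github.com/gofiber/fiber/v2"

	"example.com/project/internal/routes" // hypothetical paths
	"example.com/project/pkg/openapi"
)

// GenerateOpenAPIDocs drives the existing Registry machinery.
func GenerateOpenAPIDocs(outputPath string) error {
	if err := os.MkdirAll(filepath.Dir(outputPath), 0o755); err != nil {
		return fmt.Errorf("ensure output directory: %w", err)
	}
	gen := openapi.NewGenerator()  // pkg/openapi; constructor name assumed
	app := fiber.New()             // temporary app; Listen is never called
	routes.Register(app, nil, gen) // nil deps: handler logic never executes
	return gen.Save(outputPath)    // writes the YAML file (0644)
}
```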
### Requirement: Keep the standalone generation tool backward compatible
The system SHALL keep the standalone document-generation tool for offline generation use cases.
#### Scenario: Generate the document via make
- **WHEN** the user runs `make docs`
- **THEN** `cmd/gendocs/main.go` is invoked
- **AND** the document is written to the configured location (default `./docs/admin-openapi.yaml`)
- **AND** generation works regardless of whether the service is running
#### Scenario: Standalone tool and auto-generation share code
- **WHEN** the standalone tool and startup auto-generation both need to generate the document
- **THEN** both call the same underlying generation function
- **AND** differ only in the output-path argument
- **AND** no logic is duplicated
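The standalone entry point then becomes a thin wrapper over the same function; the flag name and the docs package path are illustrative:

```go
// cmd/gendocs/main.go
package main

import (
	"flag"
	"log"

	"example.com/project/internal/docs" // hypothetical home of GenerateOpenAPIDocs
)

func main() {
	out := flag.String("out", "./docs/admin-openapi.yaml", "output path for the OpenAPI document")
	flag.Parse()
	if err := docs.GenerateOpenAPIDocs(*out); err != nil {
		log.Fatal(err) // the standalone tool fails loudly, unlike the startup hook
	}
}
```

At startup, the service would call the same function with `./openapi.yaml` and merely log a failure instead of exiting.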