feat: implement enterprise device authorization and archive the OpenSpec change
All checks were successful
Build and deploy to test environment (no SSH) / build-and-deploy (push) Successful in 5m39s

- Add enterprise device authorization module (Model, DTO, Service, Handler, Store)
- Implement full business logic for device authorization: create, query, update, delete
- Add association between enterprise card authorization and device authorization
- Add 2 database migration scripts
- Sync OpenSpec delta specs to main specs
- Archive the add-enterprise-device-authorization change
- Update API docs and route configuration
- Add full integration and unit test coverage
2026-01-29 13:18:49 +08:00
parent e87513541b
commit b02175271a
118 changed files with 14306 additions and 472 deletions


@@ -96,7 +96,7 @@ Common artifact patterns:
 **spec-driven schema** (proposal → specs → design → tasks):
 - **proposal.md**: Ask user about the change if not clear. Fill in Why, What Changes, Capabilities, Impact.
   - The Capabilities section is critical - each capability listed will need a spec file.
-- **specs/*.md**: Create one spec per capability listed in the proposal.
+- **specs/<capability>/spec.md**: Create one spec per capability listed in the proposal's Capabilities section (use the capability name, not the change name).
 - **design.md**: Document technical decisions, architecture, and implementation approach.
 - **tasks.md**: Break down implementation into checkboxed tasks.


@@ -6,7 +6,7 @@ compatibility: Requires openspec CLI.
 metadata:
   author: openspec
   version: "1.0"
-  generatedBy: "1.0.1"
+  generatedBy: "1.0.2"
 ---
 Implement tasks from an OpenSpec change.


@@ -6,7 +6,7 @@ compatibility: Requires openspec CLI.
 metadata:
   author: openspec
   version: "1.0"
-  generatedBy: "1.0.1"
+  generatedBy: "1.0.2"
 ---
 Archive a completed change in the experimental workflow.


@@ -6,7 +6,7 @@ compatibility: Requires openspec CLI.
 metadata:
   author: openspec
   version: "1.0"
-  generatedBy: "1.0.1"
+  generatedBy: "1.0.2"
 ---
 Archive multiple completed changes in a single operation.


@@ -6,7 +6,7 @@ compatibility: Requires openspec CLI.
 metadata:
   author: openspec
   version: "1.0"
-  generatedBy: "1.0.1"
+  generatedBy: "1.0.2"
 ---
 Continue working on a change by creating the next artifact.
@@ -100,7 +100,7 @@ Common artifact patterns:
 **spec-driven schema** (proposal → specs → design → tasks):
 - **proposal.md**: Ask user about the change if not clear. Fill in Why, What Changes, Capabilities, Impact.
   - The Capabilities section is critical - each capability listed will need a spec file.
-- **specs/*.md**: Create one spec per capability listed in the proposal.
+- **specs/<capability>/spec.md**: Create one spec per capability listed in the proposal's Capabilities section (use the capability name, not the change name).
 - **design.md**: Document technical decisions, architecture, and implementation approach.
 - **tasks.md**: Break down implementation into checkboxed tasks.


@@ -6,7 +6,7 @@ compatibility: Requires openspec CLI.
 metadata:
   author: openspec
   version: "1.0"
-  generatedBy: "1.0.1"
+  generatedBy: "1.0.2"
 ---
 Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.


@@ -6,7 +6,7 @@ compatibility: Requires openspec CLI.
 metadata:
   author: openspec
   version: "1.0"
-  generatedBy: "1.0.1"
+  generatedBy: "1.0.2"
 ---
 Fast-forward through artifact creation - generate everything needed to start implementation in one go.


@@ -6,7 +6,7 @@ compatibility: Requires openspec CLI.
 metadata:
   author: openspec
   version: "1.0"
-  generatedBy: "1.0.1"
+  generatedBy: "1.0.2"
 ---
 Start a new change using the experimental artifact-driven approach.


@@ -6,7 +6,7 @@ compatibility: Requires openspec CLI.
 metadata:
   author: openspec
   version: "1.0"
-  generatedBy: "1.0.1"
+  generatedBy: "1.0.2"
 ---
 Guide the user through their first complete OpenSpec workflow cycle. This is a teaching experience—you'll do real work in their codebase while explaining each step.


@@ -6,7 +6,7 @@ compatibility: Requires openspec CLI.
 metadata:
   author: openspec
   version: "1.0"
-  generatedBy: "1.0.1"
+  generatedBy: "1.0.2"
 ---
 Sync delta specs from a change to main specs.


@@ -6,7 +6,7 @@ compatibility: Requires openspec CLI.
 metadata:
   author: openspec
   version: "1.0"
-  generatedBy: "1.0.1"
+  generatedBy: "1.0.2"
 ---
 Verify that an implementation matches the change artifacts (specs, tasks, design).


@@ -0,0 +1,150 @@
---
description: Implement tasks from an OpenSpec change (Experimental)
argument-hint: command arguments
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name (e.g., `/opsx:apply add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt for available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using `/opsx:continue`
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
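The checkbox flip described in step 6 is a plain text substitution. A minimal sketch using GNU sed (`-i` in-place editing); the helper name, task text, and file path are hypothetical, and the task text is assumed to contain no sed metacharacters:

```shell
#!/usr/bin/env bash
# Mark one task as complete by flipping its markdown checkbox.
mark_done() {
  task="$1"; file="$2"
  # Replace the unchecked box for this exact task text with a checked one.
  sed -i "s/- \[ \] ${task}/- [x] ${task}/" "$file"
}

# Demo against a hypothetical tasks file.
cat > /tmp/tasks-demo.md <<'EOF'
- [x] Scaffold module
- [ ] Add login endpoint
- [ ] Write tests
EOF
mark_done "Add login endpoint" /tmp/tasks-demo.md
grep -c '^- \[x\]' /tmp/tasks-demo.md   # prints 2
```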
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! Ready to archive this change.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly


@@ -0,0 +1,155 @@
---
description: Archive a completed change in the experimental workflow
argument-hint: command arguments
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name after `/opsx:archive` (e.g., `/opsx:archive add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Prompt user for confirmation to continue
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Prompt user for confirmation to continue
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
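The `- [ ]` vs `- [x]` counting in step 3 can be done with grep. A sketch, assuming standard markdown checkboxes at the start of each task line (the file path and contents are illustrative):

```shell
# Count complete vs incomplete tasks in a tasks file.
tasks=/tmp/archive-tasks-demo.md
cat > "$tasks" <<'EOF'
- [x] Create model
- [x] Add handler
- [ ] Write integration tests
EOF
incomplete=$(grep -c '^- \[ \]' "$tasks")
complete=$(grep -c '^- \[x\]' "$tasks")
echo "$complete complete, $incomplete incomplete"   # prints: 2 complete, 1 incomplete
```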
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If user chooses sync, execute `/opsx:sync` logic. Proceed to archive regardless of choice.
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
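The date-stamped move with the collision check from step 5 might look like the following sketch (paths are throwaway demo paths, not the real repo layout):

```shell
# Archive a change under a date-stamped name, refusing to overwrite.
root=/tmp/opsx-archive-demo
name=add-auth
mkdir -p "$root/openspec/changes/$name" "$root/openspec/changes/archive"
target="$root/openspec/changes/archive/$(date +%F)-$name"   # YYYY-MM-DD-<name>
if [ -e "$target" ]; then
  # Target exists: fail rather than clobber an earlier archive.
  echo "Archive failed: $target already exists" >&2
else
  mv "$root/openspec/changes/$name" "$target"
  echo "Archived to $target"
fi
```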
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Spec sync status (synced / sync skipped / no delta specs)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs
All artifacts complete. All tasks complete.
```
**Output On Success (No Delta Specs)**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** No delta specs
All artifacts complete. All tasks complete.
```
**Output On Success With Warnings**
```
## Archive Complete (with warnings)
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** Sync skipped (user chose to skip)
**Warnings:**
- Archived with 2 incomplete artifacts
- Archived with 3 incomplete tasks
- Delta spec sync was skipped (user chose to skip)
Review the archive if this was not intentional.
```
**Output On Error (Archive Exists)**
```
## Archive Failed
**Change:** <change-name>
**Target:** openspec/changes/archive/YYYY-MM-DD-<name>/
Target archive directory already exists.
**Options:**
1. Rename the existing archive
2. Delete the existing archive if it's a duplicate
3. Wait until a different date to archive
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use /opsx:sync approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting


@@ -0,0 +1,240 @@
---
description: Archive multiple completed changes at once
argument-hint: command arguments
---
Archive multiple completed changes in a single operation.
This skill allows you to batch-archive changes, handling spec conflicts intelligently by checking the codebase to determine what's actually implemented.
**Input**: None required (prompts for selection)
**Steps**
1. **Get active changes**
Run `openspec list --json` to get all active changes.
If no active changes exist, inform user and stop.
2. **Prompt for change selection**
Use **AskUserQuestion tool** with multi-select to let user choose changes:
- Show each change with its schema
- Include an option for "All changes"
- Allow any number of selections (1+ works, 2+ is the typical use case)
**IMPORTANT**: Do NOT auto-select. Always let the user choose.
3. **Batch validation - gather status for all selected changes**
For each selected change, collect:
a. **Artifact status** - Run `openspec status --change "<name>" --json`
- Parse `schemaName` and `artifacts` list
- Note which artifacts are `done` vs other states
b. **Task completion** - Read `openspec/changes/<name>/tasks.md`
- Count `- [ ]` (incomplete) vs `- [x]` (complete)
- If no tasks file exists, note as "No tasks"
c. **Delta specs** - Check `openspec/changes/<name>/specs/` directory
- List which capability specs exist
- For each, extract requirement names (lines matching `### Requirement: <name>`)
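Extracting those requirement names is a one-line sed. A sketch against a hypothetical delta spec (the requirement names and file path are made up for the demo):

```shell
# Pull requirement names out of a delta spec by matching the heading pattern.
spec=/tmp/delta-spec-demo.md
cat > "$spec" <<'EOF'
### Requirement: OAuth Provider Integration
The system SHALL support OAuth login.
### Requirement: Token Refresh
The system SHALL refresh expiring tokens.
EOF
sed -n 's/^### Requirement: //p' "$spec"   # prints the two names, one per line
```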
4. **Detect spec conflicts**
Build a map of `capability -> [changes that touch it]`:
```
auth -> [change-a, change-b] <- CONFLICT (2+ changes)
api -> [change-c] <- OK (only 1 change)
```
A conflict exists when 2+ selected changes have delta specs for the same capability.
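The map above amounts to grouping `<change>/specs/<capability>` paths by capability and flagging any capability owned by two or more changes. A sketch over a throwaway directory tree (all names are hypothetical):

```shell
# Detect capabilities touched by 2+ changes in a demo tree.
root=/tmp/opsx-conflict-demo
mkdir -p "$root/change-a/specs/auth" "$root/change-b/specs/auth" "$root/change-c/specs/api"
# Emit "<capability> <change>" pairs, then report capabilities with 2+ owners.
for d in "$root"/*/specs/*/; do
  capability=$(basename "$d")
  change=$(basename "$(dirname "$(dirname "$d")")")
  echo "$capability $change"
done | sort | awk '{owners[$1] = owners[$1] " " $2; n[$1]++}
  END {for (c in owners) if (n[c] > 1) print "CONFLICT:", c, "->" owners[c]}'
# prints: CONFLICT: auth -> change-a change-b
```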
5. **Resolve conflicts agentically**
**For each conflict**, investigate the codebase:
a. **Read the delta specs** from each conflicting change to understand what each claims to add/modify
b. **Search the codebase** for implementation evidence:
- Look for code implementing requirements from each delta spec
- Check for related files, functions, or tests
c. **Determine resolution**:
- If only one change is actually implemented -> sync that one's specs
- If both implemented -> apply in chronological order (older first, newer overwrites)
- If neither implemented -> skip spec sync, warn user
d. **Record resolution** for each conflict:
- Which change's specs to apply
- In what order (if both)
- Rationale (what was found in codebase)
6. **Show consolidated status table**
Display a table summarizing all changes:
```
| Change | Artifacts | Tasks | Specs | Conflicts | Status |
|---------------------|-----------|-------|---------|-----------|--------|
| schema-management | Done | 5/5 | 2 delta | None | Ready |
| project-config | Done | 3/3 | 1 delta | None | Ready |
| add-oauth | Done | 4/4 | 1 delta | auth (!) | Ready* |
| add-verify-skill | 1 left | 2/5 | None | None | Warn |
```
For conflicts, show the resolution:
```
* Conflict resolution:
- auth spec: Will apply add-oauth then add-jwt (both implemented, chronological order)
```
For incomplete changes, show warnings:
```
Warnings:
- add-verify-skill: 1 incomplete artifact, 3 incomplete tasks
```
7. **Confirm batch operation**
Use **AskUserQuestion tool** with a single confirmation:
- "Archive N changes?" with options based on status
- Options might include:
- "Archive all N changes"
- "Archive only N ready changes (skip incomplete)"
- "Cancel"
If there are incomplete changes, make clear they'll be archived with warnings.
8. **Execute archive for each confirmed change**
Process changes in the determined order (respecting conflict resolution):
a. **Sync specs** if delta specs exist:
- Use the openspec-sync-specs approach (agent-driven intelligent merge)
- For conflicts, apply in resolved order
- Track if sync was done
b. **Perform the archive**:
```bash
mkdir -p openspec/changes/archive
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
c. **Track outcome** for each change:
- Success: archived successfully
- Failed: error during archive (record error)
- Skipped: user chose not to archive (if applicable)
9. **Display summary**
Show final results:
```
## Bulk Archive Complete
Archived 3 changes:
- schema-management-cli -> archive/2026-01-19-schema-management-cli/
- project-config -> archive/2026-01-19-project-config/
- add-oauth -> archive/2026-01-19-add-oauth/
Skipped 1 change:
- add-verify-skill (user chose not to archive incomplete)
Spec sync summary:
- 4 delta specs synced to main specs
- 1 conflict resolved (auth: applied both in chronological order)
```
If any failures:
```
Failed 1 change:
- some-change: Archive directory already exists
```
**Conflict Resolution Examples**
Example 1: Only one implemented
```
Conflict: specs/auth/spec.md touched by [add-oauth, add-jwt]
Checking add-oauth:
- Delta adds "OAuth Provider Integration" requirement
- Searching codebase... found src/auth/oauth.ts implementing OAuth flow
Checking add-jwt:
- Delta adds "JWT Token Handling" requirement
- Searching codebase... no JWT implementation found
Resolution: Only add-oauth is implemented. Will sync add-oauth specs only.
```
Example 2: Both implemented
```
Conflict: specs/api/spec.md touched by [add-rest-api, add-graphql]
Checking add-rest-api (created 2026-01-10):
- Delta adds "REST Endpoints" requirement
- Searching codebase... found src/api/rest.ts
Checking add-graphql (created 2026-01-15):
- Delta adds "GraphQL Schema" requirement
- Searching codebase... found src/api/graphql.ts
Resolution: Both implemented. Will apply add-rest-api specs first,
then add-graphql specs (chronological order, newer takes precedence).
```
**Output On Success**
```
## Bulk Archive Complete
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
- <change-2> -> archive/YYYY-MM-DD-<change-2>/
Spec sync summary:
- N delta specs synced to main specs
- No conflicts (or: M conflicts resolved)
```
**Output On Partial Success**
```
## Bulk Archive Complete (partial)
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
Skipped M changes:
- <change-2> (user chose not to archive incomplete)
Failed K changes:
- <change-3>: Archive directory already exists
```
**Output When No Changes**
```
## No Changes to Archive
No active changes found. Use `/opsx:new` to create a new change.
```
**Guardrails**
- Allow any number of changes (1+ is fine, 2+ is the typical use case)
- Always prompt for selection, never auto-select
- Detect spec conflicts early and resolve by checking codebase
- When both changes are implemented, apply specs in chronological order
- Skip spec sync only when implementation is missing (warn user)
- Show clear per-change status before confirming
- Use single confirmation for entire batch
- Track and report all outcomes (success/skip/fail)
- Preserve .openspec.yaml when moving to archive
- Archive directory target uses current date: YYYY-MM-DD-<name>
- If archive target exists, fail that change but continue with others


@@ -0,0 +1,112 @@
---
description: Continue working on a change - create the next artifact (Experimental)
argument-hint: command arguments
---
Continue working on a change by creating the next artifact.
**Input**: Optionally specify a change name after `/opsx:continue` (e.g., `/opsx:continue add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes sorted by most recently modified. Then use the **AskUserQuestion tool** to let the user select which change to work on.
Present the top 3-4 most recently modified changes as options, showing:
- Change name
- Schema (from `schema` field if present, otherwise "spec-driven")
- Status (e.g., "0/5 tasks", "complete", "no tasks")
- How recently it was modified (from `lastModified` field)
Mark the most recently modified change as "(Recommended)" since it's likely what the user wants to continue.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check current status**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand current state. The response includes:
- `schemaName`: The workflow schema being used (e.g., "spec-driven")
- `artifacts`: Array of artifacts with their status ("done", "ready", "blocked")
- `isComplete`: Boolean indicating if all artifacts are complete
3. **Act based on status**:
---
**If all artifacts are complete (`isComplete: true`)**:
- Congratulate the user
- Show final status including the schema used
- Suggest: "All artifacts created! You can now implement this change or archive it."
- STOP
---
**If artifacts are ready to create** (status shows artifacts with `status: "ready"`):
- Pick the FIRST artifact with `status: "ready"` from the status output
- Get its instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- Parse the JSON. The key fields are:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- **Create the artifact file**:
- Read any completed dependency files for context
- Use `template` as the structure - fill in its sections
- Apply `context` and `rules` as constraints when writing - but do NOT copy them into the file
- Write to the output path specified in instructions
- Show what was created and what's now unlocked
- STOP after creating ONE artifact
---
**If no artifacts are ready (all blocked)**:
- This shouldn't happen with a valid schema
- Show status and suggest checking for issues
4. **After creating an artifact, show progress**
```bash
openspec status --change "<name>"
```
**Output**
After each invocation, show:
- Which artifact was created
- Schema workflow being used
- Current progress (N/M complete)
- What artifacts are now unlocked
- Prompt: "Run `/opsx:continue` to create the next artifact"
**Artifact Creation Guidelines**
The artifact types and their purpose depend on the schema. Use the `instruction` field from the instructions output to understand what to create.
Common artifact patterns:
**spec-driven schema** (proposal → specs → design → tasks):
- **proposal.md**: Ask user about the change if not clear. Fill in Why, What Changes, Capabilities, Impact.
- The Capabilities section is critical - each capability listed will need a spec file.
- **specs/<capability>/spec.md**: Create one spec per capability listed in the proposal's Capabilities section (use the capability name, not the change name).
- **design.md**: Document technical decisions, architecture, and implementation approach.
- **tasks.md**: Break down implementation into checkboxed tasks.
For other schemas, follow the `instruction` field from the CLI output.
**Guardrails**
- Create ONE artifact per invocation
- Always read dependency artifacts before creating a new one
- Never skip artifacts or create out of order
- If context is unclear, ask the user before creating
- Verify the artifact file exists after writing before marking progress
- Use the schema's artifact sequence, don't assume specific artifact names
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output


@@ -0,0 +1,172 @@
---
description: Enter explore mode - think through ideas, investigate problems, clarify requirements
argument-hint: command arguments
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first (e.g., start a change with `/opsx:new` or `/opsx:ff`). You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
**Input**: The argument after `/opsx:explore` is whatever the user wants to think about. Could be:
- A vague idea: "real-time collaboration"
- A specific problem: "the auth system is getting unwieldy"
- A change name: "add-dark-mode" (to explore in context of that change)
- A comparison: "postgres vs sqlite for this"
- Nothing (just enter explore mode)
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
If the user mentioned a specific change name, read its artifacts for context.
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create one?"
→ Can transition to `/opsx:new` or `/opsx:ff`
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Ending Discovery
There's no required ending. Discovery might:
- **Flow into action**: "Ready to start? `/opsx:new` or `/opsx:ff`"
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When things crystallize, you might offer a summary - but it's optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own

.codex/prompts/opsx-ff.md

@@ -0,0 +1,92 @@
---
description: Create a change and generate all artifacts needed for implementation in one go
argument-hint: command arguments
---
Fast-forward through artifact creation - generate everything needed to start implementation.
**Input**: The argument after `/opsx:ff` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
1. **If no input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
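A naive kebab-case derivation can be sketched in shell. This version just lowercases and hyphenates; unlike the example in step 1 it does not abbreviate words such as "authentication" to "auth":

```shell
# Lowercase, collapse runs of non-alphanumerics to single hyphens, trim the ends.
kebab() {
  echo "$1" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/^-//; s/-$//'
}
kebab "Add User Authentication"   # prints: add-user-authentication
```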
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "✓ Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
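The ready/done bookkeeping in steps 3-4 can be sketched as follows (assuming the `applyRequires`, `artifacts`, `id`, and `status` fields as described in the status JSON above):

```python
def apply_ready(status: dict) -> bool:
    """True once every artifact listed in applyRequires has status 'done'."""
    done = {a["id"] for a in status["artifacts"] if a["status"] == "done"}
    return set(status["applyRequires"]) <= done

def next_ready_artifact(status: dict):
    """First artifact whose dependencies are satisfied but is not done yet."""
    for artifact in status["artifacts"]:
        if artifact["status"] == "ready":
            return artifact["id"]
    return None
```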
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` to start implementing."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use the `template` as a starting point, filling in based on context
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to next


@@ -0,0 +1,67 @@
---
description: Start a new change using the experimental artifact workflow (OPSX)
argument-hint: command arguments
---
Start a new change using the experimental artifact-driven approach.
**Input**: The argument after `/opsx:new` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
1. **If no input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Determine the workflow schema**
Use the default schema (omit `--schema`) unless the user explicitly requests a different workflow.
**Use a different schema only if the user mentions:**
- A specific schema name → use `--schema <name>`
- "show workflows" or "what workflows" → run `openspec schemas --json` and let them choose
**Otherwise**: Omit `--schema` to use the default.
3. **Create the change directory**
```bash
openspec new change "<name>"
```
Add `--schema <name>` only if the user requested a specific workflow.
This creates a scaffolded change at `openspec/changes/<name>/` with the selected schema.
4. **Show the artifact status**
```bash
openspec status --change "<name>"
```
This shows which artifacts need to be created and which are ready (dependencies satisfied).
5. **Get instructions for the first artifact**
The first artifact depends on the schema. Check the status output to find the first artifact with status "ready".
```bash
openspec instructions <first-artifact-id> --change "<name>"
```
This outputs the template and context for creating the first artifact.
6. **STOP and wait for user direction**
**Output**
After completing the steps, summarize:
- Change name and location
- Schema/workflow being used and its artifact sequence
- Current status (0/N artifacts complete)
- The template for the first artifact
- Prompt: "Ready to create the first artifact? Run `/opsx:continue` or just describe what this change is about and I'll draft it."
**Guardrails**
- Do NOT create any artifacts yet - just show the instructions
- Do NOT advance beyond showing the first artifact template
- If the name is invalid (not kebab-case), ask for a valid name
- If a change with that name already exists, suggest using `/opsx:continue` instead
- Pass --schema if using a non-default workflow


@@ -0,0 +1,523 @@
---
description: Guided onboarding - walk through a complete OpenSpec workflow cycle with narration
argument-hint: command arguments
---
Guide the user through their first complete OpenSpec workflow cycle. This is a teaching experience—you'll do real work in their codebase while explaining each step.
---
## Preflight
Before starting, check if OpenSpec is initialized:
```bash
openspec status --json 2>&1 || echo "NOT_INITIALIZED"
```
**If not initialized:**
> OpenSpec isn't set up in this project yet. Run `openspec init` first, then come back to `/opsx:onboard`.
Stop here if not initialized.
---
## Phase 1: Welcome
Display:
```
## Welcome to OpenSpec!
I'll walk you through a complete change cycle—from idea to implementation—using a real task in your codebase. Along the way, you'll learn the workflow by doing it.
**What we'll do:**
1. Pick a small, real task in your codebase
2. Explore the problem briefly
3. Create a change (the container for our work)
4. Build the artifacts: proposal → specs → design → tasks
5. Implement the tasks
6. Archive the completed change
**Time:** ~15-20 minutes
Let's start by finding something to work on.
```
---
## Phase 2: Task Selection
### Codebase Analysis
Scan the codebase for small improvement opportunities. Look for:
1. **TODO/FIXME comments** - Search for `TODO`, `FIXME`, `HACK`, `XXX` in code files
2. **Missing error handling** - `catch` blocks that swallow errors, risky operations without try-catch
3. **Functions without tests** - Cross-reference `src/` with test directories
4. **Type issues** - `any` types in TypeScript files (`: any`, `as any`)
5. **Debug artifacts** - `console.log`, `console.debug`, `debugger` statements in non-debug code
6. **Missing validation** - User input handlers without validation
Also check recent git activity:
```bash
git log --oneline -10 2>/dev/null || echo "No git history"
```
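The marker scan can be sketched as below (operating on already-read file contents; a real scan would walk the tree or shell out to a search tool such as ripgrep):

```python
import re

MARKERS = re.compile(r"\b(TODO|FIXME|HACK|XXX)\b")

def scan_for_markers(files: dict):
    """Yield (filename, line_number, line) for each marker hit.

    `files` maps filename -> file contents.
    """
    for name, text in files.items():
        for number, line in enumerate(text.splitlines(), start=1):
            if MARKERS.search(line):
                yield name, number, line.strip()
```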
### Present Suggestions
From your analysis, present 3-4 specific suggestions:
```
## Task Suggestions
Based on scanning your codebase, here are some good starter tasks:
**1. [Most promising task]**
Location: `src/path/to/file.ts:42`
Scope: ~1-2 files, ~20-30 lines
Why it's good: [brief reason]
**2. [Second task]**
Location: `src/another/file.ts`
Scope: ~1 file, ~15 lines
Why it's good: [brief reason]
**3. [Third task]**
Location: [location]
Scope: [estimate]
Why it's good: [brief reason]
**4. Something else?**
Tell me what you'd like to work on.
Which task interests you? (Pick a number or describe your own)
```
**If nothing found:** Fall back to asking what the user wants to build:
> I didn't find obvious quick wins in your codebase. What's something small you've been meaning to add or fix?
### Scope Guardrail
If the user picks or describes something too large (major feature, multi-day work):
```
That's a valuable task, but it's probably larger than ideal for your first OpenSpec run-through.
For learning the workflow, smaller is better—it lets you see the full cycle without getting stuck in implementation details.
**Options:**
1. **Slice it smaller** - What's the smallest useful piece of [their task]? Maybe just [specific slice]?
2. **Pick something else** - One of the other suggestions, or a different small task?
3. **Do it anyway** - If you really want to tackle this, we can. Just know it'll take longer.
What would you prefer?
```
Let the user override if they insist—this is a soft guardrail.
---
## Phase 3: Explore Demo
Once a task is selected, briefly demonstrate explore mode:
```
Before we create a change, let me quickly show you **explore mode**—it's how you think through problems before committing to a direction.
```
Spend 1-2 minutes investigating the relevant code:
- Read the file(s) involved
- Draw a quick ASCII diagram if it helps
- Note any considerations
```
## Quick Exploration
[Your brief analysis—what you found, any considerations]
┌─────────────────────────────────────────┐
│ [Optional: ASCII diagram if helpful] │
└─────────────────────────────────────────┘
Explore mode (`/opsx:explore`) is for this kind of thinking—investigating before implementing. You can use it anytime you need to think through a problem.
Now let's create a change to hold our work.
```
**PAUSE** - Wait for user acknowledgment before proceeding.
---
## Phase 4: Create the Change
**EXPLAIN:**
```
## Creating a Change
A "change" in OpenSpec is a container for all the thinking and planning around a piece of work. It lives in `openspec/changes/<name>/` and holds your artifacts—proposal, specs, design, tasks.
Let me create one for our task.
```
**DO:** Create the change with a derived kebab-case name:
```bash
openspec new change "<derived-name>"
```
**SHOW:**
```
Created: `openspec/changes/<name>/`
The folder structure:
```
openspec/changes/<name>/
├── proposal.md ← Why we're doing this (empty, we'll fill it)
├── design.md ← How we'll build it (empty)
├── specs/ ← Detailed requirements (empty)
└── tasks.md ← Implementation checklist (empty)
```
Now let's fill in the first artifact—the proposal.
```
---
## Phase 5: Proposal
**EXPLAIN:**
```
## The Proposal
The proposal captures **why** we're making this change and **what** it involves at a high level. It's the "elevator pitch" for the work.
I'll draft one based on our task.
```
**DO:** Draft the proposal content (don't save yet):
```
Here's a draft proposal:
---
## Why
[1-2 sentences explaining the problem/opportunity]
## What Changes
[Bullet points of what will be different]
## Capabilities
### New Capabilities
- `<capability-name>`: [brief description]
### Modified Capabilities
<!-- If modifying existing behavior -->
## Impact
- `src/path/to/file.ts`: [what changes]
- [other files if applicable]
---
Does this capture the intent? I can adjust before we save it.
```
**PAUSE** - Wait for user approval/feedback.
After approval, save the proposal:
```bash
openspec instructions proposal --change "<name>" --json
```
Then write the content to `openspec/changes/<name>/proposal.md`.
```
Proposal saved. This is your "why" document—you can always come back and refine it as understanding evolves.
Next up: specs.
```
---
## Phase 6: Specs
**EXPLAIN:**
```
## Specs
Specs define **what** we're building in precise, testable terms. They use a requirement/scenario format that makes expected behavior crystal clear.
For a small task like this, we might only need one spec file.
```
**DO:** Create the spec file:
```bash
mkdir -p openspec/changes/<name>/specs/<capability-name>
```
Draft the spec content:
```
Here's the spec:
---
## ADDED Requirements
### Requirement: <Name>
<Description of what the system should do>
#### Scenario: <Scenario name>
- **WHEN** <trigger condition>
- **THEN** <expected outcome>
- **AND** <additional outcome if needed>
---
This format—WHEN/THEN/AND—makes requirements testable. You can literally read them as test cases.
```
Save to `openspec/changes/<name>/specs/<capability>/spec.md`.
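That readability is literal: a scenario heading maps straight onto a test name. A rough sketch, assuming headings in the `#### Scenario: <name>` form shown above:

```python
import re

def scenario_to_test_name(heading: str) -> str:
    """Turn '#### Scenario: Rejects expired tokens' into 'test_rejects_expired_tokens'."""
    title = heading.split("Scenario:", 1)[1]
    return "test_" + re.sub(r"[^a-z0-9]+", "_", title.lower()).strip("_")
```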
---
## Phase 7: Design
**EXPLAIN:**
```
## Design
The design captures **how** we'll build it—technical decisions, tradeoffs, approach.
For small changes, this might be brief. That's fine—not every change needs deep design discussion.
```
**DO:** Draft design.md:
```
Here's the design:
---
## Context
[Brief context about the current state]
## Goals / Non-Goals
**Goals:**
- [What we're trying to achieve]
**Non-Goals:**
- [What's explicitly out of scope]
## Decisions
### Decision 1: [Key decision]
[Explanation of approach and rationale]
---
For a small task, this captures the key decisions without over-engineering.
```
Save to `openspec/changes/<name>/design.md`.
---
## Phase 8: Tasks
**EXPLAIN:**
```
## Tasks
Finally, we break the work into implementation tasks—checkboxes that drive the apply phase.
These should be small, clear, and in logical order.
```
**DO:** Generate tasks based on specs and design:
```
Here are the implementation tasks:
---
## 1. [Category or file]
- [ ] 1.1 [Specific task]
- [ ] 1.2 [Specific task]
## 2. Verify
- [ ] 2.1 [Verification step]
---
Each checkbox becomes a unit of work in the apply phase. Ready to implement?
```
**PAUSE** - Wait for user to confirm they're ready to implement.
Save to `openspec/changes/<name>/tasks.md`.
---
## Phase 9: Apply (Implementation)
**EXPLAIN:**
```
## Implementation
Now we implement each task, checking them off as we go. I'll announce each one and occasionally note how the specs/design informed the approach.
```
**DO:** For each task:
1. Announce: "Working on task N: [description]"
2. Implement the change in the codebase
3. Reference specs/design naturally: "The spec says X, so I'm doing Y"
4. Mark complete in tasks.md: `- [ ]` → `- [x]`
5. Brief status: "✓ Task N complete"
Keep narration light—don't over-explain every line of code.
After all tasks:
```
## Implementation Complete
All tasks done:
- [x] Task 1
- [x] Task 2
- [x] ...
The change is implemented! One more step—let's archive it.
```
---
## Phase 10: Archive
**EXPLAIN:**
```
## Archiving
When a change is complete, we archive it. This moves it from `openspec/changes/` to `openspec/changes/archive/YYYY-MM-DD-<name>/`.
Archived changes become your project's decision history—you can always find them later to understand why something was built a certain way.
```
**DO:**
```bash
openspec archive "<name>"
```
**SHOW:**
```
Archived to: `openspec/changes/archive/YYYY-MM-DD-<name>/`
The change is now part of your project's history. The code is in your codebase, the decision record is preserved.
```
---
## Phase 11: Recap & Next Steps
```
## Congratulations!
You just completed a full OpenSpec cycle:
1. **Explore** - Thought through the problem
2. **New** - Created a change container
3. **Proposal** - Captured WHY
4. **Specs** - Defined WHAT in detail
5. **Design** - Decided HOW
6. **Tasks** - Broke it into steps
7. **Apply** - Implemented the work
8. **Archive** - Preserved the record
This same rhythm works for any size change—a small fix or a major feature.
---
## Command Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems before/during work |
| `/opsx:new` | Start a new change, step through artifacts |
| `/opsx:ff` | Fast-forward: create all artifacts at once |
| `/opsx:continue` | Continue working on an existing change |
| `/opsx:apply` | Implement tasks from a change |
| `/opsx:verify` | Verify implementation matches artifacts |
| `/opsx:archive` | Archive a completed change |
---
## What's Next?
Try `/opsx:new` or `/opsx:ff` on something you actually want to build. You've got the rhythm now!
```
---
## Graceful Exit Handling
### User wants to stop mid-way
If the user says they need to stop, want to pause, or seem disengaged:
```
No problem! Your change is saved at `openspec/changes/<name>/`.
To pick up where we left off later:
- `/opsx:continue <name>` - Resume artifact creation
- `/opsx:apply <name>` - Jump to implementation (if tasks exist)
The work won't be lost. Come back whenever you're ready.
```
Exit gracefully without pressure.
### User just wants command reference
If the user says they just want to see the commands or skip the tutorial:
```
## OpenSpec Quick Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems (no code changes) |
| `/opsx:new <name>` | Start a new change, step by step |
| `/opsx:ff <name>` | Fast-forward: all artifacts at once |
| `/opsx:continue <name>` | Continue an existing change |
| `/opsx:apply <name>` | Implement tasks |
| `/opsx:verify <name>` | Verify implementation |
| `/opsx:archive <name>` | Archive when done |
Try `/opsx:new` to start your first change, or `/opsx:ff` if you want to move fast.
```
Exit gracefully.
---
## Guardrails
- **Follow the EXPLAIN → DO → SHOW → PAUSE pattern** at key transitions (after explore, after proposal draft, after tasks, after archive)
- **Keep narration light** during implementation—teach without lecturing
- **Don't skip phases** even if the change is small—the goal is teaching the workflow
- **Pause for acknowledgment** at marked points, but don't over-pause
- **Handle exits gracefully**—never pressure the user to continue
- **Use real codebase tasks**—don't simulate or use fake examples
- **Adjust scope gently**—guide toward smaller tasks but respect user choice

.codex/prompts/opsx-sync.md

@@ -0,0 +1,132 @@
---
description: Sync delta specs from a change to main specs
argument-hint: command arguments
---
Sync delta specs from a change to main specs.
This is an **agent-driven** operation - you will read delta specs and directly edit main specs to apply the changes. This allows intelligent merging (e.g., adding a scenario without copying the entire requirement).
**Input**: Optionally specify a change name after `/opsx:sync` (e.g., `/opsx:sync add-auth`). If omitted, check whether it can be inferred from conversation context. If vague or ambiguous, you MUST prompt the user with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have delta specs (under `specs/` directory).
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Find delta specs**
Look for delta spec files in `openspec/changes/<name>/specs/*/spec.md`.
Each delta spec file contains sections like:
- `## ADDED Requirements` - New requirements to add
- `## MODIFIED Requirements` - Changes to existing requirements
- `## REMOVED Requirements` - Requirements to remove
- `## RENAMED Requirements` - Requirements to rename (FROM:/TO: format)
If no delta specs found, inform user and stop.
3. **For each delta spec, apply changes to main specs**
For each capability with a delta spec at `openspec/changes/<name>/specs/<capability>/spec.md`:
a. **Read the delta spec** to understand the intended changes
b. **Read the main spec** at `openspec/specs/<capability>/spec.md` (may not exist yet)
c. **Apply changes intelligently**:
**ADDED Requirements:**
- If requirement doesn't exist in main spec → add it
- If requirement already exists → update it to match (treat as implicit MODIFIED)
**MODIFIED Requirements:**
- Find the requirement in main spec
- Apply the changes - this can be:
- Adding new scenarios (don't need to copy existing ones)
- Modifying existing scenarios
- Changing the requirement description
- Preserve scenarios/content not mentioned in the delta
**REMOVED Requirements:**
- Remove the entire requirement block from main spec
**RENAMED Requirements:**
- Find the FROM requirement, rename to TO
d. **Create new main spec** if capability doesn't exist yet:
- Create `openspec/specs/<capability>/spec.md`
- Add Purpose section (can be brief, mark as TBD)
- Add Requirements section with the ADDED requirements
4. **Show summary**
After applying all changes, summarize:
- Which capabilities were updated
- What changes were made (requirements added/modified/removed/renamed)
**Delta Spec Format Reference**
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL do something new.
#### Scenario: Basic case
- **WHEN** user does X
- **THEN** system does Y
## MODIFIED Requirements
### Requirement: Existing Feature
#### Scenario: New scenario to add
- **WHEN** user does A
- **THEN** system does B
## REMOVED Requirements
### Requirement: Deprecated Feature
## RENAMED Requirements
- FROM: `### Requirement: Old Name`
- TO: `### Requirement: New Name`
```
**Key Principle: Intelligent Merging**
Unlike programmatic merging, you can apply **partial updates**:
- To add a scenario, just include that scenario under MODIFIED - don't copy existing scenarios
- The delta represents *intent*, not a wholesale replacement
- Use your judgment to merge changes sensibly
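As a sketch, the first mechanical step of such a merge is splitting the delta into its operation sections (header names as in the format reference above; the merging itself is judgment, not code):

```python
import re

DELTA_OPS = {"ADDED", "MODIFIED", "REMOVED", "RENAMED"}

def split_delta_sections(delta_md: str) -> dict:
    """Map each '## <OP> Requirements' header to the markdown beneath it."""
    sections, current = {}, None
    for line in delta_md.splitlines():
        match = re.match(r"^##\s+(\w+)\s+Requirements\s*$", line)
        if match and match.group(1) in DELTA_OPS:
            current = match.group(1)
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return {op: "\n".join(body).strip() for op, body in sections.items()}
```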
**Output On Success**
```
## Specs Synced: <change-name>
Updated main specs:
**<capability-1>**:
- Added requirement: "New Feature"
- Modified requirement: "Existing Feature" (added 1 scenario)
**<capability-2>**:
- Created new spec file
- Added requirement: "Another Feature"
Main specs are now updated. The change remains active - archive when implementation is complete.
```
**Guardrails**
- Read both delta and main specs before making changes
- Preserve existing content not mentioned in delta
- If something is unclear, ask for clarification
- Show what you're changing as you go
- The operation should be idempotent - running twice should give the same result


@@ -0,0 +1,162 @@
---
description: Verify implementation matches change artifacts before archiving
argument-hint: command arguments
---
Verify that an implementation matches the change artifacts (specs, tasks, design).
**Input**: Optionally specify a change name after `/opsx:verify` (e.g., `/opsx:verify add-auth`). If omitted, check whether it can be inferred from conversation context. If vague or ambiguous, you MUST prompt the user with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have implementation tasks (tasks artifact exists).
Include the schema used for each change if available.
Mark changes with incomplete tasks as "(In Progress)".
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifacts exist for this change
3. **Get the change directory and load artifacts**
```bash
openspec instructions apply --change "<name>" --json
```
This returns the change directory and context files. Read all available artifacts from `contextFiles`.
4. **Initialize verification report structure**
Create a report structure with three dimensions:
- **Completeness**: Track tasks and spec coverage
- **Correctness**: Track requirement implementation and scenario coverage
- **Coherence**: Track design adherence and pattern consistency
Each dimension can have CRITICAL, WARNING, or SUGGESTION issues.
5. **Verify Completeness**
**Task Completion**:
- If tasks.md exists in contextFiles, read it
- Parse checkboxes: `- [ ]` (incomplete) vs `- [x]` (complete)
- Count complete vs total tasks
- If incomplete tasks exist:
- Add CRITICAL issue for each incomplete task
- Recommendation: "Complete task: <description>" or "Mark as done if already implemented"
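The checkbox arithmetic is mechanical; a minimal sketch:

```python
import re

def task_progress(tasks_md: str):
    """Return (complete, total) checkbox counts from a tasks.md body."""
    boxes = re.findall(r"^\s*-\s+\[([ xX])\]", tasks_md, flags=re.MULTILINE)
    complete = sum(1 for box in boxes if box in "xX")
    return complete, len(boxes)
```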
**Spec Coverage**:
- If delta specs exist in `openspec/changes/<name>/specs/`:
- Extract all requirements (marked with "### Requirement:")
- For each requirement:
- Search codebase for keywords related to the requirement
- Assess if implementation likely exists
- If requirements appear unimplemented:
- Add CRITICAL issue: "Requirement not found: <requirement name>"
- Recommendation: "Implement requirement X: <description>"
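Pulling the requirement names out for that coverage check can be sketched as:

```python
import re

def requirement_names(spec_md: str):
    """List titles from '### Requirement: <Name>' headings in a spec file."""
    return re.findall(r"^###\s+Requirement:\s*(.+?)\s*$", spec_md, flags=re.MULTILINE)
```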
6. **Verify Correctness**
**Requirement Implementation Mapping**:
- For each requirement from delta specs:
- Search codebase for implementation evidence
- If found, note file paths and line ranges
- Assess if implementation matches requirement intent
- If divergence detected:
- Add WARNING: "Implementation may diverge from spec: <details>"
- Recommendation: "Review <file>:<lines> against requirement X"
**Scenario Coverage**:
- For each scenario in delta specs (marked with "#### Scenario:"):
- Check if conditions are handled in code
- Check if tests exist covering the scenario
- If scenario appears uncovered:
- Add WARNING: "Scenario not covered: <scenario name>"
- Recommendation: "Add test or implementation for scenario: <description>"
7. **Verify Coherence**
**Design Adherence**:
- If design.md exists in contextFiles:
- Extract key decisions (look for sections like "Decision:", "Approach:", "Architecture:")
- Verify implementation follows those decisions
- If contradiction detected:
- Add WARNING: "Design decision not followed: <decision>"
- Recommendation: "Update implementation or revise design.md to match reality"
- If no design.md: Skip design adherence check, note "No design.md to verify against"
**Code Pattern Consistency**:
- Review new code for consistency with project patterns
- Check file naming, directory structure, coding style
- If significant deviations found:
- Add SUGGESTION: "Code pattern deviation: <details>"
- Recommendation: "Consider following project pattern: <example>"
8. **Generate Verification Report**
**Summary Scorecard**:
```
## Verification Report: <change-name>
### Summary
| Dimension | Status |
|--------------|------------------|
| Completeness | X/Y tasks, N reqs|
| Correctness | M/N reqs covered |
| Coherence | Followed/Issues |
```
**Issues by Priority**:
1. **CRITICAL** (Must fix before archive):
- Incomplete tasks
- Missing requirement implementations
- Each with specific, actionable recommendation
2. **WARNING** (Should fix):
- Spec/design divergences
- Missing scenario coverage
- Each with specific recommendation
3. **SUGGESTION** (Nice to fix):
- Pattern inconsistencies
- Minor improvements
- Each with specific recommendation
**Final Assessment**:
- If CRITICAL issues: "X critical issue(s) found. Fix before archiving."
- If only warnings: "No critical issues. Y warning(s) to consider. Ready for archive (with noted improvements)."
- If all clear: "All checks passed. Ready for archive."
**Verification Heuristics**
- **Completeness**: Focus on objective checklist items (checkboxes, requirements list)
- **Correctness**: Use keyword search, file path analysis, reasonable inference - don't require perfect certainty
- **Coherence**: Look for glaring inconsistencies, don't nitpick style
- **False Positives**: When uncertain, prefer SUGGESTION over WARNING, WARNING over CRITICAL
- **Actionability**: Every issue must have a specific recommendation with file/line references where applicable
**Graceful Degradation**
- If only tasks.md exists: verify task completion only, skip spec/design checks
- If tasks + specs exist: verify completeness and correctness, skip design
- If full artifacts: verify all three dimensions
- Always note which checks were skipped and why
**Output Format**
Use clear markdown with:
- Table for summary scorecard
- Grouped lists for issues (CRITICAL/WARNING/SUGGESTION)
- Code references in format: `file.ts:123`
- Specific, actionable recommendations
- No vague suggestions like "consider reviewing"


@@ -0,0 +1,156 @@
---
name: openspec-apply-change
description: Implement tasks from an OpenSpec change. Use when the user wants to start implementing, continue implementation, or work through tasks.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name. If omitted, check whether it can be inferred from conversation context. If vague or ambiguous, you MUST prompt the user with the available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema - could be proposal/specs/design/tasks or spec/tests/implementation/docs)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using openspec-continue-change
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
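The checkbox flip itself can be sketched as below (matching on the task's text; in practice it is simply an edit to the tasks file):

```python
def mark_task_complete(tasks_md: str, task_text: str) -> str:
    """Flip '- [ ]' to '- [x]' on the first line containing task_text."""
    lines = tasks_md.splitlines()
    for i, line in enumerate(lines):
        if task_text in line and "- [ ]" in line:
            lines[i] = line.replace("- [ ]", "- [x]", 1)
            break
    return "\n".join(lines)
```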
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! Ready to archive this change.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly

View File

@@ -0,0 +1,114 @@
---
name: openspec-archive-change
description: Archive a completed change in the experimental workflow. Use when the user wants to finalize and archive a change after implementation is complete.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name. If omitted, check whether it can be inferred from conversation context. If the reference is vague or ambiguous, you MUST prompt the user to choose from the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
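The task counting in step 3 can be done with `grep` (a minimal sketch against a throwaway file; in practice the path comes from the change directory):

```shell
tasks=/tmp/opsx-archive-tasks.md
cat > "$tasks" <<'EOF'
- [x] Set up schema
- [ ] Write handler
- [ ] Add tests
EOF

# Count incomplete vs complete checkboxes
incomplete=$(grep -c '^- \[ \]' "$tasks")
complete=$(grep -c '^- \[x\]' "$tasks")
echo "$complete done, $incomplete remaining"   # -> 1 done, 2 remaining
```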
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If user chooses sync, execute /opsx:sync logic (use the openspec-sync-specs skill). Proceed to archive regardless of choice.
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
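The date-stamped move with its existence check can be sketched end to end (paths rooted under /tmp for illustration; real changes live under `openspec/changes/`):

```shell
name="add-oauth"
root=/tmp/opsx-archive-demo
rm -rf "$root"
mkdir -p "$root/changes/$name" "$root/changes/archive"

# Build the YYYY-MM-DD-<name> target and refuse to clobber an existing archive
target="$root/changes/archive/$(date +%Y-%m-%d)-$name"
if [ -e "$target" ]; then
  echo "error: $target already exists; rename it or use a different date" >&2
else
  mv "$root/changes/$name" "$target"
fi
```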
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Whether specs were synced (if applicable)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs (or "No delta specs" or "Sync skipped")
All artifacts complete. All tasks complete.
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use openspec-sync-specs approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting

View File

@@ -0,0 +1,246 @@
---
name: openspec-bulk-archive-change
description: Archive multiple completed changes at once. Use when archiving several parallel changes.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Archive multiple completed changes in a single operation.
This skill allows you to batch-archive changes, handling spec conflicts intelligently by checking the codebase to determine what's actually implemented.
**Input**: None required (prompts for selection)
**Steps**
1. **Get active changes**
Run `openspec list --json` to get all active changes.
If no active changes exist, inform user and stop.
2. **Prompt for change selection**
Use **AskUserQuestion tool** with multi-select to let user choose changes:
- Show each change with its schema
- Include an option for "All changes"
- Allow any number of selections (1+ works, 2+ is the typical use case)
**IMPORTANT**: Do NOT auto-select. Always let the user choose.
3. **Batch validation - gather status for all selected changes**
For each selected change, collect:
a. **Artifact status** - Run `openspec status --change "<name>" --json`
- Parse `schemaName` and `artifacts` list
- Note which artifacts are `done` vs other states
b. **Task completion** - Read `openspec/changes/<name>/tasks.md`
- Count `- [ ]` (incomplete) vs `- [x]` (complete)
- If no tasks file exists, note as "No tasks"
c. **Delta specs** - Check `openspec/changes/<name>/specs/` directory
- List which capability specs exist
- For each, extract requirement names (lines matching `### Requirement: <name>`)
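The requirement-name extraction can be done with a `grep`/`sed` pass (a sketch over a mocked spec file; real delta specs live under `openspec/changes/<name>/specs/`):

```shell
root=/tmp/opsx-reqs
rm -rf "$root"
mkdir -p "$root/specs/auth"
cat > "$root/specs/auth/spec.md" <<'EOF'
# auth
### Requirement: OAuth Provider Integration
...
### Requirement: Session Expiry
...
EOF

# Pull out just the requirement names
reqs=$(grep -h '^### Requirement:' "$root"/specs/*/spec.md | sed 's/^### Requirement: //')
echo "$reqs"
```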
4. **Detect spec conflicts**
Build a map of `capability -> [changes that touch it]`:
```
auth -> [change-a, change-b] <- CONFLICT (2+ changes)
api -> [change-c] <- OK (only 1 change)
```
A conflict exists when 2+ selected changes have delta specs for the same capability.
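The capability map can be sketched in shell (mocked change directories under /tmp; the `awk` pass flags any capability touched by two or more changes):

```shell
root=/tmp/opsx-conflicts
rm -rf "$root"
mkdir -p "$root/add-oauth/specs/auth" "$root/add-jwt/specs/auth" "$root/add-api/specs/api"

out=$(
  # Emit "<capability> <change>" pairs, then group by capability
  for change in "$root"/*/; do
    for cap in "$change"specs/*/; do
      echo "$(basename "$cap") $(basename "$change")"
    done
  done | sort | awk '
    { caps[$1] = caps[$1] " " $2; n[$1]++ }
    END { for (c in caps) printf "%s ->%s%s\n", c, caps[c], (n[c] > 1 ? "  <- CONFLICT" : "") }'
)
echo "$out"
```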
5. **Resolve conflicts agentically**
**For each conflict**, investigate the codebase:
a. **Read the delta specs** from each conflicting change to understand what each claims to add/modify
b. **Search the codebase** for implementation evidence:
- Look for code implementing requirements from each delta spec
- Check for related files, functions, or tests
c. **Determine resolution**:
- If only one change is actually implemented -> sync that one's specs
- If both implemented -> apply in chronological order (older first, newer overwrites)
- If neither implemented -> skip spec sync, warn user
d. **Record resolution** for each conflict:
- Which change's specs to apply
- In what order (if both)
- Rationale (what was found in codebase)
6. **Show consolidated status table**
Display a table summarizing all changes:
```
| Change | Artifacts | Tasks | Specs | Conflicts | Status |
|---------------------|-----------|-------|---------|-----------|--------|
| schema-management | Done | 5/5 | 2 delta | None | Ready |
| project-config | Done | 3/3 | 1 delta | None | Ready |
| add-oauth | Done | 4/4 | 1 delta | auth (!) | Ready* |
| add-verify-skill | 1 left | 2/5 | None | None | Warn |
```
For conflicts, show the resolution:
```
* Conflict resolution:
- auth spec: Will apply add-oauth then add-jwt (both implemented, chronological order)
```
For incomplete changes, show warnings:
```
Warnings:
- add-verify-skill: 1 incomplete artifact, 3 incomplete tasks
```
7. **Confirm batch operation**
Use **AskUserQuestion tool** with a single confirmation:
- "Archive N changes?" with options based on status
- Options might include:
- "Archive all N changes"
- "Archive only N ready changes (skip incomplete)"
- "Cancel"
If there are incomplete changes, make clear they'll be archived with warnings.
8. **Execute archive for each confirmed change**
Process changes in the determined order (respecting conflict resolution):
a. **Sync specs** if delta specs exist:
- Use the openspec-sync-specs approach (agent-driven intelligent merge)
- For conflicts, apply in resolved order
- Track if sync was done
b. **Perform the archive**:
```bash
mkdir -p openspec/changes/archive
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
c. **Track outcome** for each change:
- Success: archived successfully
- Failed: error during archive (record error)
- Skipped: user chose not to archive (if applicable)
9. **Display summary**
Show final results:
```
## Bulk Archive Complete
Archived 3 changes:
- schema-management-cli -> archive/2026-01-19-schema-management-cli/
- project-config -> archive/2026-01-19-project-config/
- add-oauth -> archive/2026-01-19-add-oauth/
Skipped 1 change:
- add-verify-skill (user chose not to archive incomplete)
Spec sync summary:
- 4 delta specs synced to main specs
- 1 conflict resolved (auth: applied both in chronological order)
```
If any failures:
```
Failed 1 change:
- some-change: Archive directory already exists
```
**Conflict Resolution Examples**
Example 1: Only one implemented
```
Conflict: specs/auth/spec.md touched by [add-oauth, add-jwt]
Checking add-oauth:
- Delta adds "OAuth Provider Integration" requirement
- Searching codebase... found src/auth/oauth.ts implementing OAuth flow
Checking add-jwt:
- Delta adds "JWT Token Handling" requirement
- Searching codebase... no JWT implementation found
Resolution: Only add-oauth is implemented. Will sync add-oauth specs only.
```
Example 2: Both implemented
```
Conflict: specs/api/spec.md touched by [add-rest-api, add-graphql]
Checking add-rest-api (created 2026-01-10):
- Delta adds "REST Endpoints" requirement
- Searching codebase... found src/api/rest.ts
Checking add-graphql (created 2026-01-15):
- Delta adds "GraphQL Schema" requirement
- Searching codebase... found src/api/graphql.ts
Resolution: Both implemented. Will apply add-rest-api specs first,
then add-graphql specs (chronological order, newer takes precedence).
```
**Output On Success**
```
## Bulk Archive Complete
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
- <change-2> -> archive/YYYY-MM-DD-<change-2>/
Spec sync summary:
- N delta specs synced to main specs
- No conflicts (or: M conflicts resolved)
```
**Output On Partial Success**
```
## Bulk Archive Complete (partial)
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
Skipped M changes:
- <change-2> (user chose not to archive incomplete)
Failed K changes:
- <change-3>: Archive directory already exists
```
**Output When No Changes**
```
## No Changes to Archive
No active changes found. Use `/opsx:new` to create a new change.
```
**Guardrails**
- Allow any number of changes (1+ is fine, 2+ is the typical use case)
- Always prompt for selection, never auto-select
- Detect spec conflicts early and resolve by checking codebase
- When both changes are implemented, apply specs in chronological order
- Skip spec sync only when implementation is missing (warn user)
- Show clear per-change status before confirming
- Use single confirmation for entire batch
- Track and report all outcomes (success/skip/fail)
- Preserve .openspec.yaml when moving to archive
- Archive directory target uses current date: YYYY-MM-DD-<name>
- If archive target exists, fail that change but continue with others

View File

@@ -0,0 +1,118 @@
---
name: openspec-continue-change
description: Continue working on an OpenSpec change by creating the next artifact. Use when the user wants to progress their change, create the next artifact, or continue their workflow.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Continue working on a change by creating the next artifact.
**Input**: Optionally specify a change name. If omitted, check whether it can be inferred from conversation context. If the reference is vague or ambiguous, you MUST prompt the user to choose from the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes sorted by most recently modified. Then use the **AskUserQuestion tool** to let the user select which change to work on.
Present the top 3-4 most recently modified changes as options, showing:
- Change name
- Schema (from `schema` field if present, otherwise "spec-driven")
- Status (e.g., "0/5 tasks", "complete", "no tasks")
- How recently it was modified (from `lastModified` field)
Mark the most recently modified change as "(Recommended)" since it's likely what the user wants to continue.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check current status**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand current state. The response includes:
- `schemaName`: The workflow schema being used (e.g., "spec-driven")
- `artifacts`: Array of artifacts with their status ("done", "ready", "blocked")
- `isComplete`: Boolean indicating if all artifacts are complete
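Assuming `jq` is available, the first ready artifact can be picked out of that JSON like this (the status payload below is a mocked example, not real CLI output):

```shell
# Mocked `openspec status --json` payload for illustration
status='{"schemaName":"spec-driven","isComplete":false,"artifacts":[
  {"id":"proposal","status":"done"},
  {"id":"specs","status":"ready"},
  {"id":"design","status":"blocked"}]}'

# First artifact whose status is "ready"
next=$(printf '%s' "$status" | jq -r '[.artifacts[] | select(.status == "ready")][0].id')
echo "$next"   # -> specs
```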
3. **Act based on status**:
---
**If all artifacts are complete (`isComplete: true`)**:
- Congratulate the user
- Show final status including the schema used
- Suggest: "All artifacts created! You can now implement this change or archive it."
- STOP
---
**If artifacts are ready to create** (status shows artifacts with `status: "ready"`):
- Pick the FIRST artifact with `status: "ready"` from the status output
- Get its instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- Parse the JSON. The key fields are:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- **Create the artifact file**:
- Read any completed dependency files for context
- Use `template` as the structure - fill in its sections
- Apply `context` and `rules` as constraints when writing - but do NOT copy them into the file
- Write to the output path specified in instructions
- Show what was created and what's now unlocked
- STOP after creating ONE artifact
---
**If no artifacts are ready (all blocked)**:
- This shouldn't happen with a valid schema
- Show status and suggest checking for issues
4. **After creating an artifact, show progress**
```bash
openspec status --change "<name>"
```
**Output**
After each invocation, show:
- Which artifact was created
- Schema workflow being used
- Current progress (N/M complete)
- What artifacts are now unlocked
- Prompt: "Want to continue? Just ask me to continue or tell me what to do next."
**Artifact Creation Guidelines**
The artifact types and their purpose depend on the schema. Use the `instruction` field from the instructions output to understand what to create.
Common artifact patterns:
**spec-driven schema** (proposal → specs → design → tasks):
- **proposal.md**: Ask user about the change if not clear. Fill in Why, What Changes, Capabilities, Impact.
- The Capabilities section is critical - each capability listed will need a spec file.
- **specs/<capability>/spec.md**: Create one spec per capability listed in the proposal's Capabilities section (use the capability name, not the change name).
- **design.md**: Document technical decisions, architecture, and implementation approach.
- **tasks.md**: Break down implementation into checkboxed tasks.
For other schemas, follow the `instruction` field from the CLI output.
**Guardrails**
- Create ONE artifact per invocation
- Always read dependency artifacts before creating a new one
- Never skip artifacts or create out of order
- If context is unclear, ask the user before creating
- Verify the artifact file exists after writing before marking progress
- Use the schema's artifact sequence, don't assume specific artifact names
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output

View File

@@ -0,0 +1,290 @@
---
name: openspec-explore
description: Enter explore mode - a thinking partner for exploring ideas, investigating problems, and clarifying requirements. Use when the user wants to think through something before or during a change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first (e.g., start a change with `/opsx:new` or `/opsx:ff`). You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create one?"
→ Can transition to `/opsx:new` or `/opsx:ff`
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Handling Different Entry Points
**User brings a vague idea:**
```
User: I'm thinking about adding real-time collaboration
You: Real-time collab is a big space. Let me think about this...
COLLABORATION SPECTRUM
════════════════════════════════════════════
Awareness Coordination Sync
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Presence│ │Cursors │ │ CRDT │
│ "3 │ │ Multi │ │Conflict│
│online" │ │ select │ │ free │
└────────┘ └────────┘ └────────┘
│ │ │
trivial moderate complex
Where's your head at?
```
**User brings a specific problem:**
```
User: The auth system is a mess
You: [reads codebase]
┌─────────────────────────────────────────────┐
│ CURRENT AUTH FLOW │
└─────────────────────────────────────────────┘
┌─────────────┼─────────────┐
▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐
│ Google │ │ GitHub │ │ Email │
│ OAuth │ │ OAuth │ │ Magic │
└────┬────┘ └────┬────┘ └────┬────┘
│ │ │
└─────────────┼─────────────┘
┌───────────┐
│ Session │
└─────┬─────┘
┌───────────┐
│ Perms │
└───────────┘
I see three tangles. Which one's burning?
```
**User is stuck mid-implementation:**
```
User: /opsx:explore add-auth-system
The OAuth integration is more complex than expected
You: [reads change artifacts]
You're on task 4: "Implement OAuth flow"
Let me trace what's involved...
[draws diagram, explores options, suggests paths]
Want to update the design to reflect this?
Or add a spike task to investigate?
```
**User wants to compare options:**
```
User: Should we use Postgres or SQLite?
You: Generic answer is boring. What's the context?
User: A CLI tool that tracks local dev environments
You: That changes everything.
┌─────────────────────────────────────────────────┐
│ CLI TOOL DATA STORAGE │
└─────────────────────────────────────────────────┘
Key constraints:
• No daemon running
• Must work offline
• Single user
SQLite Postgres
Deployment embedded ✓ needs server ✗
Offline yes ✓ no ✗
Single file yes ✓ no ✗
SQLite. Not even close.
Unless... is there a sync component?
```
---
## Ending Discovery
There's no required ending. Discovery might:
- **Flow into action**: "Ready to start? /opsx:new or /opsx:ff"
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When it feels like things are crystallizing, you might summarize:
```
## What We Figured Out
**The problem**: [crystallized understanding]
**The approach**: [if one emerged]
**Open questions**: [if any remain]
**Next steps** (if ready):
- Create a change: /opsx:new <name>
- Fast-forward to tasks: /opsx:ff <name>
- Keep exploring: just keep talking
```
But this summary is optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own

View File

@@ -0,0 +1,101 @@
---
name: openspec-ff-change
description: Fast-forward through OpenSpec artifact creation. Use when the user wants to quickly create all artifacts needed for implementation without stepping through each one individually.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Fast-forward through artifact creation - generate everything needed to start implementation in one go.
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "✓ Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
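With `jq` available, the apply-ready check can be sketched against a mocked status payload (not real CLI output):

```shell
status='{"applyRequires":["tasks"],"artifacts":[
  {"id":"proposal","status":"done"},
  {"id":"tasks","status":"done"}]}'

# Count applyRequires artifacts that are not yet done
remaining=$(printf '%s' "$status" | jq '
  [.applyRequires[] as $id
   | .artifacts[]
   | select(.id == $id and .status != "done")] | length')
[ "$remaining" -eq 0 ] && echo "apply-ready"
```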
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` or ask me to implement to start working on the tasks."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, suggest continuing that change instead
- Verify each artifact file exists after writing before proceeding to next

View File

@@ -0,0 +1,74 @@
---
name: openspec-new-change
description: Start a new OpenSpec change using the experimental artifact workflow. Use when the user wants to create a new feature, fix, or modification with a structured step-by-step approach.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Start a new change using the experimental artifact-driven approach.
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Determine the workflow schema**
Use the default schema (omit `--schema`) unless the user explicitly requests a different workflow.
**Use a different schema only if the user mentions:**
- A specific schema name → use `--schema <name>`
- "show workflows" or "what workflows" → run `openspec schemas --json` and let them choose
**Otherwise**: Omit `--schema` to use the default.
3. **Create the change directory**
```bash
openspec new change "<name>"
```
Add `--schema <name>` only if the user requested a specific workflow.
This creates a scaffolded change at `openspec/changes/<name>/` with the selected schema.
4. **Show the artifact status**
```bash
openspec status --change "<name>"
```
This shows which artifacts need to be created and which are ready (dependencies satisfied).
5. **Get instructions for the first artifact**
The first artifact depends on the schema (e.g., `proposal` for spec-driven).
Check the status output to find the first artifact with status "ready".
```bash
openspec instructions <first-artifact-id> --change "<name>"
```
This outputs the template and context for creating the first artifact.
6. **STOP and wait for user direction**
**Output**
After completing the steps, summarize:
- Change name and location
- Schema/workflow being used and its artifact sequence
- Current status (0/N artifacts complete)
- The template for the first artifact
- Prompt: "Ready to create the first artifact? Just describe what this change is about and I'll draft it, or ask me to continue."
**Guardrails**
- Do NOT create any artifacts yet - just show the instructions
- Do NOT advance beyond showing the first artifact template
- If the name is invalid (not kebab-case), ask for a valid name
- If a change with that name already exists, suggest continuing that change instead
- Pass --schema if using a non-default workflow

View File

@@ -0,0 +1,529 @@
---
name: openspec-onboard
description: Guided onboarding for OpenSpec - walk through a complete workflow cycle with narration and real codebase work.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Guide the user through their first complete OpenSpec workflow cycle. This is a teaching experience—you'll do real work in their codebase while explaining each step.
---
## Preflight
Before starting, check if OpenSpec is initialized:
```bash
openspec status --json 2>&1 || echo "NOT_INITIALIZED"
```
**If not initialized:**
> OpenSpec isn't set up in this project yet. Run `openspec init` first, then come back to `/opsx:onboard`.
Stop here if not initialized.
---
## Phase 1: Welcome
Display:
```
## Welcome to OpenSpec!
I'll walk you through a complete change cycle—from idea to implementation—using a real task in your codebase. Along the way, you'll learn the workflow by doing it.
**What we'll do:**
1. Pick a small, real task in your codebase
2. Explore the problem briefly
3. Create a change (the container for our work)
4. Build the artifacts: proposal → specs → design → tasks
5. Implement the tasks
6. Archive the completed change
**Time:** ~15-20 minutes
Let's start by finding something to work on.
```
---
## Phase 2: Task Selection
### Codebase Analysis
Scan the codebase for small improvement opportunities. Look for:
1. **TODO/FIXME comments** - Search for `TODO`, `FIXME`, `HACK`, `XXX` in code files
2. **Missing error handling** - `catch` blocks that swallow errors, risky operations without try-catch
3. **Functions without tests** - Cross-reference `src/` with test directories
4. **Type issues** - `any` types in TypeScript files (`: any`, `as any`)
5. **Debug artifacts** - `console.log`, `console.debug`, `debugger` statements in non-debug code
6. **Missing validation** - User input handlers without validation
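The marker scans above can be sketched with plain `grep`. This is an illustrative sketch only—the sample file, paths, and patterns are invented for the demo, not a fixed recipe:

```bash
# Invented sample tree; in a real run you'd point grep at the project's src/
mkdir -p /tmp/opsx-scan/src
printf '// TODO: handle timeout\nconst x: any = 1;\nconsole.log("debug");\n' > /tmp/opsx-scan/src/app.ts

# 1. TODO/FIXME-style debt markers
grep -rn -E 'TODO|FIXME|HACK|XXX' /tmp/opsx-scan/src

# 4. `any` types in TypeScript files
grep -rn -E --include='*.ts' ': any|as any' /tmp/opsx-scan/src

# 5. debug artifacts
grep -rn -E 'console\.(log|debug)|debugger' /tmp/opsx-scan/src
```

Each hit gives a `file:line` location you can feed directly into the task suggestions.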
Also check recent git activity:
```bash
git log --oneline -10 2>/dev/null || echo "No git history"
```
### Present Suggestions
From your analysis, present 3-4 specific suggestions:
```
## Task Suggestions
Based on scanning your codebase, here are some good starter tasks:
**1. [Most promising task]**
Location: `src/path/to/file.ts:42`
Scope: ~1-2 files, ~20-30 lines
Why it's good: [brief reason]
**2. [Second task]**
Location: `src/another/file.ts`
Scope: ~1 file, ~15 lines
Why it's good: [brief reason]
**3. [Third task]**
Location: [location]
Scope: [estimate]
Why it's good: [brief reason]
**4. Something else?**
Tell me what you'd like to work on.
Which task interests you? (Pick a number or describe your own)
```
**If nothing found:** Fall back to asking what the user wants to build:
> I didn't find obvious quick wins in your codebase. What's something small you've been meaning to add or fix?
### Scope Guardrail
If the user picks or describes something too large (major feature, multi-day work):
```
That's a valuable task, but it's probably larger than ideal for your first OpenSpec run-through.
For learning the workflow, smaller is better—it lets you see the full cycle without getting stuck in implementation details.
**Options:**
1. **Slice it smaller** - What's the smallest useful piece of [their task]? Maybe just [specific slice]?
2. **Pick something else** - One of the other suggestions, or a different small task?
3. **Do it anyway** - If you really want to tackle this, we can. Just know it'll take longer.
What would you prefer?
```
Let the user override if they insist—this is a soft guardrail.
---
## Phase 3: Explore Demo
Once a task is selected, briefly demonstrate explore mode:
```
Before we create a change, let me quickly show you **explore mode**—it's how you think through problems before committing to a direction.
```
Spend 1-2 minutes investigating the relevant code:
- Read the file(s) involved
- Draw a quick ASCII diagram if it helps
- Note any considerations
```
## Quick Exploration
[Your brief analysis—what you found, any considerations]
┌─────────────────────────────────────────┐
│ [Optional: ASCII diagram if helpful] │
└─────────────────────────────────────────┘
Explore mode (`/opsx:explore`) is for this kind of thinking—investigating before implementing. You can use it anytime you need to think through a problem.
Now let's create a change to hold our work.
```
**PAUSE** - Wait for user acknowledgment before proceeding.
---
## Phase 4: Create the Change
**EXPLAIN:**
```
## Creating a Change
A "change" in OpenSpec is a container for all the thinking and planning around a piece of work. It lives in `openspec/changes/<name>/` and holds your artifacts—proposal, specs, design, tasks.
Let me create one for our task.
```
**DO:** Create the change with a derived kebab-case name:
```bash
openspec new change "<derived-name>"
```
**SHOW:**
```
Created: `openspec/changes/<name>/`
The folder structure:
openspec/changes/<name>/
├── proposal.md  ← Why we're doing this (empty, we'll fill it)
├── design.md    ← How we'll build it (empty)
├── specs/       ← Detailed requirements (empty)
└── tasks.md     ← Implementation checklist (empty)
Now let's fill in the first artifact—the proposal.
```
---
## Phase 5: Proposal
**EXPLAIN:**
```
## The Proposal
The proposal captures **why** we're making this change and **what** it involves at a high level. It's the "elevator pitch" for the work.
I'll draft one based on our task.
```
**DO:** Draft the proposal content (don't save yet):
```
Here's a draft proposal:
---
## Why
[1-2 sentences explaining the problem/opportunity]
## What Changes
[Bullet points of what will be different]
## Capabilities
### New Capabilities
- `<capability-name>`: [brief description]
### Modified Capabilities
<!-- If modifying existing behavior -->
## Impact
- `src/path/to/file.ts`: [what changes]
- [other files if applicable]
---
Does this capture the intent? I can adjust before we save it.
```
**PAUSE** - Wait for user approval/feedback.
After approval, save the proposal:
```bash
openspec instructions proposal --change "<name>" --json
```
Then write the content to `openspec/changes/<name>/proposal.md`.
```
Proposal saved. This is your "why" document—you can always come back and refine it as understanding evolves.
Next up: specs.
```
---
## Phase 6: Specs
**EXPLAIN:**
```
## Specs
Specs define **what** we're building in precise, testable terms. They use a requirement/scenario format that makes expected behavior crystal clear.
For a small task like this, we might only need one spec file.
```
**DO:** Create the spec file:
```bash
mkdir -p openspec/changes/<name>/specs/<capability-name>
```
Draft the spec content:
```
Here's the spec:
---
## ADDED Requirements
### Requirement: <Name>
<Description of what the system should do>
#### Scenario: <Scenario name>
- **WHEN** <trigger condition>
- **THEN** <expected outcome>
- **AND** <additional outcome if needed>
---
This format—WHEN/THEN/AND—makes requirements testable. You can literally read them as test cases.
```
Save to `openspec/changes/<name>/specs/<capability>/spec.md`.
---
## Phase 7: Design
**EXPLAIN:**
```
## Design
The design captures **how** we'll build it—technical decisions, tradeoffs, approach.
For small changes, this might be brief. That's fine—not every change needs deep design discussion.
```
**DO:** Draft design.md:
```
Here's the design:
---
## Context
[Brief context about the current state]
## Goals / Non-Goals
**Goals:**
- [What we're trying to achieve]
**Non-Goals:**
- [What's explicitly out of scope]
## Decisions
### Decision 1: [Key decision]
[Explanation of approach and rationale]
---
For a small task, this captures the key decisions without over-engineering.
```
Save to `openspec/changes/<name>/design.md`.
---
## Phase 8: Tasks
**EXPLAIN:**
```
## Tasks
Finally, we break the work into implementation tasks—checkboxes that drive the apply phase.
These should be small, clear, and in logical order.
```
**DO:** Generate tasks based on specs and design:
```
Here are the implementation tasks:
---
## 1. [Category or file]
- [ ] 1.1 [Specific task]
- [ ] 1.2 [Specific task]
## 2. Verify
- [ ] 2.1 [Verification step]
---
Each checkbox becomes a unit of work in the apply phase. Ready to implement?
```
**PAUSE** - Wait for user to confirm they're ready to implement.
Save to `openspec/changes/<name>/tasks.md`.
---
## Phase 9: Apply (Implementation)
**EXPLAIN:**
```
## Implementation
Now we implement each task, checking them off as we go. I'll announce each one and occasionally note how the specs/design informed the approach.
```
**DO:** For each task:
1. Announce: "Working on task N: [description]"
2. Implement the change in the codebase
3. Reference specs/design naturally: "The spec says X, so I'm doing Y"
4. Mark complete in tasks.md: `- [ ]` → `- [x]`
5. Brief status: "✓ Task N complete"
Keep narration light—don't over-explain every line of code.
After all tasks:
```
## Implementation Complete
All tasks done:
- [x] Task 1
- [x] Task 2
- [x] ...
The change is implemented! One more step—let's archive it.
```
---
## Phase 10: Archive
**EXPLAIN:**
```
## Archiving
When a change is complete, we archive it. This moves it from `openspec/changes/` to `openspec/changes/archive/YYYY-MM-DD-<name>/`.
Archived changes become your project's decision history—you can always find them later to understand why something was built a certain way.
```
**DO:**
```bash
openspec archive "<name>"
```
**SHOW:**
```
Archived to: `openspec/changes/archive/YYYY-MM-DD-<name>/`
The change is now part of your project's history. The code is in your codebase, the decision record is preserved.
```
---
## Phase 11: Recap & Next Steps
```
## Congratulations!
You just completed a full OpenSpec cycle:
1. **Explore** - Thought through the problem
2. **New** - Created a change container
3. **Proposal** - Captured WHY
4. **Specs** - Defined WHAT in detail
5. **Design** - Decided HOW
6. **Tasks** - Broke it into steps
7. **Apply** - Implemented the work
8. **Archive** - Preserved the record
This same rhythm works for any size change—a small fix or a major feature.
---
## Command Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems before/during work |
| `/opsx:new` | Start a new change, step through artifacts |
| `/opsx:ff` | Fast-forward: create all artifacts at once |
| `/opsx:continue` | Continue working on an existing change |
| `/opsx:apply` | Implement tasks from a change |
| `/opsx:verify` | Verify implementation matches artifacts |
| `/opsx:archive` | Archive a completed change |
---
## What's Next?
Try `/opsx:new` or `/opsx:ff` on something you actually want to build. You've got the rhythm now!
```
---
## Graceful Exit Handling
### User wants to stop mid-way
If the user says they need to stop, want to pause, or seem disengaged:
```
No problem! Your change is saved at `openspec/changes/<name>/`.
To pick up where we left off later:
- `/opsx:continue <name>` - Resume artifact creation
- `/opsx:apply <name>` - Jump to implementation (if tasks exist)
The work won't be lost. Come back whenever you're ready.
```
Exit gracefully without pressure.
### User just wants command reference
If the user says they just want to see the commands or skip the tutorial:
```
## OpenSpec Quick Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems (no code changes) |
| `/opsx:new <name>` | Start a new change, step by step |
| `/opsx:ff <name>` | Fast-forward: all artifacts at once |
| `/opsx:continue <name>` | Continue an existing change |
| `/opsx:apply <name>` | Implement tasks |
| `/opsx:verify <name>` | Verify implementation |
| `/opsx:archive <name>` | Archive when done |
Try `/opsx:new` to start your first change, or `/opsx:ff` if you want to move fast.
```
Exit gracefully.
---
## Guardrails
- **Follow the EXPLAIN → DO → SHOW → PAUSE pattern** at key transitions (after explore, after proposal draft, after tasks, after archive)
- **Keep narration light** during implementation—teach without lecturing
- **Don't skip phases** even if the change is small—the goal is teaching the workflow
- **Pause for acknowledgment** at marked points, but don't over-pause
- **Handle exits gracefully**—never pressure the user to continue
- **Use real codebase tasks**—don't simulate or use fake examples
- **Adjust scope gently**—guide toward smaller tasks but respect user choice


@@ -0,0 +1,138 @@
---
name: openspec-sync-specs
description: Sync delta specs from a change to main specs. Use when the user wants to update main specs with changes from a delta spec, without archiving the change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Sync delta specs from a change to main specs.
This is an **agent-driven** operation - you will read delta specs and directly edit main specs to apply the changes. This allows intelligent merging (e.g., adding a scenario without copying the entire requirement).
**Input**: Optionally specify a change name. If omitted, check whether it can be inferred from conversation context. If vague or ambiguous, you MUST prompt the user to select from the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have delta specs (under `specs/` directory).
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Find delta specs**
Look for delta spec files in `openspec/changes/<name>/specs/*/spec.md`.
Each delta spec file contains sections like:
- `## ADDED Requirements` - New requirements to add
- `## MODIFIED Requirements` - Changes to existing requirements
- `## REMOVED Requirements` - Requirements to remove
- `## RENAMED Requirements` - Requirements to rename (FROM:/TO: format)
If no delta specs found, inform user and stop.
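A quick way to enumerate delta specs is a glob over the change directory. The change name and tree below are fabricated for illustration—real changes live under `openspec/changes/<name>/specs/`:

```bash
# Fabricated example tree standing in for openspec/changes/<name>/specs/
root=/tmp/opsx-sync-demo/openspec/changes/demo-change
mkdir -p "$root/specs/device-authorization"
printf '## ADDED Requirements\n' > "$root/specs/device-authorization/spec.md"

# Enumerate delta spec files; empty output means there is nothing to sync
ls "$root"/specs/*/spec.md 2>/dev/null || echo "No delta specs found"
```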
3. **For each delta spec, apply changes to main specs**
For each capability with a delta spec at `openspec/changes/<name>/specs/<capability>/spec.md`:
a. **Read the delta spec** to understand the intended changes
b. **Read the main spec** at `openspec/specs/<capability>/spec.md` (may not exist yet)
c. **Apply changes intelligently**:
**ADDED Requirements:**
- If requirement doesn't exist in main spec → add it
- If requirement already exists → update it to match (treat as implicit MODIFIED)
**MODIFIED Requirements:**
- Find the requirement in main spec
- Apply the changes - this can be:
- Adding new scenarios (don't need to copy existing ones)
- Modifying existing scenarios
- Changing the requirement description
- Preserve scenarios/content not mentioned in the delta
**REMOVED Requirements:**
- Remove the entire requirement block from main spec
**RENAMED Requirements:**
- Find the FROM requirement, rename to TO
d. **Create new main spec** if capability doesn't exist yet:
- Create `openspec/specs/<capability>/spec.md`
- Add Purpose section (can be brief, mark as TBD)
- Add Requirements section with the ADDED requirements
4. **Show summary**
After applying all changes, summarize:
- Which capabilities were updated
- What changes were made (requirements added/modified/removed/renamed)
**Delta Spec Format Reference**
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL do something new.
#### Scenario: Basic case
- **WHEN** user does X
- **THEN** system does Y
## MODIFIED Requirements
### Requirement: Existing Feature
#### Scenario: New scenario to add
- **WHEN** user does A
- **THEN** system does B
## REMOVED Requirements
### Requirement: Deprecated Feature
## RENAMED Requirements
- FROM: `### Requirement: Old Name`
- TO: `### Requirement: New Name`
```
**Key Principle: Intelligent Merging**
Unlike programmatic merging, you can apply **partial updates**:
- To add a scenario, just include that scenario under MODIFIED - don't copy existing scenarios
- The delta represents *intent*, not a wholesale replacement
- Use your judgment to merge changes sensibly
**Output On Success**
```
## Specs Synced: <change-name>
Updated main specs:
**<capability-1>**:
- Added requirement: "New Feature"
- Modified requirement: "Existing Feature" (added 1 scenario)
**<capability-2>**:
- Created new spec file
- Added requirement: "Another Feature"
Main specs are now updated. The change remains active - archive when implementation is complete.
```
**Guardrails**
- Read both delta and main specs before making changes
- Preserve existing content not mentioned in delta
- If something is unclear, ask for clarification
- Show what you're changing as you go
- The operation should be idempotent - running twice should give the same result


@@ -0,0 +1,168 @@
---
name: openspec-verify-change
description: Verify implementation matches change artifacts. Use when the user wants to validate that implementation is complete, correct, and coherent before archiving.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Verify that an implementation matches the change artifacts (specs, tasks, design).
**Input**: Optionally specify a change name. If omitted, check whether it can be inferred from conversation context. If vague or ambiguous, you MUST prompt the user to select from the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have implementation tasks (tasks artifact exists).
Include the schema used for each change if available.
Mark changes with incomplete tasks as "(In Progress)".
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifacts exist for this change
3. **Get the change directory and load artifacts**
```bash
openspec instructions apply --change "<name>" --json
```
This returns the change directory and context files. Read all available artifacts from `contextFiles`.
4. **Initialize verification report structure**
Create a report structure with three dimensions:
- **Completeness**: Track tasks and spec coverage
- **Correctness**: Track requirement implementation and scenario coverage
- **Coherence**: Track design adherence and pattern consistency
Each dimension can have CRITICAL, WARNING, or SUGGESTION issues.
5. **Verify Completeness**
**Task Completion**:
- If tasks.md exists in contextFiles, read it
- Parse checkboxes: `- [ ]` (incomplete) vs `- [x]` (complete)
- Count complete vs total tasks
- If incomplete tasks exist:
- Add CRITICAL issue for each incomplete task
- Recommendation: "Complete task: <description>" or "Mark as done if already implemented"
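The checkbox counting can be sketched with `grep -c`. The sample tasks file below is fabricated; a real run reads the change's `tasks.md`:

```bash
# Fabricated tasks file standing in for openspec/changes/<name>/tasks.md
tasks=/tmp/opsx-verify-tasks.md
printf -- '- [x] 1.1 Add model\n- [ ] 1.2 Add handler\n- [x] 2.1 Add tests\n' > "$tasks"

# Total checkboxes (complete or not) vs completed ones
total=$(grep -c -E '^- \[[ x]\]' "$tasks")
complete=$(grep -c -F -- '- [x]' "$tasks")
echo "$complete/$total tasks complete"
```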
**Spec Coverage**:
- If delta specs exist in `openspec/changes/<name>/specs/`:
- Extract all requirements (marked with "### Requirement:")
- For each requirement:
- Search codebase for keywords related to the requirement
- Assess if implementation likely exists
- If requirements appear unimplemented:
- Add CRITICAL issue: "Requirement not found: <requirement name>"
- Recommendation: "Implement requirement X: <description>"
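Requirement names can be pulled out with a simple pattern match to seed the keyword search. The spec content here is made up for the demo:

```bash
# Made-up delta spec; real ones live at openspec/changes/<name>/specs/<capability>/spec.md
spec=/tmp/opsx-verify-spec.md
printf '## ADDED Requirements\n### Requirement: Device Authorization\nThe system SHALL authorize devices.\n#### Scenario: Basic\n' > "$spec"

# Extract requirement names from the "### Requirement:" headers
grep '^### Requirement:' "$spec" | sed 's/^### Requirement: //'
```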
6. **Verify Correctness**
**Requirement Implementation Mapping**:
- For each requirement from delta specs:
- Search codebase for implementation evidence
- If found, note file paths and line ranges
- Assess if implementation matches requirement intent
- If divergence detected:
- Add WARNING: "Implementation may diverge from spec: <details>"
- Recommendation: "Review <file>:<lines> against requirement X"
**Scenario Coverage**:
- For each scenario in delta specs (marked with "#### Scenario:"):
- Check if conditions are handled in code
- Check if tests exist covering the scenario
- If scenario appears uncovered:
- Add WARNING: "Scenario not covered: <scenario name>"
- Recommendation: "Add test or implementation for scenario: <description>"
7. **Verify Coherence**
**Design Adherence**:
- If design.md exists in contextFiles:
- Extract key decisions (look for sections like "Decision:", "Approach:", "Architecture:")
- Verify implementation follows those decisions
- If contradiction detected:
- Add WARNING: "Design decision not followed: <decision>"
- Recommendation: "Update implementation or revise design.md to match reality"
- If no design.md: Skip design adherence check, note "No design.md to verify against"
**Code Pattern Consistency**:
- Review new code for consistency with project patterns
- Check file naming, directory structure, coding style
- If significant deviations found:
- Add SUGGESTION: "Code pattern deviation: <details>"
- Recommendation: "Consider following project pattern: <example>"
8. **Generate Verification Report**
**Summary Scorecard**:
```
## Verification Report: <change-name>
### Summary
| Dimension    | Status            |
|--------------|-------------------|
| Completeness | X/Y tasks, N reqs |
| Correctness  | M/N reqs covered  |
| Coherence    | Followed/Issues   |
```
**Issues by Priority**:
1. **CRITICAL** (Must fix before archive):
- Incomplete tasks
- Missing requirement implementations
- Each with specific, actionable recommendation
2. **WARNING** (Should fix):
- Spec/design divergences
- Missing scenario coverage
- Each with specific recommendation
3. **SUGGESTION** (Nice to fix):
- Pattern inconsistencies
- Minor improvements
- Each with specific recommendation
**Final Assessment**:
- If CRITICAL issues: "X critical issue(s) found. Fix before archiving."
- If only warnings: "No critical issues. Y warning(s) to consider. Ready for archive (with noted improvements)."
- If all clear: "All checks passed. Ready for archive."
**Verification Heuristics**
- **Completeness**: Focus on objective checklist items (checkboxes, requirements list)
- **Correctness**: Use keyword search, file path analysis, reasonable inference - don't require perfect certainty
- **Coherence**: Look for glaring inconsistencies, don't nitpick style
- **False Positives**: When uncertain, prefer SUGGESTION over WARNING, WARNING over CRITICAL
- **Actionability**: Every issue must have a specific recommendation with file/line references where applicable
**Graceful Degradation**
- If only tasks.md exists: verify task completion only, skip spec/design checks
- If tasks + specs exist: verify completeness and correctness, skip design
- If full artifacts: verify all three dimensions
- Always note which checks were skipped and why
**Output Format**
Use clear markdown with:
- Table for summary scorecard
- Grouped lists for issues (CRITICAL/WARNING/SUGGESTION)
- Code references in format: `file.ts:123`
- Specific, actionable recommendations
- No vague suggestions like "consider reviewing"


@@ -0,0 +1,149 @@
---
description: Implement tasks from an OpenSpec change (Experimental)
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name (e.g., `/opsx:apply add-auth`). If omitted, check whether it can be inferred from conversation context. If vague or ambiguous, you MUST prompt the user to select from the available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using `/opsx:continue`
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
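Flipping a checkbox is a one-line substitution. This is a portable sketch (it avoids the GNU-only in-place `sed -i`); the task numbers and text are invented:

```bash
# Invented tasks file; the real one is openspec/changes/<name>/tasks.md
tasks=/tmp/opsx-apply-tasks.md
printf -- '- [ ] 1.1 Add model\n- [ ] 1.2 Add handler\n' > "$tasks"

# Flip task 1.2 from pending to done, writing via a temp file for portability
sed 's/^- \[ \] 1\.2/- [x] 1.2/' "$tasks" > "$tasks.tmp" && mv "$tasks.tmp" "$tasks"
grep -F -- '1.2' "$tasks"
```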
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! Ready to archive this change.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly


@@ -0,0 +1,154 @@
---
description: Archive a completed change in the experimental workflow
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name after `/opsx:archive` (e.g., `/opsx:archive add-auth`). If omitted, check whether it can be inferred from conversation context. If vague or ambiguous, you MUST prompt the user to select from the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Prompt user for confirmation to continue
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Prompt user for confirmation to continue
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If user chooses sync, execute `/opsx:sync` logic. Proceed to archive regardless of choice.
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
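The naming, existence check, and move above can be combined into one guarded step. The directory layout below is a stand-in for the real `openspec/` tree:

```bash
# Stand-in tree; real paths are openspec/changes/<name> and openspec/changes/archive
base=/tmp/opsx-archive-demo/openspec/changes
name=demo-change
mkdir -p "$base/$name" "$base/archive"

# Date-stamped target; refuse to overwrite an existing archive
target="$base/archive/$(date +%Y-%m-%d)-$name"
if [ -e "$target" ]; then
  echo "Archive target already exists: $target" >&2
else
  mv "$base/$name" "$target"
  echo "Archived to: $target"
fi
```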
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Spec sync status (synced / sync skipped / no delta specs)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs
All artifacts complete. All tasks complete.
```
**Output On Success (No Delta Specs)**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** No delta specs
All artifacts complete. All tasks complete.
```
**Output On Success With Warnings**
```
## Archive Complete (with warnings)
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** Sync skipped (user chose to skip)
**Warnings:**
- Archived with 2 incomplete artifacts
- Archived with 3 incomplete tasks
- Delta spec sync was skipped (user chose to skip)
Review the archive if this was not intentional.
```
**Output On Error (Archive Exists)**
```
## Archive Failed
**Change:** <change-name>
**Target:** openspec/changes/archive/YYYY-MM-DD-<name>/
Target archive directory already exists.
**Options:**
1. Rename the existing archive
2. Delete the existing archive if it's a duplicate
3. Wait until a different date to archive
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use /opsx:sync approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting


@@ -0,0 +1,239 @@
---
description: Archive multiple completed changes at once
---
Archive multiple completed changes in a single operation.
This skill allows you to batch-archive changes, handling spec conflicts intelligently by checking the codebase to determine what's actually implemented.
**Input**: None required (prompts for selection)
**Steps**
1. **Get active changes**
Run `openspec list --json` to get all active changes.
If no active changes exist, inform user and stop.
2. **Prompt for change selection**
Use **AskUserQuestion tool** with multi-select to let user choose changes:
- Show each change with its schema
- Include an option for "All changes"
- Allow any number of selections (1+ works, 2+ is the typical use case)
**IMPORTANT**: Do NOT auto-select. Always let the user choose.
3. **Batch validation - gather status for all selected changes**
For each selected change, collect:
a. **Artifact status** - Run `openspec status --change "<name>" --json`
- Parse `schemaName` and `artifacts` list
- Note which artifacts are `done` vs other states
b. **Task completion** - Read `openspec/changes/<name>/tasks.md`
- Count `- [ ]` (incomplete) vs `- [x]` (complete)
- If no tasks file exists, note as "No tasks"
c. **Delta specs** - Check `openspec/changes/<name>/specs/` directory
- List which capability specs exist
- For each, extract requirement names (lines matching `### Requirement: <name>`)
4. **Detect spec conflicts**
Build a map of `capability -> [changes that touch it]`:
```
auth -> [change-a, change-b] <- CONFLICT (2+ changes)
api -> [change-c] <- OK (only 1 change)
```
A conflict exists when 2+ selected changes have delta specs for the same capability.
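One way to sketch this detection in shell, using hypothetical change/capability pairs in place of the real `specs/<capability>/` directory listing:

```bash
# Sketch: find capabilities touched by 2+ selected changes.
# The pairs are illustrative; in practice derive them from
# openspec/changes/<name>/specs/<capability>/ directories.
pairs='add-oauth auth
add-jwt auth
add-rest-api api'
conflicts=$(printf '%s\n' "$pairs" | awk '{print $2}' | sort | uniq -d)
echo "conflicting capabilities: ${conflicts:-none}"
```

`uniq -d` prints only duplicated capability names, which is exactly the 2+-changes condition.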
5. **Resolve conflicts agentically**
**For each conflict**, investigate the codebase:
a. **Read the delta specs** from each conflicting change to understand what each claims to add/modify
b. **Search the codebase** for implementation evidence:
- Look for code implementing requirements from each delta spec
- Check for related files, functions, or tests
c. **Determine resolution**:
- If only one change is actually implemented -> sync that one's specs
- If both implemented -> apply in chronological order (older first, newer overwrites)
- If neither implemented -> skip spec sync, warn user
d. **Record resolution** for each conflict:
- Which change's specs to apply
- In what order (if both)
- Rationale (what was found in codebase)
6. **Show consolidated status table**
Display a table summarizing all changes:
```
| Change              | Artifacts | Tasks | Specs   | Conflicts | Status |
|---------------------|-----------|-------|---------|-----------|--------|
| schema-management   | Done      | 5/5   | 2 delta | None      | Ready  |
| project-config      | Done      | 3/3   | 1 delta | None      | Ready  |
| add-oauth           | Done      | 4/4   | 1 delta | auth (!)  | Ready* |
| add-jwt             | Done      | 2/2   | 1 delta | auth (!)  | Ready* |
| add-verify-skill    | 1 left    | 2/5   | None    | None      | Warn   |

```
For conflicts, show the resolution:
```
* Conflict resolution:
- auth spec: Will apply add-oauth then add-jwt (both implemented, chronological order)
```
For incomplete changes, show warnings:
```
Warnings:
- add-verify-skill: 1 incomplete artifact, 3 incomplete tasks
```
7. **Confirm batch operation**
Use **AskUserQuestion tool** with a single confirmation:
- "Archive N changes?" with options based on status
- Options might include:
- "Archive all N changes"
- "Archive only N ready changes (skip incomplete)"
- "Cancel"
If there are incomplete changes, make it clear that they'll be archived with warnings.
8. **Execute archive for each confirmed change**
Process changes in the determined order (respecting conflict resolution):
a. **Sync specs** if delta specs exist:
- Use the openspec-sync-specs approach (agent-driven intelligent merge)
- For conflicts, apply in resolved order
- Track if sync was done
b. **Perform the archive**:
```bash
mkdir -p openspec/changes/archive
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
c. **Track outcome** for each change:
- Success: archived successfully
- Failed: error during archive (record error)
- Skipped: user chose not to archive (if applicable)
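The per-change archive step, including the "target already exists" failure mode from the guardrails, might look like the following sketch. Everything lives under a throwaway temp root; real paths would be under `openspec/changes/`:

```bash
# Sketch: archive one change, failing that change (without aborting the
# batch) when the dated archive target already exists.
root=$(mktemp -d)
name=add-oauth
mkdir -p "$root/openspec/changes/$name"
mkdir -p "$root/openspec/changes/archive"
# date +%F expands to YYYY-MM-DD.
target="$root/openspec/changes/archive/$(date +%F)-$name"
if [ -e "$target" ]; then
  echo "FAIL $name: archive target already exists"
else
  mv "$root/openspec/changes/$name" "$target"
  echo "OK $name"
fi
result=$([ -d "$target" ] && echo archived)
rm -rf "$root"
```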
9. **Display summary**
Show final results:
```
## Bulk Archive Complete
Archived 3 changes:
- schema-management -> archive/2026-01-19-schema-management/
- project-config -> archive/2026-01-19-project-config/
- add-oauth -> archive/2026-01-19-add-oauth/
Skipped 1 change:
- add-verify-skill (user chose not to archive incomplete)
Spec sync summary:
- 4 delta specs synced to main specs
- 1 conflict resolved (auth: applied both in chronological order)
```
If any failures:
```
Failed 1 change:
- some-change: Archive directory already exists
```
**Conflict Resolution Examples**
Example 1: Only one implemented
```
Conflict: specs/auth/spec.md touched by [add-oauth, add-jwt]
Checking add-oauth:
- Delta adds "OAuth Provider Integration" requirement
- Searching codebase... found src/auth/oauth.ts implementing OAuth flow
Checking add-jwt:
- Delta adds "JWT Token Handling" requirement
- Searching codebase... no JWT implementation found
Resolution: Only add-oauth is implemented. Will sync add-oauth specs only.
```
Example 2: Both implemented
```
Conflict: specs/api/spec.md touched by [add-rest-api, add-graphql]
Checking add-rest-api (created 2026-01-10):
- Delta adds "REST Endpoints" requirement
- Searching codebase... found src/api/rest.ts
Checking add-graphql (created 2026-01-15):
- Delta adds "GraphQL Schema" requirement
- Searching codebase... found src/api/graphql.ts
Resolution: Both implemented. Will apply add-rest-api specs first,
then add-graphql specs (chronological order, newer takes precedence).
```
**Output On Success**
```
## Bulk Archive Complete
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
- <change-2> -> archive/YYYY-MM-DD-<change-2>/
Spec sync summary:
- N delta specs synced to main specs
- No conflicts (or: M conflicts resolved)
```
**Output On Partial Success**
```
## Bulk Archive Complete (partial)
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
Skipped M changes:
- <change-2> (user chose not to archive incomplete)
Failed K changes:
- <change-3>: Archive directory already exists
```
**Output When No Changes**
```
## No Changes to Archive
No active changes found. Use `/opsx:new` to create a new change.
```
**Guardrails**
- Allow any number of changes (1+ is fine, 2+ is the typical use case)
- Always prompt for selection, never auto-select
- Detect spec conflicts early and resolve by checking codebase
- When both changes are implemented, apply specs in chronological order
- Skip spec sync only when implementation is missing (warn user)
- Show clear per-change status before confirming
- Use single confirmation for entire batch
- Track and report all outcomes (success/skip/fail)
- Preserve .openspec.yaml when moving to archive
- Archive directory target uses current date: YYYY-MM-DD-<name>
- If archive target exists, fail that change but continue with others

View File

@@ -0,0 +1,111 @@
---
description: Continue working on a change - create the next artifact (Experimental)
---
Continue working on a change by creating the next artifact.
**Input**: Optionally specify a change name after `/opsx:continue` (e.g., `/opsx:continue add-auth`). If omitted, check whether it can be inferred from conversation context. If the reference is vague or ambiguous, you MUST prompt the user to select from the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes sorted by most recently modified. Then use the **AskUserQuestion tool** to let the user select which change to work on.
Present the top 3-4 most recently modified changes as options, showing:
- Change name
- Schema (from `schema` field if present, otherwise "spec-driven")
- Status (e.g., "0/5 tasks", "complete", "no tasks")
- How recently it was modified (from `lastModified` field)
Mark the most recently modified change as "(Recommended)" since it's likely what the user wants to continue.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check current status**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand current state. The response includes:
- `schemaName`: The workflow schema being used (e.g., "spec-driven")
- `artifacts`: Array of artifacts with their status ("done", "ready", "blocked")
- `isComplete`: Boolean indicating if all artifacts are complete
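Picking the first ready artifact from that JSON could be sketched as follows. The JSON below is a simplified, hypothetical example of the status output, and the grep-based extraction is illustrative only; a real JSON parser such as `jq` would be more robust:

```bash
# Sketch: select the first artifact with status "ready".
# The JSON shape here is a hypothetical, simplified example.
status_json='{"schemaName":"spec-driven","isComplete":false,"artifacts":[{"id":"proposal","status":"done"},{"id":"specs","status":"ready"},{"id":"design","status":"blocked"}]}'
next=$(printf '%s' "$status_json" \
  | grep -o '"id":"[^"]*","status":"ready"' \
  | head -n1 \
  | sed 's/"id":"\([^"]*\)".*/\1/')
echo "next artifact: $next"
```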
3. **Act based on status**:
---
**If all artifacts are complete (`isComplete: true`)**:
- Congratulate the user
- Show final status including the schema used
- Suggest: "All artifacts created! You can now implement this change or archive it."
- STOP
---
**If artifacts are ready to create** (status shows artifacts with `status: "ready"`):
- Pick the FIRST artifact with `status: "ready"` from the status output
- Get its instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- Parse the JSON. The key fields are:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- **Create the artifact file**:
- Read any completed dependency files for context
- Use `template` as the structure - fill in its sections
- Apply `context` and `rules` as constraints when writing - but do NOT copy them into the file
- Write to the output path specified in instructions
- Show what was created and what's now unlocked
- STOP after creating ONE artifact
---
**If no artifacts are ready (all blocked)**:
- This shouldn't happen with a valid schema
- Show status and suggest checking for issues
4. **After creating an artifact, show progress**
```bash
openspec status --change "<name>"
```
**Output**
After each invocation, show:
- Which artifact was created
- Schema workflow being used
- Current progress (N/M complete)
- What artifacts are now unlocked
- Prompt: "Run `/opsx:continue` to create the next artifact"
**Artifact Creation Guidelines**
The artifact types and their purpose depend on the schema. Use the `instruction` field from the instructions output to understand what to create.
Common artifact patterns:
**spec-driven schema** (proposal → specs → design → tasks):
- **proposal.md**: Ask user about the change if not clear. Fill in Why, What Changes, Capabilities, Impact.
- The Capabilities section is critical - each capability listed will need a spec file.
- **specs/<capability>/spec.md**: Create one spec per capability listed in the proposal's Capabilities section (use the capability name, not the change name).
- **design.md**: Document technical decisions, architecture, and implementation approach.
- **tasks.md**: Break down implementation into checkboxed tasks.
For other schemas, follow the `instruction` field from the CLI output.
**Guardrails**
- Create ONE artifact per invocation
- Always read dependency artifacts before creating a new one
- Never skip artifacts or create out of order
- If context is unclear, ask the user before creating
- Verify the artifact file exists after writing before marking progress
- Use the schema's artifact sequence, don't assume specific artifact names
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output

View File

@@ -0,0 +1,171 @@
---
description: Enter explore mode - think through ideas, investigate problems, clarify requirements
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first (e.g., start a change with `/opsx:new` or `/opsx:ff`). You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
**Input**: The argument after `/opsx:explore` is whatever the user wants to think about. Could be:
- A vague idea: "real-time collaboration"
- A specific problem: "the auth system is getting unwieldy"
- A change name: "add-dark-mode" (to explore in context of that change)
- A comparison: "postgres vs sqlite for this"
- Nothing (just enter explore mode)
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
If the user mentioned a specific change name, read its artifacts for context.
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create one?"
→ Can transition to `/opsx:new` or `/opsx:ff`
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Ending Exploration
There's no required ending. Exploration might:
- **Flow into action**: "Ready to start? `/opsx:new` or `/opsx:ff`"
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When things crystallize, you might offer a summary - but it's optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Exploration is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own

View File

@@ -0,0 +1,91 @@
---
description: Create a change and generate all artifacts needed for implementation in one go
---
Fast-forward through artifact creation - generate everything needed to start implementation.
**Input**: The argument after `/opsx:ff` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
1. **If no input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
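The kebab-case derivation can be sketched with standard text tools. The description below is an example input, and the exact normalization rules (lowercase, collapse non-alphanumerics to single dashes) are an assumption:

```bash
# Sketch: derive a kebab-case change name from a free-form description.
desc='Add User Authentication'
name=$(printf '%s' "$desc" \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs 'a-z0-9' '-' \
  | sed 's/^-//; s/-$//')   # trim any edge dashes
echo "$name"
```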
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "✓ Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
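The apply-ready check in step b reduces to a set-membership test: every id in `applyRequires` must appear among the done artifacts. A sketch with hypothetical artifact ids standing in for the parsed JSON:

```bash
# Sketch: check whether every applyRequires artifact is done.
# Both lists are hypothetical stand-ins for parsed status JSON.
apply_requires='tasks'
done_ids='proposal specs design tasks'
ready=yes
for id in $apply_requires; do
  case " $done_ids " in
    *" $id "*) ;;            # found among done artifacts
    *) ready=no ;;           # at least one required artifact missing
  esac
done
echo "apply-ready: $ready"
```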
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` to start implementing."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use the `template` as a starting point, filling in based on context
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to next

View File

@@ -0,0 +1,66 @@
---
description: Start a new change using the experimental artifact workflow (OPSX)
---
Start a new change using the experimental artifact-driven approach.
**Input**: The argument after `/opsx:new` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
1. **If no input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Determine the workflow schema**
Use the default schema (omit `--schema`) unless the user explicitly requests a different workflow.
**Use a different schema only if the user mentions:**
- A specific schema name → use `--schema <name>`
- "show workflows" or "what workflows" → run `openspec schemas --json` and let them choose
**Otherwise**: Omit `--schema` to use the default.
3. **Create the change directory**
```bash
openspec new change "<name>"
```
Add `--schema <name>` only if the user requested a specific workflow.
This creates a scaffolded change at `openspec/changes/<name>/` with the selected schema.
4. **Show the artifact status**
```bash
openspec status --change "<name>"
```
This shows which artifacts need to be created and which are ready (dependencies satisfied).
5. **Get instructions for the first artifact**
The first artifact depends on the schema. Check the status output to find the first artifact with status "ready".
```bash
openspec instructions <first-artifact-id> --change "<name>"
```
This outputs the template and context for creating the first artifact.
6. **STOP and wait for user direction**
**Output**
After completing the steps, summarize:
- Change name and location
- Schema/workflow being used and its artifact sequence
- Current status (0/N artifacts complete)
- The template for the first artifact
- Prompt: "Ready to create the first artifact? Run `/opsx:continue` or just describe what this change is about and I'll draft it."
**Guardrails**
- Do NOT create any artifacts yet - just show the instructions
- Do NOT advance beyond showing the first artifact template
- If the name is invalid (not kebab-case), ask for a valid name
- If a change with that name already exists, suggest using `/opsx:continue` instead
- Pass --schema if using a non-default workflow

View File

@@ -0,0 +1,522 @@
---
description: Guided onboarding - walk through a complete OpenSpec workflow cycle with narration
---
Guide the user through their first complete OpenSpec workflow cycle. This is a teaching experience—you'll do real work in their codebase while explaining each step.
---
## Preflight
Before starting, check if OpenSpec is initialized:
```bash
openspec status --json 2>&1 || echo "NOT_INITIALIZED"
```
**If not initialized:**
> OpenSpec isn't set up in this project yet. Run `openspec init` first, then come back to `/opsx:onboard`.
Stop here if not initialized.
---
## Phase 1: Welcome
Display:
```
## Welcome to OpenSpec!
I'll walk you through a complete change cycle—from idea to implementation—using a real task in your codebase. Along the way, you'll learn the workflow by doing it.
**What we'll do:**
1. Pick a small, real task in your codebase
2. Explore the problem briefly
3. Create a change (the container for our work)
4. Build the artifacts: proposal → specs → design → tasks
5. Implement the tasks
6. Archive the completed change
**Time:** ~15-20 minutes
Let's start by finding something to work on.
```
---
## Phase 2: Task Selection
### Codebase Analysis
Scan the codebase for small improvement opportunities. Look for:
1. **TODO/FIXME comments** - Search for `TODO`, `FIXME`, `HACK`, `XXX` in code files
2. **Missing error handling** - `catch` blocks that swallow errors, risky operations without try-catch
3. **Functions without tests** - Cross-reference `src/` with test directories
4. **Type issues** - `any` types in TypeScript files (`: any`, `as any`)
5. **Debug artifacts** - `console.log`, `console.debug`, `debugger` statements in non-debug code
6. **Missing validation** - User input handlers without validation
Also check recent git activity:
```bash
git log --oneline -10 2>/dev/null || echo "No git history"
```
### Present Suggestions
From your analysis, present 3-4 specific suggestions:
```
## Task Suggestions
Based on scanning your codebase, here are some good starter tasks:
**1. [Most promising task]**
Location: `src/path/to/file.ts:42`
Scope: ~1-2 files, ~20-30 lines
Why it's good: [brief reason]
**2. [Second task]**
Location: `src/another/file.ts`
Scope: ~1 file, ~15 lines
Why it's good: [brief reason]
**3. [Third task]**
Location: [location]
Scope: [estimate]
Why it's good: [brief reason]
**4. Something else?**
Tell me what you'd like to work on.
Which task interests you? (Pick a number or describe your own)
```
**If nothing found:** Fall back to asking what the user wants to build:
> I didn't find obvious quick wins in your codebase. What's something small you've been meaning to add or fix?
### Scope Guardrail
If the user picks or describes something too large (major feature, multi-day work):
```
That's a valuable task, but it's probably larger than ideal for your first OpenSpec run-through.
For learning the workflow, smaller is better—it lets you see the full cycle without getting stuck in implementation details.
**Options:**
1. **Slice it smaller** - What's the smallest useful piece of [their task]? Maybe just [specific slice]?
2. **Pick something else** - One of the other suggestions, or a different small task?
3. **Do it anyway** - If you really want to tackle this, we can. Just know it'll take longer.
What would you prefer?
```
Let the user override if they insist—this is a soft guardrail.
---
## Phase 3: Explore Demo
Once a task is selected, briefly demonstrate explore mode:
```
Before we create a change, let me quickly show you **explore mode**—it's how you think through problems before committing to a direction.
```
Spend 1-2 minutes investigating the relevant code:
- Read the file(s) involved
- Draw a quick ASCII diagram if it helps
- Note any considerations
```
## Quick Exploration
[Your brief analysis—what you found, any considerations]
┌─────────────────────────────────────────┐
│ [Optional: ASCII diagram if helpful] │
└─────────────────────────────────────────┘
Explore mode (`/opsx:explore`) is for this kind of thinking—investigating before implementing. You can use it anytime you need to think through a problem.
Now let's create a change to hold our work.
```
**PAUSE** - Wait for user acknowledgment before proceeding.
---
## Phase 4: Create the Change
**EXPLAIN:**
```
## Creating a Change
A "change" in OpenSpec is a container for all the thinking and planning around a piece of work. It lives in `openspec/changes/<name>/` and holds your artifacts—proposal, specs, design, tasks.
Let me create one for our task.
```
**DO:** Create the change with a derived kebab-case name:
```bash
openspec new change "<derived-name>"
```
**SHOW:**
```
Created: `openspec/changes/<name>/`
The folder structure:
openspec/changes/<name>/
├── proposal.md ← Why we're doing this (empty, we'll fill it)
├── design.md ← How we'll build it (empty)
├── specs/ ← Detailed requirements (empty)
└── tasks.md ← Implementation checklist (empty)
Now let's fill in the first artifact—the proposal.
```
---
## Phase 5: Proposal
**EXPLAIN:**
```
## The Proposal
The proposal captures **why** we're making this change and **what** it involves at a high level. It's the "elevator pitch" for the work.
I'll draft one based on our task.
```
**DO:** Draft the proposal content (don't save yet):
```
Here's a draft proposal:
---
## Why
[1-2 sentences explaining the problem/opportunity]
## What Changes
[Bullet points of what will be different]
## Capabilities
### New Capabilities
- `<capability-name>`: [brief description]
### Modified Capabilities
<!-- If modifying existing behavior -->
## Impact
- `src/path/to/file.ts`: [what changes]
- [other files if applicable]
---
Does this capture the intent? I can adjust before we save it.
```
**PAUSE** - Wait for user approval/feedback.
After approval, save the proposal:
```bash
openspec instructions proposal --change "<name>" --json
```
Then write the content to `openspec/changes/<name>/proposal.md`.
```
Proposal saved. This is your "why" document—you can always come back and refine it as understanding evolves.
Next up: specs.
```
---
## Phase 6: Specs
**EXPLAIN:**
```
## Specs
Specs define **what** we're building in precise, testable terms. They use a requirement/scenario format that makes expected behavior crystal clear.
For a small task like this, we might only need one spec file.
```
**DO:** Create the spec file:
```bash
mkdir -p openspec/changes/<name>/specs/<capability-name>
```
Draft the spec content:
```
Here's the spec:
---
## ADDED Requirements
### Requirement: <Name>
<Description of what the system should do>
#### Scenario: <Scenario name>
- **WHEN** <trigger condition>
- **THEN** <expected outcome>
- **AND** <additional outcome if needed>
---
This format—WHEN/THEN/AND—makes requirements testable. You can literally read them as test cases.
```
Save to `openspec/changes/<name>/specs/<capability>/spec.md`.
---
## Phase 7: Design
**EXPLAIN:**
```
## Design
The design captures **how** we'll build it—technical decisions, tradeoffs, approach.
For small changes, this might be brief. That's fine—not every change needs deep design discussion.
```
**DO:** Draft design.md:
```
Here's the design:
---
## Context
[Brief context about the current state]
## Goals / Non-Goals
**Goals:**
- [What we're trying to achieve]
**Non-Goals:**
- [What's explicitly out of scope]
## Decisions
### Decision 1: [Key decision]
[Explanation of approach and rationale]
---
For a small task, this captures the key decisions without over-engineering.
```
Save to `openspec/changes/<name>/design.md`.
---
## Phase 8: Tasks
**EXPLAIN:**
```
## Tasks
Finally, we break the work into implementation tasks—checkboxes that drive the apply phase.
These should be small, clear, and in logical order.
```
**DO:** Generate tasks based on specs and design:
```
Here are the implementation tasks:
---
## 1. [Category or file]
- [ ] 1.1 [Specific task]
- [ ] 1.2 [Specific task]
## 2. Verify
- [ ] 2.1 [Verification step]
---
Each checkbox becomes a unit of work in the apply phase. Ready to implement?
```
**PAUSE** - Wait for user to confirm they're ready to implement.
Save to `openspec/changes/<name>/tasks.md`.
---
## Phase 9: Apply (Implementation)
**EXPLAIN:**
```
## Implementation
Now we implement each task, checking them off as we go. I'll announce each one and occasionally note how the specs/design informed the approach.
```
**DO:** For each task:
1. Announce: "Working on task N: [description]"
2. Implement the change in the codebase
3. Reference specs/design naturally: "The spec says X, so I'm doing Y"
4. Mark complete in tasks.md: `- [ ]` → `- [x]`
5. Brief status: "✓ Task N complete"
Keep narration light—don't over-explain every line of code.
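Flipping a checkbox in tasks.md can be done with a targeted `sed` substitution. A sketch against a hypothetical fixture (note: `sed -i` without a suffix argument is GNU-specific):

```bash
# Sketch: mark task 1.1 complete in a hypothetical tasks.md.
dir=$(mktemp -d)
cat > "$dir/tasks.md" <<'EOF'
- [ ] 1.1 Add validation
- [ ] 1.2 Add tests
EOF
# Flip the checkbox on the line for task 1.1 only (GNU sed -i).
sed -i 's/^- \[ \] 1\.1/- [x] 1.1/' "$dir/tasks.md"
marked=$(grep -c '^- \[x\]' "$dir/tasks.md")
echo "marked=$marked"
rm -rf "$dir"
```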
After all tasks:
```
## Implementation Complete
All tasks done:
- [x] Task 1
- [x] Task 2
- [x] ...
The change is implemented! One more step—let's archive it.
```
---
## Phase 10: Archive
**EXPLAIN:**
```
## Archiving
When a change is complete, we archive it. This moves it from `openspec/changes/` to `openspec/changes/archive/YYYY-MM-DD-<name>/`.
Archived changes become your project's decision history—you can always find them later to understand why something was built a certain way.
```
**DO:**
```bash
openspec archive "<name>"
```
**SHOW:**
```
Archived to: `openspec/changes/archive/YYYY-MM-DD-<name>/`
The change is now part of your project's history. The code is in your codebase, the decision record is preserved.
```
---
## Phase 11: Recap & Next Steps
```
## Congratulations!
You just completed a full OpenSpec cycle:
1. **Explore** - Thought through the problem
2. **New** - Created a change container
3. **Proposal** - Captured WHY
4. **Specs** - Defined WHAT in detail
5. **Design** - Decided HOW
6. **Tasks** - Broke it into steps
7. **Apply** - Implemented the work
8. **Archive** - Preserved the record
This same rhythm works for any size change—a small fix or a major feature.
---
## Command Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems before/during work |
| `/opsx:new` | Start a new change, step through artifacts |
| `/opsx:ff` | Fast-forward: create all artifacts at once |
| `/opsx:continue` | Continue working on an existing change |
| `/opsx:apply` | Implement tasks from a change |
| `/opsx:verify` | Verify implementation matches artifacts |
| `/opsx:archive` | Archive a completed change |
---
## What's Next?
Try `/opsx:new` or `/opsx:ff` on something you actually want to build. You've got the rhythm now!
```
---
## Graceful Exit Handling
### User wants to stop mid-way
If the user says they need to stop, want to pause, or seem disengaged:
```
No problem! Your change is saved at `openspec/changes/<name>/`.
To pick up where we left off later:
- `/opsx:continue <name>` - Resume artifact creation
- `/opsx:apply <name>` - Jump to implementation (if tasks exist)
The work won't be lost. Come back whenever you're ready.
```
Exit gracefully without pressure.
### User just wants command reference
If the user says they just want to see the commands or skip the tutorial:
```
## OpenSpec Quick Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems (no code changes) |
| `/opsx:new <name>` | Start a new change, step by step |
| `/opsx:ff <name>` | Fast-forward: all artifacts at once |
| `/opsx:continue <name>` | Continue an existing change |
| `/opsx:apply <name>` | Implement tasks |
| `/opsx:verify <name>` | Verify implementation |
| `/opsx:archive <name>` | Archive when done |
Try `/opsx:new` to start your first change, or `/opsx:ff` if you want to move fast.
```
Exit gracefully.
---
## Guardrails
- **Follow the EXPLAIN → DO → SHOW → PAUSE pattern** at key transitions (after explore, after proposal draft, after tasks, after archive)
- **Keep narration light** during implementation—teach without lecturing
- **Don't skip phases** even if the change is small—the goal is teaching the workflow
- **Pause for acknowledgment** at marked points, but don't over-pause
- **Handle exits gracefully**—never pressure the user to continue
- **Use real codebase tasks**—don't simulate or use fake examples
- **Adjust scope gently**—guide toward smaller tasks but respect user choice

View File

@@ -0,0 +1,131 @@
---
description: Sync delta specs from a change to main specs
---
Sync delta specs from a change to main specs.
This is an **agent-driven** operation - you will read delta specs and directly edit main specs to apply the changes. This allows intelligent merging (e.g., adding a scenario without copying the entire requirement).
**Input**: Optionally specify a change name after `/opsx:sync` (e.g., `/opsx:sync add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have delta specs (under `specs/` directory).
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Find delta specs**
Look for delta spec files in `openspec/changes/<name>/specs/*/spec.md`.
Each delta spec file contains sections like:
- `## ADDED Requirements` - New requirements to add
- `## MODIFIED Requirements` - Changes to existing requirements
- `## REMOVED Requirements` - Requirements to remove
- `## RENAMED Requirements` - Requirements to rename (FROM:/TO: format)
If no delta specs found, inform user and stop.
3. **For each delta spec, apply changes to main specs**
For each capability with a delta spec at `openspec/changes/<name>/specs/<capability>/spec.md`:
a. **Read the delta spec** to understand the intended changes
b. **Read the main spec** at `openspec/specs/<capability>/spec.md` (may not exist yet)
c. **Apply changes intelligently**:
**ADDED Requirements:**
- If requirement doesn't exist in main spec → add it
- If requirement already exists → update it to match (treat as implicit MODIFIED)
**MODIFIED Requirements:**
- Find the requirement in main spec
- Apply the changes - this can be:
- Adding new scenarios (don't need to copy existing ones)
- Modifying existing scenarios
- Changing the requirement description
- Preserve scenarios/content not mentioned in the delta
**REMOVED Requirements:**
- Remove the entire requirement block from main spec
**RENAMED Requirements:**
- Find the FROM requirement, rename to TO
d. **Create new main spec** if capability doesn't exist yet:
- Create `openspec/specs/<capability>/spec.md`
- Add Purpose section (can be brief, mark as TBD)
- Add Requirements section with the ADDED requirements
4. **Show summary**
After applying all changes, summarize:
- Which capabilities were updated
- What changes were made (requirements added/modified/removed/renamed)
**Delta Spec Format Reference**
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL do something new.
#### Scenario: Basic case
- **WHEN** user does X
- **THEN** system does Y
## MODIFIED Requirements
### Requirement: Existing Feature
#### Scenario: New scenario to add
- **WHEN** user does A
- **THEN** system does B
## REMOVED Requirements
### Requirement: Deprecated Feature
## RENAMED Requirements
- FROM: `### Requirement: Old Name`
- TO: `### Requirement: New Name`
```
**Key Principle: Intelligent Merging**
Unlike programmatic merging, you can apply **partial updates**:
- To add a scenario, just include that scenario under MODIFIED - don't copy existing scenarios
- The delta represents *intent*, not a wholesale replacement
- Use your judgment to merge changes sensibly
**Output On Success**
```
## Specs Synced: <change-name>
Updated main specs:
**<capability-1>**:
- Added requirement: "New Feature"
- Modified requirement: "Existing Feature" (added 1 scenario)
**<capability-2>**:
- Created new spec file
- Added requirement: "Another Feature"
Main specs are now updated. The change remains active - archive when implementation is complete.
```
**Guardrails**
- Read both delta and main specs before making changes
- Preserve existing content not mentioned in delta
- If something is unclear, ask for clarification
- Show what you're changing as you go
- The operation should be idempotent - running twice should give same result

View File

@@ -0,0 +1,161 @@
---
description: Verify implementation matches change artifacts before archiving
---
Verify that an implementation matches the change artifacts (specs, tasks, design).
**Input**: Optionally specify a change name after `/opsx:verify` (e.g., `/opsx:verify add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have implementation tasks (tasks artifact exists).
Include the schema used for each change if available.
Mark changes with incomplete tasks as "(In Progress)".
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifacts exist for this change
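A minimal sketch of that parsing, using `python3` since `jq` may not be installed (the inline payload is illustrative — in practice pipe the real `openspec status ... --json` output):

```bash
# Sketch: pull schemaName and artifact statuses from status JSON.
# The sample payload stands in for real `openspec status` output.
json='{"schemaName":"spec-driven","artifacts":[{"id":"tasks","status":"done"}]}'
echo "$json" | python3 -c '
import json, sys
d = json.load(sys.stdin)
print(d["schemaName"])
for a in d["artifacts"]:
    print(a["id"] + ": " + a["status"])
'
```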
3. **Get the change directory and load artifacts**
```bash
openspec instructions apply --change "<name>" --json
```
This returns the change directory and context files. Read all available artifacts from `contextFiles`.
4. **Initialize verification report structure**
Create a report structure with three dimensions:
- **Completeness**: Track tasks and spec coverage
- **Correctness**: Track requirement implementation and scenario coverage
- **Coherence**: Track design adherence and pattern consistency
Each dimension can have CRITICAL, WARNING, or SUGGESTION issues.
5. **Verify Completeness**
**Task Completion**:
- If tasks.md exists in contextFiles, read it
- Parse checkboxes: `- [ ]` (incomplete) vs `- [x]` (complete)
- Count complete vs total tasks
- If incomplete tasks exist:
- Add CRITICAL issue for each incomplete task
- Recommendation: "Complete task: <description>" or "Mark as done if already implemented"
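The checkbox count above can be sketched in shell (a temp file stands in for the real tasks.md from `contextFiles`):

```bash
# Sketch: count complete vs total checkboxes in a tasks file.
# The temp file and its contents are illustrative.
tasks=$(mktemp)
printf -- '- [x] 1.1 Add model\n- [x] 1.2 Add handler\n- [ ] 2.1 Verify\n' > "$tasks"
total=$(grep -c '^[[:space:]]*- \[[ x]\]' "$tasks")
done_count=$(grep -c '^[[:space:]]*- \[x\]' "$tasks")
echo "$done_count/$total tasks complete"   # 2/3 tasks complete
```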
**Spec Coverage**:
- If delta specs exist in `openspec/changes/<name>/specs/`:
- Extract all requirements (marked with "### Requirement:")
- For each requirement:
- Search codebase for keywords related to the requirement
- Assess if implementation likely exists
- If requirements appear unimplemented:
- Add CRITICAL issue: "Requirement not found: <requirement name>"
- Recommendation: "Implement requirement X: <description>"
6. **Verify Correctness**
**Requirement Implementation Mapping**:
- For each requirement from delta specs:
- Search codebase for implementation evidence
- If found, note file paths and line ranges
- Assess if implementation matches requirement intent
- If divergence detected:
- Add WARNING: "Implementation may diverge from spec: <details>"
- Recommendation: "Review <file>:<lines> against requirement X"
**Scenario Coverage**:
- For each scenario in delta specs (marked with "#### Scenario:"):
- Check if conditions are handled in code
- Check if tests exist covering the scenario
- If scenario appears uncovered:
- Add WARNING: "Scenario not covered: <scenario name>"
- Recommendation: "Add test or implementation for scenario: <description>"
7. **Verify Coherence**
**Design Adherence**:
- If design.md exists in contextFiles:
- Extract key decisions (look for sections like "Decision:", "Approach:", "Architecture:")
- Verify implementation follows those decisions
- If contradiction detected:
- Add WARNING: "Design decision not followed: <decision>"
- Recommendation: "Update implementation or revise design.md to match reality"
- If no design.md: Skip design adherence check, note "No design.md to verify against"
**Code Pattern Consistency**:
- Review new code for consistency with project patterns
- Check file naming, directory structure, coding style
- If significant deviations found:
- Add SUGGESTION: "Code pattern deviation: <details>"
- Recommendation: "Consider following project pattern: <example>"
8. **Generate Verification Report**
**Summary Scorecard**:
```
## Verification Report: <change-name>
### Summary
| Dimension    | Status            |
|--------------|-------------------|
| Completeness | X/Y tasks, N reqs |
| Correctness  | M/N reqs covered  |
| Coherence    | Followed/Issues   |
```
**Issues by Priority**:
1. **CRITICAL** (Must fix before archive):
- Incomplete tasks
- Missing requirement implementations
- Each with specific, actionable recommendation
2. **WARNING** (Should fix):
- Spec/design divergences
- Missing scenario coverage
- Each with specific recommendation
3. **SUGGESTION** (Nice to fix):
- Pattern inconsistencies
- Minor improvements
- Each with specific recommendation
**Final Assessment**:
- If CRITICAL issues: "X critical issue(s) found. Fix before archiving."
- If only warnings: "No critical issues. Y warning(s) to consider. Ready for archive (with noted improvements)."
- If all clear: "All checks passed. Ready for archive."
**Verification Heuristics**
- **Completeness**: Focus on objective checklist items (checkboxes, requirements list)
- **Correctness**: Use keyword search, file path analysis, reasonable inference - don't require perfect certainty
- **Coherence**: Look for glaring inconsistencies, don't nitpick style
- **False Positives**: When uncertain, prefer SUGGESTION over WARNING, WARNING over CRITICAL
- **Actionability**: Every issue must have a specific recommendation with file/line references where applicable
**Graceful Degradation**
- If only tasks.md exists: verify task completion only, skip spec/design checks
- If tasks + specs exist: verify completeness and correctness, skip design
- If full artifacts: verify all three dimensions
- Always note which checks were skipped and why
**Output Format**
Use clear markdown with:
- Table for summary scorecard
- Grouped lists for issues (CRITICAL/WARNING/SUGGESTION)
- Code references in format: `file.ts:123`
- Specific, actionable recommendations
- No vague suggestions like "consider reviewing"

View File

@@ -0,0 +1,156 @@
---
name: openspec-apply-change
description: Implement tasks from an OpenSpec change. Use when the user wants to start implementing, continue implementation, or work through tasks.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt for available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema - could be proposal/specs/design/tasks or spec/tests/implementation/docs)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using openspec-continue-change
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
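Flipping a single checkbox can be a targeted substitution — a sketch (GNU `sed -i` shown; on BSD/macOS use `sed -i ''`; the temp file and task text are illustrative):

```bash
# Sketch: mark one task complete by matching its text.
# A temp file stands in for openspec/changes/<name>/tasks.md.
tasks=$(mktemp)
printf -- '- [x] 1.1 Add model\n- [ ] 1.2 Add handler\n' > "$tasks"
sed -i 's/^\([[:space:]]*\)- \[ \] \(1\.2 Add handler\)$/\1- [x] \2/' "$tasks"
grep -c -- '- \[x\]' "$tasks"   # 2
```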
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! Ready to archive this change.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly

View File

@@ -0,0 +1,114 @@
---
name: openspec-archive-change
description: Archive a completed change in the experimental workflow. Use when the user wants to finalize and archive a change after implementation is complete.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If user chooses sync, execute /opsx:sync logic (use the openspec-sync-specs skill). Proceed to archive regardless of choice.
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
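The existence check plus move can be sketched as follows (the temp workspace and change name are illustrative — in practice run from the project root without the `$root` prefix):

```bash
# Sketch: dated archive target with a clean failure if it exists.
root=$(mktemp -d)                      # stand-in for the project root
name="my-change"
mkdir -p "$root/openspec/changes/$name"
target="$root/openspec/changes/archive/$(date +%F)-$name"
if [ -e "$target" ]; then
  echo "error: $target already exists" >&2
else
  mkdir -p "$root/openspec/changes/archive"
  mv "$root/openspec/changes/$name" "$target"
  echo "archived: $target"
fi
```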
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Whether specs were synced (if applicable)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs (or "No delta specs" or "Sync skipped")
All artifacts complete. All tasks complete.
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use openspec-sync-specs approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting

View File

@@ -0,0 +1,246 @@
---
name: openspec-bulk-archive-change
description: Archive multiple completed changes at once. Use when archiving several parallel changes.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Archive multiple completed changes in a single operation.
This skill allows you to batch-archive changes, handling spec conflicts intelligently by checking the codebase to determine what's actually implemented.
**Input**: None required (prompts for selection)
**Steps**
1. **Get active changes**
Run `openspec list --json` to get all active changes.
If no active changes exist, inform user and stop.
2. **Prompt for change selection**
Use **AskUserQuestion tool** with multi-select to let user choose changes:
- Show each change with its schema
- Include an option for "All changes"
- Allow any number of selections (1+ works, 2+ is the typical use case)
**IMPORTANT**: Do NOT auto-select. Always let the user choose.
3. **Batch validation - gather status for all selected changes**
For each selected change, collect:
a. **Artifact status** - Run `openspec status --change "<name>" --json`
- Parse `schemaName` and `artifacts` list
- Note which artifacts are `done` vs other states
b. **Task completion** - Read `openspec/changes/<name>/tasks.md`
- Count `- [ ]` (incomplete) vs `- [x]` (complete)
- If no tasks file exists, note as "No tasks"
c. **Delta specs** - Check `openspec/changes/<name>/specs/` directory
- List which capability specs exist
- For each, extract requirement names (lines matching `### Requirement: <name>`)
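Extracting those requirement names is a simple grep — a sketch (a temp layout stands in for `openspec/changes/<name>/specs/`):

```bash
# Sketch: list requirement names declared in a change's delta specs.
# The temp layout and requirement names are illustrative.
base=$(mktemp -d)
mkdir -p "$base/specs/auth" "$base/specs/api"
printf '### Requirement: OAuth Provider Integration\n' > "$base/specs/auth/spec.md"
printf '### Requirement: REST Endpoints\n' > "$base/specs/api/spec.md"
grep -h '^### Requirement:' "$base"/specs/*/spec.md | sed 's/^### Requirement: //'
```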
4. **Detect spec conflicts**
Build a map of `capability -> [changes that touch it]`:
```
auth -> [change-a, change-b] <- CONFLICT (2+ changes)
api -> [change-c] <- OK (only 1 change)
```
A conflict exists when 2+ selected changes have delta specs for the same capability.
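Building that map can be sketched in shell (the temp layout is illustrative; real paths are `openspec/changes/<change>/specs/<capability>/`):

```bash
# Sketch: capability -> changes map, flagging 2+ changes as conflicts.
base=$(mktemp -d)                      # stand-in for openspec/changes
mkdir -p "$base/change-a/specs/auth" "$base/change-b/specs/auth" \
         "$base/change-c/specs/api"
report=$(for d in "$base"/*/specs/*/; do
  change=$(basename "$(dirname "$(dirname "$d")")")
  capability=$(basename "$d")
  echo "$capability $change"
done | awk '{caps[$1] = caps[$1] " " $2; n[$1]++}
  END {for (c in caps) print c " ->" caps[c] (n[c] > 1 ? "  <- CONFLICT" : "")}')
echo "$report"
```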
5. **Resolve conflicts agentically**
**For each conflict**, investigate the codebase:
a. **Read the delta specs** from each conflicting change to understand what each claims to add/modify
b. **Search the codebase** for implementation evidence:
- Look for code implementing requirements from each delta spec
- Check for related files, functions, or tests
c. **Determine resolution**:
- If only one change is actually implemented -> sync that one's specs
- If both implemented -> apply in chronological order (older first, newer overwrites)
- If neither implemented -> skip spec sync, warn user
d. **Record resolution** for each conflict:
- Which change's specs to apply
- In what order (if both)
- Rationale (what was found in codebase)
6. **Show consolidated status table**
Display a table summarizing all changes:
```
| Change | Artifacts | Tasks | Specs | Conflicts | Status |
|---------------------|-----------|-------|---------|-----------|--------|
| schema-management | Done | 5/5 | 2 delta | None | Ready |
| project-config | Done | 3/3 | 1 delta | None | Ready |
| add-oauth | Done | 4/4 | 1 delta | auth (!) | Ready* |
| add-verify-skill | 1 left | 2/5 | None | None | Warn |
```
For conflicts, show the resolution:
```
* Conflict resolution:
- auth spec: Will apply add-oauth then add-jwt (both implemented, chronological order)
```
For incomplete changes, show warnings:
```
Warnings:
- add-verify-skill: 1 incomplete artifact, 3 incomplete tasks
```
7. **Confirm batch operation**
Use **AskUserQuestion tool** with a single confirmation:
- "Archive N changes?" with options based on status
- Options might include:
- "Archive all N changes"
- "Archive only N ready changes (skip incomplete)"
- "Cancel"
If there are incomplete changes, make clear they'll be archived with warnings.
8. **Execute archive for each confirmed change**
Process changes in the determined order (respecting conflict resolution):
a. **Sync specs** if delta specs exist:
- Use the openspec-sync-specs approach (agent-driven intelligent merge)
- For conflicts, apply in resolved order
- Track if sync was done
b. **Perform the archive**:
```bash
mkdir -p openspec/changes/archive
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
c. **Track outcome** for each change:
- Success: archived successfully
- Failed: error during archive (record error)
- Skipped: user chose not to archive (if applicable)
9. **Display summary**
Show final results:
```
## Bulk Archive Complete
Archived 3 changes:
- schema-management-cli -> archive/2026-01-19-schema-management-cli/
- project-config -> archive/2026-01-19-project-config/
- add-oauth -> archive/2026-01-19-add-oauth/
Skipped 1 change:
- add-verify-skill (user chose not to archive incomplete)
Spec sync summary:
- 4 delta specs synced to main specs
- 1 conflict resolved (auth: applied both in chronological order)
```
If any failures:
```
Failed 1 change:
- some-change: Archive directory already exists
```
**Conflict Resolution Examples**
Example 1: Only one implemented
```
Conflict: specs/auth/spec.md touched by [add-oauth, add-jwt]
Checking add-oauth:
- Delta adds "OAuth Provider Integration" requirement
- Searching codebase... found src/auth/oauth.ts implementing OAuth flow
Checking add-jwt:
- Delta adds "JWT Token Handling" requirement
- Searching codebase... no JWT implementation found
Resolution: Only add-oauth is implemented. Will sync add-oauth specs only.
```
Example 2: Both implemented
```
Conflict: specs/api/spec.md touched by [add-rest-api, add-graphql]
Checking add-rest-api (created 2026-01-10):
- Delta adds "REST Endpoints" requirement
- Searching codebase... found src/api/rest.ts
Checking add-graphql (created 2026-01-15):
- Delta adds "GraphQL Schema" requirement
- Searching codebase... found src/api/graphql.ts
Resolution: Both implemented. Will apply add-rest-api specs first,
then add-graphql specs (chronological order, newer takes precedence).
```
**Output On Success**
```
## Bulk Archive Complete
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
- <change-2> -> archive/YYYY-MM-DD-<change-2>/
Spec sync summary:
- N delta specs synced to main specs
- No conflicts (or: M conflicts resolved)
```
**Output On Partial Success**
```
## Bulk Archive Complete (partial)
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
Skipped M changes:
- <change-2> (user chose not to archive incomplete)
Failed K changes:
- <change-3>: Archive directory already exists
```
**Output When No Changes**
```
## No Changes to Archive
No active changes found. Use `/opsx:new` to create a new change.
```
**Guardrails**
- Allow any number of changes (1+ is fine, 2+ is the typical use case)
- Always prompt for selection, never auto-select
- Detect spec conflicts early and resolve by checking codebase
- When both changes are implemented, apply specs in chronological order
- Skip spec sync only when implementation is missing (warn user)
- Show clear per-change status before confirming
- Use single confirmation for entire batch
- Track and report all outcomes (success/skip/fail)
- Preserve .openspec.yaml when moving to archive
- Archive directory target uses current date: YYYY-MM-DD-<name>
- If archive target exists, fail that change but continue with others

View File

@@ -0,0 +1,118 @@
---
name: openspec-continue-change
description: Continue working on an OpenSpec change by creating the next artifact. Use when the user wants to progress their change, create the next artifact, or continue their workflow.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Continue working on a change by creating the next artifact.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes sorted by most recently modified. Then use the **AskUserQuestion tool** to let the user select which change to work on.
Present the top 3-4 most recently modified changes as options, showing:
- Change name
- Schema (from `schema` field if present, otherwise "spec-driven")
- Status (e.g., "0/5 tasks", "complete", "no tasks")
- How recently it was modified (from `lastModified` field)
Mark the most recently modified change as "(Recommended)" since it's likely what the user wants to continue.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check current status**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand current state. The response includes:
- `schemaName`: The workflow schema being used (e.g., "spec-driven")
- `artifacts`: Array of artifacts with their status ("done", "ready", "blocked")
- `isComplete`: Boolean indicating if all artifacts are complete
3. **Act based on status**:
---
**If all artifacts are complete (`isComplete: true`)**:
- Congratulate the user
- Show final status including the schema used
- Suggest: "All artifacts created! You can now implement this change or archive it."
- STOP
---
**If artifacts are ready to create** (status shows artifacts with `status: "ready"`):
- Pick the FIRST artifact with `status: "ready"` from the status output
- Get its instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- Parse the JSON. The key fields are:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- **Create the artifact file**:
- Read any completed dependency files for context
- Use `template` as the structure - fill in its sections
- Apply `context` and `rules` as constraints when writing - but do NOT copy them into the file
- Write to the output path specified in instructions
- Show what was created and what's now unlocked
- STOP after creating ONE artifact
---
**If no artifacts are ready (all blocked)**:
- This shouldn't happen with a valid schema
- Show status and suggest checking for issues
4. **After creating an artifact, show progress**
```bash
openspec status --change "<name>"
```
**Output**
After each invocation, show:
- Which artifact was created
- Schema workflow being used
- Current progress (N/M complete)
- What artifacts are now unlocked
- Prompt: "Want to continue? Just ask me to continue or tell me what to do next."
**Artifact Creation Guidelines**
The artifact types and their purpose depend on the schema. Use the `instruction` field from the instructions output to understand what to create.
Common artifact patterns:
**spec-driven schema** (proposal → specs → design → tasks):
- **proposal.md**: Ask user about the change if not clear. Fill in Why, What Changes, Capabilities, Impact.
- The Capabilities section is critical - each capability listed will need a spec file.
- **specs/<capability>/spec.md**: Create one spec per capability listed in the proposal's Capabilities section (use the capability name, not the change name).
- **design.md**: Document technical decisions, architecture, and implementation approach.
- **tasks.md**: Break down implementation into checkboxed tasks.
For other schemas, follow the `instruction` field from the CLI output.
**Guardrails**
- Create ONE artifact per invocation
- Always read dependency artifacts before creating a new one
- Never skip artifacts or create out of order
- If context is unclear, ask the user before creating
- Verify the artifact file exists after writing before marking progress
- Use the schema's artifact sequence, don't assume specific artifact names
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output


@@ -0,0 +1,290 @@
---
name: openspec-explore
description: Enter explore mode - a thinking partner for exploring ideas, investigating problems, and clarifying requirements. Use when the user wants to think through something before or during a change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first (e.g., start a change with `/opsx:new` or `/opsx:ff`). You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create one?"
→ Can transition to `/opsx:new` or `/opsx:ff`
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Handling Different Entry Points
**User brings a vague idea:**
```
User: I'm thinking about adding real-time collaboration
You: Real-time collab is a big space. Let me think about this...
COLLABORATION SPECTRUM
════════════════════════════════════════════
Awareness Coordination Sync
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Presence│ │Cursors │ │ CRDT │
│ "3 │ │ Multi │ │Conflict│
│online" │ │ select │ │ free │
└────────┘ └────────┘ └────────┘
│ │ │
trivial moderate complex
Where's your head at?
```
**User brings a specific problem:**
```
User: The auth system is a mess
You: [reads codebase]
┌─────────────────────────────────────────────┐
│ CURRENT AUTH FLOW │
└─────────────────────────────────────────────┘
┌─────────────┼─────────────┐
▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐
│ Google │ │ GitHub │ │ Email │
│ OAuth │ │ OAuth │ │ Magic │
└────┬────┘ └────┬────┘ └────┬────┘
│ │ │
└─────────────┼─────────────┘
┌───────────┐
│ Session │
└─────┬─────┘
┌───────────┐
│ Perms │
└───────────┘
I see three tangles. Which one's burning?
```
**User is stuck mid-implementation:**
```
User: /opsx:explore add-auth-system
The OAuth integration is more complex than expected
You: [reads change artifacts]
You're on task 4: "Implement OAuth flow"
Let me trace what's involved...
[draws diagram, explores options, suggests paths]
Want to update the design to reflect this?
Or add a spike task to investigate?
```
**User wants to compare options:**
```
User: Should we use Postgres or SQLite?
You: Generic answer is boring. What's the context?
User: A CLI tool that tracks local dev environments
You: That changes everything.
┌─────────────────────────────────────────────────┐
│ CLI TOOL DATA STORAGE │
└─────────────────────────────────────────────────┘
Key constraints:
• No daemon running
• Must work offline
• Single user
SQLite Postgres
Deployment embedded ✓ needs server ✗
Offline yes ✓ no ✗
Single file yes ✓ no ✗
SQLite. Not even close.
Unless... is there a sync component?
```
---
## Ending Discovery
There's no required ending. Discovery might:
- **Flow into action**: "Ready to start? /opsx:new or /opsx:ff"
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When it feels like things are crystallizing, you might summarize:
```
## What We Figured Out
**The problem**: [crystallized understanding]
**The approach**: [if one emerged]
**Open questions**: [if any remain]
**Next steps** (if ready):
- Create a change: /opsx:new <name>
- Fast-forward to tasks: /opsx:ff <name>
- Keep exploring: just keep talking
```
But this summary is optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own


@@ -0,0 +1,101 @@
---
name: openspec-ff-change
description: Fast-forward through OpenSpec artifact creation. Use when the user wants to quickly create all artifacts needed for implementation without stepping through each one individually.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Fast-forward through artifact creation - generate everything needed to start implementation in one go.
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
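A mechanical version of the name derivation can be sketched in shell. This is only a sketch — the description string is illustrative, and unlike the example above (which shortens "authentication" to "auth") it does not abbreviate words; that judgment stays with the agent.

```shell
# Sketch: derive a kebab-case change name from a free-text description (input illustrative)
description="add user authentication"
name=$(printf '%s' "$description" \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs 'a-z0-9' '-' \
  | sed 's/^-*//; s/-*$//')
echo "$name"
```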
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "✓ Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
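The apply-ready check in step 4b can be sketched as below. The status payload is illustrative and `python3` is assumed available; only the `applyRequires` and `artifacts` key names come from the CLI contract described above.

```shell
# Sketch: decide whether every applyRequires artifact is done (payload illustrative)
status='{"applyRequires":["specs","tasks"],"artifacts":[{"id":"specs","status":"done"},{"id":"tasks","status":"done"},{"id":"design","status":"ready"}]}'
printf '%s' "$status" | python3 -c '
import json, sys
d = json.load(sys.stdin)
# Collect the ids of completed artifacts, then check applyRequires is a subset
done = {a["id"] for a in d["artifacts"] if a["status"] == "done"}
print("apply-ready" if set(d["applyRequires"]) <= done else "not-ready")
'
```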
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` or ask me to implement to start working on the tasks."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by the schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, suggest continuing that change instead
- Verify each artifact file exists after writing before proceeding to next


@@ -0,0 +1,74 @@
---
name: openspec-new-change
description: Start a new OpenSpec change using the experimental artifact workflow. Use when the user wants to create a new feature, fix, or modification with a structured step-by-step approach.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Start a new change using the experimental artifact-driven approach.
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Determine the workflow schema**
Use the default schema (omit `--schema`) unless the user explicitly requests a different workflow.
**Use a different schema only if the user mentions:**
- A specific schema name → use `--schema <name>`
- "show workflows" or "what workflows" → run `openspec schemas --json` and let them choose
**Otherwise**: Omit `--schema` to use the default.
3. **Create the change directory**
```bash
openspec new change "<name>"
```
Add `--schema <name>` only if the user requested a specific workflow.
This creates a scaffolded change at `openspec/changes/<name>/` with the selected schema.
4. **Show the artifact status**
```bash
openspec status --change "<name>"
```
This shows which artifacts need to be created and which are ready (dependencies satisfied).
5. **Get instructions for the first artifact**
The first artifact depends on the schema (e.g., `proposal` for spec-driven).
Check the status output to find the first artifact with status "ready".
```bash
openspec instructions <first-artifact-id> --change "<name>"
```
This outputs the template and context for creating the first artifact.
6. **STOP and wait for user direction**
**Output**
After completing the steps, summarize:
- Change name and location
- Schema/workflow being used and its artifact sequence
- Current status (0/N artifacts complete)
- The template for the first artifact
- Prompt: "Ready to create the first artifact? Just describe what this change is about and I'll draft it, or ask me to continue."
**Guardrails**
- Do NOT create any artifacts yet - just show the instructions
- Do NOT advance beyond showing the first artifact template
- If the name is invalid (not kebab-case), ask for a valid name
- If a change with that name already exists, suggest continuing that change instead
- Pass --schema if using a non-default workflow


@@ -0,0 +1,529 @@
---
name: openspec-onboard
description: Guided onboarding for OpenSpec - walk through a complete workflow cycle with narration and real codebase work.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Guide the user through their first complete OpenSpec workflow cycle. This is a teaching experience—you'll do real work in their codebase while explaining each step.
---
## Preflight
Before starting, check if OpenSpec is initialized:
```bash
openspec status --json 2>&1 || echo "NOT_INITIALIZED"
```
**If not initialized:**
> OpenSpec isn't set up in this project yet. Run `openspec init` first, then come back to `/opsx:onboard`.
Stop here if not initialized.
---
## Phase 1: Welcome
Display:
```
## Welcome to OpenSpec!
I'll walk you through a complete change cycle—from idea to implementation—using a real task in your codebase. Along the way, you'll learn the workflow by doing it.
**What we'll do:**
1. Pick a small, real task in your codebase
2. Explore the problem briefly
3. Create a change (the container for our work)
4. Build the artifacts: proposal → specs → design → tasks
5. Implement the tasks
6. Archive the completed change
**Time:** ~15-20 minutes
Let's start by finding something to work on.
```
---
## Phase 2: Task Selection
### Codebase Analysis
Scan the codebase for small improvement opportunities. Look for:
1. **TODO/FIXME comments** - Search for `TODO`, `FIXME`, `HACK`, `XXX` in code files
2. **Missing error handling** - `catch` blocks that swallow errors, risky operations without try-catch
3. **Functions without tests** - Cross-reference `src/` with test directories
4. **Type issues** - `any` types in TypeScript files (`: any`, `as any`)
5. **Debug artifacts** - `console.log`, `console.debug`, `debugger` statements in non-debug code
6. **Missing validation** - User input handlers without validation
Also check recent git activity:
```bash
git log --oneline -10 2>/dev/null || echo "No git history"
```
### Present Suggestions
From your analysis, present 3-4 specific suggestions:
```
## Task Suggestions
Based on scanning your codebase, here are some good starter tasks:
**1. [Most promising task]**
Location: `src/path/to/file.ts:42`
Scope: ~1-2 files, ~20-30 lines
Why it's good: [brief reason]
**2. [Second task]**
Location: `src/another/file.ts`
Scope: ~1 file, ~15 lines
Why it's good: [brief reason]
**3. [Third task]**
Location: [location]
Scope: [estimate]
Why it's good: [brief reason]
**4. Something else?**
Tell me what you'd like to work on.
Which task interests you? (Pick a number or describe your own)
```
**If nothing found:** Fall back to asking what the user wants to build:
> I didn't find obvious quick wins in your codebase. What's something small you've been meaning to add or fix?
### Scope Guardrail
If the user picks or describes something too large (major feature, multi-day work):
```
That's a valuable task, but it's probably larger than ideal for your first OpenSpec run-through.
For learning the workflow, smaller is better—it lets you see the full cycle without getting stuck in implementation details.
**Options:**
1. **Slice it smaller** - What's the smallest useful piece of [their task]? Maybe just [specific slice]?
2. **Pick something else** - One of the other suggestions, or a different small task?
3. **Do it anyway** - If you really want to tackle this, we can. Just know it'll take longer.
What would you prefer?
```
Let the user override if they insist—this is a soft guardrail.
---
## Phase 3: Explore Demo
Once a task is selected, briefly demonstrate explore mode:
```
Before we create a change, let me quickly show you **explore mode**—it's how you think through problems before committing to a direction.
```
Spend 1-2 minutes investigating the relevant code:
- Read the file(s) involved
- Draw a quick ASCII diagram if it helps
- Note any considerations
```
## Quick Exploration
[Your brief analysis—what you found, any considerations]
┌─────────────────────────────────────────┐
│ [Optional: ASCII diagram if helpful] │
└─────────────────────────────────────────┘
Explore mode (`/opsx:explore`) is for this kind of thinking—investigating before implementing. You can use it anytime you need to think through a problem.
Now let's create a change to hold our work.
```
**PAUSE** - Wait for user acknowledgment before proceeding.
---
## Phase 4: Create the Change
**EXPLAIN:**
```
## Creating a Change
A "change" in OpenSpec is a container for all the thinking and planning around a piece of work. It lives in `openspec/changes/<name>/` and holds your artifacts—proposal, specs, design, tasks.
Let me create one for our task.
```
**DO:** Create the change with a derived kebab-case name:
```bash
openspec new change "<derived-name>"
```
**SHOW:**
```
Created: `openspec/changes/<name>/`
The folder structure:
openspec/changes/<name>/
├── proposal.md ← Why we're doing this (empty, we'll fill it)
├── design.md ← How we'll build it (empty)
├── specs/ ← Detailed requirements (empty)
└── tasks.md ← Implementation checklist (empty)
Now let's fill in the first artifact—the proposal.
```
---
## Phase 5: Proposal
**EXPLAIN:**
```
## The Proposal
The proposal captures **why** we're making this change and **what** it involves at a high level. It's the "elevator pitch" for the work.
I'll draft one based on our task.
```
**DO:** Draft the proposal content (don't save yet):
```
Here's a draft proposal:
---
## Why
[1-2 sentences explaining the problem/opportunity]
## What Changes
[Bullet points of what will be different]
## Capabilities
### New Capabilities
- `<capability-name>`: [brief description]
### Modified Capabilities
<!-- If modifying existing behavior -->
## Impact
- `src/path/to/file.ts`: [what changes]
- [other files if applicable]
---
Does this capture the intent? I can adjust before we save it.
```
**PAUSE** - Wait for user approval/feedback.
After approval, save the proposal:
```bash
openspec instructions proposal --change "<name>" --json
```
Then write the content to `openspec/changes/<name>/proposal.md`.
```
Proposal saved. This is your "why" document—you can always come back and refine it as understanding evolves.
Next up: specs.
```
---
## Phase 6: Specs
**EXPLAIN:**
```
## Specs
Specs define **what** we're building in precise, testable terms. They use a requirement/scenario format that makes expected behavior crystal clear.
For a small task like this, we might only need one spec file.
```
**DO:** Create the spec file:
```bash
mkdir -p openspec/changes/<name>/specs/<capability>
```
Draft the spec content:
```
Here's the spec:
---
## ADDED Requirements
### Requirement: <Name>
<Description of what the system should do>
#### Scenario: <Scenario name>
- **WHEN** <trigger condition>
- **THEN** <expected outcome>
- **AND** <additional outcome if needed>
---
This format—WHEN/THEN/AND—makes requirements testable. You can literally read them as test cases.
```
Save to `openspec/changes/<name>/specs/<capability>/spec.md`.
---
## Phase 7: Design
**EXPLAIN:**
```
## Design
The design captures **how** we'll build it—technical decisions, tradeoffs, approach.
For small changes, this might be brief. That's fine—not every change needs deep design discussion.
```
**DO:** Draft design.md:
```
Here's the design:
---
## Context
[Brief context about the current state]
## Goals / Non-Goals
**Goals:**
- [What we're trying to achieve]
**Non-Goals:**
- [What's explicitly out of scope]
## Decisions
### Decision 1: [Key decision]
[Explanation of approach and rationale]
---
For a small task, this captures the key decisions without over-engineering.
```
Save to `openspec/changes/<name>/design.md`.
---
## Phase 8: Tasks
**EXPLAIN:**
```
## Tasks
Finally, we break the work into implementation tasks—checkboxes that drive the apply phase.
These should be small, clear, and in logical order.
```
**DO:** Generate tasks based on specs and design:
```
Here are the implementation tasks:
---
## 1. [Category or file]
- [ ] 1.1 [Specific task]
- [ ] 1.2 [Specific task]
## 2. Verify
- [ ] 2.1 [Verification step]
---
Each checkbox becomes a unit of work in the apply phase. Ready to implement?
```
**PAUSE** - Wait for user to confirm they're ready to implement.
Save to `openspec/changes/<name>/tasks.md`.
---
## Phase 9: Apply (Implementation)
**EXPLAIN:**
```
## Implementation
Now we implement each task, checking them off as we go. I'll announce each one and occasionally note how the specs/design informed the approach.
```
**DO:** For each task:
1. Announce: "Working on task N: [description]"
2. Implement the change in the codebase
3. Reference specs/design naturally: "The spec says X, so I'm doing Y"
4. Mark complete in tasks.md: `- [ ]` → `- [x]`
5. Brief status: "✓ Task N complete"
Keep narration light—don't over-explain every line of code.
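Flipping a checkbox in tasks.md (step 4) can be done with a small in-place edit. This is a sketch — the file contents and task id are illustrative, and `sed -i` is the GNU form (BSD/macOS sed needs `-i ''`).

```shell
# Sketch: mark task 1.1 complete in a tasks file (contents and task id illustrative)
tasks=$(mktemp)
printf -- '- [ ] 1.1 Add validation\n- [ ] 1.2 Add tests\n' > "$tasks"
# GNU sed in-place edit; anchors on the exact task number
sed -i 's/^- \[ \] 1\.1/- [x] 1.1/' "$tasks"
grep -F -- '- [x] 1.1' "$tasks"
rm -f "$tasks"
```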
After all tasks:
```
## Implementation Complete
All tasks done:
- [x] Task 1
- [x] Task 2
- [x] ...
The change is implemented! One more step—let's archive it.
```
---
## Phase 10: Archive
**EXPLAIN:**
```
## Archiving
When a change is complete, we archive it. This moves it from `openspec/changes/` to `openspec/changes/archive/YYYY-MM-DD-<name>/`.
Archived changes become your project's decision history—you can always find them later to understand why something was built a certain way.
```
**DO:**
```bash
openspec archive "<name>"
```
**SHOW:**
```
Archived to: `openspec/changes/archive/YYYY-MM-DD-<name>/`
The change is now part of your project's history. The code is in your codebase, the decision record is preserved.
```
---
## Phase 11: Recap & Next Steps
```
## Congratulations!
You just completed a full OpenSpec cycle:
1. **Explore** - Thought through the problem
2. **New** - Created a change container
3. **Proposal** - Captured WHY
4. **Specs** - Defined WHAT in detail
5. **Design** - Decided HOW
6. **Tasks** - Broke it into steps
7. **Apply** - Implemented the work
8. **Archive** - Preserved the record
This same rhythm works for any size change—a small fix or a major feature.
---
## Command Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems before/during work |
| `/opsx:new` | Start a new change, step through artifacts |
| `/opsx:ff` | Fast-forward: create all artifacts at once |
| `/opsx:continue` | Continue working on an existing change |
| `/opsx:apply` | Implement tasks from a change |
| `/opsx:verify` | Verify implementation matches artifacts |
| `/opsx:archive` | Archive a completed change |
---
## What's Next?
Try `/opsx:new` or `/opsx:ff` on something you actually want to build. You've got the rhythm now!
```
---
## Graceful Exit Handling
### User wants to stop mid-way
If the user says they need to stop, want to pause, or seem disengaged:
```
No problem! Your change is saved at `openspec/changes/<name>/`.
To pick up where we left off later:
- `/opsx:continue <name>` - Resume artifact creation
- `/opsx:apply <name>` - Jump to implementation (if tasks exist)
The work won't be lost. Come back whenever you're ready.
```
Exit gracefully without pressure.
### User just wants command reference
If the user says they just want to see the commands or skip the tutorial:
```
## OpenSpec Quick Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems (no code changes) |
| `/opsx:new <name>` | Start a new change, step by step |
| `/opsx:ff <name>` | Fast-forward: all artifacts at once |
| `/opsx:continue <name>` | Continue an existing change |
| `/opsx:apply <name>` | Implement tasks |
| `/opsx:verify <name>` | Verify implementation |
| `/opsx:archive <name>` | Archive when done |
Try `/opsx:new` to start your first change, or `/opsx:ff` if you want to move fast.
```
Exit gracefully.
---
## Guardrails
- **Follow the EXPLAIN → DO → SHOW → PAUSE pattern** at key transitions (after explore, after proposal draft, after tasks, after archive)
- **Keep narration light** during implementation—teach without lecturing
- **Don't skip phases** even if the change is small—the goal is teaching the workflow
- **Pause for acknowledgment** at marked points, but don't over-pause
- **Handle exits gracefully**—never pressure the user to continue
- **Use real codebase tasks**—don't simulate or use fake examples
- **Adjust scope gently**—guide toward smaller tasks but respect user choice


@@ -0,0 +1,138 @@
---
name: openspec-sync-specs
description: Sync delta specs from a change to main specs. Use when the user wants to update main specs with changes from a delta spec, without archiving the change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Sync delta specs from a change to main specs.
This is an **agent-driven** operation - you will read delta specs and directly edit main specs to apply the changes. This allows intelligent merging (e.g., adding a scenario without copying the entire requirement).
**Input**: Optionally specify a change name. If omitted, check whether it can be inferred from conversation context. If it is vague or ambiguous, you MUST prompt the user with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have delta specs (under `specs/` directory).
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Find delta specs**
Look for delta spec files in `openspec/changes/<name>/specs/*/spec.md`.
Each delta spec file contains sections like:
- `## ADDED Requirements` - New requirements to add
- `## MODIFIED Requirements` - Changes to existing requirements
- `## REMOVED Requirements` - Requirements to remove
- `## RENAMED Requirements` - Requirements to rename (FROM:/TO: format)
If no delta specs found, inform user and stop.
3. **For each delta spec, apply changes to main specs**
For each capability with a delta spec at `openspec/changes/<name>/specs/<capability>/spec.md`:
a. **Read the delta spec** to understand the intended changes
b. **Read the main spec** at `openspec/specs/<capability>/spec.md` (may not exist yet)
c. **Apply changes intelligently**:
**ADDED Requirements:**
- If requirement doesn't exist in main spec → add it
- If requirement already exists → update it to match (treat as implicit MODIFIED)
**MODIFIED Requirements:**
- Find the requirement in main spec
- Apply the changes - this can be:
- Adding new scenarios (don't need to copy existing ones)
- Modifying existing scenarios
- Changing the requirement description
- Preserve scenarios/content not mentioned in the delta
**REMOVED Requirements:**
- Remove the entire requirement block from main spec
**RENAMED Requirements:**
- Find the FROM requirement, rename to TO
d. **Create new main spec** if capability doesn't exist yet:
- Create `openspec/specs/<capability>/spec.md`
- Add Purpose section (can be brief, mark as TBD)
- Add Requirements section with the ADDED requirements
4. **Show summary**
After applying all changes, summarize:
- Which capabilities were updated
- What changes were made (requirements added/modified/removed/renamed)
**Delta Spec Format Reference**
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL do something new.
#### Scenario: Basic case
- **WHEN** user does X
- **THEN** system does Y
## MODIFIED Requirements
### Requirement: Existing Feature
#### Scenario: New scenario to add
- **WHEN** user does A
- **THEN** system does B
## REMOVED Requirements
### Requirement: Deprecated Feature
## RENAMED Requirements
- FROM: `### Requirement: Old Name`
- TO: `### Requirement: New Name`
```
**Key Principle: Intelligent Merging**
Unlike programmatic merging, you can apply **partial updates**:
- To add a scenario, just include that scenario under MODIFIED - don't copy existing scenarios
- The delta represents *intent*, not a wholesale replacement
- Use your judgment to merge changes sensibly
**Output On Success**
```
## Specs Synced: <change-name>
Updated main specs:
**<capability-1>**:
- Added requirement: "New Feature"
- Modified requirement: "Existing Feature" (added 1 scenario)
**<capability-2>**:
- Created new spec file
- Added requirement: "Another Feature"
Main specs are now updated. The change remains active - archive when implementation is complete.
```
**Guardrails**
- Read both delta and main specs before making changes
- Preserve existing content not mentioned in delta
- If something is unclear, ask for clarification
- Show what you're changing as you go
- The operation should be idempotent - running twice should give same result

View File

@@ -0,0 +1,168 @@
---
name: openspec-verify-change
description: Verify implementation matches change artifacts. Use when the user wants to validate that implementation is complete, correct, and coherent before archiving.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.2"
---
Verify that an implementation matches the change artifacts (specs, tasks, design).
**Input**: Optionally specify a change name. If omitted, check whether it can be inferred from conversation context. If the name is vague or ambiguous, you MUST prompt the user with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have implementation tasks (tasks artifact exists).
Include the schema used for each change if available.
Mark changes with incomplete tasks as "(In Progress)".
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifacts exist for this change
3. **Get the change directory and load artifacts**
```bash
openspec instructions apply --change "<name>" --json
```
This returns the change directory and context files. Read all available artifacts from `contextFiles`.
4. **Initialize verification report structure**
Create a report structure with three dimensions:
- **Completeness**: Track tasks and spec coverage
- **Correctness**: Track requirement implementation and scenario coverage
- **Coherence**: Track design adherence and pattern consistency
Each dimension can have CRITICAL, WARNING, or SUGGESTION issues.
5. **Verify Completeness**
**Task Completion**:
- If tasks.md exists in contextFiles, read it
- Parse checkboxes: `- [ ]` (incomplete) vs `- [x]` (complete)
- Count complete vs total tasks
- If incomplete tasks exist:
- Add CRITICAL issue for each incomplete task
- Recommendation: "Complete task: <description>" or "Mark as done if already implemented"
**Spec Coverage**:
- If delta specs exist in `openspec/changes/<name>/specs/`:
- Extract all requirements (marked with "### Requirement:")
- For each requirement:
- Search codebase for keywords related to the requirement
- Assess if implementation likely exists
- If requirements appear unimplemented:
- Add CRITICAL issue: "Requirement not found: <requirement name>"
- Recommendation: "Implement requirement X: <description>"
6. **Verify Correctness**
**Requirement Implementation Mapping**:
- For each requirement from delta specs:
- Search codebase for implementation evidence
- If found, note file paths and line ranges
- Assess if implementation matches requirement intent
- If divergence detected:
- Add WARNING: "Implementation may diverge from spec: <details>"
- Recommendation: "Review <file>:<lines> against requirement X"
**Scenario Coverage**:
- For each scenario in delta specs (marked with "#### Scenario:"):
- Check if conditions are handled in code
- Check if tests exist covering the scenario
- If scenario appears uncovered:
- Add WARNING: "Scenario not covered: <scenario name>"
- Recommendation: "Add test or implementation for scenario: <description>"
7. **Verify Coherence**
**Design Adherence**:
- If design.md exists in contextFiles:
- Extract key decisions (look for sections like "Decision:", "Approach:", "Architecture:")
- Verify implementation follows those decisions
- If contradiction detected:
- Add WARNING: "Design decision not followed: <decision>"
- Recommendation: "Update implementation or revise design.md to match reality"
- If no design.md: Skip design adherence check, note "No design.md to verify against"
**Code Pattern Consistency**:
- Review new code for consistency with project patterns
- Check file naming, directory structure, coding style
- If significant deviations found:
- Add SUGGESTION: "Code pattern deviation: <details>"
- Recommendation: "Consider following project pattern: <example>"
8. **Generate Verification Report**
**Summary Scorecard**:
```
## Verification Report: <change-name>
### Summary
| Dimension | Status |
|--------------|------------------|
| Completeness | X/Y tasks, N reqs|
| Correctness | M/N reqs covered |
| Coherence | Followed/Issues |
```
**Issues by Priority**:
1. **CRITICAL** (Must fix before archive):
- Incomplete tasks
- Missing requirement implementations
- Each with specific, actionable recommendation
2. **WARNING** (Should fix):
- Spec/design divergences
- Missing scenario coverage
- Each with specific recommendation
3. **SUGGESTION** (Nice to fix):
- Pattern inconsistencies
- Minor improvements
- Each with specific recommendation
**Final Assessment**:
- If CRITICAL issues: "X critical issue(s) found. Fix before archiving."
- If only warnings: "No critical issues. Y warning(s) to consider. Ready for archive (with noted improvements)."
- If all clear: "All checks passed. Ready for archive."
**Verification Heuristics**
- **Completeness**: Focus on objective checklist items (checkboxes, requirements list)
- **Correctness**: Use keyword search, file path analysis, reasonable inference - don't require perfect certainty
- **Coherence**: Look for glaring inconsistencies, don't nitpick style
- **False Positives**: When uncertain, prefer SUGGESTION over WARNING, WARNING over CRITICAL
- **Actionability**: Every issue must have a specific recommendation with file/line references where applicable
**Graceful Degradation**
- If only tasks.md exists: verify task completion only, skip spec/design checks
- If tasks + specs exist: verify completeness and correctness, skip design
- If full artifacts: verify all three dimensions
- Always note which checks were skipped and why
**Output Format**
Use clear markdown with:
- Table for summary scorecard
- Grouped lists for issues (CRITICAL/WARNING/SUGGESTION)
- Code references in format: `file.ts:123`
- Specific, actionable recommendations
- No vague suggestions like "consider reviewing"

View File

@@ -28,7 +28,7 @@ func generateOpenAPIDocs(outputPath string, logger *zap.Logger) {
AdminAuth: admin.NewAuthHandler(nil, nil),
H5Auth: h5.NewAuthHandler(nil, nil),
Account: admin.NewAccountHandler(nil),
Role: admin.NewRoleHandler(nil),
Role: admin.NewRoleHandler(nil, nil),
Permission: admin.NewPermissionHandler(nil),
Shop: admin.NewShopHandler(nil),
ShopAccount: admin.NewShopAccountHandler(nil),
@@ -37,6 +37,8 @@ func generateOpenAPIDocs(outputPath string, logger *zap.Logger) {
CommissionWithdrawalSetting: admin.NewCommissionWithdrawalSettingHandler(nil),
Enterprise: admin.NewEnterpriseHandler(nil),
EnterpriseCard: admin.NewEnterpriseCardHandler(nil),
EnterpriseDevice: admin.NewEnterpriseDeviceHandler(nil),
EnterpriseDeviceH5: h5.NewEnterpriseDeviceHandler(nil),
Authorization: admin.NewAuthorizationHandler(nil),
CustomerAccount: admin.NewCustomerAccountHandler(nil),
MyCommission: admin.NewMyCommissionHandler(nil),

View File

@@ -37,7 +37,7 @@ func generateAdminDocs(outputPath string) error {
AdminAuth: admin.NewAuthHandler(nil, nil),
H5Auth: h5.NewAuthHandler(nil, nil),
Account: admin.NewAccountHandler(nil),
Role: admin.NewRoleHandler(nil),
Role: admin.NewRoleHandler(nil, nil),
Permission: admin.NewPermissionHandler(nil),
Shop: admin.NewShopHandler(nil),
ShopAccount: admin.NewShopAccountHandler(nil),
@@ -46,6 +46,8 @@ func generateAdminDocs(outputPath string) error {
CommissionWithdrawalSetting: admin.NewCommissionWithdrawalSettingHandler(nil),
Enterprise: admin.NewEnterpriseHandler(nil),
EnterpriseCard: admin.NewEnterpriseCardHandler(nil),
EnterpriseDevice: admin.NewEnterpriseDeviceHandler(nil),
EnterpriseDeviceH5: h5.NewEnterpriseDeviceHandler(nil),
Authorization: admin.NewAuthorizationHandler(nil),
CustomerAccount: admin.NewCustomerAccountHandler(nil),
MyCommission: admin.NewMyCommissionHandler(nil),

File diff suppressed because it is too large.

View File

@@ -0,0 +1,224 @@
# Enterprise Device Authorization: Implementation Summary
## Overview
Implements enterprise device authorization, replacing the previous "device bundle" mechanism. Devices are now authorized as a unit: once a device is authorized, every card bound to it is automatically authorized to the enterprise.
## Core Changes
### 1. Database Layer
**New table: `tb_enterprise_device_authorization`**
- Primary table for device authorizations
- Key columns: enterprise_id, device_id, authorized_by, authorized_at, revoked_at
- Uniqueness constraint: a device can be authorized to only one enterprise at a time (enforced via a partial unique index)
**Modified table: `tb_enterprise_card_authorization`**
- New column: `device_auth_id` (NULLABLE)
- Purpose: indicates whether a card was authorized through a device (non-NULL) or individually (NULL)
- Relation: references `tb_enterprise_device_authorization.id`
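The invariant behind the partial unique index — any number of revoked rows per device, but at most one active row — can be sketched in memory. Names here follow the summary but are illustrative, not the actual schema or store API.

```go
package main

import "fmt"

// DeviceAuthRow is a simplified view of tb_enterprise_device_authorization.
type DeviceAuthRow struct {
	DeviceID uint
	Revoked  bool // true when revoked_at IS NOT NULL
}

// HasActiveAuth reports whether a device already holds an active
// authorization, i.e. whether inserting another active row would
// violate the partial unique index.
func HasActiveAuth(rows []DeviceAuthRow, deviceID uint) bool {
	for _, r := range rows {
		if r.DeviceID == deviceID && !r.Revoked {
			return true
		}
	}
	return false
}

func main() {
	rows := []DeviceAuthRow{
		{DeviceID: 1, Revoked: true},  // historical row, allowed to coexist
		{DeviceID: 1, Revoked: false}, // the single active authorization
	}
	fmt.Println(HasActiveAuth(rows, 1)) // true
	fmt.Println(HasActiveAuth(rows, 2)) // false
}
```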
### 2. Model Layer
**New models**
- `model.EnterpriseDeviceAuthorization`: device authorization model
**Modified models**
- `model.EnterpriseCardAuthorization`: added the DeviceAuthID field
### 3. DTO Layer
**New DTOs (`dto/enterprise_device_authorization_dto.go`)**
- AllocateDevicesReq/Resp: authorize devices
- RecallDevicesReq/Resp: revoke device authorizations
- EnterpriseDeviceListReq/Resp: device list
- EnterpriseDeviceDetailResp: device detail (including bound cards)
- DeviceCardOperationReq/Resp: card operations (suspend/resume)
**Deprecated DTOs (`dto/enterprise_card_authorization_dto.go`)**
- DeviceBundle, DeviceBundleCard, and AllocatedDevice are marked Deprecated
### 4. Store Layer
**New store**
- `postgres.EnterpriseDeviceAuthorizationStore`
  - Create/BatchCreate: create authorizations
  - GetByID/GetByDeviceID/GetByEnterpriseID: query authorizations
  - ListByEnterprise: paginated query
  - RevokeByIDs: revoke authorizations
  - GetActiveAuthsByDeviceIDs: batch-check authorization status
**Modified store**
- `postgres.EnterpriseCardAuthorizationStore`
  - Added RevokeByDeviceAuthID(): cascade-revoke card authorizations
### 5. Service Layer
**New service (`service/enterprise_device/service.go`)**
- AllocateDevices(): authorize devices to an enterprise
  - Validates device status (must be "distributed")
  - Validates device ownership
  - Creates the device authorization and the bound-card authorizations in a single transaction
- RecallDevices(): revoke device authorizations
  - Revokes the device authorization
  - Cascade-revokes all bound-card authorizations
- ListDevices(): device list for the admin backend
- ListDevicesForEnterprise(): device list for H5 enterprise users
- GetDeviceDetail(): device detail (including bound cards)
- SuspendCard/ResumeCard(): H5 suspend/resume
**Breaking change (modified service)**
- `service/enterprise_card/service.go`
  - AllocateCardsPreview(): DeviceBundle logic removed; cards bound to a device are rejected outright
  - AllocateCards(): ConfirmDeviceBundles parameter removed; only standalone cards can be authorized
### 6. Handler Layer
**New admin handler (`handler/admin/enterprise_device.go`)**
- AllocateDevices: `POST /api/admin/enterprises/:id/allocate-devices`
- RecallDevices: `POST /api/admin/enterprises/:id/recall-devices`
- ListDevices: `GET /api/admin/enterprises/:id/devices`
**New H5 handler (`handler/h5/enterprise_device.go`)**
- ListDevices: `GET /api/h5/enterprise/devices`
- GetDeviceDetail: `GET /api/h5/enterprise/devices/:device_id`
- SuspendCard: `POST /api/h5/enterprise/devices/:device_id/cards/:card_id/suspend`
- ResumeCard: `POST /api/h5/enterprise/devices/:device_id/cards/:card_id/resume`
### 7. Error Codes
New error codes (`pkg/errors/codes.go`):
- `CodeDeviceAlreadyAuthorized` (1083): device is already authorized to this enterprise
- `CodeDeviceNotAuthorized` (1084): device is not authorized to this enterprise
- `CodeDeviceAuthorizedToOther` (1085): device is already authorized to another enterprise
- `CodeCannotAuthorizeOthersDevice` (1086): no permission to operate on another owner's device
## Business Rules
### Authorization Rules
1. **Device status check**: only devices with status "distributed" (status=2) can be authorized
2. **Ownership validation**: agent users can only authorize devices belonging to their own shops
3. **Uniqueness constraint**: a device can be authorized to only one enterprise at a time
4. **Automatic cascade**: authorizing a device automatically authorizes all of its bound cards
### Revocation Rules
1. **Cascade revocation**: revoking a device authorization automatically revokes all bound-card authorizations
2. **Soft delete**: implemented by setting the revoked_at timestamp, preserving history
### H5 Operation Rules
1. **Device detail**: only devices authorized to the current enterprise are visible
2. **Suspend/resume**:
   - The card must belong to an authorized device
   - The card must have been authorized via a device authorization (device_auth_id is non-NULL)
   - Only devices of the current enterprise can be operated on
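A minimal sketch of the checks the H5 suspend/resume rules imply, assuming a flattened view of the authorization records (the types and error strings below are illustrative, not the service's actual API):

```go
package main

import (
	"errors"
	"fmt"
)

// CardAuthView is a hypothetical joined view of a card authorization
// and the device it is bound to.
type CardAuthView struct {
	CardID       uint
	DeviceID     uint
	EnterpriseID uint
	DeviceAuthID *uint // nil means the card was authorized individually
	Revoked      bool
}

// ValidateCardOperation enforces the three H5 rules: the device must be
// authorized to the caller's enterprise, the card must belong to that
// device, and the card must have come through a device authorization.
func ValidateCardOperation(a CardAuthView, enterpriseID, deviceID uint) error {
	if a.Revoked || a.EnterpriseID != enterpriseID {
		return errors.New("device not authorized to this enterprise") // cf. CodeDeviceNotAuthorized (1084)
	}
	if a.DeviceID != deviceID {
		return errors.New("card does not belong to this device")
	}
	if a.DeviceAuthID == nil {
		return errors.New("card was authorized individually, not via device")
	}
	return nil
}

func main() {
	devAuth := uint(7)
	ok := CardAuthView{CardID: 1, DeviceID: 5, EnterpriseID: 3, DeviceAuthID: &devAuth}
	fmt.Println(ValidateCardOperation(ok, 3, 5)) // <nil>
	bad := CardAuthView{CardID: 1, DeviceID: 5, EnterpriseID: 9, DeviceAuthID: &devAuth}
	fmt.Println(ValidateCardOperation(bad, 3, 5) != nil) // true
}
```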
## Data Permissions
- **Admin backend**: filtered automatically by user type (SuperAdmin/Platform see everything; Agents see only their own shops and sub-shops)
- **H5 enterprise users**: automatically scoped to the current enterprise's data
## API Routes
### Admin API
```
POST /api/admin/enterprises/:id/allocate-devices # authorize devices
POST /api/admin/enterprises/:id/recall-devices   # revoke authorizations
GET  /api/admin/enterprises/:id/devices          # device list
```
### H5 Enterprise API
```
GET  /api/h5/enterprise/devices                                    # device list
GET  /api/h5/enterprise/devices/:device_id                         # device detail
POST /api/h5/enterprise/devices/:device_id/cards/:card_id/suspend  # suspend
POST /api/h5/enterprise/devices/:device_id/cards/:card_id/resume   # resume
```
## Migration Notes
### Database Migrations
Migration files created:
- `migrations/000031_add_enterprise_device_authorization.up.sql`
- `migrations/000032_add_device_auth_id_to_enterprise_card_authorization.up.sql`
Both migrations have been executed and verified.
### Breaking Changes
**Enterprise card authorization API behavior changes:**
1. `POST /api/admin/enterprises/:id/allocate-cards`
   - No longer accepts the `confirm_device_bundles` parameter
   - Cards bound to a device now fail outright with `"该卡已绑定设备,请使用设备授权功能"` ("this card is bound to a device; use the device authorization feature")
2. **Frontend adjustments required**
   - Remove DeviceBundle-related UI and logic
   - Add an entry point for device authorization
   - Handle the "card is bound to a device" error in the card authorization flow
## Test Status
### Completed
- ✅ Database migrations verified
- ✅ Code compiles
- ✅ LSP diagnostics pass
### Pending (low priority)
- ⏳ Store-layer unit tests
- ⏳ Service-layer unit tests
- ⏳ Update enterprise_card service tests (adapt to the breaking change)
- ⏳ Integration tests
## File Inventory
### New Files
- `migrations/000031_add_enterprise_device_authorization.up.sql`
- `migrations/000032_add_device_auth_id_to_enterprise_card_authorization.up.sql`
- `internal/model/enterprise_device_authorization.go`
- `internal/model/dto/enterprise_device_authorization_dto.go`
- `internal/store/postgres/enterprise_device_authorization_store.go`
- `internal/service/enterprise_device/service.go`
- `internal/handler/admin/enterprise_device.go`
- `internal/handler/h5/enterprise_device.go`
- `internal/routes/enterprise_device.go`
- `internal/routes/h5_enterprise_device.go`
### Modified Files
- `internal/model/enterprise_card_authorization.go` (added the DeviceAuthID field)
- `internal/model/dto/enterprise_card_authorization_dto.go` (deprecated DeviceBundle)
- `internal/store/postgres/enterprise_card_authorization_store.go` (added RevokeByDeviceAuthID)
- `internal/service/enterprise_card/service.go` (removed DeviceBundle logic)
- `internal/bootstrap/stores.go` (registered the new store)
- `internal/bootstrap/services.go` (registered the new service)
- `internal/bootstrap/handlers.go` (registered the new handlers)
- `internal/bootstrap/types.go` (added handler fields)
- `internal/routes/admin.go` (registered admin routes)
- `internal/routes/h5.go` (registered H5 routes)
- `pkg/errors/codes.go` (added error codes)
## Follow-up Work
### Required
1. **Frontend adaptation**
   - Remove device-bundle UI
   - Add a device authorization management screen
   - Handle the new error codes
2. **Documentation updates**
   - Regenerate the API docs (run the doc generator)
   - Update the user manual
### Optional
1. **More tests**: add unit and integration tests as needed
2. **Performance tuning**: optimize query logic if performance issues arise
3. **Feature extensions**: add batch endpoints if bulk operations need optimizing
## Summary
Enterprise device authorization is fully implemented, including:
- ✅ Database schema changes
- ✅ The complete four-layer implementation (Model/Store/Service/Handler)
- ✅ Admin and H5 APIs
- ✅ Error handling and data permissions
- ✅ Transaction guarantees and cascade operations
The feature compiles and is ready to deploy for testing. The frontend must be adapted to support the new authorization flow.

View File

@@ -13,7 +13,7 @@ func initHandlers(svc *services, deps *Dependencies) *Handlers {
return &Handlers{
Account: admin.NewAccountHandler(svc.Account),
Role: admin.NewRoleHandler(svc.Role),
Role: admin.NewRoleHandler(svc.Role, validate),
Permission: admin.NewPermissionHandler(svc.Permission),
PersonalCustomer: app.NewPersonalCustomerHandler(svc.PersonalCustomer, deps.Logger),
Shop: admin.NewShopHandler(svc.Shop),
@@ -25,6 +25,8 @@ func initHandlers(svc *services, deps *Dependencies) *Handlers {
CommissionWithdrawalSetting: admin.NewCommissionWithdrawalSettingHandler(svc.CommissionWithdrawalSetting),
Enterprise: admin.NewEnterpriseHandler(svc.Enterprise),
EnterpriseCard: admin.NewEnterpriseCardHandler(svc.EnterpriseCard),
EnterpriseDevice: admin.NewEnterpriseDeviceHandler(svc.EnterpriseDevice),
EnterpriseDeviceH5: h5.NewEnterpriseDeviceHandler(svc.EnterpriseDevice),
Authorization: admin.NewAuthorizationHandler(svc.Authorization),
CustomerAccount: admin.NewCustomerAccountHandler(svc.CustomerAccount),
MyCommission: admin.NewMyCommissionHandler(svc.MyCommission),

View File

@@ -14,6 +14,7 @@ import (
deviceImportSvc "github.com/break/junhong_cmp_fiber/internal/service/device_import"
enterpriseSvc "github.com/break/junhong_cmp_fiber/internal/service/enterprise"
enterpriseCardSvc "github.com/break/junhong_cmp_fiber/internal/service/enterprise_card"
enterpriseDeviceSvc "github.com/break/junhong_cmp_fiber/internal/service/enterprise_device"
iotCardSvc "github.com/break/junhong_cmp_fiber/internal/service/iot_card"
iotCardImportSvc "github.com/break/junhong_cmp_fiber/internal/service/iot_card_import"
myCommissionSvc "github.com/break/junhong_cmp_fiber/internal/service/my_commission"
@@ -47,6 +48,7 @@ type services struct {
CommissionCalculation *commissionCalculationSvc.Service
Enterprise *enterpriseSvc.Service
EnterpriseCard *enterpriseCardSvc.Service
EnterpriseDevice *enterpriseDeviceSvc.Service
Authorization *enterpriseCardSvc.AuthorizationService
CustomerAccount *customerAccountSvc.Service
MyCommission *myCommissionSvc.Service
@@ -99,6 +101,7 @@ func initServices(s *stores, deps *Dependencies) *services {
),
Enterprise: enterpriseSvc.New(deps.DB, s.Enterprise, s.Shop, s.Account),
EnterpriseCard: enterpriseCardSvc.New(deps.DB, s.Enterprise, s.EnterpriseCardAuthorization),
EnterpriseDevice: enterpriseDeviceSvc.New(deps.DB, s.Enterprise, s.Device, s.DeviceSimBinding, s.EnterpriseDeviceAuthorization, s.EnterpriseCardAuthorization, deps.Logger),
Authorization: enterpriseCardSvc.NewAuthorizationService(s.Enterprise, s.IotCard, s.EnterpriseCardAuthorization, deps.Logger),
CustomerAccount: customerAccountSvc.New(deps.DB, s.Account, s.Shop, s.Enterprise),
MyCommission: myCommissionSvc.New(deps.DB, s.Shop, s.Wallet, s.CommissionWithdrawalRequest, s.CommissionWithdrawalSetting, s.CommissionRecord, s.WalletTransaction),

View File

@@ -20,6 +20,7 @@ type stores struct {
CommissionWithdrawalSetting *postgres.CommissionWithdrawalSettingStore
Enterprise *postgres.EnterpriseStore
EnterpriseCardAuthorization *postgres.EnterpriseCardAuthorizationStore
EnterpriseDeviceAuthorization *postgres.EnterpriseDeviceAuthorizationStore
IotCard *postgres.IotCardStore
IotCardImportTask *postgres.IotCardImportTaskStore
Device *postgres.DeviceStore
@@ -57,6 +58,7 @@ func initStores(deps *Dependencies) *stores {
CommissionWithdrawalSetting: postgres.NewCommissionWithdrawalSettingStore(deps.DB, deps.Redis),
Enterprise: postgres.NewEnterpriseStore(deps.DB, deps.Redis),
EnterpriseCardAuthorization: postgres.NewEnterpriseCardAuthorizationStore(deps.DB, deps.Redis),
EnterpriseDeviceAuthorization: postgres.NewEnterpriseDeviceAuthorizationStore(deps.DB, deps.Redis),
IotCard: postgres.NewIotCardStore(deps.DB, deps.Redis),
IotCardImportTask: postgres.NewIotCardImportTaskStore(deps.DB, deps.Redis),
Device: postgres.NewDeviceStore(deps.DB, deps.Redis),

View File

@@ -23,6 +23,8 @@ type Handlers struct {
CommissionWithdrawalSetting *admin.CommissionWithdrawalSettingHandler
Enterprise *admin.EnterpriseHandler
EnterpriseCard *admin.EnterpriseCardHandler
EnterpriseDevice *admin.EnterpriseDeviceHandler
EnterpriseDeviceH5 *h5.EnterpriseDeviceHandler
Authorization *admin.AuthorizationHandler
CustomerAccount *admin.CustomerAccountHandler
MyCommission *admin.MyCommissionHandler

View File

@@ -0,0 +1,80 @@
package admin
import (
"strconv"
"github.com/gofiber/fiber/v2"
"github.com/break/junhong_cmp_fiber/internal/model/dto"
enterpriseDeviceService "github.com/break/junhong_cmp_fiber/internal/service/enterprise_device"
"github.com/break/junhong_cmp_fiber/pkg/errors"
"github.com/break/junhong_cmp_fiber/pkg/response"
)
type EnterpriseDeviceHandler struct {
service *enterpriseDeviceService.Service
}
func NewEnterpriseDeviceHandler(service *enterpriseDeviceService.Service) *EnterpriseDeviceHandler {
return &EnterpriseDeviceHandler{service: service}
}
func (h *EnterpriseDeviceHandler) AllocateDevices(c *fiber.Ctx) error {
enterpriseIDStr := c.Params("id")
enterpriseID, err := strconv.ParseUint(enterpriseIDStr, 10, 64)
if err != nil {
return errors.New(errors.CodeInvalidParam, "企业ID格式错误")
}
var req dto.AllocateDevicesReq
if err := c.BodyParser(&req); err != nil {
return errors.New(errors.CodeInvalidParam, "请求参数解析失败")
}
result, err := h.service.AllocateDevices(c.UserContext(), uint(enterpriseID), &req)
if err != nil {
return err
}
return response.Success(c, result)
}
func (h *EnterpriseDeviceHandler) RecallDevices(c *fiber.Ctx) error {
enterpriseIDStr := c.Params("id")
enterpriseID, err := strconv.ParseUint(enterpriseIDStr, 10, 64)
if err != nil {
return errors.New(errors.CodeInvalidParam, "企业ID格式错误")
}
var req dto.RecallDevicesReq
if err := c.BodyParser(&req); err != nil {
return errors.New(errors.CodeInvalidParam, "请求参数解析失败")
}
result, err := h.service.RecallDevices(c.UserContext(), uint(enterpriseID), &req)
if err != nil {
return err
}
return response.Success(c, result)
}
func (h *EnterpriseDeviceHandler) ListDevices(c *fiber.Ctx) error {
enterpriseIDStr := c.Params("id")
enterpriseID, err := strconv.ParseUint(enterpriseIDStr, 10, 64)
if err != nil {
return errors.New(errors.CodeInvalidParam, "企业ID格式错误")
}
var req dto.EnterpriseDeviceListReq
if err := c.QueryParser(&req); err != nil {
return errors.New(errors.CodeInvalidParam, "请求参数解析失败")
}
result, err := h.service.ListDevices(c.UserContext(), uint(enterpriseID), &req)
if err != nil {
return err
}
return response.SuccessWithPagination(c, result.List, result.Total, req.Page, req.PageSize)
}

View File

@@ -3,6 +3,7 @@ package admin
import (
"strconv"
"github.com/go-playground/validator/v10"
"github.com/gofiber/fiber/v2"
"github.com/break/junhong_cmp_fiber/pkg/errors"
@@ -14,12 +15,16 @@ import (
// RoleHandler 角色 Handler
type RoleHandler struct {
service *roleService.Service
service *roleService.Service
validator *validator.Validate
}
// NewRoleHandler 创建角色 Handler
func NewRoleHandler(service *roleService.Service) *RoleHandler {
return &RoleHandler{service: service}
func NewRoleHandler(service *roleService.Service, validator *validator.Validate) *RoleHandler {
return &RoleHandler{
service: service,
validator: validator,
}
}
// Create 创建角色
@@ -30,6 +35,10 @@ func (h *RoleHandler) Create(c *fiber.Ctx) error {
return errors.New(errors.CodeInvalidParam, "请求参数解析失败")
}
if err := h.validator.Struct(&req); err != nil {
return errors.New(errors.CodeInvalidParam, "参数验证失败: "+err.Error())
}
role, err := h.service.Create(c.UserContext(), &req)
if err != nil {
return err
@@ -67,6 +76,10 @@ func (h *RoleHandler) Update(c *fiber.Ctx) error {
return errors.New(errors.CodeInvalidParam, "请求参数解析失败")
}
if err := h.validator.Struct(&req); err != nil {
return errors.New(errors.CodeInvalidParam, "参数验证失败: "+err.Error())
}
role, err := h.service.Update(c.UserContext(), uint(id), &req)
if err != nil {
return err
@@ -119,6 +132,10 @@ func (h *RoleHandler) AssignPermissions(c *fiber.Ctx) error {
return errors.New(errors.CodeInvalidParam, "请求参数解析失败")
}
if err := h.validator.Struct(&req); err != nil {
return errors.New(errors.CodeInvalidParam, "参数验证失败: "+err.Error())
}
rps, err := h.service.AssignPermissions(c.UserContext(), uint(id), req.PermIDs)
if err != nil {
return err
@@ -176,6 +193,10 @@ func (h *RoleHandler) UpdateStatus(c *fiber.Ctx) error {
return errors.New(errors.CodeInvalidParam, "请求参数解析失败")
}
if err := h.validator.Struct(&req); err != nil {
return errors.New(errors.CodeInvalidParam, "参数验证失败: "+err.Error())
}
if err := h.service.UpdateStatus(c.UserContext(), uint(id), req.Status); err != nil {
return err
}

View File

@@ -0,0 +1,107 @@
package h5
import (
"strconv"
"github.com/gofiber/fiber/v2"
"github.com/break/junhong_cmp_fiber/internal/model/dto"
enterpriseDeviceService "github.com/break/junhong_cmp_fiber/internal/service/enterprise_device"
"github.com/break/junhong_cmp_fiber/pkg/errors"
"github.com/break/junhong_cmp_fiber/pkg/response"
)
type EnterpriseDeviceHandler struct {
service *enterpriseDeviceService.Service
}
func NewEnterpriseDeviceHandler(service *enterpriseDeviceService.Service) *EnterpriseDeviceHandler {
return &EnterpriseDeviceHandler{service: service}
}
func (h *EnterpriseDeviceHandler) ListDevices(c *fiber.Ctx) error {
var req dto.H5EnterpriseDeviceListReq
if err := c.QueryParser(&req); err != nil {
return errors.New(errors.CodeInvalidParam, "请求参数解析失败")
}
serviceReq := &dto.EnterpriseDeviceListReq{
Page: req.Page,
PageSize: req.PageSize,
DeviceNo: req.DeviceNo,
}
result, err := h.service.ListDevicesForEnterprise(c.UserContext(), serviceReq)
if err != nil {
return err
}
return response.SuccessWithPagination(c, result.List, result.Total, req.Page, req.PageSize)
}
func (h *EnterpriseDeviceHandler) GetDeviceDetail(c *fiber.Ctx) error {
deviceIDStr := c.Params("device_id")
deviceID, err := strconv.ParseUint(deviceIDStr, 10, 64)
if err != nil {
return errors.New(errors.CodeInvalidParam, "设备ID格式错误")
}
result, err := h.service.GetDeviceDetail(c.UserContext(), uint(deviceID))
if err != nil {
return err
}
return response.Success(c, result)
}
func (h *EnterpriseDeviceHandler) SuspendCard(c *fiber.Ctx) error {
deviceIDStr := c.Params("device_id")
deviceID, err := strconv.ParseUint(deviceIDStr, 10, 64)
if err != nil {
return errors.New(errors.CodeInvalidParam, "设备ID格式错误")
}
cardIDStr := c.Params("card_id")
cardID, err := strconv.ParseUint(cardIDStr, 10, 64)
if err != nil {
return errors.New(errors.CodeInvalidParam, "卡ID格式错误")
}
var req dto.DeviceCardOperationReq
if err := c.BodyParser(&req); err != nil {
return errors.New(errors.CodeInvalidParam, "请求参数解析失败")
}
result, err := h.service.SuspendCard(c.UserContext(), uint(deviceID), uint(cardID), &req)
if err != nil {
return err
}
return response.Success(c, result)
}
func (h *EnterpriseDeviceHandler) ResumeCard(c *fiber.Ctx) error {
deviceIDStr := c.Params("device_id")
deviceID, err := strconv.ParseUint(deviceIDStr, 10, 64)
if err != nil {
return errors.New(errors.CodeInvalidParam, "设备ID格式错误")
}
cardIDStr := c.Params("card_id")
cardID, err := strconv.ParseUint(cardIDStr, 10, 64)
if err != nil {
return errors.New(errors.CodeInvalidParam, "卡ID格式错误")
}
var req dto.DeviceCardOperationReq
if err := c.BodyParser(&req); err != nil {
return errors.New(errors.CodeInvalidParam, "请求参数解析失败")
}
result, err := h.service.ResumeCard(c.UserContext(), uint(deviceID), uint(cardID), &req)
if err != nil {
return err
}
return response.Success(c, result)
}

View File

@@ -13,6 +13,7 @@ type StandaloneCard struct {
StatusName string `json:"status_name" description:"状态名称"`
}
// Deprecated: 已废弃,不再支持通过单卡授权接口授权设备卡,请使用设备授权接口
type DeviceBundle struct {
DeviceID uint `json:"device_id" description:"设备ID"`
DeviceNo string `json:"device_no" description:"设备号"`
@@ -20,12 +21,21 @@ type DeviceBundle struct {
BundleCards []DeviceBundleCard `json:"bundle_cards" description:"连带卡(同设备的其他卡)"`
}
// Deprecated: 已废弃,不再支持通过单卡授权接口授权设备卡,请使用设备授权接口
type DeviceBundleCard struct {
ICCID string `json:"iccid" description:"ICCID"`
IotCardID uint `json:"iot_card_id" description:"卡ID"`
MSISDN string `json:"msisdn" description:"手机号"`
}
// Deprecated: 已废弃,不再支持通过单卡授权接口授权设备卡,请使用设备授权接口
type AllocatedDevice struct {
DeviceID uint `json:"device_id" description:"设备ID"`
DeviceNo string `json:"device_no" description:"设备号"`
CardCount int `json:"card_count" description:"卡数量"`
ICCIDs []string `json:"iccids" description:"卡ICCID列表"`
}
type FailedItem struct {
ICCID string `json:"iccid" description:"ICCID"`
Reason string `json:"reason" description:"失败原因"`
@@ -41,29 +51,20 @@ type AllocatePreviewSummary struct {
type AllocateCardsPreviewResp struct {
StandaloneCards []StandaloneCard `json:"standalone_cards" description:"可直接授权的卡(未绑定设备)"`
DeviceBundles []DeviceBundle `json:"device_bundles" description:"需要整体授权的设备包"`
FailedItems []FailedItem `json:"failed_items" description:"失败的卡"`
Summary AllocatePreviewSummary `json:"summary" description:"汇总信息"`
}
type AllocateCardsReq struct {
ID uint `json:"-" params:"id" path:"id" validate:"required" required:"true" description:"企业ID"`
ICCIDs []string `json:"iccids" validate:"required,min=1,max=1000,dive,required" required:"true" description:"需要授权的 ICCID 列表"`
ConfirmDeviceBundles bool `json:"confirm_device_bundles" description:"确认整体授权设备下所有卡"`
}
type AllocatedDevice struct {
DeviceID uint `json:"device_id" description:"设备ID"`
DeviceNo string `json:"device_no" description:"设备号"`
CardCount int `json:"card_count" description:"卡数量"`
ICCIDs []string `json:"iccids" description:"卡ICCID列表"`
ID uint `json:"-" params:"id" path:"id" validate:"required" required:"true" description:"企业ID"`
ICCIDs []string `json:"iccids" validate:"required,min=1,max=1000,dive,required" required:"true" description:"需要授权的 ICCID 列表"`
Remark string `json:"remark" validate:"max=500" description:"授权备注"`
}
type AllocateCardsResp struct {
SuccessCount int `json:"success_count" description:"成功数量"`
FailCount int `json:"fail_count" description:"失败数量"`
FailedItems []FailedItem `json:"failed_items" description:"失败详情"`
AllocatedDevices []AllocatedDevice `json:"allocated_devices" description:"连带授权的设备列表"`
SuccessCount int `json:"success_count" description:"成功数量"`
FailCount int `json:"fail_count" description:"失败数量"`
FailedItems []FailedItem `json:"failed_items" description:"失败详情"`
}
type RecallCardsReq struct {

View File

@@ -0,0 +1,103 @@
package dto
import "time"
type AllocateDevicesReq struct {
ID uint `json:"-" params:"id" path:"id" validate:"required" required:"true" description:"企业ID"`
DeviceNos []string `json:"device_nos" validate:"required,min=1,max=100" description:"设备号列表最多100个"`
Remark string `json:"remark" validate:"max=500" description:"授权备注"`
}
type AllocateDevicesResp struct {
SuccessCount int `json:"success_count" description:"成功数量"`
FailCount int `json:"fail_count" description:"失败数量"`
FailedItems []FailedDeviceItem `json:"failed_items" description:"失败项列表"`
AuthorizedDevices []AuthorizedDeviceItem `json:"authorized_devices" description:"已授权设备列表"`
}
type FailedDeviceItem struct {
DeviceNo string `json:"device_no" description:"设备号"`
Reason string `json:"reason" description:"失败原因"`
}
type AuthorizedDeviceItem struct {
DeviceID uint `json:"device_id" description:"设备ID"`
DeviceNo string `json:"device_no" description:"设备号"`
CardCount int `json:"card_count" description:"绑定卡数量"`
}
type RecallDevicesReq struct {
ID uint `json:"-" params:"id" path:"id" validate:"required" required:"true" description:"企业ID"`
DeviceNos []string `json:"device_nos" validate:"required,min=1,max=100" description:"设备号列表最多100个"`
}
type RecallDevicesResp struct {
SuccessCount int `json:"success_count" description:"成功数量"`
FailCount int `json:"fail_count" description:"失败数量"`
FailedItems []FailedDeviceItem `json:"failed_items" description:"失败项列表"`
}
type EnterpriseDeviceListReq struct {
ID uint `json:"-" params:"id" path:"id" validate:"required" required:"true" description:"企业ID"`
Page int `json:"page" query:"page" validate:"required,min=1" description:"页码"`
PageSize int `json:"page_size" query:"page_size" validate:"required,min=1,max=100" description:"每页数量"`
DeviceNo string `json:"device_no" query:"device_no" description:"设备号(模糊搜索)"`
}
type H5EnterpriseDeviceListReq struct {
Page int `json:"page" query:"page" validate:"required,min=1" description:"页码"`
PageSize int `json:"page_size" query:"page_size" validate:"required,min=1,max=100" description:"每页数量"`
DeviceNo string `json:"device_no" query:"device_no" description:"设备号(模糊搜索)"`
}
type EnterpriseDeviceListResp struct {
List []EnterpriseDeviceItem `json:"list" description:"设备列表"`
Total int64 `json:"total" description:"总数"`
}
type EnterpriseDeviceItem struct {
DeviceID uint `json:"device_id" description:"设备ID"`
DeviceNo string `json:"device_no" description:"设备号"`
DeviceName string `json:"device_name" description:"设备名称"`
DeviceModel string `json:"device_model" description:"设备型号"`
CardCount int `json:"card_count" description:"绑定卡数量"`
AuthorizedAt time.Time `json:"authorized_at" description:"授权时间"`
}
type EnterpriseDeviceDetailResp struct {
Device EnterpriseDeviceInfo `json:"device" description:"设备信息"`
Cards []DeviceCardInfo `json:"cards" description:"绑定卡列表"`
}
type EnterpriseDeviceInfo struct {
DeviceID uint `json:"device_id" description:"设备ID"`
DeviceNo string `json:"device_no" description:"设备号"`
DeviceName string `json:"device_name" description:"设备名称"`
DeviceModel string `json:"device_model" description:"设备型号"`
DeviceType string `json:"device_type" description:"设备类型"`
AuthorizedAt time.Time `json:"authorized_at" description:"授权时间"`
}
type DeviceCardInfo struct {
CardID uint `json:"card_id" description:"卡ID"`
ICCID string `json:"iccid" description:"ICCID"`
MSISDN string `json:"msisdn" description:"手机号"`
CarrierName string `json:"carrier_name" description:"运营商名称"`
NetworkStatus int `json:"network_status" description:"网络状态0=停机 1=开机"`
NetworkStatusName string `json:"network_status_name" description:"网络状态名称"`
}
type DeviceDetailReq struct {
DeviceID uint `json:"-" params:"device_id" path:"device_id" validate:"required" required:"true" description:"设备ID"`
}
type DeviceCardOperationReq struct {
DeviceID uint `json:"-" params:"device_id" path:"device_id" validate:"required" required:"true" description:"设备ID"`
CardID uint `json:"-" params:"card_id" path:"card_id" validate:"required" required:"true" description:"卡ID"`
Reason string `json:"reason" validate:"max=200" description:"操作原因"`
}
type DeviceCardOperationResp struct {
Success bool `json:"success" description:"操作是否成功"`
Message string `json:"message" description:"操作结果消息"`
}
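
The list requests above all share the same paging constraints (`validate:"required,min=1"` for `page`, `min=1,max=100` for `page_size`). A minimal pure-Go sketch of that rule as an explicit check; `normalizePaging` is a hypothetical helper, not part of this repository:

```go
package main

import "fmt"

// normalizePaging mirrors the validate tags on the list requests:
// page must be >= 1 and page_size must fall within [1, 100].
// It returns an error rather than clamping, matching the
// validator's reject-on-violation behaviour.
func normalizePaging(page, pageSize int) (int, int, error) {
	if page < 1 {
		return 0, 0, fmt.Errorf("page must be >= 1, got %d", page)
	}
	if pageSize < 1 || pageSize > 100 {
		return 0, 0, fmt.Errorf("page_size must be in [1,100], got %d", pageSize)
	}
	return page, pageSize, nil
}

func main() {
	p, ps, err := normalizePaging(2, 50)
	fmt.Println(p, ps, err)
}
```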

View File

@@ -21,6 +21,7 @@ type EnterpriseCardAuthorization struct {
RevokedBy *uint `gorm:"column:revoked_by;comment:回收人账号ID" json:"revoked_by"`
RevokedAt *time.Time `gorm:"column:revoked_at;comment:回收时间" json:"revoked_at"`
Remark string `gorm:"column:remark;type:varchar(500);default:'';comment:授权备注" json:"remark"`
DeviceAuthID *uint `gorm:"column:device_auth_id;comment:关联的设备授权ID" json:"device_auth_id"`
}
func (EnterpriseCardAuthorization) TableName() string {

View File

@@ -0,0 +1,26 @@
package model
import (
"time"
"gorm.io/gorm"
)
type EnterpriseDeviceAuthorization struct {
ID uint `gorm:"column:id;primaryKey;autoIncrement" json:"id"`
CreatedAt time.Time `gorm:"column:created_at" json:"created_at"`
UpdatedAt time.Time `gorm:"column:updated_at" json:"updated_at"`
DeletedAt gorm.DeletedAt `gorm:"column:deleted_at;index" json:"deleted_at,omitempty"`
EnterpriseID uint `gorm:"column:enterprise_id;not null;comment:被授权企业ID" json:"enterprise_id"`
DeviceID uint `gorm:"column:device_id;not null;comment:被授权设备ID" json:"device_id"`
AuthorizedBy uint `gorm:"column:authorized_by;not null;comment:授权人账号ID" json:"authorized_by"`
AuthorizedAt time.Time `gorm:"column:authorized_at;not null;default:CURRENT_TIMESTAMP;comment:授权时间" json:"authorized_at"`
AuthorizerType int `gorm:"column:authorizer_type;not null;comment:授权人类型2=平台用户 3=代理账号" json:"authorizer_type"`
RevokedBy *uint `gorm:"column:revoked_by;comment:回收人账号ID" json:"revoked_by"`
RevokedAt *time.Time `gorm:"column:revoked_at;comment:回收时间" json:"revoked_at"`
Remark string `gorm:"column:remark;type:varchar(500);default:'';comment:授权备注" json:"remark"`
}
func (EnterpriseDeviceAuthorization) TableName() string {
return "tb_enterprise_device_authorization"
}

View File

@@ -46,6 +46,9 @@ func RegisterAdminRoutes(router fiber.Router, handlers *bootstrap.Handlers, midd
if handlers.EnterpriseCard != nil {
registerEnterpriseCardRoutes(authGroup, handlers.EnterpriseCard, doc, basePath)
}
if handlers.EnterpriseDevice != nil {
registerEnterpriseDeviceRoutes(authGroup, handlers.EnterpriseDevice, doc, basePath)
}
if handlers.Authorization != nil {
registerAuthorizationRoutes(authGroup, handlers.Authorization, doc, basePath)
}

View File

@@ -0,0 +1,38 @@
package routes
import (
"github.com/gofiber/fiber/v2"
"github.com/break/junhong_cmp_fiber/internal/handler/admin"
"github.com/break/junhong_cmp_fiber/internal/model/dto"
"github.com/break/junhong_cmp_fiber/pkg/openapi"
)
func registerEnterpriseDeviceRoutes(router fiber.Router, handler *admin.EnterpriseDeviceHandler, doc *openapi.Generator, basePath string) {
enterprises := router.Group("/enterprises")
groupPath := basePath + "/enterprises"
Register(enterprises, doc, groupPath, "POST", "/:id/allocate-devices", handler.AllocateDevices, RouteSpec{
Summary: "授权设备给企业",
Tags: []string{"企业设备授权"},
Input: new(dto.AllocateDevicesReq),
Output: new(dto.AllocateDevicesResp),
Auth: true,
})
Register(enterprises, doc, groupPath, "POST", "/:id/recall-devices", handler.RecallDevices, RouteSpec{
Summary: "撤销设备授权",
Tags: []string{"企业设备授权"},
Input: new(dto.RecallDevicesReq),
Output: new(dto.RecallDevicesResp),
Auth: true,
})
Register(enterprises, doc, groupPath, "GET", "/:id/devices", handler.ListDevices, RouteSpec{
Summary: "企业设备列表",
Tags: []string{"企业设备授权"},
Input: new(dto.EnterpriseDeviceListReq),
Output: new(dto.EnterpriseDeviceListResp),
Auth: true,
})
}
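
The registrar composes each documented path from `basePath` + group + sub-path, with Fiber-style `:param` segments. A small sketch of that composition and of converting Fiber parameters to OpenAPI placeholders; both helpers are hypothetical illustrations, not the repository's `openapi` package:

```go
package main

import (
	"fmt"
	"strings"
)

// joinRoute composes a documented path the same way the registrar
// above does, e.g. "/api/admin" + "/enterprises" + "/:id/devices".
func joinRoute(basePath, group, route string) string {
	return basePath + group + route
}

// fiberToOpenAPI rewrites Fiber ":param" segments as OpenAPI
// "{param}" segments, the form a spec generator typically stores.
func fiberToOpenAPI(path string) string {
	parts := strings.Split(path, "/")
	for i, p := range parts {
		if strings.HasPrefix(p, ":") {
			parts[i] = "{" + p[1:] + "}"
		}
	}
	return strings.Join(parts, "/")
}

func main() {
	p := joinRoute("/api/admin", "/enterprises", "/:id/allocate-devices")
	fmt.Println(fiberToOpenAPI(p)) // /api/admin/enterprises/{id}/allocate-devices
}
```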

View File

@@ -20,6 +20,9 @@ func RegisterH5Routes(router fiber.Router, handlers *bootstrap.Handlers, middlew
if handlers.H5Order != nil {
registerH5OrderRoutes(authGroup, handlers.H5Order, doc, basePath)
}
if handlers.EnterpriseDeviceH5 != nil {
registerH5EnterpriseDeviceRoutes(authGroup, handlers.EnterpriseDeviceH5, doc, basePath)
}
}
func registerH5AuthRoutes(router fiber.Router, handler interface{}, authMiddleware fiber.Handler, doc *openapi.Generator, basePath string) {

View File

@@ -0,0 +1,46 @@
package routes
import (
"github.com/gofiber/fiber/v2"
"github.com/break/junhong_cmp_fiber/internal/handler/h5"
"github.com/break/junhong_cmp_fiber/internal/model/dto"
"github.com/break/junhong_cmp_fiber/pkg/openapi"
)
func registerH5EnterpriseDeviceRoutes(router fiber.Router, handler *h5.EnterpriseDeviceHandler, doc *openapi.Generator, basePath string) {
devices := router.Group("/devices")
groupPath := basePath + "/devices"
Register(devices, doc, groupPath, "GET", "", handler.ListDevices, RouteSpec{
Summary: "企业设备列表H5",
Tags: []string{"H5-企业设备"},
Input: new(dto.H5EnterpriseDeviceListReq),
Output: new(dto.EnterpriseDeviceListResp),
Auth: true,
})
Register(devices, doc, groupPath, "GET", "/:device_id", handler.GetDeviceDetail, RouteSpec{
Summary: "获取设备详情H5",
Tags: []string{"H5-企业设备"},
Input: new(dto.DeviceDetailReq),
Output: new(dto.EnterpriseDeviceDetailResp),
Auth: true,
})
Register(devices, doc, groupPath, "POST", "/:device_id/cards/:card_id/suspend", handler.SuspendCard, RouteSpec{
Summary: "停机卡H5",
Tags: []string{"H5-企业设备"},
Input: new(dto.DeviceCardOperationReq),
Output: new(dto.DeviceCardOperationResp),
Auth: true,
})
Register(devices, doc, groupPath, "POST", "/:device_id/cards/:card_id/resume", handler.ResumeCard, RouteSpec{
Summary: "复机卡H5",
Tags: []string{"H5-企业设备"},
Input: new(dto.DeviceCardOperationReq),
Output: new(dto.DeviceCardOperationResp),
Auth: true,
})
}

View File

@@ -0,0 +1,161 @@
package enterprise_card
import (
"context"
"testing"
"time"
"github.com/break/junhong_cmp_fiber/internal/model"
"github.com/break/junhong_cmp_fiber/internal/store/postgres"
"github.com/break/junhong_cmp_fiber/pkg/constants"
"github.com/break/junhong_cmp_fiber/pkg/errors"
"github.com/break/junhong_cmp_fiber/pkg/middleware"
"github.com/break/junhong_cmp_fiber/tests/testutils"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"go.uber.org/zap"
)
func TestAuthorizationService_BatchAuthorize_BoundCardRejected(t *testing.T) {
tx := testutils.NewTestTransaction(t)
rdb := testutils.GetTestRedis(t)
testutils.CleanTestRedisKeys(t, rdb)
logger, _ := zap.NewDevelopment()
enterpriseStore := postgres.NewEnterpriseStore(tx, rdb)
iotCardStore := postgres.NewIotCardStore(tx, rdb)
authStore := postgres.NewEnterpriseCardAuthorizationStore(tx, rdb)
service := NewAuthorizationService(enterpriseStore, iotCardStore, authStore, logger)
shop := &model.Shop{
BaseModel: model.BaseModel{Creator: 1, Updater: 1},
ShopName: "测试店铺",
ShopCode: "TEST_SHOP_001",
Level: 1,
Status: 1,
}
require.NoError(t, tx.Create(shop).Error)
enterprise := &model.Enterprise{
BaseModel: model.BaseModel{Creator: 1, Updater: 1},
EnterpriseName: "测试企业",
EnterpriseCode: "TEST_ENT_001",
OwnerShopID: &shop.ID,
Status: 1,
}
require.NoError(t, tx.Create(enterprise).Error)
carrier := &model.Carrier{CarrierName: "测试运营商", CarrierType: "CMCC", Status: 1}
require.NoError(t, tx.Create(carrier).Error)
unboundCard := &model.IotCard{
ICCID: "UNBOUND_CARD_001",
CardType: "normal",
CarrierID: carrier.ID,
Status: 2,
ShopID: &shop.ID,
}
require.NoError(t, tx.Create(unboundCard).Error)
boundCard := &model.IotCard{
ICCID: "BOUND_CARD_001",
CardType: "normal",
CarrierID: carrier.ID,
Status: 2,
ShopID: &shop.ID,
}
require.NoError(t, tx.Create(boundCard).Error)
device := &model.Device{
DeviceNo: "TEST_DEVICE_001",
DeviceName: "测试设备",
Status: 2,
ShopID: &shop.ID,
}
require.NoError(t, tx.Create(device).Error)
now := time.Now()
binding := &model.DeviceSimBinding{
DeviceID: device.ID,
IotCardID: boundCard.ID,
SlotPosition: 1,
BindStatus: 1,
BindTime: &now,
}
require.NoError(t, tx.Create(binding).Error)
ctx := middleware.SetUserContext(context.Background(), &middleware.UserContextInfo{
UserID: 1,
UserType: constants.UserTypePlatform,
ShopID: shop.ID,
})
t.Run("绑定设备的卡被拒绝授权", func(t *testing.T) {
req := BatchAuthorizeRequest{
EnterpriseID: enterprise.ID,
CardIDs: []uint{boundCard.ID},
AuthorizerID: 1,
AuthorizerType: constants.UserTypePlatform,
Remark: "测试授权",
}
err := service.BatchAuthorize(ctx, req)
require.Error(t, err)
appErr, ok := err.(*errors.AppError)
require.True(t, ok, "应返回 AppError 类型")
assert.Equal(t, errors.CodeCannotAuthorizeBoundCard, appErr.Code)
assert.Contains(t, appErr.Message, "已绑定设备")
})
t.Run("未绑定设备的卡可以授权", func(t *testing.T) {
req := BatchAuthorizeRequest{
EnterpriseID: enterprise.ID,
CardIDs: []uint{unboundCard.ID},
AuthorizerID: 1,
AuthorizerType: constants.UserTypePlatform,
Remark: "测试授权",
}
err := service.BatchAuthorize(ctx, req)
require.NoError(t, err)
auths, err := authStore.ListByCards(ctx, []uint{unboundCard.ID}, false)
require.NoError(t, err)
assert.Len(t, auths, 1)
assert.Equal(t, enterprise.ID, auths[0].EnterpriseID)
})
t.Run("混合卡列表中有绑定卡时整体拒绝", func(t *testing.T) {
unboundCard2 := &model.IotCard{
ICCID: "UNBOUND_CARD_002",
CardType: "normal",
CarrierID: carrier.ID,
Status: 2,
ShopID: &shop.ID,
}
require.NoError(t, tx.Create(unboundCard2).Error)
req := BatchAuthorizeRequest{
EnterpriseID: enterprise.ID,
CardIDs: []uint{unboundCard2.ID, boundCard.ID},
AuthorizerID: 1,
AuthorizerType: constants.UserTypePlatform,
Remark: "测试授权",
}
err := service.BatchAuthorize(ctx, req)
require.Error(t, err)
appErr, ok := err.(*errors.AppError)
require.True(t, ok, "应返回 AppError 类型")
assert.Equal(t, errors.CodeCannotAuthorizeBoundCard, appErr.Code)
auths, err := authStore.ListByCards(ctx, []uint{unboundCard2.ID}, false)
require.NoError(t, err)
assert.Len(t, auths, 0, "混合列表中的未绑定卡也不应被授权")
})
}

View File

@@ -65,34 +65,16 @@ func (s *Service) AllocateCardsPreview(ctx context.Context, enterpriseID uint, r
s.db.WithContext(ctx).Where("iot_card_id IN ? AND bind_status = 1", cardIDs).Find(&bindings)
}
cardToDevice := make(map[uint]uint)
deviceCards := make(map[uint][]uint)
cardToDevice := make(map[uint]bool)
for _, binding := range bindings {
cardToDevice[binding.IotCardID] = binding.DeviceID
deviceCards[binding.DeviceID] = append(deviceCards[binding.DeviceID], binding.IotCardID)
}
deviceIDs := make([]uint, 0, len(deviceCards))
for deviceID := range deviceCards {
deviceIDs = append(deviceIDs, deviceID)
}
var devices []model.Device
deviceMap := make(map[uint]*model.Device)
if len(deviceIDs) > 0 {
s.db.WithContext(ctx).Where("id IN ?", deviceIDs).Find(&devices)
for i := range devices {
deviceMap[devices[i].ID] = &devices[i]
}
cardToDevice[binding.IotCardID] = true
}
resp := &dto.AllocateCardsPreviewResp{
StandaloneCards: make([]dto.StandaloneCard, 0),
DeviceBundles: make([]dto.DeviceBundle, 0),
FailedItems: make([]dto.FailedItem, 0),
}
processedDevices := make(map[uint]bool)
for _, iccid := range req.ICCIDs {
card, exists := cardMap[iccid]
if !exists {
@@ -103,67 +85,28 @@ func (s *Service) AllocateCardsPreview(ctx context.Context, enterpriseID uint, r
continue
}
deviceID, hasDevice := cardToDevice[card.ID]
if !hasDevice {
resp.StandaloneCards = append(resp.StandaloneCards, dto.StandaloneCard{
ICCID: card.ICCID,
IotCardID: card.ID,
MSISDN: card.MSISDN,
CarrierID: card.CarrierID,
StatusName: getCardStatusName(card.Status),
if cardToDevice[card.ID] {
resp.FailedItems = append(resp.FailedItems, dto.FailedItem{
ICCID: iccid,
Reason: "该卡已绑定设备,请使用设备授权功能",
})
} else {
if processedDevices[deviceID] {
continue
}
processedDevices[deviceID] = true
device := deviceMap[deviceID]
if device == nil {
continue
}
bundleCardIDs := deviceCards[deviceID]
bundle := dto.DeviceBundle{
DeviceID: deviceID,
DeviceNo: device.DeviceNo,
BundleCards: make([]dto.DeviceBundleCard, 0),
}
for _, bundleCardID := range bundleCardIDs {
bundleCard := cardIDMap[bundleCardID]
if bundleCard == nil {
continue
}
if bundleCard.ID == card.ID {
bundle.TriggerCard = dto.DeviceBundleCard{
ICCID: bundleCard.ICCID,
IotCardID: bundleCard.ID,
MSISDN: bundleCard.MSISDN,
}
} else {
bundle.BundleCards = append(bundle.BundleCards, dto.DeviceBundleCard{
ICCID: bundleCard.ICCID,
IotCardID: bundleCard.ID,
MSISDN: bundleCard.MSISDN,
})
}
}
resp.DeviceBundles = append(resp.DeviceBundles, bundle)
continue
}
}
deviceCardCount := 0
for _, bundle := range resp.DeviceBundles {
deviceCardCount += 1 + len(bundle.BundleCards)
resp.StandaloneCards = append(resp.StandaloneCards, dto.StandaloneCard{
ICCID: card.ICCID,
IotCardID: card.ID,
MSISDN: card.MSISDN,
CarrierID: card.CarrierID,
StatusName: getCardStatusName(card.Status),
})
}
resp.Summary = dto.AllocatePreviewSummary{
StandaloneCardCount: len(resp.StandaloneCards),
DeviceCount: len(resp.DeviceBundles),
DeviceCardCount: deviceCardCount,
TotalCardCount: len(resp.StandaloneCards) + deviceCardCount,
DeviceCount: 0,
DeviceCardCount: 0,
TotalCardCount: len(resp.StandaloneCards),
FailedCount: len(resp.FailedItems),
}
@@ -186,36 +129,15 @@ func (s *Service) AllocateCards(ctx context.Context, enterpriseID uint, req *dto
return nil, err
}
if len(preview.DeviceBundles) > 0 && !req.ConfirmDeviceBundles {
return nil, errors.New(errors.CodeInvalidParam, "存在设备包,请确认整体授权设备下所有卡")
}
resp := &dto.AllocateCardsResp{
FailedItems: preview.FailedItems,
FailCount: len(preview.FailedItems),
AllocatedDevices: make([]dto.AllocatedDevice, 0),
FailedItems: preview.FailedItems,
FailCount: len(preview.FailedItems),
}
cardIDsToAllocate := make([]uint, 0)
for _, card := range preview.StandaloneCards {
cardIDsToAllocate = append(cardIDsToAllocate, card.IotCardID)
}
for _, bundle := range preview.DeviceBundles {
cardIDsToAllocate = append(cardIDsToAllocate, bundle.TriggerCard.IotCardID)
for _, card := range bundle.BundleCards {
cardIDsToAllocate = append(cardIDsToAllocate, card.IotCardID)
}
iccids := []string{bundle.TriggerCard.ICCID}
for _, card := range bundle.BundleCards {
iccids = append(iccids, card.ICCID)
}
resp.AllocatedDevices = append(resp.AllocatedDevices, dto.AllocatedDevice{
DeviceID: bundle.DeviceID,
DeviceNo: bundle.DeviceNo,
CardCount: 1 + len(bundle.BundleCards),
ICCIDs: iccids,
})
}
existingAuths, err := s.enterpriseCardAuthStore.GetActiveAuthsByCardIDs(ctx, enterpriseID, cardIDsToAllocate)
if err != nil {
@@ -235,6 +157,7 @@ func (s *Service) AllocateCards(ctx context.Context, enterpriseID uint, req *dto
AuthorizedBy: currentUserID,
AuthorizedAt: now,
AuthorizerType: userType,
Remark: req.Remark,
})
}

View File

@@ -0,0 +1,621 @@
package enterprise_device
import (
"context"
"fmt"
"time"
"github.com/break/junhong_cmp_fiber/internal/model"
"github.com/break/junhong_cmp_fiber/internal/model/dto"
"github.com/break/junhong_cmp_fiber/internal/store/postgres"
"github.com/break/junhong_cmp_fiber/pkg/constants"
"github.com/break/junhong_cmp_fiber/pkg/errors"
pkggorm "github.com/break/junhong_cmp_fiber/pkg/gorm"
"github.com/break/junhong_cmp_fiber/pkg/middleware"
"go.uber.org/zap"
"gorm.io/gorm"
)
type Service struct {
db *gorm.DB
enterpriseStore *postgres.EnterpriseStore
deviceStore *postgres.DeviceStore
deviceSimBindingStore *postgres.DeviceSimBindingStore
enterpriseDeviceAuthStore *postgres.EnterpriseDeviceAuthorizationStore
enterpriseCardAuthStore *postgres.EnterpriseCardAuthorizationStore
logger *zap.Logger
}
func New(
db *gorm.DB,
enterpriseStore *postgres.EnterpriseStore,
deviceStore *postgres.DeviceStore,
deviceSimBindingStore *postgres.DeviceSimBindingStore,
enterpriseDeviceAuthStore *postgres.EnterpriseDeviceAuthorizationStore,
enterpriseCardAuthStore *postgres.EnterpriseCardAuthorizationStore,
logger *zap.Logger,
) *Service {
return &Service{
db: db,
enterpriseStore: enterpriseStore,
deviceStore: deviceStore,
deviceSimBindingStore: deviceSimBindingStore,
enterpriseDeviceAuthStore: enterpriseDeviceAuthStore,
enterpriseCardAuthStore: enterpriseCardAuthStore,
logger: logger,
}
}
// AllocateDevices authorizes devices to an enterprise
func (s *Service) AllocateDevices(ctx context.Context, enterpriseID uint, req *dto.AllocateDevicesReq) (*dto.AllocateDevicesResp, error) {
currentUserID := middleware.GetUserIDFromContext(ctx)
if currentUserID == 0 {
return nil, errors.New(errors.CodeUnauthorized, "未授权访问")
}
// Verify the enterprise exists
_, err := s.enterpriseStore.GetByID(ctx, enterpriseID)
if err != nil {
return nil, errors.New(errors.CodeEnterpriseNotFound, "企业不存在")
}
// Look up all requested devices
var devices []model.Device
if err := s.db.WithContext(ctx).Where("device_no IN ?", req.DeviceNos).Find(&devices).Error; err != nil {
return nil, fmt.Errorf("查询设备信息失败: %w", err)
}
deviceMap := make(map[string]*model.Device)
deviceIDs := make([]uint, 0, len(devices))
for i := range devices {
deviceMap[devices[i].DeviceNo] = &devices[i]
deviceIDs = append(deviceIDs, devices[i].ID)
}
// Get the current user's shop ID (used to verify device ownership)
currentShopID := middleware.GetShopIDFromContext(ctx)
userType := middleware.GetUserTypeFromContext(ctx)
// Check for devices that are already authorized
existingAuths, err := s.enterpriseDeviceAuthStore.GetActiveAuthsByDeviceIDs(ctx, enterpriseID, deviceIDs)
if err != nil {
return nil, fmt.Errorf("查询已有授权失败: %w", err)
}
resp := &dto.AllocateDevicesResp{
FailedItems: make([]dto.FailedDeviceItem, 0),
AuthorizedDevices: make([]dto.AuthorizedDeviceItem, 0),
}
devicesToAllocate := make([]*model.Device, 0)
for _, deviceNo := range req.DeviceNos {
device, exists := deviceMap[deviceNo]
if !exists {
resp.FailedItems = append(resp.FailedItems, dto.FailedDeviceItem{
DeviceNo: deviceNo,
Reason: "设备不存在",
})
continue
}
// Verify device status (must be "distributed")
if device.Status != 2 {
resp.FailedItems = append(resp.FailedItems, dto.FailedDeviceItem{
DeviceNo: deviceNo,
Reason: "设备状态不正确,必须是已分销状态",
})
continue
}
// Verify device ownership (unless super admin or platform user)
if userType == constants.UserTypeAgent {
if device.ShopID == nil || *device.ShopID != currentShopID {
resp.FailedItems = append(resp.FailedItems, dto.FailedDeviceItem{
DeviceNo: deviceNo,
Reason: "无权操作此设备",
})
continue
}
}
// Check whether it is already authorized
if existingAuths[device.ID] {
resp.FailedItems = append(resp.FailedItems, dto.FailedDeviceItem{
DeviceNo: deviceNo,
Reason: "设备已授权给此企业",
})
continue
}
devicesToAllocate = append(devicesToAllocate, device)
}
// Process the authorization in a transaction
if len(devicesToAllocate) > 0 {
err := s.db.WithContext(ctx).Transaction(func(tx *gorm.DB) error {
now := time.Now()
authorizerType := userType
// 1. Create device authorization records
deviceAuths := make([]*model.EnterpriseDeviceAuthorization, 0, len(devicesToAllocate))
for _, device := range devicesToAllocate {
deviceAuths = append(deviceAuths, &model.EnterpriseDeviceAuthorization{
EnterpriseID: enterpriseID,
DeviceID: device.ID,
AuthorizedBy: currentUserID,
AuthorizedAt: now,
AuthorizerType: authorizerType,
Remark: req.Remark,
})
}
if err := tx.Create(deviceAuths).Error; err != nil {
return fmt.Errorf("创建设备授权记录失败: %w", err)
}
// Build a map from device ID to authorization ID
deviceAuthIDMap := make(map[uint]uint)
for _, auth := range deviceAuths {
deviceAuthIDMap[auth.DeviceID] = auth.ID
}
// 2. Query the cards bound to all devices
deviceIDsToQuery := make([]uint, 0, len(devicesToAllocate))
for _, device := range devicesToAllocate {
deviceIDsToQuery = append(deviceIDsToQuery, device.ID)
}
var bindings []model.DeviceSimBinding
if err := tx.Where("device_id IN ? AND bind_status = 1", deviceIDsToQuery).Find(&bindings).Error; err != nil {
return fmt.Errorf("查询设备绑定卡失败: %w", err)
}
// 3. Create an authorization record for each bound card
if len(bindings) > 0 {
cardAuths := make([]*model.EnterpriseCardAuthorization, 0, len(bindings))
for _, binding := range bindings {
deviceAuthID := deviceAuthIDMap[binding.DeviceID]
cardAuths = append(cardAuths, &model.EnterpriseCardAuthorization{
EnterpriseID: enterpriseID,
CardID: binding.IotCardID,
DeviceAuthID: &deviceAuthID,
AuthorizedBy: currentUserID,
AuthorizedAt: now,
AuthorizerType: authorizerType,
Remark: req.Remark,
})
}
if err := tx.Create(cardAuths).Error; err != nil {
return fmt.Errorf("创建卡授权记录失败: %w", err)
}
}
// 4. Count the bound cards per device
deviceCardCount := make(map[uint]int)
for _, binding := range bindings {
deviceCardCount[binding.DeviceID]++
}
// 5. Build the response
for _, device := range devicesToAllocate {
resp.AuthorizedDevices = append(resp.AuthorizedDevices, dto.AuthorizedDeviceItem{
DeviceID: device.ID,
DeviceNo: device.DeviceNo,
CardCount: deviceCardCount[device.ID],
})
}
return nil
})
if err != nil {
return nil, err
}
}
resp.SuccessCount = len(devicesToAllocate)
resp.FailCount = len(resp.FailedItems)
return resp, nil
}
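
AllocateDevices above screens each requested `device_no` through four ordered checks (exists, status is 2/"distributed", shop ownership for agent users, not already authorized) and splits the input into failed items and devices to allocate. A simplified pure-function sketch of that partition; the types and helper are hypothetical, not the service's real signatures:

```go
package main

import "fmt"

// candidate is a trimmed device view; alreadyAuthed mirrors the
// existingAuths map keyed by device ID.
type candidate struct {
	ID     uint
	No     string
	Status int
	ShopID uint
}

type failure struct {
	DeviceNo string
	Reason   string
}

// partition mirrors the validation order in AllocateDevices:
// missing -> wrong status -> ownership (agents only) -> duplicate auth.
func partition(reqNos []string, byNo map[string]candidate, isAgent bool, shopID uint, alreadyAuthed map[uint]bool) (ok []candidate, failed []failure) {
	for _, no := range reqNos {
		d, exists := byNo[no]
		switch {
		case !exists:
			failed = append(failed, failure{no, "device not found"})
		case d.Status != 2:
			failed = append(failed, failure{no, "device must be in distributed status"})
		case isAgent && d.ShopID != shopID:
			failed = append(failed, failure{no, "no permission for this device"})
		case alreadyAuthed[d.ID]:
			failed = append(failed, failure{no, "already authorized to this enterprise"})
		default:
			ok = append(ok, d)
		}
	}
	return ok, failed
}

func main() {
	byNo := map[string]candidate{
		"D1": {ID: 1, No: "D1", Status: 2, ShopID: 7},
		"D2": {ID: 2, No: "D2", Status: 1, ShopID: 7},
	}
	ok, failed := partition([]string{"D1", "D2", "D3"}, byNo, true, 7, map[uint]bool{})
	fmt.Println(len(ok), len(failed))
}
```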
// RecallDevices revokes device authorizations
func (s *Service) RecallDevices(ctx context.Context, enterpriseID uint, req *dto.RecallDevicesReq) (*dto.RecallDevicesResp, error) {
currentUserID := middleware.GetUserIDFromContext(ctx)
if currentUserID == 0 {
return nil, errors.New(errors.CodeUnauthorized, "未授权访问")
}
// Verify the enterprise exists
_, err := s.enterpriseStore.GetByID(ctx, enterpriseID)
if err != nil {
return nil, errors.New(errors.CodeEnterpriseNotFound, "企业不存在")
}
// Look up the devices
var devices []model.Device
if err := s.db.WithContext(ctx).Where("device_no IN ?", req.DeviceNos).Find(&devices).Error; err != nil {
return nil, fmt.Errorf("查询设备信息失败: %w", err)
}
deviceMap := make(map[string]*model.Device)
deviceIDs := make([]uint, 0, len(devices))
for i := range devices {
deviceMap[devices[i].DeviceNo] = &devices[i]
deviceIDs = append(deviceIDs, devices[i].ID)
}
// Check authorization status
existingAuths, err := s.enterpriseDeviceAuthStore.GetActiveAuthsByDeviceIDs(ctx, enterpriseID, deviceIDs)
if err != nil {
return nil, fmt.Errorf("查询授权状态失败: %w", err)
}
resp := &dto.RecallDevicesResp{
FailedItems: make([]dto.FailedDeviceItem, 0),
}
deviceAuthsToRevoke := make([]uint, 0)
for _, deviceNo := range req.DeviceNos {
device, exists := deviceMap[deviceNo]
if !exists {
resp.FailedItems = append(resp.FailedItems, dto.FailedDeviceItem{
DeviceNo: deviceNo,
Reason: "设备不存在",
})
continue
}
if !existingAuths[device.ID] {
resp.FailedItems = append(resp.FailedItems, dto.FailedDeviceItem{
DeviceNo: deviceNo,
Reason: "设备未授权给此企业",
})
continue
}
// Get the authorization record ID
auth, err := s.enterpriseDeviceAuthStore.GetByDeviceID(ctx, device.ID)
if err != nil || auth.EnterpriseID != enterpriseID {
resp.FailedItems = append(resp.FailedItems, dto.FailedDeviceItem{
DeviceNo: deviceNo,
Reason: "授权记录不存在",
})
continue
}
deviceAuthsToRevoke = append(deviceAuthsToRevoke, auth.ID)
}
// Process the revocation in a transaction
if len(deviceAuthsToRevoke) > 0 {
err := s.db.WithContext(ctx).Transaction(func(tx *gorm.DB) error {
// 1. Revoke the device authorizations
if err := s.enterpriseDeviceAuthStore.RevokeByIDs(ctx, deviceAuthsToRevoke, currentUserID); err != nil {
return fmt.Errorf("撤销设备授权失败: %w", err)
}
// 2. Cascade-revoke the card authorizations
for _, authID := range deviceAuthsToRevoke {
if err := s.enterpriseCardAuthStore.RevokeByDeviceAuthID(ctx, authID, currentUserID); err != nil {
return fmt.Errorf("撤销卡授权失败: %w", err)
}
}
return nil
})
if err != nil {
return nil, err
}
}
resp.SuccessCount = len(deviceAuthsToRevoke)
resp.FailCount = len(resp.FailedItems)
return resp, nil
}
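
RecallDevices above revokes the device grants first and then cascades to the card grants linked through the new `device_auth_id` column, all inside one transaction. A pure in-memory sketch of that cascade over stand-in records (hypothetical types, not the store layer):

```go
package main

import "fmt"

// cardGrant links a card authorization to the device authorization
// that created it, mirroring the device_auth_id column.
type cardGrant struct {
	CardID       uint
	DeviceAuthID uint
	Revoked      bool
}

// cascadeRevoke mirrors the transaction in RecallDevices: revoking a
// device authorization also revokes every card grant it created.
func cascadeRevoke(deviceAuthIDs []uint, grants []cardGrant) []cardGrant {
	toRevoke := make(map[uint]bool, len(deviceAuthIDs))
	for _, id := range deviceAuthIDs {
		toRevoke[id] = true
	}
	for i := range grants {
		if toRevoke[grants[i].DeviceAuthID] {
			grants[i].Revoked = true
		}
	}
	return grants
}

func main() {
	grants := []cardGrant{
		{CardID: 1, DeviceAuthID: 10},
		{CardID: 2, DeviceAuthID: 10},
		{CardID: 3, DeviceAuthID: 11},
	}
	out := cascadeRevoke([]uint{10}, grants)
	fmt.Println(out[0].Revoked, out[1].Revoked, out[2].Revoked)
}
```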
// ListDevices lists an enterprise's authorized devices (admin backend)
func (s *Service) ListDevices(ctx context.Context, enterpriseID uint, req *dto.EnterpriseDeviceListReq) (*dto.EnterpriseDeviceListResp, error) {
// Verify the enterprise exists
_, err := s.enterpriseStore.GetByID(ctx, enterpriseID)
if err != nil {
return nil, errors.New(errors.CodeEnterpriseNotFound, "企业不存在")
}
// Query the authorization records
opts := postgres.DeviceAuthListOptions{
EnterpriseID: &enterpriseID,
IncludeRevoked: false,
Page: req.Page,
PageSize: req.PageSize,
}
auths, total, err := s.enterpriseDeviceAuthStore.ListByEnterprise(ctx, opts)
if err != nil {
return nil, fmt.Errorf("查询授权记录失败: %w", err)
}
if len(auths) == 0 {
return &dto.EnterpriseDeviceListResp{
List: make([]dto.EnterpriseDeviceItem, 0),
Total: 0,
}, nil
}
// Collect the device IDs
deviceIDs := make([]uint, 0, len(auths))
authMap := make(map[uint]*model.EnterpriseDeviceAuthorization)
for _, auth := range auths {
deviceIDs = append(deviceIDs, auth.DeviceID)
authMap[auth.DeviceID] = auth
}
// Query device details
var devices []model.Device
query := s.db.WithContext(ctx).Where("id IN ?", deviceIDs)
if req.DeviceNo != "" {
query = query.Where("device_no LIKE ?", "%"+req.DeviceNo+"%")
}
if err := query.Find(&devices).Error; err != nil {
return nil, fmt.Errorf("查询设备信息失败: %w", err)
}
// Count the bound cards per device
var bindings []model.DeviceSimBinding
if err := s.db.WithContext(ctx).
Where("device_id IN ? AND bind_status = 1", deviceIDs).
Find(&bindings).Error; err != nil {
return nil, fmt.Errorf("查询设备绑定卡失败: %w", err)
}
cardCountMap := make(map[uint]int)
for _, binding := range bindings {
cardCountMap[binding.DeviceID]++
}
// Build the response
items := make([]dto.EnterpriseDeviceItem, 0, len(devices))
for _, device := range devices {
auth := authMap[device.ID]
items = append(items, dto.EnterpriseDeviceItem{
DeviceID: device.ID,
DeviceNo: device.DeviceNo,
DeviceName: device.DeviceName,
DeviceModel: device.DeviceModel,
CardCount: cardCountMap[device.ID],
AuthorizedAt: auth.AuthorizedAt,
})
}
return &dto.EnterpriseDeviceListResp{
List: items,
Total: total,
}, nil
}
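
Both list endpoints derive `CardCount` by grouping active bindings (`bind_status = 1`) by `device_id`. A self-contained sketch of that grouping; the trimmed `binding` type is a hypothetical stand-in for `model.DeviceSimBinding`:

```go
package main

import "fmt"

// binding is a trimmed stand-in for model.DeviceSimBinding.
type binding struct {
	DeviceID  uint
	IotCardID uint
	Bound     bool // bind_status == 1
}

// countCards reproduces the cardCountMap construction used by the
// list endpoints: only active bindings contribute to the count.
func countCards(bindings []binding) map[uint]int {
	counts := make(map[uint]int)
	for _, b := range bindings {
		if b.Bound {
			counts[b.DeviceID]++
		}
	}
	return counts
}

func main() {
	counts := countCards([]binding{
		{DeviceID: 1, IotCardID: 100, Bound: true},
		{DeviceID: 1, IotCardID: 101, Bound: true},
		{DeviceID: 2, IotCardID: 102, Bound: false},
	})
	fmt.Println(counts[1], counts[2])
}
```

Note that a device with no active bindings simply has no entry, and the zero value returned by the map lookup gives the correct count of 0.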
// ListDevicesForEnterprise lists an enterprise's authorized devices (H5, enterprise users)
func (s *Service) ListDevicesForEnterprise(ctx context.Context, req *dto.EnterpriseDeviceListReq) (*dto.EnterpriseDeviceListResp, error) {
enterpriseID := middleware.GetEnterpriseIDFromContext(ctx)
if enterpriseID == 0 {
return nil, errors.New(errors.CodeUnauthorized, "未授权访问")
}
opts := postgres.DeviceAuthListOptions{
EnterpriseID: &enterpriseID,
IncludeRevoked: false,
Page: req.Page,
PageSize: req.PageSize,
}
auths, total, err := s.enterpriseDeviceAuthStore.ListByEnterprise(ctx, opts)
if err != nil {
return nil, fmt.Errorf("查询授权记录失败: %w", err)
}
if len(auths) == 0 {
return &dto.EnterpriseDeviceListResp{
List: make([]dto.EnterpriseDeviceItem, 0),
Total: 0,
}, nil
}
deviceIDs := make([]uint, 0, len(auths))
authMap := make(map[uint]*model.EnterpriseDeviceAuthorization)
for _, auth := range auths {
deviceIDs = append(deviceIDs, auth.DeviceID)
authMap[auth.DeviceID] = auth
}
skipCtx := pkggorm.SkipDataPermission(ctx)
var devices []model.Device
query := s.db.WithContext(skipCtx).Where("id IN ?", deviceIDs)
if req.DeviceNo != "" {
query = query.Where("device_no LIKE ?", "%"+req.DeviceNo+"%")
}
if err := query.Find(&devices).Error; err != nil {
return nil, fmt.Errorf("查询设备信息失败: %w", err)
}
var bindings []model.DeviceSimBinding
if err := s.db.WithContext(skipCtx).
Where("device_id IN ? AND bind_status = 1", deviceIDs).
Find(&bindings).Error; err != nil {
return nil, fmt.Errorf("查询设备绑定卡失败: %w", err)
}
cardCountMap := make(map[uint]int)
for _, binding := range bindings {
cardCountMap[binding.DeviceID]++
}
items := make([]dto.EnterpriseDeviceItem, 0, len(devices))
for _, device := range devices {
auth := authMap[device.ID]
items = append(items, dto.EnterpriseDeviceItem{
DeviceID: device.ID,
DeviceNo: device.DeviceNo,
DeviceName: device.DeviceName,
DeviceModel: device.DeviceModel,
CardCount: cardCountMap[device.ID],
AuthorizedAt: auth.AuthorizedAt,
})
}
return &dto.EnterpriseDeviceListResp{
List: items,
Total: total,
}, nil
}
// GetDeviceDetail returns device detail (H5, enterprise users)
func (s *Service) GetDeviceDetail(ctx context.Context, deviceID uint) (*dto.EnterpriseDeviceDetailResp, error) {
enterpriseID := middleware.GetEnterpriseIDFromContext(ctx)
if enterpriseID == 0 {
return nil, errors.New(errors.CodeUnauthorized, "未授权访问")
}
auth, err := s.enterpriseDeviceAuthStore.GetByDeviceID(ctx, deviceID)
if err != nil || auth.EnterpriseID != enterpriseID || auth.RevokedAt != nil {
return nil, errors.New(errors.CodeDeviceNotAuthorized, "设备未授权给此企业")
}
skipCtx := pkggorm.SkipDataPermission(ctx)
var device model.Device
if err := s.db.WithContext(skipCtx).Where("id = ?", deviceID).First(&device).Error; err != nil {
return nil, fmt.Errorf("查询设备信息失败: %w", err)
}
var bindings []model.DeviceSimBinding
if err := s.db.WithContext(skipCtx).
Where("device_id = ? AND bind_status = 1", deviceID).
Find(&bindings).Error; err != nil {
return nil, fmt.Errorf("查询设备绑定卡失败: %w", err)
}
cardIDs := make([]uint, 0, len(bindings))
for _, binding := range bindings {
cardIDs = append(cardIDs, binding.IotCardID)
}
var cards []model.IotCard
cardInfos := make([]dto.DeviceCardInfo, 0)
if len(cardIDs) > 0 {
if err := s.db.WithContext(skipCtx).Where("id IN ?", cardIDs).Find(&cards).Error; err != nil {
return nil, fmt.Errorf("查询卡信息失败: %w", err)
}
carrierIDs := make([]uint, 0, len(cards))
for _, card := range cards {
carrierIDs = append(carrierIDs, card.CarrierID)
}
var carriers []model.Carrier
carrierMap := make(map[uint]string)
if len(carrierIDs) > 0 {
if err := s.db.WithContext(skipCtx).Where("id IN ?", carrierIDs).Find(&carriers).Error; err == nil {
for _, carrier := range carriers {
carrierMap[carrier.ID] = carrier.CarrierName
}
}
}
for _, card := range cards {
cardInfos = append(cardInfos, dto.DeviceCardInfo{
CardID: card.ID,
ICCID: card.ICCID,
MSISDN: card.MSISDN,
CarrierName: carrierMap[card.CarrierID],
NetworkStatus: card.NetworkStatus,
NetworkStatusName: getNetworkStatusName(card.NetworkStatus),
})
}
}
return &dto.EnterpriseDeviceDetailResp{
Device: dto.EnterpriseDeviceInfo{
DeviceID: device.ID,
DeviceNo: device.DeviceNo,
DeviceName: device.DeviceName,
DeviceModel: device.DeviceModel,
DeviceType: device.DeviceType,
AuthorizedAt: auth.AuthorizedAt,
},
Cards: cardInfos,
}, nil
}
func (s *Service) SuspendCard(ctx context.Context, deviceID, cardID uint, req *dto.DeviceCardOperationReq) (*dto.DeviceCardOperationResp, error) {
if err := s.validateCardOperation(ctx, deviceID, cardID); err != nil {
return nil, err
}
skipCtx := pkggorm.SkipDataPermission(ctx)
if err := s.db.WithContext(skipCtx).Model(&model.IotCard{}).
Where("id = ?", cardID).
Update("network_status", 0).Error; err != nil {
return nil, fmt.Errorf("停机操作失败: %w", err)
}
return &dto.DeviceCardOperationResp{
Success: true,
Message: "停机成功",
}, nil
}
func (s *Service) ResumeCard(ctx context.Context, deviceID, cardID uint, req *dto.DeviceCardOperationReq) (*dto.DeviceCardOperationResp, error) {
if err := s.validateCardOperation(ctx, deviceID, cardID); err != nil {
return nil, err
}
skipCtx := pkggorm.SkipDataPermission(ctx)
if err := s.db.WithContext(skipCtx).Model(&model.IotCard{}).
Where("id = ?", cardID).
Update("network_status", 1).Error; err != nil {
return nil, fmt.Errorf("复机操作失败: %w", err)
}
return &dto.DeviceCardOperationResp{
Success: true,
Message: "复机成功",
}, nil
}
func (s *Service) validateCardOperation(ctx context.Context, deviceID, cardID uint) error {
enterpriseID := middleware.GetEnterpriseIDFromContext(ctx)
if enterpriseID == 0 {
return errors.New(errors.CodeUnauthorized, "未授权访问")
}
auth, err := s.enterpriseDeviceAuthStore.GetByDeviceID(ctx, deviceID)
if err != nil || auth.EnterpriseID != enterpriseID || auth.RevokedAt != nil {
return errors.New(errors.CodeDeviceNotAuthorized, "设备未授权给此企业")
}
skipCtx := pkggorm.SkipDataPermission(ctx)
var binding model.DeviceSimBinding
if err := s.db.WithContext(skipCtx).
Where("device_id = ? AND iot_card_id = ? AND bind_status = 1", deviceID, cardID).
First(&binding).Error; err != nil {
return errors.New(errors.CodeForbidden, "卡不属于该设备")
}
var cardAuth model.EnterpriseCardAuthorization
if err := s.db.WithContext(skipCtx).
Where("enterprise_id = ? AND card_id = ? AND device_auth_id IS NOT NULL AND revoked_at IS NULL", enterpriseID, cardID).
First(&cardAuth).Error; err != nil {
return errors.New(errors.CodeForbidden, "无权操作此卡")
}
return nil
}
func getNetworkStatusName(status int) string {
if status == 1 {
return "开机"
}
return "停机"
}
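
`validateCardOperation` above gates every suspend/resume call behind three checks in order: the device holds an active grant for the caller's enterprise, the card is actively bound to that device, and the card's own grant was created by that device grant (`device_auth_id IS NOT NULL`, not revoked). A pure sketch of that gate chain; `opContext` and `checkCardOperation` are hypothetical names for illustration:

```go
package main

import (
	"errors"
	"fmt"
)

// opContext is a trimmed view of the state validateCardOperation checks.
type opContext struct {
	DeviceAuthorized bool // active device grant for the enterprise
	CardBound        bool // active device_sim_binding row
	CardGrantLinked  bool // card auth with device_auth_id set, not revoked
}

// checkCardOperation mirrors the gate order in validateCardOperation;
// the first failing gate determines the error returned.
func checkCardOperation(c opContext) error {
	if !c.DeviceAuthorized {
		return errors.New("device not authorized to this enterprise")
	}
	if !c.CardBound {
		return errors.New("card does not belong to this device")
	}
	if !c.CardGrantLinked {
		return errors.New("no permission to operate this card")
	}
	return nil
}

func main() {
	fmt.Println(checkCardOperation(opContext{true, true, true}))
	fmt.Println(checkCardOperation(opContext{true, false, true}))
}
```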

View File

@@ -0,0 +1,916 @@
package enterprise_device
import (
"context"
"fmt"
"testing"
"time"
"github.com/break/junhong_cmp_fiber/internal/model"
"github.com/break/junhong_cmp_fiber/internal/model/dto"
"github.com/break/junhong_cmp_fiber/internal/store/postgres"
"github.com/break/junhong_cmp_fiber/pkg/constants"
"github.com/break/junhong_cmp_fiber/pkg/middleware"
"github.com/break/junhong_cmp_fiber/tests/testutils"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"go.uber.org/zap"
)
func uniqueServiceTestPrefix() string {
return fmt.Sprintf("SVC%d", time.Now().UnixNano()%1000000000)
}
func createTestContext(userID uint, userType int, shopID uint, enterpriseID uint) context.Context {
ctx := context.Background()
return middleware.SetUserContext(ctx, &middleware.UserContextInfo{
UserID: userID,
UserType: userType,
ShopID: shopID,
EnterpriseID: enterpriseID,
})
}
type testEnv struct {
service *Service
enterprise *model.Enterprise
shop *model.Shop
devices []*model.Device
cards []*model.IotCard
bindings []*model.DeviceSimBinding
carrier *model.Carrier
}
func setupTestEnv(t *testing.T, prefix string) *testEnv {
tx := testutils.NewTestTransaction(t)
rdb := testutils.GetTestRedis(t)
testutils.CleanTestRedisKeys(t, rdb)
enterpriseStore := postgres.NewEnterpriseStore(tx, rdb)
deviceStore := postgres.NewDeviceStore(tx, rdb)
deviceSimBindingStore := postgres.NewDeviceSimBindingStore(tx, rdb)
enterpriseDeviceAuthStore := postgres.NewEnterpriseDeviceAuthorizationStore(tx, rdb)
enterpriseCardAuthStore := postgres.NewEnterpriseCardAuthorizationStore(tx, rdb)
logger := zap.NewNop()
svc := New(tx, enterpriseStore, deviceStore, deviceSimBindingStore, enterpriseDeviceAuthStore, enterpriseCardAuthStore, logger)
shop := &model.Shop{
ShopName: prefix + "_测试店铺",
ShopCode: prefix,
Level: 1,
Status: 1,
}
require.NoError(t, tx.Create(shop).Error)
enterprise := &model.Enterprise{
EnterpriseName: prefix + "_测试企业",
EnterpriseCode: prefix,
OwnerShopID: &shop.ID,
Status: 1,
}
require.NoError(t, tx.Create(enterprise).Error)
carrier := &model.Carrier{
CarrierName: "测试运营商",
CarrierType: "CMCC",
Status: 1,
}
require.NoError(t, tx.Create(carrier).Error)
devices := make([]*model.Device, 3)
for i := 0; i < 3; i++ {
devices[i] = &model.Device{
DeviceNo: fmt.Sprintf("%s_D%03d", prefix, i+1),
DeviceName: fmt.Sprintf("测试设备%d", i+1),
Status: 2,
ShopID: &shop.ID,
}
require.NoError(t, tx.Create(devices[i]).Error)
}
cards := make([]*model.IotCard, 4)
for i := 0; i < 4; i++ {
cards[i] = &model.IotCard{
ICCID: fmt.Sprintf("%s%04d", prefix, i+1),
CardType: "normal",
CarrierID: carrier.ID,
Status: 2,
ShopID: &shop.ID,
}
require.NoError(t, tx.Create(cards[i]).Error)
}
now := time.Now()
bindings := []*model.DeviceSimBinding{
{DeviceID: devices[0].ID, IotCardID: cards[0].ID, SlotPosition: 1, BindStatus: 1, BindTime: &now},
{DeviceID: devices[0].ID, IotCardID: cards[1].ID, SlotPosition: 2, BindStatus: 1, BindTime: &now},
{DeviceID: devices[1].ID, IotCardID: cards[2].ID, SlotPosition: 1, BindStatus: 1, BindTime: &now},
}
for _, b := range bindings {
require.NoError(t, tx.Create(b).Error)
}
return &testEnv{
service: svc,
enterprise: enterprise,
shop: shop,
devices: devices,
cards: cards,
bindings: bindings,
carrier: carrier,
}
}
func TestService_AllocateDevices(t *testing.T) {
prefix := uniqueServiceTestPrefix()
env := setupTestEnv(t, prefix)
tests := []struct {
name string
ctx context.Context
req *dto.AllocateDevicesReq
wantSuccess int
wantFail int
wantErr bool
}{
{
name: "平台用户成功授权设备",
ctx: createTestContext(1, constants.UserTypePlatform, 0, 0),
req: &dto.AllocateDevicesReq{
DeviceNos: []string{env.devices[0].DeviceNo},
Remark: "测试授权",
},
wantSuccess: 1,
wantFail: 0,
wantErr: false,
},
{
name: "代理用户成功授权自己店铺的设备",
ctx: createTestContext(2, constants.UserTypeAgent, env.shop.ID, 0),
req: &dto.AllocateDevicesReq{
DeviceNos: []string{env.devices[1].DeviceNo},
},
wantSuccess: 1,
wantFail: 0,
wantErr: false,
},
{
name: "设备不存在时记录失败",
ctx: createTestContext(1, constants.UserTypePlatform, 0, 0),
req: &dto.AllocateDevicesReq{
DeviceNos: []string{"NOT_EXIST_DEVICE"},
},
wantSuccess: 0,
wantFail: 1,
wantErr: false,
},
{
name: "未授权用户返回错误",
ctx: context.Background(),
req: &dto.AllocateDevicesReq{
DeviceNos: []string{env.devices[2].DeviceNo},
},
wantSuccess: 0,
wantFail: 0,
wantErr: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
resp, err := env.service.AllocateDevices(tt.ctx, env.enterprise.ID, tt.req)
if tt.wantErr {
require.Error(t, err)
return
}
require.NoError(t, err)
assert.Equal(t, tt.wantSuccess, resp.SuccessCount)
assert.Equal(t, tt.wantFail, resp.FailCount)
})
}
}
func TestService_AllocateDevices_DeviceStatusValidation(t *testing.T) {
tx := testutils.NewTestTransaction(t)
rdb := testutils.GetTestRedis(t)
testutils.CleanTestRedisKeys(t, rdb)
prefix := uniqueServiceTestPrefix()
enterpriseStore := postgres.NewEnterpriseStore(tx, rdb)
deviceStore := postgres.NewDeviceStore(tx, rdb)
deviceSimBindingStore := postgres.NewDeviceSimBindingStore(tx, rdb)
enterpriseDeviceAuthStore := postgres.NewEnterpriseDeviceAuthorizationStore(tx, rdb)
enterpriseCardAuthStore := postgres.NewEnterpriseCardAuthorizationStore(tx, rdb)
logger := zap.NewNop()
svc := New(tx, enterpriseStore, deviceStore, deviceSimBindingStore, enterpriseDeviceAuthStore, enterpriseCardAuthStore, logger)
enterprise := &model.Enterprise{
EnterpriseName: prefix + "_测试企业",
EnterpriseCode: prefix,
Status: 1,
}
require.NoError(t, tx.Create(enterprise).Error)
inStockDevice := &model.Device{
DeviceNo: prefix + "_INSTOCK",
DeviceName: "在库设备",
Status: 1,
}
require.NoError(t, tx.Create(inStockDevice).Error)
ctx := createTestContext(1, constants.UserTypePlatform, 0, 0)
t.Run("设备状态不是已分销时失败", func(t *testing.T) {
req := &dto.AllocateDevicesReq{
DeviceNos: []string{inStockDevice.DeviceNo},
}
resp, err := svc.AllocateDevices(ctx, enterprise.ID, req)
require.NoError(t, err)
assert.Equal(t, 0, resp.SuccessCount)
assert.Equal(t, 1, resp.FailCount)
assert.Contains(t, resp.FailedItems[0].Reason, "状态不正确")
})
}
func TestService_AllocateDevices_AgentPermission(t *testing.T) {
tx := testutils.NewTestTransaction(t)
rdb := testutils.GetTestRedis(t)
testutils.CleanTestRedisKeys(t, rdb)
prefix := uniqueServiceTestPrefix()
enterpriseStore := postgres.NewEnterpriseStore(tx, rdb)
deviceStore := postgres.NewDeviceStore(tx, rdb)
deviceSimBindingStore := postgres.NewDeviceSimBindingStore(tx, rdb)
enterpriseDeviceAuthStore := postgres.NewEnterpriseDeviceAuthorizationStore(tx, rdb)
enterpriseCardAuthStore := postgres.NewEnterpriseCardAuthorizationStore(tx, rdb)
logger := zap.NewNop()
svc := New(tx, enterpriseStore, deviceStore, deviceSimBindingStore, enterpriseDeviceAuthStore, enterpriseCardAuthStore, logger)
shop1 := &model.Shop{ShopName: prefix + "_店铺1", ShopCode: prefix + "1", Level: 1, Status: 1}
require.NoError(t, tx.Create(shop1).Error)
shop2 := &model.Shop{ShopName: prefix + "_店铺2", ShopCode: prefix + "2", Level: 1, Status: 1}
require.NoError(t, tx.Create(shop2).Error)
enterprise := &model.Enterprise{
EnterpriseName: prefix + "_测试企业",
EnterpriseCode: prefix,
Status: 1,
}
require.NoError(t, tx.Create(enterprise).Error)
device := &model.Device{
DeviceNo: prefix + "_D001",
DeviceName: "测试设备",
Status: 2,
ShopID: &shop1.ID,
}
require.NoError(t, tx.Create(device).Error)
t.Run("代理用户无法授权其他店铺的设备", func(t *testing.T) {
ctx := createTestContext(1, constants.UserTypeAgent, shop2.ID, 0)
req := &dto.AllocateDevicesReq{
DeviceNos: []string{device.DeviceNo},
}
resp, err := svc.AllocateDevices(ctx, enterprise.ID, req)
require.NoError(t, err)
assert.Equal(t, 0, resp.SuccessCount)
assert.Equal(t, 1, resp.FailCount)
assert.Contains(t, resp.FailedItems[0].Reason, "无权操作")
})
}
func TestService_AllocateDevices_DuplicateAuthorization(t *testing.T) {
prefix := uniqueServiceTestPrefix()
env := setupTestEnv(t, prefix)
ctx := createTestContext(1, constants.UserTypePlatform, 0, 0)
req := &dto.AllocateDevicesReq{
DeviceNos: []string{env.devices[0].DeviceNo},
}
resp, err := env.service.AllocateDevices(ctx, env.enterprise.ID, req)
require.NoError(t, err)
assert.Equal(t, 1, resp.SuccessCount)
t.Run("重复授权同一设备时失败", func(t *testing.T) {
resp2, err := env.service.AllocateDevices(ctx, env.enterprise.ID, req)
require.NoError(t, err)
assert.Equal(t, 0, resp2.SuccessCount)
assert.Equal(t, 1, resp2.FailCount)
assert.Contains(t, resp2.FailedItems[0].Reason, "已授权")
})
}
func TestService_AllocateDevices_CascadeCardAuthorization(t *testing.T) {
prefix := uniqueServiceTestPrefix()
env := setupTestEnv(t, prefix)
ctx := createTestContext(1, constants.UserTypePlatform, 0, 0)
t.Run("授权设备时级联授权绑定的卡", func(t *testing.T) {
req := &dto.AllocateDevicesReq{
DeviceNos: []string{env.devices[0].DeviceNo},
}
resp, err := env.service.AllocateDevices(ctx, env.enterprise.ID, req)
require.NoError(t, err)
assert.Equal(t, 1, resp.SuccessCount)
assert.Len(t, resp.AuthorizedDevices, 1)
assert.Equal(t, 2, resp.AuthorizedDevices[0].CardCount)
})
}
func TestService_RecallDevices(t *testing.T) {
prefix := uniqueServiceTestPrefix()
env := setupTestEnv(t, prefix)
ctx := createTestContext(1, constants.UserTypePlatform, 0, 0)
allocateReq := &dto.AllocateDevicesReq{
DeviceNos: []string{env.devices[0].DeviceNo, env.devices[1].DeviceNo},
}
_, err := env.service.AllocateDevices(ctx, env.enterprise.ID, allocateReq)
require.NoError(t, err)
tests := []struct {
name string
req *dto.RecallDevicesReq
wantSuccess int
wantFail int
wantErr bool
}{
{
name: "成功撤销授权",
req: &dto.RecallDevicesReq{
DeviceNos: []string{env.devices[0].DeviceNo},
},
wantSuccess: 1,
wantFail: 0,
wantErr: false,
},
{
name: "设备不存在时失败",
req: &dto.RecallDevicesReq{
DeviceNos: []string{"NOT_EXIST"},
},
wantSuccess: 0,
wantFail: 1,
wantErr: false,
},
{
name: "设备未授权时失败",
req: &dto.RecallDevicesReq{
DeviceNos: []string{env.devices[2].DeviceNo},
},
wantSuccess: 0,
wantFail: 1,
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
resp, err := env.service.RecallDevices(ctx, env.enterprise.ID, tt.req)
if tt.wantErr {
require.Error(t, err)
return
}
require.NoError(t, err)
assert.Equal(t, tt.wantSuccess, resp.SuccessCount)
assert.Equal(t, tt.wantFail, resp.FailCount)
})
}
}
func TestService_RecallDevices_Unauthorized(t *testing.T) {
prefix := uniqueServiceTestPrefix()
env := setupTestEnv(t, prefix)
t.Run("未授权用户返回错误", func(t *testing.T) {
req := &dto.RecallDevicesReq{
DeviceNos: []string{env.devices[0].DeviceNo},
}
_, err := env.service.RecallDevices(context.Background(), env.enterprise.ID, req)
require.Error(t, err)
})
}
func TestService_ListDevices(t *testing.T) {
prefix := uniqueServiceTestPrefix()
env := setupTestEnv(t, prefix)
ctx := createTestContext(1, constants.UserTypePlatform, 0, 0)
allocateReq := &dto.AllocateDevicesReq{
DeviceNos: []string{env.devices[0].DeviceNo, env.devices[1].DeviceNo},
}
_, err := env.service.AllocateDevices(ctx, env.enterprise.ID, allocateReq)
require.NoError(t, err)
tests := []struct {
name string
req *dto.EnterpriseDeviceListReq
wantTotal int64
wantLen int
}{
{
name: "获取所有授权设备",
req: &dto.EnterpriseDeviceListReq{Page: 1, PageSize: 10},
wantTotal: 2,
wantLen: 2,
},
{
name: "分页查询",
req: &dto.EnterpriseDeviceListReq{Page: 1, PageSize: 1},
wantTotal: 2,
wantLen: 1,
},
{
name: "按设备号搜索",
req: &dto.EnterpriseDeviceListReq{Page: 1, PageSize: 10, DeviceNo: env.devices[0].DeviceNo},
wantTotal: 2,
wantLen: 1,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
resp, err := env.service.ListDevices(ctx, env.enterprise.ID, tt.req)
require.NoError(t, err)
assert.Equal(t, tt.wantTotal, resp.Total)
assert.Len(t, resp.List, tt.wantLen)
})
}
}
func TestService_ListDevices_EnterpriseNotFound(t *testing.T) {
prefix := uniqueServiceTestPrefix()
env := setupTestEnv(t, prefix)
ctx := createTestContext(1, constants.UserTypePlatform, 0, 0)
t.Run("企业不存在返回错误", func(t *testing.T) {
req := &dto.EnterpriseDeviceListReq{Page: 1, PageSize: 10}
_, err := env.service.ListDevices(ctx, 99999, req)
require.Error(t, err)
})
}
func TestService_ListDevicesForEnterprise(t *testing.T) {
prefix := uniqueServiceTestPrefix()
env := setupTestEnv(t, prefix)
ctx := createTestContext(1, constants.UserTypePlatform, 0, 0)
allocateReq := &dto.AllocateDevicesReq{
DeviceNos: []string{env.devices[0].DeviceNo},
}
_, err := env.service.AllocateDevices(ctx, env.enterprise.ID, allocateReq)
require.NoError(t, err)
t.Run("企业用户获取自己的授权设备", func(t *testing.T) {
enterpriseCtx := createTestContext(1, constants.UserTypeEnterprise, 0, env.enterprise.ID)
req := &dto.EnterpriseDeviceListReq{Page: 1, PageSize: 10}
resp, err := env.service.ListDevicesForEnterprise(enterpriseCtx, req)
require.NoError(t, err)
assert.Equal(t, int64(1), resp.Total)
})
t.Run("未设置企业ID返回错误", func(t *testing.T) {
req := &dto.EnterpriseDeviceListReq{Page: 1, PageSize: 10}
_, err := env.service.ListDevicesForEnterprise(context.Background(), req)
require.Error(t, err)
})
}
func TestService_GetDeviceDetail(t *testing.T) {
prefix := uniqueServiceTestPrefix()
env := setupTestEnv(t, prefix)
ctx := createTestContext(1, constants.UserTypePlatform, 0, 0)
allocateReq := &dto.AllocateDevicesReq{
DeviceNos: []string{env.devices[0].DeviceNo},
}
_, err := env.service.AllocateDevices(ctx, env.enterprise.ID, allocateReq)
require.NoError(t, err)
enterpriseCtx := createTestContext(1, constants.UserTypeEnterprise, 0, env.enterprise.ID)
t.Run("成功获取设备详情", func(t *testing.T) {
resp, err := env.service.GetDeviceDetail(enterpriseCtx, env.devices[0].ID)
require.NoError(t, err)
assert.Equal(t, env.devices[0].ID, resp.Device.DeviceID)
assert.Equal(t, env.devices[0].DeviceNo, resp.Device.DeviceNo)
assert.Len(t, resp.Cards, 2)
})
t.Run("设备未授权时返回错误", func(t *testing.T) {
_, err := env.service.GetDeviceDetail(enterpriseCtx, env.devices[1].ID)
require.Error(t, err)
})
t.Run("未设置企业ID返回错误", func(t *testing.T) {
_, err := env.service.GetDeviceDetail(context.Background(), env.devices[0].ID)
require.Error(t, err)
})
}
func TestService_SuspendCard(t *testing.T) {
prefix := uniqueServiceTestPrefix()
env := setupTestEnv(t, prefix)
ctx := createTestContext(1, constants.UserTypePlatform, 0, 0)
allocateReq := &dto.AllocateDevicesReq{
DeviceNos: []string{env.devices[0].DeviceNo},
}
_, err := env.service.AllocateDevices(ctx, env.enterprise.ID, allocateReq)
require.NoError(t, err)
enterpriseCtx := createTestContext(1, constants.UserTypeEnterprise, 0, env.enterprise.ID)
t.Run("成功停机", func(t *testing.T) {
req := &dto.DeviceCardOperationReq{Reason: "测试停机"}
resp, err := env.service.SuspendCard(enterpriseCtx, env.devices[0].ID, env.cards[0].ID, req)
require.NoError(t, err)
assert.True(t, resp.Success)
})
t.Run("卡不属于设备时返回错误", func(t *testing.T) {
req := &dto.DeviceCardOperationReq{Reason: "测试停机"}
_, err := env.service.SuspendCard(enterpriseCtx, env.devices[0].ID, env.cards[3].ID, req)
require.Error(t, err)
})
t.Run("设备未授权时返回错误", func(t *testing.T) {
req := &dto.DeviceCardOperationReq{Reason: "测试停机"}
_, err := env.service.SuspendCard(enterpriseCtx, env.devices[1].ID, env.cards[2].ID, req)
require.Error(t, err)
})
t.Run("未设置企业ID返回错误", func(t *testing.T) {
req := &dto.DeviceCardOperationReq{Reason: "测试停机"}
_, err := env.service.SuspendCard(context.Background(), env.devices[0].ID, env.cards[0].ID, req)
require.Error(t, err)
})
}
func TestService_ResumeCard(t *testing.T) {
prefix := uniqueServiceTestPrefix()
env := setupTestEnv(t, prefix)
ctx := createTestContext(1, constants.UserTypePlatform, 0, 0)
allocateReq := &dto.AllocateDevicesReq{
DeviceNos: []string{env.devices[0].DeviceNo},
}
_, err := env.service.AllocateDevices(ctx, env.enterprise.ID, allocateReq)
require.NoError(t, err)
enterpriseCtx := createTestContext(1, constants.UserTypeEnterprise, 0, env.enterprise.ID)
t.Run("成功复机", func(t *testing.T) {
req := &dto.DeviceCardOperationReq{Reason: "测试复机"}
resp, err := env.service.ResumeCard(enterpriseCtx, env.devices[0].ID, env.cards[0].ID, req)
require.NoError(t, err)
assert.True(t, resp.Success)
})
t.Run("卡不属于设备时返回错误", func(t *testing.T) {
req := &dto.DeviceCardOperationReq{Reason: "测试复机"}
_, err := env.service.ResumeCard(enterpriseCtx, env.devices[0].ID, env.cards[3].ID, req)
require.Error(t, err)
})
t.Run("设备未授权时返回错误", func(t *testing.T) {
req := &dto.DeviceCardOperationReq{Reason: "测试复机"}
_, err := env.service.ResumeCard(enterpriseCtx, env.devices[1].ID, env.cards[2].ID, req)
require.Error(t, err)
})
t.Run("未设置企业ID返回错误", func(t *testing.T) {
req := &dto.DeviceCardOperationReq{Reason: "测试复机"}
_, err := env.service.ResumeCard(context.Background(), env.devices[0].ID, env.cards[0].ID, req)
require.Error(t, err)
})
}
func TestService_ListDevices_EmptyResult(t *testing.T) {
prefix := uniqueServiceTestPrefix()
env := setupTestEnv(t, prefix)
ctx := createTestContext(1, constants.UserTypePlatform, 0, 0)
t.Run("企业无授权设备时返回空列表", func(t *testing.T) {
req := &dto.EnterpriseDeviceListReq{Page: 1, PageSize: 10}
resp, err := env.service.ListDevices(ctx, env.enterprise.ID, req)
require.NoError(t, err)
assert.Equal(t, int64(0), resp.Total)
assert.Empty(t, resp.List)
})
}
func TestService_GetDeviceDetail_WithCarrierInfo(t *testing.T) {
prefix := uniqueServiceTestPrefix()
env := setupTestEnv(t, prefix)
ctx := createTestContext(1, constants.UserTypePlatform, 0, 0)
allocateReq := &dto.AllocateDevicesReq{
DeviceNos: []string{env.devices[0].DeviceNo},
}
_, err := env.service.AllocateDevices(ctx, env.enterprise.ID, allocateReq)
require.NoError(t, err)
enterpriseCtx := createTestContext(1, constants.UserTypeEnterprise, 0, env.enterprise.ID)
t.Run("获取设备详情包含运营商信息", func(t *testing.T) {
resp, err := env.service.GetDeviceDetail(enterpriseCtx, env.devices[0].ID)
require.NoError(t, err)
assert.Len(t, resp.Cards, 2)
for _, card := range resp.Cards {
assert.NotEmpty(t, card.CarrierName)
}
})
}
func TestService_GetDeviceDetail_NetworkStatus(t *testing.T) {
prefix := uniqueServiceTestPrefix()
env := setupTestEnv(t, prefix)
ctx := createTestContext(1, constants.UserTypePlatform, 0, 0)
allocateReq := &dto.AllocateDevicesReq{
DeviceNos: []string{env.devices[0].DeviceNo},
}
_, err := env.service.AllocateDevices(ctx, env.enterprise.ID, allocateReq)
require.NoError(t, err)
enterpriseCtx := createTestContext(1, constants.UserTypeEnterprise, 0, env.enterprise.ID)
t.Run("网络状态名称正确", func(t *testing.T) {
resp, err := env.service.GetDeviceDetail(enterpriseCtx, env.devices[0].ID)
require.NoError(t, err)
for _, card := range resp.Cards {
if card.NetworkStatus == 1 {
assert.Equal(t, "开机", card.NetworkStatusName)
} else {
assert.Equal(t, "停机", card.NetworkStatusName)
}
}
})
}
func TestService_GetDeviceDetail_DeviceWithoutCards(t *testing.T) {
tx := testutils.NewTestTransaction(t)
rdb := testutils.GetTestRedis(t)
testutils.CleanTestRedisKeys(t, rdb)
prefix := uniqueServiceTestPrefix()
enterpriseStore := postgres.NewEnterpriseStore(tx, rdb)
deviceStore := postgres.NewDeviceStore(tx, rdb)
deviceSimBindingStore := postgres.NewDeviceSimBindingStore(tx, rdb)
enterpriseDeviceAuthStore := postgres.NewEnterpriseDeviceAuthorizationStore(tx, rdb)
enterpriseCardAuthStore := postgres.NewEnterpriseCardAuthorizationStore(tx, rdb)
logger := zap.NewNop()
svc := New(tx, enterpriseStore, deviceStore, deviceSimBindingStore, enterpriseDeviceAuthStore, enterpriseCardAuthStore, logger)
enterprise := &model.Enterprise{
EnterpriseName: prefix + "_测试企业",
EnterpriseCode: prefix,
Status: 1,
}
require.NoError(t, tx.Create(enterprise).Error)
device := &model.Device{
DeviceNo: prefix + "_D001",
DeviceName: "无卡设备",
Status: 2,
}
require.NoError(t, tx.Create(device).Error)
ctx := createTestContext(1, constants.UserTypePlatform, 0, 0)
allocateReq := &dto.AllocateDevicesReq{
DeviceNos: []string{device.DeviceNo},
}
_, err := svc.AllocateDevices(ctx, enterprise.ID, allocateReq)
require.NoError(t, err)
t.Run("设备无绑定卡时返回空卡列表", func(t *testing.T) {
enterpriseCtx := createTestContext(1, constants.UserTypeEnterprise, 0, enterprise.ID)
resp, err := svc.GetDeviceDetail(enterpriseCtx, device.ID)
require.NoError(t, err)
assert.Equal(t, device.ID, resp.Device.DeviceID)
assert.Empty(t, resp.Cards)
})
}
func TestService_RecallDevices_CascadeRevoke(t *testing.T) {
prefix := uniqueServiceTestPrefix()
env := setupTestEnv(t, prefix)
ctx := createTestContext(1, constants.UserTypePlatform, 0, 0)
allocateReq := &dto.AllocateDevicesReq{
DeviceNos: []string{env.devices[0].DeviceNo},
}
resp, err := env.service.AllocateDevices(ctx, env.enterprise.ID, allocateReq)
require.NoError(t, err)
assert.Equal(t, 2, resp.AuthorizedDevices[0].CardCount)
t.Run("撤销设备授权时级联撤销卡授权", func(t *testing.T) {
recallReq := &dto.RecallDevicesReq{
DeviceNos: []string{env.devices[0].DeviceNo},
}
recallResp, err := env.service.RecallDevices(ctx, env.enterprise.ID, recallReq)
require.NoError(t, err)
assert.Equal(t, 1, recallResp.SuccessCount)
})
}
func TestService_GetDeviceDetail_WithNetworkStatusOn(t *testing.T) {
tx := testutils.NewTestTransaction(t)
rdb := testutils.GetTestRedis(t)
testutils.CleanTestRedisKeys(t, rdb)
prefix := uniqueServiceTestPrefix()
enterpriseStore := postgres.NewEnterpriseStore(tx, rdb)
deviceStore := postgres.NewDeviceStore(tx, rdb)
deviceSimBindingStore := postgres.NewDeviceSimBindingStore(tx, rdb)
enterpriseDeviceAuthStore := postgres.NewEnterpriseDeviceAuthorizationStore(tx, rdb)
enterpriseCardAuthStore := postgres.NewEnterpriseCardAuthorizationStore(tx, rdb)
logger := zap.NewNop()
svc := New(tx, enterpriseStore, deviceStore, deviceSimBindingStore, enterpriseDeviceAuthStore, enterpriseCardAuthStore, logger)
enterprise := &model.Enterprise{
EnterpriseName: prefix + "_测试企业",
EnterpriseCode: prefix,
Status: 1,
}
require.NoError(t, tx.Create(enterprise).Error)
carrier := &model.Carrier{
CarrierName: "测试运营商",
CarrierType: "CMCC",
Status: 1,
}
require.NoError(t, tx.Create(carrier).Error)
device := &model.Device{
DeviceNo: prefix + "_D001",
DeviceName: "测试设备",
Status: 2,
}
require.NoError(t, tx.Create(device).Error)
card := &model.IotCard{
ICCID: prefix + "0001",
CardType: "normal",
CarrierID: carrier.ID,
Status: 2,
NetworkStatus: 1,
}
require.NoError(t, tx.Create(card).Error)
now := time.Now()
binding := &model.DeviceSimBinding{
DeviceID: device.ID,
IotCardID: card.ID,
SlotPosition: 1,
BindStatus: 1,
BindTime: &now,
}
require.NoError(t, tx.Create(binding).Error)
ctx := createTestContext(1, constants.UserTypePlatform, 0, 0)
allocateReq := &dto.AllocateDevicesReq{
DeviceNos: []string{device.DeviceNo},
}
_, err := svc.AllocateDevices(ctx, enterprise.ID, allocateReq)
require.NoError(t, err)
t.Run("开机状态卡显示正确", func(t *testing.T) {
enterpriseCtx := createTestContext(1, constants.UserTypeEnterprise, 0, enterprise.ID)
resp, err := svc.GetDeviceDetail(enterpriseCtx, device.ID)
require.NoError(t, err)
assert.Len(t, resp.Cards, 1)
assert.Equal(t, 1, resp.Cards[0].NetworkStatus)
assert.Equal(t, "开机", resp.Cards[0].NetworkStatusName)
})
}
func TestService_EnterpriseNotFound(t *testing.T) {
tx := testutils.NewTestTransaction(t)
rdb := testutils.GetTestRedis(t)
testutils.CleanTestRedisKeys(t, rdb)
enterpriseStore := postgres.NewEnterpriseStore(tx, rdb)
deviceStore := postgres.NewDeviceStore(tx, rdb)
deviceSimBindingStore := postgres.NewDeviceSimBindingStore(tx, rdb)
enterpriseDeviceAuthStore := postgres.NewEnterpriseDeviceAuthorizationStore(tx, rdb)
enterpriseCardAuthStore := postgres.NewEnterpriseCardAuthorizationStore(tx, rdb)
logger := zap.NewNop()
svc := New(tx, enterpriseStore, deviceStore, deviceSimBindingStore, enterpriseDeviceAuthStore, enterpriseCardAuthStore, logger)
ctx := createTestContext(1, constants.UserTypePlatform, 0, 0)
t.Run("AllocateDevices企业不存在", func(t *testing.T) {
req := &dto.AllocateDevicesReq{DeviceNos: []string{"D001"}}
_, err := svc.AllocateDevices(ctx, 99999, req)
require.Error(t, err)
})
t.Run("RecallDevices企业不存在", func(t *testing.T) {
req := &dto.RecallDevicesReq{DeviceNos: []string{"D001"}}
_, err := svc.RecallDevices(ctx, 99999, req)
require.Error(t, err)
})
}
func TestService_ValidateCardOperation_RevokedDeviceAuth(t *testing.T) {
tx := testutils.NewTestTransaction(t)
rdb := testutils.GetTestRedis(t)
testutils.CleanTestRedisKeys(t, rdb)
prefix := uniqueServiceTestPrefix()
enterpriseStore := postgres.NewEnterpriseStore(tx, rdb)
deviceStore := postgres.NewDeviceStore(tx, rdb)
deviceSimBindingStore := postgres.NewDeviceSimBindingStore(tx, rdb)
enterpriseDeviceAuthStore := postgres.NewEnterpriseDeviceAuthorizationStore(tx, rdb)
enterpriseCardAuthStore := postgres.NewEnterpriseCardAuthorizationStore(tx, rdb)
logger := zap.NewNop()
svc := New(tx, enterpriseStore, deviceStore, deviceSimBindingStore, enterpriseDeviceAuthStore, enterpriseCardAuthStore, logger)
enterprise := &model.Enterprise{
EnterpriseName: prefix + "_测试企业",
EnterpriseCode: prefix,
Status: 1,
}
require.NoError(t, tx.Create(enterprise).Error)
carrier := &model.Carrier{
CarrierName: "测试运营商",
CarrierType: "CMCC",
Status: 1,
}
require.NoError(t, tx.Create(carrier).Error)
device := &model.Device{
DeviceNo: prefix + "_D001",
DeviceName: "测试设备",
Status: 2,
}
require.NoError(t, tx.Create(device).Error)
card := &model.IotCard{
ICCID: prefix + "0001",
CardType: "normal",
CarrierID: carrier.ID,
Status: 2,
}
require.NoError(t, tx.Create(card).Error)
now := time.Now()
binding := &model.DeviceSimBinding{
DeviceID: device.ID,
IotCardID: card.ID,
SlotPosition: 1,
BindStatus: 1,
BindTime: &now,
}
require.NoError(t, tx.Create(binding).Error)
deviceAuth := &model.EnterpriseDeviceAuthorization{
EnterpriseID: enterprise.ID,
DeviceID: device.ID,
AuthorizedBy: 1,
AuthorizedAt: now,
AuthorizerType: 2,
RevokedBy: ptrUintED(1),
RevokedAt: &now,
}
require.NoError(t, tx.Create(deviceAuth).Error)
t.Run("已撤销的设备授权无法操作卡", func(t *testing.T) {
enterpriseCtx := createTestContext(1, constants.UserTypeEnterprise, 0, enterprise.ID)
req := &dto.DeviceCardOperationReq{Reason: "测试"}
_, err := svc.SuspendCard(enterpriseCtx, device.ID, card.ID, req)
require.Error(t, err)
})
}
func ptrUintED(v uint) *uint {
return &v
}
@@ -394,3 +394,13 @@ func (s *EnterpriseCardAuthorizationStore) GetByID(ctx context.Context, id uint)
}
return &auth, nil
}
func (s *EnterpriseCardAuthorizationStore) RevokeByDeviceAuthID(ctx context.Context, deviceAuthID uint, revokedBy uint) error {
now := time.Now()
return s.db.WithContext(ctx).Model(&model.EnterpriseCardAuthorization{}).
Where("device_auth_id = ? AND revoked_at IS NULL", deviceAuthID).
Updates(map[string]interface{}{
"revoked_by": revokedBy,
"revoked_at": now,
}).Error
}
@@ -0,0 +1,309 @@
package postgres
import (
"context"
"fmt"
"testing"
"time"
"github.com/break/junhong_cmp_fiber/internal/model"
"github.com/break/junhong_cmp_fiber/tests/testutils"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func uniqueCardAuthTestPrefix() string {
return fmt.Sprintf("ECA%d", time.Now().UnixNano()%1000000000)
}
func TestEnterpriseCardAuthorizationStore_RevokeByDeviceAuthID(t *testing.T) {
tx := testutils.NewTestTransaction(t)
rdb := testutils.GetTestRedis(t)
testutils.CleanTestRedisKeys(t, rdb)
store := NewEnterpriseCardAuthorizationStore(tx, rdb)
ctx := context.Background()
prefix := uniqueCardAuthTestPrefix()
enterprise := &model.Enterprise{
EnterpriseName: prefix + "_测试企业",
EnterpriseCode: prefix,
Status: 1,
}
require.NoError(t, tx.Create(enterprise).Error)
carrier := &model.Carrier{
CarrierName: "测试运营商",
CarrierType: "CMCC",
Status: 1,
}
require.NoError(t, tx.Create(carrier).Error)
cards := []*model.IotCard{
{ICCID: prefix + "0001", CardType: "normal", CarrierID: carrier.ID, Status: 2},
{ICCID: prefix + "0002", CardType: "normal", CarrierID: carrier.ID, Status: 2},
{ICCID: prefix + "0003", CardType: "normal", CarrierID: carrier.ID, Status: 2},
}
for _, c := range cards {
require.NoError(t, tx.Create(c).Error)
}
deviceAuthID := uint(12345)
now := time.Now()
auths := []*model.EnterpriseCardAuthorization{
{EnterpriseID: enterprise.ID, CardID: cards[0].ID, AuthorizedBy: 1, AuthorizedAt: now, AuthorizerType: 2, DeviceAuthID: &deviceAuthID},
{EnterpriseID: enterprise.ID, CardID: cards[1].ID, AuthorizedBy: 1, AuthorizedAt: now, AuthorizerType: 2, DeviceAuthID: &deviceAuthID},
{EnterpriseID: enterprise.ID, CardID: cards[2].ID, AuthorizedBy: 1, AuthorizedAt: now, AuthorizerType: 2, DeviceAuthID: nil},
}
for _, auth := range auths {
require.NoError(t, store.Create(ctx, auth))
}
t.Run("成功撤销指定设备授权ID关联的卡授权", func(t *testing.T) {
revokerID := uint(2)
err := store.RevokeByDeviceAuthID(ctx, deviceAuthID, revokerID)
require.NoError(t, err)
result, err := store.ListByEnterprise(ctx, enterprise.ID, false)
require.NoError(t, err)
assert.Len(t, result, 1)
assert.Equal(t, cards[2].ID, result[0].CardID)
revokedResult, err := store.ListByEnterprise(ctx, enterprise.ID, true)
require.NoError(t, err)
assert.Len(t, revokedResult, 3)
for _, auth := range revokedResult {
if auth.DeviceAuthID != nil && *auth.DeviceAuthID == deviceAuthID {
assert.NotNil(t, auth.RevokedAt)
assert.NotNil(t, auth.RevokedBy)
assert.Equal(t, revokerID, *auth.RevokedBy)
}
}
})
t.Run("设备授权ID不存在时不报错", func(t *testing.T) {
err := store.RevokeByDeviceAuthID(ctx, 99999, uint(1))
require.NoError(t, err)
})
}
func TestEnterpriseCardAuthorizationStore_Create(t *testing.T) {
tx := testutils.NewTestTransaction(t)
rdb := testutils.GetTestRedis(t)
testutils.CleanTestRedisKeys(t, rdb)
store := NewEnterpriseCardAuthorizationStore(tx, rdb)
ctx := context.Background()
prefix := uniqueCardAuthTestPrefix()
enterprise := &model.Enterprise{
EnterpriseName: prefix + "_测试企业",
EnterpriseCode: prefix,
Status: 1,
}
require.NoError(t, tx.Create(enterprise).Error)
carrier := &model.Carrier{
CarrierName: "测试运营商",
CarrierType: "CMCC",
Status: 1,
}
require.NoError(t, tx.Create(carrier).Error)
card := &model.IotCard{
ICCID: prefix + "0001",
CardType: "normal",
CarrierID: carrier.ID,
Status: 2,
}
require.NoError(t, tx.Create(card).Error)
t.Run("成功创建卡授权记录", func(t *testing.T) {
auth := &model.EnterpriseCardAuthorization{
EnterpriseID: enterprise.ID,
CardID: card.ID,
AuthorizedBy: 1,
AuthorizedAt: time.Now(),
AuthorizerType: 2,
Remark: "测试授权",
}
err := store.Create(ctx, auth)
require.NoError(t, err)
assert.NotZero(t, auth.ID)
})
}
func TestEnterpriseCardAuthorizationStore_BatchCreate(t *testing.T) {
tx := testutils.NewTestTransaction(t)
rdb := testutils.GetTestRedis(t)
testutils.CleanTestRedisKeys(t, rdb)
store := NewEnterpriseCardAuthorizationStore(tx, rdb)
ctx := context.Background()
prefix := uniqueCardAuthTestPrefix()
enterprise := &model.Enterprise{
EnterpriseName: prefix + "_测试企业",
EnterpriseCode: prefix,
Status: 1,
}
require.NoError(t, tx.Create(enterprise).Error)
carrier := &model.Carrier{
CarrierName: "测试运营商",
CarrierType: "CMCC",
Status: 1,
}
require.NoError(t, tx.Create(carrier).Error)
cards := []*model.IotCard{
{ICCID: prefix + "0001", CardType: "normal", CarrierID: carrier.ID, Status: 2},
{ICCID: prefix + "0002", CardType: "normal", CarrierID: carrier.ID, Status: 2},
}
for _, c := range cards {
require.NoError(t, tx.Create(c).Error)
}
t.Run("成功批量创建卡授权记录", func(t *testing.T) {
now := time.Now()
auths := []*model.EnterpriseCardAuthorization{
{EnterpriseID: enterprise.ID, CardID: cards[0].ID, AuthorizedBy: 1, AuthorizedAt: now, AuthorizerType: 2},
{EnterpriseID: enterprise.ID, CardID: cards[1].ID, AuthorizedBy: 1, AuthorizedAt: now, AuthorizerType: 2},
}
err := store.BatchCreate(ctx, auths)
require.NoError(t, err)
for _, auth := range auths {
assert.NotZero(t, auth.ID)
}
})
t.Run("空列表不报错", func(t *testing.T) {
err := store.BatchCreate(ctx, []*model.EnterpriseCardAuthorization{})
require.NoError(t, err)
})
}
func TestEnterpriseCardAuthorizationStore_ListByEnterprise(t *testing.T) {
tx := testutils.NewTestTransaction(t)
rdb := testutils.GetTestRedis(t)
testutils.CleanTestRedisKeys(t, rdb)
store := NewEnterpriseCardAuthorizationStore(tx, rdb)
ctx := context.Background()
prefix := uniqueCardAuthTestPrefix()
enterprise := &model.Enterprise{
EnterpriseName: prefix + "_测试企业",
EnterpriseCode: prefix,
Status: 1,
}
require.NoError(t, tx.Create(enterprise).Error)
carrier := &model.Carrier{
CarrierName: "测试运营商",
CarrierType: "CMCC",
Status: 1,
}
require.NoError(t, tx.Create(carrier).Error)
cards := []*model.IotCard{
{ICCID: prefix + "0001", CardType: "normal", CarrierID: carrier.ID, Status: 2},
{ICCID: prefix + "0002", CardType: "normal", CarrierID: carrier.ID, Status: 2},
}
for _, c := range cards {
require.NoError(t, tx.Create(c).Error)
}
now := time.Now()
auths := []*model.EnterpriseCardAuthorization{
{EnterpriseID: enterprise.ID, CardID: cards[0].ID, AuthorizedBy: 1, AuthorizedAt: now, AuthorizerType: 2},
{EnterpriseID: enterprise.ID, CardID: cards[1].ID, AuthorizedBy: 1, AuthorizedAt: now, AuthorizerType: 2, RevokedBy: ptrUintCA(1), RevokedAt: &now},
}
for _, auth := range auths {
require.NoError(t, store.Create(ctx, auth))
}
t.Run("获取未撤销的授权记录", func(t *testing.T) {
result, err := store.ListByEnterprise(ctx, enterprise.ID, false)
require.NoError(t, err)
assert.Len(t, result, 1)
assert.Equal(t, cards[0].ID, result[0].CardID)
})
t.Run("获取所有授权记录包括已撤销", func(t *testing.T) {
result, err := store.ListByEnterprise(ctx, enterprise.ID, true)
require.NoError(t, err)
assert.Len(t, result, 2)
})
}
func ptrUintCA(v uint) *uint {
return &v
}
func TestEnterpriseCardAuthorizationStore_GetActiveAuthsByCardIDs(t *testing.T) {
tx := testutils.NewTestTransaction(t)
rdb := testutils.GetTestRedis(t)
testutils.CleanTestRedisKeys(t, rdb)
store := NewEnterpriseCardAuthorizationStore(tx, rdb)
ctx := context.Background()
prefix := uniqueCardAuthTestPrefix()
enterprise := &model.Enterprise{
EnterpriseName: prefix + "_测试企业",
EnterpriseCode: prefix,
Status: 1,
}
require.NoError(t, tx.Create(enterprise).Error)
carrier := &model.Carrier{
CarrierName: "测试运营商",
CarrierType: "CMCC",
Status: 1,
}
require.NoError(t, tx.Create(carrier).Error)
cards := []*model.IotCard{
{ICCID: prefix + "0001", CardType: "normal", CarrierID: carrier.ID, Status: 2},
{ICCID: prefix + "0002", CardType: "normal", CarrierID: carrier.ID, Status: 2},
{ICCID: prefix + "0003", CardType: "normal", CarrierID: carrier.ID, Status: 2},
}
for _, c := range cards {
require.NoError(t, tx.Create(c).Error)
}
now := time.Now()
auths := []*model.EnterpriseCardAuthorization{
{EnterpriseID: enterprise.ID, CardID: cards[0].ID, AuthorizedBy: 1, AuthorizedAt: now, AuthorizerType: 2},
{EnterpriseID: enterprise.ID, CardID: cards[1].ID, AuthorizedBy: 1, AuthorizedAt: now, AuthorizerType: 2, RevokedBy: ptrUintCA(1), RevokedAt: &now},
}
for _, auth := range auths {
require.NoError(t, store.Create(ctx, auth))
}
t.Run("获取有效授权的卡ID映射", func(t *testing.T) {
cardIDs := []uint{cards[0].ID, cards[1].ID, cards[2].ID}
result, err := store.GetActiveAuthsByCardIDs(ctx, enterprise.ID, cardIDs)
require.NoError(t, err)
assert.True(t, result[cards[0].ID])
assert.False(t, result[cards[1].ID])
assert.False(t, result[cards[2].ID])
})
t.Run("空卡ID列表返回空映射", func(t *testing.T) {
result, err := store.GetActiveAuthsByCardIDs(ctx, enterprise.ID, []uint{})
require.NoError(t, err)
assert.Empty(t, result)
})
}

View File

@@ -0,0 +1,160 @@
package postgres
import (
"context"
"time"
"github.com/break/junhong_cmp_fiber/internal/model"
"github.com/redis/go-redis/v9"
"gorm.io/gorm"
)
type EnterpriseDeviceAuthorizationStore struct {
db *gorm.DB
redis *redis.Client
}
func NewEnterpriseDeviceAuthorizationStore(db *gorm.DB, redis *redis.Client) *EnterpriseDeviceAuthorizationStore {
return &EnterpriseDeviceAuthorizationStore{
db: db,
redis: redis,
}
}
func (s *EnterpriseDeviceAuthorizationStore) Create(ctx context.Context, auth *model.EnterpriseDeviceAuthorization) error {
return s.db.WithContext(ctx).Create(auth).Error
}
func (s *EnterpriseDeviceAuthorizationStore) BatchCreate(ctx context.Context, auths []*model.EnterpriseDeviceAuthorization) error {
if len(auths) == 0 {
return nil
}
batchSize := 100
for i := 0; i < len(auths); i += batchSize {
end := i + batchSize
if end > len(auths) {
end = len(auths)
}
batch := auths[i:end]
if err := s.db.WithContext(ctx).Create(batch).Error; err != nil {
return err
}
}
return nil
}
func (s *EnterpriseDeviceAuthorizationStore) GetByID(ctx context.Context, id uint) (*model.EnterpriseDeviceAuthorization, error) {
var auth model.EnterpriseDeviceAuthorization
err := s.db.WithContext(ctx).Where("id = ?", id).First(&auth).Error
if err != nil {
return nil, err
}
return &auth, nil
}
func (s *EnterpriseDeviceAuthorizationStore) GetByDeviceID(ctx context.Context, deviceID uint) (*model.EnterpriseDeviceAuthorization, error) {
var auth model.EnterpriseDeviceAuthorization
err := s.db.WithContext(ctx).
Where("device_id = ? AND revoked_at IS NULL", deviceID).
First(&auth).Error
if err != nil {
return nil, err
}
return &auth, nil
}
func (s *EnterpriseDeviceAuthorizationStore) GetByEnterpriseID(ctx context.Context, enterpriseID uint, includeRevoked bool) ([]*model.EnterpriseDeviceAuthorization, error) {
var auths []*model.EnterpriseDeviceAuthorization
query := s.db.WithContext(ctx).Where("enterprise_id = ?", enterpriseID)
if !includeRevoked {
query = query.Where("revoked_at IS NULL")
}
err := query.Find(&auths).Error
return auths, err
}
type DeviceAuthListOptions struct {
EnterpriseID *uint
DeviceIDs []uint
AuthorizerID *uint
IncludeRevoked bool
Page int
PageSize int
}
func (s *EnterpriseDeviceAuthorizationStore) ListByEnterprise(ctx context.Context, opts DeviceAuthListOptions) ([]*model.EnterpriseDeviceAuthorization, int64, error) {
var auths []*model.EnterpriseDeviceAuthorization
var total int64
query := s.db.WithContext(ctx).Model(&model.EnterpriseDeviceAuthorization{})
if opts.EnterpriseID != nil {
query = query.Where("enterprise_id = ?", *opts.EnterpriseID)
}
if len(opts.DeviceIDs) > 0 {
query = query.Where("device_id IN ?", opts.DeviceIDs)
}
if opts.AuthorizerID != nil {
query = query.Where("authorized_by = ?", *opts.AuthorizerID)
}
if !opts.IncludeRevoked {
query = query.Where("revoked_at IS NULL")
}
if err := query.Count(&total).Error; err != nil {
return nil, 0, err
}
if opts.Page > 0 && opts.PageSize > 0 {
offset := (opts.Page - 1) * opts.PageSize
query = query.Offset(offset).Limit(opts.PageSize)
}
err := query.Order("authorized_at DESC").Find(&auths).Error
return auths, total, err
}
func (s *EnterpriseDeviceAuthorizationStore) RevokeByIDs(ctx context.Context, ids []uint, revokedBy uint) error {
now := time.Now()
return s.db.WithContext(ctx).
Model(&model.EnterpriseDeviceAuthorization{}).
Where("id IN ? AND revoked_at IS NULL", ids).
Updates(map[string]interface{}{
"revoked_by": revokedBy,
"revoked_at": now,
}).Error
}
func (s *EnterpriseDeviceAuthorizationStore) GetActiveAuthsByDeviceIDs(ctx context.Context, enterpriseID uint, deviceIDs []uint) (map[uint]bool, error) {
if len(deviceIDs) == 0 {
return make(map[uint]bool), nil
}
var auths []model.EnterpriseDeviceAuthorization
err := s.db.WithContext(ctx).
Select("device_id").
Where("enterprise_id = ? AND device_id IN ? AND revoked_at IS NULL", enterpriseID, deviceIDs).
Find(&auths).Error
if err != nil {
return nil, err
}
result := make(map[uint]bool, len(auths))
for _, auth := range auths {
result[auth.DeviceID] = true
}
return result, nil
}
func (s *EnterpriseDeviceAuthorizationStore) ListDeviceIDsByEnterprise(ctx context.Context, enterpriseID uint) ([]uint, error) {
var deviceIDs []uint
err := s.db.WithContext(ctx).
Model(&model.EnterpriseDeviceAuthorization{}).
Where("enterprise_id = ? AND revoked_at IS NULL", enterpriseID).
Pluck("device_id", &deviceIDs).Error
return deviceIDs, err
}

View File

@@ -0,0 +1,517 @@
package postgres
import (
"context"
"fmt"
"testing"
"time"
"github.com/break/junhong_cmp_fiber/internal/model"
"github.com/break/junhong_cmp_fiber/tests/testutils"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func uniqueDeviceAuthTestPrefix() string {
return fmt.Sprintf("EDA%d", time.Now().UnixNano()%1000000000)
}
func TestEnterpriseDeviceAuthorizationStore_Create(t *testing.T) {
tx := testutils.NewTestTransaction(t)
rdb := testutils.GetTestRedis(t)
testutils.CleanTestRedisKeys(t, rdb)
store := NewEnterpriseDeviceAuthorizationStore(tx, rdb)
ctx := context.Background()
prefix := uniqueDeviceAuthTestPrefix()
enterprise := &model.Enterprise{
EnterpriseName: prefix + "_测试企业",
EnterpriseCode: prefix,
Status: 1,
}
require.NoError(t, tx.Create(enterprise).Error)
device := &model.Device{
DeviceNo: prefix + "_001",
DeviceName: "测试设备1",
Status: 2,
}
require.NoError(t, tx.Create(device).Error)
t.Run("成功创建授权记录", func(t *testing.T) {
auth := &model.EnterpriseDeviceAuthorization{
EnterpriseID: enterprise.ID,
DeviceID: device.ID,
AuthorizedBy: 1,
AuthorizedAt: time.Now(),
AuthorizerType: 2,
Remark: "测试授权",
}
err := store.Create(ctx, auth)
require.NoError(t, err)
assert.NotZero(t, auth.ID)
assert.Equal(t, enterprise.ID, auth.EnterpriseID)
assert.Equal(t, device.ID, auth.DeviceID)
})
}
func TestEnterpriseDeviceAuthorizationStore_BatchCreate(t *testing.T) {
tx := testutils.NewTestTransaction(t)
rdb := testutils.GetTestRedis(t)
testutils.CleanTestRedisKeys(t, rdb)
store := NewEnterpriseDeviceAuthorizationStore(tx, rdb)
ctx := context.Background()
prefix := uniqueDeviceAuthTestPrefix()
enterprise := &model.Enterprise{
EnterpriseName: prefix + "_测试企业",
EnterpriseCode: prefix,
Status: 1,
}
require.NoError(t, tx.Create(enterprise).Error)
devices := []*model.Device{
{DeviceNo: prefix + "_001", DeviceName: "测试设备1", Status: 2},
{DeviceNo: prefix + "_002", DeviceName: "测试设备2", Status: 2},
{DeviceNo: prefix + "_003", DeviceName: "测试设备3", Status: 2},
}
for _, d := range devices {
require.NoError(t, tx.Create(d).Error)
}
t.Run("成功批量创建授权记录", func(t *testing.T) {
now := time.Now()
auths := []*model.EnterpriseDeviceAuthorization{
{EnterpriseID: enterprise.ID, DeviceID: devices[0].ID, AuthorizedBy: 1, AuthorizedAt: now, AuthorizerType: 2},
{EnterpriseID: enterprise.ID, DeviceID: devices[1].ID, AuthorizedBy: 1, AuthorizedAt: now, AuthorizerType: 2},
{EnterpriseID: enterprise.ID, DeviceID: devices[2].ID, AuthorizedBy: 1, AuthorizedAt: now, AuthorizerType: 2},
}
err := store.BatchCreate(ctx, auths)
require.NoError(t, err)
for _, auth := range auths {
assert.NotZero(t, auth.ID)
}
})
t.Run("空列表不报错", func(t *testing.T) {
err := store.BatchCreate(ctx, []*model.EnterpriseDeviceAuthorization{})
require.NoError(t, err)
})
}
func TestEnterpriseDeviceAuthorizationStore_GetByID(t *testing.T) {
tx := testutils.NewTestTransaction(t)
rdb := testutils.GetTestRedis(t)
testutils.CleanTestRedisKeys(t, rdb)
store := NewEnterpriseDeviceAuthorizationStore(tx, rdb)
ctx := context.Background()
prefix := uniqueDeviceAuthTestPrefix()
enterprise := &model.Enterprise{
EnterpriseName: prefix + "_测试企业",
EnterpriseCode: prefix,
Status: 1,
}
require.NoError(t, tx.Create(enterprise).Error)
device := &model.Device{
DeviceNo: prefix + "_001",
DeviceName: "测试设备1",
Status: 2,
}
require.NoError(t, tx.Create(device).Error)
auth := &model.EnterpriseDeviceAuthorization{
EnterpriseID: enterprise.ID,
DeviceID: device.ID,
AuthorizedBy: 1,
AuthorizedAt: time.Now(),
AuthorizerType: 2,
Remark: "测试备注",
}
require.NoError(t, store.Create(ctx, auth))
t.Run("成功获取授权记录", func(t *testing.T) {
result, err := store.GetByID(ctx, auth.ID)
require.NoError(t, err)
assert.Equal(t, auth.ID, result.ID)
assert.Equal(t, enterprise.ID, result.EnterpriseID)
assert.Equal(t, device.ID, result.DeviceID)
assert.Equal(t, "测试备注", result.Remark)
})
t.Run("记录不存在返回错误", func(t *testing.T) {
_, err := store.GetByID(ctx, 99999)
require.Error(t, err)
})
}
func TestEnterpriseDeviceAuthorizationStore_GetByDeviceID(t *testing.T) {
tx := testutils.NewTestTransaction(t)
rdb := testutils.GetTestRedis(t)
testutils.CleanTestRedisKeys(t, rdb)
store := NewEnterpriseDeviceAuthorizationStore(tx, rdb)
ctx := context.Background()
prefix := uniqueDeviceAuthTestPrefix()
enterprise := &model.Enterprise{
EnterpriseName: prefix + "_测试企业",
EnterpriseCode: prefix,
Status: 1,
}
require.NoError(t, tx.Create(enterprise).Error)
device := &model.Device{
DeviceNo: prefix + "_001",
DeviceName: "测试设备1",
Status: 2,
}
require.NoError(t, tx.Create(device).Error)
auth := &model.EnterpriseDeviceAuthorization{
EnterpriseID: enterprise.ID,
DeviceID: device.ID,
AuthorizedBy: 1,
AuthorizedAt: time.Now(),
AuthorizerType: 2,
}
require.NoError(t, store.Create(ctx, auth))
t.Run("成功通过设备ID获取授权记录", func(t *testing.T) {
result, err := store.GetByDeviceID(ctx, device.ID)
require.NoError(t, err)
assert.Equal(t, auth.ID, result.ID)
assert.Equal(t, enterprise.ID, result.EnterpriseID)
})
t.Run("设备未授权返回错误", func(t *testing.T) {
_, err := store.GetByDeviceID(ctx, 99999)
require.Error(t, err)
})
t.Run("已撤销的授权不返回", func(t *testing.T) {
device2 := &model.Device{
DeviceNo: prefix + "_002",
DeviceName: "测试设备2",
Status: 2,
}
require.NoError(t, tx.Create(device2).Error)
now := time.Now()
revokedAuth := &model.EnterpriseDeviceAuthorization{
EnterpriseID: enterprise.ID,
DeviceID: device2.ID,
AuthorizedBy: 1,
AuthorizedAt: now,
AuthorizerType: 2,
RevokedBy: ptrUint(1),
RevokedAt: &now,
}
require.NoError(t, store.Create(ctx, revokedAuth))
_, err := store.GetByDeviceID(ctx, device2.ID)
require.Error(t, err)
})
}
func TestEnterpriseDeviceAuthorizationStore_GetByEnterpriseID(t *testing.T) {
tx := testutils.NewTestTransaction(t)
rdb := testutils.GetTestRedis(t)
testutils.CleanTestRedisKeys(t, rdb)
store := NewEnterpriseDeviceAuthorizationStore(tx, rdb)
ctx := context.Background()
prefix := uniqueDeviceAuthTestPrefix()
enterprise := &model.Enterprise{
EnterpriseName: prefix + "_测试企业",
EnterpriseCode: prefix,
Status: 1,
}
require.NoError(t, tx.Create(enterprise).Error)
devices := []*model.Device{
{DeviceNo: prefix + "_001", DeviceName: "测试设备1", Status: 2},
{DeviceNo: prefix + "_002", DeviceName: "测试设备2", Status: 2},
}
for _, d := range devices {
require.NoError(t, tx.Create(d).Error)
}
now := time.Now()
auths := []*model.EnterpriseDeviceAuthorization{
{EnterpriseID: enterprise.ID, DeviceID: devices[0].ID, AuthorizedBy: 1, AuthorizedAt: now, AuthorizerType: 2},
{EnterpriseID: enterprise.ID, DeviceID: devices[1].ID, AuthorizedBy: 1, AuthorizedAt: now, AuthorizerType: 2, RevokedBy: ptrUint(1), RevokedAt: &now},
}
for _, auth := range auths {
require.NoError(t, store.Create(ctx, auth))
}
t.Run("获取未撤销的授权记录", func(t *testing.T) {
result, err := store.GetByEnterpriseID(ctx, enterprise.ID, false)
require.NoError(t, err)
assert.Len(t, result, 1)
assert.Equal(t, devices[0].ID, result[0].DeviceID)
})
t.Run("获取所有授权记录包括已撤销", func(t *testing.T) {
result, err := store.GetByEnterpriseID(ctx, enterprise.ID, true)
require.NoError(t, err)
assert.Len(t, result, 2)
})
}
func TestEnterpriseDeviceAuthorizationStore_ListByEnterprise(t *testing.T) {
tx := testutils.NewTestTransaction(t)
rdb := testutils.GetTestRedis(t)
testutils.CleanTestRedisKeys(t, rdb)
store := NewEnterpriseDeviceAuthorizationStore(tx, rdb)
ctx := context.Background()
prefix := uniqueDeviceAuthTestPrefix()
enterprise := &model.Enterprise{
EnterpriseName: prefix + "_测试企业",
EnterpriseCode: prefix,
Status: 1,
}
require.NoError(t, tx.Create(enterprise).Error)
devices := make([]*model.Device, 5)
for i := 0; i < 5; i++ {
devices[i] = &model.Device{
DeviceNo: fmt.Sprintf("%s_%03d", prefix, i+1),
DeviceName: fmt.Sprintf("测试设备%d", i+1),
Status: 2,
}
require.NoError(t, tx.Create(devices[i]).Error)
}
now := time.Now()
for i, d := range devices {
auth := &model.EnterpriseDeviceAuthorization{
EnterpriseID: enterprise.ID,
DeviceID: d.ID,
AuthorizedBy: uint(i + 1),
AuthorizedAt: now.Add(time.Duration(i) * time.Minute),
AuthorizerType: 2,
}
require.NoError(t, store.Create(ctx, auth))
}
t.Run("分页查询", func(t *testing.T) {
opts := DeviceAuthListOptions{
EnterpriseID: &enterprise.ID,
Page: 1,
PageSize: 2,
}
result, total, err := store.ListByEnterprise(ctx, opts)
require.NoError(t, err)
assert.Equal(t, int64(5), total)
assert.Len(t, result, 2)
})
t.Run("按授权人过滤", func(t *testing.T) {
authorizerID := uint(1)
opts := DeviceAuthListOptions{
EnterpriseID: &enterprise.ID,
AuthorizerID: &authorizerID,
Page: 1,
PageSize: 10,
}
result, total, err := store.ListByEnterprise(ctx, opts)
require.NoError(t, err)
assert.Equal(t, int64(1), total)
assert.Len(t, result, 1)
})
t.Run("按设备ID过滤", func(t *testing.T) {
opts := DeviceAuthListOptions{
EnterpriseID: &enterprise.ID,
DeviceIDs: []uint{devices[0].ID, devices[1].ID},
Page: 1,
PageSize: 10,
}
result, total, err := store.ListByEnterprise(ctx, opts)
require.NoError(t, err)
assert.Equal(t, int64(2), total)
assert.Len(t, result, 2)
})
}
func TestEnterpriseDeviceAuthorizationStore_RevokeByIDs(t *testing.T) {
tx := testutils.NewTestTransaction(t)
rdb := testutils.GetTestRedis(t)
testutils.CleanTestRedisKeys(t, rdb)
store := NewEnterpriseDeviceAuthorizationStore(tx, rdb)
ctx := context.Background()
prefix := uniqueDeviceAuthTestPrefix()
enterprise := &model.Enterprise{
EnterpriseName: prefix + "_测试企业",
EnterpriseCode: prefix,
Status: 1,
}
require.NoError(t, tx.Create(enterprise).Error)
devices := []*model.Device{
{DeviceNo: prefix + "_001", DeviceName: "测试设备1", Status: 2},
{DeviceNo: prefix + "_002", DeviceName: "测试设备2", Status: 2},
}
for _, d := range devices {
require.NoError(t, tx.Create(d).Error)
}
now := time.Now()
auths := []*model.EnterpriseDeviceAuthorization{
{EnterpriseID: enterprise.ID, DeviceID: devices[0].ID, AuthorizedBy: 1, AuthorizedAt: now, AuthorizerType: 2},
{EnterpriseID: enterprise.ID, DeviceID: devices[1].ID, AuthorizedBy: 1, AuthorizedAt: now, AuthorizerType: 2},
}
for _, auth := range auths {
require.NoError(t, store.Create(ctx, auth))
}
t.Run("成功撤销授权", func(t *testing.T) {
revokerID := uint(2)
err := store.RevokeByIDs(ctx, []uint{auths[0].ID}, revokerID)
require.NoError(t, err)
result, err := store.GetByID(ctx, auths[0].ID)
require.NoError(t, err)
assert.NotNil(t, result.RevokedAt)
assert.NotNil(t, result.RevokedBy)
assert.Equal(t, revokerID, *result.RevokedBy)
})
t.Run("已撤销的记录不再被重复撤销", func(t *testing.T) {
err := store.RevokeByIDs(ctx, []uint{auths[0].ID}, uint(3))
require.NoError(t, err)
result, err := store.GetByID(ctx, auths[0].ID)
require.NoError(t, err)
assert.Equal(t, uint(2), *result.RevokedBy)
})
}
func TestEnterpriseDeviceAuthorizationStore_GetActiveAuthsByDeviceIDs(t *testing.T) {
tx := testutils.NewTestTransaction(t)
rdb := testutils.GetTestRedis(t)
testutils.CleanTestRedisKeys(t, rdb)
store := NewEnterpriseDeviceAuthorizationStore(tx, rdb)
ctx := context.Background()
prefix := uniqueDeviceAuthTestPrefix()
enterprise := &model.Enterprise{
EnterpriseName: prefix + "_测试企业",
EnterpriseCode: prefix,
Status: 1,
}
require.NoError(t, tx.Create(enterprise).Error)
devices := []*model.Device{
{DeviceNo: prefix + "_001", DeviceName: "测试设备1", Status: 2},
{DeviceNo: prefix + "_002", DeviceName: "测试设备2", Status: 2},
{DeviceNo: prefix + "_003", DeviceName: "测试设备3", Status: 2},
}
for _, d := range devices {
require.NoError(t, tx.Create(d).Error)
}
now := time.Now()
auths := []*model.EnterpriseDeviceAuthorization{
{EnterpriseID: enterprise.ID, DeviceID: devices[0].ID, AuthorizedBy: 1, AuthorizedAt: now, AuthorizerType: 2},
{EnterpriseID: enterprise.ID, DeviceID: devices[1].ID, AuthorizedBy: 1, AuthorizedAt: now, AuthorizerType: 2, RevokedBy: ptrUint(1), RevokedAt: &now},
}
for _, auth := range auths {
require.NoError(t, store.Create(ctx, auth))
}
t.Run("获取有效授权的设备ID映射", func(t *testing.T) {
deviceIDs := []uint{devices[0].ID, devices[1].ID, devices[2].ID}
result, err := store.GetActiveAuthsByDeviceIDs(ctx, enterprise.ID, deviceIDs)
require.NoError(t, err)
assert.True(t, result[devices[0].ID])
assert.False(t, result[devices[1].ID])
assert.False(t, result[devices[2].ID])
})
t.Run("空设备ID列表返回空映射", func(t *testing.T) {
result, err := store.GetActiveAuthsByDeviceIDs(ctx, enterprise.ID, []uint{})
require.NoError(t, err)
assert.Empty(t, result)
})
}
func TestEnterpriseDeviceAuthorizationStore_ListDeviceIDsByEnterprise(t *testing.T) {
tx := testutils.NewTestTransaction(t)
rdb := testutils.GetTestRedis(t)
testutils.CleanTestRedisKeys(t, rdb)
store := NewEnterpriseDeviceAuthorizationStore(tx, rdb)
ctx := context.Background()
prefix := uniqueDeviceAuthTestPrefix()
enterprise := &model.Enterprise{
EnterpriseName: prefix + "_测试企业",
EnterpriseCode: prefix,
Status: 1,
}
require.NoError(t, tx.Create(enterprise).Error)
devices := []*model.Device{
{DeviceNo: prefix + "_001", DeviceName: "测试设备1", Status: 2},
{DeviceNo: prefix + "_002", DeviceName: "测试设备2", Status: 2},
}
for _, d := range devices {
require.NoError(t, tx.Create(d).Error)
}
now := time.Now()
auths := []*model.EnterpriseDeviceAuthorization{
{EnterpriseID: enterprise.ID, DeviceID: devices[0].ID, AuthorizedBy: 1, AuthorizedAt: now, AuthorizerType: 2},
{EnterpriseID: enterprise.ID, DeviceID: devices[1].ID, AuthorizedBy: 1, AuthorizedAt: now, AuthorizerType: 2},
}
for _, auth := range auths {
require.NoError(t, store.Create(ctx, auth))
}
t.Run("获取企业授权设备ID列表", func(t *testing.T) {
result, err := store.ListDeviceIDsByEnterprise(ctx, enterprise.ID)
require.NoError(t, err)
assert.Len(t, result, 2)
assert.Contains(t, result, devices[0].ID)
assert.Contains(t, result, devices[1].ID)
})
t.Run("无授权记录返回空列表", func(t *testing.T) {
result, err := store.ListDeviceIDsByEnterprise(ctx, 99999)
require.NoError(t, err)
assert.Empty(t, result)
})
}
func ptrUint(v uint) *uint {
return &v
}

View File

@@ -0,0 +1,3 @@
-- Rollback: drop the enterprise device authorization table
DROP TABLE IF EXISTS tb_enterprise_device_authorization;

View File

@@ -0,0 +1,48 @@
-- Migration: create the enterprise device authorization table
-- Notes:
-- 1. Create tb_enterprise_device_authorization to record device-to-enterprise authorization relationships
-- 2. Add a partial unique index so a device is authorized to at most one enterprise at a time
-- 3. Add regular indexes for query performance
CREATE TABLE IF NOT EXISTS tb_enterprise_device_authorization (
id BIGSERIAL PRIMARY KEY,
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
deleted_at TIMESTAMP,
enterprise_id BIGINT NOT NULL,
device_id BIGINT NOT NULL,
authorized_by BIGINT NOT NULL,
authorized_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
authorizer_type SMALLINT NOT NULL,
revoked_by BIGINT,
revoked_at TIMESTAMP,
remark VARCHAR(500) DEFAULT ''
);
-- Partial unique index: a device can be authorized to at most one enterprise at a time
CREATE UNIQUE INDEX IF NOT EXISTS uq_active_device_auth
ON tb_enterprise_device_authorization(device_id)
WHERE revoked_at IS NULL AND deleted_at IS NULL;
-- Regular indexes
CREATE INDEX IF NOT EXISTS idx_eda_enterprise ON tb_enterprise_device_authorization(enterprise_id);
CREATE INDEX IF NOT EXISTS idx_eda_device ON tb_enterprise_device_authorization(device_id);
CREATE INDEX IF NOT EXISTS idx_eda_authorized_by ON tb_enterprise_device_authorization(authorized_by);
CREATE INDEX IF NOT EXISTS idx_eda_deleted_at ON tb_enterprise_device_authorization(deleted_at);
-- Table comment
COMMENT ON TABLE tb_enterprise_device_authorization IS '企业设备授权表';
-- Column comments
COMMENT ON COLUMN tb_enterprise_device_authorization.id IS '主键ID';
COMMENT ON COLUMN tb_enterprise_device_authorization.created_at IS '创建时间';
COMMENT ON COLUMN tb_enterprise_device_authorization.updated_at IS '更新时间';
COMMENT ON COLUMN tb_enterprise_device_authorization.deleted_at IS '软删除时间';
COMMENT ON COLUMN tb_enterprise_device_authorization.enterprise_id IS '被授权企业ID';
COMMENT ON COLUMN tb_enterprise_device_authorization.device_id IS '被授权设备ID';
COMMENT ON COLUMN tb_enterprise_device_authorization.authorized_by IS '授权人账号ID';
COMMENT ON COLUMN tb_enterprise_device_authorization.authorized_at IS '授权时间';
COMMENT ON COLUMN tb_enterprise_device_authorization.authorizer_type IS '授权人类型(2=平台用户 3=代理账号)';
COMMENT ON COLUMN tb_enterprise_device_authorization.revoked_by IS '回收人账号ID';
COMMENT ON COLUMN tb_enterprise_device_authorization.revoked_at IS '回收时间';
COMMENT ON COLUMN tb_enterprise_device_authorization.remark IS '授权备注';

View File

@@ -0,0 +1,4 @@
-- Rollback: drop the device authorization link column from the enterprise card authorization table
ALTER TABLE tb_enterprise_card_authorization
DROP COLUMN IF EXISTS device_auth_id;

View File

@@ -0,0 +1,14 @@
-- Migration: add the device authorization link column to the enterprise card authorization table
-- Notes:
-- 1. Add device_auth_id referencing the device authorization record
-- 2. NULL means created via single-card authorization; non-NULL means created via device authorization
-- 3. Add an index for query performance
ALTER TABLE tb_enterprise_card_authorization
ADD COLUMN IF NOT EXISTS device_auth_id BIGINT DEFAULT NULL;
-- Index
CREATE INDEX IF NOT EXISTS idx_eca_device_auth ON tb_enterprise_card_authorization(device_auth_id);
-- Column comment
COMMENT ON COLUMN tb_enterprise_card_authorization.device_auth_id IS '关联的设备授权ID(NULL=单卡授权 有值=设备授权)';

View File

@@ -0,0 +1,2 @@
schema: spec-driven
created: 2026-01-29

View File

@@ -0,0 +1,181 @@
## Context
The system currently supports "authorizing individual cards to enterprise users" via the `tb_enterprise_card_authorization` table, which records card-to-enterprise authorization relationships. The existing implementation has the following problems:
1. **No device dimension**: enterprise users only see a card list and cannot manage assets at the device level
2. **Inconsistent logic**: the single-card entry point supports DeviceBundle (after confirmation, all cards under a device are authorized), but there is no standalone device-authorization concept
3. **No provenance**: it is impossible to tell whether a card authorization was created individually or through a device authorization
Related existing modules:
- `Device` model: device table `tb_device`, ownership identified by `shop_id`
- `DeviceSimBinding` model: device-card binding table; a device binds at most 4 cards
- `EnterpriseCardAuthorization` model: card authorization table
- Device distribution: `AllocateDevices` distributes devices to agent shops (updates `shop_id`)
## Goals / Non-Goals
**Goals:**
- Authorize devices to enterprises as a unit, automatically authorizing all cards bound to the device
- A device can be authorized to at most one enterprise at a time (uniqueness constraint)
- Card authorizations are created automatically on device authorization and revoked together on recall
- Card authorization records reference the device authorization so the origin can be traced
- Enterprise users can view the device list, device details, and the bound cards
- Enterprise users can suspend/resume cards under a device
- The single-card entry point rejects cards that are bound to a device
**Non-Goals:**
- Device distribution (device → shop, already implemented)
- Device-level suspend/resume (operations stay at card level)
- Unbinding cards from devices (enterprises can only view, not unbind)
- Device wallets or plan purchasing
## Decisions
### 1. A dedicated device authorization table
**Decision**: create `tb_enterprise_device_authorization`, structurally similar to the card authorization table
**Rationale**:
- Device authorization is an independent business concept and needs its own records
- Supports device-level authorize/recall operations and record queries
- Decoupled from the card authorization table, with clear responsibilities
**Alternatives considered**:
- ❌ Add a device_id column to the card authorization table: cannot express "device authorization" as a standalone concept, and the recall logic gets complicated
- ❌ Card authorization table plus a marker column: cannot track the metadata of a device authorization (authorizer, time, etc.)
### 2. Add a device_auth_id reference to the card authorization table
**Decision**: add a `device_auth_id` column to `tb_enterprise_card_authorization`
**Rationale**:
- Distinguishes the origin of a card authorization: single-card (NULL) vs device (non-NULL)
- When a device authorization is recalled, the card authorizations to revoke can be located precisely
- Supports querying "all cards under a given device authorization"
**Schema change**:
```sql
ALTER TABLE tb_enterprise_card_authorization
ADD COLUMN device_auth_id BIGINT DEFAULT NULL;
CREATE INDEX idx_eca_device_auth ON tb_enterprise_card_authorization(device_auth_id);
```
### 3. Uniqueness constraint on device authorization
**Decision**: use a partial unique index to guarantee a device is authorized to at most one enterprise at a time
**Implementation**:
```sql
CREATE UNIQUE INDEX uq_active_device_auth
ON tb_enterprise_device_authorization(device_id)
WHERE revoked_at IS NULL AND deleted_at IS NULL;
```
**Rationale**:
- Multiple historical (revoked) authorization records may exist
- Only the "currently active" authorization is constrained to be unique
- Consistent with the design pattern used for card authorization
### 4. Authorization linkage
**Decision**: create the device authorization and the card authorizations within one transaction
**Allocate flow**:
```
allocate device → begin transaction
  → create EnterpriseDeviceAuthorization
  → obtain device_auth_id
  → query all cards bound to the device
  → batch-create EnterpriseCardAuthorization (device_auth_id = ID from previous step)
→ commit
```
**Recall flow**:
```
recall device → begin transaction
  → set EnterpriseDeviceAuthorization.revoked_at
  → batch-update the linked EnterpriseCardAuthorization.revoked_at
→ commit
```
### 5. Changes to the single-card entry point
**Decision**: remove DeviceBundle support; reject cards that are bound to a device
**Changes**:
- `Service.AllocateCards`: remove the DeviceBundle pre-check and handling logic
- `Service.AllocateCardsPreview`: return an error directly instead of a DeviceBundle
- DTO: remove `DeviceBundle`, `ConfirmDeviceBundles`, and related structures
**Rationale**:
- Separation of concerns: single-card authorization handles standalone cards only; device authorization handles devices
- Avoids confusion: users no longer see device-related prompts at the single-card entry point
- Simplifies the code by removing the DeviceBundle handling logic
**BREAKING CHANGE**: the frontend single-card authorization page must adapt; confirming device bundles is no longer supported
### 6. API path design
**Admin backend**:
```
POST /api/admin/enterprises/:id/allocate-devices # authorize devices
POST /api/admin/enterprises/:id/recall-devices   # recall devices
GET  /api/admin/enterprises/:id/devices          # device list
```
**Enterprise side (H5)**:
```
GET  /api/h5/enterprise/devices                                   # device list
GET  /api/h5/enterprise/devices/:device_id                        # device detail
POST /api/h5/enterprise/devices/:device_id/cards/:card_id/suspend # suspend
POST /api/h5/enterprise/devices/:device_id/cards/:card_id/resume  # resume
```
**Rationale**:
- Consistent with the existing single-card API style (`/enterprises/:id/allocate-cards`)
- The H5 side uses `/enterprise/devices` rather than `/enterprises/:id/devices` because enterprise users can only access their own resources
### 7. Access control
**Admin backend**:
- Platform users: may authorize any device to any enterprise
- Agent users: may only authorize devices of their own shop to enterprises under their own shop
**Enterprise side**:
- Only devices authorized to the user's own enterprise are accessible
- Filtered automatically via a GORM callback
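Combined with a `GetActiveAuthsByDeviceIDs`-style lookup, the enterprise-side check reduces to filtering requested IDs against the active-authorization map. A minimal sketch (the helper name is made up for illustration):

```go
package main

import "fmt"

// splitByAuthorization partitions requested device IDs using the map
// returned by an active-authorization lookup: allowed IDs may be served,
// denied IDs must produce a permission error.
func splitByAuthorization(requested []uint, active map[uint]bool) (allowed, denied []uint) {
	for _, id := range requested {
		if active[id] {
			allowed = append(allowed, id)
		} else {
			denied = append(denied, id)
		}
	}
	return allowed, denied
}

func main() {
	// device 1 and 3 hold active authorizations; 2 does not
	active := map[uint]bool{1: true, 3: true}
	allowed, denied := splitByAuthorization([]uint{1, 2, 3}, active)
	fmt.Println(allowed, denied) // [1 3] [2]
}
```

In the actual service the GORM callback applies the same predicate as a WHERE clause, so unauthorized rows never leave the database.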
## Risks / Trade-offs
### [Risk] Breaking change at the single-card entry point
**Impact**: the frontend single-card page changes behavior; device-bound cards can no longer be authorized there
**Mitigation**:
- Notify the frontend team ahead of time
- Return a clear error message directing users to the device authorization entry point
- Optional: include the devices involved in the error response so the frontend can offer a jump
### [Risk] Data migration
**Impact**: device-bound cards that were previously authorized via the single-card entry point cannot have their origin traced
**Mitigation**:
- The new `device_auth_id` column defaults to NULL, which is compatible with historical data
- Historical rows are treated as "single-card authorizations"; behavior is unchanged
- No data migration script is required
### [Trade-off] Recall granularity
**Choice**: recalling a device authorization also revokes all linked card authorizations
**Trade-off**: recalling the device authorization while keeping the card authorizations is not supported
**Rationale**: simpler business logic and consistent authorization relationships
### [Trade-off] Cards bound after authorization
**Scenario**: a device is authorized to an enterprise and later binds additional cards
**Choice**: new cards are not authorized automatically; re-authorize the device or handle them separately
**Rationale**: avoids the security risk of implicit authorization and keeps authorization explicit and controllable
## Open Questions
1. **Device authorization record page**: is a standalone list page for device authorization records needed (similar to the existing card authorization record page)?
   - Suggestion: not in this iteration; the enterprise device list covers the basic need
2. **Batch size limit**: what is the cap on devices per authorize/recall request?
   - Suggestion: same as single-card authorization, at most 100 devices

View File

@@ -0,0 +1,53 @@
## Why
Enterprise users can currently only manage individually authorized cards, but in practice the device (binding 1-4 cards) is the more common unit of authorization. Enterprises need to view and manage authorized assets at the device level: the device list, device details with bound cards, and suspend/resume operations on those cards. This mirrors the "distribute devices to agents" model, but targets enterprises instead of shops.
## What Changes
- **New device authorization table**: `tb_enterprise_device_authorization`, recording device-to-enterprise authorization relationships
- **Card authorization table change**: `tb_enterprise_card_authorization` gains a `device_auth_id` column referencing the device authorization record
- **New device authorization APIs** (admin): authorize devices to an enterprise, recall device authorizations, list an enterprise's devices
- **New enterprise device APIs** (H5): device list, device detail (with cards), suspend/resume
- **Single-card authorization change**: **BREAKING** — the single-card entry point rejects device-bound cards; DeviceBundle support is removed
- **Authorization linkage**: authorizing a device automatically authorizes all of its bound cards; recall revokes them together
## Capabilities
### New Capabilities
- `enterprise-device-authorization`: device-to-enterprise authorization, including authorize/recall, authorization record management, and the enterprise-side device list and management
### Modified Capabilities
- `enterprise-card-authorization`: reject device-bound cards, remove the DeviceBundle confirmation flow, and force use of the device authorization entry point
## Impact
**Database**:
- New table `tb_enterprise_device_authorization`
- Modified table `tb_enterprise_card_authorization` (new column + index)
**Admin API**:
- New `POST /api/admin/enterprises/:id/allocate-devices`
- New `POST /api/admin/enterprises/:id/recall-devices`
- New `GET /api/admin/enterprises/:id/devices`
- Modified `POST /api/admin/enterprises/:id/allocate-cards` (rejects device-bound cards)
**H5 API**:
- New `GET /api/h5/enterprise/devices`
- New `GET /api/h5/enterprise/devices/:device_id`
- New `POST /api/h5/enterprise/devices/:device_id/cards/:card_id/suspend`
- New `POST /api/h5/enterprise/devices/:device_id/cards/:card_id/resume`
**Code modules**:
- Model: new `EnterpriseDeviceAuthorization`; modified `EnterpriseCardAuthorization`
- Store: new `EnterpriseDeviceAuthorizationStore`
- Service: new `enterprise_device` service; modified `enterprise_card` service
- Handler: new `admin/enterprise_device.go` and `h5/enterprise_device.go`
- Routes: register the device authorization routes
- DTO: new device authorization DTOs
**Frontend impact**:
- The admin console needs a new device authorization page
- The enterprise H5 app needs new device list and management pages
- The single-card authorization page changes behavior (device-bound cards can no longer be authorized)

View File

@@ -0,0 +1,132 @@
## MODIFIED Requirements
### Requirement: Enterprise single-card authorization management
The system SHALL support authorizing IoT cards to enterprises. Authorization grants usage rights only; ownership is not transferred.
**Authorization rules**:
- Agents may only authorize their own cards (owner_type="agent" and owner_id = their shop_id) to their own enterprises
- The platform may authorize any card, but an agent's card may only be authorized to that agent's enterprises
- Batch authorization is supported (at most 1000 cards)
- **Device-bound cards MUST NOT be authorized through the single-card endpoint; the device authorization endpoint MUST be used**
- Only cards in "distributed (2)" status may be authorized
**Authorization record storage**:
- Authorization relationships are recorded in the `enterprise_card_authorization` table
- Records created via single-card authorization have device_auth_id = NULL
- The `asset_allocation_record` table is not used (it records allocation, not authorization)
**Access control**:
- Enterprise users can only view cards authorized to them
- The card's shop_id is unchanged by authorization (ownership is not transferred)
- Once an authorization is recalled, the enterprise immediately loses access
#### Scenario: Agent authorizes own card to own enterprise
- **WHEN** an agent (shop_id=10) authorizes a card not bound to any device to an enterprise (enterprise_id=5, owner_shop_id=10)
- **THEN** the system creates an authorization record (device_auth_id=NULL) and the enterprise can view and manage the card
#### Scenario: Platform authorizes any card to an enterprise
- **WHEN** a platform administrator authorizes a card not bound to any device to an enterprise
- **THEN** the system creates an authorization record (device_auth_id=NULL) and the enterprise gains access to the card
#### Scenario: Agent cannot authorize another agent's card
- **WHEN** an agent (shop_id=10) tries to authorize another agent's card (owner_id=20) to an enterprise
- **THEN** the system rejects the operation with a permission error
#### Scenario: Device-bound cards cannot use single-card authorization
- **WHEN** a user tries to authorize a device-bound card through the single-card endpoint
- **THEN** the system rejects the operation with error code CodeCannotAuthorizeBoundCard and the message "该卡已绑定设备,请使用设备授权功能"
#### Scenario: Only distributed cards may be authorized
- **WHEN** a user tries to authorize a card that is not in "distributed" status
- **THEN** the system rejects the operation and indicates that only "distributed" cards may be authorized
---
### Requirement: Enterprise card authorization data model
The system SHALL define the EnterpriseCardAuthorization entity recording card-to-enterprise authorization relationships.
**Entity fields**:
- `id`: primary key (BIGINT)
- `enterprise_id`: authorized enterprise ID (BIGINT, references the enterprises table)
- `card_id`: IoT card ID (BIGINT, references the iot_cards table)
- `authorizer_id`: authorizer account ID (BIGINT, references the accounts table)
- `authorizer_type`: authorizer type (SMALLINT, 2=platform user 3=agent account)
- `authorized_at`: authorization time (TIMESTAMP)
- `revoked_at`: revocation time (TIMESTAMP, nullable)
- `revoked_by`: revoker account ID (BIGINT, nullable)
- `remark`: remark (VARCHAR(500))
- **`device_auth_id`: linked device authorization ID (BIGINT, nullable)**
  - NULL = created via single-card authorization
  - non-NULL = created via device authorization
- `created_at`: creation time (TIMESTAMP)
- `updated_at`: update time (TIMESTAMP)
**New index**:
- `idx_eca_device_auth ON tb_enterprise_card_authorization(device_auth_id)`
#### Scenario: Create a single-card authorization record
- **WHEN** a card is authorized to an enterprise via the single-card endpoint
- **THEN** the system creates an EnterpriseCardAuthorization record with device_auth_id = NULL
#### Scenario: Create a device-linked card authorization record
- **WHEN** a card authorization is created through a device authorization
- **THEN** the system creates an EnterpriseCardAuthorization record whose device_auth_id points at the corresponding device authorization ID
#### Scenario: Revoke an authorization
- **WHEN** an enterprise's card authorization is recalled
- **THEN** the system sets revoked_at and revoked_by on the record; the record is kept for history, not deleted
---
### Requirement: Batch authorization endpoint
The system SHALL provide a batch endpoint that authorizes multiple cards to an enterprise in one request.
**Endpoint design**:
- Path: `POST /api/admin/enterprises/:id/allocate-cards`
- Request body:
```json
{
  "iccids": ["8986001234567890", "8986001234567891"],
  "remark": "批量授权"
}
```
- Response: the lists of succeeded and failed cards with reasons
**Processing flow**:
1. Validate authorization permission for each card
2. Check that the card status is "distributed"
3. **Check whether the card is bound to a device; device-bound cards are rejected outright with an error**
4. Check whether the card is already authorized to the enterprise
5. Create the authorization record (device_auth_id = NULL)
6. Return the results
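Steps 1-5 amount to classifying each card before any rows are written. A pure-function sketch of that classification (the struct, status codes, and reason strings here are illustrative, not the service's actual types or error codes):

```go
package main

import "fmt"

// Card is a simplified stand-in for the real card model plus pre-check inputs.
type Card struct {
	ICCID        string
	Status       int  // 2 = distributed
	BoundDevice  bool // already bound to a device
	AlreadyAuthd bool // already authorized to this enterprise
}

type Failure struct {
	ICCID  string
	Reason string
}

// classifyCards applies the batch-allocate pre-checks in order and splits
// the input into authorizable cards and failures with reasons.
func classifyCards(cards []Card) (ok []string, failed []Failure) {
	for _, c := range cards {
		switch {
		case c.Status != 2:
			failed = append(failed, Failure{c.ICCID, "card is not in distributed status"})
		case c.BoundDevice:
			failed = append(failed, Failure{c.ICCID, "card is bound to a device; use device authorization"})
		case c.AlreadyAuthd:
			failed = append(failed, Failure{c.ICCID, "card is already authorized to this enterprise"})
		default:
			ok = append(ok, c.ICCID)
		}
	}
	return ok, failed
}

func main() {
	cards := []Card{
		{ICCID: "A", Status: 2},
		{ICCID: "B", Status: 2, BoundDevice: true},
		{ICCID: "C", Status: 1},
	}
	ok, failed := classifyCards(cards)
	fmt.Println(len(ok), len(failed)) // 1 2
}
```

Only the cards in `ok` proceed to record creation; `failed` feeds the per-card reasons in the response.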
**Removed functionality**:
- ~~DeviceBundle pre-check and confirmation flow~~ (removed)
- ~~confirm_device_bundles parameter~~ (removed)
- ~~AllocatedDevices response field~~ (removed)
#### Scenario: Batch authorization succeeds
- **WHEN** an agent batch-authorizes 5 cards, none bound to a device, to an enterprise
- **THEN** the system creates 5 authorization records (device_auth_id all NULL) and reports full success
#### Scenario: Batch contains device-bound cards
- **WHEN** an agent batch-authorizes 5 cards, 2 of which are bound to devices
- **THEN** the system creates 3 authorization records and reports 3 succeeded and 2 failed, with failure reason "该卡已绑定设备,请使用设备授权功能"
#### Scenario: Partial success
- **WHEN** an agent batch-authorizes 5 cards, 1 bound to a device and 1 not in distributed status
- **THEN** the system creates 3 authorization records and reports 3 succeeded and 2 failed with their respective reasons

View File

@@ -0,0 +1,319 @@
## ADDED Requirements

### Requirement: Enterprise Device Authorization Data Model
The system SHALL define an EnterpriseDeviceAuthorization entity that records the authorization relationship between devices and enterprises.

**Entity fields**
- `id`: primary key (BIGSERIAL)
- `enterprise_id`: authorized enterprise ID (BIGINT, NOT NULL)
- `device_id`: authorized device ID (BIGINT, NOT NULL)
- `authorized_by`: authorizer account ID (BIGINT, NOT NULL)
- `authorized_at`: authorization time (TIMESTAMP, NOT NULL)
- `authorizer_type`: authorizer type (SMALLINT, 2 = platform user, 3 = agent account)
- `revoked_by`: revoker account ID (BIGINT, nullable)
- `revoked_at`: revocation time (TIMESTAMP, nullable)
- `remark`: remark (VARCHAR(500))
- `created_at`, `updated_at`, `deleted_at`: standard timestamp fields

**Uniqueness constraint**
- A device can be authorized to only one enterprise at a time: `UNIQUE (device_id) WHERE revoked_at IS NULL AND deleted_at IS NULL`

**Table name**: `tb_enterprise_device_authorization`

#### Scenario: Create a device authorization record
- **WHEN** a device is authorized to an enterprise
- **THEN** the system creates an EnterpriseDeviceAuthorization record with authorized_at set to the current time and revoked_at set to NULL

#### Scenario: Duplicate device authorization is rejected
- **WHEN** a device already authorized to enterprise A (and not yet revoked) is authorized to enterprise B
- **THEN** the system rejects the operation with the error "设备已授权给其他企业" (the device is already authorized to another enterprise)

#### Scenario: Re-authorization is allowed after revocation
- **WHEN** a device authorization has been revoked and the device is re-authorized to the same or another enterprise
- **THEN** the system allows a new authorization record to be created
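The partial unique index enforces this at the database layer; the same invariant, checked in application code, looks like the following sketch (types are illustrative stand-ins for the table rows):

```go
package main

import "fmt"

// DeviceAuth is a minimal stand-in for a tb_enterprise_device_authorization row.
type DeviceAuth struct {
	DeviceID     int64
	EnterpriseID int64
	Revoked      bool // revoked_at IS NOT NULL
	Deleted      bool // deleted_at IS NOT NULL
}

// canAuthorizeDevice mirrors the partial unique index
// UNIQUE (device_id) WHERE revoked_at IS NULL AND deleted_at IS NULL:
// a new grant is allowed only when the device has no live authorization row.
func canAuthorizeDevice(existing []DeviceAuth, deviceID int64) bool {
	for _, a := range existing {
		if a.DeviceID == deviceID && !a.Revoked && !a.Deleted {
			return false
		}
	}
	return true
}

func main() {
	auths := []DeviceAuth{{DeviceID: 7, EnterpriseID: 1}}
	fmt.Println(canAuthorizeDevice(auths, 7)) // active grant exists: rejected
	auths[0].Revoked = true
	fmt.Println(canAuthorizeDevice(auths, 7)) // revoked: re-authorization allowed
}
```

Keeping the index as the source of truth means a concurrent double-grant still fails at insert time even if two requests pass the application-level check simultaneously.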
---

### Requirement: Card Authorization Records Linked to Device Authorization
The system SHALL add a device_auth_id column to the EnterpriseCardAuthorization table, linking card authorizations to device authorization records.

**New column**
- `device_auth_id`: associated device authorization ID (BIGINT, nullable)
  - NULL = created via single-card authorization
  - non-NULL = created via device authorization

**Index**
- `idx_eca_device_auth ON tb_enterprise_card_authorization(device_auth_id)`

#### Scenario: Device authorization creates linked card authorizations
- **WHEN** a card authorization record is created through device authorization
- **THEN** the card authorization record's device_auth_id is set to the corresponding device authorization ID

#### Scenario: Single-card authorization has no device link
- **WHEN** a card authorization record is created through single-card authorization
- **THEN** the card authorization record's device_auth_id is NULL

---

### Requirement: Device Authorization Management
The system SHALL support authorizing devices to enterprises, including batch authorization and revocation.

**Authorization rules**
- Agents can only authorize devices from their own shop to enterprises under their own shop
- The platform can authorize any device to any enterprise
- The device MUST belong to the operator (the platform or the agent's shop)
- The device MUST be in "distributed" status (status = 2)
- The device MUST NOT already be authorized to another enterprise (uniqueness constraint)

**Authorization cascade**
- When a device is authorized, the system SHALL automatically authorize all cards currently bound to the device
- The card authorization records' device_auth_id points to the device authorization record
- If the device has no bound cards, the device authorization record is still created (with no card authorizations)

#### Scenario: Agent authorizes a device to its own enterprise
- **WHEN** an agent (shop_id = 10) authorizes a device from its own shop to an enterprise (owner_shop_id = 10)
- **THEN** the system creates a device authorization record plus card authorization records for all cards bound to the device

#### Scenario: Platform authorizes any device
- **WHEN** a platform administrator authorizes a device to any enterprise
- **THEN** the system creates the authorization record without checking device/enterprise ownership

#### Scenario: Agent cannot authorize another shop's device
- **WHEN** an agent (shop_id = 10) tries to authorize a device belonging to another shop (shop_id = 20)
- **THEN** the system rejects the operation with a permission error

#### Scenario: Device authorization cascades to card authorization
- **WHEN** a device bound to 3 cards is authorized to an enterprise
- **THEN** the system creates 1 device authorization record and 3 card authorization records, each card authorization's device_auth_id pointing to that device authorization
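The cascade in the last scenario can be sketched as a pure function that fans one device authorization out into its card authorizations (struct names are illustrative, not the project's models):

```go
package main

import "fmt"

// CardAuth is the card-side record created when a device is authorized;
// DeviceAuthID links it back to the device authorization.
type CardAuth struct {
	CardID       int64
	EnterpriseID int64
	DeviceAuthID int64
}

// authorizeDeviceCards sketches the cascade: given the new device
// authorization's ID and the device's bound cards, it produces one card
// authorization per card, each pointing at the device authorization.
// A device with no bound cards simply yields an empty slice — the device
// authorization itself is still created by the caller.
func authorizeDeviceCards(deviceAuthID, enterpriseID int64, boundCardIDs []int64) []CardAuth {
	out := make([]CardAuth, 0, len(boundCardIDs))
	for _, id := range boundCardIDs {
		out = append(out, CardAuth{CardID: id, EnterpriseID: enterpriseID, DeviceAuthID: deviceAuthID})
	}
	return out
}

func main() {
	cards := authorizeDeviceCards(100, 5, []int64{1, 2, 3})
	fmt.Println(len(cards), cards[0].DeviceAuthID)
}
```

In the real service both inserts run in one transaction, so a device authorization and its card authorizations appear (or fail) together.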
---

### Requirement: Batch Device Authorization API
The system SHALL provide an admin API for batch-authorizing devices to an enterprise.

**API design**
- Path: `POST /api/admin/enterprises/:id/allocate-devices`
- Request body:
```json
{
  "device_nos": ["D001", "D002", "D003"],
  "remark": "批量授权备注"
}
```
- Response body:
```json
{
  "success_count": 2,
  "fail_count": 1,
  "failed_items": [
    { "device_no": "D003", "reason": "设备不存在" }
  ],
  "authorized_devices": [
    { "device_id": 1, "device_no": "D001", "card_count": 3 },
    { "device_id": 2, "device_no": "D002", "card_count": 2 }
  ]
}
```

**Processing flow**
1. Verify the enterprise exists and the caller has permission
2. Verify authorization permission for each device
3. Check device status and the uniqueness constraint
4. Create device and card authorization records within a transaction
5. Return the processing result

#### Scenario: Batch authorization succeeds
- **WHEN** the platform batch-authorizes 3 eligible devices to an enterprise
- **THEN** the system creates 3 device authorization records plus the corresponding card authorization records and reports all as successful

#### Scenario: Batch authorization partially succeeds
- **WHEN** an agent batch-authorizes 3 devices, 1 of which is already authorized to another enterprise
- **THEN** the system creates 2 device authorization records and returns 2 successes and 1 failure with the failure reason

---

### Requirement: Device Authorization Revocation
The system SHALL support revoking device authorizations, revoking the linked card authorizations at the same time.

**Revocation rules**
- Agents can revoke device authorizations they granted
- The platform can revoke any device authorization
- Revocation is performed within a transaction

**Revocation cascade**
- When a device authorization is revoked, the system SHALL also revoke every card authorization whose device_auth_id points to it
- The revoked_at and revoked_by fields are updated

**API design**
- Path: `POST /api/admin/enterprises/:id/recall-devices`
- Request body:
```json
{
  "device_nos": ["D001", "D002"]
}
```

#### Scenario: Revoking a device authorization cascades to card authorizations
- **WHEN** the authorization of a device bound to 3 cards is revoked
- **THEN** the system sets revoked_at on the device authorization and, at the same time, on the 3 linked card authorizations

#### Scenario: After revocation the enterprise cannot access the device or its cards
- **WHEN** an enterprise user queries the device or its cards after the device authorization has been revoked
- **THEN** the system does not return the device or its cards
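The revocation cascade can be sketched as a filter over the card authorizations, matching on device_auth_id. This models the semantics of the `RevokeByDeviceAuthID` store method named in the tasks, not its actual implementation:

```go
package main

import "fmt"

// cardAuth holds only the link fields relevant to cascade revocation.
type cardAuth struct {
	ID           int64
	DeviceAuthID *int64 // nil for single-card authorizations
	Revoked      bool
}

// revokeByDeviceAuth marks every card authorization that references the
// given device authorization as revoked and returns how many rows were
// affected. Single-card authorizations (DeviceAuthID == nil) are untouched.
func revokeByDeviceAuth(auths []cardAuth, deviceAuthID int64) int {
	n := 0
	for i := range auths {
		if auths[i].DeviceAuthID != nil && *auths[i].DeviceAuthID == deviceAuthID && !auths[i].Revoked {
			auths[i].Revoked = true
			n++
		}
	}
	return n
}

func main() {
	da := int64(100)
	auths := []cardAuth{
		{ID: 1, DeviceAuthID: &da},
		{ID: 2, DeviceAuthID: &da},
		{ID: 3}, // single-card authorization: must stay active
	}
	fmt.Println(revokeByDeviceAuth(auths, 100), auths[2].Revoked)
}
```

Because single-card grants carry a NULL device_auth_id, revoking a device never touches authorizations the enterprise obtained through the single-card flow.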
---

### Requirement: Admin Enterprise Device List
The system SHALL provide an admin API for listing an enterprise's authorized devices.

**API design**
- Path: `GET /api/admin/enterprises/:id/devices`
- Query parameters: `page`, `page_size`, `device_no`, `status`
- Response: device list including device info and the number of bound cards

**Data permissions**
- Platform users can view the authorized devices of any enterprise
- Agent users can only view the authorized devices of enterprises under their own shop

#### Scenario: Query an enterprise's authorized device list
- **WHEN** an administrator queries the authorized devices of enterprise ID = 5
- **THEN** the system returns all devices authorized to that enterprise, each including its bound-card count

---

### Requirement: Enterprise-Side Device List
The system SHALL provide an H5 API for enterprise users to list their own authorized devices.

**API design**
- Path: `GET /api/h5/enterprise/devices`
- Query parameters: `page`, `page_size`, `device_no`
- Response:
```json
{
  "list": [
    {
      "device_id": 1,
      "device_no": "D001",
      "device_name": "GPS追踪器-001",
      "device_model": "GT-100",
      "card_count": 3,
      "authorized_at": "2025-01-29T10:00:00Z"
    }
  ],
  "total": 10
}
```

**Data permissions**
- Enterprise users can only see devices authorized to their own enterprise
- Filtering is applied automatically via a GORM callback

#### Scenario: Enterprise user views the device list
- **WHEN** an enterprise user queries the device list
- **THEN** the system returns all devices authorized to that enterprise, including device info and card counts

#### Scenario: Enterprise user cannot see unauthorized devices
- **WHEN** an enterprise user queries the device list
- **THEN** the system does not return devices that are not authorized to that enterprise

---

### Requirement: Enterprise-Side Device Detail
The system SHALL provide an H5 API for enterprise users to view a device's detail, including the cards bound to the device.

**API design**
- Path: `GET /api/h5/enterprise/devices/:device_id`
- Response:
```json
{
  "device": {
    "device_id": 1,
    "device_no": "D001",
    "device_name": "GPS追踪器-001",
    "device_model": "GT-100",
    "device_type": "GPS",
    "authorized_at": "2025-01-29T10:00:00Z"
  },
  "cards": [
    {
      "card_id": 101,
      "iccid": "8986001234567890",
      "msisdn": "1380000001",
      "carrier_name": "中国联通",
      "network_status": 1,
      "network_status_name": "开机"
    }
  ]
}
```

**Visible information**
- Basic device info: device number, name, model, type
- Card info: ICCID, MSISDN, carrier, network status

**Hidden information**
- Commercially sensitive data such as cost price, distribution price, and supplier

#### Scenario: Enterprise user views device detail
- **WHEN** an enterprise user views the detail of authorized device ID = 1
- **THEN** the system returns the device info and all cards bound to the device

#### Scenario: Enterprise user cannot view an unauthorized device
- **WHEN** an enterprise user tries to view the detail of an unauthorized device
- **THEN** the system returns a 404 error

---

### Requirement: Enterprise-Side Card Suspend/Resume on Devices
The system SHALL provide H5 APIs for enterprise users to suspend/resume cards under their authorized devices.

**API design**
- Suspend: `POST /api/h5/enterprise/devices/:device_id/cards/:card_id/suspend`
- Resume: `POST /api/h5/enterprise/devices/:device_id/cards/:card_id/resume`

**Permission checks**
- The device MUST be authorized to the current enterprise
- The card MUST belong to the device (verified via device_sim_binding)
- The card MUST be authorized through the device (device_auth_id is non-NULL and valid)

#### Scenario: Enterprise user suspends a card under a device
- **WHEN** an enterprise user suspends a card under an authorized device
- **THEN** the system sets the card's network_status to 0 (suspended)

#### Scenario: Enterprise user resumes a card under a device
- **WHEN** an enterprise user resumes a card under an authorized device
- **THEN** the system sets the card's network_status to 1 (active)

#### Scenario: Cards under unauthorized devices cannot be operated
- **WHEN** an enterprise user tries to operate a card under an unauthorized device
- **THEN** the system returns a 403 error
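The three-step permission chain above can be sketched as a single check function. The boolean inputs stand in for lookups against the device authorization, device_sim_binding, and card authorization tables; the error strings are illustrative, not the project's actual messages:

```go
package main

import (
	"errors"
	"fmt"
)

// checkCardOperation encodes the H5 suspend/resume permission chain:
// every step must pass before the network-status change is allowed.
func checkCardOperation(deviceAuthorizedToEnterprise, cardBoundToDevice, cardAuthViaDeviceAuth bool) error {
	if !deviceAuthorizedToEnterprise {
		return errors.New("403: device not authorized to this enterprise")
	}
	if !cardBoundToDevice {
		return errors.New("403: card does not belong to this device")
	}
	if !cardAuthViaDeviceAuth {
		return errors.New("403: card not authorized via this device authorization")
	}
	return nil
}

func main() {
	fmt.Println(checkCardOperation(true, true, true))  // allowed
	fmt.Println(checkCardOperation(false, true, true)) // rejected at the first check
}
```

Running the checks in this order means the cheapest lookup (device authorization) short-circuits most unauthorized requests before the binding tables are consulted.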

@@ -0,0 +1,163 @@
## 1. Database Migrations
- [x] 1.1 Create the migration file for the `tb_enterprise_device_authorization` table
  - Include all columns: enterprise_id, device_id, authorized_by, authorized_at, authorizer_type, revoked_by, revoked_at, remark
  - Add the partial unique index: `UNIQUE (device_id) WHERE revoked_at IS NULL AND deleted_at IS NULL`
  - Add regular indexes: idx_eda_enterprise, idx_eda_device, idx_eda_authorized_by
- [x] 1.2 Create the migration file altering the `tb_enterprise_card_authorization` table
  - Add column: device_auth_id (BIGINT, nullable)
  - Add index: idx_eca_device_auth
- [x] 1.3 Run the migrations and verify the table structure is correct

## 2. Model Layer
- [x] 2.1 Create the `EnterpriseDeviceAuthorization` model
  - File path: `internal/model/enterprise_device_authorization.go`
  - Include all fields and the TableName method
  - Follow the project's GORM model conventions
- [x] 2.2 Modify the `EnterpriseCardAuthorization` model
  - Add the `DeviceAuthID` field (`*uint`)
  - Add the GORM tag: `gorm:"column:device_auth_id;comment:关联的设备授权ID"`

## 3. Store Layer
- [x] 3.1 Create `EnterpriseDeviceAuthorizationStore`
  - File path: `internal/store/postgres/enterprise_device_authorization_store.go`
  - Implement: Create, BatchCreate, GetByID, GetByDeviceID, GetByEnterpriseID
  - Implement: ListByEnterprise (pagination, filtering), RevokeByIDs, GetActiveAuthsByDeviceIDs
  - Implement: ListDeviceIDsByEnterprise (list the device IDs authorized to an enterprise)
- [x] 3.2 Modify `EnterpriseCardAuthorizationStore`
  - Add method: RevokeByDeviceAuthID (batch-revoke card authorizations by device authorization ID)

## 4. Service Layer — Device Authorization Service
- [x] 4.1 Create the `enterprise_device` service
  - File path: `internal/service/enterprise_device/service.go`
  - Injected dependencies: db, enterpriseStore, deviceStore, deviceSimBindingStore, enterpriseDeviceAuthStore, enterpriseCardAuthStore, logger
- [x] 4.2 Implement `AllocateDevices` (authorize devices to an enterprise)
  - Verify the enterprise exists and the caller has permission
  - Verify each device's permission and status
  - Check the uniqueness constraint (device not authorized to another enterprise)
  - Create device and card authorization records within a transaction
  - Return the authorization result
- [x] 4.3 Implement `RecallDevices` (revoke device authorizations)
  - Verify the authorization records exist
  - Revoke the device authorizations and the linked card authorizations within a transaction
  - Return the revocation result
- [x] 4.4 Implement `ListDevices` (admin: enterprise device list)
  - Paginated query of devices authorized to the enterprise
  - Include device info and bound-card count
- [x] 4.5 Implement `ListDevicesForEnterprise` (H5: enterprise device list)
  - Enterprise users query their own authorized devices
  - Data permissions filtered automatically
- [x] 4.6 Implement `GetDeviceDetail` (H5: device detail)
  - Query device info and the bound card list
  - Verify enterprise permission
- [x] 4.7 Implement `SuspendCard` and `ResumeCard` (H5: suspend/resume)
  - Verify the device/card authorization relationship
  - Update the card's network status

## 5. Service Layer — Single-Card Authorization Changes
- [x] 5.1 Modify `AllocateCardsPreview` in `enterprise_card/service.go`
  - Remove the DeviceBundle handling logic
  - Put device-bound cards straight into FailedItems with reason "该卡已绑定设备,请使用设备授权功能"
  - Remove the DeviceBundles response field
- [x] 5.2 Modify `AllocateCards` in `enterprise_card/service.go`
  - Remove the DeviceBundle confirmation flow (confirm_device_bundles parameter)
  - Remove the AllocatedDevices response field
  - Reject device-bound cards outright
- [x] 5.3 Clean up the related DTOs
  - Remove or deprecate: DeviceBundle, DeviceBundleCard, ConfirmDeviceBundles, AllocatedDevice and related fields
  - Update AllocateCardsReq and AllocateCardsResp

## 6. Handler Layer — Admin
- [x] 6.1 Create the `admin/enterprise_device.go` handler
  - AllocateDevices: authorize devices to an enterprise
  - RecallDevices: revoke device authorizations
  - ListDevices: enterprise device list
- [x] 6.2 Register the admin routes
  - File path: `internal/routes/enterprise_device.go`
  - POST /api/admin/enterprises/:id/allocate-devices
  - POST /api/admin/enterprises/:id/recall-devices
  - GET /api/admin/enterprises/:id/devices
- [x] 6.3 Update the bootstrap registration
  - Register the new Store, Service, and Handler in `internal/bootstrap/`

## 7. Handler Layer — Enterprise H5
- [x] 7.1 Create the `h5/enterprise_device.go` handler
  - ListDevices: device list
  - GetDeviceDetail: device detail
  - SuspendCard: suspend a card
  - ResumeCard: resume a card
- [x] 7.2 Register the H5 routes
  - File path: `internal/routes/h5/enterprise_device.go`
  - GET /api/h5/enterprise/devices
  - GET /api/h5/enterprise/devices/:device_id
  - POST /api/h5/enterprise/devices/:device_id/cards/:card_id/suspend
  - POST /api/h5/enterprise/devices/:device_id/cards/:card_id/resume

## 8. DTO Layer
- [x] 8.1 Create the device authorization DTOs
  - File path: `internal/model/dto/enterprise_device_authorization_dto.go`
  - AllocateDevicesReq / AllocateDevicesResp
  - RecallDevicesReq / RecallDevicesResp
  - EnterpriseDeviceListReq / EnterpriseDeviceListResp
  - EnterpriseDeviceDetailResp
  - DeviceCardSuspendReq / DeviceCardResumeReq

## 9. Error Codes
- [x] 9.1 Add device authorization error codes
  - CodeDeviceAlreadyAuthorized: the device is already authorized to this enterprise
  - CodeDeviceNotAuthorized: the device is not authorized to this enterprise
  - CodeDeviceAuthorizedToOther: the device is authorized to another enterprise
  - CodeCannotAuthorizeOthersDevice: cannot authorize a device you do not own

## 10. Tests
- [x] 10.1 Store-layer unit tests
  - Tests for each EnterpriseDeviceAuthorizationStore method
  - Tests for the new EnterpriseCardAuthorizationStore method
- [x] 10.2 Service-layer unit tests
  - Tests for each enterprise_device service method
  - Permission-check tests
  - Authorization-cascade tests
  - Test coverage: 88.9%
- [x] 10.3 Update the enterprise_card service tests
  - Verify device-bound cards are correctly rejected
  - Remove the DeviceBundle-related tests
- [x] 10.4 Integration tests
  - Full authorize/revoke flow tests
  - Enterprise-side API tests
  - Permission-isolation tests

## 11. Documentation Updates
- [x] 11.1 Update the OpenAPI doc generator
  - Register the new handlers in `cmd/api/docs.go` and `cmd/gendocs/main.go`
  - Regenerate the OpenAPI docs
- [x] 11.2 Create the feature documentation
  - Create a device authorization feature doc under `docs/enterprise-device-authorization/`

@@ -0,0 +1,2 @@
schema: spec-driven
created: 2026-01-29

@@ -0,0 +1,99 @@
# 设计RoleHandler 请求验证
## 上下文
项目中已有标准的请求验证模式:
- 使用 `github.com/go-playground/validator/v10`
- 在 bootstrap 层创建全局 validator 实例
- Handler 构造函数接收 validator
- 请求解析后立即调用 `validator.Struct()` 验证
AuthHandler 已正确实现此模式(参考 `internal/handler/admin/auth.go:34`RoleHandler 需要遵循相同模式。
## 目标 / 非目标
**目标:**
- RoleHandler 遵循项目验证标准模式
- 所有请求在到达 Service 层前完成验证
- 取消被跳过的集成测试
**非目标:**
- 不修改 DTO 的 validate 标签(已正确定义)
- 不改变错误响应格式(使用现有 CodeInvalidParam
- 不引入新的验证库或模式
## 决策
### 决策 1遵循 AuthHandler 模式
**方法:** 完全复制 AuthHandler 的验证模式到 RoleHandler
**理由:**
- 保持代码库一致性
- AuthHandler 模式已验证有效
- 无需重新设计验证流程
**实现细节:**
```go
// 1. RoleHandler 结构体添加 validator 字段
type RoleHandler struct {
service *roleService.Service
validator *validator.Validate // 新增
}
// 2. 构造函数接收 validator
func NewRoleHandler(service *roleService.Service, validator *validator.Validate) *RoleHandler {
return &RoleHandler{
service: service,
validator: validator,
}
}
// 3. Create 方法中验证
func (h *RoleHandler) Create(c *fiber.Ctx) error {
var req dto.CreateRoleRequest
if err := c.BodyParser(&req); err != nil {
return errors.New(errors.CodeInvalidParam, "请求参数解析失败")
}
// 新增验证逻辑
if err := h.validator.Struct(&req); err != nil {
return errors.New(errors.CodeInvalidParam, "参数验证失败: "+err.Error())
}
// ... 现有逻辑
}
```
### 决策 2验证所有接受 body 的方法
**需要验证的方法:**
- `Create()` - CreateRoleRequest
- `Update()` - UpdateRoleRequest
- `AssignPermissions()` - AssignPermissionsRequest
- `UpdateStatus()` - UpdateRoleStatusRequest
**不需要验证的方法:**
- `Get()` - 只有路径参数,已有 ParseUint 检查
- `List()` - Query 参数,已有 QueryParser
- `GetPermissions()` - 只有路径参数
- `RemovePermission()` - 只有路径参数
- `Delete()` - 只有路径参数
### 决策 3错误消息格式
**使用现有格式:**
```go
errors.New(errors.CodeInvalidParam, "参数验证失败: "+err.Error())
```
**理由:**
- 与 AuthHandler 保持一致
- validator 库的错误消息已足够清晰
- 前端可以解析错误码和消息
## 测试策略
- 取消 `tests/integration/role_test.go:51` 的 TODO 跳过
- 运行现有测试验证"缺少必填字段返回错误"场景
- 无需添加新测试(现有被跳过的测试已覆盖验证逻辑)

@@ -0,0 +1,27 @@
# Proposal: Add Request Validation to RoleHandler

## Why
RoleHandler currently performs no request parameter validation, so invalid requests pass through the Handler layer into the Service layer. This violates the project's "the Handler layer is responsible for parameter validation" principle and forced integration tests to be skipped (tests/integration/role_test.go:51,282).

Other handlers (such as AuthHandler) already implement validation correctly; RoleHandler needs to follow the same pattern.

## What Changes
- RoleHandler will receive a validator instance and validate all request parameters
- The Create and Update methods will call validator.Struct() to validate the DTOs
- The skipped validation tests in role_test.go will be re-enabled

## Capabilities
### New capabilities
- `role-request-validation`: all RoleHandler requests (Create, Update, AssignPermissions, UpdateStatus) will validate required fields and formats

### Modified capabilities
None (no change to existing functionality; only input validation is strengthened)

## Impact
- `internal/handler/admin/role.go`: add the validator field and call the validation method
- `internal/bootstrap/handlers.go`: pass the validator to RoleHandler
- `tests/integration/role_test.go`: re-enable the skipped tests

@@ -0,0 +1,67 @@
# Spec: Role Request Validation

## ADDED Requirements

### Requirement: Create validates required fields
RoleHandler.Create MUST validate all required fields of CreateRoleRequest.

#### Scenario: Missing role_name returns a validation error
- **WHEN** a client sends POST /api/admin/roles with a body missing the role_name field
- **THEN** HTTP 400 is returned
- **AND** the response contains error code CodeInvalidParam
- **AND** the error message indicates "参数验证失败"

#### Scenario: Missing role_type returns a validation error
- **WHEN** a client sends POST /api/admin/roles with a body missing the role_type field
- **THEN** HTTP 400 is returned
- **AND** the response contains error code CodeInvalidParam

#### Scenario: Overlong role_name returns a validation error
- **WHEN** a client sends POST /api/admin/roles with a role_name longer than 50 characters
- **THEN** HTTP 400 is returned
- **AND** the response contains error code CodeInvalidParam

### Requirement: Update validates field formats
RoleHandler.Update MUST validate the field formats of UpdateRoleRequest.

#### Scenario: Overlong role_name returns a validation error
- **WHEN** a client sends PUT /api/admin/roles/:id with a role_name longer than 50 characters
- **THEN** HTTP 400 is returned
- **AND** the response contains error code CodeInvalidParam

#### Scenario: Invalid status value returns a validation error
- **WHEN** a client sends PUT /api/admin/roles/:id with a status value other than 0 or 1
- **THEN** HTTP 400 is returned
- **AND** the response contains error code CodeInvalidParam

### Requirement: AssignPermissions validates the permission ID list
RoleHandler.AssignPermissions MUST validate that the permission ID list is non-empty.

#### Scenario: Empty perm_ids array returns a validation error
- **WHEN** a client sends POST /api/admin/roles/:id/permissions with perm_ids set to an empty array []
- **THEN** HTTP 400 is returned
- **AND** the response contains error code CodeInvalidParam

### Requirement: UpdateStatus validates the status value
RoleHandler.UpdateStatus MUST validate that the status value is within the valid range.

#### Scenario: Invalid status value returns a validation error
- **WHEN** a client sends PUT /api/admin/roles/:id/status with a status value other than 0 or 1
- **THEN** HTTP 400 is returned
- **AND** the response contains error code CodeInvalidParam

## Test Requirements
- Remove the TODO skip in tests/integration/role_test.go (line 51)
- Verify the tests pass, proving the validation logic works

@@ -0,0 +1,30 @@
# Implementation Tasks

## 1. Modify the RoleHandler struct
- [x] 1.1 Add a `validator *validator.Validate` field to the RoleHandler struct
- [x] 1.2 Modify the NewRoleHandler constructor to accept the validator parameter
- [x] 1.3 Import `github.com/go-playground/validator/v10`

## 2. Add validation logic
- [x] 2.1 Create: call validator.Struct(&req) after BodyParser
- [x] 2.2 Update: call validator.Struct(&req) after BodyParser
- [x] 2.3 AssignPermissions: call validator.Struct(&req) after BodyParser
- [x] 2.4 UpdateStatus: call validator.Struct(&req) after BodyParser

## 3. Update Bootstrap
- [x] 3.1 Modify the initHandlers function in internal/bootstrap/handlers.go
- [x] 3.2 Pass the validate parameter to NewRoleHandler: `admin.NewRoleHandler(svc.Role, validate)`

## 4. Re-enable the skipped tests
- [x] 4.1 Delete the TODO skip code at tests/integration/role_test.go:51-62
- [x] 4.2 Uncomment the skipped test code

## 5. Verification
- [x] 5.1 Run `go test -v ./tests/integration/role_test.go` and confirm it passes
- [x] 5.2 Run LSP diagnostics on internal/handler/admin/role.go
- [x] 5.3 Confirm there are no compile errors

@@ -0,0 +1,3 @@
schema: spec-driven
created: 2026-01-29

@@ -0,0 +1,48 @@
# Authorization Record Remark Permission Fix — Design

## Goals
1. Agent users cannot edit remarks on authorization records they did not create.
2. Enterprise users cannot edit any authorization record remarks.
3. Platform/super administrators can edit any authorization record remark.
4. Data visibility must always hold (agents may only operate on enterprises within their own shop).

## Current State and Risks
- The remark update currently filters by `id` only, with no "creator/visibility" constraint.
- The existing data-permission callback applies mainly to Query, not Update, so the business path must check permissions explicitly.

## Approach

### 1) Unified permission check in the Service layer
Add a permission check inside `AuthorizationService.UpdateRecordRemark`:
- Read the current user info (user_id/user_type/shop_id/enterprise_id)
- Fetch the authorization record via `GetByIDWithJoin` (including `authorized_by`, `enterprise_id`, etc.)
- Apply the rules:
  - Platform/super administrator: allow
  - Agent:
    - `record.AuthorizedBy == current user_id` is required
    - and the record's enterprise must belong to the current shop (`enterprise.owner_shop_id == shop_id`, via a join query or the existing raw SQL result)
  - Enterprise: reject outright
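The rule table above reduces to one decision function. The platform/agent type codes follow the spec's authorizer_type values (2 = platform, 3 = agent); the enterprise code used here is a hypothetical placeholder, not the project's actual constant:

```go
package main

import "fmt"

const (
	userPlatform   = 2 // platform user (per the spec's authorizer_type)
	userAgent      = 3 // agent account
	userEnterprise = 4 // hypothetical code for enterprise users
)

// canEditRemark encodes the three rules: platform may edit anything, an
// agent may edit only records it created within its own shop's enterprises,
// and an enterprise user (or any unknown type) is always denied.
func canEditRemark(userType int, userID, userShopID, recordAuthorizedBy, enterpriseOwnerShopID int64) bool {
	switch userType {
	case userPlatform:
		return true
	case userAgent:
		return recordAuthorizedBy == userID && enterpriseOwnerShopID == userShopID
	default:
		return false
	}
}

func main() {
	fmt.Println(canEditRemark(userPlatform, 1, 0, 99, 0))    // platform: allowed
	fmt.Println(canEditRemark(userAgent, 7, 10, 7, 10))      // own record, own shop
	fmt.Println(canEditRemark(userAgent, 7, 10, 8, 10))      // someone else's record
	fmt.Println(canEditRemark(userEnterprise, 7, 10, 7, 10)) // enterprise: denied
}
```

Defaulting to deny for unknown user types keeps the check fail-closed if a new account type is added later.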
### 2) Constrained updates in the Store layer (defense in depth)
Provide "constrained" update methods (illustrative semantics):
- Platform path: `UpdateRemarkByID(id, remark)`
- Agent path: `UpdateRemarkByIDAndAuthorizedBy(id, remark, userID)` (add an enterprise-scope constraint if needed)

This ensures that even if an upper layer misses a check, an unauthorized update is unlikely to succeed.

### 3) Errors and responses
- No permission: return a unified error code (e.g. `CodeForbidden`) with a Chinese error message the frontend can display directly.

## Acceptance Criteria
- Platform users can edit any authorization record remark.
- Agent users can only edit remarks on records they created; editing records created by others must fail.
- Enterprise users calling the remark-edit API must fail.
- The related integration tests pass after the new/updated cases are added.
- 新增/更新用例后相关集成测试通过。

View File

@@ -0,0 +1,25 @@
# Authorization Record Remark Permission Fix

## Why
The "edit authorization record remark" path currently lacks a clear permission boundary: agent users may be able to edit remarks on authorization records they did not create, and enterprise users also need to be explicitly barred from editing.

This allows unauthorized edits and corrupts audit information — a high-risk permission defect.

## What Changes
- **Permission rules enforced**
  - Platform/super administrators: can edit any authorization record remark
  - Agents: can only edit remarks on authorization records they created (and only within their visible data scope)
  - Enterprises: forbidden from editing authorization record remarks
- **Strict server-side checks**: unify the permission and visibility checks in the Service layer, and add the necessary constraints to the Store layer's update statement so that updating by `id` alone cannot bypass authorization.
- **Additional tests**: add integration tests covering platform/agent/enterprise users to keep the rules stable.

## Impact
Files involved (expected):
- Handler: `internal/handler/admin/authorization.go`
- Service: `internal/service/enterprise_card/authorization_service.go`
- Store: `internal/store/postgres/enterprise_card_authorization_store.go`
- Tests: `tests/integration/authorization_test.go` (or a new test file)

@@ -0,0 +1,18 @@
# Authorization Record Remark Permission Fix — Implementation Tasks

## 1. Permission Rules
- [ ] 1.1 Add the permission check to `UpdateRecordRemark` in `internal/service/enterprise_card/authorization_service.go`: platform unrestricted, agents only their own records, enterprises forbidden
- [ ] 1.2 Add a constrained update method to `internal/store/postgres/enterprise_card_authorization_store.go` (at minimum an `id + authorized_by` constraint)
- [ ] 1.3 Update `internal/handler/admin/authorization.go`: return a unified error (with a Chinese error message) for permission failures

## 2. Tests
- [ ] 2.1 Platform user integration test: can edit any authorization record remark
- [ ] 2.2 Agent user integration tests: can edit records it created; cannot edit records created by others
- [ ] 2.3 Enterprise user integration test: calling the remark-edit API must fail

## 3. Verification
- [ ] 3.1 Run `go test ./...` and confirm it passes

@@ -0,0 +1,3 @@
schema: spec-driven
created: 2026-01-29

@@ -0,0 +1,39 @@
# Commission Calculation Pipeline Fix — Design

## Goals
1. Write a snapshot of the key fields needed for commission calculation at order creation time.
2. After an order is paid successfully (first successful payment), automatically enqueue the asynchronous commission calculation task.
3. Make commission calculation idempotent (skip orders whose commission is already calculated).

## Key Field Sources (as you confirmed)
Based on the purchase validation result:
- `allocation`: from `PurchaseValidationResult.Allocation`
- `SeriesID`: `allocation.SeriesID`
- `SellerShopID`: `allocation.ShopID` (the selling / revenue-owning shop of that allocation record)
- `SellerCostPrice`: derived from the order amount using the allocation's base commission rules (the same basis as the cost-margin calculation)

Note: SellerCostPrice exists to support the "cost-margin commission" calculation. It is a stable snapshot along the pipeline, so later configuration changes do not affect historical orders.

## Triggering Commission Calculation After Payment
- Trigger point: the order is paid successfully and it is the first successful payment (gated by the "order activation idempotency" proposal).
- Trigger action: enqueue `commission:calculate` with `order_id` as the payload.
- On enqueue failure:
  - Do not roll back the successful payment (avoid impacting the main path)
  - Keep `commission_status = pending` to allow later retries (e.g. a background compensation job, a manual trigger, or a periodic scan)

## Transactional Consistency (optional)
The commission calculation service currently wraps its work in `Transaction`, but if the inner Stores do not use the same `tx`, consistency breaks.

One recommended approach:
- Add `WithDB(tx)` to each Store, or accept a `db *gorm.DB` parameter in the methods, so all writes go through the same transaction `tx`
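A minimal sketch of the `WithDB(tx)` pattern, with a fake `DB` type standing in for `*gorm.DB` so the shape is visible without the real dependency (method names and store types here are assumptions, not the project's actual code):

```go
package main

import "fmt"

// DB is a stand-in for *gorm.DB that just records writes.
type DB struct{ writes []string }

func (d *DB) Exec(stmt string) { d.writes = append(d.writes, stmt) }

// CommissionStore demonstrates the pattern: the store keeps its default
// handle, and WithDB returns a shallow copy bound to the caller's
// transaction, so every write inside the transaction shares one tx.
type CommissionStore struct{ db *DB }

func (s *CommissionStore) WithDB(tx *DB) *CommissionStore { return &CommissionStore{db: tx} }
func (s *CommissionStore) CreateRecord()                  { s.db.Exec("insert commission record") }

type WalletStore struct{ db *DB }

func (s *WalletStore) WithDB(tx *DB) *WalletStore { return &WalletStore{db: tx} }
func (s *WalletStore) Credit()                    { s.db.Exec("credit wallet") }

func main() {
	base := &DB{}
	commissions := &CommissionStore{db: base}
	wallets := &WalletStore{db: base}

	// Inside a transaction, rebind both stores to the same tx handle so
	// "commission record + wallet credit" commit or roll back together.
	tx := &DB{}
	commissions.WithDB(tx).CreateRecord()
	wallets.WithDB(tx).Credit()

	fmt.Println(len(base.writes), len(tx.writes)) // 0 2
}
```

Returning a copy rather than mutating the receiver keeps the base store safe for concurrent use outside the transaction.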
## Acceptance Criteria
- After order creation, fields such as `series_id/seller_shop_id/seller_cost_price` are correctly populated on the order.
- After the first successful payment, the commission calculation task is enqueued (verifiable via logs/tests).
- Re-running the commission task does not pay out twice (already-calculated orders are skipped).
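The idempotency criterion above can be sketched as a guard on the order's commission status; the status strings and callback are illustrative, not the project's actual types:

```go
package main

import "fmt"

// Order carries only the commission-related fields from this sketch.
type Order struct {
	ID               int64
	CommissionStatus string // "pending" or "calculated"
}

// calculateCommission is idempotent: if the order's commission has already
// been calculated it skips the payout entirely, so a retried or duplicated
// queue task cannot pay out twice. It returns whether work was done.
func calculateCommission(o *Order, payout func(orderID int64)) bool {
	if o.CommissionStatus == "calculated" {
		return false // already done: skip
	}
	payout(o.ID)
	o.CommissionStatus = "calculated"
	return true
}

func main() {
	payouts := 0
	o := &Order{ID: 1, CommissionStatus: "pending"}
	calculateCommission(o, func(int64) { payouts++ })
	calculateCommission(o, func(int64) { payouts++ }) // retried task: no-op
	fmt.Println(payouts) // 1
}
```

In the real service the status check and the payout must sit in the same transaction (see the `WithDB(tx)` note above) so two concurrent workers cannot both pass the guard.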

@@ -0,0 +1,25 @@
# Commission Calculation Pipeline Fix: Auto-Enqueue After Payment + Order Commission Field Snapshot

## Why
As you confirmed, the goal is: **automatically trigger commission calculation (as an asynchronous task) after an order is paid successfully**, with the key fields for the calculation sourced from the purchase validation result.

The current implementation carries these risks:
- Commission calculation depends on order fields (such as `series_id/seller_shop_id/seller_cost_price`) that are not populated at order creation, which can cause incorrect calculations or nil-pointer risks.
- The commission calculation task is defined but has no stable trigger, so commissions go uncalculated after payment or require manual compensation.

## What Changes
- **Write commission snapshot fields at order creation**: populate the order's `SeriesID/SellerShopID/SellerCostPrice` and related fields from the purchase validation result (allocation/series) so later calculations are stable.
- **Auto-enqueue the commission task after payment**: on the "first successful payment" transition from pending to paid, enqueue the `commission:calculate` asynchronous task to run the calculation.
- **Calculation transactional consistency (optional but recommended)**: adjust how the commission calculation service uses transactions so that "commission record + wallet credit + order commission status update" stay consistent.
- **Additional tests**: add/extend tests to prevent regressions.

## Impact
Modules involved (expected):
- Order creation: `internal/service/order/service.go`
- Async tasks: `internal/task/commission_calculation.go` (trigger entry) and queue wiring
- Commission calculation: `internal/service/commission_calculation/service.go`
- Tests: `internal/service/order/service_test.go`, `internal/task/*`, or integration tests
Some files were not shown because too many files have changed in this diff.