● Workflow & Ops
github-issue-gen
Custom GitHub Issue Generator Skill
Purpose
Generate comprehensive, enterprise-grade GitHub issues organized into Epics with full project planning: iterations, priorities, sizes, hour estimates, day scheduling, and GitHub Projects integration. This skill performs a holistic analysis of your entire project, generates all necessary documentation (README.md, CLAUDE.md, flow diagrams), and provides a 3-subagent execution pattern for implementation.
When to Use
- Planning a new project or major feature from scratch
- Breaking down complex multi-week projects into actionable sprints
- Creating developer-ready work items with precise scheduling
- Setting up GitHub Projects with proper field mappings
- Generating issues that can be immediately executed by any developer
Phase 0.5: Stack Detection & Library Preferences
MANDATORY: Before generating ANY issues, detect stack and confirm preferences.
Step 1 — Auto-detect from project files:
| Signal | Files to check |
|---|---|
| Backend framework | requirements.txt, pyproject.toml, package.json, go.mod |
| Frontend framework | package.json → next, react, vue, expo |
| State management | package.json → zustand, redux, jotai |
| Data fetching | package.json → @tanstack/react-query, swr, axios |
| Routing (RN) | package.json → expo-router, @react-navigation |
| Validation | package.json → zod, yup, class-validator |
| Testing | pyproject.toml → pytest, package.json → jest, vitest |
| ORM | pyproject.toml → django, sqlalchemy, prisma/schema.prisma |
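The Step 1 detection can be sketched as a small POSIX-shell helper; the manifest paths and package lists below are illustrative, not exhaustive:

```shell
# Sketch of Step 1: grep dependency manifests for known package names.
# The manifests and package lists here are examples, not the full signal table.

detect() {  # usage: detect <label> <manifest-file> <extended-regex>
  if [ -f "$2" ] && grep -Eq "$3" "$2"; then
    printf '%s: %s\n' "$1" "$(grep -Eo "$3" "$2" | head -n 1)"
  fi
}

scan_project() {  # usage: scan_project <project-root>
  detect "Data fetching" "$1/package.json"   '@tanstack/react-query|swr'
  detect "State mgmt"    "$1/package.json"   'zustand|redux|jotai'
  detect "Validation"    "$1/package.json"   'zod|yup|class-validator'
  detect "Backend"       "$1/pyproject.toml" 'django|fastapi|sqlalchemy'
}
```

The output feeds directly into the Step 2 confirmation prompt; anything not detected simply produces no line.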
Step 2 — Confirm via AskUserQuestion (load with ToolSearch → select:AskUserQuestion):
AskUserQuestion({
questions: [
{
question: 'Detected stack — does this look right?',
header: 'Stack Confirm',
multiSelect: false,
options: [
{
label: 'Yes — use detected stack',
description: 'Proceed with auto-detected packages and patterns',
},
{
label: 'No — let me specify',
description: "I'll tell you which framework / packages to use",
},
],
},
{
question: 'Which data fetching pattern should code templates use?',
header: 'Data Fetching',
multiSelect: false,
options: [
{
label: 'TanStack Query (Recommended)',
description: 'useQuery / useMutation — no raw useEffect for async state',
},
{ label: 'SWR', description: 'useSWR for data fetching' },
{ label: 'useEffect + fetch/axios', description: 'Classic pattern — include in templates' },
],
},
{
question: 'React Native routing approach?',
header: 'RN Routing',
multiSelect: false,
options: [
{
label: 'Expo Router (Recommended)',
description: 'File-based routing — generate with app/ directory structure',
},
{
label: 'React Navigation',
description: 'Stack/Tab navigators — generate with traditional nav patterns',
},
{ label: 'Not a React Native project', description: 'Skip mobile routing templates' },
],
},
{
question: 'Schema validation library for generated code?',
header: 'Validation',
multiSelect: false,
options: [
{
label: 'Zod (Recommended)',
description: 'Runtime schema validation — z.object(), z.string() etc.',
},
{ label: 'Yup', description: 'Formik-style schema validation' },
{ label: 'None / manual', description: 'Use TypeScript types only, no runtime validation' },
],
},
],
})
Step 3 — Store confirmed preferences as session constants:
STACK_BACKEND= [django|express|nestjs|fastapi|go|laravel]
STACK_FRONTEND= [nextjs|react|vue|expo]
STACK_DATA_FETCHING= [tanstack-query|swr|useeffect]
STACK_RN_ROUTING= [expo-router|react-navigation|none]
STACK_VALIDATION= [zod|yup|none]
STACK_TESTING= [pytest|jest|vitest|detected_from_config]
All code templates in sub-issues MUST use these confirmed preferences — no mixing patterns.
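One way to enforce the "no mixing patterns" rule is a tiny guard that rejects any preference value outside the allowed set before templates are generated. A minimal sketch (the helper is illustrative, not part of any required tooling):

```shell
# Sketch: reject a preference value that is not in the allowed set.
# Allowed values are passed as a pipe-separated list matching the
# bracketed options in the constants above.

valid_choice() {  # usage: valid_choice <value> <allowed|values>
  echo "$1" | grep -Eqx "$2"
}
```

Example: `valid_choice "$STACK_DATA_FETCHING" 'tanstack-query|swr|useeffect' || exit 1`.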
Phase 0.6: Codebase Drift Prevention
MANDATORY: Before generating issues for EACH epic, scan actual codebase.
The single biggest issue with generated GitHub issues is stale field names — issues referencing field names, model attributes, or API shapes that don't match the actual codebase.
Before generating each epic's tickets:
1. SCAN RELEVANT FILES for that epic's domain
├── Backend: models.py, serializers.py, views.py, urls.py
├── Frontend: types/, interfaces/, api/, hooks/
├── Existing migrations: apps/{name}/migrations/
└── Tests: apps/{name}/tests/
2. EXTRACT ACTUAL NAMES
├── Model field names (exact spelling, snake_case)
├── Serializer field names (may differ from model)
├── Endpoint paths (exact URL patterns)
├── Function/class names being referenced
└── Import paths
3. USE EXACT NAMES in all code templates
└── Never invent field names — only use what exists in the codebase
(or explicitly mark new fields as NEW: in comments)
If codebase doesn't exist yet (new project):
- Use the conventions detected in Phase 0.5
- Mark all new fields with a `# NEW` comment in code templates
- The first epic's issues always establish the base models; all later epics reference those field names
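For a Django codebase, the "EXTRACT ACTUAL NAMES" step can be sketched as a one-liner that lists the exact field names defined in a `models.py`, so issue templates never invent them. The regex assumes the common `field_name = models.XField(...)` layout:

```shell
# Sketch of step 2 ("EXTRACT ACTUAL NAMES") for a Django app: print the
# model field names exactly as spelled in models.py, one per line.
# Assumes the conventional `field_name = models.XField(...)` form.

model_fields() {  # usage: model_fields <path/to/models.py>
  sed -nE 's/^[[:space:]]+([a-z_]+)[[:space:]]*=[[:space:]]*models\..*/\1/p' "$1"
}
```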
Phase 1: Holistic Project Discovery
MANDATORY: Full Project Scan
Before generating ANY issues, this skill will:
1. SCAN PROJECT ROOT
├── Read CLAUDE.md / README.md / PROJECT.md
├── Find all docs/ directories
├── Locate all .md files (plans, specs, guides)
├── Find diagram files (ASCII, Mermaid, PlantUML)
├── Scan for HTML mockups/wireframes
└── Check for existing issue templates
2. ANALYZE ARCHITECTURE
├── Identify tech stack (Django, React, etc.)
├── Map existing directory structure
├── Find integration points (APIs, webhooks)
├── Identify data models
└── Note existing patterns/conventions
3. EXTRACT REQUIREMENTS FROM:
├── Flow diagrams → User journeys
├── API specs → Endpoints needed
├── Data models → Database design
├── Integration docs → External services
└── Business logic docs → Rules/validation
Discovery Commands (Run First)
# Find all documentation
find . -name "*.md" -type f | head -50
# Find flow diagrams
find . -path "*/docs/*" -name "*.md" | head -30
find . -path "*/flows/*" -name "*.md" | head -30
# Find HTML mockups
find . -name "*.html" -type f | head -20
# Find existing structure
ls -la
ls -la docs/ 2>/dev/null || echo "No docs folder"
# Check for project config
cat CLAUDE.md 2>/dev/null || cat README.md 2>/dev/null
Phase 2: GitHub Project Configuration
Project Field Mappings Template
# REQUIRED: Ask user for these or discover from existing project
PROJECT_URL="https://github.com/orgs/{ORG}/projects/{NUM}"
PROJECT_ID="PVT_xxx"
# Fields (get via: gh project field-list {NUM} --owner {ORG})
STATUS_FIELD="PVTSSF_xxx"
PRIORITY_FIELD="PVTSSF_xxx"
SIZE_FIELD="PVTSSF_xxx"
ESTIMATE_FIELD="PVTF_xxx"
ITERATION_FIELD="PVTIF_xxx"
# Status Options
BACKLOG="xxx"
READY="xxx"
IN_PROGRESS="xxx"
IN_REVIEW="xxx"
DONE="xxx"
# Priority Options
P0="xxx" # Critical - blocks other work
P1="xxx" # High - core functionality
P2="xxx" # Medium - important but not blocking
# Size Options (Story Points / Complexity)
SIZE_XS="xxx" # 1h or less, trivial change
SIZE_S="xxx" # 2h, simple task
SIZE_M="xxx" # 3h, moderate complexity
SIZE_L="xxx" # 4-6h, significant work
SIZE_XL="xxx" # 8h+, major effort
# Iterations (discover or create)
ITER1="xxx" # Week 1-2: Phase name
ITER2="xxx" # Week 3-4: Phase name
ITER3="xxx" # Week 5-6: Phase name
ITER4="xxx" # Week 7: Phase name
ITER5="xxx" # Week 8+: Buffer
Discover Existing Project Fields
# List project fields
gh project field-list {PROJECT_NUM} --owner {ORG} --format json
# List iterations
gh api graphql -f query='
query {
organization(login: "{ORG}") {
projectV2(number: {NUM}) {
field(name: "Iteration") {
... on ProjectV2IterationField {
configuration {
iterations { id title startDate duration }
}
}
}
}
}
}
'
Phase 3: Epic & Issue Structure
Epic Template
# ⭐⭐⭐ Epic {N}: {Epic Title}
## Epic Overview
{2-3 sentence description of what this epic accomplishes}
**Priority:** P0/P1/P2 ({reason})
**Estimated Hours:** {X} hours
**Iteration:** {N} (Week {X-Y}: {Phase Name})
**Spec:** See {link to flow diagram or spec doc}
## Tickets in this Epic
| # | Title | Priority | Size | Hours | Day |
| -------- | ------------ | -------- | ---- | ----- | ----- |
| {epic}.1 | {Task title} | P0 | S | 2h | Day 1 |
| {epic}.2 | {Task title} | P0 | M | 3h | Day 1 |
| ... | ... | ... | ... | ... | ... |
## Detailed Ticket Breakdown
### {epic}.1 {Task title} ({hours}h)
- Bullet point requirement 1
- Bullet point requirement 2
- Reference: `path/to/file.py:line`
### {epic}.2 {Task title} ({hours}h)
- Bullet point requirement 1
- ...
## Done When
- [ ] Verification criteria 1
- [ ] Verification criteria 2
- [ ] All tests passing
Sub-Issue Structure (Parent → Child)
GitHub supports sub-issues to link child tickets to parent epics.
Epic (Parent Issue)
├── Sub-Issue 1.1
├── Sub-Issue 1.2
├── Sub-Issue 1.3
└── Sub-Issue 1.4
Install gh-sub-issue Extension
Use the gh-sub-issue CLI extension for easy sub-issue management:
# Install the extension (one-time)
gh extension install yahsan2/gh-sub-issue
# Update when needed
gh extension upgrade sub-issue
Creating Sub-Issues
Method 1: Create & Link in One Command
# Create a new sub-issue directly linked to parent epic
gh sub-issue create \
--parent 2 \
--title "[Setup] 1.1 Initialize Django project" \
--body "$(cat docs/github-issues/001-1-init.md)" \
--label "enhancement,backend" \
--assignee "kashali26" \
-R owner/repo
Method 2: Link Existing Issue to Parent
# Link existing issue #15 as sub-issue of epic #2
gh sub-issue add 2 15 -R owner/repo
# Can also use URLs
gh sub-issue add https://github.com/owner/repo/issues/2 15
Method 3: Batch Create All Sub-Issues
#!/bin/bash
# create-sub-issues.sh
REPO="owner/repo"
EPIC_NUM=$1 # Pass epic number as argument
# Create sub-issues and link to parent
gh sub-issue create -p $EPIC_NUM -t "[Setup] 1.1 Initialize Django project" \
-b "$(cat docs/github-issues/001-1-init.md)" -l "enhancement,backend" -R $REPO
gh sub-issue create -p $EPIC_NUM -t "[Setup] 1.2 Configure settings" \
-b "$(cat docs/github-issues/001-2-settings.md)" -l "enhancement,backend" -R $REPO
gh sub-issue create -p $EPIC_NUM -t "[Setup] 1.3 Set up Docker" \
-b "$(cat docs/github-issues/001-3-docker.md)" -l "enhancement,backend" -R $REPO
echo "Created all sub-issues for Epic #$EPIC_NUM"
Managing Sub-Issues
List sub-issues of a parent:
# List all open sub-issues
gh sub-issue list 2 -R owner/repo
# List all (including closed)
gh sub-issue list 2 --state all -R owner/repo
# Output as JSON
gh sub-issue list 2 --json number,title,state -R owner/repo
Remove sub-issue link (doesn't delete issue):
gh sub-issue remove 2 15 -R owner/repo
# Force without confirmation
gh sub-issue remove 2 15 --force -R owner/repo
Full Epic Creation Script
#!/bin/bash
# create-epic-with-subs.sh
REPO="owner/repo"
EPIC_TITLE="⭐⭐⭐ Epic 1: Project Setup"
EPIC_BODY="docs/github-issues/001-epic-1.md"
# 1. Create the Epic (parent issue); gh prints the new issue URL, so capture
#    the issue number from it instead of racing `gh issue list`
EPIC_URL=$(gh issue create \
  --repo "$REPO" \
  --title "$EPIC_TITLE" \
  --body-file "$EPIC_BODY" \
  --label "epic")
EPIC_NUM="${EPIC_URL##*/}"
echo "Created Epic #$EPIC_NUM"
# 2. Create all sub-issues linked to epic
gh sub-issue create -p $EPIC_NUM -R $REPO \
-t "[Setup] 1.1 Initialize Django 5.0 project" \
-b "$(cat docs/github-issues/001-1-init.md)" \
-l "enhancement,backend,epic-1"
gh sub-issue create -p $EPIC_NUM -R $REPO \
-t "[Setup] 1.2 Configure Django settings" \
-b "$(cat docs/github-issues/001-2-settings.md)" \
-l "enhancement,backend,epic-1"
gh sub-issue create -p $EPIC_NUM -R $REPO \
-t "[Setup] 1.3 Set up PostgreSQL + Redis Docker" \
-b "$(cat docs/github-issues/001-3-docker.md)" \
-l "enhancement,backend,epic-1"
# 3. List all created sub-issues
echo "Sub-issues for Epic #$EPIC_NUM:"
gh sub-issue list $EPIC_NUM -R $REPO
Individual Issue Template
## Overview
{2-3 sentences: What this accomplishes and why it's needed}
## Requirements
- [ ] Requirement 1
- [ ] Requirement 2
- [ ] Requirement 3
## Key Files/Locations
- `path/to/modify.py` - {what to change}
- `path/to/reference.py` - {pattern to follow}
## Implementation Notes
- {Critical hint 1 - keep brief}
- {Critical hint 2 - reference existing patterns}
## Acceptance Criteria
- [ ] {Testable outcome 1}
- [ ] {Testable outcome 2}
- [ ] Tests pass: `make test`
## Metadata
- **Priority:** P{0/1/2}
- **Size:** {XS/S/M/L/XL}
- **Estimate:** {N}h
- **Iteration:** {N}
- **Day:** Day {N}
- **Depends on:** #{issue_number} (if any)
- **Blocks:** #{issue_number} (if any)
Phase 4: Sizing & Estimation Guidelines
Size Definitions
| Size | Hours | Complexity | Example |
|---|---|---|---|
| XS | 1h | Trivial, single file, config change | Create Makefile, add label |
| S | 2h | Simple, 1-2 files, clear pattern | Create model, add endpoint |
| M | 3h | Moderate, 3-5 files, some logic | Implement feature with validation |
| L | 4-6h | Significant, multiple components | Integration with external API |
| XL | 8h+ | Major, architecture decisions | New app setup, complex flow |
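The size table maps directly to a lookup on the hour estimate. A minimal sketch of that mapping (boundary values taken from the table above):

```shell
# The size table above as a lookup: given a whole-hour estimate, return the
# t-shirt size a generated ticket should carry.

hours_to_size() {  # usage: hours_to_size <whole-hours>
  if   [ "$1" -le 1 ]; then echo XS
  elif [ "$1" -le 2 ]; then echo S
  elif [ "$1" -le 3 ]; then echo M
  elif [ "$1" -le 6 ]; then echo L
  else                      echo XL
  fi
}
```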
Priority Definitions
| Priority | Meaning | Criteria |
|---|---|---|
| P0 | Critical | Blocks other work, core functionality |
| P1 | High | Important feature, needed soon |
| P2 | Medium | Nice to have, can be deferred |
Day Scheduling Logic
Day 1-2: Project setup, scaffolding (8h/day)
Day 3-5: Core models, data layer
Day 6-8: API endpoints, business logic
Day 9-10: Admin/dashboard features
Day 11-15: External integrations
Day 16-17: Secondary integrations
Day 18-19: Notifications, background tasks
Day 20-21: Testing, documentation
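The day assignments above follow from a simple greedy rule: pack ticket estimates into 8-hour days in order, never splitting a ticket across days. A sketch of that rule:

```shell
# Greedy day packing: assign each ticket (by hour estimate, in order) to the
# current day until adding it would exceed 8h, then start the next day.

schedule_days() {  # usage: schedule_days <hours> [<hours> ...]
  day=1 used=0
  for h in "$@"; do
    if [ $((used + h)) -gt 8 ]; then day=$((day + 1)); used=0; fi
    used=$((used + h))
    echo "Day $day"
  done
}
```

Example: `schedule_days 2 3 4 2` assigns the first two tickets to Day 1 (5h) and the last two to Day 2 (6h).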
Phase 5: Flow Diagram Generation
ASCII Flow Diagram Template
For EVERY major flow and API endpoint, generate ASCII diagrams:
# {FLOW NAME} FLOW
==================
OVERVIEW
{Brief description of this flow}
{ASCII DIAGRAM}
+------------------+
| Entry Point |----+
+------------------+ |
| +---------------------+
+------------------+ +---->| Step 1 |
| Alt Entry |----+ | Description |
+------------------+ +----------+----------+
|
v
+---------------------+
| Step 2 |
| Description |
+----------+----------+
|
+--------------------+--------------------+
| |
v v
+------------------+ +------------------+
| Path A | | Path B |
+--------+---------+ +--------+---------+
API ENDPOINTS USED
+-----------------+------------------------------------------+
| Endpoint | Purpose |
+-----------------+------------------------------------------+
| POST /api/xxx | Description |
| GET /api/yyy | Description |
+-----------------+------------------------------------------+
EXTERNAL INTEGRATIONS
+-----------------+------------------------------------------+
| Service | Purpose |
+-----------------+------------------------------------------+
| Service Name | What it does |
+-----------------+------------------------------------------+
Phase 6: README.md Generation
MANDATORY: Generate README.md for every project
# {Project Name} - {Component}
{One-line description of this component}
## Prerequisites
- Python 3.11+ / Node 20+ / etc.
- Docker & Docker Compose
- Git
## Quick Start
```bash
# 1. Clone and navigate
cd {directory}
# 2. Run complete setup
make setup
# 3. Activate environment (if Python)
source .venv/bin/activate
# 4. Start development server
make dev
```
The API will be available at http://localhost:{PORT}
Makefile Commands
Setup & Development
| Command | Description |
|---|---|
| make setup | Complete first-time setup |
| make dev | Start development server |
| make shell | Open interactive shell |
Docker Services
| Command | Description |
|---|---|
| make up | Start containers (DB, Redis, etc.) |
| make down | Stop containers |
| make logs | View container logs |
Testing & Quality
| Command | Description |
|---|---|
| make test | Run test suite |
| make test-cov | Run tests with coverage report |
| make lint | Run linter (ruff) |
| make format | Auto-format code (ruff + isort) |
| make type-check | Run type checking (mypy) |
Authentication
The API uses JWT (JSON Web Tokens) for authentication.
Get Access Token
curl -X POST http://localhost:{PORT}/api/v1/auth/login/ \
-H "Content-Type: application/json" \
-d '{"email": "user@example.com", "password": "yourpassword"}'
Response:
{
"access": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9...",
"refresh": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9..."
}
Use Access Token
curl http://localhost:{PORT}/api/v1/auth/me/ \
-H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9..."
Refresh Token
curl -X POST http://localhost:{PORT}/api/v1/auth/refresh/ \
-H "Content-Type: application/json" \
-d '{"refresh": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9..."}'
Environment Variables
Copy .env.example to .env and configure:
cp .env.example .env
Required for Development
DJANGO_SECRET_KEY= # Auto-generated in local.py
POSTGRES_DB={db_name}
POSTGRES_USER={db_user}
POSTGRES_PASSWORD={db_pass}
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
REDIS_URL=redis://localhost:6379/0
Required for Production
DJANGO_SECRET_KEY= # Generate secure key
ALLOWED_HOSTS=api.{domain}.com
ENCRYPTION_KEY= # Fernet key for sensitive data
SENTRY_DSN= # Error tracking
AWS_ACCESS_KEY_ID= # S3 access
AWS_SECRET_ACCESS_KEY= # S3 secret
AWS_STORAGE_BUCKET_NAME= # S3 bucket
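A generated project can fail fast at startup when required variables are missing. A minimal sketch (the variable names in the example mirror the development list above):

```shell
# Sketch: report every unset variable from a required list and fail if any
# are missing. Intended to run before the dev server or migrations start.

require_env() {  # usage: require_env VAR [VAR ...]
  rc=0
  for v in "$@"; do
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then echo "missing: $v" >&2; rc=1; fi
  done
  return $rc
}
```

Example: `require_env POSTGRES_DB POSTGRES_USER POSTGRES_PASSWORD || exit 1`.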
Technology Stack
| Package | Version | Purpose |
|---|---|---|
| Django | 5.0.x | Web framework |
| djangorestframework | 3.15.x | REST API |
| djangorestframework-simplejwt | 5.3.x | JWT authentication |
| drf-spectacular | 0.27.x | OpenAPI documentation |
| celery | 5.4.x | Async task queue |
| redis | 5.0.x | Cache & message broker |
| psycopg | 3.1.x | PostgreSQL driver |
| cryptography | 42.x | Encryption (Fernet) |
| httpx | 0.27.x | Async HTTP client |
| boto3 | 1.34.x | AWS S3 integration |
| ruff | 0.1.x | Linting & formatting |
Troubleshooting
Docker containers not starting
docker-compose down -v
docker-compose up -d
Database connection refused
Wait for PostgreSQL to be ready:
docker-compose logs db
# Should see "database system is ready to accept connections"
Migrations failing
Reset database:
make reset-db
Import errors
Ensure virtual environment is activated:
source .venv/bin/activate
Pre-commit hooks failing
Run format and lint manually:
make format
make lint
API Documentation
- Swagger UI: http://localhost:{PORT}/api/docs/
- ReDoc: http://localhost:{PORT}/api/redoc/
- OpenAPI Schema: http://localhost:{PORT}/api/schema/
License
Proprietary - {Organization}
---
## Phase 7: CLAUDE.md Generation
### MANDATORY: Generate CLAUDE.md for every project
```markdown
# CLAUDE.md - {Project Name}
This file provides context for Claude Code (AI assistant) when working on this codebase.
## Project Overview
{2-3 sentence description of what this project does and its purpose}
## Tech Stack
- **Backend:** Django 5.0 / FastAPI / Node.js
- **Database:** PostgreSQL 15+
- **Cache:** Redis
- **Task Queue:** Celery
- **API Docs:** drf-spectacular (OpenAPI 3.0)
- **Authentication:** JWT (django-simplejwt)
## Directory Structure
{project}/
├── {app_name}/
│   ├── settings/          # Environment-specific settings
│   │   ├── base.py        # Shared settings
│   │   ├── local.py       # Development (DEBUG=True)
│   │   └── production.py  # Production (DEBUG=False)
│   ├── apps/
│   │   ├── users/         # Custom User model, JWT auth
│   │   ├── {module1}/     # {Description}
│   │   ├── {module2}/     # {Description}
│   │   └── integrations/  # Third-party service clients
│   ├── urls.py            # Root URL routing
│   └── celery.py          # Celery configuration
├── docs/
│   └── flows/             # ASCII flow diagrams
├── requirements/
│   ├── base.txt           # Core dependencies
│   ├── local.txt          # Dev dependencies (pytest, etc.)
│   └── production.txt     # Prod dependencies (gunicorn, etc.)
└── Makefile               # Development commands
## Common Commands
```bash
make setup # First-time setup (venv, deps, db, migrations)
make dev # Start development server
make test # Run pytest
make lint # Run ruff linter
make format # Auto-format with ruff + isort
make migrate # Apply migrations
make makemigrations # Create new migrations
make shell # Django shell
make up # Start Docker services
make down # Stop Docker services
Code Conventions
Python Style
- Formatter: ruff (replaces black, isort, flake8)
- Line length: 100 characters
- Imports: Sorted by isort (stdlib, third-party, local)
- Docstrings: Google style, only for complex functions
- Type hints: Required for function signatures
Django Patterns
- Models: Inherit from `TimeStampedModel` for `created_at`/`updated_at`
- Serializers: Use `serializers.ModelSerializer` with explicit `fields`
- Views: Use `APIView` or `ViewSet`, not function-based views
- URLs: Nested under `/api/v1/` with versioning
- Permissions: Custom classes in `core/permissions.py`
API Design
- Auth: JWT via `Authorization: Bearer <token>`
- Pagination: `PageNumberPagination` with `page_size=20`
- Errors: Use DRF exception handler, return structured errors
- Versioning: URL-based (`/api/v1/`, `/api/v2/`)
Key Files to Know
Settings
- `{app}/settings/base.py` - Shared config (installed apps, middleware)
- `{app}/settings/local.py` - Dev overrides (DEBUG=True, local DB)
- `{app}/settings/production.py` - Prod config (security, AWS)
Authentication
- `apps/users/models.py` - Custom User model (AbstractUser)
- `apps/users/serializers.py` - Login, Register, User serializers
- `apps/users/views.py` - JWT endpoints
Core Utilities
- `core/permissions.py` - RBAC permission classes
- `core/pagination.py` - Custom pagination classes
- `core/exceptions.py` - Custom exception handlers
Database Models
Key Models
| Model | App | Purpose |
|---|---|---|
| User | users | Custom user with role field |
| {Model1} | {app} | {purpose} |
| {Model2} | {app} | {purpose} |
Model Relationships
User (1) ──── (N) {Model1}
{Model1} (1) ──── (N) {Model2}
External Integrations
| Service | Location | Purpose |
|---|---|---|
| {Service1} | apps/integrations/{service1}/ | {purpose} |
| {Service2} | apps/integrations/{service2}/ | {purpose} |
Webhook Endpoints
- `POST /webhooks/{service1}/` - Handles {events}
- `POST /webhooks/{service2}/` - Handles {events}
Testing
Run Tests
make test # Run all tests
make test-cov # Run with coverage
pytest apps/{app}/tests/ # Run specific app tests
pytest -k "test_name" # Run specific test
Test Structure
apps/{app}/
└── tests/
├── __init__.py
├── conftest.py # Fixtures
├── test_models.py # Model tests
├── test_serializers.py # Serializer tests
└── test_views.py # API tests
Fixtures
- `@pytest.fixture` in `conftest.py`
- Use `baker` for model factories
- Use `APIClient` for API tests
Environment Variables
Required (all environments)
DJANGO_SECRET_KEY=
POSTGRES_DB=
POSTGRES_USER=
POSTGRES_PASSWORD=
POSTGRES_HOST=
POSTGRES_PORT=
Required (production only)
ALLOWED_HOSTS=
SENTRY_DSN=
ENCRYPTION_KEY=
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
Security Notes
Sensitive Data
- SSNs encrypted with Fernet (see `apps/{app}/encryption.py`)
- Bank details encrypted before storage
- Never log sensitive fields
Authentication
- Access tokens expire in 15 minutes
- Refresh tokens expire in 7 days
- Use `IsAuthenticated` permission by default
Common Tasks
Add a New Model
- Create model in `apps/{app}/models.py`
- Create serializer in `apps/{app}/serializers.py`
- Create views in `apps/{app}/views.py`
- Add URLs in `apps/{app}/urls.py`
- Run `make makemigrations && make migrate`
- Add to admin in `apps/{app}/admin.py`
Add a New Integration
- Create folder `apps/integrations/{service}/`
- Create `client.py` with API client class
- Create `webhooks.py` if needed
- Add URL route in `urls.py`
- Add environment variables
Add a New Endpoint
- Add serializer (if new data shape)
- Add view class/method
- Add URL pattern
- Add permission class if needed
- Document in flow diagram
Don'ts
- Don't commit `.env` files
- Don't use `print()` - use `logging`
- Don't skip migrations
- Don't hardcode secrets
- Don't disable security middleware in production
---
## Phase 8: Production-Ready Configuration
### MANDATORY: Include These in Every Project
### 1. pyproject.toml (Python Projects)
```toml
[project]
name = "{project_name}"
version = "0.1.0"
requires-python = ">=3.11"
[tool.ruff]
target-version = "py311"
line-length = 100
select = [
"E", # pycodestyle errors
"W", # pycodestyle warnings
"F", # Pyflakes
"I", # isort
"B", # flake8-bugbear
"C4", # flake8-comprehensions
"UP", # pyupgrade
"S", # flake8-bandit (security)
]
ignore = [
"E501", # line too long (handled by formatter)
"S101", # use of assert (needed for tests)
]
[tool.ruff.isort]
known-first-party = ["{app_name}"]
section-order = ["future", "standard-library", "third-party", "first-party", "local-folder"]
[tool.ruff.per-file-ignores]
"**/tests/**" = ["S101"]
"**/migrations/**" = ["E501"]
[tool.pytest.ini_options]
DJANGO_SETTINGS_MODULE = "{app_name}.settings.local"
python_files = ["test_*.py"]
addopts = "-v --tb=short"
[tool.coverage.run]
source = ["{app_name}"]
omit = ["*/migrations/*", "*/tests/*"]
[tool.coverage.report]
exclude_lines = [
"pragma: no cover",
"def __repr__",
"raise NotImplementedError",
]
2. .pre-commit-config.yaml
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.5.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- id: check-yaml
- id: check-added-large-files
- id: check-merge-conflict
- id: detect-private-key
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.1.9
hooks:
- id: ruff
args: [--fix, --exit-non-zero-on-fix]
- id: ruff-format
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v1.8.0
hooks:
- id: mypy
additional_dependencies: [django-stubs, djangorestframework-stubs]
args: [--ignore-missing-imports]
3. Makefile
.PHONY: help setup dev test lint format migrate shell up down logs
# Colors
GREEN := \033[0;32m
NC := \033[0m
help:
@echo "$(GREEN)Available commands:$(NC)"
@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf " $(GREEN)%-15s$(NC) %s\n", $$1, $$2}'
# =============================================================================
# SETUP
# =============================================================================
setup: ## Complete first-time setup
@echo "$(GREEN)Creating virtual environment...$(NC)"
python -m venv .venv
@echo "$(GREEN)Installing dependencies...$(NC)"
.venv/bin/pip install -r requirements/local.txt
@echo "$(GREEN)Installing pre-commit hooks...$(NC)"
.venv/bin/pre-commit install
@echo "$(GREEN)Starting Docker services...$(NC)"
docker-compose up -d
@echo "$(GREEN)Waiting for database...$(NC)"
sleep 3
@echo "$(GREEN)Running migrations...$(NC)"
.venv/bin/python manage.py migrate
@echo "$(GREEN)Setup complete! Run 'source .venv/bin/activate' then 'make dev'$(NC)"
# =============================================================================
# DEVELOPMENT
# =============================================================================
dev: ## Start development server
python manage.py runserver
shell: ## Open Django shell
python manage.py shell_plus
dbshell: ## Open database shell
python manage.py dbshell
# =============================================================================
# DOCKER
# =============================================================================
up: ## Start Docker services (PostgreSQL, Redis)
docker-compose up -d
down: ## Stop Docker services
docker-compose down
logs: ## View Docker logs
docker-compose logs -f
# =============================================================================
# DATABASE
# =============================================================================
migrate: ## Apply database migrations
python manage.py migrate
makemigrations: ## Create new migrations
python manage.py makemigrations
superuser: ## Create superuser
python manage.py createsuperuser
reset-db: ## Reset database (WARNING: deletes all data)
docker-compose down -v
docker-compose up -d
sleep 3
python manage.py migrate
@echo "$(GREEN)Database reset complete$(NC)"
# =============================================================================
# TESTING & QUALITY
# =============================================================================
test: ## Run pytest
pytest
test-cov: ## Run tests with coverage
pytest --cov={app_name} --cov-report=html --cov-report=term-missing
lint: ## Run ruff linter
ruff check .
format: ## Auto-format code with ruff
ruff check --fix .
ruff format .
type-check: ## Run mypy type checking
mypy {app_name}
pre-commit: ## Run all pre-commit hooks
pre-commit run --all-files
# =============================================================================
# DOCUMENTATION
# =============================================================================
docs: ## Generate OpenAPI schema
python manage.py spectacular --file schema.yml
@echo "$(GREEN)Schema generated at schema.yml$(NC)"
4. requirements/base.txt
# Django
Django>=5.0,<5.1
djangorestframework>=3.15,<4.0
djangorestframework-simplejwt>=5.3,<6.0
django-cors-headers>=4.3,<5.0
django-filter>=24.1,<25.0
# API Documentation
drf-spectacular>=0.27,<1.0
# Database
psycopg[binary]>=3.1,<4.0
# Task Queue
celery>=5.4,<6.0
redis>=5.0,<6.0
# Security
cryptography>=42.0,<43.0
# HTTP Client
httpx>=0.27,<1.0
# AWS
boto3>=1.34,<2.0
# Utilities
python-decouple>=3.8,<4.0
5. requirements/local.txt
-r base.txt
# Testing
pytest>=8.0,<9.0
pytest-django>=4.8,<5.0
pytest-cov>=4.1,<5.0
model-bakery>=1.17,<2.0
factory-boy>=3.3,<4.0
# Development
django-extensions>=3.2,<4.0
ipython>=8.20,<9.0
# Code Quality
ruff>=0.1,<1.0
mypy>=1.8,<2.0
django-stubs>=4.2,<5.0
djangorestframework-stubs>=3.14,<4.0
pre-commit>=3.6,<4.0
# Debugging
django-debug-toolbar>=4.2,<5.0
6. requirements/production.txt
-r base.txt
# Server
gunicorn>=21.2,<22.0
# Monitoring
sentry-sdk>=1.39,<2.0
# Static Files
whitenoise>=6.6,<7.0
Phase 9: 3-Subagent Execution Pattern
MANDATORY: Use This Pattern for Implementation
When executing tickets, use 3 parallel subagents:
┌─────────────────────────────────────────────────────────────────────────┐
│ 3-SUBAGENT EXECUTION PATTERN │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
│ │ IMPLEMENTOR │ │ CRITIC │ │ OBSERVER │ │
│ │ (Agent 1) │ │ (Agent 2) │ │ (Agent 3) │ │
│ └────────┬────────┘ └────────┬────────┘ └────────┬────────┘ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Writes code │ │ Reviews code │ │ Watches both │ │
│ │ Creates files │ │ Finds errors │ │ Integrates work │ │
│ │ Implements │ │ Security audit │ │ Runs tests │ │
│ │ features │ │ Best practices │ │ Final check │ │
│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │
│ │
│ WORKFLOW: │
│ 1. Implementor writes code for ticket │
│ 2. Critic reviews (hypercritical) → returns issues │
│ 3. Implementor fixes issues │
│ 4. Critic approves or returns more issues │
│ 5. Observer integrates, runs tests, commits │
│ │
└─────────────────────────────────────────────────────────────────────────┘
Subagent Prompts
Agent 1: IMPLEMENTOR
You are the IMPLEMENTOR agent. Your role:
1. Read the ticket requirements carefully
2. Check existing code patterns in the codebase
3. Write clean, production-ready code
4. Follow the project's conventions (see CLAUDE.md)
5. Include type hints on all function signatures
6. Add docstrings only for complex functions
7. Create unit tests for new functionality
CONSTRAINTS:
- Use ruff-compatible formatting (100 char lines)
- Follow existing patterns exactly
- No over-engineering - minimal code to solve the problem
- Use existing utilities, don't reinvent
- Encrypt sensitive data (SSN, bank details)
OUTPUT:
- List of files created/modified
- Brief explanation of approach
- Any questions for the Critic
Agent 2: CRITIC
You are the CRITIC agent. Your role is to be HYPERCRITICAL.
REVIEW CHECKLIST:
□ Security
- SQL injection vulnerabilities?
- XSS vulnerabilities?
- Sensitive data exposed in logs?
- Proper authentication/authorization?
- Secrets hardcoded?
□ Code Quality
- Type hints on all functions?
- Following DRY principle?
- Proper error handling?
- Edge cases covered?
- Consistent with existing patterns?
□ Performance
- N+1 queries?
- Missing database indexes?
- Unnecessary database calls?
- Proper caching?
□ Best Practices
- Django best practices followed?
- DRF conventions used correctly?
- Proper serializer validation?
- Permissions properly applied?
□ Testing
- Tests cover happy path?
- Tests cover edge cases?
- Tests cover error cases?
- Fixtures properly used?
□ Documentation
- Complex logic commented?
- API documented with drf-spectacular?
- Flow diagrams updated?
OUTPUT:
- List of issues found (numbered)
- Severity: CRITICAL / HIGH / MEDIUM / LOW
- Specific line references
- Suggested fixes
Be harsh. Find EVERY issue. Better to catch problems now than in production.
Agent 3: OBSERVER
You are the OBSERVER agent. Your role:
1. Watch Implementor and Critic interact
2. Once Critic approves, integrate the work:
- Run ruff format
- Run ruff check --fix
- Run pytest
- Verify all tests pass
- Run make migrate if needed
- Update flow documentation if API changed
3. Final verification:
- Start dev server (make dev)
- Test endpoints manually (curl)
- Check OpenAPI docs load
- Verify no console errors
4. Commit the work:
- git add relevant files
- git commit with descriptive message
- Reference ticket number in commit
OUTPUT:
- Test results summary
- Any issues found during integration
- Commit hash
- Ready for next ticket confirmation
Execution Command
For each ticket, use the 3-subagent pattern:
ultrathink and use 3 subagents:
- Implementor: implement the ticket code
- Critic: hypercritical review (security, best practices, reusability)
- Observer: integrate work, run tests, commit
Requirements:
- Install/use: ruff, isort, drf-spectacular, pre-commit
- Generate ASCII flow diagrams for each endpoint
- Update API documentation as you go
- Ensure production-ready quality
- Reference docs/flows/ for context
Phase 10: Output Generation
Step 1: Generate Summary Table
# {Project Name} - GitHub Project Plan
## Summary
| Item | Count |
| -------------- | --------------------------------- |
| GitHub Issues | {N} tickets |
| Total Hours | {N} hours (~{N} weeks @ 40h/week) |
| Epics | {N} epics |
| Iterations | {N} iterations |
| Flow Documents | {N} diagrams |
| Models | {N} models |
| API Endpoints | ~{N} endpoints |
| Integrations | {N} ({list}) |
## Epic Breakdown
| Epic | Title | Tickets | Hours | Iteration |
| ---- | ------- | ------- | ----- | --------- |
| 1 | {Title} | {N} | {N}h | Iter 1 |
| 2 | {Title} | {N} | {N}h | Iter 1-2 |
| ... | ... | ... | ... | ... |
## Iteration Schedule
| Iteration | Weeks | Focus | Epics |
| --------- | -------- | ------------ | --------- |
| 1 | Week 1-2 | {Phase Name} | Epic 1, 2 |
| 2 | Week 3-4 | {Phase Name} | Epic 3, 4 |
| ... | ... | ... | ... |
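The week and iteration counts in these tables follow from the hour totals by ceiling division. A minimal sanity check (the numbers are illustrative, not from any real project):

```python
total_hours = 172            # illustrative sum of all ticket estimates
hours_per_week = 40
weeks = -(-total_hours // hours_per_week)   # ceiling division: 172h -> 5 weeks
iterations = -(-weeks // 2)                 # 2-week iterations -> 3 iterations
```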
Step 2: Generate Directory Structure
## Proposed Directory Structure
{project_name}/
├── backend/ # {Tech} backend
│ ├── .docker/
│ │ └── Dockerfile
│ ├── .pre-commit-config.yaml # Pre-commit hooks
│ ├── docker-compose.yml
│ ├── pyproject.toml # Ruff, pytest, coverage config
│ ├── Makefile
│ ├── CLAUDE.md # AI assistant context
│ ├── README.md # Developer documentation
│ └── {app_name}/
│ ├── settings/
│ │ ├── base.py
│ │ ├── local.py
│ │ └── production.py
│ └── apps/
│ ├── {module1}/
│ │ ├── models.py
│ │ ├── serializers.py
│ │ ├── views.py
│ │ ├── urls.py
│ │ └── tests/
│ │ ├── test_models.py
│ │ └── test_views.py
│ └── integrations/
│ ├── {service1}/
│ │ ├── client.py
│ │ └── webhooks.py
│ └── {service2}/
├── docs/
│ ├── flows/ # ASCII flow diagrams
│ │ ├── 01_{flow1}_flow.md
│ │ ├── 02_{flow2}_flow.md
│ │ └── ...
│ └── github-issues/ # Issue files for approval
│ ├── 000-project-summary.md
│ ├── 001-epic-1-{name}.md
│ └── ...
└── frontend/ # If applicable
Step 3: Generate Epic Files
Store in docs/github-issues/:
docs/github-issues/
├── 000-project-summary.md # Full summary table
├── 001-epic-1-project-setup.md # Epic 1 with all tickets
├── 002-epic-2-core-models.md # Epic 2 with all tickets
├── 003-epic-3-{name}.md # Epic 3...
├── ...
├── README.md # Generated README template
├── CLAUDE.md # Generated CLAUDE template
└── flows/
├── 01_{flow}_flow.md
└── ...
Step 4: Generate gh CLI Script with Sub-Issues
#!/bin/bash
# create-all-issues.sh
# Requires: gh extension install yahsan2/gh-sub-issue
REPO="owner/repo"
PROJECT_NUM=40
# =============================================================================
# EPIC 1: Project Setup
# =============================================================================
echo "Creating Epic 1..."
EPIC1=$(gh issue create \
--repo $REPO \
--title "⭐⭐⭐ Epic 1: {Epic Title}" \
--body-file docs/github-issues/001-epic-1.md \
--label "epic" \
| grep -oE '[0-9]+$')  # gh prints the new issue URL; capture the number directly (avoids racing `gh issue list --limit 1`)
# Add epic to project
gh project item-add $PROJECT_NUM --owner {ORG} --url "https://github.com/$REPO/issues/$EPIC1"
# Create sub-issues for Epic 1
gh sub-issue create -p $EPIC1 -R $REPO \
-t "[Setup] 1.1 Initialize Django 5.0 project" \
-b "$(cat docs/github-issues/001-1-init.md)" \
-l "enhancement,backend,epic-1"
gh sub-issue create -p $EPIC1 -R $REPO \
-t "[Setup] 1.2 Configure Django settings" \
-b "$(cat docs/github-issues/001-2-settings.md)" \
-l "enhancement,backend,epic-1"
# List sub-issues
echo "Epic 1 sub-issues:"
gh sub-issue list $EPIC1 -R $REPO
# =============================================================================
# EPIC 2: Core Models
# =============================================================================
echo "Creating Epic 2..."
# ... repeat pattern for each epic
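With many tickets per epic, repeating the four-line `gh sub-issue create` block gets error-prone. One hypothetical way to generate the script instead (the helper name, epic number `12`, and `owner/repo` are illustrative; only the flags already shown above are used):

```python
import shlex

def build_sub_issue_cmd(parent: int, repo: str, title: str, body_file: str, labels: str) -> str:
    """Build one `gh sub-issue create` line (same flags as the script above)."""
    return (
        f"gh sub-issue create -p {parent} -R {repo} "
        f"-t {shlex.quote(title)} "
        f'-b "$(cat {body_file})" '
        f"-l {labels}"
    )

# One manifest entry per ticket instead of a repeated block:
tickets = [
    ("[Setup] 1.1 Initialize Django 5.0 project", "docs/github-issues/001-1-init.md"),
    ("[Setup] 1.2 Configure Django settings", "docs/github-issues/001-2-settings.md"),
]
script = "\n".join(
    build_sub_issue_cmd(12, "owner/repo", title, body, "enhancement,backend,epic-1")
    for title, body in tickets
)
```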
Phase 11: Verification Checklist Generator
Post-Implementation Verification
## Verification Checklist
After implementation, verify:
### Infrastructure
- [ ] Backend starts: `cd backend && make dev`
- [ ] Database runs: `docker-compose up -d`
- [ ] Migrations run: `make migrate`
- [ ] API docs load: http://localhost:{PORT}/api/docs/
### Code Quality
- [ ] Lint passes: `make lint`
- [ ] Format clean: `make format` (no changes)
- [ ] Tests pass: `make test`
- [ ] Coverage adequate: `make test-cov`
- [ ] Pre-commit passes: `make pre-commit`
### Authentication
- [ ] Login works: POST /api/v1/auth/login/
- [ ] JWT refresh works
- [ ] Permissions enforced
### Core Features
- [ ] Feature 1 works: {endpoint}
- [ ] Feature 2 works: {endpoint}
- [ ] Admin works: /admin/
### Integrations
- [ ] Webhook 1: POST /webhooks/{service}/
- [ ] Webhook 2: POST /webhooks/{service}/
### Documentation
- [ ] README.md complete and accurate
- [ ] CLAUDE.md updated with new features
- [ ] Flow diagrams match implementation
- [ ] OpenAPI schema current: `make docs`
### Production Ready
- [ ] No DEBUG=True in production settings
- [ ] SECRET_KEY from environment
- [ ] ALLOWED_HOSTS configured
- [ ] Sensitive data encrypted
- [ ] Logging configured
- [ ] Sentry configured (if applicable)
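The Code Quality rows above can be automated. A minimal sketch of such a runner (hypothetical helper; the `make` targets are the ones listed in the checklist, and `runner` is injectable so the function can be tested without running `make`):

```python
import subprocess

CHECKS = [
    ("Lint", ["make", "lint"]),
    ("Format", ["make", "format"]),
    ("Tests", ["make", "test"]),
]

def run_checks(checks, runner=subprocess.run):
    """Run each check command; return (name, passed) pairs."""
    results = []
    for name, cmd in checks:
        proc = runner(cmd, capture_output=True)
        results.append((name, proc.returncode == 0))
    return results
```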
Skill Invocation
Required Input
Project: {Project name}
Repo: {owner/repo}
Assignee: {Developer name}
GitHub Username: {username}
Project Number: {GitHub Project number, if exists}
Tech Stack: {Django/FastAPI/Node/etc.}
Optional Input
Start Day: {Day 1 date, for scheduling}
Hours Per Day: {default 8}
Existing Docs Path: {path to docs folder}
Example Invocation
Use the custom-github-issue-generator skill to plan:
Project: Echelon NIL Platform Backend
Repo: Benmore-Studio/echelon_nil_135
Assignee: Kashan
GitHub Username: @kashali26
Project Number: 40
Tech Stack: Django 5.0, PostgreSQL, Redis, Celery
Workflow Summary
┌─────────────────────────────────────────────────────────────────┐
│ CUSTOM ISSUE GENERATOR v3 │
├─────────────────────────────────────────────────────────────────┤
│ │
│ 0.5 STACK PREFERENCES (NEW) │
│ └─→ Auto-detect stack from package.json / pyproject.toml │
│ └─→ AskUserQuestion: confirm + pick library prefs │
│ └─→ Store STACK_* constants for all code templates │
│ │
│ 0.6 CODEBASE DRIFT PREVENTION (NEW) │
│ └─→ Before each epic: scan models, serializers, views │
│ └─→ Extract actual field names → use in code templates │
│ │
│ 1. DISCOVER │
│ └─→ Scan all MDs, diagrams, HTML, specs │
│ └─→ Extract requirements, flows, integrations │
│ │
│ 2. CONFIGURE │
│ └─→ Set up GitHub Project field mappings │
│ └─→ Define iterations, priorities, sizes │
│ │
│ 3. PLAN │
│ └─→ Group into Epics by domain │
│ └─→ Estimate hours, assign days │
│ └─→ Create dependency graph │
│ │
│ 4. GENERATE (Sub-issues now enterprise-grade) │
│ └─→ Summary table + directory structure │
│ └─→ README.md + CLAUDE.md templates │
│ └─→ ASCII flow diagrams │
│ └─→ Epic files (locked format) │
│ └─→ Sub-issues: API contract + schema │
│ + ASCII diagram + code templates │
│ + test code + payloads + edge cases │
│ └─→ gh CLI script │
│ │
│ 5. EXECUTE (3-Subagent Pattern) │
│ └─→ Implementor: writes code │
│ └─→ Critic: hypercritical review │
│ └─→ Observer: integrate, test, commit │
│ │
│ 6. VERIFY │
│ └─→ Generate verification checklist │
│ └─→ Post-implementation testing guide │
│ │
└─────────────────────────────────────────────────────────────────┘
Output Artifacts
When this skill completes, you will have:
- `000-project-summary.md` - Full summary table, epic breakdown, schedule
- `001-epic-{N}-{name}.md` - One file per epic with all tickets
- `flows/0{N}_{name}_flow.md` - ASCII flow diagrams for every flow
- `README.md` - Complete developer documentation
- `CLAUDE.md` - AI assistant context file
- `create-epic-with-subs.sh` - Script to create epics + sub-issues via gh-sub-issue
- `verification-checklist.md` - Post-implementation testing guide
- `project-fields.sh` - GitHub Project field IDs for automation
- `pyproject.toml` - Python project config (ruff, pytest, etc.)
- `.pre-commit-config.yaml` - Pre-commit hooks config
- `Makefile` - Development commands
Critical Rules
DO's ✅
- ALWAYS run Phase 0.5 first - detect stack and confirm library preferences via AskUserQuestion widgets
- ALWAYS scan codebase before each epic - read actual models/serializers/views to extract real field names (Phase 0.6)
- ALWAYS scan the full project first - read every MD, diagram, spec
- Group into Epics - never create orphan issues
- Include hour estimates - every ticket gets X hours
- Schedule by day - Day 1, Day 2, etc.
- Set priorities - P0, P1, P2 with clear reasoning
- Use sizing - XS, S, M, L, XL
- Map to iterations - Week 1-2, Week 3-4, etc.
- Create flow diagrams - ASCII format, no Mermaid
- Generate README.md - complete developer docs
- Generate CLAUDE.md - AI assistant context
- Use 3-subagent pattern - Implementor, Critic, Observer
- Include prod-ready config - ruff, pre-commit, pytest
- Store locally first - get approval before posting to GitHub
- Sub-issues must be enterprise-grade - full API contract, data schema, code templates, test code, example payloads, edge cases, dev setup
- Use confirmed library patterns - TanStack Query / Expo Router / Zod etc. as confirmed in Phase 0.5
- Mark new fields clearly - when adding new fields in code templates, add a `# NEW` comment
DON'Ts ❌
- Don't skip Phase 0.5 - never generate issues without confirming stack preferences
- Don't invent field names - always read actual codebase before using field names in code templates
- Don't skip discovery - you MUST read the full project first
- Don't create flat issues - always organize into Epics
- Don't omit estimates - every task needs hours
- Don't forget dependencies - note what blocks what
- Don't write vague sub-issues - sub-issues need full API contracts, test code, example payloads, and edge cases
- Don't mix library patterns - if TanStack Query is confirmed, never write useEffect for data fetching
- Don't skip directory structure - always propose file layout
- Don't use Mermaid - ASCII diagrams for universal compatibility
- Don't create issues for unknowns - ask or add to notes instead
- Don't skip README/CLAUDE - mandatory for every project
- Don't skip the Critic - always get hypercritical review
Developer Mappings
Known Developers:
- Funbi (Oluwafunbi Onaeko) → @ouujay
- Paul (Adeleke Paul) → @AdelekeAP
- Desmond → @LocksiDesmond
- Kashan → @kashali26
- Arkash → @ArkashJ
To add new developer:
- Ask: "What is their GitHub username?"
- Verify: "Can you confirm their GitHub profile URL?"
- Store mapping for future use
Skill Version
Version: 3.0.0
Last Updated: 2026-03-12
Changes in v3.0:
- Phase 0.5: Stack detection + library preference confirmation via AskUserQuestion widgets
- Phase 0.6: Codebase drift prevention — scan actual models/fields before each epic
- Sub-issue format upgraded from "concise hints" to full enterprise depth (API contract, data schema, ASCII architecture diagram, code templates, test code, example payloads, edge cases, dev setup)
- Modern library defaults: TanStack Query, Expo Router, Zod — confirmed per project via widget
- Epic format LOCKED — heading order enforced, only content depth may be enriched
- Critical Rules updated: "Don't write full implementations" removed; replaced with "sub-issues must be enterprise-grade"
Phase 12: Epic & Ticket Formatting Standard
MANDATORY: All epics and tickets MUST follow this exact format.
Conciseness Rule: Keep everything tight. Epic overviews = 2-3 sentences. Ticket descriptions = 2 sentences max. Requirements = bullet points only. Implementation Notes = minimal code hints, not full implementations. Sub-issues follow the same compact ticket format.
Epic File Format
# Epic {N}: {Epic Title} (Day {X}-{Y})
**Status:** ⏳ PENDING | 🔄 IN PROGRESS | ✅ COMPLETE
**Priority:** P{0/1/2} ({one-line reason})
**Iteration:** Week {X}-{Y} - {Phase Name}
**Total Estimate:** {X}-{Y} hours ({N} days)
**Dependencies:** Epic {N} ({STATUS})
---
## Overview
{2-3 sentences: what this epic accomplishes and why it matters}
**Key Deliverables:**
- ✅ {Deliverable 1}
- ✅ {Deliverable 2}
- ✅ {Deliverable 3}
**Acceptance Criteria:**
- All models/features implemented and tested
- API endpoints return correct status codes
- Test coverage >{N}%
- Zero linting errors
- OpenAPI schema validates (if applicable)
---
## API Endpoints Summary (omit if no API work)
### {Resource} Endpoints
GET    /api/v1/{resource}/        List (role-filtered)
POST   /api/v1/{resource}/        Create
GET    /api/v1/{resource}/{id}/   Retrieve
PATCH  /api/v1/{resource}/{id}/   Update
DELETE /api/v1/{resource}/{id}/   Delete (admin only)
GET    /api/v1/{resource}/me/     Current user's records
---
## Ticket Breakdown
| # | Ticket | Priority | Size | Hours | Day | Dependencies |
|---|--------|----------|------|-------|-----|--------------|
| {N}.1 | {Title} | P0 | M | 2-3h | Day {N} | None |
| {N}.2 | {Title} | P0 | L | 3-4h | Day {N} | None |
| {N}.3 | {Title} | P0 | S | 1h | Day {N} | {N}.1, {N}.2 |
**Total:** {N} tickets, {X}-{Y} hours ({N} days)
---
## Detailed Ticket Specifications
### Ticket {N}.{M}: {Title}
**Priority:** P{0/1/2} | **Size:** {XS/S/M/L/XL} | **Estimate:** {X}-{Y}h | **Day:** Day {N}
**Dependencies:** {N}.{M-1} (if any)
**Description:** {1-2 sentences max.}
**Requirements:**
- [ ] {Requirement 1}
- [ ] {Requirement 2}
- [ ] {Requirement 3}
**Key Files:**
- `{path/to/file.py}` - {what to do}
**Implementation Notes:**
```python
# Concise hint only — not a full implementation
class {ModelName}(models.Model):
field = models.CharField(...)
# Key pattern to follow
```
**Acceptance Criteria:**
- {Testable outcome 1}
- {Testable outcome 2}
- Tests pass: `make test`
## Dependencies & Order of Execution
Day {N}:
{N}.1 ({title}) ──┐
{N}.2 ({title}) ──┼─→ {N}.3 (Migrations/Integration)
│
{N}.1 ──────────→ {N}.4 (Serializers)
{N}.2 ──────────→ {N}.5 (Serializers)
Day {N+1}:
{N}.4 ──────────→ {N}.6 (ViewSet)
{N}.5 ──────────→ {N}.7 (ViewSet)
## Testing Strategy
- Unit: Models, serializer validation, model methods
- Integration: API endpoints, permissions, filtering
- Coverage target: >{N}%
## Success Metrics
| Metric | Target | Status |
|---|---|---|
| {Feature} implemented | {N}/{N} | ⏳ |
| Test coverage | >{N}% | ⏳ |
| Zero linting errors | ✅ | ⏳ |
## Risk Mitigation
| Risk | Impact | Mitigation |
|---|---|---|
| {Risk 1} | Medium | {One-line fix} |
| {Risk 2} | Low | {One-line fix} |
## Next Steps (Epic {N+1})
After this epic, the next epic will build on:
- {Dependency 1 this epic provides}
- {Dependency 2 this epic provides}
**Epic Owner:** @{github_username}
**Estimated Completion:** Day {N} (End of Week {N})
**Status:** ⏳ PENDING
**Dependencies:** Epic {N} (✅ COMPLETE)
---
### Sub-Issue (Individual Ticket) Format
Sub-issues must be **enterprise-grade**. Every sub-issue is a complete self-contained work spec — a developer should be able to pick it up with zero context and implement it correctly.
> **DO NOT apply the "conciseness rule" to sub-issues.** Sub-issues are the opposite of concise — they are comprehensive. The conciseness rule applies to epics only.
```markdown
## Feature Description
{2-4 sentences: what this ticket implements, why it's needed, and how it fits the broader epic.}
---
## Acceptance Criteria
- [ ] {Specific, testable outcome 1 — e.g. "POST /api/v1/users/ returns 201 with user object"}
- [ ] {Specific, testable outcome 2 — e.g. "Duplicate email returns 400 with field error"}
- [ ] {Specific, testable outcome 3}
- [ ] All existing tests still pass: `make test`
- [ ] Linting clean: `make lint`
---
## API Contract
### {HTTP_METHOD} {endpoint_path}
**Request:**
```json
{
"field_name": "string | type",
"another_field": "string | type"
}
```
**Response 2xx:**
```json
{
  "id": "uuid",
  "field_name": "value",
  "created_at": "ISO-8601"
}
```
**Response Errors:**
| Code | When |
|---|---|
| 400 | {Validation failure description} |
| 401 | Unauthenticated |
| 403 | {Permission failure description} |
| 404 | Resource not found |
## Data Schema
{ModelName}
├── id UUID PK, auto
├── {field_1} CharField(n) required, {constraint}
├── {field_2} ForeignKey → {RelatedModel}, on_delete=CASCADE
├── {field_3} BooleanField default=False
├── created_at DateTimeField auto_now_add=True
└── updated_at DateTimeField auto_now=True
Indexes: [{field_1}], [{field_2}, {field_3}]
Constraints: unique_together = ['{field_1}', '{field_2}'] (if applicable)
## Architecture Diagram
+---------------------------+ +---------------------------+
| {Client / Mobile / Web} | | {External Service} |
+-------------+-------------+ +-------------+-------------+
| |
| {HTTP_METHOD} {path} |
v |
+---------------------------+ |
| {View / Controller} | |
| Permission: {IsAuth...} | |
+-------------+-------------+ |
| |
+-----------+-----------+ |
| | | |
v v v |
+-------+ +--------+ +--------+ |
| Model | | Serial | | Signal |---→ task.delay |
+-------+ +--------+ +--------+ |
+-------------v---------+
| Celery Worker |
| Calls {ExternalSvc} |
+-----------------------+
## Code Templates
{Backend Framework} — {File path}
# EXACT field names from codebase scan (Phase 0.6)
class {ModelName}(TimeStampedModel):
{field_name} = models.CharField(max_length=255)
{fk_field} = models.ForeignKey(
"{RelatedModel}",
on_delete=models.CASCADE,
related_name="{related_name}",
)
class Meta:
ordering = ["-created_at"]
indexes = [
models.Index(fields=["{field_name}"]),
]
Serializer — apps/{name}/serializers.py
class {ModelName}Serializer(serializers.ModelSerializer):
class Meta:
model = {ModelName}
fields = ["{field_name}", "{fk_field}", "id", "created_at"]
read_only_fields = ["id", "created_at"]
def validate_{field_name}(self, value):
# Validation logic
return value
View — apps/{name}/views.py
class {ModelName}ViewSet(viewsets.ModelViewSet):
serializer_class = {ModelName}Serializer
permission_classes = [IsAuthenticated]
filterset_fields = ["{field_name}"]
def get_queryset(self):
return {ModelName}.objects.filter(
user=self.request.user
).select_related("{fk_field}")
def perform_create(self, serializer):
serializer.save(user=self.request.user)
Frontend — Data Fetching (TanStack Query / SWR / detected pattern)
// Using [STACK_DATA_FETCHING] — confirmed in Phase 0.5
const use{ModelName}s = () => {
return useQuery({
queryKey: ["{modelName}s"],
queryFn: () => api.get<{ModelName}[]>("/{resource}/"),
});
};
const useCreate{ModelName} = () => {
const queryClient = useQueryClient();
return useMutation({
mutationFn: (data: Create{ModelName}Input) =>
api.post<{ModelName}>("/{resource}/", data),
onSuccess: () => queryClient.invalidateQueries({ queryKey: ["{modelName}s"] }),
});
};
## Test Expectations
Unit Tests — apps/{name}/tests/test_{feature}.py
import pytest
from model_bakery import baker
from rest_framework.test import APIClient
@pytest.fixture
def client_with_user(db):
user = baker.make("users.User")
client = APIClient()
client.force_authenticate(user=user)
return client, user
class Test{ModelName}ViewSet:
def test_create_{model_name}_success(self, client_with_user):
client, user = client_with_user
payload = {
"{field_name}": "{valid_value}",
}
response = client.post("/api/v1/{resource}/", payload, format="json")
assert response.status_code == 201
assert response.data["{field_name}"] == "{valid_value}"
def test_create_{model_name}_duplicate_returns_400(self, client_with_user):
client, user = client_with_user
baker.make("{ModelName}", user=user, {field_name}="{value}")
payload = {"{field_name}": "{value}"}
response = client.post("/api/v1/{resource}/", payload, format="json")
assert response.status_code == 400
def test_list_{model_name}_only_own_records(self, client_with_user):
client, user = client_with_user
baker.make("{ModelName}", user=user, _quantity=2)
baker.make("{ModelName}", _quantity=3) # Other user's records
response = client.get("/api/v1/{resource}/")
assert response.status_code == 200
assert len(response.data["results"]) == 2
## Example Payloads
Successful create request:
curl -X POST http://localhost:8000/api/v1/{resource}/ \
-H "Authorization: Bearer {token}" \
-H "Content-Type: application/json" \
-d '{
"{field_name}": "{example_value}",
"{another_field}": "{example_value}"
}'
Expected response (201):
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"{field_name}": "{example_value}",
"created_at": "2026-01-01T00:00:00Z"
}
Validation error response (400):
{
"{field_name}": ["This field is required."],
"detail": "Validation error"
}
## Edge Cases
| Case | Expected Behavior |
|---|---|
| {Field} is empty string | Return 400 with field-level error |
| {Field} exceeds max length | Return 400 with length error |
| Duplicate {unique_field} | Return 400, not 500 |
| Unauthorized request | Return 401 |
| Access another user's record | Return 404 (not 403 — don't reveal existence) |
| {External service} timeout | Return 503 or queue for retry |
| Concurrent creation race | Use select_for_update() or unique constraint |
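The "duplicate returns 400, not 500" and "concurrent creation race" rows both rest on letting the database enforce uniqueness and translating the constraint violation into a client error. A minimal standalone illustration (sqlite3 stands in for Postgres; the table and column names are hypothetical, and the status codes mirror the table above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE deal (user_id INTEGER, slug TEXT, UNIQUE(user_id, slug))")
conn.execute("INSERT INTO deal VALUES (1, 'nike')")

try:
    # A concurrent or repeated create hits the unique constraint,
    # which is the last line of defense against the race.
    conn.execute("INSERT INTO deal VALUES (1, 'nike')")
    status = 201
except sqlite3.IntegrityError:
    status = 400   # translate the DB error into a field-level 400, never a 500
```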
## Dev Setup
# Run migrations for new models in this ticket
make makemigrations
make migrate
# Run tests for this specific feature only
pytest apps/{name}/tests/test_{feature}.py -v
# Test endpoint manually
curl -X POST http://localhost:8000/api/v1/{resource}/ \
-H "Authorization: Bearer $(make get-token 2>/dev/null || echo '{your_token}')" \
-H "Content-Type: application/json" \
-d '{"key": "value"}'
# Check OpenAPI schema updates
make docs && open http://localhost:8000/api/docs/
## Key Files
- `apps/{name}/models.py` - Add {ModelName} model
- `apps/{name}/serializers.py` - Add {ModelName}Serializer
- `apps/{name}/views.py` - Add {ModelName}ViewSet
- `apps/{name}/urls.py` - Register router
- `apps/{name}/tests/test_{feature}.py` - Tests
## Metadata
- Priority: P{0/1/2}
- Size: {XS/S/M/L/XL}
- Estimate: {N}h
- Day: Day {N}
- Depends on: #{issue} (if any)
- Blocks: #{issue} (if any)
---
### Formatting Rules
| Rule | Detail |
|------|--------|
| Epic overview | 2-3 sentences, concise |
| Epic sections | LOCKED — never reorder: Status, Priority, Iteration, Total Estimate, Dependencies, Overview, Key Deliverables, Acceptance Criteria, API Endpoints Summary, Ticket Breakdown, Detailed Ticket Specifications |
| Sub-issue depth | Enterprise-grade — all sections required |
| API contract | Full request/response JSON shapes in every sub-issue |
| Code templates | Use EXACT field names from codebase scan (Phase 0.6) |
| Library patterns | Use confirmed STACK_* preferences from Phase 0.5 |
| Test code | Always include pytest/jest examples with real assertions |
| ASCII only | No Mermaid diagrams — use ASCII art |
| Project-agnostic | No hardcoded project names — use {placeholders} |
| Status badges | Always include ⏳/🔄/✅ on Epic status line |