Two approaches to AI-assisted software development. Same problem, different philosophies. This page explains what each one is, where they overlap, and which suits your team.
The fundamental distinction isn't about features — it's about philosophy. GitHub gives you building blocks and principles; you assemble the workflow. AWS gives you the workflow pre-assembled; you follow the phases.
**GitHub AI-SDLC:** A composable ecosystem of tools, conventions, and principles. You choose which pieces to adopt (Copilot agent mode, Spec Kit, copilot-instructions.md, custom skills, MCP servers) and assemble them into a workflow that fits your team. The concepts (intent engineering, physics over law, information diets) are transferable across any AI tool.
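To make the convention-based pieces concrete, here is a minimal sketch of a `.github/copilot-instructions.md` (that path is GitHub's documented location; the headings and rules below are illustrative, not a required schema):

```markdown
# Guidance for Copilot in this repository (illustrative)

## Stack
- TypeScript, Node 20, pnpm; tests run with Vitest.

## Conventions
- Prefer small, pure functions; colocate tests as *.test.ts files.
- Never hard-code secrets; read configuration from environment variables.

## Workflow
- Restate the requirement in one paragraph before writing code.
- Run the test suite and report failures verbatim in your summary.
```

Because the file is advisory rather than enforced, its value comes from pairing it with the external "physics" (tests, linters, CI) discussed below.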
**AWS AI-DLC:** A productised, end-to-end workflow enforced by steering files (markdown rules) that the AI agent must follow. Three mandatory phases (Inception, Construction, Operations), with the agent itself determining which stages apply. Human-in-the-loop checkpoints are baked in, not optional.
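To make "steering files" concrete, here is a hypothetical sketch of one such rule; the filename, location, and wording are invented for illustration, and the real aidlc-workflows files are more detailed:

```markdown
# Rule: Inception phase gate (hypothetical sketch)

- Do not begin Construction until requirements.md exists and a human
  has approved it at a Mob Elaboration checkpoint.
- Never skip a stage silently; if a stage does not apply, state why
  and record the decision in the audit log.
- End every phase with the standardised completion message. Emergent
  behaviour outside these rules is forbidden.
```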
Both support spec-driven development: GitHub via Spec Kit (agent-agnostic, opt-in phases, specs as living artefacts); AWS via aidlc-workflows (20+ steering files, adaptive stage selection, mandatory checkpoints).
| Dimension | GitHub AI-SDLC | AWS AI-DLC |
|---|---|---|
| Philosophy | Composable ecosystem of tools & principles | Prescriptive, enforced workflow via steering files |
| Brownfield Support | Steel Thread methodology + brownfield extraction guide; manual | Automatic workspace detection & reverse engineering stage |
| Greenfield Support | Spec Kit: constitution → specify → plan → tasks → implement | Requirements Analysis → User Stories → Application Design → Units |
| Workflow Enforcement | Conventions (copilot-instructions.md, SKILL.md) — advisory | Steering files the agent must follow — mandatory |
| Human Checkpoints | PR review, code review — developer-initiated | Mob Elaboration & Mob Construction — mandatory ceremonies |
| Agent Support | Copilot, Claude Code, Gemini, Cursor, Windsurf, Amp, Qoder + others | Amazon Q, Kiro, Cursor, Cline, Claude Code, Copilot |
| Primary AI Tool | GitHub Copilot (but tool-agnostic concepts) | Amazon Q Developer & Kiro |
| Spec/Requirements | Spec Kit: living spec.md, plan.md, tasks.md, constitution.md | requirements.md, design docs, audit logs — generated per phase |
| Extensibility | 50+ community extensions (MAQA, FixIt, Fleet, integrations) | Custom extensions via .opt-in.md files |
| Artefact Generation | Opt-in: spec.md, plan.md, tasks.md | Mandatory at every phase: requirements, designs, audit logs |
| Process Overhead | Light — adopt what you need | Heavy — 20+ rule files, folder hierarchy, trigger phrases |
| Adaptiveness | Developer decides which tools/phases to use | Agent decides which stages apply based on context |
| Quality Philosophy | "Physics over Law" — structural enforcement via tests, linters, CI | Steering files + mandatory phase gates + audit logging |
| Context Management | "Information diets" — curate what the agent sees per phase | "Semantic context building" — agent builds context from codebase |
| Trust Model | Provenance tagging (EXTRACTED vs INFERRED), 3× Penalty, Force Blanks | Mandatory human approval at phase gates, audit trails |
| Licence | Spec Kit: MIT | MIT-0 (No Attribution) |
| Operations Phase | GitHub Actions, Copilot for Azure (separate tooling) | Placeholder — not yet implemented |
| Maturity | Copilot: established since 2022; Spec Kit: 2025 | Introduced re:Invent 2025; open-sourced late 2025 |
Despite different approaches, both frameworks share the same core insight: unstructured AI assistance makes developers slower, not faster. The interesting overlaps:

- **Structured requirements before implementation.** GitHub calls it spec-driven development; AWS calls it Inception. Neither lets the agent jump straight to code generation (see the sketch after this list).
- **Markdown as the steering medium.** GitHub: copilot-instructions.md, SKILL.md, constitution.md. AWS: steering rules in .amazonq/rules/. Same idea, different packaging.
- **Human review is non-negotiable.** Neither advocates fully autonomous AI. GitHub relies on PR review and spec validation; AWS on Mob Elaboration and Mob Construction ceremonies.
- **Multi-tool support, in theory.** Both claim to work across multiple AI tools. In practice, GitHub's ecosystem centres on Copilot and AWS's on Amazon Q, but the open-source artefacts from both are portable.
- **Distinct phases with artefact handoffs.** GitHub: Constitution → Specify → Plan → Tasks → Implement. AWS: Inception → Construction → Operations. The mapping is remarkably close.
- **Open-source toolkits.** Spec Kit under MIT, AI-DLC under MIT-0. Both live on GitHub and can be inspected, forked, and adapted.
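To ground the first overlap, here is a minimal spec-style artefact of the kind both workflows insist on before implementation; the structure is a generic sketch, not the exact Spec Kit or AI-DLC template:

```markdown
# spec.md (generic sketch)

## Intent
Allow users to reset a forgotten password via a time-limited email link.

## Acceptance criteria
- Reset links expire after 30 minutes and are single-use.
- No account enumeration: the response is identical for unknown emails.

## Out of scope
- SMS-based reset; admin-initiated resets.
```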
The sharpest divergence is on brownfield codebases. **GitHub AI-SDLC:** The Steel Thread methodology teaches you to extract features from existing code: pick one narrow vertical slice, reverse-engineer the intent, extract the service, prove it end-to-end. The developer drives the strategy; the AI executes within those constraints.
**AWS AI-DLC:** The agent inspects the workspace, determines it's brownfield, and automatically runs a Reverse Engineering stage to build a semantic model of the existing architecture. The agent drives the analysis; the developer validates at checkpoints.
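A Steel Thread pass is often easiest to see as the short plan a developer hands to the agent; the feature slice below is invented for illustration:

```markdown
# Steel thread: export-invoice-pdf (illustrative slice)

1. Trace the existing path from the "Download PDF" button to the
   renderer; record each claim about it as EXTRACTED or INFERRED.
2. Extract the rendering logic behind a narrow interface, leaving
   legacy call sites untouched.
3. Prove the slice end-to-end with one integration test before
   widening scope.
```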
**GitHub AI-SDLC:** Convention-based. copilot-instructions.md and SKILL.md files guide the agent but don't prevent it from deviating. Structural enforcement comes from external physics (tests, linters, CI pipelines), not from the AI's workflow rules.
**AWS AI-DLC:** Steering files are mandatory. The agent cannot skip phases, must use standardised completion messages, and is explicitly forbidden from "emergent behaviour". The workflow itself is the enforcement mechanism, not external tooling.
**GitHub AI-SDLC:** Start with copilot-instructions.md on Monday. Add Spec Kit next week. Try agent mode the week after. Each piece delivers value independently; you never have to adopt the whole framework.
**AWS AI-DLC:** The value comes from the enforced workflow. Adopting half the steering files defeats the purpose: the phases depend on each other. You either run the full Inception–Construction pipeline or you're just using Amazon Q ad hoc.
They're not mutually exclusive. The concepts from GitHub's AI-SDLC (intent engineering, physics over law, information diets, provenance tagging) apply regardless of which workflow you use. You could adopt AI-DLC's phased structure and use GitHub's trust concepts within it. The principles are transferable; the tooling is not.
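As a sketch of that hybrid, a team running AI-DLC could add one steering rule that imports GitHub's trust concepts; the file below is hypothetical and simply restates those concepts in rule form:

```markdown
# Rule: provenance and trust (hypothetical hybrid)

- Tag every claim about the existing codebase as EXTRACTED (read
  directly from source) or INFERRED (the agent's guess).
- Keep the agent on an information diet: load only the documents the
  current phase actually needs.
- Treat quality as physics, not law: gate phase completion on tests,
  linters, and CI, not on the agent's self-reported success.
```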
The critique that GitHub's AI-SDLC is piecemeal while AWS's AI-DLC is a proven framework has a kernel of truth, and a blind spot.
Where the critique is fair: AI-DLC gives teams a single, enforceable workflow they can adopt on Monday morning. Steering files mean the AI agent itself prevents shortcuts. For a team that needs structure imposed externally (rather than built internally), that's immediately valuable — especially for brownfield codebases where the reverse engineering stage removes a significant manual burden.
Where the critique misses: GitHub's "piecemeal" nature is actually its strength for teams with engineering maturity. The concepts (intent engineering, physics over law, information diets, the Dumb Zone, provenance tagging) are deeper and more transferable than any single workflow. Someone who understands these principles can build AI-DLC's workflow, or any future one; someone who only knows AI-DLC's phases cannot easily adapt when the tooling changes. And with Spec Kit, GitHub now has a structured workflow too; it's just opt-in rather than enforced.
The real question isn't "which is better" — it's "what does your team need right now?" If the answer is "we need guardrails imposed on us," AI-DLC. If the answer is "we need to understand how to think about AI-assisted development," GitHub AI-SDLC.