
GitHub AI-SDLC vs AWS AI-DLC

Two approaches to AI-assisted software development. Same problem, different philosophies. This page explains what each one is, where they overlap, and which suits your team.

Composable Ecosystem vs Prescriptive Workflow

The fundamental distinction isn't about features — it's about philosophy. GitHub gives you building blocks and principles; you assemble the workflow. AWS gives you the workflow pre-assembled; you follow the phases.

GitHub AI-SDLC

A composable ecosystem of tools, conventions, and principles. You choose which pieces to adopt — Copilot agent mode, Spec Kit, copilot-instructions.md, custom skills, MCP servers — and assemble them into a workflow that fits your team. The concepts (intent engineering, physics over law, information diets) are transferable across any AI tool.

  • Principle-driven: learn the "why", build your own "how"
  • Tool-agnostic concepts that survive when tooling changes
  • Adopt incrementally — each piece adds value alone
  • Lighter ceremony, more flexibility
  • Developer assembles the workflow
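
What assembly looks like in practice: a repository-level instructions file that Copilot reads as context for every request. Here's a minimal sketch of .github/copilot-instructions.md; the file is free-form markdown, so everything below is illustrative rather than a prescribed schema:

```markdown
<!-- .github/copilot-instructions.md (minimal illustrative sketch) -->
# Conventions for AI-assisted changes

- TypeScript strict mode; no `any` without a justifying comment.
- Every new endpoint ships with an integration test in `tests/api/`.
- Keep PRs to one feature or fix per branch.
- Do not modify files under `legacy/`; flag them for human review instead.
```

Because this is convention rather than enforcement, the agent treats it as strong guidance; the real guardrails stay in your tests and CI.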

AWS AI-DLC

A productised, end-to-end workflow enforced by steering files (markdown rules) that the AI agent must follow. Three mandatory phases — Inception, Construction, Operations — with the agent itself determining which stages apply. Human-in-the-loop checkpoints are baked in, not optional.

  • Process-driven: follow the phases, the rules enforce quality
  • Brownfield-first with automatic reverse engineering
  • Adopt all-or-nothing — the workflow is the value
  • Heavier ceremony, more governance
  • Steering files assemble the workflow for you
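
Contrast that with a steering rule, which reads as an instruction the agent must obey. The following is a hypothetical fragment, loosely modelled on the behaviours described later on this page (standardised completion messages, no emergent behaviour); actual aidlc-workflows file names and wording differ:

```markdown
<!-- Hypothetical steering rule in the style of aidlc-workflows, e.g. under .amazonq/rules/ -->
# Stage rule: Inception / Requirements

- Do NOT generate code during this stage.
- Produce requirements.md, then STOP and wait for explicit human approval.
- On completion, emit the standardised message: "Requirements stage complete. Awaiting approval."
- Emergent behaviour outside the defined stages is forbidden.
```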

How Each Approach Works

GitHub AI-SDLC — Spec-Driven Development
Constitution → Specify → Plan → Tasks → Implement → Review

Via Spec Kit — agent-agnostic, opt-in phases, specs are living artefacts
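
Those phases map onto concrete artefacts. A sketch of the typical Spec Kit file set (roles as commonly described; exact naming and layout vary by version):

```markdown
<!-- Typical Spec Kit artefacts (illustrative) -->
constitution.md: the project's non-negotiable principles
spec.md: what to build and why, kept as a living document
plan.md: the technical approach (stack, architecture, constraints)
tasks.md: an ordered, reviewable breakdown of implementation steps
```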

AWS AI-DLC — Adaptive Steering Workflow
Inception: Workspace Detection → Reverse Eng. → Requirements → User Stories → Design → Units
Construction: Func. Design → NFR Design → Code Gen → Build & Test
Operations

Via aidlc-workflows — 20+ steering files, adaptive stage selection, mandatory checkpoints

Side-by-Side

Dimension | GitHub AI-SDLC | AWS AI-DLC
Philosophy | Composable ecosystem of tools & principles | Prescriptive, enforced workflow via steering files
Brownfield Support | Steel Thread methodology + brownfield extraction guide; manual | Automatic workspace detection & reverse engineering stage
Greenfield Support | Spec Kit: constitution → specify → plan → tasks → implement | Requirements Analysis → User Stories → Application Design → Units
Workflow Enforcement | Conventions (copilot-instructions.md, SKILL.md) — advisory | Steering files the agent must follow — mandatory
Human Checkpoints | PR review, code review — developer-initiated | Mob Elaboration & Mob Construction — mandatory ceremonies
Agent Support | Copilot, Claude Code, Gemini, Cursor, Windsurf, Amp, Qoder + others | Amazon Q, Kiro, Cursor, Cline, Claude Code, Copilot
Primary AI Tool | GitHub Copilot (but tool-agnostic concepts) | Amazon Q Developer & Kiro
Spec/Requirements | Spec Kit: living spec.md, plan.md, tasks.md, constitution.md | requirements.md, design docs, audit logs — generated per phase
Extensibility | 50+ community extensions (MAQA, FixIt, Fleet, integrations) | Custom extensions via .opt-in.md files
Artefact Generation | Opt-in: spec.md, plan.md, tasks.md | Mandatory at every phase: requirements, designs, audit logs
Process Overhead | Light — adopt what you need | Heavy — 20+ rule files, folder hierarchy, trigger phrases
Adaptiveness | Developer decides which tools/phases to use | Agent decides which stages apply based on context
Quality Philosophy | "Physics over Law" — structural enforcement via tests, linters, CI | Steering files + mandatory phase gates + audit logging
Context Management | "Information diets" — curate what the agent sees per phase | "Semantic context building" — agent builds context from codebase
Trust Model | Provenance tagging (EXTRACTED vs INFERRED), 3× Penalty, Force Blanks | Mandatory human approval at phase gates, audit trails
Licence | Spec Kit: MIT | MIT-0 (No Attribution)
Operations Phase | GitHub Actions, Copilot for Azure (separate tooling) | Placeholder — not yet implemented
Maturity | Copilot: established since 2022; Spec Kit: 2025 | Introduced re:Invent 2025; open-sourced late 2025
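
One row deserves unpacking: the Trust Model. GitHub's provenance tagging labels every statement in a reverse-engineered spec by how it was obtained, so a reviewer knows exactly what to verify. A minimal sketch follows; the EXTRACTED/INFERRED distinction comes from the AI-SDLC material, while the layout here is assumed:

```markdown
<!-- Illustrative provenance tags inside an extracted spec.md -->
## Order cancellation

- [EXTRACTED] Orders can be cancelled until status becomes `shipped` (guard in OrderService.cancel).
- [INFERRED] Refunds appear to be asynchronous; no direct call was found, so confirm with a human before relying on this.
```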

Where They Agree

Despite different approaches, both frameworks share the same core insight: unstructured AI assistance makes developers slower, not faster. The interesting overlap:

Specs Before Code

Both require structured requirements before implementation. GitHub calls it spec-driven development. AWS calls it Inception. Neither lets the agent jump straight to code generation.

Rules Files Are the Mechanism

Both use markdown files to steer agent behaviour. GitHub: copilot-instructions.md, SKILL.md, constitution.md. AWS: steering rules in .amazonq/rules/. Same idea, different packaging.

Human-in-the-Loop

Neither advocates for fully autonomous AI. Both insist on human review — GitHub through PR review and spec validation, AWS through Mob Elaboration and Mob Construction ceremonies.

Agent-Agnostic Aspirations

Both claim to work across multiple AI tools. In practice, GitHub's ecosystem centres on Copilot and AWS's on Amazon Q — but the open-source artefacts from both are portable.

Phased Workflows

Both break development into distinct phases with artefact handoffs. GitHub: Constitution → Specify → Plan → Tasks → Implement. AWS: Inception → Construction → Operations. The mapping is remarkably close.

Open Source

Both have released their frameworks as open-source toolkits. Spec Kit under MIT, AI-DLC under MIT-0. Both are on GitHub. Both can be inspected, forked, and adapted.

Where They Actually Diverge

Brownfield: Manual Strategy

GitHub's approach teaches you to extract features from existing codebases using the Steel Thread methodology — pick one narrow vertical slice, reverse-engineer the intent, extract the service, prove it end-to-end. The developer drives the strategy; the AI executes within your constraints.

Brownfield: Automatic Detection

AI-DLC's agent inspects the workspace, determines it's brownfield, and automatically runs a Reverse Engineering stage to build a semantic model of the existing architecture. The agent drives the analysis; the developer validates at checkpoints.

Enforcement: Trust the Developer

Convention-based. copilot-instructions.md and SKILL.md files guide the agent but don't prevent it from deviating. Structural enforcement comes from external physics — tests, linters, CI pipelines — not from the AI's workflow rules.
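
What "external physics" means concretely: the gate lives in CI, entirely outside the agent's context. A minimal GitHub Actions sketch; the commands assume a typical Node project and are illustrative only:

```yaml
# Illustrative CI gate: whatever the agent wrote, it cannot merge past failing checks.
name: ci
on: [pull_request]
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci        # reproducible install from the lockfile
      - run: npm run lint  # the linter is physics, not advice
      - run: npm test      # so are the tests
```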

Enforcement: Trust the Process

Steering files are mandatory. The agent cannot skip phases, must use standardised completion messages, and is explicitly forbidden from "emergent behaviour." The workflow itself is the enforcement mechanism, not external tooling.

Adoption: Incremental

Start with copilot-instructions.md on Monday. Add Spec Kit next week. Try agent mode the week after. Each piece delivers value independently. You never have to adopt the whole framework.

Adoption: All-in

The value comes from the enforced workflow. Adopting half the steering files defeats the purpose — the phases depend on each other. You either run the full Inception–Construction pipeline or you're just using Amazon Q ad hoc.

Which Approach Suits Your Team?

GitHub AI-SDLC is better when…

  • Your team has strong engineering discipline already
  • You want tool-agnostic principles that outlast any vendor
  • You need to adopt incrementally alongside existing processes
  • You value flexibility over governance
  • You're working across multiple AI tools (Copilot, Claude, Gemini)
  • You want a rich extension ecosystem (50+ community plugins)
  • You're doing rapid prototyping or spike work

AWS AI-DLC is better when…

  • Your team needs a prescribed process to follow from day one
  • You're working on a large brownfield codebase and want automated analysis
  • Enterprise governance and audit trails are non-negotiable
  • You want the AI agent to enforce the workflow, not developers
  • Your organisation is already invested in the AWS ecosystem
  • You need mandatory human-review ceremonies with formal sign-off
  • Predictability matters more than flexibility

They're not mutually exclusive. The concepts from GitHub's AI-SDLC (intent engineering, physics over law, information diets, provenance tagging) apply regardless of which workflow you use. You could adopt AI-DLC's phased structure and use GitHub's trust concepts within it. The principles are transferable; the tooling is not.

Is GitHub's Approach "Piecemeal"?

The critique that GitHub's AI-SDLC is more piecemeal while AWS's AI-DLC is a proven framework has a kernel of truth — and a blind spot.

Where the critique is fair: AI-DLC gives teams a single, enforceable workflow they can adopt on Monday morning. Steering files mean the AI agent itself prevents shortcuts. For a team that needs structure imposed externally (rather than built internally), that's immediately valuable — especially for brownfield codebases where the reverse engineering stage removes a significant manual burden.

Where the critique misses: GitHub's "piecemeal" nature is actually its strength for teams with engineering maturity. The concepts — intent engineering, physics vs law, information diets, the Dumb Zone, provenance tagging — are deeper and more transferable than any single workflow. Someone who understands these principles can build AI-DLC's workflow (or any future one). Someone who only knows AI-DLC's phases cannot easily adapt when the tooling changes. And with Spec Kit, GitHub now has a structured workflow too — it's just opt-in rather than enforced.

The real question isn't "which is better" — it's "what does your team need right now?" If the answer is "we need guardrails imposed on us," AI-DLC. If the answer is "we need to understand how to think about AI-assisted development," GitHub AI-SDLC.

Sources & Further Reading