Disciplined AI Software Development Methodology
This is a shortened version of the repository README; for the full documentation, visit GitHub.
A structured approach for working with AI on development projects. This methodology addresses common issues like code bloat, architectural drift, and context dilution through systematic constraints.
The Context Problem
AI systems work on Question → Answer patterns. When you ask for broad, multi-faceted implementations, you typically get:
- Functions that work but lack structure
- Repeated code across components
- Architectural inconsistency over sessions
- Context dilution causing output drift
- More debugging time than planning time
How This Works
The methodology uses four stages with systematic constraints and validation checkpoints. Each stage builds on empirical data rather than assumptions.
Planning saves debugging time: thorough upfront planning typically prevents days of fixing architectural issues later.
The Four Stages
Stage 1: AI Configuration
Set up your AI model’s custom instructions using AI-PREFERENCES.XML. This establishes behavioral constraints and uncertainty flagging, so the AI signals when it lacks certainty.
Stage 2: Collaborative Planning
Share METHODOLOGY.XML with the AI to structure your project plan. Work together to:
- Define scope and completion criteria
- Identify components and dependencies
- Structure phases based on logical progression
- Generate systematic tasks with measurable checkpoints
Output: A development plan following dependency chains with modular boundaries.
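A plan that follows dependency chains can be made explicit and machine-checkable. Below is a minimal sketch in Python using the standard library; the component names are hypothetical examples, not taken from the methodology documents:

```python
from graphlib import TopologicalSorter

# Hypothetical component map: each key lists the components it depends on.
plan = {
    "benchmark_suite": [],                        # built first (Stage 4 relies on it)
    "config_loader": [],
    "data_store": ["config_loader"],
    "api_layer": ["data_store", "benchmark_suite"],
}

# A topological order of the graph is a valid implementation sequence
# for Stage 3: every component appears after its dependencies.
order = list(TopologicalSorter(plan).static_order())
print(order)
```

Writing the plan this way also surfaces circular dependencies early, since `TopologicalSorter` raises an error if the graph contains a cycle.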
Stage 3: Systematic Implementation
Work phase by phase, section by section. Each request follows: “Can you implement [specific component]?” with focused objectives.
File size stays ≤150 lines. This constraint provides:
- Smaller context windows for processing
- Focused implementation over multi-function attempts
- Easier sharing and debugging
Implementation flow:
Request specific component → AI processes → Validate → Benchmark → Continue
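The 150-line ceiling is easy to enforce mechanically as part of the validation step. A minimal sketch follows; the threshold comes from the methodology, but the script itself is an assumed helper, not part of the repository:

```python
from pathlib import Path

MAX_LINES = 150  # the methodology's file-size constraint


def oversized_files(root: str, pattern: str = "*.py") -> list[tuple[str, int]]:
    """Return (path, line_count) pairs for files exceeding MAX_LINES."""
    results = []
    for path in Path(root).rglob(pattern):
        count = sum(1 for _ in path.open(encoding="utf-8", errors="ignore"))
        if count > MAX_LINES:
            results.append((str(path), count))
    return results
```

Running a check like this at each validation checkpoint keeps the constraint from silently eroding as files grow.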
Stage 4: Data-Driven Iteration
The benchmarking suite (built first) provides performance data throughout development. Feed this data back to the AI for optimization decisions based on measurements rather than guesswork.
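What “feeding performance data back” can look like in practice: time competing implementations and hand the AI structured numbers instead of impressions. A minimal sketch using only the standard library (the candidate functions are hypothetical):

```python
import json
import timeit


def naive_sum(n: int) -> int:
    """Hypothetical candidate: explicit loop."""
    total = 0
    for i in range(n):
        total += i
    return total


def builtin_sum(n: int) -> int:
    """Hypothetical candidate: built-in sum over a range."""
    return sum(range(n))


# Time each candidate and emit the results as JSON the AI can reason over.
report = {
    name: timeit.timeit(lambda f=fn: f(10_000), number=200)
    for name, fn in [("naive_sum", naive_sum), ("builtin_sum", builtin_sum)]
}
print(json.dumps(report, indent=2))
```

Pasting such a report back into the session grounds the optimization discussion in measurements rather than guesswork.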
Why This Approach Works
Decision Processing: AI handles “Can you do A?” more reliably than “Can you do A, B, C, D, E, F, G, H?”
Context Management: Small files and bounded problems prevent the AI from juggling multiple concerns simultaneously.
Empirical Validation: Performance data replaces subjective assessment. Decisions come from measurable outcomes.
Systematic Constraints: Architectural checkpoints, file size limits, and dependency gates force consistent behavior.
Implementation Steps
Note: the .xml format is a guideline; experiment with different formats (e.g., .json, .yaml, .md) for different use cases.
Each format emphasizes different domains. For example, .md prompts are effective for documentation: because the AI recognizes the structure, it tends to continue it naturally.
.xml and .json provide a code-like structure. This tends to strengthen code generation while reducing unnecessary jargon, resulting in more structured outputs.
Additionally, I’ve included some experimental prompts to illustrate differences when using less common formats or unusual practices.
What to Expect
AI Behavior: The methodology reduces architectural drift and context degradation compared to unstructured approaches. AI still needs occasional reminders about principles - this is normal.
Development Flow: Systematic planning tends to reduce debugging cycles. Focused implementation helps minimize feature bloat. Performance data supports optimization decisions.
Code Quality: Architectural consistency across components, measurable performance characteristics, maintainable structure as projects scale.
Learning the Ropes
Getting Started
Share the two persona documents with your AI model and ask it to simulate the character:
- CORE-PERSONA-FRAMEWORK.json - Character enforcement
- GUIDE-PERSONA.json - Methodology guide (declines to participate in vibe coding)
To create your own specialized persona, share the CREATE-PERSONA-PLUGIN.json document with your AI model and specify which persona you would like to create.
Share the three core documents with your AI model:
- AI-PREFERENCES.XML - Behavioral constraints
- METHODOLOGY.XML - Technical framework
- README.XML - Implementation guidance
Ask targeted questions:
- “How would Phase 0 apply to [project type]?”
- “What does the 150-line constraint mean for [specific component]?”
- “How should I structure phases for [project description]?”
- “Can you help decompose this project using the methodology?”
This helps you understand how your AI model interprets the guidelines.
More detailed information and practical resources are available in the repository: Disciplined AI Software Development Methodology.