D-POAF® (Decentralized Proof-Oriented AI Framework) is a proof-oriented, decentralized reference framework for AI-native software engineering. It defines a structured lifecycle model and foundational principles for designing, building, operating, and evolving software in human-AI engineering environments.
D-POAF grounds legitimacy, governance, and accountability in verifiable proof, sustained through end-to-end traceability of intent, decisions, actions, artifacts, proofs, and outcomes.
AI-native engineering introduces decisions and changes that cannot be justified by hierarchy, central control, or performance claims alone. In hybrid human-AI environments, trust, value, and responsibility require a shift from traditional approaches:
Traditional Approaches → D-POAF
─────────────────────────────────────────
Trust-based processes → Verifiable proof
Subjective validation → Evidence-driven decisions
Centralized authority → Decentralized governance
Static frameworks → Living, adaptive systems
Manual ceremonies → Proof-first engineering
D-POAF is built on five foundational principles:
A decision becomes legitimate when justified by explicit, verifiable proof. Hierarchy, automation, or performance alone does not establish legitimacy.
Decision authority is distributed across humans, AI, and systems, supported by explicit boundaries and escalation paths. Evidence sustains reviewability and prevents opaque concentration of control.
Governance is embedded into workflows and evolves through evidence and outcomes rather than static controls. Rules, constraints, exceptions, and decision rights are maintained as an auditable, lifecycle-wide operating system.
Intent, decisions, actions, artifacts, proofs, and outcomes remain linkable to context and contribution (human or AI). Traceability sustains reviewability, reproducibility, and accountability across system evolution.
Even with AI autonomy, humans retain explicit responsibility for decision boundaries, escalation rules, and outcome acceptance. Autonomy never abolishes accountability.
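To make these principles concrete, here is a minimal, non-normative sketch of how a decision record might carry explicit proof references and a named accountable human. The names (`ProofRef`, `DecisionRecord`, `accountable_human`) are assumptions made for illustration, not terms defined by the D-POAF specification.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ProofRef:
    """Reference to a verifiable proof artifact (illustrative, not a spec type)."""
    kind: str         # proof family, e.g. "PoD", "PoV", or "PoR"
    uri: str          # where the evidence lives (test report, metrics dashboard, ...)
    produced_by: str  # contributor identity, human or AI


@dataclass
class DecisionRecord:
    """A decision kept linkable to its intent, its proofs, and an accountable human."""
    intent: str                  # the intent this decision serves
    rationale: str               # explicit, reviewable justification
    proofs: List[ProofRef] = field(default_factory=list)
    accountable_human: str = ""  # autonomy never removes human accountability

    def is_legitimate(self) -> bool:
        # Legitimacy requires explicit, verifiable proof and a named accountable
        # human; hierarchy or automation alone is not enough.
        return bool(self.proofs) and bool(self.accountable_human)
```

The point of the sketch is structural: a decision without attached proof, or without an accountable human, cannot be treated as legitimate.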
D-POAF structures system evolution as a continuous proof-grounded cycle:
Intent → Decision → Execution → Evidence → Learning → Adaptation
1. Instruct & Scope
Translates intent into a scoped instruction grounded in context, with explicit scope boundaries, initial hypotheses, and proof expectations.
2. Shape & Align
Refines and decomposes scope, aligns decision and investment logic, and prepares execution through prompt action design, risk/acceptance thresholds, and human↔AI delegation bounds.
3. Execute & Evolve
Executes prompt actions to produce artifacts, validates delivery through integration, validates outcomes through review, and sustains reliability through monitoring, producing and refreshing proofs over time.
A Wave is the unit of verifiable progress. A Wave traverses the three macro-phases to produce and refresh proofs (PoD/PoV/PoR). Based on evidence and operational signals, each Wave updates intent, governance, and delegation boundaries.
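As an illustrative sketch only (the names `MacroPhase`, `Wave`, `record_proof`, and `advance` are assumptions, not defined by the specification), a Wave could be modeled as an object that traverses the three macro-phases and accumulates PoD/PoV/PoR evidence:

```python
from enum import Enum, auto


class MacroPhase(Enum):
    INSTRUCT_AND_SCOPE = auto()
    SHAPE_AND_ALIGN = auto()
    EXECUTE_AND_EVOLVE = auto()


class Wave:
    """Unit of verifiable progress: traverses the macro-phases and accumulates proofs."""

    def __init__(self, intent: str):
        self.intent = intent
        self.phase = MacroPhase.INSTRUCT_AND_SCOPE
        self.proofs: dict[str, list[str]] = {"PoD": [], "PoV": [], "PoR": []}

    def record_proof(self, family: str, evidence_uri: str) -> None:
        """Attach evidence produced during execution, review, or operation."""
        self.proofs[family].append(evidence_uri)

    def advance(self) -> None:
        """Move to the next macro-phase; after Execute & Evolve the cycle wraps around."""
        order = list(MacroPhase)
        self.phase = order[(order.index(self.phase) + 1) % len(order)]
```

In this sketch, `advance()` deliberately wraps around after Execute & Evolve, mirroring the continuous Intent → Decision → Execution → Evidence → Learning → Adaptation cycle rather than a one-way pipeline.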
D-POAF defines three proof families that sustain trust and accountability:
- PoD: Evidence of intended behavior and technical alignment. Validates that the system behaves as specified.
- PoV: Evidence of outcomes and measurable impact. Validates that the delivered capability produces real value.
- PoR: Evidence of sustained quality, safety, and stability over time. Validates that behavior remains dependable as systems evolve.
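A hedged sketch of how the three families above might be checked in practice: the function below (its name and shape are assumptions for illustration, not spec terms) treats a Wave as proven only when all three families are present and the reliability evidence is recent enough to still count.

```python
from datetime import datetime, timedelta, timezone

REQUIRED_FAMILIES = {"PoD", "PoV", "PoR"}


def wave_is_proven(proofs: list[tuple[str, datetime]], por_max_age: timedelta) -> bool:
    """True only if all three proof families are present and the PoR evidence is fresh.

    `proofs` is a list of (family, produced_at) pairs; timestamps are expected to be
    timezone-aware.
    """
    families = {family for family, _ in proofs}
    if not REQUIRED_FAMILIES <= families:
        return False
    now = datetime.now(timezone.utc)
    # Reliability evidence decays: stale PoR no longer sustains trust.
    return any(
        family == "PoR" and now - produced_at <= por_max_age
        for family, produced_at in proofs
    )
```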
Living Governance defines and continuously updates the system’s operating envelope through evidence.
Governance is not an external overlay; it is a continuous, adaptive, lifecycle-wide operating layer.
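As a non-normative illustration, a governance rule could be kept as auditable data whose constraints and decision rights are revised from evidence. The `GovernanceRule` structure and its fields are assumptions made for this sketch, not part of the specification.

```python
from dataclasses import dataclass, field


@dataclass
class GovernanceRule:
    """A rule in the living governance layer: auditable and revised from evidence."""
    name: str
    constraint: str       # e.g. "AI may auto-merge changes below an agreed risk threshold"
    decision_rights: str  # who (human or AI) may decide under this rule
    escalation_path: str  # where exceptions and boundary cases go
    revisions: list[str] = field(default_factory=list)  # audit trail of every change

    def adapt(self, evidence_summary: str, new_constraint: str) -> None:
        """Update the constraint from observed outcomes, keeping the change auditable."""
        self.revisions.append(f"{self.constraint} -> {new_constraint} [{evidence_summary}]")
        self.constraint = new_constraint
```

The design choice the sketch emphasizes is that rules carry their own revision history, so every tightening or relaxation of the operating envelope remains reviewable.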
D-POAF defines horizontal, collaborative roles without rigid hierarchy:
Start with the D-POAF® Canonical Specification to understand the foundational concepts and principles.
You can adopt D-POAF principles incrementally:
Connect with other practitioners:
D-POAF applies wherever AI influences or contributes to the software engineering lifecycle:
D-POAF® Canonical Specification v1.0
Status: Frozen Canonical Reference
Date: December 5, 2025
License: CC BY 4.0
Publisher: D-POAF Community (initiated by Inovionix)
Canonical reference: https://www.d-poaf.org
D-POAF® Official Terminology v1.0
Status: Active
Date: January 7, 2026
License: CC BY 4.0
Publisher: D-POAF Community (initiated by Inovionix)
Reference: https://d-poaf.org/resources/D-POAF-Terminology-V1.pdf
ISBN: 979-10-415-8736-0
Legal deposit: Bibliothèque nationale de France (BnF), December 2025
Publisher: Inovionix
Authors: Azzeddine Ihsine & Sara Ihsine
Ihsine, A., & Ihsine, S. (2025).
D-POAF Framework: Decentralized Proof-Oriented AI Framework.
Inovionix.
https://www.d-poaf.org
ISBN 979-10-415-8736-0
@book{Ihsine2025DPOAF,
title = {D-POAF Framework: Decentralized Proof-Oriented AI Framework},
author = {Ihsine, Azzeddine and Ihsine, Sara},
year = {2025},
publisher = {Inovionix},
isbn = {979-10-415-8736-0},
url = {https://www.d-poaf.org},
note = {Canonical Specification v1.0}
}
A. Ihsine and S. Ihsine,
"D-POAF Framework: Decentralized Proof-Oriented AI Framework,"
Inovionix, 2025.
ISBN: 979-10-415-8736-0.
[Online]. Available: https://www.d-poaf.org
| Aspect | Traditional Agile | D-POAF |
|---|---|---|
| Legitimacy | Trust & authority | Verifiable proof |
| Decisions | Centralized (PO, SM) | Decentralized & evidence-driven |
| Governance | Static rules | Living, adaptive system |
| Validation | Subjective acceptance | Proof-based (PoD/PoV/PoR) |
| Traceability | Limited to deliverables | End-to-end (intent → outcomes) |
| AI Integration | Afterthought | Native, first-class |
| Accountability | Hierarchical | Distributed execution, humans remain accountable |
No. D-POAF is a complementary framework that can work alongside existing methodologies. It provides proof-oriented thinking, decentralized governance, and AI-native practices that enhance traditional approaches.
Yes! While D-POAF is designed for AI-native engineering, its principles of proof, evidence, and governance apply to any software project where trust and accountability matter.
D-POAF introduces new concepts (Waves, Proofs, Living Governance), but teams familiar with Agile will find many patterns recognizable. Start with the core principles and adopt incrementally.
No. D-POAF applies to any team size. Whether you’re working solo, in a small team, a growing company, or a large enterprise, you can adopt the proof-oriented principles that make sense for your context and scale them as your organization evolves.
D-POAF is a community-driven framework. We welcome contributions in several forms:
All contributions follow D-POAF’s own governance principles: evidence-driven, community-reviewed, and transparently documented.
This repository contains different types of content under appropriate licenses:
Why two licenses?
Copyright © 2025 Inovionix - Azzeddine IHSINE & Sara IHSINE
✅ Use D-POAF in your projects (personal or commercial)
✅ Modify and adapt to your needs
✅ Distribute and share
✅ Teach and train others
✅ Publish derivative works (with attribution)
See LICENSE-CC-BY and LICENSE-APACHE for full details.
D-POAF was created by Azzeddine Ihsine and Sara Ihsine.
With nearly a decade of experience each in software engineering, AI, and organizational design, we built D-POAF to address the fundamental challenges of AI-native software delivery.
This is a community effort, and we’re grateful to all contributors who help shape the future of software engineering.
“Keep it proof-first.”
In D-POAF, trust is grounded in verifiable proof, not authority.
D-POAF represents a fundamental shift in how we think about software delivery.
We’re not just building a framework. We’re building a movement toward more trustworthy, accountable, and intelligent software engineering.