# D-POAF® Framework

![D-POAF Version](https://img.shields.io/badge/D--POAF-v1.0-blue?style=for-the-badge) ![License](https://img.shields.io/badge/License-CC%20BY%204.0-green?style=for-the-badge) ![Status](https://img.shields.io/badge/Status-Frozen%20Canonical-success?style=for-the-badge)

**Decentralized Proof-Oriented AI Framework**

*A proof-oriented, evidence-driven framework for AI-native software engineering*

[📖 Canonical Specification](https://d-poaf.org/resources/D-POAF-Canonical-V1.pdf) • [📚 Resources](https://d-poaf.org/resources) • [💬 Discord](https://discord.gg/DMZMeHxzNd) • [🐦 Twitter](https://x.com/inovionix)

## 🌟 What is D-POAF?

D-POAF® (Decentralized Proof-Oriented AI Framework) is a proof-oriented, decentralized reference framework for AI-native software engineering. It defines a structured lifecycle model and foundational principles for designing, building, operating, and evolving software in human-AI engineering environments.

D-POAF grounds legitimacy, governance, and accountability in verifiable proof, sustained through end-to-end traceability of intent, decisions, actions, artifacts, proofs, and outcomes.
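
To make that traceability chain concrete, here is a minimal sketch (in Python) that models intent, decisions, actions, artifacts, proofs, and outcomes as linked records. The record shape, field names, and example identifiers are illustrative assumptions, not part of the canonical specification.

```python
# Illustrative sketch only: the record shape is an assumption, not a canonical schema.
from dataclasses import dataclass, field


@dataclass
class TraceRecord:
    """One element of the intent → decision → action → artifact → proof → outcome chain."""
    record_id: str
    kind: str          # "intent", "decision", "action", "artifact", "proof", or "outcome"
    contributor: str   # the human or AI contributor responsible for this element
    summary: str
    upstream: list = field(default_factory=list)  # ids of the records this one derives from


# Example: an artifact that stays linkable to its decision and originating intent.
intent = TraceRecord("INT-1", "intent", "product_owner", "Reduce checkout latency")
decision = TraceRecord("DEC-7", "decision", "ai_agent", "Cache pricing lookups", upstream=["INT-1"])
artifact = TraceRecord("ART-12", "artifact", "ai_agent", "pricing_cache module", upstream=["DEC-7"])
```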

### Why D-POAF?

AI-native engineering introduces decisions and changes that cannot be justified by hierarchy, central control, or performance claims alone. In hybrid human-AI environments, trust, value, and responsibility require:

| Traditional Approaches | D-POAF |
|---|---|
| Trust-based processes | Verifiable proof |
| Subjective validation | Evidence-driven decisions |
| Centralized authority | Decentralized governance |
| Static frameworks | Living, adaptive systems |
| Manual ceremonies | Proof-first engineering |

## 🎯 Core Principles

D-POAF is built on five foundational principles:

### 1. Proof Before Authority

A decision becomes legitimate when justified by explicit, verifiable proof. Hierarchy, automation, or performance alone does not establish legitimacy.

### 2. Decentralized Decision-Making

Decision authority is distributed across humans, AI, and systems, supported by explicit boundaries and escalation paths. Evidence sustains reviewability and prevents opaque concentration of control.
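
As a rough illustration, explicit boundaries and escalation paths can be written down as data that both humans and agents consult before acting. The decision types, role names, and escalation order below are hypothetical, not prescribed by D-POAF.

```python
# Hypothetical decision boundaries and escalation path; names and rules are illustrative.
DECISION_BOUNDARIES = {
    "dependency_upgrade":  {"may_decide": {"ai_agent", "engineer"}},
    "schema_migration":    {"may_decide": {"engineer"}},
    "production_rollback": {"may_decide": {"engineer", "sre"}},
}

ESCALATION_PATH = ["ai_agent", "engineer", "sre", "product_owner"]


def route_decision(decision_type: str, actor: str) -> str:
    """Return the actor entitled to take this decision, escalating along the path if needed."""
    rule = DECISION_BOUNDARIES.get(decision_type)
    if rule is None:
        return ESCALATION_PATH[-1]             # unknown decision types escalate to a human
    if actor in rule["may_decide"]:
        return actor                           # within the actor's explicit boundary
    start = ESCALATION_PATH.index(actor) + 1 if actor in ESCALATION_PATH else 0
    for candidate in ESCALATION_PATH[start:]:
        if candidate in rule["may_decide"]:
            return candidate                   # first actor up the path with authority
    return ESCALATION_PATH[-1]


print(route_decision("schema_migration", "ai_agent"))  # -> "engineer"
```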

### 3. Evidence-Driven Living Governance

Governance is embedded into workflows and evolves through evidence and outcomes rather than static controls. Rules, constraints, exceptions, and decision rights are maintained as an auditable, lifecycle-wide operating system.

### 4. Traceability as a First-Class Property

Intent, decisions, actions, artifacts, proofs, and outcomes remain linkable to context and contribution (human or AI). Traceability sustains reviewability, reproducibility, and accountability across system evolution.
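
One possible way to operationalize this property is an automated audit over trace records: every record should name a contributor and link back, directly or transitively, to an intent. The dict-based record format below is an assumption for illustration.

```python
# Hypothetical traceability audit; the record format (id / kind / contributor / upstream)
# is an illustrative assumption, not mandated by the framework.
def reaches_intent(record_id, by_id, seen=None):
    """True if the record links (transitively) back to an intent record."""
    seen = set() if seen is None else seen
    if record_id in seen:            # guard against cyclic links
        return False
    seen.add(record_id)
    rec = by_id.get(record_id)
    if rec is None:
        return False
    if rec["kind"] == "intent":
        return True
    return any(reaches_intent(up, by_id, seen) for up in rec.get("upstream", []))


def audit_traceability(records):
    """Flag records that lack a contributor or cannot be traced back to an intent."""
    by_id = {r["id"]: r for r in records}
    problems = []
    for r in records:
        if not r.get("contributor"):
            problems.append(f"{r['id']}: no human or AI contributor recorded")
        if not reaches_intent(r["id"], by_id):
            problems.append(f"{r['id']}: no path back to an intent")
    return problems
```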

### 5. Human Accountability Is Non-Transferable

Even with AI autonomy, humans retain explicit responsibility for decision boundaries, escalation rules, and outcome acceptance. Autonomy never abolishes accountability.
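
A small illustration of this principle is an acceptance gate that an AI agent cannot pass on its own: outcome acceptance is always recorded against a named human, together with the proofs that person reviewed. The field names are illustrative assumptions, not a canonical schema.

```python
# Sketch of an outcome-acceptance gate; field names are illustrative, not canonical.
from dataclasses import dataclass


@dataclass
class Acceptance:
    wave_id: str
    accepted_by_human: str     # accountability stays with a named person
    proofs_reviewed: list      # e.g. ["PoD-42", "PoV-17"]


def accept_outcome(wave_id: str, approver: str, proofs: list) -> Acceptance:
    """Record outcome acceptance; refuses to proceed without a human approver and proofs."""
    if not approver:
        raise ValueError("Outcome acceptance requires an accountable human approver.")
    if not proofs:
        raise ValueError("Acceptance must reference the proofs that were reviewed.")
    return Acceptance(wave_id=wave_id, accepted_by_human=approver, proofs_reviewed=proofs)
```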


## 📐 Canonical Model

D-POAF structures system evolution as a continuous proof-grounded cycle:

Intent → Decision → Execution → Evidence → Learning → Adaptation

### Lifecycle Macro-Phases

1. **Instruct & Scope**: Translates intent into a scoped instruction grounded in context, with explicit scope boundaries, initial hypotheses, and proof expectations.

2. **Shape & Align**: Refines and decomposes scope, aligns decision and investment logic, and prepares execution through prompt action design, risk/acceptance thresholds, and human↔AI delegation bounds.

3. **Execute & Evolve**: Executes prompt actions to produce artifacts, validates delivery through integration, validates outcomes through review, and sustains reliability through monitoring, producing and refreshing proofs over time.

### Waves: The Unit of Verifiable Progress

A Wave is the unit of verifiable progress. It traverses the three macro-phases to produce and refresh proofs (PoD/PoV/PoR). Based on evidence and operational signals, each Wave updates intent, governance, and delegation boundaries.
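
A minimal sketch of a Wave, assuming one object per Wave that tracks its current macro-phase and the proof references it produces or refreshes; the class layout is an illustration, not a canonical data model.

```python
# Illustrative Wave sketch; the class layout is an assumption, not a canonical model.
from dataclasses import dataclass, field

MACRO_PHASES = ("Instruct & Scope", "Shape & Align", "Execute & Evolve")


@dataclass
class Wave:
    wave_id: str
    intent: str
    phase: str = MACRO_PHASES[0]
    proofs: dict = field(default_factory=lambda: {"PoD": [], "PoV": [], "PoR": []})

    def advance(self) -> None:
        """Move to the next macro-phase; a Wave traverses all three."""
        i = MACRO_PHASES.index(self.phase)
        if i < len(MACRO_PHASES) - 1:
            self.phase = MACRO_PHASES[i + 1]

    def attach_proof(self, family: str, reference: str) -> None:
        """Record a PoD/PoV/PoR reference produced or refreshed by this Wave."""
        self.proofs[family].append(reference)


wave = Wave("W-3", "Reduce checkout latency")
wave.advance()                           # now in "Shape & Align"
wave.advance()                           # now in "Execute & Evolve"
wave.attach_proof("PoD", "ci-run-1842")  # link evidence produced during execution
```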


## 🔐 Proof Model

D-POAF defines three proof families that sustain trust and accountability:

### PoD (Proof of Delivery)

Evidence of intended behavior and technical alignment. Validates that the system behaves as specified.

### PoV (Proof of Value)

Evidence of outcomes and measurable impact. Validates that the delivered capability produces real value.

### PoR (Proof of Reliability)

Evidence of sustained quality, safety, and stability over time. Validates that behavior remains dependable as systems evolve.
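
The three families can be represented as typed records so that tooling can check which proofs a Wave still lacks. The fields below (evidence references, timestamp) are illustrative assumptions rather than a canonical schema.

```python
# Illustrative proof records; the fields are assumptions, not a canonical schema.
from dataclasses import dataclass
from enum import Enum


class ProofFamily(Enum):
    POD = "Proof of Delivery"      # the system behaves as specified
    POV = "Proof of Value"         # the delivered capability produces measurable value
    POR = "Proof of Reliability"   # behavior stays dependable as the system evolves


@dataclass
class Proof:
    family: ProofFamily
    wave_id: str
    evidence: list        # e.g. test reports, outcome metrics, incident and SLO data
    produced_at: str      # ISO 8601 timestamp


def missing_families(proofs):
    """Return the proof families a Wave has not yet produced or refreshed."""
    return set(ProofFamily) - {p.family for p in proofs}
```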


## 🏛️ Living Governance

Living Governance defines and continuously updates the system’s operating envelope through evidence.

Governance is not an external overlay; it is a continuous, adaptive, lifecycle-wide operating layer.
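
As a sketch of what "living" can mean in practice, the rule below keeps an auditable history of its revisions and is adjusted from observed evidence instead of being fixed up front. The metric, thresholds, and update policy are invented for illustration.

```python
# Hypothetical evidence-driven governance rule; thresholds and policy are illustrative.
from dataclasses import dataclass, field


@dataclass
class GovernanceRule:
    name: str
    threshold: float                              # e.g. maximum acceptable change-failure rate
    history: list = field(default_factory=list)   # auditable trail of revisions

    def revise(self, observed: float, rationale: str) -> bool:
        """Record the evidence; tighten when there is headroom, flag a breach for human review."""
        self.history.append({"threshold": self.threshold, "observed": observed, "why": rationale})
        if observed <= self.threshold * 0.5:
            self.threshold *= 0.8         # sustained headroom: tighten the operating envelope
            return False
        return observed > self.threshold  # True means the envelope was breached: escalate


rule = GovernanceRule("change_failure_rate", threshold=0.15)
breached = rule.revise(observed=0.04, rationale="last 3 Waves well under budget")
```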


## 👥 Roles in D-POAF

D-POAF defines horizontal, collaborative roles without rigid hierarchy.


## 🚀 Getting Started with D-POAF

### 1. Read the Canonical Specification

Start with the [D-POAF® Canonical Specification](https://d-poaf.org/resources/D-POAF-Canonical-V1.pdf) to understand the foundational concepts and principles.

### 2. Explore Resources

### 3. Implement Gradually

You can adopt D-POAF principles incrementally:
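
For example, a low-friction first step is to record a minimal Proof of Delivery from the CI checks you already run. The sketch below writes a proof record next to an existing test report; the file paths, field names, and directory layout are assumptions, not requirements of the framework.

```python
# Hypothetical CI step: emit a minimal Proof of Delivery record for the current commit.
# File paths and field names are assumptions; adapt them to your own pipeline.
import json
import os
import subprocess
from datetime import datetime, timezone

commit = subprocess.run(
    ["git", "rev-parse", "HEAD"], capture_output=True, text=True, check=True
).stdout.strip()

proof = {
    "family": "PoD",
    "commit": commit,
    "evidence": ["reports/junit.xml"],   # whatever your pipeline already produces
    "produced_at": datetime.now(timezone.utc).isoformat(),
}

os.makedirs("proofs", exist_ok=True)
with open(f"proofs/pod-{commit[:8]}.json", "w") as f:
    json.dump(proof, f, indent=2)
```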

### 4. Join the Community

Connect with other practitioners on [Discord](https://discord.gg/DMZMeHxzNd) or follow [@inovionix](https://x.com/inovionix) on X/Twitter.


## 📊 Use Cases

D-POAF applies wherever AI influences or contributes to the software engineering lifecycle.


## 📘 Official References

### Canonical Specification

**D-POAF® Canonical Specification v1.0**
- Status: Frozen Canonical Reference
- Date: December 5, 2025
- License: CC BY 4.0
- Publisher: D-POAF Community (initiated by Inovionix)
- Canonical reference: https://www.d-poaf.org

### Official Terminology

**D-POAF® Official Terminology v1.0**
- Status: Active
- Date: January 7, 2026
- License: CC BY 4.0
- Publisher: D-POAF Community (initiated by Inovionix)
- Reference: https://d-poaf.org/resources/D-POAF-Terminology-V1.pdf

### Framework Book

- ISBN: 979-10-415-8736-0
- Legal deposit: Bibliothèque nationale de France (BnF), December 2025
- Publisher: Inovionix
- Authors: Azzeddine Ihsine & Sara Ihsine


## 📚 How to Cite This Work

### APA

Ihsine, A., & Ihsine, S. (2025).
D-POAF Framework: Decentralized Proof-Oriented AI Framework.
Inovionix.
https://www.d-poaf.org
ISBN 979-10-415-8736-0

### BibTeX

    @book{Ihsine2025DPOAF,
      title     = {D-POAF Framework: Decentralized Proof-Oriented AI Framework},
      author    = {Ihsine, Azzeddine and Ihsine, Sara},
      year      = {2025},
      publisher = {Inovionix},
      isbn      = {979-10-415-8736-0},
      url       = {https://www.d-poaf.org},
      note      = {Canonical Specification v1.0}
    }

### IEEE

A. Ihsine and S. Ihsine,
"D-POAF Framework: Decentralized Proof-Oriented AI Framework,"
Inovionix, 2025.
ISBN: 979-10-415-8736-0.
[Online]. Available: https://www.d-poaf.org

## 🆚 D-POAF vs Traditional Frameworks

| Aspect | Traditional Agile | D-POAF |
|---|---|---|
| Legitimacy | Trust & authority | Verifiable proof |
| Decisions | Centralized (PO, SM) | Decentralized & evidence-driven |
| Governance | Static rules | Living, adaptive system |
| Validation | Subjective acceptance | Proof-based (PoD/PoV/PoR) |
| Traceability | Limited to deliverables | End-to-end (intent → outcomes) |
| AI Integration | Afterthought | Native, first-class |
| Accountability | Hierarchical | Distributed execution, humans remain accountable |

## ❓ FAQ

### Is D-POAF a replacement for Agile/Scrum/SAFe?

No. D-POAF is a complementary framework that can work alongside existing methodologies. It provides proof-oriented thinking, decentralized governance, and AI-native practices that enhance traditional approaches.

### Can I use D-POAF for non-AI projects?

Yes! While D-POAF is designed for AI-native engineering, its principles of proof, evidence, and governance apply to any software project where trust and accountability matter.

### What’s the learning curve?

D-POAF introduces new concepts (Waves, Proofs, Living Governance), but teams familiar with Agile will find many patterns recognizable. Start with the core principles and adopt incrementally.

### Is this only for large organizations?

No. D-POAF applies to any team size. Whether you’re working solo, in a small team, a growing company, or a large enterprise, you can adopt the proof-oriented principles that make sense for your context and scale them as your organization evolves.

### Where can I learn more?

Start with the [Canonical Specification](https://d-poaf.org/resources/D-POAF-Canonical-V1.pdf), browse the [resources page](https://d-poaf.org/resources), or ask questions on [Discord](https://discord.gg/DMZMeHxzNd).


## 🤝 How to Contribute

D-POAF is a community-driven framework. We welcome contributions in several forms:

### Framework Development

### Community Building

### Academic Research

All contributions follow D-POAF’s own governance principles: evidence-driven, community-reviewed, and transparently documented.


## 📜 License

This repository contains different types of content under appropriate licenses:

### Conceptual & Reference Documentation

### Implementation & Practical Guides

### Trademark

### Why two licenses?

Copyright © 2025 Inovionix - Azzeddine IHSINE & Sara IHSINE

### What You Can Do

- ✅ Use D-POAF in your projects (personal or commercial)
- ✅ Modify and adapt to your needs
- ✅ Distribute and share
- ✅ Teach and train others
- ✅ Publish derivative works (with attribution)

See LICENSE-CC-BY and LICENSE-APACHE for full details.


## 🌐 Community & Support


## 🙏 Acknowledgments

D-POAF was created by Azzeddine Ihsine and Sara Ihsine (Inovionix).

With nearly a decade of experience each in software engineering, AI, and organizational design, we built D-POAF to address the fundamental challenges of AI-native software delivery.

This is a community effort, and we’re grateful to all contributors who help shape the future of software engineering.


## 💭 Philosophy

> “Keep it proof-first.”
> In D-POAF, trust is grounded in verifiable proof, not authority.

D-POAF represents a fundamental shift in how we think about software delivery.

We’re not just building a framework. We’re building a movement toward more trustworthy, accountable, and intelligent software engineering.


### 🚀 Ready to Start?

**[⭐ Star this repo](https://github.com/inovionix/d-poaf)** • **[💬 Join Discord](https://discord.gg/DMZMeHxzNd)** • **[📖 Read the Specification](https://d-poaf.org/canonical)**
**Building trustworthy AI-native software, one proof at a time.**
Made with ❤️ by the D-POAF community