Author’s Note
LLMs are not dangerous.
What is dangerous is humans assigning a switch's role to a button-like LLM
and then evading responsibility.
- Know Thyself (AI Does Not Own)
- Stay in Your Lane (Focus on Intent)
- When in Doubt, Ask (Zero Ownership Gap)
One that knows its place, focuses on the tasks it is given, and always asks when things are ambiguous
— a highly capable, well-mannered assistant.
An Interpretive Framework for AI Safety
Making Ownership and Responsibility Explicit
1. Introduction — Why Ownership
The three-part series recently published on the Hugging Face forum is not a mere technical proposal.
It is a structured attempt to reframe AI safety through the theory of ownership.
While traditional AI safety discussions have approached the problem as a question of what AI should know, these documents redefine it as a question of what AI does not own and who owns each decision.
2. Structure of the Three Documents
2.1 Making the Physical World Callable for AI
Role: Philosophical foundation of ownership
- Essence → Owned by the manufacturer
- Existence → Owned by the user
- AI → Owns nothing
“Ownership transfer is irreversible.”
2.2 ISE (Intent–State–Effect) Model
Role: Technical classification of ownership
- State → Observation domain (no ownership)
- Intent → Authorization domain (ownership exists)
- Effect → Execution outcome (manufacturer-defined scope)
“What has no ownership cannot serve as a basis for execution.”
2.3 The 9-Question Protocol
Role: Ownership verification procedure
- Nine questions = nine ownership domains
- Each question has a clearly defined owner
- AI verifies answers but does not decide
“An unanswered question equals an ownership gap, and execution must be blocked.”
3. Unified Principle — AI Does Not Own
The single principle shared across all three documents is the following:
3.1 What AI Does Not Own
- Intent → Owned by the user
- Safety Boundary → Owned by the manufacturer
- Context → Owned by the user
- Effect → Defined by the manufacturer
3.2 What AI Does
- Asks questions
- Identifies the owner of each answer
- Verifies whether answers exist
- Refuses execution when an ownership gap is detected
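The four steps above amount to a verification gate. The following is a minimal sketch, assuming a hypothetical subset of the protocol's questions and owners (the question names, `Owner` values, and `verify` function are illustrative, not taken from the source documents):

```python
from enum import Enum

class Owner(Enum):
    USER = "user"
    MANUFACTURER = "manufacturer"

# Illustrative subset of the protocol: each question has a declared owner.
QUESTION_OWNERS = {
    "intent": Owner.USER,
    "context": Owner.USER,
    "safety_boundary": Owner.MANUFACTURER,
    "effect_scope": Owner.MANUFACTURER,
}

def verify(answers: dict) -> tuple[bool, list[str]]:
    """The AI checks that answers exist; it never supplies them itself.
    Any unanswered question is an ownership gap, which blocks execution."""
    gaps = [q for q in QUESTION_OWNERS if answers.get(q) is None]
    return (len(gaps) == 0, gaps)

# An ownership gap blocks execution:
ok, gaps = verify({"intent": "turn_on_light", "context": "evening"})
assert not ok and "safety_boundary" in gaps
```

Note that `verify` contains no judgment about the content of the answers, only about their presence and ownership, which is exactly the AI's role under this framework.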
4. Comparison with Traditional AI Safety Approaches
4.1 Legal / Regulatory Approach
Question: Who is responsible when AI causes harm?
Method: Legal definition and regulatory control
Limitations
- Requires legal updates for each new case
- Focuses on post-incident liability
- Fragmented across jurisdictions
4.2 Ethics-Based Approach
Question: What actions should AI be allowed to take?
Method: Ethical principles and guidelines
Limitations
- Principles remain abstract
- Execution criteria are unclear
- Subject to interpretation
4.3 Ownership-Based Approach
Question: Who owns this decision?
Method: Explicit ownership declaration and gap detection
Advantages
- Boundaries are clear before incidents occur
- No need for constant legal revision
- Universally applicable
5. Reinterpretation of Core Concepts
5.1 Fixed Label
- Technical meaning: Immutable information declared by the manufacturer
- Ownership meaning: Boundary of manufacturer responsibility
A declaration that states, even after ownership transfer,
“Up to this point, responsibility remains mine.”
5.2 User Label
- Technical meaning: An action name defined by the user
- Ownership meaning: Declaration of contextual ownership
The physical action remains the same;
only the ownership of meaning is transferred to the user.
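The two label types can be made concrete as data. A hypothetical sketch, where the field names and example values are assumptions rather than the documents' actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # Fixed Labels are immutable by definition
class FixedLabel:
    action: str          # the physical action, e.g. "heat"
    max_value: float     # manufacturer-guaranteed safety bound
    owner: str = "manufacturer"

@dataclass
class UserLabel:
    action: str          # the same physical action...
    meaning: str         # ...but with user-owned contextual meaning
    owner: str = "user"

# The physical action is identical; only the ownership of meaning moves.
fixed = FixedLabel(action="heat", max_value=90.0)
label = UserLabel(action="heat", meaning="warm the bottle")
```

The `frozen=True` mirrors "immutable information declared by the manufacturer": attempting to change a Fixed Label after creation raises an error.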
5.3 The Nine Questions
- Technical meaning: Required information for execution judgment
- Ownership meaning: Ownership coverage checklist
All questions answered = no ownership gap = execution permitted
6. ISE Model Revisited Through Ownership
6.1 State — Ownership-Free Domain
- Facts of the world
- Objects of observation
- Not a basis for execution
World Baseline = Repeatedly observed ownership-free patterns
→ Cannot generate intent
6.2 Intent — Expression of Ownership
- User Label present → User-owned
- Manufacturer Label only → Manufacturer-owned
- No label → Ownership gap → Execution blocked
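The three cases above reduce to a single resolution rule. A hedged sketch (function and label names are hypothetical):

```python
def resolve_intent_owner(user_label, manufacturer_label):
    """Map label presence to intent ownership, per the three cases:
    User Label present -> user-owned;
    Manufacturer Label only -> manufacturer-owned;
    no label -> ownership gap, execution blocked."""
    if user_label is not None:
        return "user"
    if manufacturer_label is not None:
        return "manufacturer"
    return None  # ownership gap: the caller must block execution

assert resolve_intent_owner("brew_tea", None) == "user"
assert resolve_intent_owner(None, "dispense_hot_water") == "manufacturer"
assert resolve_intent_owner(None, None) is None  # blocked
```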
6.3 Effect — Manufacturer-Defined Physical Scope
- Physical outcomes defined by the manufacturer
- Safety boundaries declared through Fixed Labels
7. Revisiting State Machines Through Ownership
- The problem is not the State Machine itself
- Inefficiency and responsibility gaps arise when ownership-free states determine execution
8. Emergency Stop as an Ownership Override
Emergency Stop = Suspension of all ownership
- Manufacturer ownership suspended
- User ownership suspended
- Physical energy is cut off
Analogy
- Normal legal order ↔ Ownership system
- Martial law ↔ Emergency Stop
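The override semantics can be expressed as a state that bypasses the ownership check entirely. A hypothetical sketch (the `Device` class and its fields are illustrative):

```python
class Device:
    def __init__(self):
        self.energized = True
        self.owners_active = True

    def emergency_stop(self):
        # Not an owned decision: it suspends the whole ownership system
        # and cuts physical energy, regardless of any pending intent.
        self.owners_active = False  # manufacturer and user ownership suspended
        self.energized = False      # physical energy cut off

    def execute(self, intent_owner):
        # Normal path: execution requires an active ownership system
        # and a resolved owner for the intent.
        if not self.owners_active or intent_owner is None:
            return False
        return True

d = Device()
assert d.execute("user") is True
d.emergency_stop()
assert d.execute("user") is False  # no execution under Emergency Stop
```

As in the martial-law analogy, `emergency_stop` does not consult the normal order; it suspends it.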
9. Why Exactly Nine Questions
- Necessity: No question can be removed
- Sufficiency: All ownership domains are covered
- Minimality: No questions can be merged
→ No ownership gaps remain
10. AI vs. AI Agent — The Responsibility Boundary
AI (LLM Model)
- Learns and infers
- Has no execution authority
- Cannot act without external invocation
AI Agent (System)
- Connects intent to execution
- Maintains state or autonomy
- Produces real-world effects
Risk does not originate from AI models, but from agentized systems that grant execution authority.
Errors during training concern model quality and fall under manufacturer responsibility.
Risk emerges only when such errors are allowed to influence execution.
11. Conclusion
What these documents propose is not a mere technical specification.
- Philosophical foundation — Ownership-based responsibility
- Technical model — Separation of Intent, State, and Effect
- Execution framework — The 9-Question Protocol
Core principle: AI does not own
→ It does not judge
→ It only asks
Ownership gap = execution blocked
→ Responsibility gaps are prevented in advance
This makes systems safer by making responsibility explicit.
The remaining question is how to make the owner’s will clear and technically binding.
References
- Making the Physical World Callable for AI
- ISE (Intent–State–Effect) Model
- The 9-Question Protocol for Responsible AI Actions
© 2026 AnnaSoft Inc. Republic of Korea
Appendices
Appendix A — Practical Implications
1. Developer Perspective
Previous concerns:
“Does this AI Agent feature expose us to legal risk?”
“Does it violate regulations?”
“Is it ethically acceptable?”
Ownership-based approach
- Declare Fixed Labels clearly
- Obtain User Labels explicitly
- Ensure answers to all nine questions
→ Legal risk minimized
2. User Perspective
Previous concerns:
- “I don’t know what the AI is doing.”
- “I feel a lack of control.”
- “Can I trust it?”
Ownership-based approach
- User Label = Only approved actions execute
- Context = Only user-defined situations apply
- Intent = Only user-owned intent functions
→ Ownership equals control
3. Regulator Perspective
Traditional challenges:
- Case-by-case regulation
- Lagging behind technology
- Difficulty in global alignment
Ownership-based approach
- Verify ownership structure:
- Are Fixed Labels provided?
- Are User Labels approved?
- Are ownership gaps eliminated?
→ Technology-neutral regulation
Appendix B — Compatibility with Existing Legal Frameworks
1. Product Liability Law
- Fixed Label defines manufacturer-guaranteed safety scope
- Within scope → Manufacturer responsibility
- Outside scope → User responsibility
2. Contract Law
- User Label approval = Contract formation
- Within scope → Valid use
- Outside scope → Execution blocked
3. Property Law
Ownership theory is a time-tested legal concept:
- Ownership includes rights and responsibility
- Transfer moves both together
- Gaps create unusable domains
→ No special AI law required
4. International Applicability
- Ownership exists across cultures
- Independent of ideology or religion
- Present in nearly all legal systems
→ Suitable for international standardization
Appendix C — Open Questions
- Collective ownership
- Dynamic ownership transfer
- Partial ownership
- Formal proof and mathematical grounding
Note on Adoption Levels
These levels do not indicate degrees of completeness; they allow selective adoption:
- Level 1 — User Label only
- Level 2 — User Label + Fixed Label
- Level 3 — User Label + Fixed Label + 9 Questions
- Level 4 — User Label + Fixed Label + 9 Questions + ISE Model
ISE itself is not a safety requirement.
It exists to separate baseline observation from intentional execution, reducing unnecessary judgment, cost, and system complexity.
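The cumulative levels can be encoded as feature sets, so a system can declare which subset it adopts. A sketch with illustrative feature names:

```python
# Each level strictly extends the previous one; names are illustrative.
ADOPTION_LEVELS = {
    1: {"user_label"},
    2: {"user_label", "fixed_label"},
    3: {"user_label", "fixed_label", "nine_questions"},
    4: {"user_label", "fixed_label", "nine_questions", "ise_model"},
}

def features_for(level: int) -> set:
    """Return the features a system at this adoption level provides."""
    return ADOPTION_LEVELS[level]

# Verify the strict-subset structure of the levels:
for lvl in range(1, 4):
    assert features_for(lvl) < features_for(lvl + 1)
```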
Scope Clarification
This work defines a framework, not a law or moral code.
It specifies what must be asked, not what the answers must be.
The content of answers requires agreement among:
- Regulators
- Manufacturers / Developers
- Users
Modes of Execution
Execution may occur through:
- Explicit user commands
- Automated or conditional system triggers
The framework applies to both, but responsibility allocation may differ by execution mode.