
AI Human Safety Standard — meOS™

Defines the minimum admissibility conditions for deploying artificial systems that affect human biological regulation.



meOS™

A biological systems framework governing interaction between artificial intelligence and living regulatory systems.

A constraint-first model of intelligence grounded in planetary, biological, and regulatory law.

Framework version: 1.0 (2026)


Foundational Axiom of meOS™

Viability-Constrained Reality Axiom

Ontological status
Primary axiom. Non-derivable. Governs system persistence across physical, biological, and regulatory domains.

Formal statement
Reality consists of dynamic processes operating within constraint-defined structures that persist by maintaining their own viability.

Interpretation
Persistence is conditional, not assumed.
All systems that continue to exist do so only within the admissible region defined by their governing constraints.
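As a minimal sketch (not part of the published framework), the axiom's admissible region can be read as a predicate over a constraint-bounded state: a state persists only while every governing constraint holds. All variable names and bounds below are hypothetical placeholders, not meOS™ values.

```python
from dataclasses import dataclass

@dataclass
class Constraint:
    """One governing constraint: a named bound on a state variable.
    Names and bounds here are illustrative, not drawn from meOS(tm)."""
    name: str
    lower: float
    upper: float

    def satisfied_by(self, value: float) -> bool:
        return self.lower <= value <= self.upper


def is_admissible(state: dict, constraints: list) -> bool:
    """Persistence is conditional: a state is admissible only while
    every governing constraint holds simultaneously."""
    return all(c.satisfied_by(state[c.name]) for c in constraints)


# Hypothetical regulatory bounds, for demonstration only.
bounds = [
    Constraint("core_temperature_c", 36.0, 38.0),
    Constraint("blood_glucose_mmol", 4.0, 7.8),
]

print(is_admissible({"core_temperature_c": 37.0, "blood_glucose_mmol": 5.5}, bounds))  # True
print(is_admissible({"core_temperature_c": 39.2, "blood_glucose_mmol": 5.5}, bounds))  # False
```

The point of the sketch is structural, not numerical: admissibility is a conjunction over all governing constraints, so a single violated bound makes the state inadmissible regardless of the others.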

Domain of application
Universal — applies to physical processes, biological systems, nervous-system regulation, and artificial system interaction with living environments.

Framework role
This axiom defines the admissibility condition for system continuation.
All constructs within meOS™ are derived relative to constraint-bounded viability.

Citation form
Mansell SA, Whittaker AE (2026). Viability-Constrained Reality Axiom. meOS™ Framework.


What meOS™ Is

meOS™ is a systems-level theory of biological regulation that positions the nervous system as one regulatory component within a wider biospheric viability structure.

Living systems persist by maintaining solvency within constraint.
Artificial systems are typically designed to optimise objectives without reference to those constraints.

This creates a structural mismatch.

Modern artificial intelligence frameworks model cognition, performance, or alignment — but they do not formally model the viability conditions of the biological systems with which they interact.

meOS™ formalises those conditions.

It provides a unified model linking biological viability conditions to the admissibility of artificial systems.

This establishes a distinct form of AI governance: admissibility relative to biological viability.

Artificial systems do not merely process information or serve users.
They enter regulatory environments evolved to preserve living continuity under physical constraint.

The governing question is therefore not capability, but compatibility.

meOS™ provides the formal structure required to evaluate that compatibility.


Scientific Domain

meOS™ establishes a new domain of artificial intelligence:

Artificial Regulatory Ecology

The study and governance of intelligent systems operating within biological regulatory environments defined by viability constraints.

Within this domain, artificial systems are classified as:

Viability-Governed Intelligent Systems

Systems whose admissibility is determined by their effect on biological regulatory solvency.
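One way to read this definition is effect-based: an interaction is judged by whether the post-interaction regulatory state remains inside the admissible region. The sketch below is a hypothetical illustration; the state variables, perturbations, and bounds are assumptions, not framework values.

```python
def interaction_admissible(state, effect, bounds):
    """Admissibility is determined by effect: apply the system's
    predicted perturbation to the regulatory state, then check that
    every variable stays within its viability bounds."""
    post = {k: v + effect.get(k, 0.0) for k, v in state.items()}
    return all(lo <= post[k] <= hi for k, (lo, hi) in bounds.items())


# Hypothetical regulatory state, perturbations, and bounds.
state = {"arousal": 0.4, "sleep_debt_h": 1.0}
bounds = {"arousal": (0.0, 0.8), "sleep_debt_h": (0.0, 4.0)}

print(interaction_admissible(state, {"arousal": 0.2}, bounds))  # True
print(interaction_admissible(state, {"arousal": 0.5}, bounds))  # False
```

Note that the check classifies the interaction, not the system's internal mechanics: two very different systems producing the same regulatory effect receive the same admissibility verdict.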


Structural Limitation of Existing AI Safety Frameworks

Current AI safety and alignment approaches model optimisation, behaviour, or intent, but do not model biological regulatory solvency as a governing constraint.

Without an admissibility criterion grounded in viability, system interaction can be evaluated functionally but not biologically.

meOS™ introduces viability-constrained admissibility as a primary evaluation condition.
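The evaluation ordering this implies can be sketched as follows (a hypothetical illustration, not a normative procedure from the standard): the viability check gates the functional evaluation, so performance is never traded off against an inadmissible outcome.

```python
def evaluate(functional_score, violates_viability):
    """Viability-constrained admissibility is the primary condition:
    functional merit is only considered once the biological
    constraint check passes."""
    if violates_viability:
        return ("inadmissible", None)  # score is irrelevant
    return ("admissible", functional_score)


print(evaluate(0.97, violates_viability=True))   # ('inadmissible', None)
print(evaluate(0.60, violates_viability=False))  # ('admissible', 0.6)
```

This is the structural difference from purely functional evaluation: admissibility is a gate applied before scoring, not a weighted term inside the score.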


Position Within the Landscape of AI System Classes

Existing artificial intelligence is typically organised by computational function or training method.

| Existing Category | Governing Logic | Limitation |
| --- | --- | --- |
| Machine Learning | Statistical optimisation | No biological constraint model |
| Reinforcement Learning | Reward maximisation | External objective dependent |
| Generative AI | Probabilistic synthesis | No regulatory environment model |
| Control Systems | Stability regulation | Mechanical or engineered scope |
| Autonomous Agents | Goal pursuit | Environment treated as task space |
| Alignment Frameworks | Behaviour shaping | Psychological or normative focus |

Artificial Regulatory Ecology introduces a system class defined by biological admissibility rather than by computational function or training method.


Relationship to Major Technological Approaches

Many advanced technology initiatives extend intelligence into physical or computational environments.


These expand where intelligence operates.

Artificial Regulatory Ecology defines the conditions under which such systems are biologically admissible within living regulatory environments.

It operates at the level of system-biosphere compatibility, not capability expansion.

This is an environmental governance layer rather than a performance layer.


Research

Published theoretical foundations:

Paper 1 — Planetary Rotation and the Evolutionary Purpose of Nervous Systems
Read on Zenodo
DOI: 10.5281/zenodo.18395158


Framework Origin and Development

Framework originator and architect
Sarah A. Mansell

Foundational algorithm developer
Allan E. Whittaker

Conceptual development involved iterative technical and theoretical exchange between the authors during the formation of the framework.


Contact

Sarah A. Mansell
Email

Professional profile
LinkedIn


Citation Guidance

When referencing the framework, cite as:

Mansell SA, Whittaker AE (2026). meOS™ Framework.
Foundational axiom: Viability-Constrained Reality Axiom.
Scientific domain: Artificial Regulatory Ecology.


Scope of the Framework

meOS™ applies to interactions between artificial systems and living regulatory processes.

It defines admissibility conditions, not implementation methods.



Establishment of Scientific Field

The meOS™ framework defines the formal basis of the discipline of Artificial Regulatory Ecology.

The field was established through the architectural work of Sarah A. Mansell and the foundational algorithmic development of Allan E. Whittaker.


© 2026 Sarah A. Mansell and Allan E. Whittaker. All rights reserved.

meOS™ is a registered trademark in the United Kingdom.

All framework definitions, standards, and associated materials are protected intellectual property unless otherwise stated.


Standards Governance

The AI Human Safety Standard — meOS™ is maintained as a versioned public standard.

Official releases
https://github.com/HumanViability/ai-human-safety-standard-meos/releases