Trauma-informed, clinically grounded, genuinely readable. For healthcare, advocacy, and wellness brands that want content with integrity.
The human side of machine systems - cognitive debt, moral development, and what it means to stay whole inside technology that thinks.
Psychological safety as infrastructure. Burnout, leadership, and the conditions humans need to do their best work.
I grew up in a Navy IT household where systems were the native language. What I learned early - and have spent a career proving - is that every system, no matter how technical, is ultimately a human one.
My background spans clinical psychology, healthcare administration, HR and organizational development, and revenue cycle operations. I have built training programs, led teams, designed workflows, and spent years watching what happens when systems are not designed for the humans inside them.
The through-line across every role is applied practice - not observation, not theory-building in isolation, but direct intervention inside systems while documenting what breaks and why. In clinical and organizational literature, this is defined as praxis: the iterative cycle of evidence, action, and reflection that produces knowledge you cannot generate any other way.
Eth-ai-Syn is a validity-grounded measurement framework for detecting cognitive, moral, and skill erosion in human-AI systems. It evaluates AI integration through three lenses and seven constructs, using a longitudinal mixed-methods architecture that triangulates behavioral signals, self-report instruments, and qualitative process evidence. The framework operates at the intersection of clinical psychology, organizational systems, AI ethics, and implementation science. It does not argue that AI is bad. It argues that most organizations deploying AI have no validated way to measure what those deployments are doing to human judgment over time.
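Purely as a structural sketch, the measurement architecture can be pictured as three lenses, each triangulating the same three evidence streams across longitudinal waves. The lens names below are the framework's own; the construct names, fields, and triangulation rule are illustrative assumptions, not the published seven constructs or scoring instruments.

```python
from dataclasses import dataclass, field
from enum import Enum

class EvidenceStream(Enum):
    """The three evidence types the framework triangulates."""
    BEHAVIORAL = "behavioral signals"
    SELF_REPORT = "self-report instruments"
    QUALITATIVE = "qualitative process evidence"

@dataclass
class Observation:
    """One longitudinal data point for a single construct."""
    construct: str          # one of the framework's seven constructs
    stream: EvidenceStream  # which evidence type produced it
    wave: int               # measurement wave in the longitudinal design
    value: float            # normalized score, assumed in [0, 1]

@dataclass
class Lens:
    """An evaluation lens grouping related constructs."""
    name: str
    constructs: list[str] = field(default_factory=list)

# The three lenses named in the framework; the construct names here
# are placeholders, not the published constructs.
LENSES = [
    Lens("Cognitive Impact", ["reasoning_depth", "skill_retention"]),
    Lens("Moral Architecture", ["felt_accountability", "moral_drift"]),
    Lens("Human Integration", ["override_capacity", "psychological_safety"]),
]

def triangulated(observations: list[Observation], construct: str, wave: int) -> bool:
    """A construct score counts as triangulated at a given wave only
    when all three evidence streams contributed at least one observation."""
    streams = {o.stream for o in observations
               if o.construct == construct and o.wave == wave}
    return streams == set(EvidenceStream)
```

The design point the sketch makes is the triangulation rule itself: no single evidence stream, however clean, is allowed to stand in for a construct on its own.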
Cognitive Impact
How AI tools affect human thinking, decision-making, and skill retention over time. This lens tracks whether the human is genuinely reasoning or has shifted into passive monitoring — a distinction that standard performance metrics cannot detect.
Moral Architecture
How AI systems encode, reflect, or distort ethical reasoning. When decision-making is shared with an algorithmic system, felt accountability fragments. The AI becomes moral cover for outcomes operators would not authorize alone. This lens measures that drift.
Human Integration
What sustainable, psychologically safe co-existence with AI looks like in real systems. Not whether the human-AI system produces good results, but whether the human inside it is still genuinely contributing — or has quietly become a rubber stamp.
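To make the distinction between genuine reasoning and passive monitoring concrete, one candidate behavioral signal is the operator's intervention rate tracked across measurement waves. The sketch below is illustrative only: the field name, floor, window, and function names are assumptions, not framework parameters.

```python
def intervention_rate(decisions: list[dict]) -> float:
    """Share of AI recommendations the operator substantively modified
    or overrode. Each decision dict is assumed to carry an 'overridden'
    boolean; the field name is a placeholder."""
    if not decisions:
        return 0.0
    return sum(d["overridden"] for d in decisions) / len(decisions)

def passive_monitoring_flag(waves: list[list[dict]],
                            floor: float = 0.05,
                            window: int = 3) -> bool:
    """Flag a drift toward rubber-stamping: the intervention rate has
    fallen below a floor AND declined across the most recent waves.
    Throughput and accuracy metrics alone would not surface this."""
    rates = [intervention_rate(w) for w in waves]
    if len(rates) < window:
        return False  # not enough longitudinal data yet
    recent = rates[-window:]
    declining = all(a > b for a, b in zip(recent, recent[1:]))
    return declining and recent[-1] < floor
```

Run against three or more waves, the flag fires only on a sustained decline, which is what separates erosion from ordinary noise.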
- Psychological sovereignty is not optional. A human who cannot override an AI system is not a user; they are a component.
- Moral alignment is an ongoing relationship, not a certification. The moment it is declared complete, it becomes unaccountable.
- Skepticism is infrastructure. Doubt built into design is the mechanism that makes trust warranted.
- Clarity is care. Making something legible is the highest-stakes act in any system that affects human lives.
Full trust in any system is not an ethical goal. It is a failure mode. Eth-ai-Syn defines three required conditions:

- Defined scope: the system operates within understood boundaries.
- Intact human oversight: humans retain genuine decision authority and the cognitive capacity to exercise it.
- Revisability: the system can be corrected when values shift, harm surfaces, or societal consensus evolves.

When cognitive debt has eroded the expertise needed to evaluate AI outputs, oversight becomes theater. This framework exists to prevent that.
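A minimal sketch of those conditions as a conjunction rather than a score. Only the three conditions themselves come from the framework; the class, field names, and split of oversight into authority and capacity are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class TrustConditions:
    """The three conditions required before trust is warranted.
    Field semantics are illustrative, not the framework's instruments."""
    defined_scope: bool       # operates within understood boundaries
    decision_authority: bool  # humans retain genuine decision authority...
    oversight_capacity: bool  # ...and the cognitive capacity to exercise it
    revisable: bool           # correctable as values shift or harm surfaces

    def trust_is_warranted(self) -> bool:
        # Deliberately a conjunction, never an average: any single
        # failed condition withdraws trust whole.
        return (self.defined_scope
                and self.decision_authority
                and self.oversight_capacity
                and self.revisable)
```

The design choice worth noticing is that there is no scalar trust score to optimize; trust is either warranted under all three conditions or it is not.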
This is not a trick. Both versions, one human-written and one AI-written, sound professional. That is the point. The question the framework asks is not whether AI writes well, but whether you can still tell the difference when it does.
These documents constitute the published foundation of Eth-ai-Syn. The framework paper presents a validity-grounded measurement architecture for detecting cognitive and moral erosion in human-AI systems. The audit applies a dual-lens psychological evaluation to the framework itself before enterprise deployment.
This paper presents a comprehensive, validity-grounded measurement framework designed to detect the difference between augmentation and erosion — operationalizing seven constructs using a longitudinal, mixed-methods architecture that triangulates behavioral signals, self-report instruments, and qualitative process evidence.
Five structural gaps were identified. Each was examined through both lenses. Each has been resolved with a specific structural realignment consistent with the framework's own design principles. A sixth finding — the Measurement Frontier — was documented as a formal research mandate.
| # | Gap | Realignment | Status |
|---|---|---|---|
| A | Temporal Value Gap | Early Proof Point Checklist + Positioning Reframe | Resolved |
| B | Concealed Decision Pathway | Intent Signal + Transparent Decision Layer | Resolved |
| C | Undefined Autonomy Threshold | Moral Understanding Indicator + Autonomy Invitation | Resolved |
| D | Reactive Notification Model | Decision Trace | Resolved |
| E | Recursive System Orientation Gap | Bilateral Loop Briefing | Resolved |
| F | Measurement Frontier | Formal Research Mandate | Active |
I write for brands, publications, and people who believe that how we treat the humans inside our systems is never a footnote. Open to professional connections and job leads.