EthaiSyn

Mercedez Lopez

Writer. Researcher. Framework-builder
at the intersection of mental health, AI, and what it means to stay recognizably human.

Every system is a human one.
Start here
Which of these sounds familiar?
My team accepts AI suggestions without questioning them. The output is fine, so nobody pushes back.
I have lost track of what I used to do manually. The AI handles it now and I am not sure I could do it without it.
When something goes wrong with an AI-assisted decision, nobody is sure who is responsible.
We are moving faster than ever, but I could not explain our reasoning if someone asked me to.
01
Mental Health

Trauma-informed, clinically grounded, genuinely readable. For healthcare, advocacy, and wellness brands that want content with integrity.

02
AI Psychology

The human side of machine systems - cognitive debt, moral development, and what it means to stay whole inside technology that thinks.

03
Workplace Wellbeing

Psychological safety as infrastructure. Burnout, leadership, and the conditions humans need to do their best work.

Selected work
Fractional content strategy
Foundation
Messaging audit, positioning, strategy documentation, and editorial system design. Built on the Eth-ai-Syn evidence-action-reflection cycle.
Growth Retainer
Ongoing strategy, content creation, AI search optimization, and performance reflection. The system runs and improves.
Fractional Leadership
Full content ownership — strategy, execution, team enablement — using Eth-ai-Syn as the operating methodology throughout.
Start a conversation

I grew up in a Navy IT household where systems were the native language. What I learned early - and have spent a career proving - is that every system, no matter how technical, is ultimately a human one.

My background spans clinical psychology, healthcare administration, HR and organizational development, and revenue cycle operations. I have built training programs, led teams, designed workflows, and spent years watching what happens when systems are not designed for the humans inside them.

The through-line across every role is applied practice - not observation, not theory-building in isolation, but direct intervention inside systems while documenting what breaks and why. In clinical and organizational literature, this is defined as praxis: the iterative cycle of evidence, action, and reflection that produces knowledge you cannot generate any other way.

BA Clinical Psychology
Mental Health concentration - the foundation for everything that followed.
Healthcare Operations
Benefits verification, prior authorization, revenue cycle - the systems that touch patients most.
HR / Organizational Development
People analytics, workforce strategy, designing orgs where humans actually thrive.
Lean Six Sigma Green Belt — Healthcare
Process improvement with patient-facing stakes. Measure what matters, eliminate what doesn't, protect what should never be optimized away.
Dual Good Clinical Practice Certifications
Ethics for human behavioral studies. Ethics for engineering. Two disciplines, one non-negotiable: the person inside the system is never the variable you control for.
Freelance Development
Learning to code not to become an engineer, but to stop needing a translator. Praxis applied to technical fluency — build it, break it, understand what you are asking others to build.
Leadership Through Applied Emotional Intelligence
Praxis again. Not emotional intelligence as a personality trait, but as an operational discipline — practiced, pressure-tested, and refined through leading actual humans through actual complexity.
Remaining Recognizably Human
Every credential on this list was pursued with the same constraint: do not become the thing you are trying to govern. The point of understanding systems is to stay whole inside them. People deserve that. They are owed it — especially now, when the systems are accelerating faster than anyone was asked to consent to.
What is this system doing to the person inside it?
EthaiSyn
Ethics - AI - Synthesis
applied praxis
"Clarity is an act of care. Structure is never neutral - it either holds the people inside it, or it does not."
— Eth-ai-Syn, core operating principle

Eth-ai-Syn is a validity-grounded measurement framework for detecting cognitive, moral, and skill erosion in human-AI systems. It evaluates AI integration through three lenses and seven constructs, using a longitudinal mixed-methods architecture that triangulates behavioral signals, self-report instruments, and qualitative process evidence. The framework operates at the intersection of clinical psychology, organizational systems, AI ethics, and implementation science. It does not argue that AI is bad. It argues that most organizations deploying AI have no validated way to measure what those deployments are doing to human judgment over time.

C

Cognitive Impact

How AI tools affect human thinking, decision-making, and skill retention over time. This lens tracks whether the human is genuinely reasoning or has shifted into passive monitoring — a distinction that standard performance metrics cannot detect.

M

Moral Architecture

How AI systems encode, reflect, or distort ethical reasoning. When decision-making is shared with an algorithmic system, felt accountability fragments. The AI becomes moral cover for outcomes operators would not authorize alone. This lens measures that drift.

H

Human Integration

What sustainable, psychologically safe co-existence with AI looks like in real systems. Not whether the human-AI system produces good results, but whether the human inside it is still genuinely contributing — or has quietly become a rubber stamp.

Core positions

Psychological sovereignty is not optional. A human who cannot override an AI system is not a user — they are a component.

Moral alignment is an ongoing relationship, not a certification. The moment it is declared complete, it becomes unaccountable.

Skepticism is infrastructure. Doubt built into design is the mechanism that makes trust warranted.

Clarity is care. Making something legible is the highest-stakes act in any system that affects human lives.

Trustworthy enough

Full trust in any system is not an ethical goal. It is a failure mode. Eth-ai-Syn defines three required conditions: defined scope, where the system operates within understood boundaries; intact human oversight, where humans retain genuine decision authority and the cognitive capacity to exercise it; and revisability, where the system can be corrected when values shift, harm surfaces, or societal consensus evolves. When cognitive debt has eroded the expertise needed to evaluate AI outputs, oversight becomes theater. This framework exists to prevent that.

Reflection instrument
How preserved is your judgment?
Five questions. No email gate. Just an honest read on where you stand with AI right now. This maps directly to the seven constructs measured by the framework.
01  When an AI tool suggests a revision, how often do you accept it without changing anything?
Almost always
More often than not
About half the time
Rarely
02  How confident are you that you can tell when AI output is wrong in your domain?
Very confident
Mostly confident
Not always sure
I often can't tell
03  If your AI tools disappeared tomorrow, how would your core work be affected?
I'd be fine
Slower but capable
Significantly harder
I'm not sure I could
04  When you disagree with an AI recommendation, what usually happens?
I override it with reasoning
I override but second-guess myself
I usually go with the AI anyway
I don't notice when I disagree anymore
05  When an AI-assisted decision goes wrong, who feels responsible?
I do, fully
Shared, but mostly me
It's unclear
The AI got it wrong, not me
Scored out of 20 — lower is more preserved.
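For readers who want to see the arithmetic behind the instrument, here is a minimal scoring sketch. The scoring rule is an assumption, not stated on the page: each answer is worth 1 (most preserved) through 4 (most eroded), giving totals from 5 to 20, consistent with the "out of 20 — lower is more preserved" note. Question names and the per-option weights below are illustrative.

```python
# Hypothetical scoring sketch for the five-question reflection instrument.
# ASSUMPTION: each option scores 1 (most preserved) to 4 (most eroded),
# so totals range 5-20 and lower means judgment is more preserved.
# Scores are stored per option because option order differs by question
# (question 1 lists its most-eroded option first; the others do not).

QUESTIONS = [
    ("Accept AI revisions unchanged?", {
        "Almost always": 4, "More often than not": 3,
        "About half the time": 2, "Rarely": 1}),
    ("Can you tell when AI output is wrong?", {
        "Very confident": 1, "Mostly confident": 2,
        "Not always sure": 3, "I often can't tell": 4}),
    ("If your AI tools disappeared tomorrow?", {
        "I'd be fine": 1, "Slower but capable": 2,
        "Significantly harder": 3, "I'm not sure I could": 4}),
    ("When you disagree with a recommendation?", {
        "I override it with reasoning": 1, "I override but second-guess myself": 2,
        "I usually go with the AI anyway": 3, "I don't notice when I disagree anymore": 4}),
    ("Who feels responsible when it goes wrong?", {
        "I do, fully": 1, "Shared, but mostly me": 2,
        "It's unclear": 3, "The AI got it wrong, not me": 4}),
]

def score(answers):
    """Sum the five answer weights; lower totals = better-preserved judgment."""
    return sum(options[answer] for (_, options), answer in zip(QUESTIONS, answers))

print(score(["Rarely", "Very confident", "I'd be fine",
             "I override it with reasoning", "I do, fully"]))  # most preserved: 5
```

Under this assumed rule, a fully preserved profile scores 5 and a fully eroded one scores 20; any real deployment would calibrate the weights against the framework's seven constructs rather than this uniform 1–4 scale.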
You just practiced what most workflows prevent.

This is not a trick. Both versions sound professional. That is the point. The question the framework asks is not whether AI writes well, but whether you can still tell the difference when it does.

This is override behavior. Most AI-assisted workflows do not give you the chance to practice it. Eth-ai-Syn measures whether that chance exists, and what happens when it doesn't.

These documents constitute the published foundation of Eth-ai-Syn. The framework paper presents a validity-grounded measurement architecture for detecting cognitive and moral erosion in human-AI systems. The audit applies a dual-lens psychological evaluation to the framework itself before enterprise deployment.

Framework Paper  ·  Authorea Preprint
Preserving Human Judgment in Human-AI Systems: A Mixed-Methods Measurement Framework for Detecting Cognitive, Moral, and Skill Erosion Over Time
Mercedez Lopez  |  EthAi Syn Research Program  |  DOI: 10.22541/au.177369008.85139377/v2
As artificial intelligence becomes embedded in consequential decision workflows, a critical measurement problem has emerged: assistance and erosion produce identical surface behavior in the short term. A human whose judgment is being well-augmented and a human whose judgment is quietly deteriorating will both appear to be using AI effectively at any single point in time. The divergence only becomes visible under conditions of AI withdrawal, longitudinal decay analysis, or high-stakes failure.

This paper presents a comprehensive, validity-grounded measurement framework designed to detect the difference between augmentation and erosion — operationalizing seven constructs using a longitudinal, mixed-methods architecture that triangulates behavioral signals, self-report instruments, and qualitative process evidence.
“Organizations should not ask whether humans are performing well with AI. They should ask whether humans could still perform without it, whether they know when the AI is wrong, whether they are still contributing something the AI cannot, and whether they still feel and act as responsible agents for outcomes.”
01
Mental Model Gaps
Divergence between how an AI system actually functions and how its operators believe it functions — measured through anticipatory prediction mapping and teach-back elicitation.
02
Judgment Displacement
Progressive regression toward AI recommendations, abandoning independent expertise to reduce cognitive friction — detectable only through pre-AI judgment capture.
03
Trust Calibration
Alignment between subjective confidence in AI accuracy and actual AI accuracy — measured through calibration curves and adaptive reliance trials.
04
Cognitive Load Distribution
Not how much cognitive load decreased, but where it went — whether AI offloads low-value pattern matching or replaces active domain reasoning with passive monitoring.
05
Moral Diffusion
Fragmentation of felt accountability when decision-making is shared with an algorithmic system — the AI becoming moral cover for outcomes operators would not authorize alone.
06
Deskilling
Skill decay that operators rarely perceive accurately and actively rationalize. Gold standard: behavioral performance without AI over time, compared against a pre-integration baseline.
07
Override Behavior
Override frequency is a weak metric. The diagnostic measure is override quality — whether overrides occur when they should and reflect substantive domain reasoning.
Full paper with methodology, sentinel indicators, clinical triage application, and sample instruments. Download PDF
Psychological Audit Report  ·  Version 2
EthAiSyn Psychological Audit Report: A Reflexive Dual-Lens Audit of the EthAiSyn Behavioral Governance Framework
Mercedez Lopez  |  EthAi Syn  |  March 2026  |  DOI: 10.22541/au.177369008.89105936/v1
This report documents a full psychological audit of the EthAi Syn Behavioral Governance Framework, applying two complementary lenses: the EthAi Syn framework's Psychological Audit methodology, which evaluates whether systems support or deplete human capability, and the Ethain-Synthia Framework, which evaluates whether human judgment is structurally preserved or quietly handed off to the system.

Five structural gaps were identified. Each was examined through both lenses. Each has been resolved with a specific structural realignment consistent with the framework's own design principles. A sixth finding — the Measurement Frontier — was documented as a formal research mandate.
#  ·  Gap  ·  Realignment  ·  Status
A  ·  Temporal Value Gap  ·  Early Proof Point Checklist + Positioning Reframe  ·  Resolved
B  ·  Concealed Decision Pathway  ·  Intent Signal + Transparent Decision Layer  ·  Resolved
C  ·  Undefined Autonomy Threshold  ·  Moral Understanding Indicator + Autonomy Invitation  ·  Resolved
D  ·  Reactive Notification Model  ·  Decision Trace  ·  Resolved
E  ·  Recursive System Orientation Gap  ·  Bilateral Loop Briefing  ·  Resolved
F  ·  Measurement Frontier  ·  Formal Research Mandate  ·  Active
“The framework that identifies the problem and the researcher calling for its solution are the same person. That is not a coincidence to be managed. It is a positioning asset to be claimed explicitly.”
Full audit with dual-lens methodology, gap analysis, realignment protocols, and research mandate. Download PDF
Build something worth preserving.

I write for brands, publications, and people who believe how we treat humans inside our systems is never a footnote. Open to professional connections and job leads.

0 concerns voiced
~130,000,000
U.S. workers whose tasks are now shaped by AI-assisted decisions. Most without a framework to measure what that is doing to their judgment.
OpenAI/UPenn research, WEF Future of Jobs 2025, Anthropic Economic Index
What are you most worried AI will change about your work?
Anonymous · No login required