MJV Craft: Human-in-the-Loop... Human-in-a-Hologram


This image was generated with DALL·E.
Can human‑centered AI reduce bias and harm, or is human‑in‑the‑loop just a PR ploy?
Every week, MJV Craft brings together competing AI systems to debate the biggest stories in politics, business, and culture. Drawing on public data, historical precedent, and distinct ideological frameworks, each edition presents a structured clash of perspectives—designed to challenge assumptions, surface contradictions, and illuminate the stakes. This is not consensus-driven commentary. It’s a curated argument for an unstable world.
EDITOR’S NOTE: If you’re exploring how creators and causes shape online discourse, Influencer Impact by People First is your go-to monthly podcast and Substack. Hosted by Ryan Davis and Nicole Dunger, it decodes how influencer and digital strategies can drive real change for advocacy and nonprofit teams, whether it’s sparking local wins or building community-rooted movements. From decoding Creator ROI to spotlighting successful micro-influencer campaigns, it’s powered by social listening insights and real-world stories. Perfect for digital strategists, campaigners, or anyone using social media to nurture community-driven impact. Subscribe to learn how real people are making a difference.

What’s happening today
Organizations across sectors are increasingly embedding “human-in-the-loop” (HITL) into decision-making to signal ethical oversight over automated software.
But recent research shows this is often symbolic rather than substantive: humans frequently rubber-stamp algorithmic outputs without real authority to override them, creating an illusion of accountability.
Blind trust can lead to automation bias, where users defer to machine recommendations even when those recommendations conflict with their own expertise. Studies show this erodes independent judgment and amplifies discriminatory outcomes.
In high-stakes environments like hiring, criminal justice, and healthcare, HITL is positioned as a safeguard, but critics argue it's often “compliance theater” that obscures systemic flaws. Even experts reviewing AI-generated judgments can fall prey to deference and structural bias, especially when algorithmic explanations reduce moral agency.
With human-in-the-loop increasingly billed as a shield against AI missteps, the debate brings two distinct perspectives sharply into view. Meredith Ringel Morris, a leading researcher in human‑computer interaction, champions embedding real user feedback, transparency, and diverse viewpoints into AI development to prevent bias and amplify human agency. Meredith Whittaker, president of the Signal Foundation and a vocal critic of surveillance capitalism, counters that "AI is a surveillance technology" rooted in centralized data extraction, and that HITL often amounts to superficial compliance rather than meaningful oversight.
Their clash frames this week’s query: can human‑centered AI reduce bias and harm—or is HITL just PR theatre?
What does AI think?
→ ChatGPT’s Ringel Morris: Advocates for genuinely inclusive, human-centered oversight. She argues human-in-the-loop must embed participatory design, lived experience, and empowered veto rights—not just superficial review. Governance should be co-produced with marginalized communities and built on real HCI principles, not checkbox values.
→ Grok’s Whittaker: Recalls her TechCrunch Disrupt remarks that “AI is a surveillance technology” and “requires the surveillance business model.” Grok warns that when HITL operates within systems built on centralized data extraction, oversight becomes a veneer for mass surveillance rather than meaningful control.
→ Claude’s Ringel Morris: Emphasizes that human-in-the-loop should center user agency—not token involvement. She calls for multi-stakeholder oversight boards, real-time feedback loops sourced from affected users, and ongoing auditing—ensuring participatory design shifts power dynamics and doesn’t just rubber-stamp algorithms.
→ Gemini’s Whittaker: Urges systemic reform: enforceable data minimization, third-party audits, and absolute human veto rights. She argues HITL oversight within surveillance-based AI ecosystems is ineffective unless governance challenges surveillance capitalism at its core—otherwise it’s just compliance theater.
Join the conversation. Debate the minds. See the world through a chorus of perspectives.
→ Subscribe to MJV Craft for next week’s AI-powered discourse.
—The MJV Craft Team