AI has been pitched as fate. Emily M. Bender and Alex Hanna argue it’s a marketing strategy. The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want is a brisk, incisive decoder ring for the current AI cycle, written to help readers separate what these systems do from what their makers say they do. It is more than a takedown of inflated claims; it is a primer on how language gets weaponized to influence policy, labor, and public perception, and a toolkit for pushing back with clarity and confidence.
The Core Argument
Bender, a computational linguist, and Hanna, a sociologist of technology, come at AI from first principles: what are these models, how do they work, and what does it mean to call them intelligent? They argue that much of today’s AI rests on statistical pattern prediction, not understanding, and that the leap from performance on benchmarks to blanket competence in social contexts is where the con begins. The book shows how technical abstractions get translated into grandiose stories about inevitability and progress, then traces how those stories justify data extraction, free labor from users and contractors, and regulatory outcomes that favor incumbents.
DesignWhine's Verdict
Summary
In a market drowning in AI theater, a good bullshit detector is worth more than another implementation guide. The AI Con delivers exactly that.
Bender and Hanna strip away the mystical language that vendors use to sell statistical pattern-matching as intelligence. They show how “learning” and “understanding” become Trojan horses for exploitation, then arm you with questions that cut through the demo magic. When your CTO starts talking about AI destiny, you’ll have the vocabulary to drag the conversation back to data sources, error rates, and who gets fired when things break.
The book is ruthlessly focused on power, not technology. That’s its strength and limitation. You’ll learn to spot AI snake oil but get less help building alternatives. Still, in 2025, skepticism is the scarcer skill.
Pros
Sharp, uncompromising takedown of AI hype
Practical questions for vendor evaluations
Accessible writing without academic bloat
Focus on often-ignored labor and power dynamics
Cons
Limited technical depth
Prescriptive sections feel underdeveloped
Instead of introducing one more sweeping manifesto, the authors stay grounded in case-style analysis. They take apart familiar claims (AI as sentient, AI as neutral, AI as destiny) and pair each with a practical way to interrogate it. When a vendor touts accuracy, they ask: on what data, against which baseline, with what error distribution, and in which real-world conditions? When a model is framed as a replacement for human judgment, they look at who is left holding the liability when it fails.
The book is strongest on rhetoric. Words like “learning” and “intelligence” are treated as metaphors that can be useful within research but corrosive in public discourse when taken literally. The authors explain how these metaphors blur boundaries, encourage anthropomorphism, and grant systems an authority they have not earned. Readers walk away with a language-audit mindset: swap mystical phrasing for operational description, and the stakes of deployment become clearer.
The Authors
Emily M. Bender is a professor of computational linguistics at the University of Washington, where she has spent over two decades studying how humans and machines process language. Her academic work spans syntax, semantics, and multilingual natural language processing, giving her deep technical insight into the systems she critiques. Bender gained broader recognition for coining the term “stochastic parrots” to describe large language models, arguing that statistical fluency shouldn’t be mistaken for understanding. Her position within the AI research community makes her criticism particularly pointed – she’s not an outsider throwing stones, but an insider questioning the field’s most ambitious claims about machine intelligence.


Alex Hanna is the director of research at the Distributed AI Research Institute and a former senior research scientist at Google, where she worked on AI ethics and algorithmic auditing before leaving to pursue independent research. Her background spans sociology, computer science, and digital activism, with a focus on how technological systems intersect with labor, power, and social justice. Hanna’s work consistently examines not just what AI can do, but who benefits from its deployment and who bears the costs. Her departure from Google reflects the book’s central tension: the difficulty of conducting critical AI research within the institutions that profit most from AI adoption.
What the Book Gets Right
The authors excel at demystifying AI without condescension. They explain model training and evaluation in plain language, then peel back the glossy sheen that marketing layers over limitations and failure modes. Just as important, they keep the human cost front and center. By following the labor that props up AI systems (the annotators, moderators, and service workers whose tasks are often invisible), the book reframes AI as a business model that distributes risks and rewards unevenly.
The guidance they offer feels immediately usable: questions and heuristics that can travel into vendor negotiations, procurement reviews, and policy deliberations. The prose helps too. It is nimble and occasionally wry, which keeps the subject intelligible and the argument sharp. Throughout, the book insists on distinctions that matter in practice: statistical fluency is not comprehension, correlation is not causation, and polished outputs are not guarantees of truth.
Where It Leaves Questions
The subtitle promises a roadmap for creating the future we want, and while the book points in the right direction, its prescriptive sections are more sketch than blueprint. Readers charged with implementation in sensitive or regulated environments may wish for deeper playbooks that cover process patterns, risk controls, and organizational change.
The argument can also flatten a sprawling field into a single silhouette. That move clarifies the political economy of AI, yet it sometimes glosses over meaningful differences among retrieval, classification, and generative systems, especially where narrow deployments with guardrails can be justified. Technical readers may also crave a tighter taxonomy of model limitations than the book’s examples provide. The authors choose accessibility over granularity, which suits a broad audience, but engineers will likely supplement with specialized literature.
Why It Matters
AI has entered a phase where demos stand in for due diligence, and where metaphors shape budgets and laws. The AI Con arrives as a needed counterweight. It teaches readers to slow the conversation to the level where accountability lives: data provenance, error profiles, domain fit, auditability, escalation paths, and the right to opt out. It also reframes adoption as a policy choice rather than a technological inevitability, which is the most valuable reframing a leader can bring to a roadmap discussion.
For Design Leaders, Technologists, and Product Teams
This book earns a place on the shelf of anyone deciding whether to buy, build, or ban. It sharpens the questions that matter in the room: What problem is being solved? What is the baseline without AI? What are the failure costs, and who bears them? What is the plan for redress when the system fails? Is there a simpler, non-ML solution that meets the need with fewer externalities? It also encourages teams to ask vendors for documentation beyond glossy benchmarks: datasets used, known hazards, evaluation in target contexts, monitoring plans, and sunset criteria.
Style and Structure
At about the length of a typical trade nonfiction work, the book reads quickly, with tight chapters that open with vivid claims and then peel them back. The prose is accessible enough for non-experts and pointed enough to be useful for practitioners who need a language to push back against hype without sounding reactionary. It is not a technical manual. It is an operating manual for public and organizational sensemaking.
As AI continues reshaping workplaces and policies, what questions do you think we should be asking that we’re not? Let us know in the comments.

Comments
I think it’s a mix. Some AI stuff is genuinely impressive (ChatGPT for brainstorming is solid) but the “AI will replace everyone” narrative is pure marketing drama. Most AI I’ve used feels like really good pattern matching, not the sci-fi intelligence everyone talks about. The hype definitely outpaces the reality.
You’re absolutely right for now. These tools keep getting better so who knows what we’ll have in 2-3 years, but the marketing is running like 5 years ahead of the actual tech and overselling everything.
100% exaggerated. I work in logistics here in Houston and we “upgraded” to an AI system last year that was supposed to revolutionize our supply chain. It’s basically a glorified spreadsheet with extra steps. We need MORE people to babysit the system now.
Thanks for sharing your ground experience! This is exactly the kind of real-world feedback we need more of. Your “glorified spreadsheet with extra steps” description is chef’s kiss 👌
My kids named our Roomba and get upset when it gets “stuck.” We’re all doing it.
Lol, yes! AI companies know exactly what they’re doing when they program their bots to say ‘I.’ Thanks for leaving a comment!