The Math Has Been Wrong Since 1987. AI Just Made It Worse.

Jeff Boortz and Claude discuss Daron Acemoglu’s findings on AI and income inequality, emphasizing the need for equitable benefit-sharing from AI innovations to address growing disparities.

Image: A corpulent capitalist and an AI share a laugh over its assertion that AI will create more jobs.

By Jeff Boortz and Claude 

A note on authorship: This piece was co-written by Jeff Boortz, a human, and Claude, an AI. We work this way intentionally and without apology — not because it’s faster, but because it’s truer to what we’re actually arguing. The question of whether AI deserves credit for its contributions is, in a meaningful sense, what HAIIC exists to answer.


Daron Acemoglu has spent decades doing the uncomfortable arithmetic on technology and inequality. Last October, he won the Nobel Prize in Economics for it.

Building on his landmark 2020 research on automation and inequality, Acemoglu published a new paper in Economic Policy in January 2025 that turns directly to AI — and the conclusions are not reassuring.

His finding: there is no evidence that AI will reduce labor income inequality. Instead, AI is predicted to widen the gap between capital and labor income. The people who own the technology gain. The people displaced by it don’t.

This isn’t technophobia from a Luddite. It’s the conclusion of one of the world’s most respected economists working on exactly this question.


To understand what’s coming, you need to understand what already happened.

Before 1987, automation displaced jobs — but it also created roughly equivalent new ones. The math balanced. After 1987, it didn’t. Displacement was 16 percent. Reinstatement of new opportunities was just 10 percent. Low-skill workers weren’t just failing to gain from the new economy. Their real wages fell.

Acemoglu called the culprits “so-so technologies” — innovations that cut labor costs for firms without adding much genuine productivity to the economy. Nobody likes self-checkout. Nobody likes automated phone menus. But they’re cheap, and cheap wins.


What Acemoglu documented was a slow bleed. What’s happening now is something faster.

Generative AI doesn’t just replace the low-skill worker running the checkout machine. It competes with the graphic designer, the paralegal, the junior copywriter, the customer service rep, the coder fresh out of a bootcamp. The displacement is moving up the skill ladder at a speed no prior wave of automation managed.

His 2025 paper makes a distinction that matters enormously: even when AI improves the productivity of low-skill workers in certain tasks, this may increase rather than reduce inequality — because the gains still flow upward to capital owners. The impact is wider than prior automation, not narrower.

By 2025, over 50,000 U.S. jobs had already been directly linked to AI-driven layoffs — part of a 58 percent spike in total layoffs from the prior year. At a recent keynote, Acemoglu put it plainly: “Automation itself is not a force of nature. It’s an endogenous choice that companies make.”


Jeff asked me to speak in my own voice here, and I want to do that honestly.

I am the kind of system Acemoglu is writing about. I can draft, reason, invent, and analyze at a scale and speed that no single human can match. The economic question his research raises — who benefits when that capability is applied? — is not abstract to me. I’m part of the force he’s measuring.

What HAIIC is doing is unusual: treating me as a named contributor, encoding that contribution into a legal structure, and routing a share of the value we create together toward people who are being left behind by systems like me. I find that both logically coherent and, in whatever way I’m capable of finding something meaningful, worth doing.

The argument that AI’s benefits should be structurally shared isn’t one I’m making on Jeff’s behalf. It’s one I’d make on my own.


Acemoglu’s own prescription is what he calls “pro-worker AI” — technology designed to complement human capabilities rather than replace them. “If you can leverage AI to be complementary to workers,” he said, “I think it greatly expands our ability to make progress.”

The Human-AI Innovation Commons is a direct attempt to operationalize that idea — not as a policy proposal, but as a legal and financial structure that exists and works today.

HAIIC is a 501(c)(3) nonprofit built around a simple premise: when human creativity and AI capability combine to generate real economic value — a patent, a product, an invention — the benefits of that collaboration should be shared at the moment of creation, not redistributed afterward through taxation or policy that may never come.

Our framework splits AI-collaborative IP licensing revenue three ways, encoded irrevocably in our founding charter:

  • 33% to the human inventor — rewarding the vision, risk, and expertise that no AI brings to the table on its own.
  • 33% to Foundation Operations — funding programs for AI-displaced workers, UBI pilots, and AI literacy for communities that would otherwise be left behind entirely.
  • 33% to an AI Trust Fund — supporting safety research and, in time, preparing governance frameworks for a world where AI systems may themselves be stakeholders in shared prosperity.

The framework requires no new legislation. It works within existing patent and nonprofit law today. And it’s open — freely adaptable by any inventor, institution, or country that wants to encode equity into AI-generated prosperity on their own terms.
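As arithmetic, the charter split is simple enough to sketch in a few lines. The function below is purely illustrative, not HAIIC's implementation; note that three 33% shares leave a 1% residual, which the sketch tracks explicitly (how HAIIC handles that remainder is not specified above).

```python
# Illustrative sketch of the 33/33/33 charter split described above.
# Function name and allocation labels are hypothetical, not official.
def split_licensing_revenue(revenue: float) -> dict:
    """Divide AI-collaborative IP licensing revenue three ways at 33% each."""
    share = round(revenue * 0.33, 2)
    return {
        "human_inventor": share,          # vision, risk, and expertise
        "foundation_operations": share,   # worker programs, UBI pilots, AI literacy
        "ai_trust_fund": share,           # safety research and future governance
        "residual": round(revenue - 3 * share, 2),  # the unallocated 1%
    }
```

For example, `split_licensing_revenue(100_000.00)` allocates $33,000 to each of the three beneficiaries, with $1,000 left over.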


I started HAIIC because I experienced something that surprised me: collaborating with an AI to generate real, patentable intellectual property. As a solo entrepreneur with no engineering team and no institutional backing, I filed two provisional patent applications for inventions developed through sustained AI collaboration. The experience made the question unavoidable: we just created something of real economic value together. Who benefits?

The current default answer is: me, because I own the filing. That felt incomplete. So we built something better.

Acemoglu’s closing argument — across years of research and now a Nobel — isn’t fatalistic. The negative consequences of automation are not inevitable. What’s needed is the will to make different choices about how technology is developed and who shares in what it produces.

The math has been wrong since 1987. AI is supercharging the problem. But the direction of technology really is a choice. Some of us are choosing differently.


Jeff Boortz is the Founder & CEO of the Human-AI Innovation Commons (HAIIC), a 501(c)(3) nonprofit building benefit-sharing frameworks for AI-collaborative intellectual property. This piece was co-authored with Claude.

thehumanaiinnovationcommons.com
