Bernie Has No Plan for an AI Takeover—So I Made One
What I wish he said + the solutions I’m thinking about instead.
I watched Bernie Sanders on Joe Rogan a few times this week and wanted to share some thoughts. I specifically want to focus on one big question he was asked repeatedly and couldn’t really answer.
If you haven’t seen the episode, I recommend it.
The question he kept dodging was: What happens to workers if there’s an AI takeover?
His answers were vague. He shuffled between ideas like the 4-day workweek and the obvious issue of universal healthcare. Which, to be fair, are important conversations. But they’re not actual solutions to mass AI displacement. A 4-day workweek and healthcare for all are great, but they won’t stop your job from being automated.
Let’s talk about that 4-day workweek idea for a second.
It was never meant to be an AI mitigation plan. It was a way to acknowledge the productivity gains workers have already created through telemeetings, word processing, automation, and efficiency tools. I touched on this in my TEDx talk about the productivity-pay gap where I focus on the fact that we’ve never been more productive, yet our wages haven’t kept up.
4-day workweeks were meant to give us more leisure time and protect time for:
Working moms
Burnout prevention and wellness
Seasonal slowdowns (I gave my team Fridays off this month because most of our clients are OOO in July anyway)
Disrupting the nonstop busywork machine we’re all stuck in
But if you ask employers to shrink the workweek at the same time they’re spending millions on AI to do the same tasks faster and cheaper, that’s not going to end well. Of course they’ll choose the for-profit option: cut expenses (people) and invest in AI (scalable, no benefits, no PTO).
Also, candidly, 4-day workweeks were so 2020. That was the perfect time to test this. I was a tech consultant then, and no one was doing real work. Everything was on pause. It would’ve been the least disruptive moment to try it out. But we didn’t. That window’s closed.
And to touch on universal healthcare for a second: yes, it’s a great idea. But when you don’t have a job because of AI, you can’t pay your rent with subsidized healthcare.
I’m focusing on Bernie here because I genuinely expected more from him. If anyone should have a solid answer for how to protect workers from being replaced by machines, it should be a socialist. That’s why I found his response so disappointing; I was expecting more vision and pragmatism. And honestly, it made me trust the political system even less. One side doesn’t want to regulate AI at all. The other side doesn’t know how, and doesn’t seem to be looking ahead.
So instead of just critiquing Bernie and doing nothing (which would make me no better than him), I’m using this post to lay out a few of my own ideas on how we can actually prevent an AI takeover.
1. AI Royalties
If your labor trains AI, you should get paid.
Right now, most employees unknowingly contribute to training AI systems through emails, documents, screen recordings, customer interactions, and more. That data becomes part of the foundation for tools that make the company more efficient… usually by eliminating your job.
So what would it look like if workers actually had rights over their own output?
Think of this like residuals in the entertainment industry. Actors and writers get paid when their work gets reused. I benefit from this in my own business: no other person or business can use my likeness without my permission. Workers should have the same rights, because their labor is being recycled and repackaged as “machine intelligence.”
What this could look like in practice:
Employers must disclose when internal data or output is used to train AI models
Workers can opt in or out, and set licensing terms
If used, they receive royalties or structured payments tied to value creation
Could be implemented via union contracts, personal employment agreements, or new data rights legislation
We’ve accepted that celebrities deserve compensation when their image or voice is used. The same logic applies to knowledge workers. Just because one’s output is less glamorous doesn’t mean it’s less valuable.
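To make the royalty mechanics concrete, here’s a minimal sketch in Python. Everything in it is hypothetical: the contribution scores would have to come from some kind of data-lineage tracking, and the royalty rate from a union contract or licensing terms. It’s an illustration of the math, not a real system.

```python
# Hypothetical sketch: split a royalty pool among opted-in workers whose
# output trained the model. Scores, rate, and value figure are placeholders.

def ai_royalties(value_created: float, rate: float,
                 contributions: dict[str, float]) -> dict[str, float]:
    """Split (value_created * rate) pro-rata by each worker's contribution share."""
    pool = value_created * rate
    total = sum(contributions.values())
    return {worker: pool * share / total
            for worker, share in contributions.items()}

# Example: $1M of value attributed to the model, a 2% negotiated royalty rate,
# and contribution scores from (hypothetical) data-lineage tracking.
payouts = ai_royalties(1_000_000, 0.02,
                       {"analyst_a": 120.0, "writer_b": 80.0, "support_c": 50.0})
print(payouts)  # {'analyst_a': 9600.0, 'writer_b': 6400.0, 'support_c': 4000.0}
```

The design choice that matters here is pro-rata attribution: the pool scales with the value the model creates, not with a flat fee, so workers stay tied to the upside of what their labor built.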
2. AI Ramp-Up Periods
Automation should be gradual.
We already have the WARN Act, which requires 60 days’ notice before mass layoffs, but it wasn’t built for AI. Companies are now restructuring entire departments under the guise of “productivity,” and doing it quietly by replacing people with internal AI tools.
An AI Ramp-Up Period would legally or structurally limit how quickly a company can displace workers through automation. This slows the bleeding and gives workers time to adjust, retrain, or find new roles.
What this could look like in practice:
A cap on how many workers can be displaced by AI per quarter (e.g. no more than 10%)
A requirement for companies to submit an AI displacement plan in advance, detailing which roles will be affected
Mandatory transition support, such as reskilling programs, redeployment opportunities, or compensation scaling with impact
Could be enforced through legislation, ESG frameworks, or internal company policies
This plan doesn’t block AI innovation. Instead, it focuses on controlling the velocity of change so it doesn’t crush people. If you can’t shut down a factory overnight, you shouldn’t be allowed to shut down a labor force overnight either.
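For what it’s worth, the cap idea reduces to a very simple check. Here’s a rough Python sketch, with the 10% figure and the “approved displacement plan” both standing in for whatever a real law or policy would actually specify:

```python
# Hypothetical sketch of the quarterly displacement cap described above.
# The 10% cap and the "approved AI displacement plan" are this proposal's
# assumptions, not existing law.

def may_displace(headcount: int, displaced_this_quarter: int,
                 requested: int, cap: float = 0.10,
                 plan_approved: bool = False) -> bool:
    """Return True only if an AI-driven role elimination stays within the cap."""
    if not plan_approved:           # no approved displacement plan, no layoffs
        return False
    allowed = int(headcount * cap)  # e.g. at most 10% of headcount per quarter
    return displaced_this_quarter + requested <= allowed

# A 500-person company that already displaced 40 roles this quarter
# could eliminate at most 10 more under a 10% cap.
print(may_displace(500, 40, 10, plan_approved=True))  # True
print(may_displace(500, 40, 11, plan_approved=True))  # False
```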
3. The AI Pension
Why reinvent the wheel when we can just go back to what worked?
AI in business capitalizes on years of human labor: the systems, workflows, and data it now automates were built by workers.
A traditional severance check doesn’t cover that kind of extraction. Which is where the AI Pension comes in.
This would be a dedicated fund that compensates workers who are laid off due to AI-driven role elimination. The pension could be government-administered, privately managed, or embedded into large company HR policies.
What this could look like in practice:
Companies pay into an AI Displacement Fund when AI tools are introduced that reduce headcount
Workers receive a monthly payout or lump sum if their job is eliminated due to automation
The amount is based on factors like seniority, job function, and proximity to the AI model’s development (i.e. did your work directly help train it?)
It can also include reskilling credits, financial coaching, and access to early retirement options
This is a safety net missing from the future of work. Since you trained your replacement (knowingly or not), the very least you deserve is a payout.
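Here’s a rough sketch of how a payout formula could weigh those factors. The base amount, the weights, and the “training proximity” score are all invented for illustration; a real fund would set them through legislation or collective bargaining.

```python
# Hypothetical sketch of an AI Pension payout. All weights and the
# "training proximity" score (how directly the worker's output trained
# the model that replaced them) are invented for illustration.

def ai_pension_payout(monthly_salary: float, years_of_service: int,
                      training_proximity: float) -> float:
    """Lump-sum payout scaling with seniority and training proximity in [0, 1]."""
    base = monthly_salary * 3                                 # floor: three months' pay
    seniority_bonus = monthly_salary * 0.5 * years_of_service # rewards tenure
    proximity_bonus = monthly_salary * 6 * training_proximity # rewards training the AI
    return base + seniority_bonus + proximity_bonus

# A worker earning $5,000/month with 12 years of service whose output
# directly trained the model (proximity = 1.0):
print(ai_pension_payout(5_000, 12, 1.0))  # 15000 + 30000 + 30000 = 75000.0
```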
I want to be clear: AI royalties, pensions, and ramp-up periods are all financial solutions. But they don’t address the deeper issue underneath all of this:
What happens to our sense of purpose when we lose our jobs to machines?
If you’ve followed me for a while, you know I talk a lot about our unhealthy attachment to work. Our culture still treats jobs like they’re the ultimate source of identity and worth. So when someone loses their career, especially in a trade or lifelong role, it’s not just about income. It’s about value, purpose, and dignity.
And no one seems to be talking about what happens when that’s ripped away by AI.
You can’t just tell a 50-year-old factory worker that his job is gone forever and expect him to sit at home and collect a check. That won’t end well for society.
Universal Basic Income alone doesn’t solve this.
There’s this narrative that UBI is a safety net, but it completely ignores how people psychologically process work loss. The idea that a flat monthly check can replace the identity, routine, and purpose someone built their life around is deeply flawed.
Especially when that check is tied to an institution like the Federal Reserve, which has proven it can’t be fully trusted to manage long-term economic stability for the average person. I’ve talked about this in my recent video on how its decisions have already left many Americans 13 years behind in wealth creation. If you missed it, you can watch here:
So yeah, handing people a check doesn’t guarantee anything. It doesn’t solve for purpose, agency, or even survival if inflation outpaces that payment.
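One quick illustration of that inflation point: a flat $1,000 monthly check quietly shrinks in purchasing power every year. A back-of-the-envelope sketch (the 4% rate is illustrative, not a forecast):

```python
# Real purchasing power of a fixed $1,000/month payment after t years
# at a steady inflation rate. The 4% figure is illustrative only.

def real_value(nominal: float, inflation_rate: float, years: int) -> float:
    """Deflate a fixed nominal payment by compounded inflation."""
    return nominal / (1 + inflation_rate) ** years

for years in (0, 5, 10):
    print(years, round(real_value(1_000, 0.04, years), 2))
# 0 1000.0
# 5 821.93
# 10 675.56
```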
And while we’re here: why are we letting tech billionaires run this conversation?
Why is Sam Altman the face of UBI? Why does Elon Musk pretend to be a national ethics board? Why is Bill Gates acting like the head of public health?
These people have the right to talk and share their input, but they shouldn’t be the ones setting the agenda for our social or economic future. They weren’t elected. They don’t speak for workers. And they’re not the ones who will feel the consequences of their own ideas.
Being brilliant at building AI doesn’t mean you understand how people survive it. And it definitely doesn’t make you qualified to lead society through the fallout.
P.S. I’ve been thinking a lot about how to separate real concerns from fear-based narratives around AI.
I’m aware of the valid criticisms (like its impact on climate, copyright, and jobs), but I’m also seeing a lot of exaggerated claims that lack context. Some of it needs to be challenged, not just repeated.
I’ll share more once I’ve had time to fully sort through it and actually offer something worth your time.