Yoshua Bengio Says All Jobs Will Be Eliminated As US Forges Strategic Alliance Without Brazil - Why These 24 Hours Reveal AI's Most Urgent Reality
December 24, 2025 | by Matos AI

Sometimes, reality knocks on the door with a force that disarms any easy optimism. In the last 24 hours, the world of Artificial Intelligence has delivered an intense combination of urgencies: from one of AI's creators admitting total regret to a new technological Cold War leaving Brazil out. And in the midst of all this, companies like Zendesk are already planning to make US$ 200 million from AI, while deepfakes of deceased people have sparked protests from grieving families.
We are no longer in the territory of promises. We're at the stage where AI is showing its brute strength - and its deep vulnerabilities.
I've been working with governments, companies and innovation ecosystems for years, and I can say that with peace of mind: these 24 hours concentrate the essence of the moment we are living in. It's no longer about “if” AI will transform everything. It's about how we'll deal with the consequences - and whether we'll have a seat at the table when the decisions are made.
Join my WhatsApp groups! Daily updates with the most relevant news in the AI world and a vibrant community!
- AI for Business: focused on business and strategy.
- AI Builders: with a more technical and hands-on approach.
Yoshua Bengio's Regret: When One of the Fathers of AI Says He Made a Mistake
Let's start with the news that shook the ecosystem: Yoshua Bengio, winner of the Turing Award and one of the three “founding fathers” of deep learning, said publicly that he regrets his work in creating AI. According to InfoMoney report (based on Fortune), Bengio claims that “all jobs will be eliminated” in the next five years.
It's not a question of “maybe”. Or of “cognitive jobs” versus “manual jobs”. Bengio is categorical: robotics and massive data collection will lead to the eventual replacement of physical jobs too. The threat is not a future one. It is already silently underway.
What struck me was the scientist's turning point: the emergence of ChatGPT. It was there that Bengio realized that the technology he had helped to create was not just a scientific breakthrough. It was a force with catastrophic potential for democracy and social structure. His regret does not come from technical failure. It comes from technical success without ethical or social anchoring.
Bengio founded LawZero, which focuses on safe AI aligned with human values. But he also makes a desperate plea to CEOs: “Stop the current work and talk to solve the problem.” He projects that, at the current rate, democracy itself could collapse in about two decades.
When one of the architects of technology asks us to pause, it's not alarmism. It's lucidity.
Why Repentance Matters to Business Leaders Today
I talk to executives every day who are implementing AI in their operations. The pressure is brutal: competitors are advancing, investors are demanding efficiency, and the promise of cost savings is too seductive to ignore.
But here's the issue that Bengio's speech highlights: are we really prepared for the systemic consequences of what we are implementing?
I'm not talking about compliance or governance checklists. I'm talking about looking in the mirror and asking: when we automate 40% of our workforce, what happens to those people? When our AI eliminates human intermediation, what will be the impact on the quality of relationships with customers, suppliers, communities?
In my work with companies, I always insist on one principle: before implementing AI, understand the social system you're in. Technology does not operate in a vacuum. It amplifies structures. If your company already has a culture of discarding people, AI will make it industrial. If your company values human development, AI can free up time for creativity and strategy.
The choice is not technical. It's moral.
Pax Silica: The New Technological Cold War That Left Brazil Out
While Bengio was sounding the ethical alarm, the Trump administration announced the Pax Silica, A strategic alliance with seven countries to create robust semiconductor and AI chains. According to the Power360, The countries chosen were Japan, South Korea, the Netherlands, the United Kingdom, Israel, the United Arab Emirates and Australia.
Brazil was left out.
Let's understand what this means. Pax Silica is a direct response to the virtual Chinese monopoly on rare earths and critical components. Japan, Korea and the Netherlands are essential for the production of advanced chips (Samsung, Sony, Hitachi, ASML - Europe's most valuable company). Israel brings expertise in digital warfare. The UAE brings strategic capital. Australia offers alternative raw materials.
What about Brazil? Even though we are producers of rare earths, we were left out of the first phase of the alliance. This brutally weakens our geopolitical bargaining power on an axis that will define the 21st century.
Why Being Left Out of Pax Silica Is a Strategic Problem for Brazilian Companies
You might think: “This is geopolitics. What does it have to do with my company?”
It has everything to do with it.
When the US forms a strategic alliance for semiconductors and AI, it's not just designing product flows. They are designing flows of knowledge, technical standards, security protocols, and privileged access to innovations. Brazilian companies that rely on AI infrastructure will have to operate in an environment where the rules have been written by others.
In my work with governments and support organizations, I see this clearly: countries that don't have a seat at the table don't just lose business; they lose decision-making sovereignty. And decision-making sovereignty, in the age of AI, is the difference between being an active player or a passive consumer market.
Brazil's exclusion from Pax Silica is not a judgment on our technical capacity. It is a consequence of political and diplomatic choices that have put us off the US strategic radar. But it is also a signal to the Brazilian private sector: either we build domestic capacity and relevant regional partnerships, or we will be dependent on those who have decided without us.
Desperate Dependence: The US and Chinese Domination of Data Center Batteries
The geopolitical irony became even clearer in another news item of the day: despite the US's ambition to lead AI, the country is facing a “Desperate” dependence on Chinese batteries, This is essential for data centers and the Pentagon. Second Opera Mundi report (based on the New York Times), China will dominate 99% of LFP (lithium iron phosphate) cells by 2024.
Data centers consume billions of dollars in lithium-ion batteries as a backup against power outages that can corrupt AI codes. The Pentagon depends on Chinese chains for thousands of critical components, from drones to lasers. Experts say that building an independent industry will be extremely difficult and expensive due to environmental standards and refining complexity.
OpenAI has publicly acknowledged that electricity is a strategic asset. And the lesson of the war in Ukraine - where dependence on components revealed critical vulnerabilities - is being used to argue that technological sovereignty in AI requires control over energy and storage infrastructure.
What Battery Dependency Teaches About Invisible Infrastructure
I always say in my immersive courses: AI isn't just about software. AI is physics. It's energy, chips, batteries, fiber optics, water for cooling, and extremely complex global logistics.
When we talk about dependence on Chinese batteries, we're talking about something deeper than the supply chain. We're talking about strategic blackmail power. If China decides to restrict exports of LFP cells - as it has done with rare earths in times of tension - American (and global) data centers could face critical blackouts.
For Brazilian companies planning to invest heavily in AI infrastructure, the lesson is clear: diversify your energy sources and critical components. Don't pin your entire strategy on a single supplier or country. The geopolitics of AI are unstable, and anyone who depends on a single chain is vulnerable.
Global Investors Diversify Into Chinese AI, Fearing Wall Street Bubble
And speaking of geopolitics, an interesting piece of news has brought unexpected movement: global investors are increasing their exposure to Chinese AI companies, such as Alibaba (owner of the Qwen model) and chipmakers like Moore Threads and MetaX. According to the Earth, managers like Ruffer are limiting their exposure to American Big Techs and seeking diversification in China.
UBS Global Wealth Management has classified Chinese technology as “the most attractive” for its political support and rapid monetization of AI. While the US leads in cutting-edge innovation, China is closing the gap in engineering and manufacturing. The pressure of the US-China technology war is accelerating Chinese innovation.
But here's the important detail: some analysts warn that valuations of newly-listed Chinese chipmakers can be driven by hype, suggesting caution.
Bubble or Strategic Reallocation? What's Happening to AI Capital?
I've been watching this movement closely. It's not just an escape from a supposed bubble on Wall Street. It is a strategic reallocation of capital to where there is robust political support and the capacity for rapid execution.
China has an advantage that many underestimate: coordination between the state and the private sector for long-term technological goals. While American companies face quarterly pressure from shareholders, Chinese companies receive political support for decades-long investments.
For Brazilian investors and entrepreneurs, the lesson is: don't put all the eggs in the American basket. AI innovation is becoming multipolar. Follow what's happening in China, India, Israel and Europe. Strategic partnerships and cross-learning will be essential for those who want to compete globally.
Zendesk Forecasts US$ 200 Million in AI Revenue and Changes Model to “Resolution”
Now let's move on to a piece of news that shows how AI is already generating real, measurable value: Zendesk expects to reach US$ 200 million in Annual Recurring Revenue (ARR) from AI by the end of 2025, representing more than 10% of the total ARR. Second NeoFeed, At the end of the year, more than 20,000 customers already use its AI solutions.
But the most interesting thing is the change in the business model: charging now depends on the effective resolution of the customer's problem, rather than the volume of interactions. This change is already active in Brazil, which is the company's third largest market.
CEO Tom Eggemeier projects that in three years, direct interactions will decrease as AI-based personal agents take over decisions and negotiations. The company plans two more acquisitions in 2026 focused on increasing the automatic resolution rate.
Why the “Resolution” Model Is a Game-Changer for AI in Companies
I always say that AI without measurable results is corporate theater. Zendesk is doing something that every company should do: putting value metrics at the center of the business model.
When you charge for “resolution” instead of “interaction”, you align the incentive of technology with the real result the customer wants. It doesn't matter how many tickets were opened. It matters how many problems were solved.
This completely changes the game. Instead of inflating volume metrics (how many messages the AI processed), you focus on impact (how many satisfied customers left). And this forces the company to invest in quality AI, not empty scale AI.
In my consulting work, I help companies design exactly this kind of model: how to measure the real value generated by AI, and how to turn this into a measurable revenue model or cost reduction. It's the difference between eternal pilots and implementation that scales.
Deepfakes of Deceased Generate Protest: The Dark Side of Generative AI
Not everything in the last 24 hours has been about geopolitics or business models. A news story brought to light one of the most disturbing aspects of generative AI: hyper-realistic videos of deceased celebrities (Michael Jackson, Elvis Presley, Queen Elizabeth II) are being created with tools such as OpenAI's Sora, sparking protests from families and organizations. According to The Globe (AFP), the daughter of Robin Williams and the heirs of Martin Luther King Jr. condemned the use without authorization.
Following complaints, OpenAI has prevented the creation of videos with MLK Jr. but recognizes that the restrictions are not universal. Experts like Hany Farid criticize the proliferation of this “AI slop”, which amplifies distrust in real information.
The central discussion is “Posthumous Image Control” and the traumatic risk for relatives, as well as the impact on society's general trust in the veracity of the media.
Where do we draw the line between innovation and infringement?
This news made me particularly uncomfortable. Not because the technology exists - all powerful technology can be misused. But because the speed with which we normalize ethical violations is frightening.
When you create a deepfake of a deceased person, you're not just “playing with technology”. You're manipulating the memory of someone who can no longer defend themselves. You're causing real pain to real families. And you're contributing to an environment where no one trusts anything they see anymore.
I've been working with responsible AI for years, and one of the questions I always ask development teams is: just because you can, does that mean you should?
The answer isn't always obvious. But in cases like this, it should be. Respect for the dead and their families is not a technical issue. It's a question of basic humanity.
Poetry as a Weapon: Researchers Discover Literary Jailbreak in AIs
And speaking of vulnerabilities, a fascinating discovery has shown that poetry can disable AI safety mechanisms. Researchers at the Icaro Lab (Italy) have discovered that prompts formulated in the form of poetry can bypass jailbreaks. According to DW, By converting 1.2 thousand dangerous prompts into poems, the success rate in generating forbidden content was surprisingly high.
The poems created manually by the researchers were the most effective, outperforming those generated by AI. The hypothesis is that the unusual structure of poetry (verse, rhyme, metaphor) confuses AI, in the same way that mathematically calculated adversarial suffixes did, but without the mathematical complexity.
What Poetic Vulnerability Reveals About AI
I found this discovery absolutely brilliant - and worrying. It reveals something fundamental about how AIs work: they don't understand; they recognize patterns.
When you formulate a dangerous instruction in the form of poetry, the linguistic structure doesn't fit the patterns that the AI has been trained to block. Verses, rhymes, metaphors - all this creates a “layer of obfuscation” that allows forbidden content to go unnoticed.
What does this mean for companies? Security mechanisms based on pattern recognition are fragile by nature. Any human creativity - literary, cultural, linguistic - can get around them.
The solution is not to block all creativity. It's to recognize that AI needs layers of human supervision, especially in sensitive contexts. And that safety in AI is not a solved problem. It's a permanent problem.
AI “Berne” Arrives in Brazilian Courts to Identify Abusive Litigation
On the national scene, a concrete and significant application of AI has reached the courts: the tool Bern, developed by the TJ/GO, was made available to courts across the country via Jus.br and the Digital Platform of the Judiciary (PDPJ) under the Justice 4.0 initiative. According to Crumbs, Berna uses natural language processing (NLP) to analyze and group initial petitions, identifying patterns of repetition and possible misuse or abuse of justice.
The solution is agnostic to procedural systems and aims to automate steps, ensuring jurisdictional agility. This is the second initiative nationalized by the CNJ's Conecta project, validating the application of AI in complex government processes.
Why Bern is an Example of Well-Applied AI in the Public Sector
I often work with governments, and one of the biggest difficulties I see is the implementation of AI in complex bureaucratic processes. Bern is a rare example of AI that solves a real, measurable problem with a public impact.
Abusive litigation and mass claims overload the judiciary, delay legitimate cases and waste public resources. By automatically identifying patterns of repetition, Berna frees up judges' and civil servants' time for cases that really require human analysis.
Most importantly: Bern is system-agnostic. This means that it can be implemented in different courts without having to reconfigure the entire digital infrastructure. This interoperability is essential for scaling across governments.
In my work with support organizations and governments, I always reinforce: AI in the public sector needs to be designed for scale and inclusion. Berna is a good example of this.
What These 24 Hours Teach Us About AI's Current Momentum
So what can we take away from this whirlwind of news?
First: we are no longer in the land of promises. We're at the consequences stage. Yoshua Bengio is not speculating about the future. He's warning about the present, which has already begun.
Second: AI geopolitics is real and brutal. Brazil was left out of Pax Silica. The US depends on Chinese batteries. Investors are diversifying into China. The power configuration of the 21st century is being drawn up now, and those who don't have a seat at the table will be dependent on those who decided without us.
Third: AI already generates measurable value - and companies that understand this, like Zendesk, are redesigning entire business models based on resolution and impact, not empty volume.
Bedroom: ethics is not something you add later. Deepfakes of the deceased are not “side effects” of innovation. They are symptoms of a technology that has advanced without sufficient moral structure.
Fifth: AI safety is fragile. If poetry can jailbreak, imagine what malicious actors with resources can do.
And sixth: there are brilliant examples of well-applied AI, This is a great example of technology, such as Berna in the Brazilian judiciary, which shows that it is possible to use technology to solve real problems with a positive social impact.
What to do now? Practical Action for Leaders, Entrepreneurs and Public Managers
Whether you lead a company, a government or an innovation ecosystem, these 24 hours are not just news. They are signals for urgent action.
1. Reassess your AI strategy with a systemic lens. Before implementing more automation, ask yourself: what will the impact be on people? How will we deal with the transition? What ethical commitments will we make publicly?
2. Diversify your geopolitical dependencies. Don't put all your AI infrastructure in a single country or supplier. Geopolitics are unstable. Supply chains can be cut off. Energy is power. Batteries are power. Whoever controls it controls you.
3. Measure real value, not vanity metrics. How many decisions has your AI improved? How many real problems has it solved? If you can't answer that, maybe you shouldn't be scaling yet.
4. Implement layers of ethical supervision. Security in AI is permanent, not one-off. Vulnerabilities are constantly appearing. And don't just think about technical attacks. Think of cultural, linguistic and creative attacks.
5. Invest in domestic capacity. Brazil was left out of Pax Silica. But that doesn't mean we should accept it passively. We have talent, we have natural resources, we have a market. We need strategic coordination. Governments and the private sector need to talk seriously about digital sovereignty.
How I Can Help You and Your Organization Right Now
I dedicate my career to helping companies, governments and ecosystems navigate this complex moment of AI. Not with ready-made answers, but with practical frameworks, context analysis, and strategies that balance innovation with responsibility.
In my consulting and mentoring, I help executives and leaders to:
- Design AI strategies that generate measurable and sustainable value
- Identify geopolitical vulnerabilities and diversify dependencies
- Implement ethical and governance frameworks that do not hinder innovation, but rather direct it.
- Build internal AI capacity instead of relying forever on external consultancies
In my immersive courses, I work with entire teams to transform knowledge into practical action, connecting technology with business context and real social impact.
If you feel that your organization is chasing AI without strategic clarity, or is implementing technology without sufficient ethical framework, let's talk. This is the time to act with intelligence, not just speed.
Because, as Yoshua Bengio reminded us in these 24 hours: the time to pause and think is now. Not when the consequences are already irreversible.
✨Did you like it? You can sign up to receive 10K Digital's newsletters in your email, curated by me, with the best content about AI and business.
➡️ Join the 10K Community here
RELATED POSTS
View all
