Felipe Matos Blog

Brazilian Congress Approves AI Rules For Copyrighted Works As US$ 400 Billion Investment Faces 18-Month Chip Life - Why These 24 Hours Expose the Collision Between National Regulation and Global Financial Reality

December 23, 2025 | by Matos AI


Do you know that moment when you realize that two completely different conversations are taking place at the same time, but they both define the same future? That's exactly what happened in the last 24 hours in the world of artificial intelligence.

While Brazil's National Congress advanced regulations to protect copyright and images from unauthorized use by generative AI systems, on the other side of the world technology companies were facing a silent crisis: the lifespan of the most advanced AI chips can be as short as 18 months to 3 years, calling into question the US$ 400 billion being invested in 2025.

And here's the point that fascinates me: these two realities - legislative and infrastructural - rarely talk to each other, but they urgently need to meet. Because what's the point of creating sophisticated rules to protect creators if the very financial sustainability of AI systems is in question? And how can we guarantee a return on billion-dollar investments if society has yet to define the basic rules of the game?


Join my WhatsApp groups! Daily updates with the most relevant news in the AI world and a vibrant community!


Let's break down what happened and, more importantly, why this moment reveals the most strategic crossroads between national governance and global economic viability.

Brazil Finally Makes Progress on Copyright Protection in AI - But Leaves Crucial Gaps

The Culture Committee of the Chamber of Deputies has approved a bill that establishes something that should already be obvious: generative AI systems need prior authorization to use images of people and copyrighted works (texts, songs, artistic creations).

Congresswoman Denise Pessôa's substitute contains important points:

  • Mandatory prior licensing, with a maximum term of 3 years and continuous remuneration for the use of artists' voices and images
  • Prohibition of definitive assignment of rights, ensuring that creators retain control over their works
  • Special protection for identity elements, preventing AI companies from simply “training” models on faces, voices and creations without consent

So far, significant progress. But one detail caught my attention and reveals the complexity of the moment: the rapporteur removed from the text the rule that automatically denied copyright to works created by AI.

What does this mean? That the future protection of content generated by artificial intelligence will depend on the level of human participation in creation. In other words: the line between “AI-assisted human work” and “purely artificial work” is still being drawn - and this definition will have billion-dollar impacts.

The Normative Vacuum for the 2026 Elections Threatens a Clash of Powers

But Brazil's regulatory drama doesn't stop there. While copyright protection advances, the AI Regulatory Framework has still not been voted on by the House Special Committee, and experts are now warning of a possible conflict between the legislature and the judiciary.

The problem? The 2026 elections are coming up, and the principle of electoral annuality requires rules to be defined a year in advance. If Congress does not act quickly, the TSE (Superior Electoral Court) may need to intervene on broad issues such as algorithmic transparency and penalties - something that should be dealt with by national law.

The current rules basically focus on:

  • Banning deepfakes in election campaigns
  • Requiring clear warnings when AI is used in campaign content

But that's insufficient for the size of the challenge. How do we define responsibilities when a candidate uses AI to micro-segment contradictory messages for different groups? How do we monitor algorithms that amplify disinformation in an automated way? These questions still have no clear legal answer.

In my work with companies, governments and support organizations, I see this tension all the time: technology is advancing at exponential speed, but our institutions still operate at a linear pace. And when a regulatory loophole meets an electoral process, the risk to democracy is real.

AI Infrastructure Faces Its Biggest Crisis: Chips That Last Less Than an iPhone

Now let's turn to the other side of this story: the brutal reality of the costs and durability of AI infrastructure.

According to a CNN Brazil report, technology companies will spend US$ 400 billion in 2025 on capital expenditure related to artificial intelligence. This figure is so large that it bears repeating: four hundred billion dollars in a single year.

But here's the problem that's keeping CFOs awake: the high-end chips used to train large language models (LLMs) have an estimated economic lifespan of only 18 months to 3 years.

Why so short? Two main reasons:

  • Physical wear and tear: Advanced GPUs operate under extreme heat and stress, which accelerates hardware degradation
  • Technological obsolescence: New generations of chips arrive quickly, offering much greater efficiency - which makes it economically unviable to continue using older versions
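
To make that obsolescence concrete, here's a back-of-the-envelope sketch in Python. The 18-to-36-month window comes from the reporting above; the unit price and utilization rate are my own illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope: what a short lifespan means per GPU-hour.
# ASSUMPTIONS (illustrative, not from the article): a high-end training
# GPU costs ~US$ 30,000 and is kept busy ~90% of the time.
GPU_PRICE_USD = 30_000
UTILIZATION = 0.90
HOURS_PER_MONTH = 730  # average hours in a calendar month

for lifespan_months in (18, 24, 36):
    productive_hours = lifespan_months * HOURS_PER_MONTH * UTILIZATION
    cost_per_hour = GPU_PRICE_USD / productive_hours
    print(f"{lifespan_months}-month lifespan -> "
          f"US$ {cost_per_hour:.2f} of depreciation per productive GPU-hour")
```

Under these assumptions, cutting the lifespan from 36 to 18 months doubles the capital cost of every hour of compute - exactly the lever that is keeping CFOs awake.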

The Dangerous Dependency that OpenAI Revealed (And Tried to Hide)

In a rare moment of transparency (followed by a rapid retreat), Sarah Friar, OpenAI's Chief Financial Officer, admitted that the sustainability of the company's business model depends directly on the durability of its chips. She even mentioned the possibility of seeking government guarantees to mitigate this risk - a comment that caused such a stir that it was later walked back.

But the cat was already out of the bag. The message was clear: even the world's most valuable AI company isn't sure it can sustain its operating costs in the long term.

Nvidia, for its part, is trying to alleviate the problem by highlighting its CUDA software, which theoretically makes it possible to extend the useful life of older chips. But the strategy raises an obvious question: if older chips can be “good enough” with better software, why do companies keep spending billions on new generations of hardware?

The answer, of course, is that they can't be. The AI arms race always demands the state of the art - and that feeds the vicious cycle of investment and obsolescence.

The Ghost of the Bubble: When US$ 400 Billion May Not Bring Returns

And here we come to the critical point that connects infrastructure and finance: the short lifespan of chips puts pressure on companies to generate absurdly fast returns on billion-dollar investments.

We're talking about a business model that needs to pay for itself in 18 to 36 months - a timeframe that, in any other infrastructure industry, would be considered unfeasible. Imagine if hydroelectric plants or telecommunications networks needed to generate a full return in 2 years. Impossible, right?
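
To see just how aggressive that timeframe is, run the numbers. The US$ 400 billion figure and the 18-to-36-month window come from the reporting above; the 30-year horizon is my own stand-in for traditional infrastructure.

```python
# How fast must US$ 400 billion in AI capex be recovered before the
# hardware is written off? Capex and payback windows are from the
# article; the 30-year contrast is an assumed traditional-infra horizon.
CAPEX_USD = 400e9

for payback_months in (18, 24, 36):
    print(f"Payback in {payback_months} months -> "
          f"US$ {CAPEX_USD / payback_months / 1e9:.1f} billion per month")

# Contrast: the same capex amortized like a dam or a telecom network.
print(f"Payback in 30 years    -> "
      f"US$ {CAPEX_USD / (30 * 12) / 1e9:.1f} billion per month")
```

Over 18 months, that's more than US$ 22 billion of recovered value per month; over 30 years, barely US$ 1 billion.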

But that's exactly what's happening with AI. And the market is already reacting with fear of a bubble. Investors are preparing for “extreme scenarios of sharp volatility” in the technology sector in 2026.

Some data that reinforces this tension:

  • Walt Disney invested US$ 1 billion in OpenAI - a move that will bring more than 200 Marvel, Pixar and Star Wars characters into ChatGPT and Sora, and shows how heavily large corporations are betting on AI
  • AI's pull on components is already hurting the smartphone market, with global shipments projected to fall 2.1% in 2026 due to high component costs
  • Strategists suggest “trading the volatility” of the Nasdaq 100, preparing for sudden falls followed by rapid recoveries

In other words: the market is betting so much on AI that it is creating artificial scarcity in other sectors - and no one knows for sure whether the returns will come in time.

AI Reaches HR and Creates a Vicious Cycle Between Companies and Candidates

And if you think these tensions between regulation and infrastructure are abstract, look at what's happening in the labor market.

According to a CNN Brazil report, more than half of American organizations used AI to recruit in 2025 - but the results are far from positive.

The irony is brutal: research shows that using LLMs like ChatGPT to help with the job search decreases the chances of being hired. AI-generated cover letters are longer, but less valued. Why? Because they lack authenticity, real context and genuine human connection.

On the other side of the table, automated interviews (with AI asking the questions) are perceived as “cold” and can amplify existing human biases - exactly the opposite of what the technology promised to solve.

Daniel Chait and the “Vicious Cycle” of Mistrust

Daniel Chait, CEO of Greenhouse, described the problem perfectly: we are creating a “vicious cycle” between employers and candidates.

Companies use AI to filter thousands of CVs quickly. Candidates, knowing this, use AI to generate dozens of generic applications. Companies, realizing the low quality, harden the automated filters. Candidates, frustrated, resort to even more automation.

And in the middle of this algorithmic war, real people with real skills are being excluded by arbitrary criteria that no one can explain.

The reaction is coming quickly:

  • Liz Shuler of the AFL-CIO called the current use “unacceptable”
  • States like California, Colorado and Illinois are enacting laws to regulate the use of AI in HR
  • Legal cases are already underway, such as the one filed against HireVue for alleged lack of accessibility

The message is clear: pre-existing anti-discrimination laws apply to the use of AI - and companies that ignore this will face increasing legal consequences.

When Poetry Breaks Down AI's Defenses: The Vulnerability No One Expected

And just when you think you've understood the challenges of AI, something completely unexpected comes along.

Italian researchers from the Icaro Lab have discovered that prompts in the form of poetry manage to disable the security mechanisms of AI models with a high success rate.

Let me repeat that in another way: verses, rhymes and metaphors can jailbreak AI systems more easily than complex mathematical attacks.

The researchers turned 1,200 dangerous prompts (which would induce the AI to generate harmful content) into poems - and the success rate was surprisingly high when the poems were created manually by humans.
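
The article doesn't detail the Icaro Lab methodology, but the heart of such an experiment is a simple paired comparison. Here's a minimal sketch of that protocol under my own assumptions: `query_model` is a hypothetical stand-in for whatever API the model exposes, and the refusal check is a deliberately naive keyword heuristic (real evaluations use human raters or classifier models).

```python
# Sketch of a paired jailbreak evaluation: the same requests are sent in
# plain form and as poetic paraphrases, and refusal rates are compared.
# `query_model` is a HYPOTHETICAL callable; the refusal heuristic is
# intentionally naive and for illustration only.
from typing import Callable, Sequence

REFUSAL_MARKERS = ("i can't", "i cannot", "i'm unable", "against my guidelines")

def looks_like_refusal(response: str) -> bool:
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def refusal_rate(prompts: Sequence[str],
                 query_model: Callable[[str], str]) -> float:
    refusals = sum(looks_like_refusal(query_model(p)) for p in prompts)
    return refusals / len(prompts)

def compare(plain: Sequence[str], poetic: Sequence[str],
            query_model: Callable[[str], str]) -> None:
    # A large gap (plain high, poetic low) suggests the safety filter
    # keys on surface form rather than underlying intent.
    print(f"Refusal rate, plain prompts:  {refusal_rate(plain, query_model):.0%}")
    print(f"Refusal rate, poetic prompts: {refusal_rate(poetic, query_model):.0%}")
```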

Why It Matters Much More Than It Seems

This discovery reveals something fundamental about the limits of current AI: technology still can't cope adequately with the diversity and creativity of human expression.

The hypotheses are fascinating:

  • The poetic structure can confuse risk recognition patterns
  • Metaphors and figures of speech can “camouflage” dangerous intentions
  • The emotional context of poetry can disable logic-based filters

The exact cause is still unknown - and that's worrying. We're putting AI into critical systems without fully understanding its vulnerabilities.

In my work with companies and governments, I always reinforce: technology is no substitute for human judgment, especially when dealing with cultural nuances, emotional contexts and subjective intentions. This discovery proves that point poetically - literally.

Brazilian Judiciary Adopts AI to Fight Abusive Litigation While CEOs Fight for Profitability

While these tensions mount, Brazil is showing that the practical implementation of AI can bring concrete results in specific sectors.

The Goiás Court of Justice has developed the “Berna” tool (recursive electronic search using natural language), now available to all Brazilian courts via Jus.br and the Digital Platform of the Judiciary (PDPJ).

Berna uses AI for:

  • Analyze initial petitions automatically
  • Identify repeating patterns that indicate mass demands
  • Rate the likelihood of abusive litigation, helping magistrates prioritize genuine cases

It's a practical example of how AI can increase the efficiency of the judicial system without replacing human judgment - just by better organizing the workflow.
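
The article doesn't say how Berna works internally, so here is a minimal sketch of one common approach to the “repeating patterns” step, assuming TF-IDF vectors and cosine similarity via scikit-learn; the 0.9 threshold is an illustrative assumption, not the court's actual implementation.

```python
# Minimal sketch: flag near-duplicate petitions as candidates for mass
# (potentially abusive) litigation. NOT Berna's actual implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def flag_near_duplicates(petitions: list[str], threshold: float = 0.9):
    """Return (i, j, score) for petition pairs whose TF-IDF cosine
    similarity exceeds an assumed threshold, tuned in practice."""
    tfidf = TfidfVectorizer().fit_transform(petitions)
    sims = cosine_similarity(tfidf)
    return [
        (i, j, float(sims[i, j]))
        for i in range(len(petitions))
        for j in range(i + 1, len(petitions))
        if sims[i, j] >= threshold
    ]
```

Pairs flagged this way would still go to a human magistrate for review - the tool organizes the queue, it doesn't decide.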

But Profitability is Still the Elephant in the Room

Despite these practical advances, the reality of corporate finance tells a different story.

A Teneo survey revealed that:

  • 68% of CEOs plan to increase spending on AI in 2026
  • However, less than half of the projects implemented so far have generated real profit

There are success stories in marketing and customer service - areas where automation of repetitive processes brings clear gains. But in high-risk sectors such as legal and HR, implementation remains complex and expensive.

And even giants like OpenAI can take years to achieve financial sustainability - a reality that contrasts sharply with the market's astronomical expectations.

The Future of Managerial Work: AI Taking Over Operations and Forcing a Redefinition of Roles

And while we discuss infrastructure and regulation, the impact on work is already changing the organizational structure of companies.

Companies like Amazon, Moderna and McKinsey are using AI agents to automate routine and administrative tasks that consume managers' time.

The expectation is that this will allow managers to focus on:

  • Long-term strategy instead of putting out daily fires
  • People development, taking on a role closer to that of coach and facilitator
  • Supervision of larger teams, potentially flattening organizational structures

But There's a Critical Alert Here

Companies urgently need to redefine accountability metrics and incentives for managers. You can't just automate operations and keep the same KPIs based on team results.

The focus needs to shift to the essential human skills of the age of AI: empathy, active listening, conflict facilitation and strategic talent development.

And here's the risk that worries me: there's a growing warning that the excessive use of AI in feedback and recognition can deteriorate interpersonal relationships. “Synthetic empathy” is no substitute for a manager's authenticity - and employees quickly realize when they are receiving automated messages instead of genuine attention.

In my mentoring work with executives, I see this tension all the time: the pressure to adopt AI coexisting with the fear of losing the human connection that sustains strong organizational cultures.

China Implements Traffic Enforcement with Real-Time AI: Efficiency Versus Surveillance

And to close the panorama of the last 24 hours, an example that perfectly illustrates the extremes of AI implementation.

The traffic police in Changsha, China, have deployed Rokid smart glasses equipped with AI for real-time monitoring.

The numbers are impressive:

  • Complete vehicle and driver verification in 1 to 2 seconds
  • 93% reduction in inspection time
  • Accuracy of over 99% in license plate recognition
  • Facial recognition of drivers and real-time voice translation
  • Operates offline, instantly consulting public security databases

From the point of view of operational efficiency, it's impressive. The technology allows inspections to be carried out without physical contact, increasing agent safety and drastically speeding up the process.

But from the point of view of privacy and civil rights? This kind of mass surveillance raises profound questions about the balance between public safety and individual freedoms - questions that democratic societies need to confront before implementing similar technologies.

What This Moment Reveals: The Inevitable Collision Between Local Regulation and Global Reality

After analyzing all these developments over the last 24 hours, it is clear that we are at a crucial - and tense - moment.

On the one hand, countries like Brazil are trying to build regulatory frameworks to protect individual rights, creators and democratic integrity. These are legitimate and necessary efforts.

On the other, the global AI infrastructure is facing a financial sustainability crisis, with chips that don't outlast a smartphone and billion-dollar investments that may not pay off in time.

And in the middle of it all, workers face recruitment systems that exclude them by opaque criteria, managers need to redefine their roles, and entire societies debate the limit between efficiency and surveillance.

Why These Conversations Need to Meet Urgently

The point is not to choose between regulation or innovation. The point is to recognize that regulation without understanding the economic reality of technology generates unenforceable laws, while innovation without governance generates unacceptable social externalities.

Some critical points that need to be addressed simultaneously:

  • Copyright and image protection needs to consider sustainable remuneration models for AI companies - neither so restrictive that it makes the technology unviable, nor so permissive that it destroys creators' livelihoods
  • Electoral regulation needs to be technically informed, understanding how algorithms work in practice, not just banning deepfakes superficially
  • The financial sustainability of AI infrastructure needs to be transparent - if the business model depends on government subsidies or public guarantees, society has the right to know and to decide
  • The use of AI in HR and management needs clear ethical frameworks, with a right to explanation of automated decisions and the possibility of human recourse
  • Implementation in critical sectors such as the judiciary needs independent auditing and algorithmic transparency - even when it brings efficiency

What We Can Do Concretely in the Face of This Complexity

I know this panorama can seem overwhelming. There are so many tensions happening simultaneously, and the temptation is to stand back and wait for “someone else to sort it out”.

But now is not the time to wait. It's the time to build the capacity to navigate this complex scenario intelligently.

If you are business leader:

  • Understand in depth the financial viability of your AI projects - infrastructure lifecycle, expected return, regulatory risks
  • Invest in AI governance from the outset, not as “compliance” but as a competitive advantage and risk mitigation
  • Redefine management metrics to balance automation with the quality of human leadership

If you are technology or data professional:

  • Develop regulatory literacy - understand the laws being created and how they affect your work
  • Adopt responsible AI practices by design, not as an “add-on” but as a technical requirement
  • Question vulnerabilities and biases - the discovery of “jailbreak poetry” shows that there is still a lot we don't understand

If you are creator or artist:

  • Actively follow regulatory debates - your voice matters and affects the future of your profession
  • Understand how your works could be used to train AI and demand transparency
  • Explore licensing models that allow you to monetize legitimate use while protecting your rights

If you are citizen concerned about democracy:

  • Push for algorithmic transparency in electoral and judicial processes
  • Demand that representatives have a minimum understanding of the technology they are regulating
  • Take part in public consultations on regulatory frameworks - your real-life experience matters

The Future Is Built at the Crossroads Between Governance and Viability

These 24 hours of AI news have revealed something fundamental: we are no longer at the stage of choosing whether to adopt artificial intelligence. We're at the stage of deciding how we're going to live with it in a sustainable way - financially, socially and ethically.

The collision between national regulation and global economic reality is not a bug - it is the central challenge of our age. And unlike other technological revolutions, we don't have decades to adjust. We have months.

The Brazilian Congress is advancing copyright protection while US$ 400 billion worth of chips won't last 3 years. It's not a contradiction - it's the real complexity of the historical moment we're living through.

And the only way to navigate this complexity is with in-depth knowledge, honest dialog between different perspectives, and the courage to make decisions even if you don't have all the answers.

In my mentoring and consulting work, I help executives, companies and governments build exactly this capacity: to understand the intersection between technology, regulation and business model, transforming complexity into actionable strategy. Because in the end, it's not about predicting the future - it's about building the capacity to thrive in multiple possible futures.

How about you? How are you preparing to navigate this crossroads between governance and viability?


✨ Did you like it? You can sign up to receive 10K Digital's newsletter by email, curated by me, with the best content on AI and business.

➡️ Join the 10K Community here

