Felipe Matos Blog

Time Names ‘AI Architects’ Person of the Year While Disney Invests US$ 1 Billion - Why These 24 Hours Reveal Artificial Intelligence's Definitive Turn to the Mainstream

December 12, 2025 | by Matos AI


The last 24 hours have brought a definitive milestone for artificial intelligence: for the first time, Time magazine named not one person, but a group of technology leaders as Person of the Year 2025. Jensen Huang (Nvidia), Sam Altman (OpenAI), Mark Zuckerberg (Meta), Elon Musk (xAI) and other “architects of AI” received a recognition that, in Time's own words, frames the technology as “the most important tool in the competition between great powers since the advent of nuclear weapons”.

On the same day, Disney announced a US$ 1 billion deal with OpenAI, licensing more than 200 iconic characters (Mickey, Marvel, Star Wars, Pixar) for AI video creation starting in 2026. And, adding further complexity to the scenario, El Salvador became the first country in the world to implement generative AI across its entire education system, bringing Elon Musk's Grok to more than a million students.

Meanwhile, in the financial markets, Oracle plummeted 12.8% after announcing a US$ 15 billion increase in AI infrastructure investment, rekindling fears of a “bubble” in the sector. And in Brazil, we learned that 50 million Brazilians already use generative AI, but with a deep socioeconomic divide.


Join my WhatsApp groups! Daily updates with the most relevant news in the AI world and a vibrant community!


What's going on? Why are these 24 hours so revealing? And what does this mean for companies, professionals and governments that are still deciding how to position themselves?

The Consecration of the “Architects”: When AI stops being a promise and becomes real power

Let's start with Time's recognition. It is not trivial that a century-old magazine, which has named Churchill, Gandhi, Martin Luther King and even Hitler (in critical historical contexts), chooses to celebrate a group of technology CEOs and scientists.

What does this mean?

Firstly, that AI is no longer seen as an experimental technology. It is now consolidated geopolitical, economic and social power. When editor-in-chief Sam Jacobs states that “no one has had a greater impact than the individuals who imagined, designed and built AI”, he is recognizing that these figures shape the present - not the distant future.

Secondly, there is an extraordinary concentration of power. Jensen Huang predicts that AI will make the global economy jump from US$ 100 trillion to US$ 500 trillion - a 5x multiplication of world wealth. Who controls this transition? A handful of American and Chinese companies. Who is absent? Europe, to a large extent. Latin America, almost entirely.

Thirdly, and perhaps most worryingly, Time also mentions the risks: lawsuits against OpenAI over allegations that ChatGPT contributed to the suicides of young people, such as Adam Raine, and the massive elimination of jobs.

This reminds me of a conversation I recently had with the CEO of a medium-sized company in Brazil. He said to me: “Felipe, we see this news, but it seems like science fiction. What do I do on Monday morning?”

The answer lies precisely in these 24 hours: science fiction has become Monday morning.

Disney and OpenAI: When Hollywood Embraces AI (and Sets the Rules of the Game)

Disney's agreement with OpenAI is historic for several reasons. First, obviously, for its value: US$ 1 billion over three years. But more important than the money is what it represents: the definitive validation of intellectual property as fuel for AI.

You see, until recently, the creative industry was in open war against AI companies. Artists, writers and musicians sued OpenAI, Midjourney and Google for the unauthorized use of their works to train models. Disney itself sent an extrajudicial notice to Google for precisely this reason, alleging large-scale copyright infringement.

And now? Disney signs a billion-dollar deal with OpenAI, licensing more than 200 characters for fans to create videos using Sora and ChatGPT. CEO Bob Iger was emphatic: the agreement prioritizes the “responsible use of AI, which protects the safety of users and the rights of creators”.

What has changed?

Simple: Disney has realized that it's better to dictate the rules than to fight the tide. Instead of suing to prevent, it monetizes. Instead of resisting, it sets the terms. This movement is a milestone: Hollywood, the epicenter of global culture, is saying “if you want to use our characters in AI, you pay - and pay well”.

And herein lies the strategic lesson for any company with valuable intellectual property: you can't ignore AI, but you can negotiate its entry. In my work with companies that own trademarks, patents or proprietary databases, I always reinforce: your greatest asset is not blocking AI, it's defining how AI uses what is yours.

Disney did just that. And earned US$ 1 billion in the process.

The Contrast with Google: Legal War While Closing Deals

Ironically, while Disney signs with OpenAI, it is simultaneously accusing Google of copyright infringement for training Gemini, Veo and Imagen on protected works without authorization.

This perfectly illustrates the fragmentation of the AI ecosystem: there is no monolithic block of “big techs”. There are alliances, trade wars, legal disputes and contradictory strategies going on simultaneously.

For Brazilian companies looking in from the outside, the question is: how to position themselves on this board? The answer depends on strategic clarity about your assets. Do you have unique data? Local market knowledge? Customer relationships that global giants can't replicate? Then you have bargaining power. If you don't, you need to build it - fast.

El Salvador Becomes a Global Laboratory: Education in the Hands of AI (and Elon Musk)

Now comes one of the most controversial pieces of news: El Salvador, under the government of Nayib Bukele, has announced a partnership with Elon Musk's xAI to deploy Grok in every public school in the country, reaching more than a million students.

This is the world's first national education program based entirely on generative AI. It's not a pilot. It's full scale.

The Salvadoran government already uses Nvidia technology in other areas and has implemented DoctorSV, a medical consultation system powered by AI. Now it's extending its bet to education, promising personalized learning for each student.

Sounds promising, right? But there are huge problems.

First, Grok has faced international criticism for biased or unreliable content, especially on controversial political issues. Making this technology the basis of national education is risky, to say the least.

Secondly, there is the question of digital sovereignty. El Salvador is handing over the intellectual training of its next generation to a private American company. Who educates the AI that will educate the children? What values are embedded in the algorithms? Who audits it?

Third, technological dependence. If El Salvador becomes totally dependent on Grok and, for whatever reason (commercial, political, technical), the system fails or is discontinued, what happens to the country's education?

That said, El Salvador is also doing something that few governments have the courage to do: experiment at scale. While most countries endlessly debate regulations and timid pilots, Bukele simply implements. Will it work? We don't know. But we will know very soon - and the rest of the world will learn from Salvadorans' successes and mistakes.

In my work with governments and educational institutions, I always reinforce: you can't wait for the perfect solution. But you can't act without safeguards either. The way forward is to experiment responsibly. This means robust pilot projects, continuous evaluation, radical transparency and, crucially, training for educators.

Oracle Plummets 12.8%: The Ghost of the “AI Bubble” Comes Back to Haunt Wall Street

While Disney and El Salvador bet billions on AI, the financial market has sent a warning signal. Oracle, the data infrastructure giant, plunged 12.8% after announcing that it will increase capital investments by US$ 15 billion for 2026, mainly for AI infrastructure linked to its agreement with OpenAI.

Why the fall?

Because investors are seeing a growing discrepancy between sky-high costs and profits that are slow to appear. Oracle is betting big, but its revenue forecast (US$ 16-18 billion for the third quarter) came in below analysts' estimates (US$ 19.4 billion).

Bank of America analysts were direct: there is a “problem with the investment curve”. In other words, companies are spending absurdly on AI, but the expected return is taking time. This raises the question that nobody wants to ask out loud: are we in a bubble?

I don't think it's a bubble in the classic sense. AI works. It generates real value. The problem is the speed and scale of investment versus the market's maturity. It's like building a 10-lane superhighway before there is a big city connected to it.

That doesn't mean that AI is a fraud. It means that timing and investment strategy are critical. Companies that spend billions without clear use cases and rapid revenue generation will suffer. Companies that invest with a focus on specific, measurable and scalable applications will thrive.

In my consulting work and in my immersive courses, I always emphasize: AI is not about spending more, it's about spending better. Start small, measure impact, scale what works. That's the difference between innovation and waste.

50 Million Brazilians Use AI - But the Inequality is Brutal

Now let's look at home. The ICT Households 2025 survey revealed that 32% of internet users in Brazil - about 50 million people - have already used generative AI.

That's impressive. Brazil ranks among the world leaders in the adoption of emerging technologies. But here's the problem: 69% of class A users have used AI, compared to only 16% of classes D and E.

This inequality is not just about internet access. It's about digital literacy, educational context and the ability to use the tool. A class A student using AI for school research gains advantages that accumulate. A class D student without access falls further behind.

Fábio Storino, the survey's coordinator, was direct: this difference in access can deepen inequalities, especially when AI is used for study. And 53% of users cited school or college research as a reason for use.

Here's the paradox: AI has enormous potential for democratizing knowledge - a personalized tutor for each student, regardless of where they live. But if access is unequal, it widens exclusion.

What to do?

Public policies on connectivity and digital education are essential, of course. But we also need civil society initiatives, companies and educational institutions. Projects that bring accessible AI (simple interfaces, content in Portuguese, relevant use cases) to vulnerable communities.

In my work with social impact startups, I see this happening on a small scale. We need to scale up. And fast.

63% of Brazilians Use AI to Write Personal Messages: Are We Outsourcing Our Thinking?

Another striking finding: a survey by Página 3 revealed that 63% of Brazilians have used AI to write personal messages, and 49% prefer consulting it rather than people when making decisions.

The title of the research is provocative: “More of the same”. The thesis is that AI is leading to homogenization of behaviour and the loss of authenticity. And 63% of respondents agree that people were more authentic in the past.

This makes me wonder: are we using AI as a cognitive crutch? Are we outsourcing the effort of thinking, feeling and expressing ourselves?

There are two sides to this. On the one hand, it's true that delegating too much to AI can stunt essential skills. If you always ask ChatGPT to write your messages, you will eventually lose the ability to articulate complex thoughts on your own.

On the other hand, tools have always shaped our thinking. Writing has changed the way we think. The calculator changed how we do math. GPS changed how we navigate. AI is doing the same - but at a much greater scale and speed.

The question is not whether AI will change the way we think. It's already changing. The question is: do we want this change to be conscious or passive?

In my mentoring work with executives and leaders, I always say: use AI as an amplifier, not a substitute. Use it to write a draft, but revise it and add your voice. Use it to research, but draw your own conclusions. Use it to automate the repetitive, but invest your time in the strategic and the creative.

Authenticity doesn't disappear because we use tools. It disappears when we stop exercising our judgment.

James Cameron Vows Not to Replace Actors - But Will Hollywood Follow His Example?

In the midst of all this turmoil, there was a voice of resistance: James Cameron has reiterated that he will not use generative AI to replace actors in the Avatar films, maintaining his commitment to human motion capture.

Cameron is in favor of using AI to optimize workflows, but not in the creative process. His films explore precisely human themes such as empathy and self-destruction.

This raises a fundamental philosophical question: what is the role of AI in art?

I tend to agree with Cameron. Art is, essentially, an expression of the human condition. If we completely automate creation, what are we expressing? Algorithmic efficiency?

But there are nuances. AI can be a creative tool in the hands of a human artist. Just as a paintbrush doesn't diminish the merit of a painting, a well-used AI can amplify the vision of a director, musician or writer.

The problem is when AI replaces human intention. When the algorithm decides the script, the dialogues, the emotions. Then we lose something essential.

Hollywood is in this dilemma. Disney embraces AI (with billion-dollar deals), but Cameron resists. Who's right? Perhaps both. Maybe the answer is context and purpose. AI for visual effects? Great. AI to replace the performance of an actor with decades of human experience? Questionable.

Meta Changes Strategy: From Open Source to Closed Models

And there's more. Mark Zuckerberg's Meta has radically changed its AI strategy. After the failure of the open Llama 4, it has redirected its focus to closed models, such as the ‘Avocado’ project, under the leadership of Alexandr Wang, an advocate of proprietary AI.

This represents a giant departure from the open source strategy that Zuckerberg defended as a way to compete with rivals in China and the US. Meta is investing US$ 600 billion in infrastructure, but the focus on ‘superintelligence’ has raised concerns among investors and regulators.

Internally, there is tension. Yann LeCun, a historic figure in AI at Meta, recently left because he no longer aligned with the company's strategy.

What does that mean?

It means that even the big techs are recalibrating their bets. Open source seemed to be the way forward. Now, with monetization under pressure, closed models are back on the table.

For startups and smaller companies, this is worrying. Open models like Llama were an opportunity to compete without depending entirely on paid APIs. If the trend is towards closure, the ecosystem will become more concentrated.

But there is also a strategic lesson: open source is not charity. It's a market strategy. Meta opened Llama to create an ecosystem, hinder competitors and position itself as a leader. When this strategy didn't generate the expected return, it changed.

Companies need to understand: technological dependence is a strategic risk. Diversifying suppliers, investing in internal capacity and participating in open source communities (where possible) are ways to mitigate this risk.

What These 24 Hours Teach Us: AI Out of the Lab and Into the Boardroom

Let's recap what happened in less than 24 hours:

  • Time honors the “architects of AI” as Person of the Year, signaling that technology is a consolidated geopolitical power.
  • Disney invests US$ 1 billion in OpenAI, validating intellectual property as the fuel for AI and dictating the rules of the game.
  • El Salvador implements generative AI on a national scale in education, becoming a global laboratory (with all the risks that entails).
  • Oracle plummets 12.8%, rekindling fears of a bubble and showing that huge investments with no clear return scare the market.
  • 50 million Brazilians use AI, but with brutal inequality that threatens to widen exclusion.
  • 63% of Brazilians delegate writing personal messages to AI, raising questions about authenticity and cognitive outsourcing.
  • Meta abandons open source and bets on closed models, concentrating power and increasing dependency.

What does all this mean?

It means that AI has definitively left the laboratory and entered the boardroom, the classroom and the living room. It's no longer an experiment. It's operational reality.

And this reality has deep contradictions:

  • Concentration of power versus democratization of access.
  • Billionaire investments versus fears of a bubble.
  • Mass adoption versus brutal inequality.
  • Promises of personalization versus the homogenization of behaviour.
  • Disruptive innovation versus technological dependence.

For companies, the question is no longer “should I use AI?”. It's “how do I use AI strategically, measurably and sustainably?”.

For governments, the question is no longer “should I regulate AI?”. It's “how do I regulate without stifling innovation, but also without allowing abuse and exclusion?”.

For professionals, the question is no longer “will AI replace my job?”. It's “how do I position myself as someone who uses AI as an amplifier, not as a substitute?”.

The Way Ahead: Responsible, Strategic and Inclusive Artificial Intelligence

In my daily work with companies, governments and startups, I see three fundamental challenges that these 24 hours have reinforced:

1. Strategic clarity

Many organizations are investing in AI because “everyone else is”. That's a recipe for waste. Oracle fell 12.8% for this very reason: investment without a clear return.

The right way is to start with simple questions: what specific problem am I solving? How do I measure success? What is the realistic cost-benefit? AI is not a silver bullet. It's a tool that needs a clear purpose.

2. Responsibility and Governance

Disney was smart to emphasize “responsible use of AI, which protects the safety of users and the rights of creators”. El Salvador should have done the same before handing over national education to Grok.

Governance is not an obstacle. It's protection. Companies that invest in AI without thinking about bias, transparency, privacy and social impact will face crises. We're already seeing this with lawsuits against OpenAI.

3. Inclusion and Empowerment

50 million Brazilians use AI. That's great. But the inequality is brutal. Democratizing access isn't just about technology, it's about education. It's about enabling people to use AI critically, creatively and ethically.

That goes for companies too. There's no point in deploying AI if your team doesn't know how to use it. The World Economic Forum predicts 170 million new jobs by 2030, and according to MIT, 95% of AI pilot projects fail for lack of human expertise.

Training is the missing link. And it's urgent.

Conclusion: AI's Most Strategic Moment is Now - and You Need to Decide How to Position Yourself

These 24 hours were not just another day of technology news. They marked the definitive moment when artificial intelligence moved beyond promise and entered the mainstream.

When Time names the “architects of AI” as Person of the Year, it is saying: these people shape the present. When Disney invests US$ 1 billion, it is saying: AI is a validated commercial product. When El Salvador puts AI in national education, it is saying: AI is public policy. When 50 million Brazilians use AI, we are saying: AI is everyday life.

But we are also seeing the contradictions: financial bubbles, brutal inequality, cognitive outsourcing, technological dependence, concentration of power.

AI's most strategic moment is not when everyone has mastered the technology. It's now, while there is still time to build solid foundations: a clear strategy, robust governance, genuine inclusion and massive training.

Companies that do this will thrive. Those that just ride the wave will pay the price (as Oracle's plunge has shown). Governments that regulate intelligently will protect citizens without stifling innovation. Those that regulate out of panic or omission will create technological deserts.

What about professionals? Those who learn to use AI as an amplifier of creativity, empathy and critical thinking will be indispensable. Those who delegate everything to AI will become redundant.

In my mentoring and immersive courses, I help executives, companies and institutions navigate exactly these issues: how to invest in AI with strategic clarity, how to implement governance without paralyzing bureaucracy, how to empower teams for the post-AI world.

Because, at the end of the day, AI doesn't define our future. Our choices in the face of AI define us.

And these 24 hours have made it very clear: it's no longer possible to postpone this choice.


✨Did you like it? You can sign up to receive 10K Digital's newsletters in your email, curated by me, with the best content about AI and business.

➡️ Join the 10K Community here

