Felipe Matos Blog

Fake AI Videos Flood the Net While Brazil Spends R$ 518 Thousand on European Course - Why These 24 Hours Reveal the Urgency of Real Digital Literacy

December 28, 2025 | by Matos AI


The last 24 hours have brought a stark contrast that exposes artificial intelligence's most critical moment: while AI-generated videos confuse millions of people about what is real on social media, Brazil invests half a million reais to take judges to Italy to learn about the technology. This paradox between massive misinformation and elite training reveals an uncomfortable truth about where we are on the AI journey.

The question: are we investing in the digital literacy of the right people?

The Flood of Deepfakes That No One Can Identify Anymore

According to a report published by O Globo (via The New York Times), the proliferation of AI-generated videos on social media has reached a critical point. An emblematic example happened on TikTok in October 2025: a completely fake video simulated an interview about the sale of food benefits. The women, the conversation, everything was generated by OpenAI's Sora application.


Join my WhatsApp groups! Daily updates with the most relevant news in the AI world and a vibrant community!


The most disturbing? Most viewers believed it was real. In the comments, people reacted with explicit racist attacks and virulent criticism of government assistance programs - just as Donald Trump was debating cuts to the SNAP program in the US.

I've been working with companies and governments for years helping to implement AI responsibly, and I can tell you: this kind of manipulation at scale is not dystopian science fiction. It's our Thursday afternoon reality.

Since the launch of Sora, misleading videos have exploded on TikTok, X, YouTube, Facebook and Instagram. The safeguards the platforms promised, such as requiring disclosure of AI use and banning misleading content, have proved completely insufficient in the face of the technological leap made by OpenAI's tools.

When Technology Serves Hostility

The use of these videos goes far beyond harmless memes. According to the report, they are being used to:

  • Foreign influence operations: Russia created fake videos of Ukrainian soldiers crying to demoralize the country
  • Incitement to religious hatred: In India, videos show the preparation of biryani with manhole water to attack Muslims
  • Political disinformation: A fake video about Jeffrey Epstein, with Trump's synthetic voice, was seen by more than 3 million people in a few days

Researcher Sam Gregory, executive director of Witness, got straight to the point: “Could they do a better job of moderating disinformation content? Yes, they're clearly not doing that”.

Darjan Vujica, a former member of the US State Department, summed up the problem: “The barrier to using deepfakes as part of disinformation has collapsed, and once disinformation spreads, it's difficult to correct the record.”

AI Pollution and Digital Fatigue

While deepfakes confuse people about what is real, another phenomenon is plaguing the internet: “AI slop” - low-quality and generally unwanted AI-generated content. According to Euronews, mentions of “AI slop” on the internet increased ninefold in 2025 compared to 2024.

The term gained so much prominence that it was named Word of the Year 2025 by Merriam-Webster and Australia's national dictionary. AI-generated articles already account for more than half of all English-language content on the web, according to SEO company Graphite.

Kate Moran, vice president of research at Nielsen Norman Group, identified the fundamental problem: there is “a lot of pressure to show shareholders: ‘Look, we've put AI in our product'”. This leads to technology-led design: you start with the tool and then try to find a problem it can solve, which is the opposite of how design should work.

The most recent example? In November, Meta launched the “Vibes” application in Europe, dedicated to short videos generated by AI. According to internal data seen by Business Insider, the app had only 23,000 daily active users in its first few weeks. France, Italy and Spain registered between 4,000 and 5,000 daily active users each. A disconcerting failure for a company that had warned against “unoriginal content”.

Brazil Between Two Worlds: Innovation and Capacity Building

Meanwhile, in Brazil, we see an interesting contrast. On the one hand, there are promising initiatives. According to Trade Journal, researcher Guilherme Cunha Lima has developed an AI system capable of analyzing hundreds of articles simultaneously with more than 90% accuracy, outperforming tools such as Deep Research (ChatGPT) and Elicit, which achieved between 70% and 80%.

The solution is based on the RAG (Retrieval Augmented Generation) architecture, but includes two new modules focused on auditing and automatic correction of statements and citations. This is the kind of Brazilian innovation we need to celebrate: technology that solves a real reliability problem.
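To make the idea concrete, here is a minimal sketch of a RAG pipeline with a citation-audit pass. The article does not describe Cunha Lima's actual implementation, so all names below are hypothetical, and the keyword-overlap retriever and template "generator" are toy stand-ins for a real vector index and LLM call; only the shape of the audit step is the point.

```python
# Hypothetical sketch of RAG plus a citation-audit module.
# Not the system described in the article; names and logic are illustrative.
import re
from dataclasses import dataclass

@dataclass
class Passage:
    doc_id: str
    text: str

def retrieve(query: str, corpus: list[Passage], k: int = 2) -> list[Passage]:
    """Rank passages by naive keyword overlap (stand-in for a vector index)."""
    terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda p: -len(terms & set(p.text.lower().split())))
    return ranked[:k]

def generate(query: str, evidence: list[Passage]) -> str:
    """Stand-in for an LLM call: emit an answer with [doc_id] citations."""
    return " ".join(f"{p.text} [{p.doc_id}]" for p in evidence)

def audit_citations(answer: str, evidence: list[Passage]) -> list[str]:
    """Audit module: flag cited doc_ids absent from the retrieved evidence."""
    cited = re.findall(r"\[([^\]]+)\]", answer)
    known = {p.doc_id for p in evidence}
    return [c for c in cited if c not in known]

corpus = [
    Passage("a1", "tomatoes are rich in lycopene"),
    Passage("a2", "lycopene intake is linked to heart health"),
    Passage("a3", "bananas are rich in potassium"),
]
evidence = retrieve("lycopene heart health", corpus)
answer = generate("lycopene heart health", evidence)
# A fabricated citation is caught by the audit pass:
problems = audit_citations(answer + " fabricated claim [x9]", evidence)
```

The design choice worth noting is that the audit runs after generation as an independent check, so a hallucinated citation can be rejected or corrected before the answer reaches the user.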

On the other hand, we have an issue that bothers me deeply. As reported by Lauro Jardim's blog, the Rio de Janeiro Court of Justice spent R$ 518 thousand to send 23 judges and the president of the TRF-2 to a course on “Law, Justice and Artificial Intelligence” at the University of Milan in Italy.

The course, which took place between December 1 and 5, totaled 25 hours of theoretical and practical activities. There are three more courses of similar value planned for 2026, all in Europe.

I'm not questioning the importance of judicial training in AI. I'm questioning the choice. Why spend half a million reais to take magistrates to Europe when we could create massive training programs here, using Brazilian experts - some of whom, like Guilherme Cunha Lima, are developing solutions that are more precise than global commercial tools?

The Urgency of Democratic Digital Literacy

The real problem is not AI itself. It's the brutal asymmetry of knowledge about it. While deepfakes deceive millions of ordinary Brazilians on social media, we invest in premium training for a judicial elite that could be trained locally.

Daniel Mügge, a researcher at the University of Amsterdam, made an insightful observation about the priorities of technology companies: they engage in a “race with each other”, betting everything on trying to beat OpenAI. This limits investment in AI that could solve concrete social problems. “We see that many investments in AI actually end up in applications that make society worse rather than better,” he said.

The same reasoning applies to public investments. We need to ask ourselves: are our resources being directed to democratizing knowledge about AI or to maintaining privileges of access?

The Sectoral Dilemmas We Can't Ignore

O Globo's report on AI in audiovisual production illustrates the complexity of the moment. While James Cameron defends AI as a means of reducing costs and speeding up processes - without making human artists obsolete - the Brazilian dubbing sector faces a gloomy scenario.

Fábio Azevedo, president of Dublar, denounces that dubbing artists are being approached to sell their voices to train AI systems. The Spanish movie “The Silence of Marcos Tremmer” on Prime Video was entirely dubbed by AI and received devastating reviews for basic errors and lack of feeling.

This is the kind of application that Mügge cited: technology that makes society worse, not better. Not because AI is bad, but because its implementation ignores the real human impact.

Regulation: Between Protection and Innovation

The regulatory debate has also advanced. According to JOTA, PL 2338/2023, led by Senator Rodrigo Pacheco, is based on the premise of affirming the centrality of the human person and the need for secure and reliable systems - bringing Brazil closer to risk-based models such as the European Union's AI Act.

The approach avoids both indiscriminate prohibition and absolute permissiveness. Algorithmic flaws already impact access to health, credit and the job market. Diagnostic systems are less accurate for black people, while credit granting models reproduce inequalities through apparently neutral variables.

But the effectiveness of any legal framework depends on the state's ability to oversee complex systems. Auditing algorithms, tackling the black box and keeping up with the speed of innovation require regulatory bodies with autonomy, resources and technical expertise. Without this, regulation risks becoming symbolic.

China, by the way, has already announced preliminary rules to regulate AI with human-like interaction, establishing an approach that requires providers to warn users against excessive use and intervene when they show signs of addiction. It's a paternalistic vision, but one that recognizes real psychological risks.

What Really Matters Now

These 24 hours of AI news have exposed an uncomfortable truth: we are living through a crisis of digital literacy on a civilizational scale. It's no use having Brazilian researchers creating solutions more precise than ChatGPT if millions of people can't tell a real video from a deepfake.

There's no point in spending half a million on European courses for magistrates if we don't invest in mass training for educators, communicators, health professionals and ordinary citizens who have to navigate this new reality on a daily basis.

Kate Moran is right to suggest that “boring” AI - that which improves the user experience without requiring interaction beyond reading, such as the summary of product reviews on Amazon - may be better in the long run than flashy tools. But we need to go further: we need national AI literacy programs that teach people to:

  • Identify signs of manipulation in videos and images
  • Question the origin and authenticity of content
  • Use AI tools productively, not just consume passively
  • Understand the algorithmic biases that affect their lives
  • Demand transparency and accountability from platforms

Meta's Bet on Closed Models

It is worth mentioning that Meta is changing strategy. After betting on LLaMa as an open source model that would be the “Android of AI”, the company spent US$ 14.3 billion to acquire Scale AI and is developing two new models: Avocado (successor to LLaMa, but closed) and Mango (image and video generation to compete with Sora).

It's a significant shift: from democratization via open source to an Apple-style model - closed and consumer-oriented. This concentrates even more power in the hands of a few companies and reinforces the urgency of independent digital literacy.

The Way Ahead

I don't have easy answers, but I do have convictions. We need to build AI innovation ecosystems that put social impact at the center. We need to invest in democratic, not elitist, training. We need to develop technology that solves real problems for real Brazilians.

The case of voice actor Robson Kumode, who criticized AI dubbing by saying that it “has no breath”, captures something essential: technology without humanity is not progress, it is impoverishment. AI should amplify our creative capacity, not replace our essence.

In my work with companies and governments, I see the difference every day between organizations that treat AI as a tool for real transformation and those that use it only for marketing. The former invest in team training, redesign processes with a focus on human impact and measure success by tangible results. The latter buy ready-made solutions, implement them without context and get frustrated when they don't see a return.

The same logic applies to public policies. We can choose to invest in massive, democratic training, or we can continue spending fortunes on premium training for the few while the majority of the population remains vulnerable to digital manipulation.

Digital literacy is not an optional luxury. It is the basic infrastructure for citizenship in the 21st century.

These 24 hours of AI news have not brought revolutionary technological advances. They have brought something more important: clarity about where we are going wrong. Deepfakes deceiving millions, “AI slop” polluting the internet, public investments disconnected from the reality of the majority, entire sectors being transformed without protection for workers.

The moment calls for less dazzle and more pragmatism. Fewer trips to Europe and more investment in local expertise. Fewer flashy tools and more solutions that solve real problems. Less concentration of knowledge and more radical democratization.

In my mentoring and the immersive courses I offer, I work exactly on this frontier: helping executives, entrepreneurs and organizations navigate AI strategically, ethically and focused on real impact. Because I believe that technology needs to serve people, not the other way around. And because I know that quality digital literacy cannot be the privilege of the few.

My question is simple: what kind of future are we building with AI? One where few understand and many are manipulated, or one where knowledge is democratic and impact is shared?

The answer lies in the choices we make today. In the priorities we set. In the investments we approve. These 24 hours have shown that the current path is not working. It's time to choose another.


✨Did you like it? You can sign up to receive 10K Digital's newsletters in your email, curated by me, with the best content about AI and business.

➡️ Join the 10K Community here

