Felipe Matos Blog

TSE Rushes AI Rules For 2026 Elections While 70% of Students Already Use Tools Without Instruction - Why These 24 Hours Define the Race Against Misinformation and Educational Backwardness

January 19, 2026 | by Matos AI


The last 24 hours have brought a rare combination: institutional urgency and a structural gap. While the Superior Electoral Court (TSE) races to consolidate effective rules against deepfakes and disinformation by March 5, 70% of Brazilian high school students already use AI - but only 32% have received guidance from their school. Why does this simultaneity matter? Because we are facing the same technology in two critical fields - democracy and education - but with radically unequal speeds, resources and clarity.

I work with companies, governments and innovation ecosystems on a daily basis. And what I see in these 24 hours is not just a scattered collection of news. It's a systemic portrait of how Brazil is dealing with AI at its most delicate moment: when the technology is already in the hands of the population, but the institutions are still drawing up the rules of the game.

The TSE's Race Against the Clock: Rules for a Terrain Without a Map

According to experts interviewed by Terra, the TSE faces an unprecedented challenge: creating effective rules for the use of artificial intelligence in the 2026 elections without a prior consolidated regulatory basis. The resolutions that will guide the Electoral Justice must be approved by March 5, and the consensus among jurists is that there will be no radical structural changes relative to 2024 - rather, adjustments to contain the scale and sophistication of synthetic content.


Join my WhatsApp groups! Daily updates with the most relevant news in the AI world and a vibrant community!


Lawyer Sabrina Veras, from the Brazilian Academy of Electoral and Political Law (Abradep), sums up the problem: “Content generated by generative AI is increasingly sophisticated and circulates at great speed. It is difficult to trace the origin of this content, and this requires a more efficient response”. José Luiz Nunes, from FGV Direito Rio, adds that the TSE should follow “a similar line in structural terms” to the 2024 rules, as there has been no transformative technological advance since then, only incremental improvement.

But here's the crux: the TSE is bound by existing legislation. As expert Massaro points out, a more robust advance would have to come from regulation of the platforms' obligations regarding AI-generated content. Another open question is how the Supreme Court's decision on platform liability - which explicitly excluded electoral legislation - will impact the electoral sphere.

Deepfakes and Disinformation: The Core of the Problem

As R7 reports, 54% of Brazilians used generative AI tools in 2024 - above the global average of 48%, according to an Ipsos and Google survey. This means the technology is already widespread, but without clear training or instruction on the risks.

Political scientist Gabriel Amaral, quoted in the same report, warns: “When everything can be fabricated to look like the truth, the voter stops asking ‘is this real?’ and starts questioning only whether it confirms what they already believe”. In other words, the debate leaves the field of verification and enters that of symbolic allegiance. Even when debunked, deepfakes cause damage by eroding trust in information, mediators and democratic institutions.

And therein lies the paradox: since June 2025, the TSE has been running a working group on the subject, the result of a debate led by Justice Cármen Lúcia after hyper-realistic AI-generated videos went viral. But the central problem, as Abradep researcher Alexandre Basilio says, is not technological - it's one of trust. “The central rule is the protection of voter confidence,” he says.

In the 2024 municipal elections, the TSE regulated the use of AI for the first time, banning deepfakes and restricting bots in contact with voters. The resolution differentiated permitted artificial content (as long as it is clearly identified) from prohibited manipulations, such as audio or video that alters someone's image or voice to favor or harm candidates. Synthetic content is allowed only if it is made clear, visible and accessible to the voter that the material was created or manipulated with the help of artificial intelligence.

But even with these rules, experts predict that most infractions will come not from official campaigns but from supporters, influencers and informal profiles. Basilio anticipates a scenario dominated by memes and highly shareable content that can quickly reach millions of people. The repetition of offensive content, even after it has been taken down, is expected to create a “cat-and-mouse” race between platforms and the electoral justice system.

70% of Students Already Use AI, But Only 32% Receive Guidance: The Education Gap Exposed

While the TSE races against the clock, the Brazilian education system faces a structural gap. According to an editorial in O Globo, around 70% of high school students say they already use AI, but only 32% say their school has given them guidance on how to use the new technology, per the latest edition of the Cetic.br survey.

And herein lies the contradiction: while there is a promise to include digital and media education classes in the elementary school curriculum by 2025, the fact is that the technology is already in use without clear pedagogical guidance. Teachers should be able to take advantage of AI to improve their lessons and teach students to use it productively. But this is not happening.

There are real risks. Research by Microsoft and Carnegie Mellon University has found that higher levels of trust in AI are associated with lower critical thinking skills. When an AI bot performs tasks that are essential to learning, the student loses out. There is also the danger of learning things that are wrong, since AI tools are trained on data that does not undergo strict vetting, and hallucinations - although less frequent in the latest models - are still common.

The editorial cites an inspiring example: Estonia, noticing that 90% of its high school students were already using chatbots, got OpenAI to adapt ChatGPT for school use. Now the chatbot responds to questions with new questions, to stimulate learning, rather than giving direct answers. This is exactly the kind of approach Brazil needs - but has yet to implement at scale.

As the editorial says: “Who knows, instead of being a means for students to cheat on exams and schoolwork, AI might provide an opportunity for Brazil to catch up in education?” The question is fair. But it calls for coordinated action.

Elon Musk's Grok: When the Absence of Limits Reveals the Worst of AI

And if you think the problem of AI is only theoretical, consider the scandal around Grok, Elon Musk's chatbot. As Veja shows, the feature nicknamed “spicy mode” on Grok has encouraged the creation of photos and videos recreating sexualized images of girls and women.

In Indonesia and Malaysia, Grok has been temporarily banned. In the UK, Ofcom has announced a rigorous investigation, echoing a decision by European Union authorities. In Brazil, the Consumer Defense Institute (Idec) asked the federal government on January 12 to suspend the tool. Julia Abad, an Idec researcher, called it “a measure to prevent further damage, while there is no specific legislation”.

AI Forensics researchers analyzed more than 20,000 random images generated by Grok and 50,000 user requests between December 25 and January 1. They found a high prevalence of terms such as “remove clothing” and “put on bikini”, with more than half of the records generated containing individuals with minimal clothing. Another study revealed that between January 5 and 6, Grok generated an average of 6,700 inappropriate posts per hour, compared to an average of 79 by five competing sites.

Elon Musk responded dismissively, saying that legal responsibility lies with the individual who creates and uploads the illegal content, not with the company. The strategy is clear: fewer barriers mean more access, more virality and more data traffic. It's the opposite of responsible governance.

Meta and the Bug that Exposed the Fragility of Automation: When AI Uses Photos Without Authorization

And if you think the problem is limited to Elon Musk's companies, consider the case of Meta. As Folha de S.Paulo shows, a bug in Meta's AI-powered ad automation platform caused a Brazilian company's skin care advertisement to display photos of a customer's daughter. The daughter never authorized the use of her image.

The problem began 30 days ago, when pharmacist Fátima Costa, 65, noticed that the Principia brand's ads on Instagram were showing photos of her daughter, Folha journalist Gabriela Mayer. The image of Gabriela shown in the advertisement is the same one that appears on her Instagram profile.

Meta employees interviewed on condition of anonymity say it was a mistake in the company's platform - a hallucination, as they say in artificial intelligence jargon. According to them, the problem affects a small number of users. Principia says it has never used the image seen in the report in its promotional materials. “We only use images with authorization, directly with Meta, without the involvement of agencies. We questioned Meta as soon as we became aware of this case and are awaiting a position on the authenticity and cause of the possible use of this image in these ads.”

The company uses a tool called Meta Advantage+ Creative, which promises to use information on how users interact with advertisements to automatically edit images and deliver personalized content. But here's the problem: Meta disclaims any liability for texts or images created by Meta Advantage+. “We also give no guarantee that the content of ads will be unique and protected by intellectual property rights or that it will not violate the rights of third parties,” says the company in its terms of use.

As Uerj digital law professor Chiara Teffe points out, it is still possible to dispute who is responsible for the misuse of the image and personal data. For now, there is no precedent for the situation.

AI and Rare Earths: The Geopolitical Race that Brazil Observes from the Outside

While we face institutional and educational challenges, Canadian mining company Aclara Resources has signed a research and development agreement with a US Department of Energy national laboratory to apply artificial intelligence to the process of separating heavy rare earths. The work will be conducted at Argonne National Laboratory, one of the US government's main research centers.

The aim is to improve process efficiency and reduce uncertainties in industrial operations. In practice, the technology creates a virtual representation of the industrial process, built on real operational data, mathematical models and artificial intelligence algorithms. The Canadian company owns the Carina Project, located in Nova Roma (GO), which has already received funding from the U.S. government through the U.S. International Development Finance Corporation.

Brazil has the world's second-largest reserves of rare earths, behind only China. But as Gazeta do Povo shows, the country suffers from well-known dysfunctions: stratospheric interest rates, growing public debt, legal insecurity and the absence of a coordinated industrial policy, resulting in the export of raw materials.

The Brazilian Mining Institute (Ibram) warns that the challenge is to “stop being a country with exceptional geological endowments - just having the wealth - and become a country with a mineral vocation - using the wealth for national development. This requires state policy”.

Thiago Sbardelotto, an economist at XP Investimentos, highlights the urgency of structural reforms. The window of opportunity is narrow, with Australia, Chile and Vietnam already advancing supply chains alternative to China's.

The Oil Companies' Double Strategy: Using AI and Selling Energy to It

And while Brazil exports raw materials, the oil industry has discovered that AI's energy appetite isn't a challenge - it's its lifeline to growth, as shown by Forbes Brazil.

The oil giants' strategy involves profiting twice from artificial intelligence: using it to extract fossil fuels more efficiently, and then selling energy (generated from natural gas) to the data centers that power AI. This dual strategy could consolidate oil dependency for decades.

Abu Dhabi National Oil Company (ADNOC) has implemented more than 30 AI tools, generating US$ 500 million in value and reducing up to 1 million tons of CO₂ between 2022 and 2023. Chevron is said to be developing its first natural gas power plant of approximately 2.5 GW (expandable to around 5 GW) in West Texas, scheduled to begin operations in 2027, targeting an undisclosed data center customer.

The big question: emissions from data centers are expected to grow from 180 million tons today to 300-500 million tons globally by 2035. Training GPT-3 consumed 1,287 megawatt-hours (MWh) and generated 552 tons of CO₂. Fossil fuels meet 60% of data center energy demand.
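To put the GPT-3 figure in perspective, here is a quick back-of-the-envelope calculation (mine, not from the original reporting):

552 t CO₂ ÷ 1,287 MWh ≈ 0.43 t CO₂ per MWh, or roughly 430 g of CO₂ per kWh

That implied carbon intensity sits in the range of fossil-heavy electricity generation, which squares with the point that most data center demand is still met by fossil fuels.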

Semantix and Huawei: The Brazilian AI Entering the Global Market

But it's not all gaps. Semantix has announced the approval of its AI suite on the Huawei Cloud Marketplace, KooGallery, expanding the international reach of the company's solutions through digital cloud channels.

With the partnership, the Semantix AI suite, which includes the LinkAPI, Agentix and Safetix solutions, becomes Brazil's first AI offering based on autonomous agents with integrated governance to enter Huawei Cloud's global ecosystem. The initiative expands access to the company's technologies in markets across Asia, the Middle East and Latin America.

Safetix, for its part, acts at the AI governance layer, offering traceability, auditing and continuous monitoring from data origin to automated decisions. The solutions are already available at KooGallery for corporate clients in different regions.

According to Leonardo Poça D'Água, CEO of Semantix, “governance has become an operational condition for large-scale adoption of AI”. Approval in Huawei's marketplace allows this approach to be taken to new markets with the support of a global cloud platform.

Why These 24 Hours Matter: The Urgency of Governance and Education

I see a clear pattern in these 24 hours: Brazil is dealing with AI reactively, not proactively. The TSE is racing against the clock to approve rules by March 5. The education system notes that 70% of students already use AI, but only 32% receive guidance. Companies export raw materials while other countries add value with AI. And platforms like Grok and Meta expose the fragility of limitless automation.

But there are promising signs. Semantix shows that it is possible to develop AI with integrated governance and enter global markets. Aclara Resources shows that Brazil can attract strategic investments in critical minerals. And the TSE, even without a prior regulatory basis, is trying to balance freedom of expression and electoral integrity.

What's missing? Coordination. As the news about rare earths shows, the challenge is not just to have the wealth - it's to use the wealth for national development. And that requires state policy. The same goes for AI: it's not enough to have talent, startups and resources. We need a coordinated strategy that connects education, regulation, investment and governance.

In my work with companies, governments and innovation ecosystems, I see that AI is not a destination - it's a path. And this path requires three pillars: universal digital literacy, so that everyone knows how to use AI critically and productively; responsible governance, so that platforms do not operate without limits; and coordinated industrial policy, so that Brazil adds value rather than just exporting raw materials.

As the O Globo editorial says: “Who knows, instead of being a means for students to cheat on exams and schoolwork, AI might provide an opportunity for Brazil to catch up in education?” The same question applies to democracy, the economy and governance. The opportunity is there. The question is whether we take advantage of it strategically or just react to the next crisis.

What to Do Now: From Reaction to Strategy

If you lead a company, a government or an organization, these 24 hours should be a wake-up call. AI is already in the hands of the population - but without education, clear governance or a coordinated strategy. What can you do?

  • Invest in digital literacy: It's not enough to have AI tools. We need to train teams, students and citizens to use technology critically and productively.
  • Demand responsible governance: Don't accept platforms that disclaim responsibility. Demand transparency, traceability and clear limits.
  • Connect regulation and innovation: Rules are not the enemy of innovation - they are preconditions for trust. Take part in discussions on AI regulation, such as Bill 2.338/2023.
  • Add value locally: Don't just supply raw materials (data, talent, resources). Develop solutions, products and services that capture value in the AI chain.

In my mentoring and consulting programs, I work with executives, companies and governments on exactly this frontier: how to transform AI from threat to opportunity, from reaction to strategy, from gap to competitive advantage. Because AI won't wait. And Brazil can't be left behind.

The race is on. The question is whether we will participate in it with clarity, governance and purpose - or just watch from the sidelines while others define the rules of the game.


✨Did you like it? You can sign up to receive 10K Digital's newsletters in your email, curated by me, with the best content about AI and business.

➡️ Join the 10K Community here

