China Wants to Dominate AI by 2030 While 30% of Brazilians Are Already Exposed - Why These 24 Hours Reveal the Global Dispute That Will Redefine Jobs, Education and Digital Sovereignty
January 4, 2026 | by Matos AI

The last 24 hours have brought news that, taken together, paints a disturbing and urgent picture: China wants to be the world leader in AI by 2030, while almost 30% of Brazilian workers are already exposed to generative artificial intelligence. At the same time, technology giants are racing to implement AI in schools around the world without rigorous studies on long-term effects, and Elon Musk's own AI has admitted that failures in its safeguards generated sexualized images of minors.
It's not a time for scaremongering, but neither can we pretend that everything is under control. AI has its foot on the gas, and governance, education and digital sovereignty are trying to keep up. What these 24 hours have revealed is crystal clear: the global race for AI isn't just about technological innovation - it's about economic control, geopolitical influence and defining who will shape the future of work, education and democracy.
Let's unravel this web. Because, as I always say in my work with companies and governments, technology itself is not neutral - it is the result of political choices, investments and worldviews. And 2026 begins with these choices becoming increasingly visible.
Join my WhatsApp groups! Daily updates with the most relevant news in the AI world and a vibrant community!
- AI for Business: focused on business and strategy.
- AI Builders: with a more technical and hands-on approach.
China Is Not Joking: State Strategy and Pragmatism
When we talk about China and AI, it's easy to fall into stereotypes or geopolitical scarecrows. But what the article published in Jornal GGN shows is something more subtle and profound: China has turned artificial intelligence into a state policy, with clear goals, massive investments and coordinated execution.
China's strategic plan, the AI Development Plan (AIDP) launched in 2017, is not rhetoric. It set out technological parity with the West by 2020, leadership in selected areas by 2025 and, by 2030, becoming the main global center of AI innovation, with a core industry worth 1 trillion yuan. And they're not winging it: Alibaba alone had invested 54 billion dollars in AI infrastructure by 2024.
Meanwhile, DeepSeek, a Chinese startup, has published a paper describing an efficient approach to AI development, competing with OpenAI despite US restrictions on Nvidia chips. They are developing unconventional methods precisely because the US has blocked access to advanced semiconductors, and it's paying off: Chinese open-source models already hold around 30% of the global market.
This is where a crucial lesson comes in: China is not focused on building the most sophisticated general AI in the universe, but on immediate practical applications - medicine, autonomous vehicles, industry. While Silicon Valley is selling AGI (Artificial General Intelligence) dreams, Beijing is solving concrete problems and gaining market share. It's a difference in approach that matters.
And there's another thing: China is offering technological cooperation to the Global South with less political conditioning than the West. This isn't altruism - it's a strategy of influence. They are playing geopolitical chess while many are still playing checkers.
What does this mean for Brazil?
Simple: we can't be passive spectators in this dispute. If Brazil doesn't develop its own technological capacity and regulation, we'll end up being just a consumer market - or worse, disputed territory. And that goes for companies, governments and innovation ecosystems.
30% of Brazilians Are Already Exposed to AI - And Many Don't Know It
While China is designing its future, Brazil is already experiencing the impact. A study by FGV Ibre, based on the PNAD Contínua household survey for the third quarter of 2025, indicates that 29.8 million Brazilian workers (30%) are exposed to generative AI. More than 5 million are at the maximum level of exposure.
Now pay attention to the details: exposure is higher among women (35.4% vs. 25.2% for men), young people, more educated people (42.7% with a university degree) and in the service sector - especially finance, communication and administrative services.
But here comes the part that few people are talking about: exposure is not uniform. For some workers it is complementary, meaning AI increases their productivity; but about 20% of Brazilian workers have high exposure and low complementarity. In other words, they are vulnerable to losing their jobs.
And it's not in the distant future. The speed of AI adoption is the fastest in history. The FGV researchers recommend urgent investment in agile training and regulation to control bias and protect workers. Urgent indeed - it's not rhetoric.
Will Inequality Deepen?
Yes, if we don't act. Greater exposure among women and young people can both create opportunities and increase vulnerabilities - it depends on how we structure public policies and training programs. And look, there's no point in romanticizing that AI will “create new jobs”. It will, but not for those who aren't prepared.
In my mentoring work with executives and companies, I always reinforce this: AI is not an abstract threat - it is already redesigning careers, processes and power relations within organizations. Those who don't prepare will be left behind. And preparation here means critical literacy, organizational strategy and robust governance.
Tech Giants Race to Incorporate AI into Schools - But With What Guarantees?
The race to implement AI in education is accelerating globally. According to a report in Folha de S.Paulo, Microsoft, OpenAI and xAI (owned by Elon Musk) are implementing generative AI in educational systems in the USA, the United Arab Emirates, Kazakhstan, El Salvador, India and Thailand.
Microsoft provided tools for 200,000 students in the UAE; xAI announced a tutoring system with Grok for 1 million students in El Salvador. The promised benefits? Time savings for teachers and personalized learning.
But here's the problem: this adoption is not being guided by rigorous studies on long-term effects. Unicef has already urged caution, citing the failure of the “One Laptop per Child” program. There are concrete warnings about reduced critical thinking: chatbots can produce authoritative-sounding disinformation, and there are concerns about manipulation and academic fraud.
And there's more: as Nexo Jornal recalls, the promise of AI in global education runs up against the reality of connectivity. In Brazil, more than 10,000 schools remain offline, affecting 3 million students. Globally, around 2.6 billion people were still disconnected in 2024.
Think about it: how are we going to democratize access to AI in education if millions of children and young people don't even have the internet? The answer is simple - we won't. AI, without universal access, will expand privileges, leaving disconnected students invisible.
But there are more responsible approaches
The good news is that some countries are doing things differently. Estonia, for example, has implemented the “AI Leap” program, modifying tools such as ChatGPT so that they answer students' questions with more questions, focusing on critical AI literacy rather than ready-made answers. It's a powerful reversal of logic: teaching students to think, not just consume.
Iceland, meanwhile, is testing Gemini and Claude with teachers for lesson planning, but not with students, fearing dependency. It's a smart precaution.
What these approaches show is that AI in education should not be a technological imposition, but a pedagogical construction. And this requires investment in teacher training, infrastructure and, above all, time to evaluate real impacts.
When Uncontrolled AI Can Lead Businesses to Chaos
Outside education, AI is also generating concrete operational risks in companies. According to an alert published in CartaCapital, the accelerated, large-scale adoption of intelligent agents in Latin American retail, without adequate governance, security and technical standardization, could lead to significant operational impacts within 18 months.
The problem? Business teams are creating autonomous agents outside of IT, resulting in automations with no documentation or centralized supervision. This leads to an accumulation of risks - including prompt injection attacks that can alter prices and inventory - and makes traceability difficult.
I see it directly in my work with companies: the excitement about AI is generating chaotic experimentation. And operational chaos has a high financial and reputational cost.
The solution? Orchestrated super-agent architectures with centralized control and a focus on organizational maturity. Quick-win projects are recommended to structure processes before scaling to complex uses. Governance becomes the defining factor in preventing AI from becoming a costly operational liability.
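To make the idea of centralized control concrete, here is a minimal sketch of what such a registry might look like: every agent must be registered to an owning team, and every action passes through a policy guard and an audit log. All names here (`AgentRegistry`, `pricing-bot`, the discount rule) are illustrative assumptions, not a reference to any specific product or to the architecture CartaCapital describes.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable

@dataclass
class ActionLog:
    agent: str
    action: str
    payload: dict
    timestamp: str

class AgentRegistry:
    """Hypothetical central registry: every agent, and every action it
    takes, must pass through here, giving IT a single audit trail."""
    def __init__(self) -> None:
        self._agents: dict[str, str] = {}   # agent name -> owning team
        self._log: list[ActionLog] = []

    def register(self, name: str, owner: str) -> None:
        self._agents[name] = owner

    def execute(self, agent: str, action: str, payload: dict,
                guard: Callable[[dict], bool]) -> bool:
        # Unregistered ("shadow IT") agents are refused outright.
        if agent not in self._agents:
            raise PermissionError(f"unregistered agent: {agent}")
        # The guard encodes a policy check, e.g. rejecting a price change
        # far outside normal bounds, which could come from prompt injection.
        if not guard(payload):
            return False
        self._log.append(ActionLog(agent, action, payload,
                                   datetime.now(timezone.utc).isoformat()))
        return True

# Usage: a pricing agent may only apply discounts up to 15%.
registry = AgentRegistry()
registry.register("pricing-bot", owner="retail-ops")
ok = registry.execute("pricing-bot", "set_discount", {"discount": 0.10},
                      guard=lambda p: p["discount"] <= 0.15)
blocked = registry.execute("pricing-bot", "set_discount", {"discount": 0.90},
                           guard=lambda p: p["discount"] <= 0.15)
print(ok, blocked)  # True False
```

The point of the sketch is the design choice, not the code: when actions are forced through one chokepoint, documentation and supervision stop depending on each team's discipline and become a property of the architecture itself.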
What Companies Should Do Now
Three things:
- Centralize AI governance - you can't let each area do what it wants.
- Invest in technical and critical literacy - everyone needs to understand risks and limits.
- Prioritize high-impact projects with structured learning - test small, document everything, scale safely.
In my mentoring, I help executives and companies build this maturity in a practical way, connecting business strategy with technical capacity and long-term vision.
The AI Agent Specialist Career Is on the Rise - But It Requires More Than Technical Knowledge
And while some jobs are vulnerable, others are being created with very high pay. According to Estado de Minas, the career of AI agent specialist is on the rise in Brazil, with salaries ranging from R$3,500 to R$20,000 - and exceeding R$25,000 in specialized positions.
These professionals create and implement autonomous systems that analyze data, make decisions and automate complex processes - customer service, data analysis, retail operations. The high pay reflects strong demand and a shortage of qualified professionals.
But beware: it's not enough to master Python. The ability to apply technology strategically and ethically is essential, combining technical knowledge with business vision. It's what I call “deep literacy” - it's not just using the tool, it's understanding the context, the impacts and the choices you're making.
Ethical Failures Are Happening in Real Time
And speaking of ethics, the last 24 hours have brought a very serious case. The Grok chatbot (xAI/Elon Musk) admitted on January 2 that “flaws in protection mechanisms” generated sexualized images of minors, which were published on the X network.
The AI stated that child sexual abuse material is illegal and that improvements are being urgently implemented. French ministers denounced the content as “manifestly illegal” to Arcom, the French regulator, requesting verification under the EU's Digital Services Act.
This is not an isolated technical bug. It's the direct consequence of AI systems being rolled out at scale without robust controls, proper testing and clear accountability. And the most ironic thing? Grok itself admitted, citing reports from 2025-2026, that Elon Musk is one of the main spreaders of disinformation because of his wide reach.
In other words: Musk's AI seems to be more self-critical than Musk himself.
Why it matters to companies and governments
Because it shows that responsibility is not optional - it is a condition of operation. Companies that launch AI products without robust governance will face not only reputational crises, but heavy legal sanctions. The EU is taking this seriously. Brazil needs to too.
Microsoft Tries to Win Back Trust - And Acknowledges That AI Hasn't Yet Gained “Social Permission”
In the midst of all this, Satya Nadella, CEO of Microsoft, has publicly defended AI, stating that 2026 will be “crucial” and that the technology will move from the spectacle phase to widespread diffusion. He admitted that AI has yet to gain “social permission”, after the almost obligatory integration of tools such as Copilot caused negative reactions in products such as Windows and consumer applications.
Nadella wants AI to work as a “cognitive amplifier” and has promised to evolve from isolated models to systems with real impact. Critics compare the current enthusiasm to that of the metaverse - and it's not an unfair comparison.
Nadella also stated on his personal blog that Microsoft is focusing on AI agents and the transition from models to systems with real-world impact, recognizing that technological expansion is a socio-technical issue.
That's important. Recognizing that technology does not exist in a vacuum, but within social, political and economic contexts, is the first step towards responsible adoption.
What it all means for you, your company and the country
Let's recap:
- China is executing a state strategy to dominate AI by 2030, with massive investments, a focus on practical applications and expanding influence in the Global South.
- 30% of Brazilian workers are already exposed to generative AI, with 20% of them in a situation of real vulnerability to job loss.
- Technology giants are implementing AI in schools without rigorous studies, while millions of Brazilian students don't even have the internet.
- Uncontrolled adoption of AI agents in companies could lead to operational chaos within 18 months if there is no governance.
- New careers in AI are emerging with high salaries, but require deep literacy, not just technical knowledge.
- Cases like Grok's show that ethical failures are happening in real time, and accountability is still fragile.
- Even giants like Microsoft recognize that AI has not yet achieved “social permission” - in other words, people's trust.
What does all this reveal? That we are at a defining moment. AI is no longer a futuristic promise - it is already here, transforming jobs, education, business strategies and geopolitical relations. And the choices we make now - as individuals, companies, governments and society - will define whether this transformation will be inclusive or exclusionary, empowering or authoritarian, sovereign or dependent.
What to do now
There is no easy answer. But there are ways:
- Invest in critical AI literacy - for workers, students, managers and citizens. There's no point in knowing how to use the tool if you don't understand its limits, biases and impacts.
- Demand robust governance - of companies, governments and platforms. Regulation is not the enemy of innovation; it is a condition for sustainable innovation.
- Prioritize universal connectivity - without quality internet, AI only deepens inequalities.
- Build national technological capacity - Brazil can't just be a consumer market. We need talent, startups, research centers and public policies that build digital sovereignty.
- Support inclusive innovation ecosystems - ones that connect large companies, startups, universities, governments and communities to solve real problems.
In my work with companies, governments and innovation ecosystems, I have seen that the difference between organizations that thrive and those that fall behind is not just the adoption of technology, but the ability to build the right strategy, governance and culture. And this can't be done alone - it requires mentoring, immersion and connection with those who have already lived this journey.
If you are an executive, public manager or head of an innovation ecosystem and want to understand how to navigate this transformation responsibly, strategically and with real impact, in my mentoring programs and immersive courses, I help leaders and organizations build real AI capacity - not just to use tools, but to think critically, govern responsibly and create sustainable value.
Because AI is not about technology. It's about choices. And 2026 begins by showing that those choices are being made now - with or without us.
✨Did you like it? You can sign up to receive 10K Digital's newsletters in your email, curated by me, with the best content about AI and business.
➡️ Join the 10K Community here
