Felipe Matos Blog

Google Makes Guinness World Records With AI Class For 200,000 While Meta Captures User Data - Why This Contrast Defines the Moment of Real Democratization Versus Concentration of Power

December 19, 2025 | by Matos AI


On the same day that Google entered the Guinness World Records by training more than 200,000 people in AI, Meta discreetly updated its privacy policy to use conversations with its artificial intelligence to train models. Two news stories, one day, two radically opposite paths.

If you still thought that the main dispute in AI was between who has the fastest model or the most powerful chip, the last 24 hours have shown something different: the real battle is between those who democratize knowledge and those who concentrate control.

And this distinction, my friends, is not just philosophical. It defines who will ride the wave of transformation and who will sink into it.


Join my WhatsApp groups! Daily updates with the most relevant news in the AI world and a vibrant community!


Google's Record: 200,000 People Trained in AI in One Day

On December 6th, Google Cloud held the largest hybrid, simultaneous Artificial Intelligence class ever staged in the world. As reported by Jornal de Brasília, the action connected 50 universities in nine Latin American countries, including Brazil, Argentina, Chile and Mexico, as well as thousands of online participants.

SENAI-SP played a leading role, as part of a four-year strategic partnership with Google Cloud that has already trained more than 40,000 students in information technology. During the training, participants had direct contact with Gemini (Google's AI model), Google Workspace and NotebookLM, with a focus on prompt engineering and the creation of AI agents.

Eduardo López, president of Google Cloud for Latin America, got straight to the point: investing in digital skills is no longer optional, it is an urgent need. The World Economic Forum estimates that by 2030 half of workers' essential skills will change, with AI impacting almost 70% of current skills.

Let me translate that into plain language: if you're not training in AI now, you're falling behind in real time.

Why This Record Matters Beyond Marketing

Some people will look at this initiative and think: “Oh, it's just corporate marketing”. I completely disagree.

When 200,000 people are trained simultaneously in practical AI skills - not abstract theory, but engineering prompts, creating agents, using real tools - this establishes a new standard of scale for technological education.

In my work with companies and governments, I constantly see the same dilemma: how to empower entire teams quickly when technology changes every quarter? Google's answer was: massive scale, free access, focus on practical application.

And here's the crucial point: the democratization of knowledge in AI is not charity, it's strategy. The more people who master tools like Gemini, the more the platform's ecosystem grows, the more use cases emerge, the more value is generated for everyone involved.

That's what I always say in my mentoring programs: shared knowledge multiplies, locked knowledge atrophies.

On the Other Side: Meta Starts Using User Data to Train AI

While Google was training crowds, Meta (owner of Instagram, Facebook and WhatsApp) activated a silent but impactful update: as of December 16, as reported by G1, the company began using users' conversations with its AI to target ads and suggest content.

In addition, public information from Threads will be used to train AI systems. The update expands the practice from 2024, when public photos and posts were used - a measure that was questioned by Idec and briefly suspended by the ANPD (National Data Protection Authority).

Meta allows users to object to the use of public information (posts, photos, comments) by sending a specific e-mail via a company link. But here's the detail that many people miss: consent is presumed if you do not actively object.

In other words, the default is: “Your data is ours, unless you explicitly say no”.

The Problem of Implicit Permission

Let's be clear: I'm not making a moral judgment here. Meta is operating within the available legal framework. The problem is more subtle and deeper.

When the business model is “presumed consent” instead of “explicit consent”, we are creating a gigantic power asymmetry. Most users won't read the policy update, won't find the opposition link, won't send the email.

The result? Billions of conversations, photos, comments and interactions become fuel for proprietary AI models, while the users who generated this content have no share in the profits, control or even governance of this technology.

In my work with companies and governments, I always reinforce this: AI is only sustainable if it is built on trust. And trust is not built by actively opting out, but by consciously opting in.

As pointed out by CNN Brazil, the decision raises concerns precisely about this implicit permission, requiring proactive action by the user to protect their privacy.

Kate Crawford and the “AI Empire”: Digital Extractivism on an Industrial Scale

If you think I'm exaggerating when I talk about "concentration of power", I need to introduce you to Kate Crawford. According to an analysis published by Outras Palavras, Crawford argues that AI is imposing a new metabolic rift, turning the promise of technology into an extractivist nightmare under the command of big tech.

Crawford compares AI to the technology of linear perspective - a model that altered the representation of truth in the Renaissance. She uses the concept of the longue durée to situate the AI empire within 500 years of political, social and economic domination, linking it to early capitalism and colonialism.

The current focus? Extraction of embodied life, data and human cognition.

Unsustainable Infrastructure

Crawford goes beyond philosophical criticism and points to concrete data: the material infrastructure of AI is unsustainable. Data centers consume energy comparable to entire nations, and are expected to reach 25% of total US electricity by 2030. Demand for critical minerals such as lithium is skyrocketing.

And here comes Jevons' Paradox: gains in energy efficiency are being outstripped by growth in total consumption. In other words, the more efficient chips get, the more chips we use, and total consumption only increases.
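The arithmetic behind this can be made concrete with a toy calculation (the figures below are hypothetical, chosen only to illustrate the mechanism, not real data-center numbers):

```python
# Toy illustration of Jevons' Paradox with hypothetical numbers:
# each chip generation halves the energy per operation, but total
# demand grows even faster, so total energy consumption still rises.

def total_energy(ops: float, joules_per_op: float) -> float:
    """Total energy (in joules) consumed by a workload."""
    return ops * joules_per_op

# Generation 1: 1e18 operations at 1.0 J/op (hypothetical)
gen1 = total_energy(1e18, 1.0)

# Generation 2: chips become 2x more efficient (0.5 J/op),
# but demand triples to 3e18 operations.
gen2 = total_energy(3e18, 0.5)

print(gen2 > gen1)  # True: the efficiency gain was outstripped by usage growth
```

Halving the cost per operation while demand triples leaves total consumption 50% higher, which is exactly the pattern Crawford points to.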

But the problem is not only material. Crawford identifies what she calls "slop" (low-effort content generated by AI) and "slopaganda" (empty marketing hype), which flood the online environment. On top of that, there is the risk of model collapse (Model Autophagy Disorder, or MAD), in which models degenerate as they are fed their own synthetic output, losing diversity.

Crawford warns of an explosive mix of ignorance and power: technology is defining reality without the public understanding how it works.

And I agree with her: you can't build a sustainable future when most people don't understand how the technology that governs their lives works.

The Market Speaks: 7 Careers Gaining Strength from AI in Brazil

But let's get away from the criticism and look at the real opportunities. Because, yes, despite all the problems, AI is creating real jobs and careers.

According to a report by Capitalist, seven professions are on the rise in Brazil due to AI:

  • Artificial Intelligence Specialist: maps processes and guides ethical and strategic use.
  • Prompt Engineer: structures advanced commands for generative AIs.
  • Data Scientist: works with the large volumes of data that AI systems depend on.
  • Machine Learning Engineer: develops, trains and maintains robust models.
  • Data and BI Analysts: work with predictive models generated by AI.
  • Cybersecurity Experts: defend against AI threats and ensure compliance with the LGPD.
  • Content Creators and Designers: use AI to speed up production and explore creative variations.

The market values professionals who combine human skills (criticism, creativity) with technological skills. And here's the insight: AI doesn't replace human creativity, it amplifies it.

In my mentoring programs, I work on precisely this integration: teaching executives and entrepreneurs to use AI as a lever, not as a substitute. The question is never “AI or human”, but “AI + human”.

What Recruiters Really Want (And It's Not a Diploma)

And here's a fact that should make many people rethink their investments in education: according to a report by Terra, engineering leaders have signaled a radical change in technology hiring criteria in recent years.

A survey by CodePath showed that 38% of companies have reduced entry-level hires in the last year. On the other hand, what recruiters really value:

  • Side projects/Portfolios: 38%
  • Internship experience: 35%
  • Public code portfolios (GitHub): 34%

In contrast, the prestige of the educational institution was cited by only 17%.

Let me translate that: showing real work is worth more than a diploma hanging on the wall.

Jobs with AI skills offer, on average, US$ 18,000 more in annual remuneration (Lightcast analysis). And AI skills are increasingly relevant in non-technological sectors (51% of vacancies in 2024).

The US federal government, for example, has announced the hiring of 1,000 AI specialists without requiring a diploma.

This change is radical. And it favors those who have initiative, ability to learn quickly and show results - exactly what programs like Google's are trying to democratize.

The Paradox: 92.6% Use AI, But Don't Understand the Basic Concepts

But here's the problem: usage is advancing faster than understanding. An Adapta survey of 500 Brazilian professionals, reported by TechTudo, showed a huge gap in the understanding of the technical concepts of AI.

Prompt was the term that generated the most doubt (12.6% of mentions). Fundamental terms such as machine learning, deep learning and neural networks together accounted for more than 9% of the confusion.

And guess what? 92.6% of the Brazilian professionals interviewed already used AI agents at work.

In other words: we're using tools we don't understand.

It's like driving a car without knowing what a brake is. It works... until the moment it doesn't.

This reinforces what Kate Crawford warned: fully understanding these technologies is still a challenge in the Brazilian corporate environment. And the phenomenon of AI hallucination (when models confidently invent false information) reinforces the need for human supervision.

In my immersive courses, I dedicate a significant amount of time to fundamentals. It's not enough to know how to use a tool, you need to understand how it works, what its limitations are and where it can fail.

Google Gemini 3 Flash: The Race for Speed Continues

Meanwhile, the technological race is on. Google has announced the launch of Gemini 3 Flash, according to TNH1, a new model the company bills as its fastest yet, built for speed, efficiency and reduced costs.

In internal tests, the Gemini 3 Flash significantly outperformed its predecessor:

  • In the Humanity's Last Exam benchmark: 33.7% (against 11% for its predecessor)
  • In MMMU-Pro (multimodality and reasoning): 81.2%, surpassing competitors

The model costs US$ 0.50/million input tokens and US$ 3.00/million output tokens. It is designed to make AI Mode more robust for complex searches and to generate more structured answers (images, tables).
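To put those prices in perspective, here's a back-of-envelope cost sketch. Only the per-million-token prices come from the figures quoted above; the token counts and request volume are hypothetical, chosen just to show the calculation:

```python
# Published Gemini 3 Flash prices (from the article above):
INPUT_PRICE_PER_M = 0.50   # US$ per 1 million input tokens
OUTPUT_PRICE_PER_M = 3.00  # US$ per 1 million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in US$ of a single request at the prices above."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Hypothetical workload: 2,000 input tokens and 500 output tokens
# per request, scaled to 100,000 requests per month.
per_request = request_cost(2_000, 500)
monthly = per_request * 100_000

print(round(per_request, 6))  # 0.0025  (a quarter of a cent per request)
print(round(monthly, 2))      # 250.0   (US$ per month at this volume)
```

Note how output tokens dominate the bill: at these rates, each output token costs six times as much as an input token, which is why structured, concise answers matter for cost as well as quality.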

And there's more: Google has extended Gemini's content verification capabilities to include videos, as reported by Olhar Digital. Users can send a video to the assistant and ask whether it was generated by Google's AI.

Gemini analyzes the video (up to 100 MB and 90 seconds) in search of the SynthID, Google's proprietary digital watermark, increasing transparency.

This is a direct response to the problem of deepfakes and unmarked synthetic content. But, as always, effectiveness depends on coordinated adoption of standards between platforms.

The Year of Anti-AI Marketing? The Human Reaction Is Coming

And speaking of reaction, there's a trend emerging that's worth paying attention to: "100% human" marketing. According to CNN Brazil, with the proliferation of "slop" (low-quality AI-generated content, voted word of the year 2025 by Merriam-Webster), companies and consumers are showing signs of saturation.

The forecast? 2026 could be the year of anti-AI marketing.

Here are some examples:

  • iHeartMedia launched the slogan "guaranteed human" after finding that 90% of listeners prefer human content.
  • Creators in Hollywood and newspapers such as The Tyee (Canada) have adopted anti-AI policies.
  • On Pinterest, the adoption of AI has generated alienation, and ads for an AI-powered wearable device have been vandalized with messages of resistance.

The crisis of confidence generated by ever more sophisticated deepfakes could increase the value of human authenticity.

But beware: this doesn't mean that AI will disappear. It means that consumers are learning to distinguish between smart use and lazy abuse.

AI used to speed up complex research? Great. AI used to flood the internet with generic, soulless content? No thanks.

In my work with companies, I always emphasize: AI should amplify the human voice, not replace it. Use it to scale what is already good, not to produce volume without value.

Itaú Emps: Generative AI as an Entrepreneur's Ally

And it's not all concentration of power and slop. There are Brazilian cases showing intelligent use. Itaú Emps, a segment focused on micro and small companies, uses generative Artificial Intelligence to offer proactive banking service, based on real-time analysis and personalized recommendations, according to NeoFeed.

Gabriela Ferreira, director of Itaú Emps, points out that AI understands the client's financial context and generates tailor-made guidelines, breaking with traditional FAQs-based assistants.

Functionalities include alerts about drops in revenue, identification of cash flow opportunities and assistance with pricing.

Implementation required robust governance to handle sensitive data, ensure compliance with the LGPD and avoid bias, with multi-layered validations.

Itaú Emps acts as an innovation laboratory for the bank, accelerating the development of solutions that can be applied to other audiences. The roadmap focuses on future transactional integration, while keeping human service accessible (average waiting time of 30 seconds).

This is an example of AI that serves the user, not exploits them. The difference? Transparency, governance and a focus on solving real problems.

The Contrast That Defines the Moment: Democratization Versus Concentration

So, back to the beginning: why does the contrast between Google and Meta matter so much?

Because it represents two radically different visions of the future of AI:

Vision 1 (Google, in this case): democratize knowledge on a massive scale, training millions and creating an ecosystem in which more people master the tools and can participate in the digital economy. It's a bet on an abundance mindset - the more people know, the more value is created for everyone.

Vision 2 (Meta, in this case): concentrate data, train proprietary models on user content without explicit consent, and use those models to increase engagement and advertising profits. It's a bet on a scarcity mindset - the more data I control, the more power I have.

Both approaches are legal. Both are rational from a business point of view. But only one of them builds long-term trust.

And here's my position, developed over years of working with startups, companies and governments: AI will only be sustainable if it is built with broad participation, radical transparency and distributed governance.

When Kate Crawford warns of the “AI Empire”, she is not being alarmist. She is pointing out a real historical risk: the concentration of cognitive power in a few hands, without democratic oversight, without value sharing, without accountability.

The alternative path? Educate, train, distribute. As Google did with its 200,000 students. As Itaú Emps is doing with its micro and small entrepreneurs. As SENAI-SP is doing with its 40,000 IT students.

What You Can Do Now (Because Knowledge Without Action Is Just Entertainment)

Okay, we've talked about problems, opportunities, risks and paths. Now, what can you, the reader, do in concrete terms?

If you are a professional:

  • Invest in practical training in AI. Not just watching videos, but building real projects. A portfolio is worth more than a degree.
  • Learn the fundamentals: what a prompt is, how LLMs work, what neural networks are. You don't need to be an engineer, but you do need to understand the basics.
  • Develop critical competence: know when to use AI, when not to, how to validate results.
  • Join communities, contribute to open source, show your work publicly.

If you are an entrepreneur:

  • Use AI to amplify, not replace. Automate the repetitive, but keep the human in the strategic.
  • Invest in data governance. If you collect customer data, be transparent, ask for explicit consent, show value in return.
  • Build trust. In the world of slop and slopaganda, authenticity is a competitive differentiator.
  • Consider partnerships with educational institutions. Empowering your ecosystem increases the value of your platform.

If you are a manager or executive:

  • Don't outsource AI strategy to IT. AI is a business strategy, not just technology.
  • Empower your teams. Professionals who master AI are worth gold and prefer to work where they can apply it.
  • Establish clear policies of use: when to use AI, how to validate, what are the ethical limits.
  • Measure real impact, not just hype. AI that doesn't generate measurable value is a waste of resources.

If you are a citizen concerned about privacy:

  • Review the privacy policies of the platforms you use. Seriously. I know it's boring, but it's important.
  • Activate the opt-out when available. In the case of Meta, you can send the opposition e-mail.
  • Support democratic regulation of AI. That's not anti-technology, it's pro-accountability.
  • Value and consume content created by humans. Your click is a vote.

The Future We Are Building: Choice, Not Destiny

There's a concept that I always repeat in my mentoring and consulting programs: the future of AI is not an inevitable destiny, it's a collective choice we make every day.

Every company that chooses transparency over extraction is voting for a future.

Every professional who empowers themselves instead of resisting is voting for a future.

Every government that regulates with intelligence instead of panic is voting for a future.

Every consumer who values authenticity over volume is voting for a future.

The last 24 hours have clearly shown us both ways: democratization or concentration. Education or extraction. Trust or control.

And here's my bet, based on years of working with startups, companies and governments: those who build ecosystems, not empires, win.

Because empires fall. They always have. But resilient ecosystems adapt, evolve and survive.

When Google trains 200,000 people in one day, it's not just marketing. It's planting the seeds of a more robust ecosystem, where more people can create value, solve local problems and build sustainable businesses.

When Meta concentrates data without explicit consent, it's not just maximizing short-term profits. It is eroding the trust that sustains its own business model in the long term.

History has already shown this: extreme concentration eventually generates a reaction. The "anti-AI" movement of 2026 could be just the beginning of a growing demand for transparency, shared value and democratic governance.

How I can help you on your journey

If you've come this far and you're thinking: “OK, Felipe, I agree with all this, but how do I apply it to my reality?”, I've got good news.

In my consulting and mentoring, I help executives, entrepreneurs and teams navigate this transition intelligently, building AI strategies that generate real value while maintaining responsible governance.

I offer immersive courses that go beyond the superficial, teaching not just tools, but fundamentals, critical thinking and practical application. Because, as we've seen, using AI without understanding it is like driving blindfolded.

And I work with companies and governments on the design of policies and training programs at scale, just as Google did, but adapted to the Brazilian reality and the specific challenges of each sector.

If you want to build an AI strategy that is sustainable, ethical and, above all, generates real positive impact, let's talk.

Because the future of AI will be built by those who choose to democratize, educate and share. Not by those who choose to concentrate, extract and control.

And that choice? It starts now. Today. With you.

What will you choose?


✨Did you like it? You can sign up to receive 10K Digital's newsletters in your email, curated by me, with the best content about AI and business.

➡️ Join the 10K Community here

