ASML Records US$ 15.85 Billion Orders As YouTube Uses AI To Estimate Age And Freelancers Lose 90% Of Clients - Why These 24 Hours Reveal The Collision Between Robust Spending And Real Human Impact
January 28, 2026 | by Matos AI

The last few days have brought a revealing combination: ASML, the Dutch semiconductor equipment manufacturer, posted record quarterly orders of 13.16 billion euros (US$ 15.85 billion), an increase of 85.6% over the previous year, driven by the voracious appetite for advanced chips for artificial intelligence. Over the same period, YouTube announced that it will use AI to estimate the age of users in Brazil, applying automatic protections to those under 18, and freelancers reported losing up to 90% of their clients since generative tools like ChatGPT became popular.
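To make the scale of those numbers concrete, here is a quick back-of-envelope check in Python. The 13.16 billion euro, 85.6% and US$ 15.85 billion figures come from the article; the implied prior-year quarter and exchange rate are my own inferences, not reported data:

```python
# Sanity-check ASML's reported figures (orders, growth and the USD figure
# are from the article; the derived numbers are inferences, not reported).
orders_eur_bn = 13.16   # record quarterly orders, billions of euros
yoy_growth = 0.856      # 85.6% increase over the previous year
orders_usd_bn = 15.85   # US$ equivalent reported in the article

# Implied orders for the same quarter a year earlier.
implied_prior_eur_bn = orders_eur_bn / (1 + yoy_growth)
print(f"Implied prior-year quarter: ~{implied_prior_eur_bn:.2f} bn euros")

# Implied EUR/USD conversion rate used in the article.
print(f"Implied EUR/USD rate: ~{orders_usd_bn / orders_eur_bn:.2f}")
```

The numbers are internally consistent: roughly 7.1 billion euros a year earlier, at an exchange rate of about 1.20.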
How is it possible for billions to flow into the AI infrastructure while the professional fabric of thousands of people disintegrates?
The answer lies in the stark difference between where the money flows and where the impacts materialize. And right now, we have the rare opportunity to see both sides of the same coin: the robust growth of the hardware that powers AI and the immediate human cost it is exacting.
Join my WhatsApp groups! Daily updates with the most relevant news in the AI world and a vibrant community!
- AI for Business: focused on business and strategy.
- AI Builders: with a more technical and hands-on approach.
Record Spending on Semiconductors: AI's Invisible Fuel
ASML is not just any company. It makes the machines that produce the world's most advanced chips, the ones that make it possible to train and run language models such as GPT, Gemini and LLaMA. Its extreme ultraviolet (EUV) lithography systems are practically irreplaceable - there is no viable alternative on the global market.
The fact that the company recorded a jump of 85.6% in quarterly orders signals that the AI boom is not marketing. It's structural investment. Companies such as TSMC, Samsung and Intel are ordering billion-dollar equipment to expand manufacturing capacity, anticipating continued demand for decades to come.
According to a report from Valor Econômico, this result “is a sign that customer spending to produce advanced chips, driven by the boom in artificial intelligence, remains strong, despite fears of a market bubble.”
In other words: even with fears of over-investment and a possible bubble burst, the consensus among the tech giants is that AI is here to stay. And that requires silicon. A lot of silicon.
This dynamic has profound geopolitical implications. The race for advanced chip manufacturing capacity is at the heart of the technological dispute between the United States and China. Whoever controls semiconductor production controls the infrastructure for the next generation of digital products, defense systems, industrial automation and, of course, artificial intelligence.
YouTube Uses AI to Estimate Age: Protection or Behavioral Monitoring?
Meanwhile, on the other side of the spectrum, YouTube has announced the expansion to Brazil of an AI system that estimates the age of users based on usage patterns: types of videos searched, categories watched and account age.
When the system identifies a user as under 18, it automatically applies measures such as disabling personalized advertising, activating digital wellbeing tools and limiting repetitive recommendations of sensitive content.
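YouTube has not disclosed how its model actually works, but the behavior described above can be sketched as a simple rule-based pipeline. Every signal name, category and threshold below is a hypothetical illustration, not YouTube's real logic:

```python
from dataclasses import dataclass

@dataclass
class UsageSignals:
    # Hypothetical inputs mirroring the signals the article lists:
    # search topics, watched categories, and account age.
    search_topics: list
    watched_categories: list
    account_age_days: int

# Hypothetical categories assumed to skew toward teenage audiences.
TEEN_CATEGORIES = {"gaming", "school-life", "cartoons"}

def estimate_is_minor(s: UsageSignals) -> bool:
    """Toy stand-in for an age-estimation model: flags likely under-18 users."""
    teen_ratio = (
        sum(c in TEEN_CATEGORIES for c in s.watched_categories)
        / max(len(s.watched_categories), 1)
    )
    return teen_ratio > 0.5 and s.account_age_days < 5 * 365

def apply_protections(is_minor: bool) -> dict:
    """Protections the article describes, applied when a user is flagged."""
    if not is_minor:
        return {}
    return {
        "personalized_ads": False,        # disable personalized advertising
        "digital_wellbeing": True,        # activate wellbeing tools
        "limit_sensitive_repeats": True,  # curb repeated sensitive recommendations
    }

user = UsageSignals(["math homework"], ["gaming", "cartoons", "music"], 400)
print(apply_protections(estimate_is_minor(user)))
```

Even this toy version makes the trade-off visible: the same signals that trigger protections are, by construction, a behavioral profile of the user.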
The declared intention is to protect teenagers. The global pressure for safer digital environments for children is real and necessary. Platforms such as Roblox, TikTok and ChatGPT have already implemented similar systems.
But let's be frank: this technology also represents a massive expansion of behavioral monitoring. AI isn't just inferring age - it's mapping preferences, attention spans, emotional patterns and cognitive vulnerabilities.
Professor Paulo Almeida, a specialist in digital media at UnB, highlighted in the report from Correio Braziliense that “this breaks the logic of algorithmic escalation, in which the system itself intensifies exposure according to the user's consumption history.”
It is progress, he acknowledges. But he also warns: “important challenges arise, such as possible classification errors, privacy issues and the continued use of behavioral monitoring.”
The central question here is not whether the protection of minors is important - it is. The question is: who audits the criteria? Who guarantees that this data will not be used for other purposes? And what happens when an adult is wrongly classified and has to send official documents to “prove” their age?
We are normalizing algorithmic surveillance in the name of security, and this deserves a much more robust public debate than we are having.
Freelancers Lose 90% of Clients: The Human Cost of Accelerated Automation
And then we come to the most painful part of the story.
In an in-depth and necessary report, Tecnoblog heard from dozens of Brazilian freelancers who have seen their careers crumble since tools like ChatGPT, Midjourney and other generative systems became popular.
The reports are overwhelming:
- Paula, editor: demand has fallen by more than 50% since 2022.
- Fabio Farro, screenwriter: lost 90% of his clients between 2023 and 2024.
- Beatriz, text producer: lost all of her clients (more than ten brands) by 2024.
- Ricardo, marketing coordinator: companies cut marketing budgets to “demonstrate investment in AI.”
- Viviane Fortes, advertising copywriter: has seen copywriters at her agency replaced by “ChatGPT operators.”
These are not isolated cases. They represent a structural trend: generative AI is being used not to increase creative capacity, but to replace skilled workers with a cheaper and faster alternative, even if it is inferior in quality.
And here's the point that bothers me the most: often, the replacement isn't even justified by superior AI performance. It's justified by the pressure to “show that the company is investing in innovation.”
Ricardo says that since 2023, whenever he has changed companies, it has been to one that paid less. Viviane Fortes has had her price list frozen since the pandemic and plans to migrate to data analysis or the public sector.
Mariana, a book designer, discovered that her covers were being made by AI when she checked her Amazon customer page. She refuses to do “art washing” - taking AI drafts and signing them off as human work.
The emotional impact is devastating. Ricardo has been undergoing treatment for anxiety since 2023. Farro has had to move back in with his mother. Paula feels that her work has become a commodity.
Study Finds AI Fails 97% of Complex Professional Tasks
Ironically, while freelancers are losing clients to AI tools, a recent study by the Center for AI Safety and Scale AI showed that advanced AI models failed more than 97% of real professional tasks.
The study subjected tools such as Manus AI, Grok 4, Sonnet 4.5, GPT-5 agent and Gemini 2.5 Pro to complete remote work projects - product design, video animation, data analysis, architecture. In almost all cases, the systems failed to deliver work of acceptable quality without significant human intervention.
The best automation rate recorded was just 2.5%.
According to the researchers, the failures are not task-specific - they are structural. AI struggles to:
- Understand ambiguities in the briefing
- Maintain consistency throughout complex tasks
- Adapt decisions as new problems arise
- Sustain continuous learning during the project
In other words: AI works well as a support tool, but fails miserably when it takes over entire projects.
And yet workers are being replaced en masse. Not because AI is better. But because it's cheaper.
CNJ Tries to Draw Ethical Boundaries, But Gaps Worry
The use of AI in the public sector is also accelerating. Recently, the National Council of Justice (CNJ) published Resolution No. 615/2025, which repeals the pioneering Resolution No. 332/2020 and updates the guidelines for the use of AI in the Brazilian judiciary.
In an analysis published in Conjur, experts highlight important advances:
- Institutional recognition that algorithms are not neutral
- Ban on systems that assess “personality traits” to predict criminal recidivism
- Ban on emotion recognition by biometrics
- No ranking of people based on their social situation to assess the “plausibility of their rights”
These are fundamental “red lines”. They protect against judicial dystopias in which the machine decides who deserves freedom or who is “dangerous” based on racial or socioeconomic profiles.
But the resolution also has serious weaknesses:
- Compromised transparency: allows auditing “without unrestricted access to the source code”, protecting trade secrets to the detriment of explainability
- Diversity as a dispensable item: the standard requires different teams, but allows exceptions “to ensure efficiency and speed”, treating inclusion as an accessory
- Not necessarily government data: data sources are “preferably” public, opening the door to biased private databases
As Conjur's analysts warn: “The promise is efficiency. The risk, unfortunately, is still discrimination under the guise of neutrality.”
Knowledge Bottlenecks: Workers Rush to Qualify
Faced with this scenario, Brazilian workers are not waiting for companies to structure training programs. According to data from Unico Skill published by Terra, the search for artificial intelligence courses grew by more than 840% in January 2026, compared to January 2025.
Professionals in their 40s and 50s are leading this transformation, seeking out courses such as “Artificial Intelligence for Professional Growth” and “AI Applied to Business.”
In addition, there was a 2,800% increase in demand for People Development courses, indicating that professionals understand that AI requires calibrated human leadership, not just technical mastery.
Joca Oliveira, CEO of Unico Skill, sums it up well: “The WEF report highlights that the successful adoption of AI starts with people, not with technology alone. What we're seeing at our base is the materialization of this: the worker has realized that in order to stay relevant, they need to master the tool.”
Companies that invest in training via Unico Skill record 30% less turnover, creating the environment of stability needed for sustainable digital transformation.
Global Retailers Put AI on the Shelf: From Experimentation to Infrastructure
Meanwhile, global retail has consolidated its use of AI for strategic decisions. At NRF Retail's Big Show, held in New York in January, the message was clear: AI is no longer a showcase, but an invisible infrastructure that guarantees productivity, efficiency and margins.
Large chains now use algorithms for:
- Demand forecasting
- Inventory management
- Reduction of breakages
- Dynamic pricing
- Promotional design
The promise is simple: sell better, with less waste and more margin. And the results are starting to show in EBITDA.
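None of these retailers' actual systems are public, but the logic behind the first two items above can be sketched minimally: a moving-average demand forecast feeding a reorder decision. All numbers and the safety factor are illustrative assumptions, not figures from the article:

```python
def forecast_demand(weekly_sales, window=4):
    """Naive moving-average forecast of next week's unit demand."""
    recent = weekly_sales[-window:]
    return sum(recent) / len(recent)

def reorder_quantity(weekly_sales, on_hand, safety_factor=1.25):
    """Order enough to cover forecast demand plus a safety buffer."""
    forecast = forecast_demand(weekly_sales)
    target = forecast * safety_factor
    return max(0, round(target - on_hand))

sales = [120, 135, 128, 140, 132]  # illustrative weekly unit sales
print(forecast_demand(sales))      # average of the last 4 weeks
print(reorder_quantity(sales, on_hand=100))
```

Production systems replace the moving average with richer models (seasonality, promotions, price elasticity), but the economics are the same: a better forecast means less stockout and less waste, which is exactly where the EBITDA gains come from.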
In Brazil, ABRAS projects growth of 3.2% in household consumption in 2026, sustained by rising real incomes, a heated job market and transfer programs. But consumers are more selective, paying attention to price and promotions, requiring retailers to be more precise in their decisions.
AI is no longer an experiment. It is the operational layer that defines who wins and who loses in the modern retail game.
AI Pioneer Warns: “Herd Effect” Is Leading Technology Down a Dead End
And to close this complex panorama, Yann LeCun, AI pioneer and Turing Award winner, warned that the “herd effect” is leading Silicon Valley down a blind alley.
LeCun, who was chief AI scientist at Meta for more than a decade and left the company in November to found Advanced Machine Intelligence Labs (AMI Labs), criticizes the obsession with large language models (LLMs).
According to him: “LLMs are not a path to superintelligence, nor to human-level intelligence. I've been saying that all along.”
LeCun argues that LLMs have structural limits: they can't plan soundly, they don't understand the complexity of the physical world and they don't predict the results of their own actions.
He warns: “There's this herd effect where everyone in Silicon Valley has to work on the same thing,” which stifles more promising approaches in the long term.
As American companies retreat from their open source stance in search of competitive advantage, LeCun warns that “good ideas are coming from China” and criticizes Silicon Valley's “superiority complex”, which prevents the recognition of innovations from elsewhere.
What Does It All Mean For You?
These 24 hours reveal something fundamental: the money is going to one place, but the impact is happening somewhere else.
Billions are flowing into semiconductors, advanced chips and cloud infrastructure. But the immediate human cost is falling on freelancers, creative workers, marketing and communications professionals who are seeing their careers crumble.
AI is being applied to monitor teenage behavior, make legal decisions and optimize retail margins. But who audits? Who guarantees transparency? Who protects workers?
And the scariest thing: studies show that AI fails in 97% of complex professional tasks, but workers continue to be replaced.
Not because AI is better. But because it's cheaper.
So what do you need to do?
First: understand that AI is not neutral. It carries biases, flaws and limitations. If you lead a team, don't outsource cognition to algorithms without qualified human supervision.
Second: invest in continuous training. Not to “compete with AI”, but to master the tool and apply it critically, ethically and strategically.
Third: demand transparency. If your company uses AI for critical decisions - hiring, promotion, performance analysis, pricing - demand auditability, explainability and diversity in the technical teams.
Fourth: don't fall for the “herd effect”. Don't do AI because everyone else is doing it. Do AI because it solves a real, measurable problem, and because you have the governance to avoid collateral damage.
And if you're a freelancer or creative professional feeling the impact, know this: not using AI has become a differentiator for clients who value quality, originality and ethical responsibility.
This is the time to consolidate your positioning, document your methodology and clearly communicate why human labor still matters - and always will.
My Job is to Help You Navigate This Scenario
In my work with companies, governments and entrepreneurship support organizations, I help leaders implement AI in a strategic, ethical and sustainable way. Not as a substitute for people, but as an amplifier of human capabilities.
If you lead a team and want to understand how to apply AI without losing the human dimension, if you are looking to build real governance to avoid discriminatory biases, or if you need to qualify your team to work with AI in a critical and creative way, my mentoring programs and immersive courses are designed to do just that.
It's not about surfing the hype. It's about building systems that work, that respect people and that generate measurable results without producing automated injustices.
Because at the end of the day, the AI that is worthwhile is the one that strengthens human labor, not the one that replaces it for financial convenience.
And if there's one thing these 24 hours have taught us, it's that we need this clarity now - before the human cost becomes irreversible.
✨Did you like it? You can sign up to receive 10K Digital's newsletters in your email, curated by me, with the best content about AI and business.
➡️ Join the 10K Community here
