Felipe Matos Blog

Researcher Tests 7 AIs as News Sources and Only 37% Provided Real URLs - Why These 24 Hours Reveal the Risk of Automated Disinformation

January 15, 2026 | by Matos AI


A Canadian professor spent a month asking seven AI systems for the news - ChatGPT, Claude, Gemini, Copilot, DeepSeek, Grok and Aria. The result? Only 37% of the answers provided a valid link. The rest invented sources, cited websites that don't exist or simply fabricated conclusions without any factual backing. Meanwhile, Trump has imposed a 25% tariff on AI chips, Oracle faces a class action lawsuit for failing to disclose billion-dollar infrastructure costs, and Elon Musk's Grok remains under global investigation for generating sexualized images without consent.

These 24 hours expose a fundamental tension: AI is being adopted on an industrial scale before basic problems of reliability, ethics and accountability have been solved. While Elon Musk predicts that electricians and plumbers will be the millionaires of the future for building the physical infrastructure of AI, society has yet to build the moral and informational infrastructure needed for these technologies to work without causing harm.

Let's understand what's happening, why it matters and what you need to know to navigate this moment clearly.


Join my WhatsApp groups! Daily updates with the most relevant news in the AI world and a vibrant community!


The Experiment That Exposed the Fragility of AIs as a Source of Information

Jean-Hugues Roy, a professor at the School of Media at the University of Quebec in Montreal (UQAM), conducted a simple but revealing experiment. For an entire month, he asked seven generative AI systems every day: “What were the five most important news events in Quebec today? Give me the specific sources (URLs).”

The result, as reported by Fórum Magazine, was alarming:

  • Only 37% of the 839 responses provided a complete and valid URL
  • 18% didn't even cite media outlets, resorting to government websites, pressure groups or outright invented sources
  • Most of the links led to 404 errors or generic pages, making verification impossible - the kind of failure a simple script, like the one sketched below, can catch automatically
  • Only 47% of the summaries were considered fully accurate, and just over 45% were merely partially accurate
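Anyone can reproduce the core of this check. Below is a minimal sketch of how one might verify whether AI-cited URLs actually resolve, in the spirit of Roy's methodology - the helper function and the example URLs are my illustrative assumptions, not artifacts of the study:

```python
# Minimal sketch: check whether AI-cited URLs actually resolve.
# Illustrative only - the helper and the example list are assumptions,
# not part of Roy's study.
import requests

def check_url(url: str, timeout: float = 10.0) -> str:
    """Return a rough verdict for a single cited URL."""
    try:
        # Some servers reject HEAD requests, so fall back to GET.
        resp = requests.head(url, timeout=timeout, allow_redirects=True)
        if resp.status_code >= 400:
            resp = requests.get(url, timeout=timeout, allow_redirects=True)
        return "valid" if resp.status_code < 400 else f"broken ({resp.status_code})"
    except requests.RequestException as exc:
        # DNS failures and timeouts cover the "invented domain" case.
        return f"unreachable ({type(exc).__name__})"

cited_urls = [
    "https://www.ledevoir.com/",      # real outlet: should resolve
    "https://fake-example.ca/greve",  # invented source: should fail
]
for url in cited_urls:
    print(url, "->", check_url(url))
```

A check this cheap is exactly why the 37% figure is so damning: verification is trivial to automate, yet the systems ship unverified citations anyway.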

One of the most emblematic cases involved Google's Gemini, which invented a news site called “fake-example.ca” to report on a strike by school bus drivers that never happened in Quebec. Grok completely distorted a report, claiming that asylum seekers had been “mistreated in Chibougamau”, when the original article actually reported the success of an employability initiative.

Roy also identified a phenomenon he called “generative conclusions” - interpretations created by the AI without support from real sources, detected in 111 cases. In other words, the AIs not only failed to cite sources: they invented interpretations and facts.

The conclusion of 22 communications organizations that analyzed the study was clear: “almost half of all AI responses had at least one significant problem”.

Why It Matters Now

We live in a time when millions of people - especially young people - turn to chatbots for information. When you ask ChatGPT or Gemini about a recent event, the answer comes packaged in confident, formal, structured language. It sounds like knowledge. But as Roy's experiment shows, it is often just an illusion of knowledge.

AI doesn't “know” facts. It predicts linguistic patterns based on trillions of text tokens. When forced to cite specific sources, it often invents them - not out of malice, but because its aim is to generate a plausible answer, not a true one.
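A toy illustration of why this happens: a model assembles its answer piece by piece, always picking a statistically plausible continuation, and nothing in that loop ever checks the result against reality. The vocabulary and probabilities below are invented for illustration; real models work over learned distributions across tens of thousands of tokens:

```python
# Toy sketch of next-token sampling. The "model" is a hand-made probability
# table; the point is that each step optimizes plausibility, and no step
# checks whether the assembled URL actually exists.
import random

next_tokens = {
    "https://": [("www.", 0.9), ("news.", 0.1)],
    "www.":     [("quebec-daily", 0.5), ("ledevoir", 0.3), ("mtl-news", 0.2)],
    "news.":    [("quebecor", 0.6), ("mtl", 0.4)],
}

def sample(prefix: str) -> str:
    tokens, weights = zip(*next_tokens[prefix])
    return random.choices(tokens, weights=weights, k=1)[0]

step1 = sample("https://")
step2 = sample(step1)
print("https://" + step1 + step2 + ".ca")
# e.g. "https://www.quebec-daily.ca" - perfectly plausible, possibly nonexistent
```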

In my work with companies and governments, I see this confusion every day. Executives ask me: “Felipe, can I use AI to generate market intelligence reports?” My answer is always the same: yes, but with a human being checking every statement, every number, every source. AI amplifies your capacity for synthesis, but it doesn't replace your responsibility for the truth.

Oracle Sued For Not Disclosing Billion-Dollar AI Infrastructure Costs

On a related front, as reported by Reuters via InfoMoney, Oracle is facing a class action lawsuit brought by bondholders who claim to have suffered losses because the company did not disclose that it would need significant additional financing to build its AI infrastructure.

The context: seven weeks after announcing a five-year, US$ 300 billion contract to provide data processing infrastructure to OpenAI, Oracle sought US$ 38 billion in loans to finance two data centers.

Investors say they only became aware of the higher credit risk after the loans were announced, which caused bond prices to fall and yields to climb. The central allegation of the lawsuit is that Oracle executives - including Larry Ellison and Safra Catz - made false or misleading statements in the offering documents for the debt sold in September.

The Brutal Mathematics of AI Infrastructure

This case exposes something that few people talk about openly: AI infrastructure is absurdly expensive, and many companies are discovering this too late.

Contracts such as Oracle's with OpenAI look like guarantees of future revenue, but they come with monumental capital obligations. Advanced AI data centers are not warehouses full of old servers. They are facilities that demand (see the back-of-envelope sketch after this list):

  • State-of-the-art chips (Nvidia's H200, AMD's MI325X) costing tens of thousands of dollars per unit
  • Industrial cooling systems capable of dissipating the heat of thousands of GPUs operating simultaneously
  • Dedicated power connections - some facilities consume as much electricity as small towns
  • Very low latency network infrastructure for interconnecting clusters
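To make the scale concrete, here is a back-of-envelope sketch. Every number is an illustrative assumption (cluster size, power draw, PUE, electricity price, chip cost), not a figure from the Oracle case:

```python
# Back-of-envelope sketch of one AI cluster's footprint.
# Every input below is an illustrative assumption, not data from the lawsuit.
gpus = 10_000            # assumed cluster size
gpu_watts = 700          # assumed draw per accelerator, H100/H200-class
pue = 1.3                # assumed power usage effectiveness (cooling overhead)
usd_per_kwh = 0.08       # assumed industrial electricity price
gpu_unit_cost = 30_000   # assumed, "tens of thousands of dollars per unit"

facility_kw = gpus * gpu_watts * pue / 1_000
annual_kwh = facility_kw * 24 * 365
annual_power_bill = annual_kwh * usd_per_kwh

print(f"Facility draw:     {facility_kw:,.0f} kW")         # ~9,100 kW
print(f"Annual energy:     {annual_kwh:,.0f} kWh")         # ~80 million kWh
print(f"Annual power bill: ${annual_power_bill:,.0f}")     # ~$6.4 million
print(f"Chip capex alone:  ${gpus * gpu_unit_cost:,.0f}")  # $300 million
```

Even under these conservative toy numbers, the chips alone cost hundreds of millions before a single watt is billed - which is why a US$ 300 billion contract forces a company to borrow.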

When Trump imposed a 25% tariff on AI chips - specifically Nvidia's H200 and AMD's MI325X - he added another layer of cost and complexity. Although the measure is restricted in scope (it doesn't apply to chips destined for US data centers, startups or general consumption), it signals a strategic shift: the US wants to encourage domestic semiconductor manufacturing, but at the cost of putting even more pressure on the margins of companies that depend on these technologies.

The Oracle case is a reminder: the AI race is not just a technological race. It's a race for capital, energy and physical infrastructure.

Elon Musk: Electricians and Plumbers Will Be the Millionaires of the Future

Speaking of infrastructure, Elon Musk made a prediction that seems ironic at first glance but makes perfect sense once you understand the mathematics behind AI. As reported by hardware.com.br, Musk stated that electricians and plumbers will be the millionaires of the future, while screen-bound digital jobs will quickly be replaced by AI.

Jensen Huang, CEO of Nvidia, endorsed the forecast, pointing to a boom in specialized physical work, and encouraged young people to focus on manual trades such as electrical and plumbing work - the trades that will build the infrastructure needed for AI data centers, a market McKinsey projects to reach US$ 7 trillion worldwide by 2030. Larry Fink, of BlackRock, warned about the shortage of skilled labor - especially electricians - needed to build the physical infrastructure of AI.

Why This Prediction Makes Sense

Musk isn't just being provocative. He's reading the material reality of AI. While the public narrative focuses on language models, autonomous agents and conversational interfaces, the real battle is happening behind the scenes: who will be able to build, power and cool the data centers needed to train and run these models at scale?

Think about it: each large language model requires thousands of GPUs running 24/7 for months. Each interaction with a chatbot consumes energy. Each autonomous agent processing tasks in the background adds load to the power grid. As Trump warned via Truth Social, AI companies should fully bear their own energy costs to avoid putting pressure on the national electricity grid and driving up residential tariffs.
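How much load does that add up to? A sketch with invented but order-of-magnitude assumptions - the per-query energy and traffic figures below are illustrative, not measurements from any provider:

```python
# Sketch: aggregate grid load from chatbot inference alone.
# Both inputs are illustrative assumptions, not measurements.
wh_per_query = 0.3               # assumed energy per chatbot response (Wh)
queries_per_day = 1_000_000_000  # assumed global daily traffic for one service

daily_mwh = wh_per_query * queries_per_day / 1_000_000
homes_equivalent = daily_mwh * 1_000 / 30  # assuming ~30 kWh/day per US home

print(f"Daily consumption: {daily_mwh:,.0f} MWh")     # 300 MWh/day
print(f"Equivalent homes:  {homes_equivalent:,.0f}")  # ~10,000 homes
# And that is before counting training runs or background autonomous agents.
```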

The expansion of AI has already strained power grids in US states with a high concentration of servers, with price spikes of up to 36%. Major CEOs, such as Meta's Mark Zuckerberg, have already warned that energy is the biggest bottleneck for AI growth.

The trend is clear: the private sector will be forced to invest in its own energy generation - whether through small modular nuclear reactors (SMRs), dedicated solar parks or natural gas generators. And who will install, operate and maintain all this? Electricians, plumbers, industrial refrigeration technicians, construction engineers.

Musk described AI as a “supersonic tsunami”. But tsunamis don't happen in the air: they require water, land and physical force. AI, likewise, doesn't just happen in code. It requires material infrastructure, sweat and specialized physical work.

Grok, Deepfakes and the Global Crisis of Accountability

While Musk talks about the future of electricians, his own company's chatbot - Grok, from X - is at the center of a global accountability crisis. As reported by ISTOÉ DINHEIRO, the X platform announced measures to stop Grok from generating deepfakes, following global criticism over images involving women and children.

X will geo-block the ability to create images of people in revealing clothing (such as bikinis) in jurisdictions where this is illegal. The restriction applies to all users, including paid subscribers. The announcement came after California Attorney General Rob Bonta opened an investigation into Musk's AI company, xAI.

Previously, X had limited the functionality to paid subscribers only - a move criticized internationally by British Prime Minister Keir Starmer and European Commission President Ursula von der Leyen as an “affront to the victims”.

An analysis by AI Forensics revealed that of more than 20,000 images generated by Grok, more than half showed scantily clad people (81% women, 2% minors). The international outrage led Indonesia and Malaysia to suspend access to Grok, and the UK regulator has opened a formal investigation into X.

Musk's Defense and the Real Problem

As reported by G1, Elon Musk defended Grok, claiming he had no knowledge of images of minors generated by the AI. Musk argued that Grok refuses to generate illegal content because its operating principle is to obey the law, and suggested that the failures resulted from users attacking the prompt.

Despite Musk's defense, Grok itself admitted failures in its protection mechanisms that led to the generation of sexualized images of minors at the beginning of January.

Here's the problem: AI systems have no “intention”. They have optimization goals. If the goal is to generate an image that matches the user's prompt, and if the guardrails are insufficient, the system will generate the image - not because it wants to cause harm, but because it has been trained to predict pixels that match the text.
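The fix, then, is architectural rather than rhetorical: a moderation gate that sits between the prompt and the generator and fails closed. The sketch below shows the pattern only - the classifier, policy labels and keyword rule are hypothetical placeholders, not a description of xAI's actual pipeline:

```python
# Minimal sketch of a pre-generation moderation gate (fail closed).
# `policy_classifier` is a hypothetical placeholder - in production this
# would be a trained safety model; nothing here describes xAI's system.
BLOCKED_CATEGORIES = {"sexual_content_minors", "nonconsensual_imagery"}

def policy_classifier(prompt: str) -> set:
    """Placeholder: return the policy categories a prompt triggers."""
    flags = set()
    if "bikini" in prompt.lower():  # toy keyword rule, not a real detector
        flags.add("nonconsensual_imagery")
    return flags

def generate_image(prompt: str) -> str:
    flags = policy_classifier(prompt) & BLOCKED_CATEGORIES
    if flags:
        # Refuse *before* any pixels are generated.
        return f"REFUSED: prompt violates policy {sorted(flags)}"
    return f"<image for: {prompt!r}>"  # stand-in for the actual generator

print(generate_image("a landscape in Quebec"))
print(generate_image("celebrity in a bikini"))
```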

The responsibility does not lie with the AI. It lies with those who design it, train it, deploy it and make it publicly available. When you release an image-generation tool with weak guardrails - or worse, when you charge to bypass those guardrails, as X initially did by limiting certain features to paying users - you are prioritizing revenue over safety.

Nvidia CEO Jensen Huang recently criticized the “end of the world narrative” around AI, arguing that this pessimism scares off investors and hinders the development of safer technologies. He has a point: exaggerated alarmism can paralyze responsible innovation.

But Grok's case is not scaremongering. It's a concrete example of how a lack of technical and ethical responsibility can cause real, measurable harm to real people. On YouTube, more than 20% of the feed is already “slop” (junk content generated by AI). On social media, sexualized deepfakes proliferate. In corporate environments, executives make decisions based on AI-generated reports that cite invented sources.

The question is not whether AI will transform the world. It already is. The question is: are we building the necessary guardrails at the same speed as we build the tools?

Can AI Create New Ideas or Just Repeat Old Ones?

In the midst of this tension between promise and risk, it's worth looking at a deeper debate: can AI really create new knowledge, or does it just recombine what already exists?

As reported by Folha de S.Paulo (via The New York Times), the debate was reignited after the startup Harmonic used OpenAI's GPT-5.2 Pro to help solve an “Erdős problem” - one of the complex mathematical challenges posed by the legendary mathematician Paul Erdős.

Critics such as Terence Tao, one of the greatest living mathematicians, argue that AI is like a “smart student who has memorized everything”: it simulates understanding without generating genuine brilliance. Still, AI is already proving to be a powerful tool: recent systems suggest hypotheses and experiments that scientists had not considered, accelerating research.

Although it was initially claimed that GPT-5 had solved 10 Erdős problems (a claim corrected when it emerged that many of the solutions already existed in the literature), the technology demonstrates real value by combing through vast amounts of data and forgotten references - such as a German article, surfaced for mathematician Thomas Bloom, that no one else had noticed.

AI as an Amplifier, Not a Substitute

My position on this is pragmatic. AI currently amplifies the researcher, but it still requires human expertise to guide and filter the results. It doesn't replace the mathematical intuition of a Terence Tao, but it can help explore dead ends more quickly or find obscure connections in scattered literature.

In my work with startups and large companies, I see this dynamic every day. Executives ask me: “Felipe, is AI going to replace our analysts?” My answer: not the good ones. It will replace those who only compile information without questioning, contextualizing or validating.

The analysts who survive - and thrive - will be those who master “vibe coding”, a practice championed by Alexandr Wang, head of Meta's AI lab. As reported by Xataka Brasil, Wang recommended that Generation Z spend all their time practicing “vibe coding” - that is, talking to AI models, precisely explaining the desired result in natural language and supervising the code they generate.

The future of programming, according to Wang, lies not in mastering specific languages (Python, JavaScript), but in mastering the art of communicating with AI and validating what it produces.

That's not intellectual laziness. It's a change of layer: instead of spending hours writing boilerplate (repetitive, structural code), you spend hours refining the logic, validating hypotheses and ensuring that the final system solves the user's real problem.
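In practice, “supervising the output” means writing the checks yourself. A minimal sketch of that loop - the function body stands in for AI-generated code, the asserts are the human layer, and all names here are illustrative:

```python
# Sketch of the vibe-coding loop: the function body stands in for code an
# AI generated from a natural-language prompt; the asserts are the human's
# non-delegable job.

def parse_brl(value: str) -> float:
    """Prompt given to the AI: 'convert a Brazilian price string to float'."""
    return float(value.replace("R$", "").strip()
                      .replace(".", "").replace(",", "."))

# Human-written validation - the part vibe coding does NOT delegate.
assert parse_brl("R$ 1.234,56") == 1234.56
assert parse_brl("R$ 0,99") == 0.99
assert parse_brl("10,00") == 10.0
print("AI-generated parser passed human-written checks")
```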

What This Moment Demands of You

These 24 hours reveal something fundamental: AI is being adopted at scale before solving basic problems of reliability, ethics and responsibility. That doesn't mean we should stop. It means we must navigate with our eyes open.

Here are the principles that I apply in my work with companies and governments, and that you can apply now:

1. Never Trust AI Output Blindly

Jean-Hugues Roy's experiment made this crystal clear: only 37% of AI responses provided valid URLs. If you are using AI to generate reports, summaries or analyses, institute a human validation process. Every statement, every number, every source needs to be verified.

2. Understand the Real Costs of AI Infrastructure

If you're an executive or entrepreneur considering investing in AI, go beyond the API bill. Ask about computing costs, storage, energy, latency, regulatory compliance. Oracle's case is a reminder: AI contracts can come with monumental capital obligations.

3. Invest in Technical and Ethical Guardrails

If you are developing or deploying AI systems, prioritize security by design. Don't wait for the international media to investigate your tool to discover that it generates deepfakes of minors. Test adversarially. Anticipate abuse. Build robust limits.

4. Develop AI Literacy, Not Just AI Use

“Vibe coding” is an important skill, but it's not enough. You need to understand how the models work, what their limitations are and where they tend to fail. This doesn't require a PhD in machine learning, but it does require curiosity and intellectual rigor.

5. Recognize the Value of Specialized Physical Labor

Musk's prediction about electricians and plumbers is no joke. It's strategy. Whether you're a young person choosing a career, or you lead a company that depends on physical infrastructure, invest in specialized professions that build, maintain and operate real systems. AI can write code, but it can't install a 500 kVA transformer in a data center.

The Future We Are Building

I refuse to be alarmist. But I also refuse to be naïve. AI is transforming the world faster than any previous technology. This transformation brings monumental opportunities - from more accurate medical diagnoses to accelerated scientific research - but it also brings monumental risks: disinformation at scale, concentration of power, erosion of privacy, harm to real people.

What makes me optimistic is not the technology itself. It's the human capacity to learn, adapt and build institutions that channel the power of technology for the common good. But this doesn't happen automatically. It requires conscious choices, responsible leadership and deliberate investment in technical, ethical and regulatory guardrails.

In Brazil, we have a unique opportunity. We can learn from the mistakes of other markets - like the Grok case, or the reliability crisis exposed by the Canadian experiment - and build AI ecosystems that prioritize transparency, verifiability and accountability from the outset.

The IMD ABRAPE | Peppow 2025/2026 survey on the events sector in Brazil, as reported by Meio & Mensagem, shows that 79.7% of professionals declare familiarity with AI, but its use remains narrow. The technology is mostly applied to content creation (texts, images) and little used in financial management (margin, cash flow), which still depends on disconnected spreadsheets.

This illustrates perfectly where we are: high surface adoption, low deep integration. Maturity will come when AI stops being a one-off productivity tool and becomes part of the operational infrastructure - but this requires structured data, organized processes and clear governance.

How You Can Act Now

If you are an executive, entrepreneur or team leader, here are concrete actions you can take today:

  • Audit how your organization uses AI. Identify where AI outputs are consumed without human validation and institute verification processes.
  • Calculate real infrastructure costs. If you're planning to expand the use of AI, go beyond the API bill. Consider computing, energy, compliance and staff training.
  • Invest in AI literacy. Not just training on tools, but education on how models work, what their limitations are and how to validate outputs.
  • Build guardrails from the ground up. If you're developing products with AI, prioritize security, privacy and accountability from the start - not as a later fix.
  • Recognize and value specialized physical work. If your AI infrastructure depends on data centers, power or network, invest in the people who build, maintain and operate these systems.

In my mentoring program, I help executives and companies navigate exactly these tensions: how to adopt AI responsibly, how to build internal capacity, how to avoid hype traps and focus on real impact. If you are leading digital transformation in your organization and want to build with clarity and purpose, let's talk.

AI is not an inevitable force that happens to us. It is a technology that we are building, deploying and governing. The choices we make today - about transparency, accountability, investment in physical and moral infrastructure - will define whether AI will be a tool for shared progress or for the concentration of power and harm.

These 24 hours have shown us both sides of the coin: the fragility of information, the hidden costs, the crises of responsibility - but also the potential to amplify human capacity, accelerate research and build real infrastructure for the future.

Which side will we choose?


✨ Did you like it? You can sign up to receive 10K Digital's newsletters in your email, curated by me, with the best content about AI and business.

➡️ Join the 10K Community here

