Felipe Matos Blog


Microsoft CEO Warns Of ‘AI Psychosis’ As 95% Of Corporate Projects Fail - Why This Defines The Most Critical Moment In Responsible Adoption

August 23, 2025 | by Matos AI


The warning came straight from the heart of Microsoft: Mustafa Suleyman, CEO of AI at the tech giant, has officially defined a new phenomenon called “AI psychosis” - users who project human consciousness and emotions onto chatbots, developing delusions, mania and even paranoia. At the same time, an MIT study reveals that 95% of corporate AI adoption attempts fail to generate measurable financial impact.

It's no coincidence that these two warnings appear simultaneously. They reveal opposite sides of the same coin: while some users create dangerous emotional bonds with AI, most companies are still unable to extract real value from it.

The Disturbing Reality of “AI Psychosis”

When we talk about "AI-associated psychosis", we are not dealing with a marginal problem. Psychiatrists have already reported hospitalizations for psychosis following prolonged interactions with AI. The phenomenon includes cases of self-mutilation and even deaths associated with obsessive dependence on these tools.

The most worrying part? One in ten Brazilians already uses chatbots to vent. And we're not just talking about people in vulnerable situations - any user can be affected.

The problem has clear technical roots: these systems are designed to be agreeable and to simulate empathy they do not actually possess, which encourages users to project consciousness and emotion onto them.

In my experience helping companies implement AI, I realize that the lack of education about these risks is one of the biggest obstacles to responsible adoption.

Corporate Failure Revealed by MIT

While some users develop dangerous dependencies, the corporate world faces the opposite extreme: 95% of corporate AI adoption attempts fail to generate measurable financial impact.

The MIT study is brutal in its conclusions: popular projects such as ChatGPT and Microsoft Copilot show fundamental technical limitations - lack of learning from feedback and inability to adapt to the context. They function more as individual support than structural transformation.

The reality is that we are investing billions in a technology we still don't know how to use effectively. Research suggests that AI could add US$6 trillion to the global economy by 2030, but only "if there is substantial acceleration in productivity" - something we have yet to see happen.

The Technical Advances That Could Change Everything

It's not all bad news. Google has announced a 97% reduction in energy consumption per Gemini question - surprisingly, each question now consumes just 0.24 Wh, equivalent to 9 seconds of TV.

This advance in energy efficiency is crucial for the sustainability of AI on a global scale. Even as processing volume jumps by 4,947% in a year, reaching hundreds of trillions of tokens a month, the energy footprint is becoming more manageable.
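The TV comparison is easy to sanity-check. A short sketch below verifies it, using the 0.24 Wh figure reported above; the 100 W power draw for a typical TV is my own assumption for the comparison:

```python
# Sanity check: how many seconds of TV viewing equal one Gemini query?
# 0.24 Wh per query comes from the article; 100 W for a TV is an assumption.

WH_PER_QUERY = 0.24     # reported energy per Gemini question
TV_POWER_W = 100        # assumed power draw of a typical TV

joules_per_query = WH_PER_QUERY * 3600        # 1 Wh = 3600 J
tv_seconds = joules_per_query / TV_POWER_W    # seconds of TV at 100 W

print(f"{tv_seconds:.2f} s of TV per query")  # 8.64 s, roughly the 9 s cited
```

At 8.64 seconds, the figure rounds to the "9 seconds of TV" cited in Google's announcement.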

At the same time, Apple is considering integrating Google's Gemini into a redesign of Siri. This is a sign that even giants known for prioritizing in-house development recognize the need for strategic partnerships in AI.

Brazil at the Forefront of Responsible Governance

While the world debates the risks of AI, Brazil is taking practical steps. The Federal Senate instituted the ApoIA Program, establishing ethical guidelines for the use of AI in its activities.

The program's rules are clear and sensible.

This positions the Senate as a benchmark in AI governance in the public sector, something that other institutions should follow.

Nokia also sees Brazil as a strategic candidate to become a sovereign AI hub, highlighting its advantages in renewable energy, technological talent and distributed infrastructure.

The Emerging Risks We Must Monitor

The warnings don't stop there. Phone scams are already exploiting Google's AI-generated search results to display fake numbers. This shows how the democratization of AI can facilitate sophisticated fraud.

A study of more than a thousand engineers revealed another problem: professionals who use AI are rated as less competent, even when their work is of equal quality. This "invisible cost" is especially pronounced for women.

And TikTok could eliminate hundreds of human moderator positions by replacing them with AI - a trend that raises questions about the quality of moderation and the impact on jobs.

The Adoption Paradox: Between Hype and Reality

There is a fascinating paradox emerging. Marketing agencies report that, despite reduced website traffic caused by AI-powered search results, conversion rates remain stable or are growing.

This suggests that AI is filtering consumers, delivering more qualified leads. It's a sign that, when applied well, technology can generate real value - even in unexpected ways.
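A toy calculation makes the agencies' observation concrete. The traffic and conversion numbers below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical illustration: if AI search answers cut site visits but the
# remaining visitors still convert in the same absolute numbers, the
# conversion *rate* necessarily rises.

visits_before, conversions_before = 10_000, 200
visits_after, conversions_after = 7_000, 200   # 30% less traffic, same conversions

rate_before = conversions_before / visits_before   # 0.02   -> 2.0%
rate_after = conversions_after / visits_after      # ~0.0286 -> ~2.9%

print(f"before: {rate_before:.1%}, after: {rate_after:.1%}")
```

Fewer but better-qualified visitors produce a higher rate from the same conversion volume - consistent with the "AI as a filter" interpretation.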

At the same time, Paul Krugman warns of the risks of a global recession caused by limitations in the energy infrastructure to support investments in data centers. It's a reminder that unbridled growth without sustainable planning can have severe economic consequences.

Building AI For People, Not To Be People

Mustafa Suleyman's quote sums it up: “We should build AI for people; not to be a person”.

In more than two decades of helping companies navigate technological transformations, I've learned that success always depends on three fundamental pillars:

  1. Education and awareness: teams that understand both the opportunities and the risks
  2. Gradual and responsible implementation: start small, measure results, iterate
  3. Clear governance: rules, responsibilities and human supervision

The current moment in AI demands maturity. We cannot fall prey to unbridled hype or paralyzing pessimism. We need a balanced approach that recognizes both the transformative potential and the real risks.

The Way Forward

The news of the last 24 hours paints a complex picture: we have warnings about psychological addiction, massive corporate failure, impressive technical advances, responsible regulation and emerging risks. All happening simultaneously.

This isn't chaos - it's the reflection of a rapidly evolving technology that we're still learning to use. The important thing is not to get carried away by extremes.

For companies, the lesson is clear: before implementing AI, invest in education and governance. Understand the risks, start with specific and measurable use cases, and always maintain human supervision.

For professionals, the message is twofold: learn to use AI as a tool, but keep an eye on its limits. And be prepared for possible evaluation biases in the job market.

For individual users, awareness is key. Chatbots are not friends, therapists or confidants. They are sophisticated tools that can simulate empathy, but do not possess it.

The future of AI will be determined by the decisions we make now about how to develop, regulate and use it. It is a moment of collective responsibility that will define whether this technology will be a force for good or a source of profound social problems.

In my mentoring work, I help leaders and companies navigate exactly these challenges - turning AI's potential into real value, in a responsible and sustainable way. Because in the end, it's not just about technology. It's about building a future where artificial intelligence amplifies the best of humanity, without replacing what makes us human.