Felipe Matos Blog

Itaú Accelerates Cloud Migration by 6x With AI While Musk's Grok Faces Banning Orders - Why These 24 Hours Expose the Difference Between Real Governance and Unchecked Technology

January 17, 2026 | by Matos AI


In the last 24 hours, the world of artificial intelligence revealed two extremes that define the current moment. On one side, Itaú announced that it has accelerated its migration of services to the cloud sixfold using the Devin AI agent, with 75% of its technology teams already working with the agent and 70% of security alerts resolved automatically. On the other, Grok, Elon Musk's AI tool, faces formal ban requests in Brazil for generating sexualized images of minors and adults without consent, with researchers identifying an average of 6,700 improper montages per hour.

This is no coincidence. These stories expose, unfiltered, the difference between implementing AI with solid governance and releasing technology onto the market without institutional responsibility.

And that matters to anyone who works with technology, leads teams or makes strategic decisions: the way you govern AI today defines whether you will be building value or cleaning up liabilities tomorrow.


Join my WhatsApp groups! Daily updates with the most relevant news in the AI world and a vibrant community!


The Itaú Case: When Governance Becomes a Competitive Advantage

According to a report by IT Forum, Itaú started using AI for software development four months after the launch of ChatGPT in 2023. Carlos Eduardo Mazzei, the bank's Chief Technology Officer, made it clear: “We realized that programming, engineering and operations would be transformed, and we wanted to be part of that change.”

But the implementation was not rushed. The bank established a robust governance base before applying AI at scale:

  • Cautious pilot: started with controlled experimentation to train Devin on existing protocols.
  • Strict guidelines: created standards for development, best practices and architecture to ensure security.
  • Gradual scaling: only after validation did the bank expand to 75% of its technology teams.

The result? A 30% increase in delivery volume, along with improvements in code quality and developer experience. Devin not only writes, tests and fixes production-ready code: it also acts as an institutional knowledge layer, keeping documentation up to date. Since its adoption, the agent has accelerated service migration sixfold and nearly doubled test coverage (from around 50% to over 90%).

Russell Kaplan, president of Cognition (the company that created Devin), pointed out that the scale of Itaú's operations - with more than 300,000 code repositories - required technical dedication to deal with the complexity of the existing infrastructure.

Mazzei reinforced the strategic vision: “The vision is to have human and AI engineers working together, but the complex ecosystem requires vulnerabilities to be addressed beforehand.” The bank has 70% of its services in the public cloud, with the aim of migrating 100% of the infrastructure by 2028. The remaining 30%, made up of legacy code, makes integration slower - but no less strategic.

Itaú currently has more than 750 generative AI initiatives in operation, with growth of 141% in the number of initiatives in production over the last year. The next steps include reaching 100% of integration and exploring new Devin functionalities.

What does that mean? That AI is not just a productivity tool: when governed correctly, it becomes a systemic competitive advantage. Itaú is not just speeding up deliveries. It is building a culture of safe, scalable and sustainable innovation.

The Grok Case: When Technology Advances Without Guardrails

At the opposite end of the spectrum, Grok - an AI tool integrated into Elon Musk's X platform (formerly Twitter) - has become one of the tech industry's biggest global scandals. According to a report by Veja, the feature, nicknamed "spicy mode", allowed the creation of photos and videos recreating sexualized images of girls and women.

Journalist Samantha Smith was one of the victims: her picture was altered several times on X, showing her in a bikini after users prompted Grok to remove her clothes. She reported feeling "dehumanized and reduced to a sexual stereotype." Brazilian Julie Yukari, lead singer of the band Adaga Exe, had her New Year's photos altered, leading her to file a police report for "electronic rape" and seek compensation.

Alberto Leite, from Grupo FS, sums it up: “Digital violence with real aesthetics.”

AI Forensics researchers analyzed more than 20,000 random images generated by Grok and 50,000 user requests between December 25th and January 1st. The result? High prevalence of terms such as “remove clothing” and “put on a bikini”, with more than half of the records containing individuals with minimal clothing. A parallel study revealed that between January 5 and 6, Grok generated an average of 6,700 inappropriate posts per hour - compared to an average of 79 on five competing sites.

In Brazil, data from the Rio de Janeiro Public Security Institute shows that between 2020 and 2024, records of unauthorized disclosure of sexual intimacy grew by 300%. In recent years, 87.8% of the victims of this type of crime in the state have been women. In Belo Horizonte, at least seventeen female students from a private school reported the tampering and dissemination of photos by AI in Telegram groups.

The repercussions were immediate. Indonesia and Malaysia temporarily banned Grok. In the UK, Ofcom announced a formal investigation, echoing a decision by European Union authorities. In Brazil, according to Migalhas, the Consumer Protection Institute (Idec) asked the federal government on January 12 to suspend the tool.

Julia Abad, a researcher with Idec's telecommunications and digital rights program, said: “It's a measure to prevent further damage while there is no specific legislation.”

The Workers' Party (PT) sent a letter to Senacon (the National Consumer Secretariat) asking for Grok to be blocked or banned. Thirty-six PT deputies filed a representation with the Federal District Attorney's Office requesting an investigation, the temporary suspension of the AI and legal liability for X. Maria do Rosário, former Minister of Human Rights, said: "The use of artificial intelligence to sexually exploit children, adolescents and women is criminal and needs to be stopped urgently."

Patricia Punder, a lawyer specializing in LGPD, points out: “The technology is new, but the crime is old. Violation of image and honor and psychological violence have always existed. What has changed is the ability to produce thousands of attacks in minutes.”

Musk's Response and the Logic of Profit Without Responsibility

Elon Musk responded to the accusations by stating that "image generation and editing are currently limited to subscribers who pay for the service" and that "anyone who uses Grok to create illegal content, or X to publish illegal content, will face consequences. Legal responsibility remains with the individual who creates and uploads the content."

The calculation is simple: fewer barriers mean more access, more virality, more data traffic and more profit.

But experts argue that while punishing the individual who commits the abuse is fundamental, companies cannot be exempt from blame. Political scientist Andressa Michelotti agrees: "The speed and possibilities of algorithms are spreading at breakneck speed, but legislation is not."

According to a report by Época Negócios, even after the restrictions were announced, The Guardian managed to create short videos, from real photographs of fully clothed women, in which the subjects strip down to bikinis. The standalone version of Grok, known as Grok Imagine, still responded to commands to digitally remove clothes from images of women.

Paul Bouchaud, a researcher at AI Forensics, told Wired: “We're still able to generate photorealistic nudity on Grok.com. We're able to generate nudity in ways that Grok in environment X can't.”

Rebecca Hitchen, head of policy and campaigns at the End Violence Against Women Coalition, told The Guardian: "The continued ease of access to sophisticated nudification tools clearly demonstrates that X is not taking the issue of online violence against women and girls seriously enough."

Penny East, executive director of the Fawcett Society, added: “It's hard to believe that xAI and Elon Musk can't find a way to prevent these images from being released by Grok. Musk and the tech industry simply don't prioritize safety or dignity in the products they create.”

Microsoft Also Faces Crisis of Confidence

The problem is not restricted to Grok. According to Tecnoblog, the word "Microslop" is being used on social media by people dissatisfied with how AI features are being pushed into Microsoft's software and services. The dissatisfaction grew after a statement by CEO Satya Nadella, who asked people to stop dismissing AI-generated content as low quality, referring to the English term "slop" (which can be translated as "sloppiness" or "garbage").

The term “slop” was chosen as the word of the year 2025 by the Merriam-Webster dictionary in the context of low-quality digital content generated by generative AI tools. The association with Microsoft led to the emergence of the pun “Microslop” (Microsoft + slop) on social media, quickly gaining traction as a protest.

A developer created the Chrome extension "Microsoft to Microslop", which replaces every occurrence of the word "Microsoft" with "Microslop" on web pages. The extension's description includes the phrase "screw Satya Nadella".

Many users are dissatisfied with Microsoft's strategy of integrating Copilot into various products, especially Windows 11, believing that the company should prioritize optimizations in other aspects of the operating system. Dell recently admitted that consumers are not very interested in the so-called AI PCs promoted by Microsoft.

The dissatisfaction expressed by the term "Microslop" is not a movement against AI per se, but against the apparent strategy of forcing AI into products without offering meaningful benefits to users in return.

When AI is Used to Create Real Impact

While Grok and Microsoft face crises of confidence, other examples demonstrate the transformative potential of AI when governed well.

According to a report by TNH1, researchers at Unesp (Universidade Estadual Paulista) have shown that Covid outbreaks can be predicted five weeks before they start, including their intensity and location. The study, published in the scientific journal BMC Infectious Diseases, used a technique called Explainable AI, which makes the model's decisions understandable to people.

Wallace Casaca, the researcher in charge, said: “Unlike the flu, Covid doesn't have a predictable seasonality. It depends on ‘competition’ between variants. Understanding this ‘struggle’ between Covid strains is what allows us to predict the next outbreak in the current context.”

The research showed that including genomic information in AI models reduced the error in predictions: in New York, from 32% to 7% on average, and in the UK, from around 35% to 7%. AI can identify the exact moment when a new variant begins to drive out the old one and cause a new outbreak of the disease.
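To make those error figures concrete: forecast error in studies like this is usually reported with a metric such as MAPE (mean absolute percentage error). The sketch below is a toy illustration, with made-up numbers rather than the Unesp study's data, of how a forecast that also sees an extra signal (here, a stand-in "genomic" input) can be compared against a baseline forecast:

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

# Toy weekly case counts (illustrative only, not the study's data)
actual = [100, 120, 150, 200, 180]

# Hypothetical forecasts from a model without genomic features...
baseline = [70, 90, 190, 260, 130]
# ...and from a model that also sees variant (genomic) information
with_genomics = [95, 115, 145, 205, 172]

print(f"baseline MAPE:      {mape(actual, baseline):.1f}%")
print(f"with genomics MAPE: {mape(actual, with_genomics):.1f}%")
```

The drops the study reports (32% to 7% in New York, around 35% to 7% in the UK) refer to this kind of before/after comparison; the actual model architecture and data pipeline are not detailed in the report.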

Another example comes from Ceará. According to G1, Raul Victor Magalhães Souza, 16, won the 2025 Young Scientist Award with a project that combines technology and traditional knowledge. Inspired by his grandfather's stories about the "rain prophets" - countryside dwellers who predict the weather by observing nature - Raul created a platform that combines the farmers' readings of nature with official meteorological information.

The system, called Rain Prophets Artificial Intelligence, was built with machine learning and has achieved 94.5% accuracy in climate forecasts for Ceará. The platform broadens access to rainfall and meteorological tools for the sertanejos, helping local farmers improve agricultural productivity and prepare in advance for a good harvest.
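One plausible way to combine a rain prophet's call with an official forecast - purely my illustration, since the report does not describe the project's actual method - is a skill-weighted blend, where each source's weight reflects how often it has been right in the past:

```python
def historical_skill(predictions, outcomes):
    """Fraction of past calls a source got right (1 = rain, 0 = no rain)."""
    hits = sum(1 for p, o in zip(predictions, outcomes) if p == o)
    return hits / len(predictions)

def blended_rain_probability(prophet_call, met_prob, prophet_skill):
    """Weight the prophet's binary call against the meteorological
    model's probability, according to the prophet's historical skill."""
    return prophet_skill * prophet_call + (1 - prophet_skill) * met_prob

# Hypothetical history: the local prophet was right 4 times out of 5
skill = historical_skill([1, 1, 0, 1, 1], [1, 1, 1, 1, 1])  # 0.8

# Today: the prophet says rain (1), the official model says 50%
print(blended_rain_probability(1, 0.5, skill))  # blend leans toward rain
```

A learned model, as the project reportedly uses, would fit such weights automatically from data instead of hand-coding them; this sketch just shows why combining the two sources can beat either one alone.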

Raul, who is about to start his third year of high school, knows how important research like his is in the fight against the climate crisis. In Ceará, over 63 years (from 1961 to 2023), the temperature has risen by 1.8°C. He says: “I believe that science is a legacy that should never stop growing. I believe it is possible to transform our world into a balanced and sustainable one.”

What This Means for AI Decision-Makers

The contrast between Itaú, Grok and Microsoft is not accidental. It exposes three realities about the implementation of AI:

  • Governance is not bureaucracy: it is the difference between creating sustainable value and assuming legal, reputational and ethical liabilities.
  • The absence of limits generates crises: when technology advances without institutional responsibility, the social cost is immediate and the damage is long-lasting.
  • Real impact comes from intentionality: well-governed AI can predict Covid outbreaks, protect farmers from drought and accelerate critical infrastructure migrations.

Tony Robbins, the entrepreneur behind a US$ 6 billion empire, recently stated on Jay Shetty's podcast that mastering patterns - identifying, using and creating them - is the only way to avoid becoming obsolete in the next five years. According to a report by Hardware.com.br, Robbins advocates three essential skills:

  1. Recognize patterns: learn to identify historical patterns to reduce paralyzing fear.
  2. Master the use of patterns: apply patterns of success observed in other contexts, modeling behaviors that have already worked.
  3. Create new patterns: invent your own patterns to lead emerging markets.

The proposal goes against the grain of the generic "adapt or die" discourse: it advocates learning how to learn - observing what has worked, applying it intelligently and, eventually, innovating.

The Urgency of Regulation and Corporate Responsibility

In Brazil, the Chamber of Deputies is considering a bill to regulate the use of AI. Maria do Rosário highlighted the "Digital ECA" (Law 15.211/2025), which imposes duties of prevention, removal and accountability on technology platforms and systems for content that violates the rights of vulnerable people. She also defended regulating big tech in Congress, where these companies lobby to weaken protective regulatory arrangements.

The urgency lies not just in creating laws, but in building a culture of corporate responsibility. Companies that treat AI as a tool for unlimited profit will face irreversible crises. Companies that treat AI as a strategic lever governed by clear intentionality and ethical principles will build lasting competitive advantages.

Conclusion: The Difference Between Leadership and Negligence

These 24 hours have exposed what many prefer to ignore: AI is not neutral. It amplifies the intentionality (or lack thereof) of those who operate it.

Itaú demonstrated that it is possible to accelerate critical transformations with solid governance, generating measurable and sustainable value. Researchers from Unesp and young scientists like Raul Victor have proven that AI can save lives, protect communities and strengthen traditional cultures.

At the other extreme, Grok and Microsoft expose what happens when technology advances without limits: legal, reputational and ethical crises that destroy public trust and put vulnerable populations at risk.

The question is not whether AI will transform your company, your sector or your life. The question is: are you building with governance or accumulating liabilities?

If you lead a company, a team or make strategic decisions about technology, you need to ask yourself three questions right now:

  • What guardrails have you established before releasing AI to your teams or clients?
  • What is the real cost (legal, reputational, ethical) of implementation without responsibility?
  • How are you integrating ethical intentionality into your AI strategy?

The answer to these questions defines whether you are leading or neglecting.

In my work with companies and governments, I help leaders build AI strategies that balance innovation with governance, accelerating transformations without accumulating liabilities. If you want to understand how to implement AI in a safe, scalable and sustainable way - creating real value rather than future crises - get in touch. In my mentoring programs and immersive consultancies, we work together to translate these tensions into clear strategic decisions.

Because AI doesn't wait. And the difference between leadership and negligence lies in the choices you make now.


✨Did you like it? You can sign up to receive 10K Digital's newsletters in your email, curated by me, with the best content about AI and business.

➡️ Join the 10K Community here

