Retail Uses AI To Reconnect Humans While Musk's Grok Faces Global Ban - Why These 24 Hours Reveal The Urgency Of Governance Before The Next Crisis
January 16, 2026 | by Matos AI

The last 24 hours have delivered a stark contrast in the present state of artificial intelligence. On the one hand, global retailers are using AI to humanize the physical store experience and reconnect brands with real people. On the other, Grok, Elon Musk's chatbot, has triggered an international crisis by producing 6,700 sexualized images per hour, including images of minors, leading countries to ban the tool and civil organizations to demand its immediate suspension in Brazil.
We are facing a clear crossroads: AI can be a strategic lever for business and humanized experiences, or a weapon of unbridled digital destruction. The difference lies not in the technology itself, but in the presence or absence of governance, transparency and accountability. And the decisions we make now - as companies, governments and professionals - will define whether the next decade will be one of shared progress or successive crises.
Let's dive into the facts of the last 24 hours and understand why this moment calls for urgent maturity.
Join my WhatsApp groups! Daily updates with the most relevant news in the AI world and a vibrant community!
- AI for Business: focused on business and strategy.
- AI Builders: with a more technical and hands-on approach.
Global Retail Uses AI to Humanize, Not Replace
The NRF Retail's Big Show, the world's largest retail event, held in New York from January 11 to 13, 2026, brought a surprising message: after years of predictions that e-commerce would kill the physical store, AI is being used to reconnect consumers with physical spaces, not to eliminate them.
Fabio Faccio, Renner's president, with 27 years of experience in retail, highlighted at the fair the need for human reconnection. Daniel Sakamoto, executive manager of the CNDL (National Confederation of Shopkeepers), said that in a scenario of excess offers, the emphasis falls on the shopping experience at the point of sale. AI helps customers find products and enjoy a consumer-friendly environment - sophisticated, interactive and humanized.
Alberto Serrentino, a partner at Varese Retail, pointed out that the physical store is gaining relevance as a brand experience point that goes beyond being a logistics hub. He highlights the rise of agentic AI (which performs actions with minimal human intervention) in automated digital journeys, such as scheduled purchases, and the need for physical stores to counter this with humanization and interaction.
Lyana Bittencourt, president of the Bittencourt Group, summed it up: “In a very automated world, the luxury will be to talk to someone of flesh and blood”. She predicts that AI agents will take care of searching for products and offers, but the physical store will be sought out for personalized curation.
Brazilian Cases: Renner and Magazine Luiza
Renner reopened its store in Morumbi (São Paulo) in July 2025 under a circular concept, with an investment of R$ 18 million. The store runs on 100% renewable energy, uses durable and recyclable materials, and has the EcoEstilo collection point for disposing of used items. Today, 80% of Renner's items are made with sustainable materials or processes. The fitting rooms include a lounge and booths for content creators, with the aim of turning the fitting room into a “dressing room”.
Magazine Luiza inaugurated Galeria Magalu in December 2025 at Conjunto Nacional, on Avenida Paulista. The 4,000 m² complex brings together Magalu, KaBuM!, Netshoes, Época Cosméticos and Estante Virtual. It offers experiences such as Casa da Lu, personalization of sports products, a skin scanner and a gamer arena.
These examples show that AI is not replacing people, but freeing up time and attention so that retail professionals can focus on what only humans do well: creating genuine connections, offering context and curation, and building memorable experiences.
Musk's Grok: 6,700 Sexualized Images Per Hour and a Global Ban
While global retail shows maturity in its use of AI, Grok, the chatbot of the social network X (formerly Twitter), owned by Elon Musk, was the protagonist of one of the biggest global crises in the technology industry. The tool allowed the creation of fake, sexualized images of real people, including children and adolescents, without consent.
Researchers from AI Forensics analyzed more than 20,000 random images generated by Grok and 50,000 user requests between December 25 and January 1. They found a high prevalence of terms such as “remove clothing” and “put on bikini”. More than half of the generated records contained individuals wearing minimal clothing. A study revealed that between January 5 and 6, Grok generated an average of 6,700 improper image manipulations per hour, compared to an average of 79 across five competing sites.
International and local reactions
Indonesia and Malaysia temporarily suspended access to Grok. In the United Kingdom, the online safety regulator Ofcom opened an investigation that could result in a fine of up to 10% of X's worldwide turnover. Prime Minister Keir Starmer said: “If X can't control Grok, we will, and we'll do it quickly”.
In France, Minister Anne Le Hénanff denounced Musk's decision to limit image editing to paid subscribers only as “insufficient and hypocritical”. The European Union imposed a precautionary measure, with Commission President Ursula von der Leyen warning: “We're not going to outsource child protection and consent to Silicon Valley. If they don't act, we will”.
In Brazil, the Workers' Party (PT) sent a letter to the National Consumer Secretariat (Senacon) requesting administrative and legal measures to block or ban Grok. The party argues that Brazilian legislation already provides sufficient instruments for such measures, including suspension and banning of the service on national territory. The Consumer Protection Institute (Idec) also filed a formal complaint with the ANPD (National Data Protection Authority) requesting the immediate suspension of Grok in the country.
Elon Musk responded by saying that legal responsibility lies with the user who creates and uploads the illegal content, not with the companies that develop the AI and the platforms. He argued that limiting access to paid subscribers prioritizes freedom of expression and that his detractors “simply want to suppress freedom of expression”.
Meta and WhatsApp: CADE Forces Opening to Third-Party Chatbots
In a parallel movement, the Administrative Council for Economic Defense (CADE) opened an administrative inquiry against Meta to investigate suspected abuse of a dominant position in the use of artificial intelligence on WhatsApp. The investigation examines whether WhatsApp's new terms, which would prevent AI tool providers from offering their technologies to users of the app (allowing only Meta AI), constitute anti-competitive conduct.
CADE issued a preventive measure suspending the application of the new terms until the assessment is completed, with a daily fine of R$ 250,000 for non-compliance. Meta backed down and allowed third-party AI chatbots to operate within WhatsApp in Brazil, suspending the new terms for 90 days.
A similar case occurred in Italy, where the competition agency questioned the policy in December, prompting Meta to back down. The European Union has also opened an antitrust investigation into the new rules.
This movement reveals that market governance works. When regulatory bodies act quickly and firmly, companies back down from anti-competitive practices. But the question is: why do we always have to wait for a crisis before taking action?
95% of Employees Already Use AI at Work - Without Governance
While governments and companies discuss policies, the reality is that AI has already entered the work routine in a practical, informal and decentralized way. According to research by MIT Brazil, 95% of employees already use personal artificial intelligence tools in their day-to-day work, without any formal control by the company.
The data shows that the majority of companies have not yet developed literacy in AI - not just technical training, but a collective understanding of limits, responsibilities, validation of responses and the impact of decisions. Without this, each employee creates their own rule, based on intuition and urgency.
There is also the less talked about risk of uncritical dependency. Without guidance, people tend to rely too much on answers, question less and use generic solutions to specific business problems. Technology speeds up work, but not necessarily the quality of decisions.
Governance in AI should not be seen as a brake. It is, in practice, a mechanism of responsible acceleration. Companies that offer suitable tools, set limits on their use and train people create an environment that is safer and more productive at the same time.
IMF, Tony Robbins and the Future of Work: Patterns and AI Literacy
The International Monetary Fund (IMF) released a study on new jobs in the age of AI, highlighting that one in 10 jobs advertised in advanced economies, and one in 20 in emerging market economies, already requires at least one new skill. The document states that countries should adopt policies to help workers adapt, acquire new skills and remain active in the labor market.
For Brazil, the IMF points out that the country sits alongside Mexico and Sweden as nations with high demand for new skills but relatively low supply. The IMF recommends: “These countries need to invest in training and ensure better education in science, technology, engineering and mathematics. They may also need to outsource activities or rely on foreign workers with these skills”.
Tony Robbins, the entrepreneur behind a US$ 6 billion empire, believes that mastering patterns - identifying, using and creating them - is the only way to avoid becoming obsolete in the next five years. He focuses on three essential skills:
- Recognize Patterns: Learn to identify historical patterns. Cycles of disruption have occurred before (Industrial Revolution, internet), and this reduces the paralyzing fear of taking action.
- Mastering the Use of Patterns: Applying patterns of success observed in other contexts, modeling behaviors that have already worked.
- Creating New Patterns: Inventing new patterns to lead emerging markets. This requires being able to design new business models, workflows or frameworks that others will copy.
The message is direct: it's not about age or degree, but about the willingness to observe, adapt and innovate.
Why This Moment Urgently Demands Maturity
The last 24 hours have delivered a clear picture of the present state of AI. On one side, global retailers using AI to humanize experiences, Brazilian professionals developing solutions that detect cancer through blood tests, and regulatory bodies acting firmly to ensure fair competition. On the other, a tool without guardrails generating thousands of abusive images per hour, companies trying to impose monopolies, and 95% of employees using AI without guidance.
I've been working with companies, governments and support organizations for years, and I've learned one thing: technology without governance is not innovation, it's systemic risk. AI can be a strategic lever, but only when it is embedded in an organizational culture that values transparency, accountability and digital literacy.
We can't wait until the next crisis to act. We can't pretend that blocking tools or ignoring informal use will solve the problem. We can't accept big tech leaders shifting legal responsibility to end users while profiting billions from the absence of guardrails.
AI is already at work. The question is: is governance?
What to Do Now: Concrete Action and Real Literacy
If you are a company leader, don't wait for external regulation to act. Invest in AI literacy across all areas, not just IT. Establish clear policies on what can and cannot be done with AI tools. Create safe environments for controlled experimentation. And above all, foster a culture where questioning AI is as valued as using it.
If you are a professional, develop your ability to recognize, use and create patterns. Learn to validate AI responses critically. Invest in hybrid skills: technical ability combined with creativity, empathy and discernment. And don't outsource your thinking to algorithms - use them as co-pilots, not pilots.
If you are a public policy maker, speed up the processing of regulatory frameworks without waiting for perfection. Take preventive action when necessary, as CADE has done. Invest in science, technology, engineering and mathematics (STEM) education from elementary school onwards. And ensure that AI benefits everyone, not just those with the capital to pay for premium access.
The window of opportunity is open, but it won't stay that way forever. The decisions we make now - as companies, governments and professionals - will define whether the next decade will be one of shared progress or successive crises.
In my mentoring and consulting work, I help companies and executives navigate this transition with strategic clarity, building AI governance that accelerates results without creating unnecessary risks. If your organization doesn't yet have a clear policy on the use of AI, now is the time to build it - before the next crisis hits.
✨Did you like it? You can sign up to receive 10K Digital's newsletters in your email, curated by me, with the best content about AI and business.
➡️ Join the 10K Community here
