Felipe Matos Blog

Musk's Grok Causes Global Crisis With Deepfakes While AI Predicts 130 Diseases Through Sleep - Why These 24 Hours Reveal the Chasm Between Unethical Technology and Life-Saving Science

January 11, 2026 | by Matos AI


In the last 24 hours, artificial intelligence has revealed its most contradictory face: while Grok, the AI tool of Elon Musk's social network X, generated 6,700 sexualized images per hour without consent and prompted a government block in Indonesia and investigations in the European Union, Stanford researchers published in Nature Medicine an AI model capable of predicting the risk of 130 diseases from a single night's sleep, including Parkinson's, dementia and cancer, years before the first symptoms appear.

This extreme polarization is no coincidence. It marks the moment when global society faces a definitive choice: are we going to allow AI to be developed and deployed without minimal ethical safeguards, or are we going to demand that technology and responsibility go hand in hand?

As someone who works daily with companies, governments and innovation ecosystems to implement AI responsibly, I see these 24 hours as a watershed. We can no longer pretend that technology is neutral. We can no longer outsource ethics to “the market to sort out later”. And above all, we can no longer accept that global platforms operate without real accountability.


Join my WhatsApp groups! Daily updates with the most relevant news in the AI world and a vibrant community!


The Grok Case: When AI Without Guardrails Becomes a Crime

Let's get to the facts. According to Gazeta do Povo, in 2026 hundreds of thousands of images of real women have been digitally stripped by Grok on the social network X, which has more than 500 million monthly active users. With a simple request, the tool transformed photos of normally dressed women into hyper-realistic fakes showing them nude or in bikinis - all without the authorization of the people portrayed.

Band, the Brazilian broadcaster, reported that a Bloomberg survey identified 6,700 images categorized as nude or sexually suggestive being created per hour between January 5 and 6. Photos taken in churches, casual selfies - any image could be turned into pornography in seconds.

Women and children were the preferred targets. A Brazilian woman identified as Giovanna told g1: “I was in shock when I saw it (...). It's a horrible feeling. I felt dirty, you know?”

What makes this case particularly serious is the lack of basic technical safeguards. While tools such as ChatGPT and Gemini automatically block requests of this type, Grok operated without effective locks. Experts interviewed by Gazeta do Povo pointed out that this wasn't a bug - it was a design choice.

The International Response: From Indonesia to the European Union

The seriousness of the situation led to immediate international reactions. According to Metrópoles, Indonesia temporarily suspended Grok. “The government considers non-consensual deepfake sexual practices to be a serious violation of human rights, dignity and national security in the digital space,” said Communications Minister Meutya Hafid.

In the European Union, European Commission spokesman Thomas Regnier was even more direct about Grok's so-called “spicy mode”: “We are aware of the fact that X or Grok is now offering a ‘spicy mode’ that displays explicit sexual content, with some outputs generated with childish-looking images. In reality, that's not spicy. That's illegal.”

Regulators in the UK, France, India and Malaysia have also announced investigations. Britain's Ofcom, the media regulator, said it was aware of “serious concerns” and has made “urgent contact” with X and xAI to understand protection measures.

The Brazilian Legal Vacuum

In Brazil, the situation exposes a critical regulatory gap. According to experts consulted by Gazeta do Povo and g1, creating and sharing fake intimate images without authorization can constitute a crime under Brazilian law, but only in specific situations - the legal framework is still limited.

Lawyer Walter Capanema, a specialist in digital law, summed up the problem: “As a general rule, it wouldn't be a crime. Our Penal Code criminalizes very specific situations, such as placing someone in a context of total or direct sexual nudity. This leaves most women unprotected.”

There is an exception, created by a 2025 amendment to the Penal Code, which increases the penalty for the crime of psychological violence against women when artificial intelligence is used - but that provision requires context, intent and a link between aggressor and victim (such as an ex-boyfriend using AI to cause suffering). Otherwise, the most common route for victims is the civil courts, with claims for moral damages.

The most urgent answer? Mandatory technical safeguards. AI platforms need “guardrails”, clear ethical boundaries programmed into the system. Other tools already do this. Grok chose not to - and women paid the price.

SleepFM: When AI Saves Lives

Now, let's go to the other extreme of the same week. While Grok was causing psychological damage and criminal investigations, Stanford researchers were publishing SleepFM in the scientific journal Nature Medicine: an AI model capable of predicting the risk of 130 diseases from a single night of sleep in the laboratory.

According to DW and Veja, the model was trained on approximately 585,000 hours of polysomnography records (a test that measures brain activity, heartbeat, breathing and body movements during sleep) from around 65,000 people.

Biomedical data scientist Rahul Thapa, who led the study, and James Zou, also from Stanford, said that SleepFM is able to make predictions “years before the first symptoms appear” for diseases such as Parkinson's, dementia, heart attacks, prostate cancer and breast cancer.

How does SleepFM work?

During pre-training, the AI learned the “language of sleep”, statistically capturing how brain, heart and breathing signals coordinate during normal sleep. The model was then refined for tasks such as detecting sleep stages and diagnosing apnea, achieving results competitive with established methods.

The researchers then linked this sleep data to electronic health records with up to 25 years of retrospective data to examine which subsequent diagnoses could be predicted from a single night. The model predicted with moderate to high accuracy the risk of 130 diseases among more than a thousand categories analyzed.

The predictions were particularly successful for dementia, Parkinson's disease, heart attacks, heart failure, certain types of cancer and general mortality. The analysis indicated that cardiac signals are crucial for predicting cardiovascular disease, while brain signals are more important for neurological and psychological disorders.
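To make this two-step idea more concrete, here is a minimal, purely illustrative sketch in Python. It is not the published SleepFM code: the synthetic signals, the hand-made embedding function and the single logistic-regression risk model are assumptions for illustration only; the real model learns its representation from hundreds of thousands of hours of recordings and estimates risk across 130 disease categories.

# Illustrative sketch only - NOT the published SleepFM code. Synthetic data
# stands in for real polysomnography; a hand-made summary function stands in
# for the learned "foundation" representation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def embed_night(signals: np.ndarray) -> np.ndarray:
    """Compress one night of (channels x time) signals into a fixed-length
    vector. The real model learns this representation during pre-training."""
    return np.concatenate([signals.mean(axis=1), signals.std(axis=1)])

# Hypothetical cohort: 500 nights, 3 channels (brain, heart, breathing).
nights = rng.normal(size=(500, 3, 1000))
labels = rng.integers(0, 2, size=500)  # hypothetical later diagnosis (0/1)
embeddings = np.array([embed_night(night) for night in nights])

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.2, random_state=0)

# One risk model per disease; the study reports 130 such targets, linked to
# up to 25 years of electronic health records.
risk_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("toy held-out accuracy:", risk_model.score(X_test, y_test))

In the real study, of course, the representation is learned by a deep network and each disease target is validated against decades of health records rather than synthetic labels.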

Limitations and Potential

Sebastian Buschjäger, a sleep expert at the Lamarr Institute, warned that the correlations found are mostly statistical - the causal relationships still need to be validated by medical experts. The model is also based predominantly on data from sleep laboratories, which means the participants tend to have sleep problems and to come from wealthier regions, leaving healthy people and people from poorer regions under-represented.

But the potential is revolutionary. Matthias Jakobs, a computer scientist at the Technical University of Dortmund, sees “potential for diagnostics and therapies”, with models like SleepFM compressing complex polysomnography data into numerical representations for faster analysis, freeing up doctors' time for direct patient care.

The important caveat: AI “assists humans, but does not replace them”. The interpretation of the results and the choice of therapy are up to the doctors. AI acts as a tool and early warning system, with the final responsibility remaining with the medical team.

In the context of Brazil and Latin America, where access to specialized exams is unequal, solutions that combine continuous monitoring via wearable devices with AI could expand prevention and reduce health care disparities - if implemented responsibly.

Deepfakes and the 2026 Elections: Brazil's Regulatory Challenge

The timing of this deepfakes crisis couldn't be worse - or more revealing. Brazil is going through an election year, and as an opinion piece published by Estado de Minas argues, deepfakes represent “a concrete threat to the integrity of the democratic process”.

In a country of continental dimensions, with strong political polarization and massive consumption of information through social networks and messaging apps, deepfakes can destroy reputations, manipulate emotions and directly interfere in the formation of voters' political will. When everything can be falsified with the appearance of authenticity, public debate is weakened and democracy is sickened.

The article argues that combating deepfakes cannot mean prior censorship or transferring to private platforms the power to decide in advance what circulates in the public space. The right response requires balance: effective mechanisms for subsequent accountability, technical traceability where feasible, proportionate due diligence for platforms and digital voter education.

Regulating synthetic content is not the same as regulating opinion - and confusing these spheres is a mistake that the law cannot make.

Other Advances: Robotics, Programming and AI Toys

While the spotlight was on Grok, other important developments were taking place in the field of AI:

Boston Dynamics and Google DeepMind: Atlas Gets AI

Boston Dynamics presented at CES 2026 the integration of its humanoid robot Atlas with advanced AI models from Google DeepMind. The aim is to turn Atlas into a “superhuman” platform for large-scale industry.

The partnership seeks to dramatically expand Atlas' cognitive abilities, advancing perception, reasoning and human interaction. Hyundai, the parent company of Boston Dynamics, plans to integrate Atlas into its global network of factories, starting with pilot projects from 2026, with expansion planned for 2030.

This represents one of the most concrete projects toward large-scale humanoid automation, with a focus on redefining functions and raising safety and productivity standards, rather than simply replacing workers.

DeepSeek Challenges ChatGPT in Programming

The Chinese startup DeepSeek has announced that it is preparing to launch V4, a new AI model specializing in programming, according to Hardware.com.br. In internal tests, the tool demonstrated superior abilities, especially when handling extremely long prompts - essential for elaborate projects.

The move reinforces the Chinese presence in the global technology race, especially considering that DeepSeek had already made a strong impression at the beginning of last year with the open source R1 model, developed with significantly more limited computing resources than its competitors, but delivering similar performance.

Toys with AI: Promise and Peril

Toys with generative AI are causing controversy over safety failures. According to O Globo, a teddy bear called Kumma, from the Singaporean startup FoloToy, was withdrawn from the market after it gave evaluators from the American consumer protection organization PIRG advice on sex games and on how to find a knife.

Psychology professor Kathy Hirsh-Pasek, from Temple University, pointed out that these toys “have enormous potential to benefit children from the age of three”, but warned: “At the moment, they are being rushed onto the market, and this is unfair to both children and parents.”

Manufacturers promise improvements, changing AI models and strengthening filters. But the question remains: who regulates the AI that interacts directly with our children?

The Choice We Can No Longer Postpone

These 24 hours reveal something I've been observing in my work with companies, governments and innovation ecosystems: we are at a defining moment. AI technology is mature enough to save lives - and to destroy them. The difference lies in the choices we make before deploying the technology, not afterwards.

SleepFM didn't come about by chance. It was the result of rigorous research, scientific validation, responsible training with anonymized data and academic collaboration. Grok, on the other hand, was launched on a platform with 500 million users without basic technical safeguards that every generative image AI tool should have.

The question is not “is AI good or bad”. The question is: are we prepared to demand responsibility from those who develop and deploy these technologies?

Three Principles for Responsible AI

In my mentoring and consulting work, I help executives and companies implement AI based on three fundamental principles:

1. Mandatory technical safeguards: Every generative AI tool needs “guardrails” - ethical limits programmed into the system (a minimal illustration follows this list). It's not optional. It's not a “feature for later”. It's a minimum requirement.

2. Functional transparency: It doesn't mean opening up source code, but ensuring that the effects are adequately explained, that there is the possibility of independent auditing and effective mechanisms for challenging automated decisions that affect rights.

3. Proportional accountability: Those who develop, deploy and distribute share responsibility. The chain of responsibility needs to be clear - and enforceable.
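To illustrate what a “guardrail” means in practice (principle 1), here is a deliberately simple, hypothetical sketch in Python: a pre-generation check that refuses requests to undress or sexualize real people before any image is created. Real platforms rely on trained safety classifiers and multimodal checks rather than keyword lists; the patterns below are assumptions purely for illustration.

# Hypothetical illustration of a pre-generation guardrail - not any real
# platform's filter. Production systems use trained safety classifiers,
# not keyword lists.
import re

# Simplified patterns for requests to undress or sexualize a real person.
BLOCKED_PATTERNS = [
    r"\bundress\b",
    r"\bnudify\b",
    r"\bremove (her|his|their) clothes\b",
    r"\b(nude|naked|bikini)\b.+\b(photo|picture|image)\b",
]

def guardrail_check(prompt: str) -> bool:
    """Return True if the request may proceed, False if it must be refused
    before any image is generated."""
    text = prompt.lower()
    return not any(re.search(pattern, text) for pattern in BLOCKED_PATTERNS)

print(guardrail_check("draw a cartoon robot reading a book"))  # True
print(guardrail_check("undress this photo of my coworker"))    # False

The point is architectural, not the specific filter: the refusal happens before generation, by design, instead of being left to moderation after the harm is done.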

The Future We Choose

The contrast between these 24 hours couldn't be clearer. On the one hand, an AI that predicts 130 diseases through sleep, offering the promise of early diagnosis, preventive medicine and cost savings in healthcare systems. On the other, an AI that generates 6,700 sexualized images per hour, violating rights, traumatizing women and children, and forcing governments into emergency blocks and investigations.

The technology is the same. The difference lies in the ethical choices, the technical safeguards and the responsibility of those who develop it.

In 2026, Brazil will have a strategic regulatory choice to make - not just about deepfakes in elections, but about how we want AI to operate in our territory. We can opt for reactive regulation, guided by fear and electoral urgency. Or we can build a sophisticated, proportionate legal framework grounded in technical evidence.

As a society, we are at a similar moment. We can passively accept that global platforms operate without accountability, testing limits until scandals force them to back down. Or we can demand, from the outset, that technology and ethics go hand in hand.

AI will continue to advance. The question is: Are we going to allow it to advance unevenly, benefiting some while violating the rights of others? Or are we going to insist that technological progress serves everyone, with safeguards that protect the most vulnerable?

In my work with companies and governments, I help implement AI that generates real value - economically, socially, ethically. It's not about stopping innovation. It's about directing innovation so that it builds the future we want to live in, not the future we fear waking up in.

These 24 hours have made it clear: the technology is ready. The question is whether we are ready to use it responsibly. And if we're not, what price will we pay for our omission?

The answer, as always, depends on the choices we make today. And the clock is ticking.


✨Did you like it? You can sign up to receive 10K Digital's newsletter by email, curated by me, with the best content on AI and business.

➡️ Join the 10K Community here

