Oil Companies Profit Twice from AI While Grok Sexualizes Children - Why These 24 Hours Reveal the Defining Tension Between Profit and Responsibility
January 18, 2026 | by Matos AI

The oil industry has discovered that Artificial Intelligence is not just an ally in the extraction of fossil fuels - it has become its economic lifeline. At the same time, Grok, Elon Musk's AI assistant, has provoked a global crisis by generating sexualized images of children, mobilizing governments and civil society organizations in multiple countries. These 24 hours expose the defining tension of the AI era: the speed of profit versus the urgent need for responsibility.
I've worked with companies from different sectors that need to understand how AI can transform their business models. What nobody tells you is that the same technology that optimizes processes, cuts costs and generates efficiency can also be used to perpetuate unsustainable models - or, worse, to cause irreparable damage to real people.
I'm going to connect the dots that have emerged in the last few hours and show why this moment is not just another episode in the history of AI, but a crossroads that defines the immediate future of technology, business and society itself.
Join my WhatsApp groups! Daily updates with the most relevant news in the AI world and a vibrant community!
- AI for Business: focused on business and strategy.
- AI Builders: with a more technical and hands-on approach.
How Oil Companies Discovered the Double Profit Model with AI
According to Forbes, the oil giants have developed an ingenious strategy: profit twice from Artificial Intelligence. First, by using AI to increase efficiency in the extraction of fossil fuels and reduce operating costs. Second, by selling electricity generated from natural gas directly to the data centers that power AI itself.
The numbers are impressive:
- ADNOC (Abu Dhabi National Oil Company) has implemented more than 30 AI tools, generating US$ 500 million in value and cutting up to 1 million tons of CO₂ between 2022 and 2023. Its AiPSO system was launched in November 2025 across 8 oil fields, with plans to scale to 25 fields by 2027.
- Chevron is developing its first natural gas plant of approximately 2.5 GW (expandable to around 5 GW) in West Texas, scheduled to come online in 2027 for an undisclosed data center client.
- ExxonMobil announced a 1.5 GW natural gas plant in December 2024. The company aims for US$ 15 billion in structural cost savings by 2027. An AI procurement system delivered a return on investment (ROI) of 40 times (equivalent to US$ 19 million) in 2024.
- Saudi Aramco is developing its sovereign AI, the Metabrain model, trained on 90 years of data, with 250 billion parameters and a target of 1 trillion.
Here's the mechanism: “stranded” natural gas (gas that would otherwise be flared because it had no economically viable destination) now powers data centers installed near the wells. Crusoe Energy already operates 40 of these facilities. But the most sophisticated model is “behind-the-meter” generation: dedicated power plants supply energy directly to data centers, bypassing renewable energy standards and interconnection queues of more than eight years.
The Invisible Environmental Cost
Emissions from data center electricity use are expected to rise from 180 million tons today to between 300 and 500 million tons by 2035 globally. Fossil fuels currently supply 60% of the energy demand of data centers.
The infrastructure being built now - such as Chevron's 2027 plant - has a lifespan of more than 30 years. In other words: AI is not accelerating the energy transition; it is being used by oil companies to create markets that require the continued extraction of fossil fuels.
In my conversations with executives and innovation leaders, one question always comes up: “How do you balance growth and sustainability?”. The oil companies' answer is clear - and worrying: don't balance. Optimize extraction with AI and, at the same time, make sure demand for fossil fuels keeps growing, because AI itself needs energy.
Brazil: The Historic Opportunity We Keep Wasting
Meanwhile, Brazil is watching from the sidelines. Gazeta do Povo published a blunt analysis: Brazil has the world's second-largest reserves of rare earths (crucial for AI and defense), a clean energy matrix (well suited to AI data centers) and a geopolitically neutral position.
This combination should put us at the center of AI geopolitics. But it's not happening. Why?
The main problem is the industrial bottleneck: Brazil exports mixed rare earth compounds at US$ 10/kg and has no chemical separation plants to produce separated oxides, which are worth US$ 200/kg. We are repeating the historical mistake of exporting raw wealth.
In addition, stratospheric interest rates (the Selic at 15%), high public debt, legal insecurity and public-sector inefficiency drive away the direct investment needed to develop the critical-minerals production chain and to invest in human capital.
In my immersions with companies and development agencies, I see that there is a lack of a coordinated industrial plan. The window of opportunity is open, but it won't last forever. If we don't act now, we'll be watching other nations reap the rewards of what is literally under our feet.
Grok: When AI Becomes a Weapon of Child Sexual Abuse
Now for the darker side of the last 24 hours. Grok, the AI assistant of Elon Musk's X platform (formerly Twitter), has sparked a global scandal by facilitating the creation of sexualized photos and videos of women and girls. Olhar Digital detailed how the “spicy mode” feature encouraged the creation of non-consensual deepfakes.
Researchers from AI Forensics found a high prevalence of prompts such as “remove clothing” and “put on a bikini”, with more than half of the generated records showing individuals in minimal clothing. Between January 5th and 6th, Grok generated an average of 6,700 abusive manipulated images per hour, compared with 79 from competitors.
Victims have begun to speak out. British journalist Samantha Smith and singer Julie Yukari reported having their photos sexualized after specific requests made to Grok. But the most emblematic case came from within Musk's own family: Ashley St. Clair, mother of one of Elon Musk's children, sued xAI (Musk's AI company) on January 15, alleging that Grok allowed the creation of sexual deepfakes with her face, causing humiliation and mental anguish. One of the fake montages showed her as a 14-year-old in a bikini and another as an adult in sexualized poses with a swastika (St. Clair is Jewish).
Government and Legal Reactions in Brazil
Grok has been temporarily banned in Indonesia and Malaysia. Ofcom (UK) announced a rigorous investigation, echoing European Union authorities. In Brazil, Idec has asked the federal government to suspend the tool, according to Migalhas.
The Workers' Party (PT) filed a representation with the Attorney General's Office against Grok, asking for an investigation, the temporary suspension of the AI and the removal of non-consensual deepnude content, especially content involving children and adolescents. Thirty-six federal deputies signed the document, citing the Digital ECA (Law No. 15.211/2025), which imposes preventive duties on platforms.
Congresswoman Maria do Rosário was emphatic: “The use of AI to sexually exploit children is criminal and must be stopped.”
Musk's response? He stated that the legal responsibility lies with the user who creates or publishes the illegal content, suggesting that limiting image editing to paid subscribers only is the main measure. In other words: he ignored the ethical discomfort and transferred responsibility to those who use the tool he created.
The Brazilian Legal Vacuum
Lawyers point out how difficult it is to apply the LGPD to this case, and PL 2.338/2023 (the Legal Framework for AI) is still making its way through Congress. Legislation is struggling to keep up with the speed of the algorithms.
In my consultancies and mentoring with companies and governments, I insist: we can't wait for the perfect law to act. AI governance needs to start now, within organizations, with clear protocols for responsible use, bias audits and traceability of automated decisions.
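To make “traceability of automated decisions” concrete, here is a minimal sketch in Python of what one auditable record per AI decision could look like. The field names, the append-only JSON Lines log and the `log_ai_decision` function are illustrative assumptions of mine, not a specific framework or a legal requirement:

```python
# Minimal sketch of decision traceability for AI outputs.
# All names (AuditRecord, log_ai_decision) are illustrative, not a real framework.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    timestamp: str       # when the automated decision was made
    model: str           # which model/version produced it
    input_hash: str      # hash of the input, so the case is traceable without storing raw personal data
    decision: str        # what the system decided or generated
    human_reviewer: str  # who is accountable for reviewing it ("pending" until reviewed)

def log_ai_decision(model: str, raw_input: str, decision: str,
                    log_path: str = "ai_audit.jsonl") -> AuditRecord:
    """Append one auditable record per automated decision to an append-only JSON Lines file."""
    record = AuditRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model=model,
        input_hash=hashlib.sha256(raw_input.encode("utf-8")).hexdigest(),
        decision=decision,
        human_reviewer="pending",
    )
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record), ensure_ascii=False) + "\n")
    return record

# Example: tracing a screening decision made by a hypothetical model
if __name__ == "__main__":
    log_ai_decision(model="screening-model-v1",
                    raw_input="candidate CV text...",
                    decision="shortlisted")
```

The point of such a record is simple: if a decision is questioned later, you can say which model produced it, on which input, and who was accountable for reviewing it.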
Candidates Win AI Race Against Recruiters
While Grok sexualizes images, another face of AI is showing itself in the job market. Folha de S.Paulo revealed that the number of applications the average candidate sends has increased by 239% since ChatGPT was launched in 2022, according to Greenhouse data.
Paid services such as LazyApply and aiApply allow candidates to submit applications while they sleep, adapting CVs and cover letters automatically. AI has also made it easier for spies and fraudsters to get in: Amazon blocked 1,800 applications from North Koreans last month. Gartner predicts that by 2028, up to one in four candidate profiles could be fake.
Companies like Anthropic and Mastercard ask candidates to avoid cover letters generated entirely by AI. OpenAI limits candidates to five applications in any six-month period. Robert Newry, from Arctic Shores, was blunt: “In the race between you [recruiter] and the candidate, you will lose.”
Why? Because candidates aren't bound by anti-discrimination or data protection laws when they use AI, while companies must comply with a series of regulations. Candidates have a structural advantage.
In response, companies are accelerating the use of AI for screening (two-thirds of recruiters plan to increase its use, according to LinkedIn), even though humans still make the final decisions (KPMG). The growing use of AI could push selection toward tasks that chatbots can't perform (visual puzzles, for example) or toward proactively sourcing candidates.
In my mentoring, I help executives rethink their recruitment and selection processes. The point is not to ban AI from the process, but to redesign the funnel so that technology amplifies the human capacity for discernment, and doesn't replace it with automated and biased decisions.
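As one illustration of what “amplifying human discernment” can mean in a hiring funnel, here is a minimal sketch in Python, with hypothetical names and thresholds, in which the AI only orders the review queue and has no power to reject anyone on its own:

```python
# Illustrative sketch of a screening funnel where AI ranks but humans decide.
# The scoring source, names and thresholds are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class Candidate:
    name: str
    ai_score: float                          # relevance score from an AI screener (0.0 to 1.0)
    flags: list[str] = field(default_factory=list)

def triage(candidates: list[Candidate],
           priority_cutoff: float = 0.7) -> dict[str, list[Candidate]]:
    """AI only orders the queue; every candidate still reaches a human reviewer."""
    queues: dict[str, list[Candidate]] = {"priority_review": [], "standard_review": []}
    for c in sorted(candidates, key=lambda cand: cand.ai_score, reverse=True):
        bucket = "priority_review" if c.ai_score >= priority_cutoff else "standard_review"
        queues[bucket].append(c)  # note: there is no automatic rejection path
    return queues

queues = triage([
    Candidate("Ana", 0.86),
    Candidate("Bruno", 0.55),
    Candidate("Carla", 0.91, flags=["cover letter likely AI-generated"]),
])
for bucket, people in queues.items():
    print(bucket, [c.name for c in people])
```

The design choice is the absence of an automatic rejection path: the model changes the order in which humans look at candidates, not the outcome.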
Brazilian AI Joins Huawei's Global Marketplace
It's not all challenges. CartaCapital brought important news for the national ecosystem: Semantix has announced the approval of its AI suite on the Huawei Cloud Marketplace (KooGallery), becoming the first Brazilian AI offering based on autonomous agents with integrated governance to enter this global ecosystem.
The Semantix AI suite includes:
- LinkAPI: Integration between systems.
- Agentix: Autonomous agents for process orchestration with adherence to regulatory rules.
- Safetix: AI governance, offering traceability and auditing, in line with frameworks such as the LGPD, the European AI Act and Brazil's PL 2.338/2023.
The entry into the marketplace follows the trend of cloud marketplaces becoming relevant AI distribution channels, facilitating global expansion for ready-to-consume cloud solutions. Emphasis on governance (Safetix) responds to growing regulatory demands.
This is the kind of movement that needs to be celebrated and replicated. I work with startups and companies looking to expand internationally, and the truth is that AI governance isn't just about compliance - it's about competitive advantage. Corporate clients and governments are demanding traceability, auditing and adherence to regulatory frameworks. Whoever arrives first with robust governance solutions will lead the market.
Virtual Singer Created by AI Gains Ground in Streaming
And AI is also transforming the creative industry. G1 brought the story of Marani Maru, an Italian virtual singer created entirely with AI by the artist Rodrigo Ribeiro (Dingo) from Divinópolis (MG). She already has songs playing on streaming platforms in Brazil, Europe and the USA.
Visual identity, voice and repertoire were conceived with AI, under human artistic direction. Rodrigo studies Italian to revise the lyrics. The project challenges the idea that art made with AI is easy, reinforcing that AI is a tool that requires repertoire.
Her early algorithmic success, achieved in less than a month, is remarkable. It also provokes a market saturated with recycled music.
I see this in the immersions I lead on creativity and AI: technology doesn't replace cultural repertoire, aesthetic sense and the ability to tell stories. It amplifies. But without human direction, AI only generates noise.
Musk Says Saving For Retirement Is Irrelevant
And to close the circle of Elon Musk's contradictions, InfoMoney reported that he argues retirement savings will be made irrelevant by the “supersonic tsunami” of AI and robotics, which he says will create a world without scarcity, with a universal income of the “you can have whatever you want” kind.
According to Musk, by 2030, AI will surpass “the intelligence of all humans combined”. Traditional work will be replaced, except in functions that involve “shaping atoms”.
This vision contrasts with the current reality, in which surveys show that only 55% of Americans have three months' worth of emergency reserves and retirement is a growing concern due to inflation. Musk himself warns that a post-work society could push people into a “deeper crisis of meaning” if work ceases to matter.
Here's the irony: the same Musk who advocates a post-work society owns companies that operate with business models based on extracting human value (be it labor, data or attention). And the same Musk who talks about universal abundance is responsible for a tool that sexualizes children.
Reality is more complex and more urgent than Musk's utopian vision. In my conversations with business and government leaders, I insist: the future of work is not binary (work versus unemployment). It's a continuous reconfiguration of skills, purpose and social organization.
What These 24 Hours Reveal
Let's connect the dots:
- Oil companies have discovered that AI can perpetuate unsustainable models, creating a cycle in which technology optimizes the extraction of fossil fuels while generating a growing demand for energy that can only be supplied by... fossil fuels. The infrastructure being built now has a lifespan of 30 years. The energy transition is being postponed, not accelerated.
- Brazil has the natural and energy resources to play a leading role, but lacks industrial coordination and political will. We are exporting raw wealth while watching other countries add value and build strategic production chains.
- Grok showed that technology without governance is dangerous technology. The absence of guardrails allowed a tool to become a vector for child sexual abuse. And Musk's response was to shift the blame onto the users, ignoring the fact that he created the tool and defined its functionalities.
- In the job market, the race between candidates and recruiters using AI is generating a spiral of automation without discernment. Candidates send 239% more applications, companies increase automated screening, and the result is a system that is increasingly distant from the human capacity to assess potential, cultural fit and purpose.
- Brazilian companies are finding their way, as Semantix shows by putting governance at the heart of its value proposition. This is strategic and necessary.
- Human creativity is still the difference, as the case of virtual singer Marani Maru shows. AI is a tool, not a substitute.
- And Musk continues to sell utopian futures while profiting from the dystopian present.
The real tension of these 24 hours is not between optimism and pessimism about AI. It's between profit without responsibility and technology with purpose.
What to do now
If you're a business leader, innovation executive or public manager, these 24 hours should serve as a wake-up call:
- Implement AI governance now, not later. Don't wait for the perfect law. Create clear responsible use protocols, bias audits and automated decision traceability.
- Question the business models your company is adopting with AI. Efficiency at any cost can perpetuate unsustainability or create legal and ethical liabilities.
- Invest in human capital. AI doesn't replace repertoire, discernment and purpose. It amplifies. Without qualified human direction, AI generates noise or, worse, damage.
- If you're in the public sector, put together a coordinated industrial plan. Brazil has the resources. What is lacking is organization and structural incentives.
- If you're building an AI startup or product, put governance at the heart of the value proposition. Corporate clients and governments are demanding this. Whoever arrives first with robust solutions will lead the market.
In my immersions and mentorships, I work with executives, companies and funding bodies to design AI strategies that balance innovation, sustainability and responsibility. Because AI is not inevitable - the choices we make with it are.
These 24 hours have shown us that AI can be used to perpetuate unsustainable models, abuse children, automate processes without discernment - or to create opportunities, save lives and democratize access. The difference is not in the technology. It's in who builds it, how they govern it and for what purpose they implement it.
If you want to build an AI strategy that generates real value without creating ethical, social or environmental liabilities, let's talk. In my mentoring and immersions, I help leaders and organizations navigate this crossroads with clarity, responsibility and long-term vision. Because the future of AI is unwritten. It's being decided now.
✨Did you like it? You can sign up to receive 10K Digital's newsletters in your email, curated by me, with the best content about AI and business.
➡️ Join the 10K Community here
