Fei-Fei Li Bets on “Spatial Intelligence” While 68% of Brazilian Companies Adopt AI in HR - Why This Turn to the Physical World Defines the Next Decade
December 2, 2025 | by Matos AI

You know that feeling that the conversation about AI has changed tone in the last 24 hours? It's not just you. While Fei-Fei Li, considered "the godmother of artificial intelligence", announces that the next era will be defined by systems that understand and act in the physical world, Brazilian data shows that 68% of companies already use or test AI in human resources, up from 48% in the previous survey.
And there's more: Brazilian creator Catharina Doria went viral, reaching more than 300,000 followers by translating the risks and opportunities of AI for ordinary people, while the debate about an "AI bubble" keeps getting more concrete, with 53% of global managers saying the sector is already in one.
Why does all this matter? Because we are witnessing the transition of AI from the digital to the physical world - and Brazil is navigating this transformation in a unique way, with high practical adoption but still facing literacy and maturity challenges.
Join my WhatsApp groups! Daily updates with the most relevant news in the AI world and a vibrant community!
- AI for Business: focused on business and strategy.
- AI Builders: with a more technical and hands-on approach.
Spatial Intelligence: The Next Era According to Fei-Fei Li
Fei-Fei Li is not just another voice in the AI race - she is one of the few scientists who helped create the foundations of what we now call modern artificial intelligence. And her vision for the next decade is clear: AI needs to get out of the screens and understand the world in three dimensions.
According to a Forbes Brazil report, Li believes that the era of text-based chatbots - such as ChatGPT, Claude and Copilot - was just the beginning. The next step is what she calls "spatial intelligence", now being developed at her new venture, World Labs.
But what is spatial intelligence anyway?
While LLMs (Large Language Models) predict the next word based on text, the world models proposed by Li learn from videos and images how to recreate complete 3D spaces. They maintain a genuine three-dimensional understanding of a scene, preserving the laws of physics and spatial relationships.
Li argues that humans are “embodied agents”, learning through physical interaction. AI systems trained on text alone lack this link, creating a gap that world models seek to fill, giving machines an “intuition” about how the world operates.
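To make the contrast concrete, here is a minimal, purely illustrative Python sketch of the two kinds of interface. All class and method names are hypothetical; they do not correspond to any real World Labs or LLM vendor API.

```python
# Purely illustrative sketch; the class and method names are hypothetical
# and do not correspond to any real World Labs or LLM vendor API.
from dataclasses import dataclass, field

@dataclass
class Scene3D:
    """Toy stand-in for a 3D scene: named objects with (x, y, z) positions."""
    objects: dict = field(default_factory=dict)

class TextOnlyLLM:
    def predict_next_token(self, text: str) -> str:
        # A language model only sees and produces text; it has no explicit
        # representation of geometry or physics.
        return "<next-word>"

class WorldModel:
    def observe(self, frames: list) -> Scene3D:
        # A world model infers a full 3D scene from images or video.
        return Scene3D(objects={"box": (0.0, 2.0, 0.0)})

    def simulate(self, scene: Scene3D, seconds: float) -> Scene3D:
        # ...and rolls that scene forward while keeping spatial relationships
        # consistent (here, a toy "gravity": the box drifts toward the ground).
        x, y, z = scene.objects["box"]
        return Scene3D(objects={"box": (x, max(0.0, y - seconds), z)})
```

The point of the sketch is the difference in what each system represents internally: one operates on word sequences, the other on an explicit model of space that it can roll forward in time.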
Why Does It Matter for Real Business?
Spatial intelligence will allow companies to model decisions before putting them into practice, reducing risks and speeding up execution. Think about:
- Redesigning industrial production lines by simulating flows virtually before any physical change
- Testing logistics networks in digital environments that respect real-world physics and constraints
- Simulating patient flows in hospitals virtually before implementing structural changes
- Embedding AI in robots, drones and autonomous vehicles, avoiding the high cost and risk of real-world training
World Labs' first product, Marble, already generates explorable 3D environments from text descriptions. It's the “world created on demand” - and that changes everything.
Li argues that the next era of innovation will be won by leaders who understand that the power and risks of AI increase with its reach. And this brings us directly to the Brazilian scenario.
The Brazilian Paradox: High Adoption and Critical Literacy
While Li projects the future of AI in the physical world, Brazil is experiencing an interesting paradox: a high rate of practical adoption combined with an urgent need for literacy.
A survey by Caju in partnership with Fundação Dom Cabral shows that 68% of Brazilian companies are already using or testing AI solutions in human resources, a jump of 20 percentage points compared to the previous survey (48%). This puts Brazil at the forefront of practical adoption of the technology.
But there is a catch.
The Missing Voice: Catharina Doria and AI Literacy
This is where Catharina Doria comes in: the Brazilian who went viral on social media by publishing videos explaining, in accessible language, how to protect yourself and understand the risks of artificial intelligence. In less than a year, she has gained more than 300,000 followers on Instagram, with videos that exceed one million views.
Doria's story is fascinating. At 16, she created an app to report sexual harassment. Later, after reading Algorithms of Oppression, by Safiya Noble, which addresses bias in Google's algorithms, she decided to change careers. She completed a degree in communications and a master's in data science, graduating with honors, and worked for an American AI governance company.
But the turning point came when she realized that the industry's discussions were too advanced and weren't being translated for ordinary people, like her mother, who didn't understand what AI was or the scams built around it.
Doria identifies that everyone is vulnerable to AI today, regardless of age or demographics. The main reason is speed of AI adoption without adequate literacy. Industry has embraced technology as a process optimizer without explaining the problems.
Practical examples she addresses:
- Companies have started using ChatGPT without telling employees that chat histories are saved and can be used to train models
- Lack of transparency about how Meta's algorithms can use open profile photos to train their systems
- Privacy risks with robot vacuum cleaners that can collect data or capture images
- Difficulty recognizing whether an image or video was created by AI
And most importantly: Doria rejects scaremongering, transforming fear into knowledge. She explains the problems, but always ends with what the person can do to empower and protect themselves.
In my work with companies and governments, I see exactly this gap that Catharina identifies: organizations adopting AI quickly, but without creating the necessary governance, transparency and literacy structures for sustainable and ethical use.
The War of Perception: Bubble or Solid Foundation?
Now let's talk about the elephant in the room: the “AI bubble”.
According to Bank of America research, 54% of global managers believe that investments in the "Magnificent Seven" (Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia and Tesla) are "crowded", 45% point to the risk of a bubble, and 53% say that the sector's shares are in fact already in one.
The Magnificent Seven lost more than US$ 1.7 trillion in market value in less than a month, according to a survey by Elos Ayta. The combined value fell from US$ 22.24 trillion on October 29 to US$ 20.49 trillion on November 20.
But beware: not everyone sees this as a problem.
Divergent Voices: From Alarmism to Strategic Optimism
Sundar Pichai, CEO of Google, admitted to the BBC that there is "some irrationality" in the AI investment boom. Jeff Bezos noted that investors don't usually hand US$ 2 billion to a team of six people with no product - something that is happening now.
On the other hand, according to report from Correio do Povo, Bezos also takes a different view: he sees an eventual crash as a natural market selection that can be beneficial, because “when the dust settles and the winners are left, society benefits from these inventions”.
Economist Moisés Waismann, from Unilasalle, reiterates that all innovations go through bubbles, and that the ones who usually "pay the price" are the last to enter, with the fewest resources.
But there is one crucial detail: Goldman Sachs has published an analysis stating that the rise in technology stocks has “solid fundamentals”. And Nvidia's CEO, Jensen Huang, acknowledged the bubble talk but said he saw “more than US$ 500 billion in revenue coming in over the next few quarters”.
So what's the truth?
The way I see it, both. There is speculative excess in startups with no product and absurd valuations. But there are also solid investments in infrastructure that will have long-term returns - such as data centers, chips and fundamental models.
The question is how to distinguish one from the other. And that requires maturity.
Google Strikes Back: The Gemini 3 Turnaround
Speaking of maturity and fundamentals, Google has just taken an important turn in the AI race that illustrates this differentiation well.
According to a CNN Brazil report, Gemini 3 debuted on November 18 and now tops leaderboards for tasks such as text generation, image editing, image processing and text-to-image conversion, putting it ahead of rivals such as ChatGPT, Grok and Claude.
Google said that more than one million users tried Gemini 3 in the first 24 hours.
This provoked interesting reactions:
- Nvidia posted on X that it was "very pleased with Google's success"
- Sam Altman (OpenAI) wrote: "Congratulations to Google on Gemini 3! Looks like a great model"
- Marc Benioff (Salesforce) said he will not go back to ChatGPT
- Meta is reportedly in talks with Google to buy its Tensor chips
Why so much movement? Because Google combines two elements that few companies have: the capacity to develop cutting-edge models and its own specialized chip infrastructure (the Tensor ASICs).
Although Google's chips are designed for more restricted workloads than Nvidia's GPUs, they demonstrate that there are viable alternative paths - and more efficient ones in specific scenarios - beyond Nvidia's dominance of the AI chip market.
Google's shares rose by almost 8% in the week after the launch, while Nvidia's fell by just over 2%.
This doesn't mean that Google will dethrone Nvidia any time soon. But it does mean that the AI race is far from having a single clear winner - and companies that invest in solid technological foundations (such as proprietary chips and differentiated models) are better positioned for the long term than startups with no real product.
Leadership Changes: The Apple Case and the Sign of the Times
Another piece of news that attracted attention in the last 24 hours: John Giannandrea is stepping down as Apple's head of AI.
Giannandrea held the position of senior vice president and reported directly to CEO Tim Cook. The departure marks the biggest change in Apple's AI team since the launch of Apple Intelligence in 2024 - a product which, it has to be said, was not well received by users and critics, especially after the launch of an improved version of Siri was postponed until 2026.
Amar Subramanya, an AI researcher who worked at Microsoft, has been announced as Apple's new vice president of AI, reporting to Craig Federighi (head of software).
What does this tell us?
That even giants like Apple are making significant strategic adjustments to their AI teams. The appointment of Subramanya, who comes from Microsoft, indicates the value of external expertise to boost core areas of fundamental models, research and security.
According to a MacMagazine report, the move is aimed at leveraging Subramanya's experience in AI research and in integrating these technologies into products and features, which is crucial for Apple's "continued innovation and future artificial intelligence capabilities".
Tim Cook pointed out that Craig Federighi has been instrumental in driving AI efforts forward, including overseeing work to deliver a more personalized Siri to users, due in March-April 2026 with iOS 26.4.
Leadership changes in AI aren't necessarily bad - they can be signs of maturation and strategic realignment. But when they are accompanied by product delays and lukewarm market reception, they indicate that even the biggest players are still figuring out how to turn technical capacity into real value for users.
AI Saving US$ 70 Billion in Disasters - But At What Cost?
Now let's talk about concrete applications that show the transformative potential of AI when applied well.
According to a Deloitte study, the use of artificial intelligence could prevent up to US$ 70 billion a year in losses caused by natural disasters by 2050.
Currently, the global losses generated by disasters amount to around US$ 460 billion a year and could exceed US$ 500 billion in the coming decades. The incorporation of AI solutions throughout the entire infrastructure lifecycle - from planning to recovery - can reduce expected losses by up to 15%.
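The US$ 70 billion headline is roughly consistent with those figures. A quick back-of-envelope check, assuming the ~15% reduction applies directly to the projected annual losses:

```python
# Back-of-envelope check of the Deloitte figures quoted above (assumption:
# the ~15% reduction applies directly to projected annual disaster losses).
current_losses_bn = 460    # current global annual losses, US$ billion
projected_losses_bn = 500  # projected annual losses in the coming decades, US$ billion
reduction = 0.15           # up to 15% reduction with AI across the lifecycle

print(current_losses_bn * reduction)    # 69.0 -> roughly US$ 70 billion a year
print(projected_losses_bn * reduction)  # 75.0 -> same order of magnitude by 2050
```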
Here are some applications:
- Planning: AI uses digital twins and predictive maintenance, such as intelligent vegetation management to reduce power outages and forest fires
- Response to events: Early detection systems can prevent US$ 100 million to US$ 300 million in annual fire losses in Australia alone
- Reconstruction: AI speeds up damage assessment after disasters
Globally, storm losses alone could be reduced by up to US$ 30 billion a year.
Eduardo Raffaini, partner leading Strategy, Infrastructure & Sustainability at Deloitte, says that “Intelligent infrastructure built on AI can redefine the way we deal with extreme weather events, reducing impacts and protecting lives and assets”.
But - and there's always a “but” - there's another side.
AI's Energy Dilemma
The high energy consumption of the data centers needed to train and operate AI models puts pressure on electricity grids and can increase the carbon footprint. Manuel Fernandes, partner and leader of energy and natural resources at KPMG in Brazil and South America, warns that the expansion of data centers will continue, even without a guarantee of clean energy.
Jefferson Lopes Denti, Chief Disruption Officer at Deloitte Brazil, concludes that the use of AI is a “survival strategy” and that global collaboration is essential to develop solutions that prevent failures, reduce production losses and cut emergency costs.
Here's the paradox: we use AI to make our infrastructure more resilient to climate disasters, but the energy consumption of AI itself can accelerate the climate change that causes these disasters.
This requires strategic maturity. It's not enough to adopt AI - it has to be done sustainably, with clear targets for energy efficiency and the use of renewable sources.
Corporate Adoption at an Accelerated Pace
Returning to the Brazilian business scene, the figures are still impressive.
According to a survey released by Estadão, nine out of ten companies are already using generative AI to increase efficiency. The central task is to reduce friction in the retail buying journey and to optimize sales in real time.
A concrete example: Casas Bahia launched Zap Casas BahIA, an intelligent assistant integrated into WhatsApp for Black Friday, according to a StartSe report. The solution lets the consumer interact by text, audio or image, receiving direct responses as if from a human salesperson acting as a real-time purchasing consultant.
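Under the hood, assistants like this typically route each incoming message by type before generating a reply. The sketch below is a generic, hypothetical illustration of that flow; it is not Casas Bahia's actual implementation, and none of the functions correspond to a real WhatsApp or vendor API.

```python
# Hypothetical sketch of a multimodal retail assistant flow; these functions
# are placeholders, not Casas Bahia's real system or the WhatsApp API.

def transcribe(audio_bytes: bytes) -> str:
    return "customer question transcribed from audio"  # placeholder

def describe_image(image_bytes: bytes) -> str:
    return "product identified in the photo"  # placeholder

def generate_reply(prompt: str) -> str:
    return f"Reply based on: {prompt}"  # placeholder for an LLM call

def handle_message(message: dict) -> str:
    # Normalize text, audio and image inputs into a single text prompt.
    if message["type"] == "text":
        prompt = message["body"]
    elif message["type"] == "audio":
        prompt = transcribe(message["body"])
    else:  # image
        prompt = describe_image(message["body"])
    return generate_reply(prompt)

print(handle_message({"type": "text", "body": "Is this TV on sale?"}))
```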
This is the practical materialization of AI in Brazilian retail - it's not science fiction, it's operational reality.
But here's the critical point: the penetration of 90% in use for efficiency implies that companies that have not yet adopted it may face significant competitive disadvantages in terms of cost and operational speed in the near future.
Generative AI is becoming a key competitive factor, and no longer a differentiator.
But what about organizational maturity?
Adopting a tool is one thing. Integrating AI into the workflow in a sustainable way is quite another.
According to an MIT analysis published by Terra, the majority of AI pilots do not generate measurable business impact when carried out in isolation, without redesigning processes and routines.
Research by McKinsey identified workflow redesign as one of the factors with the highest correlation with financial results in AI projects.
The organizational challenge involves the so-called “AI sprawl”: the multiplication of disconnected solutions in different areas, with functional overlaps and different rules of use and control. This scenario increases costs, dilutes learning and makes it difficult to create reliable standards of use.
Studies from MIT Sloan point out that capturing productivity with AI requires "deconstructing" activities, rebuilding processes and redefining human-machine collaboration, including new skills of judgment, critical thinking and creativity to avoid producing low-value content.
In my mentoring work with executives, I see exactly this: companies that implement 15 different AI tools in separate silos, without unified governance, end up creating more complexity than value.
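One practical antidote to sprawl is a single, shared inventory of every AI tool in use, with an owner and rules of use attached to each one. The snippet below is a deliberately simple, hypothetical illustration of what such a registry could record; it is not a prescription or a real governance product.

```python
# Hypothetical minimal "AI tool registry" to counter AI sprawl: one shared
# inventory with an owner, data rules and a review date per tool.
from dataclasses import dataclass, field

@dataclass
class AITool:
    name: str
    business_area: str
    owner: str
    handles_personal_data: bool
    approved_uses: list = field(default_factory=list)
    next_review: str = ""  # e.g. "2026-06"

registry = [
    AITool("Chat assistant", "HR", "hr-lead@example.com", True,
           ["drafting job descriptions"], "2026-06"),
    AITool("Sales copilot", "Retail ops", "ops-lead@example.com", False,
           ["suggesting offers in real time"], "2026-03"),
]

# A single view makes overlaps and ungoverned personal-data use visible.
for tool in registry:
    if tool.handles_personal_data:
        print(f"{tool.name} ({tool.business_area}): privacy review due {tool.next_review}")
```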
Technical competence alone is not enough: it's about aligning technology, work design and leadership practices.
Geoffrey Hinton's Warning: Economic Collapse?
Before I close, I must mention a voice that cannot be ignored: Geoffrey Hinton, winner of the Nobel Prize.
According to a report from Estadão, Hinton warned that replacing human workers with AI could lead to economic collapse.
The mass replacement of jobs is his central concern: a scenario of economic crisis driven by automation at scale.
It's not scaremongering from a technophobe - it's a warning from one of the fathers of modern AI.
And that's the point: how do we balance the efficiency and productivity gains of AI with the preservation of meaningful jobs and the economic structure that depends on human labor?
I don't have an easy answer to that. But I do know that we need to start seriously discussing public policies for retraining, basic income, and new models for distributing the wealth generated by AI.
Otherwise, we will be creating technical efficiency at the expense of social collapse.
What Does It All Mean For You?
So how do we connect all these dots - from Fei-Fei Li's spatial intelligence to Hinton's warnings, through to accelerated adoption in Brazil and the debate on bubbles?
In my view, we are at a moment of critical maturity. AI has left the phase of pure experimentation and is entering the phase of real integration into the physical world and business processes. But this integration requires:
- Broad literacy: As Catharina Doria demonstrates, we need to translate the complexity of AI into accessible language, empowering people to use the technology with an awareness of the risks
- Unified governance: End the “AI sprawl” and create structures of control, transparency and accountability
- Process redesign: It's not enough to add AI to existing flows - you have to rethink the work itself
- Sustainable investments: Distinguish solid fundamentals (infrastructure, chips, models) from speculation without a product
- Awareness of social impact: Anticipate and mitigate the effects on the workforce and the economy
For leaders and companies, this means the window of opportunity to lead responsibly is narrowing. Those who adopt AI with maturity - literacy, governance, process redesign - will come out ahead. Those who just buy tools and throw them into different silos will be wasting resources and creating future liabilities.
For professionals, it means investing in skills that AI cannot easily replace: critical judgment, strategic creativity, emotional intelligence, and the ability to ask the right questions of AI systems.
And for society as a whole, it means demanding transparency, intelligent regulation and public policies that ensure that the benefits of AI are distributed fairly.
Next Steps: Building Maturity in the Age of Spatial Intelligence
If you've come this far, you're probably asking yourself: “OK, Felipe, but what do I do with all this?”
First, breathe. The speed of change is truly breathtaking, but there is order in chaos when you see the structuring patterns.
Second, invest in literacy - you and your teams. Not everyone needs to become a data scientist, but everyone needs to understand the basics of how AI works, what its limits are, and how to use it judiciously.
Third, start by redesigning specific processes, don't try to transform the whole organization at once. Choose a critical workflow, map out where AI can add real value (not just automate what already exists), and build clear governance for that process.
Fourth, connect with communities of practice. You are not alone on this journey. There are thousands of leaders, managers and professionals navigating the same challenges. Learn from the mistakes and successes of others.
And fifthly - and perhaps most importantly - keep purpose at the center. AI is a tool, not an end in itself. The goal is not to adopt AI because everyone else is doing it, but to use AI to do more meaningful work, create more real value for customers, and build organizations that are more human, not less.
In my mentoring and consulting work with executives and companies, I help navigate exactly these issues: how to translate the speed of AI innovation into concrete, sustainable strategies that generate real value. How to build ethical AI organizational literacy. How to redesign processes to capture productivity without falling into “AI sprawl”. And how to do all this while keeping meaningful work and positive social impact at the center.
If your organization is at this inflection point - adopting AI quickly but feeling that it lacks strategic clarity, governance or measurable impact - let's talk. I offer immersive courses and mentoring programs designed for exactly this moment.
The next decade will be defined by those who know how to integrate artificial intelligence into the physical world with maturity, purpose and awareness of impact. It's not about having the best tools, but knowing how to use them to build the future we want to see.
And that future begins today, with the choices we make about how to adopt, govern and apply artificial intelligence in our organizations and in our lives.
✨Did you like it? You can sign up to receive 10K Digital's newsletters in your email, curated by me, with the best content about AI and business.
➡️ Join the 10K Community here
