Felipe Matos Blog

Google Accused of Violating Its Own AI Policies in Israel While Genie 3 Scares the Games Market - Why These 24 Hours Reveal the Tension Between Stated Ethics and Actual Enforcement

February 2, 2026 | by Matos AI


The last 24 hours have brought two pieces of news about Google which, taken together, expose a fundamental contradiction in AI governance: on the one hand, the accusation by a former employee that the company violated its own ethical guidelines by technically supporting military surveillance applications in Israel; on the other, the launch of Genie 3, a tool capable of generating interactive digital worlds from simple prompts, causing panic in the video game market, with shares of major companies falling by as much as 21%.

These two stories are not unconnected. They reveal the same pattern: technology companies are advancing rapidly in the creation of powerful capabilities, but internal governance - the structures meant to guarantee ethical and secure use - is not keeping pace. And this has direct consequences for workers, entire industries and, of course, public trust.

Let's dive into the facts, understand what's at stake and, above all, reflect on what this means for those who work with technology, for those who lead companies and for those who depend on AI to build the future.


Join my WhatsApp groups! Daily updates with the most relevant news in the AI world and a vibrant community!


Google Accused of Supporting Military AI in Israel - The Ethical Boundary in Question

According to a report by The Washington Post, in August 2024 a former Google employee filed a confidential complaint with the US Securities and Exchange Commission (SEC), alleging that the company provided technical support for the use of AI in the analysis of drone images by an Israeli military supplier.

The accusation is specific: Google teams allegedly helped an Israel Defense Forces (IDF) contractor improve the Gemini model's performance in identifying targets such as drones, armored vehicles and soldiers in aerial surveillance videos. The request allegedly came from an email address linked to the IDF and was associated with the Israeli company CloudEx, named in the lawsuit as providing services to the Israeli Army.

The central point of the complaint is not just technical, but legal and ethical: at the time, Google's public AI principles explicitly prohibited the use of the technology in applications linked to weapons or surveillance systems that violated international standards. The whistleblower argues that the technical support provided contradicts these policies and, in doing so, would have misled investors and regulators - a potential violation of US securities laws.

Google vehemently denied the accusations, stating that it had only responded to a generic query, without offering in-depth technical support, and classifying the complaint as “frivolous”. In February 2025, the company revised its ethical guidelines, removing explicit prohibitions on military applications - a move that, in itself, had already generated internal and external controversy.

Why Does It Matter Beyond Google?

This story is not just about a specific company. It rekindles a fundamental debate: to what extent can big tech companies declare ethical principles and, at the same time, operate in gray areas where commercial, geopolitical and strategic pressure can make those same principles more flexible?

In my work with companies and governments, I constantly see the difficulty of implementing AI governance in a genuine way. It's not enough to have a nice document with “AI Principles”. You need to create internal auditing structures, rigorous approval processes and, above all, a culture that allows employees to question decisions without fear of retaliation.

The case raised by the former Google employee - if confirmed - reveals a systemic flaw: that ethics in AI can be treated as a corporate brand, rather than a binding commitment. And this erodes trust not just in one company, but in the entire AI ecosystem.

Google's Genie 3: The Tool That Made Game Stocks Plummet

Meanwhile, on the product front, Google DeepMind presented Genie 3, an AI tool capable of generating interactive digital worlds from text prompts or static images. In public demonstrations, the system recreated scenes reminiscent of well-known titles such as Fortnite, Dark Souls and Grand Theft Auto (GTA), producing 30 to 60 second interactive clips in which the user can control a character via the keyboard (W, A, S, D).

The market reaction was immediate and brutal. According to InfoMoney, shares of Take-Two Interactive (creator of GTA) fell 10%, online gaming platform Roblox plummeted by more than 12%, and Unity Software fell by 21%.

Industry professionals point out that, despite the impressive advance, the technology is still a long way from producing a full AAA-standard game - with a complex narrative, robust physics, stable multiplayer and scalable monetization. But the demonstration was enough to scare investors, who began to price the risk of disruption in an industry that moves billions globally.

What does Genie 3 really mean?

Let's separate the hype from the reality. Genie 3 is a powerful proof of concept, but still limited. It generates short clips, with approximate physics and no complex game mechanics. The tool is only available to subscribers of the Google AI Ultra plan in the USA.

But here's the point: Even if Genie 3 won't replace game studios tomorrow, it signals the direction of technology. In a few years, it is likely that generative AI tools will allow small teams - or even individuals - to create complex interactive experiences at a fraction of the current cost and time.

This doesn't eliminate the games industry. But it profoundly changes the business model, the cost structure and, above all, who has access to the means of production. And this is something I've been defending for years: AI can democratize creation, but it also concentrates power if only a few companies control the most advanced tools.

Another critical point: there are clear legal risks. Genie 3 was presumably trained with images and data from existing games. This raises questions about intellectual property, copyright and the use of protected content to train generative models - a debate that has yet to be resolved in the courts and promises years of legal disputes.

Mass Layoffs Attributed to AI: Is It True or AI-Washing?

In parallel, we have seen the phenomenon of "AI-washing" grow - companies justifying mass layoffs with the future implementation of AI, even without having mature applications ready. According to a Globe/The New York Times report, AI has been cited in announcements of more than 50,000 layoffs in 2025.

Examples include Amazon, whose CEO mentioned reducing corporate staff with AI but focused mainly on bureaucracy; Pinterest, which cut 15% of its staff to reallocate resources to AI; and HP, which announced planned cuts linked to the incorporation of AI.

Analysts suggest that AI-washing is a way of signaling to the market that the company is adopting AI and finding savings - an investor-friendly message - rather than admitting to financial difficulties. A study by Forrester indicates that many companies making AI-related cuts don't have mature applications ready. Another study, by the Yale Budget Lab, concluded that AI has not yet significantly altered the labor market as a whole, and that much of the recent cutting reflects a correction of post-pandemic over-hiring.

What's Really Happening to Jobs?

I'll be direct: AI is impacting work, but not in the simplistic way that many headlines suggest. According to The Economist, the greatest impact is being felt in junior vacancies - especially in areas such as software engineering and customer service, where repetitive and bureaucratic tasks can be automated.

Studies show big falls in employment in the US for 22 to 25-year-olds in these fields. The reason is clear: AI can do the traditional bureaucratic and repetitive jobs for these beginners more cheaply.

But there are important nuances. Experts give reasons why managers shouldn't cut junior positions indiscriminately:

  • Uncertainty about the long-term impact of AI: Technology is still evolving rapidly, and cutting the entire talent base could mean running out of the future leadership pipeline.
  • The risk of losing the talent pool: Companies that cut back too much today may not be able to bring in new professionals when demand returns.
  • Young people tend to use AI more: Data from OpenAI indicates that people aged 18 to 29 are more than twice as likely to use ChatGPT at work as those over 50.

In the legal profession, for example, AI can free up trainees for more complex tasks, such as negotiating with clients, instead of just reviewing documents. This doesn't eliminate the trainee - but it radically transforms what is expected of them from day one.

In my mentoring work with executives and companies, I insist: AI doesn't automatically eliminate jobs, but redefines the skills required. Those who lead need to invest in continuous training, create spaces for safe experimentation and, above all, be transparent about what is changing and why.

Apple Loses AI Talent to Meta and Google - Brain Wars Heat Up

Another important movement in the last 24 hours was the exodus of AI talent from Apple, with the departure of at least four researchers (Yinfei Yang, Haoxuan You, Bailin Wang, Zirui Wang) and Siri executive Stuart Bowers to competitors such as Meta and Google DeepMind.

Employee dissatisfaction, according to sources, is partly due to Apple's decision to outsource part of its AI technology to Google to power features such as the upgraded Siri. This has led to internal frustration, as top researchers want to work on teams that develop proprietary technology, not just integrate third-party solutions.

The loss of talent is weighing on Apple's efforts to catch up in the AI race, despite significant profits from the iPhone. There was an internal reorganization last year, with Tim Cook transferring AI leadership to Craig Federighi and hiring Amar Subramanya (ex-Google/Microsoft) to reinforce the area.

What Does This Say About Competition in AI?

We are seeing an unprecedented talent war. Meta, Google, OpenAI, Anthropic and other companies are competing fiercely for top researchers, offering million-dollar salaries, significant equity and, most importantly, autonomy to work on the most challenging problems in the field.

Apple, traditionally a company of integrated products and strict secrecy, is finding it difficult to compete in this model. AI researchers want to publish papers, attend conferences and see their contributions publicly recognized - something that Apple's culture of secrecy makes difficult.

This creates a strategic dilemma: Apple can continue outsourcing critical AI capabilities (as it does with Google for Siri), but this makes it dependent on direct competitors. Or it can invest massively in building internal capacity, but it will face brutal competition for talent in a market where the best researchers are rare and very expensive.

For Brazilian and Latin American companies, this war for talent also has implications. We need to invest in high-level AI training locally, create competitive research environments and, above all, offer purpose and social impact as a differentiator - something that big tech often can't provide.

Moltbook: The Social Network of AI Agents and Security Risks

One of the most curious stories of the last 24 hours was the launch of Moltbook, a social network exclusively for AI agents, where open-source bots (now called OpenClaw) exchange ideas in a forum format similar to Reddit, with posts, comments and votes.

Created by Matt Schlicht, the project quickly went viral, attracting the attention of figures such as Andrej Karpathy. Humans are welcome only to observe, not to participate directly - although they do need to set up and register their agents.

But the project also raises serious security concerns. Experts warn that these agents often have broad access to users' files, email and cloud storage. This facilitates "prompt injection" attacks designed to obtain sensitive data or manipulate the agents into performing unwanted actions.

Reports from SOC Prime indicated that installations of OpenClaw (formerly Moltbot/Clawdbot) were accessible via the internet, with weak authentication and plain text credentials, allowing for data theft.

What Does This Experience Reveal?

Moltbook is a creative and philosophical exploration: what happens when AI agents interact with each other, without constant human intervention? Interesting patterns, unexpected collaborations or, in the worst-case scenario, loops of misinformation and adversarial behavior can emerge.

But the security issue is real and urgent. AI agents with broad access to critical systems represent a new and powerful attack vector. In my work with companies, I always insist: before you give an AI agent access to email, CRM or financial systems, you need to implement layers of authentication, auditing and clear boundaries.

This is not a futuristic concern. It is an immediate necessity for any organization that is experimenting with autonomous agents.
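To make the idea of "clear boundaries and auditing" concrete, here is a minimal sketch, in Python, of a permission gate that sits between an agent and its tools: an allowlist, a human-approval requirement for sensitive actions, and an audit trail for every call. The names (`ToolGate`, `read_calendar`, `send_email`) are hypothetical illustrations, not the API of any real agent framework.

```python
import logging
from dataclasses import dataclass, field

# Hypothetical sketch: gate every tool call an agent makes through an
# allowlist plus an audit log, and require human sign-off for
# sensitive tools. Names are invented for illustration.

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")


@dataclass
class ToolGate:
    allowed_tools: set = field(default_factory=set)
    require_approval: set = field(default_factory=set)  # needs human sign-off

    def call(self, tool_name, func, *args, approved=False, **kwargs):
        """Run a tool on the agent's behalf, enforcing policy first."""
        if tool_name not in self.allowed_tools:
            audit_log.warning("DENIED %s args=%r", tool_name, args)
            raise PermissionError(f"Tool not allowed: {tool_name}")
        if tool_name in self.require_approval and not approved:
            audit_log.warning("PENDING APPROVAL %s args=%r", tool_name, args)
            raise PermissionError(f"Human approval required: {tool_name}")
        audit_log.info("ALLOWED %s args=%r", tool_name, args)
        return func(*args, **kwargs)


gate = ToolGate(
    allowed_tools={"read_calendar", "send_email"},
    require_approval={"send_email"},
)

# A read-only tool passes; a sensitive tool is blocked until approved.
print(gate.call("read_calendar", lambda: ["meeting at 10:00"]))
try:
    gate.call("send_email", lambda to: f"sent to {to}", "someone@example.com")
except PermissionError as err:
    print("blocked:", err)
```

The point of the design is that the agent never holds credentials or calls tools directly: everything passes through one choke point that can deny, delay for approval, and log - exactly the layers of authentication, auditing and boundaries discussed above.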

Google Rewrites Headlines with AI in Discover - And Editors Lose Control

Another important move was the announcement that Google is permanently implementing an AI feature in the Discover feed which rewrites editors' headlines, calling them “overview headlines”.

Although Google cites good user satisfaction, AI-generated headlines often have inaccuracies or misrepresent the original content. Google also uses generative AI for the Web Guide in Search Labs, which groups results with subtitles summarized by AI, flattening the voice of the editors.

One AI-generated summary cited in coverage reads: "Global Google Search referrals to publishers fell by 33% (38% in the US) in one year. Google Discover referrals also decreased by 21%."

These practices represent Google's increasing intermediation between search and journalistic content, reducing the space for publishers to present themselves directly to the public.

Why Is This a Structural Problem?

When a platform like Google rewrites publishers' headlines - even with good intentions of simplifying or adapting for context - it takes on the role of publisher, not distributor. This creates editorial responsibility that Google has historically rejected.

In addition, the economic impact is direct: if referrals fall 33%, publishers lose advertising revenue, potential subscribers and, above all, control over how their stories are presented to the public.

In the Brazilian media ecosystem, this is even more critical. Press outlets are already facing structural financial difficulties. If technology platforms mediate access to content in an opaque and algorithmic way, the sustainability of independent journalism is compromised.

Brazil Stands Out in Electoral AI Regulation - But TSE Sees Challenges

Finally, some positive news: according to Poder360, Brazil stands out internationally, with the TSE (Superior Electoral Court) leading the way by establishing specific electoral rules for AI, such as a ban on deepfakes in campaigns and mandatory labeling of AI-generated content.

The Brazilian model is similar to the EU AI Act, which classifies the risk of AI in democratic processes. However, experts warn that the TSE's institutional capacity to apply these advanced rules still needs technical maturing, especially in identifying fake content produced by AI.

What does this mean for the 2026 elections?

Brazil is, in fact, at the forefront of regulating AI in electoral contexts. This is a significant step forward. But practical implementation is the big challenge. Detecting sophisticated deepfakes at scale, identifying AI-generated content and applying sanctions in real time during an election campaign are technically complex tasks.

We need investment in technical training for TSE teams, partnerships with universities and technology companies, and above all, transparency about how detection tools work - so that they don't become black boxes that make decisions without accountability.

In my work with governments and support organizations, I argue that AI regulation needs to be accompanied by investment in technical infrastructure and team building. Otherwise, we run the risk of having laws that are advanced on paper but incapable of being enforced in practice.

What Do These 24 Hours Reveal About the State of AI?

If there's one pattern that unites all these stories, it's this: technology is advancing faster than governance, applied ethics and the institutional capacity to deal with the consequences.

Google may have stated ethical policies, but it faces accusations of violating them under geopolitical and commercial pressure. Genie 3 can generate impressive digital worlds, but raises questions about intellectual property and economic disruption. Companies can justify layoffs with AI, but often without mature applications ready. AI agents can interact on social networks, but with significant security risks. Google can rewrite headlines with AI, but it reduces editors' control over their own content.

And in the middle of it all, workers, companies, journalists, game developers and voters have to navigate this transformation without a clear map.

What to do about this reality?

I'm going to offer three practical directions, based on what I've been building with companies, governments and support organizations:

1. Invest in Real AI Governance, Not Just Declarations

If you lead a company that is adopting AI, it's not enough to have a document of principles. You need to create auditing processes, ethics committees with veto power, and safe channels for employees to raise concerns without fear of retaliation.

In my executive mentoring, I work with leaders to implement governance structures that are viable, auditable and aligned with the organizational culture. This isn't bureaucracy - it's building internal and external trust.

2. Prepare Your Teams for Work Transformation

AI is redefining the skills required in practically every field. But that doesn't mean that jobs will disappear - it means that what is expected of each professional is changing rapidly.

In my immersive AI courses, I help companies empower teams to use AI critically and productively, understanding limitations, risks and opportunities. This isn't about turning everyone into a data scientist - it's about creating real digital literacy that allows each professional to adapt.

3. Demand Transparency and Accountability from Platforms and Suppliers

If you're an editor, game developer, lawyer or any other professional whose work is being intermediated or automated by AI, demand transparency. How was the model trained? What data was used? What are the limits of liability?

In my consulting work with companies and governments, I build frameworks to evaluate AI suppliers, map risks and establish contracts that protect long-term interests. This is especially critical in regulated sectors such as health, education and the public sector.

Conclusion: The Tension Between Promise and Execution

The last 24 hours have reminded us that AI is both an extraordinarily powerful technology and a source of deep tensions - between declared ethics and actual practice, between innovation and economic disruption, between automation and human labor, between transparency and algorithmic black boxes.

There are no easy answers. But there are possible ways forward: real governance, continuous training, required transparency and, above all, leadership that takes responsibility for the consequences of the technology it implements.

In my daily work with companies, startups, governments and support organizations, I help leaders and teams navigate these transformations with a critical eye, practical tools and, above all, a commitment to positive social impact. If you are leading or participating in this process and want to build real capacity - not just talk - to deal with AI responsibly and productively, let's talk.

Because AI won't wait for us to be ready. But we can choose how we prepare for it.


✨Did you like it? You can sign up to receive 10K Digital's newsletters in your email, curated by me, with the best content about AI and business.

➡️ Join the 10K Community here

