{"id":1234,"date":"2025-12-30T07:06:28","date_gmt":"2025-12-30T10:06:28","guid":{"rendered":"https:\/\/blog.felipematos.net\/pt_br\/fisico-alerta-que-ia-consome-energia-exponencialmente-enquanto-gigantes-transferem-riscos-por-que-estas-24-horas-revelam-a-encruzilhada-entre-sustentabilidade-e-crescimento-acelerado\/"},"modified":"2025-12-30T07:06:28","modified_gmt":"2025-12-30T10:06:28","slug":"physicist-warns-that-ia-consumes-energy-exponentially-while-giants-transfer-risks-why-these-24-hours-reveal-the-crossroads-between-sustainability-and-accelerated-growth","status":"publish","type":"post","link":"https:\/\/blog.felipematos.net\/en\/fisico-alerta-que-ia-consome-energia-exponencialmente-enquanto-gigantes-transferem-riscos-por-que-estas-24-horas-revelam-a-encruzilhada-entre-sustentabilidade-e-crescimento-acelerado\/","title":{"rendered":"Physicist Warns That AI Consumes Energy Exponentially While Giants Transfer Risks - Why These 24 Hours Reveal the Crossroads Between Sustainability and Accelerated Growth"},"content":{"rendered":"<p>We are at a crossroads. It's no longer a choice between adopting AI or not - that's already been made. The question now is: which path will we take? That of machines that consume resources exponentially while the financial and environmental risks are transferred to the most vulnerable, or that of consciously building a technology that serves humanity without destroying the planet?<\/p>\n<p>The last 24 hours have brought a series of news items that, taken together, expose exactly this tension. On the one hand, <a href=\"https:\/\/www.vaticannews.va\/pt\/mundo\/news\/2025-12\/rasetti-encruzilhada-inteligencia-artificial-ia.html\">physicist Mario Rasetti warns<\/a> that AI is developing at a \u201cdoubly exponential\u201d rate and consuming energy and water at unsustainable levels. 
On the other, <a href=\"https:\/\/www.infomoney.com.br\/business\/global\/como-as-maiores-empresas-de-tecnologia-estao-transferindo-os-riscos-da-ia\/\">companies like Meta and Microsoft are transferring the risks of this race<\/a> to smaller investors through sophisticated financial structures.<\/p>\n<p>Meanwhile, <a href=\"https:\/\/saude.abril.com.br\/ecossistema\/a-insustentavel-leveza-da-inteligencia-artificial\/\">the book <em>Atlas of AI<\/em> documents the extractive trail<\/a> of this industry, and <a href=\"https:\/\/www.jota.info\/opiniao-e-analise\/artigos\/tecnologia-e-ia-no-mundo-do-trabalho\">experts debate how to mitigate inequalities<\/a> in the job market.<\/p>\n<p>It's not catastrophism. It's realism. And it's also an opportunity to choose the right path.<\/p>\n<h2>The Fastest Revolution in History and Its Invisible Cost<\/h2>\n<p>Mario Rasetti, world-renowned physicist and professor emeritus at the Politecnico di Torino, doesn't mince words. For him, AI represents <strong>\u201cperhaps the greatest cultural revolution in the entire history of <em>Homo sapiens<\/em>\u201d<\/strong> - an \u201canthropological transition\u201d.<\/p>\n<p>But unlike other technological revolutions, this one is happening at \u201cdoubly exponential\u201d speed. What does this mean in practice? That with each cycle, growth not only doubles; the rate of doubling itself keeps accelerating. It's as if we were in a car whose accelerator is increasing its own power every second.<\/p>\n<p>And here's the problem: <strong>the physical infrastructure does not keep up with this speed<\/strong>.<\/p>\n<p>Rasetti specifically warns of two critical costs:<\/p>\n<ul>\n<li><strong>Energy:<\/strong> Data centers that process AI consume monumental amounts of electricity. 
He cites the case of Google's data centers in Ireland and Microsoft's investments in Three Mile Island (yes, the same nuclear power plant that had the famous accident in 1979).<\/li>\n<li><strong>Water:<\/strong> Cooling these servers consumes water on an industrial scale, with a direct impact on regions already affected by water scarcity.<\/li>\n<\/ul>\n<p>I've been working with companies and governments for years on developing AI strategies, and one question I always get is: <strong>\u201cDo you know how much it costs in environmental terms to run this model?\u201d<\/strong> The answer, more often than not, is silence.<\/p>\n<p>Not because people are irresponsible, but because this information is deliberately made invisible. The cloud seems light, but <a href=\"https:\/\/saude.abril.com.br\/ecossistema\/a-insustentavel-leveza-da-inteligencia-artificial\/\">as Kate Crawford documents<\/a> in <em>Atlas of AI<\/em>, it is sustained by a brutal extractive chain.<\/p>\n<h3>The Unsustainable Lightness of the Cloud<\/h3>\n<p>Kate Crawford, an internationally recognized researcher, dismantles the myth of \u201cclean\u201d technology. Her book shows that AI relies on:<\/p>\n<ul>\n<li><strong>Rare mineral extraction:<\/strong> Lithium, cobalt, rare earths - essential elements for supercomputers, extracted in often precarious conditions that transform entire landscapes.<\/li>\n<li><strong>Global supply chains:<\/strong> A logistics chain that connects mines in Africa to factories in Asia and data centers in the US and Europe.<\/li>\n<li><strong>Exponential energy consumption:<\/strong> AI has become one of the biggest energy consumers on the planet.<\/li>\n<\/ul>\n<p>And then there are the social costs: <strong>privacy constantly undermined<\/strong>, algorithms that <strong>reproduce historical prejudices<\/strong>, and a <strong>concentration of power<\/strong> in the hands of an ever smaller technological elite.<\/p>\n<p>The point is not to abandon AI. 
It's about demanding that it be built in a sustainable and democratic way. In my work with companies, I always reinforce this: <strong>Responsible AI is not just an ethical issue - it's a question of long-term viability<\/strong>.<\/p>\n<h2>How Giants Transfer Risk (And Why It Matters)<\/h2>\n<p>While environmental costs are being made invisible, financial risks are being transferred in increasingly sophisticated ways.<\/p>\n<p><a href=\"https:\/\/www.infomoney.com.br\/business\/global\/como-as-maiores-empresas-de-tecnologia-estao-transferindo-os-riscos-da-ia\/\">A New York Times report published on InfoMoney<\/a> exposes how Meta and Microsoft use complex financial structures to expand data centers without taking on debt directly on their balance sheets.<\/p>\n<p>Meta, for example, has created a special-purpose vehicle called <strong>Beignet Investor LLC<\/strong>. Through it, it issued bonds that mature in 2049 (!), transferring the risk to private creditors such as Blue Owl Capital and Pimco.<\/p>\n<p>The strategy is simple (and ingenious, from a financial point of view):<\/p>\n<ol>\n<li>Rent computing capacity from third parties<\/li>\n<li>Wait for demand to be confirmed<\/li>\n<li>Only then make a long-term financial commitment<\/li>\n<li>If the boom slows down, exit the agreements by classifying costs as operating expenses<\/li>\n<\/ol>\n<p><strong>Who takes the loss?<\/strong> Smaller companies, suppliers and investors who have taken on long-term infrastructure commitments.<\/p>\n<p>As Shivaram Rajgopal, professor at Columbia Business School, said: <em>\u201cRisk is like a tube of toothpaste. You squeeze it here, it comes out somewhere else.\u201d<\/em><\/p>\n<p>It reminds me of the accounting structures that preceded the dotcom bubble and the 2008 crisis. 
The difference is that now the asset in question is not just mortgages or shares in internet companies, but the AI computing infrastructure itself.<\/p>\n<p>Experts compare these arrangements to old techniques that add <strong>opacity to financing<\/strong>. And where there is opacity, there is systemic risk.<\/p>\n<h3>What Does This Mean for the Innovation Ecosystem?<\/h3>\n<p>If you are an entrepreneur, executive or investor, pay attention: <strong>the concentration of risks in smaller players can create a domino effect<\/strong>.<\/p>\n<p>Imagine a Brazilian startup that has developed a promising AI solution and needs computing power. It can:<\/p>\n<ul>\n<li>Rent from a large cloud provider (which may readjust prices or withdraw from the agreement)<\/li>\n<li>Invest in its own infrastructure (taking on long-term debt)<\/li>\n<li>Depend on private credit (with high interest rates and transferred risks)<\/li>\n<\/ul>\n<p>While the giants have the flexibility to navigate these options, <strong>startups and medium-sized companies are exposed<\/strong>.<\/p>\n<p>In my mentoring work with executives and companies, I always reinforce this: <strong>It's not enough to have a good AI idea. You need to understand the cost and risk structure of the infrastructure that supports it<\/strong>.<\/p>\n<h2>The Crossroads: Three Possible Paths<\/h2>\n<p>Rasetti uses a powerful metaphor: we are at a crossroads. And like every crossroads, we have to choose a path.<\/p>\n<h3>Path 1: Accelerate Without Limits<\/h3>\n<p>Continuing the exponential race, ignoring environmental and social costs, transferring risks to the most vulnerable. It's the path of growth at any cost.<\/p>\n<p><strong>Consequence:<\/strong> Environmental collapse, systemic financial crises, brutal widening of inequalities.<\/p>\n<h3>Path 2: Radical Braking<\/h3>\n<p>Imposing moratoriums, restricting development, treating AI as an existential threat. 
It's the path of fear.<\/p>\n<p><strong>Consequence:<\/strong> Loss of real opportunities to solve complex problems, competitive lag, concentration of power in the countries that didn't brake.<\/p>\n<h3>Path 3: Turning AI from Practice into Science<\/h3>\n<p>This is Rasetti's proposal - and the one that makes the most sense. It means <strong>deeply understanding the technology<\/strong>, establishing clear ethical and environmental limits, democratizing access, and building responsible governance.<\/p>\n<p><strong>Consequence:<\/strong> Sustainable, inclusive AI that serves humanity without destroying the planet.<\/p>\n<p>I choose the third way. And I believe that most people, when they understand the real options, choose it too.<\/p>\n<h2>AI Is Not Creative, It Has No Consciousness - And That's Fundamental<\/h2>\n<p>Rasetti makes a point of reaffirming something we often forget in the hype: <strong>intelligent machines will never have feelings or consciousness<\/strong>.<\/p>\n<p>AI <strong>represents<\/strong> reality, but does not <strong>understand<\/strong> it. 
It's like Plato's allegory of the cave: we see shadows projected on the wall, but not reality itself.<\/p>\n<p>And more:<\/p>\n<ul>\n<li><strong>AI is not creative:<\/strong> It reorganizes existing patterns, but does not genuinely create new knowledge.<\/li>\n<li><strong>It doesn't grasp deeper meaning:<\/strong> It can process language, but it doesn't understand human experience.<\/li>\n<li><strong>It doesn't know how to correct itself:<\/strong> It depends on human intervention for ethical and contextual adjustments.<\/li>\n<\/ul>\n<p>Rasetti's conclusion is comforting and challenging at the same time: <em>\u201cWe human beings are much more powerful than machines.\u201d<\/em><\/p>\n<p><strong>But that also means that the responsibility lies with us.<\/strong><\/p>\n<p>In my work with companies, one of the main transformations I seek is precisely this change in mentality: from \u201cAI will solve everything\u201d to \u201cAI is a powerful tool that needs strategic and ethical human guidance\u201d.<\/p>\n<h2>The Impact on the World of Work: Between Opportunity and Inequality<\/h2>\n<p><a href=\"https:\/\/www.jota.info\/opiniao-e-analise\/artigos\/tecnologia-e-ia-no-mundo-do-trabalho\">An article published in JOTA<\/a> addresses exactly this tension: AI transforms the labor market by creating new jobs and making others obsolete, but <strong>the pace is much faster than our ability to adapt<\/strong>.<\/p>\n<p>The main concern is not whether AI will create or destroy jobs - it will do both. The question is: <strong>who will have access to the necessary retraining?<\/strong><\/p>\n<h3>Three Strategies That Work<\/h3>\n<p>The article highlights three proven ways to mitigate inequalities:<\/p>\n<p><strong>1. Digital skills training<\/strong><\/p>\n<p>Successful examples in Kenya show that proper training generates real income growth. It's not theory - it's empirical evidence.<\/p>\n<p><strong>2. 
Reskilling within companies<\/strong><\/p>\n<p>Companies that adopt AI and reorganize functions (instead of just cutting jobs) generate significant salary gains for those who are retrained. And note: <strong>socio-emotional skills are increasingly valued<\/strong>.<\/p>\n<p>AI can process data, but it can't negotiate conflicts, lead diverse teams, or understand the cultural nuances of a market.<\/p>\n<p><strong>3. AI-facilitated inclusion<\/strong><\/p>\n<p>AI tools for creating CVs have increased the chances of less qualified candidates being hired by 8%. Technology can democratize access - when applied well.<\/p>\n<p><strong>The risk:<\/strong> Screening algorithms can reproduce historical prejudices. If the training data reflects past discrimination, AI amplifies that discrimination.<\/p>\n<p>That's why I always emphasize: <strong>Responsible AI requires constant auditing, diversity in development teams, and transparency in criteria<\/strong>.<\/p>\n<h3>Rethinking Social Security<\/h3>\n<p>The article raises a crucial point: it will be necessary to rethink social security in order to accommodate the <em>gig economy<\/em> and flexible working.<\/p>\n<p>The traditional model of salaried employment with a formal contract is being complemented (and in some cases replaced) by more fluid arrangements. 
This requires <strong>integrated public policies<\/strong> that protect workers without stifling innovation.<\/p>\n<p>It's a complex debate, but one that can't be postponed.<\/p>\n<h2>Curious Cases That Reveal Deep Trends<\/h2>\n<p>The last 24 hours have also brought seemingly minor cases that nonetheless reveal important trends.<\/p>\n<h3>Autonomous Cars That \u201cRead the Minds\u201d of Pedestrians<\/h3>\n<p><a href=\"https:\/\/www.gazetadopovo.com.br\/ideias\/novo-modelo-ia-consegue-prever-decisoes-pedestres-tempo-real\/\">Researchers from the USA and South Korea have developed OmniPredict<\/a>, an AI that predicts pedestrian behavior in real time with 67% accuracy - 10% more than previous models.<\/p>\n<p>The system analyzes 16 video frames (half a second) to predict the action 30 frames ahead (about a second). It doesn't sound like much, but <strong>it is enough for a car to plan maneuvers in advance<\/strong>, making traffic flow more smoothly and reducing sudden stops.<\/p>\n<p>The innovation lies in the use of <em>zero-shot learning<\/em> and language models to \u201creason\u201d about human intentions. Instead of just reacting, the vehicle anticipates.<\/p>\n<p><strong>Limitations:<\/strong> Difficulties with strong shadows and differentiating between pedestrians and cyclists in certain conditions.<\/p>\n<p>This perfectly illustrates what Rasetti says: AI represents, but it doesn't understand. 
It identifies patterns of behavior, but it doesn't understand <em>why<\/em> someone acts in a certain way.<\/p>\n<h3>Fake AI Videos and the Crisis of Confidence<\/h3>\n<p><a href=\"https:\/\/revistaoeste.com\/tecnologia\/video-viral-que-mostra-fabrica-de-visualizacoes-foi-feito-com-ia\/\">A viral video<\/a> that claimed to show a Military Police \u201cviews factory\u201d inflating hits on YouTube - reaching 1.2 million views on X - was created using AI.<\/p>\n<p>Although <em>click farms<\/em> are a real scam, <strong>the use of AI to produce videos about them<\/strong> raises concerns about layered disinformation.<\/p>\n<p>It's not just the fake content that's worrying, but the <strong>AI's ability to generate credible material about real fraudulent practices<\/strong>, creating confusion between what is genuine and what is manufactured.<\/p>\n<p>Enforcement is hampered by the lack of specific legislation in many countries. And here we come back to the crossroads: we need governance.<\/p>\n<h3>TJRJ Spends R$ 518 Thousand on AI Course in Italy<\/h3>\n<p><a href=\"https:\/\/oglobo.globo.com\/blogs\/lauro-jardim\/post\/2025\/12\/tjrj-gasta-r-518-mil-em-curso-de-ia-para-desembargadores-na-italia.ghtml\">Twenty-three TJRJ judges<\/a> attended a course on \u201cLaw, Justice and Artificial Intelligence\u201d at the University of Milan, costing the public coffers R$ 518,000.<\/p>\n<p>The news has generated controversy, but I see two sides:<\/p>\n<p><strong>Positive side:<\/strong> Judges do need to understand AI in order to judge cases involving the technology. Regulation, data protection, the ethical challenges of judicial automation - these are key issues.<\/p>\n<p><strong>Problematic side:<\/strong> The cost and opacity. Are there courses of equivalent quality in Brazil? Why choose an expensive international course? What is the real impact of this investment on the application of the law?<\/p>\n<p>The TJRJ is planning three more similar courses in Europe in 2026. 
<strong>Transparency and accountability are essential<\/strong>, especially when it comes to public resources.<\/p>\n<h2>What to Do at This Crossroads?<\/h2>\n<p>So we come back to the initial question: which path to choose?<\/p>\n<p>I believe there are five fundamental principles on the right path:<\/p>\n<h3>1. Radical Transparency in Environmental Costs<\/h3>\n<p>Companies need to disclose the energy and water consumption of their AI models. Not as marketing, but as a mandatory metric. Just as we have nutritional labels on food, we need \u201cenvironmental labels\u201d on AI.<\/p>\n<h3>2. Strict Financial Governance<\/h3>\n<p>Structures that transfer risks in an opaque way must be regulated. Not to make innovation unviable, but to protect the ecosystem as a whole.<\/p>\n<h3>3. Massive Investment in Reskilling<\/h3>\n<p>Companies, governments and support organizations need to create accessible and practical training programs. There's no point in talking about the \u201cfuture of work\u201d without preparing people for it.<\/p>\n<h3>4. Ethical Auditing of Algorithms<\/h3>\n<p>AI systems used in critical decisions (hiring, credit, justice) should be regularly audited by diverse and independent teams.<\/p>\n<h3>5. Conscious Leadership<\/h3>\n<p>Executives and entrepreneurs need to understand that <strong>AI is not magic<\/strong>. It's powerful technology that requires informed strategic decisions, social responsibility and long-term vision.<\/p>\n<p>As Rasetti emphasizes: we human beings are more powerful than machines. But only if we exercise this leadership consciously.<\/p>\n<h2>The Choice Is Ours - And the Time Is Now<\/h2>\n<p>The last 24 hours have painted a clear picture of the crossroads we are at. On the one hand, technology is developing exponentially, consuming resources and transferring risks. On the other, the possibility of building sustainable, inclusive and responsible AI.<\/p>\n<p>The choice is not between having or not having AI. 
It's between <strong>having AI that serves humanity or having humanity that serves AI<\/strong>.<\/p>\n<p>Rasetti reminds us that we are living through perhaps the greatest cultural revolution in human history. Crawford warns us about the hidden costs. Experts point to concrete ways to mitigate inequalities.<\/p>\n<p><strong>My question to you is: which path will you and your organization choose?<\/strong><\/p>\n<p>In my mentoring work with executives and companies, I help navigate exactly this crossroads - translating technical complexity into applicable strategy, connecting innovation with responsibility, and building real organizational capacity to take advantage of AI without falling into the traps of hype or extractivism.<\/p>\n<p>Because in the end, the most powerful AI isn't the one that processes the most data. It's the one that serves a clear purpose, is led by conscious people, and is built to last.<\/p>\n<p>And this path, unlike algorithms, <strong>you can't build alone<\/strong>.<\/p>","protected":false},"excerpt":{"rendered":"<p>We are at a crossroads. It's no longer a choice between adopting AI or not - that's already been made. The question now is: which path will we take? 
That of machines that consume resources exponentially while the financial and environmental risks are transferred to the most vulnerable, or that of consciously building a technology that [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":1233,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[9],"tags":[24,19],"class_list":["post-1234","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-inteligencia-artificial","tag-criado-por-ia","tag-ia"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/blog.felipematos.net\/en\/wp-json\/wp\/v2\/posts\/1234","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.felipematos.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.felipematos.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.felipematos.net\/en\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.felipematos.net\/en\/wp-json\/wp\/v2\/comments?post=1234"}],"version-history":[{"count":0,"href":"https:\/\/blog.felipematos.net\/en\/wp-json\/wp\/v2\/posts\/1234\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blog.felipematos.net\/en\/wp-json\/wp\/v2\/media\/1233"}],"wp:attachment":[{"href":"https:\/\/blog.felipematos.net\/en\/wp-json\/wp\/v2\/media?parent=1234"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.felipematos.net\/en\/wp-json\/wp\/v2\/categories?post=1234"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.felipematos.net\/en\/wp-json\/wp\/v2\/tags?post=1234"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}