Alphabet, Google’s parent company, has quietly altered its stance on artificial intelligence (AI). The tech giant has removed a crucial commitment from its AI principles: the explicit pledge not to use AI to develop weapons or surveillance tools. In a significant shift, Google now argues that AI must support national security, signaling potential collaboration with democratic governments. The change, detailed in a blog post by Google executives James Manyika and Demis Hassabis, has sparked intense debate over AI’s ethical boundaries.

Why is Google changing its AI ethics?

When Google first laid out its AI principles in 2018, it sought to reassure the public that it would not use AI for harmful purposes. But technology has evolved rapidly. According to Manyika and Hassabis, AI has gone from a niche research topic to a fundamental technology shaping modern life, much like the internet and smartphones. With billions of people now using AI daily, they argue that fresh guidelines are needed to address new challenges.

AI, security, and global power struggles

Google’s revised stance arrives at a time of growing geopolitical tension. The blog post emphasizes that democracies must lead AI development based on principles of freedom, equality, and human rights. The shift has nonetheless divided AI experts: some believe national security concerns justify AI’s involvement in defense, while others warn of the risks tied to military AI applications.

Google’s expanding AI investments

Alongside this ethical shift, Google is ramping up its AI investments. The company announced a staggering $75 billion AI budget for the year, far more than analysts expected. These funds will support AI research, infrastructure, and tools such as Gemini, Google’s AI-powered search assistant. Despite these advancements, Alphabet’s latest financial report revealed weaker-than-expected results, briefly denting its stock price.
Nevertheless, Google remains committed to AI’s expansion, embedding the technology into search, smartphones, and a range of other applications.

From “Don’t be evil” to “Do the right thing”

Google has long wrestled with its ethical responsibilities. In its early days, the company’s motto was “Don’t be evil.” After restructuring under Alphabet in 2015, it shifted to “Do the right thing.” In 2018, Google abandoned an AI contract with the U.S. Pentagon following mass employee protests against “Project Maven,” a military AI initiative; workers feared it was the first step toward AI-driven warfare. Now, with Google dropping its explicit ban on military uses of AI, the question remains: is the tech giant upholding its ethical commitments, or moving toward a future where AI serves defense interests above all else?