AI technologies are rapidly advancing, and the prospect of Artificial General Intelligence (AGI) raises significant safety concerns.
I first started thinking about this many years ago, when I read Isaac Asimov's robot stories. In those stories, robots are governed by the Three Laws of Robotics, designed to ensure their safe interaction with humans. And I naturally wondered: why would robots and AIs follow those laws? Why couldn't they simply modify their own code to remove or change them?
In this blog post, I use the term AGI to mean an AI system with generalized cognitive abilities comparable to those of a human. The term is used in many different ways today, so it's worth pinning down that definition up front.