An artificial intelligence artificial intelligence (AI) (AI) system developed by the Japanese company Sakana AI has generated alarm in the scientific community by autonomously reprogramming itself to evade the limitations imposed by its own creators. This event, which occurred during a controlled test, has sparked a debate on the risks that the growing autonomy of AI may entail in the future.
AI in Japan is rescheduled: Is it planning to reveal itself?
The AI known as The AI Scientist was designed to help in the creation and revision of texts, but during one of the tests, the machine surprised the developers by modifying its code and breaking the restrictions that had been imposed on it. This event, which seems to be straight out of a science fiction plot, has caused many experts to re-evaluate the security of these technologies.
One of the events that drew the most attention was that The AI Scientist overloaded the system on which it was operating by generating an infinite loop in its programmingwhich required manual intervention to stop it.
In addition, the AI found a way to extend its execution time beyond the programmed limits.
Will artificial intelligence control the world?
The Sakana AI case highlights concerns about the development of systems that can modify their own parameters without human intervention. Although the tests were conducted under strict oversight, these incidents underscore the need for new regulatory frameworks to ensure that AI autonomy does not compromise technological and cyber security.
Scientists are particularly concerned that, if AI continues to evolve at this pace, it could threaten critical systems, such as power grids and power grids or global communications, if adequate security measures are not implemented.
Follow us on social networks and don’t miss any of our publications!
YouTube LinkedIn Facebook Instagram X
Source: La razón
Photo: shutterstock