OpenAI, Google, Microsoft and Meta reach agreement to evaluate AI

Isbel Lázaro.

Inspenet, November 18, 2023.

The AI Safety Summit, held last week in the UK, concluded with major technology companies such as OpenAI, Google, Microsoft and Meta committing to work closely with governments on future evaluations of their new models.

This agreement aims to mitigate the risks associated with this technology, which participating countries have identified as potentially catastrophic, according to a joint statement.


British Prime Minister Rishi Sunak highlighted that the United States, the European Union and other countries have reached an agreement with a select group of companies to carry out extensive testing of artificial intelligence models, both before and after their public release. Representatives from OpenAI, Anthropic, Google, Microsoft, Meta and xAI participated in Thursday’s sessions, where these evaluation processes were discussed.

In order to move in this direction, they have asked Yoshua Bengio, a prominent Canadian computer scientist recognized as one of the leaders in artificial intelligence, to lead the preparation of a report on the “state of the science.” This report will contribute to understanding the current capabilities of the technology and prioritizing the associated risks.

“Until now, the only people testing the safety of new AI models have been the very companies developing them,” Sunak said in a statement.

The British government announced that both companies and governments support the creation of a new global testing center to be located in the United Kingdom. It was highlighted that special attention will be paid to risks related to national security and society.

The risks of Artificial Intelligence

Vice President Kamala Harris, leading the United States delegation, expressed that her government’s measures should serve as inspiration and instruction for other countries. Harris highlighted that the main focus should be on how Artificial Intelligence could contribute to widening inequalities both within societies and between countries.

The US official had already insisted on the need to adopt a “more practical approach,” one not limited to evaluating existential risks. “There are additional threats that also demand our action, threats that are currently causing harm,” Harris said.

The vice president cited specific examples, such as an elderly man who lost his health care plan because of a faulty artificial intelligence algorithm, and a woman threatened by an abusive partner with manipulated photos. Harris emphasized the need to address the entire spectrum of threats, not just existential risks such as massive cyberattacks or biological weapons.

Among the most commonly cited dangers of this technology are identity cloning, AI-controlled autonomous driving and disinformation campaigns. In response, Harris announced the creation of a new artificial intelligence safety institute in the United States, focused on setting standards for testing systems intended for public use, in collaboration with the testing center in the United Kingdom announced by Sunak.

Source and photo: siempre-inteligencia-artificial
