The leaders of OpenAI, the organization responsible for the development of ChatGPT, are calling for the regulation of “superintelligent” AI systems.
They advocate the creation of a regulatory body similar to the International Atomic Energy Agency to protect humanity against the risks of creating an intelligence capable of undoing it.
In a brief statement posted on the company's website, co-founders Greg Brockman and Ilya Sutskever, along with CEO Sam Altman, call for the creation of an international regulatory body to begin the work of overseeing artificial intelligence systems.
The proposal calls for audits, compliance testing against security standards, and restrictions on deployment and security levels. These measures aim to reduce the existential risk associated with these systems and to protect humanity from potential dangers.
For several decades, researchers have been highlighting the risks potentially associated with superintelligence. As AI development rapidly advances, however, these risks become increasingly concrete.
The Center for AI Safety (CAIS), based in the US, is dedicated to mitigating the societal risks associated with artificial intelligence and has identified, for example, eight categories of risk it considers “catastrophic” and “existential” in the development of AI. These are serious risks, and we are exposed to them.
It is possible to imagine that, within the next 10 years, AI systems will reach a level of specialized skill in several domains and become as productive as some of the largest corporations operating today, according to the experts at OpenAI.
This rapid evolution of AI has the potential to significantly transform many industries, bringing efficiency and automation to many productive activities.
In terms of both potential advantages and the need for protection, superintelligence is a more powerful technology than those humanity has dealt with in the past, said the leaders' message.
There is the prospect of a significantly more prosperous future, but it is essential to carefully manage the risks involved in reaching that scenario. Faced with the possibility of existential risks, they note, it is crucial to adopt a proactive posture instead of merely responding to situations as they arise.
In the immediate term, the group emphasizes the need for “some level of coordination” among companies involved in advanced AI research, in order to ensure the smooth integration of increasingly powerful models into society, with security as a special priority.
Such coordination could be established through initiatives or collective agreements that seek to limit the advancement of AI capabilities. These approaches would be key to ensuring that AI development proceeds in a controlled and responsible manner, given the risks involved.