OpenAI, the company behind ChatGPT, is going through days of administrative instability with the comings and goings of its co-founder, Sam Altman.
As if that weren't enough, it came to light that shortly before the dismissal of Altman, who has since been reinstated, the company's board of directors received an alarming report.
In the document, researchers warned that an AI program called Q*, or Q-Star, being developed in parallel, could pose a serious risk to the stability of humanity in the future.
According to the Reuters news agency, which broke the story, Q* uses extensive computational power to answer basic mathematical questions.
At first, the researchers developing the technology were excited that it could answer these questions relatively quickly and completely autonomously.
However, the danger this new Artificial Intelligence represents does not lie in that basic capability. After all, even simple calculators can perform uncomplicated mathematical operations.
What really alarmed the experts is the way Q-Star arrives at its answers. Instead of relying on fixed calculation routines, like the binary logic of conventional calculators, the AI uses unique patterns in each answer it gives.
In other words, Q* can give several different answers to the same question put in front of it, which leaves room for "inventions" and even the delivery of misleading data.
Despite the current fanfare, the behavior observed in Q-Star is not new in the world of Artificial Intelligence.
Other intelligent chatbots, including ChatGPT itself, have already been "caught red-handed" making errors that stem from their own training method.
Generally speaking, AIs are trained to work much like a human brain: they analyze the information given to them, try to identify patterns, and build on those patterns. It is this same logic that allows us human beings to learn and pass on knowledge.
However, delegating such "reasoning" power to machines could set a dangerous precedent for a kind of "rebellion" against humanity.
This is because nothing prevents an Artificial Intelligence from concluding, through some pattern analysis or other, that humanity is a threat, or that a certain person needs to be eliminated, for example.
Worse still, AIs can be used by criminals to commit crimes, interfere in political and commercial decisions, tarnish people's reputations, and so on.
To smear a person's image, one need only feed the AI negative information about that individual. Likewise, chatbots like ChatGPT could be manipulated to nudge people toward one political position or another.
As Artificial Intelligence advances and spreads across all sectors of society, concerns around its ethical and peaceful use need to be at the center of the discussion.
The goal of AI should be to propel humanity to its next level, not to help criminals or serve as a weapon to further destabilize human relationships.
A graduate in History and Human Resources Technology, and passionate about writing, he now lives his dream of working professionally as a web content writer, producing articles across a wide range of niches and formats.