Whistleblower protection
AI researchers want to be allowed to warn of risks
A group of AI researchers, including current and former employees of ChatGPT maker OpenAI, is demanding the right to warn the public about the dangers of the software. Current whistleblower protections are not sufficient, the experts argued in an open letter published on Tuesday.
That is because such protections are primarily geared toward illegal activity by companies, while in many cases there are still no legal requirements governing artificial intelligence. "Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry," the letter said.
The researchers called on companies developing advanced AI models to commit to four principles. These include not prohibiting employees from making critical comments about their employers. It was recently revealed that OpenAI had threatened departing employees with the forfeiture of their vested stock options if they "disparaged" the company. OpenAI CEO Sam Altman apologized and had the clause, which he said he had not known about, removed; he also said it had never been enforced.
Another demand in the letter is a procedure allowing employees to anonymously raise what they believe are risks in AI software with company boards and with regulators. Employees should also be free to go public where such internal channels do not exist.
Concerns about loss of control
Some AI experts have long warned that the rapid development of artificial intelligence could produce autonomous software beyond human control. The feared consequences range from the spread of misinformation and large-scale job losses to the extinction of humanity. This is one reason governments are working to establish rules for the development of AI software.
OpenAI is considered a pioneer in this area with the software behind ChatGPT. A spokeswoman for OpenAI told the New York Times that the company believes in a scientific approach to the risks of the technology.
No reason for a warning so far
Four current and two former OpenAI employees signed the letter anonymously. Among the seven signatories who made their names public are five former OpenAI employees and one former employee of the Google subsidiary DeepMind. Neel Nanda, who currently works at DeepMind and was previously at the AI start-up Anthropic, emphasized that he had not encountered anything at his current or former employers that he felt the need to warn about.
This article has been automatically translated.