
Individuals with past and present roles at OpenAI and Google DeepMind called on June 4 for protections for AI critics and whistleblowers.

The authors of an open letter urged artificial intelligence companies not to enter into agreements that block criticism, or punish criticism by withholding economic benefits.

They also said that companies should foster a culture of “open criticism” while still protecting trade secrets and intellectual property.

The authors asked companies to create protections for current and former employees, arguing that existing whistleblower processes fail to cover risk-related concerns. They wrote:

“Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.”

Finally, the authors said artificial intelligence companies should create procedures through which employees can raise risk-related concerns. Such procedures should allow individuals to escalate their concerns to company boards, regulators, and independent outside organizations alike.

Personal concerns

The letter’s thirteen authors described themselves as current and former employees of “frontier AI” companies. The group includes eleven past and present members of OpenAI, as well as one current and one former Google DeepMind employee.

They described their personal concerns, saying:

“Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry.”

The authors highlighted various risks posed by artificial intelligence, such as the entrenchment of inequality, manipulation, misinformation, loss of control of autonomous AI systems, and potential human extinction.

They said that artificial intelligence companies, along with governments and experts, have acknowledged these dangers. However, companies have “strong financial incentives” to avoid oversight and only weak obligations to voluntarily share private information about their systems’ capabilities.

The authors otherwise affirmed their belief in the benefits of artificial intelligence.

Earlier 2023 letter

The request follows an open letter published in March 2023 entitled “Pause Giant AI Experiments,” which similarly highlighted risks around artificial intelligence. That earlier letter gained signatures from industry leaders such as Tesla CEO and X chairman Elon Musk and Apple co-founder Steve Wozniak.

The 2023 letter urged companies to pause giant AI experiments for six months so that policymakers could create legal, safety, and other frameworks.
