Artists testing OpenAI's Sora AI tool leak it online: OpenAI can't catch a break

OpenAI faces yet another controversy after a group of artists leaked Sora, its impressive video generation tool, online, prompting the company to suspend access to the tool for now. The long-awaited AI tool was supposed to be another futuristic showcase from the ChatGPT maker and a step toward its ultimate goal of achieving artificial general intelligence. Instead, some artists are accusing OpenAI of using them as "PR puppets" and exploiting them for "unpaid R&D and PR."
Here is everything we know about the incident and its implications for OpenAI and the broader AI industry.
What is Sora?
Sora is a text-to-video generation tool that can create clips up to one minute long. OpenAI gave hundreds of artists free access to test the tool. However, some of those artists leaked the tool online, accusing the ChatGPT maker of exploiting them financially.
In their statement, published on Hugging Face, the group said: "Hundreds of artists provide unpaid labor through bug testing, feedback and experimental work for the program for a $150 billion valued company. While hundreds contribute for free, a select few will be chosen through a competition to have their Sora-created films screened — offering minimal compensation which pales in comparison to the substantial PR and marketing value OpenAI receives."
OpenAI, is this true???
There is news of an OpenAI Sora leak, and it is now available on Hugging Face. OpenAI Sora early testers and the artists involved are angry because they feel exploited. They claim they were invited as "testers," but ended up working for free… pic.twitter.com/NK4f6VSQX6
— Ashutosh Shrivastava (@AII_for_sucss) November 26, 2024
OpenAI was valued at $157 billion in its latest funding round
The artists take issue with the mammoth $157 billion valuation OpenAI achieved in its recent funding round. Backers such as Thrive Capital, Microsoft and Tiger Global poured a combined $6.6 billion into OpenAI in that round alone, even as the AI startup continues to burn through an unfathomable amount of cash every quarter.
According to The Information, OpenAI spends approximately $4 billion a year to run ChatGPT, and annual training costs alone run about $3 billion. OpenAI also spends a large share of its money on employee salaries and office rent, and is expected to lose $5 billion this year. The company is hardly strapped for cash, though, having spent an estimated $20 million on buying the Chat.com domain. No wonder the artists feel shortchanged by one of the most prominent and cash-rich AI startups.
OpenAI has not officially said whether the Sora leak was authentic
Meanwhile, the artists who leaked the tool stated that they are not categorically opposed to the use of AI tools in art. They added: "What we don't agree with is how this artist program has been rolled out and how the tool is shaping up ahead of a possible public release. We are sharing this to the world in the hopes that OpenAI becomes more open, more artist friendly and supports the arts beyond PR stunts."
They also took issue with OpenAI's content approval policy, noting that "every output needs to be approved by the OpenAI team before sharing."
OpenAI has not officially acknowledged that the leak was real. In a statement, OpenAI spokesperson Niko Felix said the company has paused Sora access for the time being, but did not formally confirm whether the leak was genuine.
"Hundreds of artists in our alpha have shaped Sora's development, helping prioritize new features and safeguards. Participation is voluntary, with no obligation to provide feedback or use the tool. We've been excited to offer these artists free access and will continue supporting them through grants, events, and other programs."
External testing is common in the AI industry
This kind of external testing is certainly common in AI development and in tech more broadly. However, it is very uncommon for the technology itself to be leaked, as access to these kinds of early preview products is usually tightly controlled. The incident raises questions about OpenAI's security mechanisms at a time when the debate over AI safety is gaining momentum.
It also amplifies concerns that AI companies such as OpenAI are not paying their fair share to the creators of original content. Several media organizations have filed lawsuits against AI companies for allegedly using copyrighted material without permission to train their AI models. These companies and individuals have good reason to raise these concerns, as OpenAI and some of its competitors never obtained permission to train their AI products on their intellectual property.
Journalists and workers across any number of industries also worry that the technology could eventually take over many newsroom functions.
OpenAI faces multiple copyright lawsuits
Last year, the New York Times filed a lawsuit accusing AI giants OpenAI and Microsoft of copying millions of articles from its website without permission to train their AI systems.
Earlier this year, authors Andrea Bartz, Kirk Wallace Johnson and Charles Graeber filed a class-action lawsuit against Anthropic in a California court, accusing it of using their work without permission to train its chatbot.
Last month, the parent company of the New York Post and the Wall Street Journal filed a lawsuit against the Jeff Bezos-backed Perplexity AI, accusing the AI firm of illegally using its copyrighted news content.
OpenAI has also been accused of using YouTube videos to train its models. The company is facing a lawsuit over the matter, and its then-chief technology officer, Mira Murati, dodged questions about Sora's data sources in an interview with the WSJ. You can check out the interview below.
It is difficult to see how OpenAI can escape these copyright problems, because it clearly never obtained permission for the vast majority of the intellectual property its AI products are trained on.
It is time for AI companies to pay up
Tech companies are now looking to license the work of media organizations to train their LLMs. Earlier this year, OpenAI signed a deal with News Corp reportedly worth $250 million, giving it access to current and archived content from leading outlets such as The Wall Street Journal, The New York Post and The Daily Telegraph. OpenAI also signed a deal with Time that allows the Microsoft-backed company to access the magazine's archived content dating back a century. This may sound like a lot of data, but it is nowhere near enough to train a powerful model like GPT-4 (at least with current technology).
Companies such as Meta Platforms have trained their models on publicly posted user data, except where regulations in some regions specifically prevented them from doing so. Last year, Google also quietly updated its privacy policy to state that it can use scraped web data to train its AI models.
Most of the time, that data is used for profit without users having any option to opt out. The reason tech companies have been able to scrape so much public data without repercussions (so far) comes down to lax privacy laws.
The artists' allegations in the Sora leak case boil down to this: the giants of AI are not paying creators and artists their fair share, even as their valuations soar.