RISE Act offers AI developers a degree of protection, but critics say there aren't enough details
Civil liability law doesn't often make for great dinner-party conversation, but it can have an immense effect on the way emerging technologies such as artificial intelligence develop.
If drawn badly, liability rules can create barriers to future innovation by exposing entrepreneurs, in this case AI developers, to unnecessary legal risk. So argues US Senator Cynthia Lummis, who last week introduced the Responsible Innovation and Safe Expertise (RISE) Act of 2025.
The bill seeks to shield AI developers from civil lawsuits so that physicians, attorneys, engineers and other professionals can understand what AI can and cannot do before relying on it.
Early reactions to the RISE Act from sources contacted by Cointelegraph were mostly positive, though some criticized the bill's limited scope, its shortcomings on transparency standards and the immunity it extends to AI developers.
Most characterized the RISE Act as a work in progress, not a finished document.
Is the RISE Act a “giveaway” to AI developers?
According to Hamid Ekbia, a professor at Syracuse University’s Maxwell School of Citizenship and Public Affairs, the Lummis bill is “timely and needed.” (Lummis has called it the nation’s first targeted liability reform legislation for AI.)
But the bill tilts heavily in favor of AI developers, Ekbia told Cointelegraph. The RISE Act requires them to publicly disclose model specifications so that professionals can make informed decisions about the AI tools they choose to use, but:

“It places the bulk of the risk burden on ‘learned professionals,’ demanding of developers only ‘transparency’ in the form of technical specifications (model cards and specs) while granting them broad immunity otherwise.”
Not surprisingly, some were quick to seize on the Lummis bill as a “giveaway” to AI companies. Democratic Underground, which describes itself as a “left of center political community,” noted in one of its forums that “AI companies don’t want to be sued for their tools’ failures, and this bill, if passed, will accomplish that.”
Not all agree. “I wouldn’t go so far as to call the bill a ‘giveaway’ to AI companies,” attorney Felix Shipkevich told Cointelegraph.
Shipkevich explained that the RISE Act’s proposed immunity provision aims to shield developers from strict liability for the unpredictable behavior of large language models. From a legal perspective, that is a rational approach. He added:

“Without some form of protection, developers could face limitless exposure for outputs they have no practical way of controlling.”
The proposed legislation’s scope is fairly narrow. It focuses largely on scenarios in which professionals use AI tools while dealing with their clients or patients. A financial adviser could use an AI tool to help develop an investment strategy for a client, for example, or a radiologist could use AI software to help interpret an X-ray.
Related: Senate passes stablecoin bill amid concerns over systemic risk
The RISE Act doesn’t really address cases in which there is no professional intermediary between the AI developer and the end user, as when chatbots are used as digital companions for minors.
Such a civil liability case recently arose in Florida, where a teenager committed suicide after engaging for months with an AI chatbot. The deceased’s family said the software had been designed in a way that was not reasonably safe for minors. “Who should be held responsible for the loss of life?” asked Ekbia. Cases like these are not addressed in the proposed Senate legislation.
“There is a need for clear and unified standards so that users, developers and all stakeholders understand the rules of the road and their legal obligations,” Ryan Abbott, professor of law and health sciences at the University of Surrey School of Law, told Cointelegraph.
But that is difficult, because AI can create new kinds of potential harms, given the technology’s complexity, opacity and autonomy. The healthcare arena is going to be particularly challenging in terms of civil liability, according to Abbott, who holds both medical and law degrees.
For example, doctors have historically outperformed AI software in medical diagnoses, but evidence is emerging that in certain areas of medical practice a human-in-the-loop setup “actually achieves worse results than letting the AI do all the work,” Abbott explained. “That raises all sorts of interesting liability questions.”
Who will pay compensation if a grievous medical error is made when the physician is no longer in the loop? Will malpractice insurance cover it? Probably not.
The AI Futures Project, a nonprofit research organization, has tentatively endorsed the bill (it was consulted as the bill was being drafted). But its executive director, Daniel Kokotajlo, said that the transparency disclosures demanded of AI developers fall short.
“The public deserves to know what goals, values, agendas, biases, instructions, etc., companies are attempting to give to powerful AI systems,” Kokotajlo said. The bill does not require such transparency and therefore does not go far enough.
Also, “companies can always choose to accept liability instead of being transparent, so whenever a company wants to do something that the public or regulators wouldn’t like, they can simply opt out,” he added.
The EU’s “rights-based” approach
How does the RISE Act compare with liability provisions in the EU’s AI Act of 2023, the first comprehensive regulation of AI by a major regulator?
The EU’s position on AI liability has been in flux. An AI liability directive was first conceived in 2022, but it was withdrawn in February 2025, some say as a result of lobbying by the AI industry.
Still, EU law generally adopts a human rights-based framework. As noted in a recent UCLA Law Review article, a rights-based approach “emphasizes the empowerment of individuals,” especially end users such as patients, consumers or clients.
A risk-based approach, by contrast, builds on processes, documentation and assessment tools. It would focus more on bias detection and mitigation, for example, rather than providing affected people with concrete rights.
When Cointelegraph asked Kokotajlo whether a “risk-based” or “rights-based” approach to civil liability was more appropriate for the United States, he answered, “I think the focus should be risk-based and focused on those who create and deploy the technology.”
Related: Crypto users at risk as Trump dismantles consumer watchdog
The EU generally takes a more proactive approach to such matters, Shipkevich added. “Their laws require AI developers to show upfront that they are following safety and transparency rules.”
Clear standards are needed
The Lummis bill will likely require some amendments before it is enacted into law (if it ever is).
“I view the RISE Act positively as long as this proposed legislation is seen as a starting point,” said Shipkevich. “It is reasonable, after all, to provide some protection to developers who are not acting negligently and have no control over how their models are used downstream.” He added:

“If this bill evolves to include real transparency requirements and risk-management obligations, it could lay the groundwork for a balanced approach.”
According to Justin Bullock, vice president of Americans for Responsible Innovation (ARI), “The RISE Act puts forward some strong ideas, including federal transparency guidance, a safe harbor with limited scope and clear rules around liability for professional adopters of AI,” though ARI has not endorsed the legislation.
But Bullock, too, had concerns about transparency and disclosures, namely ensuring that the required transparency evaluations are effective. He told Cointelegraph:

“Publishing model cards without robust third-party auditing and risk assessments may give a false sense of security.”
Still, all in all, the Lummis bill “is a constructive first step in the conversation over what federal AI transparency requirements should look like,” he said.
Assuming the legislation is passed and signed into law, it would take effect on Dec. 1, 2025.
Magazine: Bitcoin’s invisible tug-of-war between suits and cypherpunks