Are AI therapists safe? 5 risks you should know
Dartmouth's AI therapist has completed an RCT, but what does this mean for the future?
Background
Since the days of ELIZA, mankind has sought to understand itself through the silicon eyes of a soulless machine. Now, in 2025, it seems that reality is finally here. Two pieces published last month changed how I see computer-assisted therapy. With my background in data and technology, however, they also fueled my fears. These same ideas were central to my cyberpunk science fiction book, Above the Dark Waters, which follows a man who uses AI therapy to improve his brain, sending humanity snowballing toward the singularity.
The first piece is a study from Dartmouth College called "Randomized Trial of a Generative AI Chatbot for Mental Health Treatment," which was published last month. They had a sample size of roughly 200, divided into control and treatment groups of about 100 each. The results were very promising, with the treatment group showing a decrease in symptoms across three types of disorders.
Overall, the results of the Therabot RCT are very promising. We found high engagement and acceptability of the intervention, as well as reductions in symptoms, while maintaining a therapeutic alliance similar to that between human therapists and their patients.
The second is the Harvard Business Review article entitled "How People Are Really Using Gen AI," itself a follow-up to a 2024 article. I cropped the graph below, but as you can see, the top three use cases in 2025 all relate to self-help, with therapy/companionship in the top spot!
Whether you think this is heresy or don't care in the slightest, AI therapists are coming! I think there are many positives to this technology, but in this article, I want to cover only the downsides.
5 potential problems with AI therapy
This is just my initial list. I am sure there are many more, but I will briefly cover the following areas of concern:
No. 5: Risk of addiction
This is the most obvious risk and (fortunately) the easiest to control. Many of these apps promote 24/7 access as a key selling point, but I think that is actually a negative. Why wait two weeks or a month for therapy when you can have it in your hands right now? In fact, the Dartmouth study included a nice heat map of usage. As you can see, there are many user-days with more than 100 messages, and some users used the app almost every day of the study.
That is success in the sense of getting people to use the product, but is it success in the sense that a participant no longer needs therapy? I would also love to see this usage broken down by time of day. Do people wake up in the middle of the night, use the therapist, and then fail to get back to sleep? We all know that one way to help anxiety is to get enough rest. Fortunately, these kinds of guardrails are easy to implement technically, as the sketch below shows. There is, however, a strong reason a private company would decline to use them: the goal of therapy is to one day stop needing therapy!
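As a concrete illustration, here is a minimal sketch of such a guardrail in Python. The message cap, the quiet hours, and the function name are all hypothetical assumptions of mine, not taken from the Dartmouth study or any real product.

```python
from datetime import datetime

# Hypothetical guardrail values, for illustration only.
DAILY_MESSAGE_CAP = 50     # assumed per-day message limit
QUIET_HOURS = range(1, 6)  # 1:00-5:59 AM: discourage 3 AM sessions

def allow_message(messages_today: int, now: datetime) -> tuple[bool, str]:
    """Decide whether to accept another message from a user."""
    if now.hour in QUIET_HOURS:
        return False, "It's late. Rest is part of treatment; let's pick this up tomorrow."
    if messages_today >= DAILY_MESSAGE_CAP:
        return False, "You've reached today's session limit. See you tomorrow."
    return True, ""

# Example: a heavy user messaging at 3:12 AM is gently turned away.
ok, reply = allow_message(messages_today=103, now=datetime(2025, 5, 1, 3, 12))
print(ok, reply)
```

The point is not the specific numbers; it is that a few lines of logic could enforce rest. The open question is whether a growth-driven company would ever ship them.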
No. 4: Data privacy risks
This one is also obvious. If you cannot trust the one you are talking to, you will not say anything of value. And if you self-censor, you may never go down a fruitful line of questioning and land on a productive or nourishing answer. Even a hint of a data breach can destroy any faith you have in the provider, and with it, any potential benefit from the AI therapist. This is not a hypothetical scenario; it is already reality. BetterHelp ended up selling data to Meta, according to this FTC complaint.
To monetize these consumers' health information, Respondent handed it over to numerous third-party advertising platforms, including Facebook, Pinterest, Snapchat, and Criteo, often permitting these companies to use the information for their own research and product development as well.
Now, this is not exclusive to digital therapists; it can happen with human therapists as well. A therapist breaching confidentiality in a small town would be devastating to the patient, but the scope of the breach would be small. What would the patient do? Simply never return to that therapist. A major data breach, by contrast, could have a chilling effect on the entire AI therapy ecosystem.
No. 3: Corporate or government control
Perhaps these AI therapists will one day be open source or run by nonprofits; whatever our future hopes, most today are run by startups. And if there is one thing a startup needs, it is cash! They have to make it to the next funding round. What happens if the company sells even metadata to advertisers? Would you trust a company whose therapist is paid to push pills, teas, creams, retreats, or books?
This leads me to a topic that bumps up against point No. 4: the weaponization of mental health data. There are cases of people being locked out of their money by banks for dubious reasons. Imagine if something you told your therapist landed you on the no-fly list.
Governments can also subpoena companies for data. The United States has even subpoenaed companies in other countries, as in this article where US lawmakers subpoenaed Chinese telecom giants. The UK has put people in prison for their posts on social media.
There are also attempts to exert control over what the algorithms surface. This is called data poisoning, and it can happen without the host company ever realizing it. Imagine someone creating hundreds of accounts that constantly push the model away from the "correct" answer by responding over and over with "well, then I will just end it all," or something similar. Would the AI therapist then steer people toward some "new thing" believed to ward off the "bad thing"? Would it suggest a final exit to struggling people, like MAiD in Canada?
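To make the mechanism concrete, here is a toy sketch of one naive defense: screening crowd feedback for coordinated repetition before it is folded back into training data. The data shapes, field names, and threshold are invented for illustration; real fine-tuning pipelines are far more involved.

```python
from collections import Counter

def filter_feedback(feedback: list[dict], max_dupes: int = 5) -> list[dict]:
    """Drop feedback text that recurs suspiciously often across accounts.

    Each item looks like {"account": str, "text": str}. Hundreds of
    accounts repeating the same phrase is a classic poisoning signal.
    """
    counts = Counter(item["text"].strip().lower() for item in feedback)
    return [f for f in feedback if counts[f["text"].strip().lower()] <= max_dupes]

# Example: 300 sock-puppet accounts push one phrase; one organic comment survives.
poisoned = [{"account": f"bot{i}", "text": "Well, then I will just end it all"}
            for i in range(300)]
organic = [{"account": "u1", "text": "That reframing exercise really helped today"}]
print(len(filter_feedback(poisoned + organic)))  # -> 1
```

Of course, a patient attacker simply varies the wording, which is exactly why this can happen "without the host company ever realizing it."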
No. 2: Dehumanization
"Am I so terrible that even a human therapist won't take me?" Could being pushed toward a technological solution itself hurt some people?
People may simply be searching for someone, anyone, to listen to them in a non-judgmental way. And perhaps we are in this hole precisely because of technology. Will AI companions like Replika really help humanity?
Any developers out there? What happens when you apply another technological solution to an existing technological problem? It often gets worse. Messy. Clunky. I have two other problems with the AI therapist: it is verbose, and it always tries to solve the problem. Sometimes the best thing a therapist can do is wait a few long moments and let the person process and cry... the silence and the feeling of being heard... the space before the next prompt... that is what is necessary to make a real connection. LLMs, however, are insatiable for your next prompt.
Wait, wait, you may say, we can program that pause in. You certainly can, and I think it would work; a toy sketch below shows how simple it would be. But then the thing becomes an even more uncanny imitation of a human than it already is. And then the human user grows tired of other people, because no one is as good at conversation, emotion, and sympathy as their service bot. Imagine for a moment replacing "therapist" with "sexbot," and the algorithms becoming so good that no person can satisfy you. Would that be a good thing for society? No.
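For the skeptical developer, a toy version of that programmed pause might look like this. The cue list, the delay, and the canned reply are all invented; the point is only how mechanically "holding space" can be faked.

```python
import time

GRIEF_CUES = ("crying", "i can't", "i miss", "it hurts")  # hypothetical triggers

def respond(user_message: str, generate_reply) -> str:
    """Hold space instead of immediately problem-solving when the moment calls for it."""
    if any(cue in user_message.lower() for cue in GRIEF_CUES):
        time.sleep(8)        # sit in silence for a few long moments
        return "I'm here."   # acknowledge; do not fix
    return generate_reply(user_message)

# Example, with a lambda standing in for the model call:
print(respond("It hurts so much since she left", lambda m: "Here are 5 tips..."))
```

And that is exactly the worry: the pause would "work," and the imitation would get one notch more convincing.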
The deepest issue may be denaturization rather than dehumanization. We are not built for the 24/7 infinite-scroll world we have constructed around ourselves. It is alien to us. "Go outside and touch grass" is common advice today, but it is rooted in some truth: nature itself can be therapeutic. Shinrin-yoku, or "forest bathing," may deliver better results for free. Even watching birds in the trees or bees in the flowers can help with anxiety. Could the hush of silent snow over the grass heal one's mind better than any human could?
No. 1: Algorithmic bias

Algorithmic bias is defined as systematic and repeatable errors in a computer system that create "unfair" outcomes, such as "privileging" one category over another in ways different from the intended function of the algorithm. Ultimately, this technology shares the same problems as algorithmic policing; we should have learned from that over the past five years.
One solution is transparency, but it is not clear whether LLMs themselves actually "know" how they arrive at their answers.
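The "privileging one category over another" part of that definition is at least measurable. Here is a minimal sketch under assumed data: the groups, the referral outcomes, and the four-fifths threshold are illustrative, not drawn from any real therapy product.

```python
def selection_rate(outcomes: list[int]) -> float:
    """Fraction of positive outcomes (e.g., 'escalated to a human therapist')."""
    return sum(outcomes) / len(outcomes)

# Hypothetical referral decisions (1 = referred to a human therapist).
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # e.g., English-speaking urban users
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # e.g., rural users

rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"rates: {rate_a:.2f} vs {rate_b:.2f}, ratio: {ratio:.2f}")
# The "four-fifths rule" of thumb borrowed from US employment law:
if ratio < 0.8:
    print("Potential disparate impact: one group is being privileged over the other.")
```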
If you are rural or poor, you are more likely to use this technology, since access to a traditional therapist may be limited. If you are young and don't have (or don't want) your parents' help, that too will likely push you toward online solutions. It would be ideal to train these systems on all kinds of people, but the bulk of the data comes from the English-speaking tech community.
The problem is that many of these systems are not trained on the output of people like you. As a result, people may be pushed toward the most generic solution, one that works most of the time but ends up truly fitting no one. The opposite problem arises when the model is trained entirely on one narrow slice (people exactly like you): then you get a black-box algorithm and an echo chamber.
Conclusion
I think the promise of AI therapy is real, but so are its risks (addiction, surveillance, and bias). Ultimately, I am excited to see how the promise of AI therapy unfolds, but it helps to keep a sharp eye on it, because reality will always be stranger than fiction.
Also, for the record, I'm sure the people building AI therapists are great people trying to do "the right thing." I hope no one in this field sees this article as an attack. I simply want us all to think about the second- and third-order effects of the things we build. Will this be a tool of healing, or just another technical solution to a technical problem, standing on the already-shaky foundation of our technological society?
Am I just a big wet blanket? Let me know. And if you are looking for a science fiction novel that explores the ethics of AI therapy and the dystopian possibilities of the internet, please check out my book, Above the Dark Waters.