Please take a seat. The psych”AI”trist will meet you in a moment…

Sonu Mariam George
8 min readMar 2, 2022
Image Credit: Unsplash.com

Over the years, there has been ample evidence on our wicked urge to replace humans with machines and machines with better machines, which eventually has led to the emergence of a new field in technology: Artificial Intelligence (AI) in Psychiatry. But what is “AI in Psychiatry” trying to achieve?

According to the “Royal College of Psychiatric Census 2021”, it is evident that about 568 out of 5317 consultant psychiatric posts are yet to be filled. The ratio of consultant psychiatrist to general public is 1 to every 12,567, reveals official NHS workforce data. There is a clear indication of shortages of psychiatrists to adequately cater to the mental health needs in the UK. The mental health sector is facing shortages of experts in various developed countries and let’s not even talk about the developing ones. These shortages coupled with mental health stigma is a recipe for disaster.

To understand this topic better, let’s take a look at two definitions:

  1. Psychiatrist : A psychiatrist is a physician who specializes in psychiatry, the branch of medicine devoted to the diagnosis, prevention, study, and treatment of mental disorders.
  2. 2. AI in Psychiatry: AI in psychiatry is a general term that implies the use of computerized techniques and algorithms for the diagnosis, prevention, and treatment of mental illnesses.

Does it ring a bell? Here we are stuck with two sides of a rope in a tug of war. On one side we are on the constant pull to bring more psychiatrists to attend to the ever-increasing number of mental health cases and on the other side the pressure to mimic a psychiatrist through a computer program.

Hereby the aim of using AI in psychiatry is to completely replace a psychiatrist and be able to train a machine to successfully diagnose and treat patients with mental health disorders. If we are so keen enough to give the designation of a psychiatrist to a computer program in the same way a country was focused on giving citizenship to a robot woman rather than improving lives of women who lived there since generations, then this implies that that the basic principles and ethics which come associated with the profession also apply.

To understand more on the ethical aspects, I will now highlight the principles of a psychiatrist and how a trained AI model may or may not adhere to these principles. The below quoted words in each point is the taken from the “Code of Ethics for Good Psychiatric Practice” according to the Royal College of Psychiatrists.

  1. “Psychiatrists shall respect the essential humanity and dignity of every patient” and “…. reduce the effects of stigma and discrimination.”

The effect of bias in AI has been quite a controversial topic for some years. The bias in AI not only comes from the lack of diversity in the training data but also on how the model was trained and optimized. The core of the problem however does not lie in the model or data itself but it lies in the society from which we draw our data and deductions from. The lack of representation and diversity in all areas of economy boils down to the data obtained. For e.g., suppose we want to understand the suicide rates among people from the executive positions of a company, it will be quite evident that the gender imbalance in these positions will certainly affect the data obtained. These imbalances may lead to discrimination and bias in the form of gender, colour, under represented community, LGBTQ, backward classes or highly orthodox societies during treatment. For e.g. A divorce may not be considered something negative in some cultures but some cultures treat it with the label of a personal flaw of either or both partners. Therefore, it is the duty of the psychiatrist to completely understand the culture and background from which his/her patient belongs to. Due to various biases that already exist in AI models it is highly unlikely that an AI psychiatrist may be able to provide the best healthcare solution to all groups.

2. “Psychiatrists shall maintain the confidentiality of patients and their families. “

Seeking consent may seem like a way to tackle privacy concerns in using patient data, however the lack of transparency in various algorithms of how and to what extent the data is used is neither very clear to the general public nor the developers of such tools. The data fed into these AI models tend to be huge and training of these models happen mostly on dedicated high-performance systems and servers. Most of these setups need to be lend from third party providers, which will give easy access to the data of patients who are completely unaware of the extent of which their data is used. Although psychiatrists are allowed to discuss patient details to other psychiatrists or family members for the welfare of patient treatment after seeking consent from patient/carer, the extent of data circulation is very limited compared to storing data in databases in servers located in different geographic locations. Some companies sourcing the AI software for treatment could also sell the data to other third party companies for profits. Other negative side is that this data could be used by certain authoritarian governments to track people with severe mental health issues to segregate them from normal people and for medical experiments.

3. “Psychiatrists shall not misuse their professional knowledge and skills, whether for personal gain or to cause harm to others.” And “Psychiatrists shall provide the best attainable psychiatric care for their patients.”

Humans have the capability to generate realistic scenarios or consequences based on their decision to choose the best option. Some of these decisions may not be rewarding in the near future but may prove best in the course of months or years. When a psychiatrist proposes the best treatment for a patient, this decision will take into account various health aspects of the patient and may be the most rewarding treatment over months/years to reach the desired result. The basic goal of an AI algorithm does not compliment this need but concentrates more on increasing accuracy and success rate of outputs. Therefore, a model-based psychiatrist may focus on achieving the most rewarding treatment at that instance rather than providing the best care tailored to each individual over a period of time. An AI algorithm may suggest the best and state of the art method to push its success rate to a 100%, which may be fatal for some. Let’s take a look at a type of AI model training mechanism called reinforcement learning. In reinforcement learning the model takes trial and error decisions/ future steps and based on the outcome of those steps it will be either rewarded or penalized. The goal of such an algorithm is to maximize the reward by taking the best steps without giving focus on the final outcome. A good example was a model which was trained to complete a race in a computer game but ended up getting maximum rewards and never tried to complete the race (link in references below).

4. “Psychiatrists shall continue to develop, maintain and share their professional knowledge and skills with medical colleagues, trainees and students, as well as with other relevant health professionals and patients and their families. “

Psychiatry is a field that is closely related to the human mind and these minds evolve and change over time along with changes in society and situations. For e.g., take the pandemic into consideration, with people feeling lonely or depressed during lockdown due to loss of jobs, inability to visit families, loss of loved ones etc. This kind of change can never be accurately caught by an algorithm nor aid in teaching/ inventing new methods in a short span of time due to less data captured for training the model. Additionally, study on certain medical conditions is constantly evolving and related research activities is currently ongoing and may not be completely studied or proven. The AI algorithm could take in such ongoing research studies/activities and misunderstand those are proven solutions to provide suggestions and treatment to patients leading to further complications. Moreover, the creators and developers of many advanced techniques in AI know how to implement these models but do not have the complete clarity in how these algorithms work. For e.g., two AI chatbots employed at Facebook were directed to prioritize communicating in English after they created their own shorthand language which was not clear to the developers. Neural networks is the famous and constantly evolving area of AI that have numbers associated with them known as weights which are changed with each learning examples till it can correctly interpret data. But, can these algorithms teach what they learn? The way most models really learn or acquire the smart understanding is not yet very known to researchers in AI. Thus, this abstract knowledge is of no use to teach the next generation of therapists and doctors. This will add more trouble giving less prior knowledge to future therapists and demotivating the number of people who would be planning to pursue this field.

5. “Psychiatrists shall provide the best attainable psychiatric care for their patients. “and “Psychiatrists shall ensure patients and their carers can make the best available choices about treatment.”

Psychiatrist need to let the patient and carers decide along with doctor inputs about the best care all will collectively think for a patient rather than the best treatment available to ensure success rate. Here the main point is that psychiatrist should make available best attainable and not best available choices. With the AI model stressing on increasing the success rate and accuracy of algorithm this will end up completely avoiding treatment tailored to individual needs.

6. “Psychiatrists shall comply with ethical principles embodied in national and international guidelines governing research. “

Psychiatrists in each country is made aware of the mental health policies for the country, with different laws in criminal psychiatry and consent seeking. Therefore, it is expected that these models should also be able to consider health policies of different countries with equal importance. This is where power and corruption can creep in by trying to impose the health policies and political facts on patients seeking therapy. There could be instances where the AI psychiatrists could be easily be used/manipulated into perform certain brain-washing activities for the benefit of a country or an organization. The Mental Capacity Act 2005 was introduced to provide the ability for individuals above the age of 18 to choose their treatment or refuse care. An AI model which is trying to maximize its accuracy may try to manipulate the patient to submit to care.

I am not ranting on the basis of an utter disgust of using AI in psychiatry but the key problem is by using it against the main idea on why AI was introduced in the first place: to aid humans. The problem lies when it is used as a replacement rather than an accomplice. Moreover, the stigma around mental health and the lack of exposure to psychiatry and psychology at early stages of an individual’s career can be some reasons of why many are not interested to pursue this field. Introducing these topics through literature in schools and teaching basic psychology can be ways to inject these topics into our system simply by treating this subject like one would treat maths and physics. Psychiatry is definitely not voted a “hot” profession in this era, however by instilling interests in future generations, awarding more scholarships and reducing stigma around mental health can help turn the tables around.

References:

https://www.rcpsych.ac.uk/news-and-features/latest-news/detail/2021/10/06/workforceshortages-in-mental-health-cause-painfully-long-waits-for-treatment

https://www.rcpsych.ac.uk/docs/default-source/improving-care/better-mh-policy/collegereports/college-reportcr186.pdf?sfvrsn=15f49e84_2#:~:text=1%20Psychiatrists%20shall%20respect%20the,of%20patients %20and%20their%20families.

https://youtu.be/tlOIHko8ySg https://medium.com/inspire-the-mind/the-shortage-of-psychiatrists-a-students-view-2fb66a24912e

--

--

Sonu Mariam George

MSc Artificial Intelligence and Machine Learning at University of Birmingham | Computer Vision |ML | NLP |Forecasting| Software Engineer(3yrs) at Synamedia.