
Will AI replace your psychiatrist soon?

“Hello. Please sit down. So… how are you doing since last time?”

What if, in a few years, this harmless question were no longer asked by a flesh-and-blood psychiatrist, but by an AI, an artificial intelligence? With the recent return of psychiatry to public debate, notably because of the health crisis, the idea of mental health monitoring systems that integrate AI has resurfaced.

Let’s be honest, the idea is far from new: the first traces of a psychiatric chatbot, called ELIZA, date back to 1966. In recent decades, advances in artificial intelligence have enabled the rise of chatbots, “robot therapists” and other systems that assess a person’s mental state from their voice.

Today, there are more than twenty robot therapists backed by scientific studies in psychiatry. Some of this work suggests that patients can develop a genuine therapeutic relationship with these technologies, and even that some of them feel more at ease with a chatbot than with a human psychiatrist.

The ambitions are therefore high… especially since, unlike their human counterparts, these digital “professionals” promise objective, reproducible and judgment-free decisions – and availability at any time.

It should be noted, however, that although the name “robot therapist” conjures up the image of a physical robot, most are text-based, at best supplemented with animated videos. Beyond this lack of physical presence, which matters to most patients, many fail to recognize all the difficulties experienced by the people they talk with. How, then, can they provide appropriate responses, such as referral to a dedicated helpline?

Diagnosis and the psychiatrist’s internal model

During an interview with a patient, a psychiatrist can pick up important signals that reveal suicidal thoughts or domestic violence – signals that current chatbots can miss.

Why does the psychiatrist still outperform his electronic counterpart? When this specialist announces, “You have attention deficit disorder” or “Your daughter has anorexia nervosa”, the process that led to these diagnoses depends on his “internal model”: a set of explicit or implicit mental processes that allow him to make a diagnosis.

Just as nature inspires engineers to design efficient systems, it may be worth analyzing what goes on in a psychiatrist’s mind when he makes a diagnosis (how he builds and uses his internal model), in order to better train the AI meant to imitate him… But how similar are the human “internal model” and that of the program?

This is what we asked ourselves in our recent article in the journal Frontiers in Psychiatry.

Comparison between man and machine

Based on previous work on diagnostic reasoning in psychiatry, we drew a comparison between the psychiatrist’s internal model and that of an AI. Making a diagnosis involves three main steps (a minimal code sketch of the AI side is given after the third step):

Collection and organization of information. During a conversation with a patient, the psychiatrist collects a great deal of information (from the patient’s medical history, behavior, statements, etc.), which he then filters according to its relevance. This information can then be linked to existing profiles with similar characteristics.

AI systems do much the same thing: based on the data they were trained on, they extract from their exchange with the patient characteristics that they select and organize according to their importance (“feature selection”). They can then group them into profiles and thus arrive at a diagnosis.

Model building. During their medical studies, and then throughout their careers (clinical practice, reading case reports, etc.), psychiatrists make diagnoses whose outcomes they come to know. This continuous learning reinforces, in their model, the links between the decisions they make and their consequences.

Here again, AI models are trained in a similar way: whether during their initial training or later updates, they continually reinforce, in their internal model, the relationships between the descriptors extracted from their databases and the diagnostic outcome. These databases can be very large, and may even contain more cases than a clinician will see in an entire career.

Model use. Once the two previous steps are complete, the psychiatrist’s internal model is ready to be used with new patients. Various external factors can influence how he does so, such as his salary or his workload – factors that find their equivalents in the cost of the equipment and the time needed to train or use the AI.
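To make this parallel concrete, here is a minimal sketch of what the three steps can look like on the AI side, written in Python with scikit-learn. The descriptors, the toy data and the choice of a logistic regression are purely illustrative assumptions, not the actual systems discussed above.

```python
# Minimal sketch of the three steps described above, using scikit-learn.
# The descriptors, data and choice of model are hypothetical illustrations.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1 - collection and organization: each past case is reduced to a
# vector of coded descriptors (e.g. sleep score, reported anxiety,
# attention score, appetite change), plus the diagnosis made at the time.
X_cases = np.array([
    [2, 8, 3, 1],
    [7, 2, 8, 0],
    [3, 7, 2, 1],
    [8, 1, 9, 0],
    [6, 3, 7, 0],
    [2, 9, 2, 1],
])
y_diagnosis = np.array([1, 0, 1, 0, 0, 1])  # 1 = diagnosis A, 0 = diagnosis B

# Step 2 - model building: feature selection keeps the most informative
# descriptors, then a classifier learns the link between descriptors and
# past diagnoses (the machine analogue of accumulated clinical experience).
model = make_pipeline(
    SelectKBest(f_classif, k=2),   # the "feature selection" mentioned above
    LogisticRegression(),
)
model.fit(X_cases, y_diagnosis)

# Step 3 - model use: the trained model is applied to a new patient.
new_patient = np.array([[3, 6, 4, 1]])
print(model.predict(new_patient))        # predicted diagnosis label
print(model.predict_proba(new_patient))  # and the associated confidence
```

Real systems are of course far more sophisticated, but the three stages – organizing descriptors, learning the link between descriptors and diagnosis, and applying the model to a new case – remain the same.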

As noted above, it is tempting to think that the psychiatrist, in his professional practice, is influenced by a whole set of subjective, variable and uncertain factors: the quality of his training, his emotional state, his morning coffee, etc. And that an AI, being a “machine”, would be free of all these human whims… This is a mistake! AI, too, carries a significant share of subjectivity; it is simply less immediately visible.

Is AI really neutral and objective?

Indeed, every AI has been designed by a human engineer. So if we want to compare the thought processes of the psychiatrist (and thus the construction and use of his internal model) with those of an AI, we must take into account the influence of the coder who created it. The coder has an internal model of his own – in this case not for linking clinical data to a diagnosis, but for matching a type of AI to the problem to be automated. And here too, many technical decisions rest on the human factor (which architecture, which classification algorithm, etc.).

This coder’s internal model is necessarily influenced by the same kinds of factors as the psychiatrist’s: his experience, the quality of his training, his salary, the time available to write his code, his morning coffee, etc. All of this affects the AI’s design parameters and therefore, indirectly, its decision-making – that is, the diagnoses it will produce.

The other subjectivity that affects the AI’s internal model comes from the databases on which it is trained. These databases are designed, collected and annotated by one or more people, each with their own subjectivity – a subjectivity that plays a role in the choice of the types of data collected, the equipment used, the measure chosen to annotate the database, and so on.

While AIs appear to be objective, they actually replicate the biases present in the databases they are trained on.
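A toy example makes this visible. In the sketch below (hypothetical data, again using scikit-learn), the annotators of the training database have systematically labelled one group of patients as ill; the model faithfully learns that association and applies it to new patients with identical clinical scores.

```python
# Toy illustration (hypothetical data): if the annotated database
# systematically links a descriptor (here, a group membership flag) to a
# diagnosis, the trained model reproduces that association - the bias of
# the database becomes the "objective" output of the AI.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Column 0: a clinical score; column 1: a group flag that is, by
# assumption, irrelevant to the disorder but skewed in the annotations.
X = np.array([[5, 1], [6, 1], [5, 0], [6, 0], [4, 1], [7, 0]])
y = np.array([1, 1, 0, 0, 1, 0])  # annotators labelled group 1 as "ill"

clf = LogisticRegression().fit(X, y)

# Two patients with the same clinical score but different group flags
# receive different predictions: the model has learned the annotation bias.
print(clf.predict([[5, 1], [5, 0]]))  # typically [1, 0]
```

Nothing in the algorithm is “wrong” here: it simply optimizes on what it was given, which is precisely how the subjectivity of those who built the database ends up in the diagnosis.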

Subjectivity affects not only the human psychiatrist, but also therapeutic AIs, through the choices made by the engineers and programmers who designed them. Vincent Martin, Author provided

Limits of AI in Psychiatry

It follows from this comparison that AI is not free of subjective factors and, for that reason in particular, is not yet ready to replace a “real” psychiatrist. The latter also has relational and empathic qualities that allow him to adapt the use of his model to the reality he encounters… something AI still struggles to do.

The psychiatrist thus has flexibility in how he gathers information during his clinical interview, which gives him access to information on very different time scales: he can, for example, question a patient about a symptom that occurred weeks earlier, or adjust the exchange in real time according to the answers he receives. Current AI remains confined to a predetermined and therefore rigid scheme.

Another strong limitation of these AIs is their lack of physical embodiment, a very important factor in psychiatry. Indeed, every clinical situation rests on an encounter between two people, and that encounter involves verbal and non-verbal communication: gestures, the position of the bodies in space, the reading of emotions on a face, the recognition of non-verbal social signals… In other words, the physical presence of the psychiatrist constitutes an important part of the patient–caregiver relationship, which itself constitutes an important part of care.

Any progress by AI in this area would depend on advances in robotics, whereas the psychiatrist already embodies his internal model.

Should we therefore abandon the idea of a virtual psychiatrist? The comparison between the reasoning of the psychiatrist and that of AI is nevertheless interesting from a cross-teaching perspective. A good understanding of how psychiatrists reason makes it possible to better account for the factors at play in the construction and use of AI in clinical practice. This comparison also highlights the fact that the coder brings his own share of subjectivity into AI algorithms… which are therefore unable to keep the promises made on their behalf.

Only such an analysis will allow a genuinely interdisciplinary practice, at the crossroads of AI and medicine, to develop in the future for the benefit of as many people as possible.

_______

Vincent Martin, Doctor of Computer Science, University of Bordeaux, and Christophe Gauld, child psychiatrist and sleep physician, University of Paris 1 Panthéon-Sorbonne.

The original version of this article was published on The Conversation.
