Whether it is an unhealthy YouTube fascination with Boston Dynamics’ robot dogs opening doors, Amazon’s Alexa on your coffee table, or Elon Musk’s ominous March 13th statement, “Mark my words — AI is far more dangerous than nukes,” chances are the growing conversation about artificial intelligence has intruded in some way into your daily life. At a moment when everyone is speculating on life with AI, the world of academia has been offered a rare opportunity to step out of its ivory tower and offer insight into what such possible futures could look like, given human limitations and the technology’s seemingly limitless potential. According to Allison Pugh, a professor and author from the University of Virginia who spoke recently at the New School, the market-cornering power of automation and AI is poised to move beyond its analytical sphere of factory lines and day-trading algorithms into more intimate domains such as therapy, education, and nursing, where emotional connection and trust are central.
Pugh identifies this new frontier for automation as the realm of connective labor, a term she introduces as distinct from the hashtag-trending term emotional labor. Where the latter has been used professionally and colloquially to denote the individual processes and costs of regulating one’s own feelings in a working environment, Pugh defines the former as a clinical relation between practitioners and recipients that achieves productive outcomes by putting those emotions to work through a working theory of mind. Theory of mind in this instance refers to the ability to attribute emotions and mental states to oneself and to others, informing one’s decision-making toward specific occupational goals. It is in the pursuit of operationalizing these unique features of connective labor that a new technology market has emerged, epitomized in Pugh’s choice examples of automated therapy apps and nursing robotics. However, although recent research has shown that AI may offer many benefits in controlling for unconscious bias, enabling greater patient disclosure, and improving cost accessibility for lower-income communities, Pugh questions whether something intangibly essential will be lost in the continued co-optation of connective labor by cheap and efficient AI imitation. In other words, where the growing extinction of personnel in a specialization like radiology appears to be a natural progression of evolving information technology, can the same be said, or even desired, for fields like psychotherapy?
It is with my own interest in the future of automation for psychotherapy that I believe Pugh’s analysis opens a can of worms the clinical field prefers to leave sealed with regard to its future as an autonomous profession. In particular, given the growing tech-industry interest in operationalizing the procedures of psychotherapy via self-guided apps, online interfacing, and responsive AI feedback, difficult clinical reflections are brought to bear on which salient features of the therapeutic relationship do the real legwork in treating the needs of patients. Is an in-person therapist necessary for facilitating sustained therapeutic outcomes? Is there viability for automated therapy in conjunction with, and not in substitution of, in-person therapy? Is the therapeutic capacity of being known by another knower (intersubjectivity) something that can be automated, or is it the sacrosanct realm of the in-person psychotherapist’s connective labor? Such questions are intimidating in the world of modern psychology, which has continually invested in presenting itself as scientific and operational while simultaneously struggling with a professional identity of being more of a craft than a science in its therapeutic practices.
Indirectly, Pugh’s lecture offered some insight into these dilemmas for clinical psychology with her assertion that while automated and self-guided therapy apps appear to offer cathartic outlets for the stresses and anxieties we consciously keep bottled up, they may not help in uncovering the secrets or traumas we unconsciously keep from ourselves. This observation seems particularly poignant, ostensibly offering a space for automation in psychotherapy in addressing particular types of mental health issues revolving around self-disclosure and mindfulness, while still leaving a broad specialized realm for in-person therapists to address mental health issues that require dialogical unpacking. Such a specialized space for in-person therapy would also appear pertinent to high stakes therapeutic situations where suicidality or homicidal thoughts present themselves, for which the outsourcing of psychotherapy to AI or automation would seem fraught.
However, despite the possibility of maintaining this specialized realm for in-person therapy, on the surface there seem to be many therapeutic modalities readily adaptable to automation given a sufficiently sophisticated AI. For example, cognitive-behavioral therapies (CBT) implement methods for altering thought distortions and maladaptive behaviors, all of which are operationalized in training manuals that practitioners are expected to adhere to. Even with current technology, these operationalized techniques have been steadily accommodated within plug-and-play machine learning feedback models, wherein a patient simply inputs their behaviors, thoughts, and feelings and a program offers real-time alternative interpretations and tension-reduction strategies tailored to each patient’s situation. Goal tracking, another common CBT technique, has already been pervasively utilized by many apps with a degree of empirical success in helping people struggling with substance use, one of which is the FDA-cleared app reSET from Pear Therapeutics.
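To make concrete what the simplest version of such a feedback model might look like, here is a toy sketch in Python. The keyword table and reframing prompts are my own hypothetical inventions for illustration; real products such as reSET rest on clinically validated content and far more sophisticated models than substring matching:

```python
# Toy sketch of a rule-based CBT "thought record" responder.
# The keyword table and prompts below are hypothetical, illustrative only:
# real apps use clinically validated content, not naive substring rules.
DISTORTION_RULES = {
    "always": ("overgeneralization", "Can you recall a time when this was not true?"),
    "never": ("overgeneralization", "Is 'never' literally accurate, or does it just feel that way?"),
    "should": ("should statement", "What would you tell a friend who held this expectation?"),
    "everyone": ("mind reading", "What evidence do you have about what others actually think?"),
}

def reframe(thought: str):
    """Return (distortion_label, reframing_prompt) pairs whose keyword appears in the thought."""
    lowered = thought.lower()
    return [rule for keyword, rule in DISTORTION_RULES.items() if keyword in lowered]

# A logged thought can trigger several prompts at once:
for label, prompt in reframe("I always mess things up and everyone notices"):
    print(f"{label}: {prompt}")
```

Even this crude lookup captures the basic loop the apps operationalize: the patient supplies the material, and the program hands back an alternative framing in real time.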
Other, less obvious modalities, even in psychoanalysis, could conceivably be adapted to automation, given some psychoanalysts’ preference for free-association techniques in which therapists avoid interjecting meaning and interpretation to facilitate therapeutic outcomes. In such techniques, therapist enactments, interpretations, or projections are replaced by an open-ended prodding of the patient’s statements, slips of the tongue, and phantasies in order to encourage self-discovery and spontaneous change via the patient’s own narrative, untarnished by the therapist. Such analytic listening techniques could even feasibly be accommodated by current machine learning technologies that take context- and dialect-specific cues from patient speech and offer idiosyncratic, open-ended questions to help the patient explore and reflect more deeply on their issues via an intrapersonal dialogue prompted by automated responses. There is even one such program, ELIZA, developed by Joseph Weizenbaum in the mid-1960s, that presents non-directive question scripts based on Rogerian psychotherapy in response to user text inputs.
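ELIZA’s mechanism is simple enough to sketch: match a pattern in the user’s statement, reflect first-person words into second person, and hand the user’s own fragment back as a non-directive question. The miniature script below is my own reduction of that idea, not Weizenbaum’s original (which used a much richer rule set):

```python
import re

# Minimal ELIZA-style responder (an illustrative reduction, not Weizenbaum's script):
# match a pattern, reflect first-person words to second person, and return the
# patient's own fragment as a non-directive, Rogerian-flavored question.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for their second-person counterparts."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(statement: str) -> str:
    """Build a prompt from the first matching rule, else fall back to an open invitation."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default non-directive fallback

print(respond("I feel anxious about my work"))  # prints "Why do you feel anxious about your work?"
```

Nothing here understands anything; the therapeutic-seeming effect comes entirely from turning the patient’s words back into an invitation to elaborate, which is precisely why users in the sixties nonetheless confided in it.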
In listing such examples I am brought back to Pugh’s distinction between automated psychotherapy’s ability to facilitate self-guided therapeutic disclosure of the secrets we keep from the world and its inability to uncover the deeper psychic traumas we keep from ourselves. However, as outlined above, many of the salient techniques by which therapists arguably facilitate this very unpacking of self-distorted stressors or repressed secrets, whether in psychoanalysis, CBT, or Rogerian therapy, superficially appear adaptable to eventual (if not present) machine learning sophistication, which could provide comparable, unbiased, cheaper, and less intimidating therapeutic resources for patients.
Will the field of clinical practice become even narrower and more specialized for in-person therapy as many of the dialogical techniques of psychotherapy are effectively outsourced to automation? Or is there a missing ingredient that in-person psychotherapists might claim as beyond the looming reach of automation? One might argue that subtleties like body language or fluctuations in vocal affect would get lost in automated psychotherapy, but let’s assume in a thought experiment that artificial intelligence has achieved the level of androids capable of picking up on these subtleties, a sophistication on the order of the replicants in Ridley Scott’s sci-fi film Blade Runner (1982). Although Pugh herself didn’t address this idea directly, at the end of her talk she outlined a particular sociological feature of connective labor that could arguably make the case that psychotherapy will never be fully susceptible to assimilation by automation, no matter how sophisticated the technology. This distinguishing feature is what Pugh identified as the shame-paradox.
The shame-paradox, as used by Pugh, indicates that what is at stake in connective labor, and irreplaceable by automation, is the interrelational risk of shame a recipient experiences in trying to live up to the practitioner’s connective labor goals in treatment. One can imagine this risk of shame in a patient nervous about telling a nurse about an STD, or a client not wanting to divulge some embarrassing childhood trauma to a therapist, either of which may compromise the practitioner’s connective treatment through imperfect information. A paradox emerges, however, in that while practitioners may encourage such disclosures despite the risk of the patient having a maladaptive experience of shame, those same disclosures create room for the possibility of prosocial adaptation in reaction to those very shameful feelings. In other words, in the absence of the risk of shame there is also an absence of the potential for pride or self-satisfaction in overcoming those shortcomings, acknowledged and witnessed by the Other (the practitioner). We can think of the example of a supportive nurse who, knowing the realities of a patient’s illness and vulnerabilities, is able to encourage and empathize with the pride a patient feels in undergoing the journey of getting well or even passing on with dignity.
In such cases there are, of course, the technical aspects of medical treatment, but there is also the arguably crucial connective labor of emotional support and theory of mind, acknowledging the gains, failures, and reality of the patient, which may underlie the unique benefits and outcomes of connective labor that can never truly be replicated in automation. In these terms, what seemed to be the benefit of patients’ greater willingness to disclose personal secrets to machines in the absence of risking shame may also be the very thing that stifles the distinguishing therapeutic feature of connective labor: growth and acceptance in the appraising eyes of the Other. Indeed, one could imagine that the potential for pride in the overcoming of shame, when witnessed by another, may be a key factor that gets the inertia of connective labor’s therapeutic processes underway in the first place, i.e., “I need to be extrinsically motivated to prove something conditionally to others before I psychologically transition to the intrinsic motivation of proving it unconditionally to myself.” Such a notion would be akin to the political theorist Hannah Arendt’s concept of the “vita activa” in human activity, where in our actions we are motivated and compelled to strive within the gaze and accountability of our contemporaries to achieve eudaimonia (flourishing).
In psychotherapy, concepts similar to the shame-paradox in connective labor have been discussed in certain psychoanalytic circles, where the value of risking oneself in being known is understood as essential to therapeutic growth. Even further, the power of risking and being witnessed in our connective labor could be seen not only as underlying psychoanalytic notions regarding the phenomenology of a patient’s speech disclosures, but also as motivationally foundational to CBT’s structured goal setting and task assignments, which are followed up on by overseeing practitioners. What is crucial in either case is being witnessed by, and being the witness of, the Other, an intersubjectivity that creates a new space where the real work of connective labor can occur. This is akin to what the psychoanalyst Jessica Benjamin calls “the third,” where phenomena like the vulnerability of shame, the potential of pride, and the exchange of empathy for productive outcomes can be realized in a working, intersubjective accountability between recipient and practitioner.
All of this is not to say that there isn’t true merit in pursuing automation for psychotherapy to a degree. Indeed, in fields like psychotherapy, where access is usually restricted to those who can afford it, the introduction of cheap, on-hand, and personalized therapeutic resources in the form of apps or online therapy will be invaluable in disseminating such resources to marginalized communities. Yet it must still be noted that even within this apparent benefit of automated psychotherapy there persist the class dynamics that pervade the mental health field as a whole, where the rich may still claim privileged access to the connective labor qualities of in-person therapy while the poor are left with the one-sided accommodations of automated therapy. Some may fairly argue that something is better than nothing, but it should not be ignored that class disparities in the quality of mental health access would persist regardless of automated psychotherapy’s growing pervasiveness and accessibility. Leaving such issues to future discussions, however, Pugh’s insights into the encroaching force of automation in fields like psychotherapy appear to have marked out a possible bastion for connective labor occupations via the indefinite need for trained and emotionally competent personnel who can practice their irreplaceable intersubjective craft. Indeed, although some needs can be adequately met by automating auxiliary aspects of the work done in connective labor (information collection, self-guided mindfulness techniques, etc.), the legwork of connective labor itself (theory of mind, witnessing shame and pride, intersubjectivity, etc.) arguably resists operationalization.
In such terms, it is important to appreciate that despite the apparatuses and mediums through which our labor gets disseminated, connected, advanced, and altered, there are some features of our labor that will remain inevitably constant and essential to the people doing it: the witnessing of our shared endeavoring.
Kevin Rice is an MA student at NSSR in the Psychology Department with an interest in the intersection of psychotherapeutic practice and theory in the context of contemporary politics, society, and culture.