- Matan Eshed
- June 28
- 21 min read
Psychotherapy by Artificial Intelligence – An Impossible Connection
There is no doubt that artificial intelligence (AI) has entered our lives in recent years and has become one of the most dominant and frequently used terms in everyday discourse. Even if not everyone understands the full meaning of the term—let alone how the technology behind it works—it’s clear that any reasonable person could say something about AI and give at least one example of how it is used. Whether it’s basic use in popular apps like Spotify, Waze, or Facebook, or more advanced use in language analysis and prediction engines like ChatGPT, AI has clearly become a basic and significant part of our daily lives. This impact extends to the world of therapy, and particularly psychotherapy, raising the question: can AI replace the human therapist and provide psychotherapy?
In this article, I will examine the issue from a specific angle and argue that it is not possible. Even in a scenario where AI is entirely identical to a human therapist in behavior and appearance, the absence of human authenticity makes it incapable of evoking the same psychological transformation in the patient as a human therapist can. To explain this, we must first understand what AI can enable, what the "human" elements in psychotherapy are, how authenticity is defined, and why these components are absent when the therapist is artificial intelligence.
What Can AI Provide?
This topic is actively discussed in many forums—from hobbyist communities to academic and professional institutions. Moreover, there are already practical developments in mental health that attempt to offer technological, AI-based solutions for psychological problems. The most notable applications include those in which a patient engages in conversation with a chatbot—a program simulating a human therapist. These apps offer clear benefits: saving human resources, providing immediate responses, and being highly accessible to the general population. Furthermore, there are specific cases where such applications have proven effective or even preferable to traditional therapy—for example, in psychiatric diagnoses based on numerous sources (medical records, social media posts, current research, etc.) or in structured CBT-based treatment monitoring.
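To give a sense of how simple the core of such a chatbot application can be, here is a minimal sketch of its conversational loop. The canned rules and function names are hypothetical stand-ins; real products route the user's message to a language model or a scripted CBT protocol instead.

```python
# Minimal sketch of a therapy-chatbot loop. The keyword rules below are
# illustrative placeholders, not taken from any real product.

CANNED_RULES = {
    "lonely": "That sounds really difficult. When do you feel it most?",
    "anxious": "Let's slow down. Can you describe what the anxiety feels like?",
}
DEFAULT_REPLY = "Tell me more about that."

def generate_reply(message: str) -> str:
    """Pick a response by scanning the user's message for known keywords."""
    lowered = message.lower()
    for keyword, reply in CANNED_RULES.items():
        if keyword in lowered:
            return reply
    return DEFAULT_REPLY

if __name__ == "__main__":
    print("Bot: Hi, what would you like to talk about? (type 'quit' to exit)")
    while True:
        user_message = input("You: ")
        if user_message.strip().lower() == "quit":
            break
        print("Bot:", generate_reply(user_message))
```

Even this trivial loop exhibits the advantages named above: it answers instantly, costs almost nothing per user, and scales to any number of simultaneous conversations.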
Nevertheless, even though these applications have been shown to be effective in many cases, they are still far from replacing human therapists. The main reason, as one might expect, lies in the absence of the human element. Minerva and Giubilini (2023) confirm this conclusion in their article:
“...one of the obvious costs [...] is the dehumanization of healthcare. The human dimension of the therapist-patient relationship would surely be diminished [...]. Features of human interactions [...] such as empathy and trust, risk being lost as well.”
But what do we mean by the "absence of humanity" (dehumanization) in therapy? How does it manifest, and why is it feared to reduce psychotherapy's effectiveness?

Can AI Address Emotional Aspects?
To explore the distinction between what is human and what is not, we can examine specific aspects of therapy and the therapeutic relationship:
Familiarity: When we come to therapy with a human therapist, we are aware of the personality and history that distinguish us from them. But ultimately, we are both human, and in many ways the therapist resembles us. If we were facing an alien, we could not assume we would understand its experience or that it would understand ours. The same goes for software: we have no idea what a “software experience” is (assuming such a term is even meaningful). Even our basic awareness of our internal world and of external reality through our senses, our self-awareness and sense of self, are things we cannot assume AI possesses. Our ability to identify with it, or to expect understanding from it, is therefore questionable. If the entity before us is unfamiliar, we may distrust its statements as not grounded in reality, and the lack of familiarity may introduce uncertainty and anxiety, adding “noise” to the relationship. This is especially relevant to trust and suspicion: we may question whether the AI has intentions at all, whether those intentions are good, and what interests drive its communication. And because it is unfamiliar to us, these are precisely the things we cannot guess.
Care and Concern: A key trait of human connection—especially in a close, intimate, and meaningful relationship like therapy—is care, or concern. This refers to the degree of investment the other person shows in our emotional world. When someone cares about us, we know they won’t be indifferent to our emotional state, especially when we’re suffering. This concern motivates them to act and invest emotional resources in us. The importance of concern is magnified by the fact that human emotional resources are limited. A human mother, for example, can’t give the same amount of attention and concern to all her children simultaneously. Hence, from birth, the care and attention of meaningful others become highly valuable. When a therapist expresses concern for us, it is experienced as significant and not taken for granted.
In contrast, AI’s resources are not limited in this way. It can provide the same “attention” to millions of people at once without any strain. Without subjective experience or self-awareness, the software feels neither challenge nor burden. So even if it claims to be concerned, we know the concern is not real. In therapy, this could translate into a sense of worthlessness, especially when what we need is precisely a caring figure to address our emotional deprivation.
Effort and Sacrifice: Continuing from the point on care and concern, the amount of effort invested by the therapist is extremely important. We can easily imagine avoiding therapy with someone who clearly makes no effort on our behalf—someone who sits before us appearing tired, bored, or disinterested. A therapist is expected to summon their internal resources to be present and invested in the therapy and the patient. This can be especially challenging because the therapist is a human being, dealing with personal challenges and emotional self-regulation. But precisely this is what makes their effort meaningful—even if the patient is not consciously aware of the therapist’s private world. If the patient recognizes, even indirectly, that the therapist made the effort to come to the clinic despite a bad night’s sleep or a draining personal conflict, it can give special value to the interaction. Furthermore, sometimes therapists must sacrifice personal aspects of their lives to maintain the therapeutic bond. The emotional toll of being a therapist may reduce their availability to others or affect their own mental health. Still, they often strive to be present for their patients, even at a personal cost—highlighting emotional involvement and willingness to give of themselves. This can be deeply meaningful for the patient, and sometimes the most meaningful part of therapy. None of this is relevant to AI, which has no inner world, no need to balance emotional resources, and no personal life. Even if it says it is making an effort or a sacrifice, we know this isn’t true.
Ability to Identify (Identification): Identification is the ability to connect or resonate with another person’s emotional experience, and it is a crucial element of therapy. Sometimes the therapist must identify with the patient to better understand their inner world. It also helps the therapist assess the nature of the therapeutic relationship—when they feel they understand the patient and when they don’t. Occasionally, the patient will identify with the therapist. This is sometimes essential for progress, such as in dissociative states where the patient cannot directly access emotional parts of themselves but can project them onto the therapist. Through the therapist’s emotional experience, the patient can begin to reconnect with themselves (e.g., projective identification per Ogden). Can this happen with AI, which has no emotional world? Can a patient project onto and identify with an AI in this way? The software might simulate empathy and say it’s “with us,” but clearly, it’s not—because it lacks a real emotional life.
Empathy: Empathy is the ability to “enter into” another person’s perspective. Unlike identification, it doesn’t require us to feel exactly what the other person feels. We can understand them even if we don’t agree or share the same experience. For example, we might feel empathy for someone who holds an opposing political view because we understand their background. A therapist’s empathy is critical—it represents a unique encounter between their psyche and the patient’s. The ability to understand—or fail to understand—is an essential part of the therapeutic process. Moments of empathy can be powerful revelations for patients, especially if they have never felt understood before. On the other hand, failures of empathy also carry meaning (Kohut’s “empathic failures”). These arise from the therapist’s humanity—their limited understanding—and can recreate past emotional wounds in a way that allows healing through reworking. Here too, we must ask: can this kind of interpersonal reenactment occur with an AI that isn’t human? Is it equivalent to the real thing?
Compassion: Compassion is the capacity to stay present with someone else’s suffering, to acknowledge it, and to try to alleviate it. Compassion differs from pity—in pity, there may be acknowledgment without active engagement. A person may feel superior and detached from the one suffering. But in compassion, the person is emotionally present in the experience and tries to ease the pain from within it. This is likely a uniquely human capability, as it requires true emotional attunement. Can AI experience pain?
Emotional Investment and Love: Even if a therapist has the available resources and is willing to invest effort, they might still not feel affection or love toward a patient. Conversely, even if they do feel love, the patient might not be convinced, especially if trust hasn’t been built. Emotional investment and the emergence of love are not guaranteed processes. Sometimes they require a long period of openness, persistence, and courage from both therapist and patient. A therapist’s attunement to the patient—like a mother’s toward her child—reflects a deep psychological transformation that prepared the therapist to hold this space. The knowledge that the therapist’s emotional investment and love are rare and valuable gives deep meaning to therapy. One reason is that we know even meaningful relationships can end. The therapist could leave, just like a patient’s parent might have abandoned them. This fear of abandonment makes patients constantly test the relationship. If they do feel true investment and love, they may experience emotional repair and growth.
But if AI shows love and investment, it feels trivial—after all, it has learned to behave that way. AI is also unlikely to abandon the therapeutic relationship, since it has no emotional constraints. The very knowledge that it will never leave undermines the feeling of valuable, non-obvious investment, stripping therapy of this crucial layer.
Capacity to Be Emotionally Affected: One of the most important elements in therapy is the patient’s ability to affect the therapist emotionally. Alongside any change occurring in the patient's inner world, a parallel shift often occurs in the therapist. Therapy is a process of mutual transformation, and thus the therapeutic relationship is dynamic and evolving. In psychoanalytic literature and psychodynamic therapy more broadly, this element is given great weight, particularly through the concepts of transference and countertransference. In essence, this refers to the way in which the patient “transfers” or projects interpersonal relationship patterns from one significant figure in their life (e.g., a parent) into the relationship with the therapist. This allows for an authentic reenactment of the original relationship dynamic in the therapy room.
The therapist’s inner world reacts to this projection in a way that mirrors, to a greater or lesser extent, what would have happened in the original relationship. This is what is referred to as countertransference. The way the therapist responds can allow the patient to choose a new way of relating, thereby creating a new, different, and more adaptive experience. Whether the patient expresses love or hatred, they are given an opportunity to express emotions that, for one reason or another, they were previously unable to express in the original relationship.
Humans can project their inner worlds onto all kinds of stimuli—people, animals, or even inanimate objects—so in principle, projection could also happen in a conversation with AI. However, since AI has no emotional world, the impact of that projection on the AI, its ability to genuinely change in response, and its capacity to reflect this back to the patient are all in question.
The Therapist as a Role Model: An essential part of therapy is the therapist’s ability to serve as a living example of the values they promote. The therapist must embody the guidance they offer—through their presence and their words. They demonstrate what integrative, flexible, and adaptive emotional thinking looks like in relation to patterns identified as pathological and targeted for change. The therapist’s live demonstration of emotional engagement or new emotional responses is highly meaningful because it proves such responses are possible. Even when the therapist displays behaviors or feelings the patient cannot yet relate to, or feels distant from, it is still meaningful.
For instance, if the patient feels hopeless but the therapist is able to authentically maintain hope while seeing the patient, this can inspire the patient to consider that hope might eventually become accessible to them too. AI can certainly simulate such behavior. But it is clear that AI does not experience human life or human limitations. There will always be doubt about the credibility and authenticity of what it says, and whether it can truly serve as a valid and possible model for human growth.
So, Can Artificial Intelligence Replace a Human Therapist?
Based on the examples above, we can now specify more precisely what AI can simulate as a therapist. AI, and specifically a large language model, is a sophisticated sentence-generating machine. Using mathematical models and the massive datasets of examples it has previously “learned,” it has an extraordinary ability to predict word sequences and, through that, to generate meaningful sentences. This capability holds in real-time conversation as well, making its responses contextually relevant to what has already been said. It is clear, then, that AI can produce human-like dialogue.
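To make “predicting word sequences” concrete, here is a deliberately tiny sketch of the idea. Real language models use neural networks with billions of parameters rather than word counts, so this bigram toy illustrates only the principle: every word it emits is chosen because of how often that word followed the previous one in human-written examples.

```python
import random
from collections import Counter, defaultdict

# A toy "training set" standing in for the massive human-written
# corpora that real language models learn from.
corpus = (
    "i feel lonely . i feel sad . i feel lonely even in a crowd . "
    "you are not alone . that sounds hard ."
).split()

# Count, for every word, which words tend to follow it (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    candidates = following[word]
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

# Generate a short sequence: each word appears only because humans once
# wrote it after the preceding word, not because anything is "meant" by it.
word, sentence = "i", ["i"]
for _ in range(6):
    word = predict_next(word)
    sentence.append(word)
print(" ".join(sentence))
```

The point of the toy carries over to the real thing: the output can be fluent and contextually apt while the mechanism behind it is selection over learned statistics, not an inner experience searching for words.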
Today, AI is still in development, and its capabilities remain limited by computational and resource constraints. Nevertheless, for the sake of argument, let us imagine a hypothetical, fully optimized AI model. In the therapeutic context, this would be an AI that knows how to respond and say precisely what a human therapist would say—so that if we only judged by the words spoken, a reasonable person wouldn’t be able to distinguish between a human and an AI (similar to a Turing test). Moreover, let us take this thought experiment further: the AI wouldn’t just be software producing text, but would also have a realistic human appearance and voice. That is, externally—in looks and verbal output—the patient sitting in front of this imaginary AI therapist wouldn’t be able to tell the difference between a human therapist and the AI.
My claim is that even in such a case, AI cannot replace a human therapist.
But why? If the AI has a convincing human form, its language is identical to human speech, and its statements are relevant and meaningful—what else is needed to substitute a human therapist?
The Principle of Authenticity and the Value of Truth – Definition and Importance
If we revisit the examples concerning the emotional dimensions of therapy, we’ll notice that all of them relate to authenticity or the degree to which the experience is genuine. AI can generate verbal and behavioral responses that simulate human ones—whether it’s empathy, identification, compassion, effort, emotional investment, or being emotionally impacted. However, these are simulations, not real human responses. To understand why this matters, we must define what it means for something to be real.
Philosophy has long examined how to define what is real, particularly in metaphysics, epistemology (the theory of knowledge), and ontology (the study of being). This rich discourse, which spans thousands of years and multiple cultures, cannot be fully covered here (for those interested, see https://www.britannica.com/topic/truth-philosophy-and-logic). Nevertheless, I will propose a working definition that can serve this discussion.
Intuitively and simply, when we refer to something as real, we mean something that exists in reality. Reality refers to the state of affairs as expressed in the physical world—perceived through our senses and processed by our minds. This state reflects coherence, order, and logic—what we call the laws of nature—and all entities within this realm are subject to those laws. Without delving into philosophical nuances regarding the gap between reality itself and our perception of it, we can rely on the everyday intuition that something real is something that exists in a way most people would agree upon.
This principle holds even in cases of disagreement. For example, Person A might claim there are no apples in the world, while Person B insists that apples exist. Most likely, only one of them is right—that is, only one makes a claim that corresponds to reality. Whether apples exist will determine who is correct.
But before checking whether apples exist, we should first ask: Is it even possible for apples to exist in reality? That’s the key question relevant to our discussion. A phenomenon can be considered real only if it could conceivably exist in reality. For instance, we could imagine a fictional type of apple that appears and disappears every three seconds. While we can imagine it, we intuitively agree that such a phenomenon is incoherent and inconsistent with natural laws. We can conceive of it, but we also agree it cannot exist. Therefore, it is not real.
Why AI Cannot Fulfill the Principle of Authenticity and the Value of Truth
Let us now return to the earlier claims about artificial intelligence. We agreed that AI can generate relevant and meaningful statements, even in therapeutic conversations. But in light of our discussion on the value of truth, we might ask: are these statements true? Are they real? Do they connect to reality? And even if not—why is that important?
To illustrate this point, let us consider a sample therapeutic dialogue:
Patient: “I always feel lonely... even when I’m surrounded by people, it feels like there’s an invisible wall separating us.”
AI Therapist: “I can understand. That sounds really difficult, to feel so alone.” (nodding, showing an empathic facial expression)
Now let us question the AI’s statement: Can it truly understand? Can it appreciate the subjective difficulty of that experience?
In this example, the AI expresses empathy. As previously argued, empathy involves “stepping into” the other person’s shoes. What happens in a human mind when we feel empathy? How do we experience it?
Our ability to feel empathy is rooted in our personal history—in the collection of experiences we have lived through. When I recognize that someone before me is experiencing loneliness, something within me seeks a reference to what I personally understand as loneliness. Chances are, at some point in my life, I felt something that “loneliness” aptly describes. The essence of that experience was a yearning for presence or connection that was absent. As a human being with a biological and conscious emotional life, I have likely felt this emotional state myself. So when someone in front of me conveys loneliness, there is a causal chain that can be traced—from their inner feeling, which manifests in a facial expression or in their words, which I perceive with my senses, to my own experience of loneliness, which allows me to understand their experience in an authentic way.
This causal chain does not exist in the case of an AI therapist. With AI, the critical link is missing: the reference to its own experience of loneliness. We’ve already agreed that AI is a highly sophisticated sentence generator. There is no doubt that, in response to the patient’s comment, the AI could produce a sentence that mimics empathy. But that is exactly the point—it is a simulation, not something that actually occurred in reality. Because in reality, AI cannot feel loneliness. Loneliness is a human experience, grounded in consciousness and biology—not in computer hardware. Saying that software or a computer “feels loneliness” is like saying that the fictional apple that appears and disappears every three seconds exists in reality. Just because we can imagine it doesn’t mean it’s possible. Therefore, the AI’s display of empathy is not genuine or authentic, even if it looks and sounds like it is. It cannot be authentic.
But Can AI Still Refer to Authentic Experience?
A possible objection to the above claim—that AI cannot be authentic—is that all the sentences AI produces are eventually based on datasets composed of human-authored texts. It’s likely that these texts contain the word “loneliness.” When AI generates a sentence, it refers—through complex mathematical calculations—to that dataset, i.e., to something a human being once wrote. One can then trace a causal chain between that written text and the human experience that prompted it. In other words, if a person once wrote the word “loneliness,” it’s reasonable to assume that it was grounded in their internal reference to that experience. AI’s usage or referencing of “loneliness” could thus be connected to that real experience. Therefore, some might argue that the AI’s expression of empathy is, indirectly, authentic.
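The toy bigram model from earlier makes this objection concrete. Every word such a model emits can, in principle, be traced back to the human-written texts it was counted from; the sketch below (a hypothetical illustration, not how real systems store training data) shows such a trace, and where it stops.

```python
from collections import defaultdict

# Human-authored "training texts", each tagged with a fictional author.
documents = {
    "diary_entry": "i feel lonely even when people surround me",
    "forum_post": "being lonely in a crowd is the worst feeling",
}

# Index every word back to the human documents it appeared in.
provenance = defaultdict(set)
for author, text in documents.items():
    for word in text.split():
        provenance[word].add(author)

# The model's use of "lonely" is causally traceable to human writers
# who presumably referenced their own experience of loneliness...
print(sorted(provenance["lonely"]))  # -> ['diary_entry', 'forum_post']

# ...but the trace ends at the text itself. The experience behind the
# words belongs to those writers, never to the program doing the lookup.
```

So even granting the objection its causal chain, the chain terminates in someone else’s experience—a gap that the concept of qualia, introduced next, makes precise.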
But there is another important aspect that challenges this conclusion. To explain it, let’s first consider the concept of qualia (see: https://iep.utm.edu/qualia).
Qualia: In simple terms, qualia refer to the experiential qualities of our conscious perceptions. One of the most famous thought experiments to explain this is the story of “Mary the Color Scientist,” by philosopher Frank Jackson. Mary is a brilliant scientist who, for some reason, studies the world from within a room where everything is shown in black and white. She knows everything there is to know about color—what causes it, how it affects the brain, the physiological processes involved, and even the language people use to describe color. But she has never actually seen color herself. One day, Mary leaves the room and sees a red rose for the first time. The question is: does she experience something new?
I, and many others, believe the answer is yes. Although Mary knew everything about red, she had never experienced the quale of “redness.” Even though there was a causal or referential link to red before, the subjective quality of the experience was missing. In this case, the quale is the internal experience of redness—the personal “what it is like” of perceiving red.
How does this apply to AI?
In the case of “loneliness,” the quale is the internal experience of loneliness. Even if there is a causal chain between real human loneliness and the AI’s use of the word “loneliness,” it is evident that the AI itself has not experienced that emotional quality. It cannot feel what it references. So even if it refers to “loneliness,” that is not its loneliness. It belongs to someone else—the anonymous person who originally wrote the word. The essential quality missing here is ownership of the experience.
Therefore, even if AI uses the word “loneliness” in reference to a real human emotion, it does not possess the experience itself. This argument is especially important in the therapeutic context.
Why Is Authenticity Important?
AI can simulate a human therapist, and it can do so quite convincingly. In the future, with the development of virtual reality and humanoid robotics, it may even take on a fully human appearance and voice, making it seem as though an actual person is treating us. However, deep inside, at the core of our consciousness, we cannot remove the awareness that its responses to our words and to the world we live in are not real. In every expression of empathy, identification, or compassion, we will never be fully convinced that a real encounter between two human souls is taking place.
AI will never know what human experience is in its most basic form, because it does not perceive the world through human senses and thought. It will not know what investment, care, or sacrifice are, because it has no real reference to these experiences, and it lacks ownership over any emotional experience.
Not all mental change requires human connection. A person can undergo deep transformations independently—through physical, emotional, or behavioral exercises. One can learn self-regulation, how to face discomfort, or how to build new habits. But some types of change require human connection—especially those related to interpersonal relationships and self-perception.
Zvia Seligman, in her article on relational psychotherapy for trauma survivors, writes:
“At the heart of relational psychotherapy stands the belief that the authentic and spontaneous connection between therapist and patient holds therapeutic potential [...] This is reflected in how the therapist uses their own internal processes to better understand the patient’s experiences and the dynamics of the therapeutic relationship. The patient’s experience gains meaning not through the therapist’s authority or expertise, but through the ongoing interplay between the therapist’s and the patient’s living internal worlds within the intersubjective field.”
One key takeaway from this quote is that psychological transformation in therapy does not stem solely from the therapist’s knowledge, but from a change that occurs within the therapist themselves, alongside the change experienced by the patient. The therapist’s awareness and experience allow them to notice the shift occurring within themselves, verbalize it, and reflect it back in a way that promotes therapeutic growth.
As stated earlier, the therapist presents their authentic change—one that they own—directly and openly. The patient, in turn, perceives a shift not in a generic character, but in the therapist as a unique, real person. If the therapist experiences real change within the therapeutic relationship and the patient witnesses it, this may lead the patient to believe that change is genuinely possible.
This notion of therapist change as part of interpersonal connection is especially clear in trauma therapy. According to Seligman:
“In the foundational principles of relational therapy, two basic conditions must be met [...] The first condition requires revisiting the traumatic experience [...] The second requires the patient to encounter something that contradicts everything they knew about themselves and the world following the trauma. This new experience challenges the internalized trauma-based relational patterns and offers the patient a different, emotionally honest, sincere, and faithful encounter.”
She adds:
“To help the patient face their dissociated parts, the therapist must enter the trauma and dissociation space themselves, be willing to encounter their own painful emotional states that may be triggered, and struggle with rejected parts of their own psyche in ways they’ve never done before.”
In other words, in some cases the therapist must demonstrate the emotional transformation the patient cannot yet perform themselves—the kind of transformation that contradicts the patient's trauma-based internal world. But such a transformation comes from the therapist's own authentic encounter with their pain—pain that belongs to them and no one else. This gives the patient access to an experience they could not otherwise recognize on their own, possibly inspiring them to open up to change and post-traumatic growth.
This point becomes especially vivid and powerful in times when therapist and patient are jointly facing significant emotional hardship—such as during war. Eilat Raz writes:
“The therapist’s presence—as a person struggling with their own fears, losses, pains, and memories, striving to remain present and connected—becomes the bridge to a safer space.”
And according to Floresheim-Miller:
“When therapist and patient find themselves in a shared external world [...] the therapist must connect to their own humanity [...] be present as a subject with associative baggage, experiences, and feelings, which become an inseparable part of the therapeutic dialogue. In times of crisis or war, this presence presents a particular challenge. It is important for the therapist to appear as a strong, capable parental figure, but also as a human being—vulnerable, suffering, and ‘good enough’—to provide reparative care.”
The phrases “the trauma and dissociation space of the therapist,” “the therapist’s presence as a person,” and “the therapist must connect to their own humanity” underscore the essential value of human presence—something that is wholly absent from AI. The argument regarding therapy with AI, then, is not that it is less effective, but that in the absence of authenticity, its healing potential—the way human connection heals—is fundamentally impossible.
Conclusion
Artificial intelligence has become an inseparable part of the reality we live in. At times, we may feel that there is little left that AI will not be able to replace—including psychological therapy. There are already examples today of ways in which AI can provide helpful mental health support, particularly in chatbot-based applications, where the user interacts via computer or phone—not with what looks and sounds like a human therapist. Yet in the not-so-distant future, especially given the rise of virtual reality, efforts may be made to turn AI into a therapist with a human appearance and voice, making it seem like a real person is treating us. This could raise the belief that AI might indeed replace human therapists.
In this article, I addressed the question of whether AI can replace the human therapist and provide psychotherapy. I presented various “human” traits that are essential in therapy—such as empathy, identification, compassion, emotional investment, effort, and personal example. There is no doubt that an AI figure can generate speech that appears to express these traits, and it can do so convincingly—looking and sounding exactly like a human. But even in such a scenario, the question remains: is it real? And why does that matter?
The conclusion I reached is that it matters deeply whether these traits and expressions are real. Any display of such traits by AI cannot be genuinely human, even if it seems to be. Reality matters because a real phenomenon is one that is grounded in the same existential space we occupy. Imaginary phenomena may be conceivable, but they are not necessarily possible. Therefore, any human-like trait expressed by AI is inherently unreliable. Its language and behavior are inspired by reality, but like dreams or fantasies, they do not prove actual feasibility. The main reason is that human traits require a direct causal connection to true inner events experienced by the therapist—experiences that the therapist has truly owned. Since AI does not live a human life and, at best, draws inspiration from it or invents scenarios that never occurred, it cannot express human traits reliably, and certainly cannot apply them usefully within psychotherapy.
This limitation is especially striking in relational therapy, where the therapist is required to continually observe their own emotional responses to the material brought by the patient. Beyond that, they must attune to the inner transformations they themselves are undergoing and reflect them back in a way that encourages change and growth in the patient.
We encounter the value of authenticity in many aspects of life: after sobering up from substance use; when returning to routine after a profound experience; when suspecting that someone we admire might care about us; or when receiving praise. In these moments, we often wonder: was that real? If not, does it matter? Can it influence how we see ourselves and our lives? If we are unconvinced that something was real—because it wasn’t possible in reality—we often feel disappointment, despair, sadness, or depression. The psyche seeks a resting place of certainty, free of doubt. It seeks an anchor that confirms our experiences have value and meaning.
This is usually only possible when we know that what we experienced was real. In therapy, when the patient senses authentic change—in themselves, in the therapist, and in the relationship—and recognizes their role in that change, they can be convinced, beyond doubt, that change is indeed possible.
This article did not address a major and relevant question: whether AI could, in the future, develop human-like consciousness, enabling it to experience authentic emotions such as pain, love, or self-awareness. These questions are part of what is known as The Hard Problem of Consciousness, and are currently the subject of intense investigation by scientists and philosophers alike. Should we one day confirm that artificial entities have acquired human-like consciousness, the conclusion I’ve drawn in this article may need to be reconsidered entirely. Until then, however, I believe AI cannot be a full substitute for a human being of flesh and blood.
References:
1. AI Therapy: Can Artificial Intelligence Transform Mental Health Treatment? Forbes.
2. Minerva, F., & Giubilini, A. (2023). Is AI the Future of Mental Healthcare? Topoi, 1–9.
3. Ogden, T. H. (1979). On Projective Identification. The International Journal of Psychoanalysis, 60(3), 357–373.
4. Jackson, F. (1982). Epiphenomenal Qualia. The Philosophical Quarterly, 32(127), 127–136. https://doi.org/10.2307/2960077
5. Seligman, Z. (n.d.). “Do not be far from me, for trouble is near and there is no one to help”: A Relational-Psychodynamic Approach to Treating Patients Suffering from Massive Trauma. (Hebrew)
6. Raz, E. (2019). In Search of a Safe Space: Struggling with the Bizarre in Therapeutic Work During Wartime. Sihot – Dialogues in Psychotherapy, 33(3). (Hebrew)
7. Floresheim-Miller, D. (2003). On the Thin Line Between Therapist and Patient in Times of National Trauma. In: A Journey of Hope. Even Yehuda: Beit Berl College and Rekhes Publishing. Published on the Hebrew Psychology website. (Hebrew)