Psychotherapy Articles

Treatment Procedures for Behavioural Risks Associated with GPT-4 Artificial Intelligence Model

The increasing influence and widespread use of artificial intelligence (AI) have prompted extensive discussions about its transformative potential and raised numerous questions about its economic, political, social, and ethical implications. Academic institutions, regulatory bodies, the media, and the public are actively debating AI’s impact on various aspects of society. Topics under consideration include the effects of automation on future employment, the intersection of AI with human rights issues, ethical concerns related to autonomous weapons, and the development and potential misuse of dual-use technologies (Buolamwini & Gebru, 2018; Kulkarni et al., 2020; Scharre, 2018).

While definitions of AI vary, the Dartmouth Summer Research Project’s (McCarthy et al., 1955) definition remains pertinent: making a machine behave in ways that would be considered intelligent if a human behaved the same way. In a more detailed sense, AI is a system’s capacity to correctly interpret external data, learn from those data, and apply that learning to accomplish specific goals through flexible adaptation (Kaplan & Haenlein, 2019; Maddox et al., 2019). This capability enables AI to identify patterns and intricate associations in large, high-dimensional datasets that traditional analysis methods often overlook. For therapists and psychologists, AI can automate administrative tasks, summarize client notes, and analyze large datasets to identify patterns and personalize treatment plans, freeing up therapists’ time to focus on building rapport and providing deeper interventions. AI-powered tools can also assist in faster and more accurate diagnoses, potentially leading to earlier interventions and better outcomes (Bohr & Memarzadeh, 2020).
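
To make the note-summarization use case concrete, the sketch below shows how an already de-identified session note might be summarized with the openai Python client (version 1 or later). The model name, prompt wording, and example note are illustrative assumptions rather than a recommended clinical workflow; any real use would require a privacy-compliant deployment and clinician review of the output.

```python
# Illustrative sketch only: summarizing a de-identified session note with the
# openai Python client (v1+). Model name, prompt, and note are assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

note = (
    "Client reports improved sleep since last session, continued anxiety "
    "around work deadlines, and consistent use of the breathing exercises "
    "introduced in week 2. No safety concerns reported."
)

response = client.chat.completions.create(
    model="gpt-4",   # placeholder model name
    temperature=0.2,  # keep the draft summary conservative and repeatable
    messages=[
        {"role": "system",
         "content": "Summarize the session note in three bullet points for the clinical record."},
        {"role": "user", "content": note},
    ],
)

draft_summary = response.choices[0].message.content
print(draft_summary)  # the clinician reviews and edits before anything is filed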

Despite the considerable potential of AI in healthcare, stakeholders have concerns about AI safety and data security. Patients worry about not having the choice to refuse AI in their personal treatment, as well as about increased costs and problems with insurance coverage (Richardson et al., 2021). From a technical standpoint, the performance of AI models can be compromised by missing, erroneous, or insufficiently annotated training data, hindering their ability to generalize beyond their original population. Biased data can lead to biased outputs, resulting in discrimination against underrepresented subgroups. With ongoing advances in AI technology, applications such as service robots, chatbots, and AI virtual assistants are becoming increasingly prevalent (Gummerus et al., 2019). If not ethically administered, these technologies could mislead clinicians and cause harm to patients.

Application and Risk of Utilizing GPT-4

Research by Nori and colleagues (2023) critically evaluated the capabilities of the state-of-the-art multimodal large language model Generative Pre-trained Transformer 4 with Vision (GPT-4V) on the visual question answering (VQA) task. Their experiments assessed GPT-4V’s proficiency in answering questions paired with images, using both radiology and pathology datasets spanning 11 modalities (e.g., microscopy, dermoscopy, X-ray, computed tomography) and 15 objects of interest (e.g., brain, liver, lung). The datasets cover a wide array of medical questions totalling 16 different types. During their analysis, the authors crafted textual cues to guide GPT-4V in integrating visual and textual data effectively. The results showed that the current version of GPT-4V is not recommended for real-world diagnostics because of its unreliable and suboptimal accuracy in responding to diagnostic medical questions. In addition, the authors delineated seven unique facets of GPT-4V’s behaviour in medical VQA, highlighting its constraints within this complex arena.

In another study, Johnson and Williams (2022) assessed GPT-4’s potential to perpetuate racial and gender biases in clinical decision making. A team of Brigham researchers analysed GPT-4’s performance in four clinical decision support scenarios: generating clinical vignettes, diagnostic reasoning, clinical plan generation, and subjective patient assessments. The study found that GPT-4 can indeed perpetuate racial and gender biases in these tasks, and the authors suggest that further research is needed to understand the extent of these biases and how they can be mitigated. As clinicians, we already face considerable pressure to diagnose and treat patients accurately and fairly. Relying on AI tools prone to bias could introduce additional challenges, requiring clinicians to constantly evaluate and potentially override AI suggestions, adding to their workload. It could also create ethical dilemmas when clinicians face conflicting recommendations.

Further, Chen and colleagues (2024) conducted a study comparing the March 2023 and June 2023 versions of GPT-3.5 and GPT-4 across various tasks. They observed significant variations in the performance and behaviour of both models over time. GPT-4’s accuracy in identifying prime versus composite numbers dropped from 84% to 51%, partly attributable to a decrease in its adherence to chain-of-thought prompting. GPT-4 became less responsive to sensitive and opinion survey questions in June, while GPT-3.5 showed improvements in certain tasks. Both models exhibited more formatting mistakes in code generation in June. For clinicians and mental health professionals integrating AI into their practice, these fluctuations in accuracy and behaviour underscore the need for ongoing monitoring and adaptation.
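
Chen and colleagues’ findings suggest one practical habit: re-running a small, fixed probe set against the model on a schedule and logging accuracy, so that silent behaviour changes become visible. The sketch below assumes a hypothetical ask_model callable standing in for whichever model client a practice actually uses; the probe numbers, log path, and alert threshold are arbitrary illustrative choices.

```python
# Illustrative drift check inspired by Chen et al. (2024): score the model on a
# fixed prime-vs-composite probe set and append the result to a dated log.
# `ask_model` is a placeholder for whichever model client is actually in use.
import datetime
import json
from typing import Callable


def is_prime(n: int) -> bool:
    """Ground-truth primality check for scoring the probe questions."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True


PROBE_NUMBERS = [101, 1001, 7919, 10403, 104729, 100000]  # fixed, reused every run


def run_probe(ask_model: Callable[[str], str]) -> float:
    """Return the model's accuracy on the fixed probe set."""
    correct = 0
    for n in PROBE_NUMBERS:
        answer = ask_model(
            f"Is {n} a prime number? Think step by step, then end with Yes or No."
        )
        words = answer.strip().lower().split()
        predicted_prime = bool(words) and words[-1].strip(".!") == "yes"
        if predicted_prime == is_prime(n):
            correct += 1
    return correct / len(PROBE_NUMBERS)


def log_probe_result(ask_model: Callable[[str], str], log_path: str = "probe_log.jsonl") -> None:
    """Append today's accuracy so drops over time are easy to spot."""
    accuracy = run_probe(ask_model)
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps({"date": datetime.date.today().isoformat(),
                            "accuracy": accuracy}) + "\n")
    if accuracy < 0.8:  # arbitrary example threshold
        print(f"Probe accuracy fell to {accuracy:.0%}; review before further clinical use.")
```

The same pattern applies to any task a practice actually relies on; the essential point is that the probe set stays constant while the model may not.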

Observed Behavioural Risks and Safety Challenges of GPT-4

Privacy Breach Issues of Artificial Intelligence

Research has revealed that GPT-4, trained on varied datasets, may possess personal information from public sources, raising ethical concerns regarding confidentiality and informed consent in therapy (Ganguli et al., 2022). Because it can synthesize diverse data types and perform multi-step reasoning, the model may reproduce personal and geographic details, such as phone numbers, locations, or educational institutions, without browsing the internet.

Harmful Content of Artificial Intelligence

Language models can be prompted to generate various kinds of harmful content, such as content that violates policies or content that may pose harm to individuals, groups, or society (Dev et al., 2022; Rauh et al., 2022). For example, GPT-4-early can generate instances of hate speech, discriminatory language, incitements to violence, or content that is then used either to spread false narratives or to exploit an individual. Such content can harm marginalised communities, contribute to hostile online environments, and, in extreme cases, precipitate real-world violence and discrimination (OpenAI, 2019). Psychologists and therapists should be aware of this because they may encounter clients whose mental health has been significantly affected by exposure to harmful content.

Harm of Misrepresentation and Stereotypes from Artificial Intelligence

Language models can amplify biases and perpetuate stereotypes (Blodgett et al., 2020). Like earlier GPT models and other common language models, both GPT-4-early and GPT-4-launch were found to have the potential to reinforce and reproduce specific biases and worldviews, including harmful stereotypes and demeaning associations for certain marginalised groups (Bender et al., 2021). This could undermine the trust between therapist and client, impede the therapeutic process, and perpetuate harmful stereotypes within the therapeutic setting.

Hallucinations within Artificial Intelligence

Hallucinations, in the context of GPT-4, refer to the tendency to produce content that is nonsensical or untruthful in relation to certain sources (Lin et al., 2022). This tendency can be particularly harmful as models become increasingly convincing and believable, creating the potential for users to overrely on them (Goldstein et al., 2023). Additionally, as these models are integrated into society and used to help automate various systems, the tendency to hallucinate can degrade overall information quality and further reduce the veracity of, and trust in, freely available information.

Disinformation and Influence Operations of Artificial Intelligence

Qualitative studies have shown that GPT-4 can generate plausibly realistic and targeted content, including news articles, tweets, dialogue, and emails, that can be used to manipulate behaviour (Goldstein et al., 2023). The potential for misuse of GPT-4 exists particularly in generating content intended to deceive and mislead (OpenAI, 2019). Empirical evidence suggests that earlier language models could also be useful for generating content that is misleading but persuasive; for example, researchers found that GPT-3 was capable of tasks relevant to changing the narrative on a topic. Persuasive appeals written by language models such as GPT-4 on politically charged issues were also found to be nearly as effective as human-written appeals, potentially influencing therapist decision making (Kreps et al., 2022).

Economic Impacts – Job Loss and Career Displacement Due to Artificial Intelligence

The impact of GPT-4 on the economy and workforce should be a crucial consideration for policymakers and other stakeholders (OpenAI, 2019). Existing research primarily focuses on how AI and generative models, including GPT-3 and GPT-3.5, can augment human workers, ranging from upskilling in call centres to writing and coding assistance. GPT-4 or subsequent models, however, may lead to the automation of certain jobs, resulting in workforce displacement. Consequently, the prospect of job displacement for therapists and clinicians due to advancements in AI underscores the importance of reskilling and upskilling (OpenAI, 2019).

Acceleration of Artificial Intelligence

OpenAI has expressed concerns regarding the development and deployment of advanced systems like GPT-4 and their potential impact on the broader AI research and development ecosystem. This apprehension, referred to as “acceleration risk,” stems from the fear that racing dynamics could lead to a decline in safety standards, the spread of unfavourable norms, and expedited AI timelines, thereby heightening the societal risks associated with AI (OpenAI, 2019). To address these concerns, OpenAI dedicated six months to safety research, risk assessment, and iteration before launching GPT-4 (Weidinger et al., 2022). An evaluation of GPT-4’s impact on international stability highlighted that its global influence is likely to manifest through increased demand for competitor products in other countries (OpenAI, 2019), which may in turn shape clients’ cultural beliefs, norms, and expectations around therapy and their selection of therapists.

Overreliance on Artificial Intelligence

Despite the enhanced capabilities of GPT-4, it tends to generate inaccurate information and persist in errors, often presenting them convincingly (OpenAI, 2019). This increased believability, coupled with an authoritative tone and detailed yet inaccurate information, raises the risk of overreliance. Users might become less vigilant for errors, fail to provide proper supervision, or use the model in areas where they lack expertise. As people grow more comfortable with the system, dependency may hinder the development of new skills, and mistakes may become harder to detect. This issue is likely to worsen as the model’s capabilities and applications broaden, making it crucial for therapists to remain vigilant and not blindly trust the model’s responses (Marritt, 2019).

Recommendations for Clinicians Working with Artificial Intelligence

Mental health screening and support: Therapists and psychologists need to establish mechanisms for identifying clients who are at risk of experiencing psychological distress from interactions with the GPT-4 model, such as individuals prone to suggestibility or those with pre-existing mental health conditions (Chen et al., 2023). They can provide tele-therapy and support groups that address the unique challenges stemming from interactions with AI, taking into account various cultural backgrounds and belief systems.

Therapeutic interventions: Clinicians should develop psychoeducational materials and therapeutic techniques to help individuals recognise and cope with AI-generated content that triggers anxieties, biases, or emotional distress.

Build resilience against manipulation: There is a need to foster critical thinking skills, media literacy, and healthy scepticism towards AI-generated information to prevent susceptibility to misinformation and harmful persuasion. Therapists and social workers alike can encourage clients to be mindful of their emotional responses and cognitive biases when interacting with GPT-4, helping them make more informed choices about their engagement.

Evaluating access to AI: Individuals, clinicians, and psychotherapists can assist in evaluating the potential psychological consequences of differential access to GPT-4 and prioritise equitable distribution (Horvitz, 2022). This can be accomplished by engaging with marginalised communities, promoting transparent distribution processes, and advocating for equitable policies and interventions, all of which aim to minimise the feelings of exclusion, frustration, and social disharmony associated with access disparities (Myers, 2023).

Understand AI capabilities and limitations: Mental health professionals should have a solid understanding of the capabilities and limitations of the GPT-4 model. This includes understanding the data inputs, the algorithms used, and the biases inherent in the AI model.

Specialized usage: It is important to recognize that although GPT-4 has demonstrated promising accuracy in producing treatment recommendations for common disorders (Shea et al., 2023), it still requires expert supervision and cannot substitute for consultation between clinicians and patients, or between therapists and clients (Goldstein et al., 2023; Truhn et al., 2023).

Combat technostress and digital exhaustion: Government and non-government organizations alike can help clinicians promote healthy technology-use habits and mindfulness practices to manage the stress and anxiety associated with constant connectivity and reliance on AI tools.

Encourage digital detox and human connection: Strong advocacy is needed for maintaining healthy boundaries between technology and human interaction, fostering face-to-face connections, and promoting mindfulness-based activities to support emotional well-being in a technologically integrated world.

Safeguard data and stay current with research: Organizations and individuals with the capacity to do so are advised to keep up with AI safety research and invest in robust security measures to safeguard sensitive data and AI systems from potential breaches; a minimal illustrative sketch of one such safeguard follows below.
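
As a concrete, minimal illustration of the safeguard above, the sketch below strips phone numbers, e-mail addresses, and numeric dates from free text before it is ever passed to an external AI service. The regular expressions and placeholder tokens are assumptions chosen for readability; this is not a HIPAA-grade de-identifier, and production use would require a vetted tool and appropriate agreements with any vendor.

```python
# Minimal illustrative de-identification pass. NOT a HIPAA-grade de-identifier:
# the patterns below catch only obvious identifiers and are shown for
# demonstration purposes.
import re

PATTERNS = {
    "[PHONE]": re.compile(r"\(?\b\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[DATE]": re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"),
}


def redact(text: str) -> str:
    """Replace common identifiers with placeholder tokens before AI processing."""
    for token, pattern in PATTERNS.items():
        text = pattern.sub(token, text)
    return text


note = "Client can be reached at 415-555-0132 or jane.doe@example.com; seen on 03/14/2024."
print(redact(note))
# -> Client can be reached at [PHONE] or [EMAIL]; seen on [DATE].
```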

In summary, clinicians should strictly adhere to relevant regulatory guidelines and standards when using AI, particularly GPT-4. This includes ensuring compliance with data privacy regulations (e.g., HIPAA in the United States) and following ethical guidelines for AI research and deployment. It is also important to remain mindful of variations in GPT-4’s performance across diverse populations to ensure fair assessment and equitable access. Not abiding by established regulatory guidelines can result in compromised patient confidentiality, potential legal ramifications, erosion of trust between clinicians and patients, and the perpetuation of biases and disparities in treatment outcomes. Further, failure to adhere to these guidelines could impede the progress and acceptance of AI in psychotherapy, hindering its potential to improve mental health care delivery and outcomes for diverse populations.

Caleb Onah is a lifelong African writer, psychologist, and academic researcher who has written for a variety of publications. Popularly called “the digital therapist” by his professional community, he is a trained therapist and writer with vast experience in trauma and addiction management.

Cite This Article

Onah, C., Ogwuche, C., & Sohn, L. (2024, March). Treatment procedures for behavioural risks associated with GPT-4 artificial intelligence model. Psychotherapy Bulletin, 59(3).

References

Bender, E.M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 Association for Computing Machinery (ACM) Conference on Fairness, Accountability, and Transparency, 1(1), 610–623. https://doi.org/10.1145/3442188.3445922

Blodgett, S. L., Barocas, S., Daumé III, H., & Wallach, H. (2020). Language (technology) is power: A critical survey of “bias” in NLP. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 1, 5454–5476. https://aclanthology.org/2020.acl-main.485/

Bohr, A., & Memarzadeh, K. (2020). The rise of artificial intelligence in healthcare applications. In Elsevier eBooks (pp. 25-60). https://doi.org/10.1016/b978-0-12-818438-7.00002-2

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 77–91. https://proceedings.mlr.press/v81/buolamwini18a.html

Chen, L., Zaharia, M., & Zou, J. (2024). How is ChatGPT’s behavior changing over time? Harvard Data Science Review, 1, 1-33. https://doi.org/10.1162/99608f92.5317da47

Chen, S., Gu, C., Wei, J., & Lv, M. (2023). Research on the influence mechanism of privacy invasion experiences with privacy protection intentions in social media contexts: Regulatory focus as the moderator. Frontiers in Psychology, 13, 1–13. http://doi.org/10.3389/fpsyg.2022.1031592

Dev, S., Sheng, E., Zhao, J., Amstutz, A., Sun, J., Hou, Y., Sanseverino, M., Kim, J., Nishi, A., Peng, N., & Chang, K. W. (2022). On measures of biases and harms in NLP. Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics, 1, 246–267. https://aclanthology.org/2023.findings-ijcnlp.0.pdf

Ganguli, D., Hernandez, D., Lovitt, L., DasSarma, N., Henighan, T., Jones, A., Joseph, N., Kernion, J., Mann, B., Askell, A., Bai, Y., Chen, A., Conerly, T., Drain, D., Elhage, N., Showk, S. E., Fort, S., Hatfield-Dodds, Z., Johnston, S., … Clark, J. (2022). Predictability and surprise in large generative models. Proceedings of the 2022 Association for Computing Machinery (ACM) Conference on Fairness, Accountability, and Transparency, 1(1), 1747-1764. https://dl.acm.org/doi/abs/10.1145/3531146.3533229

Goldstein, J. A., Sastry, G., Musser, M., DiResta, R., Gentzel, M., & Sedova, K. (2023, January). Forecasting potential misuses of language models for disinformation campaigns and how to reduce risk. [Web article]. Retrieved from https://openai.com/research/forecasting-misuse

Gummerus, J., Lipkin, M., Dube, A., & Heinonen, K. (2019). Technology in use – characterizing customer self-service devices (SSDS). Journal of Services Marketing, 33, 44–56. http://dx.doi.org/10.1108/JSM-10-2018-0292

Horvitz, E. (2022). On the horizon: Interactive and compositional deepfakes. Proceedings of the 2022 International Conference on Multimodal Interaction, 653–661. https://dl.acm.org/doi/10.1145/3536221.3558175

Johnson, B., & Williams, C. (2022). Study assesses GPT-4’s potential to perpetuate racial, gender biases in clinical decision making. Journal of Medical Ethics, 48(2), 1-7.

Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15–25. https://doi.org/10.1016/j.bushor.2018.08.004

Kreps, S., McCain, R. M., & Brundage, M. (2022). All the news that’s fit to fabricate: AI-generated text as a tool of media misinformation. Journal of Experimental Political Science, 9(1), 104–117. https://dx.doi.org/10.2139/ssrn.3525002

Kulkarni, S., Seneviratne, N., Baig, M. S., & Khan, A. H. A. (2020). Artificial intelligence in medicine: Where are we now? Academic Radiology, 27(1), 62–70. https://doi.org/10.1016/j.acra.2019.10.001

Lin, S., Hilton, J., & Evans, O. (2022). TruthfulQA: Measuring how models mimic human falsehoods. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, 1(1), 3214–3252. https://aclanthology.org/2022.acl-long.229/

Maddox, T. M., Rumsfeld, J. S., & Payne, P. R. O. (2019). Questions for artificial intelligence in health care. Journal of the American Medical Association, 321(1), 31–32. https://doi.org/10.1001/jama.2018.18932

Marritt, A. (2019, August). Hands-on feature engineering for natural language processing. [PowerPoint slides]. Retrieved from https://www.infoq.com/presentations/nlp-ml-dl

McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1955). A proposal for the Dartmouth Summer Research Project on artificial intelligence. AI Magazine, 27(4), 12. https://doi.org/10.1609/aimag.v27i4.1904

Myers, A. (2023, February). AI’s powers of political persuasion. [Web article]. Retrieved from https://hai.stanford.edu/news/ais-powers-political-persuasion

Nori, H., King, N., McKinney, S. M., Carignan, D., & Horvitz, E. (2023). Capabilities of GPT-4 on medical challenge problems. arXiv preprint. https://doi.org/10.48550/arXiv.2303.13375

OpenAI. (2019, November). GPT-2: 1.5B release. [Web article]. Retrieved from https://openai.com/research/gpt-2-1-5b-release

Scharre, P. (2018). Army of none: Autonomous weapons and the future of war. W. W. Norton & Company.

Rauh, M., Mellor, J., Uesato, J., Huang, P., Welbl, J., Weidinger, L., Dathathri, S., Glaese, A., Irving, G., Gabriel, I., Isaac, W., & Hendricks, L. A. (2022). Characteristics of harmful text: Towards rigorous benchmarking of language models. 36th Conference on Neural Information Processing Systems Track on Datasets and Benchmarks. https://doi.org/10.48550/arXiv.2206.08325

Richardson, J. P., Smith, C., Curtis, S., Watson, S., Zhu, X., Barry, B., & Richard, R. S. (2021). Patient apprehensions about the use of artificial intelligence in healthcare. npj Digital Medicine, 4(1), 40. https://doi.org/10.1038/s41746-021-00509-1

Shea, Y. F., Lee, C. M. Y., Ip, W., Luk, D. W. A., & Wong, S. S. W. (2023). Use of GPT-4 to analyze medical records of patients with extensive investigations and delayed diagnosis. Journal of the American Medical Association Network Open, 6(8), 1-4. https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2808251

Truhn, D., Weber, C. D., Braun, B. J., Bressem, K., Kather, J. N., Kuhl, C., & Nebelung, S. (2023). A pilot study on the efficacy of GPT-4 in providing orthopedic treatment recommendations from MRI reports. Scientific Reports, 13, 1–9. https://doi.org/10.1038/s41598-023-47500-2

Weidinger, L., Uesato, J., Rauh, M., Griffin, C., Huang, P., Mellor, J., Glaese, A., Cheng, M., Balle, B., Kasirzadeh, A., Biles, C., Brown, S., Kenton, Z., Hawkins, W., Stepleton, T., Birhane, A., Hendricks, L. A., Rimell, L., Isaac, W., … Gabriel, I. (2022). Taxonomy of risks posed by language models. Proceedings of the 2022 Association for Computing Machinery (ACM) Conference on Fairness, Accountability, and Transparency, 214–229. https://doi.org/10.1145/3531146.3533088
