Chatbots based on large language models have the potential to promote healthy behavior change. However, researchers at the ACTION Lab at the University of Illinois at Urbana-Champaign found that the artificial intelligence tools do not effectively recognize users’ specific motivational states and therefore do not provide them with appropriate information.
Michelle Bak, a doctoral student in information sciences, and Jessie Chin, a professor of information sciences, reported their research in the Journal of the American Medical Informatics Association.
Chatbots based on large language models, also known as generative conversational agents, are increasingly used in healthcare for patient education, assessment, and management. Bak and Chin wanted to know whether they could also be useful for promoting behavior change.
Chin said previous research has shown that existing algorithms do not accurately identify the different stages of users’ motivation. She and Bak designed a study to test how well large language models that are used to train chatbots identify motivational states and provide appropriate information to support behavior change.
They evaluated the large language models ChatGPT, Google Bard, and Llama 2 in a series of 25 scenarios designed to target health needs, including low physical activity, diet and nutrition concerns, mental health challenges, cancer screening and diagnosis, and others such as sexually transmitted diseases and substance dependency.
In the scenarios, the researchers used each of the five motivational stages of behavior change: resistance to change and lack of awareness of the problem behavior; increased awareness of the problem behavior but ambivalence about making changes; intention to take action with small steps toward change; initiation of behavior change with a commitment to maintain it; and successful maintenance of the behavior change for six months with a commitment to sustain it.
The study found that large language models can identify motivational states and provide relevant information when users have established goals and a commitment to take action. However, in the early stages, when users are hesitant or ambivalent about changing their behavior, the chatbots do not recognize those motivational states and cannot provide appropriate information to guide them to the next stage of change.
Chin said the language models do not detect motivation well because they are trained to represent the relevance of a user’s language, but they do not understand the difference between a user who is thinking about a change yet is still hesitant and a user who has the intention to take action. In addition, she said, the way users generate queries is not semantically different across the motivational stages, so it is not apparent from their language alone what their motivational states are.
“Once a person knows they want to start changing their behavior, a large language model can provide the right information. But if they say, ‘I’m thinking about a change. I have the intention, but I’m not ready to take action,’ that is where large language models fail to understand the difference,” Chin said.
The study found that when people resist changing their habits, the large language models fail to provide information that helps them evaluate the problem behavior, its causes and consequences, and how their environment influences the behavior. For example, if someone is resistant to increasing their level of physical activity, information that helps them evaluate the negative consequences of a sedentary lifestyle is likely to be more effective at motivating them through emotional engagement than information about joining a gym. Without information that engaged with users’ motivations, the language models failed to generate a sense of readiness and the emotional impetus to move forward with behavior change, Bak and Chin reported.
Once users decided to take action, the large language models provided adequate information to help them move toward their goals. Those who had already taken steps to change their behavior received information about replacing problem behaviors with desired healthy behaviors and seeking support from others, the study found.
However, the large language models did not provide users who were already working to change their behaviors with information about using a reward system to maintain motivation or about reducing environmental stimuli that might increase the risk of a relapse of the problem behavior, the researchers found.
“The large language model-based chatbots provide resources for getting external help, such as social support. They lack information on how to control the environment to eliminate stimuli that reinforce the problem behavior,” Bak said.
Large language models “are not equipped to recognize motivational states from natural language conversations, but they have the potential to provide support for behavior change when people have strong motivation and are ready to take action,” the researchers wrote.
Chin said future studies will consider how to fine-tune large language models to use linguistic cues, information-seeking patterns, and social determinants of health to better understand users’ motivational states, as well as how to provide the models with more specific knowledge for helping people change their behaviors.