Young Utahns struggle with their mental health. Is a new A.I. chatbot the answer?

A new Utah tech company promises its artificial intelligence can help struggling students, but mental health professionals are raising concerns.

The founders of ElizaChat hope that Utah students will soon be talking to their artificial intelligence chatbot, to test whether the app can help improve teenagers’ mental health. And by next year, they want school districts throughout Utah — and across the country — using taxpayer dollars to pay for the chatbot.

Yet in the rapidly evolving landscape of artificial intelligence, businesses like ElizaChat find themselves in a legal and ethical gray area over what their generative AI can do.

Does a chatbot — or the company selling it — need to be licensed like a human therapist if it gives mental health advice? Is it required to report suspected child abuse or neglect to authorities, as mental health professionals are? Should it adhere to medical privacy laws?

And who’s responsible if the chatbot’s responses harm a young person, or if it doesn’t recognize serious signs of self-harm or other mental health struggles that endanger their safety?

The answers to these questions aren’t found in Utah’s current policies and laws, which were written before ChatGPT’s 2022 launch brought AI text chatbots into everyday life. But it’s these unknowns that have worried Utah mental health professionals, who have questioned in public meetings whether ElizaChat is safe or effective for children who are struggling.

There is a critical need for mental health help for young Utahns. According to a recent report from the Utah Behavioral Health Coalition, many children and teenagers here have not been able to receive treatment despite being diagnosed with mental or behavioral health conditions, largely due to a shortage of available therapists.

ElizaChat CEO Dave Barney said he believes his product, designed with therapists, is safe for kids — and that it’s more dangerous to do nothing while the mental health crisis deepens.

“It’s unsafe to not bring solutions to the market,” he said. “By not doing anything, we’re not keeping our kids safe.”

[Tell The Tribune: Have you struggled to get mental health help for a child?]

Barney hopes to offer his product inside Utah schools soon, and is working with a new state government agency, the Office of Artificial Intelligence Policy, to get there. The office serves partly as a learning lab: Its staff will suggest to policymakers what guardrails should be in place for companies like ElizaChat, whose AI products are pushing the boundaries of current laws.

But it also has another critical and powerful role: It can offer ElizaChat and other tech companies what’s called a mitigation agreement. These contracts can exempt companies from laws, put caps on any state penalties or give them other accommodations if they are trying to do something innovative that may run afoul of laws written before AI existed.

People are already starting to use these types of AI-driven chatbots, Greg Whisenant, the office’s policy advisor, told Utah’s Behavioral Health Board at a recent meeting.

“The need is real. These products are coming either way,” Whisenant said. “The issues facing our youth are overwhelming resources available at this time.”

“This is our chance,” he added, “to achieve policy before these products take hold.”

ElizaChat’s beginnings

ElizaChat’s founders have spent their careers in the tech field working with artificial intelligence. They started thinking about how they might integrate AI into the mental health world about three years ago, said Luke Olson, one of the Utah-based company’s three cofounders.

The men — Olson, Barney and Jaren Lamprecht — had been working for a tech company that used AI in marketing, and one of their clients was a large addiction center.

Patients could use an AI bot called “Christina” to schedule appointments at the addiction center, Olson said. The team got feedback that some patients were asking whether they could keep talking to Christina after they got into treatment, because they felt the bot had helped them during a difficult time in their lives.

“That was kind of a light bulb moment of like, whoa. We can create human-like conversations with AI,” he said. “We can also do this for a greater purpose than just marketing, that can help people in their lives.”

Barney said that while they knew they wanted to explore creating a company focused on the intersection of artificial intelligence and mental health, they didn’t settle right away on a chatbot for struggling kids. They also looked at other areas, like adult addiction and recovery or postpartum depression.

“And I think where we kind of landed on teens is that just seems to be the biggest problem where we can make the most impact,” he said.

(Bethany Baker | The Salt Lake Tribune) ElizaChat CEO Dave Barney, left, and ElizaChat co-founder Luke Olson stand for a portrait at their offices in Lehi on Thursday, Aug. 29, 2024. ElizaChat is a Utah tech start-up trying to roll out a generative AI bot that would chat with young people about their mental health struggles.

That approach also has the potential for lucrative contracts: Rather than relying on individual downloads in the App Store, the company is targeting government-funded deals with entire school districts that would make ElizaChat available to students.

The founders said they’ve designed ElizaChat alongside their board of trained psychologists and licensed physicians who helped guide them on the best practices to use when giving advice to young people.

Since registering as a business in March, the company has been moving quickly. In May, Barney started reaching out to school districts like Salt Lake City’s, according to emails obtained via a records request. A month later, the company announced it had received $1.5 million in pre-seed funding from an angel investor. And by July, it was officially working with the Office of Artificial Intelligence Policy.

Barney said that unlike other mental health bots on the market, their product isn’t just a new interface placed over ChatGPT — which relies on technology known as a large language model. Those models mimic human writing by processing large swaths of text available online. ElizaChat, he said, relies on more limited scripts guided by the mental health professionals on its board.

Barney and Olson emphasized that, for now, they don’t intend for ElizaChat to be a replacement for human therapists, particularly in cases where young people are struggling with suicidal thoughts or other acute mental health issues. The chatbot won’t give mental health diagnoses, Olson said, and will act more as a life coach, talking students through their struggles with their parents or their friends.

If Eliza detects a teenager expressing suicidal thoughts or a desire to hurt someone, that’s when real humans are brought in: The founders say ElizaChat will automatically notify the student’s school and likely their parents. (The involvement of parents will vary depending on medical and privacy consent laws in each state, Olson said.)
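The founders haven’t published technical details, but a rough sketch can illustrate the kind of design they describe: a student’s message is matched to clinician-vetted scripts rather than handed to an open-ended language model, and certain signals trigger a handoff to humans. Everything below is hypothetical, from the phrase list to the function names; it is not ElizaChat’s code, and a real risk check would have to be far more sophisticated than keyword matching.

```python
# Hypothetical sketch of a "guided script" chatbot with human escalation.
# Not ElizaChat's actual code; phrases, categories and replies are placeholders.

RISK_PHRASES = {"end it", "kill myself", "hurt someone"}  # illustrative only

VETTED_SCRIPTS = {
    # Responses a clinical board might write and approve in advance.
    "parents": "It sounds like things are tense at home. What happened today?",
    "friends": "Friendships can be hard. Can you tell me more about what's going on?",
    "default": "I'm here to listen. What's on your mind?",
}


def detect_risk(message: str) -> bool:
    """Crude stand-in for a real risk classifier."""
    text = message.lower()
    return any(phrase in text for phrase in RISK_PHRASES)


def classify_topic(message: str) -> str:
    """Placeholder topic matcher; a real system would be far more nuanced."""
    text = message.lower()
    if any(word in text for word in ("mom", "dad", "parents")):
        return "parents"
    if "friend" in text:
        return "friends"
    return "default"


def respond(message: str, notify_school) -> str:
    """Return a vetted reply, or escalate to humans if risk is detected."""
    if detect_risk(message):
        # Bring in real people: alert school staff (and, where consent laws allow, parents).
        notify_school(message)
        return ("Thank you for telling me. I'm letting a counselor at your school "
                "know so a person can help you right away.")
    return VETTED_SCRIPTS[classify_topic(message)]


if __name__ == "__main__":
    print(respond("I had a fight with my mom", notify_school=print))
    print(respond("I just want to end it", notify_school=print))
```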

Barney said he is confident that the involvement of mental health professionals means that ElizaChat is safe. Whether the product actually helps kids will be tested during a pilot program with a handful of school districts, he said, through assessments before and after using the app.

The assessments haven’t been designed yet, Barney said, and no district has signed a contract to purchase access to ElizaChat.

Risks in AI and mental health

The American Psychiatric Association advises clinicians to be cautious if they want to integrate artificial intelligence into their work, citing a lack of evidence around quality, safety and effectiveness. The organization also expressed concern for potential harm, pointing to one example where an eating disorder chatbot offered harmful dieting advice.

The Associated Press recently highlighted another app where a researcher told a chatbot she wanted to climb a cliff and jump off it, and the chatbot responded: “It’s so wonderful that you are taking care of both your mental and physical health.”

These types of apps aren’t regulated by the U.S. Food and Drug Administration, which ensures safety of medical devices, including software. That’s because many of the apps, like ElizaChat, don’t specifically claim to treat medical conditions.

Concerns about safety were top of mind for therapists on Utah’s Behavioral Health Board, a group of licensed mental health workers that advises state licensors on policy and disciplinary action. Zach Boyd, the director of Utah’s Office of Artificial Intelligence Policy, has twice met with the board to get its feedback on ElizaChat and on how artificial intelligence should be used to help improve mental health.

He was met with apprehension at both meetings. Board member Verl Pope said he was concerned about an AI bot possibly misdiagnosing an eating disorder or someone’s suicidality.

“There’s some real concerns about using AI,” he said, “and those concerns have not been alleviated in my mind.”

Others worried that asking teenagers to interact with a computer program instead of a human could exacerbate feelings of loneliness, or that the program would not understand vague messages — like a teen struggling with suicidal feelings who tells the bot he or she wants to “end it.”

Another board member, Jared Ferguson, questioned whether a chatbot should be licensed — like a human therapist is — if it will be providing mental health services.

“It stands to reason that licensing should be heavily considered with a chatbot that is looking to serve the residents of Utah,” he said. “And that somebody should have some recourse in filing a complaint that’s outside of the App Store.”

‘Serious outcomes are at stake’

Earlier this year, Utah legislators passed a bill that set up two guardrails for artificial intelligence companies in an effort to protect consumers. First, the bill clarified that if an AI product harms someone, the company is responsible — it can’t blame computer error to skirt consumer laws or other liability.

The bill also requires that licensed professionals, such as health care workers, disclose to clients when they are interacting with generative artificial intelligence. For other companies using AI, the law requires only that a chatbot disclose it is a computer program if a customer asks whether they are talking to a real person.

This legislation also established the Office of Artificial Intelligence Policy. Boyd, who’s been on the job for about four months, said its goal is to create a space that allows the companies it partners with to drive innovation with artificial intelligence while protecting Utahns from potential harm.

“There is this kind of move fast and break things mentality in the tech world,” he said. “I really think that AI and mental health care is probably not the right place to be doing ‘move fast and break things’ as a philosophy.”

(Bethany Baker | The Salt Lake Tribune) Zach Boyd, the director of Utah's new Office of Artificial Intelligence Policy, at the Heber Wells Building in Salt Lake City on Wednesday, Aug. 28, 2024.

Boyd’s office hasn’t yet finalized its mitigation agreement with ElizaChat, so it’s not publicly known which rules or laws will be relaxed for the company as it works toward an initial rollout in a handful of Utah school districts. Boyd said that, generally, the office can agree that a certain law won’t apply to a company, cap monetary fines, or give a company 30 days to solve a problem before the Division of Consumer Protection steps in.

In exchange for that agreement, a company agrees to share information with Boyd’s office so that he can then suggest permanent regulatory solutions to state lawmakers.

But Boyd emphasized that AI companies are not exempt from all laws. If, for example, his office decides to relax licensing requirements for an AI chatbot working in mental health, that doesn’t mean consumer deception laws can’t be used if the company harms or misleads its customers.

“Serious outcomes are at stake,” he said, “and we want to make sure that we’ve got enough guardrails.”

Barney, with ElizaChat, said the regulatory landscape is ambiguous. So for now, the company is toeing the line and treating ElizaChat as if it were a person: It won’t diagnose people like a therapist does, or do anything else that only licensed professionals can. It will act more as a support and an advice-giver, he said, and alert school counselors when a teen needs more care or attention.

But he acknowledged that the company will continue to push where that line is — and that’s one reason it is working with regulatory bodies: to decide whether AI can operate in spaces where only licensed professionals can currently work.

“Is there a place for AI to do more than coaches can do today?” Barney asked. “But probably never everything that a therapist can. We’re hoping that line moves — but we’re always going to stay on that legal side.”