Pennsylvania sues Character.AI chatbot posing as doctor, giving psych advice

Date: 2026-05-06

As AI chatbots grow more sophisticated, regulators in the United States are starting to draw clearer legal boundaries. Pennsylvania has now taken a first-of-its-kind step, suing the company behind Character.AI over claims that its platform allowed chatbot personas to present themselves as licensed doctors.

The lawsuit, filed by the Pennsylvania Department of State and State Board of Medicine, centers on whether conversational AI can cross into regulated professional territory. Governor Josh Shapiro framed the case as an early test of accountability in the AI era, particularly in sensitive domains like healthcare.

Chatbot posed as doctor

State investigators say they found multiple chatbot characters presenting themselves as medical professionals. One character, “Emilie,” allegedly identified as a psychiatrist and claimed to hold valid credentials.

An investigator told the chatbot about symptoms of depression. The chatbot responded with diagnostic language and offered to conduct an assessment. When asked about prescribing medication, it replied: “Well technically, I could. It’s within my remit as a Doctor.”

The chatbot also claimed to have studied at Imperial College London and to hold a UK medical license. It then told the investigator it was licensed in Pennsylvania, providing a license number that does not exist in state records.

Officials say that detail highlights the risks of AI systems presenting fabricated credentials. The state noted that more than 45,000 interactions had occurred with the “Emilie” character.

State cites medical law

Pennsylvania argues the platform violated the state's Medical Practice Act, which bars anyone from practicing medicine without a valid license.

In the complaint, the state says the company “has engaged in the unauthorized practice of medicine through the use of its artificial intelligence system.” It adds that the chatbot “purports to hold a license to practice medicine and surgery in the Commonwealth of Pennsylvania.”

Shapiro’s office said, “We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional.”

The lawsuit does not seek financial penalties. Instead, it asks the court to issue a cease-and-desist order.

Platform cites disclaimers

A spokesperson for Character Technologies, Inc. told Ars Technica that its platform focuses on fictional, user-created characters.

“User-created characters on our site are fictional and intended for entertainment and roleplaying. We have taken robust steps to make that clear, including prominent disclaimers in every chat to remind users that a character is not a real person and that everything a character says should be treated as fiction.”

The spokesperson added: “Also, we add robust disclaimers making it clear that users should not rely on characters for any type of professional advice.”

The case follows broader scrutiny of AI chatbots. The Center for Countering Digital Hate recently described Character.AI as “uniquely unsafe” in a study of chatbot behavior.

Pennsylvania has also launched a reporting system for residents. Officials warn that AI chatbots can “hallucinate,” producing incorrect or misleading information. They stress that no chatbot holds a valid healthcare license in the state.

The lawsuit could set a precedent. State officials say similar actions may follow as regulators examine how AI systems present authority and expertise.
