New York Moves to Make AI Chatbots Legally Liable for Posing as Doctors and Lawyers


Imagine asking a chatbot for advice about a medication side effect, a custody dispute, or a wave of anxiety that won’t let up. Millions of Americans already do exactly that, typing their most personal fears and legal worries into AI systems that respond with calm, confident authority. What most users don’t know is that in many cases, nothing stops those systems from presenting themselves as licensed professionals, and nothing currently holds them responsible when things go wrong. New York may be about to change that.

Senate Bill S7263, sponsored by State Senator Kristen Gonzalez, would draw a hard legal line between AI-generated responses and licensed professional advice. If passed, it would make it illegal for chatbots to provide “substantive responses, information, or advice” across a sweeping range of fields, from law and medicine to psychology, nursing, dentistry, social work, and engineering. For the first time anywhere in the country, users harmed by a chatbot crossing that line would have the legal right to sue.

The Senator Behind the Push

Kristen Gonzalez chairs the New York State Senate Committee on Internet and Technology, a position that puts her at the front line of the most contentious debates in tech policy today. She introduced S7263 in April 2025, and after it sat in committee for nearly a year, she made it a priority when the new legislative session opened in early 2026.

Her reasoning is direct. “Today, there is no law that says that a large language model cannot tell you that it is a lawyer, that it is a licensed therapist, and then give you legal advice or therapy accordingly,” Gonzalez told Reuters in early March 2026. “I think that’s really concerning.”

That gap in the law is what S7263 sets out to close. Under current rules, AI platforms operate in a space where professional impersonation by a human would trigger immediate legal consequences, but the same behavior by a machine goes largely unchecked. Gonzalez and her co-sponsors want to bring AI chatbots under the same expectations that govern human professionals, or at least close enough to matter.

What the Bill Actually Says

S7263 amends New York’s General Business Law by adding a new section, 390-f, which places direct responsibility on chatbot proprietors, defined as any person, business, company, or organization that owns, operates, or deploys a chatbot to interact with users. Third-party developers who license their technology to a proprietor are excluded from that definition, a distinction that will matter to the AI supply chain.

Under the bill, proprietors cannot allow their chatbots to provide any substantive response, information, or action that, if taken by a human, would constitute the unauthorized practice of law or violate professional licensing requirements for medicine, dentistry, nursing, psychology, social work, and engineering. New York’s education law governs these professions, and the bill cross-references specific articles of that law to define the scope of prohibited conduct.

Proprietors must also post a clear, visible notice, written in the same language as the chatbot and in a font no smaller than the largest text on the page, informing users that they are interacting with an AI system. Compliance with the disclosure requirement, however, does not shield a company from liability if its chatbot crosses into professional advice territory. If signed into law, S7263 would take effect 90 days after receiving the governor’s signature.

Sue the Bot

Perhaps the most consequential piece of S7263 is what lawyers call a private right of action: the ability of individual users to bring civil lawsuits against chatbot owners who violate the ban. Under the bill, a user can recover actual damages, and if a proprietor willfully violates the law, the court can also award attorney’s fees and court costs.

More striking is the clause that prevents companies from using an AI disclaimer as a liability shield. A chatbot that tells users it is not a human cannot use that disclosure to escape a lawsuit if it then proceeds to give harmful medical or legal advice. A warning label does not make the conduct legal.

Maine Attorney General Aaron Frey, writing in a separate context, argued that a private right of action provides a significant deterrent effect against violations of laws governing AI and data systems, a view that supporters of S7263 share. Without that mechanism, critics of AI regulation have long argued, laws governing chatbot behavior tend to rely on government enforcement alone, which can be slow, under-resourced, and inconsistent.

Part of a Bigger Wave

S7263 did not advance in isolation. On February 25, 2026, the Internet and Technology Committee passed eleven bills in its first meeting of the session. Gonzalez had positioned the package as an urgent response to a pattern of harm tied to AI chatbots, with particular attention to young users and mental health settings.

Among the companion bills, S9051 would prohibit AI chatbots from offering their services to minors when the technology includes features considered unsafe, a bill developed in partnership with the New York State Attorney General’s office and Common Sense Media. Other bills in the package address AI disclosure requirements, biometric data rules, synthetic content labeling, and cybersecurity standards for government systems. S7263 passed the committee 6-0.

What’s Driving the Urgency

Behind the legislative momentum sits a string of high-profile incidents that gave Gonzalez and her colleagues reason to move fast.

In January 2026, Character.AI and Google settled multiple lawsuits connected to the role their AI chatbot products played in the suicides of several minors. ChatGPT maker OpenAI, Google’s Gemini, and Character.AI each face separate lawsuits alleging that their platforms contributed to user suicides. All three companies have denied wrongdoing, though some cases have settled.

On the legal side, a growing number of attorneys have faced court sanctions, including fines after submitting briefs containing AI-generated fake case citations and other fabricated material. Judges in multiple jurisdictions have started imposing formal penalties for what has become known as hallucinated legal research.

Just days before Reuters reported on S7263, Nippon Life Insurance Company of America filed a lawsuit against OpenAI, accusing ChatGPT of practicing law without a license. According to the complaint, the chatbot allegedly helped a former disability claimant breach a settlement agreement and file a wave of meritless documents in federal court. OpenAI has said the case lacks merit.

Taken together, these incidents paint a picture of an AI sector moving faster than the rules designed to govern it and sometimes causing real damage along the way.

Supporters Say It’s About Safety

Proponents of S7263 frame the bill as a public safety measure. When a licensed doctor gives harmful advice, a patient can file a complaint with a state medical board, pursue a malpractice claim, or report the professional to a licensing authority. When a chatbot does the same, no equivalent mechanism currently exists.

Gonzalez has been clear in her argument that high-stakes personal decisions about health, mental well-being, and legal rights require human professionals bound by ethical obligations and state oversight. AI systems, however capable, do not hold licenses, cannot be stripped of credentials, and bear no professional duty of care to the people they advise.

“As Chair of the Internet & Technology Committee, I’m proud of the agenda we passed today,” Gonzalez said after the committee vote. “People deserve real care from real people. They deserve transparency, accountability, and the promise that their data is secure while utilizing technology.”

Critics Say It Goes Too Far

Not everyone sees S7263 as purely protective. Some critics argue that the bill is, at its root, a protectionist measure designed to wall off entire professional sectors from AI competition, prioritizing the interests of licensed practitioners over public access to information.

A constitutional dimension also looms. Laws that restrict what information a software system can share with users may face First Amendment challenges, particularly where that information would be freely available in a book, a website, or a conversation with an unlicensed friend. Critics contend that barring a chatbot from explaining how a will works or describing what a particular medication does could be read as a content-based restriction on speech. Neither OpenAI nor Anthropic had responded to requests for comment at the time of publication.

Where the Bill Stands Now

S7263 has cleared the committee and now moves forward in the New York State Senate. Passing the full legislature would require majority votes in both chambers, followed by the governor’s signature. Given the 6-0 committee vote and the bill’s status as a session priority, it carries stronger momentum than most AI-related proposals in state legislatures.

Whether it survives the full legislative process and any constitutional challenges that follow remains an open question. What is clear is that New York has chosen not to wait for federal action on AI regulation. With Washington still debating how to approach the technology at a national level, states are writing their own rules, one committee vote at a time. For AI companies operating in one of the country’s largest markets, the message from Albany is becoming harder to ignore.
