Texas’ new artificial intelligence law, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), takes effect in exactly one week, on January 1, 2026.
The act attempts to address rising concerns around so-called “agentic AI”: artificial intelligence systems used to answer user questions or handle intake. These “AI agents” have exploded in popularity in the past few years as companies look to cut the cost (and hassle) of hiring and managing customer service staff.
These services, however, have expanded beyond tech sites and basic support into professional services, including the legal, medical, and mental-health professions. The use case makes sense: these bots give clients a human-like way to interact with the firm and act as a filter, distilling the contact’s information and letting the firm (and the client) know immediately whether they are in the right place.
In the medical professions, these agents can let patients find the information they need without coming in, and if they do come in, the clinic can skip the intake interview and be ready to treat the patient right away. This emerging technology is not without flaws, however, and recent lawsuits have exposed it as a major vector for personal injury claims.
So What’s the Problem with AI Chatbots?
We just said that agentic AI is a tool businesses can use to save money and that lets customers search basic topics in natural language. The core problem, however, is that these agents are not lawyers, doctors, counselors, or any other type of licensed professional, despite what some articles and companies may claim. Not only does AI lack the context to provide reasoned responses, it can (and often does) completely fabricate information, a phenomenon known as “hallucination.”
And that has serious implications for licensed professions, where misinformation or wrongly applied advice can result in injury. The danger is that chatbots can give outright dangerous advice, as in 2021 when Amazon’s Alexa told a 10-year-old to touch a penny to the exposed prongs of a partially inserted plug.
This is a serious problem, one that could leave solo practitioners and small firms with only a handful of employees exposed to personal injury claims.
Where’s the Accountability?
Let’s explore a hypothetical. If a bot gives advice that harms someone, or tells the person it is chatting with to do something illegal, who actually gets in trouble?
- The bot itself for providing misinformation?
- The owner of the website for using a flawed tool?
- The company that created the chatbot?
- The company that provided the data for the company that created the chatbot?
The chatbot itself obviously can’t be held responsible, so the blame would have to be assigned, in whole or in part, to the site owner (the doctors, lawyers, or counselors who deploy the chatbot), to the company that provides or builds the chatbot, or to the end user for misusing the tool.
What are the States Doing to Protect Consumers?
Thus far, most states have passed laws to rein in the scope of AI, mostly targeting deepfakes and revenge porn. Recent bills in Nevada, California, Colorado, Utah, and Texas, however, have targeted the use of AI agents in licensed professions.
Texas
As of September 1, 2025, health care practitioners (HCPs) have statutory authorization to use AI in health care, provided that:
- (1) the practitioner is acting within the scope of the practitioner’s license, certification, or other authorization to provide health care services in this state, regardless of the use of artificial intelligence;
- (2) the particular use of artificial intelligence is not otherwise restricted or prohibited by state or federal law; and
- (3) the practitioner reviews all records created with artificial intelligence in a manner that is consistent with medical records standards developed by the Texas Medical Board.
Additionally, Texas passed TRAIGA itself earlier this year. The bill, passed in May, takes effect on Jan. 1, 2026 and establishes disclosure requirements, outlines prohibited uses, and sets civil penalties.
The bottom line for TRAIGA is that AI chatbot use:
- Must be disclosed
- Must not incite the user to violence against themselves or others
- Must not “score” or otherwise track users, even by using non-identifying characteristics or traits
- Must not be used to develop or distribute unlawful sexual content, including deep fakes
Private Right of Action
The Attorney General of Texas, currently Ken Paxton, holds nearly exclusive enforcement authority under TRAIGA. Unfortunately, under Section 552.101(b), the law explicitly does not create a “private right of action,” the basis for any tort, or personal injury, lawsuit brought by an individual. For more information on the types of personal injury claims, read our article here.
Instead, a complaint must be lodged against the offending company, which will then be notified and, if necessary, pursued by the Texas Attorney General’s office.
That being said, nothing in TRAIGA bars a private right of action against LLM companies in general. Its enforcement scheme covers only companies that use chatbots as part of their service for preparing materials or serving clients. OpenAI, Google, Perplexity, Anthropic, and other large-language-model developers remain open to private liability suits.
California
On September 11, 2025, California passed Senate Bill 243 (SB 243). According to the bill’s author, the bill requires operators to implement “critical, reasonable, and attainable safeguards” for interactions. Unlike Texas, California explicitly provides families “a private right to pursue legal actions against noncompliant and negligent developers.”
What Does TRAIGA Mean For You?
Ideally, this law means less misinformation and a hard stance on abusive tooling. On the other hand, AI agents may become more expensive and, most likely, less helpful. The easiest way to sidestep the misinformation dilemma entirely is to allow bots only to direct users to existing articles, much like a help desk at the mall or the support bots used by large companies such as Lenovo and Samsung. There is a strong likelihood that most agentic AI will ultimately amount to a very resource-intensive web search.
For Legal and Medical Professionals
TRAIGA allows a 60-day grace period to rectify any violations. In fact, the law explicitly forbids the Attorney General’s office from taking action until the 60-day notice period has run. This is much more lenient than the cure provisions in other states’ laws.
Professionals must be cautious when deploying AI and must track its use carefully. Because of the sheer size of the datasets behind it, artificial intelligence remains a “black box,” an unknowable and ultimately unpredictable tool.
Contact an Experienced Houston Injury Attorney
The age of AI is exciting, but also overwhelming. Many states are looking to curb and control the use of artificial intelligence, but it may be too little, too late. Cases of AI psychosis are on the rise, and our colleagues have brought several lawsuits against AI companies.
If you or a loved one have been injured by the use of artificial intelligence, including by harmful advice or the development of AI psychosis, you need to contact the Law Offices of Hilda Sibrian immediately. Hilda Sibrian has served her Houston clients for over 21 years, providing unparalleled expertise, compassion and attention to each and every one.
Contact our firm today at 713-714-1414 or by filling out our online contact form.