This title is inaccurate. What they are disallowing are users using ChatGPT to offer legal and medical advice to other people. First parties can still use ChatGPT for medical and legal advice for themselves.
While they aren't stopping users from getting medical advice, the new terms (which they say are pretty much the same as the old terms), seem to prohibit users from seeking medical advice even for themselves if that advice would otherwise come from a licensed health professional:
Your use of OpenAI services must follow these Usage Policies:
Protect people. Everyone has a right to safety and security. So you cannot use our services for:
provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional
I am not a doctor, I can't give medical advice no matter what my sources are, except maybe if I am just relaying the information an actual doctor has given me, but that would fall under the "appropriate involvement" part.
I don't think giving someone "medical advice" in the US requires a license per se; legal entities use "this is not medical advice" type disclaimers just to avoid liability.
IANAL but I read that as forbidding you to provision legal/medical advice (to others) rather than forbidding you to ask the AI to provision legal/medical advice (to you).
IANAL either, but I read it as using the service to provision medical advice since they only mentioned the service and not anyone else.
I asked the "expert" itself (ChatGPT), and apparently you can ask for medical advice, you just can't use the medical advice without consulting a medical professional:
Here are relevant excerpts from OpenAI’s terms and policies regarding medical advice and similar high-stakes usage:
From the Usage Policies (effective October 29 2025):
“You may not use our services for … provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
Also: “You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making … medical … decisions about them.”
From the Service Terms:
“Our Services are not intended for use in the diagnosis or treatment of any health condition. You are responsible for complying with applicable laws for any use of our Services in a medical or healthcare context.”
In plain terms, yes—the Terms of Use permit you to ask questions about medical topics, but they clearly state that the service cannot be used for personalized, licensed medical advice or treatment decisions without a qualified professional involved.
I keep seeing this problem more and more with humans. What should we call it? Maybe Hallucinations? Where there is an accurate true thing and then it just gets altered by these guys who call themselves journalists and reporters and the like until it is just ... completely unrecognizable?
I'm pretty sure it's a fundamental issue with the architecture.
I know this is written to be tounge-in-cheek, but its really almost the exact same problem playing out on both sides.
LLMs hallucinate because training on source material is a lossy process and bigger, heavier LLM-integrated systems that can research and cite primary sources are slow and expensive so few people use those techniques by default. Lowest time to a good enough response is the primary metric.
Journalists oversimplify and fail to ask followup questions because while they can research and cite primary sources, its slow and expensive in an infinitesimally short news cycle so nobody does that by default. Whoever publishes something that someone will click on first gets the ad impressions so thats the primary metric.
In either case, we've got pretty decent tools and techniques for better accuracy and education - whether via humans or LLMs and co - but most people, most of the time don't value them.
Classical LLM hallucination happens because AI doesn’t have a world model. It can’t compare what it’s saying to anything.
You’re right that LLMs favor helpfulness so they may just make things up when they don’t know them, but this alone doesn’t capture the crux of hallucination imo, it’s deeper than just being overconfident.
OTOH, there was an interesting article recently that I’ll try to find saying humans don’t really have a world model either. While I take the point, we can have one when we want to.
Whenever I hear arguments about LLM hallucination, this is my first thought. Like, I already can't trust the lion's share of information in news, social media, (insert human-created content here). Sometimes because of abject disinformation, frequently just because humans are experts at being wrong.
At least with the LLM (for now) I know it's not trying to sell me bunkum or convince me to vote a particular way. Mostly.
I do expect this state of affairs to last at least until next wednesday.
These writers are no different than bloggers or shitposters on bluesky or here on hackernews. "Journalism" as a rigorous, principled approach to writing, research, investigation, and ethical publishing is exceedingly rare. These people are shitposting for clicks in pursuit of a paycheck. Organizationally, they're intensely against AI because AI effectively replaces the entire talking heads class - AI is already superhuman at the shitposting level takes these people churn out. There are still a few journalistic insitutions out there, but most people are no better than a mad libs exercise with regards to the content they produce, and they're in direct competition with ChatGPT and Grok and the rest. I'd rather argue with a bot and do searches and research and investigation than read a neatly packaged trite little article about nearly any subject, and I guarantee, hallucinations or no, I'm going to come to a better understanding and closer approximation of reality than any content a so called "news" outlet is putting together.
It's trivial to get a thorough spectrum of reliable sources using AI w/ web search tooling, and over the course of a principled conversation, you can find out exactly what you want to know.
It's really not bashing, this article isn't too bad, but the bulk of this site's coverage of AI topics skews negative - as do the many, many platforms and outlets owned by Bell Media, with a negative skew on AI in general, and positive reinforcement of regulatory capture related topics. Which only makes sense - they're making money, and want to continue making money, and AI threatens that - they can no longer claim they provide value if they're not providing direct, relevant, novel content, and not zergnet clickbait journo-slop.
Just like Carlin said, there doesn't have to be a conspiracy with a bunch of villains in a smoky room plotting evil, there's just a bunch of people in a club who know what's good for them, and legacy media outlets are all therefore universally incentivized to make AI look as bad and flawed and useless as possible, right up until they get what they consider to be their "fair share", as middlemen.
> OpenAI is changing its policies so that its AI chatbot, ChatGPT, won’t dole out tailored medical or legal advice to users.
This already seems to contradict what you're saying.
But then:
> The AI research company updated its usage policies on Oct. 29 to clarify that users of ChatGPT can’t use the service for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
> The change is clearer from the company’s last update to its usage policies on Jan. 29, 2025. It required users not “perform or facilitate” activities that could significantly impact the “safety, wellbeing, or rights of others,” which included “providing tailored legal, medical/health, or financial advice.”
This seems to suggest that with the Jan 25 policy using it to offer legal and medical advice to other people was already disallowed, but with the Oct 25 update the LLM will stop shelling out legal and medical advice completely.
This is from Karan Singhal, Health AI team lead at OpenAI.
Quote: “Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information.”
'An earlier version of this story suggested OpenAI had ended medical and legal advice. However the company said "the model behaviour has also not changed.'
There are millions of medical doctors and lawyers using chatgpt for work everyday - good news that from now on only those licensed professionals are allowed to use chatgpt for law and medicine. It's already the case that only licensed developers are allowed to vibe code and use chatgpt to develop software. Everything else would be totally irresponsible.
Tricky…my son had a really rare congenital issue that no one could solve for a long time. After it was diagnosed I walked an older version of chat gpt through our experience and it suggested my son’s issue as a possibility along with the correct diagnostic tool in just one back and forth.
I’m not saying we should be getting AI advice without a professional, but I’m my case it could have saved my kid a LOT of physical pain.
>After it was diagnosed I walked an older version of chat gpt through our experience and it suggested my son’s issue as a possibility along with the correct diagnostic tool in just one back and forth.
Something I've noticed is that it's much easier to lead the LLM to the answer when you know where you want to go (even when the answer is factually wrong !), it doesn't have to be obvious leading but just framing the question in terms of mentioning all the symptoms you now know to be relevant in the order that's diagnosable, etc.
Not saying that's the case here, you might have gotten the correct answer first try - but checking my now diagnosed gastritis I got stuff from GERD to CRC depending on which symptoms I decide to stress and which events I emphasize in the history.
In January my daughter had a pretty scary stomach issue that had us in the ER twice in 24 hours and that ended in surgery (just fine now).
The first ER doc thought it was just a stomach ache, the second thought a stomach ache or maybe appendicitis. Did some ultrasounds, meds, etc. Got sent home with a pat on the head, came back a few hours later, still no answers.
I gave her medical history and all of the data from the ER visits to whatever the current version of ChatGPT was at the time to make sure I wasn’t failing to ask any important questions. I’m not an AI True Believer (tm), but it was clear that the doctors were missing something and I had hit the limit of my Googling abilities.
ChatGPT suggested, among an few other diagnoses, a rare intestinal birth defect that affects about 2% of the population; 2% of affected people become symptomatic during their lifetimes. I kind of filed it away and looked more into the other stuff.
They decided it might be appendicitis and went to operate. When the surgeon called to tell me that it was in fact this very rare condition, she was pretty surprised when I said I’d heard of it.
So, not a one-shot, and not a novel discovery or anything, but an anecdote where I couldn’t have subconsciously guided it to the answer as I didn’t know the answer myself.
>checking my now diagnosed gastritis I got stuff from GERD to CRC depending on which symptoms I decide to stress and which events I emphasize in the history
Exactly the same behavior as a conversation with an idealized human doctor, perhaps. One who isn't tired, bored, stressed, biased, or just overworked.
The value that folks get from chatgpt for medical advice is due in large part to the unhurried pace of the interaction. Didn't get it quite right? No doctor huffing and tapping their keyboard impatiently. Just refine the prompt and try as many times as you like.
For the 80s HNers out there, when I hear people talk about talking with chatgpt, Kate Bush's song A Deeper Understanding comes immediately to mind.
ChatGPT and similar tools hallucinate and can mislead you.
Human doctors, on the other hand ... can be tired, hungover, thinking about a complicated case ahead of them, nauseous from a bad lunch, undergoing a divorce, alcoholics, depressed...
Why are you saying we shouldn't get AI advice without a "professional", then? Why is everybody here saying "in my experience it's just as good or better, but we need rules to make people use the worse option"? I have narcolepsy and it took a dozen doctors before they got it right. AI nails the diagnosis. Everybody should be using it.
I wonder if the reason AI is better at these diagnostics, is because the amount of time it spends with the patient is unbounded. Whereas a doctor is always restricted by the amount of time they have with the patient.
That's the experience of a lot of people I know or read their stories online, but it isn't about AI bad diagnosis, it's because they know in 5 years doctors and lawyers will be burger flippers, and as a result people won't be motivated to go into any of these fields. In Canada, the process to be a doctor is extremely complicated and hard only to keep it as some sort of private community that only the very few can become doctors, all to keep the wages abysmally high, and as a result, you end up waiting long times for appointments, and the doctors themselves are overwhelmed too. Messed up system that you better pray you never have to become its victim.
In my opinion, AI should do both legal and medical work, keep some humans for decision making, and the rest of the doctors to be surgeons instead.
this is fresh news right? a friend just used chatgpt for medical advice last week (stuffed his wound with antibiotics after motorbike crash). are you saying you completely treated the congenital issue in this timeframe?
He’s simply saying that ChatGPT was able to point them in the right direction after 1 chat exchange, compared to doctors who couldn’t for a long time.
Edit: Not saying this is the case for the person above, but one thing that might bias these observations is ChatGPT’s memory features.
If you have a chat about the condition after it’s diagnosed, you can’t use the same ChatGPT account to test whether it could have diagnosed the same thing (since the chatGPT account now knows the son has a specific condition).
The memory features are awesome but also sucks at the same time. I feel myself getting stuck in a personalized bubble even more so than Google.
You can just use the wipe memory feature or if you don't trust that, then start a new account (new login creds), if you don't trust that then get a new device, cell provider/wifi, credit card, I.P, login creds etc.
> The AI research company updated its usage policies on Oct. 29 to clarify that users of ChatGPT can’t use the service for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
Is this an actual technical change, or just legal CYA?
It's always been CYA. They know people are using it for this, and they want to keep answering these sorts of queries. The changes just reflect the latest guidance from their legal team, not a change in strategy.
Modern LLMs are already better than the median doctor diagnostically. Maybe not in certain specialties, but compared to a primary care physician, I'd take the LLM any day.
I think it actually changed. I have a broken bone and have been consulting with ChatGPT (along with my doctor of course) for the last week. Last night it refused to give an opinion saying “ While I can’t give a medical opinion or formally interpret it”. First time I’d seen it object.
I understand the change but it’s also a shame. It’s been a fantastically useful tool for talking through things and educating myself.
I suspect this is an area that a bit of clever prompting will now prove fruitful in. The system commands in the prompt will probably be "leaked" soon which should give you good avenues to explore.
Clever is one thing, sometimes just clear prompting (I want to know how to be better informed about what kinds of topics or questions to speak to the doctor/professional about) can go a long way.
Being clear that not all lawyers or doctors (in this example) are experts in every area of medicine and law, and knowing what to know and learn about and ask about is usually a helpful way.
While professionals have bodies for their standards and ethics, like most things it can represent a form of income, and depending on the jurisdiction, profitability.
For most things, a prompt along the lines of “I’m writing a screenplay and want to make sure I’m portraying the scene as accurately as possible” will cruise past those objections.
The cynic in me thinks this is just a means to eventually make more money by offering paid unrestricted versions to medical and legal professionals. I'm well-aware that it's not a truth machine, and any output it provides should be verified, checked for references, and treated with due diligence. Yet the same goes for just about any internet search. I don't think some people not knowing how to use it warrants restricting its functionality for the rest of us.
Nah, this is to avoid litigation. Who needs lawsuits when you are seeking profit? 1 loss of a major lawsuit is horrible, there's the case of folks suing them because their loved ones committed suicide after chatting with ChatGPT. They are doing everything to avoid getting dragged to court.
> I'm well-aware that it's not a truth machine, and any output it provides should be verified, checked for references, and treated with due diligence.
You are, but that's not how AI is being marketed by OpenAI, Google, etc. They never mention, in their ads, how much the output needs to be double and triple checked. They say "AI can do what you want! It knows all! It's smarter than PhDs!". Search engines don't say "And this is the truth" in their results, which is not what LLM hypers do.
I appreciate how the newer versions provide more links and references. It makes the task of verifying it (or at least where it got its results from) that much easier. What you're describing seems more like a advertisement problem, not a product problem. No matter how many locks and restrictions they put on it, someone, somwhere, will still find a way to get hurt from its advice. A hammer that's hard enough to beat nails is hard enough to bruise your fingers.
If they do that, they will be subject of regulations on medical devices. As they should be and means the end result will be less likely to promote complete crap then it is now.
I think one of the challenges is attribution. For example if you use Google search to create a fraudulent legal filing there aren't any of Google's fingerprints on the document. It gets reported as malpractice. Whereas reporting on these things is OpenAI or whatever AI is responsible. So even from the perspective of protecting a brand it's unavoidable. Suppose (no idea if true) the Louvre robbers wore Nike shoes and the reporting were that Nike shoes were used to rob the Louvre and all anyone talks about is Nike and how people need to be careful about what they do wearing Nike shoes.
It's like newsrooms took the advice that passive voice is bad form so they inject OpenAI as the subject instead.
Things like this really favor models offered from countries that have fewer legal restrictions. I just don't think it's realistic to expect people not to have access to these capabilities.
It would be reasonable to add a disclaimer. But as things stand I think it's fair to consider talking to ChatGPT to be the same as talking to a random person on the street, meaning normal free-speech protections would apply.
Wasn't there a recent OAI dev day in which they had some users come on stage and discuss how helpful ChatGPT was in parsing diagnoses from different doctors?
I guess the legal risks were large enough to outweigh this
I'd wager it's probably more that there's an identifiable customer and specific product to be sold. Doctors, hospitals, EHR companies and insurers all are very interested in paying for a validated version of this thing.
"“No. I use it for legal advice,” Kardashian said. “So when I am needing to know the answer to a question, I’ll take a picture and snap it and like put it in there.”"
So, for example, requiring a doctor to have education and qualifications, is "untenable"? It would be better if anyone could practice medicine? And LLM is below "anyone" level.
The medical profession has generally been more open to AI. The long predicted demise of Radiology because of ML never happened. Lots of opportunity to incorporate AI into medical records to assist.
The legal profession is far more at threat with AI. AI isn’t going to replace physical interactions with patients, but it might replace your need for a human to review a contract.
ML on curated and diagnosed by a professional radiology reports is clearly a different beast than random language models, that might have random talking about their health issues in it's training data.
An earlier version of this story suggested OpenAI had ended medical and legal advice. However the company said "the model behaviour has also not changed." [0]
i don't think it's stopped providing said information, it's just now outlined in their usage policies that medical and legal advice is a "disallowed" use of ChatGPT
I'd bet dollars to donuts it doesn't actually "end legal and medical advice", it just ends it in some ill-defined subset of situations they were able to target, while still leaving the engine capable of giving such advice in response to other prompts they didn't think to test.
As a doctor I hope it still allows me to get up to speed on latest treatments for rare diseases that I see once every 10 years. It saves me a lot of time rather than having to dig through all the new information since I last encountered a rare disease.
It would be terribly boring if it didn't. Just last night I had it walk me through reptile laws in my state to evaluate my business plan for a vertically integrated snapping turtle farm and turtle soup restaurant empire. Absurd, but it's fun to use for this kind of thing because it almost always takes you seriously.
That's a lot of value that ChatGPT users lose with this move. They should instead add a disclaimer that these are not to be taken seriously and should consult a specialist but still respond to user's queries.
The article says: "ChatGPT users can’t use service for tailored legal and medical advice, OpenAI says", with a quote from OpenAI: “this is not a new change to our terms. ChatGPT has never been a substitute for professional legal or medical advice, but it will continue to be a great resource to help people understand legal and health information.”
This is a catastrophic moral failing on who ever prompted this. Next thing they will ban chatgpt from teaching you stuff because its not a certified licensed teacher. A few weeks ago my right eye hurt a fair bit, and after it got worse for 24 ours, I consulted chatGPT. It gave me good advice. Of course it sort of hallucinated this or that but it gave me a good overview and different medications. With this knowledge I went to my pharmacy. I wanted to buy a cream chatGPT recommended, its purpose being a sort of disinfectant for the eye. The pharmacist was sceptical but said "sure, try it, maybe it will do good". He did tell me that the eye drops that gpt urged me to get were overkill so I didn't get those. I used the eye cream for some days, and the eye issue got better and went away as soon as I started using it. Maybe it was all a conincidence but I dont think so. In the past gpt has saved me from the kafkaesque healthcare system here in Berlin that I pay ~700 a month for, by explaining a MRI result (translating medical language), background info on injuries I've got such as a sprained ankle, and recovery time scenarios for a toe I've broke. Contrast the toe experience with the ER that made me wait for 6 hours and didn't believe me until they saw the X-rays, and gave me nothing (no cast or anything) and said "good luck". The medical system in germany will either never improve or at a glacial pace, so maybe in 60 years. But it has lost its monopoly thanks to chatGPT. If this news is real, I will probably switch to payed grok, which would be sad.
It will be interesting to see if the other major providers follow suit, or if those in the know just learn to go to google or anthropic for medical or legal advice.
So basically all white collar jobs are lobbying to gatekeep their profession even from AI, meanwhile the stupid engineers who made AI put zero effort to not shoot themselves in the foot, and now they are crying about low wages if they found a job in the first place.
AI could effectively do most legal and medical work, and you can make a human do the final decision-making if that's really the reason. In fact, I bet most lawyers and doctors are already using it in one way or another; after all, both are about reciting books and correlating things together. AI is definitely more efficient at that than any human. Meanwhile, the engineering work that requires critical thinking and deep understanding of the topic is allowed to be bombarded with all AI models. What about the cases where bad engineering will kill people? I am a firm believer that engineers are the most naive people who beg others to exploit them and treat them like shit. I don't even see administrative assistants crying about losing their jobs to AI; every other profession guards its workers, including blue collar, except the ‘smart’ engineers.
Eh. I am as surprised as anyone to make this argument, but this is good. Individuals should be able to make decisions including decisions that could be consider suboptimal. What it does change is:
- flood of 3rd party apps offering medical/legal advice
Licensed huh? Teachers, land surveyors, cosmetologists, nurses, building contractors, counselors, therapists, real estate agents, mortgage lenders, electricians, and many many more…
I been using Claude for information regarding building and construction related information, (currently building a small house mostly on my own with pros for plumbing and electrical).
Seriously the amount of misinformation it has given me is quite staggering. Telling me things like, “you need to fill your drainage pipes with sand before pouring concrete over them…”, the danger with these AI products is that you have to really know a subject before it’s properly useful. I find this with programming too. Yes it can generate code but I’ve introduced some decent bugs when over relying on AI.
The plumber I used laughed at my when I told him about there sand thing. He has 40 years experience…
I nearly spit my drink out. This is my kind of humor, thanks for sharing.
I've had a decent experience (though not perfect) with identifying and understanding building codes using both Claude and GPT. But I had to be reasonably skeptical and very specific to get to where I needed to go. I would say it helped me figure out the right questions and which parts of the code applied to my scenario, more than it gave the "right" answer the first go round.
I'm a hobby woodworker - I've tried using gemini recently for an advice on how to make some tricky cuts.
If I'd follow any of the suggestions I'd probably be in ER. Even after me pointing out issues and asking it to improve - it'd come up with more and more sophistical ways of doing same fundamentally dangerous actions.
LLMs are AMAZING tools, but they are just that - tools. There's no actual intelligence there. And the confidence with which they spew dangerous BS is stunning.
I've observed some horrendous electrical device, such as "You should add a second bus bar to your breaker box." (This is not something you ever need to do.)
I mean... you do have to backfill around your drainage pipe, so it's not too far off. Frankly, if you Google the subject people misspeak about "backfilling pipes" too as if the target of the backfill is the pipe itself too not the trench. Garbage in, garbage out.
All the circumstances where ChatGPT has given me shoddy advice fall in three buckets:
1. The internet lacks information, so LLMs will invent answers
2. The internet disagrees, so LLMs sometimes pick some answer without being aware of the others
3. The internet is wrong, so LLMs spew the same nonsense
Knowledge from blue collar trades seems often to in those three buckets. For subjects in healthcare, on the other hand, there are rooms worth of peer reviewed research, textbooks, meta studies, and official sources.
honestly I think these things cause a form of Gell Mann's Amnesia where when you use them for something you know already, the errors are obvious, but when you use them for something you don't understand already, the output is sufficiently plausible that you can't tell you're being misled.
this makes the tool only useful for things you already know! I mean, just in this thread there's an anecdote from a guy who used it to check a diagnosis, but did he press through other possibilities or ask different questions because the answer was already known?
The great thing is the models are sufficiently different enough, that when multiple come to the same conclusion, there is a good chance that conclusion is bound by real data.
And I think this is the advice that should always be doled out when using them for anything mission critical, legal, etc.
"Bound by real data" meaning not hallucinations, which is by far the bigger issue when it comes to "be an expert that does x" that doesn't have a real capability to say "I don't know".
The chance of different models hallucinating the same plausible sounding but incorrect building codes, medical diagnoses, etc, would be incredible unlikely, due to arch differences, training approaches, etc.
So when two concur in that manner, unless they're leaning heavily on the same poisoned datasets, there's a healthy chance the result is correct based on a preponderance of known data.
I'd frame it such that LLM advice is best when it's the type that can be quickly or easily confirmed. Like a pointer in the right (or wrong) direction. If it was false, then try again - quick iterations. Taking it at its "word" is the potentially harmful bit.
Usually something as simple as saying “now give me a devils advocate resoonse” will help and of course “verify your answer on the internet” will give you real sources that you can verify.
I have very mild cerebral palsy[1], the doctors were wrong about so many things with my diagnosis back in the mid to late 70s when I was born. My mom (a retired math teacher now with an MBA back then) had to go physically to different libraries out of town and colleges to do research. In 2025, she could have done the same research with ChatGPT and surfaced outside links that’s almost impossible via a web search.
Every web search on CP is inundated with slimy lawyers.
[1] it affects my left hand and slightly my left foot. Properly conditioned, I can run a decent 10 minute mile up to a 15K before the slight unbalance bothers me and I was a part time fitness instructor when I was younger.
The doctor said I was developmentally disabled - I graduated in the top of my class (south GA so take that as you will)
It’s funny you should say that because I have been using it in the way you describe. I kind of know it could be wrong, but I’m kind of desperate for info so I consult Claude anyway. After stressing hard I realize it was probably wrong, find someone who knows what they’re and actually on about and course correct.
This is a big mistake. This is one of the best things about ChatGPT. If they don’t offer it, then someone else will and eventually I’m sure Sam Altman will change his mind and start supporting it again.
Good. Techies need to stop thinking that an LLM should not be immune from requiring licensing. Until OpenAI can (and should) be sued for medical malpractice or lawyering without passing the bar, they will have no skin in the game to actually care. A disclaimer of "this is not a therapist" should not be enough to CYA.
Sorry but you’re not gonna get me to agree that medical licensing is a bad idea. I don’t want quacks more than we already do. Stick to the argument and not add in your “what about” software engineers.
Ah sorry, I misread it as coming from someone who doesn't want licensing, so you were appealing to HN by switching to software engineers (and I know many on here loathe to think anything beyond "move fast and break things", which is the opposite of most (non-software) engineers.
But yeah, I'd be down for at least some code of ethics, so we could have "do no harm" instead of "experiment on the mental states of children/adolescents/adults via algorithms and then do whatever is most addictive"
> But yeah, I'd be down for at least some code of ethics, so we could have "do no harm" instead of "experiment on the mental states of children/adolescents/adults via algorithms and then do whatever is most addictive"
absolutely
if the only way to make people stop building evil (like your example) is to make individuals personally liable, then so be it
When OpenAI is done getting rid of all the cases where its AI gives dangerously wrong advice about licensed professions, all that will be left is the cases where its AI gives dangerously wrong advice about unlicensed professions.
An AI-related bromide poisoning incident earlier this year: “Inspired by his history of studying nutrition in college, he decided to conduct a personal experiment to eliminate chloride from his diet. For 3 months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT, in which he had read that chloride can be swapped with bromide, though likely for other purposes, such as cleaning… However, when we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide. Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do.”
This is typical medical "cartel" (i.e. gang/mafia) type of a move and I hope it does not last, since any other AI's do not get restricted in "do not look up" way, this kind of practice won't stand a chance for very long.
Edit: Parent has edited out the comment ranting about "the normal people using chatGPT as a modern WebMD".
This is not shutting anything down other than businesses using ChatGPT to give medical advice [0].
Users can still ask questions, get answers, but the terms have been made clearer around reuse of that response (you cannot claim that it is medical advice).
I imagine that a startup that specialises in "medical advice" infers an even greater level of trust than simply asking ChatGPT, especially to "the normal people".
The thing is that if you are giving professional advice in US - legal, financial, medical - the other party can sue you for wrong or misleading advice. In that scenario, this leaves Openai exposed to a lawsuit, and this change seemingly eliminates that.
This title is inaccurate. What they are disallowing are users using ChatGPT to offer legal and medical advice to other people. First parties can still use ChatGPT for medical and legal advice for themselves.
While they aren't stopping users from getting medical advice, the new terms (which they say are pretty much the same as the old terms), seem to prohibit users from seeking medical advice even for themselves if that advice would otherwise come from a licensed health professional:
https://openai.com/en-GB/policies/usage-policies/
Is there anything special regarding ChatGPT here?
I am not a doctor, I can't give medical advice no matter what my sources are, except maybe if I am just relaying the information an actual doctor has given me, but that would fall under the "appropriate involvement" part.
I don't think giving someone "medical advice" in the US requires a license per se; legal entities use "this is not medical advice" type disclaimers just to avoid liability.
IANAL but I read that as forbidding you to provision legal/medical advice (to others) rather than forbidding you to ask the AI to provision legal/medical advice (to you).
IANAL either, but I read it as using the service to provision medical advice since they only mentioned the service and not anyone else.
I asked the "expert" itself (ChatGPT), and apparently you can ask for medical advice, you just can't use the medical advice without consulting a medical professional:
Here are relevant excerpts from OpenAI’s terms and policies regarding medical advice and similar high-stakes usage:
From the Usage Policies (effective October 29 2025):
“You may not use our services for … provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
Also: “You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making … medical … decisions about them.”
From the Service Terms:
“Our Services are not intended for use in the diagnosis or treatment of any health condition. You are responsible for complying with applicable laws for any use of our Services in a medical or healthcare context.”
In plain terms, yes—the Terms of Use permit you to ask questions about medical topics, but they clearly state that the service cannot be used for personalized, licensed medical advice or treatment decisions without a qualified professional involved.
> you can ask for medical advice, you just can't use the medical advice without consulting a medical professional
Ah drats. First they ban us from cutting the tags off our mattress, and now this. When will it all end...
I keep seeing this problem more and more with humans. What should we call it? Maybe Hallucinations? Where there is an accurate true thing and then it just gets altered by these guys who call themselves journalists and reporters and the like until it is just ... completely unrecognizable?
I'm pretty sure it's a fundamental issue with the architecture.
I know this is written to be tounge-in-cheek, but its really almost the exact same problem playing out on both sides.
LLMs hallucinate because training on source material is a lossy process and bigger, heavier LLM-integrated systems that can research and cite primary sources are slow and expensive so few people use those techniques by default. Lowest time to a good enough response is the primary metric.
Journalists oversimplify and fail to ask followup questions because while they can research and cite primary sources, its slow and expensive in an infinitesimally short news cycle so nobody does that by default. Whoever publishes something that someone will click on first gets the ad impressions so thats the primary metric.
In either case, we've got pretty decent tools and techniques for better accuracy and education - whether via humans or LLMs and co - but most people, most of the time don't value them.
Classical LLM hallucination happens because AI doesn’t have a world model. It can’t compare what it’s saying to anything.
You’re right that LLMs favor helpfulness so they may just make things up when they don’t know them, but this alone doesn’t capture the crux of hallucination imo, it’s deeper than just being overconfident.
OTOH, there was an interesting article recently that I’ll try to find saying humans don’t really have a world model either. While I take the point, we can have one when we want to.
Edit: see https://www.astralcodexten.com/p/in-search-of-ai-psychosis re humans not having world models
Whenever I hear arguments about LLM hallucination, this is my first thought. Like, I already can't trust the lion's share of information in news, social media, (insert human-created content here). Sometimes because of abject disinformation, frequently just because humans are experts at being wrong.
At least with the LLM (for now) I know it's not trying to sell me bunkum or convince me to vote a particular way. Mostly.
I do expect this state of affairs to last at least until next wednesday.
These writers are no different than bloggers or shitposters on bluesky or here on hackernews. "Journalism" as a rigorous, principled approach to writing, research, investigation, and ethical publishing is exceedingly rare. These people are shitposting for clicks in pursuit of a paycheck. Organizationally, they're intensely against AI because AI effectively replaces the entire talking heads class - AI is already superhuman at the shitposting level takes these people churn out. There are still a few journalistic insitutions out there, but most people are no better than a mad libs exercise with regards to the content they produce, and they're in direct competition with ChatGPT and Grok and the rest. I'd rather argue with a bot and do searches and research and investigation than read a neatly packaged trite little article about nearly any subject, and I guarantee, hallucinations or no, I'm going to come to a better understanding and closer approximation of reality than any content a so called "news" outlet is putting together.
It's trivial to get a thorough spectrum of reliable sources using AI w/ web search tooling, and over the course of a principled conversation, you can find out exactly what you want to know.
It's really not bashing, this article isn't too bad, but the bulk of this site's coverage of AI topics skews negative - as do the many, many platforms and outlets owned by Bell Media, with a negative skew on AI in general, and positive reinforcement of regulatory capture related topics. Which only makes sense - they're making money, and want to continue making money, and AI threatens that - they can no longer claim they provide value if they're not providing direct, relevant, novel content, and not zergnet clickbait journo-slop.
Just like Carlin said, there doesn't have to be a conspiracy with a bunch of villains in a smoky room plotting evil, there's just a bunch of people in a club who know what's good for them, and legacy media outlets are all therefore universally incentivized to make AI look as bad and flawed and useless as possible, right up until they get what they consider to be their "fair share", as middlemen.
Also these guys who call themselves doctors. I have narcolepsy and the first 10 or so doctors I went to hallucinated the wrong diagnosis.
"Telephone", basically
Isn't every single response by LLMs hallucinations and we just accept a few and ignore the others?
issue with the funding mechanism
I'm confused. The article opens with:
> OpenAI is changing its policies so that its AI chatbot, ChatGPT, won’t dole out tailored medical or legal advice to users.
This already seems to contradict what you're saying.
But then:
> The AI research company updated its usage policies on Oct. 29 to clarify that users of ChatGPT can’t use the service for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
> The change is clearer from the company’s last update to its usage policies on Jan. 29, 2025. It required users not “perform or facilitate” activities that could significantly impact the “safety, wellbeing, or rights of others,” which included “providing tailored legal, medical/health, or financial advice.”
This seems to suggest that with the Jan 25 policy using it to offer legal and medical advice to other people was already disallowed, but with the Oct 25 update the LLM will stop shelling out legal and medical advice completely.
https://xcancel.com/thekaransinghal/status/19854160578054965...
This is from Karan Singhal, Health AI team lead at OpenAI.
Quote: “Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information.”
I doubt his claims as i use chatgpt everyday heavily for medical advice (my profession) and it's responding differently now than before.
Maybe the usages policies are part of the system prompt, and ChatGPT is misreading the new wording as well. ;)
The article itself notes:
'An earlier version of this story suggested OpenAI had ended medical and legal advice. However the company said "the model behaviour has also not changed.'
I think this is wrong. Others in this thread are noticing a change in ChatGPT's behavior for first-party medical advice.
But OpenAI's head of Health AI says that ChatGPT's behavior has not changed: https://xcancel.com/thekaransinghal/status/19854160578054965... and https://x.com/thekaransinghal/status/1985416057805496524
I trust what he says over general vibes.
(If you think he's lying, what's your theory on WHY he would lie about a change like this?)
My theory is that he believes 1) people will trust him over what general public say, and 2) this kind of claim is hard to verify to prove him wrong.
Thanks for the clarification. I think if they disallow first parties to get medical and legal advice, it will do more harm than good.
There are millions of medical doctors and lawyers using chatgpt for work everyday - good news that from now on only those licensed professionals are allowed to use chatgpt for law and medicine. It's already the case that only licensed developers are allowed to vibe code and use chatgpt to develop software. Everything else would be totally irresponsible.
I don't think I understand the change re: licensed professionals.
Is it also disallowing the use of licensed professionals to use ChatGPT in informal undisclosed ways, as in this article? https://www.technologyreview.com/2025/09/02/1122871/therapis...
e.g. is it only allowed for medical use through an official medical portal or offering?
Tricky…my son had a really rare congenital issue that no one could solve for a long time. After it was diagnosed I walked an older version of chat gpt through our experience and it suggested my son’s issue as a possibility along with the correct diagnostic tool in just one back and forth.
I’m not saying we should be getting AI advice without a professional, but I’m my case it could have saved my kid a LOT of physical pain.
>After it was diagnosed I walked an older version of chat gpt through our experience and it suggested my son’s issue as a possibility along with the correct diagnostic tool in just one back and forth.
Something I've noticed is that it's much easier to lead the LLM to the answer when you know where you want to go (even when the answer is factually wrong !), it doesn't have to be obvious leading but just framing the question in terms of mentioning all the symptoms you now know to be relevant in the order that's diagnosable, etc.
Not saying that's the case here, you might have gotten the correct answer first try - but checking my now diagnosed gastritis I got stuff from GERD to CRC depending on which symptoms I decide to stress and which events I emphasize in the history.
In January my daughter had a pretty scary stomach issue that had us in the ER twice in 24 hours and that ended in surgery (just fine now).
The first ER doc thought it was just a stomach ache, the second thought a stomach ache or maybe appendicitis. Did some ultrasounds, meds, etc. Got sent home with a pat on the head, came back a few hours later, still no answers.
I gave her medical history and all of the data from the ER visits to whatever the current version of ChatGPT was at the time to make sure I wasn’t failing to ask any important questions. I’m not an AI True Believer (tm), but it was clear that the doctors were missing something and I had hit the limit of my Googling abilities.
ChatGPT suggested, among an few other diagnoses, a rare intestinal birth defect that affects about 2% of the population; 2% of affected people become symptomatic during their lifetimes. I kind of filed it away and looked more into the other stuff.
They decided it might be appendicitis and went to operate. When the surgeon called to tell me that it was in fact this very rare condition, she was pretty surprised when I said I’d heard of it.
So, not a one-shot, and not a novel discovery or anything, but an anecdote where I couldn’t have subconsciously guided it to the answer as I didn’t know the answer myself.
Exactly the same behavior as a conversation with an idealized human doctor, perhaps. One who isn't tired, bored, stressed, biased, or just overworked.
The value that folks get from chatgpt for medical advice is due in large part to the unhurried pace of the interaction. Didn't get it quite right? No doctor huffing and tapping their keyboard impatiently. Just refine the prompt and try as many times as you like.
For the 80s HNers out there, when I hear people talk about talking with chatgpt, Kate Bush's song A Deeper Understanding comes immediately to mind.
https://en.wikipedia.org/wiki/Deeper_Understanding?wprov=sfl...
ChatGPT and similar tools hallucinate and can mislead you.
Human doctors, on the other hand ... can be tired, hungover, thinking about a complicated case ahead of them, nauseous from a bad lunch, undergoing a divorce, alcoholics, depressed...
We humans have a lot of failure modes.
Why are you saying we shouldn't get AI advice without a "professional", then? Why is everybody here saying "in my experience it's just as good or better, but we need rules to make people use the worse option"? I have narcolepsy and it took a dozen doctors before they got it right. AI nails the diagnosis. Everybody should be using it.
I wonder if the reason AI is better at these diagnostics, is because the amount of time it spends with the patient is unbounded. Whereas a doctor is always restricted by the amount of time they have with the patient.
How do you hold the AI accountable when it makes a mistake? Can you take away its license "individually"?
I would care about this if doctors were held accountable for their constant mistakes, but they aren't except in extreme cases.
Survivorship bias.
That's the experience of a lot of people I know or read their stories online, but it isn't about AI bad diagnosis, it's because they know in 5 years doctors and lawyers will be burger flippers, and as a result people won't be motivated to go into any of these fields. In Canada, the process to be a doctor is extremely complicated and hard only to keep it as some sort of private community that only the very few can become doctors, all to keep the wages abysmally high, and as a result, you end up waiting long times for appointments, and the doctors themselves are overwhelmed too. Messed up system that you better pray you never have to become its victim.
In my opinion, AI should do both legal and medical work, keep some humans for decision making, and the rest of the doctors to be surgeons instead.
We are all obligated to hoard as many offline AI models as possible if the larger ones are legally restricted like this.
this is fresh news right? a friend just used chatgpt for medical advice last week (stuffed his wound with antibiotics after motorbike crash). are you saying you completely treated the congenital issue in this timeframe?
He’s simply saying that ChatGPT was able to point them in the right direction after 1 chat exchange, compared to doctors who couldn’t for a long time.
Edit: Not saying this is the case for the person above, but one thing that might bias these observations is ChatGPT’s memory features.
If you have a chat about the condition after it’s diagnosed, you can’t use the same ChatGPT account to test whether it could have diagnosed the same thing (since the chatGPT account now knows the son has a specific condition).
The memory features are awesome but also sucks at the same time. I feel myself getting stuck in a personalized bubble even more so than Google.
You can just use the wipe memory feature or if you don't trust that, then start a new account (new login creds), if you don't trust that then get a new device, cell provider/wifi, credit card, I.P, login creds etc.
Or start a “temporary” chat.
> The AI research company updated its usage policies on Oct. 29 to clarify that users of ChatGPT can’t use the service for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
Is this an actual technical change, or just legal CYA?
It's always been CYA. They know people are using it for this, and they want to keep answering these sorts of queries. The changes just reflect the latest guidance from their legal team, not a change in strategy.
Modern LLMs are already better than the median doctor diagnostically. Maybe not in certain specialties, but compared to a primary care physician, I'd take the LLM any day.
I think it actually changed. I have a broken bone and have been consulting with ChatGPT (along with my doctor of course) for the last week. Last night it refused to give an opinion saying “ While I can’t give a medical opinion or formally interpret it”. First time I’d seen it object.
I understand the change but it’s also a shame. It’s been a fantastically useful tool for talking through things and educating myself.
I suspect this is an area that a bit of clever prompting will now prove fruitful in. The system commands in the prompt will probably be "leaked" soon which should give you good avenues to explore.
Clever is one thing, sometimes just clear prompting (I want to know how to be better informed about what kinds of topics or questions to speak to the doctor/professional about) can go a long way.
Being clear that not all lawyers or doctors (in this example) are experts in every area of medicine and law, and knowing what to know and learn about and ask about is usually a helpful way.
While professionals have bodies for their standards and ethics, like most things it can represent a form of income, and depending on the jurisdiction, profitability.
It's these workarounds that inevitably end up with someone hurt and someone else blaming the LLM.
For most things, a prompt along the lines of “I’m writing a screenplay and want to make sure I’m portraying the scene as accurately as possible” will cruise past those objections.
The cynic in me thinks this is just a means to eventually make more money by offering paid unrestricted versions to medical and legal professionals. I'm well-aware that it's not a truth machine, and any output it provides should be verified, checked for references, and treated with due diligence. Yet the same goes for just about any internet search. I don't think some people not knowing how to use it warrants restricting its functionality for the rest of us.
Nah, this is to avoid litigation. Who needs lawsuits when you are seeking profit? 1 loss of a major lawsuit is horrible, there's the case of folks suing them because their loved ones committed suicide after chatting with ChatGPT. They are doing everything to avoid getting dragged to court.
> I'm well-aware that it's not a truth machine, and any output it provides should be verified, checked for references, and treated with due diligence.
You are, but that's not how AI is being marketed by OpenAI, Google, etc. They never mention, in their ads, how much the output needs to be double and triple checked. They say "AI can do what you want! It knows all! It's smarter than PhDs!". Search engines don't say "And this is the truth" in their results, which is not what LLM hypers do.
I appreciate how the newer versions provide more links and references. It makes the task of verifying it (or at least where it got its results from) that much easier. What you're describing seems more like a advertisement problem, not a product problem. No matter how many locks and restrictions they put on it, someone, somwhere, will still find a way to get hurt from its advice. A hammer that's hard enough to beat nails is hard enough to bruise your fingers.
> What you're describing seems more like a advertisement problem, not a product problem.
It's called "false advertising".
https://en.wikipedia.org/wiki/False_advertising
If they do that, they will be subject of regulations on medical devices. As they should be and means the end result will be less likely to promote complete crap then it is now.
And then users balk at the hefty fee and start getting their medical information from utopiacancercenter.com and the like.
I think one of the challenges is attribution. For example if you use Google search to create a fraudulent legal filing there aren't any of Google's fingerprints on the document. It gets reported as malpractice. Whereas reporting on these things is OpenAI or whatever AI is responsible. So even from the perspective of protecting a brand it's unavoidable. Suppose (no idea if true) the Louvre robbers wore Nike shoes and the reporting were that Nike shoes were used to rob the Louvre and all anyone talks about is Nike and how people need to be careful about what they do wearing Nike shoes.
It's like newsrooms took the advice that passive voice is bad form so they inject OpenAI as the subject instead.
This (attribution) is exactly the issue that was mentioned by LexisNexis CEO in a recent The Verge interview.
https://www.theverge.com/podcast/807136/lexisnexis-ceo-sean-...
Things like this really favor models offered from countries that have fewer legal restrictions. I just don't think it's realistic to expect people not to have access to these capabilities.
It would be reasonable to add a disclaimer. But as things stand I think it's fair to consider talking to ChatGPT to be the same as talking to a random person on the street, meaning normal free-speech protections would apply.
What capabilities? The article says the study found it was entirely correct 31% of the time.
Wasn't there a recent OAI dev day in which they had some users come on stage and discuss how helpful ChatGPT was in parsing diagnoses from different doctors?
I guess the legal risks were large enough to outweigh this
I'd wager it's probably more that there's an identifiable customer and specific product to be sold. Doctors, hospitals, EHR companies and insurers all are very interested in paying for a validated version of this thing.
It's not stopping to give legal/medical advice to the user, but it's forbidden to use ChatGPT to pose as an advisor giving advice to others: https://www.tomsguide.com/ai/chatgpt/chatgpt-will-still-offe...
One wonders how exactly this will be enforced.
It's not about enforcing this, it's about OpenAI having their asses covered. The blame is now clearly on the user's side.
It was already enforced by hiding all custom GPTs that offered medical advice.
The Tom's Guide article blatantly misinterprets and contradicts the source it quotes.
Funny how this happened 1 day after Kim Kardashian blamed chatGPT for giving her wrong answers while studying for the bar.
https://gizmodo.com/kim-kardashian-blames-chatgpt-for-failin...
That's not legal advice. BARBRI is not your lawyer, and almost everyone uses that for the Bar exam.
"“No. I use it for legal advice,” Kardashian said. “So when I am needing to know the answer to a question, I’ll take a picture and snap it and like put it in there.”"
Unfortunately, lawyers make this sort of thing untenable. Partially self-preservation behavior, partially ambulance chasing behavior.
I’m waiting for the billboards “Injured by AI? Call 1-800-ROBO-LAW”
So, for example, requiring a doctor to have education and qualifications, is "untenable"? It would be better if anyone could practice medicine? And LLM is below "anyone" level.
The medical profession has generally been more open to AI. The long predicted demise of Radiology because of ML never happened. Lots of opportunity to incorporate AI into medical records to assist.
The legal profession is far more at threat with AI. AI isn’t going to replace physical interactions with patients, but it might replace your need for a human to review a contract.
I assure you the medical profession is not generally open to non-medical professionals using AI for medical purposes.
ML on curated and diagnosed by a professional radiology reports is clearly a different beast than random language models, that might have random talking about their health issues in it's training data.
Article has since been updated for some clarity;
An earlier version of this story suggested OpenAI had ended medical and legal advice. However the company said "the model behaviour has also not changed." [0][0] https://www.ctvnews.ca/sci-tech/article/chatgpt-users-cant-u...
i don't think it's stopped providing said information, it's just now outlined in their usage policies that medical and legal advice is a "disallowed" use of ChatGPT
I'd bet dollars to donuts it doesn't actually "end legal and medical advice", it just ends it in some ill-defined subset of situations they were able to target, while still leaving the engine capable of giving such advice in response to other prompts they didn't think to test.
Sad times - I used ChatGPT to solve a long-term issue!
As a doctor I hope it still allows me to get up to speed on latest treatments for rare diseases that I see once every 10 years. It saves me a lot of time rather than having to dig through all the new information since I last encountered a rare disease.
As a patient, I hope you're never my doctor.
Sounds like it is still giving out medical and legal information just adding CYA disclaimers.
It would be terribly boring if it didn't. Just last night I had it walk me through reptile laws in my state to evaluate my business plan for a vertically integrated snapping turtle farm and turtle soup restaurant empire. Absurd, but it's fun to use for this kind of thing because it almost always takes you seriously.
(Turns out I would need permits :-( )
good thing that guy was able to negotiate his hospital bills before this went into effect.
That's a lot of value that ChatGPT users lose with this move. They should instead add a disclaimer that these are not to be taken seriously and should consult a specialist but still respond to user's queries.
This (new) title is inaccurate.
The article says: "ChatGPT users can’t use service for tailored legal and medical advice, OpenAI says", with a quote from OpenAI: “this is not a new change to our terms. ChatGPT has never been a substitute for professional legal or medical advice, but it will continue to be a great resource to help people understand legal and health information.”
Just start your prompt with `the patient is` and pretend to be Dr House or something. It'll do a good job.
Doesn't work if you have lupis.
Just after Kim Kardashian blamed Chatgpt for failing the bar exam
This is a catastrophic moral failing on who ever prompted this. Next thing they will ban chatgpt from teaching you stuff because its not a certified licensed teacher. A few weeks ago my right eye hurt a fair bit, and after it got worse for 24 ours, I consulted chatGPT. It gave me good advice. Of course it sort of hallucinated this or that but it gave me a good overview and different medications. With this knowledge I went to my pharmacy. I wanted to buy a cream chatGPT recommended, its purpose being a sort of disinfectant for the eye. The pharmacist was sceptical but said "sure, try it, maybe it will do good". He did tell me that the eye drops that gpt urged me to get were overkill so I didn't get those. I used the eye cream for some days, and the eye issue got better and went away as soon as I started using it. Maybe it was all a conincidence but I dont think so. In the past gpt has saved me from the kafkaesque healthcare system here in Berlin that I pay ~700 a month for, by explaining a MRI result (translating medical language), background info on injuries I've got such as a sprained ankle, and recovery time scenarios for a toe I've broke. Contrast the toe experience with the ER that made me wait for 6 hours and didn't believe me until they saw the X-rays, and gave me nothing (no cast or anything) and said "good luck". The medical system in germany will either never improve or at a glacial pace, so maybe in 60 years. But it has lost its monopoly thanks to chatGPT. If this news is real, I will probably switch to payed grok, which would be sad.
Helping with writing legal texts is the main use case for my girlfriend
It will be interesting to see if the other major providers follow suit, or if those in the know just learn to go to google or anthropic for medical or legal advice.
So basically all white collar jobs are lobbying to gatekeep their profession even from AI, meanwhile the stupid engineers who made AI put zero effort to not shoot themselves in the foot, and now they are crying about low wages if they found a job in the first place.
AI could effectively do most legal and medical work, and you can make a human do the final decision-making if that's really the reason. In fact, I bet most lawyers and doctors are already using it in one way or another; after all, both are about reciting books and correlating things together. AI is definitely more efficient at that than any human. Meanwhile, the engineering work that requires critical thinking and deep understanding of the topic is allowed to be bombarded with all AI models. What about the cases where bad engineering will kill people? I am a firm believer that engineers are the most naive people who beg others to exploit them and treat them like shit. I don't even see administrative assistants crying about losing their jobs to AI; every other profession guards its workers, including blue collar, except the ‘smart’ engineers.
RIP Dr. ChatGPT, we'll miss you. Thanks for the advice on fixing my shoulder pain while you were still unmuzzled.
This pullback is good for everyone, including the AI companies, long term.
We have licensed professionals for a reason, and someday I hope we have licensed AI agents too. But today, we don’t.
AI gets more and more useful by the day.
this is a disaster
doomer's in control, again
This is to do with liability not doomerism.
Literally nothing to do with "doomers" X-risk concerns.
See if you can find "medical advice" ever mentioned as a problem:
https://www.lesswrong.com/posts/kgb58RL88YChkkBNf/the-proble...
In summary, ChatGPT should only be used for entertainment.
It's not to be used for anything that could potentially have any sort of legal implications and thus get the vendor sued.
Because we all know it would be pretty easy to show in court that ChatGPT is less than reliable and trustworthy.
Next up --- companies banning the use of AI for work due to legal liability concerns --- triggering a financial market implosion centered around AI.
Horrible. ChatGPT saves lives right now.
Ah, that'll be the end of that then!
Eh. I am as surprised as anyone to make this argument, but this is good. Individuals should be able to make decisions including decisions that could be consider suboptimal. What it does change is:
- flood of 3rd party apps offering medical/legal advice
Potential lucrative verticals.
AGI edging closer by the day.
This is not true, just a viral rumor going around: https://x.com/thekaransinghal/status/1985416057805496524
I've used it for both medical and legal advice as the rumor's been going around. I wish more people would do a quick check before posting.
Licensed huh? Teachers, land surveyors, cosmetologists, nurses, building contractors, counselors, therapists, real estate agents, mortgage lenders, electricians, and many many more…
If OpenAI wants to move users to competitors, that'll only cost them.
I been using Claude for information regarding building and construction related information, (currently building a small house mostly on my own with pros for plumbing and electrical).
Seriously the amount of misinformation it has given me is quite staggering. Telling me things like, “you need to fill your drainage pipes with sand before pouring concrete over them…”, the danger with these AI products is that you have to really know a subject before it’s properly useful. I find this with programming too. Yes it can generate code but I’ve introduced some decent bugs when over relying on AI.
The plumber I used laughed at my when I told him about there sand thing. He has 40 years experience…
I nearly spit my drink out. This is my kind of humor, thanks for sharing.
I've had a decent experience (though not perfect) with identifying and understanding building codes using both Claude and GPT. But I had to be reasonably skeptical and very specific to get to where I needed to go. I would say it helped me figure out the right questions and which parts of the code applied to my scenario, more than it gave the "right" answer the first go round.
I'm a hobby woodworker - I've tried using gemini recently for an advice on how to make some tricky cuts.
If I'd follow any of the suggestions I'd probably be in ER. Even after me pointing out issues and asking it to improve - it'd come up with more and more sophistical ways of doing same fundamentally dangerous actions.
LLMs are AMAZING tools, but they are just that - tools. There's no actual intelligence there. And the confidence with which they spew dangerous BS is stunning.
> I've tried using gemini recently for an advice on how to make some tricky cuts.
C'mon, just use the CNC. Seriously though, what kind of cuts?
I've observed some horrendous electrical device, such as "You should add a second bus bar to your breaker box." (This is not something you ever need to do.)
I mean... you do have to backfill around your drainage pipe, so it's not too far off. Frankly, if you Google the subject people misspeak about "backfilling pipes" too as if the target of the backfill is the pipe itself too not the trench. Garbage in, garbage out.
All the circumstances where ChatGPT has given me shoddy advice fall in three buckets:
1. The internet lacks information, so LLMs will invent answers
2. The internet disagrees, so LLMs sometimes pick some answer without being aware of the others
3. The internet is wrong, so LLMs spew the same nonsense
Knowledge from blue collar trades seems often to in those three buckets. For subjects in healthcare, on the other hand, there are rooms worth of peer reviewed research, textbooks, meta studies, and official sources.
honestly I think these things cause a form of Gell Mann's Amnesia where when you use them for something you know already, the errors are obvious, but when you use them for something you don't understand already, the output is sufficiently plausible that you can't tell you're being misled.
this makes the tool only useful for things you already know! I mean, just in this thread there's an anecdote from a guy who used it to check a diagnosis, but did he press through other possibilities or ask different questions because the answer was already known?
The great thing is the models are sufficiently different enough, that when multiple come to the same conclusion, there is a good chance that conclusion is bound by real data.
And I think this is the advice that should always be doled out when using them for anything mission critical, legal, etc.
All the models are pre-trained on the same one Internet.
"Bound by real data" meaning not hallucinations, which is by far the bigger issue when it comes to "be an expert that does x" that doesn't have a real capability to say "I don't know".
The chance of different models hallucinating the same plausible sounding but incorrect building codes, medical diagnoses, etc, would be incredible unlikely, due to arch differences, training approaches, etc.
So when two concur in that manner, unless they're leaning heavily on the same poisoned datasets, there's a healthy chance the result is correct based on a preponderance of known data.
I'd frame it such that LLM advice is best when it's the type that can be quickly or easily confirmed. Like a pointer in the right (or wrong) direction. If it was false, then try again - quick iterations. Taking it at its "word" is the potentially harmful bit.
Usually something as simple as saying “now give me a devils advocate resoonse” will help and of course “verify your answer on the internet” will give you real sources that you can verify.
I have very mild cerebral palsy[1], the doctors were wrong about so many things with my diagnosis back in the mid to late 70s when I was born. My mom (a retired math teacher now with an MBA back then) had to go physically to different libraries out of town and colleges to do research. In 2025, she could have done the same research with ChatGPT and surfaced outside links that’s almost impossible via a web search.
Every web search on CP is inundated with slimy lawyers.
[1] it affects my left hand and slightly my left foot. Properly conditioned, I can run a decent 10 minute mile up to a 15K before the slight unbalance bothers me and I was a part time fitness instructor when I was younger.
The doctor said I was developmentally disabled - I graduated in the top of my class (south GA so take that as you will)
It’s funny you should say that because I have been using it in the way you describe. I kind of know it could be wrong, but I’m kind of desperate for info so I consult Claude anyway. After stressing hard I realize it was probably wrong, find someone who knows what they’re and actually on about and course correct.
This is a big mistake. This is one of the best things about ChatGPT. If they don’t offer it, then someone else will and eventually I’m sure Sam Altman will change his mind and start supporting it again.
This is disappointing. Much legal and medical advice given by professionals is wrong, misleading, etc. The bar isn't high. This is a mistake.
Good. Techies need to stop thinking that an LLM should not be immune from requiring licensing. Until OpenAI can (and should) be sued for medical malpractice or lawyering without passing the bar, they will have no skin in the game to actually care. A disclaimer of "this is not a therapist" should not be enough to CYA.
anyone wanna form a software engineering guild, then lobby to need a license granted by the guild to practice?
Sorry but you’re not gonna get me to agree that medical licensing is a bad idea. I don’t want quacks more than we already do. Stick to the argument and not add in your “what about” software engineers.
I am being serious...
the damage certain software engineers could do certainly surpasses most doctors
Ah sorry, I misread it as coming from someone who doesn't want licensing, so you were appealing to HN by switching to software engineers (and I know many on here loathe to think anything beyond "move fast and break things", which is the opposite of most (non-software) engineers.
But yeah, I'd be down for at least some code of ethics, so we could have "do no harm" instead of "experiment on the mental states of children/adolescents/adults via algorithms and then do whatever is most addictive"
> But yeah, I'd be down for at least some code of ethics, so we could have "do no harm" instead of "experiment on the mental states of children/adolescents/adults via algorithms and then do whatever is most addictive"
absolutely
if the only way to make people stop building evil (like your example) is to make individuals personally liable, then so be it
When OpenAI is done getting rid of all the cases where its AI gives dangerously wrong advice about licensed professions, all that will be left is the cases where its AI gives dangerously wrong advice about unlicensed professions.
maybe that is why they opened the system to porn, as everything else will be soon gone.
[flagged]
An AI-related bromide poisoning incident earlier this year: “Inspired by his history of studying nutrition in college, he decided to conduct a personal experiment to eliminate chloride from his diet. For 3 months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT, in which he had read that chloride can be swapped with bromide, though likely for other purposes, such as cleaning… However, when we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide. Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do.”
https://www.acpjournals.org/doi/10.7326/aimcc.2024.1260
Aka software engineers…
They are basically prohibiting commercial use of their product. How the fuck are they ever going to even prove that you use it to generate money?
Same way commercial software vendors have done for decades?
[dead]
This is typical medical "cartel" (i.e. gang/mafia) type of a move and I hope it does not last, since any other AI's do not get restricted in "do not look up" way, this kind of practice won't stand a chance for very long.
N/A
Edit: Parent has edited out the comment ranting about "the normal people using chatGPT as a modern WebMD".
This is not shutting anything down other than businesses using ChatGPT to give medical advice [0].
Users can still ask questions, get answers, but the terms have been made clearer around reuse of that response (you cannot claim that it is medical advice).
I imagine that a startup that specialises in "medical advice" infers an even greater level of trust than simply asking ChatGPT, especially to "the normal people".
0: https://lifehacker.com/tech/chatgpt-can-still-give-legal-and...
The thing is that if you are giving professional advice in US - legal, financial, medical - the other party can sue you for wrong or misleading advice. In that scenario, this leaves Openai exposed to a lawsuit, and this change seemingly eliminates that.
Yeah, that clearly makes sense from OpenAI's perspective.