They need to lose their licenses.
Everyone anywhere using one on the job should be fired, but medical personnel are endangering people.
The best use of AI at the moment is as a tool to search and present data faster than humanly possible, not to act on its findings blindly.
It’s not as simple as saying anyone using AI should be fired. This needs a more nuanced approach. It wholly depends on what the GP did with the information it presented.
An example: back in the day GPs had a huge book of knowledge they would defer to that was peer-reviewed and therefore trusted. If you came in with an odd symptom they’d spend time (often in front of you) flipping through the book to find that elusive disease they read about that one time at university. Later that knowledge moved to a traditional search engine. Why wouldn’t you now use AI to make that search faster? The AI can easily be trained on this same corpus of knowledge.
Of course the GP should double-check what they are being told. But simply using AI is not the problem you make it out to be. If a GP uses a corpus of knowledge in a dangerous way then the GP should be fired. But you don’t then burn the book they found the information in.
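To make the “search a trusted corpus” idea concrete, here’s a toy Python sketch (the corpus entries, conditions, and scoring are invented for illustration, nothing medical about them): the tool only retrieves entries from a vetted reference and hands back their sources; it never generates an answer of its own.

```python
# Toy sketch: retrieval over a small trusted, citable corpus.
# Entries and sources below are made up for the example.

def search(corpus, symptoms):
    """Rank corpus entries by how many query terms they mention."""
    terms = {t.lower() for t in symptoms}
    scored = []
    for entry in corpus:
        text = (entry["condition"] + " " + entry["notes"]).lower()
        score = sum(1 for t in terms if t in text)
        if score:
            scored.append((score, entry))
    # Best matches first; every result carries its source with it
    return [e for _, e in sorted(scored, key=lambda pair: -pair[0])]

corpus = [
    {"condition": "Condition A", "notes": "fever rash joint pain",
     "source": "Reference Handbook, ch. 3"},
    {"condition": "Condition B", "notes": "cough fever fatigue",
     "source": "Reference Handbook, ch. 7"},
]

for hit in search(corpus, ["fever", "rash"]):
    print(hit["condition"], "-", hit["source"])
```

The point isn’t the scoring (a real system would use proper retrieval); it’s that every answer comes back with a citation the GP can go and check, which is exactly the step a chatbot skips.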
AI != LLM
I think the difference here is that medical reference material is based on a long process of proven research. It can be trusted as a reliable source of information.
AI tools, however, are so new that they haven’t faced anything like the same level of scrutiny. For now they can’t be considered reliable, and their use should be kept within proper medical trials until we understand them better.
Yes, human error will always be an issue, but stacking it on top of the currently shaky foundations of AI only compounds the problem.
Let’s not forget that AI is known not only for failing to provide sources, or even fabricating them, but now also for flat-out lying.
Our GPs are already mostly running on a tick-box system where they feed your information (but only the stuff on the most recent page of your file; looking any further is too much like hard work) into their programme, and it, rather than the patient or a trained physician, tells them what we need. Remove GPs from the patients any more and they’re basically just giving the same generic, and often wildly incorrect, advice we could find on WebMD.
Indeed. GPs have been doing this for a long time. It’s nothing new, and expecting every GP to know every single ailment that humanity has ever experienced, to recall it quickly, and immediately know the course of action to take, is unreasonable.
Like you say, if they’re blindly following a generic ChatGPT instance, then that’s bad.
If they’re aiding their search using an LLM that has been trained on a good medical dataset, then taking that and looking more into it, then there’s no issue.
People have become so reactionary to LLMs and other ‘AI’ stuff. It seems there’s an “omg it’s so cool, everybody should use it to the max. Let’s blindly trust it!” camp and an “it’s awful and shouldn’t exist, burn it all! No algorithms or machine learning anywhere. New tech is bad!” camp. Both camps are just as stupid. There’s zero nuance in the discussion about this stuff, and it’s tiring.
Exactly. Love the username BTW.
Well said.
You can build excellent expert systems that will definitely help a doctor remember all the illnesses, know what questions to ask to narrow things down or double check it’s not something weird, and provide options for treatment.
These exist and are good
ChatGPT isn’t an expert system, and doctors using it like one need a serious warning from the GMC and would eventually need to be struck off, same as if they used Ouija boards or bones to diagnose illnesses.
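For anyone curious what that looks like under the hood, here’s a deliberately tiny Python sketch of the rule-based idea (the rules and condition names are invented for illustration, nothing medical about them): deterministic rules, a traceable answer, and a question-picking step that narrows the candidates, which is exactly what a generative chatbot doesn’t give you.

```python
# Toy rule-based "expert system" sketch. Each rule maps a condition
# to the findings it requires. All rules here are made up.

RULES = {
    "Condition X": {"fever", "rash"},
    "Condition Y": {"fever", "cough"},
    "Condition Z": {"rash", "itching"},
}

def consistent(present, absent):
    """Rules not ruled out by findings known to be absent."""
    return {c: req for c, req in RULES.items() if not (req & absent)}

def next_question(present, absent):
    """Ask about the unknown finding shared by the most live candidates."""
    live = consistent(present, absent)
    unknown = {}
    for req in live.values():
        for f in req - present:
            unknown[f] = unknown.get(f, 0) + 1
    return max(unknown, key=unknown.get) if unknown else None

def diagnoses(present, absent):
    """Candidates whose every required finding is confirmed present."""
    return [c for c, req in consistent(present, absent).items()
            if req <= present]

# A short consultation: fever confirmed, cough ruled out
print(next_question({"fever"}, {"cough"}))      # suggests asking about "rash"
print(diagnoses({"fever", "rash"}, {"cough"}))  # ['Condition X']
```

Every step is inspectable: you can point at the exact rule that fired, which is what makes these systems auditable in a way an LLM’s output isn’t.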
Any examples off the top of your head? I would assume/speculate they are fairly expensive?
‘Everyone anywhere’? That’s an amazingly broad statement. What’re you defining as ‘using one’? If I use ChatGPT to rewrite a paragraph, should I be fired? What about if a non-native speaker uses it to remove grammatical errors from an email, should they be fired? How about using it for assisting with coding errors? Or generating draft product marketing copy? Or summarising content for third parties to make it easier to understand? Still a fireable offence? How about generating insights from data? Assistance with roadmap prioritisation? Generating summaries of meeting notes or presentations? Helping users with learning disabilities understand complex information? Or helping them with letters, emails etc? How about if I use it to remind me of tasks? Or managing my routines?
Don’t you be bringing nuance into this.
If you used an LLM to find that mistyped variable name, you deserve to lose your job. You and your family must suffer.
If you are blind and you use a screen reader with some AI features, you should be fired and that tech needs to be taken from you. You must suffer.
There’s a difference between using LLMs to edit text, provide ideas or give you information that you can double-check because you have the subject matter experience. Relying on it as a substitute for skill when something important is at stake, like someone’s well-being, is reckless at best.
Sure, but the original quote was: “Everyone anywhere using one on the job should be fired, but medical personnel is endangering people.”
There’s no nuance there; it’s just AI = bad. I agree that it shouldn’t, in its current form, be used as a substitute for skill in important situations. You’re totally right there.
I never said AI = bad. AI is much broader and contains worthwhile and non-plagiarized approaches.
If it’s worth doing, do it properly.
No you did say that.
You said anyone using an AI in any capacity should be fired. I have heard infinitely better takes from 4-year-olds on why they need more ice cream.
This is on a post about ChatGPT use. ChatGPT is from the set of LLMs, which is a subset of AI.
AI is cool. The current batch of LLMs/PISS can leave.
That’s not what was said. What was said was anybody using it in any capacity for any job should be fired.
Which is obviously a very, very stupid take.
Yep I agree. Also I love your user name.
Yes.