> As a nurse, you end up relying on intuition a lot. It’s in the way a patient says something, or just a feeling you get from how they look
There is a longstanding tension between those who believe human intuition is trustworthy and the “checklist manifesto” folks. Personally I want room for both: there are plenty of cases where, for example, the nurse's or doctor's intuition fails, they forget to ask about travel or outdoor activities, and they miss some obvious tropical disease, or something situational like Lyme disease.
I’ve spent a fair amount of time in a hospital and the human touch is really invaluable. My hope is that AI can displace the busywork and leave nurses more time to do the actual care.
But a concrete example of the thing an AI will struggle with is looking at the overlapping pain med schedule, spotting that the patient has not been exhibiting or complaining of pain, and delaying one med a couple hours from the scheduled time to make the night schedule more pleasant for the patient. It’s hard to quantify the tradeoffs here! (Maybe you could argue the patient should be given a digital menu to request this kind of thing…)
This is a problem with management, not AI.
The acuity system obviously doesn't work well and wasn't properly rolled out. It's clear that they did not even explain how it was supposed to work. That's a problem with that system and its deployment, not AI in general.
Recording verbal conversations instead of making doctors and nurses always type things is surely the result of a massive portion of doctors saying that record keeping was too awkward and time intensive. It is not logical to assume that there is a privacy concern that overrides the time saving and safety aspect of doing that. People make that assumption because they are pre-conditioned against surveillance and are not considering physician burnout with record keeping systems.
It's true that there are large gaps in AI capability and that software rollouts are quite difficult and poor implementation can cause a significant burden on medical professionals as it has here. I actually think if it's as bad as he says with the acuity then that puts patients in danger and should result in firings or lawsuits.
But that doesn't mean that AI isn't useful and won't continue to become more useful.
I did a bunch of research essays on medical uses of AI/ML and I'm not terrified; in fact, the single most significant use of these technologies is probably in or around healthcare. One of the most cited uses is expert analysis of medical imaging, especially breast cancer imaging. There is a lot of context to unpack around breast cancer imaging, or more succinctly put, controversial drama! The fact is there is a statistically high rate of false positives in breast cancer diagnoses made by human doctors. That reality drove a big policy shift toward screening women less often, depending on their age, or something like that, because so many women ended up having breast surgery over what turned out to be a false positive. The old saying that to make an omelette one must break a few eggs sometimes gets used here, and it's a terrible euphemism.

AI has proven to be good at reading medical images, and in the case of breast cancer it seems to outperform humans. Of course the humans have a monotonous job reviewing image after image, and they would rather be safe than sorry later, so of course they produce a lot of false positives. The machines never get tired, they never get biased (that one is a bone of contention), and they never stop. Ultimately a human doctor still has to review the images; the machine simply flags when the doctor is being too aggressive in diagnosis, or possibly missing something, and the case gets escalated if there is any disparity. The outcomes from early studies are encouraging, but these studies take years and are very expensive. One of the biggest problems is that the technology proficiency of medical staff is low, so we are now in a situation where software engineers are cross-training to the level of a nurse, or in rare cases even a doctor.
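To make the false-positive point concrete: at screening prevalence, even a careful reader generates far more false alarms than true detections. A rough back-of-the-envelope sketch in Python (all numbers are illustrative assumptions, not taken from any study):

```python
# Illustrative only: why screening at low prevalence produces many false positives.
# The rates below are made-up assumptions for the sake of the arithmetic.

prevalence = 0.005        # assume ~5 cancers per 1,000 women screened
sensitivity = 0.87        # assumed reader sensitivity
specificity = 0.89        # assumed reader specificity
screened = 100_000

with_cancer = screened * prevalence
without_cancer = screened - with_cancer

true_positives = with_cancer * sensitivity
false_positives = without_cancer * (1 - specificity)

# Positive predictive value: of all positive reads, how many are actually cancer?
ppv = true_positives / (true_positives + false_positives)
print(f"true positives:  {true_positives:.0f}")
print(f"false positives: {false_positives:.0f}")
print(f"PPV: {ppv:.1%}")  # most positive calls are false alarms at this prevalence
```

Under these assumed numbers, a positive read is right only a few percent of the time, which is why "AI second reader, escalate on disagreement" setups are attractive.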
> We didn’t call it AI at first. The first thing that happened was these new innovations just crept into our electronic medical record system. They were tools that monitored whether specific steps in patient treatment were being followed. If something was missed or hadn’t been done, the AI would send an alert. It was very primitive, and it was there to stop patients falling through the cracks.
Journalists LOVED The Checklist Manifesto when it came out in 2009, I guess if you call it AI then they will hate it? Similarly, in the early 2020s intuition was bad because of implicit bias, but now I guess it is good?
I think something this article demonstrates is how AI implementation is building resistance to AI, because AI is being forced onto people instead of being demanded by them. Typically, the people doing the forcing don't understand the job that the people being forced to adopt AI actually perform.
“Physician burnout” from documentation was the excuse for AI adoption. Stop using Citrix or VMware or whatever, and make a responsive EMR where you don’t have to click buttons like a monkey.
100% agree AI will ruin healthcare. I'm an IT director at a rural mental health clinic and I see the push for AI across my state, and it's scary what they want. All I can do is push back. Healthcare is a case-by-case personal connection, something AI can't do. It only reduces humans down to numbers and operates on that. There is no difference between healthcare AI and a web scraper pointed at WebMD or the Mayo Clinic.
This article feels “ripped from today’s headlines” for me, as my mother-in-law was recently in the ICU after a fall that caused head trauma. The level of AI-driven automated decision-making is unsettling, especially as it seems to allow large organizations to deflect accountability—“See? The AI made us do it!” I’m not entirely sure what guided her care—or lack thereof—but, as someone who frequently works in healthcare IT, I see these issues raised all the time.
On the other hand, having access to my own “AI” was incredibly helpful during her incident. While in the ICU, speaking with her doctors, I used ChatGPT and Claude to become a better advocate for her by asking more informed questions. I could even take pictures of the monitors tracking her vitals, and ChatGPT helped me interpret the readings, which was surprisingly useful.
In this “AI-first” world we’re heading into, individuals need their own tools to navigate the asymmetric power dynamic with large organizations. I wonder how long it will be until these public AI models get “tweaked” to limit their effectiveness in helping us question “the man.”
This is the same technology story told thousands of times a day with nearly every technology. Medical seems to be especially bad at this.
Take a very promising technology that could be very useful. Jump on it early without even trying to get buy in and without fully understanding the people that will use it. Then push a poor version of it.
Now the nurses hate the tech, not the poor implementation of it. The techies then bypass the nurses because they are difficult, even though they could be their best resource for improvement.
AI is a tool. Doctors can use the tool to ensure they haven't overlooked anything. At the end of the day, it's still doctors who are practicing medicine and are responsible for treatment.
Yes, there are a lot of bridges we need to cross with regard to best practices for using semi-intelligent tools. These tools are in their infancy, so I expect there's going to be a lot we learn over the next five to ten years, and a lot of policy and procedure that gets put in place.
Everyone should be terrified. The "promise" of AI is the following: remove any kind of remaining communication between humans, because that is "inefficient", and replace it with an AI that will mediate all human interactions (in business and even in other areas). In a few years, AIs trained by big corps will run the show and humans will be required to interface with them to do anything of value. Similar to what they want to do nowadays with mobile/enterprise systems, but at a much deeper level.
> There’s a proper way to do this.
Is there? Seems like people will complain however fast you roll out AI, so you might as well roll it out quickly and get it over with.
Am I reading incorrectly or does this entire article come down to:
1. A calculated patient acuity score
2. Speech-based note-taking
I didn’t see any other AI taking over the hospital.
Healthcare is a massive cost for people, businesses, and governments.
>So we basically just become operators of the machines.
Driving down the cost of manufacturing through process standardization and automation brought down the cost of consumer goods, and the value of labor along with it.
If you don't think this is coming for every single area of business, you're foolish. Driving down labor costs is the golden goose. We've been able to collect some eggs through technology, but AI and the like will be able to cut that goose open and take all the eggs.
"We didn’t call it AI at first." Because the first things described in the article are not AI. They are ML at most.
Then the article discusses a patient needs scoring method, moving from their own Low/Medium/High model to a scoring method on an unbounded linear scale. The author appears to struggle to tell whether 240 is high or not. They don't state whether they ever had training or saw documentation for the scoring method. It seems odd not to have those things, but if they did, the scores would be a lot easier to interpret.
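If the complaint is that nobody can tell whether 240 is high, one obvious mitigation is to translate the raw score back into the Low/Medium/High bands staff already understood, using the unit's own score history. A hypothetical sketch (the data, thresholds, and function are all invented for illustration; this is not how Epic's acuity scoring actually works):

```python
import numpy as np

# Hypothetical: map an unbounded acuity score onto familiar Low/Medium/High
# bands using percentiles of the unit's own historical scores. The history
# and cutoffs are invented; a real version would need clinical validation.

historical_scores = np.array([120, 150, 180, 200, 210, 230, 260, 300, 340, 410])

def to_band(score, history, low_pct=50, high_pct=85):
    low_cut = np.percentile(history, low_pct)
    high_cut = np.percentile(history, high_pct)
    if score < low_cut:
        return "Low"
    if score < high_cut:
        return "Medium"
    return "High"

print(to_band(240, historical_scores))  # shows where 240 sits relative to this unit's history
```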
Then they finally get to AI, and it's a pilot scheme for writing patient notes. That's all. If it sucks and hallucinates information it's not going to go live anywhere. No matter how many tech bros try to force it through. If the feedback model for the pilot is bad then the author should take issue with that. It's important that such tests give testers an adequate method to flag issues.
Very much an AI = bad article. AI converging with medical technology is a really dangerous space, for obvious reasons. An article like this does make me worry it's being rushed through, not because of the author's objections, but because of their ignorance of what is and isn't AI, and, on the other side, because of the apparent lack of consultation offered by the technology providers even during testing stages.
There's this earthquake phrase in past tense:
> We felt like we had agency.
This article is total bullshit.
The author uses "AI" as a shortcut for "technology that I don't understand." EPIC is an EMR, and apparently scores patients using an "algorithm." Let's call that "AI" because AI is hot.
Scribe sounds like a transcriber. Oh boy. I know of offices that have a literal scribe, a person whose job it is to follow a doc around and transcribe to Epic. Automated? Why not.
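For context, the transcription half of an automated scribe is basically off-the-shelf speech-to-text. A minimal sketch using the open-source whisper library as a stand-in (the article doesn't say what the hospital's Scribe product actually runs on, and the audio file name here is made up):

```python
# Minimal sketch of automated transcription using OpenAI's open-source whisper
# library as a stand-in; not the hospital's actual "Scribe" product.
# pip install openai-whisper
import whisper

model = whisper.load_model("base")                 # small general-purpose model
result = model.transcribe("visit_recording.wav")   # hypothetical audio file
print(result["text"])                              # raw transcript; a real scribe tool
                                                   # would then summarize this into a note
```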
Having just been in the ICU with a family member for 60 days, I can say that nurses are good at some things and horrible at other things. And big picture thinking isn't something most nurses seem to be good at. Listening to nurses talk about what's wrong in healthcare is like talking to soldiers about what's wrong in the military.
What terrifies me is people will turn their brains off and blindly trust AI.
As an investor in healthcare AI companies, I actually completely agree that there's a lot of bad implementations of AI in healthcare settings, and what practitioners call "alarm fatigue" as well as the feeling of loss of agency is a huge thing. I see a lot of healthcare orgs right now roll out some "AI" "solution" in isolation that raises one metric of interest, but fails to measure a bunch of other systemic measures.
Two thoughts: 1: I think the industry could take cues from aerospace and the human factors research that's drastically improved safety there -- autopilot and autoland systems in commercial airliners are treated as one part of a holistic system, together with the pilot, first officer, and flight attendants, that keeps the plane running smoothly. Too few healthcare AI systems are evaluated holistically.
2: Similarly, if you're going to roll out a system, either there's staff buy-in, or the equilibrium level of some quality/outcomes/compliance measure should increase enough to justify the staff angst and loss of agency. Not all AI systems are bad. One "AI" company we invested in, Navina, is actually loved by the physicians using it, but the team also spent a LOT of time doing UX research and gathering feedback from actual users, and the support team is always super responsive.