I don’t disagree, but using AI to point at extra places that need human attention seems like a smart move to me.
You said it yourself: extra places that need human attention … those need … humans, right?
It’s easy to say “let AI find the mistakes”. But that tells us nothing at all. There’s no substance. It’s just a sales pitch for snake oil. In reality, there are plenty of ways to leverage technology to catch errors, but that only happens through the focused work of people who actually understand the details of what’s happening.
And think about it here. We already have computer systems that monitor patients’ real-time data when they’re hospitalized. We already have systems that check prescriptions against patients’ allergies. We already have all kinds of safety mechanisms. We’re already using safety tech in hospitals, so what can be inferred from a vague headline about AI doing something that’s … checks notes … already being done? … Yeah, the safe money is that it’s just a scam.
How? There are loads of actual humans who have been pointing things out for decades at this point. What makes you think the government is going to listen to an LLM when it ignores the experts who’ve been doing this for ages?