• TheUnicornOfPerfidy@feddit.uk
    2 days ago

    I don’t disagree, but using AI to point at extra places that need human attention seems like a smart move to me.

    • fodor@lemmy.zip
      22 hours ago

      You said it yourself: extra places that need human attention … those need … humans, right?

      It’s easy to say “let AI find the mistakes”. But that tells us nothing at all. There’s no substance. It’s just a sales pitch for snake oil. In reality, there are various ways to leverage technology to identify errors, but that only happens through the focused actions of people who actually understand the details of what’s happening.

      And think about it here. We already have computer systems that monitor patients’ real-time data when they’re hospitalized. We already have systems that check for allergies in prescribed medication. We already have systems for all kinds of safety mechanisms. We’re already using safety tech in hospitals, so what can be inferred from a vague headline about AI doing something that’s … checks notes … already being done? … Yeah, the safe money is that it’s just a scam.

    • Sturgist@lemmy.ca
      2 days ago

      How? There are loads of actual humans who have been pointing things out for decades at this point. What makes you think the government is going to listen to an LLM when they ignore the experts who’ve been doing this for ages?