Jesus fuck. As someone who works in biotech: this is probably going to kill people.
Like why don’t we just have ACTUAL DOCTORS and techs trained to do this actually test it? Wild concept, I know.
Ah great, horrors beyond my imagination.
He could see AI being used more immediately to address certain “low-hanging fruit,” such as checking for application completeness. “Something as trivial as that could expedite the return of feedback to the submitters based on things that need to be addressed to make the application complete,” he says. More sophisticated uses would need to be developed, tested, and proved out.
Oh no, the dystopian horror…
LLMs also make a lot of mistakes even when used for text analysis, and since the tech sector loves the “move fast and break things” mantra, this will be put into practice much earlier than it should be.
I hate this timeline.
damn i see that chatbots don’t want to fall behind rfk jr in body count
will they learn that safety regulations are written in blood? who am i kidding, that’s not their blood
If it’s trained carefully, professionally, and responsibly, exclusively on bona fide medical research data, I can see it being a boon to healthcare professionals. I just don’t know if I can trust that to happen in the timeline we live in.
OpenEvidence is a legit tool my colleagues and classmates use every day. OpenAI is leagues behind them, especially in terms of HIPAA compliance.