As LLMs become the go-to for quick answers, fewer people are posting questions on forums or social media. This shift could make online searches less fruitful in the future, with fewer discussions and solutions available publicly. Imagine troubleshooting a tech issue and finding nothing online because everyone else asked an LLM instead. You do the same, but the LLM only knows the manual, offering no further help. Stuck, you contact tech support, wait weeks for a reply, and the cycle continues—no new training data for LLMs or new pages for search engines to index. Could this lead to a future where both search results and LLMs are less effective?

  • FaceDeer@fedia.io · 4 days ago

    However, a human would also need to verify that the generated solution actually solves a problem.

    That’s already an issue with human-generated answers to problems. :)

    “Verification” could be done by an AI agent too, though, as I described above. Depends on the sort of problem: a programming solution can be tested in a simple sandbox, while a medical solution would require a bit more effort to validate (whether by human or by AI).
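    The sandbox idea above can be sketched in a few lines. This is a minimal illustration (the `verify_candidate` helper and `solve` function name are made up for the example, not from any real framework), and a real sandbox would need process isolation and timeouts rather than a bare `exec`:

    ```python
    # Minimal sketch: check a generated solution against known test cases.
    # NOTE: exec() is NOT real isolation; a production sandbox would run the
    # candidate in a separate, resource-limited process.

    def verify_candidate(source: str, tests: list[tuple[tuple, object]]) -> bool:
        """Exec generated code defining `solve`, then check it against tests."""
        namespace: dict = {}
        try:
            exec(source, namespace)        # run the generated code
            solve = namespace["solve"]     # the function we asked the LLM for
            return all(solve(*args) == expected for args, expected in tests)
        except Exception:
            return False                   # any crash or wrong answer = reject

    candidate = "def solve(a, b):\n    return a + b\n"
    print(verify_candidate(candidate, [((1, 2), 3), ((0, 0), 0)]))  # True
    ```

    The point is only that machine-checkable domains make automated verification cheap; domains without an executable oracle (medicine, law) don’t get this shortcut.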

    I just don’t think current LLMs are quite smart enough yet.

    Certainly, we’re both speculating about future developments here.