As LLMs become the go-to for quick answers, fewer people are posting questions on forums or social media. This shift could make online searches less fruitful in the future, with fewer discussions and solutions available publicly. Imagine troubleshooting a tech issue and finding nothing online because everyone else asked an LLM instead. You do the same, but the LLM only knows the manual, offering no further help. Stuck, you contact tech support, wait weeks for a reply, and the cycle continues—no new training data for LLMs or new pages for search engines to index. Could this lead to a future where both search results and LLMs are less effective?

  • chaosCruiser@futurology.todayOP · 4 days ago

    That is an option, and undoubtedly some people will continue to do that. It’s just that the number of those people might go down in the future.

    Some people much prefer forums and the like to LLMs, so that number probably won’t drop to zero. Still, someone has to write the first answer so that other people can eventually benefit from it.

    What if it’s a very new product and a new problem? Back in the old days, the question would quickly get asked in the only place where you could ask it: the forums. Nowadays, the first person to even discover the problem might not be the forum type. They might just try all the other methods first and find nothing of value. That’s the scenario I was mainly thinking of.

    • FaceDeer@fedia.io · 4 days ago

      I did suggest a possible solution to this: the AI search agent itself could post a question in a forum somewhere if it has been unable to find an answer.

      This isn’t a feature of mainstream AI search agents yet, but I’ve been following development, and hobbyists are already doing this sort of thing. Agentic AI workflows can be a lot more sophisticated than a simple “do a search, summarize the results.” An AI agent could even try to solve the problem itself: reading source code, running tests in a sandbox, and so forth. If it figures out a solution that it didn’t find online, maybe it could even post answers to some of those unanswered forum questions. Assuming the forum doesn’t ban AI, of course.
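      A workflow like that could be sketched very roughly as follows. Every helper function here is an invented placeholder, not any real agent framework’s API:

```python
# Hypothetical sketch of the "search, then escalate" agent loop described
# above. Every helper is a stand-in, not a real API.

KNOWN_ANSWERS = {
    "how do I reset the router?": ["Hold the reset button for 10 seconds."],
}

def search_web(question):
    """Stand-in for an ordinary web search; returns candidate answers."""
    return KNOWN_ANSWERS.get(question, [])

def try_solve_in_sandbox(question):
    """Stand-in for the agent attempting a fix itself (reading source,
    running tests in a sandbox). Returns a solution string or None."""
    return None  # pretend autonomous solving failed

def post_to_forum(question):
    """Stand-in for the fallback: ask humans, creating new public data."""
    return "Posted to forum: " + question

def answer(question):
    hits = search_web(question)                # 1. try existing online answers
    if hits:
        return hits[0]
    solution = try_solve_in_sandbox(question)  # 2. try to solve it directly
    if solution:
        return solution
    return post_to_forum(question)             # 3. escalate to a human forum

print(answer("how do I reset the router?"))     # found online
print(answer("why does the new firmware crash?"))  # falls back to the forum
```

      The point of step 3 is exactly the feedback loop under discussion: the escalated question becomes a new public page for both search engines and future training runs.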

      Basically, I think this is a case of extrapolating problems without also extrapolating the possibilities of solutions. Like the old Malthusian scenario: Malthus projected population growth without accounting for the fact that, as demand for food rose, new technologies would also be developed to make food production more productive. We won’t get to a situation where most people are using LLMs for answers without LLMs being good at giving answers.

      • chaosCruiser@futurology.todayOP · 4 days ago

        This idea about automated forum posts and answers could work. However, a human would also need to verify that the generated solution actually solves the problem. There are still some pretty big ifs and buts here, but I assume it could work. I just don’t think current LLMs are quite smart enough yet. It’s a fast-moving target, though, and new capabilities are being added on a daily basis, so it might not take very long until we get there.

        • FaceDeer@fedia.io · 4 days ago

          However, a human would also need to verify that the generated solution actually solves a problem.

          That’s already an issue with human-generated answers to problems. :)

          “Verification” could be done by an AI agent too, though, as I described above. Depends on the sort of problem. A programming solution can be tested in a simple sandbox, a medical solution would require a bit more effort to validate (whether by human or by AI).

          I just don’t think current LLMs are quite smart enough yet.

          Certainly, we’re both speculating about future developments here.