• FartMaster69@lemmy.dbzer0.com · 12 hours ago

    Again, I’m not fucking using it.

    I played with it when it was new but it doesn’t do anything useful, I’m perfectly capable of brainstorming on my own.

    Back to the topic at hand, do you not see how helping someone brainstorm their delusions with a sycophantic chatbot could be dangerous?

    • womjunru@lemmy.cafe · 9 hours ago

      So today I had it do a bunch of fractional math on dimensional lumber at the hardware store. While it was doing that math, it asked whether this was for the guitar project I was working on in another chat, where I had mostly been asking about magnetic polarity and various electronics, and yes, it was. It then made a different suggestion that had a big impact on what I bought. I know that’s vague, but it was a long conversation.

      Then, when I got home, my neighbor had left a half-dead plant on my stoop, because I’m apparently the neighborhood green thumb. I had never seen this plant before. I took a photo, sent it to the AI, and it told me what the plant was (yes, with sources).

      Then, while I was 3D modeling some shelf brackets, it improved my design by pointing out a possible weight-distribution issue. While correcting that, I was able to reduce material usage by about 30%.

      I don’t see any of that as “delusional.”

      But to the topic at hand: I think the conversations that groups and pairs of humans have, both online and in real life, will always be more damaging than what a single person can trick a computer into saying.

      And by tricking it… you are abusing a tool designed for a different purpose. Take kitchen knives: not meant to be murder weapons, but they certainly can be used for that purpose. Should we blame the knife?

      I also had it make you this image:

      • FartMaster69@lemmy.dbzer0.com · 2 hours ago

        I’m not saying you’re delusional; you seem to have completely lost the thread of this conversation in your defense of chatbots.

        My point is that someone who is already prone to delusional thinking can be sent down a feedback loop that affirms their delusions, making things much worse.

        • womjunru@lemmy.cafe · 1 hour ago

          I haven’t lost anything; I’m just not agreeing with you.

          I think that a person suffering from mental issues can find justification for their delusions regardless of AI. While AI does provide immediate access to information they may interpret unhealthily, that is not unlike participating in a social media echo chamber, which I would argue does more damage.

          I will give you one thing, though… I think publicly available AI models (like ChatGPT) need to cut off certain topics at some point and refuse to go any further without forcefully inserting warnings about getting professional help. But we could say the same thing about social media, haha.

          • FartMaster69@lemmy.dbzer0.com · 47 minutes ago

            Sure, but by its very nature AI cares more about saying what you want to hear than about what is true.

            Further, the problem of AI hallucinations has no solution at this time, which can be very dangerous depending on who’s using it.