• darkdemize@sh.itjust.works · 5 hours ago

    If they are training the AI with copyrighted data that they aren’t paying for, then yes, they are doing the same thing as traditional media piracy. While I think piracy laws have been grossly blown out of proportion by entities such as the RIAA and MPAA, these AI companies shouldn’t get a pass for doing on a massive scale what Joe Schmoe would get fined thousands of dollars for doing on a small one.

    • taladar@sh.itjust.works · 4 hours ago

      In fact, when you consider how organizations like the RIAA and MPAA like to calculate damages based on lost potential sales they pull out of thin air, training an AI that might generate entire songs competing with their existing catalog should be even worse. (Not that I want to encourage more of that kind of bullshit potential-sales argument.)

    • FaceDeer@fedia.io · 4 hours ago

      The act of copying the data without paying for it (assuming it’s something you need to pay for to get a copy of) is piracy, yes. But the training of an AI is not piracy because no copying takes place.

      A lot of people have a very vague, nebulous concept of what copyright is all about. It isn’t a generalized “you should be able to get money whenever anyone does anything with something you thought of” law. It’s all about making and distributing copies of the data.

      • ultranaut@lemmy.world · 4 hours ago

        Where the training data comes from seems like the main issue, rather than the training itself. Copying has to take place somewhere for that data to exist. I’m no fan of the current IP regime, but it seems like an obvious problem if you get caught making money off terabytes of content you don’t have a license for.

        • FaceDeer@fedia.io · 2 hours ago

          A lot of the griping about AI training involves data that’s been freely published. Stable Diffusion, for example, trained on public images available on the internet for anyone to view, yet its release led to all manner of ill-informed public outrage. LLMs train on public forums and news sites. But people have this notion that copyright gives them some kind of absolute control over the stuff they “own”, and they suddenly see a way to demand a pound of flesh for what they previously posted in public. It’s just not so.

          I have the right to analyze what I see. I strongly oppose any move to restrict that right.

        • FaceDeer@fedia.io · 2 hours ago

          Streaming involves distributing copies, so I don’t see why it would be. The law has been well tested in this area.