A rising movement of artists and authors is suing tech companies for training AI on their work without credit or payment

  • oracle33@lemmy.ml · 1 year ago

    While I recognize that AI art is quite obviously derivative (and, since ML pattern matching requires far more input, there’s an argument that it’s even more derivative), I really struggle to grasp how humans learning to be creative aren’t doing exactly the same thing, and what makes that okay for us (except, of course, that we’ve simply decided it’s okay).

    Maybe it’s just less obvious and auditable?

    • inspxtr@lemmy.world · 1 year ago

      I believe that with humans, the limits on our capacity to know, create, and learn, and the limited contexts in which we apply such knowledge and skills, may actually be better for creativity and relatability; knowing everything may not always be optimal, especially when it concerns subjective experience. Such limitations may also protect creators from certain copyright claims: one idea can come from many independent creators, and can be implemented in ways that are broadly similar or vastly different. And usually we, as humans, develop a work ethic of attributing the inspirations for our work. There are others who steal ideas without attribution as well, but that’s where laws come in to settle it.

      On the side of tech companies using their work for training: AI gen tech is learning at a vastly different scale, slurping up creators’ work without attributing them. If we’re talking about the mechanics of creativity, AI gen tech seems to have been given a huge advantage already. Plus, artists and creators learn and create their work, usually with some context, sometimes with meaning. Excluding commercial works, I’m not entirely sure the products AI gen tech creates carry such specificity. Maybe they do, with some interpretation?

      Anyway, I think the larger debate here is about compensation and attribution. How is it fair for big companies with a lot of money to take creators’ work, with little or no payment or attribution, and then use these technologies to make even more money?

      EDIT: replaced “AI” with “gen(erative) tech”

      • Ath47@lemmy.world · 1 year ago

        How is it fair for big companies with a lot of money to take creators’ work, with little or no payment or attribution, and then use these technologies to make even more money?

        Because those works were put online, at a publicly accessible location, and not behind a paywall or subscription. If literally anyone on the planet can see your work just by typing a URL into their browser, then you have essentially allowed them to learn from it. Also, it’s not as though copies of those works are stored away in some database somewhere; they were merely looked at for a few seconds each while a bunch of numbers went up and down in a neural network. Nowhere near enough data is kept to reproduce the original work.
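        As a toy sketch of the “numbers going up and down” idea (a one-weight caricature, nothing like a production training pipeline, with made-up example data): each gradient step nudges the model’s parameter, and afterwards only that parameter remains; the training examples themselves are not stored in the model.

```python
# A one-weight "model": training nudges a number up and down via
# gradient steps; only the final number is kept, not the examples.
def train(examples, lr=0.1, epochs=50):
    w = 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = w * x
            grad = 2 * (pred - y) * x  # derivative of squared error
            w -= lr * grad             # the number goes up or down
    return w  # a single float, however many examples were seen

# Learn y = 2x from three examples; the model ends up as just w ≈ 2.0.
w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
```

        Real models have billions of such parameters rather than one, but the update mechanics are the same in spirit.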

        Besides, if OpenAI (or other companies in the same business) had to pay a million people for the rights to use their work to train an AI model, how much do you think they’d be able to pay? A few dollars? Why bother seeking that kind of compensation at all?

    • bane_killgrind@lemmy.ml · 1 year ago

      It’s not being creative. It’s generating a statistically likely facsimile with a separate set of input parameters. It’s sampling, but keeping the same pattern of beats even if the order of the notes changes.
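      A toy sketch of what “statistically likely facsimile” means (a word-level bigram sampler with a made-up corpus, nowhere near a real model): the output sequence is new, but every transition in it is one the “training data” already contained.

```python
import random
from collections import defaultdict

# Toy "training": record which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the dog sat on the rug".split()
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:] + corpus[:1]):  # wrap around
    follows[prev].append(nxt)

def generate(start, length, seed):
    # Sampling: each step draws a statistically likely next word, so
    # different seeds give different sequences with the same "beats".
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        out.append(rng.choice(follows[out[-1]]))
    return out
```

      Change the seed and the order of the notes changes; the pattern of beats (which word may follow which) never does.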

      • admiralteal@kbin.social · 1 year ago

        Because I never think I can post this enough: let’s forget the term AI. Let’s call them Systematic Approaches to Learning Algorithms and Machine Inferences (SALAMI).

        So much confusion and prejudice is thrown into this discussion by the mere fact that they’re called AIs. I don’t believe they are intelligent any more than I believe a calculator is.

        And even if they are, the AIs don’t have the needs that humans do. So we must still value the work of humans more highly than the work of the AIs.

        • inspxtr@lemmy.world · 1 year ago

          I agree with you on both points. I fixed the text in my comment from “AI” to “generative tech”, mostly because I honestly don’t have a good grasp on what exactly can be considered intelligence.

          But your second point, I think, is more important, at least to me. We can debate what AI/AGI or whatever is; the thing that matters right now, and in the years (even months) to come, is that we as humans have multiple needs.

          We need to work, and some of our work requires generating something (code, art, blueprints, writing) that may be replaceable by these technologies really soon. Such work takes years, even decades, of training and experience, especially domain expertise that is invaluable for things like necessary human interaction, communication, and bias detection and resolution, … Yet within a couple of years, all of that effort might get replaced by a bot (one that cuts costs but may have more unintended consequences) instead of being augmented or assisted by it, and many of us would struggle to make a living while the companies that build these tools profit and benefit from them.

      • Peaces@infosec.pub (OP) · 1 year ago

        Though, at what point does sampling become coherentism, in the philosophical sense? In the end, whether an AI performs “coherently” is all that matters. I think we are amazed at ChatGPT now because of the quality of its 2021-era LLM, but that value will degrade, i.e. the output will become less “coherent” over time: model collapse.
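        A deterministic caricature of model collapse (not the real statistical mechanism, and the token distribution here is made up): if each retraining generation drops the tail it undersampled and renormalizes, rare outputs vanish and can never come back.

```python
# A "model" that is just a probability table over tokens. Each
# retraining generation loses its rarest token (standing in for the
# tail that finite sampling misses), then renormalizes the rest.
def retrain(dist):
    rarest = min(dist, key=dist.get)
    kept = {t: p for t, p in dist.items() if t != rarest}
    total = sum(kept.values())
    return {t: p / total for t, p in kept.items()}

dist = {"a": 0.4, "b": 0.3, "c": 0.15, "d": 0.1, "e": 0.05}
for _ in range(3):  # three generations trained on their own output
    dist = retrain(dist)
# Only the two most common tokens survive; the lost diversity can
# never be recovered from the collapsed model itself.
```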

    • Fonchote@lemmy.world · 1 year ago

      I agree with you; the only caveat here is that the artists mentioned say that their books were illegally obtained, which is a valid argument. I don’t see how training an AI on publicly available information is any different from a human reading or seeing said information and learning from it. That same human pirating a book, though, is illegal.

      The additional complexity here is laws that were written, and are enforced, by people who don’t fully grasp this technology. If this were traditional code, then yes, it could be a copyright issue, but the models should be trained on enough data to create derivative works.

      • 14th_cylon@lemm.ee · 1 year ago

        I don’t see how training an AI on publicly available information is any different from a human reading or seeing said information and learning from it.

        Well, the difference is that humans are a fairly well self-regulated system. As new artists are created by learning from the old ones, the old ones die, so the total number stays about the same. The new artists also have to eat, so they won’t undercut others in the industry (at least not beyond a certain line), and they cannot scale their services to the point where one artist serves all the customers and every other artist starves. That’s how human civilization has worked since its dawn.

        I hope I don’t need to describe how AI is different.

        • Skua@kbin.social · 1 year ago

          I’m not sure this argument really addresses the point. If some human artist did become so phenomenally efficient at creating art that they could match the output of the likes of Midjourney as it is today, I don’t think anybody would be complaining that they learned their craft by looking at other artists’ work. If they wouldn’t, it’s clearly not the scale of the output alone that’s the issue here.

          It’s also not reasonable to describe the art market as an infinitely and inherently self-regulating one just because artists die. Technology has severely disrupted it before. The demand for calligraphers certainly took quite a hit when the printing press was invented. The camera presumably displaced a substantial amount of the portrait market. Modern digital art tools like Photoshop facilitate an enormously increased output from a given number of artists.

    • 14th_cylon@lemm.ee · 1 year ago

      what makes that okay

      Well, the difference is that humans are a fairly well self-regulated system. As new artists are created by learning from the old ones, the old ones die, so the total number stays about the same. The new artists also have to eat, so they won’t undercut others in the industry (at least not beyond a certain line), and they cannot scale their services to the point where one artist serves all the customers and every other artist starves. That’s how human civilization has worked since its dawn.

      I hope I don’t need to describe how AI is different.

    • TheHighRoad@lemmy.world · 1 year ago

      AI is an existential threat to so many. I see it similarly to how an established worker may sabotage a talented up-and-comer to protect their position.