• @surewhynotlem@lemmy.world
    2 points · 24 days ago

    Watching videos of rape doesn’t create a new victim. But we consider it additional abuse of an existing victim.

    So take that video and modify it a bit. Color correct or something. That’s still abuse, right?

    So the question is, at what point in modifying the video does it become not abuse? When you can’t recognize the person? But I think simply blurring the face wouldn’t suffice. So when?

    That’s the gray area. AI is trained on images of abuse (we know it’s in there somewhere). So at what point can we say the modified images are okay because the abused person has been removed enough from the data?

    I can’t make that call. And because I can’t make that call, I can’t support the concept.

    • @KairuByte@lemmy.dbzer0.com
      9 points · 24 days ago

      I mean, there’s another side to this.

      Assume you have exacting control of training data. You give it consensual sexual play, including rough play, bdsm play, and cnc play. We are 100% certain the content is consensual in this hypothetical.

      Is the output a grey area, even if it seems like real rape?

      Now another hypothetical. A person closes their eyes and imagines raping someone. “Real” rape. Is that a grey area?

      Let’s build on that. Let’s say this person is a talented artist, and they draw out their imagined rape scene, which we are 100% certain is a non-consensual scene imagined by the artist. Is this a grey area?

      We can build on that further. What if they take the time to animate this scene? Is that a grey area?

      When does the above cross into a problem? Is it the AI making something that seems like rape but is built on consensual content? The thought of a person imagining a real rape? The putting of that thought onto a still image? The animating?

      Or is it none of them?

      • @surewhynotlem@lemmy.world
        0 points · 23 days ago

        Consensual training data makes it ok. I think AI companies should be accountable for curating inputs.

        Any art is ok as long as the artist consents. Even if they’re drawing horrible things, it’s just a drawing.

        Now the real question is, should we include rapes of people who have died and have no family? Because then you can’t even argue increased suffering of the victim.

        But maybe this just gets solved by curation and the “don’t be a dick” rule. Because the above sounds kinda dickish.

    • @MoonlightFox@lemmy.world
      3 points · 24 days ago

      I see the issue of how much modification is enough for it to be okay, and the gray area. I can’t make that call either, but I somewhat disagree with the black-and-white conclusion. I don’t need something to be perfectly ethical; few things are. I do, however, want to act in an ethical manner and strive to be better.

      Where do you draw the line? It sounds like you mean no AI can be used in any cases, unless all the material has been carefully vetted?

      I highly doubt that any of big tech’s AI models, of any size, are free of illegal content.

      I am not sure where I draw the line. I do want to use AI services, just not for porn.

    • @dubyakay@lemmy.ca
      3 points · 24 days ago

      It’s not just AI that can create content like that though. 3d artists have been making victimless rape slop of your vidya waifu for well over a decade now.

      • @surewhynotlem@lemmy.world
        -1 points · 23 days ago

        Yeah, I’m ok with that.

        AI doesn’t create, it modifies. You might argue that humans are the same, but I think that’d be a dismal view of human creativity. But then we’re getting weirdly philosophical.

    • @Petter1@lemm.ee
      3 points · 24 days ago

      With this logic, any output of any pic gen AI is abuse… I mean, we can be all but 100% sure there is CP in the training data (it would be a very big surprise if not), and every output is the result of all of the training data, as far as I understand the statistical behaviour of image gen AI.

        • @Petter1@lemm.ee
          1 point · 20 days ago

          Well, AI is by design not able to curate its own training data, but the companies training the models could in theory. It just isn’t feasible to sanitise such a huge stack of data.

        • @Petter1@lemm.ee
          1 point · 21 days ago

          😆as if this has something to do with that

          But to your argument: it is perfectly possible to tune capitalism with laws to make it very social.

          I mean, every “actually existing communist country” is at its core still a capitalist system. How would you argue against that?