• @weremacaque@lemmy.world · 18 points · 5 days ago (edited)

    You have thirteen hours in which to solve the labyrinth before your baby AI becomes one of us, forever.

  • Dr. Moose · 24 points · 5 days ago (edited)

    Considering how many false positives Cloudflare serves, I see nothing but misery coming from this.

    • @Xella@lemmy.world · 5 points · 4 days ago

      Lol I work in healthcare and Cloudflare regularly blocks incoming electronic orders because the clinical notes “resemble” SQL injection. Nurses type all sorts of random stuff in their notes so there’s no managing that. Drives me insane!

    • @Dave@lemmy.nz · 20 points · 5 days ago

      In terms of Lemmy instances, if your instance is behind cloudflare and you turn on AI protection, federation breaks. So their tools are not very helpful for fighting the AI scraping.

        • @Dave@lemmy.nz · 2 points · 4 days ago

          I’m not sure what can be done at the free tier. There is a switch to turn on AI bot blocking, and it breaks federation.

          You can’t whitelist domains because federation could come from any domain. Maybe you could somehow whitelist /inbox for the ActivityPub communication, but I’m not sure how to do that in Cloudflare.
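
An exception of the shape described above is normally written as a Cloudflare WAF custom rule with a Skip action. A hypothetical sketch of the rule expression follows, assuming Lemmy’s ActivityPub deliveries arrive as POSTs to /inbox; whether the free tier’s bot protection actually honors Skip rules is exactly the open question in this thread:

```
# Hypothetical WAF custom rule expression (action: Skip bot protection).
# Path and method are assumptions about Lemmy's federation endpoint.
(http.request.uri.path eq "/inbox") and (http.request.method eq "POST")
```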

  • Deebster · 19 points · 5 days ago (edited)

    So they rewrote Nepenthes (or Iocaine, Spigot, Django-llm-poison, Quixotic, Konterfai, Caddy-defender, plus inevitably some Rust versions)

    Edit, but with ✨AI✨ and apparently only true facts

    • 野麦さん · 1 point · 5 days ago (edited)

      It’s the consequences of the MIT and Apache licenses showing up in real time.

      GPL your software, people!

    • @Fluke@lemm.ee · 10 points · 4 days ago

      And consumed the power output of a medium country to do it.

      Yeah, great job! 👍

      • @LeninOnAPrayer@lemm.ee · 20 points · 4 days ago (edited)

        We truly are getting dumber as a species. We’re facing climate change but running some of the most power hungry processers in the world to spit out cooking recipes and homework answers for millions of people. All to better collect their data to sell products to them that will distract them from the climate disaster our corporations have caused. It’s really fun to watch if it wasn’t so sad.

    • IninewCrow · 10 points · 5 days ago

      They should program the actions and reactions of each system to actual battle bots and then televise the event for our entertainment.

          • Singletona082 · 2 points · 5 days ago

            Putting a chopped-down lawnmower blade in front of a thing and having it spin at hard-drive speeds is honestly kinda terrifying…

    • @supersquirrel@sopuli.xyz · 3 points · 4 days ago (edited)

      No, it is far less environmentally friendly than rc bots made of metal, plastic, and electronics full of nasty little things like batteries blasting, sawing, burning and smashing one another to pieces.

  • @surph_ninja@lemmy.world · 35 points · 4 days ago

    I’m imagining a sci-fi spin on this where AI generators are used to keep AI crawlers in a loop, and they accidentally end up creating some unique AI culture or relationship in the process.

  • @Greyfoxsolid@lemmy.world · -19 points · 4 days ago

    People complain about AI possibly being unreliable, then actively root for things that are designed to make them unreliable.

    • @DasSkelett@discuss.tchncs.de · 5 points · 4 days ago

      This will only degrade the models of bad actors who don’t follow the rules. You want to sell a good-quality AI model trained on real content instead of on other, misleading AI output? Just follow the rules ;)

      Doesn’t sound too bad to me.

    • @shads@lemy.lol · 17 points · 4 days ago

      I find this amusing. I had a conversation with an older relative who asked about AI because I am “the computer guy” he knows. I explained, as I understand it, how LLMs operate: they pattern-match to guess what the next token should be based on statistical probability. I explained that they sometimes hallucinate or go off on wild tangents because of this, and that they can be really good at aping and regurgitating things, but there is no understanding, just respinning fragments to generate a response that pleases the asker.

      He observed, “oh we are creating computer religions, just without the practical aspects of having to operate in the mundane world that have to exist before a real religion can get started. That’s good, religions that have become untethered from day to day practical life have never caused problems for anyone.”

      Which I found scarily insightful.
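
The “guess the next token” behavior described above can be sketched as a toy bigram model. This is my own illustration of the statistical idea only; a real LLM is nothing this simple:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def next_token(counts, word):
    """Pick the statistically most likely next word; None if unseen."""
    if word not in counts:
        # Out of distribution: a real LLM would still produce
        # something here, which is where "hallucination" comes in.
        return None
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(next_token(model, "the"))  # "cat": it follows "the" most often
```

The model has no notion of truth, only of frequency, which is the point being made above.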

    • katy ✨ · 4 points · 4 days ago

      i mean this is just designed to thwart ai bots that refuse to follow robots.txt rules of people who specifically blocked them.

    • ArchRecord · 9 points · 4 days ago

      Here’s the key distinction:

      This only makes AI models unreliable if they ignore “don’t scrape my site” requests. If they respect the requests of the sites they’re profiting from using the data from, then there’s no issue.

      People want AI models to not be unreliable, but they also want them to operate with integrity in the first place, and not profit from people’s work who explicitly opt-out their work from training.

      • @A_Random_Idiot@lemmy.world · -1 point · 4 days ago (edited)

        I’m a person.

        I don’t want AI, period.

        We can’t even handle humans going psycho. Last thing I want is an AI losing its shit from being overworked producing goblin tentacle porn and going full Skynet judgement day.

        Got enough on my plate dealing with a semi-sentient Olestra stain trying to recreate the Third Reich as it is.

        • ArchRecord · 4 points · 4 days ago

          We can’t even handle humans going psycho. Last thing I want is an AI losing its shit from being overworked producing goblin tentacle porn and going full Skynet judgement day.

          That is simply not how “AI” models today are structured, and that is entirely a fabrication based on science fiction related media.

          An LLM is a series of matrix multiplication problems that the tokens from a query are run through. It has no capability to be overworked, to know whether it has been used before (outside of its context window, which is just previously stored tokens added to the math problem), to change itself, or to arbitrarily access system resources.
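
A toy illustration of that statelessness (my own sketch, not how any real model is implemented): the “model” below is a pure function of fixed weights and the context it is handed, so nothing persists or mutates between calls.

```python
def toy_model(weights, context):
    """A 'model' as pure math: the output depends only on the frozen
    weights and the context tokens. No hidden memory, no self-modification,
    no access to anything outside its arguments."""
    return sum(w * t for w, t in zip(weights, context)) % 97

WEIGHTS = [3, 1, 4, 1, 5]  # fixed at "training" time

a = toy_model(WEIGHTS, [1, 2, 3, 4, 5])
b = toy_model(WEIGHTS, [1, 2, 3, 4, 5])
assert a == b  # same context in, same answer out: nothing was "remembered"
```

“Remembering” a conversation only happens because the caller re-sends the prior tokens as part of the context each time.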

            • ArchRecord · 0 points · 3 days ago

              1. Say something blatantly uninformed on an online forum
              2. Get corrected on it
              3. Make reference to how someone is perceived at parties, an entirely different atmosphere from an online forum, and think you made a point

              Good job.

              • @A_Random_Idiot@lemmy.world · 0 points · 3 days ago (edited)

                1. See someone make a comment about an AI going rogue after being forced to produce too much goblin tentacle porn
                2. Get way too serious about the factual capabilities of a goblin-tentacle-porn-generating AI.
                3. Act holier-than-thou over it while being completely oblivious to comedic hyperbole.

                Good job.

                What’s next? Calling me a fool for thinking Olestra stains are capable of sentience, because that’s not how Olestra works?

    • @tacobellhop@midwest.social · 5 points · 4 days ago (edited)

      Maybe it will learn discretion and what sarcasm is, instead of being a front-loaded Google search of 90% ads and 10% forums. It has no way of knowing if what it’s copy-pasting is full of shit.

  • @Empricorn@feddit.nl · 24 points · 5 days ago

    So we’re burning fossil fuels and destroying the planet so bots can try to deceive one another on the Internet in pursuit of our personal data. I feel like dystopian cyberpunk predictions didn’t fully understand how fucking stupid we are…

  • @umbraroze@lemmy.world · 61 points · 4 days ago

    I have no idea why the makers of LLM crawlers think it’s a good idea to ignore bot rules. The rules are there for a reason and the reasons are often more complex than “well, we just don’t want you to do that”. They’re usually more like “why would you even do that?”

    Ultimately you have to trust what the site owners say. The reason why, say, your favourite search engine returns the relevant Wikipedia pages and not bazillion random old page revisions from ages ago is that Wikipedia said “please crawl the most recent versions using canonical page names, and do not follow the links to the technical pages (including history)”. Again: Why would anyone index those?
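
The “bot rules” in question are plain robots.txt directives. An illustrative snippet follows; the AI user-agent tokens shown are publicly documented ones, the history-page path is a Wikipedia-style example, and as the thread notes, compliance is entirely voluntary:

```
# Opt out of known AI-training crawlers (honoring this is voluntary)
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Let ordinary search crawlers in, but keep them off technical pages
# (e.g. Wikipedia-style page-history URLs)
User-agent: *
Disallow: /w/index.php
```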

    • Phoenixz · 31 points · 4 days ago

      Because you are coming from the perspective of a reasonable person

      These people are billionaires who expect to get everything for free. Rules are for the plebs, just take it already

    • @EddoWagt@feddit.nl · 2 points · 3 days ago

      They want everything: if it exists but isn’t in their dataset, they want it.

      They want their AI to answer any question you could possibly ask it. Filtering out what is and isn’t useful doesn’t achieve that.

    • @T156@lemmy.world · 4 points · 3 days ago (edited)

      Because it takes work to obey the rules, and you get less data for it. A theoretical competitor could get more by ignoring them and gain some vague advantage from it.

      I’d not be surprised if the crawlers they used were bare-basic utilities set up to just grab everything without worrying about rules and the like.

  • @4am@lemm.ee · 310 points · 5 days ago

    Imagine how much power is wasted on this unfortunate necessity.

    Now imagine how much power will be wasted circumventing it.

    Fucking clown world we live in

    • @Demdaru@lemmy.world · 56 points · 5 days ago

      On one hand, yes. On the other… imagine the frustration of the management of companies making and selling AI services. This is such a sweet thing to imagine.

              • @Melvin_Ferd@lemmy.world · 0 points · 3 days ago

                Capitalist pigs are paying media to generate AI hatred to help convince you people to get behind laws that limit info sharing under the guise of IP and copyright

            • @explodicle@sh.itjust.works · 15 points · 4 days ago

              Get behind the what?

              Perhaps an AI crawler crashed Melvin’s machine halfway through the reply, denying that information to everyone else!

        • @CileTheSane@lemmy.ca · 5 points · 5 days ago

          Because it’s not AI, it’s LLMs, and all LLMs do is guess what word most likely comes next in a sentence. That’s why they are terrible at answering questions and do things like suggest adding glue to the cheese on your pizza because somewhere in the training data some idiot said that.

          The training data for LLMs come from the internet, and the internet is full of idiots.

          • @Melvin_Ferd@lemmy.world · -1 point · 4 days ago

            That’s what I do too with less accuracy and knowledge. I don’t get why I have to hate this. Feels like a bunch of cavemen telling me to hate fire because it might burn the food

            • @CileTheSane@lemmy.ca · 3 points · 3 days ago

              Because we have better methods that are easier, cheaper, and less damaging to the environment. They are solving nothing and wasting a fuckton of resources to do so.

              It’s like telling cavemen they don’t need fire because you can mount an expedition to the nearest volcano to cook food without the need for fuel, then bring it back to them.

              The best case scenario is the LLM tells you information that is already available on the internet, but 50% of the time it just makes shit up.

              • @Melvin_Ferd@lemmy.world · -1 point · 3 days ago

                Wasteful?

                Energy production is an issue. Using that energy isn’t. LLMs are a better use of energy than most of the useless shit we produce everyday.

                • @CileTheSane@lemmy.ca · 2 points · 3 days ago

                  Did the LLMs tell you that? It’s not hard to look up on your own:

                  Data centers, in particular, are responsible for an estimated 2% of electricity use in the U.S., consuming up to 50 times more energy than an average commercial building, and that number is only trending up as increasingly popular large language models (LLMs) become connected to data centers and eat up huge amounts of data. Based on current datacenter investment trends, LLMs could emit the equivalent of five billion U.S. cross-country flights in one year.

                  https://cse.engin.umich.edu/stories/power-hungry-ai-researchers-evaluate-energy-consumption-across-models

                  Far more than straightforward search engines that have the exact same information and don’t make shit up half the time.

    • @zovits@lemmy.world · 1 point · 3 days ago

      From the article it seems like they don’t generate a new labyrinth every single time: “Rather than creating this content on-demand (which could impact performance), we implemented a pre-generation pipeline that sanitizes the content to prevent any XSS vulnerabilities, and stores it in R2 for faster retrieval.”
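
A minimal sketch of that pipeline shape, with an in-memory dict standing in for the R2 bucket; the sanitization step here is plain HTML-escaping, which is an assumption, since the quoted article doesn’t say how Cloudflare actually sanitizes:

```python
import hashlib
import html

store = {}  # stands in for the R2 bucket

def pregenerate(pages):
    """Sanitize decoy pages ahead of time and store them under stable keys."""
    for text in pages:
        safe = html.escape(text)  # neutralize markup to prevent XSS
        key = hashlib.sha256(safe.encode()).hexdigest()
        store[key] = safe
    return list(store)

def serve(key):
    """At request time, just fetch: no generation on the hot path."""
    return store.get(key)

keys = pregenerate(["<script>alert(1)</script>", "Decoy facts about beetles."])
assert "<script>" not in serve(keys[0])  # stored copy is already sanitized
```

The performance point in the quote falls out of this shape: the expensive generation and sanitization happen once, offline, and the request path is a key lookup.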

  • @x0x7@lemmy.world · 4 points · 3 days ago

    Joke’s on them. I’m going to use AI to estimate the value of content, and now I’ll get the kind of content I want, though fake, that they will have to generate.