• Kichae
    10 points · 2 years ago

    Bots picking the questions, bots answering them. They clearly understand whatever the fuck the captcha bot thinks a bus is better than I do.

    • panCatQ
      49 points · 2 years ago

      They were never a test to evade bots to begin with. Most captchas were used to train machine learning algorithms, to train the bots on! Just because it was manual labour, Google got it done for free using this bullshit captcha thingy! We sort of trained bots to read obscure texts, and kinda did the labour for corps for free!

      • @Holyhandgrenade@lemmy.world
        29 points · 2 years ago

        I heard Captcha was being used as training data for self-driving cars. Which probably explains why almost all of them ask you to identify cars, motorcycles, bridges, traffic lights, crosswalks etc.

        • panCatE
          1 point · 2 years ago

          Yeah, that's pretty much what it's being used for now.

        • @Calatia@lemmy.world
          22 points · 2 years ago

          Both are right. The older ones with squiggly letters and numbers, or that asked you to identify animals or objects, were used to train AI bots.

          The ones that ask for crosswalks, bikes, overpasses, signs etc. are used to train self-driving AI.

        • @pqdinfo@lemmy.world
          14 points · edited · 2 years ago

          Removed as a protest against the community’s support for campaigns to bring about the deaths of members of marginalized groups, and opposition to private entities working to prevent such campaigns, together with its mindless flaming and downvoting of anyone who disagrees.

          • @antonim@lemmy.world
            1 point · 2 years ago

            it would reject invalid answers

            Not quite. When I used to care and kind of tried to distort the training data, I would always select one additional picture that did not contain the desired object, and my answer would usually be accepted. I.e. they were aware that the images weren’t 100% lined up with the labels in their database, so they’d give some leeway to the users, letting them correct those potential mistakes and smooth out the data.

            it won’t let me get past without clicking on the van

            That’s your assumption. Had you not clicked on the van, maybe it would’ve let you through anyway, it’s not necessarily that strict. Or it would just give you a new captcha to solve. Either way, if your answer did not line up with what the system expected (your assumption being that they had already classified it as a bus) it would call attention to the image. So, they might send it over to a real human to check what it really is, or put it into some different combination with other vehicles to filter it out and reclassify.
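The leeway-and-reclassification behaviour described above can be sketched as a toy model (all names and the one-tile leeway threshold here are assumptions for illustration, not reCAPTCHA's actual internals):

```python
# Hypothetical sketch of "leeway" grading: the server knows labels for
# some tiles ("gold"), while other tiles are unlabelled and exist only
# to collect users' votes for later reclassification or human review.

def grade(selected, gold_positive, gold_negative, leeway=1):
    """Accept the answer if it disagrees with the known labels on at
    most `leeway` tiles; unlabelled tiles never count against the user."""
    misses = len(gold_positive - selected)       # known buses left unclicked
    false_hits = len(selected & gold_negative)   # known non-buses clicked
    return misses + false_hits <= leeway

def record_votes(selected, unlabelled, tally):
    """Collect yes/no votes on unlabelled tiles (like the disputed van)
    so the system can reclassify them or flag them for a human check."""
    for tile in unlabelled:
        tally.setdefault(tile, []).append(tile in selected)
    return tally
```

Under a model like this, selecting one extra wrong picture still passes, exactly the behaviour described above, while the vote tally on ambiguous tiles is what eventually smooths out the data.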

            • @pqdinfo@lemmy.world
              2 points · edited · 2 years ago

              Removed as a protest against the community’s support for campaigns to bring about the deaths of members of marginalized groups, and opposition to private entities working to prevent such campaigns, together with its mindless flaming and downvoting of anyone who disagrees.

          • @Honytawk@lemmy.zip
            5 points · 2 years ago

            The first captcha they already knew the answer to. The second captcha was to build the database.
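That two-word scheme, one word verified and one word harvested, might look roughly like this (a hypothetical sketch, not Google's actual code):

```python
# Hypothetical sketch of the two-word reCAPTCHA scheme described above:
# one word has a known answer and is the real test; the other is
# unlabelled, and the user's transcription is recorded to build the
# database of digitized text.

transcriptions = {}  # unlabelled word id -> list of user transcriptions

def check(known_answer, typed_known, unknown_word_id, typed_unknown):
    if typed_known.strip().lower() != known_answer.lower():
        return False  # failed the word we can actually verify: likely a bot
    # Passed the real test; keep the guess for the word nobody has labelled yet.
    transcriptions.setdefault(unknown_word_id, []).append(typed_unknown)
    return True
```

Agreement between many users' recorded guesses is what would turn the unknown word into a known one.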

          • Chris
            7 points · 2 years ago

            Pretty sure I’ve had “click all bicycles”, with a bicycle drawing on the road.

      • Draconic NEO
        1 point · 2 years ago

        I’ve found that a lot of sites use captchas or captcha-like systems to frustrate users, as a way of keeping away certain people they don’t want accessing the site (intellectual property owners), though it’s not the only tactic they use. I mean, it works: pretty much all of those sites are still up today, despite serving data that’s copyrighted by Nintendo, Sony, and other parties.

    • Fat Tony
      10 points · edited · 2 years ago

      New Captcha question: Does pressing a controller’s button harder make the character’s action more impactful?

      if answer = yes : human

      if answer = no : bot

  • Overzeetop
    50 points · 2 years ago

    There is considerable overlap between the smartest AI and the dumbest humans. The concerns over bears and trash cans in US National Parks was ahead of its time.

  • sprl
    8 points · 2 years ago

    Just today I had to do 15 different captcha tests one after the other, and they still wouldn’t validate me.

  • @Phen@lemmy.eco.br
    37 points · 2 years ago

    Curious how this study suggesting we need a new way to prevent bots came out just a few days after Google started taking shit for proposing something that, among other things, would do just that.

  • @dan1101@lemm.ee
    15 points · 2 years ago

    So is it time to get rid of them then? Usually when I encounter one of those “click the motorcycles” I just go read something else.

    • @T156@lemmy.world
      13 points · 2 years ago

      It’s a double-edged sword. Just because it doesn’t work perfectly doesn’t mean it doesn’t work.

      To a spammer, building something with the ability to break a captcha is more expensive than something that cannot, whether in terms of development time, or resource demands.

      We saw with a few Lemmy instances that they’re still good at protecting instances from bots and bot signups. Removing captchas entirely means erasing that barrier of entry that keeps a lot of bots out, and might cause more problems than it fixes.

      • @IAm_A_Complete_Idiot@sh.itjust.works
        2 points · 2 years ago

        The problem is this assumes that everyone has to build their own captcha solver. It’s definitely a bare-minimum barrier to entry, but it’s really not a sustainable solution to begin with.

  • Rhaedas
    20 points · 2 years ago

    So just keep the existing tests and change the passing ones to not get access. Checkmate robots.

    Just kidding, I welcome our robot overlords…I’ll act as your captcha gateway.

    • @hex@programming.dev
      35 points · 2 years ago

      This but unironically… The purpose literally is to train computers to get better at recognising things

    • @over_clox@lemmy.world
      9 points · 2 years ago

      The funniest part of that is the people designing the AI systems seem to be completely oblivious to the fact that they’re slowly but surely trying to eliminate their own species. ☹️

      • @sheogorath@lemmy.world
        10 points · 2 years ago

        Yes the planet got destroyed. But for a beautiful moment in time we created a lot of value for shareholders.

    • Draconic NEO
      19 points · 2 years ago

      And also to frustrate people who use anonymization techniques, including the Tor Network, to get them to turn off their protections and be more easily fingerprinted.

  • superkret
    73 points · edited · 2 years ago

    online study
    not peer reviewed
    “published” on arxiv (which is a public document server, not a journal)
    study and authors not named or linked in the article

    tl/dr: “Someone uploaded a pdf and we’re writing about it.”

    • @Zeth0s@lemmy.world
      -4 points · edited · 2 years ago

      You are overrating peer review. It’s basically a tool to help editors understand whether a paper “sells”, to improve readability, and to discard clear garbage.

      Unless the methodology is extremely flawed, peer review almost never impacts the quality of the results, as reviewers do not redo the work. From a “trustworthiness” point of view, peer review is comparable to a biased RNG. Google the actual reproducibility of published experiments and peer-review biases for more details.

      Preprints are fine, just less polished.

        • @Zeth0s@lemmy.world
          1 point · edited · 2 years ago

          Unfortunately not. https://www.nature.com/articles/533452a

          Most peer-reviewed papers are not reproducible. Peer review’s primary purpose is telling the editor how sellable a paper is in a small community he only superficially knows, and making it more attractive to that community by suggesting rephrased paragraphs, additional references, and additional supporting experiments to clarify unclear points.

          But it doesn’t guarantee the methodology is not flawed. Editors choose reviewers very superficially, reviews are mainly driven by biases, and reviewers cannot judge the quality of the research because they do not reproduce it.

          The honesty of researchers is what guarantees the quality of a paper.

          • @C4d@lemmy.world
            1 point · 2 years ago

            Yes. A senior colleague sometimes tongue-in-cheek referred to it as Pee Review.

            • @Zeth0s@lemmy.world
              1 point · edited · 2 years ago

              The downvotes on my comments show that not many people here have ever done research or know the editorial system of scientific journals :D

              • @C4d@lemmy.world
                2 points · 2 years ago

                There is some variation across disciplines; I do think that in general the process does catch a lot of frank rubbish (and discourages submission of obvious rubbish), but from time to time I do come across inherently flawed work in so-called “high impact factor” and allegedly “prestigious” journals.

                In the end, even after peer review, you need to have a good understanding of the field and to have developed and applied your critical appraisal skills.

                • @Zeth0s@lemmy.world
                  2 points · edited · 2 years ago

                  Absolutely. One needs to know what one is reading. That’s why preprints are fine.

                  High-impact-factor journals are full of work that is purposely wrong, produced because the author wants the results that readers are looking for (that is the easiest way to get published in a high-impact-factor journal).

                  https://www.timeshighereducation.com/news/papers-high-impact-journals-have-more-statistical-errors

                  It’s a game, and the reader must know how to navigate it. Both for peer-reviewed papers and preprints.

                • @barsoap@lemm.ee
                  5 points · 2 years ago

                  And TBF, just getting onto arXiv also means you jumped a bullshit hurdle: roughly speaking, you need to hold a position in academia, or someone there needs to vouch for the publication. At the same time, getting something published there isn’t exactly prestigious, so there’s no real incentive to game the system; as such, the bar is quite low but consistent.

    • @barsoap@lemm.ee
      9 points · edited · 2 years ago

      I suppose it’s this paper. Most prolific author seems to be Gene Tsudik, h-index of 103. Yeah that’s not “someone”. Also the paper is accepted for USENIX Security 2023, which is actually ongoing right now.

      Also, CS doesn’t really do academia like the other sciences, sitting somewhere at the intersection of maths, engineering, and tinkering. Shit’s definitely not invalid just because it hasn’t been submitted to a journal; this could’ve been a blog post, but there are academics involved, so publish-or-perish applies.

      Or, differently put: if you want to review it, bloody hell, do it; it’s open access. A quick skim tells me “way more thorough than I care to read for the quite less than extraordinary claim”.

  • @tacosplease@lemmy.world
    32 points · 2 years ago

    Just encountered a captcha yesterday that I had to refresh several times and then listen to the audio playback. The letters were so obscured by a black grid that it was impossible to read them.

  • @C4d@lemmy.world
    28 points · 2 years ago

    I thought captcha tests were being used to train image recognition systems, no?

    • @Odelay42@lemmy.world
      15 points · 2 years ago

      Yes, but that’s more of a side quest for the system. Primary use case has always been security.

      • Heresy_generator
        6 points · edited · 2 years ago

        Maybe. Or maybe it was always about using millions of hours of free labor to tune their algorithms, and “bot detection” was just how they marketed it to the people who added it to their sites. Makes me wonder who was running the bots that needed to be protected against. Exacerbate the problem, then solve the problem and get what you really want.