• Australis13
    38 points · 20 days ago

    This makes me suspect that the LLM has picked up on a correlation between fascist tendencies and poor cybersecurity practices, e.g. right-wing parties undermining encryption, much of what Musk does, etc.

    Here in Australia, the more conservative of the two major parties has consistently undermined privacy and cybersecurity with policies such as metadata retention, mandated government backdoors and powers to break encryption, and so on, and it is slowly becoming more authoritarian (or that's becoming more obvious).

    It stands to reason that the LLM, with such a huge dataset at its disposal, might pick up on these correlations more readily than a human would.

    • Affine Connection
      1 point · edited · 18 days ago

      No, there is no technical reason whatsoever why an LLM of all things would make that connection.

      • Australis13
        2 points · 18 days ago

        Why? LLMs are built by training machine learning models on vast amounts of text data; essentially, they look for patterns. We've seen this repeatedly with other LLM behaviour around race and gender, highlighting the underlying bias in the dataset. This would be no different, unless you're disputing that there is a possible correlation between bad code and fascist/racist/sexist tendencies? A minimal sketch of the mechanism is below.
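
        To make the mechanism concrete, here is a minimal sketch in Python. The corpus, source labels, and phrases are all hypothetical illustrations, not real training data: when insecure-coding phrases co-occur with one community's posts in the training text, a model fitted on that text absorbs the conditional association.

        # Toy illustration: co-occurrence statistics in training text
        # become learned associations. Corpus and labels are hypothetical.
        from collections import Counter, defaultdict

        corpus = [
            ("disable certificate checks, it's faster", "authoritarian-forum"),
            ("hardcode the admin password for now", "authoritarian-forum"),
            ("pin certificates and rotate keys regularly", "security-blog"),
            ("parameterise queries to avoid sql injection", "security-blog"),
        ]

        # Count how often each token appears under each source label.
        cooccurrence = defaultdict(Counter)
        for text, label in corpus:
            for token in text.lower().split():
                cooccurrence[token][label] += 1

        def label_given(token):
            # P(label | token): the conditional association absorbed
            # from the training text.
            counts = cooccurrence[token.lower()]
            total = sum(counts.values())
            return {lab: n / total for lab, n in counts.items()}

        print(label_given("password"))  # {'authoritarian-forum': 1.0}
        print(label_given("rotate"))    # {'security-blog': 1.0}

        A real LLM learns such associations via gradient descent over billions of parameters rather than by counting, but the statistical principle is the same: correlations present in the data surface in the model's outputs.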