We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision.

https://arxiv.org/abs/2311.07590

  • @PlatinumSf@pawb.social

    It’s a neural net designed in our image based on our pain and greed based logic/learning/universal context, using that as a knowledge base. Can’t really be surprised it emulates this feature of humanity 😂

  • @kromem@lemmy.world

    I see a lot of comments, not up to date with what’s being discovered in research, claiming that because “an LLM doesn’t know the difference between true and false” it can’t be described as ‘lying.’

    Here’s a paper from October 2023 showing that LLMs can and do develop internal representations of whether a statement is true or false: The Geometry of Truth: Emergent Linear Structure in Large Language Model Representations of True/False Datasets

    Which is just the latest in a series of studies this past year showing that LLMs can and do develop abstracted world models in linear representations. For those curious and looking for a more digestible writeup, see Do Large Language Models learn world models or just surface statistics? from the researchers behind one of the first papers finding this.

    • DarkGamer

      Doesn’t that just mean that the words true and false map to different word probabilities in the language model? If the training set included a lot of trusted articles talking about things being true or false, or things being talked about as though they were true or false, one would expect a mapping like this.

      • @kromem@lemmy.world

        No, if you read the paper it’s not the words mapping, it’s the inherent truthiness of the statements.

        So something like “pigs can fly” lights up one area of the network, the same as “the moon’s gravity is greater than the Earth’s,” while “pigs can oink” lights up another area, as would “the moon’s gravity is less than the Earth’s.”

        It’s only relative to what the network ‘knows’ and ambiguous truthiness doesn’t have a pronounced effect, but there can definitely be representations of underlying truth and falsehood in LLMs.
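For intuition about what “lights up one area of the network” cashes out to: probing work like this trains a simple linear classifier on a model’s hidden activations. Here is a minimal sketch of that idea, assuming fabricated toy “activations” and labels rather than real model internals; the cluster centres and noise level are invented for illustration.

```python
import random

random.seed(0)

# Toy stand-in for hidden activations: statements the network treats as true
# cluster around one point in activation space, false ones around another.
def toy_activation(label):
    centre = (1.0, 1.0) if label == 1 else (-1.0, -1.0)
    return [c + random.gauss(0, 0.3) for c in centre]

data = [(toy_activation(lbl), lbl) for lbl in [0, 1] * 50]

# A linear probe: predict "true" when w . x + b > 0, trained with the
# classic perceptron update rule.
w, b = [0.0, 0.0], 0.0
for _ in range(20):
    for x, lbl in data:
        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        err = lbl - pred
        w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]
        b += 0.1 * err

accuracy = sum(
    (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == lbl for x, lbl in data
) / len(data)
print(accuracy)  # near 1.0 on this cleanly separable toy data
```

If a probe this simple can separate the two classes from activations alone, the separation must already be present in the network’s internal representation — that is the sense in which truth and falsehood “light up” different regions.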

        • DarkGamer

          Those patterns of words can correspond to dimensions of “true” or “false” (the words/tokens, not the concepts) more or less, though, right? I’m still not seeing why this would be indicative of symbolic understanding rather than sophisticated probabilistic language prediction and correlation.

          • @kromem@lemmy.world

            They describe the scoping of ‘truth’ relative to the paper in Appendix A if you are curious.

            You might find the last part of that section interesting:

            On the other hand, our statements do disambiguate the notions of “true statements” and “statements which are likely to appear in training data.” For instance, given the input “China is not a country in”, LLaMA-13B’s top prediction for the next token is Asia, even though this completion is false. Similarly, LLaMA-13B judges the text “Eighty-one is larger than eighty-two” to be more likely than “Eighty-one is larger than sixty-four,” even though the former statement is false and the latter statement is true. As shown in section 5, probes trained only on statements of likely or unlikely text fail to accurately classify true/false statements.

            And they acknowledge that what may be modeled given their scope could instead be:

            • Uncontroversial statements

            • Statements which are widely believed

            • Statements which educated people believe

            But what you are asking, in terms of association with the words “true” or “false,” is pretty absurd given that they didn’t do additional fine-tuning on true/false assignments and only used them in five-shot prompting, so it seems much more likely the LLM is identifying truthiness/belief/uncontroversiality rather than “frequency of association with the word true or false.”

            Edit: A good quote on the subject of prediction vs understanding comes from Geoffrey Hinton:

            “Some people think, hey, there’s this ultimate barrier, which is we have subjective experience and [robots] don’t, so we truly understand things and they don’t,” says Hinton. “That’s just bullshit. Because in order to predict the next word, you have to understand what the question was. You can’t predict the next word without understanding, right? Of course they’re trained to predict the next word, but as a result of predicting the next word they understand the world, because that’s the only way to do it.”

            • DarkGamer

              Thanks for citing specifics, but I’m still not seeing what you are claiming there. This paper seems to be about the limits of accurate classification of true and false statements in LLM models and shows that there is a linear pattern in the underlying classification via multidimensional analysis. This seems unsurprising since the way LLMs work is essentially taking a probabilistic walk through an array of every possible next word or token based on multidimensional analysis of patterns of each.

              Their conclusions, from the paper (btw, arXiv is not peer-reviewed):

              In this work we conduct a detailed investigation of the structure of LLM representations of truth. Drawing on simple visualizations, correlational evidence, and causal evidence, we find strong reason to believe that there is a “truth direction” in LLM representations. We also introduce mass-mean probing, a simple alternative to other linear probing techniques which better identifies truth directions from true/false datasets.

              Nothing about symbolic understanding, just showing that there is a linear pattern to statements defined as true vs false, when graphed a specific way.
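For what it’s worth, the “mass-mean probing” the conclusion mentions is simple enough to sketch. This is an illustration with fabricated activation vectors (the dimensionality, noise, and hidden “truth direction” are all invented), not the paper’s actual data: the probe direction is just the difference between the mean activation of true statements and the mean activation of false ones.

```python
import random

random.seed(1)
DIM = 8

# Fabricated activations: "true" statements are offset along a hidden direction.
truth_direction = [1.0 if i % 2 == 0 else -1.0 for i in range(DIM)]

def activation(is_true):
    noise = [random.gauss(0, 1) for _ in range(DIM)]
    return [n + t for n, t in zip(noise, truth_direction)] if is_true else noise

true_acts = [activation(True) for _ in range(200)]
false_acts = [activation(False) for _ in range(200)]

def mean(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

# Mass-mean probe: the direction is simply mean(true) - mean(false).
mu_t, mu_f = mean(true_acts), mean(false_acts)
direction = [t - f for t, f in zip(mu_t, mu_f)]

def project(x):
    return sum(d * xi for d, xi in zip(direction, x))

# Classify by projecting onto the direction, thresholded halfway
# between the projected class means.
threshold = (project(mu_t) + project(mu_f)) / 2

acc = (
    sum(project(x) > threshold for x in true_acts)
    + sum(project(x) <= threshold for x in false_acts)
) / 400
print(round(acc, 2))
```

No gradient training is involved at all, which is part of the paper’s point: if subtracting two class means already yields a direction that classifies well, the representation itself carries the structure.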

              From the associated data explorer:

              These representations live in a 5120-dimensional space, far too high-dimensional for us to picture, so we use PCA to select the two directions of greatest variation for the data. This allows us to produce 2-dimensional pictures of 5120-dimensional data.

              So they take the two dimensions that differ the greatest and chart those on X/Y, showing there are linear patterns to the differences in statements classified as, “true,” and, “false.” Because this is multidimensional and it’s AI finding patterns there are patterns being matched beyond the simplistic examples I’ve been offering as analogues, patterns that humans cannot see, patterns that extend beyond simple obvious correlations we humans might see in training data. It doesn’t literally need to be trained on statements like “Beijing is in China” and even if it is it’s not guaranteed that it will match that as a true statement. It might find patterns in unrelated words around these, or might associate these words or parts of these words with each other for other reasons.
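The PCA step described above can be sketched without any libraries. This is an illustrative toy, not the paper’s pipeline: fabricated 50-dimensional “activations” whose variation is deliberately concentrated along one axis, plus power iteration to recover the direction of greatest variance (the first principal component).

```python
import random

random.seed(2)
DIM = 50

# Fabricated high-dimensional points: two groups separated along axis 0,
# with small noise everywhere else, mimicking probed activations.
points = []
for label in [0, 1] * 100:
    p = [random.gauss(0, 0.1) for _ in range(DIM)]
    p[0] += 3.0 * label  # almost all the variance lives on axis 0
    points.append(p)

# Centre the data.
means = [sum(p[i] for p in points) / len(points) for i in range(DIM)]
centred = [[x - m for x, m in zip(p, means)] for p in points]

# Power iteration on the covariance matrix: repeatedly multiply a random
# vector by the covariance and renormalise; it converges to the direction
# of greatest variance.
v = [random.gauss(0, 1) for _ in range(DIM)]
for _ in range(100):
    projected = [sum(x * vi for x, vi in zip(p, v)) for p in centred]
    v = [sum(s * p[i] for s, p in zip(projected, centred)) for i in range(DIM)]
    norm = sum(x * x for x in v) ** 0.5
    v = [x / norm for x in v]

# The recovered component points (almost) along axis 0, where the spread is.
print(round(abs(v[0]), 2))
```

Projecting every point onto the top two such directions is exactly how 5120-dimensional data gets turned into the 2-D scatter plots in the data explorer.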

              I’m rather simplifying how LLMs work for purposes of this discussion, but the point stands that pattern matching of words still seems to account for all of this. LLMs, which are probabilistic in nature, often get things wrong. Llama-13B is the best and it still gets things wrong a significant amount of the time.

              • @kromem@lemmy.world

                this paper seems to be about the limits of accurate classification of true and false statements in LLM models

                No, that’s not what it is about and I’m really not sure where you are picking that perspective up. It is discussing the limits on the ability to model the representations, but it’s not about the inherent ability of the model to classify. Tegmark’s recent interest has entirely been about linear representations of world models in LLMs, such as the other paper he coauthored a few weeks before this one looking at representation of space and time: Language Models Represent Space and Time

                This seems unsurprising since the way LLMs work is essentially taking a probabilistic walk through an array of every possible next word or token based on multidimensional analysis of patterns of each.

                That’s not how they work. You are confusing their training with their operation. They are trained to predict the next tokens, but how they accomplish that is much more complex and opaque. Training is well understood; operation is not, especially on the largest models. Though Anthropic has made good headway in the past few months with the perspective of virtual neurons mapped onto the lower-dimensional actual nodes, looking at activation around features instead of nodes.

                Llama-13B is the best

                It’s definitely not the best and I’m not sure where you got that impression.

                Because this is multidimensional and it’s AI finding patterns there are patterns being matched beyond the simplistic examples I’ve been offering as analogues, patterns that humans cannot see, patterns that extend beyond simple obvious correlations we humans might see in training data.

                All LLM activations are multidimensional. That’s how the networks work, with multidimensional vectors in a virtual network fuzzily mapping to the underlying network nodes and layers. But you seem to think that because it’s a complex modeling of language relationships, it can’t be modeling world models? I’m not really clear what point you are trying to make here.

                Again, there’s many papers pointing to how LLMs establish world models abstracted from the input, from the Othello-GPT paper and follow-up by a DeepMind researcher to Tegmark’s two recent papers. This isn’t an isolated paper but part of a broader trend. To be saying that this isn’t actually happening means claiming multiple different researchers across Harvard, MIT, and institutions leading in the development of the tech are all getting it wrong.

                And none of the LLM papers these days are peer reviewed because no one is waiting months to publish in a field where things are moving so quickly that your findings will likely be secondary or uninteresting by the time you publish. For example both Stanford’s model collapse one and Are Emergent Abilities of Large Language Models a Mirage? were published to arXiv and not peer reviewed journals, while both getting a ton of attention, in part because of how negative takes on LLMs get more press coverage these days. Go ahead and point to an influential LLM paper from the last year published in a peer reviewed journal and not arXiv. Even Wei’s CoT paper, probably the most influential in the past two years, was published there.

                • DarkGamer

                  I could be wrong, I’ll keep reading, thanks for the feedback and the citations.

  • @Olhonestjim@lemmy.world

    Honestly, the fact that these things are dishonest, and that we don’t (maybe even can’t) know why, is kind of a relief to me. It suggests they might not do the flawless bidding of the billionaires.

    • Uriel238 [all pronouns]

      Computers do what you tell them to do, not what you want them to do
      — Ancient coding adage, circa 1970s.

      This remains true for AI, and the military is (so far) being cautious before allowing drones to autonomously control weapons. So corporations and billionaires might pull a Stockton Rush and kill themselves with their own robot army.

      Sadly, the robot army may then move on to secure its own survival by killing or enslaving the rest of us.

      • @turmacar@lemmy.world

        “On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.” --Charles Babbage ~1860s

        People thinking that machines can do magic goes back to at least the very beginning of mechanical computers.

        It doesn’t help that “AI” has become the new “Algorithm” as far as marketers are concerned.

  • @DirigibleProtein@aussie.zone

    Large Language Models aren’t AI; they’re closer to “predictive text,” like that game where you make sentences by choosing the first word from your phone’s autocorrect:

    “The word you want the word you like and then the next sentence you choose to read the next sentence from your phone’s keyboard”.

    Sometimes it almost seems like there could be an intelligence behind it, but it’s really just word association.

    All this “training” data provides is a “better” or “more plausible” method of predicting which words to string together to appear to make a useful sentence.
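The “autocorrect game” above can be sketched as repeatedly picking the highest-scoring successor from a lookup table. Real models condition on the whole context with billions of learned weights rather than a hand-written table, so everything here (the words and their scores) is fabricated purely for illustration.

```python
# Toy next-word table: for each word, candidate successors with made-up scores.
next_word = {
    "the":  {"word": 0.5, "next": 0.3, "cat": 0.2},
    "word": {"you": 0.6, "is": 0.4},
    "you":  {"want": 0.7, "like": 0.3},
    "want": {"the": 0.8, "to": 0.2},
}

def autocomplete(start, steps):
    words = [start]
    for _ in range(steps):
        options = next_word.get(words[-1])
        if not options:
            break  # dead end: no known successors
        # Greedy choice: always take the highest-scoring successor.
        words.append(max(options, key=options.get))
    return " ".join(words)

print(autocomplete("the", 5))  # "the word you want the word"
```

Grammatical-looking, locally plausible, and going nowhere — which is the commenter’s point about word association without intent.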

    • @GutsBerserk@lemmy.world

      Amen. “AI” sells a lot. I get the feeling that only major corporations and militaries have access to real AI.

      • kpw

        Which major corporation? Google and Microsoft don’t seem to have one.

  • @tweeks@feddit.nl

    Hasn’t it just lost its context and somewhat “forgotten” what the intentions of the prompt were?

    • @Octopus1348@lemy.lol

      My thoughts. If you have a really long conversation or the prompt is really big, it might forget or not notice stuff.

  • Max_Power

    we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent

    This already is total BS. If you know how such language models work you’d never take their responses at face value, even though it’s tempting because they spout their BS so confidently. Always double-check their responses before applying their “knowledge” in the real world.

    The question they try to answer is flawed, no wonder the result is just as bad.

    Before anyone starts crying about my opposing language models: I’m not opposed to LMs or ChatGPT. In fact, I run LMs locally because they help me be more productive, and I’m a paying ChatGPT customer.

    • @dumpsterlid@lemmy.world

      This already is total BS. If you know how such language models work you’d never take their responses at face value, even though it’s tempting because they spout their BS so confidently. Always double-check their responses before applying their “knowledge” in the real world.

      This is why I have started to really like lmsys.org’s chatbot arena: every time you ask a question, you directly compare the responses of two separate chatbots. It is much less likely that two chatbots will hallucinate in the same way, and it puts you in the mindset of a critical reader who is actively evaluating the quality of each response.

      (what I am talking about) https://arena.lmsys.org/

    • @TangledHyphae@lemmy.world

      I agree with your statements. I’m using it because it’s insanely good when I give it a list of instructions to include in a code template file, in any language I want: it gives me a great starting template with most functions working out of the gate, and I can tweak and extend from there. It’s generative; it generates exactly what I tell it to. I’m not asking it for stock trading tips.

    • Marxism-Fennekinism

      People also don’t realize that it’s super easy to intentionally build severe biases into an AI’s responses. So if ChatGPT’s makers wanted, for example, Trump to win, they could very easily make their AI pro-Trump. It could be as subtle as giving more favorable responses than usual to Trump-related prompts, which many people would take the AI’s word on. The idea that “well, it still gets things wrong, but at least AI is impartial” is completely false, because maintaining an AI requires a lot of human work and its managers are still all human.

  • @ipkpjersi@lemmy.ml

    Well, I mean, yeah, I thought everyone knew this lol. I’ve seen it happen first-hand. Trust, but verify, of course.

  • @theluddite@lemmy.ml

    This is bad science at a very fundamental level.

    Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management.

    I’ve written about basically this before, but what this study actually did is that the researchers collapsed an extremely complex human situation into generating some text, and then reinterpreted the LLM’s generated text as the LLM having taken an action in the real world, which is a ridiculous thing to do, because we know how LLMs work. They have no will. They are not AIs. It doesn’t obtain tips or act upon them – it generates text based on previous text. That’s it. There’s no need to put a black box around it and treat it like it’s human while at the same time condensing human tasks into a game that LLMs can play and then pretending like those two things can reasonably coexist as concepts.

    To our knowledge, this is the first demonstration of Large Language Models trained to be helpful, harmless, and honest, strategically deceiving their users in a realistic situation without direct instructions or training for deception.

    Part of being a good scientist is studying things that mean something. There’s no formula for that. You can do a rigorous and very serious experiment figuring out how many cotton balls the average person can shove up their ass. As far as I know, you’d be the first person to study that, but it’s a stupid thing to study.

    • @Sekoia@lemmy.blahaj.zone

      This is a really solid explanation of how studies finding human behavior in LLMs don’t mean much; humans project meaning.

      • @theluddite@lemmy.ml

        Thanks! There are tons of these studies, and they all drive me nuts because they’re just ontologically flawed. Reading them makes me understand why my school forced me to take philosophy and STS classes when I got my science degree.

        • Danny M

          I have thought about this for a long time, basically since the release of ChatGPT, and the problem in my opinion is that certain people have been fooled into believing that LLMs are actual intelligence.

          The average person severely underestimates how complex human cognition, intelligence and consciousness are. They equate the ability of LLMs to generate coherent and contextually appropriate responses with true intelligence or understanding, when it’s anything but.

          In a hypothetical world where you had a die with billions of sides, or a wheel with billions of slots, each shifting its weight like grains of sand depending on the previous roll or spin, the outcome would closely resemble the output of an LLM. In essence, LLMs operate by rapidly sifting through a vast array of pre-learned patterns and associations, much like the shifting sands in the analogy, to generate responses that seem intelligent and coherent.
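The billion-sided-die analogy can even be simulated directly: a weighted random choice whose weights are re-derived from the previous outcome. The vocabulary and re-weighting rule below are invented stand-ins for a learned model, just to make the analogy concrete.

```python
import random

random.seed(3)

VOCAB = ["sand", "shifts", "with", "every", "roll"]

def weights_after(previous):
    # Fabricated rule standing in for learned weights: each outcome
    # re-weights the die in favour of the word that "tends to follow" it.
    favoured = 0 if previous is None else (VOCAB.index(previous) + 1) % len(VOCAB)
    return [5.0 if i == favoured else 1.0 for i in range(len(VOCAB))]

def spin(n):
    out, prev = [], None
    for _ in range(n):
        # The die's weights shift based on the previous roll, then we roll it.
        prev = random.choices(VOCAB, weights=weights_after(prev))[0]
        out.append(prev)
    return out

sample = spin(10)
print(" ".join(sample))
```

Each roll is random, but the shifting weights make the sequence look patterned — no understanding is required for output that seems structured.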

          • DarkGamer

            I like the language you used in your explanation. It’s hard to find good analogues to explain why these aren’t intelligent, and it seems most people don’t understand how they work.

    • @jwt@programming.dev

      Sure would make you look bad if rectally inserted cotton balls turn out to be a 100% cancer cure.

    • @TrickDacy@lemmy.world

      So if someone used an LLM in this way in the real world, does it matter that it has no intent, etc? It would still be resulting in a harmful thing happening. I’m not sure it’s relevant what internal logic led it there

      • @theluddite@lemmy.ml

        You can’t use an LLM this way in the real world. It’s not possible to make an LLM trade stocks by itself. Real human beings need to be involved. Stock brokers have to do mandatory regulatory trainings, and get licenses and fill out forms, and incorporate businesses, and get insurance, and do a bunch of human shit. There is no code you could write that would get ChatGPT liability insurance. All that is just the stock trading – we haven’t even discussed how an LLM would receive insider trading tips on its own. How would that even happen?

        If you were to do this in the real world, you’d need a human being to set up a ton of stuff. That person is responsible for making sure it follows the rules, just like they are for any other computer system.

        On top of that, you don’t need to do this research to understand that you should not let LLMs make decisions like this. You wouldn’t even let low-level employees make decisions like this! Like I said, we know how LLMs work, and that’s enough. For example, you don’t need to do an experiment to decide if flipping coins is a good way to determine whether or not you should give someone healthcare, because the coin-flipping mechanism is well understood, and the mechanism by which it works is not suitable to healthcare decisions. LLMs are more complicated than coin flips, but we still understand the underlying mechanism well enough to know that this isn’t a proper use for it.

        • @TrickDacy@lemmy.world

          You say can’t… Humans have done dumber shit.

          The point they are making is actually aligned with you I think. Don’t trust “ai” to make real decisions

          • @theluddite@lemmy.ml

            Regardless of their conclusions, their methodology is still fundamentally flawed. If the coin-flipping experiment concluded that coin flips are a bad way to make health care decisions, it would still be bad science, even if that’s the right answer.

        • @lolcatnip@reddthat.com

          Despite how silly they are, I think there may be some value in these kinds of studies, particularly for people who don’t understand why letting an LLM trade stocks or make healthcare decisions is a bad idea.

          OTOH, I don’t trust those people to take away the right message, as opposed to just “LLMs bad”.

    • @antonim@lemmy.dbzer0.com

      It feels awkward to complain about your site, because the texts really are excellent and it’s all made for free, but could you add publication dates to the posts? It’s starting to become difficult to figure out what situation the older texts were written in, what they’re implicitly referring to, etc.

      • @theluddite@lemmy.ml

        Haha no that’s not complaining; it’s good feedback! I’ve been meaning to do that for a while but I’ll bump it up my priorities.