• kn0wmad1c@programming.dev · 1 day ago

    I’m not an expert. I’d just expect a neural network to follow the core principle of self-improvement. GPT is fundamentally unable to do this. The way it “learns” is closer to the tech behind predictive text on your phone.

    It’s the reason it can’t understand why telling you to put glue on pizza is a bad idea.

    • lime!@feddit.nu · 1 day ago

      the main thing is that the system end-users interact with is static. it’s a snapshot of all the weights of the “neurons” at a particular point in the training process. you can keep training from that snapshot for every conversation, but nobody does that live because the result wouldn’t be useful. it needs to be cleaned up first. so it learns nothing from you, but it could.
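The snapshot idea above can be sketched with a toy "model" (nothing like a real transformer — the weights dict and both functions here are invented purely for illustration): serving only reads a frozen snapshot of weights, while continued training would produce a *new* snapshot without touching the deployed one.

```python
import copy

def predict(weights, x):
    # inference only reads the snapshot; it never modifies it
    return weights["w"] * x + weights["b"]

def train_step(weights, x, target, lr=0.1):
    # one gradient-descent step on squared error, returning a NEW snapshot
    new = copy.deepcopy(weights)
    error = predict(weights, x) - target
    new["w"] -= lr * error * x
    new["b"] -= lr * error
    return new

snapshot = {"w": 2.0, "b": 0.0}  # the deployed "checkpoint"

# every user interaction hits the same frozen snapshot,
# so the same input always maps to the same function output:
assert predict(snapshot, 3.0) == predict(snapshot, 3.0)

# training from the snapshot yields a different model, but the
# deployed snapshot is unchanged until someone swaps it out:
updated = train_step(snapshot, 3.0, target=5.0)
assert snapshot == {"w": 2.0, "b": 0.0}
assert updated != snapshot
```

In other words, "it learns nothing from you" is a deployment choice, not a property of the math: `train_step` exists, it just isn't run on live conversations.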

    • frezik@midwest.social · 1 day ago (edited)

      “Improvement” is an open-ended term. Would having longer or shorter toes be beneficial? It depends on the evolutionary environment.

      ChatGPT does have a feedback loop. Every prompt you give it affects its internal state. That’s why it won’t give you the same response next time you give the same prompt. Will it be better or worse? Depends on what you want.
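One concrete reading of that "internal state" is the conversation context: the weights stay frozen, but each turn is appended to the context window the next reply is computed from, so the same prompt mid-conversation sees different input. A toy sketch of just that mechanism (`toy_reply` is invented for illustration, not a real decoder):

```python
def toy_reply(history, prompt):
    # stand-in for an LLM decode step: the output is a function of the
    # WHOLE context window (prior turns + prompt), not the prompt alone
    context = history + [prompt]
    return f"tokens_seen={sum(len(turn.split()) for turn in context)}"

chat = []
first = toy_reply(chat, "hello")    # same literal prompt...
chat += ["hello", first]
second = toy_reply(chat, "hello")   # ...but a bigger context window

# the "state" that changed between calls is the context, not the weights
assert first != second
```

(Real deployments add sampling randomness on top, which is another reason identical prompts can produce different responses even in fresh conversations.)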