• 0 Posts
  • 25 Comments
Joined 2 years ago
Cake day: June 12th, 2023


  • Models are geared towards producing the response a human will rate best, not necessarily the correct answer itself. The first answer is based on the probability of autocompleting from a huge sample of data, and versions with memory adjust later responses based on how well the human is accepting the answers. There is no actual processing of the answers, although that may be coming in the latest variations being worked on, where components cycle through hundreds of generated attempts at a problem to try to verify and pick the best answers. Basically, rather than spit out the first autocomplete answer, it has subprocessing to actually weed out the junk and narrow in on a hopefully good result. Still not AGI, but it’s more useful than the first LLMs.
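    That generate-and-verify loop can be sketched in a few lines. This is a toy illustration, not any real model API: `generate_candidates` and `verify` are hypothetical stand-ins for sampling completions from an LLM and scoring them (a real system might run unit tests, check math, or use a reward model).

    ```python
    import random

    def generate_candidates(prompt, n=100):
        # Stand-in for sampling n completions from a language model.
        # Here each "completion" is just a random digit we pretend is an answer.
        random.seed(0)  # deterministic for the sake of the example
        return [f"{prompt} -> {random.randint(0, 9)}" for _ in range(n)]

    def verify(candidate):
        # Stand-in verifier: returns a numeric score for a candidate.
        return int(candidate.split("-> ")[1])

    def best_of_n(prompt, n=100):
        # Rather than returning the first autocomplete sample,
        # generate many and keep the one the verifier scores highest.
        candidates = generate_candidates(prompt, n)
        return max(candidates, key=verify)
    ```

    The point is only the shape of the subprocessing: sample widely, score, and return the winner instead of the first guess.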



  • There’s no question I wrote the couple of things I’ve done for work to automate tasks, but I swear every time I revisit the code after a long while it’s all new again, and I often wonder what the hell I was thinking. I like to tell myself that since I’m improving the code each time I review it, each new change must make for better code overall. Ha. But it works…









  • they charge money to generate his style of art without compensating him.

    That’s really the big thing, not just here but with any material that’s been used for training without permission or compensation. The difference is that most of it is so subtle it can’t be picked out, but an artist’s style is obviously a huge parameter, since his name was being used to invoke those particular training aspects during generation. It’s a bit hypocritical to say you aren’t stealing someone’s work when you stick his actual name in the prompt. It doesn’t really matter how many levels the art style has been laundered through, it still originated from him.


  • This isn’t really about the tool but the general idea of what to move. Like many, I don’t feel most of my posts alone were that valuable; what was lost was the chain of conversation they were part of. I requested and eventually received my data from Reddit, and looking through it I realized this. I suppose I could weed through the Excel file (!!) and grab longer comments that I’ve made in ten years of discussion, and it’s always there to search if a memory is sparked, but I see no personal reason to dump it somewhere else.

    I guess this is more a caution to not use a tool to mass spam just because you can. Not all posts are worth repetition, especially out of context.





  • I didn’t either, actually. It seems to me that where LLMs excel is in situations where there is a large consensus on a topic, so the training weights converge close to 100%. Anyone who has read through or Googled for answers to programming questions across the various sources online has seen how, among the correct answers, there are lots of deviations which muddy the waters even for a human browsing. Which is where the specialized fine-tuned versions that filter out a lot of the training noise come in handy.