Doing the Lord’s work in the Devil’s basement

  • 0 Posts
  • 13 Comments
Joined 11 months ago
Cake day: May 8th, 2024


  • They have no ability to actually reason

    I’m curious about this kind of statement. “Reasoning” is not a clearly defined scientific term; it has myriad different meanings depending on context.

    For example, there is research showing that LLMs cannot use “formal reasoning”, the branch of mathematics dedicated to proving theorems. However, the majority of humans can’t use formal reasoning either. By that standard, humans would be “unable to actually reason” and therefore not Generally Intelligent.

    At the other end of the spectrum, if you take a more casual definition of reasoning, for example Aristotle’s discursive reasoning, then that’s an ability LLMs definitely have. They can produce sequential movements of thought, where one proposition leads logically to another, such as answering the classic: “if humans are mortal, and Socrates is a human, is Socrates mortal?”. They demonstrate the ability to do it beyond their training data, meaning they do encode in their weights a “world model” which they use to solve new problems absent from their training data.

    Whether or not this is categorically the same as human reasoning is immaterial in this discussion. The distinct quality of human thought is a metaphysical concept which cannot be proved or disproved using the scientific method.



  • Yeah, I did some looking up in the meantime and indeed you’re going to have a context-size issue. That’s why it’s only summarizing the last few thousand characters of the text: that’s the size of its attention window.
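
    To make the context problem concrete, here’s a rough back-of-envelope check (my own sketch, using the common ~4-characters-per-token heuristic for English text, not an exact tokenizer):

    ```python
    # Why a long document blows past a model's context window.
    # Assumes the rough ~4 characters per token heuristic for English text
    # (an approximation, not an exact tokenizer count).

    def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
        """Estimate how many tokens a piece of text will occupy."""
        return int(len(text) / chars_per_token)

    def fits_in_context(text: str, context_window: int = 4096) -> bool:
        """Check whether the estimated token count fits in the window."""
        return estimate_tokens(text) <= context_window

    # A 100,000-character article is roughly 25,000 tokens, far beyond a
    # 4K window, which is why the model only "sees" the tail of the text.
    article = "x" * 100_000
    print(estimate_tokens(article))   # 25000
    print(fits_in_context(article))   # False
    ```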

    There are some models fine-tuned to an 8K-token context window, some even to 16K, like this Mistral brew. If you have a GPU with 8 GB of VRAM you should be able to run it using one of the quantized versions (Q4 or Q5 should be fine). Summarization quality should still be reasonably good.
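
    As a rough sanity check on the 8 GB figure, here’s my own back-of-envelope estimate (the bits-per-weight values are approximate averages for typical llama.cpp-style quants, and real usage adds KV-cache overhead on top):

    ```python
    # Back-of-envelope VRAM estimate for a quantized model's weights.
    # Bits-per-weight values are approximate averages for common quant
    # formats; actual files carry some extra overhead.

    BITS_PER_WEIGHT = {"f16": 16.0, "q8": 8.5, "q5": 5.5, "q4": 4.5}

    def model_vram_gb(n_params_billion: float, quant: str) -> float:
        """Approximate GB needed just for the weights."""
        bits = BITS_PER_WEIGHT[quant]
        return n_params_billion * 1e9 * bits / 8 / 1e9

    # A 7B model at Q4 needs roughly 4 GB for weights, leaving headroom
    # on an 8 GB card for the KV cache of a longer context.
    print(round(model_vram_gb(7, "q4"), 1))   # 3.9
    ```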

    If 16K isn’t enough for you, then that’s probably not something you can do locally. However, you can still run a larger model privately in the cloud. Hugging Face, for example, lets you rent GPUs by the minute and run inference on them; it should only cost you a few dollars. As far as I know this approach is still compatible with Open WebUI.
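
    For the cloud route, here’s a minimal sketch of the kind of OpenAI-style chat payload that Open WebUI and most hosted endpoints speak. The model name is a placeholder of my own; the actual base URL and API key would come from whatever provider you rent from:

    ```python
    # Build an OpenAI-style /v1/chat/completions request body. The model
    # name below is a placeholder -- substitute whatever your provider
    # (e.g. a rented Hugging Face endpoint) gives you.
    import json

    def build_chat_request(model: str, text: str,
                           instruction: str = "Summarize the following text.") -> dict:
        """Assemble a chat-completion payload for a summarization request."""
        return {
            "model": model,
            "messages": [
                {"role": "system", "content": instruction},
                {"role": "user", "content": text},
            ],
        }

    payload = build_chat_request("my-hosted-model", "…long document…")
    print(json.dumps(payload)[:40])
    ```

    Open WebUI can then be pointed at the same base URL, so the workflow stays identical to the local setup.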



  • That’s even richer. So the term AI should be reserved for future tech that may or may not come to exist, even though that mythical technology already has a perfectly suitable name (AGI)? That sounds… useful! But also very interesting, and intellectually stimulating! After all, who doesn’t love those little semantic games?

    AI is a technical term that has been used by researchers and product developers for 50 years, with a fairly consistent definition. I know it hurts because it contradicts your pedestrian opinion on how Big Words should be used, but that’s just the way it is. We’re not at a point yet where humanity recognizes your legitimacy to decide how words are used.