𞋴𝛂𝛋𝛆

  • 8 Posts
  • 97 Comments
Joined 2 years ago
Cake day: June 9th, 2023

  • It can be a useful tool, especially for someone who experiences involuntary social isolation (like me).

    You would need to be a pretty dumb person for this to totally replace human relationships and the fundamental social needs they meet. It can be a healthy way to fill a gap.

    First, the context length is very limited, so you can’t have a very long, interactive conversation. The scope of the model’s attention is rather short even with a very long-context model. Second, the first few tokens of any interaction are extremely influential on how the model will respond, regardless of everything else that happens in the conversation. So cold-started conversations (forced by the short context) will be inconsistent.

    Unless a person is an extremely intuitive, highly Machiavellian thinker with good perceptive skills, they are going to be very frustrated with models at times, and the model may be directly harmful to them in some situations. There are aspects of alignment that could be harmful under certain circumstances.

    There will likely be a time in the near future when a real AI partner is more feasible, but it will not be some base model, a fine-tune, or some magical system prompt that enables this application.

    To create a real partner-like experience, one will need an agentic framework combined with retrieval-augmented access to a database. That would give the model persistence: it could ask how your day went while already knowing your profile, your relationship, your preferences, and what you told it about how your day was supposed to go. You need a model that can classify information, then save, modify, and retrieve that information when it is needed. I’ve played around with this in emacs, org-mode, and gptel connected to local models with llama.cpp (a rough sketch of the idea is below). I’m actually modifying my hardware right now to handle the loads better for this application.
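
    To make that concrete, here is a minimal sketch of the persistence loop: a tiny memory store the agent writes facts into and reads back before each turn. The file name, memory categories, keyword matching, and server URL are illustrative assumptions only, not the actual gptel/org-mode setup; it just assumes a local llama.cpp server running with its OpenAI-compatible API.

    ```python
    # Minimal sketch of agentic persistence: save facts, retrieve them, and feed
    # them back into the system prompt each turn. File name, categories, keyword
    # matching, and the server URL are illustrative assumptions only.
    import json
    import urllib.request
    from pathlib import Path

    MEMORY_FILE = Path("partner_memory.json")   # hypothetical store

    def load_memories() -> list[dict]:
        return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

    def save_memory(category: str, text: str) -> None:
        memories = load_memories()
        memories.append({"category": category, "text": text})
        MEMORY_FILE.write_text(json.dumps(memories, indent=2))

    def relevant_memories(topic: str, limit: int = 5) -> list[str]:
        # Naive keyword match; a real system would use embeddings or a database.
        hits = [m["text"] for m in load_memories() if topic.lower() in m["text"].lower()]
        return hits[:limit]

    def chat(user_message: str, topic: str) -> str:
        facts = "\n".join(relevant_memories(topic))
        payload = {
            "messages": [
                {"role": "system",
                 "content": "You are a long-term companion. Known facts about the user:\n" + facts},
                {"role": "user", "content": user_message},
            ]
        }
        req = urllib.request.Request(
            "http://localhost:8080/v1/chat/completions",   # assumed local llama.cpp server
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["choices"][0]["message"]["content"]

    save_memory("work", "Had a stressful deadline on the garden project today.")
    print(chat("How do you think my day went?", topic="garden"))
    ```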

    Still, I think such a system is a stopgap for people like myself, the elderly, and other edge cases where external human contact is limited. For me, my alternative is here, and while some people on Lemmy know me and are nice, many people are stupid kids who exhibit toxic, negative behaviors that are far more harmful than anything I have seen out of any AI model. I often engage here on Lemmy, then go chat with an AI if I need to talk, vent, or work through something.




  • It is the CPU back end that is giving me trouble with build-all. The free-as-in-freedom aspect of open source is violated when any software promotes and primarily supports a proprietary tool chain.

    Things like having two checklist files, one in the primary directory and one in the build directory, likely prevent many people from succeeding. If the initial configuration was wrong or needs to be changed, the user will likely edit the checklist in the main directory only to find that the changes do nothing. Several of the back ends also require manually tracking down their library paths and adding them to the source. Many of these, such as BLAS, CuBLAS, and Vulkan, will fail until exactly the right version is included, and the errors give no hint about why. There are numerous other issues, like ARM options not being described as such, and ambiguous runtime options. Various edge-case options generate a fatal warning-as-error. Once any of these are set, the build fails without hinting at the cause, and the barely mentioned solution of manually editing the checklist yields no results.

    I’m sure this is trivial for the average dev, but a dev I am not. I’m just some weird script kiddie who can also build an ALU out of NOR gates, or might talk about a 65C816 in a room of 6502 fans. But I have a large number of other interests in life.

    Be nicer to people. When some dumbass takes 8.5 of your 9 cat lives on the road one day, cordiality can have a large impact on your daily life. I was much the same to others and regret it.









  • I haven’t looked into the issue of PCIe lanes and the GPU.

    I don’t think it should matter much with a narrower PCIe bus, in theory, if I understand correctly (unlikely). The only time a lot of data is transferred is when the model layers are initially loaded. With Oobabooga, when I load a model, most of the time my desktop RAM monitor widget does not even have time to refresh and tell me how much memory was used on the CPU side. What is loaded in the GPU is around 90% static. I have a script that monitors this so I can tune the maximum number of layers to offload (something like the sketch below). I leave headroom for the context to build up over time, but there are no major changes happening aside from the initial loading. One just sets the number of layers to offload to the GPU and loads the model. However many seconds that takes is an irrelevant startup delay that only happens once when starting the server.
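
    For reference, a minimal sketch of that kind of monitor, assuming an NVIDIA card with nvidia-smi on the PATH; the poll interval is arbitrary:

    ```python
    # Minimal sketch of a VRAM monitor for tuning how many layers to offload.
    # Assumes an NVIDIA GPU with nvidia-smi on the PATH; poll interval is arbitrary.
    import subprocess
    import time

    def vram_mib() -> tuple[int, int]:
        out = subprocess.check_output(
            ["nvidia-smi",
             "--query-gpu=memory.used,memory.total",
             "--format=csv,noheader,nounits"],
            text=True,
        )
        used, total = out.strip().splitlines()[0].split(", ")
        return int(used), int(total)

    while True:
        used, total = vram_mib()
        print(f"VRAM: {used} / {total} MiB ({100 * used // total}% used)")
        time.sleep(2)
    ```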

    So assuming the kernel modules and hardware support the narrower link, it should work… I think. There are laptops with options for an external GPU over Thunderbolt too, so I don’t think the PCIe bus width is too baked in.


  • 𞋴𝛂𝛋𝛆@lemmy.world to Selfhosted@lemmy.world · Consumer GPUs to run LLMs
    Anything under 16 GB of VRAM is a no-go. Your number of CPU cores is important too. Use Oobabooga Textgen for an advanced llama.cpp setup that splits the model between the CPU and GPU. You’ll need at least 64 GB of RAM, or be willing to offload layers to NVMe with DeepSpeed. I can run up to a 72b model with 4-bit GGUF quantization on a 12700 laptop with a mobile 3080Ti, which has 16 GB of VRAM (mobile is like that). A sketch of the layer split is just below.
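
    For anyone who wants the same split without the web UI, here is a minimal sketch using llama-cpp-python; the model path and layer count are placeholders, and Oobabooga exposes the equivalent n-gpu-layers setting in its llama.cpp loader options:

    ```python
    # Minimal sketch of the CPU/GPU layer split with llama-cpp-python.
    # Model path and layer count are placeholders; raise n_gpu_layers until VRAM is nearly full.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/some-72b-instruct-Q4_K_M.gguf",  # hypothetical 4-bit GGUF
        n_gpu_layers=40,   # layers offloaded to the 16 GB GPU; the rest stay in system RAM
        n_ctx=8192,        # context window; larger contexts need more VRAM headroom
        n_threads=8,       # physical CPU cores matter for the layers left on the CPU
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Summarize why layer offloading helps."}]
    )
    print(out["choices"][0]["message"]["content"])
    ```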

    I prefer to run an 8×7b mixture-of-experts model because only 2 of the 8 experts are ever running at the same time. I am running that as a 4-bit quantized GGUF and it takes 56 GB total to load. Once loaded, it runs at about the speed of a 13b model but has ~90% of the capability of a 70b. The streaming speed is faster than my fastest reading pace.

    A 70b model streams at my slowest tenable reading pace.

    Both of these options are vastly more capable than any of the smaller model sizes, even if you screw around with training. Unfortunately, this streaming speed is still pretty slow for most advanced agentic stuff. Maybe if I had 24 to 48 GB it would be different; I cannot say. If I were building now, I would be looking at which hardware options have the largest L1 cache and the most cores with the most advanced AVX instructions. Generally, anything with efficiency cores drops the advanced AVX instructions (AVX-512), and because the CPU schedulers in kernels are usually unable to handle this asymmetry, consumer junk has poor AVX support. It is quite likely that the problems Intel has had in recent years are due to how they tried to block consumer parts from accessing the advanced P-core instructions that were only disabled in microcode. Using them requires disabling the E-cores or setting up CPU set isolation in Linux or BSD distros. A quick way to see what your own CPU exposes is sketched below.
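
    A minimal check, assuming Linux and reading /proc/cpuinfo; the flag list below is just a sample of the interesting ones:

    ```python
    # Quick check (Linux only) of which AVX variants the CPU exposes, by reading
    # /proc/cpuinfo. On a mixed P/E-core part you will typically see avx and avx2
    # but none of the avx512 flags, because the E-cores cannot execute them.
    FLAGS_OF_INTEREST = ("avx", "avx2", "avx_vnni", "avx512f", "avx512bw", "avx512vl")

    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                for flag in FLAGS_OF_INTEREST:
                    print(f"{flag:10} {'yes' if flag in flags else 'no'}")
                break
    ```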

    You need good Linux support even if you run Windows. Most good and advanced stuff with AI will be done through WSL if you haven’t ditched Windows for whatever reason. Use https://linux-hardware.org/ to check device support.

    The reason I said to avoid consumer E-cores is that there have been some articles popping up lately about all-P-core hardware.

    The main CPU constraint is the bus width between the L2 and L1 caches. Researching this deeply may be worthwhile.

    Splitting the load between multiple GPUs may be an option too. As of a year ago, the cheapest way to get a 16 GB GPU in a machine was a second-hand 12th-gen Intel laptop with a 3080Ti, by a considerable margin once everything is added up. It is noisy, gets hot, and I hate it at times, wishing I had gotten a server-like setup for AI, but I have something and that is what matters.


  • Most key stuff is not on GitHub, or GitHub is just a mirror. The heir apparent to Linux is Greg Kroah-Hartman, and he moved to Europe a long time ago.

    No mobile devices are safe. Those are all proprietary black boxes at the hardware level. If the shit hits the fan, it is back to dumb phones and x86 computers. Digital doomsday preppers are not sounding all that crazy right now, IMO.

    I have gotten weird rate-limiting behavior from GitHub because I will not whitelist their stalkerware collector server. They also pushed two-factor authentication, to stalk and exploit, through the only documented path they wanted people to take. I quit because of it.







  • You need max AVX instructions. Anything with mixed P/E cores is junk; only enterprise all-P-core parts have the full AVX instruction set. When P- and E-cores are mixed, the advanced AVX instructions are disabled in microcode because the CPU scheduler is unable to determine whether a process thread contains an AVX instruction, and there is no asymmetric scheduler that handles this. On early 12th-gen Intel steppings, the P-cores could allegedly run the enterprise microcode if it was swapped in manually; this was later “fused off” to prevent it, probably because Linux could easily be adapted to asymmetric scheduling but Windows probably could not. The whole reason W11 had to be made was the E-cores and the way the scheduler and spin-up of idle cores work, at least according to someone working on the CPU scheduler at Linux Plumbers around 2020. There are already asymmetric schedulers for ARM on Android. On Linux, you can also just pin the inference process to the P-cores, as in the sketch below.
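
    A minimal sketch of that pinning, assuming Linux and that the P-core thread IDs come first (check lscpu or /sys/devices/cpu_core/cpus for the real range on your part):

    ```python
    # Minimal sketch (Linux only): restrict this process to the P-cores so AVX-heavy
    # inference threads never get scheduled onto E-cores.
    import os

    # Assumption: on a mobile 12700H the 6 P-cores (with hyperthreading) enumerate
    # as CPUs 0-11 and the E-cores as 12-19. Verify with lscpu before relying on it.
    P_CORES = set(range(0, 12))

    os.sched_setaffinity(0, P_CORES)  # 0 = the current process
    print("Pinned to CPUs:", sorted(os.sched_getaffinity(0)))

    # Anything launched from this process (e.g. llama.cpp via subprocess)
    # inherits the affinity mask.
    ```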

    Anyway, I think it was Gamers Nexus in the last week or two reporting that Intel is doing some all-P-core consumer parts. I’d look at that. According to Chips and Cheese, the primary CPU bottleneck for tensor workloads is the bus width and clock management of the L2-to-L1 cache.

    I do alright with my laptop, but I haven’t tried the R1 stuff yet. The 70B Llama 2 models I ran were untenable on a 12700 with just the CPU. They run a little slower than my reading pace when split with a 16 GB GPU, and that was with a 4-bit quantized version.