Don’t be worse than Russia. Please fix.
Compression is mostly done in software.
They can still share code. It just won't be maintained by the DMA maintainer.
And with Rossman too.
I decided to read the replies: weird, they suggest the accusation is overblown.
I decided to read the context: WTF is this?! Unholy shit, dear Faust, what did I just read? What a deflection!
I thought I was terminally online with mental disorders, but this makes me look like the most grass-touching, sanest person around.
MIT X11-style license
BSD on Rust. It will meet the same fate long term unless they move to the GPL or something more copyleft.
https://lore.kernel.org/lkml/293df3d54bad446e8fd527f204c6dc301354e340.camel@mailbox.org/
General idea seems to be “keep your glue outside of core subsystems”, not “do not create cross-language glue, I will do everything in my power to oppose this”.
It will take more effort than writing a kernel from scratch. Which they are doing anyway.
Only one compiler, nailed to LLVM. And the other reasons already mentioned.
A “follow the rules to the letter” kind of sabotage manual.
Lol. They haven’t even published weights since GPT-2. And weights alone are not enough for open source.
Many lossless codecs are lossy codecs + residual encoders. For example, FLAC has a predictor (the lossy part) + a residual.
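A minimal sketch of that structure (hypothetical toy code, not FLAC's actual predictor): a "lossy" predictor guesses each sample, and storing only the prediction errors still lets you reconstruct the signal exactly.

```python
# Toy "lossy predictor + residual" lossless codec.
# The predictor alone loses information; adding the residuals makes it exact.

def encode(samples):
    # Order-1 predictor: predict each sample as the previous one.
    residuals = []
    prev = 0
    for s in samples:
        residuals.append(s - prev)  # store the prediction error, not the sample
        prev = s
    # Residuals cluster near zero for smooth signals -> cheap to entropy-code.
    return residuals

def decode(residuals):
    samples = []
    prev = 0
    for r in residuals:
        prev = prev + r  # prediction + residual reconstructs the sample exactly
        samples.append(prev)
    return samples

data = [10, 12, 13, 13, 11]
assert decode(encode(data)) == data  # lossless round trip
```

Real codecs use better predictors (FLAC fits linear-prediction coefficients per block), but the shape is the same: the better the lossy prediction, the smaller the residuals, the better the compression.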
If you “talk to” an LLM on your GPU, it is not making any calls to the internet.
No, I’m talking about https://en.m.wikipedia.org/wiki/External_memory_algorithm
Unrelated to RAG.
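For reference, the classic external-memory pattern from that link, sketched as a toy external merge sort: sort chunks that fit in "RAM", spill them to disk, then stream a k-way merge (assumed chunk size and file handling here are illustrative, not from any real implementation).

```python
# Toy external merge sort: sort more data than fits in memory by sorting
# small runs, spilling each run to a temp file, then merging the runs lazily.
import heapq
import tempfile

def external_sort(numbers, chunk_size=4):
    # chunk_size stands in for "how many items fit in RAM at once".
    run_files = []

    def spill(chunk):
        f = tempfile.TemporaryFile(mode="w+")
        f.writelines(f"{n}\n" for n in sorted(chunk))  # sorted run on "disk"
        f.seek(0)
        run_files.append(f)

    chunk = []
    for n in numbers:
        chunk.append(n)
        if len(chunk) == chunk_size:
            spill(chunk)
            chunk = []
    if chunk:
        spill(chunk)

    # heapq.merge streams the runs, keeping only one line per run in memory.
    runs = ((int(line) for line in f) for f in run_files)
    return list(heapq.merge(*runs))

print(external_sort([5, 3, 9, 1, 7, 2, 8, 4, 6]))
```

The point of the design: peak memory is bounded by the chunk size plus one buffered element per run, no matter how large the input is.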
Just buy a used PC. Same perf, lower price.
You can always use system memory too. Not exactly UMA, but close enough.
Or just use iGPU.
Aren’t LLMs external-memory algorithms at this point? As in, all the data will not fit in RAM.
…why? CUPS is a print server. You don’t need anything else.