

Just for yourself? Why not rent or colo a dedicated server or two or three and install a hypervisor? What all do you even run for yourself that needs 25 servers?
I’ve never trained one on code or used a code model; I’ve just come across them in research, like this: https://huggingface.co/Salesforce/codegen2-16B
The release version of GPT was a better coder than GPT-turbo, and 10 questions every 60 minutes was never enough for long code with GPT-4, since it also forgets.
Edit: I would ask here: !localllama@sh.itjust.works
I wasn’t really disagreeing, other than pointing out that to the model everything is just language. I just want to keep saying it’s not magic, just a large language model with a huge manicured training set.
Have you looked into running llama.cpp at home with a model trained on code? Some of them work pretty well, and you can play with some of the stuff ChatGPT abstracts away. You can even try to train/fine-tune a model on your own code if you have enough of it and a relatively new GPU.
It’s slow, but all you need is lots of RAM and a CPU; no GPU necessary.
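If you want to try it, here’s a minimal sketch using the llama-cpp-python bindings (the GGUF filename is just a placeholder; grab whichever quantized code model you like from Hugging Face):

```python
from llama_cpp import Llama

# Runs entirely on CPU; a 4-bit quantized 7B model fits in roughly 8 GB of RAM.
llm = Llama(
    model_path="./codellama-7b-instruct.Q4_K_M.gguf",  # placeholder model file
    n_ctx=4096,    # context window size
    n_threads=8,   # CPU threads to use
)

out = llm(
    "Write a Go function that reverses a string.",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```

Generation speed scales with thread count and model size, so smaller quantized models are the way to go if you’re CPU-only.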
¯\_(ツ)_/¯ It’s a language model; it’s all the same to it. Granted, how accurate it is depends on its training set. And the context window is pretty small with ChatGPT; Codex does a better job with longer code samples since you can increase the context window size and the training set is more code-based.
Yeah, I know, but it’s pretty much the same for ChatGPT: HTML/C/ASM/Python are all the same to a language model.
It’s pretty good at making functions and basic program layouts, like “make a Go program to do X with a Docker Compose file to run it,” but it hardly ever runs correctly the first time. You’re right that it can get confused, or just totally hallucinate or refactor longer code.
Disclosure: I did have it write me a professional HTML/CSS webpage, all the text, and a prompt for the AI image it needed for the layout, all from a single-paragraph prompt. It turned out pretty good, both the wording and the look. I was surprised; I replaced my old page with it.
Thanks for the tip! I’ll bring this up with my scrum master in our next half-day daily meeting.
Wait is this not how everyone reviews code?
Thanks, I’m going to make this my go-to for VMs that need a GUI for whatever reason!
Yeah, without wobbly windows the desktop seems so sterile and cold. They make everything feel alive; it’s the first setting I change on any new setup.
Do you use wobbly windows?
Hey, but that $600 caused inflation. Also, there was no recession during COVID; that’s fake news! Strongest economy ever! /s
I tried this; it was so loud. I’m not sure if, now that it’s the future, you can buy quiet rack-mount servers, but a few years ago a home rack in a small space was pretty much a space heater / airplane-takeoff sound simulator. I still have a mini rack, but I basically use it to hold smaller computers from Minisforum or Beelink. The DDR5 models make perfect hypervisors, and they’re cheap enough to cluster without feeling like you’re killing the environment.
Although I do turn some older servers on in the winter to keep warm rather than use a space heater.
ZFS. I lost so much data trying to use Btrfs. And zvols are neat.
Thanks, that’s kind of what I was thinking. Have you used cloudflared before?
I really want to figure out if it’s possible to stick it behind Cloudflare or something. I would rather not expose any IP address of mine directly to the internet. I’m leaning toward just setting up a reverse proxy on a cheap cloud instance that points back to my home, roughly like the sketch below.
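For what it’s worth, a minimal sketch of what that could look like on the cloud box, assuming nginx plus a WireGuard (or similar) tunnel back home; the hostname, certificate paths, and the 10.0.0.2 tunnel address are all placeholders:

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;  # placeholder hostname pointed at the cloud instance

    ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        # Forward everything to the home machine over the VPN tunnel,
        # so only the cloud instance's IP is ever exposed publicly.
        proxy_pass http://10.0.0.2:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

cloudflared gets you much the same result without the cloud instance, since the tunnel is an outbound connection from your home box and your IP never appears in DNS.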
How are you routing it to the internet?
How fast is the disk use growing for you?
Yeah, maybe. I just read it as personal stuff based on this line.
I manage a very large OpenStack cluster for my day job and we just give people what they pay for.