This still isn’t specific enough to pin down exactly what the computer will do. There are an infinite number of Python programs that could print Hello World in the terminal.
You can send 4xx errors yourself too. If the client needs to change something about the request, that’s a 4xx, like 400 Bad Request. If the server has an error and it’s not the client’s fault, that’s a 5xx, like 502 Bad Gateway.
The Wikipedia list of all HTTP status codes is quite helpful. People also forget that you can send a custom response body with a 4xx or 5xx error, so you can still return a custom JSON error message.
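A minimal sketch of that, using only Python’s standard library (the error detail string here is made up): pick whatever 4xx status fits and attach any JSON body you like.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Custom JSON error payload; the reason given is hypothetical.
        body = json.dumps({
            "error": "bad_request",
            "detail": "missing required query parameter 'id'",
        }).encode()
        self.send_response(400)  # 4xx: the client must change the request
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)   # status code and body are independent

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```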
If those vibe coders knew what a binary tree was, they’d be very upset.
But then they’d have a dev team who wrote the code and therefore knows how it works.
In this case, the hackers might understand the code better than the “author” because they’ve been working in it longer.
Imo if they can’t max out their hard drive for at least 24 hours without it breaking, their computer was already broken. They just didn’t know it yet.
Any reasonable SSD would just throttle if it was getting too hot, and I’ve never heard of an HDD overheating on its own, only with an external heat source, like running it in a 60°C room.
May I present to you: https://github.com/mame/quine-relay
Now print “¯\_(ツ)_/¯” with the quotes
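The comment doesn’t name a language, but here’s a quick sketch of the catch in Python: the backslash has to be escaped (or a raw string used), and the surrounding quotes have to be printed too.

```python
# Both lines print: "¯\_(ツ)_/¯"  (quotes included)
print("\"¯\\_(ツ)_/¯\"")   # backslash and quotes escaped
print(r'"¯\_(ツ)_/¯"')     # raw string, same output
```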
They mean time to write the code, not compile time. Let’s be honest, the AI will write it in Python or JavaScript anyway.
I always thought T&& made sense as a movable reference: in order to move something, you need to change where the reference points, so conceptually you need a reference to the original reference to update it (effectively a double reference).
i < array.length, or else you run past the end of the array.
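A minimal sketch of that off-by-one in Python, with len(arr) standing in for array.length:

```python
arr = [10, 20, 30]

# Correct bound: i stays strictly less than the length.
for i in range(len(arr)):          # i = 0, 1, 2
    print(arr[i])

# Off by one: the last index runs past the end.
try:
    for i in range(len(arr) + 1):  # i = 0, 1, 2, 3
        print(arr[i])
except IndexError as e:
    print("ran past the end:", e)
```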
I will say, you can’t beat the satisfaction of tracking down the root cause and actually fixing the bug. I don’t understand why so many people meme about covering up bugs with crazy hacks. It’s not fun constantly looking over your shoulder to see if that bug has resurfaced. They start to pile up.
It’s so easy! Watch:
{"contents": "<garbled .docx contents goes here>"}
Ironically HoloISO also can’t be installed easily right now since all the prepared downloads are missing. You could maybe build it yourself from source, but I haven’t figured it out…
I opened this issue several months ago: https://github.com/HoloISO/issuetracker/issues/59
These don’t seem to be particularly new panels. $600 for only 97% of the sRGB color space (roughly 78% of DCI-P3), while a similarly priced LG “QNED” can do 90-95% of DCI-P3. I’m not sure you can even call those TVs HDR if they’re only 8-bit panels. None of these models can even remotely compare to a brand-new OLED TV.
Also, a key part of how GPT-based LLMs work today is that they get the entire context window as input all at once, whereas a human has to listen or read one word at a time and remember the start of the conversation on their own.
I have a theory that this is one of the reasons LLMs don’t understand the progression of time.
The context window is a fixed size. If the conversation gets too long, the start gets pushed out and the AI no longer remembers anything from the beginning of the conversation. It’s more like a human with a notepad in front of them: the AI can reference it, but not learn from it.
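A minimal sketch of that fixed window, using a toy word-level “tokenizer” in Python (real models count proper tokens and the window is thousands of tokens long):

```python
CONTEXT_WINDOW = 8   # tiny on purpose, to keep the demo readable

conversation = []

def add_message(text: str) -> list[str]:
    """Append a message and return what the model would actually see."""
    conversation.extend(text.split())      # toy tokenization: one word = one token
    return conversation[-CONTEXT_WINDOW:]  # everything earlier is silently dropped

add_message("my name is Alice and I like maps")
visible = add_message("what is my name")
print(visible)  # the tokens carrying "Alice" have already fallen out of the window
```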
the kind of stuff that people with no coding experience make
The first complete program I ever wrote was in BASIC. It took an input number and rounded it to the nearest 10 or 100. I had learned just enough to get it running: it used strings and a bunch of if statements, so it didn’t work for numbers longer than 3 digits. I didn’t learn about the modulo operation until later.
In all honesty, I’m still pretty proud of it, I was in 4th or 5th grade after all 😂. I’ve now been programming for 20+ years.
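Not the original BASIC, but a hedged sketch of the modulo approach mentioned above, which works for any number of digits:

```python
def round_to(n: int, place: int) -> int:
    """Round integer n to the nearest multiple of place (e.g. 10 or 100)."""
    remainder = n % place
    if remainder >= place // 2:
        return n + (place - remainder)  # round up
    return n - remainder                # round down

print(round_to(1234, 10))   # 1230
print(round_to(1256, 100))  # 1300
```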
I think part of the problem is that LLMs stop learning at the end of the training phase, while a human never stops taking in new information.
Part of why I think AGI is so far away is that running the training in real time, like a human does, would take more compute than currently exists. They should be focusing on doing more with less compute, finding new, more efficient algorithms and architectures, not throwing more and more GPUs at the problem. Right now 10x the GPUs gets you maybe 5-10% better accuracy on whatever benchmark, which is not a sustainable direction.
And this is why the WSL1 filesystem was so damn slow. WSL2 uses a native ext4 filesystem (usually; you can format it as whatever you like).
For NASA, data types don’t matter when you’re programming Voyager 1 and, 45 years later, it gets hit by an energy burst that leaves 3% of the RAM unusable and the probe transmitting gibberish. It’s awesome they were able to recover it.