

Yeah, for things that will likely be used, caching is good. I just have a problem with the “memory is free, so find more stuff to cache to fill it” or “we have gigabytes of RAM so it doesn’t matter how memory-efficient any program I write is”.


I don’t want my PC wasting resources trying to guess every possible next action I might take. Even I don’t know for sure what games I’ll play tonight.


Inb4 “uNusEd RAm iS wAStEd RaM!”
No, unused RAM keeps my PC running fast. I remember the days when accidentally hitting the Windows key while in a game meant waiting a minute for it to swap the desktop pages in, only to have it swap the game pages back when you immediately clicked back into it, all while expecting it to either crash your computer or disconnect you from whatever server you were on. Fuck that shit.


There isn’t anything fundamentally slower about using a GUI versus just text in a console. There’s more to draw, but it scales linearly, and drawing things on the screen isn’t the slow part of slow programs. Well, it can be if it’s coded inefficiently, but there are plenty of programs with GUIs that are snappy… like games, which generally draw even more complex things than your average GUI app.
Slow apps are more likely slow because of an inefficient framework (like running in a web browser with heavy reliance on scripts rather than native code), inefficient algorithms that scale poorly, poor resource use, bad organization that results in doing the same operation more times than necessary, etc.
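A toy sketch of the “inefficient algorithms that scale poorly” point (the dedupe task and function names are mine, just for illustration, not from any particular app):

```python
def dedupe_quadratic(items):
    """Remove duplicates by scanning the whole result list for each
    item: O(n^2), the kind of code that feels fine on small inputs
    and then makes the app crawl on large ones."""
    result = []
    for item in items:
        if item not in result:  # linear scan inside a loop
            result.append(item)
    return result

def dedupe_linear(items):
    """Same output (order preserved), but a set membership check
    makes it O(n). The UI toolkit was never the bottleneck."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result
```

Both return the same thing; only the scaling differs, which is exactly the kind of difference you don’t notice until the input grows.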


Except for KDE. At least compared to cinnamon, I find KDE much more responsive.
AI-generated code will make things worse. It’s good at providing solutions that generally give the correct output, but the code it generates tends to be shit by final-product standards.
Though perhaps performance will improve, since at least the AI isn’t limited to only knowing JavaScript.


The outside would be burnt and the inside raw. There might be a layer of well-cooked chicken between them, though just cutting through it to check will contaminate the cooked bit with the raw bit. That’s why the penicillin sauce is so important.


A decent number of them have already sent cash gifts, which have already been spent paying off the loan for the wedding planner and the artists who made some concept art of the theme, which is being presented as actual footage of the venue, despite a few of the visuals being things that cutting-edge technology cannot physically produce.
Oh, and after the last meeting where some of this was laid out, the wedding planner just started laughing hysterically, left the room, and isn’t answering or returning calls. The kids who were playing out front know a bit more, but are worried they’d get in trouble if they repeated what she said about the groom.


Yeah, it’s good enough that it even had me fooled, despite all my “it just correlates words” comments. It was getting to the desired result, so I was starting to think that the framework around the agentic coding AIs was able to give it enough useful context to make the correlations useful, even if it wasn’t really thinking.
But it’s really just a bunch of duct tape slapped over cracks in a leaky tank they want to put more water in. While it’s impressive how far it has come, the fundamental issues will always be there because it’s still accurate to call LLMs massive text predictors.
The people who believe LLMs have achieved AGI are either lying to try to prolong the bubble, hoping to actually reach the singularity before it pops, or revealing their own lack of expertise: they either haven’t noticed the fundamental issues, or think they’re minor things that can be solved because any individual instance can be patched.
But a) those patches can only be written by people who already know the correction (so they won’t happen at the bleeding edge until humans solve the very problems they wanted the AI to solve), and b) it would take an infinite number of such patches to cover even just the permutations of everything we do know.


Here’s an example I ran into, since work wants us to use AI to produce work stuff, whatever, they get to deal with the result.
But I had asked it to add some debug code to verify that a process was working by saving the in-memory result of that process to a file, so I could check whether the next step was even possible given the output of the first step (because the second step was failing). I get the file output and it looks fine, other than some missing whitespace, but that’s OK.
And then while debugging, it says the issue is that the step 1 data isn’t being passed to the calling function at all. Wait, how can that be, the file looks fine? Oh: when it added the debug code, it added a new code path that just calls the step 1 code directly (and properly). Which does verify step 1 on its own, but not the actual code path.
The code for this task is full of examples like that, almost as if it is intelligent but it’s using the genie model of being helpful where it tries to technically follow directions while subverting expectations anywhere it isn’t specified.
Thinking about my overall task, I’m not sure using AI has saved time. It produces code that looks more like final code, but adds a lot of subtle unexpected issues on the way.


An alternative that avoids the user-agent trick is curl | cat, which just prints the result of the fetch to the console. curl > filename.sh will write it to a script file that you can review, then mark executable and run if you deem it safe, which is safer than doing a curl | cat followed by a curl | bash (because it’s still possible for the second curl to return a different set of commands).
You can also control the user agent with curl: spoof a browser’s user agent for one fetch, do a second fetch with the normal curl user agent, and compare the results to detect malicious URLs in an automated way.
A command line analyzer tool would be nice for people who aren’t as familiar with the commands and arguments (and to defeat obfuscation), though I believe the general problem is undecidable (you can’t statically determine everything an arbitrary script will do), so it won’t ever be completely foolproof. Though maybe it can be if the script is run in a sandbox to see what it actually does instead of just being analyzed.
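A minimal sketch of that double-fetch comparison. The curl calls are shown as comments and the two payloads are faked locally so the diff logic is self-contained; the URL and filenames are hypothetical:

```shell
#!/bin/sh
# Sketch of the double-fetch check described above. In real use you
# would fetch the same URL twice with different user agents:
#   curl -s -A "Mozilla/5.0 (X11; Linux x86_64)" "$URL" -o browser_ua.sh
#   curl -s "$URL" -o curl_ua.sh
# Here the payloads are faked so the comparison runs without a network.
printf 'echo hello\n' > browser_ua.sh
printf 'echo hello\n' > curl_ua.sh

# If the server sends different content depending on the user agent,
# that's a strong hint it's targeting curl | bash pipelines.
if diff -q browser_ua.sh curl_ua.sh >/dev/null; then
  echo "payloads identical"
else
  echo "payloads differ: server may be sniffing the user agent"
fi
```

With real fetches you’d still want to review the script either way; identical payloads only rule out this one trick.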
I think those are where the name “desktop” comes from, though that term now refers to other computer things.
I refer to them as “tower”, “case” (which is technically just the shell and frame, but can include the contents), “computer”, or “machine”.
It might be sufficient if the case airflow is good. Not sure if you could avoid any heat throttling that way, but I’d guess it wouldn’t need to shut down because of heat.
This one is even worse than just removing the CPU cooler, because that cooler is now blocking the hot air from leaving the case via the rear fan.
No, they pulled the cooler off the CPU and decided to use it to block airflow to the rear case fan entirely. Best guess is that they’re trying to build an expensive smart oven.
Back in the 00s, a story about CPUs getting so hot they’d catch fire went viral. It included a video of someone removing the cooler while the CPU was running, and a few seconds later a flame appears.
On the one hand, obviously you shouldn’t remove your CPU cooler while it’s running.
But on the other hand, fans and mounts can fail, so this was still a risk even for people who were smarter than removing the cooler entirely.
It prompted CPU makers to add thermal protections that started out as “if the CPU hits a threshold, cut power”, but over time more sophisticated heat management was integrated with more sophisticated performance and power management.
So these days, if you aren’t sufficiently cooling your CPU, it won’t die much quicker; instead it will throttle performance to keep heat at safe levels. OP would have gotten better performance out of it after removing that plastic. Assuming it was CPU-bottlenecked in the first place. Things like RAM choice and settings can make it a moot point, because the RAM can’t keep up with the CPU at 100% power anyway.
That “we” isn’t global. Some called it “the CPU”, some called it “the hard drive”, some made fun of those two groups for not knowing what they were talking about.
I believe it was a product of the earlier conflict between copyright owners and AIs on the training side. The compromise was that they could train on copyrighted data but lose any copyright protection on the output of the AI.
Oh yeah, the little mouse puzzles. I always figured it wouldn’t be that hard to give cursor movement a more natural curve: just use an interpolation that clamps the first three derivatives of position, adds jitter, and includes a little overshoot and correction (or clamps the derivatives even harder near the end to mimic slowing down for precision).
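A rough sketch of that idea (the parameter values and blending scheme are my own invention, just to show the math is simple): minimum-jerk easing keeps position, velocity, and acceleration smooth, with a small overshoot that gets corrected near the end, and jitter that shrinks as the cursor approaches the target:

```python
import random

def humanlike_path(start, end, steps=50, jitter=0.8, overshoot=0.04):
    """Generate cursor waypoints from start to end with a natural feel.
    All constants here are arbitrary illustration values."""
    # Aim slightly past the real target, then correct at the end.
    ox = end[0] + (end[0] - start[0]) * overshoot
    oy = end[1] + (end[1] - start[1]) * overshoot
    points = []
    for i in range(steps + 1):
        t = i / steps
        # Minimum-jerk easing: 10t^3 - 15t^4 + 6t^5 gives smooth
        # position, velocity, and acceleration (zero at both ends).
        s = 10 * t**3 - 15 * t**4 + 6 * t**5
        # Head toward the overshot point for most of the move, then
        # blend back to the true target over the last 20%.
        blend = max(0.0, (t - 0.8) / 0.2)
        tx = ox * (1 - blend) + end[0] * blend
        ty = oy * (1 - blend) + end[1] * blend
        x = start[0] + (tx - start[0]) * s
        y = start[1] + (ty - start[1]) * s
        # Jitter shrinks near the target, mimicking slowing for precision.
        j = jitter * (1 - t)
        points.append((x + random.uniform(-j, j), y + random.uniform(-j, j)))
    return points
```

Feeding these waypoints to an OS-level mouse-move call at a fixed tick rate would trace the curve; the point is just that none of this is hard to fake.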
I’d say the countdown to programs that pretend to be webcams and display an AI video of the requested action has started, but I bet someone has already done it. Then the arms race between the actions requested and what AI can do will begin, until eventually passing the test counts as a fail because the requested actions are either too difficult for humans to understand or too difficult for humans to perform, at which point AIs will be trained to know the physical limitations of humans.
This will come in handy for when they get tired of our shit.
Can you elaborate on that? I disagree but would like to understand why you think that. Maybe you’re referring to something I wouldn’t disagree with.