Ooh good catch. That makes sense. Not sure I would call this beautiful, especially without any way to tell how much time has passed, but fair enough
That doesn’t make sense unless this was personal expenditure, which it doesn’t seem to be
But… from when? Surely expenditure hasn’t gone up linearly with time
…how did the line come about? How did they determine what the life expectancy would have been with less expenditure per capita?
It’s great for racing games where you have gradual steering but also quicker response times than with a controller
TIL it’s “entitled” to ask that software you use either complies with the law or clearly lets you know that it doesn’t, especially when the developers have no idea what the law is
It will happen, probably in weeks to months.
in the next few years, like, very few
Now who’s moving the goalposts…?
Considering you asked it to follow a rule that does not exist, it’s unsurprising it can’t do it “correctly”
Interesting & quite a cool visualization!
I am on day 7 of symptoms right now, also just got it after dodging it for almost four years. My fever has subsided but boy am I congested still. I’m just glad the massive headaches are gone, I couldn’t think straight for a while.
On the other hand, I’ve been wearing FFP2 masks in all indoor spaces religiously and I’ve been planning to keep doing it until I got covid (my masks didn’t fail me in the end either, I got it from a household member), and at this point I am genuinely excited to finally not be the outsider anymore and feel more normal again.
While at the same time closing all PRs indiscriminately, even the ones that just try to update the repo from its decades-old JavaScript syntax (and that get support in the comments)
Here in Germany everyone I know pronounces the letters individually – as German letters that is, which means the Q is pronounced “coo” rather than “cue”. I don’t mind it, it’s not quite as clunky as in English.
I do say sequel when speaking English though.
It’s maddening. With the NPD, the line was that they were too insignificant for any real danger to come from them, so you don’t ban a party like that. And now?? Is this also not a real danger??
I regularly have those dreams where I am desperately trying to open my eyes because of some danger or other, but they’re suuuper heavy and it doesn’t work. This is that
If it’s in the minified front end code it’s already client side, of course you don’t show it to the user but they could find out if they wanted to. Server side errors are where you really have to watch out not to give out any details, but then logging them is also easier since it’s already on the server.
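Rough sketch of what I mean on the server side, in Python (the function and payload are made up for illustration): log the full details where only you can see them, and hand the client a generic message.

```python
import logging

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger(__name__)

def handle_request(payload: dict) -> dict:
    """Hypothetical request handler: details go to the server log,
    the client only ever sees a generic error."""
    try:
        # stand-in for real business logic
        return {"status": 200, "result": payload["user_id"] * 2}
    except Exception:
        # full traceback stays in the server log, where logging is easy...
        logger.exception("request failed")
        # ...while the response leaks nothing about internals
        return {"status": 500, "error": "Internal server error"}

print(handle_request({"user_id": 21}))
print(handle_request({}))
```

The point being: the try/except boundary is where you decide what crosses to the client, and since the exception is raised on the server anyway, logging it there costs nothing.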
Well, I think for a 9 year old it’s fine. I think the stage where you would run into issues is when trying to get into “actual” software development, where the flexibility in scoping and typing afforded by Python can lead to some bad habits (e.g. overusing global/shared variables, declaring them from within functions, catching errors late instead of validating data first, …)
I don’t have a ton of experience with it but I think C# strikes a pretty good balance between strictness and beginner-friendliness. Modern Java isn’t all that bad either, though it doesn’t have very good options for fun things to build. But again, I don’t think this necessarily applies to a child; I’m an educator at a university so both my target audience and point of reference are freshman compsci students.
I was brought up on Python and also do not like it, for a variety of reasons both practical and personal. I also think that if you are trying to learn software engineering, it is not a good language to start out with, despite being so easy to pick up at first.
Some people try to use Python’s popularity as a counterpoint, and while it does show that my view is a minority opinion, it’s not a very convincing argument for the language itself.
Do keep in mind that if you upgrade your regular RAM this will only benefit models running on the CPU, which are far slower than models on the GPU. So with more RAM you may be able to run bigger models, but when you run them they will also be more than a literal order of magnitude slower. If you want a response within seconds you would want to run that model on the GPU, where only VRAM counts.
Probably in the near future there will be models that perform much better at consumer device scale, but for now unfortunately it’s still a pretty steep tradeoff, especially since large VRAM hasn’t really been in high demand and is therefore much harder to come by.
“Runs locally” is a very different requirement and not one you’re likely to find anything for. There are smaller open source LLMs, but if you are looking for GPT-4 level performance your device will not be able to handle it. Llama is probably your best bet, but unless you have more VRAM than any consumer GPU currently offers, you’ll have to go with smaller models, which have lower quality output.
It’s not code anyone is supposed to read or work with, this is the result of minifying it to be as short as possible. And from a quick glance what’s happening is that a variable is set to correspond with whether the cursor is currently over a certain element. Not sure what’s funny about this?
The point OP is making is that those people would not put 2 and 2 together to understand that the files they were looking at are called temp files, just because that’s the folder they found them in. They may not even remember the name of the folder, only that it contains a bunch of files with a prefix they’re now googling.
Not sure why I’m bothering explaining this to you, the way you responded makes you look absolutely insufferable, but maybe someone else who comes across this will find it useful.