  • looking at files inside the temp folder … otherwise known as … are you sitting down? You really should sit for this … temp files!

    The point OP is making is that those people would not put 2 and 2 together to understand that the files they were looking at are called temp files, just because that’s the folder they found them in. They may not even remember the name of the folder, only that it contains a bunch of files with a prefix they’re now googling.

    Not sure why I’m bothering explaining this to you; the way you responded makes you look absolutely insufferable. But maybe someone else who comes across this will find it useful.


  • Well, I think for a 9-year-old it’s fine. The stage where you’d run into issues is when trying to get into “actual” software development, where the flexibility in scoping and typing afforded by Python can lead to some bad habits (e.g. overusing global/shared variables, declaring them from within functions, catching errors late instead of validating data first, …); there’s a sketch of what I mean at the end of this comment.

    I don’t have a ton of experience with it, but I think C# strikes a pretty good balance between strictness and beginner-friendliness. Modern Java isn’t all that bad either, though it doesn’t have very good options for fun things to build. But again, I don’t think this necessarily applies to a child; I’m an educator at a university, so both my target audience and point of reference are freshman compsci students.
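
    For anyone curious, here’s a minimal sketch of the kind of habits I mean; the names and data are made up for illustration:

    ```python
    # Pattern Python happily allows: shared global state mutated from
    # inside a function, with errors caught late and far from the source.
    total = 0

    def add_score(raw):
        global total       # flexible scoping invites shared state
        total += int(raw)  # raises here if raw isn't numeric

    try:
        add_score("twelve")
    except ValueError:
        pass  # error swallowed far from where the bad data entered

    # Validate-first alternative: check inputs at the boundary and
    # keep state local, so failures surface where they originate.
    def add_scores(raw_values):
        if not all(v.isdigit() for v in raw_values):
            raise ValueError(f"non-numeric score in {raw_values!r}")
        return sum(int(v) for v in raw_values)
    ```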


  • I was brought up on Python and also do not like it, for a variety of reasons both practical and personal. I also think that if you are trying to learn software engineering, it is not a good language to start out with, despite being so easy to pick up at first.

    Some people try to use Python’s popularity as a counterpoint, and while it does show that my view is a minority opinion, it’s not a very convincing argument for the language itself.


  • Do keep in mind that if you upgrade your regular RAM, this will only benefit models running on the CPU, which are far slower than models running on the GPU. So with more RAM you may be able to run bigger models, but they will also run more than an order of magnitude slower. If you want a response within seconds, you want to run the model on the GPU, where only VRAM counts.

    In the near future there will probably be models that perform much better at consumer-device scale, but for now it’s unfortunately still a pretty steep tradeoff, especially since large amounts of VRAM haven’t really been in consumer demand and are therefore much harder to come by. Some rough numbers below give a sense of the scale involved.
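
    As a back-of-the-envelope illustration (ballpark estimates of weight storage only, not benchmarks):

    ```python
    # Rough memory needed just to hold a model's weights.
    # Real usage is higher: activations, KV cache, framework overhead.

    def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
        return params_billion * 1e9 * bytes_per_param / 1024**3

    for params in (7, 13, 70):  # common open-model sizes, in billions
        fp16 = weight_memory_gb(params, 2.0)  # 16-bit weights
        q4 = weight_memory_gb(params, 0.5)    # ~4-bit quantization
        print(f"{params}B params: ~{fp16:.0f} GB at fp16, ~{q4:.0f} GB at 4-bit")
    ```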


  • “Runs locally” is a very different requirement, and not one you’re likely to find much for. There are smaller open-source LLMs, but if you’re looking for GPT-4-level performance, your device won’t be able to handle it. Llama is probably your best bet, but unless you have more VRAM than any consumer GPU currently offers, you’ll have to go with smaller models, which produce lower-quality output. A sketch of what a local setup looks like is below.
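
    If you want to try it, a minimal sketch using the llama-cpp-python bindings might look like this; the model filename and parameters are placeholders, not recommendations:

    ```python
    # Minimal local inference via llama-cpp-python (pip install llama-cpp-python).
    # Assumes you've already downloaded a quantized GGUF model file.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./llama-2-7b-chat.Q4_K_M.gguf",  # placeholder filename
        n_ctx=2048,        # context window; larger costs more memory
        n_gpu_layers=-1,   # offload all layers to GPU if VRAM allows; 0 = CPU only
    )

    out = llm("Q: What is a temp file? A:", max_tokens=64, stop=["Q:"])
    print(out["choices"][0]["text"])
    ```

    With 4-bit quantization, a 7B model fits comfortably in 8 GB of VRAM; 13B is roughly the ceiling for typical consumer cards.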