  • Are people really this dramatic? There are plenty of conventions at work that we don’t like but just accept. We’ll moan about them every now and then (looking at you, “only one return statement per method”), but in the end we accept that any standard is better than total mayhem and anarchy. Usually I write the code in a way that makes sense to me and then just tidy it up to satisfy the angry rule machine. Having most of the code in the same format makes it easier to follow, and the code that was written before these rules has me convinced that this whole thing is an improvement.




  • Good meme. However, I do think that most people starting out won’t really have to deal with any of those issues in the first few years, apart from maybe the pip/venv/poetry/etc. choice. But whichever one they pick, it’ll probably work well enough for whatever they’re doing. When I started out I didn’t use any external libraries apart from pygame (which probably came pre-installed). I programmed in the IDLE editor that came with Python. I have no idea how I functioned that way, but I learnt a lot and had plenty of fun.



  • Sometimes I look at the memes around here and wonder wtf y’all are doing. Like, neither my code nor the code at the place I work is perfect. But I don’t think I’ve ever seen a merge do this. Maybe some of the most diverged merges temporarily had a lot of errors because of some refactoring, but then it was just a few find-and-replaces away from being fixed again. And those were merges where multiple teams had been working on both the original and the fork for years; even then it was usually pretty okay.



  • Machine learning and compression have always been closely tied together. A model is trying to learn the “rules” that describe the data rather than memorizing all of the data.

    I remember implementing a paper older than me in our “Information Theory” course at university that treated the creation of a decision tree as compression. Their algorithm measured the total message size as the cost of sending the decision tree itself plus all the exceptions to it (the training samples the tree misclassifies). If a node in the tree increased the overall message size, it would simply be pruned. This way they ensured that you wouldn’t draw conclusions from very little data and would only add the big patterns in the data.
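    Here’s a rough sketch of that pruning rule in Python. The Node class and the flat coding costs (split_bits, bits_per_exception) are stand-ins I made up for illustration; the actual paper derives a much more careful coding scheme:

    ```python
    class Node:
        """A decision-tree node; an empty children list means it is a leaf."""
        def __init__(self, errors, children=(), split_bits=8.0):
            self.errors = errors          # samples misclassified if we stop here
            self.children = list(children)
            self.split_bits = split_bits  # assumed cost (in bits) of encoding this split

    def mdl_cost(node, bits_per_exception=10.0):
        """Message length of this subtree plus its exceptions, pruning greedily."""
        as_leaf = node.errors * bits_per_exception
        if not node.children:
            return as_leaf
        as_subtree = node.split_bits + sum(
            mdl_cost(child, bits_per_exception) for child in node.children
        )
        # If the split doesn't save enough exception bits to pay for its
        # own description, prune it down to a leaf.
        if as_leaf <= as_subtree:
            node.children = []
            return as_leaf
        return as_subtree

    # A split that removes only one error doesn't pay for itself...
    barely_useful = Node(errors=3, children=[Node(2), Node(1)])
    # ...while one that removes almost all errors does.
    clearly_useful = Node(errors=9, children=[Node(0), Node(1)])
    mdl_cost(barely_useful); mdl_cost(clearly_useful)
    print(bool(barely_useful.children), bool(clearly_useful.children))  # False True
    ```

    The nice part is that the trade-off is automatic: a split only survives if it pays for its own description.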

    Fundamentally it is just compression; it’s simply a far better method of compression than all the models that we had before.

    EDIT: The paper I’m talking about is “Inferring decision trees using the minimum description length principle” - L. Ross Quinlan & Ronald L. Rivest


  • You call it “quick to judge and superficial”, but imo that’s the wrong attitude. Every tool we use as humans should be designed to be as intuitive as possible, because that makes it easiest for people to learn to use it. That doesn’t mean a tool can’t be complex or customizable, but the default experience should make it easy for new users to quickly achieve something. Once they grow accustomed to the tool they can tailor it to their own way of working.

    No tool has to do this, but if it wants to be widely used then this is kinda necessary.

    There’s a reason why there are whole fields of study into human media interaction, and why software companies hire UI designers. Everything that doesn’t have to be explained in words and text because it is intuitive saves mental overhead for the user and makes the application more accessible.



  • Can, but not by default. The default setup is what leaves an impression on most users. Most users opening GIMP for the first time expect to be able to find the stuff they need, not to first spend a lot of time getting familiar with all of its options. It shouldn’t be necessary to spend time opening all the sensible default windows and re-aligning everything the first time you boot it. At least, that shouldn’t be the case if GIMP wants to be as popular with non-technical users as Krita is.

    Also, the toolbar still doesn’t have the nice separation between tool functions, and it still feels a bit more chaotic. Not sure if it’s the icons or the order.


  • Now admittedly I’m not someone who often uses drawing programs, but my biggest issue in GIMP is that I never seem to be able to find what I’m looking for.

    In the two images you posted you can actually see an example of such a case. In Krita all the tools (or whatever you’d call them) in the bar on the left are ordered in a logical way, and the different types of tools are visually divided by separator lines. The toolbar is also only two icons wide, which makes scanning for the right tool a bit easier, since you can mostly just scan along the vertical axis. In GIMP it’s just a pile of low-contrast icons in seemingly random order. Unless you’ve used it enough to know the order, you’re gonna have to do a lot more searching, and that searching will be way harder since you’ll have to scan both horizontally and vertically.

    It’s like reading a website where the text takes the whole width of the screen with no paragraph breaks (GIMP) vs one where the line length is constrained, the text is horizontally centered, and there are proper paragraphs (Krita).

    I feel like this example reflects my personal experience with both. I’ve used quite a few different types of image editing programs, and with most of them I can fairly easily find the stuff I need. Using GIMP, however, I used to be quite lost. Nowadays it’s gotten better because the windows are no longer all floating around and I’ve used it more. Still, I only found Krita after using GIMP for a fair while, and yet I instantly felt more at home because the UI was easier to navigate.

    Edit: That being said, GIMP is a very cool program. I don’t want to hate on it too much. It’s helped me countless times. The UI has already improved a lot since the floaty window days, and I hope that continues.


  • I’m not a hundred percent sure, but afaik it has to do with how random the output of the GPT model is. At temperature 0 it will always pick the most probable next continuation of a piece of text, according to its own prediction. The higher the temperature, the more chance less probable outputs have of getting picked. So it’s most likely to pick 42, but as the temperature increases, the chance of (according to the model) less likely numbers increases.

    This is how temperature works in the softmax function, which is often used in deep learning.
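    For the curious, here’s a minimal sketch of a temperature-scaled softmax; the token logits are made up for illustration:

    ```python
    import math

    def softmax(logits, temperature=1.0):
        # Temperature rescales the logits before normalizing:
        # T -> 0 concentrates all probability on the argmax,
        # while higher T flattens the distribution toward uniform.
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        return [e / total for e in exps]

    # Toy next-token logits where the model strongly prefers "42".
    logits = {"42": 5.0, "7": 3.0, "100": 2.0}
    for t in (0.2, 1.0, 2.0):
        probs = softmax(list(logits.values()), temperature=t)
        print(t, {tok: round(p, 3) for tok, p in zip(logits, probs)})
    ```

    At T = 0.2 it essentially always picks 42; at T = 2 the (according to the model) less likely numbers get picked a fair chunk of the time.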