
  • Go check on AliExpress: there are tons of non-smart phones, especially the stuff marked as “senior phone”, and they’re pretty cheap too (like $15 for a mobile phone that just does calls and SMS).

    If you want the stuff that’s not glitzy and heavy on marketing, you need to get it from where the factories are, not where the brands are - basic mobile phone tech is a thoroughly solved and highly integrated problem nowadays, well within the reach of even smallish electronics manufacturers to design themselves.

    Also check HMD, the Finnish mobile maker behind today’s Nokia-branded phones, which also has several non-smart models (including revived old Nokia models).

    Edit: No idea if any are flip-phones though. Here’s an example flip phone


  • I’m running a full version of Ubuntu on my Orange Pi 5 Plus, which is roughly in the same class as a Raspberry Pi 5, and it runs fine - so that thing could easily be hardware in the same class of power as a Raspberry Pi 5 or an entry-level Intel mini-PC and run Ubuntu.

    That said, it would still be an SBC that costs about $120.

    In my experience, a $40 SBC can’t handle much more than Armbian and is better off with a lightweight distro running a lighter window manager.


  • The entire purpose of writing good, readable code (mostly self-explanatory and, where it isn’t, properly commented to explain what’s going on) is so that the person who picks it up later doesn’t need to be somebody who remembers what that code does and how it does it.

    Whilst this is mainly important for allowing other people to work on that code, as a side effect the actual person who wrote it, if they followed those coding principles, need not remember what it does and how it does it.

    One of the upsides of being a senior dev is having figured this kind of thing out from experience, which offsets the downside that, since you’re older and have done a ton of things, you’re less likely to properly remember the details of a specific code base after some months of not looking at it.
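
    For illustration, a small Python sketch of that principle: the names carry most of the meaning, and a comment covers the one non-obvious decision (the function and types here are made up for the example):

        from dataclasses import dataclass
        from datetime import date

        @dataclass
        class Invoice:
            number: str
            issued_on: date

        def visible_invoices(invoices: list[Invoice], cutoff: date) -> list[Invoice]:
            """Return the invoices a support agent may see, newest first."""
            recent = [inv for inv in invoices if inv.issued_on >= cutoff]
            # Python's sort is stable, so invoices issued on the same day
            # keep their original entry order, which audits rely on here.
            return sorted(recent, key=lambda inv: inv.issued_on, reverse=True)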



  • Properly done, Agile is more about solving the “We have the general idea of what we need but will only know the details for sure once the users see it and start playing around with it” problem.

    You still need to know upfront that a wedding is actually needed, but you have a process for figuring out and trying out the details of its various elements (say, as part of deciding what kind of food there will be at the reception, actually preparing and trying various options) before the whole thing actually gets “delivered”.

    Agile also works well in environments where software is developed to serve the kind of business that is constantly changing (for example, certain areas of Finance), or where something totally new is being created from the ground up (i.e. many if not most startups), because the business itself is a sort of neverending “we’ll figure out what we need, and whether it works well, when we get there and try it out”, which matches almost perfectly the fast, scope-limited definition->implementation->feedback cycles of the Agile software development process.


  • I was once technical lead on a custom-designed project for an external client of a web-development company, and the sales guy, who had sold it to the customer without ever consulting us about it, had the project management responsibility.

    On the very first day the guy got me, the junior developer and the designer together for the project launch meeting and started saying how we would have to work extra to make it fit his (ridiculously short) deadlines. I just said “No, it’s not at all possible to fulfill those deadlines, so that’s not going to happen”, and when he tried to argue with “what about the client”, I replied: “You came up with those estimates and gave them to the client without even talking to us, the experts in that domain, so managing the fallout with the client is your problem, not ours”.

    I fondly remember all that because of the junior developer’s visible transition from downtrodden and unhappy to absolute happiness when, after the sales guy / project manager gave us the “work extra hard” spiel, I (as the tech lead) replied with “No, that’s not going to happen”.

    (Ultimately the project took twice as long as the sales guy’s estimates.)

    The whole “putting the cart in front of the oxen” (as we say in my country) feel of this meme reminded me of that one (and that memory invariably puts a smile on my face).


  • When I was finishing off my degree at Uni, I actually spent a couple of months as an auxiliary teacher giving professional training in Unix, which included teaching people shell scripting.

    Nowadays (granted, almost 3 decades later), I remember almost nothing of shell scripting, even though I’ve stayed on the Technical Career Track doing mostly Programming ever since.

    So that joke is very much me irl.


  • I suspect that starting your own version of the API is the Software Designer / Software Architect version of Programmers’ “I know best so I’m going to do my part of the code my way which is different from everybody else’s”.

    Mind you, at the very least good Software Architects should know best, but sometimes people get the title without having the chops for it.



  • I’ve seen it again and again: deploying to Staging and integration testing in that Production-like environment, together with the software of other teams, reveals problems that we did not see in Dev, thus saving us from deploying into Production software that broke or, worse, corrupted the database.

    This was certainly very important when I worked in environments such as Investment Banking, where Production being down because of integration issues or, worse, sending bad data to other systems, or the database having to be rolled back to the overnight backup, might mean the business losing millions of dollars.

    It’s not a foolproof mechanism, but it certainly catches most integration problems, which are often most of the problems in complex environments where multiple teams are responsible for multiple highly integrated software systems.

    Granted, little teams doing small software systems in simple environments, where their software has little or no integration with other software, can probably get away with not having a Staging environment if their Dev environment has the same setup as Production (same OS, same database and so on), since they’re going to have very little in the way of integration problems with other people’s software.
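
    As a minimal sketch of the kind of check that only pays off in a Production-like environment (the URL, endpoint and field names here are all made up), an integration test run against Staging instead of against Dev mocks:

        import json
        import os
        import urllib.request

        # Hypothetical Staging base URL, injected by the CI pipeline.
        STAGING_URL = os.environ.get("STAGING_URL", "https://staging.example.internal")

        def test_customer_lookup_roundtrip():
            """Hit the real (Staging) service instead of a Dev mock, so schema
            drift in another team's deployment shows up before Production."""
            with urllib.request.urlopen(f"{STAGING_URL}/customers/42") as resp:
                customer = json.load(resp)
            # A Dev mock would return whatever we stubbed; in Staging, a
            # missing field means some other team's system changed under us.
            assert "last_name" in customer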


  • A family of software development processes for teams, which focuses on cycles of quickly building and delivering smaller blocks of program functionality (often just a single program feature - say: “search customers by last name” - or even just part of a feature) to end-users, so as to get quick feedback from those users of the software, which is then used to determine what should be done in subsequent cycles.

    When done properly it addresses the issues of older software development processes (such as the Waterfall process) in situations where the users don’t really have a detailed vision of what the software needs to do for them (which are the most usual situations, unless the software just helps automate their present way of doing things) or where there are frequent changes in what they need the software to do for them (i.e. they already use the software but frequently need new features or tweaks to existing features).

    In my own career of over two decades I’ve only ever seen it done properly maybe once or twice. The problem is that “doing Agile” became fashionable at a certain point, maybe a decade ago, and pretty much a requirement to have on one’s CV as a programmer, so you end up with lots of teams mindlessly “doing Agile” by adopting some of the practices from Agile (say, the stand-up meeting or paired programming) without including other practices and elements of the process (and without adjusting them to their local situation), thus not achieving what that process is meant to achieve - essentially they don’t really understand it as a software development process which is more adequate for some situations and less for others, what it is actually supposed to achieve, and how.

    (The most frequent things not being done are those around participation of the end-users of the software in evaluating what was done in the last cycle, determining new features and feature tweaks for the next cycle, and prioritizing them. The funny bit is that these are core parts of making Agile deliver its greatest benefits as a software development process, so basically most teams aren’t doing the very parts of Agile that actually make it deliver superior results to most other methods.)

    It doesn’t help that to really and fully get the purpose of Agile and how it achieves it, you generally need to be at the level of experience at which you’re looking at the actual process of making software (the kind of people with at least a decade of experience and titles like Software Architect) who, given how ageist a lot of the Industry is, are pretty rare. So Agile is usually being done by “kids” in a monkey-sees-monkey-does way, without understanding it as a process, hence why it has by now, unsurprisingly, gotten a bit of a bad name (as with everything, the right tool should be used for the right job).


  • They’re supposed to work as an adaptor/buffer/filter between the technical side and the non-technical stakeholders (customers, middle/upper management), as well as doing some level of organising.

    In my 2 and a half decades of experience (a lot of it as a freelancer, so I worked in a lot of companies of all sizes in a couple of countries), most aren’t at all good at it, and very few are very good at it.

    Some are so bad that they actually amplify uncertainty and disorganisation by, every time they talk to a customer or higher up, totally changing the team’s direction and priorities.

    Mind you, all positions have good professionals and bad professionals; the problem with project management is that a bad professional can screw up the work of a lot of people, whilst the damage done by, for example, a single bad programmer tends to be much more contained and generally mainly impacts the programmer him- or herself (so that person is very much incentivised to improve).


  • Halfway into saving the World it turns out you need some data that’s not even being collected, something that nobody had figured out because nobody analysed the problem properly beforehand, and now you have to take a totally different approach because that can’t be done in time.

    Also, the version of a library included by some dependency of a library you pulled in to do something stupidly simple is different from the version of the same library included by some dependency of a totally different library somebody else pulled in to do something else that’s just as stupidly simple, and neither you nor that somebody else wants to be the one to rewrite their part of the code.
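
    A toy Python sketch of that diamond-dependency clash (all package names and versions are made up), flattening each direct dependency’s pins and flagging any transitive library pinned at two different versions:

        # Hypothetical flattened dependency graph: direct dependency -> its pins.
        DEPENDENCIES = {
            "left-pad-deluxe": {"strutils": "1.2"},  # your stupidly simple pick
            "title-caser":     {"strutils": "3.0"},  # somebody else's pick
        }

        def find_version_conflicts(deps):
            """Map each transitive library to the set of versions pinned for it,
            keeping only the libraries pinned at more than one version."""
            pinned = {}
            for direct, transitive in deps.items():
                for lib, version in transitive.items():
                    pinned.setdefault(lib, set()).add(version)
            return {lib: vs for lib, vs in pinned.items() if len(vs) > 1}

        print(find_version_conflicts(DEPENDENCIES))
        # {'strutils': {'1.2', '3.0'}} - and nobody wants to rewrite their side.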



  • It eliminates the dependency-on-specific-distributions problem and, maybe more importantly, it solves the dependency-on-specific-distribution-versions problem (i.e. working fine now but maybe not working at all later in the very same distribution, because some libraries are missing or the default configuration is different).

    For example, one of the games in my GOG library is over 10 years old and has a native Linux binary, which won’t work in a modern Debian-based distro by default because some of the libraries it requires aren’t installed (meanwhile, the Windows binary will work just fine with Wine). It would be kinda deluded to expect the devs to keep on updating the native Linux build (or even the Windows one) for over a decade, whilst if it had been released as a Docker app, that would not be a problem.
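
    As a sketch of that idea (the base image and library packages below are illustrative, not the game’s actual dependencies), a Dockerfile freezing the game’s whole runtime environment at release time:

        # Hypothetical packaging of a 2014-era game: pin a base image from the
        # same era, so its shared libraries never disappear from under it.
        FROM ubuntu:14.04
        RUN apt-get update && apt-get install -y libsdl2-2.0-0 libopenal1
        COPY ./game /opt/game
        CMD ["/opt/game/run.sh"]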

    So yeah, stuff like Docker does have a reasonable justification when it comes to isolating apps from external dependencies which their devs have no control over, especially when it comes to future-proofing your app: the Docker API itself needs to remain backwards compatible, but there is no requirement that Linux distros be backwards compatible (something which would be much harder to guarantee).

    Mind you, Docker and similar are a bit of a hack to solve a systemic (cultural, even) problem in software development, which is that devs don’t really do proper dependency management and just throw everything and the kitchen sink, in terms of external libraries (which then depend on external libraries, which in turn depend on more external libraries), into the simplest of apps. But that’s a broader software development culture problem, and most present-day developers only ever learned the “find some library that does what you need and add it to the list of dependencies of your build tool” way of programming.

    I would love it if we solved what’s essentially the core Technical Architecture problem in present-day software development practices, but I have no idea how we can do so, hence the “hack” of things like Docker pretty much including the whole runtime environment (funnily enough, a variant of the old way of building your apps statically with every dependency included) to work around it.



  • Look for a processor for the same socket that supports more RAM and make sure the Motherboard can handle it - maybe you’re lucky and it’s not a limit of that architecture.

    If that won’t work, break up your self-hosting needs into multiple machines and add another second-hand or cheap machine to the pile.

    I’ve worked on designing computer systems that handle tons of data and requests, and often the only reasonable solution is to break up the load and throw more machines at it (for example, when serving millions of requests on a website, just put a load balancer in front of it that assigns user sessions, and their associated requests, to multiple machines: the load balancer pretty much just routes requests by user session whilst the heavy processing is done by the machines behind it, in such a way that you can expand the whole thing by just adding more machines).
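
    A toy Python sketch of that session-based routing (the backend hostnames are made up): hash the session id to pick a backend, so every request in a session lands on the same machine and the balancer itself stays nearly stateless.

        import hashlib

        # Hypothetical pool of backend hosts; expand capacity by adding entries.
        BACKENDS = ["host-a.lan", "host-b.lan", "host-c.lan"]

        def backend_for_session(session_id: str) -> str:
            """Pin all requests of a session to one backend by hashing its id."""
            digest = hashlib.sha256(session_id.encode()).digest()
            index = int.from_bytes(digest[:4], "big") % len(BACKENDS)
            return BACKENDS[index]

        print(backend_for_session("user-1234"))  # always the same host

    (A real balancer would use consistent hashing rather than a plain modulo, so that adding a machine doesn’t reshuffle every existing session.)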

    In a self-hosting scenario I suspect you’ll have a lot of margin for expansion by splitting services into multiple hosts and using stuff like network shared drives in the background for shared data, before you have to fully upgrade a host machine because you hit that architecture’s maximum memory.

    Granted, if a single service whose load can’t be broken down so that you can run it as a cluster needs more memory than you can put in any of your machines, then you’re stuck having to get a new machine. But even then, by splitting services you can get a machine with a newer architecture that can handle more memory but is still cheap (such as a cheap mini-PC), and just move that memory-heavy service to it whilst leaving CPU-intensive services on the old but more powerful machine.


  • I have a cheap N100 mini-PC with Lubuntu and Kodi on it, alongside a wireless remote, as my TV box, and use my TV as a dumb screen.

    Mind you, you can do it even more easily with LibreELEC instead of Lubuntu, and more cheaply with one of its supported cheap SBCs plus a box, instead of a mini-PC.

    That said, even the simplest solution is beyond the ability of most people to set up, and once you go up to the next level of ease of setup - a dedicated Android TV box - you’re hit with enshittification (at the very least preconfigured apps like Netflix, with matching buttons on your remote) even if you avoid big brands.

    Things are really bad nowadays unless you’re a well-informed tech expert with the patience to dive into those things at home.


  • I use a pretty basic one (with an N100 microprocessor and Intel integrated graphics) as a TV box + home server combo and it’s excellent for that.

    It’s totally unsuitable for gaming unless we’re talking about stuff running in DOSEmu or similar, and even then I’m using it with a wireless remote rather than a keyboard + mouse, which isn’t exactly suitable for PC gaming.

    Mind you, there are configurations with dedicated graphics, but they’re about 4x the price of the one I got (which cost me about €120), and at that point you’re starting to enter the same domain as small form factor desktop PCs using standard motherboards, which are probably better for PC gaming simply because you can upgrade just about anything in those, whilst the hardware upgradeability of mini-PCs is limited to only some things (like the SSD and RAM).