  • A quick look through its documentation shows that it walks the user through a subset of the instructions the original user provided (or an alternative set of instructions if using Android 11+, as there it can use a different mechanism), plus a few more, in order to run a Shizuku service as the “adb” user.

    From then on, that Shizuku service can be used by other apps to do everything the “adb” user can, including installing and updating applications.

    So I guess it could be used by something like F-Droid to get around Google’s new mechanism for locking down app installs.

    For Android < 11 it’s no more beginner-friendly than the instructions already provided by the original user, though it’s better on Android 11+ as there it’s all done by interacting with menus on the Android side (see here under Start Shizuku).


  • Most of that stuff is automatable - except the bit about activating Developer mode and USB Debugging on the device (steps 3 to 6), which only needs to be done once per device - so I expect we will soon see several nice GUI tools that automate the rest. Eventually we might even see tools that talk directly to the phone over USB via libusb, using the same protocol as ADB, so installing the Android Platform Tools won’t be needed at all.

    But yeah, at this point it requires people to at the very least be familiar with using the command line.
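
    As an illustration, here’s a minimal Python sketch of the kind of thing such a GUI tool could wrap. It assumes the Android Platform Tools (“adb”) are installed and on the PATH, and that the one-off manual step (Developer mode + USB Debugging) has already been done; the APK filename is just a placeholder.

    ```python
    # Minimal sketch of what a GUI sideloading tool could wrap: it assumes "adb"
    # (from the Android Platform Tools) is on the PATH and that Developer mode and
    # USB Debugging were already enabled on the phone (the one-off manual step).
    import subprocess
    import sys

    def adb(*args: str) -> str:
        """Run an adb command and return its stdout, raising on failure."""
        result = subprocess.run(["adb", *args], capture_output=True, text=True, check=True)
        return result.stdout

    def list_devices() -> list[str]:
        """Return the serials of devices visible over USB debugging."""
        lines = adb("devices").strip().splitlines()[1:]  # first line is a header
        return [line.split()[0] for line in lines if line.strip().endswith("device")]

    def sideload(apk_path: str) -> None:
        """Install (or update, thanks to -r) an APK on the first connected device."""
        devices = list_devices()
        if not devices:
            sys.exit("No device found - is USB debugging enabled and authorised?")
        adb("-s", devices[0], "install", "-r", apk_path)

    if __name__ == "__main__":
        sideload("some-app.apk")  # placeholder APK name
    ```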


  • Well, I did app development for Android for a couple of years, so I’ll be using ADB to install APKs on any affected device if needed.

    I’ll also never do development work for Android ever again, beyond making utilities for myself if I need something like that.

    Beyond that, I’ll never buy an Android device that cannot be unlocked. The last one I got was a Xiaomi phone, which at the time could be unlocked (which I did, installing an alternative ROM on it before I started using it), but they’ve since stopped allowing that, so Xiaomi isn’t going to be getting any more money from me.

    Mid to long-term, I expect Linux devices are the solution. I’m especially interested in getting a Linux tablet (7" or 8") to replace the tablet I currently use mostly for book reading and internet browsing when I’m out and about (hence the size needs to be small enough to fit in a back or jacket pocket).

    When I started looking into it, my expectation was that Linux tablets would make even more sense as devices than phones, since they’re closer to notebooks in terms of how they’re used, but I haven’t really found all that many out there - there are more Linux phones than tablets - and all the ones I found were 10" or more (so, too large for my use case).

    (PS: suggestions welcome, even just stuff I can root and install something like Ubuntu Touch on)

    Am I so unusual in wanting a portable computing device with a big enough screen to read stuff on, meant for consuming media rather than working on (so no keyboard needed), which is not so big that I need to haul it in a backpack, not a full-blown smartphone with all the bells and whistles (I already have a smartphone in my pocket with mobile data, camera and GPS, so why would I need that shit AGAIN on a tablet???) and not a locked-down system like iOS or Android?


  • Curiously, actual scams also go through “a speculative boom that looked like a scam in the moment”, and then they turn out to actually be overhyped scams that don’t in fact change the World.

    Cryptocurrencies are a good example.

    Your “don’t throw the baby out with the bath water” statement makes a lot of sense in the early stages, when we don’t really know yet whether what’s being overhyped might be just the beginning of something big, hence one shouldn’t just discount a tech because there’s a massive hype train behind it. The thing is, that stage was maybe 1 or 2 years ago for things like LLMs; by now it’s becoming obvious they’re a dead end, since the speed of improvement and the cost-to-improvement ratio have become very bad.

    Whilst broader Machine Learning tech is useful - and has been ever since it started (back in the 90s neural networks were already used to recognize postal codes on mail envelopes for automated sorting) - this bubble was never about the broader domain of Machine Learning. It was about a handful of very specific NN architectures with massive numbers of neurons and huge training datasets (generally scraped from the Internet), and it’s those architectures, and the associated approaches to trying to create a machine intelligence, that are turning out not to deliver what was promised at all; having already reached a point of very low incremental returns, they seem to be a dead end in the quest to reach that objective. What they do deliver - an unimaginative text fluff generator - turns out to be mainly useless.

    So yeah, if you’re betting on the kind of huge neural networks with huge datasets used in the subsection of ML which has been overhyped in this bubble, and on the kind of things they require, such as lots of GPU power, you’re going to get burned, because that specific tech pathway isn’t going to deliver what was promised, ever.

    Does this mean that ML will stop being useful for things like mail sorting or other forms of image recognition? Of course not; those are completely different applications of that broad technique which have very little to do with what people now think of as AI and the bubble around it.

    Machine Learning has a bright future; it’s just that what was pushed in this bubble wasn’t Machine Learning in general, but rather very specific architectures within it. Just like the “Revolution in Transportation” that turned out to be the Segway - which was kind of crap and thus quickly fizzled - didn’t destroy the entire concept of transportation, the blowing up of the LLM bubble isn’t going to destroy the concept of Machine Learning. But in both cases, if you went all in on that specific expression of a technology (or the artifacts around it, such as massive amounts of GPU power for LLMs), the fact that the broader domain keeps going isn’t going to be much comfort to you.


  • This is really just a driver which sends a bunch of bytes via I2C to a microcontroller.

    I2C is a very standard way of communicating with digital integrated circuits at low speed, so this is not specific to the microcontroller used on Synology NAS devices (which is actually a pretty old and simple one), much less specific to driving LEDs.

    So whilst technically this specific Linux driver ends up controlling LEDs on a very specific device, the technique used in it is far more generic than that: it can be used to control just about any functionality sitting behind a digital integrated circuit that exposes a control interface via I2C, be that a circuit which hardwires that interface or one which, like this one, is a microcontroller that itself implements it in code.

    All this to say that this is a bit bigger than just “LED driver”.
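
    Just to illustrate how generic that “push a few bytes over I2C” idea is: the Synology driver itself is kernel code, but the same pattern can be sketched from userspace in a few lines of Python with the smbus2 library. The bus number, device address, register and payload below are made-up values, purely for illustration.

    ```python
    # Userspace illustration of "send a bunch of bytes via I2C" - the actual LED
    # driver is kernel code, but the idea is the same. Needs the smbus2 library
    # and an i2c-dev node; bus number, address, register and payload are made up.
    from smbus2 import SMBus

    I2C_BUS = 1          # e.g. /dev/i2c-1 (illustrative)
    DEVICE_ADDR = 0x20   # 7-bit address of the microcontroller (illustrative)
    LED_REGISTER = 0x01  # register the firmware treats as "LED state" (illustrative)

    def set_led_state(pattern: list[int]) -> None:
        """Write a block of bytes to the device; the firmware decides what they mean."""
        with SMBus(I2C_BUS) as bus:
            bus.write_i2c_block_data(DEVICE_ADDR, LED_REGISTER, pattern)

    if __name__ == "__main__":
        set_led_state([0x01, 0x00, 0x01])  # whatever byte sequence the firmware expects
    ```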


  • Yeah, that’s much better.

    Personally I detest not understanding what’s going on when following a guide to do something, so I really dislike recipe style.

    That said, I mentioned recipes because recipes meant to be blindly followed are the style of guide which has the lowest possible “required expertise level” of all.

    I suppose a playbook properly done (i.e. a dumbed-down step-by-step “do this” guide, but with side annotations which are clearly optional reading, explaining what’s going on for those who have the higher expertise levels needed to understand them) can have as low a “required expertise level” as a plain recipe whilst being a much nicer option, because people who know a bit more can get more from it than they could from just a dumbed-down recipe.

    That said, it has to be structured so that it’s really clear that those “explanation bits” are optional reading for the curious who have the know-how to understand them, otherwise it risks scaring off less skilled people who would actually be able to successfully do the task by blindly following the step-by-step recipe part of it.


  • For “all documentation” to “cater to all levels” it would have to explain to people “how do you use a keyboard” and everything from there upwards, because there are people at that level, hence that’s part of “all levels”.

    I mean, your own example of good documentation starts with an intro of “goals” saying:

    “Visual Studio (VS) does not (currently) provide a blank .NET Multi-platform Application User Interface (MAUI) template which is in C# only. In this post we shall cover how to modify your new MAUI solution to get rid of the XAML, as well as cover how to do in C# code the things which are currently done in XAML (such as binding). We shall also briefly touch on some of the advantages of doing this.”

    For 99% of people almost all of that is about as understandable as Greek (except for Greek people, for whom it’s about as understandable as Chinese).

    I mean, how many people out there in the whole World (non-IT people, as illustrated in the actual article linked by the OP) do you think know what the hell “Visual Studio”, “.NET”, “Multi-platform Application User Interface”, “template”, “C#”, “XAML” or “binding” (in this context) are?

    I mean, if IT knowledge were a scale of 1 to 10, with 10 the greatest, you’re basically saying it’s “catering to all levels” when an explanation of something that is level 8 knowledge (advanced programming) has a baseline required level of 7 (programming). Throw this at somebody who “knows how to use Excel”, which is maybe level 4, and they’ll be totally lost - much less somebody who only knows how to check their e-mail using a browser without even properly understanding the concept of “browser” (like my father), which is maybe level 2 (he can actually use a mouse and keyboard, otherwise I would’ve said level 1).

    I think you’re so far beyond the average person in your expertise in this domain that you don’t even begin to suspect just how little of our domain the average person knows compared to even a mere programmer.


  • The more advanced the level of knowledge on something, the more foundational knowledge somebody has to have to even begin to understand things at that level.

    It would be pretty insane to include, in a tutorial for something at a higher level of expertise, all the foundational knowledge needed to get to that level of expertise, just so that an absolute beginner can understand what’s going on.

    Imagine if you were trying to explain something mathematical that required using integrals and you started with “There is this symbol, ‘1’, which represents a single item, and if you bring in another single item, that is called addition - for which we use the symbol ‘+’ - and the count of entities when you have one single entity and ‘add’ another single entity is represented by the symbol ‘2’. There is also the concept of equality, which means two mathematical things represent the same and for which the symbol we use is ‘=’ - writing this with mathematical symbols, ‘1 + 1 = 2’”, and built the explanation up from there all the way to integrals before you could even start to explain what you wanted to explain in the first place.

    That said, people can put it in “recipe” format - a set of steps to be blindly followed without understanding - but even there some minimal foundational knowledge is required. Consider a cooking recipe: have you ever seen one that explains how to weigh ingredients, or what “boiling” or “baking” is?

    So even IT “recipes” specifically designed for those with a much lower level of expertise than the one required to actually understand what’s going on still have some foundational knowledge required to actually execute the steps of the recipe.

    Last but not least, I get the impression that most people who go to the trouble of writing about how to do something prefer to write explanations rather than recipes, because there’s some enjoyment in teaching something to others, which you get when you explain it but seldom from merely providing a list of steps for others to blindly follow without understanding.

    So, if one wants to do something way above the level of expertise one has, look for “recipe” style things rather than explanations - the foundational expertise required to execute recipes is way lower than the one required to understand explanations - and expect that there are fewer recipes out there than explanations. Further, if you don’t understand what’s in a recipe then your expertise is below even the base level of that recipe (for example, if somebody writes “enter so and so in the command prompt” and you have no fucking clue what a “command prompt” is, you don’t meet the base requirements to even blindly follow the recipe), so either seek recipes with an even lower base level or try and learn those base elements.

    Further, don’t even try to understand the recipe if your expertise level is well below what you’re trying to achieve: sorry, but you’re not going to get IT’s “Integrals” stuff if your expertise is at the level of understanding “multiplication”.


  • Kinda reminds me of this game one plays in Theatre, which is to Play The Status (you’re given a number between 1 and 10, with 1 having the lowest social status and 10 the highest, and you try and act as such a person).

    Alongside the whole chin-down to chin-up thing, people tend to move faster and more confidently the higher the status, but the reality is that whilst, going up the scale in a professional environment, the higher the status the busier and more rushed people indeed seem, the truly highest-status people (the 10s) don’t rush at all. As I put it back then (this was the UK): “the Queen doesn’t rush because, for everybody, the right time for the Queen to be somewhere is when she’s there, even if it’s not actually so, hence she doesn’t need to rush”.

    There was also some cartoon making the rounds many years ago about how people at a company looked depending on their social status, where you started with the unkempt, shabbily dressed homeless person that lived outside the building, and as you went up the professional scale people got progressively better dressed and into suits and such, and then all of a sudden there was a big switch, as the company owner at the top dressed as shabbily as the homeless person.





  • Well, the N100 does have a lot more breathing space in terms of computing power, so it’s maybe a better bet for something you want to use for a decade or more, and that remote control I linked to above does work fine, except for the power button (which will power your Linux box off but won’t power it back on).

    I actually tried an Android TV box (which is really just an SBC in the same range of processing power as the Pi) for this before going for the Mini PC, and it simply did not operate as smoothly.

    That Mini-PC has enough computing headroom (plus the right processor extensions) that I can be torrenting over OpenVPN on a 1 Gb/s connection whilst watching a video from a local file, and it’s not at all noticeable in the video playback.


  • Kodi install instructions are here

    I don’t use Docker, I use Lubuntu with normal packages. So, for example, Kodi is just installed from the Team Kodi PPA repository (which, granted, is outdated, but it works fine and I don’t need the latest and greatest) and set up to auto-start when X starts, so that on the TV it’s as if Kodi is the interface of that machine.

    qBittorrent is just the server-only package (qbittorrent-nox), which I control remotely via its web interface (there’s a rough sketch of scripting that web API at the end of this comment), and the rest is normal stuff like Samba.

    After the initial setup, the actual Linux management can be done remotely via ssh.

    That said, LibreELEC is a Linux distro which comes with Kodi built-in (it’s basically Kodi and just enough Linux to run it), so, assuming it’s possible to install more stuff on it, it might be the better option - I only found out about it when I already had my setup running, so I never got around to trying it. LibreELEC can even work on weaker hardware such as a Raspberry Pi or some of its clones.

    Also, you can get Kodi as a Flatpak, which works out of the box on various Linux distros, so if you need the latest and greatest Kodi plus a full-blown Linux distro for other stuff, you might choose your distro based on it supporting Flatpak and being reasonably lightweight. (I actually originally went for Lubuntu exactly because it uses a lightweight window manager and I expected that N100 mini-PC to need it; in practice the hardware can probably run much heavier stuff, but lighter stuff means the CPU load seldom goes up significantly, hence the fan seldom turns on, so the thing is quiet most of the time and you only hear the fan spinning up and then down again once in a while, even in the Summer.)

    As for Docker, there are a lot of instructions out there on how to install Kodi with Docker, but I never tried it.

    Also, you might want to get a remote like this, which is a wireless remote with a USB adapter - not because of the air-mouse thing (frankly, I never use it) but simply because the buttons are mapped to exactly the shortcuts that Kodi uses, so using it with Kodi on Linux is just like using a dedicated remote for a TV media box. In fact, all those things are keyboard shortcuts (that remote just sends keypresses to the PC when you press a button), and the keyboard shortcuts for media players seem to be a standard.
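
    As mentioned above, here’s a rough sketch of what controlling qbittorrent-nox remotely looks like when scripted against its Web API (v2) - the same interface the web UI uses. The host, port, credentials and magnet link below are placeholders, not values from my setup.

    ```python
    # Rough sketch of driving qbittorrent-nox through its Web API (v2), the same
    # interface its web UI uses. Host, port, credentials and the magnet link are
    # placeholders - adjust to your own setup.
    import requests

    BASE = "http://192.168.1.50:8080"  # placeholder address of the qbittorrent-nox box

    session = requests.Session()

    # Log in; qBittorrent sets an auth cookie on the session for subsequent calls.
    resp = session.post(f"{BASE}/api/v2/auth/login",
                        data={"username": "admin", "password": "changeme"})
    resp.raise_for_status()

    # List current torrents with their download progress.
    for t in session.get(f"{BASE}/api/v2/torrents/info").json():
        print(f"{t['name']}: {t['progress'] * 100:.1f}% complete")

    # Add a new torrent from a magnet link (placeholder).
    session.post(f"{BASE}/api/v2/torrents/add",
                 data={"urls": "magnet:?xt=urn:btih:..."})
    ```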


  • It really depends on what you’re doing with it and on what old PCs you have available.

    I have an N100 Mini-PC at home in my living room connected to my TV which is both a home server and a TV-Box using Kodi (I even have a remote for it).

    Having modern image and video decoding in hardware is pretty useful when I’m using it as a TV box (there is zero stutter), whilst the rest of the time the thing mostly sits doing some CPU-light server tasks (mainly torrenting and SMB server stuff).

    Also, it’s a small box that fits fine on my TV stand without standing out and runs silent almost all of the time.

    Further, I don’t have any low-power old PCs around - the best are some chunky old notebooks, the rest are old gaming PCs which eat more power at idle than the Mini PC does at full load - and even the notebooks aren’t all that low-power.

    Mind you, for many years I used an old Asus EEE PC (a very small notebook running Linux) as a home file server (with external HDs) and had a separate, dedicated-hardware TV media server box playing files from it, but eventually that PC stopped working and I found out I could just use my router as a file server.

    Last but not least, judging by how long I kept using my TV media server boxes (I had 2 different ones over almost 2 decades, and as dedicated hardware they could not easily be upgraded when new video compression standards came out), 10+ years is definitely my time-frame for using that Mini-PC.

    All this to say that you should consider using old hardware, especially if you have some around and it’s task-appropriate (as I did before, using an old Asus EEE PC as a home file server), but also take into account what you’re going to do with it and consider whether new hardware won’t be better over the timespan you will likely be using it, and whether being able to get a more task-appropriate form factor (like how having a little box-sized Mini PC lets me keep it in my living room on a TV stand next to my TV and my fiber router) is worth it.

    In summary, ponder a bit about what you intend to do with the hardware before you decide what to get; don’t be afraid of using stuff you already have, and also don’t be afraid to get new stuff if it’s actually justified by hard-nosed reasons rather than merely some variant of the “new stuff smell” psychological effect of buying new.


  • The skateboard would literally be a plank on top of some wheel axles pinched from a shopping cart, the scooter would just be a flimsy pole stuck through a hole in the “skateboard”, and the bicycle would be 2 such poles, one with a small piece of wood as a seat, with the front wheel axle moved and soldered to the front pole so that one rotates with the other.

    All of them function only in the technical sense: they’re awkward to use, don’t last long under continuous use and look like shit, because they were not built with the right techniques for resilience and have none of the finishing touches needed for ease of use and attractiveness.




  • The Cult Of Agile, with its Holy Practices that Must Be Done without actual logical and well-thought-out reasons (instead, the reasons are things like “It’s What It Says In This Agile Holy Book” and/or “That’s What I Saw Other Agile People Do”), is not at all the same as the class of Software Development Processes called Agile.

    Then again, Software Development Processes are the kind of thing you tackle at the level of Technical Architect, and since there aren’t really that many genuine Technical Architects around (with the actual chops, rather than merely 5-10 years of experience in a single kind of development environment and a title obtained from a company that gives fancy titles as a “promotion” instead of a proper salary raise), Agile is mostly just blindly followed without true understanding of what it does, what it doesn’t do, how it does it or why it cannot do it, and thus where and how it actually adds value and where it doesn’t.


  • Whilst I agree that it’s nice to get people who do get some enjoyment from the work, I think it’s unrealistic to expect to actually find it in senior professionals: maybe you’ll be lucky, but don’t count on it. Such people need to have started with a natural knack for that domain, to not have had all their enjoyment of that kind of activity totally crushed over the years by the industry (I’m afraid that, over time, having to do something again and again because it has to be done rather than because one wants to do it crushes the fun out of any task, even for the most enthusiastic person), and to not have accepted - or even demanded - a promotion to management as they became more senior because they were so good on the technical side (where they’ll most likely suck, but that’s no consolation for you as they won’t be available anymore).

    It is simply very unlikely that you’ll find experienced people combining all those things.

    Further, even if you do manage to find such people, don’t expect that enjoyment of such tasks to be enough to drive an employee most of the time, since most of the work we have to do is generally something that needs to be done rather than something which is enjoyable to do.

    If on the other hand you go for junior people who still retain their enthusiasm, you’re going to be “paying” for them making all the mistakes in the book, and then some, as they learn; plus, if you give them the really advanced complex stuff (say, designing a system to fit into existing business processes), they’re going to fuck it up beyond all recognition.

    So statistically, going for enthusiasm and experience together is like hoping to win the lottery.

    If you do need to hire people with actual experience, it’s more realistic to aim for professionalism as their driver for doing the work well and on time, rather than enthusiasm.

    This is why, IMHO, asking people how they feel about the work is a bit silly, unless you have yourself a truckload of recent graduates looking for their first job and you’re trying to separate the gifted from the ones who went for it for the money (and there you’re competing with the likes of Google and other companies with more brand recognition, who will far more easily attract said gifted naive young things than the overwhelming majority of companies out there, so that too is probably not a realistic expectation).

    I suppose Lemmy is frequented by older Tech professionals, hence the “you must be joking!” reaction to your idea that asking people how they feel about the work is in any way, shape or form a viable way of finding good professionals.