

Succinctly and well put.
Well now. My primary exposure to Go would be using it to take first place in my company’s ‘Advent of Code’ several years ago, in order to see what it was like, after which I’ve been pleased never to have to use it again. Some of our teams have used it to provide microservices - REST APIs that do database queries, some lightweight logic, and conversion to and from JSON - and my experience of working with that is that they’ve inexplicably managed to scatter all the logic among dozens of files, for what might be done with 80 lines of Python. I suspect the problem in that case is the developers, though.
It has some good aspects - I like how easy it is to do a static build that can be deployed in a container.
The actual language itself I find fairly abominable. The lack of exceptions means that error handling is smeared through everything, and isn’t necessarily any better than in other modern languages. The lack of overloads means that you’ll have multiple definitions of e.g. Math.min cluttering things up. I don’t think the container classes are particularly good. And pointers seem to have been implemented solely to let you have null pointer exceptions; they’re a pointless wart.
If what you’re wanting to code is the kind of thing that Google do, in the exact same way that Google do it, and you have a team of hipsters who all know how it works, then it may be a fine choice. Otherwise I would probably recommend using something else.
I feel that Python is a bit of a ‘Microsoft Word’ of languages. Your own scripts are obviously completely fine, using a sensible and pragmatic selection of the language features in a robust fashion, but everyone else’s are absurd collections of hacks that fall to pieces at the first modification.
To an extent, ‘other people’s C++ / Bash scripts’ have the same problem. I’m usually okay with ‘other people’s Java’, which to me is one of the big selling points of the language - the slight wordiness and lack of ‘really stupid shit’ makes collaboration easier.
Now, a Python script that’s more than about two pages long? That makes me question its utility. The ‘duck typing’ everywhere makes any code that you can’t ‘keep in your head’ very difficult to reason about.
Frezik has a good answer for SQL.
In theory, Ansible should be used for creating ‘playbooks’ listing the packages and configuration files which are present on a server or collection of servers, and then ‘playing the playbook’ arranges it so that those servers exist and are configured as you specified. You shouldn’t really care how that is achieved; it is declarative.
However, in practice it has input, output, loops, conditional branching, and the ability to execute subtasks recursively. (In fact, it can be quite difficult to stop people from using those features, since ‘declarative’ doesn’t necessarily come easily to everyone, and it makes for very messy config.) I think those are all the features required for Turing equivalence?
Being able to deploy a whole fleet of servers in a very straightforward way comes as close to the ‘infinite memory’ requirement as any programming language can get, although you do need basically infinite money to do that on a cloud service.
No love for the ‘declarative’ programming paradigm? You can actually do some useful work with SQL or Ansible…
A binary tree is one way of organising data, usually for sorting or searching. Each node can have a left child, a right child, both, or neither.
        A
       / \
      B   C
     / \
    D   E
“Inverting the tree” means swapping the children of each node, so that the order in which the nodes are visited is reversed. Whether you want to copy the tree or invert it in place changes the algorithm slightly. C++ provides iterators too, so an ‘order reversed’ iterator can be provided efficiently as well.
You’re going to have to visit every node and do at least one swap per node, and an efficient algorithm won’t do much more than that. Being unable to do it suggests that the student programmer doesn’t understand stacks or recursion yet, so they’ve more to learn.
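For the in-place version, a minimal sketch in C++ (the Node type here is a made-up bare-bones one, purely for illustration):

    #include <utility>  // std::swap

    // Hypothetical bare-bones node type.
    struct Node {
        int value;
        Node* left = nullptr;
        Node* right = nullptr;
    };

    // Invert in place: swap the children of every node.
    // One visit and one swap per node, so O(n) overall.
    void invert(Node* node) {
        if (node == nullptr) return;
        std::swap(node->left, node->right);
        invert(node->left);
        invert(node->right);
    }

The copying version has the same shape: allocate a new node and recurse with the children exchanged. The recursion is also exactly the ‘stacks or recursion’ bit the student needs to have understood.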
Programming a robust global date-time system and having transparent conversion between metric and ‘imperial/traditional’ units is just a warm-up to show that you can work with the truly demented currency system. Make sure everything is rounded off to the nearest whole ha’penny.
You can only store rational numbers as a ratio of two numbers, and there are infinitely many more irrational numbers than rational ones - as soon as you took (almost any) root or did (most) trigonometry, your exact ratio would count for nothing. Hardcore maths libraries get around this by keeping the “value in progress” as an expression for as long as possible, but working with expressions is exceptionally slow by computer standards: it takes quite a long time to keep them in their simplest form whenever you manipulate them.
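To make the ratio idea concrete, a toy sketch in C++ (assuming C++17 for std::gcd): exact for ordinary arithmetic, then immediately stuck the moment a root appears.

    #include <numeric>  // std::gcd (C++17)

    // Toy exact rational: numerator/denominator, kept in lowest terms.
    // (Sign normalisation and overflow are ignored; this is a sketch.)
    struct Rational {
        long long num, den;
        Rational(long long n, long long d) : num(n), den(d) {
            long long g = std::gcd(num, den);
            if (g != 0) { num /= g; den /= g; }
        }
    };

    Rational operator*(Rational a, Rational b) {
        // Exact: no rounding anywhere.
        return Rational(a.num * b.num, a.den * b.den);
    }

    // (1/3) * (3/7) is exactly 1/7. But there is no Rational sqrt(2):
    // no pair of integers num/den satisfies (num/den)^2 == 2, which is
    // why the exactness evaporates at the first root or trig call.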
To be fair, compiling C code with a C++ compiler gets you all the warnings from C++'s strong-typing rules. That’s a big bonus for me, even if it only highlights the areas of your C that are likely to become a maintenance hazard - all those void* casts want some documentation about what assumptions make them safe. Clang will compile variable-length arrays in C++, so you might want to switch off that warning, since you presumably intended them. It just means that you can’t use C’s designated initialisers, since C++ uses constructors for that and (C++20’s restricted form aside) there’s no exact C equivalent. I’d be happy describing code that compiles in either situation as “C+”.
Also stops anyone using auto, constexpr or nullptr as variable names, which will help if you want to copy-paste some well-tested code into a different project later.
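The void* point in one line, for anyone who hasn’t been bitten; this is a made-up snippet, not from any real project, and it compiles as both C and C++ only because of the cast:

    #include <stdlib.h>

    int main(void) {
        /* Plain C silently accepts the implicit conversion:
         *     int *p = malloc(16 * sizeof *p);
         * C++ rejects that line: no implicit void* to int* conversion. */

        /* The "C+" spelling: the cast documents the conversion, and the
         * same file now compiles under either compiler. */
        int *q = (int *)malloc(16 * sizeof *q);
        free(q);
        return 0;
    }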
Man alive, don’t get the managers working with audio. “Doubling the stream” might work if you’re using a signed audio format rather than an unsigned one, and the format is in the same endianness as the host computer uses, neither of which is guaranteed when working with audio.
But of course, the ear perceives loudness logarithmically (the decibel scale), so for something to be perceived as “twice as loud” it generally needs around a tenfold increase in power, roughly +10 dB. Very high and very low frequencies need more still, since we’re less sensitive to them and don’t perceive increases so well.
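A sketch of both points, assuming signed 16-bit PCM already in host byte order; the +10 dB figure is the usual psychoacoustic rule of thumb, not a precise constant:

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    // The manager's "double the stream", done safely on *signed* 16-bit
    // samples, with clipping. On an unsigned format, silence sits at
    // 32768 rather than 0, so naive doubling shifts the whole waveform
    // upwards instead of making it louder.
    int16_t double_sample(int16_t s) {
        int v = s * 2;  // promote before doubling so it can't overflow
        return static_cast<int16_t>(std::clamp(v, -32768, 32767));
    }

    // And what the ear actually wants: "twice as loud" is roughly +10 dB,
    // a gain of 10^(10/20), about 3.16x, not 2x.
    float gain_for_db(float db) { return std::pow(10.0f, db / 20.0f); }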
Finish the transition from X to Wayland?
“Register bit twiddling.” Setting all the modes that all their various cards can operate in, with the associated code for sending the bit updates over the connection bus. Tedious stuff that’s very prone to copy-paste errors if written by hand.
At some point you have to take AMD’s word for it that these codes = this functionality, but if the right graphics come out then it can’t be too wrong.
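For anyone who hasn’t seen this kind of code, a sketch of the shape of it; every name, offset and field below is invented, which is rather the point of having AMD publish the real ones:

    #include <cstdint>

    // Hypothetical register map: names, offsets and fields are made up.
    constexpr uint32_t DISP_CTRL = 0x0040;      // mode register offset
    constexpr uint32_t MODE_MASK = 0x00000007;  // mode field

    volatile uint32_t* mmio;  // mapped device registers, set up elsewhere

    // The classic read-modify-write: clear the field, set the new value.
    // Now imagine hundreds of these, one per register, typed by hand.
    void set_display_mode(uint32_t mode) {
        uint32_t v = mmio[DISP_CTRL / 4];
        v &= ~MODE_MASK;
        v |= (mode & MODE_MASK);
        mmio[DISP_CTRL / 4] = v;
    }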
Because if you disable browser autocomplete, what’s obviously going to happen is that everyone will have a text file open with every single one of their passwords in it, so that they can copy-paste them in. So prevent that too. But if you prevent that, everyone will just choose terrible, weak passwords instead. Something like September2025! probably meets the ‘complexity’ requirement…
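A sketch of the sort of ‘complexity’ rule in question (a made-up policy: eight characters, one upper, one lower, one digit, one special). September2025! sails through; a far stronger passphrase fails:

    #include <iostream>
    #include <regex>
    #include <string>

    // A typical checkbox "complexity" policy, written as a regex.
    bool is_complex(const std::string& pw) {
        static const std::regex rule(
            R"((?=.*[A-Z])(?=.*[a-z])(?=.*[0-9])(?=.*[^A-Za-z0-9]).{8,})");
        return std::regex_match(pw, rule);
    }

    int main() {
        std::cout << is_complex("September2025!") << '\n';               // 1
        std::cout << is_complex("correct horse battery staple") << '\n'; // 0
    }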
A bit like when we renamed all the master/slave terminology using different phrasing that’s frankly more useful a lot of the time, I think it’s about time we got rid of this “child” task nonsense. I suggest “subtask”. Then we can reword these books into something that no-one can make stupid jokes about any more, like “how to keep your subs in line” and “how to punish your subs when they’ve misbehaved”.
Well now. When we’ve been enforcing password requirements at work, we’ve had to enforce a bizarre combination of “you must have a certain level of complexity” but also “you must be slightly vague about what the requirements actually are, because otherwise it lets an attacker tune a dictionary attack against you”. Which just strikes me as a way to piss off our users, but the security team say it’s a requirement, therefore it’s a requirement, no arguing.
“One” special character is crazy; I’d have guessed that was a catch-all for the other strange password requirements:
We’ve had customers’ own security teams asking us if we can enforce “no right click” / “no autocomplete” to stop their in-house users doing such things; I’ve been trying to push back on that as a security misfeature, but you can’t question the cult thinking.
As an example of a language that many people are familiar with, which is likely to be in long-term use where maintainability is most important, and which can almost read like pseudocode anyway, sure - probably the best ‘real language’ choice.
You can write an unmaintainable fucking mess in any language. Rust won’t save you from cryptic variable naming, copy-paste code, a complete absence of design patterns, dreadful algorithms, large classes of security issues, unfathomable UX, or a hundred other things. “Clean code” is (mostly) a separate issue from choice of language.
Don’t get me wrong - I don’t like this book. It manages to be long-winded and facile at the same time. A lot of people seem to read it and take exactly the wrong lessons about maintainability from it. I think it would mostly benefit from being written in pseudocode - concentrating on any particular language might distract from the message. But having a few examples of what a shitfest looks like in a few specific languages might help.
My old job had a lot of embedded programming: hard real-time Z80 work, on processors like the Z800 and eZ80, to control industrial devices. Actually quite a pleasant family to do bit-twiddling in, and it’s great to be able to step through in the debugger and see that what the CPU is running is literally your source code, opcode by opcode.
Back when computers were very simple things - I’m thinking of a ZX Spectrum, where you can read directly from the input ports and write directly into the framebuffer, no OS in your way, just code - assembly made a lot of sense, and was even fun. On modern computers, it is not so fun:
- x64 is just a fucking mess.
- You cannot just read and write what you want; the kernel won’t let you. So you’re going to be spending a lot of your time calling system routines.
- 99% of your code will just be arranging data to suit the calling convention of your OS, and doing pointless busywork like stack pointer alignment. Writing some macros to do it for you makes your code look like C. Might as well just use C, in that case.
Writing assembly makes some sense sometimes - it’s required for embedded work, you might be writing something very security-conscious where timing is essential, or you might be lining up data for vectorisation where higher-level languages don’t have the constructs to get it right - but these are very small bits of code. You would be mad to consider ‘making the whole apple pie’ in assembly.
Got this installed on all my work machines - if you’re wanting to stick a screenshot on Jira or Slack with a couple of arrows, wavy lines, or a bit blurred out then it’s dead quick and has just the functionality that you need. Yes, it’s simple and lacks a lot of ‘power tools’. Sometimes that’s just what you need, tho.
Enough of that crazy talk - plainly WheeledDeviceServiceFactoryBeanImpl is where the dependency injection annotations are placed. If you can decide what the code does without stepping through it with a debugger, and any backtrace doesn’t have at least two hundred lines of Spring Boot in it, then plainly it isn’t enterprise enough.

Fair enough, though. You can write stupid overly-abstract shit in any language, but Java does encourage it.