“Assume it’s a map, treat it like a map, and then catch the type error if it’s not.” Paraphrased from actual advice by Guido on how you should write Python. Python isn’t a bad language but the philosophy that comes along with it is so fucked.
What I mean is, from the perspective of performance they are very different. In a language like C where (p)threads are kernel threads, creating a new thread is only marginally less expensive than creating a new process (in Linux, not sure about Windows). In comparison, creating a new ‘user thread’ in Go is exceedingly cheap. Creating tens of thousands of goroutines is feasible; creating tens of thousands of kernel threads is a problem.
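To make that concrete, here’s a minimal sketch (the 50,000 count is just an illustrative number I picked; actual limits depend on your hardware): spinning up tens of thousands of goroutines is routine and finishes almost instantly, where the same number of kernel threads would be a real problem.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	const n = 50_000 // tens of thousands of goroutines is routine in Go

	var wg sync.WaitGroup
	results := make(chan int, n)

	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			results <- i * i // trivial work; each goroutine starts with only a few KB of stack
		}(i)
	}

	wg.Wait()
	close(results)

	sum := 0
	for r := range results {
		sum += r
	}
	fmt.Println("goroutines run:", n, "sum:", sum)
}
```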
Also, it still uses kernel threads, just not for every single goroutine.
This touches on the other major difference. There is essentially no connection between the number of goroutines a program spawns and the number of kernel threads it spawns. A program using kernel threads is relying on the kernel’s scheduler, which adds a lot of complexity and non-determinism. But a Go program uses roughly the same number of kernel threads (assuming the same hardware and you don’t mess with GOMAXPROCS) regardless of the number of goroutines it uses, and the goroutines are scheduled by the Go runtime (mostly cooperatively, with asynchronous preemption added in Go 1.14) instead of preemptively scheduled by the kernel.
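A rough sketch of what that looks like in practice (the 10,000 figure is just illustrative): the runtime reports thousands of goroutines while GOMAXPROCS, which bounds how many threads run Go code at once, stays at the logical CPU count.

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	// GOMAXPROCS bounds how many OS threads execute Go code simultaneously;
	// by default it equals the number of logical CPUs.
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))

	// Park 10,000 goroutines; they are multiplexed onto that small thread pool.
	for i := 0; i < 10_000; i++ {
		go func() { time.Sleep(time.Minute) }()
	}

	time.Sleep(100 * time.Millisecond) // give them a moment to start
	fmt.Println("goroutines:", runtime.NumGoroutine())
}
```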
Key point: they’re not threads, at least not in the traditional sense. That makes a huge difference under the hood.
Really? Huh, TIL. I guess I’ve just never run into a situation where that was the bottleneck.
Definitely not a guarantee, bad devs will still write bad code (and junior devs might want to let their seniors handle concurrency).
It’s safe to assume that any non-trivial program written in Go is multithreaded
You can also just tell GitHub to not do that.
Why are you memeing about me? I don’t appreciate being made fun of.
If your job is to make websites and you make sites that don’t work on a browser that has over 100 million users you’re not doing your job.
If you actually have deep knowledge in a specialty, then you describe yourself as that specialty. ‘Full stack engineer’ conveys that you don’t have a specialty/are a master of nothing/your skills are _-shaped.
Experience != expertise or skill. I have never met someone who was actually good at both. Maybe if your backend is just some SQL queries. I am a backend engineer and I’m adequate at front end but I’d never hire someone whose skills were merely adequate unless I thought they had the potential to reach ‘good’.
Scripting languages being languages that are traditionally source distributed.
So how does the distribution mechanism actually matter beyond that?
They tend to be much easier to write
I’m assuming you are not saying “real” languages should be hard to write…
run slower
Objective-C and Go run slower than C and they’re all compiled languages. Sure, a pure interpreter will be slower than compiled code, but modern ‘interpreted’ languages aren’t simply interpreted (JIT compilation, etc.).
often but not always dynamically typed, and operate at a higher level
There are dynamically typed compiled languages, and high level compiled languages.
It’s not a demeaning separation, just a useful categorization IMO.
Calling one class of languages “real” and another class something else is inherently demeaning. I wouldn’t have cared enough to type this if you used “compiled vs scripting” instead of “real vs scripting”. Though I disagree with using “scripting” at all to describe a language since that’s an assertion of how you use the language, not of the language itself. “Interpreted” on the other hand is a descriptor of the language itself.
As someone who loves C, there are lots of languages that seem too limiting and high level; doesn’t mean they aren’t useful tho.
I personally can’t stand Java because the language designers decided to remove ‘dangerous’ features like pointers and unsigned integers because apparently programmers are children who are incapable of handling the risk. On the other hand I love Go. It’s high level enough to be enjoyable and easy to write, but if you want to get into the weeds you can.
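To illustrate what I mean by getting into the weeds (a minimal sketch, values chosen arbitrarily): Go gives you unsigned integers with well-defined wraparound, and the unsafe package when you genuinely need raw memory access.

```go
package main

import (
	"fmt"
	"unsafe"
)

func main() {
	// Unsigned integers with well-defined wraparound, no hand-wringing required.
	var x uint32 = 0xFFFFFFFF
	x++ // wraps to 0
	fmt.Println("wrapped:", x)

	// And if you really need it, unsafe lets you reinterpret memory directly.
	buf := [4]byte{0x01, 0x02, 0x03, 0x04}
	word := *(*uint32)(unsafe.Pointer(&buf[0])) // read the bytes as a uint32 (endianness-dependent)
	fmt.Printf("reinterpreted: 0x%08X\n", word)
}
```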
That line is blurring to the point where it barely exists any more. Compiled languages are becoming increasingly dynamic (e.g. JIT compilation, code generation at runtime) and interpreted languages are getting compiled. JavaScript is a great example: V8 JIT-compiles hot functions into optimized machine code at runtime.
IMO the only definition of “real” programming language that makes any sense is a (Turing complete) language you can realistically build production systems with. Anything else is pointlessly pedantic or gatekeeping.
I’d rather use a language that doesn’t treat me like an incompetent child, removing unsigned ints because “they’re a source of bugs”.
Or use a statically typed language that’s actually modern instead of C
Why? In my experience using a real debugger is always the superior choice. The only time I don’t is when I can’t.
Huh? Main file? Do you mean main package? A module can contain an arbitrary number of main packages but I don’t see how that has anything to do with this post. Also are you saying modules are equivalent to classes? That may be the strangest take I’ve ever heard about Go.
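For reference, a hypothetical module layout (paths made up purely for illustration) with more than one main package looks like this:

```
mymodule/
├── go.mod            // module github.com/example/mymodule
├── cmd/
│   ├── server/
│   │   └── main.go   // package main → one binary
│   └── cli/
│       └── main.go   // package main → another binary
└── internal/
    └── store/
        └── store.go  // package store, shared library code
```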
How so?
The person who uses the shitty tool is a moron. The person who makes the shitty tool is an asshole. At least in this case where the shitty tool is actively promoting shitty PRs.
Assembly languages are always architecture-specific. That’s kind of their defining feature. Assembly is readable machine code.