Also worth noting that some of the workflows that were available in languages like Common Lisp or Smalltalk back in the 80s are superior to what most languages offer today.
In what ways? I don’t have any experience with those so I’m curious.
I get what this is saying but on the other hand…
Programmers now:
💪 Can spin up a minimum viable product in a day
💪 Write web applications that handle millions or even billions of requests per second
💪 Remote code execution and memory-related vulnerabilities are rarer than ever
💪 Can send data across the world with sub 1 second latency
💪 The same PCIe interface is now 32x faster (x16 PCIe 1.0 was 8 GB/s, while PCIe 6.0 is 256 GB/s)
💪 The same wireless bands now have more throughput due to better radio protocols and signal processing
💪 Write applications that scale across the 100+ cores of modern top-of-the-line processors
💪 JIT and garbage collection techniques have improved to the point where they have a nearly imperceptible performance impact in the majority of use cases
💪 Most bugs are caught by static analysis and testing frameworks before release
💪 Codebases are worked on by thousands of people at the same time
💪 Functional programming, which is arguably far less bug prone, is rapidly gaining traction as a paradigm
💪 Far more emphasis on immutability, to the point where many languages make it the default (see the quick Rust sketch after this list)
💪 Virtual machines can be seamlessly transferred from one computer to another while they’re running
💪 Modern applications can be used by people anywhere in the world regardless of language, even with things that were very difficult to do in the past, like mirroring the entire interface so that an application written for left-to-right languages can support right-to-left ones
💪 Accessibility features allow people who are blind, paralyzed, or have other disabilities to use computers just as well as anyone else
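To make the immutability point concrete, here’s a minimal Rust sketch (the names are just for illustration): a plain `let` binding can’t be reassigned, and you have to opt into mutation with `mut`.

```rust
fn main() {
    let total = 10;      // bindings are immutable by default
    // total += 5;       // would not compile: cannot assign twice to an immutable variable
    let mut counter = 0; // mutation has to be requested explicitly
    counter += 1;
    println!("total = {total}, counter = {counter}");
}
```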
Just wanted to provide some counterexamples, because I’m not a huge fan of the “programmers are worse than they were back in the 80s” rhetoric. While programmers today are more reliant on automated tools, I really disagree that programmers in general are less capable than they were in the past.
they can keep the system signed, secure and intact
If you cannot set up your own signed, secure and intact system outside of the one Apple controls, none of those matter. If they truly gave a shit about security they would do what Linux does and allow anyone to self-host their entire software infrastructure, including package repositories, or at the very least do what Android does and allow installing other app stores (including ones you can self-host). Signed, secure and intact are worthless if you are forced to trust someone else’s app store and signatures.
Of course the real reason they do this is to prevent people from (1) running pirated versions of paid apps, and (2) bypassing their in-app purchase commission. DRM to ensure they get their cut. Signed, secure and intact have not a damn thing to do with it.
If you have no qualms about pretending to be a sexual assault survivor, trauma counselor, and/or a Black man opposed to Black Lives Matter, you meet the dictionary definition of a sociopath. And I don’t give a shit about “oh it’s AI doing it, not the people.” I’m sorry, is the AI sentient? No. The humans are, and they chose to run the AI. I don’t even care if those personas were entirely the AI’s creation; if you didn’t stop the AI as soon as you realized it was going there, you’re still a horrible person. Same goes if you never even checked what the AI was doing while running this “““research”””.
So it segfaults after one whole second instead of immediately?
Wearing a tech company backpack in public, especially in the rougher parts of town, is like wearing a sign that says “scrawny nerd probably with expensive electronics, come rob me.”
I personally like programming too much to ever “vibe code,” as they say. Solving problems and organising things is why I like programming in the first place.
Any legal experts want to weigh in on whether this is even allowed? CC0 by definition imposes no conditions, but the GPL very explicitly attaches conditions to how the code can be used and redistributed, and extends those conditions to derivatives. If it was their own code but was officially submitted to the Linux repo, who owns it?
There’s a saying in Mandarin that translates to something like: Being in different professions is like being on opposite sides of a mountain. It basically means you can never fully understand a given profession unless you’re actually doing it.
LLMs can’t even stay on topic when specifically being asked to solve one problem.
This happens to me all the damn time:
I paste a class that references some other classes, which I have already tested and know to be working; my problem is in a specific method that doesn’t directly call any of the other classes. I tell the LLM specifically which method is not working, and I also tell it that I have tested all the other methods and they work as intended (complete with comments documenting what they’re supposed to do). I then ask the LLM to focus only on the method I have specified, and it still goes on about “have you implemented all the other classes this class references? Here’s my shitty implementation of those classes instead.”
So then I paste all the classes that the one I’m asking about depends on, reiterate that all of them have been tested and are working, tell the LLM which method has the problem again, and it still decides that my problem must be in the other classes and starts “fixing” them which 9 out of 10 times is just rearranging the code that I already wrote and fucking up the organisation that I had designed.
It’s somewhat useful for searching for well-known example code using natural language, e.g. “How do I open a network socket using Rust,” or if your problem is really simple. Maybe it’s just the specific LLM I use, but in my experience it can’t actually problem-solve better than humans.
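For what it’s worth, the kind of “well-known example code” I mean for that socket question is something like this minimal sketch using Rust’s standard library (the address and request are just placeholders):

```rust
use std::io::{Read, Write};
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    // Open a TCP connection (placeholder address).
    let mut stream = TcpStream::connect("example.com:80")?;

    // Send a minimal HTTP/1.0 request over the socket...
    stream.write_all(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")?;

    // ...and read the raw reply back as bytes.
    let mut reply = Vec::new();
    stream.read_to_end(&mut reply)?;
    println!("received {} bytes", reply.len());
    Ok(())
}
```

That level of boilerplate it handles fine; it’s the actual problem solving where it falls over.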
Assembly, LLVM IR, etc.
Microsoft really creating the problem and then forcing you to use their solution.
The name is an insult to the people who write software for vibrators.
“Your responses will be used to train the AI. By participating in the interview you give us an exclusive, worldwide, non-revocable license to your voice, likeness, and anything you say or write during the interview.”
BREAKING NEWS: Alzheimer’s is just an SQL injection in the brain being exploited!
LLM scraping is a parasite on the internet, in the actual ecological sense of the word: it places a burden on other unwitting organisms (in this case, the computer systems hosting the content), making it harder for the host to survive or carry out its own necessary processes, solely for the parasite’s own benefit, while giving nothing to the host in return.
I know there’s an ongoing debate (both in the courts and on social media) about whether AI companies should have to pay royalties for their training data under copyright law. But I think they should at the very least be paying for the infrastructure they use while collecting that data, even freely available data: it costs the organisation hosting it real money and resources to be scraped, orders of magnitude more than it costs to serve the same data to individual people.
The case can certainly be made that copying is not theft, but copying is by no means free either, especially when done at the scales LLMs do.
Why Linux Mint specifically, why not just Linux? Or if they want to pick a specific distro, why not Trisquel or another FSF-endorsed distro?
It is in everyone’s interest to gradually adjust to the notion that technology can now perform tasks once thought to require years of specialized education and experience.
The years of specialized education and experience aren’t for writing code in and of itself. Anyone with an internet connection can learn to do that fairly quickly. What takes years to perfect is writing reliable, optimized, secure code; communicating and working efficiently with others; writing code that can be maintained by others long after you leave; knowing the theory behind why code written one way works better than code written another way; and knowing the qualitative and quantitative measures to even be able to assess whether one piece of code is “better” than another. Source: I self-learned programming, started building stuff on my own, and then went through an actual computer science program. You miss so much nuance and underlying theory when you self-learn, which directly translates into bad code that’s a nightmare to maintain.
Finally, the most important thing you can do with a person who has years of specialized education and experience is actually have a conversation with them about their code: ask them to explain in detail how it works and the process they used to write it, then ask followup questions and request further clarification. Trying to get AI to explain itself is a complete shitshow, and while humans do have a propensity to make shit up to cover their own/their coworkers’ asses, AI does that even when it makes no sense not to tell the truth, because it doesn’t really know what “the truth” is or why other people would want it.
Will AI eventually catch up? Almost certainly, but we’re nowhere close to that right now. Currently it’s less like an actual professional developer and more like someone who knows just enough to copy-paste snippets from Stack Overflow and hack them together into a program that manages to compile.
I think the biggest takeaway with AI programming is not that it can suddenly do just as well as someone with years of specialized education and experience, but that we’re going to get a lot more shitty software that looks professional on the surface but is a dumpster fire inside.
Interesting! Thank you!