- cross-posted to:
- programmerhumor@lemmy.ml
Tinfoil hat time:
That Ace account is just an alt of the original guy and rage baiting to give his posting more reach.
every time i see a twitter screenshot i just know i’m looking at the dumbest people imaginable
AI is fucking so useless when it comes to programming right now.
They can’t even fucking do math. Go make an AI do math right now, go see how it goes lol. Make it a real-world problem and give it lots of variables.
I asked ChatGPT to do a simple addition problem a while back and it gave me the wrong answer.
My favourite AI code test is asking for code to point a heliostat mirror at (latitude, longitude) at a target at (latitude, longitude, elevation)
After a few iterations to get the basics in place, “also create the function to select the mirror angle”
A basic fact that isn’t often described is that to reflect a ray you aim the mirror halfway between the source and the target. The AI comes up with the strangest non-working ways of aiming the mirror.
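For reference, the “halfway” rule really is a one-liner once you have direction vectors. A rough sketch of what I expect (solar-position math omitted; both vectors assumed to already be in the same local frame):

```python
import numpy as np

def mirror_normal(to_sun, to_target):
    """Unit normal for a flat mirror that bounces sunlight onto a target.

    The normal bisects the angle between the mirror-to-sun and
    mirror-to-target directions, i.e. you aim the mirror halfway
    between the source and the target.
    """
    s = np.asarray(to_sun, dtype=float)
    t = np.asarray(to_target, dtype=float)
    s /= np.linalg.norm(s)
    t /= np.linalg.norm(t)
    n = s + t
    return n / np.linalg.norm(n)

# Sun straight overhead, target due north on the horizon:
# the mirror tilts 45 degrees toward north.
print(mirror_normal([0, 0, 1], [0, 1, 0]))  # ~[0, 0.707, 0.707]
```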
Working with AI feels a lot like working with a newbie
I’ve read this so many times in the past few days that I’m just going to write this. As I see it, using what we have available right now (which isn’t “AI” in any meaningful way) to do simple math is weird, since we already have calculators for that.
Meanwhile, me, who’s at best absolute shit at python, just made a calculator with a rudimentary UI in about 45 minutes using nothing but an AI and ctrl+c/v and some sorting out the bits, as it were.
So far the math has checked out on that calculator, too.
Me, a person with no coding skills, had the ai write code and I can’t see if there’s anything wrong with the results. So the results must be good.
Expand that into 10k line custom programs and you’ll begin having nightmarish issues.
That might be the underlying problem. Software project management around small projects is easy. Anything that has a basic text editor and a Python interpreter will do. We have all these fancy tools because shit gets complicated. Hell, I don’t even like writing 100 lines without git.
A bunch of non-programmers make a few basic apps with ChatGPT and think we’re all cooked.
I work in QA, even devs who’ve worked for 10+ years make dumb mistakes every so often. I wouldn’t want to do QA when AI is writing the software, it’s just gonna give me even more work 🫠
I’m a senior developer and I sometimes even look back thinking “how the fuck did I make that mistake yesterday”. I know I’m blind to my own mistakes, so I know testers may have some really valid feedback when I think I did everything right :)
That’s what we’re for in the end
it’s funny that some people think programming has a human element that can’t be replaced but art doesn’t.
I get the idea that it’s only temporary, but I’d much rather have a current gen AI paint a picture than attempt to program a guidance system or a heart monitor
Art doesn’t have to fulfill a practical purpose nor does it usually have security vulnerabilities. Not taking a position on the substance, but these are two major differences between the two.
Art fulfills many practical purposes. You live in an abode designed by architects, presumably painted and furnished with many objets d’art such as a couch, a wardrobe, ceiling fixtures, a bathtub; also presumably festooned with art on the walls; you cook and eat food in designed cookware, crockery and cutlery, and that food is frequently more than pure sustenance; and, presumably, you spend a fair amount of time consuming media such as television, film, literature, music, comedy, dance, or even porn.
my point exactly. practical purpose and security are things you can analyze and solve for as a machine at least in theory. artistic value comes from the artistic intent. by intent I don’t mean to argue against death of the author, as I believe in it, but the very fact that there is intent to create art.
In all seriousness though I do worry for the future of juniors. All the things that people criticise LLMs for, juniors do too. But if nobody hires juniors they will never become senior
Sounds like a Union is a good thing. Apprenticeship programs.
This is completely tangential but I think juniors will always be capable of things that LLMs aren’t. There’s a human component to software that I don’t think can be replaced without human experience. The entire purpose of software is for humans to use it. So since the LLM has never experienced using software while being a human, there will always be a divide. Therefore, juniors will be capable of things that LLMs aren’t.
Idk, I might be missing a counterpoint, but it makes sense to me.
Other industries… ?
Thank you for your opinion.
Anyway.
Everyone’s convinced their thing is special, but everyone else’s is a done deal.
Meanwhile the only task where current AI seems truly competitive is porn.
False. Porn is sexy, and I can’t possibly be aroused by an image of a woman spreading her cheeks when her fingers are attached to her arse with a continuous piece of flesh, giving her skin the same topography as a teapot.
I absolutely hate this, thanks
Damning comments from 2023.
I’ll stop saying it if it stops being true.
AI is really good at creating images of Jesus that boomers say “amen” to.
So is toast.
I’d suggest that if you think AI porn is anywhere near the real thing, that’s probably because you think porn is already slop in the same way that these AI bros think of code or creative writing or whatever other information-based thing you already know AI can’t do well.
Porn isn’t slop, people aren’t just interestingly-shaped slabs of meat. Sex is fundamentally about interpersonal connection. It might be one of the things that LLMs and robots are the worst at.
Most porn is definitely slop.
Wanking is about the emotional connection to a JPG, said someone I deeply pity.
Who wouldn’t pity those who make do with a lossy compression image format?
I do not need lossless copies of an image someone didn’t even draw.
Not everyone is there for the interpersonal connection. Some really are just that base and pathetic.
Having said that, seeking personal connection (or just sex) is a mistake in this age. Best to learn to let go, and get used to suffering.
“Programmers are cooked,” he says in reply to a post offering six figures for a programmer
six figures for a junior programmer, no less
I almost added that, but I’ll be real, I have no clue what a junior programmer is lmao
For all I know it’s the equivalent to a journeyman or something
Most programmers don’t go on many journeys, it’s more like a basementman.
Hey I resemble that remark
Junior programmer is who trains the interns and manages the actual work the seniors take credit for.
This is not true. A junior programmer takes the systems that are designed by the senior and staff level engineers and writes the code for them. If you think the code is the work, then you’re mistaken. Writing code is the easy part. Designing systems is the part that takes decades to master.
That’s why when Elon Musk was spewing nonsense about Twitter’s tech stack, I knew he was a moron. He was speaking like a junior programmer who had just been put in charge of the company.
I thought Junior just meant they only had 3 or 4 pairs of programming socks.
I was gonna say, if this person is making $145k, they are not a “junior” in any realistic sense of the term. It would be nice if computer programming and software development became a legitimate profession.
You can say “fucked” on the internet, Ace Rbk.
It’s better to future-proof your account for when Gilead is claimed.
Oh no, he’s a cannibal.
Know a guy who tried to use AI to vibe code a simple web server. He wasn’t a programmer and kept insisting to me that programmers were done for.
After weeks of trying to get the thing to work, he had nothing. He showed me the code, and it was the worst I’ve ever seen. Dozens of empty files where the AI had apparently added and then deleted the same code. Also some utter garbage code. Tons of functions copied and pasted instead of being defined once.
I then showed him a web app I had made in that same amount of time. It worked perfectly. Never heard anything more about AI from him.
I’m an engineer and can vibe code some features, but you still have to know wtf the program is doing overall. AI makes good programmers faster; it doesn’t make ignorant people know how to code.
“no dude he just wasn’t using [ai product] dude I use that and then send it to [another ai product]'s [buzzword like ‘pipeline’] you have to try those out dude”
AI is very very neat but like it has clear obvious limitations. I’m not a programmer and I could tell you tons of ways I tripped Ollama up already.
But it’s a tool, and the people who can use it properly will succeed.
Funny. Every time someone points out how god awful AI is, someone else comes along to say “It’s just a tool, and it’s good if someone can use it properly.” But nobody who uses it treats it like “just a tool.” They think it’s a workman they can claim the credit for, as if a hammer could replace the carpenter.
Plus, the only people good enough to fix the problems caused by this “tool” don’t need to use it in the first place.
I think it’s most useful as an (often wrong) line completer, more than anything else. It can take in an entire file and just try to figure out the rest of what you are currently writing. Its context window simply isn’t big enough to understand an entire project.
That and unit tests. Since unit tests are by design isolated, small, and unconcerned with the larger project, AI has at least a fighting chance of competently producing them. That still takes significant hand holding, though.
Isn’t writing tests with AI a really bad idea? I mean, the whole point of writing separate tests is hoping that you won’t make the same mistakes twice, and therefore catching any behavior in the code that does not match your intent. But if you use an LLM to write a test using said code as context (instead of the original intent you would use yourself), there’s a risk that it’ll just write a test case that makes sure the code contains the wrong behavior.
Okay, it might still be okay for regression testing, but you’re still missing most of the benefit you’d get by writing the tests manually. Unless you only care about closing tickets, that is.
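To make the risk concrete, here’s a made-up toy example: a test derived from the buggy code passes and quietly enshrines the bug, while a test written from the original intent fails and exposes it.

```python
def apply_discount(price, pct):
    # Buggy: the intent was price * (1 - pct / 100)
    return price * pct

# A test generated from the code's actual behaviour: it passes,
# locking the bug in as the "expected" result.
def test_generated_from_code():
    assert apply_discount(100, 10) == 1000

# A test written from the original intent: it fails and flags the bug.
def test_written_from_intent():
    assert apply_discount(100, 10) == 90
```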
“Unless you only care about closing tickets, that is.”
Perfect. I’ll use it for tests at work then.
I’ve used it most extensively for non-professional projects, where if I wasn’t using this kind of tooling to write tests, they would simply not be written. That means no tickets to close either. That said, I am aware that the AI is almost always at best testing for regression (I have had it correctly realise my logic was incorrect and write tests that caught it, but that is by no means reliable). Part of the “hand holding” I mentioned involves making sure it has sufficient coverage of use cases and edge cases, and that what it expects to be correct actually is correct according to intent.
I essentially use the AI to generate a variety of scenarios and complementary test data, then evaluate their validity and expand from there.
I’ve used them for unit tests and it still makes some really weird decisions sometimes. Like building an array of json objects that it feeds into one super long test with a bunch of switch conditions. When I saw that one I scratched my head for a little bit.
I most often just get it straight up misunderstanding how the test framework itself works, but I’ve definitely had it make strange decisions like that. I’m a little convinced that the only reason I put up with it for unit tests is because I would probably not write them otherwise haha.
Oh, I am right there with you. I don’t want to write tests because they’re tedious, so I backfill with the AI at least starting me off on it. It’s a lot easier for me to fix something (even if it turns into a complete rewrite) than to start from a blank file.
It’s great for verbose log statements
This. I have no problem combining a couple of endpoints in one script and explaining to QWQ what my final CSV file based on those JSONs should look like. But try to go beyond that, reach above a 32k context, or show it multiple scripts, and the poor thing has no clue what to do.
If you can manage your project and break it down into multiple simple tasks, you could build something complicated via an LLM. But that requires some knowledge about coding, and at that point chances are you will have better luck writing the whole thing yourself.
Co"worker" spent 7 weeks building a simple C# MVC app with ChatGPT
I think I don’t have to tell you how it went. Let’s just say I spent more time debugging “his” code than mine.
I do enjoy the new assistant in JetBrains tools, the one that runs locally. It truly helps with the trite shit 90% of the time. Every time I tried code gen AI for larger parts, it’s been unusable.
It works quite nicely as autocomplete
Yes, exactly.
Except for the other 10% of the time, where in maybe 30% of those cases you’ll have a hell of a lot of fun finding which exact line has one little variable-name mismatch. But if you’re actually very careful, it’s a nice feature.
I will give it this. It’s been actually pretty helpful in me learning a new language because what I’ll do is that I’ll grab an example of something in working code that’s kind of what I want, I’ll say “This, but do X” then when the output doesn’t work, I study the differences between the chatGPT output & the example code to learn why it doesn’t work.
It’s a weird learning tool but it works for me.
It’s great for explaining snippets of code.
I tried out the new copilot agent in VSCode and I spent more time undoing shit and hand holding than it would have taken to do it myself
Things like asking it to make a directory matching a filename, then move the file in and append _v1, would result in files named simply “_v1” (this was a use case where we need legacy logic and new logic simultaneously for a lift and shift).
When it was done I realized instead of moving the file it rewrote all the code in the file as well, adding several bugs.
Granted, I didn’t check the diffs thoroughly, so I don’t know when that happened, and I just reset my repo back a few commits and redid the work in a couple minutes.
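For reference, the operation I actually wanted is a few lines of stock Python (the names here are illustrative, not the real ones), which made the flailing even more galling:

```python
from pathlib import Path

def shelve_as_v1(path_str):
    """Make a directory named after the file, move the file into it,
    and append _v1 to its name (e.g. report.py -> report/report_v1.py)."""
    src = Path(path_str)
    dest_dir = src.with_suffix("")      # directory named after the file
    dest_dir.mkdir(exist_ok=True)
    # Move the file; no rewriting of its contents involved.
    src.rename(dest_dir / f"{src.stem}_v1{src.suffix}")
```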
I will be downvoted to oblivion, but hear me out: local LLMs aren’t that bad for simple script development. NDA? No problem, it’s a local instance. No coding experience? No problem either, QWQ can create and debug the whole thing. Yeah, it’s “better” to do it yourself, learn to code and everything. But I’m simple tech support. I have no clue how code works (that’s kind of a lie, but you get the idea), nor am I paid for that. But I do need to sort 500 users pulled from a database via a corp endpoint, and that is what I’m paid for. And I have to decide if I want to do that manually, or via a script that an LLM created in less than ~5 minutes. Cause at the end of the day, I will be paid the same amount of money.
It can even create a simple GUI with Qt on top of that script. Isn’t that just awesome?
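To give a sense of scale, the whole thing boils down to something like this sketch (the endpoint URL and field names here are invented; the real ones stay behind the NDA):

```python
import csv
import requests

# Invented corp endpoint and fields, purely for illustration.
resp = requests.get("https://corp.example.com/api/users", timeout=30)
resp.raise_for_status()
users = resp.json()

# Sort the ~500 users, then dump them to CSV.
users.sort(key=lambda u: u.get("last_name", ""))

with open("users.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f,
        fieldnames=["last_name", "first_name", "email"],
        extrasaction="ignore",  # drop any extra fields the endpoint returns
    )
    writer.writeheader()
    writer.writerows(users)
```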
As someone who somewhat recently wasted 5 hours debugging a “simple” bash script that Cursor shit out which was exploding k8s nodes—nah, I’ll pass. I rewrote the script from scratch in 45 minutes after I figured out what was wrong. You do you, but I don’t let LLMs near my software.
I take issue with the “replacing other industries” part.
I know that this is an unpopular opinion among programmers, but all professions have roles that range from small skill sets and little cognitive ability to large skill sets and high-level cognitive abilities.
Generative AI is an incremental improvement in automation. In my industry it might make someone 10% more productive. For any role where it could make someone 20% more productive, that role could have been made more efficient in some other way, be it training, templates, simple conversion scripts, whatever.
Basically, if someone’s job can be replaced by AI then they weren’t really producing any value in the first place.
Of course, this means that in a firm with 100 staff, you could get the same output with 91 staff plus Gen AI. So yeah in that context 9 people might be replaced by AI, but that doesn’t tend to be how things go in practice.
I know that this is an unpopular opinion among programmers, but all professions have roles that range from small skill sets and little cognitive ability to large skill sets and high-level cognitive abilities.
I am kind of surprised that is an unpopular opinion. I figure there is a reason we compensate people for jobs. Pay people to do stuff you cannot, or do not have the time to do, yourself. And for almost every job there is probably something that is way harder than it looks from the outside. I am not the most worldly of people but I’ve figured that out by just trying different skills and existing.
deleted by creator
I’m not really clear what you’re getting at.
Are you suggesting that the commonly used models might only be an incremental improvement, but some of the less common models are ready to take accountants’ and lawyers’ and engineers’ and architects’ jobs?
Personally I prefer my junior programmers well done.
As long as they keep the rainbow 🌈 socks on, I’ll eat them raw.