• CapriciousDay@lemmy.ml · 8 points · 7 hours ago

    I don’t know if they quite appreciate that if programmers are cooked like this, everyone who isn’t a billionaire is too. Let me introduce you to robot transformer models.

    • ☆ Yσɠƚԋσʂ ☆@lemmy.ml (OP) · 2 points · 53 minutes ago

      Exactly. To eliminate the need for programmers you would need AGI, and that would simply mean the end of capitalism, because at that point any job a human does could be automated.

  • Jimmycrackcrack@lemmy.ml · 32 points · 10 hours ago

    I realise the dumbass here is the guy saying programmers are ‘cooked’, but there’s something kind of funny about how the programmer talks about people misunderstanding the complexities of their job, and about how easily LLMs make mistakes because they can’t grasp the nuances of what they do every day and understand deeply. They rightly point out that without their specialist oversight, AI agents would fail in ridiculous and spectacular ways, yet they happily and vaguely add, as a throwaway statement at the end, “replacing other industries, sure,” with the exact same blitheness and lack of personal understanding with which ‘Ace’ proclaims all programmers cooked.

    • ☆ Yσɠƚԋσʂ ☆@lemmy.ml (OP) · 2 points · 50 minutes ago

      I find this is a really common trope where people appreciate the complexity of the domain they work in, but assume every other domain is trivial by comparison.

  • Anna@lemmy.ml · 9 points · 11 hours ago

    When AI can sit through a dozen meetings discussing stupid things, only to finalize whatever you had decided earlier, then I’ll be worried.

  • Ephera@lemmy.ml · 34 points · 16 hours ago

    It’s like a conspiracy theory for that guy. Everyone who tells him it’s not true that you can get rid of programmers has to be a programmer, and therefore cannot be trusted.

    • moseschrute@lemmy.ml · 27 points · 15 hours ago

      To be fair, we should probably all start migrating to cybersecurity positions. They’ll need it when they discover how many vulnerabilities were created by all the non-programmers vibe coding.
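
      And just to illustrate the kind of thing I mean, here’s a purely hypothetical sketch of the classic: user input pasted straight into a query string instead of going through a parameterized query. The function, names, and query are all made up for illustration.

      ```rust
      // Purely hypothetical sketch of a classic vibe-coded vulnerability:
      // user input concatenated straight into SQL instead of being passed
      // as a parameter to a prepared statement.
      fn find_user_query(username: &str) -> String {
          // With input like "'; DROP TABLE users; --" this goes about
          // as well as you'd expect.
          format!("SELECT * FROM users WHERE name = '{}'", username)
      }

      fn main() {
          println!("{}", find_user_query("'; DROP TABLE users; --"));
      }
      ```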

      • bountygiver [any]@lemmy.ml · 3 points · 9 hours ago

        And it will be a good time to start exploiting them for nefarious purposes. In particular, hit companies where it hurts, the balance books, so that it actually starts driving demand for fixing them.

  • HiddenLayer555@lemmy.ml · 6 points · 8 hours ago

    LLMs can’t even stay on topic when they’re specifically asked to solve one problem.

    This happens to me all the damn time:

    I paste a class that references some other classes, all of which I’ve already tested and confirmed working. My problem is in a specific method that doesn’t directly call on any of the other classes. I tell the LLM specifically which method is not working, and that I have tested all the other methods and they work as intended (complete with comments documenting what they’re supposed to do). I then ask it to focus only on the method I specified, and it still goes on about “have you implemented all the other classes this class references? Here’s my shitty implementation of those classes instead.”

    So then I paste all the classes that the one I’m asking about depends on, reiterate that all of them have been tested and are working, and tell the LLM again which method has the problem. It still decides that my problem must be in the other classes and starts “fixing” them, which nine times out of ten just means rearranging the code I already wrote and fucking up the organisation I had designed.

    It’s somewhat useful for searching for well-known example code using natural language, e.g. “How do I open a network socket using Rust,” or if your problem is really simple. Maybe it’s just the specific LLM I use, but in my experience it can’t actually problem-solve better than humans.
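
    That kind of question is exactly where it does fine, because the answer is textbook boilerplate. A minimal sketch of the sort of thing it returns (the address and payload here are placeholders, not anything it actually generated):

    ```rust
    use std::io::{Read, Write};
    use std::net::TcpStream;

    fn main() -> std::io::Result<()> {
        // Open a TCP connection (placeholder address, for illustration only).
        let mut stream = TcpStream::connect("127.0.0.1:8080")?;

        // Send a few bytes and read whatever comes back.
        stream.write_all(b"hello")?;
        let mut buf = [0u8; 1024];
        let n = stream.read(&mut buf)?;
        println!("received {} bytes", n);
        Ok(())
    }
    ```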

  • roux [he/him, they/them]@hexbear.net · 14 points · 14 hours ago

    My favorite thing about ChatGPT is having to constantly tell it that I’m writing stuff in AstroJS, not ReactJS.

    So, uh, where’s this link for a junior position starting out at $145k? Asking for a friend…

  • hexaflexagonbear [he/him]@hexbear.net · 8 points · 13 hours ago

    I was making ChatGPT do some tedious thing, and I kept telling it “you got X wrong”. It kept going “oh, you’re right, I got X wrong, I will not do that again” and then giving the exact same output. lol, the one time ChatGPT gave me consistent outputs for the same prompt.

    • GnuLinuxDude@lemmy.ml · 5 points · 9 hours ago

      Just yesterday I asked Llama 3.3 (70B params) how to do something. I was pretty sure it wouldn’t be able to tell me the right command to run, because I knew beforehand I was asking it something really obscure about how to use tar. I gave it all the relevant details. Imagine my surprise when it… told me the blatantly wrong thing. It even invented useless ways of running the command incorrectly.

    • Natanox@discuss.tchncs.de · 2 points · 12 hours ago

      Yeah, same with Codestral. You have to tell it what to do very specifically, and once it gets stuck somewhere you have to move to a new session to get rid of the history junk.

      Both it and ChatGPT also repeatedly told me to save binary data I wanted to store in memory as a list, with every 1024 bytes being a new entry… in the form of a string (supposedly). And the worst thing is that, given the way it extracted that data later on, this unholy implementation from hell would’ve probably even worked, up to a certain point.
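
      For the curious, here’s roughly the shape of what it kept suggesting (my reconstruction in Rust, not its literal output), next to what you’d actually want:

      ```rust
      // Reconstruction of the suggestion: binary data chopped into
      // 1024-byte chunks, each stored as a String.
      fn store_as_strings(data: &[u8]) -> Vec<String> {
          data.chunks(1024)
              // Lossy conversion: any non-UTF-8 byte is silently replaced,
              // which is why this "works"... up to a certain point.
              .map(|chunk| String::from_utf8_lossy(chunk).into_owned())
              .collect()
      }

      // What you'd actually want: just keep the bytes as bytes.
      fn store_as_bytes(data: &[u8]) -> Vec<Vec<u8>> {
          data.chunks(1024).map(|chunk| chunk.to_vec()).collect()
      }
      ```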

      • Jimmycrackcrack@lemmy.ml · 4 points · 11 hours ago

        That’s got to be the key to all this: specificity. It’s great that it’s got natural language processing to simplify things, but sometimes that’s what’s actually getting in the way. What they should really do is make a special version of ChatGPT for programming, where users can interact with it in a very special form of structured English. It’s still natural language, this is the future after all, none of that zeroes-and-ones crap like the stone age, but just highly specific words with carefully defined meanings, particular to making repeatable and executable steps in a pattern that does the same thing every time in response to inputs to produce outputs. You could then “speak” to one of these LLM things using this carefully structured English to automate specific tasks. The real kicker would be that you could tell it to chain together a bunch of these tasks you’ve had it automate for you, to build up into something much more complex. This would really harness the power of AI, because at each step it’s made it for you, with minimal input from yourself, since you’re just ‘talking’ to it in a very specific way. Admittedly this approach would be a little bit less obvious for new users than a standard LLM, but if an average person kept doing this for like a year or two they’d get pretty adept at this manner of speech. It’d be kind of like learning another language, and people have been doing that for as long as there have been people. I speak in a language every day; I’m doing it right now. We could make it easier too: we could have courses and schools to help people get better at it faster.
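
        A purely hypothetical sample of what this revolutionary structured English might look like, just to give a flavour of it:

        ```rust
        // Highly specific words with carefully defined meanings, producing
        // the same outputs for the same inputs, every single time.
        fn total_price(prices: &[f64], tax_rate: f64) -> f64 {
            let subtotal: f64 = prices.iter().sum();
            subtotal * (1.0 + tax_rate)
        }

        fn main() {
            // Chain small automated tasks into something more complex.
            let total = total_price(&[9.99, 4.50], 0.08);
            println!("{total:.2}");
        }
        ```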

  • zbyte64@awful.systems · 13 points · 15 hours ago

    We’re cooked because our leaders are pumping the AI bubble while crashing the rest of the economy. When that bubble pops, those programmers are going to have to find a job in a nuclear-sized crater where the economy used to be.

    • MangoCats@feddit.it · 3 points · 13 hours ago

      If it’s any consolation, there was a nuclear-sized crater of a job market for all engineers (software and otherwise) after Reagan’s 8 years were up. The dot-com aftershocks were pretty huge, and the 2008 housing crisis hit everything really hard too. Then in 2012 I got laid off due to the end of our Afghanistan debacle, and then there was that pandemic thingy…

      So, yeah, the current foolishness is going to make a hell of a mess, but for the past 35 years, and longer I’m sure, there has always been a huge mess either being cleaned up or coming soon; those are just the ones I’ve been hit by.

      • zbyte64@awful.systems · 1 point · 45 minutes ago

        I lived through those times; this is different. The punishment for false hope is the lesson that things can always get worse.