• needanke@feddit.org
    link
    fedilink
    arrow-up
    6
    ·
    33 minutes ago

    Tinfoil hat time:

    That Ace account is just an alt of the original guy and rage baiting to give his posting more reach.

  • Lucidlethargy@sh.itjust.works
    link
    fedilink
    arrow-up
    35
    arrow-down
    2
    ·
    5 hours ago

    AI is fucking so useless when it comes to programming right now.

    They can’t even fucking do math. Go make an AI do math right now, go see how it goes lol. Make it a real-world problem and give it lots of variables.

    • psud@aussie.zone
      link
      fedilink
      English
      arrow-up
      6
      ·
      edit-2
      5 hours ago

      My favourite AI code test is code to point a heliostat mirror at (latitude, longitude) at a target at (latitude, longitude, elevation)

      After a few iterations to get the basics in place, “also create the function to select the mirror angle”

      A basic fact that isn’t often described is that to reflect a ray you aim the mirror halfway between the source and the target. AI comes up with the strangest non-working ways of aiming the mirror

      Working with AI feels a lot like working with a newbie
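
      For what it’s worth, the halfway rule the comment describes takes only a few lines to state directly. A minimal sketch in plain Python (the vector conventions here are my assumption, not the commenter’s actual code):

      ```python
      import math

      def mirror_normal(sun_ray, mirror_pos, target_pos):
          """Heliostat aiming: the mirror normal is the unit vector halfway
          between the direction back toward the sun and the direction to the
          target -- i.e. the bisector of the incoming and outgoing rays."""
          def unit(v):
              mag = math.sqrt(sum(c * c for c in v))
              return tuple(c / mag for c in v)
          to_sun = unit(tuple(-c for c in sun_ray))        # reverse the incoming ray
          to_target = unit(tuple(t - p for t, p in zip(target_pos, mirror_pos)))
          return unit(tuple(s + t for s, t in zip(to_sun, to_target)))
      ```

      For example, with the sun directly overhead (incoming ray (0, 0, -1)) and a target due “east” of the mirror, the normal comes out tilted 45°, which is exactly the halfway aim the comment describes.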

    • andxz@lemmy.world
      link
      fedilink
      arrow-up
      2
      arrow-down
      12
      ·
      5 hours ago

      I’ve read this so many times in the past few days that I’m just going to write this. As I see it, using what we have available right now (which isn’t “AI” in any meaningful way) to do simple math is weird, since we already have calculators for that.

      Meanwhile, me, who’s at best absolute shit at Python, just made a calculator with a rudimentary UI in about 45 minutes using nothing but an AI, ctrl+c/v, and some sorting out of the bits, as it were.

      So far the math has checked out on that calculator, too.

      • Asetru@feddit.org
        link
        fedilink
        arrow-up
        15
        ·
        2 hours ago

        Me, a person with no coding skills, had the ai write code and I can’t see if there’s anything wrong with the results. So the results must be good.

        • frezik@midwest.social
          link
          fedilink
          arrow-up
          1
          ·
          7 minutes ago

          That might be the underlying problem. Software project management around small projects is easy. Anything that has a basic text editor and a Python interpreter will do. We have all these fancy tools because shit gets complicated. Hell, I don’t even like writing 100 lines without git.

          A bunch of non-programmers make a few basic apps with ChatGPT and think we’re all cooked.

  • Kualdir@feddit.nl
    link
    fedilink
    arrow-up
    12
    ·
    5 hours ago

    I work in QA, even devs who’ve worked for 10+ years make dumb mistakes every so often. I wouldn’t want to do QA when AI is writing the software, it’s just gonna give me even more work 🫠

    • MoonRaven@feddit.nl
      link
      fedilink
      arrow-up
      10
      ·
      4 hours ago

      I’m a senior developer and I sometimes even look back thinking “how the fuck did I make that mistake yesterday”. I know I’m blind to my own mistakes, so I know testers may have some really valid feedback when I think I did everything right :)

  • pyre@lemmy.world
    link
    fedilink
    arrow-up
    10
    arrow-down
    2
    ·
    4 hours ago

    it’s funny that some people think programming has a human element that can’t be replaced but art doesn’t.

    • whotookkarl@lemmy.world
      link
      fedilink
      arrow-up
      1
      ·
      edit-2
      32 minutes ago

      I get the idea that it’s only temporary, but I’d much rather have a current gen AI paint a picture than attempt to program a guidance system or a heart monitor

    • schnurrito@discuss.tchncs.de
      link
      fedilink
      arrow-up
      7
      arrow-down
      1
      ·
      2 hours ago

      Art doesn’t have to fulfill a practical purpose nor does it usually have security vulnerabilities. Not taking a position on the substance, but these are two major differences between the two.

      • funkless_eck@sh.itjust.works
        link
        fedilink
        arrow-up
        2
        ·
        13 minutes ago

        Art fulfills many practical purposes. You live in an abode designed by architects, presumably painted and furnished with many objets d’art such as a couch, a wardrobe, ceiling fixtures, a bathtub; also presumably festooned with art on the walls; you cook and eat food in designed cookware, crockery and cutlery, and that food is frequently more than pure sustenance; and, presumably, you spend a fair amount of time consuming media such as television, film, literature, music, comedy, dance, or even porn.

      • pyre@lemmy.world
        link
        fedilink
        arrow-up
        2
        ·
        40 minutes ago

        my point exactly. practical purpose and security are things you can analyze and solve for as a machine at least in theory. artistic value comes from the artistic intent. by intent I don’t mean to argue against death of the author, as I believe in it, but the very fact that there is intent to create art.

  • miridius@lemmy.world
    link
    fedilink
    arrow-up
    27
    ·
    6 hours ago

    In all seriousness though I do worry for the future of juniors. All the things that people criticise LLMs for, juniors do too. But if nobody hires juniors they will never become senior

    • Grazed@lemmy.world
      link
      fedilink
      arrow-up
      6
      ·
      5 hours ago

      This is completely tangential but I think juniors will always be capable of things that LLMs aren’t. There’s a human component to software that I don’t think can be replaced without human experience. The entire purpose of software is for humans to use it. So since the LLM has never experienced using software while being a human, there will always be a divide. Therefore, juniors will be capable of things that LLMs aren’t.

      Idk, I might be missing a counterpoint, but it makes sense to me.

  • mindbleach@sh.itjust.works
    link
    fedilink
    arrow-up
    35
    arrow-down
    2
    ·
    10 hours ago

    Everyone’s convinced their thing is special, but everyone else’s is a done deal.

    Meanwhile the only task where current AI seems truly competitive is porn.

    • Susaga@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      1
      ·
      7 hours ago

      False. Porn is sexy, and I can’t possibly be aroused by an image of a woman spreading her cheeks when her fingers are attached to her arse with a continuous piece of flesh, giving her skin the same topography as a teapot.

    • hperrin@lemmy.ca
      link
      fedilink
      English
      arrow-up
      10
      arrow-down
      1
      ·
      8 hours ago

      AI is really good at creating images of Jesus that boomers say “amen” to.

    • Excrubulent@slrpnk.net
      link
      fedilink
      English
      arrow-up
      20
      arrow-down
      3
      ·
      edit-2
      10 hours ago

      I’d suggest that if you think AI porn is anywhere near the real thing, that’s probably because you think porn is already slop in the same way that these AI bros think of code or creative writing or whatever other information-based thing you already know AI can’t do well.

      Porn isn’t slop, people aren’t just interestingly-shaped slabs of meat. Sex is fundamentally about interpersonal connection. It might be one of the things that LLMs and robots are the worst at.

      • starman2112@sh.itjust.works
        link
        fedilink
        arrow-up
        19
        ·
        12 hours ago

        I almost added that, but I’ll be real, I have no clue what a junior programmer is lmao

        For all I know it’s the equivalent to a journeyman or something

        • artiface@lemm.ee
          link
          fedilink
          English
          arrow-up
          25
          arrow-down
          1
          ·
          11 hours ago

          Junior programmer is who trains the interns and manages the actual work the seniors take credit for.

          • hperrin@lemmy.ca
            link
            fedilink
            English
            arrow-up
            10
            ·
            8 hours ago

            This is not true. A junior programmer takes the systems that are designed by the senior and staff level engineers and writes the code for them. If you think the code is the work, then you’re mistaken. Writing code is the easy part. Designing systems is the part that takes decades to master.

            That’s why when Elon Musk was spewing nonsense about Twitter’s tech stack, I knew he was a moron. He was speaking like a junior programmer who had just been put in charge of the company.

          • slappypantsgo@lemm.ee
            link
            fedilink
            English
            arrow-up
            12
            ·
            11 hours ago

            I was gonna say, if this person is making $145k, they are not a “junior” in any realistic sense of the term. It would be nice if computer programming and software development became a legitimate profession.

  • Anders429@programming.dev
    link
    fedilink
    arrow-up
    65
    arrow-down
    1
    ·
    14 hours ago

    Know a guy who tried to use AI to vibe code a simple web server. He wasn’t a programmer and kept insisting to me that programmers were done for.

    After weeks of trying to get the thing to work, he had nothing. He showed me the code, and it was the worst I’ve ever seen. Dozens of empty files where the AI had apparently added and then deleted the same code. Also some utter garbage code. Tons of functions copied and pasted instead of being defined once.

    I then showed him a web app I had made in that same amount of time. It worked perfectly. Never heard anything more about AI from him.

    • cantstopthesignal@sh.itjust.works
      link
      fedilink
      arrow-up
      9
      ·
      8 hours ago

      I’m an engineer and can vibe code some features, but you still have to know wtf the program is doing overall. AI makes good programmers faster; it doesn’t make ignorant people know how to code.

    • _____@lemm.ee
      link
      fedilink
      English
      arrow-up
      10
      ·
      10 hours ago

      “no dude he just wasn’t using [ai product] dude I use that and then send it to [another ai product]'s [buzzword like ‘pipeline’] you have to try those out dude”

    • A_Union_of_Kobolds@lemmy.world
      link
      fedilink
      arrow-up
      28
      arrow-down
      3
      ·
      13 hours ago

      AI is very very neat but like it has clear obvious limitations. I’m not a programmer and I could tell you tons of ways I tripped Ollama up already.

      But it’s a tool, and the people who can use it properly will succeed.

      • Susaga@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        7
        arrow-down
        1
        ·
        7 hours ago

        Funny. Every time someone points out how god awful AI is, someone else comes along to say “It’s just a tool, and it’s good if someone can use it properly.” But nobody who uses it treats it like “just a tool.” They think it’s a workman they can claim the credit for, as if a hammer could replace the carpenter.

        Plus, the only people good enough to fix the problems caused by this “tool” don’t need to use it in the first place.

      • Emily (she/her)@lemmy.blahaj.zone
        link
        fedilink
        arrow-up
        21
        ·
        13 hours ago

        I think it’s most useful as an (often wrong) line completer more than anything else. It can take in an entire file and try to figure out the rest of what you are currently writing. Its context window simply isn’t big enough to understand an entire project.

        That and unit tests. Since unit tests are by design isolated, small, and unconcerned with the larger project, AI has at least a fighting chance of competently producing them. That still takes significant hand holding though.

        • jorm1s@sopuli.xyz
          link
          fedilink
          arrow-up
          3
          ·
          6 hours ago

          Isn’t writing tests with AI a really bad idea? I mean, the whole point of writing separate tests is hoping that you won’t make the same mistake twice, and therefore catching any behavior in the code that does not match your intent. But if you use an LLM to write a test using said code as context (instead of the original intent you would use yourself), there’s a risk that it’ll just write a test case that makes sure the code contains the wrong behavior.

          Okay, it might still be okay for regression testing, but you’re still missing most of the benefit you’d get by writing the tests manually. Unless you only care about closing tickets, that is.
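
          The failure mode being described is easy to illustrate with a toy example (hypothetical function and test, not from the thread):

          ```python
          def days_inclusive(start, end):
              """Intended: count days from start through end, inclusive."""
              return end - start          # off-by-one bug: should be end - start + 1

          # A test derived from the code rather than the intent simply
          # re-asserts whatever the code already does, bug included:
          def test_days_inclusive():
              assert days_inclusive(1, 3) == 2   # passes, but the intended answer is 3
          ```

          The generated test goes green and locks the bug in as a regression baseline, which is exactly the “closing tickets” outcome the comment above warns about.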

          • Grazed@lemmy.world
            link
            fedilink
            arrow-up
            4
            ·
            5 hours ago

            “Unless you only care about closing tickets, that is.”

            Perfect. I’ll use it for tests at work then.

          • Emily (she/her)@lemmy.blahaj.zone
            link
            fedilink
            arrow-up
            2
            ·
            edit-2
            5 hours ago

            I’ve used it most extensively for non-professional projects, where if I weren’t using this kind of tooling the tests would simply not get written. That means no tickets to close either. That said, I am aware that the AI is almost always at best testing for regression (I have had it correctly realise my logic was incorrect and write tests that caught it, but that is by no means reliable). Part of the “hand holding” I mentioned involves making sure it has sufficient coverage of use cases and edge cases, and that what it expects to be the correct result actually matches the intent.

            I essentially use the AI to generate a variety of scenarios and complementary test data, then evaluate their validity and expand from there.

        • franzfurdinand@lemmy.world
          link
          fedilink
          arrow-up
          12
          ·
          11 hours ago

          I’ve used them for unit tests and it still makes some really weird decisions sometimes. Like building an array of json objects that it feeds into one super long test with a bunch of switch conditions. When I saw that one I scratched my head for a little bit.

          • Emily (she/her)@lemmy.blahaj.zone
            link
            fedilink
            arrow-up
            5
            ·
            10 hours ago

            I most often just get it straight up misunderstanding how the test framework itself works, but I’ve definitely had it make strange decisions like that. I’m a little convinced that the only reason I put up with it for unit tests is because I would probably not write them otherwise haha.

            • franzfurdinand@lemmy.world
              link
              fedilink
              arrow-up
              4
              ·
              10 hours ago

              Oh, I am right there with you. I don’t want to write tests because they’re tedious, so I backfill with the AI at least starting me off on it. It’s a lot easier for me to fix something (even if it turns into a complete rewrite) than to start from a blank file.

      • De Lancre@lemmy.world
        link
        fedilink
        arrow-up
        3
        arrow-down
        1
        ·
        12 hours ago

        This. I have no problem combining a couple of endpoints in one script and explaining to QWQ what my end CSV file based on those JSONs should look like. But try to go beyond that, reach above a 32k context, or show it multiple scripts, and the poor thing has no clue what to do.

        If you can manage your project and break it down into multiple simple tasks, you could build something complicated via LLM. But that requires some knowledge about coding, and at that point chances are you’d have better luck writing the whole thing yourself.

  • Lucy :3@feddit.org
    link
    fedilink
    arrow-up
    110
    arrow-down
    1
    ·
    edit-2
    15 hours ago

    Co"worker" spent 7 weeks building a simple C# MVC app with ChatGPT

    I think I don’t have to tell you how it went. Let’s just say I spent more time debugging “his” code than mine.

    • state_electrician@discuss.tchncs.de
      link
      fedilink
      arrow-up
      5
      ·
      5 hours ago

      I do enjoy the new assistant in JetBrains tools, the one that runs locally. It truly helps with the trite shit 90% of the time. Every time I’ve tried code-gen AI for larger parts, it’s been unusable.

    • other_cat@lemmy.zip
      link
      fedilink
      English
      arrow-up
      15
      ·
      9 hours ago

      I will give it this: it’s been actually pretty helpful in learning a new language. What I’ll do is grab an example of something in working code that’s kind of what I want and say “this, but do X.” Then when the output doesn’t work, I study the differences between the ChatGPT output and the example code to learn why it doesn’t work.

      It’s a weird learning tool but it works for me.

    • wise_pancake@lemmy.ca
      link
      fedilink
      arrow-up
      31
      ·
      edit-2
      13 hours ago

      I tried out the new copilot agent in VSCode and I spent more time undoing shit and hand holding than it would have taken to do it myself

      Things like asking it to make a directory matching a filename, then move the file in and append _v1, would result in files named simply “_v1” (this was a use case where we need legacy logic and new logic simultaneously for a lift and shift).

      When it was done I realized instead of moving the file it rewrote all the code in the file as well, adding several bugs.

      Granted, I didn’t check the diffs thoroughly, so I don’t know when that happened; I just reset my repo back a few commits and redid the work in a couple of minutes.
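
      For comparison, the task as described (directory named after the file, move the file in, tack on _v1) is a short stdlib-only function to write by hand. A sketch, with the exact naming scheme being my assumption:

      ```python
      from pathlib import Path

      def shelve_as_v1(path_str):
          """Move name.ext into a new sibling directory name/, renamed to
          name_v1.ext. A pure move: the file's contents are never rewritten."""
          src = Path(path_str)
          dest_dir = src.parent / src.stem        # directory matching the filename
          dest_dir.mkdir(exist_ok=True)
          dest = dest_dir / f"{src.stem}_v1{src.suffix}"
          src.rename(dest)                        # move, not rewrite
          return dest
      ```

      E.g. `shelve_as_v1("report.py")` would leave the untouched bytes at `report/report_v1.py` — no opportunity for the tool to “helpfully” regenerate the code in transit.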

    • De Lancre@lemmy.world
      link
      fedilink
      arrow-up
      5
      arrow-down
      8
      ·
      12 hours ago

      I will be downvoted to oblivion, but hear me out: local LLMs aren’t that bad for simple script development. NDA? No problem, it’s a local instance. No coding experience? No problem either, QWQ can create and debug the whole thing. Yeah, it’s “better” to do it yourself, learn to code and everything. But I’m simple tech support. I have no clue how code works (that’s kinda a lie, but you get the idea), nor am I paid for that. But I do need to sort 500 users pulled from a database via a corp endpoint, and that is what I’m paid for. So I have to decide whether to do that manually, or via a script the LLM created in less than ~5 minutes. Cause at the end of the day, I’ll be paid the same amount of money.

      It can even create a simple GUI with Qt on top of that script, isn’t that just awesome?

      • Badabinski@kbin.earth
        link
        fedilink
        arrow-up
        13
        arrow-down
        1
        ·
        10 hours ago

        As someone who somewhat recently wasted 5 hours debugging a “simple” bash script that Cursor shit out which was exploding k8s nodes—nah, I’ll pass. I rewrote the script from scratch in 45 minutes after I figured out what was wrong. You do you, but I don’t let LLMs near my software.

  • null_dot@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    26
    arrow-down
    2
    ·
    13 hours ago

    I take issue with the “replacing other industries” part.

    I know that this is an unpopular opinion among programmers, but all professions have roles that range from small skill sets and little cognitive demand to large skill sets and high-level cognitive demand.

    Generative AI is an incremental improvement in automation. In my industry it might make someone 10% more productive. For any role where it could make someone 20% more productive that role could have been made more efficient in some other way, be it training, templates, simple conversion scripts, whatever.

    Basically, if someone’s job can be replaced by AI then they weren’t really producing any value in the first place.

    Of course, this means that in a firm with 100 staff, you could get the same output with 91 staff plus Gen AI. So yeah in that context 9 people might be replaced by AI, but that doesn’t tend to be how things go in practice.

    • andioop@programming.dev
      link
      fedilink
      English
      arrow-up
      1
      ·
      53 minutes ago

      I know that this is an unpopular opinion among programmers, but all professions have roles that range from small skill sets and little cognitive demand to large skill sets and high-level cognitive demand.

      I am kind of surprised that is an unpopular opinion. I figure there is a reason we compensate people for jobs. Pay people to do stuff you cannot, or do not have the time to do, yourself. And for almost every job there is probably something that is way harder than it looks from the outside. I am not the most worldly of people but I’ve figured that out by just trying different skills and existing.

      • null_dot@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        6
        ·
        9 hours ago

        I’m not really clear what you’re getting at.

        Are you suggesting that the commonly used models might only be an incremental improvement, but some of the less common models are ready to take accountants’ and lawyers’ and engineers’ and architects’ jobs?