Electron everywhere.
You can actually apply those tools and procedures to automatically generated code, exactly the same as to any other piece of code. I don’t see the impediment here…
You must be able to understand that searching by name is not the same as searching by definition; nothing more to add here…
Why would you care if the shit code submitted to you is bad because it was generated with AI, because it was copied from SO, or because it’s brand-new shit code written by someone? If it’s bad, it’s bad. And bad code has existed since forever. Once again, I don’t see the impact of AI here. If someone is unable to spot that a particular generated piece of code has issues, I don’t see how they are magically going to be able to see the issue in copy-pasted code or in code written by themselves. If they don’t notice, they don’t, no matter the source.
I will go back to the Turing test. If you don’t even know whether the bad code was generated, copied, or just written by hand, how are you even able to tell that AI is the issue?
Any human-written code can and will introduce UB.
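For what it’s worth, here is the textbook case in C, a minimal sketch with nothing LLM-specific about it: it compiles cleanly and still triggers UB.

```c
#include <limits.h>
#include <stdio.h>

int main(void) {
    int x = INT_MAX;
    int y = x + 1;   /* signed integer overflow: undefined behavior in C;
                        the compiler is allowed to assume it never happens */
    printf("%d\n", y);
    return 0;
}
```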
Also, I don’t see how it would take you more than 5 seconds to verify that a given function does not exist. It has happened to me, an LLM suggesting a nonexistent function. And searching by function name in the docs is instantaneous.
If you don’t want to use it, don’t. I have been using it for more than a year and I haven’t run into any of those catastrophic issues. It’s just a tool like many others I use for coding. Not even the most important one; I think LSP was a greater improvement to my coding efficiency, for instance.
It’s like using neovim. Some people would post me a list of all the things that can go wrong when making a Frankenstein IDE out of an ancient text editor. But if it works for me, it works for me.
We still use it in my job.
Pls help.
That’s why you use unit tests and integration tests.
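For instance, a minimal unit-test sketch in C; the add() function is made up for the example.

```c
#include <assert.h>

/* Hypothetical function under test; in a real project it would live in its own module. */
static int add(int a, int b) { return a + b; }

int main(void) {
    /* Unit tests: check the function in isolation, simple edge cases included. */
    assert(add(2, 3) == 5);
    assert(add(-1, 1) == 0);
    assert(add(0, 0) == 0);
    return 0;  /* silence means all tests passed */
}
```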
I can write bad code myself or copy bad code from who-knows-where. It’s not something introduced by LLMs.
Remember the famous Linus letter? “You code this function without understanding it and thus your code is shit.”
As I said, just a tool like many others before it.
I use it as a regular practice while coding. And truth be told, reading my code afterwards I could not distinguish which parts were LLM and which parts I wrote fully by myself, and, to be honest, I don’t think anyone would be able to tell the difference.
It would probably be a nice idea to do some kind of Turing test: put up a blind test to distinguish the AI-written parts of some code, and see how precisely people can tell them apart.
I may come back with a particular piece of code that I specifically remember being an output from DeepSeek, and within the whole context it would probably be indistinguishable.
Also, not all LLM usage is for copying from it. Many times you copy code to it and ask the thing to explain it to you, or ask general questions. For instance, to look up specific functions in extensive C# libraries.
Plenty of good programmers use AI extensively while working. Me included.
Mostly as an advanced autocomplete, template builder, or documentation parser.
You obviously need to be good at it so you can see at a glance whether the written code is good or bullshit. But if you are good, it can really speed things up without any risk, as you will only copy code that you know is good and discard the bullshit.
Obviously you cannot develop without programming knowledge, but with programming knowledge it’s just another tool.
It also has a Firefox extension that adds download and subscribe buttons for TubeArchivist directly on the YouTube website.
You won’t have time after spending all day complaining about bad documentation.
There is this bug in TIC-80.
When running the wasm version on Firefox it has a very bad framerate.
So, as many before me, I rolled up my sleeves and opened the Firefox profiler to see what’s going on. Well, the framerate has never been better. As soon as you turn off the profiler, the framerate drops.
I thought I was going insane, until I luckily saw that other people had found the same behavior. For now, the unofficial fix is opening the Firefox profiler when playing on Firefox.
Buying a 16 TB hard drive for… purposes.
If it weren’t for YouTube’s shitty war on ad blockers, I would still be able to watch YouTube in 1080p on a 30-buck Android TV thingy.
I would have to check if someone built an alternative app to keep watching it, because the power of the device was no issue. Running on a minimal Kodi installation, it worked just fine.
I used to be able to watch YT on a 30-buck Android TV device on which I installed CoreELEC.
Sadly, the YouTube apps on there stopped working for me a while ago due to the war on ad blockers. But the device was perfectly capable of playing YouTube.
I suppose that with TubeArchivist and Jellyfin you could still somehow watch YouTube.
Around 18-20 W at idle. It can go up to about 40 W at 100% load.
I have an Intel N100; I’m really happy with the performance per watt, to be honest.
Nothing says language of the year better than a language that needs to be compiled to an inefficient interpreted language made for browsers and then grossly stuffed into a stripped-down Chrome engine to serve as a backend. All filled with thousands of dependencies badly managed through npm to overcome the lack of a standard library actually useful for backend stuff.
print("here");
This is heresy.
I grew up with XP; Vista was worse and Windows 7 was just better. Windows 8 was terrible. Windows 10 was better than 8 but worse than 7.
I haven’t even tried Windows 11.
All my homies search for apps first on F-Droid and only use Google Play if there is no other option.
With Caddy you can easily set up a locally issued certificate for HTTPS. It will show a nasty warning in your browser unless you install the CA certificate on the computer you use to visit the site, though.
https://caddyserver.com/docs/automatic-https#local-https
This is the easiest way I know of to do it. Caddy takes almost no configuration to get working.
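For reference, a minimal Caddyfile sketch; the host name and upstream port are made up for the example, adjust them to your setup.

```
# Hypothetical internal site; replace nas.home.arpa and the upstream port with your own.
nas.home.arpa {
	tls internal                  # locally issued certificate from Caddy's own CA
	reverse_proxy localhost:8080  # whatever local service you are exposing
}
```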
Not every program is written for a spacecraft, and most do not need the critical level of safety and efficiency of the code for the Apollo program.
I don’t even know. If memory issues are your concern, then using any language with memory safety built in is the way to go, which is how most things are actually made right now. Unless you are working on legacy applications, most programmers will never actually run into that many memory issues nowadays. Not that most programmers even properly understand memory. Do you think the typical JavaScript bootcamp rookie can even tell when something is stored on the stack or the heap?
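For the record, the distinction in a minimal C sketch; the variable names are just illustrative.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int on_stack = 42;                       /* automatic storage: lives on the stack,
                                                freed automatically when main returns */
    int *on_heap = malloc(sizeof *on_heap);  /* dynamic storage: lives on the heap,
                                                must be freed manually */
    if (on_heap == NULL) return 1;
    *on_heap = 42;

    printf("stack: %d, heap: %d\n", on_stack, *on_heap);

    free(on_heap);  /* forget this and you leak memory, the kind of bug
                       memory-safe languages rule out by construction */
    return 0;
}
```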
You are talking like every piece of human-made code has Linux-kernel levels of quality, and that’s not the case, not by far.
And it doesn’t need to. Not all computer programs are critically important; people be coding in Lua for PICO-8 in a game jam, what’s the issue with them using AI tools for assistance?
And AI has only existed for a couple of years, yet our critically important programs are everywhere, written by smart humans who make mistakes all the time. I still do not see the anti-AI point here.
Also, programming is not concrete, and AI is not sugar. If you use AI to create a fast tree structure and it works fine, it’s not going to poison anything. It will probably be just the same function that the programmer would have written, just faster.
Also, this does not address the fact that if AI is bad because it’s just copying, then it’s the same as the most common programming technique: copying code from Stack Overflow.
I have a genuine question: how many programmers do you think code in the way you just described?