AI is fucking so useless when it comes to programming right now.
They can’t even fucking do math. Go make an AI do math right now, go see how it goes lol. Make it a real-world problem and give it lots of variables.
I have Visual Studio and decided to see what Copilot could do. It added 7 new functions to my game with no calls or feedback to the player. When I tested what it did… it used 24 lines of code in a 150-line .cs file to increase the difficulty of the game every time I take an action.
The context here is missing but just imagine someone going to Viridian forest and being met with level 70s in pokemon.
It is not, not useful. Don’t throw a perfectly good hammer in the bin because some idiots say it can build a house on its own. Just like with a hammer, you need to make sure you don’t hit yourself on the thumb, and use it for its intended purpose.
I asked ChatGPT to do a simple addition problem a while back and it gave me the wrong answer.
My favourite AI code test: write code to aim a heliostat mirror at (latitude, longitude) at a target at (latitude, longitude, elevation).
After a few iterations to get the basics in place, “also create the function to select the mirror angle”
A basic fact that isn’t often spelled out is that to reflect a ray you aim the mirror halfway between the source and the target. The AI comes up with the strangest non-working ways of aiming the mirror.
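For what it’s worth, the “halfway” rule itself is only a couple of lines once you have direction vectors. A minimal sketch in Python, assuming you’ve already converted the latitude/longitude/elevation inputs into unit vectors from the mirror toward the sun and toward the target (that conversion is the part left out here, and `mirror_normal` is just a name made up for illustration):

```python
import numpy as np

def mirror_normal(to_sun, to_target):
    """Unit normal a flat heliostat mirror needs so that sunlight
    arriving along -to_sun is reflected toward to_target.

    The law of reflection means the normal must bisect the angle
    between the incoming and outgoing rays, i.e. point "halfway"
    between the sun direction and the target direction.
    """
    to_sun = np.array(to_sun, dtype=float)
    to_target = np.array(to_target, dtype=float)
    to_sun /= np.linalg.norm(to_sun)
    to_target /= np.linalg.norm(to_target)
    bisector = to_sun + to_target
    return bisector / np.linalg.norm(bisector)

# Hypothetical example: sun up and to the south-east, target due north.
# Components are (east, north, up); the numbers are made up for illustration.
print(mirror_normal([0.5, -0.5, 0.7], [0.0, 1.0, 0.0]))
```

The geometry from lat/long to those vectors is where the AI attempts tend to go sideways; the reflection step itself is the easy part.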
Working with AI feels a lot like working with a newbie
I find it useful for learning once you get the fundamentals down. I do it by trying to find all the bugs in the generated code, then see what could be cut out or restructured. It really gives more insight into how things actually work than just regular coding alone.
This isn’t as useful for coding actual programs though, since it would just take more time than necessary.
So true, it’s an amazing tool for learning. I’ve never been able to learn new frameworks so fast.
AI works very well as a consultant, but if you let it write the code, you’ll spend more time debugging because the errors it makes are often subtle and not the types of errors humans make.
deleted by creator
It’s not like I don’t have a basic calculator to test the output, is it?
I might’ve also understated my Python a little bit, as in I understand what the code does. Obviously you could break it; that wasn’t the point. I was more thinking that throwing math problems at what is essentially a language interpreter isn’t the right way to go about things. I don’t know shit though. I guess we’ll see.
I have no idea what you’re trying to say here.
If you want to learn how to code, writing a calculator with a UI isn’t a bad idea. But then you should code it yourself, because otherwise you won’t learn much.
If you want to try and see if LLMs can write code that executes, then fine, you succeeded. I absolutely fail to see what you gain from that experiment, though.
I’ve done a few courses and learned the basics, but it wasn’t until I started using some assistance that I got a deeper understanding of Python in general.
I came in very late, obviously, but I’ve still tried to learn coding on and off by myself since the late ’90s, although I ended up on another career path altogether. I’m in my 40s and I’ve finally at least made some decent executable code.
Made myself a scalable clock since my eyes are failing, for example. It was a success and I use it daily. I would never have figured that out without some AI help. Still had to do some registry tweaking and shit since I’m stuck on Windows on my workstation, but it works wonderfully. Just a little widget, but it improved my life greatly.
I’ve also cobbled together a workable alternative to notepad that I use as a diary of sorts. Never would’ve figured that out alone either.
As I see it, at least whatever AI assistant you use doesn’t give you the gatekeeping or abuse you get if you ask a relatively simple question somewhere else. Kinda like this, I guess.
TL;DR: In some situations our current AIs can be helpful.
Expand that into 10k-line custom programs and you’ll begin having nightmarish issues.
That might be the underlying problem. Software project management around small projects is easy. Anything that has a basic text editor and a Python interpreter will do. We have all these fancy tools because shit gets complicated. Hell, I don’t even like writing 100 lines without git.
A bunch of non-programmers make a few basic apps with ChatGPT and think we’re all cooked.
No doubt, I was merely suggesting that throwing math problems at it might not have been the intended use for what is essentially a language interpreter, obviously depending on the model in question.