Just write the Rust build configuration as a Rust struct at this point.
- 29 Posts
- 195 Comments
HiddenLayer555@lemmy.ml to Programmer Humor@lemmy.ml • Incredible stochastic algorithm, gets more reliable the larger your input, incredibly fast, trivial to implement and deterministic on its inputs (English)
20 · 8 days ago
Just put “Precondition: x must not be prime” in the function doc and it’ll be 100% accurate. Not my fault if you use it wrong.
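The joke, sketched as a hypothetical Python function (the name and docstring are illustrative, not from the post): by the prime number theorem, primes thin out as numbers grow, so a constant answer gets ever more "reliable."

```python
def is_prime(n: int) -> bool:
    """Incredible stochastic primality test.

    Precondition: n must not be prime. Not my fault if you use it wrong.

    Deterministic, O(1), trivial to implement, and since primes near n
    have density roughly 1/ln(n), the error rate on random inputs tends
    to zero as n grows.
    """
    return False

# "Correct" on most large inputs: 1_000_000 is not prime.
print(is_prime(1_000_000))  # False
```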
HiddenLayer555@lemmy.ml to Open Source@lemmy.ml • Time for Open Source Community EV's to Be Made. Anybody want to do something like that? (English)
12 · 8 days ago
Yeah, reading it back I realize I was being an ass. I apologize.
HiddenLayer555@lemmy.ml to Open Source@lemmy.ml • Time for Open Source Community EV's to Be Made. Anybody want to do something like that? (English)
1 · 9 days ago
deleted by creator
HiddenLayer555@lemmy.ml to Programmer Humor@lemmy.ml • Amazon's internal agentic tool decided that existing code was inadequate and decided to replace it taking down a part of AWS for 13 hours, and was not the first time it had happened. 🤣 (English)
12 · 9 days ago
How the fuck does a tech company as big as Amazon have this running in production and not a test/staging environment? This is the kind of mistake a platform run by one person makes, and even then they probably won’t make it again.
Huh, something tells me that working in FAANG doesn’t actually make you a better engineer; they just have more money to not throw at R&D, which somehow makes you feel superior to every other engineer 🤔
HiddenLayer555@lemmy.ml to Open Source@lemmy.ml • Time for Open Source Community EV's to Be Made. Anybody want to do something like that? (English)
152 · 9 days ago
Buses. They’re called buses.
HiddenLayer555@lemmy.ml to Programmer Humor@lemmy.ml • Anyone else having issues with YouTube right now? (English)
32 · 12 days ago
For-profit corporate tech: goes down all the time and is horrifically inefficient despite having trillions to throw at R&D.
Some random critical infrastructure open source project with three maintainers basically working for free: The most goddamn robust and optimized code to ever touch your silicon.
HiddenLayer555@lemmy.ml to Programmer Humor@lemmy.ml • With great power...ignorance is bliss? (English)
3 · 17 days ago
Unlike C++ though, explosives usually degrade over time and become less dangerous.
Wym? You mean you don’t like typing out `unsigned long long` a hundred times?
`main =`
This message was brought to you by the Haskell gang

`let () =`
This message was brought to you by the OCaml gang

This message was brought to you by the Python gang (only betas check `__name__`; assert your dominance and force every import to run your main routine /s)
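For contrast, a minimal sketch of the Python entry-point convention the joke is poking at (module contents are illustrative):

```python
def main() -> int:
    print("This message was brought to you by the Python gang")
    return 0

# Only betas check __name__: without this guard, any `import` of the
# module would run main() as a side effect; with it, main() runs only
# when the file is executed directly as a script.
if __name__ == "__main__":
    main()
```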
HiddenLayer555@lemmy.ml to Programmer Humor@lemmy.ml • Fast-paced and exciting environment (English)
12 · 22 days ago
They’re talking about your commute on the highway to some office park in the middle of nowhere.
HiddenLayer555@lemmy.ml to Programmer Humor@lemmy.ml • why does my knife need a web browser smh (English)
42 · 24 days ago
Unrelated, but anyone else think it’s really weird that we just casually accept our food utensils containing chromium? Like, I know it’s an alloy and not just free chromium, but would we accept a lead alloy spoon? Probably not, especially with most food being acidic. Honestly I’m just waiting for the paper that says we’ve been slowly poisoning ourselves with stainless steel every time we eat.
HiddenLayer555@lemmy.ml to Programmer Humor@lemmy.ml • ChatGPT apparently got rewarded for using its built-in calculator during training, and so it would covertly open its calculator, add 1+1, and do nothing with the result, on 5% of all user queries (English)
11 · 24 days ago
So that’s what all the DRAM they scalped is storing.
*Junior devs
Senior devs are more likely to write one-liners from their Vim window.
What about the IP issues? Not even talking about the “ethics” of “IP theft via AI” or anything; you just know a company like Microsoft or Apple will eventually try suing an open source project over AI code that’s “too similar” to their proprietary code. Doesn’t matter if they’re doing the same to a much greater degree; all that matters is they have the resources to sue open source projects and not the other way around. If a tech company can get rid of the competition by abusing the legal system, you just know they will, especially if they can also play the “they’re knowingly letting their users use pirated media that we own with their software” card on top of it.
HiddenLayer555@lemmy.ml to Open Source@lemmy.ml • Histomat of F/OSS: We should reclaim LLMs, not reject them (English)
14 · 2 months ago
Locally run models use a fraction of the energy. Less than playing a game with heavy graphics.
HiddenLayer555@lemmy.ml to Open Source@lemmy.ml • GenAI has started to kill open source projects (English)
2 · 2 months ago
The question is: What is an effective legal framework that focuses on the precise harms, doesn’t allow AI vendors to easily evade accountability, and doesn’t inflict widespread collateral damage?
This is entirely my opinion and I’m likely wrong about many things, but at minimum:
- The model has to be open source and freely downloadable, runnable, and copyleft, satisfying the distribution license requirements of copyleft source material (I’m willing to give a free pass to making it copyleft in general, as different copyleft licenses can have different and contradictory distribution license requirements, but IMO the leap from permissive to copyleft is the more important part). I suspect this alone will kill the AI bubble, because as soon as they can’t exclusively profit off it they won’t see AI as “the future” anymore.
- All training data needs to be freely downloadable and independently hosted by the AI creator. Goes without saying that only material you can legally copy and host on your own server can be used as training data. This solves the IP theft issue, as IMO if your work is licensed such that it can be redistributed in its entirety, it should logically also be okay to use it as training data. And if you can’t even legally host it on your own server, using it to train AI is off the table. And the independently hosted dataset (complete with metadata about where it came from) also serves as attribution, as you can then search the training data for creators.
- Pay server owners for use of their resources. If you’re scraping for AI you at the very least need to have a way for server owners to send you bills. And no content can be scraped from the original source more than once; see point 2.
- Either have a mechanism of tracking acknowledgement and accurately generating references along with the code, or, if that’s too challenging, I’m personally also okay with a blanket policy where anything AI generated is public domain. The idea that you can use AI generated code derived from open source in your proprietary app, and can then sue anyone who has the audacity to copy your AI generated code, is ridiculous and unacceptable.
HiddenLayer555@lemmy.ml to Open Source@lemmy.ml • GenAI has started to kill open source projects (English)
14 · 2 months ago
> “Wait, not like that”: Free and open access in the age of generative AI
I hate this take. “Open source” is not “public domain” or “free rein to do whatever the hell you want with no acknowledgement to the original creator.” Even the most permissive MIT license has terms that every single AI company shamelessly violates. All code derived from open source code needs to at the very least reference the original author, so unless the AI can reliably and accurately cite where the code it generates came from, all AI generated code that gets incorporated into any publicly distributed software violates the license of every single open source project it has ever scraped.
That’s saying nothing about projects with copyleft licenses that place conditions on how the code can then be distributed. Can AI reliably avoid using information from those codebases when generating proprietary code? No? And that’s not a problem because?
I absolutely hate the hypocrisy that permeates the discourse around AI and copyright. Knocking off Studio Ghibli’s art style is apparently the worst atrocity you can commit but god forbid open source developers, most of whom are working for free, have similar complaints about how their work is used.
Just because you “can’t” obey the license terms due to some technical limitation doesn’t mean you deserve a free pass from them. It means the technology is either too immature to be used or shouldn’t be used at all. Also, why aren’t they using LLMs when scraping to read the licenses and exclude anything other than pure public domain? Or better yet, use literally last century’s technology to read the robots.txt and actually respect it. It’s not even a technical limitation; it’s a case of “doing the right thing is too restrictive and won’t let us accomplish what we want, so we demand that the right thing be redefined to cover what we’re already doing.”
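For what it's worth, "last century's technology" is sitting in the Python standard library. A minimal sketch of a crawler honoring robots.txt (the rules, bot name, and URLs here are made up; a real crawler would fetch the site's actual robots.txt with `rp.read()`):

```python
from urllib.robotparser import RobotFileParser

# Parse a hypothetical site's robots.txt from in-memory lines.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

def may_scrape(url: str) -> bool:
    # Consult the robots rules before ever requesting the page.
    return rp.can_fetch("ExampleBot", url)

print(may_scrape("https://example.com/public/page"))   # True
print(may_scrape("https://example.com/private/data"))  # False
```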
Open source only has anywhere between one and two core demands: Credit me for my work and potentially distribute derivatives in a way I can still take advantage of. And even that’s not good enough for these AI chuds, they think we’re the unreasonable ones for having these demands and not letting them use our code with no strings attached.
> This is where many creators find themselves today, particularly in response to AI training. But the solutions they’re reaching for — more restrictive licenses, paywalls, or not publishing at all — risk destroying the very commons they originally set out to build.
Yeah blame the people getting exploited and not the people doing the exploiting why don’t you.
> Particularly with AI, there’s also no indication that tightening the license even works. We already know that major AI companies have been training their models on all rights reserved works in their ongoing efforts to ingest as much data as possible. Such training may prove to have been permissible in US courts under fair use, and it’s probably best that it does.
No. Fuck that. There’s nothing fair about scraping an independent creator’s website (costing them real money) and then making massive profits from it. The creator literally fucking paid to have their work stolen.
> If a kid learns that carbon dioxide traps heat in Earth’s atmosphere or how to calculate compound interest thanks to an editor’s work on a Wikipedia article, does it really matter if they learned it via ChatGPT or by asking Siri or from opening a browser and visiting Wikipedia.org?
Yes. And the fact that it’s stolen isn’t even the biggest problem by a long shot. In fact, even Wikipedia is a pretty shitty source, do what your high school teacher said you should do and search Wikipedia for citations, not the articles themselves.
Don’t let AI teach you anything you can’t instantly verify with an authoritative source. It doesn’t know anything and therefore can’t teach anything by definition.
> Instead of worrying about “wait, not like that”, I think we need to reframe the conversation to […] “wait, not in ways that threaten open access itself”.
Okay, let’s do that then. All AI training threatens open access itself. If not by ensuring the creator can never make money to sustain their work, then by LITERALLY COSTING THE CREATORS MONEY WHEN THEIR CONTENT IS SCRAPED! So the conclusion hasn’t changed.
> The true threat from AI models training on open access material is not that more people may access knowledge thanks to new modalities. It’s that those models may stifle Wikipedia and other free knowledge repositories, benefiting from the labor, money, and care that goes into supporting them while also bleeding them dry. It’s that trillion dollar companies become the sole arbiters of access to knowledge after subsuming the painstaking work of those who made knowledge free to all, killing those projects in the process.
And how does shaming the victims of that knowledge theft for having the audacity to try and do something about it help exactly?
> Anyone at an AI company who stops to think for half a second should be able to recognize they have a vampiric relationship with the commons.
> […]
> And yet many AI companies seem to give very little thought to this,
“Anyone at a Southern slave plantation who stops to think for half a second should be able to recognize they have a vampiric relationship with their black slaves.” Yeah, they know. That’s the point.

Humans can barely write safe C code, so I definitely don’t trust AI to. I’m not even blanket against AI assistance in programming, but there are way too many hidden landmines in C for an LLM to be reliable with.