This is bloody hilarious 🤣
I’m not anti-ai, I use generative ai all of the time, and I actually come from a family of professional artists myself (though I am not one). I agree that it’s a useful tool; however, I disagree that it is not destructive or harmful to artists simply because it is most effective in their hands.
It concentrates the power of creativity into firms which can afford to produce and distribute ai tools. While ai models are getting smaller, there are frequently licensing issues involved in these small models (not copyright, but simply utilizing the tools for profit). We have no defined roadmap for the democratization of these tools, and most signs point towards large compute requirements.
It enables artists to effectively steal the intellectual labor of other artists. Just because you create cool art with it doesn’t mean it’s right for you to scrape a book or portfolio to train your ai. This is purely for practical reasons. Artists today work their asses off to make the very product ai stands to consolidate and distribute for pennies on the dollar.
You fail to recognize the possibility that I support ai but oppose its content being copyrightable, purely because firms would immediately utilize this to evade licensing work. Why pay top dollar for a career concept artist’s vision when you can pay a starting liberal arts grad pennies to use the Adobe suite to generate images trained on said concept artist’s work?
Yes, that liberal arts grad deserves to get paid, but they also deserve some potential for career advancement.
Now imagine instead if new laws required generative ai models to license their inputs in order to sell for profit. Sure, small generative ai would still scrape the Internet to produce art, but it would create a whole new avenue for artists to create and license art. Advanced generative ai may need smaller datasets, and small teams of artists may be able to utilize and license boutique models.
I disagree with this reductionist argument. The article essentially states that because ai generation is the “exploration of latent space,” and photography is also fundamentally the “exploration of latent space,” the two are equivalent.
It disregards the intention of copyright. The point isn’t to protect the sanctity or spiritual core of art. The purpose is to protect the financial viability of art as a career. It is an acknowledgment that capitalism, if unregulated, would destroy art and make it impossible to pursue.
Ai stands to replace artists in a way which digital art and photography never really did. It’s not a medium; it is inference. As such, if copyright was ever good to begin with, it should oppose ai until compromises are made.
I think my lead deferred it as a case covered by code inspection. So… probably not! I don’t work at GE anymore 😁
While investigating an uncovered node in some aviation datalink software, I discovered a 15-year-old comment from 1993 along the lines of, “this function never runs, I’ll fix it later.” I wish with all my heart I could have heard their voice. Even if just for a moment.
My hope is that the mechanization of the written word / artistry will result in such a deluge of low-tier nonsense that the people of earth will just stop using the Internet.
Then it can just be me and you ❤️
All of the people involved in the prosecution of the first case should be disbarred and jailed. I can’t believe the poor man endured more than a year of solitary confinement for the crime of being responsible on the Internet. I hope his persecutors choke and die. Disgusting. Human trash.
In a world where arguably the second most advanced LLM on the planet (either GPT-3.5 or Bing’s OpenAI implementation) is completely free to use, why would I want to read anything on your website that wasn’t researched by a human?
I wish I could sear this question into every CEO’s brain.
Original post:
So Long, and Thanks for All the Fish
Hi everyone,
Well, it has been a wild ride. I joined reddit over a decade ago, when it was still much smaller and different from today. I quickly stumbled upon r/theoryofreddit and was fascinated by all the discussion and theories about how communities work. So when mod applications opened up after a while, I applied, which was my first experience modding on reddit. My experiences there also prompted me to start experimenting with ways to make moderation easier through various user scripts and CSS hacks. This eventually resulted in a very early version of toolbox, although some earlier experiments never made it to the general public.
In the decade that followed I was involved in various communities, and toolbox developed into a project used by over 20.000 (twenty thousand) mods all over reddit. But over the past few years reddit has been slowly moving in a direction that I believe is not good for the health of many communities. So even before this whole API debacle properly started, I was already burned out and tired of reddit.
What I said in this post holds true even more today. I am just tired of the platform’s now accelerated decline; see also this comment.
So, over the past two weeks I have decided that I am not going to use reddit anymore.
As a mod, I already quit my last actual subreddit last year (r/history). Yesterday I cleaned up a few of the smaller subreddits I was still involved in as well. As a user I went through all my subscriptions and unsubscribed from all of them, with the exception of r/modnews and r/modcoord. I’ll stick around those two a bit for the meta stuff, certainly to see how things end up. But I think I have invested more than enough time in this platform, probably more than has been healthy at times.
I want to use this post to thank everyone who has been involved with me in a mod team, involved with toolbox and all users of toolbox. “Wait, why is this posted on r/creesch and not r/toolbox?”
Fair question, with a simple answer. This is me saying my goodbyes for now, not strictly a toolbox announcement. While a lot of people see me and toolbox as one and the same, many different people contributed over the years and the project itself is not going away. I am also not going nuclear by disabling it, as that would make me no better than certain admin actions in the past couple of weeks. As I said here two weeks ago, I will speak my mind, but toolbox itself has since its inception been there for all mods to help them out. I am not going to abuse the trust we built over the years by forcing my opinion. “Why not quit reddit entirely, delete your account, be done with it?”
I thought about it. But I am not really the nuclear type. And to be completely honest, over a decade of work and effort is difficult to entirely let go. I really do dislike the direction reddit has chosen to go, but I’d like to be able to check in to see if there is a shift in course. And yes, while reddit profits from the information on reddit, it is also information regular people might benefit from. If I deleted my account, including scrubbing all comments, my voice about what has happened in the past two weeks (years, honestly) would also no longer be there.
This is an interesting take. I suppose in hindsight it was naive of us to think the government wouldn’t catch on and track / tax it.
Just an FYI, defederation doesn’t mean you as a user can’t see any content from a given instance, or vice versa. It’s more that, from the time of defederation, users on the other instance can’t be seen commenting or posting on your instance. I believe there are other consequences too, but it’s not as straightforward as a ban.
Defederation is a feature, not a bug. Lemmy was designed with the idea that instances could be more specific in their content, so for your Lemmy instance to defederate from a Ukraine war footage instance might not be a condemnation so much as a curation decision.
Think of it like this: an instance has the potential to be either a reddit alternative or a collection of related subreddits.
Woah, that’s a lot of mods! I had the same service for a couple years. It was actually more reliable than the copper in our old apartment, because they ran all the cables into an underground box filled with water (Florida). It was a good price too!
Yikes! I wonder how isolated the LED has to be from the CPU power supply to prevent this sort of attack!
I’ve been using LLMs a lot. I use GPT-4 to help edit articles, answer nagging questions I can’t be bothered to research, and other random things, such as cooking advice.
It’s fair to say, I believe, that all general-purpose LLMs like this are plagiarizing all of the time. Much in the way my friend Patrick doesn’t give me sources for all of his opinions, GPT-4 doesn’t tell me where it got its info on baked corn. The disadvantage of this is that I can’t trust it any more than I can trust Patrick. When it’s important, I ALWAYS double check. The advantage is I don’t have to take the time to compare, contrast, and discover sources. It’s a trade-off.
From my perspective, the theoretical advantage of Bing’s or Google’s implementation is ONLY that they provide you with sources. I actually use Bing’s implementation of GPT when I want a quick, real-world reference to an answer.
Google will be making a big mistake by sidelining its sources when open-source LLMs are already overtaking Google’s Bard in quality. Why get questionable advice from Google, when I can get slightly less questionable advice from GPT, my phone assistant, or actual, inline citations from Bing?
Imo, the true fallacy of using AI for journalism or general text lies not so much in generative AI’s fundamental unreliability, but rather in its existence as an affordable service.
Why would I want to parse through AI-generated text on times.com, when for free I could speak to some of the most advanced AI on bing.com, OpenAI’s ChatGPT, Google Bard, or a Meta product? These, after all, are the back ends that most journalistic or general written-content websites are using to generate text.
To be clear, I ask why not cut out the middleman if they’re just serving me AI content.
I use AI products frequently, and I think they have quite a bit of value. However, when I want new, accurate information on current developments, or really anything more reliable or deeper than a Wikipedia article, I turn exclusively to human sources.
The only justification a service has for serving me AI-generated text is perhaps the promise that they have a custom-trained model with highly specific training data. I can imagine, for example, weather.com developing highly specialized AI models which tie into an in-house LLM and provide me with up-to-date and accurate weather information. The question I would have in that case would be: why am I reading an article rather than just being given access to the LLM for a nominal fee? At some point, they are no longer a regular website; they are a vendor for an in-house AI.