

Hell, nowadays most major corporations do this. Just nobody uses the scare word “propaganda” to describe it.
Exactly, hence the root of the problem the original meme is getting at…
For healthy working relationships and solid infrastructure you under-promise and over-deliver.
For maximal profit and sustainable business models you over-promise and under-deliver.
It’s fascinating comparing that result to general trends.
It’s not; the underlying data is still just as biased. Taking a bunch of white people and labeling them “ethnically ambiguous” is just statistical blackface.
There’s a specific model for Stable Diffusion called Riffusion that does an okay job. If you want to play with it, I recommend downloading the AUTOMATIC1111 client and installing it from the “plugins” tab.
The academic name for the field is quite literally “machine learning”.
You are incorrect that these systems are unable to create/be creative; you are correct that creativity != consciousness (which is an extremely poorly defined concept to begin with…); and you are partially correct about how the underlying statistical models work. What you’re missing is that by assigning a probabilistic model to objects, you can “think”/“be creative”: these models don’t need to have seen a “blue hexagonal strawberry” in order to reason about what that might mean and imagine what it looks like.
I would recommend this paper for further reading on the topic, and would point out that you are again correct that existing AI systems are far from human level on the proposed challenges, but they are inarguably able to “think”, “learn”, and “creatively” solve those problems.
The person you’re responding to isn’t trying to pick a fight; they’re trying to show you that you have bought wholesale into a logical fallacy and are being extremely defensive about it, to your own detriment.
That’s nothing to be embarrassed about; the “LLMs can’t be creative because nothing is original, so everything is a derivative work” line is a dedicated propaganda effort to further expand copyright and consolidate capital.
I partially agree with you, but I think you’re missing the end goal of Facebook et al.
As HughJanus pointed out, it’s not really any different from a person reading a book, and by that reasoning, using copyrighted material to train models like these falls well within the existing framework of “fair use”.
However, that depends entirely on “the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes.” I agree completely with you that OpenAI’s products/business (the most blatant violator) easily violate “fair use” due to that clause. However, they’re doing it, at least partially, to force the issue on the open question of “how much can public information be privatized?”, with the goal of further privatizing and increasing commercial applications of raw data.
As you pointed out, LLMs can only create facsimiles, not the original work; by that same logic, they can’t exactly replicate their inputs either.
No, I don’t think artists can claim that they own any and all “cheap facsimiles” of their works, but by that same reasoning, none of the models produced should be allowed to be the enforceable property of any individual or company either.
For further reading check out:
“embrace, extend, and exterminate”
ISBNDB estimates there are 158,464,880 published books in existence.
Meta’s annual revenue was ~$156 billion last year.
Assuming a one-time purchase scenario and a $20 average cost, that’s ~$3.2 billion, or ~2% of their annual revenue.
Or you could assume a $0.20 annual license (similar to a lot of technology licenses), or $0.002 per “stream” (which in this instance would be “use of the data to train a model”).
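The back-of-envelope math above can be checked in a few lines. This is a minimal sketch using only the figures from the comment (the ISBNDB book count, ~$156B annual revenue, and the $20 / $0.20 / $0.002 price points); all of those prices are illustrative assumptions, not real licensing terms.

```python
# Figures quoted in the comment above; the per-unit prices are assumptions.
BOOKS = 158_464_880        # ISBNDB estimate of published books
ANNUAL_REVENUE = 156e9     # ~$156B annual revenue

one_time = BOOKS * 20.00       # one-time purchase at $20/book
annual_license = BOOKS * 0.20  # $0.20 per title per year
per_stream = BOOKS * 0.002     # $0.002 per "stream" (one training use)

print(f"One-time purchase: ${one_time / 1e9:.2f}B "
      f"({one_time / ANNUAL_REVENUE:.1%} of annual revenue)")
print(f"Annual license:    ${annual_license / 1e6:.1f}M per year")
print(f"Per-stream fee:    ${per_stream / 1e6:.2f}M per training run")
```

The one-time figure lands at about $3.17B, which is where the “~2% of annual revenue” number comes from; the recurring-license models are two to four orders of magnitude cheaper per year.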
I agree with most of what you said, but if you buy into a lot of the economic paradigms your arguments are based on, you must also realize that those require the copyrighted works to be paid for, and it’s not unreasonable to do so.