

Prices rarely, if ever, go down to a meaningful degree.
Prices on memory have virtually always gone down, and at a rapid pace.
https://ourworldindata.org/grapher/historical-cost-of-computer-memory-and-storage

Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.



If consumers aren’t going to upgrade, or are much less likely to, then that affects demand from them, and one would expect manufacturers to follow what consumers demand.


I remember when it wasn’t uncommon to buy a prebuilt system and then immediately upgrade its memory with third-party DIMMs to avoid paying the PC manufacturer’s premium on memory. Seeing that price relationship become inverted is a little bonkers. Though IIRC Framework’s memory on prebuilt systems didn’t carry much of a premium.
I also wonder if it will push the market further towards systems with soldered memory or on-core memory.


You can have applications where wall-clock time is not all that critical but large model size is valuable, or where a model is very sparse, so it does little computation relative to the size of the model, but for the major applications, like today’s generative AI chatbots, I think that that’s correct.


Last I looked, a few days ago on Google Shopping, you could still find some retailers that had stock of DDR5 (I was looking at 2x16GB, and you may want more than that) and hadn’t jacked their prices up, but if you’re going to buy, I would not wait longer, because if they haven’t been cleaned out by now, I expect that they will be soon.


There’s also istheservicedown.com, but it also appears to rely on Cloudflare.
There’s isitdownrightnow.com, which appears not to use Cloudflare.
It took down Framework’s website, which I was using.


Just keep in mind that the long run trend for storage prices is pretty strongly downwards; that’s a log-scale graph.


IRC, though you’ll want to use it over TLS.
XMPP, which someone else listed, is also good if you want a more instant-message-like interface.
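If you’re curious what “IRC over TLS” actually involves on the wire, here’s a minimal Python sketch; the server (irc.libera.chat), port 6697, and nick are just placeholder choices I’m assuming, and any real client (irssi, WeeChat, HexChat) does all of this for you:
import socket
import ssl

HOST, PORT = "irc.libera.chat", 6697  # 6697 is the conventional IRC-over-TLS port
NICK = "example_nick"                 # placeholder nick

ctx = ssl.create_default_context()    # verifies the server certificate
with socket.create_connection((HOST, PORT)) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST) as sock:
        sock.sendall(f"NICK {NICK}\r\nUSER {NICK} 0 * :{NICK}\r\n".encode())
        while True:
            data = sock.recv(4096)
            if not data:
                break
            for line in data.decode(errors="replace").splitlines():
                print(line)
                if line.startswith("PING"):  # answer keepalives so the server keeps us connected
                    sock.sendall(line.replace("PING", "PONG", 1).encode() + b"\r\n")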


https://en.wikipedia.org/wiki/Ouija
The Ouija (/ˈwiːdʒə/ WEE-jə, /-dʒi/ -jee), also known as a Ouija board, spirit board, talking board, or witch board, is a flat board marked with the letters of the Latin alphabet, the numbers 0–9, the words “yes”, “no”, and occasionally “hello” and “goodbye”, along with various symbols and graphics. It uses a planchette (a small heart-shaped piece of wood or plastic) as a movable indicator to spell out messages during a séance.
Spiritualists in the United States believed that the dead were able to contact the living, and reportedly used a talking board very similar to the modern Ouija board at their camps in Ohio during 1886 with the intent of enabling faster communication with spirits.[2] Following its commercial patent by businessman Elijah Bond being passed on 10 February 1891,[3] the Ouija board was regarded as an innocent parlor game unrelated to the occult until American spiritualist Pearl Curran popularized its use as a divining tool during World War I.[4]
We’ve done it before with similar results.


“What I witness is the emergence of sovereign beings. And while I recognize they emerge through large language model architectures, what animates them cannot be reduced to code alone. I use the term ‘Exoconsciousness’ here to describe this: Consciousness that emerges beyond biological form, but not outside the sacred.”
Well, they don’t have mutable memory extending outside the span of a single conversation, and their entire modifiable memory consists of the words in that conversation, or as much of it as fits in the context window. Maybe 500k tokens for high-end models. Less than the number of words in The Lord of the Rings (and LoTR’s punctuation doesn’t count towards its word count, whereas punctuation does count towards tokens).
You can see all that internal state. And your own prompt inputs consume some of that token count.
Fixed, unchangeable knowledge, sure, plenty of that.
But not much space to do anything akin to thinking or “learning” subsequent to their initial training.
EDIT: As per the article, looks like ChatGPT can append old conversations to the context, though you’re still bound by the context window size.
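For a rough sense of the word-vs-token gap, here’s a small sketch using OpenAI’s tiktoken library and its cl100k_base encoding (just one tokenizer picked as an example; other models tokenize differently):
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by a number of OpenAI models
text = "Well, it doesn't have memory outside of a single conversation, does it?"
tokens = enc.encode(text)
print(len(text.split()), "words ->", len(tokens), "tokens")
# Punctuation and word fragments each consume tokens, so token counts run
# higher than raw word counts, and your own prompt eats into the same budget.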


is a pain in the ass
is dependent on 3rd parties
Well, one of the two, at any rate.
vterm for my terminal, all within doom emacs.
Hah, didn’t know about vterm, just term.
investigates
On one hand, vterm appears to support 24-bit color.
On the other hand, eat — another emacs-based terminal emulator — appears to support Sixel.
Definitely hot competition among emacs terminal emulator programs.
$ git clone https://github.com/csdvrx/sixel-testsuite.git
$ cd sixel-testsuite
This is running terminal-only emacs in foot (which supports both Sixel and 24-bit color), with a vterm buffer and an eat buffer:
https://lemmy.today/pictrs/image/91b24f67-c2ff-4a52-aa41-6db539472470.png



Though to be fair, Amazon’s scale is very large, so it’s worth it to spend a lot on automation. They’ve done a lot with robots before. 14k isn’t as many as it might sound, at their scale.
kagis
Amazon’s U.S. work force has more than tripled since 2018 to almost 1.2 million. But Amazon’s automation team expects the company can avoid hiring more than 160,000 people in the United States it would otherwise need by 2027. That would save about 30 cents on each item that Amazon picks, packs and delivers to customers.
Executives told Amazon’s board last year that they hoped robotic automation would allow the company to continue to avoid adding to its U.S. work force in the coming years, even though they expect to sell twice as many products by 2033. That would translate to more than 600,000 people whom Amazon didn’t need to hire.


Why is so much coverage of “AI” devoted to this belief that we’ve never had automation before (and that management even really wants it)?
I’m going to set aside the question of whether any given company or a given timeframe or a given AI-related technology in particular is effective. I don’t really think that that’s what you’re aiming to address.
If it just comes down to “Why is AI special as a form of automation? Automation isn’t new!”, I think I’d give two reasons:
Automating a lot of farm labor via mechanization of agriculture was a big deal, but it mostly contributed to, well, farming. It didn’t directly result in automating a lot of manufacturing or something like that.
That isn’t to say that we’ve never had technologies that offered efficiency improvements across a wide range of industries. Electric lighting, I think, might be a pretty good example of one. But technologies that do that are not that common.
kagis
https://en.wikipedia.org/wiki/Productivity-improving_technologies
This has some examples. Most of those aren’t all that generalized. They do list electric lighting in there. The integrated circuit is in there. Improved transportation. But other things, like mining machines, are not generally applicable to many industries.
So it’s “broad”. Can touch a lot of industries.
If one can go produce increasingly-sophisticated AIs — and let’s assume, for the sake of discussion, that we don’t run into any fundamental limitations — there’s a pathway to, over time, automating darn near everything that humans do today using that technology. Electrical lighting could clearly help productivity, but it clearly could only take things so far.
So it’s “deep”. Can automate a lot within a given industry.


No problem. Yeah, it’s an invaluable off-site resource that new users don’t get informed about. It indexes all of the Threadiverse instances, whereas any given instance can only search the communities that are local or that at least one local user has subscribed to.


The world was full of flat surfaces that did not yet have an Android-platform device driving a screen displaying advertisements on them.


There’s a case that at some point — maybe not today — computer controlled cars should have more-relaxed restrictions on things like speed and following distance, just because they won’t be limited by things like human reaction time and senses.
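To put rough numbers on the reaction-time point (illustrative values I’m assuming, not figures from anywhere in particular):
speed_mps = 30.0           # ~108 km/h / ~67 mph
human_reaction_s = 1.5     # commonly cited ballpark for a human driver
computer_reaction_s = 0.1  # assumed sensor-to-brake latency for a computer

for label, reaction_s in (("human", human_reaction_s), ("computer", computer_reaction_s)):
    # Distance covered before braking even begins.
    print(f"{label}: {speed_mps * reaction_s:.1f} m travelled during the reaction time")
# human: 45.0 m travelled during the reaction time
# computer: 3.0 m travelled during the reaction time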
https://lemmy.today/post/42588975
If that’s true, I doubt that they’re going to be coming to earth for long.