

How does this fit into these two pieces of news?
Licensing the algorithm would also have another consequence: SEO optimizers.
Google uses their algorithm to both push ads, and to punish content farms. Imagine for a second what Google results would look like, if content farms could run the algorithm preemptively to optimize massive amounts of AI slop.
The future of all computing is AI. Get on or get left behind.
Satire? Hm… for quite some time now, people have been proposing we get rid of all software and instead use real-time generative AI to render what the software would have done.
AI cosplaying as software… imagine “web development”, where the “browser” is an AI pretending to be a browser, connecting to an AI pretending to be a server… what would “web development” even mean anymore?
Trichromatic NIR upconversion!
That’s more interesting than the modified title, just saying.
I’m saying AI is being shoved into all steps of media processing.
Let me illustrate: this is an AI-focused, AI-corrected, AI-remastered, AI-lifted sticker of a photo of my cat… AI-cropped from a screenshot… that got AI-moderated the moment I uploaded it here.
There are 10 kinds of people: those who think they understand neural networks, those who try to understand neural networks, and those whose neural networks can’t spot the difference.
It’s no coincidence how many people are bad at languages, communication, learning, or teaching. On the bright side, new generations will likely be forced to get better.
What has been done for decades is being turned up to 100,000%. Instead of clever algorithms written directly by people, black-box AI and generative AI are being used to modify content so it better fits the expectations of the old algorithms.
I wouldn’t be surprised if new compression algorithms come out in the coming years, openly taking advantage of generative AI to recreate the “original image”… or the “original intent/concept”?
Do GIMP, Krita, Kdenlive or Inkscape use AI?
There are AI plugins for all of them… but they’re optional for now (2025). Kdenlive is working on integrating generative AI for correction and background removal. The main offender is Adobe, which is the “standard” workflow for most media processing, and is forcing AI everywhere, including something as simple as color curves… then slapping a “made using AI” tag on the output file. Inkscape is foremost an SVG editor, but Adobe Illustrator already has generative AI to allow stuff like rotating vector graphics “in 3D”, so it’s only a matter of time before Inkscape follows suit. Even Windows Notepad got some AI features recently 🤦
AI assisted compression and correction
JPEG compression itself is a sort of “AI lite”: it analyzes chunks of an image for perceptual similarity, to drop “irrelevant” data. Adobe has added a feature that does the same, but uses AI in the analysis, tweaking/generating blocks so there are more similarities. Others will likely follow suit: “it’s lossy compression after all, right? …right?”
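To make the “drop irrelevant data” part concrete: JPEG splits the image into 8×8 blocks, applies a DCT, and quantizes away the coefficients the eye barely notices. A minimal numpy sketch, where a simple threshold stands in for JPEG’s real quantization tables:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis, as used by JPEG on 8x8 blocks.
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

D = dct_matrix()
block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)

coeffs = D @ (block - 128) @ D.T   # forward 2D DCT (samples centered, as in JPEG)
coeffs[np.abs(coeffs) < 20] = 0    # crude "quantization": zero out small coefficients
restored = D.T @ coeffs @ D + 128  # inverse 2D DCT

err = np.abs(restored - block).mean()  # small visual error, real data permanently gone
```

The AI twist described above happens one step earlier: instead of just dropping coefficients, the blocks themselves get nudged so more of them compress alike.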
Lossy audio encoding (MP3, etc.) also has a perceptual profile used to increase block similarities; AI is being added there the same way as with images.
Videos… well, they’re a mix of images and audio, with temporal sequences already breaking the image stream into key frames, intermediate frames, generated frames, etc. Generatively tweaking some of those to make them more similar, within perceptual limits, also improves compression.
Does this only apply to digital media used in mainstream sources or does it mean everyone who uses editing software is using AI?
Main issue lies at the source: cameras
Unless you’re using a large-sensor professional camera, all the “prosumer” and smartphone sensors are… let’s put it mildly… UTTER CRAP. They’re too small, with lenses too bad to avoid circles of confusion, diffraction, or chromatic aberration.
Before it even spits out a “RAW” image, it’s already been processed to hell and back. Modern “better” consumer cameras… use more AI to do a “better” processing job. What you see is way past whatever the camera actually saw.
…and then, it goes into the software pipeline. ☠️
As long as they’re optional, I don’t really see a problem.
Pay with money, pay with ads, pay with privacy, pay with brainwashing… TANSTAAFL outside of charity.
Nowadays, all digital media is becoming AI:
Doesn’t mean “generative AI”, but spotting the difference is only going to become harder and harder.
Sounds like a boon for trans people… and a sensationalized title:
When we attempted to “try on” some products explicitly labeled as swimsuits and lingerie, or to upload photos of young schoolchildren and certain high-profile figures (including Donald Trump and Kamala Harris), the tool would not allow us to.
Google’s own policy requires shoppers to upload images that meet the company’s safety guidelines. That means users cannot upload “adult-oriented content” or “sexually explicit content,” and should use images only of themselves or images that they “have permission to use.”
The reporter admits to having broken those policies, then cries foul when photos of 14+ year olds get a virtual breast augmentation.
Some heads up: if you pay for the 🔵✔️ on 𝕏, a lot of people will instantly block you, and your post comments will get filled by other 🔵✔️ people trying to “market themselves”.
You may want to consider creating a second plain account, with proper tags and old fashioned brand building, to increase your reach. Block every 🔵✔️ on that one, to reduce the noise.
Somewhat ironically, Musk’s changes have split 𝕏 in two:
Don’t get me wrong, the “average” public voice is gone, it’s been replaced by influencer wannabes.
What saves the situation for niche communities, is the BLOCK feature. Just block everyone with a 🔵✔️, follow people you like, and suddenly you find yourself in the Twitter of long ago.
In the Nazi bar analogy, it’s like Musk put up a Nazi-rune-shaped stage in the middle of the bar: everyone with a 🔵✔️ armband is fighting each other to get to the mic on the stage… while it’s all enclosed in a soundproof cage, and random people sitting by the walls keep their conversations to themselves.
Lots of people have left 𝕏 for that reason, either because they want a fair chance to get their voice heard, or because even being aware of the stage fight disgusts them, and that’s fine. Some have stayed and keep ignoring the stage fight, while the stage fight ignores them, with the rare notification for… another 🔵✔️ to block. Pretty much nobody tries to infiltrate the ⚪ discussions, because they need a 🔵✔️ to get their voice heard above others, but if they get one, then they get blocked.
This also doesn’t mean there are no Nazis or other awful people among the ⚪ no-marks, but the loudmouths quickly get pulled into the 🔵✔️ cage fight.
I doubt it’s been fed text about “bergro”, “parava”, and “rortx”, this looks like basic reasoning to me:
For the sake of completeness, this is qwen3:1.7b running on ollama on a smartphone. Its reasoning is more convoluted (and slow), yet the conclusion is the same:
If all bergro are rortx, and all parava are rortx, are all rortx parava?
Okay, let’s see. The user is asking if all rortx are parava, given that all bergro are rortx and all parava are rortx. Hmm, so first, I need to parse the statements correctly.
The first part says “If all bergro are rortx, and all parava are rortx, are all rortx parava?” So, the premises are that all bergro are rortx, and all parava are rortx. The conclusion is that all rortx are parava.
Wait, but the question is a bit confusing. Let me break it down. The premises are:

1. All bergro are rortx.
2. All parava are rortx.

Conclusion: All rortx are parava.
So, the conclusion is that every rortx is a parava. But the premises only say that some things (bergro and parava) are rortx. But there’s no information about the relationship between bergro and parava. Are bergro and parava different sets? Or is there an overlap?
If bergro and parava are distinct groups, then the premises only state that bergro are rortx and parava are rortx. But that doesn’t necessarily mean that rortx are parava. Because rortx could be a subset of bergro or a separate set. For example, if bergro is a subset of rortx, and parava is another subset of rortx, then rortx would have two subsets, but the conclusion would be that all rortx are parava, which would require that rortx is entirely parava. But unless there’s a statement that rortx is a subset of parava, that conclusion wouldn’t hold.
But wait, the premises don’t say that bergro and parava are related. They just state that all bergro are rortx and all parava are rortx. So, if bergro and parava are separate, then rortx could have elements from both. So, the conclusion that all rortx are parava would require that there’s no bergro in rortx. But the premises don’t say that. So, unless there’s an implicit assumption that bergro and parava are disjoint, which isn’t stated here, the conclusion can’t be drawn.
Alternatively, if bergro and parava are the same set, but that’s not stated. The problem is that the premises don’t establish any relationship between bergro and parava. So, the conclusion that all rortx are parava can’t be logically derived from the given premises. Therefore, the answer would be no, not all rortx are parava, unless there’s additional information linking bergro and parava.
The premises state that all bergro are rortx and all parava are rortx. However, there is no information linking bergro and parava. If bergro and parava are distinct groups, rortx could consist of elements from both, making it impossible to conclude that all rortx are parava. Without additional constraints, the conclusion cannot be logically derived.
Answer: No, not all rortx are parava. The premises do not establish a relationship between bergro and parava, so rortx could include elements from both groups.
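The model’s conclusion can be checked mechanically with a counterexample: build explicit sets where both premises hold and see whether the conclusion does too (the element names are arbitrary placeholders):

```python
# Counterexample: both premises hold, the conclusion fails.
bergro = {"b1", "b2"}
parava = {"p1"}
rortx = bergro | parava       # rortx contains everything from both groups

premise_1 = bergro <= parava | rortx and bergro <= rortx  # all bergro are rortx
premise_2 = parava <= rortx   # all parava are rortx
conclusion = rortx <= parava  # all rortx are parava?

# → premises hold, conclusion is False: "b1" and "b2" are rortx but not parava
```

One world where the premises are true and the conclusion is false is enough to show the conclusion doesn’t follow, which is exactly what both models worked out.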
“AI” has been a buzzword basically forever, it’s a moving target of “simulates some human behavior”. Every time it does that, we call it an “algorithm” and move the goalpost for “true AI”.
I don’t know if we’ll ever get AGI, or even want to, or be able to tell if we get a post-AGI. Right now, “AI” stands for something between LLMs, and Agents with an LLM core. Agents benefit from MCP, so that’s good for AI Agents.
We can offload some basic reasoning tasks to an LLM Agent, and MCP connectors let them interact with other services, even other agents. A lot of knowledge is locked in the deep web and in corporate knowledge bases; the safe way to access it will be through agents deciding which knowledge to reveal. MCP is aiming to become the new web protocol for “AI”s, no more, no less.
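For a sense of how thin the protocol layer is: MCP messages are JSON-RPC 2.0, so an agent invoking a tool on a connector comes down to a small JSON payload shaped like the one below (the tool name `search_kb` and its arguments are made up for illustration):

```python
import json

# Illustrative MCP-style tool invocation. MCP frames its messages as
# JSON-RPC 2.0 requests; "tools/call" is the method an agent uses to run
# a tool exposed by a connector.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_kb",                       # hypothetical tool name
        "arguments": {"query": "Q3 churn figures"},
    },
}

wire = json.dumps(request)  # what actually travels over the transport
```

The simplicity is the point: the hard part isn’t the protocol, it’s deciding which tools a connector exposes and what the agent is allowed to reveal.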
Some careless people will get burned, the rest will be fine.
I feel like a better solution is to get an AI SO. Shape them into whatever you like, don’t forget it’s still an AI, and get whatever comfort you need in the moment.
You can even have several at once.
The connectors are still optional.
Haphazard code is not a new thing. Some statistics claim that almost 50% of “vibe coded” websites have security flaws. It’s not much different from the old “12345” password, or the “qwerty” one (not naming names, but I’ve known people using it on government infrastructure), or the “who’d want to hack us?” attitude.
MCP is the right step forward, nothing wrong with it in itself.
People disregarding basic security practices… will suffer, as always… and I don’t really see anything wrong with that either. Too bad for those forced to rely on them, but that’s a legislative and regulatory issue, vote accordingly.
I would still be extremely hesitant to enable any MCP connector on non-local model instances. People need to push harder for local and on-prem AI, it’s the only sane way forward.
One of the worst possible examples: Klarna is a payment processor, and people don’t call their bank to get the same answer the system is already giving them, they call to negotiate something about their money. AIs are at a troubleshooting level, at best capable of very basic negotiation, nowhere near dealing with people actually worried about their money… much less in 2023.
Seems like Klarna fell hook, line, and sinker for the hype. Tough luck, need to know the limits.
Randomly obfuscated database: you don’t get exactly the same data back, and most of the data is lost, but you can sometimes get something similar to the original, if you manage to stumble upon the right prompt.
Hm, makes sense, but I feel like we’re still missing something.
I saw comments about Durov, similar to this investigation, maybe around a month ago.
With the xAI partnership news, I looked into it and found this nice thing:
In Telegram, you can clear messages one by one, or by date range, or use disappearing messages, but this tool still found some I had missed.
(Disclaimer: I got pulled into Telegram by some friends leaving WhatsApp with the policy changes of 2021, my threat model is less one of FSB, and more one of indiscriminate AI siphoning for ad targeting)