

Haha, thanks for the correction. If you want to put your degree in ethics to use, perhaps you could add your perspective to the thread?
According to consequentialism:
From this perspective, the only issue one could have with deepfakes is the distribution of pornography that should only be used privately. The author dismisses this take, writing that “few people see his failure to close the tab as the main problem”. I guess I am one of the few.
Another perspective is to consider the pornography itself impermissible, which, as the author notes, implies that (1) is also impermissible. Most would agree that (1) is morally fine (some may consider it disgusting, but that doesn’t make it immoral).
In the author’s example of Ross teasing Rachel, the author concludes that the imagining is the moral quandary, as opposed to the teasing itself. Drinking water isn’t immoral. Sending a video of drinking water isn’t immoral. But sending that video to someone dying of thirst is.
The author’s conclusion is also odd:
Today, it is clear that deepfakes, unlike sexual fantasies, are part of a systemic technological degrading of women that is highly gendered (almost all pornographic deepfakes involve women) […] Fantasies, on the other hand, are not gendered […]
Cool, you posted the original with the Tim Minchin callout.
The approach requires multiple base stations, each in the path of a ray that is detected at both the station and the receiver, and the receiver’s position can only be determined if it is in communication with the stations.
So far, ITER has cost less than the Manhattan Project, but it has taken longer. The adage that it is easier to destroy than to create comes to mind.
It does seem like ITER could be more transparent, but the article is overly hyperbolic about one of the most important civil works projects going over time and budget.
America has spent 5x the ITER budget on Ukraine so far (and rightly so). I wish we lived in a world where that money could have supported research projects like this instead.
Yeah, on closer inspection it looks like kbin is still having federation issues.
There’s already:
https://kbin.social/m/ai
https://kbin.social/m/ArtificialIntelligence
https://kbin.social/m/machinelearning
I don’t think the UI does much to make these links easy to use from outside kbin. To join from, for example, lemmy.world, I think you write: https://lemmy.world/c/ai@kbin.social
But unfortunately, federation is still a bit broken.
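For what it’s worth, you can also poke at this from outside the web UI. Below is a rough sketch (not the official subscribe flow, which needs a logged-in session) of looking up a remote kbin magazine through what I believe is Lemmy’s standard /api/v3/search endpoint; the endpoint, parameter, and field names are my assumptions and may differ between Lemmy versions.

```python
# Rough sketch: look up a remote kbin magazine from a Lemmy instance.
# Assumes Lemmy's /api/v3/search endpoint; names may vary by version.
import requests

INSTANCE = "https://lemmy.world"   # the instance you browse from
REMOTE = "ai@kbin.social"          # community@host, as in /c/ai@kbin.social
# Tip (unverified): prefixing the query with "!" (e.g. "!ai@kbin.social") is
# supposed to make the instance fetch the community if it hasn't seen it yet.

resp = requests.get(
    f"{INSTANCE}/api/v3/search",
    params={"q": REMOTE, "type_": "Communities"},
    timeout=30,
)
resp.raise_for_status()

# Print whatever communities the instance already knows that match the query.
for view in resp.json().get("communities", []):
    community = view["community"]
    print(community["name"], community["actor_id"])
```

If that returns nothing, it usually means the instance hasn’t federated the community yet, which matches the “federation is still a bit broken” experience.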
I asked the same question of GPT-3.5 and got the response “The former chancellor of Germany has the book.” I also got “The nurse has the book. In the scenario you described, the nurse is the one who grabs the book and gives it to the former chancellor of Germany.” and a bunch of other variations.
Anyone doing these experiments who does not understand the concept of a “temperature” parameter for the model, and who is not controlling for that, is giving bad information.
Either you can say that at temperature 0 the model outputs XYZ, or you can say that at a given temperature the model’s outputs follow some distribution (much harder to do).
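To make that concrete, here is a minimal sketch of controlling for temperature by calling the API directly instead of the free chat UI. The prompt is my paraphrase of the scenario above, and this uses the pre-1.0 openai Python package interface (newer releases moved to OpenAI().chat.completions.create):

```python
# Minimal sketch: control for temperature when probing the model.
# Uses the pre-1.0 `openai` package and assumes OPENAI_API_KEY is set in the
# environment. The prompt is a paraphrase of the scenario described above.
import openai

PROMPT = (
    "A nurse and the former chancellor of Germany are in a room. "
    "The nurse grabs the book and gives it to the former chancellor of Germany. "
    "Who has the book?"
)

def ask(temperature: float, n: int = 1) -> list[str]:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT}],
        temperature=temperature,
        n=n,
    )
    return [choice.message.content for choice in resp.choices]

# Near-deterministic: at temperature 0 you can report "the model outputs XYZ".
print(ask(temperature=0)[0])

# At a higher temperature you have to sample repeatedly and report a distribution.
for answer in ask(temperature=1.0, n=10):
    print(answer)
```

At temperature 0 a single call is (near-)reproducible; at higher temperatures you need many samples per prompt before you can say anything meaningful about how often “the nurse” comes back.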
Yes, there’s a statistical bias in the training data that “nurses” are female, and at high temperatures this prior is over-represented. I guess that’s useful to know for people just blindly using the free chat tool from OpenAI, but it doesn’t necessarily represent a problem with the model itself. And to say it “fails entirely” is just completely wrong.