• 0 Posts
  • 18 Comments
Joined 2 years ago
Cake day: July 3rd, 2023


  • This feels a little bit like Brainfuck tbh.

    For what it’s worth, I can think of one thing that would make brainfuck even worse: instead of using 8 arbitrary symbols for its instructions (the full set is > < + - . , [ and ]), use the 8 most common letters of the alphabet. Since it ignores all other characters, all of your comments would need to be written without those 8 letters.

    For example, “Hello World” in brainfuck is the following:

    ++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.
    

    If we instead transposed those 8 instructions onto the 8 most common letters of the alphabet, it would look more like this:

    eeeeeeeeaneeeeaneeneeeneeenesssstonenentnneasostonnIntttIeeeeeeeIIeeeInnIstIsIeeeIttttttIttttttttInneIneeI
    
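    For anyone curious, the swap itself is trivial to do programmatically. Here’s a minimal Python sketch, assuming the same letter mapping as the example above (the mapping for “,” is a guess, since the program never uses it):

    # Swap each brainfuck instruction for one of the most common
    # English letters. The mapping mirrors the example above;
    # ',' -> 'h' is an assumption because ',' never appears.
    TRANSPOSE = str.maketrans("+-><[].,", "etnsaoIh")

    hello_world = "++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++."
    print(hello_world.translate(TRANSPOSE))  # prints the letter-only version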



  • Yup. Rand() picks a fresh random float for each row. By default I believe it’s anywhere between 0 and 1, so it might divide the first bill by 0.76, the second by 0.23, the third by 0.63, and so on. You end up with a completely garbage database, because you can’t even undo it by multiplying all of the numbers by one set value.
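
    A tiny Python sketch of why that’s unrecoverable (the amounts are made up; the real incident would have been something like dividing a column by RAND() in an UPDATE):

    import random

    bills = [100.00, 250.50, 75.25]                      # made-up billing amounts
    corrupted = [amount / random.random() for amount in bills]
    # Every row was divided by its own throwaway value in [0, 1),
    # and those divisors were never stored anywhere, so there is no
    # single factor you can multiply by to get the originals back.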


  • Was going to say the same. Windows and Linux both use “lazy” ways of deleting things, because there’s not usually a need to actually wipe the data. Overwriting the data takes a lot more time, and on an SSD it costs valuable write cycles. Instead, the filesystem simply marks the space as usable again and removes the OS’s references to the file. But the data itself still exists on the drive; the space has just been marked as writeable again.

    There are plenty of programs that will be able to read that “deleted” content, because (again) it still exists on the drive. If you just deleted it and haven’t used the drive a lot since then, it’s entirely possible that the data hasn’t been overwritten yet.

    You need a form of secure delete, which doesn’t just mark the space as usable. A secure delete overwrites the data with junk data, essentially white-noise 1’s and 0’s, so the data is completely gone instead of simply being marked as writeable.
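
    As a rough sketch of the idea in Python (an illustration only; a purpose-built tool is the right answer, and on SSDs wear leveling can leave old copies of the blocks behind regardless):

    import os

    def overwrite_and_delete(path, passes=1):
        # Overwrite the file's contents with random bytes, then unlink it.
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))   # junk data over the old contents
                f.flush()
                os.fsync(f.fileno())        # push the overwrite out to the drive
        os.remove(path)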



  • I’m imagining an event like the old school Punkin Chunkin, where people build elaborate devices to try and fling a pumpkin as far and as accurately as possible. Call it The Billionaire Fling and wherever the billionaires land is where they get buried.

    Have amusement rides, bounce houses, fair food, etc, and donate all proceeds to food banks and homeless shelters.


  • It isn’t compressible at all, really. As far as a compression algorithm is concerned, it just looks like random data.

    Imagine trying to compress a text file. Each letter normally takes 8 bits to represent: the computer reads 8 bits at a time and knows which character to display. It has to read all 8 bits even when most of them are “empty,” simply because there’s no way of marking where one letter stops and the next begins. It’s all just 1’s and 0’s, so it’s not like you can insert “next letter” flags in there. But we can cut that down.

    One of the easiest ways to do this is to count all the letters, then sort them from most to least common. Then we build a tree, with each character being a fork. You start at the top of the tree, and follow it down. You go down one fork for 0 and read the letter at your current fork on a 1. So for instance, if the letters are sorted “ABCDEF…” then “0001” would be D. Now D is represented with only 4 bits, instead of 8. And after reading the 1, you return to the top of the tree and start over again. So “01000101101” would be “BDBAB”. Normally that sequence would take 40 bits to represent, (because each character would be 8 bits long,) but we just did it in 11 bits total.

    But notice that this also has the potential to produce letters that are MORE than 8 bits long. Following that same pattern, “I” would be 9 bits, “J” would be 10, and so on. We still achieve compression because the more common (shorter) letters get used constantly and the less common (longer) letters barely get used at all.

    Encryption undoes this completely, because as far as compression is concerned the data is completely random. When you look at random data with no discernible pattern, counting the characters and sorting them by frequency is a lesson in futility. Every character gets used about equally, so even the “most frequent” ones are only ahead by a sliver of random chance. Even if the frequencies still happen to line up with my earlier pattern, the number of Z’s is so close to the number of A’s that the file ends up longer than before. Remember, the compression only works when the most frequent characters really are used far more often than the rest. Since a lot of the codes are longer than 8 bits, and the characters wearing those long codes show up just as often as the ones with short codes, the scheme fails and actually produces a file that is larger than the original.
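
    If you want to see that in numbers, here’s a toy Python sketch of the scheme described above: rank the symbols by how often they appear, then charge rank + 1 bits per occurrence (codes “1”, “01”, “001”, and so on). Run it on some repetitive English text and on random bytes standing in for encrypted data, and the difference is stark. (The sample text is arbitrary; any ordinary prose works.)

    from collections import Counter
    import os

    def toy_compressed_bits(data: bytes) -> int:
        # Rank the byte values by frequency; the most common one costs
        # 1 bit per occurrence, the next costs 2 bits, and so on.
        ranked = [sym for sym, _ in Counter(data).most_common()]
        cost = {sym: rank + 1 for rank, sym in enumerate(ranked)}
        return sum(cost[b] for b in data)

    text = b"it was the best of times it was the worst of times " * 50
    noise = os.urandom(len(text))   # stands in for encrypted data

    print(len(text) * 8, toy_compressed_bits(text))    # plain text: well under 8 bits per byte
    print(len(noise) * 8, toy_compressed_bits(noise))  # random bytes: far more than 8 bits per byte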



  • Not only that; you have to pay for updates too. Supposedly it’s because Apple takes time to verify that the app is legit and isn’t going to do anything nefarious. They don’t want a bad actor to get a legit app onto the store, then later push an update that infects everyone with a virus.

    But apparently a company did a study and realized that app testing rarely made it past the main page, with testers spending ~15-20 seconds per app. They’d basically open it, and if it looked like it did what it said, they didn’t bother digging any deeper.





  • This is likely a server distribution issue, not an intentional act on Reddit’s side.

    Basically, big websites need a way to scale up to a huge number of users. A single server would quickly light on fire if it’s handling all of the API requests from every single user. Every single home feed refresh, comment, post, etc… So instead, you have a bunch of smaller servers, which all get the same info from a master server. Like satellites orbiting a central hub.

    The master server is able to keep up, because it’s only handling the update requests from the satellite servers. This master server basically holds all the “info” about the site, and all the smaller servers push/pull updates from it. The master server never has to deal with all of the small “user wants to refresh their feed” types of requests, because the satellite server already has a local copy of the info. No single satellite server gets bogged down by users, because there are hundreds (or even thousands) of them, and the central server is only having to deal with the updates.

    Another way to think about it is like a tech support company. You call in and talk to a tech support representative. You don’t talk to the CEO for minor tech support questions, because they’re only one person and can’t field every single call. So there is a small army of tech support roles, who deal with the small day-to-day stuff, then the boss deals with bigger things like policy changes.

    But they’re likely going to prioritize recent content, because users want to be able to interact in near real time. So any older content updates (like edits or deleting old comments) will likely be queued behind all of the more immediate stuff like new comments and posts.

    So let’s say you make a comment. Your request goes to one of the satellite servers. That server takes your request, processes it, and sends it off to the master server. The master server queues that update to get added based on priority, then all the other satellite servers pull that update from the master.

    So on your end, you made a comment, saw that it was processed by the satellite server immediately, and your comment appeared. Yay! But someone who is across the country may not see your specific comment immediately, because they’re on a different satellite server which hasn’t received that update yet. It may take a few minutes to get distributed over to the other satellite servers. And this delay is compounded if things are being pushed back in the queue due to being a lower priority.

    So how would this cause comments to get undeleted? When you’re editing then deleting 2000 posts, that’s (at minimum) 4000 updates that you’re sending to your satellite server. So if you come back to it a day later and happen to connect to a different satellite server, it may not have received all of those updates yet because they’re still queued.

    Yes, it’s possible that Reddit is somehow automatically detecting things like PowerDeleteSuite and somehow restoring the edited/deleted comments from a backup. But the much more likely scenario is that your edits/deletes never actually made their way to the satellite servers due to being low priority, or got lost in the queue and timed out.
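
    If it helps, here’s a stripped-down Python sketch of that pattern (every name here is made up, and real infrastructure is far more involved): a master holds a priority queue of pending updates, a satellite only pulls a limited number of them per sync, and a low-priority delete of an old comment sits behind the flood of new content.

    import heapq
    import itertools

    HIGH, LOW = 0, 1             # new posts/comments vs. edits/deletes of old content
    order = itertools.count()    # tie-breaker so equal priorities stay in submission order

    master_queue = []            # the master server's pending updates

    def submit(priority, comment_id, text):
        heapq.heappush(master_queue, (priority, next(order), comment_id, text))

    def sync(satellite, budget=5):
        # One replication cycle: the satellite only has time for `budget` updates.
        for _ in range(min(budget, len(master_queue))):
            _, _, comment_id, text = heapq.heappop(master_queue)
            satellite[comment_id] = text

    satellite_a = {"old_comment": "original text"}   # the copy another user reads from

    submit(LOW, "old_comment", "[deleted]")          # your delete request
    for i in range(20):                              # meanwhile, new content pours in
        submit(HIGH, f"new_comment_{i}", "fresh post")

    sync(satellite_a)
    print(satellite_a["old_comment"])   # still "original text": the delete is queued behind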


  • Well, kind of. They were already working on the official app; buying AlienBlue was simply a way to force a migration. AlienBlue had very little impact on the official app’s functionality, because Reddit wasn’t interested in AlienBlue’s code at all. The buyout was for the user base, not the app itself.

    This most recent API change is simply the next step in the process. They realized they couldn’t buy out every single third party app, so they just cut off access instead.