
  • The larger issue is that anyone who controls a Steam developer account has the ability to install and run unsandboxed software on the computer of any user who owns a game from that developer.

    And you have to remember that the party in control of the account doesn’t even need to be the people who originally developed the thing. Publishers go under and get purchased all the time. It’d also be possible to compromise the build systems of a publisher.

    This one was apparently caught by users because it acted in a particularly incautious fashion. But it’d be pretty easy to write code that doesn’t do that. An example would be putting, say, an intentional buffer overflow in a game that phones home. That’s pretty hard to catch, and deniable even if it is caught, since all anyone finds is the buffer overflow. Then the game reports enough information (like, say, the configured full name of the user on the computer, which I’m sure plenty of games send today) to indicate whether a user is a desirable target; the remote server would also have the IP. If they are, an exploit payload gets pushed over. It’s not easy to pick up on something like that in any trivial way.

    There hasn’t been a “big disaster” yet, or at least not one we know about, but I don’t think there’s going to be a real fix other than Steam switching to running games in some form of isolated sandbox.




  • I agree that it’s less critical than it was at one point. Any modern filesystem, including ext4 and btrfs, isn’t at risk of filesystem-level corruption from power loss, and a DBMS like PostgreSQL or MySQL should handle it at the application level. That being said, there is still other software out there that may take issue with being interrupted. An apt upgrade is not guaranteed to handle power loss cleanly, for example. And I’m not too sanguine about hardware not being bricked if I lose power while fwupd is updating the firmware on attached hardware. Maybe a given piece of hardware has a safe, atomic upgrade procedure…and maybe it doesn’t.

    That does also mean, if there’s no power backup at all, that one won’t have the system available for the duration of the outage. That may be no big deal, or might be a real pain.
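    On the “safe, atomic upgrade procedure” point: at the file level, the pattern looks something like this minimal Python sketch (write to a temporary file, fsync it, then atomically rename over the old file; the helper name is just for illustration):

    ```python
    import os

    def atomic_replace(path: str, data: bytes) -> None:
        """Leave either the old file or the new one after a power loss, never a mix."""
        tmp = path + ".tmp"
        fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
        try:
            os.write(fd, data)
            os.fsync(fd)      # force the new contents to stable storage first
        finally:
            os.close(fd)
        os.rename(tmp, path)  # atomic on POSIX filesystems
        dirfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
        try:
            os.fsync(dirfd)   # make the rename itself durable
        finally:
            os.close(dirfd)
    ```

    Firmware updates are harder because the “rename” step has no equivalent unless the hardware provides one, like dual firmware slots.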


  • Yeah, I listed it as one possibility, maybe the best one I can think of, but I also laid out why I’ve got some issues with that route and why it wouldn’t be my preferred one. Maybe it is the best generally available right now.

    The “just use a UPS plus a second system” route makes a lot of sense with diesel generator systems, because there the hardware physically cannot come up to speed in time. A generator cannot start in 10ms, so you need a flywheel or battery or some other kind of energy-storage system in place to bridge the gap…but that shouldn’t be a fundamental constraint on those home large-battery backup systems. They don’t have to be equipped with an inverter able to come online in 10ms…but they could. In the generator scenario, it’s simply not an option.

    I’d like the computer to have, if possible, a “unified” view of all of the backing storage systems. In the generator case, the “time remaining” is a function of the fuel in the tank, and I’m pretty sure it’s not uncommon to have some kind of secondary fuel storage that the system can’t measure; I remember reading about a New Orleans employee during Hurricane Katrina who stayed behind to keep a datacenter functioning, mostly by hauling drums of diesel up the stairs to the generator. But that’s not really a fundamental issue with those battery backup systems, not unless someone is planning on hauling more batteries in.

    If one gets a UPS and then backs it with a battery backup system, then there are two sets of batteries (one often lead-acid, with a shorter lifespan) and multiple inverters and battery charge controllers layered in the system. That’s not the end of the world, more a “throw some extra money at it” issue, but it does mean buying redundant hardware.


  • I’ll add one other point that might affect people running low-power servers, which I believe some people here run for low-compute-load stuff like home automation: my past experience is that low-end, low-power computers often have inexpensive power supplies that are especially intolerant of wall-power issues. I have had multiple consumer broadband routers and switches get into a wonky, manual-reboot-requiring state after brownouts or power loss, even when other computers in the house continued to function without issue. I’d guess that those might be particularly sensitive to a longer delay in changing over to a backup power source, and that Raspberry Pi-class machines might have similarly vulnerable power supplies. I suppose that for devices with standard barrel connectors and voltage levels, one could probably find a more expensive power supply that can handle dirtier power.

    If you run some form of backup power system that powers them, have you had issues with Raspberry Pis or consumer internet routers after power outages?



  • Like, the Powerwall things? Yeah, sure, they’re in the same sort of class. I think — not gonna go looking through all of 'em — that the things I linked to above all are intended to have someone plug devices directly into them, and the Powerwalls get wired into the electrical panel, but same basic idea. They aren’t really devices where energy density matters all that much, because once you put the battery somewhere, it probably isn’t going to move much after that.


  • If people want to get one for the hell of it, I’m not going to stand in their way, but I really don’t think that this product plays to the strengths of sodium-ion batteries.

    My understanding is that sodium-ion batteries are not as energy-dense, but are expected to be cheaper per-kilowatt-hour than lithium-based batteries.

    But this is a small, very-expensive-relative-to-storage-capacity, portable battery.

    I’d think that sodium-ion batteries would be more interesting as an alternative in that other role: large-capacity, rarely-moved batteries used for home backup during power outages, stuff like that. Maybe grid buffering.
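    To make “very-expensive-relative-to-storage-capacity” concrete, a back-of-the-envelope comparison in Python, where every figure is a made-up round number for illustration rather than a real quote:

    ```python
    # All prices and capacities below are hypothetical, purely to illustrate
    # why $/kWh is the metric that matters for stationary storage.
    def usd_per_kwh(price_usd: float, capacity_wh: float) -> float:
        return price_usd / (capacity_wh / 1000)

    print(usd_per_kwh(300, 250))     # small portable pack: $1200/kWh
    print(usd_per_kwh(4000, 13500))  # Powerwall-class home unit: ~$296/kWh
    ```

    If sodium-ion’s advantage is cost per kilowatt-hour rather than density, it helps most where the denominator is big.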


  • Facts are not copyrightable, just their presentation. So I don’t think a court can hold that it’s impossible to legally summarize material. A court is going to say that some form of summary is legal.

    On the other hand, simply taking material and passing it through an AI and producing the same material as the source — which would be an extreme case — is definitely copyright infringement. So there’s no way that a court is going to just say that any output from an AI is legal.

    We already have criteria for what’s infringing, whether a work is “derivative” or not.

    My bet is that a court is going to tell Brave “no”, and that it’s up to Brave to make sure that any given work it produces isn’t derivative, using existing case law. Like, that’s a pain for AI summary generators, but it kind of comes with the field.

    Maybe it’s possible to ask a court for clearer and harder criteria for what makes a work derivative or not, if we expect to be bumping up against the line, but my guess is that summary generators aren’t very impacted by this compared to most AI and non-AI uses. If the criteria get shifted to be a little bit more permissive (“you can have six consecutive words identical to the source material”, say) or less permissive (“you can have three consecutive words identical to the source material”), my guess is that it’s relatively easy for summary generators to update and change their behavior, since I doubt that people are keeping these summaries around.
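    As a sketch of how a summary generator could mechanically enforce a consecutive-words criterion like the hypothetical ones above (the six-word threshold is just that hypothetical, not real case law):

    ```python
    def longest_shared_run(source: str, summary: str) -> int:
        """Length of the longest run of consecutive words shared by both texts."""
        src = source.lower().split()
        summ = summary.lower().split()
        best = 0
        for i in range(len(summ)):
            for j in range(len(src)):
                k = 0
                while (i + k < len(summ) and j + k < len(src)
                       and summ[i + k] == src[j + k]):
                    k += 1
                best = max(best, k)
        return best

    MAX_RUN = 6  # the hypothetical "six consecutive words" criterion
    source = "the quick brown fox jumps over the lazy dog near the river"
    summary = "a fox jumps over the lazy dog by the water"
    run = longest_shared_run(source, summary)
    print(run, "ok" if run <= MAX_RUN else "regenerate the summary")
    ```

    Which is part of why a shifted threshold seems cheap to comply with: it’s one constant in a post-processing check, not a retraining problem.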



  • “Where to find the time of day changes depending on what [driving] mode you’re in,” he said. “The buttons that go through your six favorite channels don’t work if it’s satellite radio channels. It takes so many tries to hit one button in your jiggly car, and it just doesn’t work.”

    Well, Woz. You’re famous for doing a universal control panel for another prominent piece of consumer electronics and figuring out how to interface it to lots of different brands.

    https://en.wikipedia.org/wiki/Universal_remote

    In 1987, the first programmable universal remote control was released. It was called the “CORE” and was created by CL 9, a startup founded by Steve Wozniak, the inventor of the Apple I and Apple II computers.[2]

    All you had to do then was to reverse-engineer the infrared protocols used to communicate with the televisions.

    I bet it’s possible to figure out a way to have a third-party control panel interface with various auto UIs. Like, build a universal interface, and then just design mounting hardware on a per-car basis? Use Android Auto or CarPlay, OBD-II, and such?

    Can Android Auto do climate control?

    kagis

    Sounds like it doesn’t, but may start being able to do so:

    https://www.androidauthority.com/android-auto-climate-controls-3533161/

    Android Auto could be about to turn up the heat (and AC) on car comfort

    Climate control may finally be coming to Google’s in-car interface.

    Android phones don’t have physical buttons for car features. But…that’s not a fundamental limitation; it’s just a result of reusing a phone as the car panel.

    So instead of having third-party car computers being the province of a few hobbyist hardware hackers, there’s an out-of-box solution for everyone? Make the “Wozpanel” or whatever that I just mount in my car? Stick physical buttons on it? Maybe have a case and faceplate that wraps it to match interiors?
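    For the OBD-II side, reading vehicle data is already straightforward with the third-party python-obd library and an ELM327-style adapter, as in this sketch:

    ```python
    import obd  # third-party python-obd package; assumes an ELM327-style adapter

    connection = obd.OBD()  # auto-detects the adapter's serial port

    # Standard OBD-II PIDs are read-only diagnostics; these are widely supported.
    for cmd in (obd.commands.RPM, obd.commands.SPEED, obd.commands.COOLANT_TEMP):
        response = connection.query(cmd)
        if not response.is_null():
            print(cmd.name, response.value)
    ```

    The catch is that standard OBD-II is essentially read-only diagnostics; actually commanding things like climate control would mean reverse-engineering each maker’s proprietary CAN messages, much like reverse-engineering the IR protocols back then.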


  • never change

    Nah, that’s not a problem.

    So, if you send a password at some point, someone could theoretically intercept it and then impersonate you.

    PGP keys come in public-private pairs. The private key never leaves your possession. Instead, the other side asks you to cryptographically sign something using your private key, and they validate the signature using your public key.

    You never expose your private key to any intermediary, and even the other side doesn’t have it.
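    Not PGP itself, but the same public-key challenge-response shape, sketched with the third-party cryptography package and Ed25519:

    ```python
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()   # never leaves your machine
    public_key = private_key.public_key()        # the half the other side stores

    challenge = b"random-nonce-from-the-server"  # hypothetical challenge
    signature = private_key.sign(challenge)

    # The other side verifies with the public key alone; verify() raises
    # cryptography.exceptions.InvalidSignature if the signature is bad.
    public_key.verify(signature, challenge)
    print("signature valid")
    ```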

    TOTPs use a shared secret and generate a temporary password from both the current time and the secret. Those also protect (mostly) against interception, since each OTP becomes invalid within a short window, typically 30 seconds. Just as with PGP keys, the secret does not change. However, unlike with PGP, the other side also has all the information required to authenticate as you.
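    RFC 6238 in miniature, using only the Python standard library; note that both sides run exactly the same computation, which is why the server could authenticate as you:

    ```python
    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        """HMAC the current 30-second counter with the shared secret (RFC 6238)."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical shared secret
    ```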





  • and uses btrfs send/receive to create backups.

    I’m not familiar with that, but if it permits identifying data modified since a given time faster than scanning the filesystem for modified files (which is something a filesystem could potentially do), that could also be a useful backup enabler, since your scan-for-changes time no longer needs to be linear in the number of files on the filesystem. If you don’t do that, your next best bet on Linux (and this way would be filesystem-agnostic) is gonna require something like a daemon that runs and uses inotify to build some kind of on-disk index of modifications since the last backup, plus a backup system that can understand that index.
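    A minimal sketch of that daemon idea, using the third-party watchdog library (which wraps inotify on Linux); the path is hypothetical and the index here is just an in-memory set, where a real daemon would persist it to disk:

    ```python
    import time
    from watchdog.events import FileSystemEventHandler
    from watchdog.observers import Observer

    modified_since_last_backup = set()  # a real daemon would persist this index

    class ChangeTracker(FileSystemEventHandler):
        def on_any_event(self, event):
            if not event.is_directory:
                modified_since_last_backup.add(event.src_path)

    observer = Observer()
    observer.schedule(ChangeTracker(), "/data", recursive=True)  # hypothetical path
    observer.start()
    try:
        while True:
            time.sleep(60)  # a backup pass would drain the set here
    finally:
        observer.stop()
        observer.join()
    ```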

    looks at btrfs-send(1) man page

    Ah, yeah, it does do that. Well, the man page doesn’t say what time complexity it runs in, but I assume that it’s better than linear in the file count on the filesystem.
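    For illustration, the mechanics driven from Python via subprocess; the snapshot paths are hypothetical, and with -p, btrfs send emits only the delta between two read-only snapshots, which is why no per-file scan is needed:

    ```python
    import subprocess

    # Hypothetical paths; both must be read-only snapshots.
    prev_snap = "/mnt/data/.snapshots/2024-01-01"
    new_snap = "/mnt/data/.snapshots/2024-01-02"

    # "btrfs send -p" emits only the difference between the two snapshots,
    # piped into "btrfs receive" on the backup filesystem.
    send = subprocess.Popen(["btrfs", "send", "-p", prev_snap, new_snap],
                            stdout=subprocess.PIPE)
    subprocess.run(["btrfs", "receive", "/mnt/backup"],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("btrfs send failed")
    ```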


  • You’re correct, and probably the person you’re responding to is treating one as an alternative to the other.

    However, filesystem snapshotting can theoretically be used to enable backups, because snapshots permit an instantaneous, consistent view of a filesystem. I don’t know if there are backup systems that do this with btrfs today, but it would involve taking a snapshot and then having the backup system back up the snapshot rather than the live view of the filesystem.

    Otherwise, stuff like drive images and database files that are being written to while being backed up can just end up as corrupted, inconsistent files in the backup.
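    A minimal sketch of that snapshot-then-backup approach with btrfs, driven from Python; the paths and the rsync invocation are just placeholders for whatever backup tool one actually uses:

    ```python
    import subprocess

    src = "/mnt/data"                         # hypothetical subvolume
    snap = "/mnt/data/.snapshots/backup-tmp"  # hypothetical snapshot path

    # Read-only snapshot: an instantaneous, consistent view of the subvolume.
    subprocess.run(["btrfs", "subvolume", "snapshot", "-r", src, snap], check=True)
    try:
        # Point the backup tool at the frozen snapshot, not the live filesystem.
        subprocess.run(["rsync", "-a", snap + "/", "backuphost:/backups/data/"],
                       check=True)
    finally:
        subprocess.run(["btrfs", "subvolume", "delete", snap], check=True)
    ```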


  • Wouldn’t the sync option also confirm that every write also arrived on the disk?

    If you’re mounting with the NFS sync option, that’ll avoid the “wait until close and probably reorder writes at the NFS layer” issue I mentioned, so that’d address one of the two issues, and the one that’s specific to NFS.

    That’ll force each write to go, in order, to the NFS server, which I’d expect would avoid problems with the network connection being lost while flushing deferred writes. I don’t think that it actually forces the data to nonvolatile storage on the server at that time, so if the server loses power, that could still be an issue, but that’s the same problem one would get running a local filesystem image with the “less-safe” qemu options when the machine loses power.
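    To make that layering concrete from the client side, a small Python sketch (the path is hypothetical; which guarantee you actually get still depends on the server’s configuration):

    ```python
    import os

    path = "/mnt/nfs/testfile"  # hypothetical NFS-mounted path

    # Default buffered write: data can sit in the client's cache and get
    # flushed (and possibly reordered) later, e.g. at close() time.
    with open(path, "wb") as f:
        f.write(b"buffered")

    # O_SYNC: each write() is pushed out, in order, before returning; roughly
    # what mounting NFS with the sync option gives you per write.
    fd = os.open(path, os.O_WRONLY | os.O_SYNC)
    try:
        os.write(fd, b"synchronous")
    finally:
        os.close(fd)

    # fsync(): an explicit flush request; whether the data then reaches
    # nonvolatile media depends on how the server handles commits.
    fd = os.open(path, os.O_WRONLY)
    try:
        os.write(fd, b"flushed")
        os.fsync(fd)
    finally:
        os.close(fd)
    ```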