• 1 Post
  • 168 Comments
Joined 2 years ago
Cake day: June 22nd, 2023


  • I see, yeah there is something about it in the blurb. How do you like the tablet? Is it responsive? Is it full of Android bloatware? Do you know if it is rootable?

    I see there is a 14 inch version that’s about $300 and that starts to get interesting. It’s not “2nd gen” though. And, I had thought of TCL as a lower tier manufacturer with quality issues, but I hadn’t looked into it much.

    I like that the tablet has an SD (probably microSD) slot. Don’t like that there’s no headphone jack. There’s plenty of space in those things compared to a phone.

  • It looks like they ran the test case and triggered the crash. Therefore the issue is not confabulated.

    Also, I’m unconvinced that the use of ffmpeg inside Google services is relevant to this. Google can sandbox executables as much as it likes, and given the amount of transcoding it does (say, for YouTube), it would surprise me if it weren’t using GPU or dedicated hardware transcoders instead of ffmpeg anyway. Instead, Google may care more about ffmpeg as used in browsers, TV boxes, and that sort of thing. That puts them in the same position as the Amazon person who said the ffmpeg devs could kill 3 major [Amazon] product lines by sending an email.

    If a zillion cable boxes get pwned because of a 0-day in ffmpeg, that’s unfortunate, but at least the vendors did their due diligence. But if they get pwned because the vendor knew about the vulnerability and decided to deploy anyway, that potentially puts the vendor on the hook for a ton more liability. That’s what “ffmpeg can kill 3 major product lines” means. So “send the email” (i.e. temporarily flag that codec as vulnerable and withdraw it from the default build) seems like a perfectly good response from ffmpeg.
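
    In concrete terms, withdrawing a codec from the default build is cheap on the ffmpeg side, since the build system already lets you disable individual components. A sketch, where the decoder name is a placeholder rather than any real advisory:

```
# From an ffmpeg source checkout: build without a specific decoder.
# "somecodec" is a placeholder for whichever codec got flagged.
./configure --disable-decoder=somecodec
make
```

    Downstream vendors who build from source could do the same thing themselves the moment the email goes out, without waiting for a patched release.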

    The Big Sleep article is really good; I read it a week or so ago, sometime after this thread had died down.


  • Maybe you could describe what you mean by self-hosted and resilient. If you mean stuff running on a box in your house connected through a home ISP, then the home internet connection is an obvious point of failure that makes your box’s internet connection way less reliable than AWS despite the occasional AWS problems. On the other hand, if you are only trying to use the box from inside your house over a LAN, then it’s ok if the internet goes out.

    You do need backup power. You can possibly have backup internet through a mobile phone or the like.

    The next thing after that is redundant servers with failover and all that. I think once you’re there and not doing an academic-style exercise, you want to host your stuff in actual data centers, preferably geo-separated ones with anycast. And for that you start needing enough infrastructure, like routable IP blocks, that you’re not really self-hosting any more.

    A less hardcore approach would be to use something like haproxy, maybe several of them behind round-robin DNS, to shuffle traffic between servers when individual ones go down. This again gets out of self-hosting territory, though, I would say.
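
    A minimal sketch of what that haproxy setup looks like — the server names, addresses, ports, and health-check path here are all invented for illustration:

```
# haproxy.cfg sketch (hypothetical servers). Round-robin DNS would
# additionally point the service hostname at several haproxy boxes.
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend web
    bind *:8080
    default_backend apps

backend apps
    balance roundrobin
    option httpchk GET /health
    server box1 203.0.113.10:8080 check
    server box2 198.51.100.20:8080 check
```

    With `check` enabled, haproxy probes each backend and stops routing to one that fails its health check, which covers single-server outages but not the haproxy box itself going down.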

    Finally, at the end of the day, you need humans (that probably means yourself) available 24/7 to handle when something inevitably breaks. There have been various products like Heroku that try to encapsulate service applications so they can reliably restart automatically, but stuff still goes wrong.

    Every small but growing web site has to face these issues, and it’s not that easy for one person. I think the type of person who considers running self-hosted services that way has already done it at work and gotten woken up by PagerDuty in the middle of the night, so they know what it’s about, and they’re gluttons for punishment.

    I don’t attempt anything like this with my own stuff. If it goes down, I sometimes get around to fixing it whenever, but not always. I do try to keep the software stable though. Avoid the latest shiny.


  • A small high-CPU machine will have noisy fans; there’s no avoiding that. The fans have to be small in diameter, so they spin at high RPM. Maybe you can say what you’re actually trying to run and make things easier for us.

    I gave up on this approach a long time ago and it’s been liberating. My main personal computer is a laptop, and for a while I had a Raspberry Pi 400 running some server-like things. It’s currently not in use, though I may bring it back later. All my bigger computational stuff is remote, so the software is self-hosted but not the hardware. IDK if that counts as self-hosting around here. But it’s much more reliable that way, with the boxes in multiple countries for geo separation.


  • Have any of the Google-submitted vulnerability reports turned out to be invalid? Project Zero was pretty well regarded.

    Yes, I know about the asm code in ffmpeg, though IDK if it’s doing anything that could lead to a use-after-free error. I can understand an OOB reference happening in the asm code, since codecs are full of lookup tables and have to jump around inside video frames for motion estimation, but I’d hope no dynamic memory allocation is happening there. Instead it would be more like a GPU kernel. But I haven’t examined any of it.
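
    To illustrate why bitstream-derived table indexing, rather than dynamic allocation, is the usual hazard in codec inner loops, here’s a minimal C sketch. The table name, size, and contents are made up for illustration, not taken from ffmpeg:

```c
#include <stdint.h>

/* Hypothetical dequantization table, as a codec decoder might use.
 * Contents are invented; real tables are fully populated. */
static const uint8_t dequant_table[64] = { 1, 2, 3 /* rest zero */ };

/* The index comes from the (possibly malicious) bitstream, so it must
 * be validated before the table read; otherwise a crafted file causes
 * an out-of-bounds read with no malloc/free involved at all. */
static uint8_t dequant(int bitstream_idx) {
    if (bitstream_idx < 0 || bitstream_idx > 63) {
        return 0; /* reject invalid index instead of reading OOB */
    }
    return dequant_table[bitstream_idx];
}
```

    Here dequant(2) returns 3, while dequant(999) is caught by the bounds check and returns 0. A missing check like that gives you an out-of-bounds read, which is a different bug class from use-after-free.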

    Anyway, there’s a big difference between submitting concrete input data that causes an observable crash, and sending a pile of useless spew from a static analyzer and saying “here, have fun”. The curl maintainer was getting a lot of absolute crap submitted as reports.

    From the GCC manual “bug criteria” section:

    If the compiler gets a fatal signal, for any input whatever, that is a compiler bug. Reliable compilers never crash.

    That sounds like timelessly good advice to me.