  • It makes little sense why it works on an offsite WiFi, but not mobile data.

    I’d agree with unbuckled above, it’s a DNS issue. If your mobile device is capable, use nslookup or dig to see what responses you are getting in different scenarios. It’s possible that your VPN software is leaking DNS queries out to the mobile data provider’s DNS servers while you are on mobile data and only using the correct DNS settings when you are on wifi. Possibly look for split tunnel settings in the VPN software, as this can create this type of situation.

    You can also confirm this from the pihole side. Connect to the VPN via mobile data and browse to some website you don’t use often, but is not your own internal stuff. Then open the query log on your pihole and see if that domain shows up. I’d put money on that query not showing in the pihole query log.
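
    If you want to see it for yourself, something like this is what I have in mind (the domain is just a placeholder, use whatever you like):

      dig example.com         # the “;; SERVER:” line shows which resolver actually answered
      nslookup example.com    # same idea, check the “Server:” line at the top

      # then, on the pihole itself, tail the query log while you repeat the test:
      pihole -t

    If the SERVER line shows your mobile carrier’s resolver while you’re on mobile data, that’s your leak.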


  • Along with the things others have said (Backups, Linux, Docker, Networking), I’d also recommend getting comfortable with server and network security. A lot of this is wrapped up in the simple mantra “install your goddamn updates!” But, there is more to it than that. For example, if you go with Nextcloud, read through their hardening guide and seriously consider implementing all of the recommendations. Also think through how you intend to manage both the server and the instance. If this is all local, then it is easier, as you can keep SSH access to the server firewalled off from the internet. If you host part of your stuff “in the cloud”, you’ll want to start locking down access and using keys to log in (which is good practice in all situations). Also, never use default credentials.

    You may also want to familiarize yourself with the logs provided by the applications and maybe set up some monitoring around them. I personally run Nextcloud and feed all my logs into Splunk (you can run a free instance in a docker container). I have a number of dashboards I look at every morning to keep an eye on things, e.g. failed/successful logins, traffic sources, URI requests, file access, etc.

    If your server is attached to the internet, it will be under attack constantly. Fail2Ban on my wireguard container banned 112 IP addresses over the last 24 hours, each for 3 failed attempts to log in via SSH. Less commonly, attackers try to log in to my Nextcloud instance. And my WordPress site is under constant attack. If you choose to run WordPress, be very careful about the plugins you choose to install, and keep them up to date. WordPress itself is reasonably secure; the plugins are a shit-show, and worse when they aren’t kept up to date.
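
    If any of that sounds abstract, the first steps look roughly like this (an untested sketch; package manager, jail names and paths vary by distro):

      sudo apt update && sudo apt upgrade -y   # the “install your goddamn updates” part
      ssh-keygen -t ed25519                    # generate a key pair for SSH logins
      ssh-copy-id user@your-server             # push the public key, then set PasswordAuthentication no in /etc/ssh/sshd_config
      sudo fail2ban-client status sshd         # see what the sshd jail has banned lately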



  • Whether anyone likes it or not, Trump will be the US President for the next four years. For businesses, especially large ones, that means they will need to deal with his shit for the next four years. They can either take a “principled stand”, which will likely hurt profit and not gain them anything tangible (read: money); or, they can suck up and try to get in on the grift gravy train which is going to be running out of the White House for the next four years. This has a much better expected value. So, the choice is pretty clear. Businesses are amoral and only care about profit, don’t count on them to do anything which runs contrary to that motivation.

    For politicians, especially those on the right, the calculus is about the same. Trump has basically been the flag-bearer for the right for the last decade and will continue to be the center of the right for the next four years. Where is a GOP politician going to seek electoral support? No one on the left is voting for a GOP candidate, swing voters are damned near a myth at this point in history; so, that leaves seeking votes from the right. Which brings the politicians right back to Trump. Not sucking up is political suicide for a GOP politician right now. Expect more DeSantis-style embracing of Trump and more Susan Collins-style “I’m voting with Trump, but pretending I won’t just long enough to keep my seat in Maine”.


  • I’m sure there are several out there. But, when I was starting out, I didn’t see one and just rolled my own. The process was general enough that I’ve been able to mostly just replace the SteamID of the game in the Dockerfile and have it work well for other games. It doesn’t do anything fancy like automatic updating; but, it works and doesn’t need anything special.
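
    For anyone curious, the core of it is just a steamcmd call, something like this (896660 is, I believe, Valheim’s dedicated server app ID; swap in whichever game you want):

      steamcmd +force_install_dir /opt/gameserver \
               +login anonymous \
               +app_update 896660 validate \
               +quit

    Everything around that (user setup, ports, the start command) stays pretty much the same from game to game.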


  • I see containers as having a couple of advantages:

    1. Separation of dependencies - while not as big of an issue as it used to be, just knowing that you won’t end up with the requirements for one application conflicting with another is one less issue to worry about. Additionally, you can do anything you want to one container without having an effect on another container. You don’t get stuck wanting to reboot or revert the system, but not wanting to break a different running service.
    2. Portability - Eventually, you are going to replace the OS of that VM (at least, you should). Moving a container to a new OS is dead simple. Re-installing an application on a new OS, moving data and configs can be anywhere from easy to a pain in the arse, depending on the software.
    3. Easier fallback - Have you ever upgraded an application and had everything go to shit? In my years working as a sysadmin, I lost way too many evenings to this sort of bullshit. And while VM snapshots should make reverting easy, sometimes it just didn’t work out that way. Containers force enough separation of applications that you can do just about anything to one container and not affect others.
    4. Less dependency on a single install - Have you ever had a system just get FUBAR, and after a few hours of digging the answer seems to be, just format the drive and start over? Maybe you tried some weird application out and the uninstall wasn’t really clean. By having all that crap happen in containers, you can isolate the damage. Nuke the container, nuke the image, and the base OS is still clean.
    5. Easier version testing - Want to try out upgrading to version 2 of an application, but worried that it may not be fully baked yet or the new configs may take a while to get right? Do it off in a separate container on a copy of the data (see the sketch after this list). You can do this with VMs and snapshots; but, I find containers to be less overhead.
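
    Roughly what that version test looks like for me (image name and paths are placeholders, not a real project):

      cp -a /srv/someapp/data /srv/someapp/data-v2-test
      docker run -d --name someapp-v2-test \
        -v /srv/someapp/data-v2-test:/data \
        someapp:2
      docker logs -f someapp-v2-test   # watch the migration, poke at the new version
      docker rm -f someapp-v2-test && rm -rf /srv/someapp/data-v2-test   # throw it all away when done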

    That all said, if an application does not have an official container image, the added complexity of creating and maintaining your own image can be a significant downside. One of my use cases for containers is running game servers (e.g. Valheim). There isn’t an official image; so, I had to roll my own. The effort to set this up isn’t zero and, when trying to sort out an image for a new game, it does take me a while before I can start playing. And those images need to be updated when a new version of the game releases. Technically, you can update a running container in a lot of cases; but, I usually end up rebuilding it at some point anyway.
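
    When a game does update, the routine on my end isn’t much more than this (the image, volume and ports here are placeholders for whatever your own Dockerfile uses):

      docker build -t valheim-server:latest .
      docker rm -f valheim
      docker run -d --name valheim \
        -v valheim-data:/config \
        -p 2456-2457:2456-2457/udp \
        valheim-server:latest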

    I’d also note that careful use of VMs and snapshots can replicate or mitigate most of the advantages I listed. I’ve done both (decade and a half as a sysadmin). But, part of that “careful use” usually meant spinning up a new VM for each application. Putting multiple applications on the same OS install was usually asking for trouble. Eventually, one of the applications would get borked and having the flexibility to just nuke the whole install saved a lot of time and effort. Going with containers removed the need to nuke the OS along with the application to get a similar effect.

    At the end of the day, though, it’s your box; you do what you are most comfortable with and want to support. If that’s a monolithic install, then go for it. While I, or others, might find containers a better answer for us, maybe it isn’t for you.


  • My list of items I look for:

    • A docker image is available. Not some sort of make or build script which makes gods-know-what changes to my system, even if the end result is a docker image. Just have a docker image out on Dockerhub or a Dockerfile as part of the project. A docker-compose.yaml file is a nice bonus.
    • Two factor auth. I understand this is hard, but if you are actually building something you want people to seriously use, it needs to be seriously secured. Bonus points for working with my YubiKey.
    • Good authentication logging. I may be an outlier on this one, but I actually look at the audit logs for my services. Having a log of authentication activity (successes and failures) is important to me. I use fail2ban to block off IPs which get up to any fuckery, and I manually blackhole entire ASNs when it seems they are sourcing a lot of attacks (see the sketch after this list). Give me timestamps (in ISO8601 format, all other formats are wrong), IP address, username, success or failure (as an independent field, not buried in a message or other string) and any client information you can (e.g. User-Agent strings).
    • Good error logging. Look, I kinda suck, I’m gonna break stuff. When I do, it’s nice to have solid logging giving me an idea of what I broke and to provide a standardized error code to search on. It also means that, when I give up and post it as an issue to your github page, I can provide you with some useful context.
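
    The ASN blackholing I mentioned is nothing fancy, roughly this (the IP and range are placeholders):

      whois -h whois.cymru.com " -v 203.0.113.50"   # Team Cymru lookup: which ASN does this attacker belong to?
      sudo ipset create blackhole hash:net
      sudo ipset add blackhole 203.0.113.0/24       # add the offending network(s)
      sudo iptables -I INPUT -m set --match-set blackhole src -j DROP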

    As for that hackernews response, I’d categorically disagree with most of it.

    An app, self-contained, (essentially) a single file with minimal dependencies.

    Ya…no. Complex stuff is complex. And a lot of good stuff is complex. My main, self-hosted app is NextCloud. Trying to run that as some monolithic app would be brain-dead stupid. Just for the sake of maintainability, it is going to need to be a fairly sprawling list of files and folders. And it’s going to be dependent on some sort of web server software. And that is a very good place to NOT roll your own. Good web server software is hard, secure web server software is damn near impossible. Let the large projects (Apache/Nginx) handle that bit for you.

    Not something so complex that it requires docker.

    “Requires docker” may be a bit much. But, there is a reason people like to containerize stuff, it avoids a lot of problems. And supporting whatever random setup people have just sucks. I can understand just putting a project out as a container and telling people to fuck off with their magical snowflake setup. There is a reason flatpak is gaining popularity.
    Honestly, I see docker as a way to reduce complexity in my setup. I don’t have to worry about dependencies or having the right version of some library on my OS. I don’t worry about different apps needing different versions of the same library. I don’t need to maintain different virtual python environments for different apps. The containers “just work”. Hell, I regularly dockerize dedicated game servers just for my wife and me to play on.

    Not something that requires you to install a separate database.

    Oh goodie, let’s all create our own database formats and re-learn the lessons of the '90s about how hard databases actually are! No really, fuck off with that noise. If your app needs a small database backend, maybe try SQLite. But, some things just need a real database. And as with web servers, rolling your own is usually a bad plan.
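
    For the “small database” case, the whole pitch for SQLite is that it’s one file and one library, with no extra service to run:

      sqlite3 app.db "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT, created TEXT);"
      sqlite3 app.db "INSERT INTO users (name, created) VALUES ('test', datetime('now'));"
      sqlite3 app.db "SELECT * FROM users;"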

    Not something that depends on redis and other external services.

    Again, sometimes you just need to have certain functionality and there is no point re-inventing the wheel every time. Breaking those discrete things out into other microservices can make sense. Sure, this means you are now beholden to everything that other service does; but, your app will never be an island. You are always going to be using libraries that other people wrote. Just try to avoid too much sprawl. Every dependency you spin up means your users are now maintaining an extra application. And you should probably build a bit of checking into your app to ensure that those dependencies are in sync. It really sucks to upgrade a service and have it fail, only to discover that one of its dependencies needed to be upgraded manually first, and now the whole thing is corrupt and needs to be restored from backup. Yes, users should read the release notes; they never do.
    The corollary here is to be careful about setting your users up for a supply chain attack. Every dependency or external library you add is one more place for your application to be attacked. And just because the actual vulnerability is in SomeCoolLib.js, it’s still your app getting hacked. You chose that library, you’re now beholden to everything it gets wrong.
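
    The dependency checking I’m talking about can be as dumb as this sketch (hypothetical version number, and it assumes redis-cli is on the box):

      REQUIRED_REDIS_MAJOR=7
      FOUND_REDIS_MAJOR=$(redis-cli INFO server | awk -F: '/^redis_version/ {print $2}' | cut -d. -f1)
      if [ "$FOUND_REDIS_MAJOR" != "$REQUIRED_REDIS_MAJOR" ]; then
        echo "Refusing to start: need Redis ${REQUIRED_REDIS_MAJOR}.x, found ${FOUND_REDIS_MAJOR}.x" >&2
        exit 1
      fi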

    At the end of it all, I’d say the best app to write is the one you are interested in writing. The internet is littered with lots of good intentions and interesting starts. There is a lot less software which is actually feature complete and useful. If you lose interest, because you are so busy trying to please a whole bunch of idiots on the other side of the internet, you will never actually release anything. You do you, and fuck all the haters. If what you put out is interesting and useful, us users will show up and figure out how to use it. We’ll also bitch and moan, no matter how great your app is. It’s what users do. Do listen, feedback is useful. But, also remember that opinions are like assholes: everyone has one, and most of them stink.


  • This could just be a really stupid format, put out by a specific application for creating PDFs, because the original authors didn’t want to pay Adobe (never attribute to malice that which can be adequately explained by stupidity).

    Does pdfinfo give any indication of the application used to create the document? If it chokes on the Java bit up front, can you extract just the PDF from the file and look at that? You might also dig through the PDF a bit using Didier Stevens’s tools, looking for JavaScript or other indicators of PDF fuckery.

    Does the file contain any other Java bytecode? If so, can you pass that through a decompiler?
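
    If it helps, my triage would look something like this (file names are placeholders; pdfid.py and pdf-parser.py are the Didier Stevens tools I mean):

      pdfinfo suspect.pdf                      # Creator/Producer fields often name the authoring app
      grep -abo '%PDF' suspect.bin             # find the byte offset where the embedded PDF starts
      tail -c +1235 suspect.bin > carved.pdf   # carve from that offset (grep’s offset + 1)
      pdfid.py carved.pdf                      # quick counts of /JavaScript, /OpenAction, etc.
      pdf-parser.py --search javascript carved.pdf
      # if there really is Java bytecode in there, a decompiler like CFR can take it from here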

    would love it if attempts to reach the cloud could be trapped and recorded to a log file in the course of neutering the PDF.

    This is possible, but it takes a bit of setup. In my own lab, I have PolarProxy running in one Virtual Machine (VM), using QEMU/KVM. That acts as a gateway between an isolated network and a network with internet access. It runs transparent TLS break and inspect on port 443/tcp and tcpdump capturing port 80/tcp. It also serves DNS using Bind.

    There is then the “victim” VM which is running bog standard Windows 10. The PolarProxy root cert has been added to the Trusted Roots certificate store. The Default Gateway and DNS servers are hard coded to the PolarProxy VM. Suspicious stuff is tested on this system and all network traffic is recorded on the PolarProxy system in standard pcap format for analysis.
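
    The only custom bit on the capture side is conceptually just this (interface names and paths are specific to my lab, not defaults):

      # PolarProxy terminates and re-encrypts 443/tcp and writes the decrypted sessions out as pcap;
      # plain HTTP from the victim network just gets captured directly:
      tcpdump -i eth1 -w /var/log/lab/port80.pcap 'tcp port 80'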



  • I would add the admittance of China to the WTO as another proximate cause. And one which probably had more of a material effect than NAFTA; but, NAFTA had already become a GOP talking point and it just stuck. China’s entry to the WTO was also moved over the finish line by Bush II, though most of the groundwork was laid by Clinton. So, it wouldn’t have had the same clean narrative as NAFTA. US employment in manufacturing went into freefall in late 2000 and early 2001. This was also during a recession, so it is intermixed with the effects of those changes in international trade. But, even as the recession receded and the US entered an economic boom leading up to the 2008 crash, manufacturing employment in the US either held steady or decreased slightly. It’s unsurprising that the same period saw a lot of offshoring of manufacturing to China. And this was also the period of neoliberal economists pushing “comparative advantage” and how the US losing all those manufacturing jobs was a good thing.

    So it’s not surprising then that they get bitter, they cling to guns or religion or antipathy to people who aren’t like them or anti-immigrant sentiment or anti-trade sentiment as a way to explain their frustrations.
    – Barack Obama, 2008


  • Have you considered just beige boxing a server yourself? My home server is a mini-ITX board from Asus running a Core i5, 32GB of RAM and a stack of SATA HDDs all stuffed in a smaller case. Nothing fancy, just hardware picked to fulfill my needs.

    Limiting yourself to bespoke systems means limiting yourself to what someone else wanted to build. The main downside to building it yourself is ensuring hardware compatibility with the OS/software you want to run. If you are willing to take that on, you can tailor your server to just what you want.


  • As much “doom and gloom” as the article pushes, I kinda feel that the compromised keys being well known makes detection easier. The malicious binary needs to be signed with one of these keys, this means that there will be very specific structures (e.g. the public key) at well known locations in the file. This is exactly the type of threat which anti-virus is good at detecting. Assuming a network’s security folks aren’t completely asleep at the switch, these attacks should get picked up and blocked pretty fast.

    There is a reason attackers spend so much time and effort obfuscating code and keeping files off the disk. While A/V may be a pretty terrible security control and easily bypassed in many cases, watching for files with well known patterns is one of the few things A/V tends to do well.



  • No, but you are the target of bots scanning for known exploits. The time between an exploit being announced and threat actors adding it to commodity bot kits is incredibly short these days. I work in Incident Response and seeing wp-content in the URL of an attack is nearly a daily occurrence. Sure, for whatever random software you have running on your normal PC, it’s probably less of an issue. Once you open a system up to the internet and constant scanning and attack by commodity malware, falling out of date quickly opens your system to exploit.


  • Short answer: yes, you can self-host on any computer connected to your network.

    Longer answer:
    You can, but this is probably not the best way to go about things. The first thing to consider is what you are actually hosting. If you are talking about a website, this means that you are running some sort of web server software 24x7 on your main PC. This will be eating up resources (CPU cycles, RAM) which you may want to dedicate to other processes (e.g. gaming). Also, anything you do on that PC may have a negative impact on the server software you are hosting. Reboot and your server software is now offline. Install something new and you might have a conflict bringing your server software down. Lastly, if your website ever gets hacked, then your main PC also just got hacked, and your life may really suck.

    This is why you often see things like Raspberry Pis being used for self-hosting. It moves the server software onto separate hardware which can be updated/maintained outside a PC which is used for other purposes. And it gives any attacker on that box one more step to cross before owning your main PC. Granted, it’s a small step, but the goal there is to slow them down as much as possible.

    That said, the process is generally straightforward, though there will be some variations depending on what you are hosting (e.g. webserver, Nextcloud, Plex, etc.). And your ISP can throw a massive monkey wrench in the whole thing if they use CG-NAT. I would also warn you that, once you have a presence on the internet, you will need to consider the security implications of whatever it is you are hosting. The most important security recommendation is “install your updates”. And not just OS updates, but keeping all software up to date. And, if you host WordPress, you need to stay on top of plugin and theme updates as well. In short, if it’s running on your system, it needs to stay up to date.

    The process generally looks something like:

    • Install your updates.
    • Install the server software.
    • Apply updates to the software (the installer may be an outdated version).
    • Apply security hardening based on guides from the software vendor.
    • Configure your firewall to forward the required ports (and only the required ports) from the WAN side to the server.
    • Figure out your external IP address.
    • Try accessing the service from the outside (see the sketch after this list).
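
    For those last two steps, something like this is all I mean (203.0.113.7 is a placeholder; run the second command from a phone on mobile data or some other outside network):

      curl -4 ifconfig.me            # what the rest of the world sees as your address
      curl -kI https://203.0.113.7   # substitute your own public IP and poke the service from outside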

    Optionally, you may want to consider using a Dynamic DNS service (DDNS) (e.g. noip.com) to make reaching your server easier. But, this is technically optional, if you’re willing to just use an IP address and manually update things on the fly.

    Good luck, and in case I didn’t mention it, install your updates.



  • And once you have found your specific collection of plugins that happen not to put the exact features you need behind a paywall but others, you ain’t touching those either.

    And this is why, when I’m investigating phishing links, I’ve gotten used to mumbling, “fucking WordPress”. WordPress itself is pretty secure. Many WordPress plugins, if kept up to date, are reasonably secure. But, for some god forsaken reason, people seem to be allergic to updating their WordPress plugins and end up getting pwned and turned into malware serving zombies. Please folks, if it’s going to be on the open internet, install your fucking updates!
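
    If the site is managed over SSH, WP-CLI makes this painless (assuming it’s installed and run as the site’s user):

      wp core update
      wp plugin update --all
      wp theme update --all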


  • The answer to that will be everyone’s favorite “it depends”. Specifically, it depends on everything you are trying to do. I have a fairly minimal setup, I host a WordPress site for my personal blog and I host a NextCloud instance for syncing my photos/documents/etc. I also have to admit that my backup situation is not good (I don’t have a remote backup). So, my costs are pretty minimal:

    • $12/year - Domain
    • $10/month - Linode/Akamai containers

    The domain fee is obvious, I pay for my own domain. For the containers, I have 2 containers hosted by the bought-up husk of Linode. The first is just a Kali container I use for remote scanning and testing (of my own stuff and for work). So, not a necessary cost, but one I like to have. The other is a Wireguard container connecting back to my home network. This is necessary as my ISP makes use of CG-NAT. The short version of that is, I don’t actually have a public IP address on my home network and so have to work around that limitation. I do this by hosting NGinx on the Wireguard container and routing all traffic over a Wireguard VPN back to my home router. The VPN terminates on the outside interface and then traffic on 443/tcp is NAT’d through the firewall to my “server”. I have an NGinx container listening on 443 and, based on host headers, traffic goes to either the WordPress or NextCloud container, which do their magic respectively. I also have a number of services running in containers on that server; but, none of those are hosted on the internet. Things like PiHole and Octoprint.
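
    When I need to sanity-check that chain, it’s roughly this (the interface, peer address and hostname are placeholders for my own):

      wg show wg0   # is the tunnel from the Linode container up and passing traffic?
      curl -I --resolve cloud.example.com:443:10.8.0.2 https://cloud.example.com
      # --resolve pins the request at the wireguard peer, so the host-header routing
      # to the NextCloud container can be confirmed without touching public DNS.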

    I don’t track costs for electricity, but that should be minimal for my server. The rest of the network equipment is a wash, as I would be using that anyway for home internet. So overall, I pay $11/month in fixed costs and then any upgrades/changes to my server have a one-time capital cost. For example, I just upgraded the CPU in it, as it was struggling under the Enshrouded server I was running for my wife and me.