• 0 Posts
  • 55 Comments
Joined 3 months ago
Cake day: January 2nd, 2025

  • At idle, an SSD is usually better (as you said, provided the SSD has proper power management, and confirming that takes some research).

    Spinning platters are generally still better for power per terabyte of capacity, because during writes they consume less power than an SSD.

    I don't really look at drive power consumption, because even with ~10 drives running in my environment, a single CPU under moderate load blows away their combined draw (I've tested it, not that testing was needed - heat dissipation alone makes it clear).

    I have a ten-year-old 5-drive NAS that runs 24/7, and it's barely above room temp. Average draw is a few watts; the number was so low I put it out of my mind, maybe 5 watts (Raspberry Pi territory).

    My SFF desktop idles at 12 watts whether it has 2 small SSDs (500GB each) or a single large drive (12TB). So much for SSDs having better idle power.
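    Using the figures above, here's the watts-per-terabyte arithmetic behind the comparison (treating the whole 12W system idle as the storage budget is a simplification, since the drives are only part of that draw):

```shell
#!/bin/sh
# Idle watts per terabyte for the two SFF configurations mentioned above.
# Same 12W system idle either way; only the capacity changes.
awk 'BEGIN {
  printf "12TB HDD config:     %4.1f W/TB\n", 12 / 12
  printf "2x500GB SSD config:  %4.1f W/TB\n", 12 / 1
}'
```

    The big spinning drive wins on power per terabyte by an order of magnitude here, which is the point being made.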




  • Scans for open ports run continuously these days.

    Ten years ago I opened a port for something for a couple of days - for months afterward I was getting regular scans against that port (and others).

    At one point the scans were so constant it was killing my internet performance (poor little consumer router had no defense capability).

    I don’t think the scans ever fully stopped until I moved. Whoever has that IP now probably gets specifically scanned on occasion.

    And just because you don’t run a business doesn’t mean you have nothing to lose.

    DMZ should be enough… But routers have known flaws, so I’d be sure to verify whatever I’m using.





  • I’m tech-savvy and have been in IT for nearly 40 years. I wrote my first program in Fortran on punched cards.

    Linux is no easy switchover. It’s problematic, regardless of the distro (I’ve tried many over the years).

    My latest difficulty: I went to install Debian and it hung multiple times trying to install Wi-Fi drivers.

    Mint couldn’t use my Logitech mouse until I researched it and discovered someone had written an app to enable it. The most popular mouse on the planet doesn’t work out of the box.
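    For anyone hitting the same wall: Solaar is the tool most commonly recommended for Logitech wireless receivers, and it's in the standard Debian/Ubuntu/Mint repos (I'm assuming it's the sort of app meant here):

```shell
# Install Solaar, a manager for Logitech Unifying/Bolt wireless receivers
sudo apt install solaar
# Launch it to see, pair, and configure attached Logitech devices
solaar
```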

    A typical user would be stumped by these problems.

    I can go on for days about “Year of the Linux Desktop” (which I first heard in 2000). Can Linux work as a desktop? Definitely. And it can be pretty damn good, too, if your use-case aligns with its capabilities. But if you’re an end-user type, what do you do when, a year in, you realize you need a specific app that just doesn’t exist on Linux?

    Is it a direct replacement for Windows? No. Because Windows has always been about general use - it trades performance for the ability to do a lot of varied things, and it includes capabilities that not everyone needs.

    Linux is the opposite: it’s about performance for specific things. If you want a specific capability, it has to be added. This is the challenge these different distros attempt to meet - the question for all of them is which capabilities to include “out of the box” (see my mouse example - Debian handles it just fine).

    This is also the power of Linux, and why it’s so great for specific use-cases. Things like Proxmox, TrueNAS, etc. really benefit from this minimalism. No cycles wasted on a BITS service or all the other components Windows runs “just in case”.








  • Are you looking for selective sync, and just over the LAN or over the internet too?

    If it’s just LAN, there are many Windows sync tools for this with varying levels of complexity and capability - even just a simple batch file with a copy command.

    I’ll often just set up a Robocopy job for anything that’s a regular sync.
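    As a sketch of the kind of Robocopy job I mean (paths are placeholders; /MIR mirrors the source including deletions - use /E instead if you don't want deletions to propagate):

```batch
:: One-way mirror of a folder to a NAS share, suitable for Task Scheduler
:: /FFT tolerates the coarser timestamps many NAS filesystems use
robocopy "D:\Data" "\\nas\backup\Data" /MIR /FFT /R:2 /W:5 /LOG:C:\logs\sync.log
```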

    If you open files over a network connection, they are opened in place and stay remote when you save. This isn’t best practice, though (Windows and apps are known to hiccup with remotely opened files).

    Two other approaches:

    1. Resilio Sync enables selective sync. If you change a file you’ve synchronized locally, the change syncs back to the source.

    2. A mesh VPN such as WireGuard, Tailscale, or Hamachi. Each maintains an encrypted connection between your devices that the system sees as a LAN. If you’re only using Windows, I’d recommend starting with Hamachi; it’s easier to get going. If you need mobile device support, use WireGuard or Tailscale (Tailscale is built on WireGuard, but is easier to set up).
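    For a sense of why Tailscale is the easy option, this is roughly its whole Linux setup flow (from Tailscale's standard install script; Windows and mobile use GUI installers instead):

```shell
# Install and join a Tailscale mesh network ("tailnet") on a Linux box
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up     # opens a browser login to authorize this machine
tailscale status      # lists your other devices and their tailnet IPs
tailscale ip -4       # this machine's stable tailnet address
```

    Repeat on each device; after that they can reach each other by tailnet IP as if on one LAN.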




  • Just that you don’t need a beast of a machine (with its higher cost and power consumption) just to serve files at reasonable performance. If you want to stream video, you’ll need more horsepower.

    For example, my NAS is ten years old, runs on ARM, and has maybe 2GB of RAM. It supposedly can host services and stream video. It can’t. But its power draw is about 4 watts at idle.

    My newer (5-year-old) small form factor desktop has a multi-core Intel CPU, a true gigabit network card, and a decent video card, with an idle draw under 12 watts and a 200W peak when I’m converting video. It can easily stream video.

    My gaming desktop draws 200w at idle.

    My SFF and gaming rig are both overkill for simple file sharing, and both cost 2x to 4x more than the NAS (I bought the NAS and SFF second-hand). But the NAS can’t really stream video.

    Power draw is a massive factor these days, as these devices run 24/7.

    An RPi is great for its incredibly low power draw. The downside is that you still need an enclosure, and the drives you attach draw power of their own. In my experience, once I’ve built a NAS, an RPi doesn’t draw significantly less than my SFF with the same drives installed; the drives seem to be the biggest consumer.

    As I mentioned, my SFF with 1TB of storage draws 12 watts, and an RPi will draw upwards of 8 watts on its own (my Pi Zero draws 2, but I’d never use it for a NAS). It’s all so close that, for me, the downsides of the RPi aren’t worth the difference in power.
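    To put numbers on the 24/7 factor: one watt of continuous draw is about 8.76 kWh per year, so small idle differences compound. A quick sketch using the idle figures above (the $0.15/kWh rate is an assumption - substitute your own):

```shell
#!/bin/sh
# Annual energy and cost of continuous (24/7) idle draw at an assumed
# electricity price of $0.15/kWh.
for watts in 4 12 200; do
  awk -v w="$watts" 'BEGIN {
    kwh = w * 24 * 365 / 1000    # watt-hours per year -> kWh
    printf "%3dW -> %7.1f kWh/yr, ~$%.0f/yr at $0.15/kWh\n", w, kwh, kwh * 0.15
  }'
done
```

    The 200W gaming rig idling all year costs real money; the 4W NAS is a rounding error.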