• 0 Posts
  • 17 Comments
Joined 2 years ago
Cake day: June 12th, 2023

  • Multiple backups may be kept.

    Nice work, but if I may suggest: it lacks hardlink support, so it’s quite wasteful in terms of disk space, and the number of ‘tags’ (snapshots) you can keep will be extremely limited.

    At least two robust solutions that use rsync+hardlinks already exist: rsnapshot.org and dirvish.org (both written in Perl). There’s definitely room for backup tools that produce plain copies instead of packed chunk data like restic and Duplicacy, and a Python or even Bash-based tool might be nice, so keep at it - there’s a rough sketch of the underlying rsync trick below.

    However, I liken backup software to encryption: extreme care must be taken when rolling and using your own. Whatever tool you use, test, test, test the backups. :)
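
    In case it’s useful, here’s a minimal sketch of the rsync --link-dest trick that rsnapshot and dirvish are built on (the paths and the date-based naming are just placeholders; adapt to taste):

        #!/usr/bin/env bash
        # Minimal hardlink-snapshot sketch; SRC/DEST are examples only.
        set -euo pipefail

        SRC="/home/user/data/"
        DEST="/mnt/backup"
        NEW="$DEST/$(date +%Y-%m-%d_%H%M%S)"

        # --link-dest makes unchanged files hardlinks into the previous snapshot,
        # so each snapshot only costs the space of the files that actually changed.
        # On the very first run 'latest' won't exist yet; rsync just warns and copies everything.
        rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$NEW"

        # Point 'latest' at the snapshot we just made.
        ln -sfn "$NEW" "$DEST/latest"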


  • There’s no point doing anything fancy like that - WireGuard over Tailscale is pretty pointless, as Tailscale is literally WireGuard with NAT traversal and authentication bolted on. Unless you enable subnetting, it can’t get more secure than that.

    And even if you do enable subnetting (which you might wanna do if you need access to absolutely everything on the LAN), you can use Tailscale ACLs to keep tighter control - say, only allowing access from specific (tagged) devices. There’s a rough sketch of the subnet-router side below.
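
    Roughly, the subnet-router setup looks like this (the 192.168.1.0/24 range and the tag:trusted name are made-up examples; the ACL rules themselves live in the tailnet policy file in the admin console, and the advertised route also has to be approved there):

        # On the machine that should expose the LAN to the tailnet:
        sudo sysctl -w net.ipv4.ip_forward=1
        sudo tailscale up --advertise-routes=192.168.1.0/24

        # On the devices that should reach that LAN:
        sudo tailscale up --accept-routes

        # Optionally tag a device so the ACLs can single it out:
        sudo tailscale up --accept-routes --advertise-tags=tag:trusted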



  • 100% this. OP, whatever solution you come up with, strongly consider disentangling your backup ‘storage’ from the platform or software, so you’re not ‘locked in’.

    IMO, you want something universal that works with both local and ‘cloud’ storage (ideally off-site on your own/family/friend’s NAS; far less expensive in the long run). Trust me, as someone who came from CrashPlan and moved to Duplicacy 8 years ago, I no longer worry about how robust my backups are, as I can practice 3-2-1 on my own terms.




  • You should take it to a data recovery specialist if the data is really, really important, but for lightly damaged sectors you want ddrescue (oldie but goodie), HDDSuperClone (no longer developed), or OpenSuperClone (a fork of HDDSuperClone, more actively developed).

    You can combine some of these tools with commercial programs like DMDE, UFS Explorer, or R-Studio - to target specific files for a quick result - but basically it’s best to get a full disk image off the bad drive onto another drive first, then work from that image. Something along the lines of the example below.
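
    A typical ddrescue run looks roughly like this (the device name and filenames are placeholders; image onto a healthy drive with more free space than the failing one):

        # Pass 1: grab everything that reads easily, skip the difficult areas for now.
        sudo ddrescue -n /dev/sdX bad-drive.img bad-drive.map

        # Pass 2: go back for the tricky sectors, retrying a few times with direct disc access.
        sudo ddrescue -d -r3 /dev/sdX bad-drive.img bad-drive.map

    The map file lets you stop and resume safely, and once the image exists you can point DMDE / UFS Explorer / R-Studio at bad-drive.img instead of hammering the failing drive.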






  • Yes, I also work in IT.

    The paid GUI version is extremely cautious about auto-updates (it’s basically a wrapper for the CLI) - perhaps a bit too cautious. The free CLI version is also very cautious about making sure your backup storage doesn’t break.

    For example, they recently added zstd compression, yet existing storages stay on lz4 unless you force it - and even then, the two compression methods can coexist in the same backup destination. It’s extremely robust in that regard (to the point that if you started forcing zstd compression, or created a new zstd backup destination, you can use the newest CLI to copy the data back to an older lz4 storage and revert - just as an example; rough sketch below). And of course you can compile it yourself years from now.
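
    The copy workflow looks roughly like this (the storage names, snapshot ID, and bucket URL are invented, and I’m not quoting the exact compression-related flags from memory - check the wiki for those):

        # Inside an already-initialized repository, register a second, copy-compatible storage:
        duplicacy add -copy default offsite mydocs b2://my-backup-bucket

        # Copy existing snapshots between the two storages (swap -from/-to to go the other way),
        # regardless of which compression method each storage happens to use:
        duplicacy copy -from default -to offsite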





  • Well, your account is on lemmy.world, so how d’ya know the issue isn’t with your own access to the front end?

    Many of us don’t interact with lemmy.world directly, so we might only see delays in post propagation (if there is such an issue on the backend - I don’t see any, but I could be wrong).

    I agree picking the biggest instances isn’t great from a scaling perspective, but it’s gonna be hard to move any community once it’s established.