

No, it’s a bit more complex. Transmissions are sent constantly at regular intervals, each at a fixed size, and are combined later. So it’s not “instant” messaging; it’s closer to email.
Not only that, but every app will constantly appear to be sending messages, so real messages are greatly obfuscated. That’s honestly the real innovative part of the product IMHO.
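For intuition, here’s a minimal sketch of that idea (my own assumed pseudo-implementation based on the description above, not the product’s actual code): something goes out every interval at the same size whether or not a real message is queued, so an observer can’t tell real traffic from cover traffic.

```python
import os
import queue
import time

PACKET_SIZE = 1024      # every transmission is padded to the same fixed size
INTERVAL_SECONDS = 30   # and goes out on the same fixed schedule

outbox: "queue.Queue[bytes]" = queue.Queue()

def next_packet() -> bytes:
    """Return a queued message padded to PACKET_SIZE, or random cover traffic."""
    try:
        message = outbox.get_nowait()
    except queue.Empty:
        return os.urandom(PACKET_SIZE)  # dummy packet: indistinguishable on the wire
    # A real protocol would split longer messages into several fixed-size
    # chunks and recombine them on the receiving end.
    return message.ljust(PACKET_SIZE, b"\x00")[:PACKET_SIZE]

def run(send):
    # Transmit on every tick, real message or not, so traffic volume never changes.
    while True:
        send(next_packet())
        time.sleep(INTERVAL_SECONDS)
```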
Yeah, and it was a one-off restore, so others mentioning self-hosting will still be taken down as long as that policy remains.
Yeah, one thing I find these kinds of tools good for is warranty tracking if something breaks, and insurance claims if there’s a fire or robbery or something.
Personally, I find Traefik much simpler than Nginx, especially with Kubernetes but even with plain Docker, though it’s definitely not as performant. That’s balanced by the fact that it does a lot of automatic detection and has dynamic config loading, so I don’t have to break other services when changing configurations.
Yeah, video streaming is not a good thing to put on a limited-bandwidth server, whether it serves the video directly or passes the data as a VPN or proxy.
Your best bet would be to set up a reverse proxy on your router, have that accept all inbound requests, and direct them to the correct internal server and port.
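For illustration only, a toy host-based reverse proxy (hypothetical hostnames and internal addresses; in practice you’d run Nginx, Traefik, Caddy, or HAProxy on or behind the router rather than rolling your own):

```python
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import Request, urlopen

# Map the Host header of inbound requests to internal server:port (made-up addresses).
ROUTES = {
    "media.example.com": "http://192.168.1.10:8096",
    "wiki.example.com": "http://192.168.1.20:3000",
}

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = self.headers.get("Host", "").split(":")[0]
        upstream = ROUTES.get(host)
        if upstream is None:
            self.send_error(502, "Unknown host")
            return
        # Forward the request to the internal server and relay the response body.
        with urlopen(Request(upstream + self.path)) as resp:
            body = resp.read()
        self.send_response(resp.status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Port 80 needs elevated privileges; a real proxy also handles TLS, headers, and errors.
    ThreadingHTTPServer(("0.0.0.0", 80), ProxyHandler).serve_forever()
```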
The real problem is that the rules change every year, so the software has to be constantly updated, and that sometimes requires insider information about what changes are coming. Often the IRS publications aren’t available until the last minute or later, definitely not in enough time for proper quality processes. So, while simple returns can sometimes be done with software like this, a lot of people rely on the software or agencies to know all the new rules.
That being said, I would like it a lot if there were a way to file every single form but fill it out manually in the software, without the software doing any calculations. At least then you could file electronically regardless of which complex forms, worksheets, and sub-forms you need, since the software wouldn’t have to know about those things, just the forms you actually file. As it is, the only way to file these is with expensive software or on paper, which can take the IRS many months to process, and you could be on the hook for interest if you file something wrong and the IRS doesn’t reject it in time for you to correct it and resubmit before interest charges accrue.
Also, a lot of IRS processes require the software to be certified (or at least did the last time I looked at it) because their software isn’t sophisticated enough to validate the complex forms, so getting that certification might be difficult for FOSS software. I’ll be interested to see how that plays out.
Mobilizon works well for me. I only wish more organizers used it so I could get events from local communities without having to enter them myself.
There are three points I could make:
Most software that is not free these days is also stealing all your private data. The value in these applications is generally greatly reduced, and in many cases, truly free alternatives exist, so the need to pirate should be much reduced from the past.
Where the first point doesn’t apply, there is usually a reason. Often the company has used its monopoly power to force people to use its software in order to do their job or to interact with government agencies (Adobe is one that often comes to mind). In that case, the ethics of the situation IMHO mean that pirating is OK. If the company is doing unethical things to force you to buy something, then doing something unethical to avoid paying for it is, in my opinion, an acceptable exception. The person would not be buying the software if they weren’t forced to, and purchases should not be forced.
Access for the poor is another case where I don’t see a problem. The poor will never be able to afford the software, so no one is losing money on the sale, and it only benefits the company to have people using it if it’s a locally running application. There may be some concerns if essential services are involved that require servers or other systems that have to be maintained by the vendor, but otherwise, Windows having been pirated for decades made it ubiquitous. Without that, poor people likely would never have touched Windows and would have learned Linux or Mac or something else instead, and Windows wouldn’t have as many people locked in as it does now. So, for the poor, assuming it’s software that runs locally, I see no issues from an ethical standpoint in general.
These are just my opinions, but I’m not alone. And this is not to be used as justification for specific actions, just very general points about the ethics of software piracy. For reference, I’ve done a lot of research on software ethics from both the user and vendor side and used to run a nonprofit on this subject.
There’s a plugin for compose, but podman itself does have some differences here and there. I’m starting to migrate my own stuff as Docker is getting more money hungry. Wonder if they’ll try to IPO in a few years. Seems like that’s what these kinds of companies do after they start to decline from alienating users. Just wish that Portainer and Docker hadn’t killed all the GUIs for Docker, and that Swarm were better supported.
The company I work for has also required us to migrate from Docker as the hub and desktop app are no longer totally free. I expect more and more limitations will show up on the free versions, as is usually the case with companies like this.
If the meter is plugged into the UPS, then the UPS has nothing to do with the power flowing into the meter. Power is “pulled” not “pushed” to devices in that a device supplying power can limit the amount of power provided, but can’t increase it beyond what the devices request.
Just like with plumbing: the water company can’t force your faucets to open and use more water. Now, they could increase pressure and break pipes; similarly, the UPS could provide the wrong voltage and short or burn out wires or devices, causing them to draw more, but that is unlikely to be the issue here. As long as voltage is constant, amperage (the other component in wattage) is pulled, not pushed.
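To put made-up numbers on it: with the voltage held steady, the wattage only goes up because the load pulls more current.

$$P = V \times I:\qquad 120\,\mathrm{V} \times 0.5\,\mathrm{A} = 60\,\mathrm{W},\qquad 120\,\mathrm{V} \times 1.0\,\mathrm{A} = 120\,\mathrm{W}$$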
What you’re seeing in the input load, if it matches what is flowing out of the meter, is some device requesting more power and thus more power flowing into the UPS to be passed along to those devices. It’s not the UPS forcing something to use power, which isn’t possible as explained above, and it’s not the UPS itself using power, because the meter has no view of what the UPS uses, only of things plugged into the meter.
So, there must be something else using the power. Likely the devices, even if they aren’t really doing anything you consider significant, are doing something: maintenance, checking for updates, the monitoring processes requesting information from the devices since the TrueNAS server is on that end, etc. You’d need to put a meter on each device to determine what is drawing the power specifically.
Also, does the power meter only display power used by devices plugged into it, or does it also display its own power usage? It could be that the plug itself is using WiFi or something to communicate with external services to log that data. But that would be quick bursts.
Also, without putting a meter on each device, this is probably cumulative. For example, if the NAS is requesting info for monitoring the network, that would spin up the processors on the RPi and cause the switch to draw more power as it transmits that information across the network. Again, this should only be small bursts, but it’s also possible the devices are not sleeping properly after whatever process wakes them, so they continue to run their processors at higher amperage for some time. Tweaking power profiles with something like tuned on Linux, or similar, can help make things sleep more aggressively, with the drawback that they take some amount of time to spin back up when needed.
If you’re talking about TOTP exclusively, that only needs the secret and the correct time on the device. The secret is cached along with the passwords on the device.
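Roughly what the app does locally, as a minimal RFC 6238 sketch with a made-up example secret; once the secret is cached, nothing here needs a network connection, just a reasonably accurate clock:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """TOTP: HMAC-SHA1 of the current 30-second counter, keyed by the shared secret."""
    key = base64.b32decode(secret_b32.upper())
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Example with a well-known test secret, not a real one.
print(totp("JBSWY3DPEHPK3PXP"))
```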
LLMs are perfectly fine, and cool tech. The problem is they’re billed as actual intelligence or things that can replace humans. Sure, they mimic humans well enough, but it would take a lot more than just absorbing content to be good enough to replace a human rather than just aiding one. Either the content needs to be manually processed to add social context, or new tech needs to be made that includes models for how to interpret content in every culture represented by every piece of content, including dead cultures whose work is available to the model. Otherwise, “hallucinations” (i.e., misinterpretation and thus miscategorization of data) will make them totally unreliable without human filtering.
That being said, there are many more targeted uses of the tech that are quite good, but always with the need for a human to verify.
There’s no need to have vaultwarden up all of the time unless you use new devices often or create and modify entries really often. The data is cached on the device and kept encrypted by the app locally, so a little downtime shouldn’t be a big issue in the large majority of cases.
A desktop environment is a waste of resources on a system where you’ll only use it to install and occasionally upgrade a few server applications. The RAM, CPU power, and electricity used to run the desktop environment could be instead powering another couple of small applications.
Self-hosting is already inefficient with computing resources, just like everyone in a city building their own separate infrastructure would be less efficient than sharing it. The difference is that city infrastructure is under shared ownership, whereas most online services are not owned by their users, so self-hosting makes sense but requires finding extra efficiencies.
How do you connect? Is there a domain? Is that domain used for email or any other way that it might circulate?
Also, it depends on whether the IP address was used in the past for something that made it worth targeting. Do you use that IP address outbound a lot, i.e., do you connect to a lot of other services, websites, etc.? And finally, does your ISP have geolocation blocks or other filters in place?
It’s rare for a process to just scan through all possible IP addresses to find a vulnerable service; there are roughly 4.3 billion IPv4 addresses, and that would take a very long time. Usually, attackers use lists of known targets or scan through the addresses owned by certain ISPs. So if you don’t have a domain, or that domain is not used for anything else, and your IP address has never gotten onto a list in the past, then it’s less likely you’ll get targeted. But that’s no reason to lower your guard. Security through obscurity is only a contributory strategy. Once that obscurity is broken, you’re a prime target if anything is vulnerable. New targets get the most attention: they tend to be the easiest to get lots of goodies out of, but they often fix their vulnerabilities once discovered, so the opening has to be used fast. Like the person who lives on a side street during trick-or-treat who gives out handfuls of candy to get rid of it fast enough. Once the kids find out, they swarm. Lol
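For a rough sense of scale (the probe rate is just an assumption for illustration):

$$2^{32} \approx 4.3 \times 10^{9}\ \text{addresses},\qquad \frac{4.3 \times 10^{9}}{10^{3}\ \text{probes/s}} \approx 4.3 \times 10^{6}\ \text{s} \approx 50\ \text{days}$$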
At work we have 6 environments other than production. At home, just one. I created a way to ease deployment of the environment from scratch using a k0sctl config and argocd, and the data gets backed up regularly in case I need to restore that, too.
Note that it’s often more efficient to proactively move infrequently accessed memory for background tasks to swap, rather than having to swap it out at the moment something else requires the memory, which delays the application trying to get the RAM, especially on a system with lower total RAM. This is the typical behavior.
However, if you need background tasks to have more priority than foreground tasks, or it truly is a specific application that shouldn’t be using swap and should be quickly accessible at all times, or if you need the disk space, then you might benefit from reducing the swap usage. Otherwise, let it swap out and keep memory available.
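If you do decide to reduce swap usage, the main knob on Linux is vm.swappiness; a trivial sketch to check the current value (assuming a standard Linux box):

```python
# Read the kernel's swappiness setting; lower values make the kernel less
# eager to move idle pages to swap (the usual default is 60).
from pathlib import Path

swappiness = Path("/proc/sys/vm/swappiness").read_text().strip()
print(f"vm.swappiness = {swappiness}")
```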
I mean, an LLC is just a nice option if you want it to be easy to transfer to someone else next time, so they don’t have to go through any hassle. Adding someone to an LLC to give them control over the assets is just easier than if an individual owns those assets.
But this all comes down to ownership. Someone owns the rights to the domain, and Sonatype obeys that ownership. So it really comes down to how the owner wants to handle it. And in the US anyway, lawyers aren’t really required for an LLC, depending on the state you live in. In many states it’s just a couple of simple documents and a small fee. That’s why LLCs are used by rich people to hide their money: it’s cheap and easy. I’ve done it many times in multiple states for various projects and never had any legal background. The nonprofit part is a little more work, but as long as you aren’t bringing in any money, it’s not necessary. Still easy in practice, but more research to figure out. Also, it comes with a lot of benefits, like free access to a lot of stuff, including some from Sonatype. But again, not required, just thinking ahead and how I would do it.
First step would be just to contact the domain owner. If they are no longer interested in owning that asset, they may just give it to you. If they are unresponsive and the domain is not in use for anything else, you could also contact the registrar and report it; if the registrar can’t contact the domain owner either, there’s a possibility they may allow you to purchase it, depending on their policies.
Again, don’t get discouraged, and I’m totally willing to give pointers if you decide to go the nonprofit LLC route, but first, just contact the owner and maybe they’ll just give you the login for the domain registrar or if they don’t want to give up the ownership of the domain, maybe just authorize you with Sonatype to publish the artifacts. Essentially, because it’s an ownership issue, the owner needs to be involved.
Yeah, companies have abused that to release buggy, incomplete products faster and only make the software stable and feature-complete if they make a good profit.