

Fuck if this isn’t the truth… Saying this as a Sr. SRE with no degree or certs.
I have a function called up. I do up X, where X is the number of directories I want to go up.
up() {
    # No argument: behave like a plain `cd ..`
    if [[ $# -eq 0 ]]; then
        cd ..
        return 0
    fi
    # Build "../../..." with $1 repetitions, then cd into it
    local path i
    for (( i = 0; i < $1; i++ )); do
        path+=../
    done
    cd "$path"
}
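For example (the paths here are just illustrative):

$ pwd
/home/user/projects/app/src/components
$ up 2
$ pwd
/home/user/projects/app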
Don’t overthink this. Just start using something.
I used to be in love with Awesome, but I think it’s been more than 10 years since I last used it. I remember the software being wonderful and the people in the community being stereotypical, smug, “rtfm” types… That was more frustrating than anything else about Awesome.
Agreed. I could run water sensors and solenoid valves for my basement water heater off of an arduino or rpi. I could also use a commercial product that has a warranty and a product engineering team and a QA department and etc etc…
I’m going commercial. The potential for damage to be done is too high for some hack job.
I’ve been in FOSS for more than 20 years but honestly find the absolutism insufferable. It’s not always practical and there are more important hills to die on.
Mostly as kodi/plex front ends. I’ve set them up as a kubernetes cluster in the past but they didn’t have enough ram to run my torrent client. Now I just use an old Thinkpad running talos.
I kinda don’t care. The providers do all of the work anyway and, I think more importantly, terraform still feels like transitional tech. I might use it to stand up an initial working cluster but, in the long run and if given the choice, I’d want to use something closer to Crossplane for managing infrastructure.
Terraform is still quite manual and doesn’t mandate consistency… You have to build automation around it and because drift is so easy it results in a system that can’t just be fully automated… You always have to check to see if changing a simple resource tag is going to revert a manual IAM permissions change that was made to a service account 3 weeks ago…
I’ve been using terraform almost daily for years but I wouldn’t be sad if it stopped existing.
Certbot in cron if you’re still managing servers.
I’m using cert-manager in kube.
I haven’t manually managed a certificate in years… Would never want to do it again either.
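A minimal sketch of the cron side, assuming certbot is on the PATH (the schedule itself is arbitrary):

# Try a renewal daily; certbot only renews certs that are close to expiry
0 3 * * * certbot renew --quiet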
It auto discovers machines/instances/VMs/containers in the mesh and figures out the secure routing on the fly. If you couldn’t ensure a consistent IP from the home address it wouldn’t matter… The service mesh would work it out.
It is probably overkill for this project though… Something to think about…
With Prometheus I would add a section to the scrape config to rewrite the labels attached to each metric. Does such a thing exist for telegraf? I’ve never used it.
Or could you change the grafana query to just aggregate the values for all pods in that deployment?
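For the second option, assuming the metrics end up in Prometheus, the panel query could sum across pods with something like this (the metric and label names are placeholders, substitute your own):

sum by (deployment) (rate(http_requests_total{deployment="my-app"}[5m]))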
Istio is a service mesh. You basically run proxies on the vps and the rpi. The apps make calls to localhost and the proxy layer figures out the communication between each proxy.
Duck DNS is just a dynamic DNS service. It gives you a stable address even if you don’t have a static IP.
This would be nice because I don’t need a static IP and I don’t have to leak my IP address.
How does the VPS know how to find your rpi?
Could you not just use something like Duck DNS on a cronjob and give out that URL?
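The cronjob itself is just a periodic HTTP call to their update API; a sketch, with the domain and token being placeholders you’d get from your Duck DNS account:

# Refresh the Duck DNS record every 5 minutes with this machine's current IP
*/5 * * * * curl -s "https://www.duckdns.org/update?domains=YOURDOMAIN&token=YOURTOKEN&ip="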
I would also need to figure out how to supply ejabberd with the correct certificates for the domain. Since it’s running on a different computer than the reverse proxy, would I have to somehow copy the certificate over every time it has to be renewed?
Since the VPS is doing your TLS termination, you would need an encrypted tunnel of some sort. Have you considered something like Istio? That provides mTLS out of the box really… I’ve never seen it for this kind of use case but I don’t see why it wouldn’t work.
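On the copying question itself: certbot can run an arbitrary command after each successful renewal via a deploy hook, so one low-tech option is to scp the files over. A sketch (domain, host, and paths are all placeholders):

# Push the renewed cert to the ejabberd host after each successful renewal
certbot renew --deploy-hook 'scp /etc/letsencrypt/live/example.com/*.pem user@ejabberd-host:/etc/ejabberd/certs/'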
Figured this would be one of the responses. Thanks. I don’t interact with node very often. I assumed there was a better option but wasn’t sure which… This is just the first result.
You can do it bro. Dockerfiles are basically just shell scripts with a few extras.
It uses npm to build, so start with a Node base image. You can find them on Docker Hub. Alpine-based images are a good starting point.
FROM appdynamics/nodejs-agent:23.5.0-19-alpine
# Clone and build (assumes git is in the image; add `RUN apk add --no-cache git` if not)
RUN git clone https://github.com/stophecom/sharrr-svelte.git && \
    cd sharrr-svelte/ && \
    npm install && \
    npm run build
If you need to access files from outside of the container, include a VOLUME line. If it needs to be accessible from a specific network port, add an EXPOSE line. Add a CMD line at the end with whatever command needs to run to start the process.
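For example (the volume path, port, and start command are assumptions; check the project’s package.json and README):

VOLUME /data
EXPOSE 3000
CMD ["npm", "run", "start"]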
Save your Dockerfile and build.
docker build . -t my-sharrr-image
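Then run it, publishing whatever port you exposed (3000 being the assumption from above):

docker run -p 3000:3000 my-sharrr-image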
There are build instructions in the readme. What’s stopping you?
I would love an Onion for software. This was great.
Yep. IO.
OP, this might be overkill for you but it might be worth standing up a grafana/prometheus stack… You’d be able to see this stuff a lot faster and potentially narrow in on a root cause.
Ages ago I did this with some cli tool that found episode/movie metadata stored in themoviedb or somewhere and just built a shell script around it. I don’t remember the name unfortunately. Now I just let Plex manage it and I don’t bother.
This looks like it might work though:
A small container running in kubernetes on an old laptop.