

I paid for the lifetime membership ~6 years ago so I’m going to stick with it. Plus I just use it for my own home. It’s not like I’m serving a bunch of other clients. But I’ll switch to Jellyfin if the lifetime membership ever gets taken away.
In a nutshell, if your app isn’t able to make a direct connection to your Plex Media Server when you’re away from home, we can act as sort of a middle man and “relay” the stream from your server to your app. To accomplish this, your Plex Media Server establishes a secure connection to one of our Relay servers. Your app then also connects securely to the same Relay server and accesses the stream from your Plex Media Server. (In technical terms, the content is tunneled through.)
So, your Plex Media Server basically “relays” the media stream through our server so that your app can access it since the app can’t connect with your server directly.
Source: https://support.plex.tv/articles/216766168-accessing-a-server-through-relay/
It’s not a requirement to stream, and it’s sort of dumb that they’re lumping this relay service in with remote streaming. Remote streaming should be allowed for free, even if you are not a subscriber. The relay should be the paid service, which makes sense since it consumes their bandwidth. But if it’s a direct connection to my server, it should be free.
That being said, I understand how Plex may have built some technical debt into this relay system. It might be hard for them to decouple the relay from the remote streaming. What they should have done is:
We are removing the relay service as a free service, but you can still do remote streaming with a direct connection.
And they should have built their architecture in a way that’s easy to decouple the two services.
That’s true for PC gaming but might not be true for these NUCs. I’m not an expert but I wouldn’t be surprised if they shaved costs with bulk purchases of RAM and SSDs for these devices. Regardless, I was just addressing a point you made about being forced to buy the components yourself which isn’t the case. Pricing is probably going to vary wildly depending on what you pick and you might be able to save money if you’re ok with less RAM or less storage.
You shouldn’t have to pick it all separately. Some NUCs are “barebones” and others come with RAM, CPU, storage, etc. This Amazon link claims the ASUS NUC comes with the RAM, SSD, etc. pre-installed.
That being said, any NUC’s integrated GPU is likely going to be far superior to the RPi5’s CPU encoding (which I assume was your previous setup).
If you want to spend ~$1000, you might be better off with just going straight for the dedicated NAS.
The app isn’t available on as many platforms. The original comment claimed the TV their MIL uses doesn’t have a Jellyfin app and would require side loading. I would argue that’s a pretty big barrier for most people.
Why didn’t you like HashiCorp’s Vault? I want to know for my own edification.
I hope so. I don’t want to manage two different address spaces in my head. I prefer if one standard is just the standard.
Display and layout rules aren’t difficult at all. Maybe I’m just not experienced enough. I’ve been a web dev for nearly a decade now and I feel like I’ve got the hang of it. That being said, I don’t work on projects that have to work on everything from a Nokia to an ultrawide monitor. We shoot for a few common sizes and hope the edge cases in between clear nicely. What is an example of something that wraps randomly?
Genuinely, though, CSS is fairly clear cut about the rules of positioning and space. Relative positioning is one of the most important concepts to master since it allows things to flow via the HTML structure rather than extra CSS. Fixed positioning is as if you had no positioned container other than the window itself. Absolute positioning is a little weird, but it’s just like fixed positioning except it anchors to the nearest positioned ancestor (usually one with relative positioning) instead of the window.
Everything else is incredibly straightforward. Padding adds space within a container. Margins add space outside a container. Color changes text color. Background-color changes the background color of an element.
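A minimal sketch of those four properties together (the class name is made up for illustration):

```css
/* Hypothetical card element to show padding vs. margin */
.card {
  padding: 16px;          /* space inside the card, between its edge and its content */
  margin: 24px;           /* space outside the card, pushing neighbors away */
  color: #333;            /* text color */
  background-color: #eee; /* fill behind the content and padding */
}
```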
Top, left, right, and bottom dictate where the element should sit after the default layout rules are applied. So if you have a relative div inside a parent which is halfway down the page, top/right/left/bottom would nudge the element away from the spot it would normally occupy within the parent. If you made the div fixed, it would be positioned relative to the window instead.
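As a rough illustration of how the same offsets behave under each positioning mode (all selectors here are hypothetical):

```css
/* Parent establishes the positioning context for absolute children */
.parent {
  position: relative;
}

/* Nudged 10px down and right from where it would normally flow */
.nudged {
  position: relative;
  top: 10px;
  left: 10px;
}

/* Pinned to the top-right corner of .parent (nearest positioned ancestor) */
.badge {
  position: absolute;
  top: 0;
  right: 0;
}

/* Pinned to the bottom of the window, ignoring all ancestors */
.toolbar {
  position: fixed;
  bottom: 0;
}
```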
Lastly, if you’re designing a webpage just think in boxes or rows and columns. HTML can define 75% of the webpage structure. Then with just a bit of CSS you can organize the content into rows/columns. That’s pretty much it. Most web pages boil down to simple boxes within boxes. It just requires reading and understanding but most people don’t want to do that to use CSS since it feels like it should just “know”.
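The boxes-within-boxes idea usually comes down to a couple of flex containers; a sketch with made-up selectors:

```css
/* The page is rows stacked vertically */
.page {
  display: flex;
  flex-direction: column;
}

/* Each row lays its children out as columns, left to right */
.row {
  display: flex;
  flex-direction: row;
  gap: 16px;
}

/* Each column takes an equal share of the row's width */
.row > .column {
  flex: 1;
}
```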
As someone who has built Qt, Swing, and JavaFX applications, I way prefer the separation of concerns that is afforded us via HTML, JS, and CSS.
I think this article does a good job of asking the question “what are we really measuring when we talk about LLM accuracy?” If you judge an LLM by its hallucinations, its ability to analyze images, its ability to critically analyze text, etc., you’re going to see low scores for all LLMs.
The only metric an LLM should excel at is “did it generate human readable and contextually relevant text?” I think we’ve all forgotten the humble origins of “AI” chat bots. They often struggled to generate anything more than a few sentences of relevant text. They often made syntactical errors. Modern LLMs solved these issues quite well. They can produce long form content which is coherent and syntactically error free.
However the content makes no guarantees to be accurate or critically meaningful. Whilst it is often critically meaningful, it is certainly capable of half-assed answers that dodge difficult questions. LLMs are approaching 95% “accuracy” if you think of them as good human text fakers. They are pretty impressive at that. But people keep expecting them to do their math homework, analyze contracts, and generate perfectly valid content. They just aren’t even built to do that. We work really hard just to keep them from hallucinating as much as they do.
I think the desperation to see these things essentially become indistinguishable from humans is causing us to lose sight of the real progress that’s been made. We’re probably going to hit a wall with this method. But this breakthrough has made AI a viable technology for a lot of jobs. So it’s definitely a breakthrough. I just think either infinitely larger models (for which we can’t seem to generate the data) or new models will be required to leap to the next level.
#moustache {
  position: absolute;
  bottom: 10px;
  left: 0;
  right: 0;
  margin: 0 auto; /* only centers if the element has an explicit width */
}
If that doesn’t work:
#moustache {
  position: absolute;
  bottom: 10px;
  left: 50%;
  transform: translateX(-50%);
}
Relative positioning is preferred but not always an option if the parent (the face) is itself positioned absolutely.
Edit: adjusted bottom from 0 -> 10px since 0 would be at the bottom of the chin but there is obviously some padding to bring it nearer the lip
They probably have tutorials on their website. Most MVC frameworks have pretty decent guides.
Cause if you don’t forget your own stainless steel, you can’t be trained on proper defibrillator techniques?
I was comparing it to civil or mechanical engineering. I agree that programming/software is growing and “infiltrating” our lives. That’s why I think it will become a licensed/certified term in the future. Software engineer will require a cert, and some products will require certified engineers. Whereas web app developers (most likely) will not use that title most of the time, and we will just bifurcate those who work on “critical software” and those who do not.
Well that’s my point. The term “engineer” is protected in a lot of other industries but not software.
Software engineering is just what any “engineering” field would be if they didn’t have standards. We have some geniuses and we have some idiots.
Mechanical engineers, civil engineers, electrical engineers, etc. are often forced to adhere to some sort of standard. It means something to say “I’m a civil engineer” (in most developed nations). You are genuinely liable in some instances for your work. You have to adhere to codes and policies and formats.
Software engineering is the wild west right now. No rules. No standards. And in most industries we may never need a standard because software rarely kills.
However, software is becoming increasingly important in our daily lives. There will likely come a day when similar standards take hold and the name “software engineer” is only allowed to those who adhere to those standards and have the proper certs/licenses. I believe Canada already does this.
Software engineers would be responsible for critical software, e.g: ensuring phones connecting to an emergency operator don’t fail, building pacemakers, securing medical records, etc. I know some of these tasks already have “experts” behind them. But I don’t think software has any licensing/governing.
Directly opposed to “engineering” would be the grunt work which I do.
TBF if you want, you can have a bastion server which is solely whitelisted by IP to stream your content from your local server. It’s obviously a pivot point for hackers, but it’s the level of effort that 99% of hackers would ignore unless they really wanted to target you. And if you’re that high value of a target, you probably shouldn’t be opening any ports on your network, which brings us back to your original solution.
I, too, don’t expose things to the public because I cannot afford the more safe/obfuscated solutions. But I do think there are reasonable measures that can be taken to expose your content to a wider audience if you wanted.