Best thing is, they introduced some settings to turn that auto-conversion off and they don’t work 🤣 can’t make that stuff up.
Hetzner ❤️
I’m pretty sure it’s not `FALLS()` but `WENN()`, at least the last time I used Excel.
I’d suggest “vorzeichenbehaftete Ganzzahl” (maybe vbGanz) and “vorzeichenlose Ganzzahl” (vlGanz) 🤣 please don’t make that a thing
Does that get translated if someone else with a different language opens that file?
Could be because Ganz is short for Ganzzahl and a noun.
Don’t use floats when you need accurate results
Floats are accurate. Could you name a situation (except money) where you think floats are not accurate enough to handle it?
Sure, just assign them a random Greek letter and call it a day 🤣
Not only for audio, but everything that doesn’t have to be an exact base 10 representation (like money). Anything that represents something “analog” or “measured” is perfectly fine to store in a float. Temperature, humidity, wind speed, car velocity, rocket acceleration, etc. Calculations with floats are perfectly accurate and given the same bit length are as accurate as decimal types. The only thing they can’t do is exactly(!) represent base 10 decimals, but for a very large amount of applications that doesn’t matter.
That’s not really true and it depends on what you mean. If your decimal datatype has the same number of bits it’s not more accurate than base 2 floats. This is often hidden because many decimal implementations aren’t 64 bit but 128 bit or more. But what it can do is exactly represent base 10 numbers which is not a requirement for a lot of applications.
You can use floats everywhere where you don’t need numbers to be base 10. With base 2 floats the operations couldn’t be more accurate given the limit of 64 bits. But if you write `f64 x = 0.1;` and one assumes that the computer somehow stored 0.1 inside x, they already made a wrong assumption. 0.1 can’t be converted into a float exactly because it’s a periodic fraction in base 2. A very, very pedantic compiler wouldn’t even let you compile that and would force you to pick a value that actually can be represented.
Down the rabbit hole: https://zeta.one/floats-are-not-inaccurate/
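If you want to see it for yourself, here’s a minimal Rust sketch (Rust’s f64 standing in for the `f64` above, assuming an IEEE754 platform) that prints the value that actually ends up stored:

```rust
fn main() {
    // You wrote 0.1, but what got stored is the closest base 2 float.
    let x: f64 = 0.1;

    // The default formatter prints the shortest decimal that round-trips,
    // which hides the issue: this prints "0.1".
    println!("{}", x);

    // Asking for the full decimal expansion of the stored value shows it:
    // 0.1000000000000000055511151231257827021181583404541015625
    println!("{:.55}", x);
}
```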
But that’s not because floats are inaccurate. A very, very pedantic compiler wouldn’t even let you write `f64 x = 0.1;` because 0.1 (and also 0.2 and 0.3) can’t be converted to a float exactly (note that 0.5, 0.25, 0.125, etc. can be stored exactly!).
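For example (Rust again, IEEE754 doubles assumed), repeated addition makes the difference visible, because every partial sum of 0.125 is itself exactly representable while 0.1 never is:

```rust
fn main() {
    // 0.125 = 1/8 is a power of two, and so is every partial sum
    // (0.25, 0.375, ..., 1.0), so eight additions land exactly on 1.0.
    let eighths: f64 = (0..8).map(|_| 0.125_f64).sum();
    assert_eq!(eighths, 1.0);

    // 0.1 has no exact base 2 representation, so the rounding of the
    // literal shows up after ten additions and the sum misses 1.0.
    let tenths: f64 = (0..10).map(|_| 0.1_f64).sum();
    assert_ne!(tenths, 1.0);
    println!("{:.17}", tenths); // 0.99999999999999989
}
```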
The moment you write `f64 x = 0.1;` and expect the computer to store that inside a float you already made a wrong assumption. What the computer actually stores is the float value that is as close as possible to 0.1. But not because floats are inaccurate, but because floats are base 2. Note that floating point types in general don’t have to be base 2 - they can be any base (for example decimal types are base 10) but IEEE754 floats are base 2, because it allows for simpler hardware implementations. An even more pedantic compiler would only let you write floating point literals in binary like `10.10110001b` and make you do the conversion yourself, because it would make it blatantly obvious that most base 10 decimals can’t even be converted without information loss. So the “inaccuracy” is not(!) because float calculations are inaccurate but because many people wrongly assume that the base 10 literal they wrote can be stored inside a float.
Floats are actually really accurate (ignoring some Intel FPU hardware bugs). I skipped a lot of details which you can find here: https://zeta.one/floats-are-not-inaccurate/
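To make the base 2 point visible without inventing a binary literal syntax, a small sketch (Rust) that dumps the raw IEEE754 bit patterns:

```rust
fn main() {
    // 0.5 = 1.0 * 2^-1 is an exact power of two: all-zero mantissa.
    println!("{:016X}", 0.5_f64.to_bits()); // 3FE0000000000000

    // 0.1 in base 2 is the repeating fraction 0.0(0011)..., so the stored
    // mantissa is the repeating pattern 999...9A, cut off and rounded
    // after 52 bits.
    println!("{:016X}", 0.1_f64.to_bits()); // 3FB999999999999A
}
```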
Equipped with that knowledge your calculation `0.1 + 0.2 != 0.3` can simply be translated into: “The closest float to 0.1” + “The closest float to 0.2” is not equal to “The closest float to 0.3”. Keep in mind that the addition itself is as accurate as it can be: on every IEEE754 conforming implementation it returns the float closest to the exact sum of the two stored operands.
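In code (Rust, IEEE754 doubles assumed) that reads like this:

```rust
fn main() {
    // Each literal is converted to the closest f64, and the addition is
    // correctly rounded. The result just isn't the same f64 as the one
    // closest to 0.3.
    let sum = 0.1_f64 + 0.2_f64;

    println!("{:.17}", sum);     // 0.30000000000000004
    println!("{:.17}", 0.3_f64); // 0.29999999999999999
    println!("{}", sum == 0.3);  // false
}
```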
Floating point numbers and arithmetic are not inaccurate. They are actually very accurate, but a lot of developers have inaccurate assumptions about them. They can’t exactly represent base 10 decimals. That’s the only inaccuracy. If you take two floating point numbers and, let’s say, add or multiply them, the result is always the closest floating point representation of the real result.
The list of misconceptions wouldn’t reasonably fit in a comment, but if you are really interested and have a few minutes you could give that a read: https://zeta.one/floats-are-not-inaccurate/
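If you want to convince yourself of the “closest representation of the real result” part without pulling in an arbitrary-precision library, one way to sketch it in Rust: do the operation in f32 and compare it against the exact sum, which for these inputs fits into an f64 without any rounding:

```rust
fn main() {
    // IEEE754 requires +, -, *, / to return the representable value that is
    // closest to the exact mathematical result of the operation.
    let a = 0.1_f32;
    let b = 0.2_f32;

    // Converting f32 -> f64 is exact, and for these two values the f64
    // addition is also exact (their exponents differ by 1, so the exact sum
    // fits comfortably into f64's 53-bit mantissa).
    let exact_sum = a as f64 + b as f64;

    // Rounding that exact sum back to f32 must give the same bits as the
    // direct f32 addition - otherwise the addition wasn't correctly rounded.
    assert_eq!(a + b, exact_sum as f32);
    println!("f32 addition returned the correctly rounded result");
}
```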
Quite the contrary. I have a red-green deficiency (and so do about 6% of men). The Viridis color scale is pretty nice, but too many colors are hard to read for a lot of people.
I also think that this is what will happen (not only get discussed), but unless we master fusion it’s practically just treating a symptom: we’d have to keep doing it for quite a while, and the oceans will probably become too acidic.
There is a leading space in the string itself, so OP is either a top tier troll or put in no effort at all, and either way deserves the hate 😄
BirdNET result across the whole file: Rotkehlchen 100%, Rotrückenmeise 28%, Singdrossel 21%
That’s exactly what I did and never looked back. Just installed code-server + a few VS Code plugins. Automatically synced via some scripts that push and pull+merge git commits, done. No need for one of those million note taking apps. I also installed Polyglot Notebooks for VS Code to embed code into notes.
Node: You fill up ecosystems’ hard drives.
I’m pretty sure the mobile Teams UI is also just a web stack with a WebView wrapper. They want you to use the app so you have to install Intune and they get deeper into your system.