I told them to leave a pamphlet and fuck off, but the pamphlet was a fixed-point combinator that put them back on my porch again.
Well, that’s maddening.
Hmm, so kinda O(n^1.5) scaling? (Of the ratio between definitely required time and possibly required time, anyway, since a -110% error wouldn’t make sense)
Taking a wild guess at the source of the confusion, I should be clear that I love Haskell. It’s great for a lot of what I personally end up coding, namely math things that are non-heavy by computer standards but way too heavy to solve by hand. This isn’t naysaying.
I mean, you’re most likely not going to be using an SQL database for either of those applications (I realize I assumed that was obvious when talking about transactions, but perhaps that was a mistake to assume), so it’s not really applicable.
To be clear, I was introducing two new examples where I think this problem would come up. It could be that I’m missing something, but I’ve had this exchange a few times and been unimpressed by the solutions offered. The IO in those cases could get pretty spaghetti-ish. At that point, why not just use a state?
Like, using a list, which is a monad, you could code a Turing machine, and it could have a tape specifying literally anything. I can’t imagine that one would ever come up, though.
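Something like this toy tape (a sketch of my own, nothing standard) would be about all it takes, and the lists can hold whatever symbol type you like:

```haskell
-- A tape as two lists: everything left of the head, the cell under the
-- head, and everything to the right. Moving off either end pulls in a blank.
data Tape a = Tape [a] a [a]

moveLeft, moveRight :: a -> Tape a -> Tape a
moveLeft  blank (Tape []     x rs) = Tape [] blank (x : rs)
moveLeft  _     (Tape (l:ls) x rs) = Tape ls l     (x : rs)
moveRight blank (Tape ls x [])     = Tape (x : ls) blank []
moveRight _     (Tape ls x (r:rs)) = Tape (x : ls) r     rs

write :: a -> Tape a -> Tape a
write y (Tape ls _ rs) = Tape ls y rs
```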
Ironically, I actually probably wouldn’t use Haskell for heavy data processing tasks, namely because Python has such an immense ecosystem for it (whether or not it should is another matter).
It certainly is, haha. If it’s heavy, Python is just calling Fortran, C or Rust anyway.
I 'member.
So what’s the actual error margin for estimating feature implementation time? It’s going to be nearly the whole thing, right?
I’m not sure what you mean by “locality of reference”. I assume you mean something other than the traditional meaning regarding how processors access memory?
Shit! Sorry, got my wires crossed, I actually meant locality of behavior. Basically, if you’re passing a monad around a bunch without sugar you can’t easily tell what’s in it after a while. Or at least I assume so, I’ve never written anything big in Haskell, just tons of little things.
To give a concrete example:
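Something along these lines (the names are made up; it’s the shape that matters): a few layers of >>= without do-notation and it stops being obvious what’s flowing through the chain.

```haskell
import qualified Data.Map as Map
import Text.Read (readMaybe)

type Config = Map.Map String String

-- Three lookups chained with (>>=). Every step is a Maybe, but nothing at
-- the use site says so, and the deeper the chain gets the harder it is to
-- tell what is actually "inside" the monad at any given point.
portFor :: Config -> String -> Maybe Int
portFor cfg name =
  Map.lookup name cfg >>= \host ->
    Map.lookup (host ++ ".port") cfg >>= \raw ->
      readMaybe raw

-- The sugared version at least reads like the imperative code it replaces.
portFor' :: Config -> String -> Maybe Int
portFor' cfg name = do
  host <- Map.lookup name cfg
  raw  <- Map.lookup (host ++ ".port") cfg
  readMaybe raw
```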
Yeah, that makes tons of sense. It sounds like Transaction is doing what a string might in another language, but just way more elegantly, which fits into the data generation kind of application. I have no idea how you’d code a game or embedded real-time system in a non-ugly way, though.
It also has a type system that is far, far more powerful than what mainstream imperative programming languages are capable of.
Absolutely. Usually the type system is just kind of what the person who wrote the language came up with. The Haskell system by contrast feels maximally precise and clear; it’s probably getting close to the best way to do it.
Yeah, no side-effects seems like it could only improve readability.
It is, although I’m not sure it’s complete. A list is one kind of monad, despite working like non-mutable linked lists would in any other language. They just happen to behave monadically, providing an obvious and legal interpretation of the monad functions. Going off of OP you might think monads are all Maybe.
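A minimal example of what I mean by lists behaving monadically (nothing effectful or Maybe-ish anywhere in it):

```haskell
-- For lists, (>>=) is just concatMap, so do-notation reads like
-- nondeterministic choice over plain immutable data.
pairs :: [(Int, Char)]
pairs = do
  n <- [1, 2, 3]
  c <- "ab"
  return (n, c)
-- [(1,'a'),(1,'b'),(2,'a'),(2,'b'),(3,'a'),(3,'b')]

-- The same thing desugared:
pairs' :: [(Int, Char)]
pairs' = [1, 2, 3] >>= \n -> "ab" >>= \c -> return (n, c)
```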
I will say that the concept is overhyped at this point, at least in Haskell, and there’s a lot of monads available that do what plain functional code could but worse.
That’s a good rundown of the “why”. The thing is, there’s way more things that are monads than things that have to be looked at as monads. AFAIK it only comes up directly when you’re using something like IO or State, where the monad functions are irreversible.
From the compiler end, are there optimisations that make use of the monadic structure of, say, a list?
Whatever Haskell programmers decide to call a monad today. It’s wandered pretty far from whatever the mathematical definition is, despite insistences to the contrary.
(Technically, the requirement is to implement a few functions)
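For reference, this is roughly the shape of the classes involved in GHC’s Prelude (eliding defaults and the laws an instance is supposed to satisfy on top of them):

```haskell
class Functor f where
  fmap :: (a -> b) -> f a -> f b

class Functor f => Applicative f where
  pure  :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b

class Applicative m => Monad m where
  (>>=)  :: m a -> (a -> m b) -> m b
  return :: a -> m a
  return = pure   -- nowadays just a synonym for pure
```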
It heavily depends on the application, right? Haskell is life for algorithmically generating or analysing data, but I’m not really convinced by the ways available in it to do interaction with users or outside systems. It pretty much feels like you’re doing imperative code again just in the form of monads, after a while. Which is actually worse from a locality of reference perspective.
He’s not the first upstart dictator to try for autarky, so I’d also say ideology. There’s lots of things in the equation other than making money, basically.
And don’t forget Canadian oil and auto parts - losing those suddenly would pretty much kill whole sectors. Mexico controls the supply of vegetables.
Really, any idea that the (US) government only cares about money should be dead now, unless you’re willing to introduce massive amounts of conspiracy theory to explain the disconnect.
Yes, definitely. Rereading the StackOverflow answer, “stack compatible” just means it can mutate the stack frame in place without resizing it in the optimised code. There are a number of ways that handling tail (co)recursion sucks if you try to get around that. Here’s a Dr. Dobbs article about it.
I mean, “criticism” is a little extreme even, because it’s a humour post, and I was just riffing back.
Apparently GCC does indeed do tail-call optimization at -O2
Hmm, I wonder why it’s considered heavy enough to be saved for -O2. The concept of turning tail recursion into loops is simple.
But in that case, I’m not sure why the solution to the denial of service vulnerability isn’t just “compile with -foptimize-sibling-calls.”
Probably because some of the recursion involved is non-tail. Actually, it looks like GCC might still be able to handle cases of corecursion where the functions are “stack compatible”, but presumably most functions aren’t, and who knows what little knots they tied the parsing functions in this XML library into.
The basic definition would be something like use of a function in that function’s own code. It’s pretty easy to find examples that aren’t tail-recursive specifically, like mergesort, and cases of those that would overflow a hardware stack, like in OP. And that’s without looking at (mildly) exotic examples like the Ackermann function.
Basically, the “Please leave recursion to math and keep it out of (in particular C) software” in OP means don’t define functions using those functions. It’s pretty and it can work, but not reliably.
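To illustrate the split (Haskell just because it’s compact; the function names are mine): in the first function the recursive call is the entire result, so a compiler can turn it into a loop, while in the Ackermann function one recursive call feeds another, so the outer frame has to stay live.

```haskell
{-# LANGUAGE BangPatterns #-}

-- Tail recursive: the call to sumTo is the whole result of the branch,
-- so it can be compiled to a loop that reuses one stack frame.
sumTo :: Int -> Int -> Int
sumTo !acc 0 = acc
sumTo !acc n = sumTo (acc + n) (n - 1)

-- Not tail recursive: in the last equation the inner ackermann call has to
-- finish before the outer one can start, so deep inputs genuinely need a stack.
ackermann :: Integer -> Integer -> Integer
ackermann 0 n = n + 1
ackermann m 0 = ackermann (m - 1) 1
ackermann m n = ackermann (m - 1) (ackermann m (n - 1))
```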
I was answering a specific question put directly to me. There’s no “point”, exactly.
Who here is misunderstanding how computers implement recursion?
Taking a wild stab at how you might be reading this exchange, my original reply was about the title of the post, which implies a CompSci professor would be unhappy about someone criticising the use of recursion in code.
(in a language that doesn’t have tail-recursion optimization)
Wait, it doesn’t? I had kind of assumed GCC (for example) would do that at anything greater than -O0.
They don’t prevent it, they just don’t implement it. A real (physical) CPU is a fixed electrical circuit and can’t just divide in two the way it would have to for the ideal mathematical version of recursion. If you want a simple way to visualise this, how would you implement recursion (as opposed to just loops) on a Turing machine with tapes?
Different CPUs might have strategies to approximate certain kinds of recursion, but at some point my own knowledge ends, and there have been many different designs. Tail recursion in particular is usually just turned back into a loop by the compiler, and typical modern architectures implement a call stack at the hardware level, which allows you to do limited-depth recursion, but breaks like in OP if you try to go too deep.
Oh shit, there might actually be a guy with that name.