So what's the actual error margin when estimating feature implementation time? It's going to be nearly as large as the estimate itself, right?
The estimate is not a promise; it's a guess. I prefer to estimate in sprints because that's about the resolution we can have confidence in, but management wants hours, so my process is to estimate in sprints, convert at the number of hours in a sprint (73.5 for us), and add one extra sprint.
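In code, that process is just a multiply-and-pad. A minimal sketch, assuming the 73.5 hours/sprint figure from the comment above; the function name is illustrative:

```python
HOURS_PER_SPRINT = 73.5  # the commenter's figure

def estimate_hours(sprints: float) -> float:
    """Convert a sprint-level estimate to hours, padding with one extra sprint."""
    return (sprints + 1) * HOURS_PER_SPRINT

print(estimate_hours(2))  # a 2-sprint guess gets reported as 220.5 hours
```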
200% overruns are common, especially when requirements change significantly.
Wildly depends on the complexity of the feature. If it only takes 4 hours to implement, you might have a good enough idea of what needs to be done to estimate it with one-hour precision. That is, if you're only doing things you've done in a similar form before.
If the feature takes two weeks to implement, there are so many steps involved that there's a high chance one of them explodes in complexity. Then you might be working on it for six weeks.
But yeah, I also double whatever estimate I come up with, because experience shows that ends up being more accurate: I likely won't have all the complexity in mind, so in some sense my baseline assumed error is already 100%.
Hmm, so kinda O(n^1.5) scaling? (Of the ratio between definitely required time and possibly required time, anyway, since a −110% error wouldn't make sense)
Really not sure algorithmic complexity notation is the right way to express this. 😅
But if your assumed input unit is days, then I guess, yeah, that kind of works out: the two-week feature is about 10 working days, and 10^1.5 ≈ 32 days is roughly the six weeks above.
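For what it's worth, the thread's own anecdotes line up with that reading. A throwaway sketch (not a serious model) comparing the flat "double it" rule with the n^1.5 curve, taking estimates in working days as suggested:

```python
def doubled(estimate_days: float) -> float:
    return 2 * estimate_days      # flat 100% assumed error, as above

def superlinear(estimate_days: float) -> float:
    return estimate_days ** 1.5   # the tongue-in-cheek O(n^1.5) scaling

for est in (0.5, 10.0):           # a 4-hour task and a two-week task
    print(est, doubled(est), round(superlinear(est), 1))

# 0.5 days -> doubled: 1.0,  n^1.5: ~0.4  (within the one-hour precision)
# 10 days  -> doubled: 20.0, n^1.5: ~31.6 (roughly the six weeks above)
```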