Readit News
wilun commented on Not all bugs are worth fixing and that's okay   blog.bugsnag.com/applicat... · Posted by u/kinbiko
izzydata · 8 years ago
Those measurements inherently make no sense as you can't know unknown unknowns. Sure, for all intents and purposes if you never encounter a particular defect in a billion years of usage then a bug may as well not exist, but that doesn't mean it doesn't.
wilun · 8 years ago
Those measurements inherently make more sense than hand-waving; and although mathematically I agree with you, the world is not mathematically pure.

Regardless, I stand by my claim that implying it would be exceptional to write 100 lines of bug-free useful code is ridiculous. I'm not saying it is easy, nor that most 100-line chunks are written that way. Just that it is not only possible but accessible. Depending on the field it might be more or less difficult, but in general I suspect there are tons of 100-line chunks that have been developed correctly on the first try, and those metrics tend, informally I concede (but if you dig deep enough, what is ever formal enough?), to weigh more in favor of my viewpoint than in favor of the difficulty being astonishingly high.

wilun commented on Not all bugs are worth fixing and that's okay   blog.bugsnag.com/applicat... · Posted by u/kinbiko
izzydata · 8 years ago
I dare you to write 100 lines of useful code without a bug in it.
wilun · 8 years ago
Are you even trying?

A random search tells me that "The mean DD for the studied sample of projects is 7.47 post release defects per thousand lines of code (KLoC), the median is 4.3 with a standard deviation of 7.99." ( https://ieeexplore.ieee.org/document/6462687/ )

So clearly if you are careful and use state of the art practices, this is very doable.
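To put a rough number on it, here is a back-of-the-envelope sketch (my own, not from the paper): if defects landed independently at the paper's median density of 4.3 per KLoC, the defect count in a 100-line chunk would be roughly Poisson-distributed, and a substantial fraction of such chunks would come out defect-free even at that density.

```python
import math

# median defect density from the cited study: 4.3 defects per KLoC,
# so a 100-line chunk carries lambda = 4.3 * 100 / 1000 expected defects
lam = 4.3 * 100 / 1000

# under a (simplifying) Poisson assumption, P(zero defects) = exp(-lambda)
p_bug_free = math.exp(-lam)
print(round(p_bug_free, 2))  # ~0.65: most 100-line chunks would be clean
```

So even at the median measured density, roughly two out of three 100-line chunks would be bug-free, which is why the challenge is not as heroic as it sounds.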

Not only is this doable, but various individuals and teams throughout history have reached far lower defect densities. Hey, for all practical purposes, TeX is bug free, for example.

If you are not able to write 100 lines of useful code without a bug in it (not infallibly, but at least sufficiently often), maybe you should simply study and practice until you gain that ability.

wilun commented on Not all bugs are worth fixing and that's okay   blog.bugsnag.com/applicat... · Posted by u/kinbiko
ataggart · 8 years ago
>at a certain point, it’s too expensive to keep fixing bugs because of the high-opportunity cost of building new features.

While I may agree with this in the abstract, in practice most folks don't really know whether they're at that point. It also doesn't consider cumulative effects over time.

Bugs don't just affect application stability or user experience. A system that does not behave as designed/documented/expected is a system that will be more difficult to reason about and more difficult to safely change. This incidental complexity directly increases the cost of building new features in ways difficult to measure. Further, new features implemented by hacking around unfixed flaws will themselves be more difficult to reason about and more difficult to change, exacerbating the problem.

The larger the system grows over time, the more people working on it over time, the faster this incidental complexity problem grows over time. At a certain point, it's too expensive to not fix the bugs because of the increasingly high cost of building new features. At that point, folks start clamouring for a rewrite, and the cycle begins anew.

wilun · 8 years ago
If the only alternatives are a rewrite and letting the mess grow unfixed, then I'll take the rewrite any time and let the cycle continue.

The problem is: is your rewrite really going to be a full rewrite, or some kind of hybrid monster (at the architectural level, of course; there is no problem in reusing small independent pieces, if any exist)? Because you can easily fall into all the traps of both approaches if the technical side is not mastered well enough by project management...

wilun commented on Not all bugs are worth fixing and that's okay   blog.bugsnag.com/applicat... · Posted by u/kinbiko
ryandrake · 8 years ago
One of the toughest things I struggled with while transitioning from a larval junior developer to a senior tech lead to a project manager was the fact that (at least in the context of a for-profit business) not all bugs need to be fixed, even the ones you personally think are really really bad. The goal is to make money, not necessarily by producing the most perfect software. The quality bar needs to be high, but there is always a point beyond which the returns for fixing a bug are outweighed by the costs of fixing it: the direct engineering cost, the opportunity cost of not working on a feature, the cost of missing a deadline and not releasing in time for Christmas, the cost associated with the extra risk you’re taking by making a late change, etc. Good places judge all these costs, and the best ones have formal processes for judging lots of bugs at scale and constantly re-evaluate whether a bug should be fixed at this point in the project or not. Sometimes the clear right answer is “no.”
wilun · 8 years ago
That's fair, but one has to remember that some of the key points need to be balanced against each other, and that, as you said, "the quality bar needs to be high".

And I'm more for prioritizing not introducing bugs over fixing all the old ones. Which is challenging on, how should we call it?, "legacy" software. So that priority can and must be reversed temporarily when the "legacy" weighs too much.

So it's all very context dependent, and having nobody (or too few people) working on making things better when it's needed is not going to deliver any kind of velocity in the long term (and in those cases the short-term velocity is probably already way too low). Too bad for the mythical time to market...

So you have to be able to say no to bugfixes, but you certainly also have to be able to say no to the eternal rush of half-baked new features, when needed. A perpetual short-term obsession with the "opportunity cost of not working on a feature" can yield quite paradoxical results if you try to build those features on some kind of zombie legacy code (code that is only ever edited with disgust and great difficulty, but never seriously refactored).

Not only is this balance hard to achieve, but your role as a senior tech lead and project manager is certainly to consider the cleanup needs carefully, and to advocate for them when necessary, including by pushing back against feature-creep pressure. Because if you don't, most of the time nobody else will. As a tech lead, this means, among other things, that a black-box approach to parts of the maintained software is out of the question (of course you can delegate, but even then it's imperative to stay in the equation for that purpose, only with fewer details). Paradoxically, even if the quality is crap and the organization notices and tracks loads of bugs, most people will be happy the moment the bugs are triaged, assigned and eventually "fixed" with more horrible garbage (that is, with the impression that something is being done), rather than doing the right thing, which is to organize a deeper cleanup of the software.

I've got the impression that projects where this balance is achieved correctly are rare, but maybe that's just bad luck. In lots of cases the famous ones (I'm thinking at the level of Linux, Firefox, Python, etc., not just your random niche software) are actually not that bad, and their less balanced competitors have a much shorter lifespan...

wilun commented on Google warns Android might not remain free because of EU decision   theverge.com/2018/7/18/17... · Posted by u/sahin
Anon1096 · 8 years ago
Play services makes sure that things like fine location services work on your phone. It can't be uninstalled or disabled because it's tied much deeper into the OS than a normal app, and nothing else you can normally install offers the same functionality. (There are competitors like microG, but you can't install them without an unlocked bootloader.)
wilun · 8 years ago
Oh, the good old IE argument: "but it is technically integrated into the OS and provides all kinds of essential services, so we are not abusing our monopoly".

While it is not (it can be removed/replaced; the limitations preventing that are completely artificial, and this probably played a role in what was judged), and even if it were, things should have been bundled differently to begin with (and if they can't be, that can be considered a conscious decision potentially motivated by a desire to abuse a monopoly, so in all cases it should be redesigned).

So it's mostly same cause, same effects from a high-level overview, and I'm not surprised. Maybe the way to become compliant (after the pointless whining phase has passed) will even be similar? I'm not buying the business model argument. Google's browser, the Play Store and so on are now extremely well established and won't be abandoned in any kind of mass exodus any time soon. In ten years they might be challenged, but that's the fucking POINT: practical competition should be allowed.

It's astonishing that everybody and their dog was scandalized by MS's behavior at the time (and some still are today, despite present-day MS being quite different from the old one), while Google has somehow managed to be considered friendly despite doing exactly the same shit, if not worse, while simultaneously pretending not to be evil. Well, maybe evil is a strong word, and I concede that they never pretended not to be hypocrites :p

wilun commented on Why Isn't Debugging Treated as a First-Class Activity?   robert.ocallahan.org/2018... · Posted by u/dannas
dotancohen · 8 years ago
I hope that you're leaving a comment in the function noting why you think that NUL should not be checked. Also, depending on the field, you might want to consider defensive programming.
wilun · 8 years ago
The meaning of "defensive programming" is highly contextual even within a given "field". Also, major influential parts of the industry (the latest example being the C++ committee) are moving away from indiscriminate checks and exceptions, and toward contract programming.
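To make the distinction concrete, here is a toy sketch of the two styles (my own illustration, not the actual C++ contracts design): the defensive version silently absorbs bad input, while the contract version states a precondition and fails loudly when the caller breaks it.

```python
def mean_defensive(xs):
    # defensive style: tolerate bad input and return a fallback value;
    # the bug in the caller is masked and survives to be found later
    if xs is None or len(xs) == 0:
        return 0.0
    return sum(xs) / len(xs)

def mean_contract(xs):
    # contract style: the caller promises a non-empty sequence;
    # a violation fails immediately at the boundary where it happened
    assert xs, "precondition violated: xs must be a non-empty sequence"
    return sum(xs) / len(xs)
```

The defensive version happily returns 0.0 for `None`, which may look fine until that bogus mean corrupts something downstream; the contract version turns the same caller bug into an immediate, debuggable failure.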
wilun commented on Why Isn't Debugging Treated as a First-Class Activity?   robert.ocallahan.org/2018... · Posted by u/dannas
zamalek · 8 years ago
> WHY

If a developer doesn't check for a null value, are you really going to head to the documentation to find out why? Is the segfault really by design? It might sound facetious, but that's 90% of bugs: not specifically segfaults, but trivial mistakes, invalid values and logic problems. Only a debugger can show you the steps and conditions that lead to that invalid value. Sometimes, through debugging, you find a problem that does span multiple layers or components.

If something spans multiple layers or components then that's a design problem. When you are solving a bug like this you have to "debug" at a much higher abstraction level. You shouldn't use a debugger for this (as you've pointed out); you should use a team of peers (documentation alone is not the correct tool). That team might rely on documentation, but would have to keep in mind that documentation rots over time.

If you're solving the former with the processes of the latter, I'm surprised you get anything done. Documentation won't tell you where an invalid value originates from or why. Only extremely verbose logging or debugging can do that.

wilun · 8 years ago
> Only a debugger can show you the steps and conditions involved that lead to that invalid value.

No. Static analysis can, and it is actually what is used most of the time, either with an automated tool or with your brain.

Of course debugging is also needed, but you need the right mix of the two, and you are actually doing both even if you use a debugger a lot (at least I hope so; otherwise you are probably way too often not fixing your bugs correctly).

wilun commented on Why Isn't Debugging Treated as a First-Class Activity?   robert.ocallahan.org/2018... · Posted by u/dannas
wilun · 8 years ago
While we will probably always have to debug (and if not the "code", then a specification formal enough that it can be considered code anyway), there are different ways to approach its role in the software development lifecycle. At one extreme, one can "quickly" but somewhat randomly throw out lines without thinking much about them, then try the result and correct the few defects those few tries reveal (leaving dozens to hundreds of other defects to be discovered at more inconvenient times). At the other, one can think a lot about the problem, study the software where the change must be made, and carefully write code that implements exactly what is needed, with very few defects on both the what and the how side. Note that the "quick" aspect of the first approach is a complete myth (if taken to the extreme, and except for trivially short runs or when the result does not matter much), because a system cannot be developed like that in the long term without collapsing on itself. There will be either spectacular failures or unplanned dev slowdowns, and if the slowdown route is taken, the result will be poorer than if a careful approach had been taken in the first place, while the velocity might not even be higher.

Of course, all degrees exist between the two extremes, and going too far to either side for a given application is going to cause more problems than it solves (e.g. missing time-to-market opportunities).

Anyway, some projects, maybe those related to computer infrastructure (or any kind of infrastructure, really), are more naturally positioned on the careful track (and even then it depends on which aspect; for example, cybersecurity is still largely an afterthought in large parts of the industry). The careful track only needs debugging as a non-trivial activity when everything else has failed, so hopefully in very small quantities. That is not to say good tooling isn't needed when debugging really is the last resort. It is just that it is confined to unreleased side projects and tooling, or, when it happens in prod, it marks such a serious failure that, compared to other projects, those hopefully do not happen often. In those contexts, a project which needs too much debugging risks dying.

So the mean "value" of debugging might be somewhat smaller than the mean "value" of designing and writing code, and of otherwise organizing things so that we do not have to debug (that often).

wilun commented on Ancient “su – hostile” vulnerability in Debian 8 and 9   j.ludost.net/blog/archive... · Posted by u/l2dy
caf · 8 years ago
There is a model, it's just not particularly well publicised: a file descriptor is a capability.

That's it.

wilun · 8 years ago
Is it efficient and sufficient, though? And can and do we build real security on top of it?

This issue shows that systems have been built for decades with blatant holes, because the model was not taken into account even in core OS admin tools.

There is also the problem of the myth that everything is an fd. That has never been true, and it is less and less true as time passes.

Also, extensive extra security hooks, and software using them, are built, but not on top of this model.

Finally, sharing POSIX fds across security boundaries often causes problems, because of all the features available to both sides, whose security impact is not studied.

A model merely stating that POSIX fds are capabilities is wildly insufficient. So if this is the only one, then even in the context of pure POSIX we already know it is an extremely poor one.
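For completeness, here is a minimal sketch (my own illustration, not from the thread) of the one property the fd-as-capability model does capture: once you hold a descriptor, path-based permission checks no longer apply to it.

```python
import os
import tempfile

# create a file we control, then keep a read descriptor open on it
tmp = tempfile.NamedTemporaryFile(mode="w", delete=False)
tmp.write("secret")
tmp.close()
fd = os.open(tmp.name, os.O_RDONLY)

# revoke all path-based permissions: a fresh open() would now fail
# for an unprivileged user...
os.chmod(tmp.name, 0o000)

# ...but the descriptor we already hold keeps working: access flows
# from possession of the fd, not from the path -- the "capability" part
data = os.pread(fd, 6, 0)  # b'secret'

# cleanup
os.close(fd)
os.chmod(tmp.name, 0o600)
os.unlink(tmp.name)
```

Which is exactly why the model is seductive on paper, and exactly why it is insufficient alone: nothing in it says anything about all the other objects and side channels that are not fds.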

wilun commented on Ancient “su – hostile” vulnerability in Debian 8 and 9   j.ludost.net/blog/archive... · Posted by u/l2dy
wilun · 8 years ago
Posix TTYs, and more precisely stdin/stdout/stderr inheritance and the internals of fds, have a completely insane design. There is the famous divide between file descriptors and file descriptions. Hilarity can and will ensue in tons of domains. I nearly shipped code with bugs because of that mess (and could only avoid those bugs by using threads; you can NOT switch your std fds to non-blocking without absolutely unpredictable consequences), and obviously some bugs of this class can create security issues, especially, and in a way obviously, when objects are shared across security boundaries.
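A minimal demonstration of that descriptor/description divide (my own sketch): O_NONBLOCK is a status flag on the shared open file description, not on the descriptor, so flipping it through one descriptor flips it for every dup, including whatever other process inherited the same std fds.

```python
import os

r, w = os.pipe()
dup = os.dup(r)  # a new descriptor, but the SAME open file description

# set O_NONBLOCK through the original descriptor (fcntl F_SETFL under
# the hood); the flag lands on the shared open file description
os.set_blocking(r, False)

# the dup (or another process that inherited the fd, e.g. your shell
# sharing your terminal's stdin) observes the change too
dup_blocking = os.get_blocking(dup)
print(dup_blocking)  # False

os.close(r)
os.close(w)
os.close(dup)
```

This is the mechanism behind the classic footgun: make your own stdin non-blocking and every other process sharing that terminal suddenly gets EAGAIN from reads it never asked to be non-blocking.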

Long gone is the time when Unix people made fun of the lack of security in consumer Windows. Today there is no comprehensive model on the most-used "Unix" side, while modern Windows certainly has problems in its default configuration, but at least a security model exists with well-defined boundaries (and even if we can be sad that some seemingly security-related features are not officially considered security boundaries, at least we are not deluding ourselves into thinking that a spaghetti of objects without security descriptors can be shared and the result called a secure system...)

u/wilun

Karma: 386 · Cake day: April 24, 2016