Some users are not using a system because they like it but because their company bought it.
In those situations biz > user by definition, and the developers end up having to cater to the needs of their customers' middle management rather than the needs of the actual users. The price of not doing this is failing to win the contract. Users then get locked into whatever crap you have time to provide for them while you're busy implementing new features that middle management likes.
You essentially only need a nice-looking login screen and some sort of reporting; the rest... doesn't matter much.
I am being a bit cynical but it does pay as an engineer to know if that's fundamentally the kind of company you're in.
An online retailer, for example, is hypersensitive to its users and I know of at least one that has different versions of its website for different countries because they know that Germans like X and Americans like Y - small changes make a huge difference to sales.
Other companies have no sensitivity to the usability of their products because the people who buy their products are never the users.
I used to work at a company that sold SaaS to big corporations.
We needed to win contracts, so we needed to tick their checkboxes, but we also cared for the user experience (good UX was almost never a strict requirement from our customer).
Our competitors' software was very painful to use, so we wanted to differentiate in this regard.
This made our own lives easier, as the training was easier, and the users we interacted with (who usually had no say in whether our solution or our competitors' was bought for them) were happier and, where they could, recommended to their managers buying more stuff from us.
In the end this was 80% driven by pride (our software doesn't suck) and empathy (I couldn't stand using the software if it was as bad as our competitors') but to some extent this was also in our interest (especially in the long term, where your brand is built).
> the users we interacted with (who usually had no say in whether our solution or our competitors' was bought for them) were happier and, where they could, recommended to their managers buying more stuff from us
And then the happy users switch to another company and start recommending your stuff to their new managers. It's an extra source of leads and sales :-)
> This made our own lives easier as the training was easier
Unfortunately that works until some brilliant MBA shows up with a plan to sell 100x more consulting and training by making the product intentionally much more difficult to use.
I worked at a company in a market with similar purchasing dynamics, but we focused exclusively on the users. We committed to a "product-led growth strategy," which meant no salespeople. Product focused entirely on the user experience. The problem was, we weren't selling to users. We were selling to people in the users' organizations who purchased software. These people did not have the same job as the users, and they had no first-hand experience with the product.
It was a doomed approach. We needed salespeople to get inside the heads of the purchasers, learn how to explain the benefit to them, and coach the users on explaining the benefits to other people in their organization. We needed salespeople to bridge the chasm between the users and the purchasers.
Those who write the checks are also users, just with different expectations than the daily users. Management is yet another user group, distinct from both the check writers and the actual end users.
Typically middle management are users as well, but they are a minority of the user base and use a different set of features (like reporting). So now this becomes a question of which users are prioritized and finding a balance between prioritizing the experience of the small number of users who hold power over the rest, and keeping the product usable enough for the rest of the users to provide some value in data to management.
I've experienced this. I worked for a company that sold software to municipal governments. All that mattered was the Mayor's/Town Manager's/City Council's opinion. If the reports looked good and the price was right, they were going to renew.
I remember being in site meetings where people who used it every day would tell us to our face how terrible it was. Without fail, that site renewed with some promises to fix a couple specific bugs and a minimal price increase.
This is short-term thinking: users who hate software can voice enough complaints to get things changed, in at least some situations. (Not all: many SAP programs with garbage UIs exist.)
I wouldn’t say you have to cater to middle management instead of the end user. You just can if you want to. Of course you need to consider what middle management needs, since they’re paying you, but there is usually room for craftsmanship to bring a truly great UX to the end user. Most software engineers are lazy and lack a true sense of craft, so they usually skip building a great UX when it’s not a requirement.
In enterprise software, you cater exclusively to management, they will turn a blind eye to 99% of issues as long as the team is able to get the work done.
Take one look at EMRs and realize that it's sold and marketed to 0.01% of the hospital, despite 80% of the health system using it.
> Most software engineers are lazy and lack a true sense of craft, so they usually skip building a great UX when it’s not a requirement.
From my observations, it's usually not that devs are lazy or lack a sense of craft, it's that their employers are not willing to spend money building something that isn't actually a requirement.
> I know of at least one that has different versions of its website for different countries because they know that Germans like X and Americans like Y - small changes make a huge difference to sales
Can you speak any more to this? Do you or anyone else have any examples? I would be very interested to see.
This is a very narrow-minded take. Every big software success story (Gmail, Slack, Dropbox, Zoom…) was business-to-consumer, explicitly or in disguise.
Then again, I'm not saying much beyond that vendors screw up pricing when they choose a price other than "$0," which is an essential part of B2B-disguised-as-B2C software. Easy for me to say.
Anyway, the boomers you are talking about are retiring out of the workforce, and it will be more likely than ever that the audience will be extremely sensitive to UX, desiring something more like TikTok and Instagram than ever before.
Slack is a weird choice of example, and so is Zoom. They're both currently getting their lunches eaten by MS Teams for specifically the reasons laid out in the grandparent.
Slack in particular had to take a just-OK buyout from Salesforce and the product has seriously stagnated.
Most software is not written for such large audiences. Typical enterprise software is used by tens to hundreds of people. There is simply no budget to create user interfaces at TikTok standards.
TIL of ≹, which "articulates a relationship where neither of the two compared entities is greater or lesser than the other, yet they aren't necessarily equal either. This nuanced distinction is essential in areas where there are different ways to compare entities that aren't strictly numerical." (https://www.mathematics-monster.com/symbols/Neither-Greater-...)
The example z_1 ≹ z_2 for complex numbers z_1, z_2 is weird. Imo it would be clearer to state |z_1| = |z_2|, that is both complex numbers have the same absolute value.
> To conclude, the ≹ symbol plays a crucial role in providing a middle ground between the traditional relational operators.
As a PhD student in math, I have never seen it before. I do not believe that it plays any crucial role.
Reminds me of the concept of games in combinatorial game theory. They are a superset of the surreal numbers (which are themselves a superset of the real numbers) in which the definition of the surreal numbers is loosened in a way that loses the property of their being totally ordered. This creates games (read: weird numbers) which can be "confused with" or "fuzzy" with other numbers. The simplest example is * (star), which is confused with 0, i.e. not bigger or smaller than it; it's a fuzzy cloud around zero (notated 0║*). More complex games called switches can be confused with bigger intervals of numbers and are considered "hot". By creating numbers from switches you can create even more interesting hot games.
I find this concept is important in understanding causal ordering for distributed systems, for example in the context of CRDTs. For events generated on a single device, you always have a complete ordering. But if you generate events on two separate devices while offline, you can't say one came before the other, and end up with a ≹ relationship between the two. Or put differently, the events are considered concurrent.
So you can end up with a sequence "d > b > a" and "d > c > a", but "c ≹ b".
Defining how tie-breaking for those cases are deterministically performed is a big part of the problem that CRDTs solve.
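One common way to make this concurrency relation concrete is with vector clocks, where comparing two clocks yields exactly this partial order. A minimal sketch (the `compare` helper and device names are illustrative, not from any particular CRDT library):

```python
def compare(a, b):
    """Compare two vector clocks given as dicts of device -> counter.

    Returns '<' if a happened before b, '>' if b happened before a,
    '=' if they are identical, and '||' (concurrent, i.e. a ≹ b) otherwise.
    """
    keys = set(a) | set(b)
    a_le_b = all(a.get(k, 0) <= b.get(k, 0) for k in keys)
    b_le_a = all(b.get(k, 0) <= a.get(k, 0) for k in keys)
    if a_le_b and b_le_a:
        return '='
    if a_le_b:
        return '<'
    if b_le_a:
        return '>'
    return '||'  # concurrent: neither happened before the other

# Two devices each generate events while offline:
b = {'device1': 2, 'device2': 0}
c = {'device1': 0, 'device2': 2}
print(compare(b, c))  # '||' — b ≹ c, the events are concurrent
```

Neither clock dominates the other on every component, which is precisely the "c ≹ b" situation in the comment above.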
It is false: real numbers fulfill the trichotomy property, which is precisely the absence of such a relationship. Any two real numbers are either less than, equal to, or greater than each other.
But the numerical context can still be correct: (edit: ~~imaginary~~) complex numbers for example don’t have such a property.
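Python mirrors this: the built-in `complex` type supports equality but deliberately defines no `<` or `>`, so two distinct complex numbers of equal magnitude are, in effect, incomparable:

```python
z1 = 1 + 2j
z2 = 2 + 1j

print(z1 == z2)            # False: they are distinct numbers
print(abs(z1) == abs(z2))  # True: both have magnitude sqrt(5)

try:
    z1 < z2
except TypeError as e:
    print("no ordering:", e)  # complex numbers are not ordered
```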
This doesn't really pass the smell test for me either, but to play devil's advocate:
Imagine you have 2 irrational numbers, and for some a priori reason you know they cannot be equal. You write a computer program to calculate them to arbitrary precision, but no matter how many digits you generate they are identical to that approximation. You know that there must be some point at which they diverge, with one being larger than the other, but you cannot determine when or by how much.
Think how any number on the y axis of the complex plane isn't equal to a number of the same magnitude on the x axis.
Now if you really think about it, a number of a given magnitude on the x axis also isn't exactly "equal" to a number of the same magnitude on the y axis, or vice versa. Otherwise, -5 and 5 should be equal, because they're the same magnitude from 0.
Probably does not apply for real numbers, but could totally apply to, e.g., fuzzy numbers, whose 'membership function' bleeds beyond the 'crisp' number into nearby numbers.
You could imagine two fuzzy numbers with the same 'crisp' number having different membership profiles, and thus not being "equal", while at the same time being definitely not less and not greater at the same time.
Having said that, this all depends on appropriate definitions for all those concepts. You could argue that having the same 'crisp' representation would make them 'equal' but not 'equivalent', if that was the definition you chose. So a lot of this comes down to how you define equality / comparisons in whichever domain you're dealing with.
Perhaps this: if they represent angles, then 1 and 361 represents the same absolute orientation, but they're not the same as 361 indicates you went one full revolution to get there.
The jargon from order theory for this phenomenon is "partial ordering".
It really is an interesting thing. In fact, as human beings who by nature think in terms of abstract, non-concrete units (as opposed to mathematically precise units like a computer program), we tend to compare two related things. They might belong to the same category of things, but they might not be eligible for direct comparison at all.
Once you internalize partial ordering, our brain gets a little more comfortable handling similar, yet incomparable analogies.
One example would be if you define one set A to be "less than" another B if A is a subset of B. Then ∅ < {0} and {0} < {0, 1} but {0} ≹ {1}.
Such a thing is called a partial ordering and a set of values with a partial ordering is called a partially ordered set or poset (pronounced Poe-set) for short.
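Python's built-in sets happen to implement exactly this partial order, with `<` meaning proper subset, so the example is directly runnable:

```python
a, b, c = set(), {0}, {0, 1}

print(a < b)  # True: ∅ ⊂ {0}
print(b < c)  # True: {0} ⊂ {0, 1}

# {0} and {1} are incomparable: neither is a subset of the other,
# and they aren't equal — all three comparisons come back False.
x, y = {0}, {1}
print(x < y, x > y, x == y)  # False False False — x ≹ y
```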
That article is misinterpreting the meaning of the symbol. It isn't useful in mathematics because it is a contradiction in terms: if "neither of the two compared entities is greater or lesser than the other" then they are equal.
The author of the original article uses it correctly - think about it more in regards to importance for their example.
The business is no more or less important than the developer, but they are NOT equal.
It doesn't have to mean importance though, just the method by which you are comparing things.
Monday ≹ Wednesday
Come to think of it, it should be called the 'No better than' operator.
> if "neither of the two compared entities is greater or lesser than the other" then they are equal.
Not in a partial order.
For example in this simple lattice structure, where lines mark that their top end is greater than their bottom end:
11
/ \
01 10
\ /
00
11 is greater than all the others (for 00, by transitivity), 00 is less than all the others (for 11, by transitivity), but 01 is not comparable with 10: it is neither lesser nor greater under the described partial order.
You can actually see this kind of structure every day: Unix file permissions, for example. Given a user and a file, the permissions of the user are an element of a lattice where the top element is rwx (or 111 in binary, or 7 in decimal, meaning the user has all three permissions to read, write, and execute) and the bottom element is --- (or 000 in binary, or 0 in decimal, meaning the user has no permissions). All other combinations of r, w, and x are possible, but not always comparable: r-x is neither greater nor lesser than rw- in the permissions lattice, it's just different.
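A small sketch of that lattice, treating permission triples as bitmasks (`leq` is an illustrative helper name): p sits at or below q exactly when every bit set in p is also set in q.

```python
def leq(p, q):
    """True if permission set p is <= q in the lattice,
    i.e. every bit set in p is also set in q."""
    return p & q == p

R, W, X = 0b100, 0b010, 0b001

rx = R | X  # r-x, i.e. 0b101 (5)
rw = R | W  # rw-, i.e. 0b110 (6)

print(leq(rx, rw), leq(rw, rx))  # False False: incomparable
print(leq(0b000, rx))            # True: the bottom element is below everything
print(leq(rx, 0b111))            # True: the top element is above everything
```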
> It isn't useful in mathematics because it is a contradiction in terms: if "neither of the two compared entities is greater or lesser than the other" then they are equal.
That’s only true for a total order; there are many interesting orders that do not have this property.
It holds for the usual ordering on N, Z, Q and R, but it doesn’t hold for more general partially ordered sets.
In general one has to prove that an order is total, and this is frequently non-trivial: Cantor-Schröder-Bernstein can be seen as a proof that the cardinal numbers have a total order.
That’s only true for linearly ordered structures, but isn’t true for partially ordered ones.
For example, set inclusion. Two different sets can be neither greater than not smaller than each other. Sets ordered by inclusion form a partially ordered lattice.
That suggests to me that you've got a multi-objective optimization problem with conflicting objectives and a Pareto-optimal solution that balances the tradeoffs between the two objectives. If you swing it too far one way you've got a '>' and need to swing it back the other way, but go too far the other way and you've got a '<'. And they're definitely not equal since they pull in different directions.
It seems like the author agrees with you and picked a confusing title for the article. The article ends with a set of equations:
user > ops > dev
biz > ops > dev
biz ≹ user
The conclusion seems to be that code exists in service to the end-user and the business. The last equation (≹) is a neat way of describing that both end-user and the business are equally important to the existence of the code, even though their needs aren’t the same.
Neat summary. I think many developers experience the degree to which biz and user diverge as a source of problems: the more divergence, the more problems.
The problem with "costs less than developer's time" math is that, usually, it's not you who's paying. Your users are, often in nonobvious ways, such as through higher electricity bills, reduced lifespan[0], lost opportunities, increased frustration, and more frequent hardware upgrades.
(And most of your users don't have developer's salary, or developer's quality of life, so it hurts them that many times more.)
I've been calling the thinking of the parent "trickle-down devonomics": that by making things better for the developer, the benefits will trickle down to the users. Obviously, as the name suggests, that never happens. Devs get the quality of life and users end up paying for our disregard.
This implies that if a developer works half as long on something, all of that money that would be spent on them is spread out amongst their users. Which makes absolutely no sense.
Abstractions MIGHT make your code slower. But there's a reason we're not using assembly: the minor efficiency hit to the software isn't worth the bugs, the salaries for the experts, the compilation errors, etc.
A VM is a pretty good tradeoff for users, not just devs.
> When I say “run” I don’t just mean executing a program; I mean operating it in production, with all that it entails: deploying, upgrading, observing, auditing, monitoring, fixing, decommissioning, etc
That depends on the application and the use case, but good performance and good readability aren't mutually exclusive. Easy to read software might not always be the most performant, but it's far easier to improve performance in an easy to read codebase than it is to make a hard to read but performant codebase easier to read.
The universe puts a hard speed limit on latency, but will give you all the bandwidth you want. There’s something almost mystical about latency, we should be very prudent when spending it.
Also, for my compiled code (in Go), the code that I write is not the code that the compiler generates. I can write simple code that's easy for {{me in 3 months}} to read and let the compiler do the fancy stuff to make it fast.
A million users waiting even one second longer is about a month of a single developer's time.
This sort of calculation you preach is inherently untrue, as it completely ignores that one second times a million. After all, nobody bothers to economically evaluate just a single second.
But it does amount to much when multiplied by the number of users.
And when we multiply again, by the times a single user uses your software, and then again, by the times a user uses different software from other developers who also thought "it's only 1 second, nobody cares", we end up living in a world where software usability gets lower and lower despite hardware getting faster and faster.
We end up living in a world where literally weeks are wasted every day, waiting for the slow Windows file explorer.
If you wanted to evaluate that honestly, you would probably come to the conclusion that Microsoft should have a dedicated team working for a decade on nothing but Explorer startup optimization, and it would still pay for itself.
But they don't.
Because at the end of the day, this whole "let's evaluate the developer's time working on a given improvement" is just a cope and a justification of our laziness, which only pretends to be an objective argument so we can make ourselves feel better.
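A quick back-of-the-envelope check of the figure at the top of this comment (assuming a ~160-hour working month, which is an assumption, not something from the thread):

```python
users = 1_000_000
extra_seconds = 1  # one extra second of waiting per user

total_hours = users * extra_seconds / 3600
working_month_hours = 160  # assumption: ~8 hours/day, ~20 days/month

print(round(total_hours, 1))                        # ~277.8 hours of user time
print(round(total_hours / working_month_hours, 1))  # ~1.7 developer working months
```

So one extra second across a million users is on the order of one to two developer-months of human time per occurrence, roughly consistent with the parent's "about a month".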
I think the headline is misleading in that regard. I think that having easy-to-run software is part of having a well-designed one. Just last week I heard about a software upgrade that would require a downtime of 5 days. The software has about 100 users and is mostly a flexible web application to collect basic information (nothing fancy). Imagine the cost this creates for the business compared to an upgrade that takes a few hours.
So running the software includes more than just the server costs.
It seems like a false trade-off in the first place. The point of writing readable, maintainable code is that your team will be able to edit it later. Including adding performance enhancements.
Another way of stating the relationship could be something like: You have fewer brain-cycles to apply to optimization than the combined sum of everyone who will ever read your code, if your code matters. But that is a mouthful and kind of negative.
I think the corollary to the title (to turn it around on the author) is not 'Code is read more than written' but 'code that can't be read won't run for long'. Disclaimer: Experienced sysadmin trying to make a lateral move to development and as such a complete noob.
Or just code that works and nobody wants or needs to spend money on changing. I've written such code, very crappy stuff. And then coming back to the client many, many years later, finding it all just humming along, driving their business critical applications without it or the computer it runs on ever having been updated once in 5 years or so. I was so surprised, and a bit scared.
Sometimes when you don't change anything, it just keeps working.
So I guess that makes it a very boring:
code that can't be read won't be changed and will not be relevant for long, but not always
I (very briefly) worked in a startup whose business was search (on a certain domain) and they had no tests for their "Search.java" file (the real name was 300x longer, because java devs…).
I had found some operations to tweak the scoring, except that some were multiplications by one, so I removed them. But I got told not to touch them, because they wouldn't know if I had broken anything until some customer complained.
The CTO told me off for my completely irresponsible behaviour.
Proprietary software you don't have sources for (say e.g. a third-party library), or just about any black-box system, are counterexamples to your corollary.
Yeah, some code runs for so long that the system's owners have issues with repairing/replacing the hardware it can only run on if it fails (and then sometimes resort to emulation of the old hardware to be able to keep running it).
Your point isn't a bad one, but it's really a separate topic. Assuming we aren't dealing with deliberate obfuscation, most code can be read by people who can be bothered to try, and there are always code formatters if necessary.
I have a corollary to this: there are a series of exponential increases in usage counts between each of:
1. Language designers & standard lib developers.
2. Shared module or library developers.
3. Ordinary developers.
4. End-users.
For many languages, the ratios are on the order of 1,000x at each stage, so for each 1x language designer there might be 1,000 people designing and publishing modules for it, a million developers, and a billion users. Obviously these numbers change dramatically depending on the specific circumstances, but the orders of magnitude are close enough for a qualitative discussion.
The point is that the tiniest bit of laziness at the first or second tiers has a dramatic multiplicative effect downstream. A dirty hack to save a minute made by someone "for their own convenience" at step #1 can waste literally millions of hours of other people's precious lives. Either because they're waiting for slow software, or frustrated by a crash, or waiting for a feature that took too long to develop at steps #2 or #3.
It takes an incredible level of self-discipline and personal ethics to maintain the required level of quality in the first two steps. Conversely, it deeply saddens me every time I hear someone defending an unjustifiable position to do with core language or standard library design.
"You just have to know the full history of why this thing has sharp edges, and then you'll be fine! Just be eternally vigilant, and then it's not a problem. As long as you don't misuse it, it's not unsafe/insecure/slow/problematic." is the type of thing I hear regularly when discussing something that I just know will be tripping up developers for decades, slowing down software for millions or billions.
So, the author hijacks a perfectly good rule of thumb to build their grand theory of everything. It is all nice, clean, and wise, and, the tortured turns of phrase aside, just a rechewing of popular truisms.
Usual reminder that many people in our industry are not native speakers and don't live in an English speaking country, and yet they make the effort to write in English, which may explain the "tortured turn of phrase".
> just rechewing of popular truisms
And yet these "popular truisms" are particularly well put together in a coherent way, which makes this post a useful reference.
Honestly, I think there's value both in riffing off rules of thumb and using that riff to revisit and re-contextualise things we already think we know.
Everything is new to someone, and even if this was just confirming my own biases I found it an interesting take.
The author's framing can be misunderstood in so many ways that it is not a useful shorthand at all. There can be no absolute rank order of these tokens.
Firstly, in this framing, the "dev" is not one person but it is a collective for lots of people with varied expertise and seniority levels in different orgs – product, engineering and design orgs.
Then, "ops" is again not one thing and not just engineering ops. It could be bizops, customer support etc. too.
Then, "biz" isn't one thing either. There's branding/marketing/sales/legal etc. and execteam/board/regulators/lenders/investors etc.
All of these people affect what code is written and how it is written and how and when it is shipped to users. Everyone should be solving the same "problem".
A lot of the times, a lot of people within the org are just there to make sure that everyone understands/sees the same "problem" and is working towards the same goals.
But that understanding is continuously evolving. And there is lag in propagating it throughout the org. And hence, there is lag in everyone working towards the same goal, while the goal itself is evolving.
Finally, "user" is not one thing either nor any one cohort of users are static. There are many different cohorts of users and these cohorts don't necessarily have long-term stable behaviors.
So, it helps to understand and acknowledge how all the variables are changing around you and make sense of the imperfect broken world around you with that context. Otherwise it is very easy to say everyone else sucks and everything is broken and you want to restart building everything from scratch and fall into that and other well-known pitfalls.
I'm glad to see something approaching ethics discussed:
> There’s a mismatch between what we thought doing a good job was and what a significant part of the industry considers profitable, and I think that explains the increasing discomfort of many software professionals.
"Discomfort" is quite the understatement. This leaves so much unsaid.
I will add some questions:
- What happens when your users are not your customers (the ones that pay)?
- Does your business have any ethical obligation to your users -- all of them -- even the ones that do not pay?
- What happens when your paying customers seek to use your business in ways that have negative downstream effects for your users?
For example, what if:
- Your platform makes fraud easier than the existing alternatives?
- Your platform makes it easier to misinform people in comparison to the alternatives?
- Your platform makes it easier to shape user opinions in ways that are attractive (habit-forming) but destructive in the long-term?
All of these have proven to be successful business models, over some time scales!
Given the reality of the dynamic, should a business pursue such an exploitative model? If so, can it do so more or less responsibly? Can a more ethical version of the business mitigate the worst tendencies of competitors? Or will it tend to become part of the problem?
A key take-away is clear: some classes of problems are bigger and more important than the business model. There are classes of problems that can be framed as: what are the norms and rules we need _such that_ businesses operate in some realm of sensibility?
Finally, I want to make this point crystal clear: a business inherently conveys a set of values: this is unavoidable. There is no escaping it. Even if a business merely takes the stance of 'popularity wins', that is in itself a choice that has deep implications on values. Political scientists and historians have known for years about the problems called 'tyranny of the majority'. Food for thought, no matter what your political philosophy.
I don't know 'the best' set of ethics, but I know that some are better than others. And I hope/expect that we'll continue to refine our ethics rather than leave them unexamined.
I believe this is a different problem than the essay is speaking about. You can choose what problems and domains fit your ethics. This is about how you build a system and how you prioritize the work.
> I believe this is a different problem than the essay is speaking about.
Hardly. I'll quote the last paragraph and the three inequalities:
> There’s a mismatch between what we thought doing a good job was and what a significant part of the industry considers profitable, and I think that explains the increasing discomfort of many software professionals. And while we can’t just go back to ignoring the economic realities of our discipline, perhaps we should take a stronger ethical stand not to harm users. Acknowledging that the user may not always come before the business, but that the business shouldn’t unconditionally come first, either:
user > ops > dev
biz > ops > dev
biz ≹ user
First, I want to emphasize "perhaps we should take a stronger ethical stand not to harm users". The author did a nice job of "throwing it in our faces", but the underlying ethical currents are indeed there.
Second, "the user may not always come before the business, but that the business shouldn’t unconditionally come first". This is very much aligned with my question "what are the norms and rules we need _such that_ businesses operate in some realm of sensibility?"
...
Ok, putting aside debates around the author's intent or 'valid scope' of this discussion (which by the way, is a rather organic thing, computed lazily by the participants, rather than by fiat), I'd like to add some additional thoughts...
In much of the software world there is a mentality of "We'll figure out Problem X (such as a particular problem of scaling) if we get to that point." I'll make this claim: naively deferring any such problems that pertain to ethics is fraught. Of course there are practical considerations and people are not angels! For precisely these reasons, ethics must be something we study and put into practice before other constraints start to lock in a suboptimal path.
I often look at ethics from a consequentialist point of view that includes probabilities of system behavior. This could be thought of as computing an 'expected future value' for a particular present decision.
If one applies such an ethical model, I think the impacts of choices become clearer. And it becomes harder to use false-choice reasoning to demonize others and exonerate ourselves. For example, if a particular business model has a significant probability of harming people, one cannot claim ignorance much less complete innocence when those harms happen. They were no surprise, at least to people who pay attention and follow the probabilities.
Ascribing blame is quite difficult. I like to think of blame as being a question largely of statistical inference. [1] But even if we all agreed to a set of ethical standards, the statistical inference problem (multicollinearity for example) would remain. There is plenty of blame to go around, so to speak. But certain actions (very highly influenced by mental models and circumstances) contribute more than others. [2]
To what degree is ignorance an ethical defense? This is a tough one. Not all people nor entities have the same computational horsepower nor awareness. I don't have the answers, but I have not yet found a well-known ethicist in the public eye that speaks in these terms to a broad audience. To me, the lack of such a voice means I need to find more people like that and/or contribute my voice to the conversation. The current language around ethics feels incredibly naive to my ears.
[1] I agree that for most people, ethical rules of thumb get us 'most of the way there'. But these heuristics are imperfect.
[2] To be clear, I see blame as often overused. I care relatively less about blaming a person for mistakes. I care much more about what a person's character tells us about how they will behave in the future. That said, a corporate entity is not a person deserving of such generosity. Corporate entities are not first-class entities deserving human-rights level protection. A legal entity is a derivative entity; one created in the context of laws which should rightly function to better the society in which it is formed. A corporate entity can rightly be judged / evaluated in terms of its behaviors and internal structure and what this entails for its future likely behavior. We don't expect corporate entities to be charities, for sure, but we also didn't consciously design the legal environment so that corporate entities can actively undermine the conditions for a thriving society with impunity.
In those situations biz > user by definition and the developers end up having to cater to the needs of the middle managment of their customers rather than the needs of the actual users. The price of not doing this is failing to win the contract. Users then get locked in to whatever crap you have time to provide for them while you're really busy implementing new features that middle managment like.
You essentially only need a nice looking login screen and some sort of reporting and the rest.....doesn't matter much.
I am being a bit cynical but it does pay as an engineer to know if that's fundamentally the kind of company you're in.
An online retailer, for example, is hypersensitive to its users and I know of at least one that has different versions of its website for different countries because they know that Germans like X and Americans like Y - small changes make a huge difference to sales.
Other companies have no sensitivity to the usability of their products because the people who buy their products are never the users.
We needed to win contracts, so we needed to tick their checkboxes, but we also cared for the user experience (good UX was almost never a strict requirement from our customer).
Our competitors' software was very painful to use, so we wanted to differentiate in this regard.
This made our own lives easier as the training was easier, the users we interacted with (that usually had no say in whether our solution was bought to them or our competitors') were happier and recommended to their managers (where they could) buying more stuff from us.
In the end this was 80% driven by pride (our software doesn't suck) and empathy (I couldn't stand using the software if it was as bad as our competitors') but to some extent this was also in our interest (especially in the long term, where your brand is built).
And then the happy users switch to another company and start recommending your stuff to their new managers. It's an extra source of leads and sales :-)
Unfortunately that works until some brilliant MBA shows up with a plan to sell 100x more consulting and training by making the product intentionally much more difficult to use.
It was a doomed approach. We needed salespeople to get inside the heads of the purchasers, learn how to explain the benefit to them, and coach the users on explaining the benefits to other people in their organization. We needed salespeople to bridge the chasm between the users and the purchasers.
I remember being in site meetings where people who used it every day would tell us to our face how terrible it was. Without fail, that site renewed with some promises to fix a couple specific bugs and a minimal price increase.
This is why all enterprise software sucks.
This is why I use Windows and Visual Studio at work. I don't like either of them, but it's not my call.
SAP absolutely delights its users to the tune of a $200 billion market cap.
Take one look at EMRs and realize that they're sold and marketed to 0.01% of the hospital, despite 80% of the health system using them.
From my observations, it's usually not that devs are lazy or lack a sense of craft, it's that their employers are not willing to spend money building something that isn't actually a requirement.
Or worse, they think they are good at it, then guard that GUI like it is their firstborn.
Can you speak any more to this? Do you or anyone else have any examples? I would be very interested to see.
Then again, I’m not saying much that vendors screw up pricing when they choose a price other than “$0,” which is an essential part of B2B-disguised-as-B2C software. Easy for me to say.
Anyway, the boomers you are talking about are retiring out of the workforce, and it will be more likely than ever that the audience will be extremely sensitive to UX, desiring something more like TikTok and Instagram than ever.
Slack in particular had to take a just-OK buyout from Salesforce and the product has seriously stagnated.
Your uses of "every" and "big" here seem idiosyncratic.
> To conclude, the ≹ symbol plays a crucial role in providing a middle ground between the traditional relational operators.
As a PhD student in math, I have never seen it before. I do not believe that it plays any crucial role.
This symbol for it may be useful, but it's the concept that matters.
I really love that video.
Well, don't leave us hanging! What are some of your favorite hot games on top of switches?
So you can end up with a sequence "d > b > a" and "d > c > a", but "c ≹ b".
Defining how tie-breaking for those cases is deterministically performed is a big part of the problem that CRDTs solve.
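A minimal sketch (in Python, with invented names) of one common deterministic tie-break: pair each update's logical counter with a replica id, so that concurrent updates which are causally incomparable still compare the same way on every replica:

```python
from functools import total_ordering

@total_ordering
class Stamp:
    """A Lamport-style timestamp: a logical counter plus a replica id.

    Concurrent updates (equal counters on different replicas) are
    causally incomparable, so the replica id serves as an arbitrary
    but deterministic tie-break that every replica agrees on.
    """
    def __init__(self, counter, replica_id):
        self.counter = counter
        self.replica_id = replica_id

    def __eq__(self, other):
        return (self.counter, self.replica_id) == (other.counter, other.replica_id)

    def __lt__(self, other):
        # Lexicographic: counter first, then replica id as the tie-break.
        return (self.counter, self.replica_id) < (other.counter, other.replica_id)

# b and c are concurrent (causally "c ≹ b"), yet every replica picks
# the same winner because the replica id breaks the tie.
b = Stamp(2, "replica-A")
c = Stamp(2, "replica-B")
print(b < c)  # True
```

The partial (causal) order is thus extended to a total order purely so that all replicas converge on the same result.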
"Example 1: Numerical Context
Let's consider two real numbers, a and b. If a is neither greater than nor less than b, but they aren't explicitly equal, the relationship is ≹"
How can that be possible?
But the numerical context can still be correct: (edit: ~~imaginary~~) complex numbers for example don’t have such a property.
Imagine you have 2 irrational numbers, and for some a priori reason you know they cannot be equal. You write a computer program to calculate them to arbitrary precision, but no matter how many digits you generate they are identical to that approximation. You know that there must be some point at which they diverge, with one being larger than the other, but you cannot determine when or by how much.
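A toy sketch of that situation using Python's decimal module, with two hypothetical values constructed to differ only at the 60th digit: at too low a precision the comparison cannot distinguish them, even though they are provably unequal:

```python
from decimal import Decimal, getcontext

def compare_to_precision(a_fn, b_fn, digits):
    """Compare two lazily computed numbers at a given precision.
    Returns '<' or '>' if they differ, or 'indistinguishable' if
    they agree to this many digits."""
    getcontext().prec = digits
    a, b = a_fn(), b_fn()
    if a == b:
        return "indistinguishable"
    return "<" if a < b else ">"

# Toy stand-in for two "mystery" numbers: they differ only at the
# 60th decimal digit, so 50 digits of computation reveal nothing.
a_fn = lambda: Decimal(1) / Decimal(3)
b_fn = lambda: Decimal(1) / Decimal(3) + Decimal(10) ** -60

print(compare_to_precision(a_fn, b_fn, 50))  # indistinguishable
print(compare_to_precision(a_fn, b_fn, 70))  # <
```

Here we happen to know where the divergence is; the parent's point is that in general you don't, so no finite amount of computation settles the comparison.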
Now if you really think about it, a number of a given magnitude on the x axis also isn't exactly "equal" to a number of the same magnitude on the y axis, or vice versa. Otherwise, -5 and 5 should be equal, because they're the same magnitude from 0.
You could imagine two fuzzy numbers with the same 'crisp' number having different membership profiles, and thus not being "equal", while at the same time being definitely not less and not greater at the same time.
Having said that, this all depends on appropriate definitions for all those concepts. You could argue that having the same 'crisp' representation would make them 'equal' but not 'equivalent', if that was the definition you chose. So a lot of this comes down to how you define equality / comparisons in whichever domain you're dealing with.
Contrived, but only thing I could think of.
$\infty$ and $\infty + 1$ come to mind, but I don't think it really counts.
It really is an interesting thing. In fact, as human beings who by nature think in terms of abstract, non-concrete units (as opposed to mathematically precise units like a computer program), we tend to compare two related things. They might belong to the same category of things, but they might not be eligible for direct comparison at all.
Once you internalize partial ordering, your brain gets a little more comfortable handling similar, yet incomparable, analogies.
Such a thing is called a partial ordering and a set of values with a partial ordering is called a partially ordered set or poset (pronounced Poe-set) for short.
https://en.wikipedia.org/wiki/Partially_ordered_set
The author of the original article uses it correctly - think about it more in regard to importance for their example.
The business is no more or less important than the developer, but they are NOT equal.
It doesn't have to mean importance though, just the method by which you are comparing things.
Monday ≹ Wednesday
Come to think of it, it should be called the 'No better than' operator.
Not in a partial order.
For example, take this simple lattice structure, where lines mark that their top end is greater than their bottom end:

      11
     /  \
    01    10
     \  /
      00

11 is > all the others (by transitivity, including 00), and 00 is < all the others (by transitivity, including 11), but 01 is not comparable with 10: it is neither lesser nor greater given the described partial order.

You can actually see this kind of structure every day: unix file permissions, for example. Given a user and a file, the user's permissions are an element of a lattice where the top element is rwx (or 111 in binary, or 7 in decimal, meaning the user has all three permissions to read, write, and execute) and the bottom element is --- (or 000 in binary, or 0 in decimal, meaning the user has no permissions). All other combinations of r, w, and x are possible, but not always comparable: r-x is neither greater nor lesser than rw- in the permissions lattice, it's just different.
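The permissions example can be sketched in a few lines of Python, treating each permission set as a 3-bit mask and comparing by bitwise containment (names here are illustrative):

```python
def perm_cmp(p, q):
    """Compare two 3-bit unix permission masks (e.g. 0b101 for r-x)
    under the lattice order: p is below q iff every bit set in p
    is also set in q."""
    if p == q:
        return "="
    if p & q == p:   # p's bits are a subset of q's bits
        return "<"
    if p & q == q:   # q's bits are a subset of p's bits
        return ">"
    return "≹"       # incomparable

RWX, RX, RW, NONE = 0b111, 0b101, 0b110, 0b000
print(perm_cmp(RX, RWX))    # <  (r-x is below rwx)
print(perm_cmp(RWX, NONE))  # >  (rwx is above ---)
print(perm_cmp(RX, RW))     # ≹  (r-x vs rw-: just different)
```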
That’s only true for a total order; there are many interesting orders that do not have this property.
It holds for the usual ordering on N, Z, Q and R, but it doesn’t hold for more general partially ordered sets.
In general one has to prove that an order is total, and this is frequently non-trivial: Cantor-Schröder-Bernstein can be seen as a proof that the cardinal numbers have a total order.
For example, set inclusion. Two different sets can be neither greater than nor smaller than each other. Sets ordered by inclusion form a partially ordered lattice.
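Conveniently, Python's built-in sets already expose exactly this partial order: `<` and `>` are proper subset and proper superset, so incomparable sets answer "no" to both:

```python
# Set inclusion as a partial order, using Python's built-in operators:
# `<` is proper subset, `>` is proper superset.
a = {1, 2}
b = {1, 2, 3}
c = {2, 3}

assert a < b    # a is strictly contained in b
assert b > c    # b strictly contains c
# a and c are incomparable: neither contains the other (a ≹ c).
assert not (a < c) and not (a > c) and a != c
print("inclusion order checks passed")
```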
Hell, I could spend $200 for a month of server time on AWS and run a lot of my (web API) code 100 billion times.
Optimizing for human readers is always better until you're working on something that proves itself to be too slow to be economical anymore.
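A rough sanity check of the figure above (every number below is an assumption, not a real quote; the parent's 100 billion runs for $200 would imply roughly 0.2 ms of CPU per run, and even a more conservative 2 ms still lands around ten billion):

```python
# Back-of-envelope sketch: how many request-executions $200 of monthly
# server time buys, versus the cost of developer time.
server_budget = 200            # dollars per month (assumed)
vcpus = 8                      # cores that budget might rent (assumed)
cpu_seconds = vcpus * 30 * 24 * 3600   # ~2.1e7 CPU-seconds in a month
per_request_s = 0.002          # 2 ms of CPU per request (assumed)

requests = cpu_seconds / per_request_s
print(f"~{requests:.1e} requests for ${server_budget}")

# For comparison: one hour of developer time at an assumed $100/hour
# is half of that entire monthly server bill.
dev_hour = 100
print(f"one dev hour = {dev_hour / server_budget:.0%} of the server budget")
```

The point survives any reasonable choice of inputs: machine time is so cheap per execution that readability usually wins until profiling says otherwise.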
Or they picked a title that would let you rapidly spot who didn't even skim the article.
(And most of your users don't have developer's salary, or developer's quality of life, so it hurts them that many times more.)
--
[0] - Yes, wasting someone's time reduces QALY.
Abstractions MIGHT make your code slower. But there's a reason we're not using assembly: Because the minor efficiency hit on the software doesn't match up with the bugs, the salary towards the experts, the compilation errors, etc.
A VM is a pretty good tradeoff for users, not just devs.
[0] https://en.m.wikipedia.org/wiki/Quality-adjusted_life_year
> When I say “run” I don’t just mean executing a program; I mean operating it in production, with all that it entails: deploying, upgrading, observing, auditing, monitoring, fixing, decommissioning, etc
The inference process in the article is interesting, but the title tempts us to debate a less related topic.
Thanks for your comments; let me finish reading.
Like I said, it works until it doesn't, and then you do have to optimize for performance to some extent.
Some LLVM-based language would fit the bill better, like Rust, C (this is also true of the Intel compiler and GCC), C++, etc.
This sort of calculation you preach is inherently untrue, as it completely ignores that one second, times a million. After all, nobody bothers to economically evaluate just a single second. But it amounts to much when multiplied by the number of users. And when we multiply again, by the number of times a single user uses your software, and then again, by the time a user spends in other software from other developers who also thought "it's only 1 second, nobody cares", we end up living in a world where software usability gets lower and lower despite hardware getting faster and faster.

We end up living in a world where literally weeks are wasted every day, waiting for the slow Windows file explorer. If you wanted to evaluate that honestly, you would probably come to the conclusion that Microsoft should have a dedicated team working for a decade on nothing but Explorer startup optimization, and it would still pay for itself.

But they don't. Because at the end of the day, this whole "let's evaluate the developer's time spent on a given improvement" is just a cope and a justification of us being lazy, one that only pretends to be an objective argument so we can make ourselves feel better.
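The multiplication described above is easy to make concrete; with some assumed (illustrative) inputs, a one-second delay scales to person-years of waiting:

```python
# The "only one second" argument, multiplied out (all inputs assumed):
users = 1_000_000     # people using the software
uses_per_day = 10     # times each user hits the slow path per day
wasted_s = 1.0        # seconds lost per use

daily_waste_s = users * uses_per_day * wasted_s
person_years_per_year = daily_waste_s * 365 / (365 * 24 * 3600)
print(f"{daily_waste_s / 3600:,.0f} person-hours wasted per day")
print(f"{person_years_per_year:.0f} person-years of waiting per calendar year")
```

With these particular inputs, a single saved second recovers on the order of a hundred person-years of users' time annually; the exact figures are, of course, only as good as the assumptions.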
So running the software includes more than just the server costs.
Another way of stating the relationship could be something like: You have fewer brain-cycles to apply to optimization than the combined sum of everyone who will ever read your code, if your code matters. But that is a mouthful and kind of negative.
There's plenty of ossified code people are scared to touch because they don't understand it, but stake their business on it :)
Sometimes when you don't change anything, it just keeps working.
So I guess that makes it a very boring:
Non-coders are weird.
I had found some operations to tweak the scoring, except that some were multiplications by one, so I removed them. But I got told to not touch them because they wouldn't know if I had broken it until some customer complained.
The CTO told me off for my completely irresponsible behaviour.
I'd say it's more 'code that can't be read won't be modifiable for long'.
Bad news: too few experienced ops people became one less!
The point is that the tiniest bit of laziness at the first or second tiers has a dramatic multiplicative effect downstream. A dirty hack to save a minute made by someone "for their own convenience" at step #1 can waste literally millions of hours of other people's precious lives. Either because they're waiting for slow software, or frustrated by a crash, or waiting for a feature that took too long to develop at steps #2 or #3.
It takes an incredible level of self-discipline and personal ethics to maintain the required level of quality in the first two steps. Conversely, it deeply saddens me every time I hear someone defending an unjustifiable position to do with core language or standard library design.
"You just have to know the full history of why this thing has sharp edges, and then you'll be fine! Just be eternally vigilant, and then it's not a problem. As long as you don't misuse it, it's not unsafe/insecure/slow/problematic." is the type of thing I hear regularly when discussing something that I just know will be tripping up developers for decades, slowing down software for millions or billions.
Usual reminder that many people in our industry are not native speakers and don't live in an English speaking country, and yet they make the effort to write in English, which may explain the "tortured turn of phrase".
> just rechewing of popular truisms
And yet these "popular truisms" are particularly well put together in a coherent way, which makes this post a useful reference.
Everything is new to someone, and even if this was just confirming my own biases I found it an interesting take.
Firstly, in this framing, the "dev" is not one person but it is a collective for lots of people with varied expertise and seniority levels in different orgs – product, engineering and design orgs.
Then, "ops" is again not one thing and not just engineering ops. It could be bizops, customer support etc. too.
Then, "biz" isn't one thing either. There's branding/marketing/sales/legal etc. and execteam/board/regulators/lenders/investors etc.
All of these people affect what code is written and how it is written and how and when it is shipped to users. Everyone should be solving the same "problem".
A lot of the time, a lot of people within the org are just there to make sure that everyone understands/sees the same "problem" and is working towards the same goals.
But that understanding is continuously evolving. And there is lag in propagating it throughout the org. And hence, there is lag in everyone working towards the same goal – while the goal itself is evolving.
Finally, "user" is not one thing either nor any one cohort of users are static. There are many different cohorts of users and these cohorts don't necessarily have long-term stable behaviors.
So, it helps to understand and acknowledge how all the variables are changing around you and make sense of the imperfect broken world around you with that context. Otherwise it is very easy to say everyone else sucks and everything is broken and you want to restart building everything from scratch and fall into that and other well-known pitfalls.
> There’s a mismatch between what we thought doing a good job was and what a significant part of the industry considers profitable, and I think that explains the increasing discomfort of many software professionals.
"Discomfort" is quite the understatement. This leaves so much unsaid.
I will add some questions:
- What happens when your users are not your customers (the ones that pay)?
- Does your business have any ethical obligation to your users -- all of them -- even the ones that do not pay?
- What happens when your paying customers seek to use your business in ways that have negative downstream effects for your users?
For example, what if:
- Your platform makes fraud easier than the existing alternatives?
- Your platform makes it easier to misinform people in comparison to the alternatives?
- Your platform makes it easier to shape user opinions in ways that are attractive (habit-forming) but destructive in the long-term?
All of these have proven to be successful business models, over some time scales!
Given the reality of the dynamic, should a business pursue such an exploitative model? If so, can it do so more or less responsibly? Can a more ethical version of the business mitigate the worst tendencies of competitors? Or will it tend to become part of the problem?
A key take-away is clear: some classes of problems are bigger and more important than the business model. There are classes of problems that can be framed as: what are the norms and rules we need _such that_ businesses operate in some realm of sensibility?
Finally, I want to make this point crystal clear: a business inherently conveys a set of values: this is unavoidable. There is no escaping it. Even if a business merely takes the stance of 'popularity wins', that is in itself a choice that has deep implications on values. Political scientists and historians have known for years about the problems called 'tyranny of the majority'. Food for thought, no matter what your political philosophy.
I don't know 'the best' set of ethics, but I know that some are better than others. And I hope/expect that we'll continue to refine our ethics rather than leave them unexamined.
Hardly. I'll quote the last paragraph and the three inequalities:
> There’s a mismatch between what we thought doing a good job was and what a significant part of the industry considers profitable, and I think that explains the increasing discomfort of many software professionals. And while we can’t just go back to ignoring the economic realities of our discipline, perhaps we should take a stronger ethical stand not to harm users. Acknowledging that the user may not always come before the business, but that the business shouldn’t unconditionally come first, either:
First, I want to emphasize "perhaps we should take a stronger ethical stand not to harm users". The author did a nice job of "throwing it in our faces", but the underlying ethical currents are indeed there.

Second, "the user may not always come before the business, but that the business shouldn’t unconditionally come first". This is very much aligned with my question: "what are the norms and rules we need _such that_ businesses operate in some realm of sensibility?"
...
Ok, putting aside debates around the author's intent or 'valid scope' of this discussion (which by the way, is a rather organic thing, computed lazily by the participants, rather than by fiat), I'd like to add some additional thoughts...
In much of the software world there is a mentality of "We'll figure out Problem X (such as a particular problem of scaling) if we get to that point." I'll make this claim: naively deferring any such problems that pertain to ethics is fraught. Of course there are practical considerations and people are not angels! For precisely these reasons, ethics must be something we study and put into practice before other constraints start to lock in a suboptimal path.
I often look at ethics from a consequentialist point of view that includes probabilities of system behavior. This could be thought of as computing an 'expected future value' for a particular present decision.
If one applies such an ethical model, I think the impacts of choices become clearer. And it becomes harder to use false-choice reasoning to demonize others and exonerate ourselves. For example, if a particular business model has a significant probability of harming people, one cannot claim ignorance much less complete innocence when those harms happen. They were no surprise, at least to people who pay attention and follow the probabilities.
Ascribing blame is quite difficult. I like to think of blame as being a question largely of statistical inference. [1] But even if we all agreed to a set of ethical standards, the statistical inference problem (multicollinearity for example) would remain. There is plenty of blame to go around, so to speak. But certain actions (very highly influenced by mental models and circumstances) contribute more than others. [2]
To what degree is ignorance an ethical defense? This is a tough one. Not all people or entities have the same computational horsepower or awareness. I don't have the answers, but I have not yet found a well-known ethicist in the public eye who speaks in these terms to a broad audience. To me, the lack of such a voice means I need to find more people like that and/or contribute my voice to the conversation. The current language around ethics feels incredibly naive to my ears.
[1] I agree that for most people, ethical rules of thumb get us 'most of the way there'. But these heuristics are imperfect.
[2] To be clear, I see blame as often overused. I care relatively less about blaming a person for mistakes. I care much more about what a person's character tells us about how they will behave in the future. That said, a corporate entity is not a person deserving of such generosity. Corporate entities are not first-class entities deserving human-rights level protection. A legal entity is a derivative entity; one created in the context of laws which should rightly function to better the society in which it is formed. A corporate entity can rightly be judged / evaluated in terms of its behaviors and internal structure and what this entails for its future likely behavior. We don't expect corporate entities to be charities, for sure, but we also didn't consciously design the legal environment so that corporate entities can actively undermine the conditions for a thriving society with impunity.