It looks almost as if humans have a nearly infinite backlog of things they would do if they only had time and capability, and a limit on the amount of effort they can exert per day. Then, once new tools increase their productivity and free up a bit of resources, they pick more desiderata from the backlog and try to accomplish those as well. Naturally, they seek more tools for the newly possible activities, and the loop closes.
This applies to any activity, leisure emphatically included. Travel became simpler → more vacations now involve flying a plane and thus obtaining tickets online and thus comparison-shopping, aggregating reviews of faraway places, etc → omg, vacation travel is complex again. It just allows us to fulfill more of the dream.
I like to apply a similar lesson taught to me about content consumption - with the internet, there is a nearly infinite stream of entertainment and news, and it can feel overwhelming. In the past, our predecessors could read their one local printed newspaper and be "finished". So you have to change your thinking: we are able to curate a high-quality stream that constantly flows by, and when we desire, we can dip in and scoop up one serving.
To your comment about vacations, the issue is people subconsciously want to ensure their trip value is "maximized" - oh no, do I have time to see all 10 best spots in the city? Or some historical building is closed, and you read online how it's a life-changing experience to see, and now you feel left out. So you have to push that aside, follow the 80/20 rule, and appreciate what you ARE able to do on your trip.
"Everything Everywhere All At Once" tried to capture this feeling of digital abundance and diversity of viewpoints / experiences. Best explained in the following video.
Here's the thing: We don't need an actual multiverse to put cracks in the clay pot of our mind when we already have devices for careening through the endless imaginations of the multitudes, when we exist in an environment where you can encounter the personal stories and experiences from people on every continent, all who are living their own unique life, in just a few minutes, all from the comfort of your own toilet. When more interesting ideas and concepts, and people and places can fly by in the space of one 30-minute TikTok binge than some of our ancestors experienced in the entirety of their localized illiterate lives. The internet, for those who are inspired to spend a lot of time on it and use it in a certain way, for those who envelop themselves in its self-referential world of constantly evolving novelty and imagery, will inevitably have a profound effect on the way you see the world.
HN front page is almost slow-moving enough to replicate this experience! (This appears to be by design?)
> Mankind has ah only one mm-m-m science," the Count said as they picked up their parade of followers and emerged from the hall into the waiting room - a narrow space with high windows and floor of patterned white and purple tile.
> "And what science is that?" the Baron asked.
> "It's the um-m-m-ah-h science of ah-h-h discontent," the Count said.
There are various ways to interpret that, but I prefer a more Stoic or Buddhist view, where it's a bad habit but we can be better at it. (As opposed to a more god-worm-totalitarian one, where humans are dissatisfied cattle to be managed.)
That quote seems about right. Nice connection. Thanks for sharing.
Indeed, desire and dissatisfaction are quite productive forces! They don't necessarily entail dysphoria, though. Or more pithily if you prefer, "lack is a kind of abundance."
GP's "near infinite backlog" framing still implicitly hints at something like an underlying state of pure satisfaction if only we could address all the issues or whatnot. IMHO, desire actively functions in its own peculiar ways, and the personal narratives we attach to those functions can frame them as a helpful, collaborative things, rather than obstacles to be overcome.
The nearly infinite backlog also means that there is nearly infinite demand for labor, so Luddite-adjacent arguments that labor-saving technology causes persistent underemployment are invalid.
Even if we shouldn't be concerned about "persistent" underemployment, I still think that rapid "transient" unemployment due to rapidly evolving tech over the coming decades may cause significant societal upheaval that we should be concerned about - even if it's "just luddites" coming to burn our data centers.
Friendly reminder that things ended up quite shit for the actual Luddites, and the advantages only 'trickled down' after a generation or two. So I will keep being worried for everyone who works now, and their kids.
One trick is to hold your desires relatively constant (remind yourself that just X years ago, you dreamed of doing Y, which you can do now for much less effort). We somehow let the cost involved in a task influence how much we can enjoy it.
The interesting question here, of course, is whether travelling to the other side of the planet for a holiday (and the people who live there travelling to somewhere near you!) is better than going somewhere more local.
I.e., isn't the key trick, in both software and life, to distinguish between busy work, 'nice to have' new features, and true value?
Humans as an aggregate, yes. Individually, not so much. I've seen way too many people get lost in life when traditional values no longer apply to them. They still have desires, but lose all their purpose.
Tog's paradox is the main reason why I suspect that generative AI will never destroy art; it will enhance it. It allows you to create artworks within minutes that until recently required hours to create and years to master. This will cause new art to emerge that pushes these new tools to the limit, again with years of study and mastery, and it will look like nothing we've been able to produce so far.
This is exactly what happened when digital tools like Photoshop became mainstream, where you can copy-paste, recolor, adjust, stretch and transform. It didn't obsolete the manual creation of art, but instead enhanced it. It's common for artists to sketch on paper (or tablet) and later digitize and color on their computer, achieving results faster and better than what was possible in the past.
I crave authenticity. I recognise the creativity and talent in digital painting, but it lacks authenticity. I hardly feel I'll like AI art more.
Not all art needs to be high art, of course. I've bought prints of digital paintings and woodblock prints. Nonetheless, /r/ArtPorn today is like going to the cinema and being shown a compilation of TV adverts. AI art is probably not going to improve that.
I don't fully align with the part on mastering, at least as stated here.
That is, yes, we can make large amounts of images/videos/texts with generative AI that we would never have been able to produce otherwise, because we didn't dedicate enough time to mastering the corresponding arts. But mastering an art is only marginally about the objects you can craft. The main change it brings is in how we perceive objects and how we imagine we can transform the world (well, at least a tiny piece of it) through that new perspective.
Of course "mastering generative AI" can be an interesting journey of it’s own.
In my opinion, art meant to capture and communicate the emotions and truth of an exact moment will always have a place. But as the time cost to represent a single frame of an idea shrinks to fractions of a second, what it unlocks is the ability to represent ideas that are best expressed through longer time sequences. What we're waiting on are tools that better allow us to constrain, guide, and sculpt the generated sequence as it evolves.
As someone who has always loved fractal and Mandelbrot zooms, infinite AI zooms are already a cool new art experience made possible in terms of feasible time cost to make. https://www.youtube.com/watch?v=L1vrPpM4eyM
What we have today isn't very useful. But once it gets good, gen AI will probably have a similar impact.
It depends on how the AI is used. If it's too high-level or abstract, it will produce "slop". Solving for AI generated content to be non-slop is probably very close to solving for AGI. But the statistical tools have proven useful in streamlining or automating what once were challenging processes. For example, generating an animated character still yields slop, but you can take a hand-crafted character and have an AI model analyze a live actor's movements and then rotoscope them onto the character. This makes life easier for the animator AND the actor: the actor can give a more natural performance without having to wear cumbersome motion capture gear; and the animator can apply those movements directly to the character without having to clean up motion capture data, let alone rotoscope the movements by hand as was done in the classic Disney animation days.
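As a rough sketch of that kind of pipeline (not necessarily anyone's actual production workflow): the pose estimation below uses the real mediapipe and opencv-python packages, while apply_to_rig and the file name actor_performance.mp4 are hypothetical placeholders for whatever retargeting step an animation tool would provide.

```python
# Markerless motion transfer sketch: estimate an actor's pose from ordinary
# video (no mocap suit) and drive a hand-crafted character rig from it.
# mediapipe and opencv-python are real packages; apply_to_rig() and the
# video filename are hypothetical placeholders.
import cv2
import mediapipe as mp

def apply_to_rig(joints: dict[str, tuple[float, float, float]]) -> None:
    """Hypothetical stand-in: forward normalized joint positions to the rig."""
    print(joints["left_wrist"])

pose = mp.solutions.pose.Pose(static_image_mode=False)
ids = mp.solutions.pose.PoseLandmark

cap = cv2.VideoCapture("actor_performance.mp4")  # plain camera footage
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        lm = results.pose_landmarks.landmark
        joints = {
            "left_wrist": (lm[ids.LEFT_WRIST].x,
                           lm[ids.LEFT_WRIST].y,
                           lm[ids.LEFT_WRIST].z),
            "right_wrist": (lm[ids.RIGHT_WRIST].x,
                            lm[ids.RIGHT_WRIST].y,
                            lm[ids.RIGHT_WRIST].z),
        }
        apply_to_rig(joints)  # the character moves; no mocap gear, no cleanup
cap.release()
```

The point of the sketch is the division of labor described above: the model handles the tedious frame-by-frame tracking, while the animator keeps control of the hand-crafted character the motion lands on.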
However, it seems to me that most people just think they are some kind of Rick Rubin who just needs the right tools to be finally appreciated for their taste, and I don't think even a fraction of them has taste.
First time I've seen this, but also: this feels like a slightly long-winded explanation of what we're actually trying to achieve by improving efficiency through software, right?
Make things easier and improve productivity, because we humans can do more with technology. Especially relevant in the current AI dialogue around what it's going to do to different industries.
> Consider an HR platform that automates payroll and performance management, freeing up HR staff from routine tasks. HR teams will need to justify what they do the rest of the time...
This quote, though, is one I'd like to further mull: added software complexity that is the result of job justification.
> added software complexity that is the result of job justification.
I have found that some folks like to be "high priest gatekeepers." They want to be The Only One That Understands The System, so they are indispensable, and it also strokes their own ego.
If possible, they might customize the system, so they are the only ones that can comprehend it, and they can often be extremely rude to folks that don't have their prowess.
I suspect that we've all run into this, at one time or another. It's fairly prevalent, in tech.
I like that! I'll be adding that to my back pocket for an appropriate conversation in the future.
I've absolutely experienced this, and, to a degree, I'm dealing with it now in supporting a huge enterprise platform that's a few decades old.
The really interesting (frustrating?) piece is that the "high priest gatekeepers" are on both sides of the equation - the people who have used the system for years and know the appropriate incantations and the people who have developed it for years and understand the convoluted systems on the backend.
This dynamic (along with other things, because organizations are complex) has led to a very bureaucratic organization that could be far more efficient.
>I have found that some folks like to be "high priest gatekeepers." They want to be The Only One That Understands The System, so they are indispensable, and it also strokes their own ego.
I agree that that happens, but I suspect a lot of times it's not a conscious decision by the person who is doing the gatekeeping. The end result is more or less the same, but often those people feel like they are the only one that understands, not that they intentionally want to be the only one that understands.
It seems like a trivial difference, but having some empathy for these people and finding out which is which makes it possible to deal with at least a subset of them.
I don't know, I tend to prefer honing my skill at crafting simpler solutions. And if some colleague comes up with something simpler than my proposal, I will rather be pleased and honored to work with bright minds that can cast more light on the path to more elegant patterns.
There's a flip side to this that I think is quite positive.
When you build a tool that improves efficiency, the users either do more with the same effort or do the same with less effort. The former might be more constructive, but both are good.
When the tool is particularly effective, it enables use cases that were not even considered before because they just took too much effort. That's fantastic, but I suppose that's the paradox described here: the new use case will come with new requirements, and now there are new things to make more efficient. That's what progress is all about, isn't it?
- If you complete your work faster, you will be assigned more work, reducing free time as well as hourly wage.
- Any improvements in productivity will become the new baseline for performance.
- Any cost savings will be absorbed by the company, while any cost overruns will be passed on to the rank-and-file workforce.
And the "conservation" paradoxes:
- The more you reduce power consumption, the more the power company will raise rates to compensate.
- The reward for reducing water usage by 10% this year is a mandate to reduce water usage by 10% next year.
Those are only true if the gain in productivity is not shared between owner and worker.
And whether that happens depends on the balance of power between the two in terms of supply and demand and various laws.
The examples he gives aren't very clear. Let's just state one that's fairly obvious to me:
Back in the day, someone introduced tabs in browsers, which made it possible to browse several websites in a single browser window. People loved it so much that they started running browsers with dozens of open tabs. But then this caused more pain, because now people had too many tabs to navigate. And this sparked the creation of tab managers, which introduce more complexity into how people browse the web than before.
A couple of decades ago, browsing the web was considered a specific "activity" that you do on a computer for a specific need, and then close the browser window when you're done.
A few decades earlier, using a personal computer at all was considered to be a specific activity, and people didn't really "know they needed" to have multiple applications running at the same time.
Tog's paradox seems to explain this evolution really well.
It is somewhat similar to Jevons' paradox: technological progress increases the efficiency with which a resource is used, but the falling cost of use induces enough additional demand that total resource use increases rather than decreases.
E.g., people who purchase cars with improved fuel economy end up driving so much more that they use even more fuel than they would have with a less efficient car.
https://en.wikipedia.org/wiki/Jevons_paradox
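To make the rebound concrete, here's a minimal sketch with hypothetical numbers, assuming a constant-elasticity demand curve (miles driven respond to the cost per mile): total fuel use rises after an efficiency gain exactly when the demand elasticity exceeds 1.

```python
# Jevons-style rebound: a toy model with constant-elasticity demand.
# All numbers are hypothetical; miles driven are assumed to follow
# miles = k * cost_per_mile**(-elasticity).

def fuel_used(mpg: float, fuel_price: float, elasticity: float, k: float = 10_000.0) -> float:
    """Annual gallons burned given fuel economy and a demand curve for driving."""
    cost_per_mile = fuel_price / mpg            # $ per mile driven
    miles = k * cost_per_mile ** (-elasticity)  # demand responds to cost
    return miles / mpg                          # gallons = miles / mpg

for elasticity in (0.5, 1.0, 1.5):
    before = fuel_used(mpg=25, fuel_price=4.0, elasticity=elasticity)
    after = fuel_used(mpg=40, fuel_price=4.0, elasticity=elasticity)
    print(f"elasticity={elasticity}: {before:.0f} -> {after:.0f} gallons/year")
```

In this toy model, gallons burned scale as mpg^(elasticity - 1): inelastic demand (0.5) saves fuel despite some rebound, unit elasticity breaks even, and elastic demand (1.5) burns more fuel than before, which is the Jevons outcome.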
That 'paradox' is pretty much just basic economics when dealing with an elastic product, though. 'When efficiency gains reduce the price of a good that people would buy more of if it were cheaper, consumption of that good will rise'.
There's a friction between delivering the highest reasonable Quality, yet also allowing the initial users to provide feedback and helping us adjust the UX.
I deal with that, by using what I call "Constant Beta." My projects are designed to reach "beta" (or betta), as quickly as possible, so people can start actually using them, even if incomplete. Since I write Apple stuff, I can use Apple's TestFlight. I tend to start using it very early in the project, and there's often hundreds of releases, by the time it actually ships.
I have found that users will almost never provide feedback, no matter how easy I make it, or how much I beg, so I need to infer their usage, via requests for help, or features that I can tell are being used/not used.
The stuff I write has extremely strict privacy requirements, so I often can't collect metrics, or have to anonymize the shit out of the ones I do collect, so there's a lot of tea-leaves-reading, in my work.