Readit News
jarpschope commented on If AI replaces workers, should it also pay taxes?   english.elpais.com/techno... · Posted by u/PaulHoule
overrun11 · 3 months ago
No that's just a really misleading graph. Most of the gap disappears once you include variable pay like benefits, overtime, bonuses, stock comp etc.

See this explanation and corrected graph: https://fraser.stlouisfed.org/title/economic-synopses-6715/w...

jarpschope · 3 months ago
There has been extensive debate around that topic since that paper came out. Some points to discuss:

1. Even the article you shared notes that, starting in 2003, earnings stopped tracking productivity: "Total compensation remains close until 2003, but does not follow 2003’s uptick in productivity growth (behavior which remains a topic for future research)."

2. They use average earnings, not median earnings. Average earnings include people like CEOs, which in turn shows that inequality among workers has also increased. See chart 4 here for how much smaller median wages are than average wages: (https://www.csls.ca/ipm/23/IPM-23-Mishel-Gee.pdf)

3. Apart from the average-vs-median difference, the biggest point of contention between that study and more recent ones is the measure of inflation used. The 2007 study you cite uses a measure of inflation that also includes things paid by employers, like medical insurance, and that measure yields significantly lower inflation. If you instead use the consumer price index, which reflects what workers actually pay out of pocket, the gap becomes larger again. Citing page 37 of the study above: "In other words, that the prices of consumer items has risen faster than a broader index of prices that includes net exports, government goods and services, and investment goods. Therefore, for a given increase in income, the purchasing power of the consumer has fallen faster than that of business for investment goods and foreigners for U.S. exports."

The article I shared before, plus this other one, describes all the discrepancies (https://www.epi.org/productivity-pay-gap/). See especially chart 10 in the PDF study, which shows every combination of how you can measure productivity and income. No matter how you look at it, the most substantiated conclusion is that income has NOT matched productivity.

jarpschope commented on If AI replaces workers, should it also pay taxes?   english.elpais.com/techno... · Posted by u/PaulHoule
kortilla · 3 months ago
No it’s not. If the increased productivity is realized by multiple industries, then they all compete on price and the price of their goods comes down. That means the consumers of the product capture the gains in productivity.

Farmers using machinery instead of labor has meant cheaper food for everyone, not rich farmers.

jarpschope · 3 months ago
This is possible in theory.

I think that if we compare inflation-adjusted productivity with inflation-adjusted average income, a growing gap between the two would indeed indicate increasing inequality, right?

I believe the chart in this link is adjusted by inflation. Showing overall the same trend:

https://www.epi.org/productivity-pay-gap/

jarpschope commented on If AI replaces workers, should it also pay taxes?   english.elpais.com/techno... · Posted by u/PaulHoule
zwnow · 3 months ago
> salaries for contributors grow

I don't see that happening

jarpschope · 3 months ago
Indeed that has not happened: https://tinyurl.com/3dutardj
jarpschope commented on If AI replaces workers, should it also pay taxes?   english.elpais.com/techno... · Posted by u/PaulHoule
Treegarden · 3 months ago
I see a natural equilibrium with a tension: automation (including through AI) causes unit economics to drop and results in cheaper prices. At the same time, salaries for contributors grow because their impact is so high. So you end up with a new equilibrium of much cheaper prices and much higher salaries.

What, however, about the people who can’t contribute? IMO the most natural and fair approach is to support (through whatever means) people’s “education”, allowing them to upgrade their skills so that they can contribute. IMO this leads to a new tension: not rich vs poor, or useful vs useless, but people who can up-level their skills vs those who can’t. And I think, at its extreme, it boils down to this: how much plasticity does your brain have? Because society can adapt to or accommodate every other constraint.
jarpschope · 3 months ago
Yeah, that definitely won't work at scale. The bar for what counts as being "educated" keeps rising: previously it was knowing how to code, now it is having an ML PhD, for example. At the same time, AI keeps getting more and more capable, so no matter how much "education" you have, AI will eventually catch up to you.

In any case, the argument won't work for the majority of the population, who lack a college degree. Are you going to have 50-plus-year-old truck drivers upskilling in a fancy new tool to keep a job? And again, how long until that new skill you trained them in is also done by AI?

jarpschope commented on If AI replaces workers, should it also pay taxes?   english.elpais.com/techno... · Posted by u/PaulHoule
CrazyStat · 3 months ago
To what extent is productivity a sign of the system getting imbalanced towards capital? That relationship is not at all clear to me.
jarpschope · 3 months ago
If productivity is increasing but average salary is not, then by definition the additional wealth is going to the owners of capital.
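To make the arithmetic behind that claim concrete, here is a toy calculation (all numbers are made up purely for illustration): if output per worker grows while the wage stays flat, the labor share of output falls and all of the growth accrues to capital.

```python
# Illustrative numbers only: output per worker grows, the wage does not.
wage = 50.0
output_before, output_after = 100.0, 120.0

labor_share_before = wage / output_before   # 0.50
labor_share_after = wage / output_after     # ~0.417: labor's slice shrinks

# Capital's take grows by exactly the productivity gain.
capital_gain = (output_after - wage) - (output_before - wage)  # 20.0

print(labor_share_before, round(labor_share_after, 3), capital_gain)
```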
jarpschope commented on Who Can Understand the Proof? A Window on Formalized Mathematics   writings.stephenwolfram.c... · Posted by u/ColinWright
markisus · a year ago
I was sort of puzzled by the meaning of "axiom for boolean algebra" as well, and I looked into this more.

The way I learned boolean algebra was by associating certain operations (AND, NOT, OR, etc) to truth tables. In this framework, proving a theorem of boolean algebra would just involve enumerating all possible truth assignments to each variable and computing that the equation holds.

There is another framework for boolean algebra that does not involve truth tables. This is the axiomatic approach [1]. It puts forth a set of axioms (eg "a OR b = b OR a"). The symbol "OR" is not imbued with any special meaning except that it satisfies the specified axioms. These axioms, taken as a whole, implicitly define each operator. It then becomes possible to prove what the truth tables of each operator must be.

One can ask how many axioms are needed to pin down the truth table for NAND. As you know, this is enough to characterize boolean algebra, since we can define all other operators in terms of NAND. It turns out only one axiom is needed. It is unclear to me whether this was first discovered by Wolfram, or the team of William McCune, Branden Fitelson, and Larry Wos. [2]
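To illustrate why pinning down NAND suffices, here is a quick sketch (function names are mine) of the standard constructions of NOT, AND, and OR from NAND alone, checked against Python's built-in operators:

```python
def nand(a, b):
    return not (a and b)

# Standard derivations of the other operators from NAND alone:
def NOT(a):    return nand(a, a)                       # a NAND a
def AND(a, b): return nand(nand(a, b), nand(a, b))     # NOT(a NAND b)
def OR(a, b):  return nand(nand(a, a), nand(b, b))     # (NOT a) NAND (NOT b)

# Verify the derived truth tables match the built-in operators.
for a in (False, True):
    for b in (False, True):
        assert NOT(a) == (not a)
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
```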

[1] https://en.wikipedia.org/wiki/Boolean_algebra_(structure)

[2] https://en.wikipedia.org/wiki/Minimal_axioms_for_Boolean_alg...

jarpschope · a year ago
Thanks for the wonderful explanation brother!
jarpschope commented on Amazon tells employees to return to office five days a week   cnbc.com/2024/09/16/amazo... · Posted by u/jbredeche
smcleod · a year ago
I don't know how true it is, but I've heard that the ratio of managers to engineers has been increasing over the last few years. If that's true, this policy lines up with what we've seen from management-heavy / inverted-triangle org cultures.
jarpschope · a year ago
> Inverted triangle

That can't be the case, my man. Even if there were a manager for every two employees below, the number of managers across all levels would at most equal the number of employees at the lowest level, since:

x (lowest level) = x/2 (first) + x/4 (second) + x/8 (third) + ...

In reality, there are far more employees per manager and the levels of management are not infinite, so the ratio of managers to employees is well below 1.
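The geometric series above can be sanity-checked numerically. A quick sketch (the span-of-control values and headcounts are illustrative, not real Amazon numbers): sum the managers at each level going up the hierarchy and compare against the bottom-level headcount.

```python
def total_managers(bottom_level, span=2):
    """Sum managers across all levels for a hierarchy where each manager
    has `span` direct reports, starting from `bottom_level` workers."""
    managers, level = 0, bottom_level
    while level > 1:
        level = level // span  # headcount one level up (integer division)
        managers += level
    return managers

# Even at the extreme span of 2, managers never outnumber the bottom level:
print(total_managers(1024))          # 1023 managers for 1024 workers
# With a more realistic span of 8, the ratio drops far below 1:
print(total_managers(4096, span=8))  # 585 managers for 4096 workers
```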

jarpschope commented on AI Is a False God   thewalrus.ca/ai-hype/... · Posted by u/pseudolus
_wire_ · 2 years ago
> The classic story here is that of an AI system whose only—seemingly inoffensive—goal is making paper clips. According to Bostrom, the system would realize quickly that humans are a barrier to this task, because they might switch off the machine.

I am at a loss to understand how this agent of doom (the AI not Bostrom) can be both "intelligent" and not understand that there are enough paperclips.

Unless I assume the argument rests on the word intelligent being meaningless.

But go on...

jarpschope · 2 years ago
Would a superintelligence conclude that humans are a cancer on Earth that must be destroyed? That's a better example of the core of the alignment issue: some values that humans hold in high regard, like the continued existence of billions of humans on Earth, may not be present in a non-human-biased superintelligence.
