andrewla · 8 years ago
My only objection to this is a semantic one -- the word "algorithm" is not being well-served here. The correct word for this sort of thing is "heuristic". The concern isn't that algorithms themselves are incorrect, the concern is that the problem they are trying to solve is a heuristic one, not a formal one.

Saying "let's write an algorithm to improve search results" is meaningless; the meaningful version is "let's design and implement a heuristic that improves search results". The algorithmic part of this is how to efficiently implement that heuristic.

I can usually get through articles like this by silently replacing "algorithm" with "heuristic"; the problem arises when some articles attempt to draw equivalencies between "algorithmic" concepts, like running time and space, and "heuristic" concepts, like optimizing for the wrong thing.

usrusr · 8 years ago
Algorithms running on "social hardware" can be surprisingly formal. A famously well-documented example is the early modern witch hunt. The humorous depiction in Monty Python and the Holy Grail does a surprisingly good job of conveying the algorithmic nature.
stult · 8 years ago
Many aspects of the law are algorithmic. Even though there is no one, settled formal definition for algorithm, statutory and common law meet many informal definitions. Laws usually lay out an ordered, (theoretically) unambiguous set of steps for deciding a legal issue. When lawyers talk about "elements of a test," they are referring to this structured logic.

For example, the elements required to prove a negligence claim are:

1. Duty

2. Breach of Duty

3. Cause in Fact

4. Proximate Cause

5. Damages

When evaluating a negligence claim, a lawyer first tries to determine whether the defendant owed the plaintiff any duty of care, then whether the defendant breached that duty, then whether that breach was the factual cause of a harm suffered by the plaintiff, then whether the causal relationship was close enough to be considered legally proximate, and then, finally, whether the plaintiff actually suffered measurable damages.
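That ordered, short-circuiting evaluation can be sketched as code (element names are taken from the list above; in real practice each finding is a judgment call, not a boolean):

```python
def evaluate_negligence_claim(facts):
    """Walk the elements of negligence in order, stopping at the first
    one that fails -- the short-circuit structure lawyers mean by
    'elements of a test'. `facts` maps each element to a boolean
    finding; the ordering is the jurisdiction's required sequence."""
    elements = ["duty", "breach", "cause_in_fact",
                "proximate_cause", "damages"]
    for element in elements:
        if not facts.get(element, False):
            return (False, element)   # claim fails at this element
    return (True, None)               # all elements satisfied

# Example: everything proven except proximate cause
claim = {"duty": True, "breach": True, "cause_in_fact": True,
         "proximate_cause": False, "damages": True}
```

Here `evaluate_negligence_claim(claim)` fails at `proximate_cause`, mirroring how the formal test stops once any element is unmet.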

Arguably, that superficially algorithmic process frequently breaks down in practice. For example, it's often easier to start with the damages suffered by a plaintiff and work backwards by identifying the causes, then who was responsible for those causes and any duties they may have owed to the plaintiff. However, regardless of how the lawyer and plaintiff identify whom to sue, they must frame their pleadings to allege the elements of the tort in the order specified by their jurisdiction's law, so the actual practice of law in court amounts to an algorithmic exercise.

pdfernhout · 8 years ago
Along those lines, here are some of my comments on this general topic from an email I posted to the Doug Engelbart Unfinished Revolution II Colloquium in 2000: http://www.dougengelbart.org/colloquium/forum/discussion/012...

===

... I personally think machine evolution is unstoppable, and the best hope for humanity is the noble cowardice of creating refugia and trying, like the duckweed, to create human (and other) life faster than other forces can destroy it. [Although in 2017 I'd add other possibilities like symbiosis or trying to create friendlier AI as a partner (or at least AIs with a sense of humor -- see James P. Hogan's AIs, or the ones like Libby in the EarthCent Ambassador series, or the Old Guy Cybertank series example), improved sensemaking through better intelligence-augmenting tools, and trying to help human society be more compassionate in the hopes our path out of a singularity will somehow reflect our path going in...]

Note, I'm not saying machine evolution won't have a human component -- in that sense, a corporation or any bureaucracy is already a separate machine intelligence, just not a very smart or resilient one. This sense of the corporation comes out of Langdon Winner's book "Autonomous Technology: Technics out of control as a theme in political thought".

You may have a tough time believing this, but Winner makes a convincing case. He suggests that all successful organizations "reverse-adapt" their goals and their environment to ensure their continued survival. These corporate machine intelligences are already driving for better machine intelligences -- faster, more efficient, cheaper, and more resilient.

People forget that corporate charters used to be routinely revoked for behavior outside the immediate public good, and that corporations were not considered persons until around 1886 (that decision perhaps being the first major example of a machine using the political/social process for its own ends).

Corporate charters are granted supposedly because society believes it is in the best interest of society for corporations to exist. But when was the last time people were able to pull the "charter" plug on a corporation not acting in the public interest? It's hard, and it will get harder when corporations don't need people to run them.

I'm not saying the people in corporations are evil -- just that they often have very limited choices of action. If a corporate CEO does not deliver short-term profits, he or she is removed, no matter what he or she was trying to do. Obviously there are exceptions for a while -- William C. Norris of Control Data was one of them -- but in general, the exception proves the rule. Fortunately though, even in the worst machines (like in WWII Germany) there were individuals who did what they could to make them more humane ("Schindler's List" being an example).

Look at how much William C. Norris of Control Data got ridiculed in the 1970s for suggesting the then radical notion that "business exists to meet society's unmet needs". Yet his pioneering efforts in education, employee assistance plans, on-site daycare, urban renewal, and socially-responsible investing are in part what made Minneapolis/St.Paul the great area it is today. Such efforts are now being duplicated to an extent by other companies. Even the company that squashed CDC in the mid 1980s (IBM) has adopted some of those policies and directions. So corporations can adapt when they feel the need.

Obviously, corporations are not all powerful. The world still has some individuals who have wealth to equal major corporations. There are several governments that are as powerful or more so than major corporations. Individuals in corporations can make persuasive pitches about their future directions, and individuals with controlling shares may be able to influence what a corporation does (as far as the market allows).

In the long run, many corporations are trying to coexist with people to the extent they need to. But it is not clear what corporations (especially large ones) will do as we approach this singularity -- where AIs and robots are cheaper to employ than people. Today's corporation, like any intelligent machine, is more than the sum of its parts (equipment, goodwill, IP, cash, credit, and people). Its "plug" is not easy to pull, and it can't be easily controlled against its short term interests.

What sort of laws and rules will be needed then? If the threat of corporate charter revocation is still possible by governments and collaborations of individuals, in what new directions will corporations have to be prodded? What should a "smart" corporation do if it sees this coming? (Hopefully adapt to be nicer more quickly. :-) What can individuals and governments do to ensure corporations "help meet society's unmet needs"?

Evolution can be made to work in positive ways, by selective breeding, the same way we got so many breeds of dogs and cats. How can we intentionally breed "nice" corporations that are symbiotic with the humans that inhabit them? To what extent is this happening already as talented individuals leave various dysfunctional, misguided, or rogue corporations (or act as "whistle blowers")? I don't say here the individual directs the corporation against its short term interest. I say that individuals affect the selective survival rates of corporations with various goals (and thus corporate evolution) by where they choose to work, what they do there, and how they interact with groups that monitor corporations. To that extent, individuals have some limited control over corporations even when they are not shareholders. Someday, thousands of years from now, corporations may finally have been bred to take the long term view and play an "infinite game".

However, if preparations fail, and if we otherwise cannot preserve our humanity as is (physicality and all), we must at least adapt with grace whatever of our best values we can preserve or somehow embody in future systems. So, an OHS/DKR [Open Hyperdocument System / Dynamic Knowledge Repository] to that end (determining our best values, and strategies to preserve them) would be of value as well.

When aluminum was first discovered around 1827, and for decades afterward, it was worth more than platinum, and now just under two centuries later we throw it away. In perhaps only two decades from now, children may play "marbles" using diamonds, and a child won't bother to pick a diamond up from the street unless it is exceptionally pretty (although you or I probably would out of habit -- "see a diamond, pick it up, and all the day you have good luck").

This long essay is my own current perspective on this developing situation, and part of the process of my formulating my own thinking on these trends and how I as an individual will respond to them.

To conclude, I think all the "classical" problems like food, energy, water, education, and materials will be technically solvable by 2050 even if we don't do much specifically about them (and, like hunger, are solved today except for politics). The dynamics of technology and economics are just taking us there whether we like it or not. Those goods may all essentially be "free" or "extremely cheap" by 2050. Obviously the complex politics of these issues need to be resolved, and the solutions need to be actually implemented. If they are "extremely cheap", people still need a tiny amount of income to buy them.

Still, I think Doug [Engelbart] is right. We face huge problems that only collaborative efforts can solve -- especially the problems of intelligent machines, technology-amplified conflict, and a complete disruption of our scarcity-based materialistic economic and social systems. These problems dwarf technical issues like energy, food, goods, education, and water.

The problem has always been, and will always be, "survival with style" (to amplify Jerry Pournelle). The next twenty years will fundamentally change what the survival issues are: environment, threats, and allies. They will also very well change what we value as "style" -- when diamonds are cheap as glass [perhaps from nanotechnology], what will one give to impress?

===

platz · 8 years ago
Talking Machines had an episode on the difference between algorithms and models, and on how the general public understands the meaning of the word "algorithm". These terms are generally conflated, which makes it hard to be absolutist about their meanings, at the very least.

The general public (and journalists) use the word "algorithm" to mean any computerized process that "does things to them", such as your Facebook news feed or what a credit agency does.

This is a different meaning from how social scientists use these words.

http://www.thetalkingmachines.com/blog/2017/9/22/the-long-vi...

In the episode, he talks about how even something like Principal Component Analysis (PCA), which we would normally call an algorithm because it follows a discrete sequence of steps, can also be thought of as resting on something that resembles a model.
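For the PCA example, a minimal sketch (using numpy; names and details are mine, not from the episode) shows how a "discrete sequence of steps" quietly encodes a model, here the assumption that high-variance linear directions are the interesting ones:

```python
import numpy as np

def pca(X, k):
    """Project X (n samples x d features) onto its top-k principal
    components. The 'algorithm' is a fixed recipe: center, form the
    covariance, eigendecompose, project. The implicit 'model' is that
    directions of maximum variance are the ones worth keeping."""
    Xc = X - X.mean(axis=0)                 # center the data
    cov = Xc.T @ Xc / (len(X) - 1)          # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]  # top-k directions
    return Xc @ top                         # projected coordinates
```

The steps never change, but whether the output is meaningful depends entirely on whether that implicit linear-variance model fits the data.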

jordigh · 8 years ago
I don't think there's a correct word here. He's talking about widely differing things and using a vague word to try to relate those things. The correct thing is to reject the relation and treat each of the notions he's talking about (such as web search results, government policy, and economic models) as the distinct things that they are.
smallnamespace · 8 years ago
We already have a great word that exactly describes our approach to capitalism though: ideology.
ScottBurson · 8 years ago
Yes, a lot of people use "algorithm" to mean simply "procedure". I'm glad to see someone besides me pointing out the distinction between algorithms, in the strict sense, and heuristics. Complaining about the misuse of technical terms is unlikely to have impact on usage in the popular press ([0]), but I think it's appropriate in a technical discussion.

One of the most egregious misuses of "algorithm", in my opinion, is the term "genetic algorithms". Not only are these not algorithms in the strict sense, but referring to the procedures as "genetic heuristics" or "genetic search" would be much clearer.

[0] https://news.ycombinator.com/item?id=10475884

dwringer · 8 years ago
Genetic algorithms are algorithms in that they are a description of a specific process or set of processes (which may be used in heuristics, or studied in isolation) at an implementation level of abstraction. "Genetic heuristics" suggests heuristic applications, rather than algorithms in isolation, and "genetic search" suggests a specific application, but a "genetic algorithm" can be extremely simple and isn't fundamentally different from an algorithm like quicksort. Either could be used as part of some heuristic, or not.
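To the point that a genetic algorithm can be extremely simple, here is a minimal one (this toy maximizes the number of 1-bits in a string; all parameters are arbitrary choices of mine):

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30,
                      generations=100, seed=0):
    """Minimal genetic algorithm: a fixed loop of selection, crossover,
    and mutation. The steps are precise; a guarantee of a good answer
    is not -- which is exactly the algorithm-vs-heuristic debate above."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:               # occasional mutation
                i = rng.randrange(length)
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# OneMax: the fitness function is simply the number of 1-bits
best = genetic_algorithm(sum)
```

The loop itself is as mechanical as quicksort; only its use as an optimizer over an ill-defined landscape makes it "heuristic".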
naasking · 8 years ago
> The concern isn't that algorithms themselves are incorrect, the concern is that the problem they are trying to solve is a heuristic one, not a formal one.

Heuristic problems are simply problems that aren't yet formally understood. I don't think it's meaningless to use "algorithm" in the examples you cite, as long as it's understood that a good algorithmic solution requires a good model of the actual problem being solved.

justonepost · 8 years ago
well, heuristics are made up of algorithms.
andrewla · 8 years ago
This is not true. A (bad) heuristic for search results might be "rank the documents by the total number of occurrences of each search term in that document".

That's not an algorithm -- that's a desired result. Similar to how "sorting a list" is a description of a class of algorithms; it gives no description of how a machine can accomplish that goal.

The difference between the heuristic above and "sort a list" is that the success criteria of the latter can be very well defined, whereas the heuristic presented is an attempt at approximating the desired result, which is something like "present the best search results first, for some meaning of best".
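The heuristic described above can be implemented in a few lines, which makes the division of labor concrete: the scoring rule is the heuristic, and the code is merely one algorithm for computing it (a naive sketch; real search engines use inverted indexes and far better scoring):

```python
def rank_by_term_count(documents, query):
    """The (bad) heuristic from the comment above: rank documents by
    the total number of occurrences of the search terms. The heuristic
    is the *what* (this score approximates relevance); the code is one
    *algorithm* for the *how*."""
    terms = query.lower().split()
    def score(doc):
        words = doc.lower().split()
        return sum(words.count(t) for t in terms)
    return sorted(documents, key=score, reverse=True)

# Hypothetical toy corpus for illustration
docs = ["the cat sat on the mat",
        "cat cat cat",
        "dogs are fine too"]
ranked = rank_by_term_count(docs, "cat")
```

Note there is no test that could tell you this ranking is "correct" the way a sorted-list check can; only that the scoring rule was applied faithfully.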

mathgenius · 8 years ago
The financial system is a giant message passing algorithm. It is pretty much just a min-sum algorithm [1] whose sole purpose is to answer the question "what should we do?". Anyone who has played around with these algorithms for solving decoding problems knows that they are fabulously powerful.

But these message passing algorithms have two weaknesses:

(A) When there is more than one solution

(B) When there are small loops in the message network

By far the worst problem is (B): it's a kind of "corruption" of the network and causes it to pretty much go off the rails. I think people already understand the consequences of these problems in the financial system, but we don't seem to see that we could change the topology and/or the messages themselves to fix up these self-reinforcing loops. Or: move away from min-sum toward sum-product (which often works an order of magnitude better), perhaps by implementing basic income. Etc. Etc.
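For readers unfamiliar with min-sum, here is a toy version on a chain of variables, where there are no loops and the answer is exact (the example and its cost tables are mine, purely for illustration):

```python
def min_sum_chain(unary, pairwise):
    """Min-sum message passing on a chain of variables (a tree, so no
    loops and the result is exact). unary[i][s] is the cost of variable
    i taking state s; pairwise[i][s][t] is the cost of the pair
    (x_i = s, x_{i+1} = t). Messages flow left to right; each message
    summarizes the best achievable cost of everything behind it."""
    n = len(unary)
    states = range(len(unary[0]))
    msg = [0.0] * len(unary[0])          # message into variable 0
    back = []                            # backpointers for decoding
    for i in range(n - 1):
        new_msg, argmins = [], []
        for t in states:
            costs = [msg[s] + unary[i][s] + pairwise[i][s][t]
                     for s in states]
            new_msg.append(min(costs))
            argmins.append(min(states, key=lambda s: costs[s]))
        msg, back = new_msg, back + [argmins]
    # terminate at the last variable, then backtrack
    last = min(states, key=lambda t: msg[t] + unary[-1][t])
    best_cost = msg[last] + unary[-1][last]
    assignment = [last]
    for argmins in reversed(back):
        assignment.append(argmins[assignment[-1]])
    return best_cost, assignment[::-1]
```

On a chain this is exact; close a short loop in the graph and the same message updates can double-count evidence and oscillate, which is the failure mode (B) refers to.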

[1] https://en.wikipedia.org/wiki/Generalized_distributive_law

KirinDave · 8 years ago
Having worked in the financial industry, I'd say your view of it is what we'd call the "50,000-foot" view.

I don't think it reflects reality on the ground, nor does it really reflect how humans in groups make decisions according to research.

The message-network loop issue isn't even an issue for message passing systems except in the most pathological cases (where it results in unbounded growth in messages or an infinite delta on the values the algorithm calculates).

mathgenius · 8 years ago
I'm not saying the finance system doesn't work; it obviously does, and amazingly well. I'm just pointing out the weaknesses of the system, and these weaknesses could easily cause humanity to walk off a cliff.

I also have a finance background. This idea that the system is a kind of message passing is not a "50,000-foot" idea; it actually came to me from writing algorithms to arbitrage markets. There you really are performing message passing (think of forex legs). It's Dijkstra's algorithm. So this is the view from 50,000 nanoseconds. But I believe it holds at many other scales. Plenty of people walk into the supermarket and buy the product with minimum cost. Yes? How is that a view from "50,000 feet"? But obviously we are not just min-sum automata: we are all free agents, and so on.
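The "forex legs" idea can be sketched as a shortest-path problem: maximizing a product of exchange rates is the same as minimizing a sum of -log(rate) edge weights. The comment mentions Dijkstra; since rates above 1 give negative weights, this sketch uses Bellman-Ford instead, which also detects arbitrage loops (the rates below are made up):

```python
import math

def best_conversion(rates, src, dst):
    """Find the best multi-hop currency conversion. Each rate r becomes
    an edge weight -log(r), so the cheapest path maximizes the product
    of rates. Bellman-Ford handles the negative weights and flags
    negative cycles, i.e. arbitrage loops. `rates` maps
    (from_ccy, to_ccy) -> rate."""
    nodes = {c for pair in rates for c in pair}
    dist = {c: math.inf for c in nodes}
    dist[src] = 0.0
    for _ in range(len(nodes) - 1):          # relax all edges |V|-1 times
        for (u, v), r in rates.items():
            w = -math.log(r)
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for (u, v), r in rates.items():          # one more pass: any slack left?
        if dist[u] - math.log(r) < dist[v] - 1e-12:
            raise ValueError("arbitrage loop detected")
    return math.exp(-dist[dst])              # best achievable rate src->dst

# Illustrative rates: going USD->EUR->JPY (0.9 * 160 = 144)
# beats the direct USD->JPY rate of 140
rates = {("USD", "EUR"): 0.9, ("EUR", "JPY"): 160.0, ("USD", "JPY"): 140.0}
```

The interesting point is the last loop: a "small loop in the message network" here is literally a cycle whose rates multiply to more than 1, and the message passing never settles.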

> how humans in groups make decisions according to research

Well this is obviously a huge topic.

> message network loop issue isn't even an issue for message passing systems except in the most pathological cases

I totally disagree with this. Short loops completely screw up message passing.

AndrewKemendo · 8 years ago
Great analysis.

> (A) When there is more than one solution

And this is exactly why the concept of a "growing pie" is flawed. Markets generally pool around a single solution rather than averaging across multiple ones, and there isn't enough capital (labor or dollars) to go around. I think it comes back to power-law behaviors and everyone wanting a huge win. So in effect markets look like fixed pies in the short term, and the winner is the one that grows the pie.

Trouble with this scheme is, the "pie growth", when it happens, is distributed to a very small group of people who have the ability to make big bets - so it compounds.

In terms of (B), those are basically local maxima tied to (A), so the distribution of growth is chaotic and skewed. So while it might seem like corruption of the network, it is actually a function of the "winner take all" nature of any market in the absence of either consumer/user self-regulation or some deus ex machina regulator (government, etc.).

Bottom line, it's a problem with how humans act (or fail to act) collectively around information sharing.

AnthonyMouse · 8 years ago
> Trouble with this scheme is, the "pie growth", when it happens, is distributed to a very small group of people who have the ability to make big bets - so it compounds.

It seems like the real trouble is that we've set things up so that "big bets" are required.

Huge tomes of regulations that have one-time compliance costs, so the cost to a tiny shop is the same as it is to Comcast or Microsoft, and regulatory capture to keep it that way.

Tax laws that encourage profits to be kept within the corporation instead of returned to investors, which requires successful companies to become conglomerates.

Laws that allow Hollywood and Apple to control content distribution and reject anything from anyone who poses a competitive threat to them.

Fix things like that and you won't have to be so big to grow the pie.

platz · 8 years ago
> The financial system is a giant message passing algorithm. It is pretty much just a min-sum algorithm [1] whose sole purpose is to answer the question "what should we do?".

No, no, no. To even propose that the question "what should we do?" can be answered by the financial system is laughable; that is pure free-market absolutism.

The answer to the question "what should we do?" is _political_.

Let us not confuse the market with politics.

nine_k · 8 years ago
I'm afraid you read "what should we do" in a much more general sense than the original author intended. I think the algorithm answers the narrower question of "what should we do to optimize monetary resource allocation", and it only answers within the boundaries you set; trust and reputation, for instance, play major roles.


platz · 8 years ago
Downvoters, explain your logic please
rsync · 8 years ago
"(B) When there are small loops in the message network"

Can you elaborate a real-world example of this for the layperson ? 50k foot view is fine :)

smallnamespace · 8 years ago
Here's the most important loop that's been driving up staggering wealth inequality in our lifetimes:

- Corporations and the ultra-rich tend to be the ones who try to profit-maximize the most (because people who care about other things don't work as hard to accumulate wealth, and because corporations that profit-maximize the best tend to survive and grow better)

- National governments since WW2 have done a decent job of redistributing wealth, but since there are increasing returns to scale on investing in tax avoidance/evasion, it is the richest individuals and corporations that are best able to avoid taxes and move wealth offshore

- It is cheaper and easier to cut a sweetheart tax deal with a corporation or a rich individual to temporarily attract capital to a nation-state than to generate wealth the hard way through education and infrastructure investment

So our global capitalist system for the past few decades has simultaneously selected for the most selfish, profit-maximizing institutions and individuals, while also setting up a race to the bottom between nations (and even cities and states -- see Amazon's bid for a 2nd headquarters or Tesla's Gigafactory) to see who can give the biggest tax breaks to those at the top.

nine_k · 8 years ago
> "(B) When there are small loops in the message network"

> Can you elaborate a real-world example of this for the layperson ? 50k foot view is fine :)

pdfernhout · 8 years ago
A real-world example of such dysfunction happening right now is that most of the money that could be used by all people to signal demand via cash "messages" is now tied up in a small loop of messages sent between the wealthiest people: a "Casino Economy". See for example "Money as Debt II": https://www.youtube.com/watch?v=6MwHgpFSQMo

Money can be seen as a form of kanban unit or ration token for signalling demand. Essentially, the richest 1% or less of investors now use their "messaging" tokens (cash) for speculative investments in games against other wealthy investors in the financial sector (foreign exchange, derivatives market, etc.). That starves the rest of the economy of kanban tokens (cash), so it can't function. It would be as if you walked into a Toyota factory that uses kanban tokens and randomly removed 90% of them -- that would prevent each industrial operation from signalling its need for required parts from other operations, causing all operations to slow down as they wait on their dependencies to arrive. https://en.wikipedia.org/wiki/Kanban

Almost any economist will agree that if 90% of the money supply suddenly disappeared we would suffer a great economic depression in the USA. But the same economists seem unable to accept that the same depression will happen for the 99% if the richest 1% take most of the money supply out of general circulation and just use it to play poker with each other. There are other complexities (including the velocity of money message passing), but the big issue many people overlook seems to me to be this: it is not just the total amount of the money supply that matters, but how it is distributed.

Unfortunately, most of the governmental solutions (designed to satisfy wealthy donors taking part in the legalized bribery of campaign donations) are based on supply-side "voodoo economics", like giving trillions more dollars to the wealthy via bank loans or tax cuts. This is done in the foolish, unfounded hope that the wealthy will use the extra money differently than they already have in a casino economy disconnected from meeting the needs of the 99%.

Even the slightest amount of thought will show how absurd supply-side economics is compared to demand-side economics. Almost anyone who can show a predictable demand for a good or service (like booked orders) can get a bank loan to fulfill those orders -- and to get orders, the customers need to have cash to signal demand. It is demand that makes businesses successful -- not supply.

Markets can work well to meet needs and wants, but they only hear the needs and wants of people who have money. Thus the value of a basic income to ensure all people's needs and wants are heard by the market to at least a basic extent.

Other options for dealing with the cash crisis caused by the triumph of the Casino Economy include strengthening non-market parts of the overall society such as: subsistence production (home 3D printing, home robotics, home gardening, home power via solar panels); the gift economy with more volunteering, freecycling, and sharing knowledge via the internet; and better democratic planning.

Unfortunately, with the big move of women into the workforce in the USA over the past few decades, home production, volunteering, and civic participation have all been reduced. Expanded entertainment options as a form of "Supernormal Stimuli" also distracted many people from physical daily life, also reducing participation in those other three sectors of society. Thus, a growing percentage of total societal interactions take place via exchange in the market instead of via subsistence, gift exchange, or civic planning.

Ironically, the "Two Income Trap" means families have very little to show for the second income between extra expenses involved in working outside the home, two-income families bidding up the price of houses and other items, and an increased supply of workers leading to lower compensation and poorer working conditions for everyone. See Elizabeth Warren's book on that: http://www.motherjones.com/politics/2004/11/two-income-trap/

With an increased supply of workers, there was decreased power of workers to demand wage increases in parallel with ongoing productivity increases. This in turn created a situation where the owners of capital could take profits and lend them to workers instead of paying them in wages. Richard Wolff talks about this in "Capitalism Hits The Fan" (whatever one thinks about his proposals for reform): http://issuepedia.org/Capitalism_Hits_the_Fan

To be clear, I'm not saying women should not have a choice as to what they do with their time. This is just about the societal implications of certain trends given men did not leave the work force to stay at home and be subsistence producers, volunteers, and civic actors in the same numbers as women who joined the work force.

I'm also not saying these "supernormal stimuli" are all bad (see for pros and cons: http://www.paulgraham.com/addiction.html ).

How we deal with that situation is a political question -- but to deal with it, we first need to acknowledge and understand what happened. And beyond a decrease in activity in the non-market sectors of society, one of the consequences of multiple trends has been a concentration of wealth in a smaller number of hands which has made the shift to a Casino Economy more likely.

Automation also has a role to play in that concentration of wealth from a different direction. Marshall Brain talks about that in "Robotic Freedom": http://marshallbrain.com/robotic-freedom.htm

Better technology has also increased options for participation in those three non-market areas (via cheaper tools, cheaper robots, and cheaper communications), so it is hard to say the entire trend has been downward for those non-market sectors. We may yet see them rise again as those other costs continue to fall -- and perhaps if people learn to move beyond the supernormal stimuli of mass-produced entertainment and back to making their own fun and using their own creativity.

While this won't directly break the tight loop of the 1% and the Casino Economy, it may bypass it so that it does not matter as much -- in which case all that vast money controlled by the 1% may just become like Monopoly money: meaningless to most people most of the time because their lives are built around non-market interactions.

pishpash · 8 years ago
The gist is right, but overall this is some handwavy gibberish.


jordigh · 8 years ago
Is this his new meme-hustling? "Algorithm"?

https://thebaffler.com/salvos/the-meme-hustler

I've never liked how Tim O'Reilly frames his discussion around vague terms like "open" which could mean participatory, transparent, available, or any other number of vague, feel-good terms. Now he seems to be calling "algorithm" things like economic models and government policy.

These are widely disparate things, but by using vague terms in different contexts, he pushes discussion towards the direction he wants to steer it: in the case of "open", away from free software. In the case of "Web 2.0", towards anything that involved crowd participation.

With "algorithms", he seems to be wanting to push the notion that technology is both scary but liberating and we need tech messiahs like Bezos or Musk to bring it under control.

tenaciousDaniel · 8 years ago
That's been bothering me: the word "algorithm" is slowly becoming known as this ambiguously scary thing.
Chaebixi · 8 years ago
It's kinda understandable, though. Whenever the services of Google, Facebook, etc. behave in an inscrutable, nonsensical, or offensive way, they blame it on their "algorithm" (for a recent example, see https://www.theguardian.com/us-news/2017/oct/06/youtube-alte...). That's really the only context where the term "algorithm" surfaces in mainstream discussion.

ML has had a lot of successes, but one of its failures has been greater unpredictability at the level of the individuals and events that people actually experience.

KirinDave · 8 years ago
Maybe if humans took responsibility for the algorithms they created and didn't shrug and say, "It's the algorithm" then "the algorithm" wouldn't be the antagonists in this story.
coliveira · 8 years ago
You should get used to that. When technology does harm, the creators of that technology will point to the algorithm, the same way Google talks about an algorithm as being a living thing. Of course, whenever algorithms do good things, the merit goes to their (human) creators -- in fact, to the founder(s) of the company that created the algorithm.
fmihaila · 8 years ago
Yeah, 'algorithms' are the new 'chemicals'.
mfoy_ · 8 years ago
An "algorithm" is simply a way to do a thing based on a set of rules to be followed... of course it's ambiguous.

Getting scared of anything that generic is silly. Might as well fear the outside because anything can happen out there!

zghst · 8 years ago
Everyday people couldn't even tell you the definition of "algorithm"; even if they could, they wouldn't recognize that algorithms are encoded not only into chips but also into business processes, legal compliance, etc.
sukilot · 8 years ago
O'Reilly's is an interesting story. He became fabulously wealthy by selling partly/mostly closed-source books about open-source software, in the short time-window when open source existed while it was also possible to make money selling books (before Internet samizdat became cheap as free).
packetslave · 8 years ago
WTF is a "closed-source book"?
dcre · 8 years ago
Can anyone explain why O'Reilly thinks nobody knew until recently that companies are biased toward part-time work in order to avoid providing benefits?

"We can’t see, for example, that the algorithms that manage the workers at McDonald’s or The Gap are optimized toward not giving people full-time work so they don’t have to pay benefits. All that was invisible. It wasn’t until we really started seeing the tech-infused algorithms that people started being critical."

And here's one that's more subtle, so I don't blame him quite as much, but he is naive to think "ideas" are what cause corporations to act the way they do. Material and institutional conditions cause their behavior, which is then justified after the fact by appeal to shareholder value.

"Somebody planted the idea that shareholder value was the right algorithm, the right thing to be optimizing for. But this wasn’t the way companies acted before. We can plant a different idea. That’s what this political process is about."

pdimitar · 8 years ago
> Can anyone explain why O'Reilly thinks nobody knew until recently that companies are biased toward part-time work in order to avoid providing benefits?

How do you know he's thinking that? The way he was talking, I read him as "well it's obvious that nobody has stopped this behavior so it's fair to assume that not enough people noticed". Wouldn't you agree with that?

> And here's one that's more subtle, so I don't blame him quite as much, but he is naive to think "ideas" are what cause corporations to act the way they do. Material and institutional conditions cause their behavior, which is then justified after the fact by appeal to shareholder value.

That was the only part of the interview where I strongly disagreed with him -- and you're right. It's not about ideas; there are a lot of people out there who are extremely good at bean-counting and micro-management, and of course awful at promoting a positive work environment. They will never change. Only regulators can limit them a bit, if even that.

JustSomeNobody · 8 years ago
> How do you know he's thinking that? The way he was talking, I read him as "well it's obvious that nobody has stopped this behavior so it's fair to assume that not enough people noticed". Wouldn't you agree with that?

I wouldn't. Anyone... everyone... who's worked retail _knows_ this. Anyone who's worked retail management knows this because when you ask to hire people you're told to hire part timers. Two part timers is always better than a full timer, you're told. This is NOT invisible to anyone. It's simply unspoken.

ballenf · 8 years ago
>> Can anyone explain why O'Reilly thinks nobody knew until recently that companies are biased toward part-time work in order to avoid providing benefits?

> How do you know he's thinking that?

From the article:

> We had plenty of bias before but we couldn’t see it. We can’t see, for example, that the algorithms that manage the workers at McDonald’s or The Gap are optimized toward not giving people full-time work so they don’t have to pay benefits.

His thinking on the point seems pretty clear and the parent seemed to summarize it pretty well. I had the same question upon reading it and thinking back to the very prominent criticism that companies like Domino's and other fast food operators were taking for cutting workers' hours below the 32-hour max to avoid health care costs.

dcre · 8 years ago
Workers have been agitating against this behavior for many decades, so no, I would not agree that it is fair to assume nobody noticed.
jpster · 8 years ago
>We had plenty of bias before but we couldn’t see it. We can’t see, for example, that the algorithms that manage the workers at McDonald’s or The Gap are optimized toward not giving people full-time work so they don’t have to pay benefits. All that was invisible. It wasn’t until we really started seeing the tech-infused algorithms that people started being critical.

I’m not sure how this can be said with a straight face. An algorithm was really not needed to perceive this and it’s insulting and strange to suggest it.

vpribish · 8 years ago
“Tech-infused”. Like tech is an herb or a spice? Surely a sign that there are no coherent ideas to be found from that writer.
zghst · 8 years ago
"why capitalism is like a rogue AI"

These people seriously need to take a step back. At some point you cross the bridge between reporting and actively advertising someone's personal views.

I can't ever trust or read people who constantly try to push an agenda; it's disingenuous. Even more so when, while reporting, you engage in a personal exchange without discussing the data and statistics, i.e. the facts -- you are just allowing yourself to become someone else's personal blog.

Reporters are supposed to fact check, look for concrete evidence in someone's statements, hold people accountable to their words, and yet certain people get a pass all the time, even the star treatment.

saulrh · 8 years ago
Yes? This is an interview with an author that was conducted specifically to talk about the book and the material in it. What did you expect?

Unrelated, I'd like to see your argument against that particular line. IMO the comparison is an excellent one; its only issue is that the chosen scope ("the financial market") is too small. Corporations and other bureaucratic entities like governments are powerful cross-domain optimizers with utterly alien cognitive processes and goals. Intelligence, certainly, artificial, might as well call it that.

zghst · 8 years ago
Well I'd hope that we'd expect better when an author makes outrageous, nonsensical claims.

They are rehashing the book instead of analyzing its claims and assessing its validity. Conveniently, the blame for all the issues is left on the firms instead of on the rules of the marketplace, the conditions, etc., which lead to distortions and structural issues in the market.

edanm · 8 years ago
Calling financial markets AI is wrong in pretty much the only way a word can be "wrong": It's not what most people mean when they talk about AI.

This makes it really easy to make statements that sound deep and meaningful, but really aren't. E.g. "I'm not worried about Artificial Intelligence -- we already have artificial intelligence, it's called a company. Companies are artificial, and they behave intelligently."

This just isn't what people are worried about. What people are worried about is:

1. Soon we will be able to create software/robots that replace tons of human jobs. This has nothing to do with "companies as an AI".

2. A super-intelligence will be created that is vastly smarter than any human, and can make itself even smarter, but will have different goals than humanity. Again, this is only very thinly related to the "companies as AI" spiel (companies are not superintelligent, they don't actually have coherent goals of their own).

dbingham · 8 years ago
Everyone has an agenda of some kind. Whether conscious or unconscious, everyone has their own opinions and belief systems which color their perceptions of the world. That includes their perceptions of hard data. There's no such thing as objective reporting -- it's simply not possible for humans to remove their own opinions and feelings from their perception of an event or an issue and report on it in an unbiased way. No matter how hard they may try, their bias will slip in.

This is why I prefer intentional debate as a way of understanding the world rather than poor attempts at "objective" reporting. I always feel far more informed about a topic after I've listened to people with opposing agendas duke it out on an intellectual stage than I do when I've read a supposedly "neutral" article by someone either masking feelings on the topic or who doesn't care about it much. Intelligence Squared US is fantastic for this and I really wish there were more outlets like them.

With that in mind, if you think this article has an agenda, then -- instead of complaining about it -- go find one with the opposite agenda and read that. Then compare their points and arguments.

kijin · 8 years ago
> I can't ever trust or read people who constantly try to push an agenda, it's disingenuous.

On the contrary, I can't bear to trust people who don't display a clear agenda.

Everyone has an agenda. It's either spelled out, in which case we can clearly see it and proceed to discuss its merits; or it's hidden under a mask of neutrality, in which case it's much harder to notice and counteract. Pretending to be neutral is both disingenuous and manipulative.

Meanwhile, one of the major points that OP makes is that "algorithms" (or heuristics) are the same. All human-made algorithms reflect human biases. Some are easier to notice and counteract. Others are hidden deeper under a guise of neutrality. Either way, it does not help to pretend that there are no biases.

sp332 · 8 years ago
It's just a paraphrase. Here's the quote: "Yes, financial markets are the first rogue AI." I didn't read the paraphrase as an endorsement. It's just a description of what was said.
nailer · 8 years ago
> This one touches on the effects of Uber’s behavior and misbehavior, why capitalism is like a rogue AI, and whether Jeff Bezos might be worth voting for in the next election.

I'd prefer "Why O'Reilly believes capitalism is like a rogue AI". Having Wired repeat O'Reilly's statement, rather than attribute it to him, makes it seem like they're not critically examining a very significant claim.

zellyn · 8 years ago
Steven Levy is one of the best writers when it comes to profiles and the gestalt of the tech scene. His stuff is consistently well done. I think you're looking for a different _type_ of writing, perhaps?
pdimitar · 8 years ago
Maybe you're looking for PhD thesis presentations and not interviews with book authors, then. YouTube will be happy to help.
s73ver_ · 8 years ago
I think you're taking an idealized version of reporting that was never, ever true. Otherwise all papers would be the same, more or less. There has always been agenda pushing in reporting, both in what was reported, and what was not reported.
banned1 · 8 years ago
Hysteria sells.
pgtan · 8 years ago
Capitalism and AI bashing also. This book will become a hit in Germany.
egypturnash · 8 years ago
"I’m not sure that Jeff would make a great president, but he might."

So let me tell you about my experience with Amazon Fulfilment. I was gonna pay Amazon, who has this huge expertise in packing and shipping stuff, to fulfil my Kickstarter. I'd made a large, delicate art book. Amazon, in their infinite wisdom, stuffed them in bubble wrap envelopes and dropped them in the mail. They were getting bent, the envelopes were getting ripped up, it was a mess.

I spent a month in customer service hell e-mailing someone who was following a script that said I would have to turn on something they called "prep", which would ask them to look at it and package it better. Three times I checked that this was set, and three times I sent myself a test book that came in a bubble mailer.

Finally I got clued in that there is a high-level support team that you access by sending a complaint directly to Bezos. This person, after some back and forth, ultimately informed me that due to their internal systems, it is completely impossible for them to put a book in anything but this level of shitty packaging; "prep" is just not a process that can ever happen to a book.

They covered shipping my books out to someone actually capable of finding sensible boxes and shipping them. And they intimated that someone maybe lost a job over this. But changing this, they said, might take on the order of years.

I'm not sure I want a man who presides over a system like this running the country.

Especially given that O'Reilly points out that the "rogue algorithms" of the title are corporations, and the only reason Amazon is headquartered in WA is that the tax was lowest there...

jacknews · 8 years ago
"Our fears ultimately should be of ourselves and other people."

Indeed. The short-term fear at least should not be about machine overlords, but about how people in power use AI to increase their power and/or make life worse for everyone else.