> After decades of investment, oversight, and standards development, we are not closer to total situational awareness through a computerized brain than we were in the 1970s.
Hard to see how that could be true. In just about any field, computers today provide much better situational awareness than was possible in 1970.
The article makes the usual complaints about self driving cars:
> Despite $16 billion in investment from the heavy hitters of Silicon Valley, we are decades away from self-driving cars.
Yet cars are much more intelligent today than they were in the 1970s. And we are not decades away from self-driving cars - Waymo runs self-driving cars today in very specific locations.
FYI, it was written by a woman. I looked up her book, Kill It with Fire, and as a mainframer I have to say it seems pretty interesting.
I think what she alludes to in this essay, though, is more that AI cannot solve humans' socioeconomic problems. And even humans seem to struggle with those.
Whenever I read stories where the metrics became the targets, and the like, I am reminded of Varoufakis' book Economic Indeterminacy. He doesn't give any answers there, but there is this "strange loop" in rationalism that nobody really understands.
I also think that AI might be the wrong target, because you need to understand the problem before you can solve it, and once humans understand the problem, they don't need AI anymore; they just code the solution as an algorithm. On the other hand, if humans don't fully understand the problem, it's extremely difficult (except in artificial circumstances like games) to explain to an AI what the problem is, such that it would arrive at a "reasonable" solution (and avoid, at the very least, killing all humans).
If we can't understand the problem, will we be able to understand the solution presented by that AI? Or would we just apply it, blindly trusting the unfathomable reasons the AI used? Do we have an AI whose decision tree can be grasped by humans?
How much would you bet that human interviewers' opinions of a candidate after a video interview wouldn't be affected if the candidate were visibly in a room full of books? Or if they wore glasses, or had a painting hanging on the wall, or any of the various other things the researchers found made a difference to the AI's assessment?
To be clear, I am super-skeptical about the ability of AI systems to do a good job of judging an interviewee's personality from a short video clip. But (1) this seems obviously to be a really hard problem, and one that couldn't even have been attempted in 1970, and (2) I am also pretty skeptical about the ability of human interviewers to do it.
It's easy to point at the bookshelf example and say "Haha, AI is stupid", but it's actually quite impressive. One could easily argue that most human interviewers have similar biases, and the fact that it can detect such complex signals (books, glasses, etc.) IS impressive.
The problem in this case is the data and/or wrong objectives, but the "AI" here has a lot of awareness, just of the "wrong" signals.
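To make that failure mode concrete, here is a toy sketch (mine, not from the article or the study it cites) of a model latching onto a spurious training-set correlation, with a "bookshelf visible" flag standing in for the background signal; all names and numbers are made up:

```python
# Toy demo: a classifier rewards a spurious signal that happens to
# correlate with the label in training but not at deployment time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
competence = rng.normal(size=n)                      # the signal we care about
label = (competence + 0.5 * rng.normal(size=n) > 0).astype(int)
# In the training data, "bookshelf visible" happens to track the label.
bookshelf = np.clip(label + (rng.random(n) < 0.2), 0, 1)

X_train = np.column_stack([competence, bookshelf])
model = LogisticRegression().fit(X_train, label)
print("learned weights:", model.coef_)               # bookshelf gets real weight

# At deployment the correlation is gone, and accuracy drops with it.
X_deploy = np.column_stack([competence, rng.integers(0, 2, n)])
print("train accuracy: ", model.score(X_train, label))
print("deploy accuracy:", model.score(X_deploy, label))
```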
I don't see how this is relevant. Computers in the 1970s had no situational awareness about people interviewing for jobs. So yes, that software might be crap, but it still has infinitely more awareness.
> Yet cars are much more intelligent today than they were in the 1970s
Therein lies the problem. Your definition of intelligence presumes that it is a simple quantitative scale, measuring, I guess, something like "system complexity".
The relevant sense here, in which no progress has been made, is qualitative -- ie., it is a distinct property. And this property has not been acquired.
What is the property? It is dynamical, not formal. It is more like gravity (or pregnancy) than it is like addition.
It is the ability many animals have of adapting to the shifting and challenging environments in which they are embedded.
That type of adaptation is not formal: it is not adaptation in the sense of "updating a weight parameter". Rather, it is a matter of the cells of their bodies coordinating themselves differently, and thus of their tissues, and thus of their organs, and thus of their whole brain-body system. Both from a top-down command ("I want to run now, and so my cells...") and from the bottom up ("my cells..., so I...").
What enables animals to be fully embedded in their physical environment, to cope with and adapt to its radical shifts, is this capacity. The type of "crossword puzzle" "intelligence" we obsess over is entirely derivative of this more basic -- and vastly more powerful -- intelligence.
Cognition is just a semi-formal process, parasitic on the body's intelligence, whose role is simply to notice when that intelligence fails and to problem-solve the failure.
We have, at best, merely the architecture of this formal reasoning. But there is still nothing for it to reason about. And in this sense, computer science has made no progress -- and indeed, cannot. It is not a formal problem.
And yet computers continue to perform tasks that were for years talked about as uniquely human / intelligence-driven. This is a nice philosophical debate, but in practice I think it falls flat.
You have a valid point here. But it might be that, for practical reasons, progress in that direction won't be needed. Like, brute-forcing it might be enough to reach a level higher than we can grasp. And if we can't grasp it, it's all Greek to us anyway...
Let’s say you and I were going to race each other by walking. You start on the east side of Los Angeles and I’m in Santa Monica.
Does your lead mean you’re in a much better position than me? What if the finish line is in Amsterdam?
That’s how I see AI (particularly self-driving tech) today. Yes, technically there have been advancements, but we don't even know whether it's possible to get to the finish line today.
> Let’s say you and I were going to race each other by walking. You start on the east side of Los Angeles and I’m in Santa Monica. Does your lead mean you’re in a much better position than me? What if the finish line is in Amsterdam?
I can't figure out how to parse this. You refer to my starting location as a "lead", but then ask if it means I am in a better position - that is the definition of "lead". I think your point is that we are so far from what is needed that it is hard to know if we are even moving in the right direction.
Which is a weird argument. My brother drove me in his Nissan Rogue today, which does automatic lane following. You don't have to steer your car, or use the gas or brake for many driving conditions. That is unambiguously an improvement over full manual control.
If by “finish line” you mean super-human cognition in every single sense, then sure (although it is possible). That might be a century away while AI has nonetheless been stupendously successful in several areas.
This author makes broad, sweeping claims, supporting them with numerous references that (in all instances that I checked) actually counter their argument. I'm not even sure the author knows _what subject they want to talk about_, never mind what argument to present.
I waded through half of it hoping a coherent point would emerge before heading over to the HN comments to confirm my suspicions. It's a mash-up of a couple of different pop opinions on the state of ML without any real insight.
Yes. "AI" (read Deep Learning/LogReg/SVM models) do indeed perform better given more data. I can vouch for this myself. And there was also a paper regarding this.
>> Jeff Bezos’s Amazon operated on extremely tight margins and was not profitable
https://www.sec.gov/Archives/edgar/data/1018724/000119312509...
Amazon made $645 million in net profit in 2008, $476 million in 2007, and $190 million in 2006.
Where did this myth of "Amazon doesn't make profits" come from? Why are people seemingly unable to check publicly shared historical 10k and fact-check themselves before making statements like this?
Yeah, the 2008 number is wrong, but the meme comes from earlier. Its first profitable quarter was Q4 2001, four years after IPO, and its first profitable year was 2003, six years after IPO.[0] This seems like a long time, especially for the late nineties/early 2000s. (Though tbh, IPO three years after founding feels early to me too.)
Additionally, I seem to recall that they talked this up. Not "we're working on becoming profitable" but instead "We plan to continue losing money for several years. Deal with it."
I believe that prior to 2016, any profitable years were pretty much entirely thanks to Q4, and they were pretty small for its size[1]. A profitable year is good, but three quarters of losses each year will stand out. Sure, they're retail, but they're also tech. Sky-high margins are expected year-round.
[0] https://en.wikipedia.org/wiki/History_of_Amazon
[1] https://qz.com/1925043/the-days-of-amazons-profit-struggles-...
Amazon was unprofitable because they poured all their operating profits into growth projects, not because they were subsidizing operations with investment.
Like many retailers, the business is seasonal and Q4 has more shopping. This is modeled as part of the business.
We don't say lawn care businesses are unstable because they do most work in the summer.
From what I remember, Amazon's strategy early on was to take as much revenue as it could and invest it back into itself. It intentionally ran in the red to try to grow faster.
Title aside (which is silly, since AI is a toolset for solving a variety of problems), it is just so poorly written that it's not until more than halfway through that I think I see its main points: that present-day A.I. systems are too dependent on "clean" data, plus some nebulous discussion of how AI contributes to decision making in organizations. And the main point wrt data quality is rather silly in itself, because plenty of research is done on learning techniques that take into account adversaries or bad data. And all the discussion wrt how AI should be used to improve decision making is just super vague and makes it seem like the author has little understanding of what AI is and how it is actually used.
It is interesting that the author assumes that the intent of industrial applied AI is to make better decisions - from my experience, in the vast majority of cases companies are applying various techniques (both AI/ML and hard-coded heuristics) with the explicit intent to get cheaper decisions, knowing very well that they aren't going to be as good as a dedicated, caring human could make them.
The goal is either business process automation (do the same thing with fewer people) or to enable processing at a scale where doing it manually is impractical. For example, nobody would assert that an automated email spam filtering system is going to be better than a human filtering my email, but an automated filter is quite useful since most of us can't afford a personal secretary. The bar for "good enough to be useful" often is lower than "human equivalent".
This point is totally valid, but in the case at my work it is actually both. The old saying in marketing is "I know I'm wasting 50% of my marketing budget, I just don't know which 50%." It still holds true for companies with large budgets. We have applied XGBoost to produce more and better models for how best to allocate these budgets (a sketch of the general shape follows below). The results are both better and cheaper.
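A minimal sketch of the shape such a model can take, with synthetic data and hypothetical channel names (real budget models presumably use far richer features than raw spend):

```python
# Sketch: fit XGBoost to predict return from per-channel spend, then
# compare candidate allocations of the same total budget. Synthetic data.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(42)
n = 5000
spend = rng.uniform(0, 100, size=(n, 3))     # hypothetical: search, social, tv
# Made-up ground truth with diminishing returns per channel.
revenue = (10 * np.sqrt(spend[:, 0]) + 6 * np.sqrt(spend[:, 1])
           + 3 * np.sqrt(spend[:, 2]) + rng.normal(0, 5, n))

model = xgb.XGBRegressor(n_estimators=200, max_depth=4)
model.fit(spend, revenue)

# Two ways to split the same 100-unit budget; keep the higher prediction.
candidates = np.array([[60.0, 30.0, 10.0],
                       [10.0, 30.0, 60.0]])
print(model.predict(candidates))
```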
> People don’t make better decisions when given more data, so why do we assume A.I. will?
Because humans aren't computers. Computers are much better at processing large amounts of data than humans are.
> we are decades away from self-driving cars
Self-driving cars already exist. In college I had a lab where everyone had to program essentially a miniature car with sensors on it to drive around by itself. Making a car drive by itself is not a hard thing to accomplish.
> the largest social media companies still rely heavily on armies of human beings to scrub the most horrific content off their platforms.
This content is often subjective. It's impossible for a computer to always make the correct subjective choice, so humans will always be necessary.
I read the whole article and thought it was worth my time. I liked the broad strokes of its goals for anti-fragile AI.
I have been thinking about hybrid AI systems since I retired from managing a deep learning team a few years ago. My intuition is that hybrid AI systems will be much more expensive to build but should in general be more resilient, kind of like old-fashioned multi-agent systems with a control mechanism to decide which agent to use.
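For concreteness, a rough sketch of that control-mechanism idea (agent names and the routing heuristic are hypothetical, and a real system would learn or calibrate the confidence scores):

```python
# Sketch: a controller that routes each task to whichever agent reports
# the highest confidence, mixing a deterministic agent with a learned one.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    confidence: Callable[[str], float]   # self-reported fit for a task, in [0, 1]
    run: Callable[[str], str]

def rules_agent(task: str) -> str:
    return f"rules: handled {task!r} deterministically"

def learned_agent(task: str) -> str:
    return f"model: best-effort answer for {task!r}"

agents = [
    Agent("rules", lambda t: 1.0 if t.startswith("invoice") else 0.0, rules_agent),
    Agent("model", lambda t: 0.6, learned_agent),   # generic fallback
]

def controller(task: str) -> str:
    best = max(agents, key=lambda a: a.confidence(task))
    return best.run(task)

print(controller("invoice #1234"))    # routed to the rules agent
print(controller("free-form query"))  # falls through to the learned agent
```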
> People don’t make better decisions when given more data, so why do we assume A.I. will?
It's recognized that many machine learning systems today need very large amounts of training data, far more than humans facing the same task.
That's a property of the current brute-force approaches, where you often go from no prior knowledge to some specific classification in one step.
This often works better than previous approaches involving feature extraction as an intermediate step, so it gets used.
This is probably an intermediate phase until someone has the next big idea in AI.
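A rough illustration of the contrast on a toy dataset (the "extracted feature" below is deliberately crude and my own choice; it just shows the one-step-from-raw-input style beating a weak intermediate representation):

```python
# Compare (a) classifying straight from raw pixels with (b) classifying
# from a single hand-extracted feature, on sklearn's digits dataset.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# (a) one step: raw input -> classification
raw = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
print("raw pixels:       ", raw.score(X_te, y_te))

# (b) intermediate step: extract a feature, then classify on it
feat_tr = X_tr.sum(axis=1, keepdims=True)    # total "ink" per image
feat_te = X_te.sum(axis=1, keepdims=True)
feat = LogisticRegression(max_iter=5000).fit(feat_tr, y_tr)
print("extracted feature:", feat.score(feat_te, y_te))
```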
Wondering if this article was written by GPT-3.
Yes, the question is: do we want AI to solve first world problems, or real problems?
> Waymo runs self driving cars today in very specific locations
You sure about that?
https://twitter.com/hatr/status/1361756449802768387?s=20
> Waymo runs self driving cars today in very specific locations
Ernst Dickmanns had autonomous cars on the road in very specific locations in the 1980s:
https://youtu.be/_HbVWm7wdmE
Single dumb human posts singular dumb AI example to show that all AI is dumb, and fails to recognize the irony.
https://m.youtube.com/watch?v=FmXLqImT1wE
I recall some quite old Ford(?) project which guided cars by rail.
Specific locations where the streets are practically tracks.
Everyone is an expert in AI now!
Yeah I stopped reading at this point:
> Facebook’s moderation policies, for example, allow images of anuses to be photoshopped on celebrities but not a pic of the celebrity’s actual anus.
How much of this is just because it says something they do not want to hear or because there are incentives to not consider it?
And computers are already better at tasks involving a lot of math, which is the main reason they've become commonplace.