Everything is always painted in such an adversarial light, it makes you despair sometimes.
I think The Atlantic's recent article on this topic offers a more nuanced take[1]; human-machine cooperation is probably where the big money will be. Companies that seek to cut people out of the loop will probably run into a lot of problems, as will those that smash the looms. Smoothing the interface between AI/ML conclusions and human oversight is probably going to see the most success.
> human-machine cooperation is probably where the big money will be.
As it has been for as long as machines have existed, really. This reminds me of Douglas Engelbart and his vision for computers. I'll cite the section of his Wikipedia page[0] that paraphrases an interview with him from 2002[2].
> [Douglas Engelbart] reasoned that because the complexity of the world's problems was increasing, and that any effort to improve the world would require the coordination of groups of people, the most effective way to solve problems was to augment human intelligence and develop ways of building collective intelligence. He believed that the computer, which was at the time thought of only as a tool for automation, would be an essential tool for future knowledge workers to solve such problems.
He was right, of course, and his work led to "The Mother of All Demos"[1].
Machine learning is the next step in using computers as thought enhancement tools. What we still need to figure out is an appropriate interface that is not as "black-boxy" as "we trained a neural net, and now we can put X in and get Y out".
EDIT: Now that I read that quoted section of Wikipedia again, it's funny to note that computers were "at the time thought of only as a tool for automation", and that modern fears of AI are also about automation. Automation of thinking.
It's funny that you bring that up - it does seem like the concept of 'extended cognition' is one of the biggest benefits that we've collectively realized from computers (and other relatively nonvolatile communication media, like books).
This is a computer-oriented analogy, but most fields have their own tables and charts and maths that are tedious to keep on the tip of your mind. Still, for example, I don't need to remember the details of every API that I use; I can just remember that there is a 'do X' call available, and refer to the documentation when and if I need to actually use it.
In the same vein, I can quickly get a feel for whether an idea is possible by stringing together a bunch of abstract mental models. "Can I do X?" becomes, "are there good tools available for doing A, B, C, and D?", and that information is only a quick search away. Actually using those tools involves an enormous amount of detail, but it's detail that I can ignore when putting an idea together.
And in most cases, that 'detail' is a library or part that already abstracts a broad range of deeper complexities into something that I don't have to think about.
The question becomes something like: how do we expose people to enough information that they are aware of how much they can learn if they need to, without drowning them in trivia that they will never be interested in?
Ironically enough, Engelbart was often derided by his colleagues at the time who thought hard AI was just around the corner and so all of this intelligence augmentation stuff would be obsolete soon enough. Today we are closer than ever (always just 20 years away!), but still IA rather than AI is very much the way to go.
This is a fantastic point, and I think a lot of AI development that goes in the direction of trying to replace human beings is essentially absurd. We already have humans, why would we want something that can do what humans can already do? Rather, we want something that extends the capabilities of humans into areas where they aren't proficient. For instance, why do we put all this effort into natural language processing when humans are already totally optimized for it? What we need is a solution to scaling, not a solution to NLP itself.
EDIT:
To expand; one way to do something like Siri would be to have a system that routed requests to human operators. The human operator would give the correct answer to the request, and then the system would use that as training data. If the system was reasonably confident it already knew the answer to the request from previous training data, it would answer right away, but if it was below a certain confidence it would route to a human. This seems like the smartest way to leverage machine learning in these kinds of scenarios, and I'd be surprised if someone hasn't already tried it or something similar in the past.
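A minimal sketch of that routing idea, with a toy lookup standing in for a real trained model and an in-memory dict standing in for the training-data store (all names here are hypothetical, not any real Siri internals):

```python
# Sketch of confidence-threshold routing: answer automatically when the
# model is confident, otherwise escalate to a human and learn from them.
# The "model" here is a toy exact-match lookup; a real system would use
# a trained classifier and a proper labeling pipeline.

CONFIDENCE_THRESHOLD = 0.8

training_data = {}  # request -> answer, built up from human responses

def model_predict(request):
    """Return (answer, confidence). Toy: exact matches are fully confident."""
    if request in training_data:
        return training_data[request], 1.0
    return None, 0.0

def ask_human(request):
    """Stand-in for routing the request to a human operator."""
    return f"human answer for: {request}"

def handle(request):
    answer, confidence = model_predict(request)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer, "auto"
    # Below threshold: escalate, then record the reply as training data.
    answer = ask_human(request)
    training_data[request] = answer
    return answer, "escalated"

# The first time the system sees a request it escalates; after that it
# answers on its own.
print(handle("weather tomorrow?"))
print(handle("weather tomorrow?"))
```

The interesting tuning knob is the threshold: set it high and you pay for lots of operators but rarely give a wrong automated answer; lower it gradually as the training data accumulates.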
Some of the most successful investors, like Warren Buffett, are old school and also operate on a completely different level than most investors: he can force companies to change course, and after the 2008 crash he was able to come in and offer deals to big banks in return for preferred shares.
It's hard to see how to apply narrow AI to this kind of thing. It seems really good for routine tasks, like high frequency trading, but maybe not so great for these big one-off deals which constitute many of the best investments.
Of course, Buffett might still benefit from AI analysis of broad market trends, and the like. If I'm wrong, I'd be interested to know.
> It seems really good for routine tasks, like high frequency trading, but maybe not so great for these big one-off deals which constitute many of the best investments.
I understand what you mean from an opportunity identification perspective, but you have to keep in mind that even the "big one-off deals" require routine tasks at a lower level to verify the merit of such deals. If you think about tasks like reviewing financial statements, AI could provide faster evaluation and potentially identify trends that would elude a human analyst. In any case, Buffett is known for avoiding investments in companies he doesn't deeply understand and I would bet the same stance holds for employing new technology in his investment process.
Buffett's investing algorithm hasn't changed in 65 years. Every day he reads financial reports. If he likes a company's business model, he makes a mental estimate of the company's value, and then looks up its trading price. If there is a big enough discrepancy in his favor he will start buying.
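That loop is simple enough to caricature in a few lines. The numbers and the value estimate are of course the hypothetical hard part; this just shows the margin-of-safety comparison at the end:

```python
# Caricature of the value-investing loop described above: estimate intrinsic
# value, compare it to the market price, and buy only when the discount is
# large enough. The 30% margin is an illustrative choice, not Buffett's.

MARGIN_OF_SAFETY = 0.30  # require price at least 30% below estimated value

def should_buy(estimated_value, market_price, likes_business):
    """Decide whether a stock clears the value screen."""
    if not likes_business:
        return False  # step one: skip businesses you don't understand/like
    discount = (estimated_value - market_price) / estimated_value
    return discount >= MARGIN_OF_SAFETY

print(should_buy(100.0, 65.0, likes_business=True))   # 35% discount: buy
print(should_buy(100.0, 80.0, likes_business=True))   # only 20%: pass
```

Everything difficult lives inside "estimated_value", which is exactly why the approach is hard to automate.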
At a certain point humans and AI can also play off one another predictably.
Everyone has played games where the AI can beat you in a straight shot, but you can lead the AI into predictable situations where you gain a predictable advantage, and vice versa with humans.
Example: with buy-the-dip and technical strategies, big players could drive down the market, and HFT can buy the dip based on fundamentals. Bad news floods the market and HFT reacts.
Humans can predict what AI will do; AI will then start to predict what humans do in response. But humans are always one step ahead with new techniques, and AI will be built to defend against them.
Regarding defending against a buy-the-dip strategy: AI can start to learn player specifics and not react, or react differently (preemption) if those players return; however, this too can eventually be played.
Humans and AI will be playing a cat-and-mouse game for eternity; microcosms of this can be seen in gaming AI. I think of it more like a game that will be fun to play: sometimes you will lose, other times you will predictably win. Bots will challenge bots, unexpectedly and predictably, but they will almost always originate from human programming.
This is sort of tangential, but I think it's curious that in a lot of writing AI is almost defined as that which replaces human labour; the context of technological unemployment and whatnot.
In that frame, I think it's natural that discomfort is linked to autonomy. Autonomous taxis and cruise control may be points on a continuum technically, but economically, no human involvement is a difference in kind. Autonomy separates the PCs from the looms. Where a human is involved, cooperation with the machine is recognizably tool use. The human's labour gets more efficient with tools. More trinkets per human.
Maybe the Luddites thought of looms as autonomous, with humans in a supporting role.
Anyway, I think it's hard to predict where this goes on a 25y scale.
This is kind of my beef with most SciFi movies. They always seem to paint an antagonistic relationship between man and machine, when the reality will probably be something in the middle.
That's because science fiction movies are mostly not projecting a predicted future, they are projecting and exploring our fears and aspirations with the future.
I actually find it really funny that a lot of SciFi predicts a future where we are actually talented and intelligent enough to create a human-like, sentient being. The reality seems so distinctly far off from that, though I guess that is the point of fiction.
> human-machine cooperation is probably where the big money will be
The sceptical counterargument to that, which I go back and forth on, is "that's what they said about chess". There was a transitional period when this was true, then the engines disappeared into the middle distance.
I work on the hunch that the middle-ground of tasks where humans improve on, or with, machines is both small and unpredictable; computers will tend towards being either useless or strongly superhuman for each problem.
This is still true in chess. Humans use chess computers to play the game of chess. The tournament money still goes to humans, everybody still cares about the world's best players, and so forth. Even as chess engines have vastly surpassed humans in technical capability, they haven't somehow sidelined humans in all aspects of the domain, not even in competitive games themselves.
It's not like everybody has somehow switched to watching engine games. That is in fact just a niche market of the chess world. We are humans and as such we still enjoy seeing real humans thrive and compete in chess more than we care about machines.
If anything, chess is the perfect example that the pessimism is misplaced: chess engines have not killed chess as a human endeavour.
AI arrived in investing a long, long time ago. If you limit AI to deep learning, as in deep neural networks, maybe only 3-5 years ago. Strategies based on news have been around for decades. Figuring out what the news means isn't necessarily as helpful as it seems, because it's hard to put much size on in the limited time available, even if you are first. However, trading various patterns around news is much easier, and to do that all you needed to know was that some important news had arrived, not necessarily whether it was good or bad; the goodness or badness was plainly visible in the price action. Figuring out the magnitude, but not the sign, of the importance of a news item has not been difficult for a long time. Yet somehow we keep getting articles about how AI has arrived in investing.
As far as the return forecastability deniers out there, particularly the ones who claim to be doing it on the basis of some sort of empirical thing, well, if you can't be bothered to actually look at the data or even read academic literature on the subject, I can't be bothered to educate you.
I've literally missed the sign on a trade before, and it was 7-figure disastrous. (I've missed the direction of movement on individual symbols a number of times, but this one time I literally went the wrong way on everything by accident.)
Markets adjust too quickly to flip your position and profit in any reliable way. On planned or anticipated events, people are all locked and loaded waiting for something to happen.
However, I'd much rather know the sign because at least I can put on some position and guess a little at the magnitude.
Yes, but there are easier ways to make money trading volatility than forecasting single asset volatility. While you can fairly easily forecast volatility with R^2 higher than 60% for most assets vs 5-7% for the best models for returns, that's not the important bit. The important bit is whether you are better than the rest of the market. I would argue that implied volatility is harder to trade off a forecast than straight return because a greater proportion of the participants in the vol market are professionals, and also more likely to be highly quantitative geeks. Also my comment wasn't about being able to forecast large moves but being able to determine how much a news item was going to move an asset. As far as handling important news events and getting out of the way is concerned, options market participants are very good at it and have been for a while.
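The forecastability gap between volatility and returns is easy to see in simulation: in GARCH-style data, squared returns are strongly autocorrelated (volatility clusters) while returns themselves are not. A toy illustration under those assumptions, not a trading model:

```python
import random

random.seed(42)

# Simulate a GARCH(1,1)-style return series: variance is persistent,
# return signs are not. Parameters chosen for realistic-looking clustering.
omega, alpha, beta = 0.05, 0.10, 0.85
n = 5000
var = omega / (1 - alpha - beta)  # start at the unconditional variance
returns = []
for _ in range(n):
    r = random.gauss(0, var ** 0.5)
    returns.append(r)
    var = omega + alpha * r * r + beta * var

def autocorr(xs, lag=1):
    """Sample lag-k autocorrelation."""
    m = sum(xs) / len(xs)
    num = sum((xs[i] - m) * (xs[i + lag] - m) for i in range(len(xs) - lag))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

sq = [r * r for r in returns]
print(f"lag-1 autocorr of returns:         {autocorr(returns):+.3f}")  # near zero
print(f"lag-1 autocorr of squared returns: {autocorr(sq):+.3f}")       # clearly positive
```

The persistence in squared returns is what volatility forecasts exploit; the near-zero autocorrelation of returns themselves is why return forecasting is so much harder.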
Only if the vol mkt is mispriced. Spreads on most single names are wide enough to make this difficult or impossible, and deeper/tighter mkts tend to be much more efficient anyway.
Breaking news: robots and humans both equally unable to predict the next digit in a random sequence. Obviously an incredible oversimplification of what's happening in finance and in this article.
The entirety of the article is summed up with this statement hidden inside.
> Mr. Amador attributed the underperformance to a normal variability in returns. The fund’s programming beat the market when tested against historical data, he said, and he expects the same in real life as time passes.
Backfitting in all its forms is known to give false confidence, and it usually fails. It may work for a moment, but then other traders exploit whatever pattern the backfitting had noticed, causing the backfit to no longer work.
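A quick way to convince yourself of the false-confidence effect: fit many random "strategies" to one stretch of pure coin-flip returns, keep the one with the best backtest, and watch its edge vanish out of sample. Purely illustrative numbers:

```python
import random

random.seed(0)

DAYS, STRATEGIES = 250, 1000

# Market "returns" with no signal at all: fair coin flips of +1/-1.
in_sample  = [random.choice([1, -1]) for _ in range(DAYS)]
out_sample = [random.choice([1, -1]) for _ in range(DAYS)]

def pnl(positions, market):
    """Average daily profit of a fixed long/short position sequence."""
    return sum(p * m for p, m in zip(positions, market)) / len(market)

# Each "strategy" is just a random fixed sequence of long/short positions.
strategies = [[random.choice([1, -1]) for _ in range(DAYS)]
              for _ in range(STRATEGIES)]

# Selecting the best backtest out of 1000 guarantees an impressive-looking
# in-sample result, purely by chance.
best = max(strategies, key=lambda s: pnl(s, in_sample))
print(f"best in-sample daily pnl:  {pnl(best, in_sample):+.3f}")   # looks great
print(f"same strategy out-sample:  {pnl(best, out_sample):+.3f}")  # reverts toward zero
```

The selected strategy "beat the market when tested against historical data" for exactly the reason the fund manager's claim should worry you: with enough candidates, something always does.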
Side note: Why is it that we need something so physical to attach these concepts to?
The photo of the monolithic POWER7 rig that houses Watson, with its translucent logo, is akin to all of the Bitcoin articles with shiny gold coins bearing an icon. I understand the need to have some kind of image, but it's just so detached from the reality of what's going on in practice.
Getting back on topic, I do wonder how much data they're feeding in - it's one thing to pass masses of historical trades into the algorithm, quite another to have it watch for relevant news events that affect the asset prices.
It's compression, not noise. Markets compress corporate reality. A perfectly compressed stream is by definition perfectly random (because if it had a predictable pattern, you could use that pattern to compress it further), but the getting there is different: markets are not intrinsically but mechanically random; they approach randomness because of how they operate. The hope is that humans are not all that good at compression, so AI may be able to pick up on patterns that we can't.
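The compression framing can be made concrete with zlib: structured data shrinks dramatically, while high-entropy bytes are essentially incompressible (the output even grows slightly from header overhead). A small sketch, with `os.urandom` standing in for a "perfectly compressed" stream:

```python
import os
import zlib

patterned = b"buy low sell high " * 1000   # 18,000 bytes of repetition
random_ish = os.urandom(18000)             # 18,000 high-entropy bytes

for name, data in [("patterned", patterned), ("random", random_ish)]:
    compressed = zlib.compress(data, 9)
    print(f"{name:>9}: {len(data)} -> {len(compressed)} bytes")

# The patterned stream collapses to a tiny fraction of its size; the random
# stream does not, because no predictable structure is left to exploit.
```

In this analogy, a residual pattern in prices is exactly a failure of the market to compress fully, and that residue is what any predictive model, human or AI, is hunting for.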
That's just not true. There are whole companies based on algorithmic trading. Jane Street, for instance, has a tech talk on how they take advantage of Caml in automated trading.
Since our reality is based on randomness at a very low level (eg radiation patterns), what makes you believe that this randomness is somehow lost on a higher level?
I run a similar experiment, with real money and allow my robot to trade on my behalf. For long-term investments, I continue to follow the indexed-only ETF-based couch potato model, but I'm happy to let this run. I view it as a risky investment, akin to investing in any startup, and have invested accordingly.
The other reality is that over the long-term it's highly unlikely to beat the market. Realistically (almost) nothing beats the market over a long-enough period. At the same time in my testbed, with real data, real 'money where your mouth is' it worked. It's no crazier than any other idea.
Ultimately, whether humans or AI drive investment is immaterial if you believe in an indexed portfolio. Should those investment approaches succeed, they'll join the indexes in some way. Similarly, should they fail, they won't.
I'd also really love to create a trader bot for part of my money. Any chance you could give a few pointers on how to get started in this field? (good resources to read, frameworks to use, etc...)
Sure. I use a variety of free data sources - including Alphavantage and the nightly Nasdaq dumps - to collect a bunch of data nightly, in addition to real-time data. My robot is based on errbot, which I integrate with a private Slack organization/channel so that I can interact with it and have all the logging infrastructure I need.
The database is MySQL, communicated with via SQLAlchemy (through errbot, of course), with a series of commands and crons (errcron) set up to both notify me and execute various data-gathering activities. The rest of the processing code is likewise in Python. I don't rely on scipy, numpy, or anything else, since I don't see the need.
The reality is that there are a series of activities that are profitable at the micro level in the geography in which I trade, which is why my robot currently integrates with Questrade - specifically so that I can execute from Slack while I work at my 'regular' job. All passwords and reusable tokens are stored in an ansible-vault, so that I can commit and push my repository around.
I'm running two different experiments actively: one does an arbitrage based on data I'm looking into; the other specifically tries to eke out a $0.10 gain per share, closed daily. Going into Jan 1 2018, I'd made ~57% from August 31 (first day of trading). This year, I'm down ~8% overall so far. Passively, the return has been great.
Now, I'm changing my focus - enough people I know are generally interested and willing to light the same amount of money that I am on fire. So, I'll keep experimenting, but I'm taking 1% of the overall return for the 'bank' (i.e. my corp).
> the E.T.F. runs most of its calculations on I.B.M.’s Watson supercomputer
Every time I read an article that mentions Watson, it's sprouted a new thing the name is applied to. Previously it was a question-answering system, which famously won Jeopardy. Then it became a general NLP platform. Then it became a brand name for basically all IBM machine learning offerings. Now it's also a supercomputer?
If what this really means is that they built a bot that plugs a bunch of data into IBM's cloud ML platform and trades on that basis, I'm not really surprised it's not beating the market. Building an auto-trading bot using off the shelf ML techniques is actually a pretty popular university project that's worth trying if you're curious, though (at least with simulated money, or money you can afford to lose). They can probably do better than a typical university project, because I assume they have more extensive financial data feeds. But everyone else serious about automated trading (which lots of people are) also has those data feeds plus the same off-the-shelf ML, so unless they have something else...
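For anyone curious what the university-project version looks like, here is about the simplest possible baseline: a moving-average crossover signal backtested on made-up prices. This is exactly the kind of off-the-shelf starting point everyone already has, which is the point; the window sizes and synthetic data are arbitrary choices for illustration:

```python
# Toy trading-bot baseline: go long when the short moving average is above
# the long one, flat otherwise. Prices here are synthetic; plug in a real
# data feed (and transaction costs!) before drawing any conclusions.

def sma(xs, n):
    """Simple moving average over the last n observations."""
    return sum(xs[-n:]) / n

def backtest(prices, short=3, long=10):
    pnl, position = 0.0, 0
    for i in range(long, len(prices)):
        hist = prices[:i]  # only use data available before day i
        signal = 1 if sma(hist, short) > sma(hist, long) else 0
        # Mark to market: yesterday's position earns today's price change.
        pnl += position * (prices[i] - prices[i - 1])
        position = signal
    return pnl

# A steadily rising series: the crossover goes long and rides the trend.
uptrend = [100 + i for i in range(40)]
print(f"pnl on uptrend: {backtest(uptrend):+.1f}")
```

On trending toy data this looks like free money; on real prices the same logic mostly buys noise, which is a compact demonstration of why the off-the-shelf version doesn't beat the market.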
Watson is a marketing term, and a division of IBM.
Think of it like "Amazon Cloud", which really consists of over 100 different types of services/products, some of them very different and built by different teams; "Amazon Cloud" is more of an umbrella term.
and one that hasn't been terribly successful in a lot of areas! It's often sold as almost a software/business consulting effort, which requires a ton of money and time to get up and running.
I don't like bringing politics into HN, but I like this quote:
"IBM Watson is the Donald Trump of the AI industry—outlandish claims that aren’t backed by credible data."
To take your comment a little deeper, despite my knowing you are being facetious: I think that's exactly it. The algorithms cannot communicate to facilitate these types of advantages. They cannot, in essence, be human.
In a world run and dominated by humans, there will always be an inherent advantage to being part of the race that creates the game. If algorithms perfect a system in such a way that there stands no gain to be made by those at the top, people will simply create a new game to play.
Until they can. And at that point it gets really weird. I have heard reports (but cannot confirm them, obviously) that machine learning techniques are already creating trading strategies that exploit weaknesses in other trading systems' algorithms. At what point does the algorithm correlate what it can see in email inboxes on a connected cloud service with advantageous stock trades ...
I wasn't being facetious -- I meant exactly what you said. Two humans having coffee and trading secrets "they heard around town" will beat an algo any day of the week.
In college in the 70's, a fellow student was developing a stock trading program on the institute's PDP-11. He figured it was going to make him rich. I asked him what the algorithm was, but he was very secretive about it.
[1]: https://www.theatlantic.com/education/archive/2018/02/employ...
[0] https://en.wikipedia.org/wiki/Douglas_Engelbart
[1] https://www.youtube.com/watch?v=VeSgaJt27PM
[2] https://www.youtube.com/watch?v=yJDv-zdhzMY
On the other hand, if you are optimistic and excited in a world where everyone else is in despair, you have some distinct advantages. :)
Posts with images get more clicks.
https://www.youtube.com/watch?v=hKcOkWzj0_s
This will all clearly catch fire.
MD Anderson Cancer Center wasted $62 million on it: https://www.healthnewsreview.org/2017/02/md-anderson-cancer-...
> Those programs may be useful, but they are not A.I. because they are static; they do the same thing over and over until someone changes them.
Oh, I see. It's better because it's AI. My mistake, then.
It was likely some form of technical analysis.
I wonder sometimes if it ever worked out for him.