Markets around the world are determining prices on a massive variety of instruments that derive value from the current and future value of products such as currencies, interest rates, equities, grains, livestock, metals, oil, gasoline, natural gas, and electricity. These prices allow us to prioritize resources, make fair transactions, and manage risk (i.e. buy insurance on the value of critical products so that we can plan and invest more effectively). The values of all these instruments are related to one another, as well as to realtime world events, in complex ways that no one entity has a perfect view of. Consequently, markets work by polling the expertise of many different parties who all understand a piece of how things should be valued. This results in millions, if not billions, of interconnected price-discovery feedback loops.
The markets are kind of like a massive, distributed, realtime, ensemble, recursive predictor that performs much better than any one of its individual component algorithms could. The reason why shaving a few milliseconds (or even microseconds) can be beneficial is because the price discovery feedback loops get faster, which allows the system to determine a giant pricing vector that is more self-consistent, stable, and beneficial to the economy. It's similar to how increasing the sample rate of a feedback control system improves performance and stability. Providers of such benefits to the markets get rewarded through profit.
Is there any empirical evidence that these perceived benefits to society actually ever materialize? It's clear that there is a benefit to a trader from knowing something milliseconds before the rest of the market (otherwise Goldman wouldn't be doing this), but it's not clear at all to me that it helps the rest of us.
We know, empirically, that a lack of liquidity increases trading costs, which are in turn channeled directly into the prices of goods and services that rely on this liquidity (more or less everything in the world, even things that depend on it only indirectly, like education).
It's difficult to say 'things would be X% more expensive' because of the interconnected complexity the GP was talking about, but there is definitely a very apparent benefit.
Nope, it's pure loss to everyone else. Every dollar that an HFT makes by front-running your trade is a dollar you don't make as an investor. The money isn't coming out of nowhere.
That gives these mechanisms way too much credit. Markets can benefit from a more accurate valuation but ms-response times are not needed for this.
The real reason is competitors. You have an advantage if you have the faster line. In the name of fairness there are lines of the exact same length* in many trading venues precisely for this reason.
There are bots that feign transactions so that others react in a specific way. At the last moment these are canceled again, too late for competitors to react. This is an example of when you need a faster line.
To suggest this arms race in high-frequency trading has a serious economic benefit is ridiculous in my opinion.
* I do literally mean cable length. Yes, they have become that crazy
> There are bots that feign transactions so that others react in a specific way. At the last moment these are canceled again, too late for competitors to react.
In stock exchanges at least, what you describe is called spoofing. Regulators are not friendly towards it, because spoofing skews price discovery.
It's generally hard to detect and hard to prove, but there have been cases where regulators proved sufficiently well that certain orders were never intended to be executed. And yes, if you get caught, there are sanctions.
https://en.wikipedia.org/wiki/Spoofing_(finance)
I very seriously doubt you'd lose any of those benefits by just trading on 10 second or even 1 minute increments. Everyone has a bunch of time to submit orders, the auction is run, and then everyone gets the result back. I'd really like to see a use case in any of those real-world markets for pricing with sub-second granularity. It seems much more likely that trading on sub-second granularity is just extracting money from the actual participants in those markets into the pockets of financial firms with no value returned.
All that'd happen if you did that would be to increase the risk of any market maker having the wrong price at execution time (they have less data to make an informed price discovery), so you'd get wider spreads to compensate.
Wider spreads just make it more expensive for everyone. Personally, I'd like my pension money going towards the actual investment rather than paying for a wider spread, but I'm just strange.
The upshot of this argument is that this is valuable activity. We need markets to price tradable assets and provide liquidity.
The counterargument is that there are diminishing and/or negative returns to increased liquidity and velocity.
Take just stocks. Liquidity is not a problem. You have liquidity whether trades take minutes or milliseconds. Pricing? I'd say we have pricing covered too, at least the pricing that more/faster algorithmic trading will contribute.
Meanwhile, all this stuff costs money, people, resources that aren't available for actual productive work instead of overhead.
My view is that current liquidity and pricing are far from perfect, particularly when it matters most (e.g. a market panic) and particularly in the global context of the vast universe of interrelated instruments that need better relative pricing. Given that we benefit from realtime pricing, milliseconds matter when you must determine a large vector of prices with complex dependencies using the ensemble recursive system that is the global market.
>that aren't available for actual productive work instead of overhead.
That’s a very biased view. Another view would be that improving the efficiency of the largest markets in the world have a much larger positive impact on society than the vast majority of the “productive” work you refer to.
Meh, the short-term price fluctuations are almost pure noise. Faster trading just increases the frequency of the noise. Increasing frequency only works if the feedback loops are stable.
Over very short time scales, the price fluctuations vary from almost pure noise to almost pure signal. The almost pure noise situation occurs far more often, but even then, with good estimation techniques, you can extract a signal component that can improve pricing a small amount. Improving many interrelated prices by small amounts can lead to a significant overall improvement to the markets. The almost pure signal situation occurs far less often, but it is almost always extremely important to overall stability. Handling it well can minimize market overreactions and extreme events such as flash crashes and market panics.
This is such a confident statement, and I don't mean that as a compliment.
For starters, is the evidence behind this Hayekian market efficiency really so strong as to warrant this kind of absolute confidence in the wisdom of markets?
> markets work by polling the expertise of many different parties who all understand a piece of how things should be valued.
…as well as orders of magnitude more people who do not understand how things should be valued. → noise, which is fine ("excess volatility"), but which can also become highly persistent in the presence of correlated expectations ("bubbles")
> This results in millions, if not billions, of interconnected price-discovery feedback loops.
Well, there are negative and positive feedback loops, only one of which is stabilizing!
> beneficial […] because the price discovery feedback loops get faster
This can also backfire. In fact, this is why a number of stock markets have instituted a trading stop if an asset moves "too fast". Slowing things down / reducing liquidity can stabilize a situation. Actually, this reminds me of
[1] W. A. Brock, C. H. Hommes, and F. O. Wagener. More hedging instruments may destabilize markets. Journal of Economic Dynamics and Control, 33:1912–1928, 2009.
where you have a similar counterintuitive argument.
The history of the idea of market efficiency is long and the idea remains controversial or contested. See e.g. Philip Mirowski's writings.
You misread my post. I never said the markets were efficient. On the contrary, they are generally far from it. What I said was that the markets combine the efforts of many different entities in order to determine prices in a way that is superior to what any one entity could do.
> Well, there are negative and positive feedback loops, only one of which is stabilizing!
Absolutely. Entities that consistently contribute positive feedback cause harm to markets and they are generally doing something that is either prohibited or foolish. I don't consider either a good long term profit strategy. The market regulation departments work to remove one and large losses tend to remove the other.
> This can also backfire. In fact, this is why a number of stock markets have instituted a trading stop if an asset moves "too fast". Slowing things down / reducing liquidity can stabilize a situation.
Sure, exchanges use a variety of market integrity controls, including limits on rapid and/or large price changes that can trigger order rejections or trading halts. These controls can be beneficial when the price fluctuation was due to poor trading, but can be damaging to a market when the fluctuation was due to significant new information or because there is a natural high volatility situation such as a derivative that is about to expire or is rarely traded. Consequently, the exchanges have to be careful about how and when halts are invoked. Some exchanges often get it wrong.
The main point I was making is that lowering the latency of the multitude of price discovery feedback loops making up the global market can be very beneficial because it allows the pricing dependencies to be more fully determined.
The theory sounds great. But why, then, are our streets lined with homeless people and our nations stricken with poverty?
Could it be that the only real aim and motivation of market traders is to earn money?
One day, maybe.... when these are replaced with DAOs on the blockchain. But until then it's the Wolf of Wall Street.
Because having homeless people lining our streets on our commutes to/from our jobs is a daily reminder that if we don't work hard enough to increase corporate profits, then our bosses might lay us off and we'll end up like them.
That or moral apathy. At some point in the 80s we decided that markets driven by business profits should dictate every aspect of society.
I imagine 100 years from now they'll look back at today in disgust.
The markets on which financial instruments are traded are effectively separate from the ones that policy decisions are made about, such as labour, vocational education, and healthcare. It's not the derivatives trading that is problematic, it's the market inefficiencies not being resolved by governments.
Yes, the real aim and motivation of traders is to earn money and respect. What else should it be?
Why pick on traders? I know a lot of developers making well into six figures. I hear them talking about getting 3 new graphics cards for their gaming rigs instead of how they worked at a soup kitchen.
What is your point? That GS should be donating 100m to charity instead of reinvesting into their business?
Where do you think that money goes? Workers will be paid to implement their plan and taxes will be paid on those wages. In fact about 40% of that 100m will eventually end up being paid in taxes.
I just have a nitpick with your analogy to feedback loops.
Assuming the market can be said to have a Nyquist rate, then once you hit that you have all relevant information. Increasing the sample rate past Nyquist does not make a system more stable unless you have a very specific system designed specifically to take advantage of that. More typically, it just increases your noise-bandwidth product and can decrease total system stability and accuracy.
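For the "all relevant information" part, here's a minimal numpy sketch (an arbitrary band-limited signal and arbitrary rates, nothing market-specific): sampling above the Nyquist rate already lets you reconstruct the signal essentially exactly, so sampling ten times faster recovers nothing new about it.

    import numpy as np

    f1, f2 = 3.0, 7.0        # component frequencies (Hz); the band limit is 7 Hz
    fs = 20.0                # sampling rate, comfortably above the 14 Hz Nyquist rate
    T = 10.0                 # observation window (s)

    def signal(t):
        return np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)

    t_samp = np.arange(int(T * fs)) / fs
    x_samp = signal(t_samp)  # everything we record about the signal

    # Whittaker-Shannon reconstruction on a much finer grid (kept away from window edges)
    t_fine = np.linspace(2.0, 8.0, 2000)
    x_rec = np.sinc(fs * (t_fine[:, None] - t_samp[None, :])) @ x_samp

    err = np.max(np.abs(x_rec - signal(t_fine)))
    print(f"max reconstruction error from {fs:.0f} Hz samples: {err:.4f}")
    # Sampling at 200 Hz instead would not change what can be recovered here.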
>> The markets are kind of like a massive, distributed, realtime, ensemble, recursive predictor that performs much better than any one of its individual component algorithms could.
That's really interesting, I never thought of it that way.
>> markets work by polling the expertise of many different parties who all understand a piece of how things should be valued.
Does the whole picture ever become apparent to all of the interested parties after the fact? Or do market movements remain subject to a high degree of interpretation even after they happen?
That argument is basically Hayek's (and Mises's) answer to the economic calculation problem. Hayek argued (in "The Use of Knowledge in Society") that free markets are better than centralised planning, because of dispersed knowledge.
https://en.wikipedia.org/wiki/Friedrich_Hayek#The_economic_c...
https://en.wikipedia.org/wiki/The_Use_of_Knowledge_in_Societ...
Nobody has the whole picture. I would say that the more extreme an individual event is, the easier it tends to be to understand in retrospect. This depends greatly upon one's analytical sophistication, level of market data, fundamental product understanding, and professional network (to know what happened in other firms).
Is there any evidence defining where the speed of information reaches a diminishing return?
To expand on your feedback loop comment: physical systems may benefit from a higher sampling rate, but usually only up to a point. That point is often related to the physical dynamics of the system (e.g., natural frequencies). For example, a small thruster may benefit much more from increasing the sampling rate from 1000 Hz to 10 kHz than a large rocket engine would. I assume/wonder if there's a similar analogy for diminishing returns in stock information systems, like more volatile markets benefiting more from a higher frequency of data. It would be interesting to see where the diminishing returns are.
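Here's a toy sketch of that point (a made-up integrator plant under sampled proportional control; nothing market-specific): the higher update rate only pays off when the loop is trying to respond on a timescale comparable to its sampling interval, and buys essentially nothing for the gentle loop.

    def settle_time(gain, fs, t_end=1.0, dt=1e-5):
        """Toy integrator plant dx/dt = u under sampled proportional control u = gain*(1 - x)."""
        steps_per_update = int(round(1.0 / (fs * dt)))
        x, u, last_outside = 0.0, 0.0, 0
        for i in range(int(t_end / dt)):
            if i % steps_per_update == 0:
                u = gain * (1.0 - x)        # the controller only sees x at sample instants
            x += u * dt                     # zero-order hold between controller updates
            if abs(1.0 - x) > 0.02:
                last_outside = i
        return (last_outside + 1) * dt      # a value near t_end means it never settled

    for gain in (5.0, 2000.0):              # gentle loop vs. very aggressive loop
        for fs in (1e3, 1e4):               # 1 kHz vs. 10 kHz controller updates
            print(f"gain {gain:6.0f}  fs {fs:6.0f} Hz  ~2% settle: {settle_time(gain, fs):.4f} s")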
Yes, that's the model. So, tell me. Would any profit be possible in an ideally functioning market? If so, how is any item worth more than the total cost of all inputs including externalities? If not, how is such a dynamic system attracted to an ideal state, given that those who would make the market ideally functional are best placed to gain from market inefficiency and dysfunction? Given an answer, do you have a sound game theory explanation for this?
Additionally, what is the Nyquist limit for such an ideally realised market, if it is indeed to be modelled as a recursive sampled approximator, and how is this derived? Given an infinitely recursive network of arbitrarily connected market agents, is any such calculation convergent? If so, why? If not, how does the market ever converge to any appropriate price - a price which accurately reflects the market conditions, excluding pricing operations and market costs which aren't directly related to the production of the instrument in question?
Keep in mind that, if the market is functioning ideally, no market participant will exceed the Nyquist rate, as all participants' knowledge of market conditions converges to zero. How is any sampling rate, excluding zero, convergent? If not, how is any such market realisable? If so, what is the loss function between the ideal model and a realisable, perfectly imperfect real-world implementation? What is the minimum profit, if not zero, and why?
However, it does seem that arbitrage opportunities decrease when such high-speed trading is occurring, and do so even more quickly the faster trading speed and market sampling are increased. How can we account for this, if not by increased market efficiency?
I conjecture that, by ever increasing the sampling rate and the speed at which transactions complete, markets are not being made more efficient. Instead, I hypothesise that, as markets reflexively affect the price of the instrument directly, the feedback latency produced creates relative local pockets of perceived value - which are only profitable trades in relation to local information asymmetry. As the vast majority of high-speed trading holds market positions on extremely short time scales, shifting exposure constantly, this profit is immediately realised locally, resulting in the gradual diffusion of this inefficiency as an increase in the price of all instruments. This is a direct result of the cost of trading being factored directly into the agent's local acceptable sale price of held instruments. Every local agent trading action is ideal, but the global market is a divergently inefficient one. Indeed, it is a market in which its pricing inefficiency is maximally concealed from all market participants.
In a sense, I conjecture that the estimator is not functioning to increase market efficiency but is, instead, amplifying local inefficiency globally - in effect, much like a charge pump would operate in a voltage multiplier circuit. In essence, such a scheme acts to fold increased market cost and overhead (including the profit of market participants) into market instrument pricing. However, it does so in an extremely small and diffuse way, so as to make the rise in price of a single instrument as a result of this activity extremely difficult to detect, as all instruments increase similarly on the same time scale.
This behaviour appears to be similar in nature to 'salami slicing', an often effective embezzlement technique - except that, instead of exploiting an information asymmetry created by a lack of interest in small quantities on the part of auditing accountants, it exploits the information asymmetry created by the speed of light itself.
Of course, the faster the sampling rate, the more efficiently the described amplification process would take place. Does this effect correlate between markets with differing but estimable information asymmetry? If there is no correlation, this hypothesis is invalid. It would seem to be an area ripe for research and analysis of market data.
Do you see any technical issue with this conjecture by which we may discount it immediately?
A perfect market where all participants get information at the same time does not exist.
>This profit is immediately realised by the increase in price of all commodities globally.
This would only hold if there weren't profitable short trades. The profit can also be realized on a decrease in global commodity prices that would have come about more slowly before.
This is an accurate description of the game theoretic argument.
It is not very precise about the premises: "the market", "the reward", or "the critical products" - variables in a non-linear equation, so to speak, that do not necessarily have a unique solution, or any solution at all.
> The reason why shaving a few milliseconds (or even microseconds) can be beneficial is because the price discovery feedback loops get faster
Berkshire Hathaway trades only a couple of times every minute and the difference between bid and ask prices is huge, around $1000+, yet you don't see people complaining about that.
The bid/ask spread is a separate issue from latency (Edit: although it should tend to be less with better price discovery). To appreciate the benefit of lower latency, I think you have to consider the bigger picture of many interrelated price discovery feedback loops involving many instruments. Many small speedups can result in a much more stable and beneficial system. In the case of Berkshire Hathaway, the value of their stock is dependent on the value of many other equities, interest rates, energies, raw materials, etc.
This system is the only unbiased estimator / decision maker humans have ever found. Though it has quite high variance, in the long term its results are just astounding.
For what it's worth, I used to work for GS and was in the algorithmic brokerage business from 2009 to 2010. One thing a lot of commenters on this thread are missing is that this isn't the same as an HFT or hedge fund; this is the brokerage business - executing orders on behalf of GS's clients, who are largely institutional investors, hedge and pension funds, etc.
The real-world impact of increased speed of execution by brokers is less money left on the table for HFTs to snap up and less slippage to the actual economic beneficiary (ie the actual retail investor or pension fund investing people's money gets a better trade).
I'm not a current employee, but at the time that I was, they did have a small unit that did pseudo-HFT on a prop basis and various electronic market-making businesses, but they were all firewalled off from the people who did brokerage business. We didn't share code or information with them and for obvious reasons they had no ability to view client order flow, unexecuted orders etc. Likewise people in the high-touch (ie voice) execution business couldn't see anything in the GSAT business even though they could use our algos to execute orders if they wanted to.
My understanding (could be incorrect) is that their quant trading business (what they called their 'HFT' shop, although it wasn't really high-frequency compared to real HFTs like Winton, Knight, Jump, Citadel or whatever) was probably going to get shuttered as they moved out of proprietary risk-taking generally, but I don't have any information either way.
In GSAT at the time, compared to others on the street our tech was pretty sophisticated intellectually but not fast (eg we didn't have ultrafast marketdata, our exchange latency was quite high and our execution algo speed was pretty slow) and so we had to do a lot of smart coding to prevent ourselves being ripped off by actual HFTs given they could move so much faster than we could.
There's a lot more to HFT than Reg NMS by the way. I was working in London, so we did all the GSAT trading on European exchanges, none of which have anything crazy like Reg NMS, and there were still HFT shenanigans of various kinds that people would try (eg timing arbitrages if they could see that you had different execution speeds on different venues etc).
It depends on what you mean by HFT. If you mean prop trading, Dodd-Frank made it illegal for large-cap businesses to engage in that with investor funds. If you mean market-making HFT (which some prop firms also do), that's what this article is about.
People love to rail on HFT, but at this point it's really not that profitable. It's just a reality of trading in the markets. There was a blip of time between 2008 and 2014 when HFT was extremely profitable. Those inefficiencies have been gone from the market for years. People were whipped into anger about how much money was being made; at this point it's a complete non-issue and needs to be removed from the highlight reels aimed at generating anger in the public. Let's move on from discussing the bogeyman that is HFT, it's really nothing.
Adding to this, HFT is a product of rule 612 of Reg NMS (the sub-penny rule). Markets are not allowed to show quotes in increments of less than $0.01 for most names. Since traders cannot compete on price, they have been forced to compete exclusively on speed.
The impact of such regulation was tested by the SEC recently with the 'tick size' program. Instead of reducing the minimum increment, some names saw it increased to $0.05. The hope was to increase liquidity while decreasing volatility in these names. In fact, those names experienced decreased liquidity with no decrease in volatility.
On the other hand, the Intercontinental Exchange reduced the tick size for sterling interest rate futures towards the end of 2018, and the result was ... decreased liquidity! And resulting increased volatility.
I'm not sure anyone knows for sure why this happened, but the best theory i've heard is that the reduction in tick size reduced the expected profits of market makers, because they are collecting less spread on every contract they turn around, while not affecting their potential losses, because external factors which cause the market to jump three basis points will still cause it to jump three basis points. Halved regular profits divided by constant occasional losses equals no longer worth bothering with.
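To make that last sentence concrete, a toy sketch with purely invented numbers (not taken from the ICE episode): halve the per-fill capture, keep the occasional adverse move constant, and quoting can flip from marginally worthwhile to not worth it.

    fills_per_day = 2_000        # contracts the market maker turns around
    adverse_moves = 10           # times a day the market jumps through the quotes
    loss_per_move = 80.0         # this loss does not shrink when the tick shrinks

    def daily_pnl(capture_per_fill):
        return fills_per_day * capture_per_fill - adverse_moves * loss_per_move

    print("old tick:", daily_pnl(0.50))   # +200.0 -> thin, but worth quoting
    print("new tick:", daily_pnl(0.25))   # -300.0 -> no longer worth bothering with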
The reason this provides little value to society now is because there’s no easy way to leverage price information to make inferences about the world-state. HFT on a prediction market would provide lots of value to society.
As always, some profit is to be made from some trading practice - the market removes it within a few years of "mass discovery". All working as intended.
People aren't mad only because tons of money is made on HFT. It's also because money is _wasted_ on HFT. That's $100 million spent on something that has zero use to society. It's just rich people playing weird games.
Think about the social benefits of $100 million invested in nyc transit infrastructure.
The economy's incentive structure is broken and this is a prime example.
Those 100m are not destroyed by burning them in an HFT furnace but rather used to pay developers, hardware, factory workers etc. Sure, it's not going directly into infrastructure but it is not lost.
In fact, it's quite possible that if it wasn't invested into HFT it would be held as cash by the company or paid out as a dividend (which is fine as well).
Not to mention risky. Remember Knight Capital, the kings of HFT?
On August 1, no less, their new software deployment essentially annihilated half a billion dollars, all due to - you guessed it - a refactored command line flag! Can't make this stuff up.
As an outsider with admittedly limited knowledge: what would happen if you limited trading in a stock to once per second, on the second?
Would that not prevent this never-ending race for faster and closer access - something that doesn't really seem to be adding value to society or the market?
Who gets priority in order fulfillment for orders placed within that second? Otherwise you have the same issue. The brokerage might also be tempted to make money by front-running those trades, since they have all the trades in front of them for a second before they need to be fulfilled.
Yea, I was one of them. I mixed up trading ahead on side-channel knowledge, which breaks insider-trading rules, with having a fast engine that can just move bits faster than the other guy.
I don't "like" HFT the same way I don't "like" the market at all, but HFT is not actually stealing grannies money.
Side channel knowledge doesn’t break insider trading rules. You need to have a fiduciary duty to someone to betray to be doing insider trading. There needs to be someone who has the right to that knowledge, who you’re supposed to act on behalf of, who doesn’t want you trading on it.
There are many types of “side channel” non public information that aren’t insider trading.
...actually it doesn't sound to me like this article is about HFT-based prop trading at all (prop trading would mean Goldman Sachs taking positions onto their own books), but about the business unit called GSAT (Goldman Sachs algorithmic trading) who execute trades on behalf of clients, so never taking any positions onto their own books.
The traditional market model used to be that at every point in time, a market maker would offer a price to buy/sell at. The bid would, of course, be lower than the ask, the difference being called the spread. When a pension fund wanted to execute a trade, they would have had to cross the spread, and, statistically speaking, half the spread would immediately accrue to the market maker as profit. ..so this is money that YOU, the holder of a pension, are losing, and that THEY, the rich folks acting as market makers that the public likes to get mad at, are taking away from you.
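To put toy numbers on "half the spread" (hypothetical quotes, the same $99/$101 ones used in the AAPL example further down this thread):

    bid, ask, qty = 99.0, 101.0, 1000
    mid = (bid + ask) / 2                       # 100.0, the "fair value" benchmark
    proceeds_crossing = qty * bid               # sell immediately by hitting the bid
    proceeds_at_mid = qty * mid
    print("cost of crossing the spread:", proceeds_at_mid - proceeds_crossing)  # 1000.0
    # i.e. half the spread (1.00/share) statistically goes to whoever quoted the bid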
The business model of GSAT is that a pension fund can ask Goldman Sachs to use algorithms to do things on their behalf that are less naive than what I just described above and end up giving less money to the market maker. For example, GSAT can become a market maker on your behalf, but make a market on only one side, i.e. only offer a price to buy at or only offer a price to sell at, with the resulting trades being executed on your behalf, so it is now you who makes money off those trades, in the same way as it would traditionally be a market maker's privilege to do.
The public loves to get mad at Wall Street. But they should please get their facts straight about who the good guys are and who the bad guys are.
Catering to HFT shops was already down to 100-microsecond latency a few years ago using commodity hardware and software (I can't speak for Goldman Sachs, but this was at another firm of similar ilk; I can't imagine GS was far behind, if not better).
Worked in an HFT shop and we were down to optimizing nanoseconds. Lots of folks had war stories about spending months to shave off tens of nanoseconds off pieces used millions of times per day.
> When a pension fund wanted to execute a trade, they would have had to cross the spread, and, statistically speaking, half the spread would immediately accrue to the market maker as profit. ..so this is money that YOU, the holder of a pension, are losing, and that THEY, the rich folks acting as market makers that the public likes to get mad at, are taking away from you.
This is totally incorrect. Let's say a pension fund wants to sell 1000 shares of AAPL. The bid is $99, the ask is $101. They sell to the bidders and get $99,000 (minus some fees to the exchange, probably). Your pension scheme just successfully liquidated its position.
At this point, contrary to what you say, the market maker has made 0 profit. What they've done is taken a position in AAPL which they think is theoretically profitable, but they aren't in the business of speculating on AAPL. So now they have to hedge that risk by buying negatively correlated products and slowly trying to offload that position, either by letting the market fill their offers on the ask or by hoping the bid improves. Once they have paid for their hedging and closed out their position according to their strategy over a period of time, whatever they have left is their profit.
What you paid that market maker for was for taking on the risk of holding the product whilst spending time to offload it to the rest of the market.
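For a flavour of what that hedging step can look like, here's a rough sketch with invented numbers (a naive beta hedge against an index future; real desks do something far more involved):

    # Hypothetical numbers: the market maker is now long 1000 shares bought at the $99 bid.
    shares, price, beta = 1000, 99.0, 1.2           # beta vs. some broad index future
    future_price, future_multiplier = 4500.0, 50.0  # one contract ~ $225k of exposure

    position_dollars = shares * price
    hedge_dollars = beta * position_dollars         # dollar exposure to neutralize
    contracts_to_sell = hedge_dollars / (future_price * future_multiplier)
    print(f"sell ~{contracts_to_sell:.2f} index futures to hedge the market risk")
    # ~0.53 contracts: in practice you round, accept residual risk, and unwind the
    # stock over time, paying costs that eat into the half-spread collected.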
What GSAT do is say "Hey, don't sell this to the market, let us take care of that for you"- which is exactly the same service a market maker provides except they don't quote publicly, and in fact they probably do this by working with market makers.
Don't start a reply with "this is totally incorrect", when you're merely pointing out a simplification or approximation.
I was making a point about the business model of a prop-trading market maker versus the business model of an algorithmically sophisticated broker. I needed to establish some preliminaries, and it would have served zero purpose to go into the particulars of the costs related to risk warehousing.
Having worked for an equities high-frequency market-making business myself: mark-to-market at mid-price is the benchmark that those traders will use for figuring out, at the end of the day/week/month, whether it was a good or a bad day/week/month, and at the end of the year for negotiating their bonuses, even when they are left holding some positions with an uncertain future. Everybody knows that it's a simplification/approximation, but, due to the efficient markets hypothesis, it's a very good one.
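A tiny worked example of that benchmark, with made-up trades: whatever is left at the end of the day is simply marked at the prevailing mid and counted in the P&L, closed out or not.

    # Hypothetical day: bought 500 shares at 100.10, sold 200 at 100.30,
    # and the market closes at 100.20 bid / 100.30 ask.
    position = 500 - 200
    cash = -500 * 100.10 + 200 * 100.30
    mid = (100.20 + 100.30) / 2
    mtm_pnl = cash + position * mid             # residual position marked at mid
    print(f"residual position: {position} shares, mark-to-market P&L: {mtm_pnl:+.2f}")
    # +85.00 on these numbers; the uncertain future of the leftover 300 shares is ignored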
It's the kind of approximation where it's being taken for granted that people understand that it's not ACTUALLY the trader's profit. Because otherwise one would need to include in the discussion the fact that the receptionist at the proptrader marketmaker's office building is also a cost factor eating into their margins.
I didn't mention the risk warehousing for the same reason I didn't mention the receptionist. And I'm not going to go into a rebuttal about how it's definitely not necessary to hedge every single trade, for the same reason: because it's not the topic under discussion here.
Maxim of quantity:
* Make your contribution as informative as is required (for the current purposes of the exchange).
* Do not make your contribution more informative than is required.
So, to conclude: Your conversational move in this language game was a pretty weak one. The sentence "this is totally incorrect" however sounds like someone trying to establish dominance. Weakness and trying to establish dominance is a bad combo.
Yes. Many people use "algorithmic trading" as a synonym for "automated trading" or even HFT, when it is used (in the industry) basically for "optimal execution, computer aided, of a client order".
It seems like we could save a lot of pointless expenditure on an ultimately meaningless arms race in flash trading if we imposed a reasonable minimum holding period for an equity in order for a trade to be legally recognized.
> pointless expenditure on an ultimately meaningless arms race
What about price discovery is pointless? Would you prefer that prices update only once a day? Once a week? Once a month? Realtime pricing of securities and derivatives is critical for an efficiently functioning economy.
> if we imposed reasonable limits on the time required to hold an equity in order for a trade to be legally recognized
This would damage the ability of market makers to function, ultimately driving up the cost of offering pensions, 401k plans, retail investing, low fee ETFs, etc. If these companies are willing to spend their money competing for the ability to offer you a better, faster price, why is this upsetting?
If you're opposed to the emphasis on latency in equity markets, focus on rule 612 of Reg NMS, which prohibits quoting in increments of less than $0.01 and thus showing more competitive prices.
> What about price discovery is pointless? Would you prefer that prices update only once a day? Once a week? Once a month? Realtime pricing of securities and derivatives is critical for an efficiently functioning economy.
Pretty much everything at sub-second resolution is pointless.
I'd like to hear a coherent argument for how realtime or even sub-second pricing of securities and derivatives is critical for an efficiently functioning economy, when the largest markets in the world are closed two-thirds of the day.
The fact that most markets are closed on all weekends plus over 10 holidays per year suggests that even an update once per day wouldn't make much of a difference.
Most of these efforts are aimed at taking advantage of lags in information flow, often within the very trading systems on which the trades are occurring. They are exploits, not essential market-making. Noise, not signal. What's the right timeframe? Something based on the time it takes for humans to reason about a price. Not a day, but certainly not milliseconds, either.
Nanosecond pricing updates might be a little more than anybody needs. Maybe if the exchange cleared once every second, that would serve anyone's purposes. Remember the market is ultimately about allocating capital between businesses and governments, and for economic purposes it doesn't need to run any faster than they can.
>Would you prefer that prices update only once a day?
Obviously you are being hyperbolic, but some people have proposed literally laying excess cable to slow down automated trades, which can be highly volatile. That's not damaging market makers at all. It's smoothing out supply and demand to prevent micro-crashes and other arbitrage.
Allowing trading in fractional pennies has been theorized to be another way to break up flash trading, as you need to accurately guess the real price, not just get your exact price up before anyone else.
I wouldn't say it's meaningless. These types of technologies are usually arbitraged out; ending up with a few large players that are squeezing out a return.
It means high-frequency volatility is taken out of the equation for the rest of us -- which generally is a benefit for other players in the market.
FX has (effectively) been that way for a long time.
Even if the business functions were pointless, the research and execution going into these projects is valuable knowledge that likely has broad application.
How is this different from any other competition between companies? What makes this one specifically pointless? Your idea would just put more money in the pockets of the bank traders rather than spreading the wealth to all the IT engineers and equipment manufacturers who supply this slight advantage. What's wrong with that?
Yeah, I know people who prefer microservices for their horizontal scalability, but never for reducing latency. Maybe something was lost in translation from the author.
They were likely running old Tibco/RV systems distributed across the network (common for 1990s to early-2000s trading systems), and replaced the system (and hardware) with multi-core boxes that use shared memory for message passing. That reduces internal latency from milliseconds to sub-microsecond.
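A crude way to see the order-of-magnitude gap (an in-process hand-off as a stand-in for shared-memory message passing versus a hop through the loopback socket layer; Python timings are inflated, but the ratio makes the point):

    import socket
    import time
    from collections import deque

    N = 20_000
    payload = b"x" * 64                          # small fixed-size message

    # In-process hand-off (stand-in for shared-memory message passing)
    q = deque()
    t0 = time.perf_counter()
    for _ in range(N):
        q.append(payload)
        q.popleft()
    in_proc = (time.perf_counter() - t0) / N

    # One hop through the loopback socket layer
    a, b = socket.socketpair()
    t0 = time.perf_counter()
    for _ in range(N):
        a.sendall(payload)
        b.recv(64)
    loopback = (time.perf_counter() - t0) / N
    a.close(); b.close()

    print(f"in-process hand-off : {in_proc * 1e9:8.0f} ns/msg")
    print(f"loopback socket hop : {loopback * 1e9:8.0f} ns/msg")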
The only thing I can think of is that it would make it easier to detect the bottlenecks. Other than that - certainly in high-frequency trading - I only see it adding latency.
Complexity is never reduced, but made more manageable. Microservices make a good architecture a lot harder, certainly if you have to start with a blank slate and don't really have an idea what you're up against.
However, things that are virtually impossible to enforce or even very hard to implement in a monolith, are made manageable if you have the proper setup. That includes central standardised logging, tracing, metrics and monitoring. If you have these in place and can enforce them, you're off to a good start.
These things rarely happen in a v1 though - which is usually a POC that ends up in production - and if you have a full-blown microservice architecture from the start, it will probably grow into something a lot worse than a huge PHP monolith. With microservices you need to design a 'platform', and ad-hoc POC development never results in a good design, just something that functionally works.
Most exchanges have an auction process that sets opening and closing prices. They let everyone get their orders in and then run an algorithm to find the price that will execute the most volume.
They could do the same process every 5 minutes and only allow stocks to trade in the auction. Then all of the resources used on pointless HFT could be used on something economically productive.
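A minimal sketch of such a call auction (toy orders; it ignores how ties and the leftover imbalance are allocated, which is where the time-priority point raised below comes in):

    from collections import namedtuple

    Order = namedtuple("Order", "side price qty")   # toy order type

    def clearing_price(orders):
        """Pick the single price that executes the most volume."""
        best_price, best_vol = None, 0
        for p in sorted({o.price for o in orders}):
            buy_vol = sum(o.qty for o in orders if o.side == "buy" and o.price >= p)
            sell_vol = sum(o.qty for o in orders if o.side == "sell" and o.price <= p)
            if min(buy_vol, sell_vol) > best_vol:
                best_price, best_vol = p, min(buy_vol, sell_vol)
        return best_price, best_vol

    book = [Order("buy", 101, 300), Order("buy", 100, 200),
            Order("sell", 99, 250), Order("sell", 100, 300)]
    print(clearing_price(book))   # (100, 500) for this toy book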
Most exchange open/close auctions use time priority to deal with order imbalances. If there are buy orders for 300 shares and sell orders for 500 at the same price, then only the first 300 shares on the sell side will be filled.
The speed incentive is a consequence of time priority, not the auction frequency. Switching from continuous auctions to open/close style auctions every 5 minutes would not remove the incentive to be fast.
Preventing HFT would mean exchanges make less money, meaning they'd either have to charge more or find a new way to subsidize market activity. It's akin to how Robinhood sells data to trading firms to subsidize trades for long-term retail investors. "Economically productive" isn't always black and white.
Can someone explain to me what is gained by processing the trades in real time vs. batching the processing into say 1 second increments? What does GS gain by being able to get their trade there a few milliseconds before the competition and what do I as a consumer gain from this?
Think about it this way:
When you get new information that, according to your trading algo, might move the price of a stock and you want to act on it, then once you have made the decision you want the time for your order to reach the exchange to be as low as possible, since you assume others may have the same info and act on it accordingly.
Not just anyone can have a direct market access connection to an exchange's order book. This is usually reserved to the members of the exchange, and many rules and regulations apply. So even large volume traders use Sponsored Access, transacting directly with the exchange through the access platform of a sponsoring member that ensures not just technical service but most often also some risk and regulatory compliance controls. This service is not provided for free.
GS competes with a few others to provide such an access platform. Reducing the overhead latency of the platform itself in the trade loop makes them more attractive and allows them to attract HFT clients and/or maintain healthy margins.
Batching won't solve any problems you might think it would solve. The real world is continuous. Because of this, everyone is incentivized to wait until the last possible millisecond (or nanosecond) before the auction to submit bids, so the incentive to build fast-acting systems remains.
There are other problems as well, like how to deal with bid/ask imbalances at auction time.
suppose "real time" means order of magnitude milliseconds for example's sake.
the price could change a bunch in that interval of 1000ms. you may think "well only slightly" - fractions of cents - but if GS can make fractions of pennies on those events, scaled up to all seconds that the market is open, you can see why that's potentially attractive.
for whatever it's worth, GS in 2009 claimed that HFT generated <1% of their profits. whether you believe that is another thing.
Ok that explains why they want to do it, but how does it benefit the market to allow that? How does it benefit the consumers and corporations? In other words, what is the argument against creating a law that requires a minimum of 1 second batches for example?
Finance is an online multiplayer game.
They are two weakly-connected systems.
Largely because most of the trade strategies live in ancient xls files no one dares edit.
That's a very high standard. What's productive? What's productive enough, in your book, to be worth the effort used here?
If it leads towards efficiency, productivity, most beneficial allocation, as suggested, those are all biases.
The better distinction is whether those trades are on a principal or agency basis.
HFT is a result of regulation.
[0] https://www.sec.gov/divisions/marketreg/subpenny612faq.htm
[1] https://www.benzinga.com/general/education/18/04/11517027/th...
Multiply x2 and add an extra 10%.
Make that the minimum order placement tick duration.
There would be 1 single global price and no arbitrage between markets possible.
probably because it's difficult to see any actual value that this provides to society.
Doesn't this apply to a majority of activities in the financial sector?
I get it, just seems... unlikely. Who cares?
But it is a huge barrier to entry now; it also is a waste of resources.
What if stock markets were about the long term?
I don't "like" HFT the same way I don't "like" the market at all, but HFT is not actually stealing grannies money.
There are many types of “side channel” non public information that aren’t insider trading.
The traditional market model used to be that at every point in time, a market maker would offer a price to buy/sell at. The bid would, of course, be lower than the ask, the difference being called the spread. When a pension fund wanted to execute a trade, it would have to cross the spread, and, statistically speaking, half the spread would immediately accrue to the market maker as profit. So this is money that YOU, the holder of a pension, are losing, and that THEY, the rich folks acting as market makers that the public likes to get mad at, are taking away from you.
The business model of GSAT is that a pension fund can ask Goldman Sachs to use algorithms to do things on their behalf that are less naive than what I just described above and end up giving less money to the market maker. For example, GSAT can become a market maker on your behalf, but make a market on only one side, i.e. only offer a price to buy at or only offer a price to sell at, with the resulting trades being executed on your behalf, so it is now you who makes money off those trades, in the same way as it would traditionally be a market maker's privilege to do.
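A rough sketch of the difference, with made-up prices (this isn't GSAT's actual algorithm, just the naive version of the idea):

    # Hypothetical example: selling 1,000 shares with a $99.00 bid and $101.00 ask.
    mid, bid, ask, qty = 100.00, 99.00, 101.00, 1_000

    # Naive execution: cross the spread and hit the bid immediately.
    naive_proceeds = bid * qty                      # $99,000

    # Broker-assisted execution: rest a sell order at (or near) the ask and wait
    # for the market to come to you. If it fills, the half-spread accrues to you.
    patient_proceeds = ask * qty                    # $101,000, but only if filled

    print(f"cost of crossing the spread vs. mid: ${(mid - bid) * qty:,.0f}")
    print(f"extra captured by quoting passively:  ${patient_proceeds - naive_proceeds:,.0f}")

The catch, of course, is that the passive order may never fill, which is exactly the risk the broker's algorithms are paid to manage.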
The public loves to get mad at Wall Street. But they should please get their facts straight about who the good guys are and who the bad guys are.
Catering to HFT shops was already possible at ~100 microsecond latency a few years ago using commodity hardware and software (I can't speak for Goldman Sachs, but for another firm of similar ilk; I can't imagine GS was far off, if not even better).
This is totally incorrect. Let's say a pension fund wants to sell 1000 shares of AAPL. The bid is $99, the ask is $101. They sell to the bidders and get $99,000 (minus some fees to the exchange, probably). Your pension scheme just successfully liquidated its position.
At this point, contrary to what you say, the market maker has made $0 profit. What they've done is take a position in AAPL which they think is theoretically profitable, but they aren't in the business of speculating on AAPL. So now they have to hedge that risk by buying negatively correlated products and slowly trying to offload that position, either by letting the market fill their offers on the ask or by hoping the bid improves. Once they have paid for their hedging and closed out their position according to their strategy over a period of time, whatever they have left is their profit.
What you paid that market maker for was for taking on the risk of holding the product whilst spending time to offload it to the rest of the market.
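To put that in toy-model terms (the exit fills and hedging cost below are invented, not real market-maker economics):

    # Invented example: the market maker buys 1,000 AAPL at the $99 bid,
    # hedges, and works out of the position over time.
    qty, entry = 1_000, 99.00

    # Prices at which the inventory is eventually offloaded (made up).
    exit_fills = [(400, 99.50), (400, 99.20), (200, 98.60)]

    gross = sum(q * (px - entry) for q, px in exit_fills)
    hedging_cost = 150.00          # fees + cost of the correlated hedge (made up)

    print(f"gross PnL from unwinding: ${gross:,.2f}")
    print(f"net PnL after hedging:    ${gross - hedging_cost:,.2f}")

Flip a couple of those exit prices down and the "profit" the market maker supposedly banked at the moment of the trade turns into a loss.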
What GSAT do is say "Hey, don't sell this to the market, let us take care of that for you"- which is exactly the same service a market maker provides except they don't quote publicly, and in fact they probably do this by working with market makers.
I was making a point about the business model of a prop-trading market maker versus the business model of an algorithmically sophisticated broker. I needed to establish some preliminaries, and it would have served zero purpose to go into the particulars of the costs related to risk warehousing.
Having worked for an equities high-frequency market-making business myself: mark-to-market at mid-price is the benchmark that those traders will use for figuring out, at the end of the day/week/month, whether it was a good or a bad day/week/month, and at the end of the year for negotiating their bonuses, even when they are left holding positions with an uncertain future. Everybody knows that it's a simplification/approximation, but, given the efficient markets hypothesis, it's a very good one.
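For anyone unfamiliar with the convention, a minimal sketch of what marking at mid means in practice (illustrative numbers only):

    # Marking an open position to the mid-price: the mid is taken as a
    # stand-in for the value you could realize. Numbers are illustrative.
    def mark_to_mid(position, avg_entry, bid, ask):
        mid = (bid + ask) / 2
        return position * (mid - avg_entry)

    # Long 2,000 shares bought at 50.10, market now quoted 50.20 / 50.24.
    print(f"{mark_to_mid(2_000, 50.10, 50.20, 50.24):.2f}")   # unrealised "PnL" of 240.00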
It's the kind of approximation where it's being taken for granted that people understand that it's not ACTUALLY the trader's profit. Because otherwise one would need to include in the discussion the fact that the receptionist at the proptrader marketmaker's office building is also a cost factor eating into their margins.
I didn't mention the risk warehousing for the same reason I didn't mention the receptionist. And I'm not going to go into a rebuttal about how it's definitely not necessary to hedge every single trade, for the same reason: because it's not the topic under discussion here.
It's called the "maxim of quantity" and it is a generally-accepted maxim of conversation. (see https://en.wikipedia.org/wiki/Cooperative_principle)
Maxim of quantity:
* Make your contribution as informative as is required (for the current purposes of the exchange).
* Do not make your contribution more informative than is required.
So, to conclude: Your conversational move in this language game was a pretty weak one. The sentence "this is totally incorrect" however sounds like someone trying to establish dominance. Weakness and trying to establish dominance is a bad combo.
What about price discovery is pointless? Would you prefer that prices update only once a day? Once a week? Once a month? Realtime pricing of securities and derivatives is critical for an efficiently functioning economy.
> if we imposed reasonable limits on the time required to hold an equity in order for a trade to be legally recognized
This would damage the ability of market makers to function, ultimately driving up the cost of offering pensions, 401k plans, retail investing, low fee ETFs, etc. If these companies are willing to spend their money competing for the ability to offer you a better, faster price, why is this upsetting?
If you're opposed to the emphasis on latency in equity markets, focus on Rule 612 of Reg NMS (the sub-penny rule), which prohibits quoting in increments finer than a penny and thus prevents displaying marginally more competitive prices.
Pretty much everything at sub-second resolution is pointless.
I'd like to hear a coherent argument for how realtime or even sub-second pricing of securities and derivatives is critical for an efficiently functioning economy, yet the largest markets in the world are closed two-thirds of the day.
The fact that most markets are closed on all weekends plus over 10 holidays per year suggests that even an update once per day wouldn't make much of a difference.
Obviously you are being hyperbolic, but some people have proposed literally laying excess cable to slow down the speed of automatic trades, which can be highly volatile. That's not damaging market makers at all. It's smoothing out the supply and demand to prevent micro-crashes and other arbitrage.
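The coiled-cable idea works because light in fiber travels at roughly two-thirds of c, so a known length of fiber adds a predictable delay. A quick sketch (the coil length here is hypothetical, not any particular exchange's):

    # Delay added by a coil of optical fiber. Light in fiber propagates at
    # roughly 2/3 the speed of light in vacuum.
    C_VACUUM = 299_792_458          # m/s
    v_fiber = (2 / 3) * C_VACUUM    # ~2e8 m/s

    coil_length_km = 60             # hypothetical coil length
    delay_us = coil_length_km * 1_000 / v_fiber * 1e6

    print(f"{coil_length_km} km of coiled fiber adds ~{delay_us:.0f} microseconds")

At that scale even a modest coil swamps the microsecond-level edges HFTs compete over, without touching anyone's ability to quote.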
It means high frequency volatility is taken out of the equation for the rest of us -- Which generally is a benefit for other players in the market.
FX has (effectively) been that way for a long time.
I would love to see microservices which actually solve problems and reduce complexity! :(
However, things that are virtually impossible to enforce or even very hard to implement in a monolith, are made manageable if you have the proper setup. That includes central standardised logging, tracing, metrics and monitoring. If you have these in place and can enforce them, you're off to a good start.
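As a rough illustration of what "central standardised logging" can look like, here's a minimal sketch of a shared helper every service could import; the names and JSON fields are my own assumptions, not any particular platform's API:

    import json
    import logging
    import sys

    def configure_service_logging(service_name: str) -> logging.Logger:
        """Emit one JSON object per log line so a central collector can
        index every service's output the same way."""
        handler = logging.StreamHandler(sys.stdout)

        class JsonFormatter(logging.Formatter):
            def format(self, record):
                return json.dumps({
                    "service": service_name,
                    "level": record.levelname,
                    "message": record.getMessage(),
                })

        handler.setFormatter(JsonFormatter())
        logger = logging.getLogger(service_name)
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
        return logger

    log = configure_service_logging("orders")
    log.info("order accepted")   # -> {"service": "orders", "level": "INFO", ...}

The value isn't in the helper itself but in the fact that every service emits the same shape of output, which is what makes cross-service tracing and monitoring tractable.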
These things rarely happen in a v1 though, which usually is a POC that ends up in production; and if you have a full-blown microservice architecture from the start, this will probably grow into something a lot worse than a huge PHP monolith. With microservices you need to design a "platform", and ad-hoc POC development never results in a good design, just something that functionally works.
They could do the same process every 5 minutes and only allow stocks to trade in the auction. Then all of the resources used on pointless HFT could be used on something economically productive.
There was ~$107 billion in online advertising in 2018. There was ~$145 billion traded in Nasdaq listed equities yesterday.
Global financial markets and online advertising have different requirements.
[0] https://www.marketingcharts.com/advertising-trends/spending-...
[1] http://www.nasdaqtrader.com/Trader.aspx?id=DailyMarketSummar...
The speed incentive is a consequence of time priority, not the auction frequency. Switching from continuous auctions to open/close style auctions every 5 minutes would not remove the incentive to be fast.
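A toy example of why price-time priority keeps the speed race alive even in a periodic auction (the matching rule here is deliberately simplified, not any specific exchange's):

    # Two buyers at the same price in the same 5-minute auction: whoever's
    # order reached the book first is filled first when only partial size trades.
    orders = [
        {"trader": "fast_hft",  "price": 100.00, "qty": 500, "arrival_ns": 1_000},
        {"trader": "slow_fund", "price": 100.00, "qty": 500, "arrival_ns": 2_000_000},
    ]

    available = 500   # only 500 shares on the sell side clear at this price
    for o in sorted(orders, key=lambda o: (-o["price"], o["arrival_ns"])):
        fill = min(o["qty"], available)
        available -= fill
        print(f"{o['trader']}: filled {fill}")
    # fast_hft: filled 500 / slow_fund: filled 0 -> being fast still pays.

You would only remove the incentive by breaking ties randomly (or pro-rata) within the batch, which is a separate design choice from simply slowing the auction down.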
MiFID 2 regulation was a real push to move people away from using dark pools.
0 - https://markets.cboe.com/europe/equities/trading/periodic_au...
1 - https://business.nasdaq.com/auction-on-demand/index.html
Not just anyone can have a direct market access connection to an exchange's order book. This is usually reserved to the members of the exchange, and many rules and regulations apply. So even large volume traders use Sponsored Access, transacting directly with the exchange through the access platform of a sponsoring member that ensures not just technical service but most often also some risk and regulatory compliance controls. This service is not provided for free.
GS competes with a few others to provide such an access platform. Reducing the overhead latency of the platform itself in the trade loop makes them more attractive and allows them to attract HFT clients and/or maintain healthy margins.
There are other problems as well, like how to deal with bid/ask imbalances at auction time.
The price could change a bunch in that interval of 1000 ms. You may think "well, only slightly" - fractions of cents - but if GS can make fractions of pennies on those events, scaled up across all the seconds that the market is open, you can see why that's potentially attractive.
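Purely to illustrate the scaling argument, with entirely made-up numbers:

    # Made-up illustration of how tiny per-share edges add up over volume.
    edge_per_share = 0.001            # a tenth of a cent
    shares_per_day = 50_000_000       # hypothetical daily volume captured
    trading_days = 250

    annual = edge_per_share * shares_per_day * trading_days
    print(f"${annual:,.0f} per year")   # $12,500,000 with these assumptions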
For whatever it's worth, GS in 2009 claimed that HFT generated <1% of their profits. Whether you believe that is another thing.
GS is in an arms race with other fintech firms to be first in line to act on new information.