(An uninformed, layman's sideline perspective, from casual reading on this subject over the years.)
Real-time (financial) sentiment analysis on financial news sources has been integrated for a long time. The thing about LLMs is that while they could improve on quality, they need to get the latency down before being useful in live trading. For offline analyst support, where time is less of an issue, they can of course be useful, e.g. summarizing/structuring lots of fluffed or trawled content.
I'd think the first application would be along the lines of GitHub Copilot, perhaps locally hosted - quantitative traders write a lot of (proprietary) code, too.
I think the underlying vector databases should have decent uses in financial markets.
Since they can capture taxonomical-ish relationships, a vector DB should be able to codify the strategies of sufficiently large market movers, assuming those strategies are remotely predictable. Once a rival's strategy is codified, it should be possible to undermine it - some form of heuristics-based insider trading.
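A back-of-the-napkin sketch of what I mean, assuming you already have an embedding model over trade/flow descriptions (the vectors and the threshold here are made up for illustration):

    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Stand-ins for embeddings of trade/flow descriptions; in practice these
    # would come from whatever embedding model feeds the vector DB.
    rng = np.random.default_rng(0)
    strategy_centroid = rng.normal(size=384)    # the codified rival strategy
    observed = [rng.normal(size=384) for _ in range(5)]

    # Flag observed activity that sits suspiciously close to the codified strategy.
    for i, vec in enumerate(observed):
        if cosine(vec, strategy_centroid) > 0.8:  # threshold is arbitrary
            print(f"trade {i} looks like the rival's strategy")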
One other area I think is potentially quite interesting is using LLMs to help decipher "Fed-speak". E.g., JPMorgan built an LLM to try to predict the impact of speeches by various central bank policymakers on interest-rate markets.
I conducted a test last year with GPT-4. The idea was simple: feed in Powell's official Fed meeting speeches and have the model give each a rating between 1 and 10, 10 being most dovish and 1 most hawkish. I fed it around 7 Fed speeches and kept getting ratings around 8, i.e. fairly dovish - even though a few of the speeches were definitely hawkish, and the markets reacted that way as well.
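The loop was roughly the following (a reconstruction using the current openai Python client, not my original code; model name and prompt wording are from memory):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def dove_score(speech_text):
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "Rate the following Fed speech from 1 (most hawkish) "
                            "to 10 (most dovish). Reply with the number only."},
                {"role": "user", "content": speech_text},
            ],
        )
        return resp.choices[0].message.content

    example = "Inflation remains elevated, and we are prepared to raise rates further."
    print(dove_score(example))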
Although my simple test didn't prove anything, I'm 100% sure there is value here, and if I had more time I would attempt to exploit it. I collect data from financial social platforms that assign bearish/neutral/bullish ratings, and when certain conditions are met there are highly correlated markers of impending market movements. I'm sure Fed speeches can be used the same way as indicators.
As a human, I like anomaly tracking, if I understand what you mean by that. LLMs are maybe 99% good and 1% totally wrong (hallucination). There's a lot of profit in betting against the 1% that's totally wrong. It's not hard to see when they're wrong, but you do need to act fast.
Less facetiously, there's no reason that needs to go through a vision model. If you wanted to do technical analysis, it'd make far more sense to provide data to the model as data, not as a picture of that data.
We are working on a project for a client that functions as an analysis tool for stocks using LLMs: ingesting 10-Ks, presentations, news, etc., and doing comparative analysis and other reports. It works great, but one of the things we've learned (and it makes sense) is that traceability of the information is very important to financial professionals - where did the facts in what the AI produces come from? A hard problem to solve completely.
I worked on a similar application, and eventually we shelved it. We just could not be confident enough that the numbers in the reports it produced were correct. There were enough instances of inaccuracy that it couldn't be used for important decision making - which actually meant a lot of double work.
If it were me, I would ingest the raw filings from SEC EDGAR and use the robust XML (XBRL) markup to create very accurately annotated data tables to feed to my LLM.
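Something like this, say (a sketch against EDGAR's public companyfacts endpoint; the company and the us-gaap tag below are just examples):

    import requests

    # Apple's CIK, zero-padded to 10 digits; the SEC asks for a descriptive User-Agent.
    url = "https://data.sec.gov/api/xbrl/companyfacts/CIK0000320193.json"
    facts = requests.get(url, headers={"User-Agent": "research you@example.com"}).json()

    # Every figure arrives tagged and typed via XBRL -- no PDF parsing needed.
    revenue = facts["facts"]["us-gaap"]["RevenueFromContractWithCustomerExcludingAssessedTax"]
    for item in revenue["units"]["USD"][-3:]:
        print(item["form"], item["fy"], item["fp"], item["val"])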
A coworker presented a demo of this the other day - asking an LLM (I think it was OpenAI's) to extract the text from a PDF, with each page passed as an image. It was able to take a table and turn it into a hierarchical representation of the data (i.e. a column with bullets under it for each row, then the next column, etc.).
AWS Textract now has the functionality to return a table cell based on a query, if I'm not mistaken. I've seen nothing similar to this and would be very interested if there are other solutions.
We build multimodal search engines day to day, and recently launched a video document search engine. I made a Show HN [0] post about ingesting Mutual Fund Risk/Return summary data (485BPOS, 497) and searching it with AI search. We can pinpoint the exact term on a given page, and it's fairly easy for us to ingest 10-K, 10-Q, 8-K and other forms.

You can try out the finance demo at https://finance-demo.joyspace.ai.

Our search engine can be used to build RAG pipelines that further minimize hallucinations in your LLM. Happy to answer any questions about this or about the search engine.

[0] https://news.ycombinator.com/item?id=39980902
LLMs' labor savings will only help financial market participants if they manage to do it without hallucinations / can maintain ground truth.
Sure, it's great if your analysts save 10 hours because they don't need to read 10-Ks / earnings / management call transcripts - but not if it spits out incorrect or made-up numbers.
With code, you can run it and see if it works - rinse and repeat. With combing financial documents to make decisions, you'll only realize it made up some financial stat after you've lost money. So the iteration loop is quite different.
There have been some developments using LLMs in the time-series domain that caught my attention.
I toyed with the Chronos forecasting toolkit [1], and the results were, predictably, off by wild margins [2].
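For reference, my toy test amounted to little more than this (the chronos-forecasting package from [1]; the model size, the horizon, and the "closes.txt" input file are arbitrary choices here):

    import numpy as np
    import torch
    from chronos import ChronosPipeline

    pipeline = ChronosPipeline.from_pretrained(
        "amazon/chronos-t5-small", device_map="cpu", torch_dtype=torch.float32
    )

    history = torch.tensor(np.loadtxt("closes.txt"))            # 1-D price history
    forecast = pipeline.predict(history, prediction_length=30)  # samples x horizon
    low, mid, high = np.quantile(forecast[0].numpy(), [0.1, 0.5, 0.9], axis=0)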
What really caught my eye, though, was the "feel" of the predicted timeseries - this is the first time I've seen synthetic timeseries that look like the real thing. Stock charts have a certain quality to them; once you've been looking at them long enough, you can tell more often than not whether some unlabeled data is a stock-price timeseries or not. It seems the Chronos model was able to pick up on that "nature" of price movement and replicate it in its forecasts. Impressive!

1: https://github.com/amazon-science/chronos-forecasting

2: https://imgur.com/a/hTRQ38d
I used to work in financial software, and when writing charting UIs I'd wire them up to a random walk to generate fake time-series data. It was relatively common for a VP or the company CEO to walk by, look at my screen, and say "What stock is that? Looks interesting."
Unpopular opinion, backed up by experience: a random walk is the most effective model for generating timeseries that have the "feel" of real stock charts.
That's my experience as well. A random walk looks just like market data. You could even perform technical analysis on it - finding support, resistance, trendlines, etc. It really makes you realize why technical analysis doesn't work.
> Unpopular opinion, backed up by experience: a random walk is the most effective model for generating timeseries that have the "feel" of real stock charts.
That's not an unpopular opinion. The Black-Scholes-Merton (BSM) model is based on the assumption that stock prices are stochastic, i.e. random walks. Monte Carlo simulation and binomial trees are the two common methods of deriving a solution to the BSM model.
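Which is to say, the "random walk generator" here has a standard name. A minimal geometric-Brownian-motion path, with made-up drift and vol:

    import numpy as np

    rng = np.random.default_rng(42)
    mu, sigma, s0 = 0.07, 0.20, 100.0  # annual drift, annual vol, start price
    dt, days = 1 / 252, 252

    # GBM: the log-price takes normally distributed steps.
    z = rng.standard_normal(days)
    log_returns = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    prices = s0 * np.exp(np.cumsum(log_returns))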
You can tell a stock time series by certain characteristics:
1) There are more jumps down than up (maybe not in pharma, but in general). If there's a gap up, chances are it's on earnings day.
2) Upward movements tend to be accompanied by lower volatility, and downward ones by higher.
3) There are a lot of nothing-happened days, and a lot more large jumps than you'd expect in a random walk.
I've also spent a bunch of time generating random walks, and it's true that some look realistic, but they often run into the problem that stock returns are not normally distributed.
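One cheap way to get closer, at least on point 3, is to draw the innovations from a fat-tailed distribution instead of a normal (df=3 here is an arbitrary choice):

    import numpy as np

    rng = np.random.default_rng(1)
    days, daily_vol, df = 252, 0.015, 3

    # Student-t innovations give fat tails: mostly quiet days, occasional big jumps.
    t = rng.standard_t(df, size=days)
    returns = daily_vol * t / np.sqrt(df / (df - 2))  # rescale to the target vol
    prices = 100 * np.cumprod(1 + returns)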
I also wrote a number of random trading backtests, and it's frightening how few times you need to click the "recalculate" button to get something that looks like a money-printing machine.
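That "recalculate" effect is easy to reproduce: pick the best of enough coin-flip strategies on random data and it looks like skill (everything below is synthetic):

    import numpy as np

    rng = np.random.default_rng(7)
    n_strategies, days = 1000, 252
    market = rng.normal(0, 0.01, size=days)                     # random-walk returns
    positions = rng.choice([-1, 1], size=(n_strategies, days))  # random long/short

    equity = np.cumprod(1 + positions * market, axis=1)
    print(f"best of {n_strategies}: {equity[:, -1].max():.2f}x in a year")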
I'd love to see some examples, if you have old screenshots lying around!
Your take conflicts with my toy hypothesis, and I wouldn't mind being proven wrong if it saves me time and effort.
I wonder if the folks who were fooled by your screens were fooled by the random data itself, or by the fact that it was presented within all the familiar chrome and doodads that people associate with stock-price visualization.
Yes, but it is also possible to generate "parameterised" random walks that have some predictability and are visually indistinguishable from "pure" random walks.
Or two series that are dependent, but individually look like random walks.
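E.g., drive two series off one common random walk plus independent noise: each alone looks like a random walk, but the spread between them mean-reverts (a sketch):

    import numpy as np

    rng = np.random.default_rng(3)
    common = np.cumsum(rng.normal(0, 1.0, 1000))  # shared random walk
    a = common + rng.normal(0, 0.5, 1000)
    b = common + rng.normal(0, 0.5, 1000)

    spread = a - b  # stationary, unlike a or b individually
    print(f"step correlation: {np.corrcoef(np.diff(a), np.diff(b))[0, 1]:.2f}")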
As always, when running time-series predictions on financial datasets, one needs to use daily returns (including dividends, corporate actions, etc.) rather than end-of-day prices.
Simply outputting the last value (as more or less shown in these charts) is a pretty good end-of-day price predictor!
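A quick illustration of why price-level metrics flatter a model (synthetic random-walk prices):

    import numpy as np

    rng = np.random.default_rng(5)
    prices = 100 * np.cumprod(1 + rng.normal(0, 0.01, 1000))

    # "Tomorrow's close = today's close" scores great on price levels...
    mape = np.mean(np.abs(prices[1:] - prices[:-1]) / prices[1:])
    print(f"naive price MAPE: {mape:.2%}")  # on the order of the daily vol

    # ...while its implied return forecast is identically zero, i.e. no signal at all.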
I think some of the financial applications of LLMs right now are better suited to things like summarization, aggregation, etc.
We at Tradytics recently built two tools on top of LLMs, and they've been super popular with our user base.
Earnings transcript summaries: users want a simple, easy-to-understand summary of what happened in an earnings call and report. LLMs are a nice fit for that - https://tradytics.com/earnings
News aggregation & summarization: given how many articles get written every day in financial markets, there is a need for better ingestion pipelines. Users want to understand what's going on but don't want to spend several hours reading news - https://tradytics.com/news
That's a fair point, but models like GPT-4 do not hallucinate much when it comes to summarizing, so I don't think these applications contribute anything negative.
> there is much more noise than signal in financial data.
Spot on. Very few can consistently find small signals, match that with huge amounts of capital, and be successful for a long period. Of course, Renaissance Technologies comes to mind.
Recommended reading if you're interested - it was an enjoyable read: The Man Who Solved the Market: How Jim Simons Launched the Quant Revolution.
HFTs exploit price inefficiencies that last only milliseconds. The time-series data mentioned in the article is on the scale of seconds. I wonder if it's possible to get time-series data on the scale of milliseconds, and how that would affect the training of the objective function in an LLM.
Today's derivatives and their pricing are based on the premise that stock prices cannot be predicted and behave like a Brownian-motion system. If you take real-time data from any stock and count, in order, how many times the stock went up in a row or down in a row, you end up almost perfectly with the natural probability distribution. HFTs are involved in market making and arbitrage, both of which already involve high speed (the latter much more so), earning minuscule profits. There are ghost patterns that can be mined for a certain period of time, but they are not calculated solely from the trading time series; they involve complex proprietary calculations, some machine learning, and relationships between stocks. There is no pattern in the flow of how a particular stock trades.
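The run-length check is a few lines: under the coin-flip model a run of length k has probability 0.5^k, and real daily returns land close to that (synthetic stand-in data below; swap in a real return series):

    import numpy as np
    from itertools import groupby

    rng = np.random.default_rng(9)
    returns = rng.normal(0, 0.01, 5000)  # stand-in for real daily returns

    runs = [len(list(g)) for _, g in groupby(np.sign(returns))]
    observed = np.bincount(runs)[1:6] / len(runs)
    print(np.round(observed, 3))  # vs expected: [0.5, 0.25, 0.125, 0.062, 0.031]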
Also, from a long-term view it's very questionable. How should a model be able to predict that, in the middle of a high-interest-rate environment, a bursting tech bubble, and a dumping stock market in general, a new platform called ChatGPT gets launched that basically carries the whole world's stock market to new heights, causing among other things retail investors to liquidate bonds and other high-rate assets and flood the money into the stock market? It is completely off the textbook. That cannot be predicted. The guy spending millions ends up just as far off as the guy running a 100-line Python trend-following strategy.
> How should a model be able to predict that, in the middle of a high-interest-rate environment, a bursting tech bubble, and a dumping stock market in general, a new platform called ChatGPT gets launched that basically carries the whole world's stock market to new heights, causing among other things retail investors to liquidate bonds and other high-rate assets and flood the money into the stock market?
Because it happened in the railroad boom in the 19th century, the Roaring '20s, the '80s, the '90s dot-com boom, the biotech boom...
History rhymes, and as we know, LLMs make decent rappers.
Derivatives are priced under those assumptions because the aim is to calculate exposure/risk (where a simple "assume you're wrong" stance is desirable); the pricing is sort of an afterthought most of the time.
> is that gaming financial markets is the only real application of anything scientific
Medicine (living longer, curing disease, vaccines, etc.), cheaper energy, cheaper transportation, cheaper construction, cheaper food, better communication, new forms of entertainment - just off the top of my head.
I've sort of come around on this. Yes, everything you listed is valuable and good. But the reality is that all of it was built with money that came from banks and investors. The only reason to do anything scientific is to get investors to give you money; if you do something scientific that does not make people want to give you money, you will impact no lives. In this way, gaming financial markets is indeed the only point of doing anything ambitious at all.
(1) synthetic data models for data cleansing, (2) journal management, (3) anomaly tracking, (4) critiquing investments
All of this should be done by professionals and nothing is "retail" ready.
Don't worry, just train the LLM to always append "This is not financial advice." to its responses. Boom - retail ready.
If you haven't tried it, maybe worth a shot: [0]

[0] https://arxiv.org/abs/2402.04315
> is that gaming financial markets is the only real application of anything scientific

...but I vaguely remember what he was actually talking about; I never quite made it as a mathematician.