jsemrau · a year ago
A lot of words for not bringing much new content to the discussion. I think the most interesting applications of LLMs in finance are

(1) synthetic data models for data cleansing, (2) journal management, (3) anomaly tracking, (4) critiquing investments

All of this should be done by professionals and nothing is "retail" ready.

bigyikes · a year ago
> All of this should be done by professionals and nothing is "retail" ready.

Don’t worry, just train the LLM to always append “This is not financial advice.” to its responses. Boom, retail ready.

pennomi · a year ago
As an AI language model, I am unable to answer as this goes against the ethical principles of respect and impartiality. This is not financial advice.
Rexxar · a year ago
Or just append the string to output without asking the LLM to do it :-).
Raphael · a year ago
Hard to waste any time reading about AI because it's likely written by AI. But then I probably shouldn't read anything written past 2022.
PeterStuer · a year ago
(non informed, layman sideline perspective from casual reading on this subject over the years)

Real-time (financial) sentiment analysis on financial news sources has been integrated for a long time. The thing about LLMs is that, while they could improve on quality, they need to get the latency down before being useful in straight trading. For offline analyst support, where time is less of an issue, they can of course be useful, e.g. summarizing/structuring lots of fluffed or trawled content.

t_mann · a year ago
I'd think the first application would be along the lines of GitHub Copilot, perhaps locally hosted - quantitative traders write a lot of (proprietary) code, too.
OtherShrezzing · a year ago
I think the underlying vector databases should have decent uses in financial markets.

Since they can understand taxonomical-ish relationships, a vector db should be able to codify the strategies of sufficiently large market movers, assuming those strategies are remotely predictable. Once a rival's strategy is codified, it should be possible to undermine it, like some form of heuristic-based insider trading.

tomatocracy · a year ago
One other area which I think is potentially quite interesting is using LLMs to help in deciphering "Fed-speak". E.g. JP Morgan built an LLM to try to predict the impact on interest rate markets of speeches by various central bank policymakers.
Shocka1 · a year ago
I conducted a test last year with GPT-4. The idea was simple: feed in Powell's official Fed meeting speeches and have the model give a rating between 1 and 10, with 10 being more dovish and 1 being more hawkish. I fed in around 7 or so Fed speeches and kept getting ratings around 8, i.e. toward the dovish end. There were a few speeches in there that were definitely hawkish, and the markets reacted that way as well.

Although my simple test didn't prove anything, I'm 100% sure there is value here, and if I had more time I would attempt to exploit it. I collect data from financial social platforms that assign bearish/neutral/bullish ratings, and there are highly correlated markers of impending market movements when certain conditions are met. I'm sure Fed speeches can be used in the same way as indicators.
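For anyone who wants to try the same experiment, a minimal sketch of that scoring loop might look like this (assuming the OpenAI Python client v1+; the prompt wording, model name, and speech texts are placeholders, not the exact setup described above):

    # Minimal sketch: score Fed speeches on a 1-10 hawkish/dovish scale with an LLM.
    # Assumes the openai package (v1+) and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    PROMPT = (
        "Rate the following Federal Reserve speech on a scale from 1 to 10, "
        "where 1 is extremely hawkish and 10 is extremely dovish. "
        "Reply with a single integer only.\n\n{speech}"
    )

    def rate_speech(speech_text):
        """Ask the model for a single 1-10 dovishness rating of one speech."""
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": PROMPT.format(speech=speech_text)}],
            temperature=0,
        )
        return int(resp.choices[0].message.content.strip())

    speeches = ["<full text of FOMC press conference #1>", "<full text #2>"]  # placeholders
    print([rate_speech(s) for s in speeches])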

bernardlunn · a year ago
As a human, I like anomaly tracking, if I understand what you mean by that. LLMs are maybe 99% good and 1% totally wrong (hallucination). There's lots of profit in betting against the 1% that's totally wrong. It's not hard to see when it's wrong, but you do need to act fast.
mattew · a year ago
This makes sense. Can you clarify what you mean by journal management in this context?
NovemberWhiskey · a year ago
The most interesting applications for LLMs in finance are basically all summarization.
profsummergig · a year ago
Could someone please clarify what "journal management" means?
spaceman_2020 · a year ago
Can Vision GPT be trained to do technical analysis?
duskwuff · a year ago
Calling rand() requires very little training. ;)

Less facetiously, there's no reason that needs to go through a vision model. If you wanted to do technical analysis, it'd make far more sense to provide data to the model as data, not as a picture of that data.

conorh · a year ago
We are working on a project for a client which functions as an analysis tool for stocks using LLMs: ingesting 10-Ks, presentations, news, etc. and doing comparative analysis and other reports. It works great, but one of the things we have learned (and it makes sense) is that traceability of the information is very important for financial professionals - where did the facts and figures in what the AI is producing come from? It's a hard problem to solve completely.
neodypsis · a year ago
Could something like that proposed in "Training Language Models to Generate Text with Citations via Fine-grained Rewards" [0] work for you?

0. https://arxiv.org/abs/2402.04315

richrichie · a year ago
I worked on a similar application and eventually we shelved it. We just could not be confident enough that the numbers in the reports it produced were correct. There were enough instances of inaccuracies that we couldn't use it for important decision making, which actually meant a lot of double work.
pid-1 · a year ago
Same experience here.
cpursley · a year ago
I assume you're ingesting PDFs. If so, how are you handling tables accurately?
Kon-Peki · a year ago
If it were me, I would ingest the raw filings from SEC EDGAR and use the robust XML markup to create accurately annotated data tables to feed to my LLM.
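A rough sketch of that approach, pulling structured XBRL facts straight from EDGAR's public companyfacts endpoint (the CIK and the chosen us-gaap tag are illustrative; the SEC asks for a descriptive User-Agent):

    # Rough sketch: fetch structured XBRL facts from SEC EDGAR instead of parsing PDFs.
    # The CIK (Apple) and the chosen concept tag are illustrative.
    import requests

    HEADERS = {"User-Agent": "research-example contact@example.com"}  # SEC asks for a UA string

    def company_facts(cik):
        """Fetch every XBRL fact the company has filed, keyed by taxonomy and tag."""
        url = f"https://data.sec.gov/api/xbrl/companyfacts/CIK{cik:010d}.json"
        resp = requests.get(url, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        return resp.json()

    facts = company_facts(320193)  # Apple Inc.
    revenue = facts["facts"]["us-gaap"]["RevenueFromContractWithCustomerExcludingAssessedTax"]
    for entry in revenue["units"]["USD"][-3:]:  # a few of the most recently reported values
        print(entry["fy"], entry["fp"], entry["val"])
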
scrollbar · a year ago
A coworker presented a demo of this the other day - asking an LLM (I think it was OpenAI's) to extract the text from a PDF, with each page of the PDF passed as an image. It was able to take a table and turn it into a hierarchical representation of the data (i.e. a column with bullets under it for each row, then the next column, etc.).

If you haven't tried it, it might be worth a shot.
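For reference, a minimal version of that page-image approach (assuming the openai v1 client; the model name, prompt, and file path are illustrative, not the demo described above):

    # Minimal sketch of the "each PDF page as an image" approach.
    # Model name, prompt, and file path are placeholders.
    import base64
    from openai import OpenAI

    client = OpenAI()

    def page_table_to_markdown(png_path):
        """Send one rendered PDF page to a vision model and ask for its tables as Markdown."""
        with open(png_path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode()
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Extract any tables on this page as Markdown, preserving headers and rows."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ],
            }],
        )
        return resp.choices[0].message.content

    print(page_table_to_markdown("10k_page_42.png"))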

coastermug · a year ago
AWS Textract now has the functionality to return a table cell based on a query - if I'm not mistaken. I've seen nothing similar to this and would be very interested if there are other solutions.
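If I've read the Textract docs right, the Queries feature via boto3 looks roughly like this (the query text, region, and document are placeholders):

    # Rough sketch of Textract's Queries feature: ask a natural-language question
    # about a document page and read back the QUERY_RESULT blocks.
    import boto3

    textract = boto3.client("textract", region_name="us-east-1")

    with open("balance_sheet_page.png", "rb") as f:
        doc_bytes = f.read()

    resp = textract.analyze_document(
        Document={"Bytes": doc_bytes},
        FeatureTypes=["TABLES", "QUERIES"],
        QueriesConfig={"Queries": [{"Text": "What is total revenue for 2023?"}]},
    )

    for block in resp["Blocks"]:
        if block["BlockType"] == "QUERY_RESULT":
            print(block["Text"], block.get("Confidence"))
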
sagar-co · a year ago
This is really interesting.

We build multimodal search engines on a day-to-day basis, and we recently launched a video document search engine. I made a Show HN post [0] about ingesting Mutual Fund Risk/Return summary data (485BPOS, 497) and searching it with AI search. We are able to pinpoint the exact term on a given page. It is fairly easy for us to ingest 10-K, 10-Q, 8-K and other forms.

You can try out demo for finance-application at https://finance-demo.joyspace.ai.

Our search engine can be used to build RAG pipelines that further minimize hallucinations in your LLM.

Happy to answer any questions about this and about the search engine.

[0] https://news.ycombinator.com/item?id=39980902

steveBK123 · a year ago
LLMs' labor savings will only help financial market participants if they manage to do it without hallucinations / can maintain ground truth.

Sure, it's great if your analysts save 10 hours because they don't need to read 10-Ks / earnings / management call transcripts... but not if it spits out incorrect/made-up numbers.

With code you can run it and see if it works, rinse & repeat.

When combing through financial documents to then make decisions, you'll only realize it made up some financial stat after you've lost money. So the iteration loop is quite different.

mirekrusin · a year ago
Price speculations are hallucinations about the future, with the hope that they come true.
btbuildem · a year ago
There were some developments using LLMs in the timeseries domain which caught my attention.

I toyed with the Chronos forecasting toolkit [1], and the results were predictably off by wild margins [2].

What really caught my eye though was the "feel" of the predicted time series -- this is the first time I've seen synthetic time series that look like the real thing. Stock charts have a certain quality to them; once you've been looking at them long enough, you can tell more often than not whether some unlabeled data is a stock price time series or not. It seems the Chronos model was able to pick up on that "nature" of the price movement and replicate it in its forecasts. Impressive!

1: https://github.com/amazon-science/chronos-forecasting

2: https://imgur.com/a/hTRQ38d
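For anyone who wants to reproduce this kind of experiment, a minimal zero-shot forecast call looks roughly like this (API as I understand it from the Chronos README; the input CSV, model size, and horizon are arbitrary):

    # Minimal sketch of a zero-shot Chronos forecast on a daily close series.
    # Input file, model size, and horizon are placeholders; expect wild errors.
    import numpy as np
    import pandas as pd
    import torch
    from chronos import ChronosPipeline

    pipeline = ChronosPipeline.from_pretrained(
        "amazon/chronos-t5-small",
        device_map="cpu",
        torch_dtype=torch.float32,
    )

    closes = pd.read_csv("spy_daily.csv")["close"].values  # placeholder data source
    context = torch.tensor(closes, dtype=torch.float32)

    # forecast shape: [num_series, num_samples, prediction_length]
    forecast = pipeline.predict(context, prediction_length=30)
    low, median, high = np.quantile(forecast[0].numpy(), [0.1, 0.5, 0.9], axis=0)
    print(median)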

nostrademons · a year ago
I used to work in financial software, and when writing the charting UIs, I'd wire them up to a random walk to generate fake time series data. It was a relatively common occurrence for a VP or the company CEO to walk by, look at my screen, and say "What stock is that? Looks interesting."

Unpopular opinion backed up by experience: a random walk is the most effective model for generating time series that have the "feel" of real stock charts.
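A minimal sketch of that kind of generator (a geometric random walk; drift, volatility, and length are arbitrary):

    # Minimal "fake stock" generator: a geometric random walk with arbitrary parameters.
    import numpy as np

    def fake_stock(n_days=252, start=100.0, mu=0.0002, sigma=0.015, seed=None):
        """Exponentiate the cumulative sum of i.i.d. normal log-returns."""
        rng = np.random.default_rng(seed)
        log_returns = rng.normal(mu, sigma, n_days)
        return start * np.exp(np.cumsum(log_returns))

    prices = fake_stock(seed=42)
    print(prices[:5], "...", prices[-1])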

IAmGraydon · a year ago
That’s my experience as well. A random walk looks just like market data. You could even perform technical analysis on it, finding support, resistance, trendlines, etc. It really makes you realize why technical analysis doesn’t work.
Onavo · a year ago
> Unpopular opinion backed up by experience: a random walk is the most effective model for generating time series that have the "feel" of real stock charts.

That's not an unpopular opinion. The BSM (Black-Scholes-Merton) model is based on the assumption that stock prices are stochastic, i.e. follow a random walk (geometric Brownian motion). Monte Carlo simulations and binomial trees are the two common methods of deriving a solution to the BSM model.
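For the curious, the Monte Carlo route under those assumptions is only a few lines; a sketch of pricing a European call under geometric Brownian motion (all inputs made up for illustration):

    # Sketch: Monte Carlo pricing of a European call under the GBM assumption above.
    import numpy as np

    def mc_call_price(S0, K, r, sigma, T, n_paths=200_000, seed=0):
        """Simulate terminal prices under GBM and discount the average payoff."""
        rng = np.random.default_rng(seed)
        Z = rng.standard_normal(n_paths)
        ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
        payoff = np.maximum(ST - K, 0.0)
        return np.exp(-r * T) * payoff.mean()

    # Should land near the closed-form Black-Scholes value (~10.45 for these inputs).
    print(mc_call_price(S0=100, K=100, r=0.05, sigma=0.2, T=1.0))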

lordnacho · a year ago
You can tell a stock time series by certain characteristics:

1) There are more jumps down than up. (Maybe not in Pharma, but in general). If there's a gap up, chances are it's on earnings day.

2) Upward movements tend to be accompanied by lower volatility, and downwards by higher.

3) There are a lot of nothing-happened days, and a lot more large jumps than you'd expect in a random walk.

I've also spent a bunch of time generating random walks, and it's true that some look realistic, but they often fall into the trap of producing normally distributed returns, when real stock returns are not.

I also wrote a number of random trading backtests, and it's frightening how few times you need to click the "recalculate" button to get a thing that looks like a money printing machine.
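On the non-normality point: a quick way to see it is to compare the excess kurtosis of a Gaussian walk with fat-tailed returns; a sketch using Student-t innovations as a stand-in (degrees of freedom and scales are arbitrary):

    # Sketch of the fat-tail point: Gaussian returns vs. Student-t returns with equal variance.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n, df = 100_000, 5

    normal_rets = rng.normal(0, 0.01, n)
    # Rescale the t draws so both series have ~1% daily standard deviation.
    t_rets = stats.t.rvs(df, size=n, random_state=1) * 0.01 / np.sqrt(df / (df - 2))

    for name, r in [("normal", normal_rets), ("student-t", t_rets)]:
        # Excess kurtosis is ~0 for Gaussian returns and clearly positive for fat tails.
        print(name, "excess kurtosis:", round(stats.kurtosis(r), 2),
              "worst day:", round(float(r.min()), 4))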

iamgopal · a year ago
This is true. I have tested this with multiple veterans and none could tell them apart.
btbuildem · a year ago
I'd love to see some examples, if you have old screenshots laying around!

Your take conflicts with my toy hypothesis, and I wouldn't mind being proven wrong if it saves me time and effort.

I wonder if the folks who were fooled by your screens were fooled by the random data itself, or the fact that it was presented within all the familiar chrome and doodads that people associate with stock price visualization.

Bostonian · a year ago
Since volatility clustering does exist in returns, a GARCH model should produce more realistic-looking returns than a pure random walk.
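A minimal GARCH(1,1) simulator for comparison (parameters picked in the typical daily-equity range, not fitted to anything):

    # Sketch of a GARCH(1,1) return simulator, to compare with a plain random walk.
    import numpy as np

    def garch_returns(n=1000, omega=2e-6, alpha=0.08, beta=0.90, seed=0):
        """r_t = sigma_t * z_t, with sigma_t^2 = omega + alpha*r_{t-1}^2 + beta*sigma_{t-1}^2."""
        rng = np.random.default_rng(seed)
        r = np.zeros(n)
        var = omega / (1 - alpha - beta)  # start at the unconditional variance
        for t in range(n):
            r[t] = np.sqrt(var) * rng.standard_normal()
            var = omega + alpha * r[t] ** 2 + beta * var
        return r

    prices = 100 * np.exp(np.cumsum(garch_returns()))  # turn returns into a price path
    print(prices[-1])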

yobbo · a year ago
Yes, but it is also possible to generate "parameterised" random walks that have some predictability and are visually indistinguishable from "pure" random walks.

Or two series that are dependent, but individually look like random walks.
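For instance, two series built from a shared hidden walk plus independent noise each look like a random walk on their own, while their spread is predictable (a toy sketch; all scales arbitrary):

    # Toy sketch: two series that are dependent but individually look like random walks.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 1000

    common = np.cumsum(rng.normal(0, 1.0, n))  # shared hidden random walk
    a = common + rng.normal(0, 2.0, n)         # walk plus stationary noise
    b = common + rng.normal(0, 2.0, n)

    spread = a - b                             # stationary and mean-reverting around zero
    print("std of each series:", round(a.std(), 1), round(b.std(), 1))
    print("std of the spread: ", round(spread.std(), 1))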

actionfromafar · a year ago
Or it looked interesting because it did not look normal.
yzmtf2008 · a year ago
As always, when running time series predictions on financial datasets, one needs to use daily returns (including dividends, corporate actions, etc.) rather than end-of-day prices.

Simply outputting the last value (as more or less shown in these charts) is a pretty good end-of-day price predictor!
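Concretely, something like this (the CSV of adjusted closes is a placeholder; the point is modelling returns rather than raw prices, and benchmarking against the naive last-value prediction):

    # Sketch: return transform plus the naive "last value" baseline described above.
    # The CSV of adjusted closes is a placeholder data source.
    import numpy as np
    import pandas as pd

    prices = pd.read_csv("adjusted_closes.csv", index_col=0, parse_dates=True)["adj_close"]

    # Daily returns; adjusted closes already fold in dividends and splits.
    returns = prices.pct_change().dropna()

    # Naive baseline in price space: predict today's close = yesterday's close.
    naive_pred = prices.shift(1).dropna()
    actual = prices.loc[naive_pred.index]
    mape = (np.abs(actual - naive_pred) / actual).mean()
    print(f"naive last-value MAPE: {mape:.4%}")  # tends to look deceptively good

    # The honest benchmark in return space: predict a 0% return every day.
    print(f"RMSE of predicting zero return: {np.sqrt((returns ** 2).mean()):.4%}")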

hydershykh · a year ago
I think some of the financial applications around LLMs right now are better suited for things like summarization, aggregation, etc.

We at Tradytics recently built two tools on top of LLMs and they've been super popular with our user base.

Earnings transcript summary: Users want a simple and easy to understand summary of what happened in an earnings call and report. LLMs are a nice fit for that - https://tradytics.com/earnings

News aggregation & summarization: Given how many articles get written every day about financial markets, there is a need for better ingestion pipelines. Users want to understand what's going on but don't want to spend several hours reading through news - https://tradytics.com/news

paulryanrogers · a year ago
As more of the reports get written by layers of AI it makes me wonder how lossy and noisy this whole pipeline is becoming.
hydershykh · a year ago
That's a fair point. But models like GPT-4 do not hallucinate much when it comes to summarizing, so I don't think these applications contribute to anything negative.
monkeydust · a year ago
> there is much more noise than signal in financial data.

Spot on. Very few can consistently find small signals, match that with huge amounts of capital, and be successful for a long period. Of course Renaissance Technologies comes to mind.

Recommend reading this if you're interested; it was an enjoyable read: The Man Who Solved the Market: How Jim Simons Launched the Quant Revolution.

wuj · a year ago
HFTs exploit price inefficiencies that last only milliseconds. The time-series data mentioned in the article is on the scale of seconds. I wonder if it's possible to get time-series data on the scale of milliseconds, and how that would affect the training of the objective function in an LLM.
multicast · a year ago
Today's derivatives and their pricing are based on the premise that stock prices cannot be predicted and behave like a Brownian-motion system. If you take real-time data from any stock and count, in order, how many times the stock went up in a row or down in a row, you end up almost perfectly with the natural probability distribution. HFTs are involved in market making and arbitrage, both of which already involve high speed (the latter much more so) while earning minuscule profits. There are ghost patterns that can be mined for a certain period of time, but they are not calculated solely from the trading time series; they involve complex proprietary calculations, some machine learning, and relationships between stocks. There is no pattern in the flow of how a particular stock trades.

Also, from a long-term view it's very questionable. How should a model be able to predict that, in the middle of a high-interest-rate environment, a tech bubble burst and a dumping stock market in general, a new platform called ChatGPT gets launched that basically carries the whole world's stock market to new heights, which causes, among other things, retail investors to liquidate bonds and other high-interest assets and flood the money into the stock market? It is completely off the textbook. That cannot be predicted. In the end, the guy spending millions is just as far off as the guy who simply employs a 100-line Python trend-following strategy.
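A quick version of the run-length check described above, on a simulated series (swap in real closes to compare against the coin-flip expectation):

    # Sketch: count maximal runs of consecutive up-days and compare with the
    # geometric fall-off a 50/50 coin flip would give. Data here is simulated.
    from collections import Counter
    import numpy as np

    rng = np.random.default_rng(0)
    closes = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 10_000)))  # placeholder series

    ups = np.diff(closes) > 0
    runs, length = Counter(), 0
    for up in ups:
        if up:
            length += 1
        elif length:
            runs[length] += 1
            length = 0
    if length:
        runs[length] += 1

    total = sum(runs.values())
    for k in sorted(runs):
        # For a 50/50 walk, a run of exactly k up-days has probability ~(1/2)^k among up-runs.
        print(f"{k}-day up-runs: {runs[k]:5d} ({runs[k]/total:.1%} observed, {0.5**k:.1%} expected)")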

mvkel · a year ago
> How should a model be able to predict that, in the middle of a high-interest-rate environment, a tech bubble burst and a dumping stock market in general, a new platform called ChatGPT gets launched that basically carries the whole world's stock market to new heights, which causes, among other things, retail investors to liquidate bonds and other high-interest assets and flood the money into the stock market?

Because it happened in the railroad boom in the 19th century, the roaring 20s, the 80s, the 90s dot com boom, the biotech boom...

History rhymes, and as we know, LLMs make decent rappers.

mhh__ · a year ago
Derivatives are priced under those assumptions because the aim is to calculate exposure/risk (where a simple "assume you're wrong" model is desirable); the pricing itself is sort of an afterthought most of the time.
imtringued · a year ago
The tech is different but the people are the same.
mhh__ · a year ago
The data is reasonably easily acquired, for a price...

ysofunny · a year ago
If I learned anything from a conference by Benoit Mandelbrot back in my college days

is that gaming financial markets is the only real application of anything scientific

but I only vaguely remember what he was actually talking about; I never quite made it as a mathematician

jonahx · a year ago
> is that gaming financial markets is the only real application of anything scientific

medicine (living longer, curing disease, vaccines, etc.), cheaper energy, cheaper transportation, cheaper construction, cheaper food, better communication, new forms of entertainment, just off the top of my head.

nexuist · a year ago
I've sort of come around on this. Yes, everything you listed is valuable and good. But the reality is all of it was built with money that came from banks and investors. The only reason to do anything scientific is to get investors to give you money. If you do something scientific that does not make people want to give you money you will impact no lives. In this way gaming financial markets is indeed the only point to doing anything ambitious at all.
kortilla · a year ago
What does that even mean? How is the atomic bomb not real?
actionfromafar · a year ago
The atomic bomb is used very much today to influence markets, to be fair.