jgord · a year ago
The text has some great explanatory diagrams and looks to be a very high quality overview of ML through the lens of probability, with lots of math.

I was also recently impressed by Zhao's "Mathematical Foundation of Reinforcement Learning", a free textbook with video lectures on YT: https://github.com/MathFoundationRL/Book-Mathematical-Founda...

If you don't have a lot of time, at least glance at Zhao's overview contents diagram; it's a good conceptual map of the whole field, imo. Here:

https://github.com/MathFoundationRL/Book-Mathematical-Founda...

and maybe watch the intro video.

vimgrinder · a year ago
The first lecture is so good. Not only in terms of content, but in how Zhao explains how to think about learning as a student. Ty for the recommendation.
abhgh · a year ago
I came across this a few days ago, and my excuse to give it a serious look is that Andreas Krause has some deep and interesting research in Gaussian processes and bandits [1].

[1] https://scholar.google.com/scholar?start=10&q=andreas+krause...

trostaft · a year ago
It's Krause; he's one of the biggest researchers in the field. At least based on the other work of his I've read, he's a good writer too. This ought to be a worthwhile read.
svilen_dobrev · a year ago
Stupid question: can an LLM (i.e. a neural network) tell me the probability of the answer it just spewed? I.e., turn into fuzzy logic? Aaand, can it tell me how much it believes itself? I.e., what's the probability that the above probability is correct? I.e., confidence, i.e., intuitionistic fuzzy logic?

A long time ago at uni we studied these things for a while.. and even built a Prolog interpreter carrying both F+IF (probability + confidence) coefficients for each and every term..

vlovich123 · a year ago
Not out of the box, I think; I wouldn't trust any self-assessment like that. With enough compute, you could probably come up with a metric by doing a beam search and using an LLM to evaluate how many of the resultant answers were effectively the same, as a proxy for "confidence".
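That sample-and-compare idea can be sketched in a few lines. Everything below is hypothetical: the hard-coded answers stand in for real sampled completions, and the string normalization stands in for an LLM-based equivalence check:

```python
from collections import Counter

def agreement_confidence(answers):
    """Fraction of sampled answers that agree with the modal answer.

    A crude self-consistency proxy for confidence: if most independent
    samples of the same prompt land on the same answer, the model is at
    least consistent, even if not necessarily correct.
    """
    normalized = [a.strip(" .,!?").lower() for a in answers]
    top_answer, top_count = Counter(normalized).most_common(1)[0]
    return top_answer, top_count / len(normalized)

# stand-in for 5 sampled completions of the same prompt
samples = ["Paris", "paris", "Paris.", "Lyon", "Paris"]
answer, confidence = agreement_confidence(samples)  # ("paris", 0.8)
```

In practice the normalization step is the hard part; two completions can phrase the same answer very differently, which is why an LLM judge gets used as the equivalence check.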
energy123 · a year ago
Similar to bootstrapping a random variable in statistics. Your N estimates (each estimate is derived from a subset of the sample data) give you an estimate of the distribution of the random variable. If the variance of that distribution is small (relative to the magnitude of the point estimate) then you have high confidence that your point estimate is close to the true value.
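A minimal, stdlib-only sketch of that bootstrap procedure (the data here is made up):

```python
import random
import statistics

def bootstrap_mean(data, n_resamples=1000, seed=0):
    """Resample with replacement, take the mean of each resample, and
    return the point estimate plus the spread of the resampled means.
    A small spread relative to the estimate = high confidence."""
    rng = random.Random(seed)
    means = [
        statistics.fmean(rng.choices(data, k=len(data)))
        for _ in range(n_resamples)
    ]
    return statistics.fmean(means), statistics.stdev(means)

data = [1.0, 2.0] * 50          # toy sample of 100 observations
estimate, spread = bootstrap_mean(data)
# estimate lands near 1.5; spread approximates the standard error
```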

Likewise in your metric, if all answers are the same despite perturbations then it's more likely to be ... true?

I'd really like to see a plot of your metric versus the SimpleQA hallucination benchmark that OpenAI uses.

ramity · a year ago
The way I understand it, an LLM response is a chain of tokens where each is the most probable token (at least under greedy decoding). More complicated candidate-selection approaches exist, but "biggest number wins" works for me. For the sake of simplicity, let's just say tokens are words. You'd have access to the probability of each word in the ordering of the sentence, but I'm not sure how that would then be used to evaluate the probability of the sentence itself, or its truthiness.
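For what it's worth, the standard way to score the whole sequence is to multiply the per-token probabilities (equivalently, sum their logs). That gives the model's probability of that exact word sequence, which says nothing about truthiness. A sketch with made-up numbers:

```python
import math

def sequence_logprob(token_probs):
    """Log-probability of a whole token sequence: the sum of per-token
    log-probabilities (the log of the product of the probabilities)."""
    return sum(math.log(p) for p in token_probs)

# hypothetical per-word probabilities for a four-word sentence
probs = [0.9, 0.8, 0.95, 0.7]
seq_prob = math.exp(sequence_logprob(probs))  # the product, ~0.479
```

Summing logs instead of multiplying raw probabilities is what APIs actually expose (logprobs), since products of many small numbers underflow quickly.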
nthingtohide · a year ago
neom · a year ago
You can say "give me a % chance you think this thing will happen and why" and it will spit out a lot of context behind its thinking. I'm not a math guy and I'm aware "probability" has some more complex math stuff in it, but just from a "why do you believe this so strongly?" perspective, I've personally found it's able to have me agree or disagree fairly. You can then give it additional context you know about, and it will refine its estimate. Basically I've started treating them like context-connection systems, just to see if dots could even possibly connect before I do the connecting myself.
bob1029 · a year ago
I'm not 100% sure what you mean by this, but token probabilities are available from some providers:

https://cookbook.openai.com/examples/using_logprobs

madacol · a year ago
I think the logprobs functionality was removed because it allowed attackers to exfiltrate the last layer of weights of their proprietary models.
esafak · a year ago
Suitably modified, they can. Bayesian neural networks provide uncertainty quantification. The challenge is calibrating the predictions, and deciding whether the model capacity devoted to uncertainty quantification would be better spent on a bigger model without it.

https://en.wikipedia.org/wiki/Calibration_(statistics)

Example: Efficient and Effective Uncertainty Quantification for LLMs (https://openreview.net/forum?id=QKRLH57ATT)

ogrisel · a year ago
According to the following paper, it's possible to get calibrated confidence scores by directly asking the LLM to verbalize a confidence level, but it strongly depends on how you prompt it to do so:

https://arxiv.org/abs/2412.14737

whtrbt · a year ago
Maybe a stupid answer, but I've read a few older papers that used ensembles to identify when a prediction is out of distribution. Not sure what the SotA approach is though.
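Deep ensembles remain a strong baseline for this. A toy sketch, assuming you already have per-member predictions for one input; disagreement (variance) across members is the out-of-distribution signal:

```python
import statistics

def ensemble_uncertainty(member_predictions):
    """Mean and variance of K ensemble members' predictions on one
    input. High variance means the members disagree, which suggests
    the input is unlike anything they were trained on."""
    return (statistics.fmean(member_predictions),
            statistics.pvariance(member_predictions))

in_dist = [0.71, 0.69, 0.70, 0.72, 0.68]   # members agree: low variance
far_ood = [0.10, 0.95, 0.40, 0.80, 0.20]   # members disagree: high variance
```

The prediction values and the two example inputs are made up; in a real setup each list entry would come from an independently trained model evaluated on the same input.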
6gvONxR4sf7o · a year ago
Yes, but those probabilities tend to be poorly calibrated, especially after the tuning they get for instruction following and such.
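"Poorly calibrated" is measurable: bucket predictions by stated confidence and compare each bucket's average confidence to its actual accuracy. A minimal expected-calibration-error sketch (the inputs below are made up):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: average |confidence - accuracy| over confidence bins,
    weighted by bin size. 0.0 means perfectly calibrated."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(o for _, o in bucket) / len(bucket)
        ece += (len(bucket) / total) * abs(avg_conf - accuracy)
    return ece

confs = [0.9] * 10
hits = [1] * 5 + [0] * 5    # 50% accurate despite 90% confidence
# expected_calibration_error(confs, hits) comes out to about 0.4
```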

antonkar · a year ago
I think we’ll need a GUI for the models to democratize interpretability and let even gamers explore them. Basically, train another model that takes the LLM, converts it into 3D shapes, and puts them in some 3D world that is understandable to humans.

Simpler example: represent an LLM as a green field with objects, where humans are the only agents:

You stand near a monkey and see a chewing mouth nearby, so you go there (your prompt is now “monkey chews”). Close by you see an arrow pointing at a banana, farther away an arrow points at an apple, and very far away at the horizon an arrow points at a tire (monkeys rarely chew tires).

So things close by are more likely tokens, things far away are less likely, and you see all of them at once (maybe you’re on top of a hill to see farther). This way we can make a form of static place AI, where humans are the only agents.

soulofmischief · a year ago
I had a mind-bending Salvia trip at eighteen that went sort of like that.

My mind turned into an infinitely large department store where each aisle was a concurrent branch of thought, and the common ingredient lists above each aisle were populated with words, feelings and concepts related to each branch.

The PA system replaced my internal monologue, which I no longer had, but instead I was hearing my thoughts externally as if they were another person's.

I was able to walk through these aisles and marvel at the immense, fractal, interdependent web of concurrent thought my brain was producing in realtime.

_rpxpx · a year ago
“When I began to navigate psychospace with LSD, I realized that before we were conscious, seemingly self-propelled human beings, many tapes and corridors had been created in our minds and reflexes which were not of our own making. These patterns and tapes laid down in our consciousness are walled off from each other. I see it as a vast labyrinth with high walls sealing off the many directives created by our personal history.

Many of these directives are contradictory. The coexistence of these contradictory programs is what we call inner conflict. This conflict causes us to constantly check ourselves while we are caught in the opposition of polarity. Another metaphor would be like a computer with many programs running simultaneously. The more programs that are running, the slower the computer functions. This is a problem then. With all the programs running that are demanded of our consciousness in this modern world, we have problems finding deep integration.

To complicate matters, the programs are reinforced by fear. Fear separates, love integrates. We find ourselves drawn to love and unity, but afraid to make the leap.

What I found to be the genius of LSD is that it really gets you high, higher than the programs, higher than the walls that mask and blind one to the energy destroying presence of many contradictory but hidden programs. When LSD is used intentionally it enables you to see all the tracks laid down, to explore each one intensely. It also allows you to see the many parallel and redundant programs as well as the contradictory ones.

It allows you to see the underlying unity of all opposites in the magic play of existence. This allows you to edit these programs and recreate superior programs that give you the insight to shake loose the restrictions and conflicts programmed into each one of us by our parents, our religion, our early education, and by society as a whole.”

~ Nick Sand, 2001, Mind States conference, quoted in Casey Hardison's obituary

antonkar · a year ago
We have our first Neo candidate)

The guy who’ll make the GUI for LLMs is the next Jobs/Gates/Musk and Nobel Prize Winner (I think it’ll solve alignment by having millions of eyes on the internals of LLMs), because computers became popular only after the OS with a GUI appeared. I recently shared how one of its “apps” possibly can look: https://news.ycombinator.com/item?id=43319726

neom · a year ago
If you feel like being a hippie, you can find the "rendering engine for reality" in here:

- Mandelbrot (1980) – The Mandelbrot Set and fractal geometry
- Julia (1918) – Mémoire sur l'itération des fonctions rationnelles (Julia sets)
- Meyer (1996) – Quantum Cellular Automata (procedural complexity)
- Wolfram (1984) – Cellular automata as models of complexity
- Bak et al. (1987) – Self-organized criticality
- Wolfram, Gorard & Crowley (2020) – "A Class of Models with the Potential to Represent Fundamental Physics"
- Kari & Culik (2009) – "Universal Pattern Generation by Cellular Automata"

Just combine the papers. Ofc that is crazy, but it's fun to be a bit crazy sometimes. It's one of my fav thought experiments, just for fun. :)
jgord · a year ago
I don't think anyone has found a good way to map higher-dimensional spaces onto 4D visualizations, yet.

Maybe this is why tokens and language are so useful for humans? They might be the closest analog we have.

antonkar · a year ago
Good point, I think at least some lossy “compression” into a GUI is possible.
ikeashark · a year ago
What
antonkar · a year ago
What what?)
meindnoch · a year ago
Sir, this is a Wendy's.
bongodongobob · a year ago
What
antonkar · a year ago
Another related shocking idea: https://news.ycombinator.com/item?id=43319726
nbeleski · a year ago
Seems similar to, or at least partially overlapping with, what I would say is the best reference on the subject, An Introduction to Statistical Learning by Gareth James et al. [1].

I wonder if this one might be a bit more accessible, although I guess the R/Python examples in the latter are helpful.

[1] https://www.statlearning.com/

whimsicalism · a year ago
not really, islr is a pretty basic book - this is about more advanced techniques that propagate full probability estimates rather than point-wise ones

and frankly i would not recommend islr today, it's too dated

keviniam · a year ago
What would you (or other informed parties) recommend?
chasely · a year ago
Kevin Murphy racing to rename his Probabilistic Machine Learning series.
brador · a year ago
Interesting separation and distinction between noisy inputs, noisy processing and noisy chains.
overu589 · a year ago
Existential Reality is potential distribution not arrangement of states.

Potential exists, probability is a mathematical description of its distribution. Every attribute is a dimension (vector). State is merely a passing measurement of resolve. Potential interacts through constructive and destructive interference. Constructive and destructive interference resolve to state in a momentary measure of “now” (an inevitability decaying proposition.)

Existential Reality is potential distributing, not arrangements of state.