activatedgeek commented on Show HN: Klarity – Open-source tool to analyze uncertainty/entropy in LLM output   github.com/klara-research... · Posted by u/mrciffa
Der_Einzige · 7 months ago
You and the OP talk a lot of smack about logprobs, but we show that using them even in the simple case of dynamically truncating your cutoff point (min_p sampler vs. static top_p/top_k) leads to extreme performance improvements (especially on small models) and unlocks very high-temperature sampling (for more creativity/less slop/better synthetic data-gen): https://arxiv.org/abs/2407.01082 [1].
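
For readers who haven't seen it, a minimal sketch of the min-p idea (my reconstruction, not the paper's reference implementation; implementations differ on whether temperature is applied before or after the cutoff). The threshold scales with the top token's probability, so it tightens when the model is confident and loosens when it isn't:

    import numpy as np

    def min_p_sample(logits, p_base=0.1, temperature=1.0, rng=None):
        if rng is None:
            rng = np.random.default_rng()
        # Base probabilities (no temperature) define the dynamic cutoff.
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        # Keep tokens within a p_base fraction of the top token's probability.
        keep = probs >= p_base * probs.max()
        # Apply temperature to the survivors only, then renormalize and sample.
        scaled = np.where(keep, logits / temperature, -np.inf)
        p = np.exp(scaled - scaled[keep].max())
        p /= p.sum()
        return rng.choice(len(logits), p=p)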

Indeed, ultra-high-temperature sampling should be studied in its own right. I can do top_k = 2 and temperature = system.maxint and get decent results which are extraordinarily creative (with an increasing probability of token-related spelling issues as top_k goes up).
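
To see why that works, a toy illustration (my numbers, not the commenter's): truncating to the top two tokens first means the temperature can be arbitrarily large and sampling still only flips a coin between plausible candidates:

    import numpy as np

    logits = np.array([5.0, 4.2, 1.0, -2.0])  # toy 4-token vocabulary
    k, temperature = 2, 1e9                   # "system.maxint"-scale temperature
    top = np.argsort(logits)[-k:]             # indices of the top-k tokens
    scaled = logits[top] / temperature        # ~0 for every surviving token
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    print(probs)                              # -> approximately [0.5, 0.5]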

I'm convinced that the model's logprobs hold so much bloody value and knowledge that I unironically do not care how many "theoretical guarantees" they lack or about their non-correspondence to our usage of language.

[1]: Btw, this paper is now accepted at ICLR 2025 and likely going to get an oral/honorable mention, since we are ranked #18 out of all submissions by scores and have an extremely favorable meta-review. Peer review seems to agree with our claims of extreme performance improvements.

activatedgeek · 7 months ago
Congratulations on the strong reception of min-p. Very clever!

We may be talking about two orthogonal things here. And also to be clear, I don't care about theoretical guarantees either.

Now, min-p is solving for the inadequacies of standard sampling techniques. It is almost like a clever adaptive search, something other sampling methods fail to achieve (despite truncations like top-k/top-p).

However, one thing I noticed in the min-p results was that lower temperatures were almost always better for final performance (and, quite expectedly, the inverse for creative writing). This observation makes me think that the underlying model is generally fairly good at ranking the best tokens. What sampling gives us is a margin for error in cases where the model ranked a relevant next token not at the top, but slightly lower.

Therefore, my takeaway from min-p is that it solves for deficiencies of current samplers, but its success does not contradict the fact that logprobs are bad proxies for semantics. Sampling is the simplest form of search, and I agree with you that better sampling methods are a solid ingredient for extracting information from logprobs.

activatedgeek commented on Show HN: Klarity – Open-source tool to analyze uncertainty/entropy in LLM output   github.com/klara-research... · Posted by u/mrciffa
deoxykev · 7 months ago
The fundamental challenge of using log probabilities to measure LLM certainty is the mismatch between how language models process information and how semantic meaning actually works. Current models analyze text token by token: fragments that don't necessarily align with complete words, let alone complex concepts or ideas.

This creates a gap between the mechanical measurement of certainty and true understanding, much like mistaking the map for the territory or confusing the finger pointing at the moon with the moon itself.

I've done some work before in this space, trying to come up with different useful measures from the logprobs, such as measuring Shannon entropy over a sliding window, or even the bzip compression ratio as a proxy for information density. But I didn't find anything semantically useful or reliable to exploit.
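
For concreteness, a minimal sketch of the sliding-window entropy measure (my reconstruction, not the original code), assuming you have the full log-prob vector at each generated position:

    import numpy as np

    def windowed_entropy(token_logprobs, window=16):
        # Per-position Shannon entropy H = -sum(p * log p), from logprobs.
        H = [-float(np.sum(np.exp(lp) * lp)) for lp in token_logprobs]
        # Mean entropy over each sliding window of positions.
        return [float(np.mean(H[i:i + window]))
                for i in range(len(H) - window + 1)]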

The best approach I found was just multiple-choice questions: "Does X entail Y? Please output [A] True or [B] False." Then measure the logprobs of the next token, which should be `[A` (90%) or `[B` (10%). Then we might make a statement like: the LLM thinks there is a 90% probability that X entails Y.
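
A minimal sketch of that probing setup (the model and prompt here are illustrative placeholders, not from the comment), reading the next-token distribution with Hugging Face transformers and renormalizing over the two option tokens:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "gpt2"  # stand-in; any causal LM with logprob access works
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)

    prompt = "Does X entail Y? Output [A] True or [B] False.\nAnswer: ["
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]          # next-token logits
    probs = torch.softmax(logits, dim=-1)

    a, b = tok.encode("A")[0], tok.encode("B")[0]  # option token ids
    p_true = probs[a] / (probs[a] + probs[b])      # renormalize over options
    print(f"P(X entails Y) ~ {p_true:.2f}")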

activatedgeek · 7 months ago
That has been my understanding too. More generally, a verifier at the end certainly helps.

In our paper [1], we find that asking a follow-up question like "Is the answer correct?" and taking the normalized probability of the "Yes" or "No" token (or, more generally, any such token trained for) seems to be the best bet so far to get well-calibrated probabilities out of the model.
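
The normalization step itself is tiny (notation mine): a softmax over exactly the two candidate tokens.

    import math

    def yes_no_confidence(lp_yes, lp_no):
        # lp_yes, lp_no: logprobs the model assigns to "Yes" and "No".
        m = max(lp_yes, lp_no)  # subtract max for numerical stability
        e_yes, e_no = math.exp(lp_yes - m), math.exp(lp_no - m)
        return e_yes / (e_yes + e_no)

    print(yes_no_confidence(-0.2, -2.3))  # ~0.89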

In general, the log-probability of tokens is not a good indicator of anything other than satisfying the pre-training loss function of predicting the next token (it is likely very well-calibrated on that task, though). The semantics of language are a much less tamable object, especially since we don't quite have a good way to estimate a normalizing constant, because every answer can be paraphrased in many ways and still be correct. The volume of correct answers in the generation space of a language model is just too small.

There is work that shows one way to approximate the normalizing constant via sequential Monte Carlo (SMC) [2], but I believe we are more likely to benefit from having a verifier at train time than from any other approach.

And there are stop-gap solutions to make log probabilities more reliable by only computing them on "relevant" tokens, e.g. only the final numerical answer tokens for a math problem [3]. But this approach somewhat sidesteps the problem of actually finding the relevant tokens. Perhaps something more in the spirit of System 2 attention, which selects meaningful tokens for the generated output, would be more promising [4].
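
A simple version of that heuristic (my loose paraphrase, not the exact scoring rule from [3]): score a completion by the mean logprob of just the answer span, ignoring the reasoning tokens before it.

    def answer_confidence(chosen_logprobs, answer_len):
        # chosen_logprobs: logprob of each generated token, in order.
        # answer_len: number of tokens in the final numerical answer,
        # assumed to be the last span of the output.
        tail = chosen_logprobs[-answer_len:]
        return sum(tail) / answer_len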

[1]: https://arxiv.org/abs/2406.08391
[2]: https://arxiv.org/abs/2404.17546
[3]: https://arxiv.org/abs/2402.10200
[4]: https://arxiv.org/abs/2311.11829

activatedgeek commented on Learning to Reason with LLMs   openai.com/index/learning... · Posted by u/fofoz
OkGoDoIt · a year ago
Some practical notes from digging around in their documentation: in order to get access to this, you need to be on their usage tier 5, which requires $1,000 total paid and 30+ days since first successful payment.

Pricing is $15.00 / 1M input tokens and $60.00 / 1M output tokens. The context window is 128k tokens; max output is 32,768 tokens.

There is also a mini version with double the maximum output tokens (65,536 tokens), priced at $3.00 / 1M input tokens and $12.00 / 1M output tokens.

The specialized coding version they mentioned in the blog post does not appear to be available for use.

It’s not clear if the hidden chain-of-thought reasoning is billed as paid output tokens. Has anyone seen any clarification about that? If you are paying for all of those tokens, it could add up quickly. If you expand the chain-of-thought examples on the blog post, they are extremely verbose.

https://platform.openai.com/docs/models/o1
https://openai.com/api/pricing/
https://platform.openai.com/docs/guides/rate-limits/usage-ti...

activatedgeek · a year ago
Reasoning tokens are indeed billed as output tokens.

> While reasoning tokens are not visible via the API, they still occupy space in the model's context window and are billed as output tokens.

From here: https://platform.openai.com/docs/guides/reasoning
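
So a quick back-of-envelope at the pricing quoted upthread ($15/M input, $60/M output); the 10k hidden reasoning tokens below are a hypothetical figure, not from the docs:

    def o1_cost(input_tokens, visible_output_tokens, reasoning_tokens):
        # Reasoning tokens are billed at the output-token rate.
        output_tokens = visible_output_tokens + reasoning_tokens
        return input_tokens * 15 / 1e6 + output_tokens * 60 / 1e6

    # e.g. a 2k-token prompt, 500 visible tokens, 10k hidden reasoning tokens:
    print(f"${o1_cost(2_000, 500, 10_000):.2f}")  # ~$0.66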

activatedgeek commented on Perceived Age   suryad.com/blog/percieved... · Posted by u/sdan
activatedgeek · a year ago
This effect is very interesting.

For the interested, Veritasium covered this effect in a video [1].

[1]: https://www.youtube.com/watch?v=aIx2N-viNwY (2016)

activatedgeek commented on Ask HN: What is the best way to author blogs in 2024?    · Posted by u/badrabbit
activatedgeek · a year ago
I use Astro + Cloudflare Pages for my website [1]. I document the key bits of my stack here [2] for completeness.

I've been very happy with Astro because it is a good example of low-floor, high-ceiling software. I can start with plain HTML, make it more flexible with the Astro language (still very close to HTML), make authoring easier with Markdown (plus quality-of-life extensions from Remark/Rehype), and extend to frameworks like React on an as-needed basis (which I use for some pages where I use maps).

[1]: https://sanyamkapoor.com [2]: https://sanyamkapoor.com/kb/the-stack
