Posted by u/lakshith-403 2 years ago
Show HN: We've open-sourced our LLM attention visualization library (github.com/labmlai/inspec...)
Inspectus allows you to create interactive visualizations of attention matrices with just a few lines of Python code. It’s designed to run smoothly in Jupyter notebooks through an easy-to-use Python API. Inspectus provides multiple views to help you understand language model behaviors. If you have any questions, feel free to ask!
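A minimal notebook sketch of the flow (assuming the `inspectus.attention` helper and a Hugging Face model as shown in the repo; check the README for the exact signature):

```python
# Sketch: pull attention maps from a Hugging Face model and hand them to Inspectus.
# Assumes inspectus exposes an attention(attn, tokens) helper; exact API may differ.
import inspectus
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The quick brown fox jumps over the lazy dog"
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs, output_attentions=True)

# outputs.attentions: tuple of (batch, heads, seq, seq) tensors, one per layer
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
inspectus.attention(outputs.attentions, tokens)
```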
xcodevn · 2 years ago
On a related note: I recently released a visualization of all MLP neurons inside the llama3 8B model. Here is an example "derivative" neuron that is triggered when the text talks about the derivative concept.

https://neuralblog.github.io/llama3-neurons/neuron_viewer.ht...
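
A rough sketch of how per-token MLP activations like these can be captured with a forward hook (the module path assumes a Llama-style model in transformers, and the layer/neuron indices are made-up examples):

```python
# Record the activation of a single MLP "neuron" (input to down_proj) for every token.
# Module layout assumes a Llama-style model; adjust paths for the model you're inspecting.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-8B"  # any causal LM with layers[i].mlp.down_proj works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

layer, neuron = 20, 1234  # hypothetical layer/neuron indices
acts = {}

def hook(module, inputs, output):
    # inputs[0]: (batch, seq, intermediate_size) activations feeding down_proj,
    # i.e. one value per MLP neuron per token
    acts["neuron"] = inputs[0][0, :, neuron].float().cpu()

handle = model.model.layers[layer].mlp.down_proj.register_forward_hook(hook)
ids = tokenizer("The derivative of x^2 is 2x", return_tensors="pt")
with torch.no_grad():
    model(**ids)
handle.remove()

for tok, a in zip(tokenizer.convert_ids_to_tokens(ids["input_ids"][0]), acts["neuron"]):
    print(f"{tok:>12s}  {a.item():+.3f}")
```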

skulk · 2 years ago
This is insanely fun to just flip through. I found a "sex" neuron. https://neuralblog.github.io/llama3-neurons/neuron_viewer.ht...
vpj · 2 years ago
Pretty cool. The tokens are highlighted based on the activation?
xcodevn · 2 years ago
Yes, you're correct. The tokens are highlighted based on the neuron activation value, which is scaled to a range of 0 to 10.
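For illustration, one plausible way to do that scaling (the viewer's actual scheme may differ):

```python
import numpy as np

def scale_to_0_10(acts):
    """Map raw neuron activations onto a 0-10 highlight scale (one guess at the scheme)."""
    acts = np.clip(np.asarray(acts, dtype=float), 0, None)  # ignore negative activations
    peak = acts.max()
    return np.zeros_like(acts) if peak == 0 else np.round(10 * acts / peak)
```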
SushiHippie · 2 years ago
This seems to be what Anthropic and OpenAI did in their research:

Golden Gate Claude - https://news.ycombinator.com/item?id=40459543 (60 comments, 16 days ago)

Extracting Concepts from GPT-4 - https://news.ycombinator.com/item?id=40599749 (144 comments, 2 days ago)

lakshith-403 · 2 years ago
Interesting. I think OpenAI uses sparse autoencoders here to map out sparse activation patterns in networks and compare them to how a real person reasons about a situation.

Inspectus, on the other hand, is a general tool to visualize how transformer models pay attention to different parts of the data they process.

dimatura · 2 years ago
That OpenAI work is more elaborate. It trains an additional network in such a way that it encodes what GPT is doing in terms of activations, but in a more interpretable way (hopefully). Here, as far as I can tell, it's visualizing the activation of the attention layers directly.
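For a sense of the sparse-autoencoder idea, a toy sketch over cached activations (not OpenAI's actual setup; the sizes and penalty weight are arbitrary):

```python
# Toy sparse autoencoder trained to reconstruct model activations through a
# wider, sparsely-active feature layer that is hopefully more interpretable.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model=768, d_hidden=8 * 768):
        super().__init__()
        self.enc = nn.Linear(d_model, d_hidden)
        self.dec = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        feats = torch.relu(self.enc(x))   # sparse feature activations
        return self.dec(feats), feats

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
l1 = 1e-3                                 # sparsity penalty weight

acts = torch.randn(4096, 768)             # stand-in for cached model activations
for batch in acts.split(256):
    recon, feats = sae(batch)
    loss = ((recon - batch) ** 2).mean() + l1 * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```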
ravjo · 2 years ago
Sounds great. Non-engineer, but curious. Is there a walkthrough blog post or video that can help someone appreciate/understand this easily?
swifthesitation · 2 years ago
Attention in transformers, visually explained | Chapter 6, Deep Learning - 3Blue1Brown: https://www.youtube.com/watch?v=eMlx5fFNoYc&t=
lakshith-403 · 2 years ago
Thank you
blackbear_ · 2 years ago
Loosely related, but also a great read: https://distill.pub/2020/circuits/zoom-in/
benf76 · 2 years ago
This looks cool but can you explain how to make it useful?
lakshith-403 · 2 years ago
I'm not a primary user; I just cleaned up the existing codebase to make it open source. But you could use this to visualize attention patterns and debug the model.

For example, if you're working on a Q&A model, you can check which tokens in the prompt contributed to the output. That makes it possible to detect issues like the output not paying attention to any important part of the prompt.
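
A sketch of that kind of check (assuming full-sequence attentions from `output_attentions=True`; the averaging and threshold are arbitrary illustrations):

```python
# For each generated token, measure how much attention mass (averaged over layers
# and heads) falls on the prompt, and flag answers that mostly ignore it.
import torch

def prompt_attention_mass(attentions, prompt_len):
    """attentions: tuple of (batch, heads, seq, seq) tensors, one per layer."""
    attn = torch.stack(attentions).mean(dim=(0, 2))   # average layers and heads -> (batch, seq, seq)
    gen_rows = attn[0, prompt_len:, :prompt_len]       # generated tokens attending to prompt tokens
    return gen_rows.sum(dim=-1)                        # attention mass on the prompt, per generated token

# e.g. warn if most generated tokens put under 20% of their attention on the prompt:
# mass = prompt_attention_mass(outputs.attentions, prompt_len)
# if (mass < 0.2).float().mean() > 0.5:
#     print("Output may be ignoring the prompt")
```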

3abiton · 2 years ago
Ambiguity about how to actually use a tool plagues lots of OSS projects. Guides/tutorials always help drive usage much more; just look at the usage of GPT-3 vs ChatGPT (which is GPT-3.5 with a web UI slapped on top of it).

Deleted Comment

JackYoustra · 2 years ago
Hey! This is pretty neat, it reminds me of the graphs made by transformer_lens. Cool to see all of these visualization libraries popping up!