leod commented on Show HN: A fast HNSW implementation in Rust   github.com/swapneel/hnsw-... · Posted by u/xcyto
leod · a year ago
Happy to see people working on vector search in Rust. Keep it up!

As far as HNSW implementations go, this one appears to be almost entirely unfinished. Node insertion logic is missing (https://github.com/swapneel/hnsw-rust/blob/b8ef946bd76112250...) and so is the base layer beam search.
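For reference, the missing base-layer piece is essentially the SEARCH-LAYER beam search from the HNSW paper. A minimal sketch of what it could look like, with `neighbors` and `dist` as hypothetical stand-ins for whatever graph and vector storage the crate ends up using (not code from the repo):

```rust
use std::cmp::Reverse;
use std::collections::{BinaryHeap, HashSet};

/// A node id paired with its distance to the query, ordered by distance.
#[derive(PartialEq)]
struct Candidate {
    dist: f32,
    id: usize,
}

impl Eq for Candidate {}

impl PartialOrd for Candidate {
    fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
        Some(self.cmp(other))
    }
}

impl Ord for Candidate {
    fn cmp(&self, other: &Self) -> std::cmp::Ordering {
        self.dist.total_cmp(&other.dist)
    }
}

/// Beam search over a single layer (SEARCH-LAYER in the HNSW paper).
/// `neighbors` returns a node's adjacency list and `dist` its distance
/// to the query; both are placeholders for the crate's actual storage.
fn search_layer(
    entry: usize,
    ef: usize,
    neighbors: impl Fn(usize) -> Vec<usize>,
    dist: impl Fn(usize) -> f32,
) -> Vec<usize> {
    let mut visited = HashSet::from([entry]);
    // Min-heap of nodes still to expand (closest first).
    let mut candidates = BinaryHeap::from([Reverse(Candidate { dist: dist(entry), id: entry })]);
    // Max-heap of the best `ef` results found so far (farthest on top).
    let mut results = BinaryHeap::from([Candidate { dist: dist(entry), id: entry }]);

    while let Some(Reverse(closest)) = candidates.pop() {
        // Stop once the nearest unexpanded candidate is farther away than
        // the worst result we are keeping.
        if closest.dist > results.peek().unwrap().dist {
            break;
        }
        for n in neighbors(closest.id) {
            if !visited.insert(n) {
                continue;
            }
            let d = dist(n);
            if results.len() < ef || d < results.peek().unwrap().dist {
                candidates.push(Reverse(Candidate { dist: d, id: n }));
                results.push(Candidate { dist: d, id: n });
                if results.len() > ef {
                    results.pop();
                }
            }
        }
    }
    // Return ids sorted nearest-first.
    results.into_sorted_vec().into_iter().map(|c| c.id).collect()
}
```

The insertion logic would then reuse this routine: greedily descend the upper layers with ef = 1, run it with the full ef on the base layer, and connect the new node to a pruned selection of the returned neighbors.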

leod commented on Posh: Type-Safe Graphics Programming in Rust   leod.github.io/rust/gamed... · Posted by u/leod
incrudible · 2 years ago
Using Rust for writing shaders may sound appealing, but that puts a language that is slow to compile into a domain where fast iteration times are crucial. Most shaders can be compiled in milliseconds and e.g. graphical authoring tools rely on this.
leod · 2 years ago
This is a fair point, and it's also called out in the discussion section. To some degree, it could be mitigated by hotloading shader code (and compiling shaders in debug mode). However, it remains a fundamental downside of the approach.

Personally, I think that this is a price worth paying!
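
For the hotloading route, I'm picturing something along these lines: watch the shader sources on disk and rebuild pipelines when they change. This is a generic sketch using the `notify` crate, not tied to Posh's API; the directory and the rebuild hook are placeholders.

```rust
use std::path::Path;
use std::sync::mpsc::channel;

use notify::{recommended_watcher, RecursiveMode, Watcher};

fn main() -> notify::Result<()> {
    let (tx, rx) = channel();
    let mut watcher = recommended_watcher(tx)?;

    // "assets/shaders" is just a placeholder for wherever shader sources live.
    watcher.watch(Path::new("assets/shaders"), RecursiveMode::Recursive)?;

    for event in rx {
        match event {
            Ok(_) => {
                // Recompile the affected shaders and swap them into the running
                // pipeline here, keeping the rest of the app state intact.
                println!("shader source changed, recompiling...");
            }
            Err(e) => eprintln!("watch error: {e}"),
        }
    }
    Ok(())
}
```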

leod commented on Riffusion – Stable Diffusion fine-tuned to generate music   riffusion.com/about... · Posted by u/MitPitt
leod · 3 years ago
Awesome work.

Would you be willing to share details about the fine-tuning procedure, such as the initialization, learning rate schedule, batch size, etc.? I'd love to learn more.

Background: I've been playing around with generating image sequences from sliding windows of audio. The idea roughly works, but the model training gets stuck due to the difficulty of the task.

leod commented on CNN-generated images are surprisingly easy to spot for now   peterwang512.github.io/CN... · Posted by u/hardmaru
leod · 6 years ago
Interesting. They train an image classifier to detect images generated by a GAN-trained CNN. I wonder whether it would be possible to include this classifier in the training loss, so that the generated images fly under its radar as much as possible. If this makes sense, then I guess the cat-and-mouse game just gained another level. On the other hand, what the classifier is detecting could be a fingerprint of the CNN architecture itself, which would be harder to optimize away.
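
To spell out the loss idea: with the detector D_f frozen, the generator objective could get an extra fooling term, roughly

```latex
\mathcal{L}_G' = \mathcal{L}_G + \lambda \, \mathbb{E}_{z}\!\left[ \log D_f\!\left( G(z) \right) \right]
```

where D_f(x) is the detector's probability that x is generated; minimizing the extra term pushes samples under its radar. The weight λ and the exact form are just a sketch, not anything from the paper.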

(Full disclosure: I have only read the abstract so far.)

leod commented on Talking to myself: how I trained GPT2-1.5b for rubber ducking using my chat data   svilentodorov.xyz/blog/gp... · Posted by u/Tenoke
CDSlice · 6 years ago
It doesn't seem very accurate; there isn't nearly enough Electron hate whenever it's in the title.

This is pure gold though:

> How does one make a web app using a standard framework? I've never used it, but it sounds like someone has been able to put together something like a Web app with only one app.

Edit: This is even better.

> Rewriting a Linux kernel in Rust, by hand, is definitely the right thing to do as a beginner/intermediate programmer.

leod · 6 years ago
Ha! In the model's defense, its training data [1] ends in 2017 -- not sure if hatred for Electron was as prevalent back then.

[1] https://archive.org/details/14566367HackerNewsCommentsAndSto...

leod commented on Talking to myself: how I trained GPT2-1.5b for rubber ducking using my chat data   svilentodorov.xyz/blog/gp... · Posted by u/Tenoke
rahimnathwani · 6 years ago
This is cool. If you were to cache the results and generate a unique URL for each, people could easily share the funniest ones.
leod · 6 years ago
Thanks! I actually planned to make results shareable at the start, but, knowing the internet, I didn't like the idea of being held responsible for whatever content (say, offensive or even illegal things) people would put into the titles.

leod commented on Talking to myself: how I trained GPT2-1.5b for rubber ducking using my chat data   svilentodorov.xyz/blog/gp... · Posted by u/Tenoke
blazespin · 6 years ago
I would be curious to know, when we write, how much of it is self-attention and how much is our forebrain actually trying to make sense. My guess is that the more tired / rushed / burned out you are, the higher the share of self-attention.

Sometimes watching the news, it seems like 90% of what they say when they are 'vamping' is just self-attention.

Has anyone posted any GPT / Hacker News generated text yet? Wisdom of the crowds, indeed. It'd be interesting to post using it with light editing, especially something that uses upvotes for training.

One of the things I was thinking about was training on your favorite novel, so you could have a sort of conversation with it / ask it questions. A kind of interactive Cliff's Notes. However, as I looked into it, I realized it was still too much of a Markov-chain-like thing to be functionally useful. Fun idea, though.

The real win in all of this, of course, is autocompletion in different mediums. Code completion demos are pretty wild: https://tabnine.com/blog/deep/ Come to think of it, you could probably use it for writing academic papers as well, assuming you know the content well.

Self-attention and human/computer interaction is a very brave new world. I don't think people really grasp yet the potential for a seismic shift here.

leod · 6 years ago
I've trained a Transformer encoder-decoder model (this was slightly before GPT2 came out) to generate HN comments from titles. There is a demo running at https://hncynic.leod.org

u/leod

Karma: 89 · Cake day: April 13, 2012