Readit News
antoineMoPa commented on Zig feels more practical than Rust for real-world CLI tools   dayvster.com/blog/why-zig... · Posted by u/dayvster
antoineMoPa · 3 months ago
I also loved Zig when typing code by hand, but I increasingly use AI to write my code, even in personal projects. In that context I'd rather use Rust, since the AI takes care of the complex syntax anyway. Also, the Rust ecosystem is bigger, so I'd rather stick with that community.

> Developers are not Idiots

I'm often distracted and AIs are idiots, so a stricter language can keep both me and AIs from doing extra dumb stuff.

antoineMoPa commented on Show HN: Price Per Token – LLM API Pricing Data   pricepertoken.com/... · Posted by u/alexellman
antoineMoPa · 5 months ago
It would be fun to compare with inference providers (Groq, Vertex AI, etc.).
antoineMoPa commented on Ask HN: What are you working on? (March 2025)    · Posted by u/david927
antoineMoPa · 9 months ago
I'm making a tiny, tiny LLM in Rust (using candle) to teach myself AI: https://github.com/antoineMoPa/rust-text-experiments
antoineMoPa commented on The FFT Strikes Back: An Efficient Alternative to Self-Attention   arxiv.org/abs/2502.18394... · Posted by u/iNic
andersource · 10 months ago
A fully connected layer has different weights for each feature (or position in the input, in your formulation). So the word "hello" would be treated completely differently if it were to appear in position 15 vs. 16, for example.

Attention, by contrast, would treat those two occurrences similarly, with the only difference coming from the positional encoding, so you can learn generalized patterns more easily.
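
To make that concrete, here's a toy sketch (my own illustration, not from the paper), ignoring positional encoding and using identity Q/K/V projections:

```rust
// Toy illustration: with no positional encoding and identity Q/K/V
// projections, self-attention gives a token the same output wherever it
// appears, because the attention weights are computed from the token
// embeddings themselves rather than from their positions.

fn dot(a: &[f64], b: &[f64]) -> f64 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

/// Output of single-head attention for the token at index `i`
/// (identity projections: queries, keys and values are the embeddings).
fn attend(seq: &[Vec<f64>], i: usize) -> Vec<f64> {
    // Similarity of token i with every token, then softmax.
    let scores: Vec<f64> = seq.iter().map(|t| dot(&seq[i], t)).collect();
    let max = scores.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = scores.iter().map(|s| (s - max).exp()).collect();
    let z: f64 = exps.iter().sum();

    // Weighted sum of the value vectors.
    let mut out = vec![0.0; seq[0].len()];
    for (w, t) in exps.iter().zip(seq) {
        for (o, v) in out.iter_mut().zip(t) {
            *o += (w / z) * v;
        }
    }
    out
}

fn main() {
    let hello = vec![1.0, 0.0];
    let world = vec![0.0, 1.0];

    // "hello" first vs. "hello" second: identical attention output,
    // since nothing in the computation depends on the position.
    let a = attend(&[hello.clone(), world.clone()], 0);
    let b = attend(&[world, hello], 1);
    println!("{:?} == {:?}", a, b);

    // A fully connected layer over the flattened sequence would instead
    // multiply slot 0 and slot 1 by different weight rows, so the same
    // token would be treated differently in each position.
}
```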

antoineMoPa · 10 months ago
I think that this is the explanation I needed, thanks!
antoineMoPa commented on The FFT Strikes Back: An Efficient Alternative to Self-Attention   arxiv.org/abs/2502.18394... · Posted by u/iNic
antoineMoPa · 10 months ago
What I don't get about attention is why it would be necessary when a fully connected layer can also "attend" to all of the input. With very small datasets (think 0-500 tokens), I found that attention made training slower and results worse. I guess the benefits show up with much larger datasets. Note that I'm an AI noob just doing some personal projects, so I'm not exactly an authority.
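
A rough back-of-envelope comparison of parameter counts (hypothetical sizes, just to illustrate why the fully connected approach stops scaling as sequences get longer):

```rust
// Back-of-envelope comparison (hypothetical sizes): a dense layer over the
// flattened sequence grows quadratically with sequence length, while
// attention's projection matrices stay the same size for any length.

fn main() {
    let (seq_len, dim) = (512usize, 64usize); // illustrative numbers only

    // Fully connected layer mapping the flattened sequence to itself:
    // a (seq_len * dim) x (seq_len * dim) weight matrix.
    let dense_params = (seq_len * dim) * (seq_len * dim);

    // Single-head attention: Q, K, V and output projections, each dim x dim,
    // shared across all positions.
    let attn_params = 4 * dim * dim;

    println!("dense over flattened input: {dense_params} weights"); // ~1.07e9
    println!("attention projections:      {attn_params} weights"); // 16384
}
```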
antoineMoPa commented on Forgejo: A self-hosted lightweight software forge   forgejo.org/... · Posted by u/thunderbong
antoineMoPa · a year ago
It's not clear from the landing page whether it's a Git hosting platform, a Mercurial one, or an entirely new VCS. I wish it were clearer (looking at the README, it is indeed a Git hosting platform).

As a user seeing this landing page for the first time, I don't really care about the governance model, so I wonder why it's so prominent instead of telling me what the actual product is.

antoineMoPa commented on Show HN: SuperSplat – open-source 3D Gaussian Splat Editor   playcanvas.com/supersplat... · Posted by u/ovenchips
antoineMoPa · a year ago
Has anyone built a text-to-splat 3D generation model? It seems like it would be pretty straightforward, and it should make it really easy to generate assets for video games.

EDIT: yep - https://gsgen3d.github.io/

antoineMoPa commented on Civet: A Superset of TypeScript   civet.dev/... · Posted by u/revskill
Kab1r · a year ago
Am I the only one who really dislikes the syntax choices here?
antoineMoPa · a year ago
Nope! I'm also not convinced by it.
antoineMoPa commented on Mira Murati leaves OpenAI   twitter.com/miramurati/st... · Posted by u/brianjking
imjonse · a year ago
I am glad most people do not talk in real life in the same style this message was written in.
antoineMoPa · a year ago
To me, this looks like something ChatGPT would write.

u/antoineMoPa

Karma: 1870 · Cake day: April 27, 2015
About
Twitter, GitHub: @antoineMoPa