gozzoo commented on How AI on Microcontrollers Works: Operators and Kernels   danielmangum.com/posts/ai... · Posted by u/hasheddan
gozzoo · 2 months ago
What about the so-called NPUs present in some modern microcontroller chips?
gozzoo commented on Eleven v3   elevenlabs.io/v3... · Posted by u/robertvc
GrayShade · 3 months ago
I'm not sure what you mean. I chose Romanian from the language selector and tried Matilda, Alice and Laura. Laura actually sounds like an English TTS trying to pronounce Romanian.
gozzoo · 3 months ago
Exactly the same thing with Bulgarian voices.
gozzoo commented on Show HN: I rewrote my Mac Electron app in Rust   desktopdocs.com/?v=2025... · Posted by u/katrinarodri
gozzoo · 3 months ago
Isn't such an app best implemented with a cross-platform framework like Flutter? It has support for all major desktop OSes, and at least the examples run very smoothly.
gozzoo commented on What Is Entropy?   jasonfantl.com/posts/What... · Posted by u/jfantl
gozzoo · 4 months ago
The visualisation is great, and the topic is interesting and very well explained. Can somebody recommend other blogs with a similar style of presentation?
gozzoo commented on Rebuilding Prime Video UI with Rust and WebAssembly   infoq.com/presentations/p... · Posted by u/8s2ngy
gozzoo · 4 months ago
I'm not familiar with Rust or WebAssembly, but isn't Flutter more appropriate specifically for such applications?
gozzoo commented on Hacker Laws   hacker-laws.com/... · Posted by u/kaonwarb
JSR_FDED · 5 months ago
So many people get Occam’s razor wrong. I like the way you describe it as the least number of concepts and assumptions, rather than just “simplest”.
gozzoo · 5 months ago
How is "least number of concepts and assumptions" different than “simplest”?
gozzoo commented on Tencent's 'Hunyuan-T1'–The First Mamba-Powered Ultra-Large Model   llm.hunyuan.tencent.com/#... · Posted by u/thm
AJRF · 5 months ago
Iman Mirzadeh on Machine Learning Street Talk (great podcast if you haven't already listened!) put into words a thought I had: LLM labs are so focused on making those scores go up that it's becoming a bit of a perverse incentive.

If your headline metric is a score, and you constantly test on that score, it becomes very tempting to do anything that makes that score go up, i.e. train on the test set.

I believe all the major ML labs are doing this now because:

- No one talks about their data set

- The scores are front and center in big releases, but there is very little discussion or nuance beyond the metric.

- The repercussions of not having a higher or comparable score are massive: the release is judged a failure and your budget gets cut.

More in-depth discussion of capabilities, while harder, is a good signal for a release.
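
To make the "train on the test set" worry concrete, here is a minimal sketch of the kind of n-gram overlap check that model reports have described for screening training data against benchmark test sets. Everything here (the function names, the 13-token window) is an illustrative assumption, not any lab's actual pipeline:

```python
# Minimal sketch of a test-set contamination check: flag training
# documents that share long token n-grams with benchmark test items.
# All names and the window size are illustrative assumptions.

def ngrams(text: str, n: int = 13) -> set:
    """Return the set of n-token windows in a text."""
    tokens = text.lower().split()
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def contaminated(train_doc: str, test_set: list[str], n: int = 13) -> bool:
    """True if the training document shares any n-gram with a test item."""
    doc_grams = ngrams(train_doc, n)
    return any(doc_grams & ngrams(item, n) for item in test_set)

# Example: a training document that quotes a benchmark question verbatim.
test_items = ["What is the capital of France? A) Paris B) London C) Rome D) Berlin"]
train_doc = ("Study guide: What is the capital of France? "
             "A) Paris B) London C) Rome D) Berlin. Answer: A")
print(contaminated(train_doc, test_items))  # True: verbatim overlap found
```

Real pipelines run at scale over deduplicated shards and use fuzzier matching, but the principle is the same: any long verbatim overlap between training data and a benchmark's test items is a red flag.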

gozzoo · 5 months ago
Intelligence is so vaguely defined and has so many dimensions that it is practically impossible to assess. The only approximation we have is the benchmarks we currently use. It is no surprise that model creators optimize their models for the best results in these benchmarks. Benchmarks have helped us drastically improve models, taking them from a mere gimmick to "write my PhD thesis." Currently, there is no other way to determine which model is better or to identify areas that need improvement.

That is to say, focusing on scores is a good thing. If we want our models to improve further, we simply need better benchmarks.
