icelancer commented on Manim: Animation engine for explanatory math videos   github.com/3b1b/manim... · Posted by u/pykello
sansseriff · 4 days ago
If you're passionate about it, then go for it! Though be aware that several others have tried similar things:

- https://www.befreed.ai/knowledge-visualizer

- https://kodisc.com/

- https://github.com/hesamsheikh/AnimAI-Trainer

- https://tiger-ai-lab.github.io/TheoremExplainAgent/

- https://tma.live/, HN discussion: https://news.ycombinator.com/item?id=42590290

- https://generative-manim.vercel.app/

No doubt the results can be impressive: https://x.com/zan2434/status/1898145292937314347

The only reason I'm aware of all these attempts is that I'm betting the 'one-shot LLM animation' technique is not scalable long term. I'm trying to build an AI animation app that has a good human-in-the-loop experience, though I'm building with Bevy instead of manim.

icelancer · 3 days ago
I have used Tiger AI Lab's harness and have opened several issues/tickets there. It's not a very serious project and doesn't work... basically at all. It's a good idea, especially the VLM review, but the results are not good whatsoever.
icelancer commented on Manim: Animation engine for explanatory math videos   github.com/3b1b/manim... · Posted by u/pykello
sansseriff · 4 days ago
I remember listening to a podcast where Grant Sanderson basically said the opposite. He tried generating manim code with LLMs and found the results unimpressive. Probably just goes to show that competence in manim looks very different to us laymen than it does to Grant, haha.
icelancer · 3 days ago
Yeah, I've mostly had Grant's experience. Some frameworks have hooked in VLMs to "review" the manim animations and drawings, but it doesn't help much.
icelancer commented on Show HN: Fractional jobs – part-time roles for engineers   fractionaljobs.io... · Posted by u/tbird24
icelancer · 8 days ago
This is great. I currently work a fractional role on top of being a founder. Wish it was more commonly available.
icelancer commented on Show HN: Whispering – Open-source, local-first dictation you can trust   github.com/epicenter-so/e... · Posted by u/braden-w
hereme888 · 8 days ago
Earlier today I discovered Vibe: https://github.com/thewh1teagle/vibe

Local, using WhisperX. Precompiled binaries available.

I'm hoping to find and try a local-first version of something like nvidia/canary (e.g. https://huggingface.co/nvidia/canary-qwen-2.5b), since it's almost twice as fast as Whisper with an even lower word error rate.

icelancer · 8 days ago
Been using WhisperX myself for years. The big factor is the diarization it offers through pyannote in a single package. I do like the software even if it makes some weird choices and has configuration issues.

Allegedly Groq will be offering diarization with their cloud offering and super-fast API, which will be huge for those willing to go off-local.
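For anyone curious what that single-package pipeline looks like, here's a rough sketch of WhisperX's transcribe → align → diarize flow. It follows the project's README, but the exact entry points, the Hugging Face token requirement, and the input file name are assumptions and vary across versions:

```python
import whisperx

device = "cuda"  # assumes an NVIDIA GPU; use "cpu" otherwise
audio = whisperx.load_audio("meeting.wav")  # hypothetical input file

# 1. Transcribe with a batched Whisper backend
model = whisperx.load_model("large-v2", device, compute_type="float16")
result = model.transcribe(audio, batch_size=16)

# 2. Align to get word-level timestamps
align_model, metadata = whisperx.load_align_model(
    language_code=result["language"], device=device
)
result = whisperx.align(result["segments"], align_model, metadata, audio, device)

# 3. Diarize via pyannote and attach speaker labels to segments/words
diarize_model = whisperx.DiarizationPipeline(use_auth_token="YOUR_HF_TOKEN", device=device)
diarize_segments = diarize_model(audio)
result = whisperx.assign_word_speakers(diarize_segments, result)

for seg in result["segments"]:
    print(seg.get("speaker"), seg["text"])
```

The diarization step is the part that pulls in pyannote (and hence the token); everything above it is plain Whisper transcription.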

icelancer commented on Who Invented Backpropagation?   people.idsia.ch/~juergen/... · Posted by u/nothrowaways
cs702 · 9 days ago
Whatever the facts, the OP comes across as sour grapes. The author, Jürgen Schmidhuber, believes Hopfield and Hinton did not deserve their Nobel Prize in Physics, and that Hinton, Bengio, and LeCun did not deserve their Turing Award. Evidently, many other scientists disagree, because both awards were granted in consultation with the scientific community. Schmidhuber's own work was, in fact, cited by the Nobel Prize committee as background information for the 2024 Nobel.[a] Only future generations of scientists, looking at the past more objectively, will be able to settle these disputes.

[a] https://www.nobelprize.org/uploads/2024/11/advanced-physicsp...

icelancer · 9 days ago
Didn't click the article, came straight to the comments thinking "I bet it's Schmidhuber being salty."

Some things never change.

icelancer commented on Cognitive decline can be slowed down with lifestyle changes   smithsonianmag.com/smart-... · Posted by u/ulrischa
apsurd · 21 days ago
How come it's so hard for people to drink water? Honest question. Like why don't you just wake up in the morning, take a whiz, brush your teeth, and drink a glass of water?

What gets in between? Because the first two have a 99% success rate, I'd bet.

icelancer · 21 days ago
Because most people drink to thirst, not out of habit.
icelancer commented on Qwen-Image: Crafting with native text rendering   qwenlm.github.io/blog/qwe... · Posted by u/meetpateltech
cellis · 22 days ago
Also, I think you need a 40GB "card", not just 40GB of VRAM. I wrote about this upthread: you're probably going to need one card; I'd be surprised if you could chain several GPUs together.
icelancer · 22 days ago
Oh right, I forgot some diffusion models can't offload / split layers. I don't use vision generation models much at all - was just going off LLM work. Apologies for the potential misinformation.
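For comparison, this is roughly what offloading looks like on the diffusers side; whether it actually applies to Qwen-Image depends on the pipeline implementation, and the model ID below is only illustrative:

```python
import torch
from diffusers import DiffusionPipeline

# Illustrative model ID; the real pipeline class and its offload support may differ.
pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)

# Keeps only the currently-running submodule on the GPU and parks the rest in
# system RAM. This trades VRAM for (much) slower generation, and not every
# diffusion pipeline supports it, so in practice a single big card is the safe bet.
pipe.enable_model_cpu_offload()

image = pipe("a corgi reading a newspaper").images[0]
image.save("out.png")
```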
icelancer commented on Qwen-Image: Crafting with native text rendering   qwenlm.github.io/blog/qwe... · Posted by u/meetpateltech
rwmj · 23 days ago
This may be obvious to people who do this regularly, but what kind of machine is required to run this? I downloaded & tried it on my Linux machine that has a 16GB GPU and 64GB of RAM. This machine can run SD easily. But Qwen-image ran out of space both when I tried it on the GPU and on the CPU, so that's obviously not enough. But am I off by a factor of two? An order of magnitude? Do I need some crazy hardware?
icelancer · 23 days ago
> This may be obvious to people who do this regularly

This is not that obvious. Calculating VRAM usage for VLMs/LLMs is something of an arcane art. There are about 10 calculators online you can use and none of them work. Quantization, KV caching, activations, layers, etc. all play a role. It's annoying.

But anyway, for this model, you need 40+ GB of VRAM. System RAM isn't going to cut it unless it's unified RAM on Apple Silicon, and even then, memory bandwidth is shot, so inference is much much slower than GPU/TPU.
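To illustrate why the calculators disagree, here's the back-of-envelope version for a transformer LLM. The model shape below is hypothetical, and activations/framework overhead are ignored, which is exactly the part the calculators tend to get wrong:

```python
def estimate_vram_gb(
    params_b: float,        # parameter count, in billions
    bytes_per_param: float, # 2 for fp16/bf16, 1 for int8, 0.5 for 4-bit
    n_layers: int,
    n_kv_heads: int,
    head_dim: int,
    seq_len: int,
    batch: int = 1,
    kv_bytes: float = 2.0,  # fp16 KV cache
) -> float:
    weights = params_b * 1e9 * bytes_per_param
    # KV cache: K and V (hence the 2) per layer, per token, per KV head
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * kv_bytes
    return (weights + kv_cache) / 1e9

# e.g. a hypothetical 20B-parameter model in bf16 with a 32k context:
# ~40 GB of weights plus ~6 GB of KV cache
print(round(estimate_vram_gb(20, 2, 48, 8, 128, 32_768), 1), "GB")
```

For a diffusion model of roughly this size, the weights term dominates: 20B parameters in bf16 is already about 40 GB before activations, which is where the 40+ GB figure comes from.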

icelancer commented on Qwen-Image: Crafting with native text rendering   qwenlm.github.io/blog/qwe... · Posted by u/meetpateltech
barefootford · 23 days ago
gpt doesn't respect masks
icelancer · 23 days ago
Correct. Have tried this without much success despite OpenAI's claims.
icelancer commented on Claude Code weekly rate limits    · Posted by u/thebestmoshe
flashgordon · a month ago
Yeah, at this point the goal is to see how to maximize for inference. For training, it is impossible from the get-go to compete with the frontier labs anyway. I'm trying to calculate (even amortized over 2 years) the daily cost of running an equivalent rig that can get close to a single Claude agent's performance (without needing a 6-digit GPU).
icelancer · a month ago
Really the only reason to have a local setup is for 24/7 on-demand high-volume inference that can't tolerate enormous cold starts.

u/icelancer

Karma: 12925 · Cake day: September 3, 2012
About
Former miserable Data Scientist. I run a small business now.