SequoiaHope commented on New York’s budget bill would require “blocking technology” on all 3D printers   blog.adafruit.com/2026/02... · Posted by u/ptorrone
b00ty4breakfast · 6 days ago
This is a case of me not knowing and assuming, ha. I remember the peak days of the RepRap scene, so I just assumed that as that slowed down, the whole thing died off.
SequoiaHope · 6 days ago
I was attending Bay Area Reprap Club meetings in 2010! Got my first printer (Ultimaker V1) in 2011. My how things have changed. We just got a second Bambu H2D Pro at work. Incredible machine.
SequoiaHope commented on Tesla is committing automotive suicide   electrek.co/2026/01/29/te... · Posted by u/jethronethro
testing22321 · 12 days ago
Yesterday on the earnings call Elon said the reveal is in April “hopefully”.
SequoiaHope · 6 days ago
Sure. April Fools' Day, I imagine.
SequoiaHope commented on New York’s budget bill would require “blocking technology” on all 3D printers   blog.adafruit.com/2026/02... · Posted by u/ptorrone
b00ty4breakfast · 7 days ago
I would unironically love to see the diy 3d printer scene come back.
SequoiaHope · 7 days ago
It never went away. The Voron continues to be a popular DIY 3D printer, though many people choose to buy ready-made printers.
SequoiaHope commented on Show HN: I trained a 9M speech model to fix my Mandarin tones   simedw.com/2026/01/31/ear... · Posted by u/simedw
SequoiaHope · 11 days ago
Amazingly I just did the same thing! Only with AISHELL. It needs work. I used the encoder from the Meta MMS model.

https://github.com/sequoia-hope/mandarin-practice
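In case it helps anyone reading along, here's a minimal sketch of what "using the encoder from the Meta MMS model" can look like: a frozen wav2vec2-style MMS encoder with a small tone-classification head on top. The checkpoint name, the mean pooling, and the five-class tone setup are illustrative assumptions on my part, not details taken from the repo above.

    # Sketch only: frozen MMS (wav2vec2) encoder + small Mandarin tone head.
    # Checkpoint, pooling, and 5-class tone setup are assumptions for illustration.
    import torch
    import torch.nn as nn
    from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

    class ToneClassifier(nn.Module):
        def __init__(self, encoder_name="facebook/mms-300m", num_tones=5):
            super().__init__()
            self.encoder = Wav2Vec2Model.from_pretrained(encoder_name)
            self.encoder.requires_grad_(False)          # keep the big encoder frozen
            self.head = nn.Linear(self.encoder.config.hidden_size, num_tones)

        def forward(self, input_values):
            feats = self.encoder(input_values).last_hidden_state  # (B, T, H)
            return self.head(feats.mean(dim=1))                   # pool over time -> (B, 5)

    extractor = Wav2Vec2FeatureExtractor()               # default 16 kHz config
    model = ToneClassifier()
    wav = torch.randn(16000).numpy()                      # one second of fake audio
    inputs = extractor(wav, sampling_rate=16000, return_tensors="pt")
    print(model(inputs.input_values).shape)               # torch.Size([1, 5])

Training just the head on AISHELL syllable clips (tones 1-4 plus neutral) with cross-entropy is then the straightforward next step.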

SequoiaHope commented on Tesla is committing automotive suicide   electrek.co/2026/01/29/te... · Posted by u/jethronethro
SequoiaHope · 12 days ago
Amazing that Tesla announced the next-generation Roadster and sold pre-orders which I think cost $100k, and then just never released it. There seems to be no indication (last I checked) that it ever will be.
SequoiaHope commented on Project Genie: Experimenting with infinite, interactive worlds   blog.google/innovation-an... · Posted by u/meetpateltech
avaer · 12 days ago
Soft disagree; if you want imagination you don't need to make a video model. You probably don't need to decode the latents at all. That seems pretty far from information-theoretic optimality, the kind that you want in a good+fast AI model making decisions.

The whole reason LLMs infer human-readable text, and "world models" infer human-interactive video, is precisely so that humans can connect in and debug the thing.

I think the purpose of Genie is to be a video game, but it's a video game for AI researchers developing AIs.

I do agree that the entertainment implications are kind of the research exhaust of the end goal.

SequoiaHope · 12 days ago
Didn’t the original world models paper do some training in latent space? (Edit: yes[1])

I think robots imagining the next step (in latent space) will be useful. It’s useful for people. A great way to validate that a robot is properly imagining the future is to make that latent space renderable in pixels.

[1] “By using features extracted from the world model as inputs to an agent, we can train a very compact and simple policy that can solve the required task. We can even train our agent entirely inside of its own hallucinated dream generated by its world model, and transfer this policy back into the actual environment.”

https://arxiv.org/abs/1803.10122
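To make the "imagine in latent space, decode only to check" idea concrete, here's a toy sketch (not the World Models code, and every size in it is made up): a tiny encoder/decoder pair and a recurrent dynamics model rolled forward purely in latent space, with pixels decoded only when we want to look at what the agent is imagining.

    # Toy sketch: dream rollout in latent space, decode to pixels only to inspect.
    # Architecture and sizes are illustrative, not from the paper.
    import torch
    import torch.nn as nn

    LATENT, ACTION = 32, 3
    encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU(),
                            nn.Linear(256, LATENT))
    decoder = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                            nn.Linear(256, 64 * 64), nn.Unflatten(1, (1, 64, 64)))
    dynamics = nn.GRUCell(LATENT + ACTION, LATENT)   # next latent from (latent, action)
    policy = nn.Linear(LATENT, ACTION)               # compact policy reads only the latent

    obs = torch.rand(1, 1, 64, 64)                   # fake 64x64 grayscale frame
    z = encoder(obs)
    for _ in range(10):                              # 10-step "dream", no pixels involved
        a = torch.tanh(policy(z))
        z = dynamics(torch.cat([z, a], dim=-1), z)

    frame = decoder(z)                               # render the imagined state
    print(frame.shape)                               # torch.Size([1, 1, 64, 64])

The policy and dynamics never touch pixels during the rollout; the decoder exists so a human can check that what the model is imagining still looks like the environment.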

SequoiaHope commented on Prism   openai.com/index/introduc... · Posted by u/meetpateltech
techblueberry · 14 days ago
There's this thing where all the thought leaders in software engineering ask "What will change about building a business when code is free?" and while there are some cool things, I've also thought it could have some pretty serious negative externalities. I think this question is going to become big everywhere - business, science, etc. - which is like: OK, you have all this stuff, but is it valuable? Which of it actually takes away value?
SequoiaHope · 14 days ago
To be fair, the question “what will change” does not presume the changes will be positive. I think it’s the right question to ask, because change is coming whether we like it or not. While we do have agency, there are large forces at play which impact how certain things will play out.
SequoiaHope commented on Ask HN: Gmail spam filtering suddenly marking everything as spam?    · Posted by u/goopthink
zukzuk · 17 days ago
This has been “down” for me for a few months now, ever since Google tied this functionality to the same toggle that opts you in to using your email data for AI training. So now you can’t filter this stuff without also agreeing to a whole swath of unrelated opt-ins.

I’ve since gone on an unsubscribe campaign, and things seem bearable now.

SequoiaHope · 17 days ago
Same. Can’t ignore the messages when they’re all in one place, which makes hitting unsubscribe easier.
SequoiaHope commented on The state of modern AI text to speech systems for screen reader users   stuff.interfree.ca/2026/0... · Posted by u/tuukkao
cachius · 18 days ago
Gloomy bottom line:

So what's the way forward for blind screen reader users? Sadly, I don't know.

Modern text to speech research has little overlap with our requirements. Using Eloquence [32-bit voice last compiled in 2003], the system that many blind people find best, is becoming increasingly untenable. ESpeak uses an odd architecture originally designed for computers in 1995, and has few maintainers. Blastbay Studios [...] is a closed-source product with a single maintainer, that also suffers from a lack of pronunciation accuracy.

In an ideal world, someone would re-implement Eloquence as a set of open source libraries. However, doing so would require expertise in linguistics, digital signal processing, and audiology, as well as excellent programming abilities. My suspicion is that modernizing the text to speech stack that is preferred by blind power-users is an effort that would require several million dollars of funding at minimum.

Instead, we'll probably wind up having to settle for text to speech voices that are "good enough", while being nowhere near as fast and efficient [800 to 900 words per minute] as what we have currently.

SequoiaHope · 18 days ago
My big takeaway was that a great way AI could help would be to aid in decompiling Eloquence, though I don’t know if there are gotchas there.

I found some sample audio from Eloquence. I like this type of voice!

https://youtu.be/bBp8NP3JTpI

SequoiaHope commented on Nvidia Stock Crash Prediction   entropicthoughts.com/nvid... · Posted by u/todsacerdoti
pvab3 · 21 days ago
Inference requires a fraction of the power that training does. According to the Villalobos paper, the median date for running out of training data is 2028. At some point we won't be training bigger and bigger models every month. We will run out of additional material to train on, things will continue commodifying, and then the amount of training happening will significantly decrease unless new avenues open for new types of models. But our current LLMs are much more compute-intensive than any other type of generative or task-specific model.
SequoiaHope · 21 days ago
Run out of training data? They’re going to put these things in humanoids (which are weirdly cheap now), record high-resolution video and other sensor data of real-world tasks, and train huge multimodal Vision-Language-Action models, etc.

The world is more than just text. We can never run out of pixels if we point cameras at the real world and move them around.

I work in robotics, and I don’t think people talking about this stuff appreciate that text and internet pictures are just the beginning. Robotics is poised to generate and consume TONS of data from the real world, not just the internet.

u/SequoiaHope

Karma: 19339 · Cake day: February 4, 2013
About
Hi I’m Sequoia! (she/her)

I make robots mostly. I want to figure out how robots can help solve social problems. Read my writing or see my portfolio at tlalexander.com and see my robots at http://reboot.love

My cool robot arm: https://twitter.com/TLAlexander/status/1455320442642714625

See my farming robotics work here: https://community.twistedfields.com/t/a-closer-look-at-acorn-our-open-source-precision-farming-rover/108

And:

https://community.twistedfields.com/t/join-the-solar-farming-revolution-support-acorn-and-empower-farmers-worldwide/370

You are welcome to email me but usually I don’t end up responding to out of the blue emails (due to procrastination or disorganization). [tlalexander at gmail]
