adi_pradhan commented on Show HN: Tailwind Template Directory (tailkits.com/...) · Posted by u/yucelfaruksahan
geekodour · a year ago
I have been using 3.5 Sonnet to generate Tailwind components, and it does a pretty good job. With clsx, tailwind-merge, and tailwind-variants, things become pretty maintainable. I reach out to shadcn-svelte when I need components that would take some extra tuning.
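A minimal sketch of how those first two compose, assuming the published clsx and tailwind-merge APIs; the cn helper name is just a common convention, not something either library exports:

  import { clsx, type ClassValue } from "clsx";
  import { twMerge } from "tailwind-merge";

  // Resolve conditional class names with clsx, then let tailwind-merge
  // drop earlier Tailwind utilities that conflict with later ones.
  export function cn(...inputs: ClassValue[]) {
    return twMerge(clsx(inputs));
  }

  // "p-2" loses to the later, conflicting "p-4".
  const large = true;
  const classes = cn("p-2 text-sm", large && "p-4"); // "text-sm p-4"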
adi_pradhan · a year ago
Same. 3.5 Sonnet seems particularly good at React and Tailwind. I'm moving away from other frameworks and libraries to React and Tailwind because of this.
adi_pradhan commented on Video generation models as world simulators (openai.com/research/video...) · Posted by u/linksbro
empath-nirvana · 2 years ago
I think people might be missing what this enables. It can make plausible continuations of video, with realistic physics. What happens if this gets fast enough to work _in real time_?

Connect this to a robot that has a real-time camera feed. Have it constantly generate potential future continuations of the feed it's getting -- maybe more than one. You have an autonomous robot building a real-time model of the world around it and predicting the future. Give it some error correction based on how well each prediction models the actual outcome, and I think you're _really_ close to AGI.

You can probably already imagine different ways to wire the output to text generation, to controlling its own motions, and so on: predicting outcomes for actions it could plausibly take itself, and choosing the best one.

It doesn't actually have to generate imagery that is realistic, mistake-free, or high definition to be used in that way. How realistic is our own imagination of the world?

Edit: I'm going to add a specific case. Imagine a house-cleaning robot. It starts with an image of your living room. Then it creates an image of your living room after it's been cleaned. Then it interpolates a video _imagining itself cleaning the room_, then acts as much as it can to mimic what's in the video, then generates a new continuation, then acts, and so on. Imagine doing that several times a second, if necessary.
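A rough sketch of that imagine-act loop, just to make the control flow concrete; every interface here (Camera, VideoModel, Robot) and the similarity threshold are hypothetical placeholders, not any real robotics or model API:

  type Frame = Uint8Array;

  interface Camera { currentFrame(): Frame; }
  interface VideoModel {
    imagineGoal(now: Frame): Frame;               // e.g. "room after cleaning"
    interpolate(from: Frame, to: Frame): Frame[]; // imagined video of the work
    similarity(a: Frame, b: Frame): number;       // 0..1, how close two states are
  }
  interface Robot { mimic(step: Frame): void; }   // act out one imagined frame

  function cleaningLoop(cam: Camera, model: VideoModel, bot: Robot) {
    const goal = model.imagineGoal(cam.currentFrame());
    while (model.similarity(cam.currentFrame(), goal) < 0.95) {
      // Re-plan from the real camera feed each iteration, act out only
      // the first few imagined frames, then loop -- several times a second.
      const plan = model.interpolate(cam.currentFrame(), goal);
      for (const step of plan.slice(0, 3)) bot.mimic(step);
    }
  }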

adi_pradhan · 2 years ago
Thanks for adding the specific case. I think, with testing, these sorts of limited-domain applications make sense.

It'll be much harder for more open-ended world problems, where the physics encountered may be rare enough in the dataset that the simulation breaks unexpectedly. For example, a glass smashing onto the floor: the model doesn't simulate that causally, afaik.

u/adi_pradhan

Karma: 3 · Cake day: February 17, 2018
About: @adidoit