growthwtf commented on Meta launches Hyperscape, technology to turn real-world spaces into VR   techcrunch.com/2025/09/17... · Posted by u/PaulHoule
andybak · 4 months ago
Well we know it's a gaussian splat, we know what the inputs are (RGB and 6dof pose) and we know how gaussian splat training is usually done...
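For anyone unfamiliar, the usual fitting loop is just gradient descent on a photometric loss through a differentiable renderer. A runnable toy in 2D (real 3DGS uses anisotropic 3D covariances, spherical-harmonic colors, and a tile rasterizer; everything below is illustrative, not Meta's pipeline):

    import torch

    # Toy 2D splat fitting: render gaussians onto a grid, compare to a
    # target image, backprop a photometric loss. Stands in for the posed-RGB
    # training that real 3DGS does in 3D.
    H = W = 32
    N = 50
    yy, xx = torch.meshgrid(torch.linspace(0, 1, H),
                            torch.linspace(0, 1, W), indexing="ij")
    target = (xx > 0.5).float()  # stand-in for one captured RGB frame

    means = torch.rand(N, 2, requires_grad=True)        # gaussian centers
    log_s = torch.full((N,), -3.0, requires_grad=True)  # isotropic log-scale
    color = torch.rand(N, requires_grad=True)           # per-gaussian intensity

    opt = torch.optim.Adam([means, log_s, color], lr=1e-2)
    for step in range(200):
        d2 = ((xx[None] - means[:, 0, None, None]) ** 2
              + (yy[None] - means[:, 1, None, None]) ** 2)
        w = torch.exp(-d2 / (2 * log_s.exp()[:, None, None] ** 2))
        rendered = (w * color[:, None, None]).sum(0).clamp(0, 1)
        loss = (rendered - target).abs().mean()  # photometric L1 (real 3DGS adds SSIM)
        opt.zero_grad(); loss.backward(); opt.step()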
growthwtf · 4 months ago
Sure, in production envs I have seen humans being used in three places:

1. the pose data calibration
2. cleaning up covariances (reducing blobbiness)
3. adding metadata for app usage

But, to your point, it's hard to say which of these, if any, apply without more info. I would be very, very impressed if there were no humans and it's 'just' a training-time issue, though!
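For (2), the cleanup is often just thresholding: drop gaussians whose covariance is too large, too stretched, or too faint to matter. A hypothetical sketch of the kind of filter a human ends up tuning (parameter names and thresholds are illustrative, not anyone's actual pipeline):

    import torch

    def prune_blobby(log_scale, opacity, max_extent=0.5, max_aniso=10.0):
        # Per-gaussian world-space extents along the 3 scale axes, shape (N, 3).
        s = log_scale.exp()
        too_big   = s.max(dim=-1).values > max_extent        # oversized blobs
        too_thin  = (s.max(dim=-1).values
                     / s.min(dim=-1).values.clamp_min(1e-8)) > max_aniso
        too_faint = opacity.sigmoid().squeeze(-1) < 0.01     # near-invisible floaters
        return ~(too_big | too_thin | too_faint)             # keep-mask for every param tensor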
growthwtf commented on Meta launches Hyperscape, technology to turn real-world spaces into VR   techcrunch.com/2025/09/17... · Posted by u/PaulHoule
andybak · 4 months ago
It's not "rendering" with gaussian splats. It's more "training" (or "fitting"). And without knowing what the usage vs. compute ratio is, I would hesitate to comment.

But knowing a little bit about gaussian splatting, I can't think what manual steps requiring human assistance are even likely to be necessary?

growthwtf · 4 months ago
Without knowing the specifics of their pipeline I would also hesitate to comment further.
growthwtf commented on Meta launches Hyperscape, technology to turn real-world spaces into VR   techcrunch.com/2025/09/17... · Posted by u/PaulHoule
andybak · 4 months ago
False. This is just gaussian splats being queued up on a server somewhere.
growthwtf · 4 months ago
You could be correct, but it would be a real indictment of their rendering farm, I think.
growthwtf commented on Meta launches Hyperscape, technology to turn real-world spaces into VR   techcrunch.com/2025/09/17... · Posted by u/PaulHoule
growthwtf · 4 months ago
Rendering taking a few hours suggests humans are building it, at least partially.
growthwtf commented on Cloudflare Introduces NET Dollar   cloudflare.com/press/pres... · Posted by u/wilhelmklopp
growthwtf · 5 months ago
You all need to stop being so pessimistic. This is a great idea.

Want PBS to stick around? Make it so anybody who's on ChatGPT gets great answers from PBS, and every time ChatGPT scrapes it, PBS gets money.

Is it extremely difficult? Obviously. Will it work? Probably not; very few things do. Is it a great thing that some folks are doing it and trying to make it work so that we can have a functional media ecosystem in a post-social-media age? Absolutely.
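Mechanically, the plumbing for "scraper pays" already half-exists in HTTP: status 402 Payment Required. A toy sketch of the publisher side (the header names are invented for illustration; this is not Cloudflare's actual protocol):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    PRICE_USD = "0.001"  # hypothetical per-crawl price

    class PaywalledCrawls(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.headers.get("X-Crawler-Payment-Token"):  # illustrative header
                self.send_response(200)
                self.end_headers()
                self.wfile.write(b"Full article text for the paying crawler.")
            else:
                self.send_response(402)  # Payment Required
                self.send_header("X-Crawl-Price-USD", PRICE_USD)  # illustrative
                self.end_headers()

    HTTPServer(("", 8402), PaywalledCrawls).serve_forever()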

growthwtf commented on Apple Silicon GPU Support in Mojo   forum.modular.com/t/apple... · Posted by u/mpweiher
lqstuart · 5 months ago
I’m saying actual algorithmic (as in not data) model innovation has never been a significant part of the revenue generation in the field. You get your random forest, or ResNet, or BERT, or MaskRCNN, or GPT-2-with-One-Weird-Trick, and then you spend four hours trying to figure out how to preprocess your data.

On the flipside, far from figuring out GPU efficiency, most people with huge jobs are network bottlenecked. And that’s where the problem arises: solutions for collective comms optimization tend to explode in complexity because, among other reasons, you now have to package entire orchestrators in your library somehow, which may fight with the orchestrators that actually launch the job.

Doing my best to keep it concise, but Hopper is a good case study. I want to use Megatron! Suddenly you need FP8, which means the CXX11 ABI, which means recompiling Torch along with all those nifty toys like flash attention, flashinfer, vllm, whatever. Ray, jsonschema, Kafka and a dozen other things also need to match the same glibc and libstdc++ versions. So using that as an example, suddenly my company needs C++ CI/CD pipelines, dependency management, etc. when we didn’t before. And I just spent three commas on these GPUs. And most likely, I haven’t made a dime on my LLMs, or autonomous vehicles, or weird cyborg slavebots.
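Concretely, that ABI landmine is at least checkable up front; torch exposes the flag, and the guard is the kind of thing you end up writing yourself:

    import torch

    # Extensions built with -D_GLIBCXX_USE_CXX11_ABI=1 (FP8 kernels, flash
    # attention, etc.) will fail to link against a torch wheel built without it.
    print(torch.compiled_with_cxx11_abi())  # must match every extension you compile
    print(torch.__config__.show())          # surfaces compiler and build details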

So what all that boils down to is just that there’s a ton of inertia against moving to something new and better. And in this field in particular, it’s a very ugly, half-assed, messy inertia. It’s one thing to replace well-designed, well-maintained Java infra with Golang or something, but it’s quite another to try to replace some pile of shit deep learning library that your customers had to build a pile of shit on top of just to make it work, and all the while fifty college kids are working 16 hours a day to add even more in the next dev release, which will of course be wholly backwards and forwards incompatible.

But I really hope I’m wrong :)

growthwtf · 5 months ago
Lattner's comment aside (which I'm fanboying a little bit over), I do tend to agree with your pessimism/realism, for what it's worth. It's gonna be a long, long time before that whole mess you're describing is sorted out, but I'm confident that over the next decade we will do it. There's just too much money to be made by fixing it at this point.

I don't think it's gonna happen instantly, but it will happen, and Mojo/Modular is really the only language platform I see taking a coherent approach to it right now.

growthwtf commented on Qwen3-Omni: Native Omni AI model for text, image and video   github.com/QwenLM/Qwen3-O... · Posted by u/meetpateltech
simonw · 5 months ago
The model weights are 70GB (Hugging Face recently added a file size indicator - see https://huggingface.co/Qwen/Qwen3-Omni-30B-A3B-Instruct/tree... ) so this one is reasonably accessible to run locally.

I wonder if we'll see a macOS port soon - currently it very much needs an NVIDIA GPU as far as I can tell.
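Back-of-envelope, that number checks out (rough arithmetic; bf16 means 2 bytes per parameter, and the breakdown of the remainder is a guess):

    total_params = 30e9              # Qwen3-Omni-30B-A3B (~3B active per token)
    print(total_params * 2 / 2**30)  # ~55.9 GiB for the LLM weights in bf16
    # audio/vision towers and assorted tensors plausibly account
    # for the rest of the ~70GB seen on Hugging Face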

growthwtf · 5 months ago
A fun project for somebody with more time than I have would be to see if they can get it working with the new Mojo support for Apple GPUs from yesterday. I don't know if the functionality is fully baked enough yet to actually do the port successfully, but it would be an interesting try.
growthwtf commented on Apple Silicon GPU Support in Mojo   forum.modular.com/t/apple... · Posted by u/mpweiher
lqstuart · 5 months ago
I like Chris Lattner but the ship sailed for a deep learning DSL in like 2012. Mojo is never going to be anything but a vanity project.
growthwtf · 5 months ago
Nah. There's huge alpha here, as one might say. I feel like this comment could age even more poorly than the infamous Dropbox comment.

Even with JAX, PyTorch, HF Transformers, whatever you want to throw at it, the DX for cross-platform GPU programming that's compatible with large language model requirements specifically is extremely bad.

I think this may end up being the most important thing that Lattner has worked on in his life. (And yes, I am aware of his other projects!)

growthwtf commented on A visual introduction to big O notation   samwho.dev/big-o/... · Posted by u/samwho
samwho · 6 months ago
Part of the problem is that a lot of people that come across big O notation have no need, interest, or time to learn calculus. I think it's reasonable for that to be the case, too.
growthwtf · 6 months ago
I'm not the original commenter, but that makes a lot of sense! I had assumed there was a huge overlap, personally.
growthwtf commented on GPT-5   openai.com/gpt-5/... · Posted by u/rd
ipnon · 6 months ago
Does this mean AGI is cancelled? 2027 hard takeoff was just sci-fi?
growthwtf · 6 months ago
Good thing they didn't nuke the data centers after all!
