pilooch commented on Nubian Translation for Childhood Songs by Hamza El Din   nubianfoundation.org/tran... · Posted by u/tzury
pilooch · a month ago
The 'Eclipse' album is a classic.
pilooch commented on Meta Ray-Ban Display   meta.com/blog/meta-ray-ba... · Posted by u/martpie
pilooch · 3 months ago
I like the glasses path (I do wear glasses), but some elements remain unclear to me:

- Are prescription lenses available for the display? I guess not?
- These glasses need to be online; I guess they do so via a Bluetooth connection to a nearby phone? So that's the glasses, the band, and the phone, oh, and the glasses case. That seems like a lot to carry.
- Pedestrian navigation seems to be rolled out per city, so it's not like having Google Maps available right out of the box.

pilooch commented on I got the highest score on ARC-AGI again swapping Python for English   jeremyberman.substack.com... · Posted by u/freediver
pilooch · 3 months ago
Congrats, this solution resembles AlphaEvolve. Text serves as the high-level search space, and genetic mixing (MAP-Elites in AlphaEvolve) merges attempts at lower levels.
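For readers unfamiliar with MAP-Elites, here is a minimal toy sketch of the idea (a hypothetical one-dimensional problem, not AlphaEvolve's actual implementation): the archive keeps one elite per behavior-descriptor cell, so diverse attempts survive and can be selected as parents for further mixing.

```python
import random

def fitness(x):
    # Toy objective: prefer values near 0.5 (stand-in for a real evaluator).
    return 1.0 - abs(x - 0.5)

def descriptor(x):
    # Bin the behavior space [0, 1] into 10 cells.
    return min(int(x * 10), 9)

def map_elites(iterations=2000, seed=0):
    rng = random.Random(seed)
    archive = {}  # cell -> (fitness, solution)
    for _ in range(iterations):
        if archive:
            # Select a random elite as parent and mutate it.
            _, parent = rng.choice(list(archive.values()))
            child = min(max(parent + rng.gauss(0, 0.1), 0.0), 1.0)
        else:
            child = rng.random()
        cell = descriptor(child)
        f = fitness(child)
        # Keep the child only if it beats the current elite in its cell.
        if cell not in archive or f > archive[cell][0]:
            archive[cell] = (f, child)
    return archive

archive = map_elites()
print(len(archive), "cells filled")
```

In AlphaEvolve-style systems the "solution" would be a program or prompt text and the evaluator an actual benchmark run, but the select-mutate-place loop is the same shape.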
pilooch commented on Will AI Replace Human Thinking? The Case for Writing and Coding Manually   ssp.sh/brain/will-ai-repl... · Posted by u/articsputnik
jjallen · 4 months ago
I have gone from using Claude Code all day long since the day it was launched to only using the separate Claude app. In my mind that is a nice balance of using it, but not too much, not too fast.

There is the temptation to just let these things run in our codebases, which I think is totally fine for some projects. For most websites I think this would usually be fine, for two reasons: 1) these models have been trained on more websites than probably anything else, and 2) if a div or some text is off by a little bit, there will usually be no huge problems.

But if you're building something that is mission critical, it's risky unless you go super slowly, which again is hard to do because these agents tempt you to go super fast. That is sort of the allure of them: being able to write software super fast.

But as we all know, in some programs you cannot have a single character wrong or the whole program may not work or have any value. At least that is how the one I am working on is.

I found that I lost the mental map of the codebase I am working on. Claude Code had done too much too fast.

This morning I found a function it had written to validate futures/stocks/FUT-OPT/STK-OPT symbols, and the validation was super basic and terrible. We had implemented some very strong validation against actual symbol data a week or two ago, but that wasn't fully rolled out everywhere. So now I need to go back and do this.
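The gap between those two kinds of validation can be sketched like this (hypothetical ticker formats and a tiny hard-coded reference set, not the actual codebase): a shape-only check happily accepts symbols that don't exist, while a check against real symbol data rejects them.

```python
import re

# Naive, "super basic" check: any short uppercase alphanumeric string passes.
def naive_validate(symbol):
    return bool(re.fullmatch(r"[A-Z0-9./-]{1,12}", symbol))

# Stronger check: the symbol must exist in a reference set that would be
# loaded from actual exchange data (a tiny stand-in here).
KNOWN_SYMBOLS = {"ES", "NQ", "AAPL", "MSFT"}

def data_validate(symbol):
    return symbol in KNOWN_SYMBOLS

# "ZZZZ" looks like a ticker but isn't one: the naive check accepts it,
# the data-backed check does not.
print(naive_validate("ZZZZ"), data_validate("ZZZZ"))
```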

Anyway, I think these tools are helpful for finding where certain code is written and for suggesting various ways to solve problems. But the separate GUI apps can do that for us.

So for now I am going to keep just using the separate LLM apps. I will also save lots of money in the meantime (which I would gladly spend for a higher-quality Claude Code-ish setup).

pilooch · 4 months ago
Losing the mental map is the number one issue for me. I wonder if there could be a way to keep track of it, even at a high level. Keeping the ability to dig in is crucial.
pilooch commented on Mosh Mobile Shell   mosh.org... · Posted by u/rbinv
just_human · 4 months ago
Hey all, I’ve been working on an open-source Rust-based alternative to mosh that solves some of the key issues, like scrollback, and uses WebRTC instead of bootstrapping UDP over SSH (which doesn’t work on firewalled networks). Would love any feedback on which features you’d like to see!
pilooch · 4 months ago
Hello, very interested in the scrollback! I've used mosh for 10+ years, and it still runs my 100+ open terminals to this day! Would love to try your alternative.
pilooch commented on The Math Behind GANs (2020)   jaketae.github.io/study/g... · Posted by u/sebg
radarsat1 · 4 months ago
Whenever someone says this, I like to point out that GANs are very often used to train the VAE and VQVAE models that LDM models use. Diffusion is slowly encroaching on their territory with 1-step models, and there are now alternative methods to generate rich latent spaces and decoders too, so this is changing. But I'd say that up until last year, most image generators still used an adversarial objective for the encoder-decoder training. This year, I'm not sure.
pilooch · 4 months ago
Exactly. For real-time applications (VTO, simulators, ...), i.e. 60+ FPS, diffusion can't be used efficiently. The gap is still there AFAIK. One lead has been to distill DPMs into GANs; I'm not sure this works for GANs that are small enough for real time.
pilooch commented on Show HN: Luminal – Open-source, search-based GPU compiler   github.com/luminal-ai/lum... · Posted by u/jafioti
jakestevens2 · 4 months ago
Your description is exactly right. We create a search space of all possible kernels and find the best ones based on runtime. The best heuristic is no heuristic.

This obviously creates a combinatorial problem that we mitigate with smarter search.

The kernels are run on the computer the compiler is running on. Since runtime is our gold standard it will search for the best configuration for your hardware target. As long as the setup is mostly the same, the optimizations should carry over, yes.
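The search loop described above can be sketched roughly like this (illustrative names, not Luminal's actual API, and a CPU stand-in for a real GPU kernel launch): enumerate candidate kernel configurations, time each one on the local machine, and keep the fastest, so runtime itself is the selection criterion rather than a hand-written heuristic.

```python
import time

def run_kernel(config, size=200_000):
    # Stand-in for launching a real compiled kernel: a blocked sum whose
    # speed depends on the "tile size" chosen by the search.
    tile = config["tile"]
    total = 0
    for start in range(0, size, tile):
        total += sum(range(start, min(start + tile, size)))
    return total

def time_config(config, repeats=3):
    # Take the best of a few repeats to reduce timing noise.
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        run_kernel(config)
        best = min(best, time.perf_counter() - t0)
    return best

def autotune(candidates):
    # Gold standard is measured runtime on this machine: no cost model.
    return min(candidates, key=time_config)

candidates = [{"tile": t} for t in (64, 256, 1024, 4096)]
best = autotune(candidates)
print("fastest tile size on this machine:", best["tile"])
```

A real search-based compiler would mitigate the combinatorial blowup with smarter enumeration and pruning, as the comment notes, rather than timing every candidate exhaustively.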

pilooch · 4 months ago
Is this a bit similar to what TensorRT does, but in a more open manner?

u/pilooch

Karma: 750 · Cake day: January 7, 2011
About
https://www.jolibrain.com/