Readit News
Brystephor commented on M8.7 earthquake in Western Pacific, tsunami warning issued   earthquake.usgs.gov/earth... · Posted by u/jandrewrogers
decimalenough · a month ago
Japan forecasting tsunamis up to 3m across basically the entire eastern coast. First waves will hit within 10 minutes.

https://www.nhk.or.jp/kishou-saigai/tsunami/

https://www3.nhk.or.jp/news/live/ (live, Japanese)

https://www3.nhk.or.jp/nhkworld/en/live/ (live, English)

The east coast is also where the vast majority of Japan's population lives, and was previously hit by the 2011 tsunami (Fukushima and all that). We're about to find out the hard way what lessons they have learned.

Update: First detected wave in Nemuro, Hokkaido (northernmost Japan) was only 30cm. There may be more. Waves of 3-4m have apparently already hit Kamchatka in Russia.

Update 2: We're almost an hour in and highest waves to actually hit Japan remain only 40 cm. It looks unlikely that this will cause major damage.

Brystephor · a month ago
How big was the 2011 tsunami? Is 3m bigger or smaller?
Brystephor commented on Seven replies to the viral Apple reasoning paper and why they fall short   garymarcus.substack.com/p... · Posted by u/spwestwood
jes5199 · 3 months ago
I think the Apple paper is practically a hack job - the problem was set up in such a way that the reasoning models must do all of their reasoning before outputting any of their results. Imagine a human trying to solve something this way: you’d have to either memorize the entire answer before speaking or come up with a simple pattern you could do while reciting that takes significantly less brainpower - and past a certain size/complexity, it would be impossible.

And this isn’t how LLMs are used in practice! Actual agents do a thinking/reasoning cycle after each tool-use call. And I guarantee even these 6-month-old models could do significantly better if a researcher followed best practices.

Brystephor · 3 months ago
Forcing reasoning is analogous to requiring a student to show their work when solving a problem, if I'm understanding the paper correctly.

> you’d have to either memorize the entire answer before speaking or come up with a simple pattern you could do while reciting that takes significantly less brainpower

This part I don't understand. Why would coming up with an algorithm (e.g. a simple pattern) and reciting it be impossible? The paper doesn't mention the models coming up with the algorithm at all, AFAIK. If the model were able to come up with the pattern required to solve the puzzles and then also execute (e.g. recite) that pattern, that would show understanding. However, the models didn't. So if the model can answer the same question for small inputs but not for big inputs, doesn't that imply the model is not finding a pattern for producing the answer, but is more likely pulling from memory? Like, if the model could tell you Fibonacci numbers when n=5 but not when n=10, that would imply the numbers are memorized and the pattern for generating them is not understood.
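The Fibonacci analogy can be made concrete: something that has internalized the generation rule can produce any term, while something that has only memorized a small table fails the moment it steps past the table's edge. A toy sketch of that distinction (the memorized cutoff at n=5 is made up for illustration):

```python
# "Understands the pattern": applies the recurrence, so any n works.
def fib_by_rule(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# "Pulls from memory": only knows the cases it has seen.
MEMORIZED = {0: 0, 1: 1, 2: 1, 3: 2, 4: 3, 5: 5}

def fib_by_memory(n: int) -> int:
    return MEMORIZED[n]  # raises KeyError for any n > 5

print(fib_by_rule(5), fib_by_rule(10))  # 5 55 -- the rule scales
try:
    fib_by_memory(10)
except KeyError:
    print("memory-only: fails at n=10")
```

Succeeding at small n while failing at large n is exactly the signature you'd expect from the second function, not the first.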

Brystephor commented on GCP Outage   status.cloud.google.com/... · Posted by u/thanhhaimai
Brystephor · 3 months ago
Some core GCP cloud services are down. Might be a good time for GCP-dependent people to go for a walk, do some stretches, and check back in a couple hours.
Brystephor commented on Ask HN: Anyone struggling to get value out of coding LLMs?    · Posted by u/bjackman
michaelrpeskin · 3 months ago
A little snarky but: In my experience, the folks who are 100x more productive are multiplying 100 times a small number.

I've found great success with LLMs in the research phase of coding. Last week I needed to write some domain-specific linear algebra and because of some other restrictions, I couldn't just pull in LAPACK. So I had to hand code the work (yes, I know you shouldn't hand code this kind of stuff, but it was a small slice and the domain didn't require the fully-optimized LAPACK stuff). I used an LLM to do the research part that I normally would have had to resort to a couple of math texts to fully understand. So in that case it did make me 100x more effective because it found what I needed and summarized it so that I could convert it to code really quickly.

For the fun of it, I did ask the LLM to generate the code for me too, and it made very subtle mistakes that wouldn't have been obvious unless you were already an expert in the field. I could see how a junior engineer would have been impressed by it and probably just checked it in and moved on.

I'm still a firm believer in understanding every bit of code you check in, so even if LLMs get really good, the "code writing" part of my work probably won't ever get faster. But for figuring out what code to write - I think LLMs will make people much faster. The research and summarize part is amazing.

The real value in the world is synthesis and novel ideas. And maybe I'm a luddite, but I still think that takes human creativity. LLMs will be a critical support structure, but I'm not sold on them actually writing high-value code.

Brystephor · 3 months ago
> I've found great success with LLMs in the research phase of coding.

This is what I've found it most helpful for. Typically I want an example specific to my scenario, and I use an LLM to generate the scenario that I then ask questions about. It helps me go from understanding a process at a high level to learning which components are involved at a lower level, which lets me go do more research on those components elsewhere.

Brystephor commented on Ask HN: What are you working on? (May 2025)    · Posted by u/david927
Brystephor · 3 months ago
Reinforcement learning system. Currently trying to understand how to implement contextual Thompson sampling and its details, after doing non-contextual Thompson sampling. My YouTube history is a lot of logistic regression videos at the moment.
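For anyone curious about the non-contextual starting point: with Bernoulli rewards, Thompson sampling just keeps a Beta posterior per arm, samples a plausible rate from each, and pulls the argmax. A minimal sketch (the three arm rates are invented for illustration):

```python
import random

TRUE_RATES = [0.2, 0.5, 0.7]  # hidden per-arm reward probabilities (made up)

# Beta(1, 1) uniform priors: alpha counts successes+1, beta counts failures+1.
alpha = [1, 1, 1]
beta = [1, 1, 1]

random.seed(0)
for _ in range(2000):
    # Sample a plausible rate for each arm from its posterior; pick the best.
    samples = [random.betavariate(alpha[i], beta[i]) for i in range(3)]
    arm = max(range(3), key=lambda i: samples[i])
    # Observe a Bernoulli reward and update that arm's posterior counts.
    reward = 1 if random.random() < TRUE_RATES[arm] else 0
    alpha[arm] += reward
    beta[arm] += 1 - reward

pulls = [alpha[i] + beta[i] - 2 for i in range(3)]
print(pulls)  # pulls concentrate on the best arm (index 2)
```

The contextual version replaces the per-arm Beta posteriors with a posterior over model parameters (e.g. a Bayesian logistic regression per arm), which is where the logistic regression videos come in.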
Brystephor commented on Baby is healed with first personalized gene-editing treatment   nytimes.com/2025/05/15/he... · Posted by u/jbredeche
Brystephor · 4 months ago
This is incredible work. It's jaw-dropping to learn that something like this is possible at all. Sometimes I wish I could work for a company whose products make a meaningful positive contribution to the world.

Do companies like this have a need for SWEs? Are there opportunities for a backend SWE without any background in hardware or biology?

Brystephor commented on Succinct data structures   blog.startifact.com/posts... · Posted by u/pavel_lishin
Brystephor · 6 months ago
Maybe a silly question, but has anyone used these in production? Or used libraries in production which are built on these structures?

I'm imagining a meeting about some project design, and thinking about how it'd go if someone suggested using parentheses to represent nodes of a tree. I imagine it'd get written off quickly. Not because it wouldn't work, but because of the complexity and learning curve involved.
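For what it's worth, the parentheses idea is simpler than it sounds in a design meeting: a depth-first walk emits '(' on entering a node and ')' on leaving it, so an n-node tree becomes a 2n-bit string instead of pointer-sized child links. A toy encoder (nothing like a real succinct library, just the representation):

```python
def encode(tree) -> str:
    """Encode a (label, children) tree as balanced parentheses via DFS."""
    label, children = tree
    return "(" + "".join(encode(c) for c in children) + ")"

# Root 'a' with children 'b' and 'c'; 'b' has one child 'd'.
tree = ("a", [("b", [("d", [])]), ("c", [])])
bp = encode(tree)
print(bp)              # ((())())
print(len(bp) // 2)    # 4 nodes, 2 bits each
```

The hard part (and the learning curve) is the auxiliary rank/select structures that make navigation over that bit string constant-time, which is what production libraries actually provide.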

Brystephor commented on When imperfect systems are good: Bluesky's lossy timelines   jazco.dev/2025/02/19/impe... · Posted by u/cyndunlop
rubslopes · 6 months ago
This problem is discussed in the beginning of the Designing Data-Intensive Applications book. It's worth a read!
Brystephor · 6 months ago
Do you know the name of the problem or strategy used for solving the problem? I'd be interested in looking it up!

I own DDIA, but after a few chapters on how databases work behind the scenes, I begin to fall asleep. I have trouble understanding how to apply the knowledge to my work, but this seems like a useful thing with a clearer application.

Brystephor commented on Apple Invites   apple.com/newsroom/2025/0... · Posted by u/openchampagne
PaulHoule · 7 months ago
Even though "... anyone can RSVP, regardless of whether they have an Apple Account or Apple device" I think this being an Apple branded service is going to make this appear exclusionary and will mean some people won't participate even if they could.

I see the same risk involved with Apple TV's branding; Apple TV works great on Xbox, on NVIDIA Shield, and on PC. I'm sure, though, there are a lot of people who just decide that shows like Foundation and subscriptions like MLS Season Pass aren't for them. I don't know if it is a 5% or a 20% drop, but it has to be real.

Brystephor · 7 months ago
Software engineer here with an Android phone. I've never bothered to look into Apple TV because I assumed it'd only be available on Apple devices. Similarly, I saw this post and thought there might be a reason for me to get an iPhone now, as I assumed this would be available on Apple devices only.
Brystephor commented on TikTok says it is restoring service for U.S. users   nbcnews.com/tech/tech-new... · Posted by u/Leary
yieldcrv · 7 months ago
Is it a big political statement to shut down a couple hours before the deadline of shutting down?

The app stores removed the app in accordance with that timeline too.

Brystephor · 7 months ago
No. It's a big political statement to include political messaging and plead to political figures when you shut down. Then to praise those political figures afterwards is additional political messaging.

u/Brystephor
Karma: 670 · Cake day: May 16, 2018