Readit News
throwaway31131 commented on Y Combinator files brief supporting Epic Games, says store fees stifle startups   macrumors.com/2025/08/21/... · Posted by u/greenburger
rootusrootus · 3 days ago
If there was more than a duopoly in smartphones, I'd say Apple should be able to have whatever horrible app policy they want, so long as it is clearly communicated to everyone including customers. Let the market decide.

But that's not where we are. I think it makes sense to treat both Apple and Google as de facto monopolies with respect to the smartphone market, and impose some regulation on what they have to allow and how much they can charge for it.

throwaway31131 · 3 days ago
We probably do need some kind of regulation in this space because, for better or worse (and I think it's worse), it's hard to be a participant in modern society without a smartphone. (In my mind it would be something more akin to the Communications Act of 1934, but for apps: mandating a certain amount of "interoperability" across operating systems, whatever that may mean. But I digress.)

On the other hand, it wasn't all that long ago that we had many smartphone makers and operating systems, all with different strategies. It's possible that the market did decide…

throwaway31131 commented on AGI is an engineering problem, not a model training problem   vincirufus.com/posts/agi-... · Posted by u/vincirufus
tshaddox · 4 days ago
> We don't know if AGI is even possible outside of a biological construct yet. This is key.

A discovery that AGI is impossible in principle to implement in an electronic computer would require a major fundamental discovery in physics that answers the question “what is the brain doing in order to implement general intelligence?”

throwaway31131 · 4 days ago
We would also need a definition of AGI that is provable or disprovable.

We don’t even have a workable definition, never mind a machine.

throwaway31131 commented on Developer's block   underlap.org/developers-b... · Posted by u/todsacerdoti
jay_kyburz · 4 days ago
Seems painful now, but in the blink of an eye they will sleep through the night, and when you look back years later you'll wish you could pick them up and rock them to sleep again.
throwaway31131 · 4 days ago
"The days are long, but the years are short."

Couldn’t be more true…

throwaway31131 commented on From M1 MacBook to Arch Linux: A month-long experiment that became permanent   ssp.sh/blog/macbook-to-ar... · Posted by u/articsputnik
cycomanic · 5 days ago
> But the quality of MacBooks is just another level. I had 3 or 4 so far since 2010, and each of them held at least 5 years. Crazy good.

When I read things like this, it really sounds like there is some reality distortion field in the Mac world. How is that anywhere near special? I'm running a ThinkPad X1 as one of my two main laptops (it was my only work machine until 2 years ago), and I never felt the need to replace it. It gave me 8–10 h of battery life, and the only issue I ever had was that 1.5 years ago the battery reached end of life and capacity started dropping very fast.

That was just a $70 repair I could easily do myself.

My youngest daughter just inherited my mother's x220 (?) (it has been running Linux) that I got for my mother in 2011 or 2012. It never received any work and still works fine, except that I didn't change the battery, so you have to run it off AC power.

My older daughter and my mother both just got used ThinkPads that are more than 3 years old, and they don't have any issues either.

So from my experience, a 5-year lifetime for a MacBook is really nothing special and definitely not "crazy good".

throwaway31131 · 4 days ago
Defect rate is a probability distribution, so some people will get 10 heads in a row and others will get 10 tails. And since it's the internet, we hear about both. But most people get a mixture and post nothing.
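The coin-flip framing can be made concrete with a small sketch. All the numbers here are illustrative assumptions (a hypothetical 90% five-year survival rate, four units per buyer), not measured data; the point is only that a visible minority sees an unbroken streak of good or bad luck:

```python
import random

# Hypothetical sketch: treat each laptop purchase as a biased coin flip,
# where "heads" means the unit survives 5 years without a defect.
# SURVIVAL_RATE and UNITS_PER_BUYER are assumed numbers for illustration.
SURVIVAL_RATE = 0.9
UNITS_PER_BUYER = 4   # e.g. "3 or 4 MacBooks since 2010"
BUYERS = 100_000

random.seed(42)
all_good = all_bad = 0
for _ in range(BUYERS):
    outcomes = [random.random() < SURVIVAL_RATE for _ in range(UNITS_PER_BUYER)]
    if all(outcomes):
        all_good += 1
    elif not any(outcomes):
        all_bad += 1

# With a 90% survival rate, roughly 0.9**4 ~ 66% of buyers see only good
# units, and a tiny fraction sees only lemons -- both tails post online.
print(f"every unit lasted: {all_good / BUYERS:.1%}")
print(f"no unit lasted:    {all_bad / BUYERS:.1%}")
```

Most buyers land between the two extremes and, as the comment notes, post nothing.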

Every IT survey I've ever seen shows Macs as more reliable. On the other hand, the repairs are often more expensive, so there's a trade-off.

For me, my M1 16" is a champ. The computer is almost too good. I'd like to upgrade, but honestly there's no reason to, so I just can't get myself to ditch a perfectly good computer.

throwaway31131 commented on AWS CEO says using AI to replace junior staff is 'Dumbest thing I've ever heard'   theregister.com/2025/08/2... · Posted by u/JustExAWS
pj_mukh · 6 days ago
Might want to clarify things with your boss, who says otherwise [1]? I do wish journalists would stop quoting these people unedited; no one knows what will actually happen.

[1]: https://www.shrm.org/topics-tools/news/technology/ai-will-sh...

throwaway31131 · 6 days ago
I'm not sure those statements are in conflict with each other.

“My view is you absolutely want to keep hiring kids out of college and teaching them the right ways to go build software and decompose problems and think about it, just as much as you ever have.” - Matt Garman

"We will need fewer people doing some of the jobs that are being done today” - Amazon CEO Andy Jassy

Maybe they differ in degree but not in sentiment.

throwaway31131 commented on I accidentally became PureGym’s unofficial Apple Wallet developer   drobinin.com/posts/how-i-... · Posted by u/valzevul
throwaway31131 · 11 days ago
Great post.

What’s Next: Shame Notifications: "You were literally 100 meters from the gym and walked past it"

As much as I hate to admit it, that would probably work on me and I’d probably turn it on.

throwaway31131 commented on Sam Altman is in damage-control mode after latest ChatGPT release   cnn.com/2025/08/14/busine... · Posted by u/reconnecting
yahoozoo · 13 days ago
Eh, I don't know. I spent some time over 3 days trying to get Claude Code to write a pagination plugin for ProseMirror. I had a few different branches with different implementations; none of them worked well at all, and one or two of them were really over-engineered. I asked GPT-5, via the Chat UI (I don't pay for OpenAI products), and it basically one-shot a working plugin. Not only that, the code was small and comprehensible, too.
throwaway31131 · 12 days ago
I wonder if GPT-5 benefited from what you learned about how to prompt for this problem while prompting Claude for 3 days.

I've done a couple of experiments now, and I can get an LLM to produce not-horrible, mostly functional code with effort. (I've been trying to implement algorithms from CS papers that don't link to code.) I've observed that once you discover the magic words the LLM wants and give sufficient background in the history, it can do OK.

But, for me anyway, the process of uncovering the magic words is slower than just writing the code myself. Although that could be because I'm targeting toy examples that aren't very large code bases and aren't what's in the typical internet coding demo.

throwaway31131 commented on The "high-level CPU" challenge (2008)   yosefk.com/blog/the-high-... · Posted by u/signa11
gchadwick · 15 days ago
This certainly rings true with my own experiences (worked on both GPUs and CPUs and now doing AI inference at UK startup Fractile).

> If there were a hypothetical high-level CPU language that somehow encoded all the information the microarchitecture needs to measure to manage memory access,

I think this is fundamentally impossible because of dynamic behaviours. It's very tempting to assume that if you're clever enough you can work this all out ahead of time, encode what you need as some static information, and have the whole computation run like clockwork on the hardware, with no or little area spent on scheduling, stalling, buffering, etc. But I think history has shown over and over that this just doesn't work (for more general computation at least; it's more feasible in restricted domains). There are always lots of fiddly details in real workloads that surprise you, and if you've got an inflexible system you've got no give, and you end up 'stopping the world' (or some significant part of it) to deal with them, killing your performance.

Notably, running transformer models feels like one of those restricted domains where you could do this well, but dig in and there's plenty of dynamic behaviour in there, so you can't escape the problems it causes.

throwaway31131 · 14 days ago
> I think this is fundamentally impossible because of dynamic behaviours.

I was thinking that too, as it sounds awfully close to a variant of the halting problem.

But since I'm not sure it's actually equivalent, and there might be some heuristic that constrains the problem enough, in theory, to make it solvable "enough", I went with the softer "I don't know how to do it" rather than "it can't be done".

throwaway31131 commented on Starbucks in Korea asks customers to stop bringing in printers/desktop computers   fortune.com/2025/08/11/st... · Posted by u/zdw
KptMarchewa · 14 days ago
Countries generally have an immigration system that prevents people from moving there when they don't have enough money to support themselves.
throwaway31131 · 14 days ago
In my experience, most countries put barriers in place to prevent people from moving in from another country when they don't have enough money to support themselves. But I believe the parent is describing a system that puts barriers on the internal migration of a country's own citizens.

Russia has a similar system. https://en.wikipedia.org/wiki/Resident_registration_in_Russi...

throwaway31131 commented on The "high-level CPU" challenge (2008)   yosefk.com/blog/the-high-... · Posted by u/signa11
throwaway31131 · 15 days ago
"You don't think about memories when you design hardware"

I found that comment interesting because memory is mostly what I think about when I design hardware. Memory access is by far the slowest thing, so managing memory bandwidth is absolutely critical, and it uses a significant portion of the die area.

Also, a significant portion of the microarchitecture of a modern CPU is all about managing memory accesses.

If there were a hypothetical high-level CPU language that somehow encoded all the information the microarchitecture needs to measure in order to manage memory access, then it would likely tie a conventional design in performance, assuming the CPU team did a good job with memory. Still, it wouldn't need all the extra hardware that did the measurement. So I see that as a win.

The main problem is, I have absolutely no idea how to do that, and unfortunately, I haven't met anyone else who knows either. Hence, tons of bookkeeping logic in CPUs persists.
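To put rough numbers on why memory access dominates the design, here's a back-of-the-envelope roofline-style sketch. The peak-compute and bandwidth figures are illustrative assumptions, not specs of any real part:

```python
# Roofline-style back-of-the-envelope with assumed, illustrative numbers.
PEAK_FLOPS = 2e12       # 2 TFLOP/s peak compute (assumed)
MEM_BANDWIDTH = 100e9   # 100 GB/s DRAM bandwidth (assumed)

# Machine balance: FLOPs the core can issue per byte fetched from DRAM.
machine_balance = PEAK_FLOPS / MEM_BANDWIDTH  # 20 flops/byte

# A streaming kernel like y[i] = a*x[i] + y[i] (DAXPY) does 2 flops per
# element while moving 24 bytes (read x, read y, write y, 8 bytes each).
daxpy_intensity = 2 / 24  # ~0.083 flops/byte

# Fraction of peak compute such a kernel can possibly reach: far below 1,
# so the memory system, not the ALUs, sets the performance ceiling.
ceiling = min(1.0, daxpy_intensity / machine_balance)
print(f"DAXPY can use at most {ceiling:.2%} of peak compute")
```

Under these assumed numbers a streaming kernel touches well under 1% of peak compute, which is why so much die area and microarchitectural bookkeeping goes into caches, prefetchers, and scheduling around memory.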
