gmm1990 commented on Are we repeating the telecoms crash with AI datacenters?   martinalderson.com/posts/... · Posted by u/davedx
gmm1990 · 16 days ago
Some of the utilization comparisons are interesting, but the article's claim that $2 trillion was spent on laying fiber seems suspicious.
gmm1990 commented on Agent design is still hard   lucumr.pocoo.org/2025/11/... · Posted by u/the_mitsuhiko
ctoth · a month ago
> I would guess that at least in the Cloudflare instance some of the responsible code was AI-generated

Your whole point isn't supported by anything but ... a guess?

If given the chance to work with an AI who hallucinates sometimes or a human who makes logical leaps like this

I think I know what I'd pick.

Seriously, just what even? "I can imagine a scenario where AI was involved, therefore I will treat my imagination as evidence."

gmm1990 · a month ago
The whole point is that the outages happened, not that AI code caused them. If AI is so useful/amazing, then these outages should be less common, not more. It's obviously not rock-solid evidence. Yes, AI could be useful and speed up or even improve a codebase, but there isn't any evidence that it's actually improving anything; the only real studies point to imagined productivity improvements.
gmm1990 commented on Agent design is still hard   lucumr.pocoo.org/2025/11/... · Posted by u/the_mitsuhiko
bdangubic · a month ago
> labeling people who disagree with you undereducated or uninformed

I did neither of these two things... :) I personally do not care about:

- (over)hype

- 12/13/14/15 ... digit USD investment

- exponential vs. sigmoid

There are basically two groups of industry folk:

1. those that see technology as absolutely transformational and are already doing amazeballs shit with it

2. those that argue how it is bad/not-exponential/ROI/...

If I were a professional (I am), I would do everything in my power to learn everything there is to learn (and then some) and join Group #1. But it is easier to be in Group #2, as being in Group #1 requires time and effort and frustration and throwing your laptop out the window and ... :)

gmm1990 · a month ago
If there is really amazing stuff happening with this technology, how did we have two recent major outages that were caused by embarrassing problems? I would guess that at least in the Cloudflare instance some of the responsible code was AI-generated
gmm1990 commented on It's the “hardware”, stupid   haebom.dev/archive?post=4... · Posted by u/haebom
proee · 2 months ago
I would love to know if Ive was truly foundational to the iPhone, or if he was given the overall idea and polished it into the final look/feel (which is also important).

Who was the exact individual who had the vision for glass multi-touch screens and sick gesture-like effects (scroll)?

Perhaps the team just sat around a table and came up with the vision. Or maybe it was Jobs, but clearly there were some good visionaries in the company.

I think the proof of Ive's excellence will come out of this next OpenAI project. If it's something lame, then I will assume his impact at Apple was overrated. If it's a jaw-dropper, then maybe he really is the cat's meow.

gmm1990 · 2 months ago
I think he could have been instrumental to the iPhone (not saying he was or wasn't), and whatever he tries next could still be a complete flop. The ability to be successful is contextual, and great artists can produce mediocre art.
gmm1990 commented on Alaska Airlines' statement on IT outage   news.alaskaair.com/on-the... · Posted by u/fujigawa
gmm1990 · 2 months ago
Is there a public, generic measure of IT outages with historical data? Severe outages seem to be more common lately, but I don't have any data to back that up.
gmm1990 commented on André Gorz predicted the revolt against meaningless work (2023)   znetwork.org/znetarticle/... · Posted by u/robtherobber
vintermann · 2 months ago
Output of what?

There are two ways to make money: you can trade people with money something that they prefer to money, or you can help people with money make more money, in exchange for a share of it.

The value of "output", whatever it is, is dependent on who currently has money. A vaccine for malaria has no value if no one who has money prefers it to money. A machine that can get you to Mars has no value, unless people with money want to go to Mars.

I say people with money, but it's really people times money. And a few people have almost all of it.

So when we talk about output, GDP, whatever: as long as it's measured in money, remember that it's mostly rich people's preferences we're talking about.

gmm1990 · 2 months ago
Value doesn't have to be monetary.
gmm1990 commented on Apple M5 chip   apple.com/newsroom/2025/1... · Posted by u/mihau
gmm1990 · 2 months ago
Interesting that there's only the M5 on the MacBook Pro. I thought the M4 and M4 Pro/Max launched at the same time on the MacBook Pro.
gmm1990 commented on Redis is fast – I'll cache in Postgres   dizzy.zone/2025/09/24/Red... · Posted by u/redbell
Implicated · 3 months ago
> Are the typical advantages of using a db over a filesystem the reason to use Redis instead of just reading from memory-mapped files?

Eh - while surely not everyone has the benefit of doing so, I'm running Laravel, and using Redis is just _really_ simple and easy. To do something via memory-mapped files I'd have to implement quite a bit of stuff I don't want/need to (locking, serialization, ttl/expiration, etc.).

Redis just works. Disable persistence, choose the eviction policy that fits the use case, configure it for a unix socket connection, and you're _flying_.

My use case is generally data ingest of some sort, where the processing workers (in my largest projects, 50-80 concurrent processes chewing through tasks from a queue, also backed by Redis) are likely to end up running the same queries against the database (MySQL) to get 'parent' records (i.e. user associated with object by username, post by slug, etc.), and there's no way to know if there will be multiples (i.e. if we're processing 100k objects there might be 1 from UserA or there might be 5000 by UserA, where each one being processed will need the object/record of UserA). This project in particular has ~40 million of these 'user' records and hundreds of millions of related objects, so I can't store/cache _all_ users locally, but I sure would benefit from not querying for the same record 5000 times in a 10-second period.

For the most part, when caching these records over the network, the performance benefits were negligible (depending on the table) compared to just querying MySQL for them. They are just `select where id/slug =` queries. But when you lose that little bit of network latency and you can make _dozens_ of these calls to the cache in the time it would take to make a single networked call... it adds up real quick.

PHP has direct memory "shared memory" but again, it would require handling/implementing a bunch of stuff I just don't want to be responsible for - especially when it's so easy and performant to lean on Redis over a unix socket. If I needed to go faster than this I'd find another language and likely do something direct-to-memory style.
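The pattern described above is a read-through cache keyed on the lookup field: check the local cache first, and only hit the database on a miss. A minimal Python sketch, with hypothetical names throughout; in production the cache would be a local Redis client (e.g. `redis.Redis(unix_socket_path=...)` from redis-py), but a dict-backed stub stands in here so the example is self-contained:

```python
import json

class DictCache:
    """Stand-in for a local Redis instance (TTL/eviction not shown)."""
    def __init__(self):
        self._store = {}
    def get(self, key):
        return self._store.get(key)
    def set(self, key, value):
        self._store[key] = value

def fetch_user(cache, db_query, username):
    """Return the user record, hitting the database only on a cache miss."""
    key = f"user:{username}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    record = db_query(username)          # e.g. SELECT ... WHERE username = ?
    cache.set(key, json.dumps(record))   # serialize before storing
    return record

# Simulated database: count queries to show repeated lookups are absorbed.
queries = []
def db_query(username):
    queries.append(username)
    return {"username": username, "id": len(queries)}

cache = DictCache()
for _ in range(5000):                    # 5000 objects from the same user
    user = fetch_user(cache, db_query, "UserA")

print(len(queries))                      # → 1: the database was queried once
```

This mirrors the "5000 objects by UserA" scenario: the first lookup pays the database round trip, the other 4999 are served from the local cache.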

gmm1990 · 3 months ago
Thanks for the write up. Seems like a cool pattern I hadn’t heard of before
gmm1990 commented on Redis is fast – I'll cache in Postgres   dizzy.zone/2025/09/24/Red... · Posted by u/redbell
Implicated · 3 months ago
> If you, like the whole world, consume Redis through a network connection, it should be obvious to you that network is in fact the bottleneck.

Not to be annoying - but... what?

I specifically _do not_ use Redis over a network. It's wildly fast. High volume data ingest use case - lots and lots of parallel queue workers. The database is over the network, Redis is local (socket). Yes, this means that each server running these workers has its own cache - that's fine, I'm using the cache for absolutely insane speed and I'm not caching huge objects of data. I don't persist it to disk, I don't care (well, it's not a big deal) if I lose the data - it'll rehydrate in such a case.

Try it some time, it's fun.

> And at the end of the day, what exactly is the performance tradeoff? And does it pay off to spend more on an in-memory cache like Redis to buy you the performance Delta?

Yes, yes it is.

> That's why real world benchmarks like this one are important.

That's not what this is though. Just about nobody who has a clue is using default configurations for things like PG or Redis.

> They help people think through the problem and reassess their irrational beliefs.

Ok but... um... you just stated that "the whole world" consumes redis through a network connection. (Which, IMO, is wrong tool for the job - sure it will work, but that's not where/how Redis shines)

> What you cannot refute are the real world numbers.

Where? This article is not that.
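The setup described in this comment (local-only Redis over a unix socket, persistence disabled, eviction enabled) could be sketched as a redis.conf fragment; the directive names are standard Redis configuration options, and the paths and sizes are illustrative assumptions:

```conf
# redis.conf fragment: local ephemeral cache (illustrative values)
unixsocket /var/run/redis/redis.sock   # serve over a unix socket
unixsocketperm 770
port 0                                 # disable TCP entirely
save ""                                # no RDB snapshots
appendonly no                          # no AOF persistence
maxmemory 2gb                          # cap memory use
maxmemory-policy allkeys-lru           # evict least-recently-used keys
```

With `port 0` the instance is unreachable over the network, which matches the "cache per server, rehydrates if lost" approach.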

gmm1990 · 3 months ago
That is an interesting use case; I hadn't thought about a setup like this with a local Redis cache before. Are the typical advantages of using a db over a filesystem the reason to use Redis instead of just reading from memory-mapped files?
