Your whole point isn't supported by anything but ... a guess?
If given the chance to work with an AI who hallucinates sometimes or a human who makes logical leaps like this
I think I know what I'd pick.
Seriously, just what even? "I can imagine a scenario where AI was involved, therefore I will treat my imagination as evidence."
I did neither of those two things... :) Personally, I couldn't care less about
- (over)hype
- 12/13/14/15 ... digit USD investment
- exponential vs. sigmoid
There are basically two groups of industry folk:
1. those that see technology as absolutely transformational and are already doing amazeballs shit with it
2. those that argue how it is bad/not-exponential/ROI/...
If I were a professional (I am) I would do everything in my power to learn everything there is to learn (and then some) and join Group #1. But it is easier to be in Group #2, as being in Group #1 requires time and effort and frustrations and throwing the laptop out the window and ... :)
Who was the exact individual who had the vision for glass multi-touch screens and sick gesture-like effects (scroll)?
Perhaps the team just sat around a table and came up with the vision. Or maybe it was Jobs, but clearly there were some good visionaries in the company.
I think the proof of Ive's excellence will come out of this next OpenAI project. If it's something lame, then I will assume his impact at Apple was overrated. If it's a jaw-dropper, then maybe he really is the cat's meow.
There are two ways to make money: you can trade people with money something that they prefer to money, or you can help people with money make more money, in exchange for a share of it.
The value of "output", whatever it is, is dependent on who currently has money. A vaccine for malaria has no value if no one who has money prefers it to money. A machine that can get you to Mars has no value, unless people with money want to go to Mars.
I say people with money, but it's really people times money. And a few people have almost all of it.
So when we talk about output, GDP, whatever: as long as it's measured in money, remember that it's mostly rich people's preferences we're talking about.
Eh - while surely not everyone has the benefit of doing so, I'm running Laravel, and using Redis is just _really_ simple and easy. To do something via memory-mapped files I'd have to implement quite a bit of stuff I don't want/need to (locking, serialization, TTL/expiration, etc.).
Redis just works. Disable persistence, choose the eviction policy that fits the use case, configure a unix socket connection, and you're _flying_.
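For reference, that whole setup is only a few lines of redis.conf (the socket path and memory limit here are assumptions - size them for your box - but the directives themselves are the standard ones):

```conf
# Disable persistence entirely: it's a cache, it rehydrates on a miss.
save ""
appendonly no

# Bound memory and pick an eviction policy that fits cache usage.
maxmemory 256mb              # assumed limit; tune for your workload
maxmemory-policy allkeys-lru

# Listen on a local unix socket instead of TCP.
port 0
unixsocket /var/run/redis/redis.sock
unixsocketperm 770
```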
My use case is generally data ingest of some sort. In my largest projects, 50-80 concurrent worker processes chew through tasks from a queue (also backed by Redis), and they're likely to end up running the same queries against the database (MySQL) to fetch 'parent' records (ie: the user associated with an object by username, a post by slug, etc). There's no way to know in advance whether there will be multiples: if we're processing 100k objects there might be 1 from UserA or there might be 5000 by UserA, and each one needs UserA's record while processing. This particular project has ~40 million of these 'user' records and hundreds of millions of related objects - so I can't store/cache _all_ users locally, but I sure benefit from not querying for the same record 5000 times in a 10-second period.
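The pattern is just read-through caching with a short TTL. Sketched in Python rather than my actual PHP/Laravel code, with the Redis client and the MySQL query stubbed out (in production the stub would be something like a Redis client pointed at a unix socket):

```python
import json
import time

class StubCache:
    """Stand-in for a Redis client over a unix socket
    (e.g. redis.Redis(unix_socket_path="/var/run/redis/redis.sock"))."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._store[key]
            return None
        return value

    def setex(self, key, ttl, value):
        self._store[key] = (value, time.monotonic() + ttl)

db_queries = 0

def fetch_user_from_db(username):
    """Stand-in for a `select ... where username = ?` against MySQL."""
    global db_queries
    db_queries += 1
    return {"username": username}

def get_user(cache, username, ttl=10):
    """Read-through: try the local cache first, fall back to the database."""
    key = f"user:{username}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    user = fetch_user_from_db(username)
    cache.setex(key, ttl, json.dumps(user))
    return user

cache = StubCache()
# 5000 objects from the same user cost only one database round trip.
for _ in range(5000):
    get_user(cache, "UserA")
print(db_queries)  # 1
```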
For the most part, when caching these records over the network, the performance benefit was negligible (depending on the table) compared to just querying MySQL for them. They are just `select where id/slug =` queries. But when you lose that little bit of network latency and can make _dozens_ of these calls to the cache in the time it would take to make a single networked call... it adds up real quick.
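Back-of-the-envelope, with assumed (not measured) per-call latencies, the 5000-duplicate case looks something like:

```python
# Illustrative latencies, in milliseconds - assumptions, not benchmarks.
network_mysql_ms = 0.5   # one query to MySQL over the network
local_redis_ms = 0.02    # one GET over a local unix socket

calls = 5000  # e.g. 5000 objects all needing the same parent record

networked = calls * network_mysql_ms                       # every call hits MySQL
cached = network_mysql_ms + (calls - 1) * local_redis_ms   # one miss, then cache hits

print(f"{networked:.0f} ms vs {cached:.1f} ms")
```

With these numbers the local socket is ~25x cheaper per call, which is where the "dozens of cache calls per networked call" feel comes from.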
PHP has direct "shared memory" support, but again, it would require handling/implementing a bunch of stuff I just don't want to be responsible for - especially when it's so easy and performant to lean on Redis over a unix socket. If I needed to go faster than this, I'd find another language and likely do something direct-to-memory style.
Not to be annoying - but... what?
I specifically _do not_ use Redis over a network. It's wildly fast. High volume data ingest use case - lots and lots of parallel queue workers. The database is over the network, Redis is local (socket). Yes, this means that each server running these workers has its own cache - that's fine, I'm using the cache for absolutely insane speed and I'm not caching huge objects of data. I don't persist it to disk, I don't care (well, it's not a big deal) if I lose the data - it'll rehydrate in such a case.
Try it some time, it's fun.
> And at the end of the day, what exactly is the performance tradeoff? And does it pay off to spend more on an in-memory cache like Redis to buy you the performance Delta?
Yes, yes it does.
> That's why real world benchmarks like this one are important.
That's not what this is though. Just about nobody who has a clue is using default configurations for things like PG or Redis.
> They help people think through the problem and reassess their irrational beliefs.
Ok but... um... you just stated that "the whole world" consumes Redis through a network connection. (Which, IMO, is the wrong tool for the job - sure, it will work, but that's not where/how Redis shines.)
> What you cannot refute are the real world numbers.
Where? This article is not that.