We did not pick them, and cancelled any other relationship we had with them, in the IR space for example.
When Spotify entered the market they paid a lot of money for exclusive content, but for 99% of podcasters the interaction with the space was the same as with Apple: submit your RSS feed and it gets served from your hosting. However, Spotify also bought two podcast hosting platforms, Anchor and Megaphone, which ends up blurring that line a bit. As far as I know, Anchor/Megaphone-hosted podcasts are not treated differently by Spotify, but that could change at any time.
The recent change from Google, retiring Google Podcasts in favor of YouTube Music, is a tremendous step in the opposite direction. YouTube Music does NOT use the RSS-based distribution method; podcasters upload their files directly to YouTube. Google even offers an RSS import.
From all of the metrics I have seen, the above three platforms make up 80%+ or so of the podcast user base. So if they made changes that made things less open, podcast creators would be forced to follow. That makes the openness of podcasts feel, at least in 2023, more like an illusion or an act of charity than anything else.
So the break is already happening in this world...
Basically every vendor has its own formats and fields, and centralizing this data (syslog still rules...) and parsing it in a common way (a source IP is a source IP across all tech) has been a pain point since forever. There is basically a whole industry around it, and a whole bunch of Logstash parsers have been sacrificed to it. Even better, vendors have a tendency to change formats once in a while, so even the parsers you have will break way more often than they should. Many vendors don't see that as an issue, since it locks their clients in.
This is another attempt at solving it. It does seem to have traction for once, and nobody wants to piss off Amazon; if they make it a prerequisite for being on their marketplace, it will actually work.
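The "source IP is a source IP" normalization pain above can be sketched as a per-vendor field map into a common schema. The vendor names, field names, and event shapes here are entirely made up for illustration; real schemas (and the breakage when a vendor renames a field) are much messier.

```python
# Hypothetical sketch: two vendors emit the same event with different
# field names; a per-vendor mapping normalizes them to one common schema.
# Every vendor/field name below is invented for illustration.
FIELD_MAPS = {
    "vendor_a": {"srcip": "src_ip", "dstip": "dst_ip", "act": "action"},
    "vendor_b": {"SourceAddress": "src_ip", "DestAddress": "dst_ip",
                 "Disposition": "action"},
}

def normalize(vendor: str, event: dict) -> dict:
    """Rename vendor-specific keys to the common schema, dropping the rest.

    A missing mapping raises KeyError -- which is exactly what happens in
    production when a vendor silently changes its format.
    """
    mapping = FIELD_MAPS[vendor]
    return {common: event[raw] for raw, common in mapping.items() if raw in event}

a = normalize("vendor_a", {"srcip": "10.0.0.1", "dstip": "10.0.0.2", "act": "allow"})
b = normalize("vendor_b", {"SourceAddress": "10.0.0.1", "DestAddress": "10.0.0.2",
                           "Disposition": "allow"})
print(a == b)  # the two vendors' events now compare equal
```

The fragility is the point: the whole map has to be maintained per vendor, per version, which is the industry the comment describes.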
It's also depressing to see how electricity is a distant third to gas and other petroleum products in terms of energy demand - even though the electric supply is almost 100% renewable, all those transport trucks and cars far outweigh it.
https://www.cer-rec.gc.ca/en/data-analysis/energy-markets/pr...
I can confirm that we have no brown-outs. Also:
- We have a mix of very cold and warm weather, so we consume lots of energy for heating in winter.
- Hydro-Quebec, the state-owned utility, has a stellar record for maintenance, coverage and capacity management. It also sends money back to the government to support social programs.
- Even with all that, our electricity cost for consumers, enterprises, and industrial users is one of the lowest in the world.
https://www.cer-rec.gc.ca/en/data-analysis/energy-commoditie...
But second, I'd love to understand the compute vs. storage tradeoff chosen here. Looking at the (pretty!) picture [1], I was shocked to see "Wow, it's mostly storage?". Is that from going all-flash?
Heading to https://oxide.computer/product for more details, lists:
- 2048 cores
- 30 TB of memory
- 1024 TB of flash (1 PiB)
Given how much of the rack is storage, I'm not sure which Milan was chosen (and so whether that's 2048 threads or 4096 [edit: real cores, 4096 threads]), but visually it seems like 4U is compute? [edit: nope] Is that a mistake on my part? Dual-socket Milan at 128 threads per socket is 256 threads per server, so you need at least 8 servers to hit 2048 "somethings" - or do the storage nodes also have Milans [would make sense] with their compute included [also fine!], and is that similarly how you get a funky 30 TiB of memory?
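The server-count arithmetic above, as a quick sanity check (assuming top-end 64-core/128-thread Milan parts, which is an assumption on my side):

```python
# Assumption: top-end Milan part with 64 cores / 128 threads per socket.
threads_per_socket = 128
sockets_per_server = 2
threads_per_server = threads_per_socket * sockets_per_server  # 256 threads
cores_per_server = threads_per_server // 2                    # 128 real cores

# If the marketing "2048 cores" actually means threads: 8 servers suffice.
servers_if_threads = 2048 // threads_per_server
# If it means real cores: you need 16 dual-socket servers.
servers_if_real_cores = 2048 // cores_per_server

print(servers_if_threads, servers_if_real_cores)  # 8 16
```

Either way, the compute has to be spread across many sleds, which supports the guess that the storage nodes carry the Milans too.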
[Top-level edit from below: the green stuff is the nodes, including the compute. The 4U near the middle is the fiber]
P.S.: the "NETWORK SPEED 100 GB/S" is all-caps via CSS, which loses the presumably intended 100 Gbps (though the value in the HTML is "100 gb/s", which is also unclear).
[1] https://oxide.computer/_next/image?url=%2Fimages%2Frenders%2...
We built a few racks of Supermicro AMD servers (4x compute in 2U), and we load-tested it to 23 kVA peak usage (about half full with that type of node only; our DC would let us go further).
We're also over 1 PB of disks (unclear how much of this is redundancy), also NVMe (15.36 TB x 24 in 2U is a lot of storage...).
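To put that density in numbers (simple arithmetic on the figures above, raw capacity with no redundancy accounted for):

```python
# 15.36 TB NVMe drives, 24 per 2U chassis, as mentioned above.
drive_tb = 15.36
drives_per_2u = 24
tb_per_2u = drive_tb * drives_per_2u   # 368.64 TB of raw flash per 2U
chassis_for_1pb = 1000 / tb_per_2u     # under 3 chassis to pass 1 PB raw
print(tb_per_2u)
```

So crossing 1 PB raw takes fewer than three 2U chassis, before any redundancy overhead.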
Other than that, not a bad concept; not sure what premium they will charge or what will be comparable on price.
I can definitely confirm that the IBM statement is true. Execs signed on many projects, and we were always stuck with a blue pile of unusable garbage at the end. For twice the price originally agreed to...
Working with RedHat as of today is still, well, working with RedHat. Highly competent people building things that will run well for a long time, and (so far) still at a reasonable cost. I do start to see some changes on pricing (steep increases are on the horizon...), and more red tape around things that don't fit in the standard boxes. So I'm trying to decouple some areas from being fully dependent on them, toward more standardized/vendor-agnostic models to keep options open (mainly in the container space).
The TR 3970X in particular is amazing for a server. It's 32 cores, all running at a base frequency of 3.7 GHz. It's amazing to have that kind of high clock with so many cores.
The current FTC is good (personal opinion) from an antitrust point of view, but maybe bad for startup exits [0].
[0] https://x.com/ID_AA_Carmack/status/1812978264484552987