Readit News
pantalaimon commented on Converting a $3.88 analog clock from Walmart into a ESP8266-based Wi-Fi clock   github.com/jim11662418/ES... · Posted by u/tokyobreakfast
DesiLurker · 19 hours ago
Makes me wonder: what if I just wanted to sync with NFC every once in a while? Wi-Fi seems overkill for this. Maybe it could be done much cheaper with an NFC sync with a phone twice a year?
pantalaimon · 18 hours ago
You often have a radio time source like DCF77, which is what all those radio-controlled clocks use.
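As an aside, decoding DCF77 is simple enough to sketch: the station sends one bit per second, 59 bits per minute, with the time of day in BCD fields. The sketch below assumes a full frame has already been received into an array and skips the parity checks a real receiver would do:

```c
/* Hypothetical sketch: pull minute/hour/day out of a received DCF77 frame.
 * bits[0..58] hold one minute's pulses (0 or 1); parity checks omitted.   */
#include <stdio.h>

/* Decode a BCD field of `len` bits starting at `start` (LSB first). */
static int bcd(const int *bits, int start, int len) {
    static const int weight[] = {1, 2, 4, 8, 10, 20, 40, 80};
    int value = 0;
    for (int i = 0; i < len; i++)
        value += bits[start + i] * weight[i];
    return value;
}

int main(void) {
    int bits[59] = {0};               /* example frame, mostly zeroed */
    bits[21] = 1; bits[23] = 1;       /* minute field: 1 + 4 = 5      */
    bits[30] = 1; bits[32] = 1;       /* hour field:   2 + 8 = 10     */
    bits[36] = 1;                     /* day-of-month field: 1        */

    printf("day %d, %02d:%02d\n",
           bcd(bits, 36, 6),          /* bits 36-41: day of month */
           bcd(bits, 29, 6),          /* bits 29-34: hour         */
           bcd(bits, 21, 7));         /* bits 21-27: minute       */
    return 0;
}
```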
pantalaimon commented on I write games in C (yes, C) (2016)   jonathanwhiting.com/writi... · Posted by u/valyala
giancarlostoro · 3 days ago
Weren't most games back in the day written in C?
pantalaimon · 3 days ago
Quake and Doom sure come to mind
pantalaimon commented on I write games in C (yes, C) (2016)   jonathanwhiting.com/writi... · Posted by u/valyala
pansa2 · 3 days ago
Yeah, you could argue that choosing C is just choosing a particular subset of C++.

The main difference from choosing a different subset, e.g. “Google C++” (i.e. writing C++ according to the Google style guide), is that the compiler enforces that you stick to the subset.

pantalaimon · 3 days ago
C is not a subset of C++; there are subtle things you can do in C that are not valid C++.
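A small self-contained example of such a difference; this compiles as C, but a C++ compiler rejects both marked lines (two classic cases, and others exist, e.g. designated initializers before C++20):

```c
/* Valid C99, invalid C++: two classic divergences between the languages. */
#include <stdlib.h>

int main(void) {
    /* C allows implicit conversion from void*; C++ requires an explicit cast. */
    int *p = malloc(10 * sizeof *p);

    /* `new` is an ordinary identifier in C, but a reserved keyword in C++. */
    int new = 7;

    free(p);
    return new - 7;  /* returns 0 */
}
```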
pantalaimon commented on xAI joins SpaceX   spacex.com/updates#xai-jo... · Posted by u/g-mork
nwellinghoff · 7 days ago
Yeah, it does not make a whole lot of sense, as the useful lifespan of the GPUs is 4-6 years. Sooo what happens when you need to upgrade or repair?
pantalaimon · 7 days ago
The same thing that happens with Starlink satellites that are obsolete or have exhausted their fuel: they burn up in the atmosphere.
pantalaimon commented on xAI joins SpaceX   spacex.com/updates#xai-jo... · Posted by u/g-mork
saratogacx · 7 days ago
I think the Colossus[1] predated the ENIAC, but it's still in line with your general theme of doing stuff for the military. In this case it was used for cipher breaking, not firing calculations.

You could argue that it doesn't really count, though, because it was only Turing complete in theory: "A Colossus computer was thus not a fully Turing complete machine. However, University of San Francisco professor Benjamin Wells has shown that if all ten Colossus machines made were rearranged in a specific cluster, then the entire set of computers could have simulated a universal Turing machine, and thus be Turing complete."

[1] https://en.wikipedia.org/wiki/Colossus_computer

pantalaimon · 7 days ago
> You could argue that it doesn't really count, though, because it was only Turing complete in theory

Then you also have to count the Z3[1], which predates the Colossus by two years.

[1] https://en.wikipedia.org/wiki/Z3_(computer)

pantalaimon commented on xAI joins SpaceX   spacex.com/updates#xai-jo... · Posted by u/g-mork
ben_w · 8 days ago
The quoted "1 TW of photovoltaic cells per year, globally" is the peak output, not the average output. They only have about 20% higher peak output in space… well, if you can keep them cool, at least.
pantalaimon · 8 days ago
But there are no clouds in space, and with the right orbit they are always facing the sun.
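To see why both replies can be right, here's a rough back-of-envelope sketch; the ~20% irradiance boost comes from the parent comment, while the ~15% ground capacity factor and near-continuous orbital sunlight are illustrative assumptions, not sourced figures:

```c
/* Back-of-envelope: average output of 1 TW(peak) of panels, ground vs. space.
 * All factors below are illustrative assumptions.                           */
#include <stdio.h>

int main(void) {
    const double peak_tw     = 1.0;   /* 1 TW peak, from the quoted figure       */
    const double ground_cf   = 0.15;  /* assumed capacity factor (night, clouds) */
    const double space_boost = 1.20;  /* ~20% more irradiance without atmosphere */
    const double space_duty  = 0.99;  /* near-continuous sun in a suitable orbit */

    printf("average on the ground: %.2f TW\n", peak_tw * ground_cf);
    printf("average in space:      %.2f TW\n", peak_tw * space_boost * space_duty);
    return 0;
}
```

Under these assumptions the same panels average several times more power in orbit, which is the point of the reply, while the parent's peak-vs-average caveat still stands.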
pantalaimon commented on xAI joins SpaceX   spacex.com/updates#xai-jo... · Posted by u/g-mork
tavavex · 8 days ago
Because that brings in the whole distributed-computing mess. No matter how instantaneous the actual link is, you still have to deal with the problems of which satellites can see one another, how many simultaneous links can exist per satellite, the max throughput, the need for better error correction, and all sorts of other things that will drastically slow the system down in the best case. Unlike something like Starlink, with GPUs you have to assume that everyone may need to talk to everyone else at the same time while maintaining insane throughput. If you want to send GPUs up one by one, get ready to also equip each satellite with a fixed mass of everything required to transmit and receive so much data, redundant structural/power/compute mass, individual shielding and much more. All the wasted mass you have to launch with individual satellites makes the already nonsensical pricing even worse. It just makes no sense when you can build a warehouse on the ground, fill it with shoulder-to-shoulder servers that communicate in a simple, sane and well-known way and can be repaired on the spot. What's the point?
pantalaimon · 8 days ago
Starlink already solved those problems; they do 200 Gbit/s via laser between satellites.

And for data centers, the satellites wouldn't be as far apart as Starlink satellites; they would be quite close instead.
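For scale, the one-way light time over plausible inter-satellite spacings is tiny compared to typical network latencies; this sketch assumes spacings of 1-100 km, which are not figures from the thread:

```c
/* One-way propagation delay of a free-space link at assumed satellite spacings. */
#include <stdio.h>

int main(void) {
    const double c = 299792458.0;               /* speed of light, m/s         */
    const double spacing_m[] = {1e3, 1e4, 1e5}; /* 1, 10, 100 km (assumptions) */

    for (size_t i = 0; i < sizeof spacing_m / sizeof spacing_m[0]; i++)
        printf("%6.0f km -> %7.1f us one-way\n",
               spacing_m[i] / 1e3, spacing_m[i] / c * 1e6);
    return 0;
}
```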

pantalaimon commented on xAI joins SpaceX   spacex.com/updates#xai-jo... · Posted by u/g-mork
adastra22 · 8 days ago
Single-event upsets in a modern GPU are not bit flips. They destroy the surrounding circuitry and usually disable the whole unit.
pantalaimon · 8 days ago
If that happens, you disable that CUDA core. If your GPU is too damaged, you deorbit the satellite.
pantalaimon commented on xAI joins SpaceX   spacex.com/updates#xai-jo... · Posted by u/g-mork
hristov · 8 days ago
Currently, just a cursory Google search shows $1,500-3,000 per kilogram to put something into low Earth orbit. Let's take the low bound because of economies of scale. So $1,500.

A million tons will cost $1,500 × 1,000 × 1,000,000 = $1,500,000,000,000. That is one and a half TRILLION dollars per year. And that is only the lift cost; it does not take into account the cost of manufacturing the actual space data centers. Who is going to pay this?
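As a sanity check on that arithmetic, here's a throwaway sketch using the quoted low-end figure (the numbers are the comment's own, not independently sourced):

```c
/* Lift cost for one million metric tons at the quoted $1,500/kg low bound. */
#include <stdio.h>

int main(void) {
    double usd_per_kg = 1500.0;        /* quoted low-end LEO launch price */
    double mass_kg    = 1e6 * 1000.0;  /* one million metric tons, in kg  */
    double cost_usd   = usd_per_kg * mass_kg;

    printf("lift cost: $%.1f trillion\n", cost_usd / 1e12);  /* prints 1.5 */
    return 0;
}
```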

pantalaimon · 8 days ago
That's the price before Starship, which would be the prerequisite for this whole project.
pantalaimon commented on xAI joins SpaceX   spacex.com/updates#xai-jo... · Posted by u/g-mork
FireBeyond · 8 days ago
Oh, good. So we only need to multiply that by 200 million, per space datacenter.
pantalaimon · 8 days ago
The data center would still consist of many individual satellites, much like an Earth-based data center consists of many individual servers.

u/pantalaimon · Karma: 14099 · Cake day: August 28, 2013