Even if you have cash, many shops wouldn't be able to sell anything during a mass outage, because their registers are just clients that depend on a cloud service to record each transaction. Not reliable, but cheap when it works.
The real question is how long some of the smaller banks' datacenters can stay up.
If an entire nation trips offline, every generating station disconnects itself from the grid, and the grid itself snaps apart into islands. To bring it back, you have to disconnect consumer loads and then re-energize a small set of plants that have dedicated black start capability. Thermal plants need external energy to start up, and renewables need external sources of inertia for frequency stabilization, so this usually means turning on a small diesel generator that produces enough power to bootstrap a bigger generator, and so on, until there's enough electricity to start the plant itself. With that back online, its power can be used to re-energize other plants that lack black start capability, in a chain, until you have a series of isolated islands. Those islands then have to be synchronized and reconnected, while load is simultaneously brought online in large blocks.
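To make the ordering concrete, here's a toy sketch of that bootstrap chain. Every name and number is made up for illustration; real restoration plans are obviously far more involved:

```java
import java.util.List;

public class BlackStartSketch {
    // crankingPowerMw: external power a plant needs before it can start.
    record Plant(String name, boolean blackStartCapable,
                 double crankingPowerMw, double outputMw) {}

    public static void main(String[] args) {
        // Ordered smallest to largest: each restored plant's output becomes
        // the cranking power for the next one in the chain.
        List<Plant> chain = List.of(
            new Plant("diesel genset", true, 0, 2),
            new Plant("hydro unit", false, 1, 150),
            new Plant("gas turbine", false, 20, 400),
            new Plant("coal plant", false, 60, 800));

        double available = 0;
        for (Plant p : chain) {
            if (p.blackStartCapable() || available >= p.crankingPowerMw()) {
                available += p.outputMw();
                System.out.printf("%s online, island capacity now %.0f MW%n",
                                  p.name(), available);
            } else {
                System.out.printf("%s must wait for more cranking power%n",
                                  p.name());
            }
        }
    }
}
```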
The whole thing is planned for, but you can't really rehearse it. During a black start the grid is highly unstable, so if something goes wrong it can trip out again during the restart, sending you back to the beginning. That's especially likely if the original blackout caused undetected equipment damage, or was itself caused by such damage.
In the UK contingency planning assumes a black start could take up to 72 hours, although if things go well it would be faster. It's one reason it's a good idea to always have some cash at home.
Edit: There's a press release about a 2016 black start drill in Spain/Portugal here: https://www.ree.es/en/press-office/press-release/2016/11/spa...
For context, a true black start means:
1. The grid has to fully collapse, with no possibility of being rescued by interconnection
2. As a result, a generation asset has to be started without external power or a grid frequency to sync to
3. An asset capable of this is usually a small one connected to a lower-voltage network, which then has to backfeed the higher-voltage one
4. Due to the difficulty of balancing supply and demand during the process, the frequency can fluctuate violently, with a high risk of tripping the system offline again
None of this applies in yesterday's case:
- The rest of the European synchronous grid is working just fine.
- News reports stated that Spain restored power by reconnecting to France and Morocco.
- By reestablishing the HV network first, they can directly restart the largest generation assets with normal procedures.
- As they bring more and more load and generation online, there's little risk of big frequency fluctuations, because the wider grid can absorb them.
Concurrency is so hard that even OpenJDK developers can't prevent this kind of bug.
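The failure mode, as I understand it: the constructor scans the caller's char[] once (via `compress`) to decide whether the contents fit in Latin-1, then reads it again to copy, so a concurrent writer can slip a change in between the two passes. A minimal repro sketch; on a fixed JDK build this loop just runs to completion without printing anything:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class StringRaceDemo {
    public static void main(String[] args) throws Exception {
        char[] shared = { 'a' };
        AtomicBoolean stop = new AtomicBoolean(false);

        // Keep flipping the shared array between a Latin-1 char
        // and one that needs UTF-16.
        Thread mutator = new Thread(() -> {
            while (!stop.get()) {
                shared[0] = 'a';      // fits in one byte
                shared[0] = '\u4e2d'; // does not
            }
        });
        mutator.start();

        for (int i = 0; i < 50_000_000 && !stop.get(); i++) {
            // On an affected build, the first scan can decide "all Latin-1"
            // and the second pass can then truncate '\u4e2d' to a byte,
            // yielding a char that was never stored in the array.
            char c = new String(shared).charAt(0);
            if (c != 'a' && c != '\u4e2d') {
                System.out.println("Torn read: got '" + c + "' (" + (int) c + ")");
                stop.set(true);
            }
        }
        stop.set(true);
        mutator.join();
    }
}
```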
The simplest "safe" way of doing this involves defensively copying the input argument. However, the `compress` function will likely make yet another smaller copy, making the constructor very allocation and CPU intensive.
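A minimal sketch of that approach, with hypothetical names (`SafeString`, `compress`, and `toUtf16Bytes` are stand-ins, not the actual JDK internals):

```java
import java.util.Arrays;

final class SafeString {
    private final byte[] value;
    private final byte coder; // 0 = LATIN1, 1 = UTF16

    SafeString(char[] input) {
        // Defensive copy: freeze the caller's array, so the Latin-1 scan
        // and the encoding pass are guaranteed to see the same data.
        char[] copy = Arrays.copyOf(input, input.length);
        byte[] latin1 = compress(copy); // likely copies again, into a smaller array
        if (latin1 != null) {
            value = latin1;
            coder = 0;
        } else {
            value = toUtf16Bytes(copy);
            coder = 1;
        }
    }

    // Returns a Latin-1 encoding, or null if some char doesn't fit in a byte.
    private static byte[] compress(char[] chars) {
        byte[] out = new byte[chars.length];
        for (int i = 0; i < chars.length; i++) {
            if (chars[i] > 0xFF) return null;
            out[i] = (byte) chars[i];
        }
        return out;
    }

    private static byte[] toUtf16Bytes(char[] chars) {
        byte[] out = new byte[chars.length * 2];
        for (int i = 0; i < chars.length; i++) {
            out[2 * i] = (byte) (chars[i] >> 8);
            out[2 * i + 1] = (byte) chars[i];
        }
        return out;
    }
}
```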
In fact, because arrays in Java have a fixed size, any thread-safe implementation must either allocate two arrays to hold the two possible encodings, which guarantees one piece of garbage, or iterate over the input twice (copy first, then encode).
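Here's a hypothetical sketch of the two-array option: a single pass that reads each shared element exactly once into a local and writes it into both candidate encodings, so the result is always consistent with some snapshot of the array. Whichever encoding loses becomes immediate garbage, which is the allocation cost in question:

```java
final class SinglePassEncode {
    record Encoded(byte[] value, byte coder) {} // coder: 0 = LATIN1, 1 = UTF16

    static Encoded encode(char[] input) {
        byte[] latin1 = new byte[input.length];
        byte[] utf16 = new byte[input.length * 2];
        boolean fitsLatin1 = true;
        for (int i = 0; i < input.length; i++) {
            char c = input[i]; // the only read of this element
            if (c > 0xFF) fitsLatin1 = false;
            latin1[i] = (byte) c;
            utf16[2 * i] = (byte) (c >> 8);
            utf16[2 * i + 1] = (byte) c;
        }
        // Exactly one of the two arrays is kept; the other is garbage.
        return fitsLatin1 ? new Encoded(latin1, (byte) 0)
                          : new Encoded(utf16, (byte) 1);
    }
}
```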
For a class as core as String, that's probably an unacceptable cost. And the constructor is not documented to be thread-safe, so no one should expect it to be.
In reality, there are much more impactful data structures to abuse in Java.
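One classic example of the sort I mean: plain HashMap makes no thread-safety guarantees at all, and hammering it from two threads routinely loses entries (and, famously, concurrent resizing on pre-Java-8 versions could create a cycle in a bucket and hang readers forever). Run this a few times and the size usually comes up short:

```java
import java.util.HashMap;
import java.util.Map;

public class HashMapRaceDemo {
    public static void main(String[] args) throws Exception {
        Map<Integer, Integer> map = new HashMap<>(); // deliberately unsynchronized

        Runnable writer = () -> {
            for (int i = 0; i < 100_000; i++) map.put(i, i);
        };
        Thread a = new Thread(writer);
        Thread b = new Thread(writer);
        a.start(); b.start();
        a.join(); b.join();

        // With proper synchronization this would always print 100000;
        // racing writers tend to drop entries during resizes.
        System.out.println("size = " + map.size());
    }
}
```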