There are two main trouble spots in DC-DC converter design - protection and noise.
A switching power supply is a dead short across its input once the inductor has saturated. The switch, usually a power MOSFET, needs to turn off on every cycle before that happens; otherwise something will fail and probably burn out. Worse, the failure mode of power MOSFETs is usually "on". So protection circuitry is needed: fuses, current limiters, and so on. This is why UL approval matters for switchers connected to the power line.
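A minimal sketch of that timing constraint, with made-up component values (the topology, voltages, and inductor here are all assumptions, not from any particular design):

    # Minimal sketch of cycle-by-cycle peak-current limiting in a buck
    # converter. All values are illustrative, not from any real design.
    # While the switch is on, inductor current ramps at
    # di/dt = (V_in - V_out) / L; the controller must open the switch
    # before the current reaches the inductor's saturation limit.

    V_IN = 24.0              # input voltage, volts (assumed)
    V_OUT = 5.0              # regulated output, volts (assumed)
    L = 22e-6                # inductance, henries (assumed)
    I_SAT = 4.0              # saturation current from a datasheet (assumed)
    I_LIMIT = 0.8 * I_SAT    # trip point, with margin below saturation
    DT = 10e-9               # time step, seconds

    def on_time_until_trip(i_start=0.0):
        """How long can the switch stay on before the limiter must fire?"""
        di_dt = (V_IN - V_OUT) / L   # about 0.86 A/us with these values
        i, t = i_start, 0.0
        while i < I_LIMIT:
            i += di_dt * DT
            t += DT
        return t

    print(f"max on-time before trip: {on_time_until_trip() * 1e6:.2f} us")

Real controllers do this in hardware, with a comparator watching a current-sense resistor on every cycle; the point is just that the on-time budget before saturation is a few microseconds at most.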
Switchers work by generating big inductive spikes. Those spikes are supposed to be directed into capacitors and smoothed out into DC. Without suitable filtering, spikes will be pushed into the power source, the load, and the RF spectrum. A few ferrite beads, Zener diodes, and small capacitors in the right spots will fix this. LTspice simulation is useful for picking the component values. You're not done until both the current and voltage curves are flat.
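For a first-pass estimate before the LTspice run, the usual second-order filter math gives the corner frequency and rolloff (component values below are placeholders, not recommendations):

    # First-pass LC input-filter numbers (a ferrite bead acts roughly
    # like a small, lossy inductor at these frequencies). Values are
    # placeholders; an LTspice sweep is still how you pick real parts.
    import math

    L = 1e-6       # bead/inductor, henries (assumed)
    C = 10e-6      # filter capacitor, farads (assumed)
    F_SW = 500e3   # converter switching frequency, hertz (assumed)

    f_c = 1.0 / (2 * math.pi * math.sqrt(L * C))   # filter corner frequency
    atten_db = 40 * math.log10(F_SW / f_c)         # 2nd-order: 40 dB/decade
    print(f"corner: {f_c / 1e3:.1f} kHz, attenuation at f_sw: {atten_db:.0f} dB")

With these numbers the corner lands around 50 kHz, giving roughly 40 dB of attenuation at a 500 kHz switching frequency; the simulation then tells you what the bead's losses and the capacitor's ESR do to that ideal figure.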
Both are excellent resources, and they look at the design problem from slightly different points of view.
I think the biggest problems with switching designs are not what you have listed, although both noise and failure modes are a huge problem and a main cause of concern (and cost) when certifying your designs.
The biggest problem is that they are just so damn complex, and their characteristics vary in equally complex ways over time and operating conditions. You might think you understand how a switching PSU works, but that's just an illusion. There are people who have spent their entire lives specialising in switching PSU design and are still learning. At best we can understand how a supply behaves within certain parameters, and then try to make sure we shut it down safely when we leave those parameters.
Spontaneous symmetry breaking has been at the root of at least three Nobel prizes, and is crucial to understanding how physical systems differ at very high energies, both in laboratories and in extreme astrophysical settings, at early as well as approximately present times in the universe.
https://en.wikipedia.org/wiki/Spontaneous_symmetry_breaking
The early universe was in a high energy state, being very much hotter and denser than the later universe, as you say. There are several epochs -- notably the https://en.wikipedia.org/wiki/Electroweak_epoch -- where symmetry breaking is important, and using the lower-energy theory (electromagnetism, in this example) simply does not work: results are (if even calculable) manifestly wrong, leading to a universe with a very different cosmic microwave background, and very different chemistry and nuclear physics.
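For a sense of scale (back-of-envelope numbers of my own, not from the parent comment): the electroweak scale of roughly 100 GeV corresponds, via T = E / k_B, to

    T \sim \frac{10^{11}\,\mathrm{eV}}{8.617 \times 10^{-5}\,\mathrm{eV/K}} \approx 10^{15}\,\mathrm{K},

vastly hotter than anything in the later universe.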
I think at best one might say that theories with broken symmetries could still have those symmetries (i.e., the breaking may be reversible under "different ... physical conditions", like if our universe surprisingly evolved to a Big Crunch); however, treating that as a denial of the possibility of different physics in the early universe is probably something you'd have to take up with philosophers or lexicographers for now.
Additionally, there is no reason to just assume (without tracing out the implications if wrong, or validating) that physical constants are constant everywhere and everywhen. Putting some spacetime-location-dependent function on constants like G, k_B, \alpha, \Lambda, c has at the very least proven instructive in further understanding the concordance (standard) models of particle physics and cosmology, where those constants are taken as constant everywhere and at all times in the universe. Indeed, parameterizing apparent constants is outright productive science. See e.g. <https://en.wikipedia.org/wiki/Test_theories_of_special_relat...> for a scratch-the-surface set of details; additionally, the theories at <https://en.wikipedia.org/wiki/Variable_speed_of_light#Relati...> are at least [a] interesting, [b] testable, and [c] improve the testability of the families of theories in which these constants are assumed truly constant (i.e., everywhere and everywhen).
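Schematically (this particular form is mine, purely illustrative), parameterizing a constant means promoting it to a function of spacetime position and letting observation bound the deviation:

    \alpha \longrightarrow \alpha(x^\mu) = \alpha_0 \left[ 1 + \delta(x^\mu) \right], \qquad \text{concordance assumption: } \delta \equiv 0

Measurements then constrain |\delta| wherever and whenever we can probe, instead of leaving the constancy untested.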
> popular belief
Well, I guess your "popular" could outweigh a literature search. But for scientists:
<https://duckduckgo.com/?q=%22spontaneous+symmetry+breaking%2...>
<https://duckduckgo.com/?q=%22new+physics%22+early+universe+s...>
etc.
Finally,
> lost any possibility to predict anything
It's been about half a century since Kenneth Wilson and Nikolay Bogolyubov explored rescaling and renormalization, and nowadays practically every physical theory is written down as, considered as, or is being adapted towards <https://en.wikipedia.org/wiki/Effective_field_theory> (EFT). It is common that different EFTs apply to the same physical configuration as some characteristic scale is crossed, and it is possible that physical theories will be EFTs all the way down (and all the way up), with the concept of fundamental becoming a relation between families of theories. (For example, Newtonian gravitation is less fundamental than General Relativity, because the former can be derived from the latter (and not the reverse), not because General Relativity is known to be correct at all scales).
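Schematically, an EFT Lagrangian has the standard textbook shape

    \mathcal{L}_{\mathrm{EFT}} = \mathcal{L}_{d \le 4} + \sum_i \frac{c_i}{\Lambda^{d_i - 4}} \, \mathcal{O}_i,

where the higher-dimension operators \mathcal{O}_i are suppressed by powers of the cutoff scale \Lambda. That suppression is precisely why an EFT predicts accurately at energies well below \Lambda, so "different physics at other scales" does not mean losing the ability to predict anything at ours.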