Honestly, the article is so disingenuous that it comes off like a paid-for puff piece for Astronomer. It's the article equivalent of the late-night infomercial guy who rips open a bag of potato chips like the Hulk because he doesn't have this special tool that's just four easy payments of $9.99.
The performance issue is still there: just launch Airflow and submit a thousand DAG runs with a simple Python sleep(1) and you will hit the CPU bound very quickly, with a very long total duration. Airflow is not designed for lots of short-duration tasks, and event-driven data flows are really complicated to manage with it.
Imagine a flow that is triggered for each store, for example (thousands of stores, with 10+ tasks for each one); Airflow will not be able to handle this kind of workload quickly (and that is not its goal). Airflow was clearly designed to handle a small number of tasks (hundreds) that run for a long time.
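To make the scaling argument concrete, here is a back-of-envelope model (all numbers are assumptions, not measurements): even when each task does only one second of real work, a fixed per-task scheduling cost dominates once you have thousands of short tasks.

```python
def total_runtime(n_tasks, task_s, overhead_s, parallelism):
    """Rough wall-clock estimate: the real work parallelizes, but a
    fixed per-task overhead (process spawn, DB writes, scheduling)
    is paid for every task."""
    return n_tasks * (task_s + overhead_s) / parallelism

# 1000 stores x 10 tasks, 1 s of real work each, with an assumed
# 2 s of scheduler/executor overhead per task and 32 parallel slots.
seconds = total_runtime(10_000, 1.0, 2.0, 32)
hours = seconds / 3600
print(f"{hours:.2f} h")  # → 0.26 h, triple the overhead-free runtime
```

With zero overhead the same workload would finish in about 312 s; the assumed 2 s per-task cost triples the wall-clock time, and the gap only widens as tasks get shorter.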
For the XCom part, Airflow stores this in the database, so you can't put large data in it; you'll need to keep the data small (a database is not there to store big files). In Kestra, we provide a storage layer that allows passing large data (GB, TB, ...) between tasks natively, without the pain on multi-node clusters.
As for the other workflow engines (Dagster, Prefect, ...), we decided to take a completely different approach to building a pipeline. Where the others use Python code, we went with a descriptive language (like Terraform, for example). This has a lot of advantages for developer experience: with Kestra, you can use the web UI directly to edit, create, and run your flows; there is no need to install anything on the user's desktop and no need for a complex deployment pipeline to test on the final instance. Another advantage is that it allows deploying your flows with Terraform. A typical development workflow is: in the development environment, use the UI; in production, deploy your resources with Terraform, flows and all the other cloud resources alike.
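To illustrate what "descriptive" means here, a Kestra flow is a YAML document rather than Python code. This is a hypothetical sketch; the plugin type name and fields are illustrative assumptions, not copied from the Kestra docs:

```yaml
# Illustrative Kestra-style flow definition (field values are made up)
id: per-store-sync
namespace: company.retail
tasks:
  - id: extract
    type: io.kestra.plugin.scripts.shell.Commands
    commands:
      - ./extract.sh --store {{ inputs.store_id }}
```

Because the whole flow is declarative data, it can be edited in the UI in development and managed as a Terraform resource in production, exactly like any other piece of infrastructure.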
Also, it would be really nice to have some independent performance benchmarks. I really think Kestra is fast since it is based on a queue system (Kafka) and not a database. Since workflows are only events (status changes, new tasks, ...) that need to be consumed by different services, a database doesn't seem to be a good fit, and my benchmarks show that Kestra can handle a lot of concurrent tasks without using a lot of CPU.
Airflow 2 is designed to support larger XCom messages, so the guidance to only use it for small data no longer applies.
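If I remember the mechanism correctly, Airflow 2 does this via pluggable XCom backends configured in `airflow.cfg`; the class path below is illustrative, not a real module:

```ini
# airflow.cfg — route XCom payloads through a custom backend that can
# stage large values in object storage (class path is hypothetical)
[core]
xcom_backend = my_project.xcom.S3XComBackend
```

The backend class subclasses Airflow's base XCom class and overrides serialization, so only a reference (e.g. an object-store key) lands in the metadata database.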
Your DAG construction overhead issue is likely due to dagbag refreshing. Airflow checks for DAG changes on a fixed interval, causing a reimport. The default period for that is fairly small, so for large deployments you will want to use a larger period (e.g. at least 5 minutes). I do not know why the default is so short (or was last I checked, anyway). Python files shouldn't do much of note on import regardless IMO.
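If the DagBag refresh is the culprit, the knob (assuming I have the option name right) lives in the scheduler section of `airflow.cfg`:

```ini
# airflow.cfg — re-parse DAG files at most every 5 minutes instead of
# the much shorter default, cutting repeated re-import overhead
[scheduler]
min_file_process_interval = 300
```

On large deployments this trades slower pickup of DAG file changes for a big reduction in constant re-parsing load.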
I am not otherwise familiar with the improvements in Airflow 2, so I cannot say for sure if your other complaints still remain.
Here is an explanation of chirped pulse amplification: https://www.rp-photonics.com/chirped_pulse_amplification.htm... This technique produces optical photons, which will have less than 10 eV of energy. In the same way that you can't focus the sun's light to a point hotter than the surface of the sun (that would violate the second law of thermodynamics), it isn't obvious how low-energy laser pulses can be useful for this. The article offers no explanation whatsoever. Maybe the electric field across the nucleus can be made strong enough to induce scission?
In general, if you want to interact with the nucleus you need photons on the order of 1 MeV or more, whose wavelengths approach nuclear scales. These are gamma rays, not optical photons. There are ways to boost optical photons to those energies (like inverse Compton scattering), but the article says nothing about that either. I would think inverse Compton scattering of a chirped pulse off an electron packet in an accelerator would completely destroy the sharp timing and reflect the distribution of the electrons instead.
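For scale, the photon energy–wavelength relation makes the gap concrete:

```latex
\lambda = \frac{hc}{E} \approx \frac{1240~\mathrm{eV\,nm}}{E}
\quad\Rightarrow\quad
\lambda_{10~\mathrm{eV}} \approx 124~\mathrm{nm},
\qquad
\lambda_{1~\mathrm{MeV}} \approx 1.24~\mathrm{pm} = 1240~\mathrm{fm}
```

That is five orders of magnitude between an optical/UV photon and a 1 MeV gamma, which is why some energy-boosting mechanism has to be part of any laser-driven nuclear story.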
Pulsed lasers bring material interactions into a highly non-linear regime - photon intensity is so high that multiple photon absorption is common. In the typical nuclear decay regime you are concerned with single photon absorption, and the gamma ray intuition is correct. There are also a number of approaches where various targets hit with ultrafast lasers produce controllable flux of gamma rays which are used in downstream experimentation.
https://cco.ndu.edu/Portals/96/Documents/prism/prism_4-4/Str...
A disease capable of coming close to ending civilization would need to have properties far beyond any disease observed so far. Either it needs to infect massive populations before we detect it, or it has to transmit over long distances (miles) despite e.g. moderate precautions like masking and air filtering. I think there's good reason to doubt such a pathogen could exist. The closest I could imagine would be an HIV-like immunodeficiency virus that can be transmitted via aerosol - but even that would have to cause disease much more severe than HIV, without resistance among even 0.01% of the population.
It's the government trying to enforce their opinion of who should own those Bitcoins, thereby taking power away from the owner that the network has decided on, which would be "whoever has the cryptographic keys".
Kind of squirrely, and I tried really hard to phrase that so it isn't a tautology. But if you're dealing with radio waves, your metamaterial can have huge (meter-scale) features. If you're dealing with visible light, your feature size is on the hundreds of nanometer scale.
Thin films have a characteristic bending length: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.11..., and this determines the size of features you should pattern to exploit that bending/folding interaction.
I think the bending analysis you cite can determine the relative feature sizes desirable for certain "micro-scale" mechanical behavior, but it's possible to build a mechanical "metamaterial" much larger than that as well.
The important thing is that mutations occur at a certain rate per virus per unit time. If you have an isolated population that's sequenced infrequently then (1) that strain will appear to evolve more slowly as there's a smaller population capable of mutating, and (2) once that strain is sequenced it's going to look far from what you've seen already since you haven't been tracking the intermediate mutations in this population.
The S/N ratio can be analyzed in terms of a random walk in high dimension. Variance in these walks grows over time (in terms of distance from origin, i.e. number of mutations), so the discrepancy doesn't seem super far from what's plausible under the null hypothesis. Perhaps someone can do the math on that.
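As a sketch of that null-model math (the mutation rate here is an arbitrary assumption, purely for illustration): if mutations accumulate as a Poisson process, both the mean and the variance of the mutation count grow linearly with time, so an isolated lineage sitting a couple of standard deviations from the mean is not automatically surprising.

```python
import math
import random

random.seed(42)

def poisson(lam):
    # Knuth's algorithm: multiply uniform draws until the product
    # drops below exp(-lam); the number of draws gives the sample.
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# Assumed rate: 2 mutations per lineage per month, over 12 months.
rate, months, trials = 2.0, 12, 10_000
counts = [poisson(rate * months) for _ in range(trials)]

mean = sum(counts) / trials
var = sum((c - mean) ** 2 for c in counts) / trials
# For a Poisson process mean ≈ variance ≈ rate * time (24 here), so a
# lineage at mean + 2*sd ≈ 34 mutations still fits the null model.
```

The two-standard-deviation spread widens as sqrt(time), which is why an undersampled lineage can look "far from everything seen so far" without any exotic explanation.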
The hypothesis merits further investigation, but the strength of the evidence presented here really requires some complex statistical analysis to determine if innocuous explanations fit. The analysis is far more complex than I would expect an epidemiologist or virologist to apply in the course of their work.
(Disclaimer: I am an author on the linked paper)
Also I note the only thing you have posted before is a link to this paper in particular.