Here is some personal experience: we previously built Coinbase's automated chatbot using a flowchart-style builder. It was an intent/entity-based system backed by deep learning models. It started out great, but it quickly became a nightmare to manage. To account for users asking things out of turn or switching topics every other turn, we added a concept called jumps, where control could move from one path to a completely unrelated workflow path in one hop, which in turn introduced a lot of maintenance complexity.
The way we see it: when we assign a task to another human or a teammate, we don't hand them a flowchart; we just give them high-level instructions. Maybe that should be the standard for building systems with LLMs?
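To make the contrast concrete, here's a minimal sketch of the two styles. All names and flows here are invented for illustration, not Coinbase's actual system:

```python
# Hypothetical sketch: flowchart/intent routing with "jumps" vs. plain
# high-level instructions. All node and intent names are made up.

# Flowchart style: every node needs explicit edges, and "jumps" let control
# leap to an unrelated path. Each new topic adds edges (and jumps) to most
# existing nodes, which is where the maintenance pain comes from.
FLOW = {
    "start":        {"check_balance": "balance_flow", "dispute_charge": "dispute_flow"},
    "balance_flow": {"done": "start", "jump:dispute_charge": "dispute_flow"},
    "dispute_flow": {"done": "start", "jump:check_balance": "balance_flow"},
}

def next_node(node: str, intent: str) -> str:
    """Follow a direct edge, then a jump edge, else stay and re-prompt."""
    edges = FLOW[node]
    return edges.get(intent) or edges.get(f"jump:{intent}") or node

# Instruction style: one prompt, and the model handles out-of-turn
# questions and topic switches on its own.
INSTRUCTIONS = (
    "You are a support agent. You can check balances and open disputes. "
    "Answer whatever the user asks, in any order."
)

print(next_node("balance_flow", "dispute_charge"))  # reachable only via a jump
```

The point of the sketch is the `FLOW` dict: with a handful of topics it's already quadratic-ish in edges, while the instruction string stays one paragraph no matter how many topics you add.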
Is this making sense?
Regarding actual space elevators though: while they're not sci-fi to the extent of something like FTL travel (i.e. they're not physically impossible), they're still pretty firmly in the realm of sci-fi. We don't have anything close to a cable that could sustain its own weight, let alone the weight of whatever is being elevated. Plus, how do you stabilize the cable and the lifter in the atmosphere?
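To put rough numbers on "sustain its own weight": a common sanity check is the free breaking length σ/(ρg), the longest constant-cross-section cable a material can dangle under surface gravity before snapping under its own weight. The strength and density figures below are approximate, commonly quoted values, not measurements of any real cable design:

```python
# Back-of-envelope: free breaking length L = sigma / (rho * g).
# Material figures are rough textbook values (assumptions, not exact).
g = 9.81  # m/s^2 at the surface; a real elevator sees gravity fall with altitude

materials = {
    # name: (tensile strength in Pa, density in kg/m^3)
    "steel (high-strength)": (2.0e9, 7850),
    "Kevlar": (3.6e9, 1440),
    "carbon nanotube (theoretical)": (63e9, 1300),
}

def breaking_length_km(strength_pa: float, density_kg_m3: float) -> float:
    """Longest untapered cable the material can hang, in km."""
    return strength_pa / (density_kg_m3 * g) / 1000

for name, (sigma, rho) in materials.items():
    print(f"{name}: ~{breaking_length_km(sigma, rho):,.0f} km")
```

Steel comes out around 26 km, Kevlar around 255 km, and even the theoretical nanotube figure lands near 5,000 km, which is only in the ballpark of the characteristic length often quoted for an Earth elevator. Real designs taper the cable rather than keep it constant-width, but the required strength-to-weight ratio is still brutal.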
A space elevator on the moon is much more feasible: less gravity, slow rotation, no atmosphere, less dangerous debris. But it's also much less useful.