All other engineering disciplines are ultimately limited to building things in (at most) three Euclidean dimensions. There is only so much junk you can hide in a finite volume of space.
Code, by comparison, lives in hyperbolic space [0], and you can hide _anything_ in such a space without it being obvious. This is exemplified by the unpleasant discovery all of us have made at some point: a supposedly peripheral folder turns out to hold source code called from all over the code base, and moving it to a location that makes sense for it is nearly impossible without refactoring the whole code base.
People, myself included, have seriously bad intuition for just how much volume there is in a space that grows at least exponentially.
The closest discipline to software engineering is mathematics, and it has an even worse track record: there's folklore that half of all math papers give the wrong proof for the right conclusion. By comparison, software engineering only produces catastrophic bugs somewhat less than every other time a program is run.
[0] Every tree embeds naturally in a hyperbolic space whose curvature matches the average number of children per node, and all code can ultimately be represented as a tree.
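To put numbers on that intuition: the node count within distance r of a tree's root grows exponentially, while the volume of a Euclidean ball only grows polynomially. A quick sketch (the depths chosen are arbitrary, just for illustration):

```python
# Compare how much "room" there is within radius r of a binary tree's
# root versus within a 3-D Euclidean ball of the same radius.
from math import pi

for r in [5, 10, 20, 40]:
    tree_nodes = 2 ** (r + 1) - 1        # nodes in a complete binary tree of depth r
    ball_volume = (4 / 3) * pi * r ** 3  # volume of a Euclidean ball of radius r
    print(f"r={r}: tree holds {tree_nodes} nodes, ball volume ~{ball_volume:.0f}")
```

By r=40 the tree holds over two trillion nodes while the ball's volume is in the hundreds of thousands; that gap is where the junk hides.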
However, at the end of the day there is input, output, compute, and memory needed to run the thing, and if we look at that we realize we never actually left the bounded physical realm: we can still engineer software systems against real-world constraints and judge their efficiency and breaking points.
What's very different is the cost of changing the system to do something new, and that's where this unbounded complexity blows up in our faces.
This is a common-sense view of computation that's unfortunately wrong.
The simplest counterexample is the busy beaver: a Turing machine with as few as 12 states already saturates the computational capabilities of the universe, yet the smaller machines you would actually be testing against look completely safe and sane.
You may call it pathological, and you'd be right, but the point is that you never know under which rug a function requiring more computation than the universe can supply is hiding.
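The growth is easy to see with a toy simulator. The table below is the classic 3-state busy-beaver champion, which halts after 14 steps leaving six 1s on the tape; the best known 4- and 5-state machines already take 107 and 47,176,870 steps, and by 6 states the step count dwarfs anything physically computable:

```python
# Minimal Turing-machine simulator. Tape is a sparse dict of cells,
# table maps (state, symbol) -> (write, move, next_state); "H" halts.
def run(table, limit=10**6):
    tape, pos, state, steps = {}, 0, "A", 0
    while state != "H" and steps < limit:
        write, move, next_state = table[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        state = next_state
        steps += 1
    return steps, sum(tape.values())  # (steps taken, count of 1s written)

# The 3-state busy-beaver champion (Lin-Rado).
BB3 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "R", "H"),
    ("B", 0): (0, "R", "C"), ("B", 1): (1, "R", "B"),
    ("C", 0): (1, "L", "C"), ("C", 1): (1, "L", "A"),
}

print(run(BB3))  # (14, 6)
```

Nothing about the 6-state tables looks any scarier than this one, which is exactly the problem.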
By comparison, power electronics engineers don't have to formally prove that they didn't accidentally include a nuclear power plant in their e-scooter design.
If I were designing a software system, I could introduce a time constraint. An imagined conversation: "How long will it take to get an answer?" "Between half a second and the heat death of the universe." "OK. Can we just issue a timeout error after 1 second?"
This puts controls in place so the system doesn't exceed its constraints: although in principle it might be able to do the job for any input, in practice it can't, because we haven't found a more efficient solution for certain known and unknown scenarios.
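That imagined conversation is easy to sketch in code. Here's a minimal version in Python, with illustrative names (`with_deadline`, `possibly_unbounded` are made up for this example); note the caveat that the deadline abandons the worker thread rather than stopping it, which is its own engineering constraint:

```python
import queue
import threading
import time

def possibly_unbounded(seconds):
    # Stand-in for a computation whose runtime we can't bound in advance.
    time.sleep(seconds)
    return 42

def with_deadline(fn, *args, timeout=1.0):
    """Run fn in a daemon thread; give up after `timeout` seconds."""
    result = queue.Queue(maxsize=1)
    threading.Thread(target=lambda: result.put(fn(*args)), daemon=True).start()
    try:
        return result.get(timeout=timeout)
    except queue.Empty:
        return None  # the "timeout error after 1 second" from the conversation

fast = with_deadline(possibly_unbounded, 0.1)   # finishes in time -> 42
slow = with_deadline(possibly_unbounded, 30.0)  # abandoned after 1s -> None
```

The system's worst case is now 1 second regardless of what the computation does, at the cost of sometimes getting no answer at all.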