I waited hours to play The Witcher 3, Cyberpunk 2077, and Assassin's Creed Odyssey. With WoW I was playing within minutes.
I worked on Guild Wars 2, which has this feature. I made a first prototype of it that streamed all content on the fly. It's pretty easy to implement: you have an abstraction that asynchronously loads a file off the disk, and you can just make it download from the network instead.
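A minimal sketch of that kind of abstraction (all names hypothetical, not Guild Wars 2 code): game code asks an `AssetSource` for bytes and never knows whether they came from disk or the network, so swapping in streaming touches nothing downstream.

```python
import asyncio
from abc import ABC, abstractmethod

class AssetSource(ABC):
    """Abstract async asset loader; callers never know where bytes come from."""
    @abstractmethod
    async def load(self, path: str) -> bytes: ...

class DiskSource(AssetSource):
    async def load(self, path: str) -> bytes:
        # A real engine would use async file I/O here; simulated for brevity.
        await asyncio.sleep(0)  # yield to the event loop
        return b"disk:" + path.encode()

class NetworkSource(AssetSource):
    def __init__(self, base_url: str):
        self.base_url = base_url
    async def load(self, path: str) -> bytes:
        # In practice this would be an HTTP fetch against a CDN.
        await asyncio.sleep(0)
        return b"net:" + path.encode()

async def load_texture(source: AssetSource, path: str) -> bytes:
    # Game code is written against the abstraction, so switching from
    # disk loading to network streaming requires no changes here.
    return await source.load(path)
```

Because everything is already asynchronous, the game loop tolerates a network fetch the same way it tolerates a slow disk read.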
The tricky part is ensuring all the assets for a specific area are present before you load in, or simply knowing what order to download things in. For example, one starter area of Guild Wars 2 spawned monsters from many other areas, which meant the manifest of what that area needed was enormous.
So the 'playability' threshold becomes a trade-off between game experience (assets popping in as you play) and quick entry.
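One way to sketch that trade-off (a hypothetical scheme, not how Guild Wars 2 actually did it): tag each manifest entry with an urgency tier, where tier 0 must be present before the player enters and higher tiers are allowed to pop in later; within a tier, fetch smaller assets first so something is always becoming usable.

```python
import heapq

def download_order(manifest: dict, priority: dict) -> list:
    """Order a manifest so the most urgent assets download first.

    `manifest` maps asset id -> size in bytes; `priority` maps asset id
    -> urgency tier (0 = required to enter the area, higher = may pop in
    during play). Unlisted assets default to the lowest urgency.
    """
    heap = [(priority.get(asset, 99), size, asset)
            for asset, size in manifest.items()]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, asset = heapq.heappop(heap)
        order.append(asset)
    return order
```

Moving the 'playability' threshold is then just a matter of deciding which tiers must finish before the loading screen ends.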
At the time, some very good flight software engineers had been working diligently on a new UI framework that was written in the same code style and process as the rest of our flight software. However, I noticed a classic problem - we were working on the UI platform at the same time that we were trying to design and prototype the actual UI.
I made some observations:
1) We can create a prototype right now in Chrome, with all its inherent versatility.
2) The chip running the UI can actually reasonably run Chrome.
3) Web browsers are historically known for crashing, but that's partly because they have to handle every page on the whole Internet. A static system with the same browser running a single website, heavily tested, may be reliable enough for our needs.
4) We can always go back and reimplement the UI on top of the space-grade UI platform, and it'll actually be a lot easier, because we'll know exactly what functionality we need out of that platform.
The prototype was a great success; we were able to implement a lot of interesting UI in just a week.
I left SpaceX before Crew Dragon launched, so I'm not sure what ended up launching or what the state of affairs is today. I remember hearing feedback from testing sessions that the astronauts were pleasantly surprised when we live-edited a button after they commented it was too hard to reliably press with a gloved finger.
As for reliability, to do a fair analysis you need to understand the requirements of the mission. Only then can you start thinking about faults and how to mitigate them. This isn't like Apollo where the astronauts had to physically reconfigure the spacecraft for each phase of the mission -- to an exceptionally large extent, Dragon flies itself. As a minor example of systemic fault tolerance, each display is individually controlled by its own processor. If a display fails, whether due to Chrome or cosmic radiation, an astronaut can simply use a different display.
Also, as a side note regarding "touchscreens": I believe some (very important) physical buttons did launch with Crew Dragon, but buttons and wiring are heavy, and weight is the enemy. If you're going to have a screen anyway, making it a touchscreen adds relatively trivial weight.