This .unwrap() sounds too easy for what it does, certainly much easier than having an entire try..catch block with an explicit panic. Full disclosure: I don't actually know Rust.
Any project has to reason about which sorts of errors can be tolerated gracefully and which cannot. Unwrap is reasonable on code paths you expect never to be reached, because otherwise your code fills up with all sorts of possible permutations and paths that are harder to reason about and may cascade into extremely nuanced or subtle errors.
Rust also has a version of unwrap called "expect", where you provide a string that is included in the panic message if the unwrap fails. It's otherwise the same, but for crucial pieces of code it can be a good idea to require every 'unwrap' to instead be an 'expect', so that people are at least forced to write down why they believe the unwrap can never fail.
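The difference the comments describe can be sketched like this (a minimal standalone example; the config/port names are made up for illustration):

```rust
fn main() {
    // Imagine this Option was populated earlier, e.g. by a CLI parser.
    let port: Option<u32> = Some(8080);

    // .unwrap() returns the inner value, or panics with a generic
    // "called `Option::unwrap()` on a `None` value" message.
    let p1 = port.unwrap();

    // .expect() does the same, but the panic message carries the
    // author's stated reason the None case should be unreachable.
    let p2 = port.expect("port is always set by the CLI parser before this point");

    assert_eq!(p1, 8080);
    assert_eq!(p2, 8080);
}
```

If `port` were `None`, both calls would panic, but the `expect` panic tells the reader (and future debuggers) why the author thought that couldn't happen.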
[edit: link removed; I don't want to promote that guy but to give the gist he was saying that people who believe in free speech are trash, targeting X users with hate. Mastodon is absolutely saturated with this.]
Most of the criticism I see of X seems completely made up out of malice or is regurgitation of things other poorly informed or resentful people have said.
The supposed FSF in Europe should post links to the sections of the open source algorithm they claim to be criticizing, and show us their PR.
On the first point, the simplest example is that I used to report people who used overt slurs or anti-semitic language. When Musk took over, it started taking months for them to follow up, and the response was simply to lock the account until the offending tweet was deleted. Eventually, when I reported those people, X just switched to saying they weren't breaking the rules. Now the replies under tons of seemingly normal, high-visibility posts are full of vile people trying to derail conversation with racism or anti-semitism.
Another big problem is that the way blue-check accounts are boosted has incentivized every account to act like click-bait all the time. Whenever a post goes semi-viral, the blue-check replies are artificially lifted to the top, and most of them are totally worthless because the commenters are just trying to 'grab space' so people click their profile and follow them. It used to be that if a big account posted something interesting you might see a bunch of interesting follow-up replies. Now it's spammers at the top, then racists and crazies mixed in with more thoughtful replies if you scroll down a few pages past the blue-checks. It used to be that the algorithmic feed would surface all sorts of interesting and novel work from people across the tech world for me, but now there's a whole category of people trying to make every single Tweet viral enough to get payouts.
And then there's Musk himself. He's ordered the algorithm to be manipulated to boost himself, he's clearly expressed discontent when the algorithm doesn't work the way he wants, he's meddled heavily in the platform's AI bot to make it say things he prefers, and he's been rather unscrupulous in chasing his political goals. I think it's not unlikely he'd use the platform to guide public opinion, perhaps even using AI to do it discreetly and intelligently. I view that as a significant risk.
So the platform has gone from something highly useful to me, and a place I greatly enjoyed, to something that more often than not wastes my time and exposes me to people who disturb me. And on top of all that, I think contributing to the platform may empower someone I deeply distrust to manipulate public opinion toward their political goals.