"Unsafe code Ropey uses unsafe code to help achieve some of its space and performance characteristics. Although effort has been put into keeping the unsafe code compartmentalized and making it correct, please be cautious about using Ropey in software that may face adversarial conditions.
Auditing, fuzzing, etc. of the unsafe code in Ropey is extremely welcome. If you find any unsoundness, please file an issue! Also welcome are recommendations for how to remove any of the unsafe code without introducing significant space or performance regressions, or how to compartmentalize the unsafe code even better."
"Please be cautious about using Linux/macOS/Windows/Firefox/Chrome/Safari in adversarial conditions." I've never read a statement like that, even though such a warning would be far more warranted there than here.
And even unsafe Rust is far safer than C and C++. It still provides automatic memory management by default, the thread-safety guarantees that come with ownership, and abstraction mechanisms that make it harder to commit blunders that can lead to unsafety. An `unsafe` block only unlocks a handful of extra operations (like dereferencing raw pointers); the borrow checker still applies to everything else.
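The compartmentalization the Ropey README describes can be sketched in a few lines: a safe public function wraps a single `unsafe` call, and the safety argument is stated right next to the `unsafe` block. This is a minimal illustration of the pattern, not code from Ropey; the function name is made up.

```rust
// A hypothetical example of compartmentalized unsafe code: the public API
// is safe, and the unsafe operation is justified by a local invariant.
fn get_checked(v: &[u8], i: usize) -> Option<u8> {
    if i < v.len() {
        // SAFETY: the bounds check above guarantees `i` is in range,
        // so skipping the slice's own bounds check cannot read out of bounds.
        Some(unsafe { *v.get_unchecked(i) })
    } else {
        None
    }
}

fn main() {
    let data = [10u8, 20, 30];
    assert_eq!(get_checked(&data, 1), Some(20));
    assert_eq!(get_checked(&data, 5), None);
    println!("ok");
}
```

Callers never see the `unsafe`; auditing or fuzzing the crate then reduces to checking that each such local invariant actually holds.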
Decentralization can be hidden from the user, it's an implementation detail.
There's literally a popular decentralized social network.
It's less about the tech, and more about the execution.
Historically we can look at LimeWire or PopcornTime as examples.
Both decentralized, both popular due to their ease of use.
No there isn't. Not a single one.
There are a few federated social networks, which is a fancy way of saying that they are centralized networks that have (or can have, in principle) more than one "center".
In practice, the overwhelming majority of users of such networks gravitate towards one or a handful of large providers. And many of those providers actually refuse to federate with other providers unless they follow an ever-growing list of politically-charged rules. This is just centralization with extra steps.
This has been an open issue for 5 months. When I noticed it, I couldn't believe my eyes, and it was the last time I ran Zed. Judge for yourself whether this is a deal-breaker for you; I wish I had known about it earlier.
I do wonder what you actually mean by this.
> Those who think social media has "corrupted" society are barking up the wrong tree.
There is clear, evidence-based research on social media's negative effects on society.
https://link.springer.com/article/10.1007/s00127-020-01906-9
https://scholarcommons.scu.edu/engl_176/2/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7364393/
https://journals.sagepub.com/doi/10.1177/20563051241269305
etc.
Spoke to an old friend on the phone recently, for the first time in 10 years or so. Within a few minutes of the conversation, he had used phrases like "As a father..." and "I cannot stand by while..."
It was as if he were speaking to an audience, rather than to me. I was glad when the call ended.
At this point LinkedIn is beyond saving, but a serious competitor would have to be heavily moderated to stay clean, which we all know won't happen.
Social media is and always has been a mirror of the real world. Today's dominant real-world culture consists of virtue signaling, vague pseudo-philosophy, toxic positivity, and a hyper-focus on group identity. You can see this every time you read the news, but you can also hear it when you just talk to random people.
The trend towards that culture started a few years before social media became a thing. I can't even imagine having a conversation with a friend anymore the way I used to in the 1990s and early 2000s. Everything I see online, I recognize from real-world interactions. Those who think social media has "corrupted" society are barking up the wrong tree.
It is difficult if you have been told all your life that you are the best, to accept the fact that a computer or even other people might be better than you.
It requires a lot of self-reflection.
Real top-tier programmers actually don't feel threatened by LLMs. For them it is just one more tool in the toolbox, like syntax highlighting or code completion.
They choose to use these tools based on productivity gains or losses, depending on the situation.
They should, because LLMs are coming for them also, just maybe 2-3 years later than for programmers that aren't "real top-tier".
The idea that human intellect is something especially difficult to replicate is just delusional. There is no reason to assume so, considering that we have gone from punched-card programming to LLMs competing with humans in a single human lifetime.
I still remember when elite chess players were boasting "sure, chess computers may beat amateurs, but they will never beat a human grandmaster". That was just a few short years before the Deep Blue match.
The difference is that nobody will pay programmers to keep programming once LLMs outperform them. Programmers will simply become as obsolete as horse-drawn carriages, essentially overnight.