Readit News
hollerith commented on Art of Roads in Games   sandboxspirit.com/blog/ar... · Posted by u/linolevan
gwbas1c · 16 hours ago
When you go into the Northeast, a lot of narrower roads were planned for slow-moving horse-drawn carts.
hollerith · 16 hours ago
Horse-drawn carts are not any narrower than cars, and many places designed in the horse era (e.g., the Marina District of San Francisco) have very wide streets.
hollerith commented on Exploiting signed bootloaders to circumvent UEFI Secure Boot (2019)   habr.com/en/articles/4462... · Posted by u/todsacerdoti
varispeed · a day ago
> Then restarting it will remove it. So far Apple has had a perfect record with this unlike Android.

Not things like Pegasus.

It does not minimise the attack surface; it minimises the ways _you_ can ensure there is nothing on the phone that shouldn't be there.

hollerith · a day ago
We're talking about the verification of the boot chain, and last I heard, Pegasus has never subverted that: its strategy is to break back in after every reboot.
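For illustration, a minimal conceptual sketch of what "verification of the boot chain" means, with each stage checked against a root of trust before control is handed off. This is a deliberate simplification: the key, the stage names, and the use of an HMAC in place of a real asymmetric signature are stand-ins, not any vendor's actual design.

    import hashlib
    import hmac

    TRUSTED_KEY = b"vendor-root-key"  # hypothetical root of trust burned into ROM

    def sign(image: bytes, key: bytes) -> bytes:
        # Stand-in for a real asymmetric signature (e.g., ECDSA).
        return hmac.new(key, image, hashlib.sha256).digest()

    def verify(image: bytes, signature: bytes, key: bytes) -> bool:
        return hmac.compare_digest(sign(image, key), signature)

    def boot(stages):
        # stages: list of (name, image, signature) tuples, in boot order.
        for name, image, signature in stages:
            if not verify(image, signature, TRUSTED_KEY):
                raise RuntimeError(f"boot halted: {name} failed verification")
            print(f"{name}: signature OK, handing off")

    stages = [
        ("bootloader", b"bl-image", sign(b"bl-image", TRUSTED_KEY)),
        ("kernel", b"kernel-image", sign(b"kernel-image", TRUSTED_KEY)),
    ]
    boot(stages)  # tampering with either image makes boot() raise

Under this model, malware that only patches the running system in memory (the Pegasus pattern) leaves the signed images untouched, so a reboot boots clean and the attacker has to break in again.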
hollerith commented on Exploiting signed bootloaders to circumvent UEFI Secure Boot (2019)   habr.com/en/articles/4462... · Posted by u/todsacerdoti
charcircuit · a day ago
The security story of the PC platform is such a mess due to fragmentation. I have way more trust in Apple's security here.
hollerith · a day ago
Ditto hardware designed by Google.
hollerith commented on Exploiting signed bootloaders to circumvent UEFI Secure Boot (2019)   habr.com/en/articles/4462... · Posted by u/todsacerdoti
varispeed · a day ago
Security through obscurity is not a great idea, and that is Apple's current approach. For instance, if your iPhone is infected with malware, no anti-virus software can find it, because Apple doesn't let software have the deep access needed for scanning.
hollerith · a day ago
That is a perverse use of "security through obscurity".
hollerith commented on "The Stanford scam proves America is becoming a nation of grifters"   thetimes.com/us/news-toda... · Posted by u/cwwc
DivingForGold · 3 days ago
All universities should require a bona fide doctor's certificate of disability, just like the one used to obtain a state handicap parking permit, with severe criminal penalties for deception or lying, including cancellation of the student visa if caught.
hollerith · 3 days ago
Thank you for unintentionally illustrating (my guess as to) the root of the problem: the unfounded belief by the public, elected officials and administrators that if only we get a doctor involved in the decision, then surely the decision will go well.
hollerith commented on TikTok's 'addictive design' found to be illegal in Europe   nytimes.com/2026/02/06/bu... · Posted by u/thm
hollerith · 3 days ago
The choice would not be so clear-out to me. I'd have to think about it.
hollerith · 3 days ago
"clear-cut"
hollerith commented on TikTok's 'addictive design' found to be illegal in Europe   nytimes.com/2026/02/06/bu... · Posted by u/thm
haugis · 4 days ago
I see what you're saying, but I would much rather my 9-year-old spend an hour on TikTok than an hour smoking Marlboros.
hollerith · 3 days ago
The choice would not be so clear-out to me. I'd have to think about it.
hollerith commented on TikTok's 'addictive design' found to be illegal in Europe   nytimes.com/2026/02/06/bu... · Posted by u/thm
eggy · 4 days ago
I'm skeptical about banning design patterns just because people might overuse them. Growing up, I had to go to the theater to see movies, but that didn't make cliffhangers and sequels any less compelling. Now we binge entire Netflix series and that's fine, but short-form video needs government intervention?

The real question is: where do we draw the line between protecting people from manipulative design and respecting their ability to make their own choices? If we're worried about addictive patterns, those exist everywhere: streaming platforms, social feeds, gaming, even email notifications.

My concern isn't whether TikTok's format is uniquely dangerous. It's whether we trust adults to manage their own media consumption, or if we need regulatory guardrails for every compelling app. I'd rather see us focus on media literacy and transparency than constantly asking governments to protect us from ourselves.

You can't legislate intelligence...

hollerith · 4 days ago
>I had to go to the theater to see movies, but that didn't make cliffhangers and sequels any less compelling.

The argument against TikTok (and smartphones in general) is not that experiences above a certain threshold of compellingness are bad for you: it is that filling your waking hours with compelling experiences is bad for you.

Back when a person had to travel to a theater to have such experiences, he was unable to have them every free minute of his day.

hollerith commented on In Tehran   lrb.co.uk/blog/2026/janua... · Posted by u/mitchbob
hollerith · 5 days ago
Basically zero Americans are starving.
hollerith commented on Y Combinator will let founders receive funds in stablecoins   fortune.com/2026/02/03/fa... · Posted by u/shscs911
stackghost · 6 days ago
I too have raised before.

I'm not saying raising and then buying T-Bills is better than just raising less.

I'm saying if you find yourself with excess cash, you can't just un-raise. In that scenario, short-term T-Bills are strictly better than cash.

hollerith · 6 days ago
>if you find yourself with excess cash, you can't just un-raise

I always thought a startup can return cash to investors as long as the payments or disbursements are proportional to the amount of stock owned.
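For concreteness, a minimal sketch of the pro-rata arithmetic (the cap table and amounts below are made up for illustration):

    # Hypothetical pro-rata return of excess cash: each investor receives
    # a share of the distribution proportional to the shares they own.
    def pro_rata_distribution(holdings: dict[str, int], amount: float) -> dict[str, float]:
        total_shares = sum(holdings.values())
        return {investor: amount * shares / total_shares
                for investor, shares in holdings.items()}

    holdings = {"Fund A": 600_000, "Fund B": 300_000, "Angel C": 100_000}
    print(pro_rata_distribution(holdings, 1_000_000.0))
    # {'Fund A': 600000.0, 'Fund B': 300000.0, 'Angel C': 100000.0}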

u/hollerith

Karma: 8023 · Cake day: November 4, 2007
About
Richard Hollerith, San Francisco Bay Area. hruvulum@gmail.com

No one has a plausible or even halfway-decent plan for how to maintain control over an AI that has become super-humanly capable. Essentially all of the major AI labs are trying to create a super-humanly capable AI, and eventually one of them will probably succeed, which would be very bad (i.e., probably fatal) for humanity. So clearly the AI labs must be shut down, and in fact would have been shut down already if humanity were sufficiently competent. The AI labs must stay shut down until someone comes up with a good plan for controlling (or aligning) AIs, which will probably take at least 3 or 4 decades. We know that because people have been conducting the intellectual search for a plan for more than 2 decades as their full-time job, and those people report that the search is very difficult.

A satisfactory alternative might be to develop a method for determining whether a novel AI design can acquire a dangerous level of capabilities, along with some way of ensuring that no lab or group goes ahead with an AI that can. This might be satisfactory if the determination can be made before giving the AI access to people or the internet. But I know of no competent researcher who has ever worked on or made any progress on this problem, whereas the control problem has at least received a decent amount of attention from researchers and funding institutions.

The AIs that might prove fatal to humanity will be significantly different in design from the AIs that have already been widely deployed: for one thing, they will constantly learn (like a person does), as opposed to already-deployed AIs, in which the vast majority of the AI's learning happens during a training phase that ends before any widespread deployment. Also, they will be much better than current AIs at working towards a long-term goal. I say this because I don't want to be misunderstood as believing that Google Gemini 2.5 or ChatGPT 5.0 might take over the world: I understand that those AIs are incapable of such a thing. The worry is the AIs that are still on the drawing board, or that will appear on a drawing board 5 or 10 years from now. There is no need to ban Gemini 2.5 and ChatGPT 5. Since some AI researchers pursue AI "progress" for ideological reasons and will tend to persist stubbornly even after AI research is banned, the best time to ban frontier AI research is now, so that these stubborn ideologues, whose research will have been driven underground by the ban but who will still be capable of making a little more "progress" on AI, will be unlikely to make enough "progress" to end the world.

Again, as soon as anyone comes up with a solid plan for controlling (or aligning) an AI even if it turns out to be more capable than us, the ban on frontier AI research can be lifted, as long as the majority of AI experts and researchers agree that the plan is solid. No one can say how long this search for a solid plan will take, but IMHO it will probably take at least 3 or 4 decades.

More at

https://intelligence.org/the-problem/
