Readit News
codechicago277 commented on Rubio stages font coup: Times New Roman ousts Calibri   reuters.com/world/us/rubi... · Posted by u/italophil
Fnoord · 6 days ago
For aesthetic or other preferences, you can change the default font to whatever you please. The default font shouldn't be about aesthetics; it should be first and foremost about usability, especially in printed media, since there it cannot be changed on a whim.

A couple of years ago I went into the archives of Dutch newspapers to learn whether and how the famine in Ukraine (known as the Holodomor) was reported back in the 1930s. Fuck me, it was hard to read those excerpts. But it is what it is. OCR could've converted the font. The problem is: is the OCR accurate? Does my keyword search have a good signal-to-noise ratio, or am I missing out on evidence?

Personally, Times New Roman was likely the reason I did not like Mozilla Thunderbird. I have to look into that.

codechicago277 · 6 days ago
Off topic, but did you find anything interesting? I spent a few days researching the Holodomor and was surprised how poorly understood it still is even today, and how badly it was reported at the time. It's a good propaganda case study. There's a dramatic film about the reporting too, Mr. Jones (2019).
codechicago277 commented on Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?    · Posted by u/embedding-shape
Scene_Cast2 · 8 days ago
There is friction to asking the AI yourself, and a comment typically means "I found the AI answer insightful enough to share."
codechicago277 · 8 days ago
The problem is that the AI answer could just be wrong, and there’s another step required to validate what it spit out. Sharing the conversation without fact checking it just adds noise.
codechicago277 commented on Hacker News Headlines (game)   projects.peercy.net/proje... · Posted by u/greenwallnorway
greenwallnorway · a month ago
Yeah, I didn't think too long about how to calculate points well. Is there a more balanced strategy?

Identifying some of the truly high-scoring articles (>1000) should be rewarding, and 200 vs 400 points is a pretty big difference on HN.

codechicago277 · a month ago
You could split the stories into score buckets and then randomly sample from each bucket. Most stories have low scores, so they're currently overrepresented in the sampling.
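Something like this could work, as a rough Python sketch (the "score" field name and the bucket edges are just illustrative guesses, not taken from the game's actual code):

    import random
    from bisect import bisect_left

    BUCKET_EDGES = [50, 200, 500, 1000]  # buckets: <=50, 51-200, 201-500, 501-1000, >1000

    def bucket_index(score):
        # Map a story's score to the index of its bucket.
        return bisect_left(BUCKET_EDGES, score)

    def sample_round(stories, per_bucket=2):
        # Group stories by score bucket, then draw the same number from each
        # bucket so low-scoring stories no longer dominate the round.
        buckets = {}
        for story in stories:
            buckets.setdefault(bucket_index(story["score"]), []).append(story)
        picks = []
        for group in buckets.values():
            picks.extend(random.sample(group, min(per_bucket, len(group))))
        random.shuffle(picks)
        return picks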
codechicago277 commented on Hacker News Headlines (game)   projects.peercy.net/proje... · Posted by u/greenwallnorway
codechicago277 · a month ago
I think you need to adjust the spread of the scores. I got 1650 while actually guessing, then realized most of the scores were low and got 1800 by just always guessing 239.
codechicago277 commented on What Is Complexity in Chess?   lichess.org/@/Toadofsky/b... · Posted by u/fzliu
janalsncm · 4 months ago
The author is looking for positions which are difficult for low-rated players and easier for high-rated players.

A poor man's version of this, requiring no training, would be to evaluate positions at low depth and at high depth and select positions where the best move switches.
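For example, a rough python-chess sketch of that filter (a Stockfish binary on PATH and the specific depth numbers are my assumptions, nothing rigorous):

    import chess
    import chess.engine

    SHALLOW_DEPTH = 4   # stand-in for what a low-rated player "sees"
    DEEP_DEPTH = 18     # stand-in for strong play

    def best_move(engine, board, depth):
        # First move of the principal variation at a fixed search depth.
        info = engine.analyse(board, chess.engine.Limit(depth=depth))
        return info["pv"][0]

    def is_deceptive(engine, board):
        # "Deceptive" here means the shallow and deep searches disagree.
        return best_move(engine, board, SHALLOW_DEPTH) != best_move(engine, board, DEEP_DEPTH)

    if __name__ == "__main__":
        engine = chess.engine.SimpleEngine.popen_uci("stockfish")
        try:
            board = chess.Board()  # swap in whatever positions you want to screen
            print(is_deceptive(engine, board))
        finally:
            engine.quit()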

Training neural nets to model behavior at different levels is also possible, but high-rated players are inherently more difficult to model.

codechicago277 · 4 months ago
I had this idea of drilling games against an engine with a fixed evaluation depth, since beating a depth-1 engine should teach simpler concepts than a level-4 one.

I vibe-coded this into a browser app, but the evaluation is slow around depth 5: https://camjohnson26.github.io/chess-trainer/
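For anyone who'd rather drill this locally, here's a rough python-chess sketch of the same fixed-depth idea (a local Stockfish binary is assumed; the linked app runs in the browser, so this isn't its actual code):

    import chess
    import chess.engine

    def drill(depth=1):
        # Play as White against an engine capped strictly by search depth.
        board = chess.Board()
        engine = chess.engine.SimpleEngine.popen_uci("stockfish")
        try:
            while not board.is_game_over():
                if board.turn == chess.WHITE:
                    move = chess.Move.from_uci(input("Your move (UCI, e.g. e2e4): "))
                    if move not in board.legal_moves:
                        print("Illegal move, try again.")
                        continue
                else:
                    move = engine.play(board, chess.engine.Limit(depth=depth)).move
                board.push(move)
            print("Result:", board.result())
        finally:
            engine.quit()

    if __name__ == "__main__":
        drill(depth=1)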

codechicago277 commented on Croatian freediver held breath for 29 minutes   divernet.com/scuba-news/f... · Posted by u/toomanyrichies
djtango · 4 months ago
Note this is oxygen-assisted: the diver breathed pure oxygen beforehand, which (per the article) can increase the available oxygen from 450 mL to 3 L.

Still impressive nonetheless, and I didn't know this trick is sometimes used in Hollywood to extend underwater filming time. Avatar 2 comes to mind: I was impressed to find out Sigourney Weaver trained to hold her breath for six and a half minutes in her 70s!

Coming back to the article, I'm disappointed that the details were sparse - how do they check whether the contestant is conscious? How does the contestant know what his limits are before passing out?

codechicago277 · 4 months ago
It’s a classic at this point but David Blaine held the record for a while and gave a fantastic TED talk on his process: https://www.ted.com/talks/david_blaine_how_i_held_my_breath_...
codechicago277 commented on AI vs. Professional Authors Results   mark---lawrence.blogspot.... · Posted by u/biffles
unignorant · 4 months ago
Here are my notes and guesses on the stories in case people here find it interesting. Like some others in the blog post comments I got 6/8 right:

1.) Probably human; low on style but a solid twist. (CORRECT)
2.) Interesting imagery but some continuity issues; maybe AI. (INCORRECT)
3.) More a scene than a story; highly confident it's AI given the style. (CORRECT)
4.) Style could go either way; maybe human given some successful characterization. (INCORRECT)
5.) I like the style but it's probably AI; the metaphors are too dense and there are very minor continuity errors. (CORRECT)
6.) Some genuinely funny stuff and good world-building; almost certainly human. (CORRECT)
7.) Probably AI prompted to go for humor; some minor continuity issues. (CORRECT)
8.) Nicely subverted expectations; probably human. (CORRECT)

My personal ranking for scores (again blind to author) was:

6 (human); 8 (human); 4 (AI); 1 (human) and 5 (AI) -- tied; 2 (human); 3 and 7 (AI) -- tied

So for me the two best stories were human and the two worst were AI. That said, I read a lot of flash fiction, and none of these stories really approached good flash imo. I've also done some of my own experiments, and AI can do much better than what is posted above for flash if given more sophisticated prompting.

codechicago277 · 4 months ago
I had similar results, and story 4 is so trope-heavy I wonder if it's just an amalgamation of similar stories. The human stories all felt original, whereas none of the AI ones did.
codechicago277 commented on LLMs tell bad jokes because they avoid surprises   danfabulich.medium.com/ll... · Posted by u/dfabulich
shagie · 4 months ago
https://chatgpt.com/share/68a209d3-ef34-8011-8f60-1a256f6038...

I'm going to say "Because it wanted a higher noon." was probably its best one of that set... though I'll also note that while I didn't prompt for the joke directly, I did prompt for background on "climbing" as related to the sun.

I believe the problem with the joke is that it isn't one that can be funny. Why is a raven like a writing desk?

Personally, I didn't find the incongruity model of humor to be funny, and the joke itself is very difficult to adapt to other potentially funny approaches.

Also on AI and humor... https://archive.org/details/societyofmind00marv/page/278/mod...

In another "ok, incongruity isn't funny - try puns" approach... https://chatgpt.com/share/68a20eba-b7c0-8011-8644-a7fceacc5d... I suspect a variant of "It couldn't stand being grounded" is probably the one that made me chuckle the most in this exploration.

codechicago277 · 4 months ago
The answer to “why is a raven like a writing desk” is generally considered to be “Poe wrote on both”, which is witty at least, if not laugh-out-loud funny.
codechicago277 commented on LLMs tell bad jokes because they avoid surprises   danfabulich.medium.com/ll... · Posted by u/dfabulich
Wowfunhappy · 4 months ago
...can anyone come up with a legitimately funny punchline for "Why did the sun climb a tree?" I feel like I need a human-authored comparison. (With all due respect to OP's daughter, "to get to the sky" isn't cutting it.)

I'm not entirely sure that a good response exists. I thought GPT-5's "to demand photon credit from the leaves" was very mildly funny; maybe that's the best that can be done?

codechicago277 · 4 months ago
Because it was tired of setting
codechicago277 commented on Replit AI deletes entire database during code freeze, then lies about it   twitter.com/jasonlk/statu... · Posted by u/FiddlerClamp
codechicago277 · 5 months ago
The fault lies entirely with the human operator for not understanding the risks of tying a model directly to the prod database. There's no excuse for this, especially without backups.

To immediately turn around and try to bully the LLM the same way you would bully a human shows what kind of character this person has, too. Of course the LLM is going to agree with you and accept blame; it's literally trained to do that.

u/codechicago277

Karma: 8490 · Cake day: March 12, 2017