Readit News
calepayson commented on A sane but bull case on Clawdbot / OpenClaw   brandon.wang/2026/clawdbo... · Posted by u/brdd
webdood90 · 5 days ago
can't imagine getting this riled up over lowercase text. some serious fist-shaking-at-clouds energy.

it's meant to convey a casual, laid back tone - it's not that big of a deal.

calepayson · 5 days ago
> to normal humans, they look ridiculous, but they think they're cool and they're not harming anyone so i just leave them to it.

fixed it for you! now it’s in a casual, laid back tone.

calepayson commented on California is free of drought for the first time in 25 years   latimes.com/california/st... · Posted by u/thnaks
gosub100 · 20 days ago
how to be a novelist: use 10^n words when 10^(n-1) will do.
calepayson · 20 days ago
I think there are authors to whom this definitely applies, but I don't think Steinbeck is one of them.

It feels analogous to complaining about how Michelangelo painted the Sistine Chapel on the ceiling instead of on a canvas where we wouldn’t have to crane our necks to see it.

calepayson commented on Our approach to advertising   openai.com/index/our-appr... · Posted by u/rvz
calepayson · 24 days ago
> In the coming weeks, we’re also planning to start testing ads in the U.S. for the free and Go tiers, so more people can benefit from our tools with fewer usage limits or without having to pay.

This single sentence probably took so many man-hours. I completely understand why they’re trying to integrate ads, but this feels like a generational run for a company founded with the purpose of safely researching superintelligence.

calepayson commented on AWS CEO says replacing junior devs with AI is 'one of the dumbest ideas'   finalroundai.com/blog/aws... · Posted by u/birdculture
sfpotter · 2 months ago
It may well be. Books have tons of useful expository material that you may not find in docs. A library has related books sitting in close proximity to one another. I don't know how many times I've gone to a library looking for one thing but ended up finding something much more interesting. Or to just go to the library with no end goal in mind...
calepayson · 2 months ago
Speaking as a junior, I’m happy to do this on my own (and do!).

Conversations like this are always well-intentioned, and friction truly is super useful to learning. But the ‘…’ in these conversations always seems to be implying that we should inject friction.

There’s no need. I have peers who aren’t interested in learning at all. Adding friction to their process doesn’t force them to learn. Meanwhile adding friction to the process of my buddies who are avidly researching just sucks.

If your junior isn’t learning, it likely has more to do with them just not being interested (which, hey, I get it) than with some flaw in your process.

Start asking prospective hires what their favorite books are. It’s the easiest way to find folks who care.

calepayson commented on Beej's Guide to Learning Computer Science   beej.us/guide/bglcs/... · Posted by u/amruthreddi
idkwhatiamdoing · 2 months ago
Nice overview. As someone who is self-taught (with a degree in statistics) and has a full-time job that doesn't constantly require programming, I struggle with learning fundamentals alongside doing actual projects. If anyone has advice in this regard, it would be very welcome.
calepayson · 2 months ago
I’m a student right now with a background in a non-CS field, so I struggle with the impostor-syndrome/fundamentals double whammy. The advice I’ve found most valuable is to basically cosplay as someone who’s a complete pro. What would that person read for news? How do they practice their craft? What books do they read in their free time?

Cosplay that role long enough and you become it. I’m still learning but it has been a great signpost for me over the last couple years.

Cheers and keep crushing it!

calepayson commented on Student perceptions of AI coding assistants in learning   arxiv.org/abs/2507.22900... · Posted by u/victorbuilds
abenga · 2 months ago
If sitting in the back and cheating guarantees a good grade, that's a shit school, honestly. The school seems to know that people cheat, and how, but nothing is being done. Randomize seating, have a proctor stand in the back of the class, suspend/expel people who are caught cheating.
calepayson · 2 months ago
Ya, it drives me crazy. I know someone who scored an 81% on a midterm where a few people scored in the high 90s. The professor told them that, among the people they didn’t suspect of cheating, they got the highest score. No curve, no prosecution of the cheaters.
calepayson commented on Student perceptions of AI coding assistants in learning   arxiv.org/abs/2507.22900... · Posted by u/victorbuilds
quesera · 2 months ago
FWIW: When I was in undergrad, the students who showed up only for exams and sat in the back of the room were not cheating, and still ended up with some of the best scores.

They had opted out of the lectures, believing that they were inefficient or ineffective (or just poorly scheduled). Not everyone learns best in a lecture format. And not everyone is starting with the same level of knowledge of the topic.

Also:

> A 4.0 and a good score on an online assessment used to be a great signal that someone was competent

... this has never been true in my experience, as a student or hiring manager.

calepayson · 2 months ago
> FWIW: When I was in undergrad, the students who showed up only for exams and sat in the back of the room were not cheating, and still ended up with some of the best scores.

For many classes this is still the case, and I lump these folks in with the great students. They still care about learning the material.

My experience has been that these students are super common in required undergrad classes and not at all common in the graduate-level electives where I’ve seen this happening.

> ... this has never been true in my experience, as a student or hiring manager.

Good to know. What’ve you focused on when you’re hiring?

calepayson commented on Student perceptions of AI coding assistants in learning   arxiv.org/abs/2507.22900... · Posted by u/victorbuilds
calepayson · 2 months ago
> Our findings reveal that students perceived AI tools as helpful for grasping code concepts and boosting their confidence during the initial development phase. However, a noticeable difficulty emerged when students were asked to work un-aided, pointing to potential over reliance and gaps in foundational knowledge transfer.

Speaking as someone studying CS/ML, this is dead on, but I don't think the side effects of it are discussed enough. Frankly, cheating has never been more incentivized, and it's breaking the higher education system (at least, that's my experience; things might be different at top-tier schools).

Just about every STEM class I've taken has had some kind of curve. Sometimes individual assignments are curved, sometimes the final grade; sometimes the curve isn't a curve but some sort of extra credit. Ideally it should be feasible to score 100% in a class, but I think that actually takes a shocking amount of resources. In reality, professors have research or jobs to attend to, and the same goes for students. Ideally there are sections and office hours, and the professor is deeply conscious of giving out assignments that faithfully represent what students might be tested on. But often this isn't the case: the school can only afford two hours of TA time a week, the professors have obligations to research and work, and the students have the same. And so, historically, the curve has been there to make up for the discrepancy between ideals and reality. It's there to make sure that great students get the grades they deserve.

LLMs have turned the curve on its head.

When cheating was hard, the curve was largely successful. The great students got great grades, the good students got good grades, those who were struggling usually managed a C+/B-, and those who were checked out or not putting in the time failed. The folks who cheated tended to be the struggling students, but because cheating wasn't that effective, maybe they went from a failing grade to just passing the class. A classic example is sneaking identities into a calculus test. Sure, it helps if you don't know the identities, but not knowing the identities is a great sign that you didn't practice enough. Without that practice, they still tended to do poorly on the test.

But now cheating is easy, and I think it should change the way we look at grades. This semester, not one of my classes is curved, because there is always someone who gets 100%. Coincidentally, that person is never who you would expect. The students who attend every class, ask questions, go to office hours, and do their assignments without LLMs tend to score in the B+/A- range on tests and quizzes. The folks who set the curve on those assignments tend to only show up for tests and quizzes, and they sit in the far back corners when they do. On just about every test I take now, there's a mad competition for those back desks. In some classes, people just dispense with the desk and take a chair to the back of the room.

Every one of the great students I know is murdering themselves to try to stay in the B+/A- range.

A common refrain when people talk about this is "cheaters only cheat themselves," and while I think that has historically been mostly true, it's bullshit now. Cheating is just too easy; the folks who care are losing the arms race. My most impressive peers are struggling to get past the first round of interviews. Meanwhile, the folks who don't show up to class and casually get perfect scores are also getting perfect scores on the online assessments. Almost all the competent people I know are getting squeezed out of the pipeline before they can compete on level footing.

We've created a system that massively incentivizes cheating and then invented the ultimate cheating tool. A 4.0 and a good score on an online assessment used to be a great signal that someone was competent. I think that over these next few years, until universities and hiring teams adapt to LLMs, we're going to start seeing perfect scores as a red flag.

calepayson commented on Ask HN: How do you handle logging and evaluation when training ML models?    · Posted by u/calepayson
calepayson · 3 months ago
For now, the plan is to move from Jupyter back to a text editor. Jupyter is very forgiving of mistakes. The model didn't work? Change some parameters and rerun the training cell. This is amazing for new folks, who are being bombarded by new information, and (it sounds like) for experienced folks who have already developed great habits around ML projects. But I think intermediate folks need a little friction to help hammer home why best practice is best practice.

I'm hoping the text editor + project directory approach helps force ML projects away from a single file and towards some sort of codified project structure. Sometimes it just feels like there's too much information in one file and it becomes hard to mentally assign it to a location (a bit like reading a physical copy of a tough book vs. a Kindle copy). I've put a rough sketch of the structure I have in mind below. Any advice or thoughts on this would be appreciated!
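To give a rough sketch of the shape I'm imagining (purely illustrative; the file and function names here are placeholders, not anything I've actually settled on), each training run would get its own directory with the config and a metrics log written to disk, instead of everything living in notebook cells:

    # train.py: sketch of a per-run logging layout (names are placeholders)
    import json
    import time
    from pathlib import Path

    def new_run_dir(base="runs"):
        """Create a fresh directory for this run, e.g. runs/2024-01-01_12-00-00/."""
        run_dir = Path(base) / time.strftime("%Y-%m-%d_%H-%M-%S")
        run_dir.mkdir(parents=True, exist_ok=True)
        return run_dir

    def log_metrics(run_dir, step, **metrics):
        """Append one JSON line per step so every run leaves a record on disk."""
        with open(run_dir / "metrics.jsonl", "a") as f:
            f.write(json.dumps({"step": step, **metrics}) + "\n")

    if __name__ == "__main__":
        config = {"lr": 1e-3, "epochs": 3}  # would normally come from a config file
        run_dir = new_run_dir()
        (run_dir / "config.json").write_text(json.dumps(config, indent=2))

        for epoch in range(config["epochs"]):
            train_loss = 1.0 / (epoch + 1)  # stand-in for a real training loop
            val_loss = train_loss * 1.1     # stand-in for a real eval pass
            log_metrics(run_dir, step=epoch, train_loss=train_loss, val_loss=val_loss)

The specifics don't matter much. The point is that rerunning something no longer silently overwrites the last attempt: every run leaves behind a config and a metrics file that I can compare later.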

u/calepayson

Karma: 165 · Cake day: November 24, 2023