kevindamm commented on Manim: Animation engine for explanatory math videos   github.com/3b1b/manim... · Posted by u/pykello
0_____0 · 18 hours ago
It's pretty wild to me (I do hardware) that data goods like code can rot the way they do. If my electronics designs sit for a couple of years, they'll need changes to deal with parts obsolescence and the like if I want to make new units.

If you did want your software project to run the same as today when compiled/interpreted 10 years from now, what would you have to reach for to make it 'rot-resistant'?

kevindamm · 18 hours ago
The biggest factor is changes in dependencies, so a good defense against bitrot is to reduce dependencies as much as possible and to limit the ones you keep to those that are exceptionally stable.

This greatly limits velocity, though, and it still doesn't help against security issues that need patching... or against a stable dependency that made assumptions about hardware which has since changed. But with the right selection of dependencies and some attention to good design, it is possible to write code that is durable against bitrot. It's just very uncommon.
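
A minimal sketch of that defensive posture for a Python project (the package names, versions, and interpreter pin here are hypothetical, and a real project would keep them in a lock file):

    # Fail fast if the runtime environment has drifted from the versions the
    # code was written against (requires Python 3.8+ for importlib.metadata).
    import sys
    from importlib import metadata

    PINNED = {            # hypothetical pins, for illustration only
        "numpy": "1.24.4",
        "requests": "2.31.0",
    }

    def check_pins() -> None:
        # hypothetical: the interpreter version the project was written against
        if sys.version_info[:2] != (3, 11):
            print("warning: written against Python 3.11, running", sys.version.split()[0])
        for name, wanted in PINNED.items():
            try:
                found = metadata.version(name)
            except metadata.PackageNotFoundError:
                raise SystemExit(f"missing dependency: {name}=={wanted}")
            if found != wanted:
                raise SystemExit(f"{name}: expected {wanted}, found {found}")

    if __name__ == "__main__":
        check_pins()
        print("environment matches the pinned versions")

It doesn't stop the rot, but it turns silent drift into a loud failure, which is about the best you can do once the dependency list is already minimal.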


kevindamm commented on Unification (2018)   eli.thegreenplace.net/201... · Posted by u/asplake
maweki · 6 days ago
Datalog does not need/do unification for rule evaluation, as it is just matching variables to values in a single direction. Body literals are matched against the database and the substitutions are applied to the rest of the rule and the head.

Prolog does unification of the proof goal with the rule head. It's necessary there but not with datalog.

kevindamm · 6 days ago
While bottom-up evaluation is the norm in Datalog, it is not a requirement, and there are Datalog engines that evaluate top-down or all-at-once.

But I still agree with you about the capitalization of variables. Some formats, like KIF, use a '?' prefix instead, and I've seen some HRF notations that mix the '?' prefix with non-KIF formatting (the ':-' operator and '.' terminator).
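
A toy sketch of the matching-vs-unification distinction in Python (flat terms only, no occurs check, not how any real engine is written), using the convention that capitalized strings are variables:

    def is_var(t):
        return isinstance(t, str) and t[:1].isupper()

    def match(pattern, fact, subst=None):
        """One-directional matching, as in bottom-up Datalog evaluation:
        the fact is ground, so only the pattern side can contain variables."""
        if len(pattern) != len(fact):
            return None
        subst = dict(subst or {})
        for p, f in zip(pattern, fact):
            if is_var(p):
                if subst.setdefault(p, f) != f:
                    return None
            elif p != f:
                return None
        return subst

    def unify(a, b, subst=None):
        """Full unification, as in Prolog goal/rule-head resolution:
        variables may appear on either side and can bind to each other."""
        if len(a) != len(b):
            return None
        subst = dict(subst or {})
        def walk(t):
            # follow variable bindings to their current value
            while is_var(t) and t in subst:
                t = subst[t]
            return t
        for x, y in zip(a, b):
            x, y = walk(x), walk(y)
            if x == y:
                continue
            if is_var(x):
                subst[x] = y
            elif is_var(y):
                subst[y] = x
            else:
                return None
        return subst

    # match(("parent", "X", "bob"), ("parent", "alice", "bob")) -> {"X": "alice"}
    # unify(("parent", "X", "bob"), ("parent", "alice", "Y"))   -> {"X": "alice", "Y": "bob"}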

kevindamm commented on Vibe coding tips and tricks   github.com/awslabs/mcp/bl... · Posted by u/mooreds
colesantiago · 6 days ago
> "Thoroughly review and understand the generated code"

That isn't vibe coding though.

Vibe coding means you don't look at the code; you look at the front/back end and accept what you see if it meets your expectations visually. The code doesn't matter in this case: you "see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works." [1]

If the changes are good enough, i.e. the front/backend works well, then it's good and you keep prompting.

You rely on and give in to the ~vibes~. [1]

[1] https://x.com/karpathy/status/1886192184808149383

kevindamm · 6 days ago
Maybe the zeroth tip is "never go full vibe coder."

It can be tempting, but even small changes to the code can have a large impact, often in subtle ways, so it should at least be scanned, and read carefully in the critical parts. Especially as you near the point where hosting it on AWS is practical.

Even in Karpathy's original quote that you referenced, he says "It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding." Maybe it should have been called vibe prompting.

kevindamm commented on AI doesn't lighten the burden of mastery   playtechnique.io/blog/ai-... · Posted by u/gwynforthewyn
lordnacho · 6 days ago
In recent weeks, I have made huge changes to my main codebase. Pretty sweeping stuff, things that would have taken me months to get right. Both big, architecturally important things and minor housekeeping tasks.

None of it could have been done without AI, yet I am somehow inclined to agree with the sentiment in this article.

Most of what I've done lately is, in some strange sense, just typing quickly. I already knew what changes I wanted; in fact, I had them documented in Trello. I already understood what the code did and where I wanted it to go. What was stopping me, then?

Actually, it was the dread loop of "aw gawd, gotta move this to here, change this, import that, see if it builds, goto [aw gawd]". To be fair, it isn't just typing, there ARE actual decisions to be made as well, but all with a certain structure in mind. So the dread loop would take a long long time.

To the extent that I'm able to explain the steps, Claude has been wonderful. I can tell it to do something, it will make a suggestion, and I will correct it. Very little toil, and being able to make changes quickly actually opens up a lot of exploration.

But I wonder if AI had not been invented at this point in my career, where I would be. I wonder what I will teach my teenager about coding.

I've been using a computer to write trading systems for a long time now. I've slogged through some very detailed little things over the years. Everything from how networks function to how C++ compiles things, how various exchanges work at the protocol level, how the strats make money.

I consider it all a very long apprenticeship.

But the timing of AI, for me, is very special. I've worked through a lot of minutiae in order to understand stuff, and just as it's coming together in a greater whole, I get this tool that lets me zip through the tedium.

I wonder, is there a danger to giving the tool to a new apprentice? If I send my kid off to learn coding using the AI, will it be a mess? Or does he get to mastery in half the time of his father? I'm not sure the answer is so obvious.

kevindamm · 6 days ago
I think it depends on the effort put into reading and understanding the code being generated. The article assumes that extended use of LLMs leads to a shift toward _not reviewing and validating the code_. It points that out as the wrong thing to do, but then goes on assuming that's what you do.

I think it's like reading books... there are various degrees of reading comprehension, from skimming for content/tone, to reading for enjoyment, to studying for application, to active analysis like preparing for a book club. There isn't a prescribed depth of reading for any document, but context and audience have an effect on what depth is appropriate.

With code, if it's a one-off utility whose output can be verified for a specific application, sure, just look at its output and skip the code, full vibing, especially if it doesn't have any authority on its own. But if it's business critical, it had better still have at least two individuals read over it, along with the other CONTRIBUTING-related policies.

And it's not just complacence... this illusion of mastery cuts even harder for those who haven't really developed the skills to review the code. And some code is just easier to write than it is to read, or easier to generate with confidence using an automaton or some macro-level code; LLMs often won't produce that, preferring to inline sub-solutions over and over in various styles, unless you have enough mastery to ask for the appropriate abstraction and would still rather not just write the deterministic version yourself.

   > I wonder, is there a danger to giving the tool to a new apprentice? If I send my kid off to learn coding using the AI, will it be a mess?
As long as your kid develops the level of mastery needed to review the code, and takes the time to review it, I don't think it'll be a mess (or at least not one too large to debug). A lot of this depends on how role models use the tool, I think. If it's always a nonchalant "oh, we'll just re-roll or change the prompt and see" then I doubt there will be mastery. If the response is "hmm, *opens debugger*" then it's much more likely.

I don't think there's anything wrong with holding back on access to LLM code generators, but that's like saying no access to any modern LLMs at this point, so maybe that's too restrictive; tbh I'm glad that's not a decision I'm having to make for any young person these days. But separate from that, you can still encourage a status quo of due diligence for any code that gets generated.

kevindamm commented on OpenAI Progress   progress.openai.com... · Posted by u/vinhnx
reasonableklout · 7 days ago
I'd love to know more about how OpenAI (or Alec Radford et al.) even decided GPT-1 was worth investing more into. At a glance the output is barely distinguishable from Markov chains. If in 2018 you told me that scaling the algorithm up 100-1000x would lead to computers talking to people/coding/reasoning/beating the IMO I'd tell you to take your meds.
kevindamm · 7 days ago
Transformers can train models with much larger parameter counts than other model architectures (with the same amount of compute and time), so they have an evident advantage in terms of being able to scale. Whether scaling the models up to multi-billion parameters would eventually pay off was still a bet, but it wasn't a wild bet out of nowhere.
kevindamm commented on The Factory Timezone   data.iana.org/time-zones/... · Posted by u/todsacerdoti
DaiPlusPlus · 11 days ago
What happens if someone makes an honest mistake (or is just malicious) and makes their NTP server run fast?
kevindamm · 11 days ago
It's system-dependent, but Linux will generally speed up or slow down the advancement of the clock until the delta passed to adjtime(...) has been applied:

https://linux.die.net/man/2/clock_gettime

   This clock is not affected by discontinuous jumps in the system time (e.g., if the system administrator manually changes the clock), but is affected by the incremental adjustments performed by adjtime(3) and NTP.
https://linux.die.net/man/3/adjtime

   If the adjustment in delta is positive, then the system clock is speeded up by some small percentage (i.e., by adding a small amount of time to the clock value in each second) until the adjustment has been completed. If the adjustment in delta is negative, then the clock is slowed down in a similar fashion.
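
A quick way to see the difference between a stepped clock and a slewed one on a Linux box (just a sketch in Python; these clock constants are only available on Unix):

    import time

    # CLOCK_REALTIME can jump if someone sets the wall clock; CLOCK_MONOTONIC
    # only moves forward, with adjtime()/NTP corrections applied as a gradual
    # rate change rather than a step.
    def watch(interval=1.0, iterations=10):
        last_real = time.clock_gettime(time.CLOCK_REALTIME)
        last_mono = time.clock_gettime(time.CLOCK_MONOTONIC)
        for _ in range(iterations):
            time.sleep(interval)
            real = time.clock_gettime(time.CLOCK_REALTIME)
            mono = time.clock_gettime(time.CLOCK_MONOTONIC)
            # a step shows up as a large spike; an adjtime-style slew shows up
            # as a small, steady skew between the two deltas
            skew = (real - last_real) - (mono - last_mono)
            print(f"skew over {interval:.1f}s: {skew * 1e6:+.1f} microseconds")
            last_real, last_mono = real, mono

    if __name__ == "__main__":
        watch()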


kevindamm commented on Nearly 1 in 3 Starlink satellites detected within the SKA-Low frequency band   astrobites.org/2025/08/12... · Posted by u/aragilar
jocaal · 11 days ago
Einstein developed relativity from mathematical reasoning. A major influence was the Michelson-Morley experiment, which was done entirely on Earth. Relativity was developed in the early 1900s and the first radio telescope was made in the 1930s. Also, orbital mechanics uses mostly Newtonian mechanics, and the communication of satellites uses radio waves, which were understood well before Einstein. There is no relativity involved. Literally everything you said is factually incorrect.
kevindamm · 11 days ago
Satellites experience time dilation because of their orbital velocity and because the gravitational field at their altitude is significantly weaker. Without accounting for this, the clock drift would become unmanageable, and Newtonian models are insufficient to correct for it.
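
For a sense of scale, the textbook back-of-the-envelope numbers for GPS (Starlink orbits much lower, so its numbers are smaller but still nonzero):

    import math

    C  = 299_792_458.0      # speed of light, m/s
    GM = 3.986004418e14     # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6.371e6       # mean Earth radius, m
    R_ORBIT = 2.6571e7      # GPS orbital radius (~20,200 km altitude), m

    v = math.sqrt(GM / R_ORBIT)                      # circular orbital velocity
    special = -v**2 / (2 * C**2)                     # kinematic dilation: clock runs slow
    general = (GM / C**2) * (1/R_EARTH - 1/R_ORBIT)  # weaker gravity aloft: clock runs fast

    us_per_day = 86_400 * 1e6
    print(f"special relativity: {special * us_per_day:+.1f} us/day")
    print(f"general relativity: {general * us_per_day:+.1f} us/day")
    print(f"net drift:          {(special + general) * us_per_day:+.1f} us/day")

That comes out to roughly -7 and +45 microseconds per day, for a net of about +38 us/day; left uncorrected, that is on the order of 10 km of ranging error per day for GPS.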

You're right that the majority of Einstein's theories were ultimately thought experiments, but getting the parameters correct involved a lot of measurement and experimentation to get to where tech like GPS and Starlink can be accurate. We were also looking at faraway stars for centuries before Einstein, which gave him the environment for his ideas to be discussed, and that is what I was including in my phrasing "looking at things light-years away."

I wasn't saying it to start an argument, though. I wanted to counter the rather dismal view of "why do we need radio telescopes."

kevindamm commented on Nearly 1 in 3 Starlink satellites detected within the SKA-Low frequency band   astrobites.org/2025/08/12... · Posted by u/aragilar
jocaal · 11 days ago
Why do we need radio telescopes? Satellite communications are infinitely more useful for people on Earth than some research papers about things light-years away.
kevindamm · 11 days ago
Ironically, those satellites would not be able to communicate effectively without the understanding of relativity that was obtained by looking at things light-years away.
