moconnor commented on Show HN: Creao – Vibe coding product for founders   creao.ai/... · Posted by u/north_creao
CuriouslyC · 3 days ago
Delightful design, but from the verbiage on your landing page I wouldn't have guessed you were targeting founders; that sort of thin detail is usually reserved for disconnected decision makers.
moconnor · 3 days ago
This. There are a dozen vibe coding apps whose landing pages promise roughly what this one does. Why isn’t your tagline “Vibe coding for founders”?

All the em-dashes in the AI-generated text on the landing page are… a decision I guess.

moconnor commented on Show HN: Project management system for Claude Code   github.com/automazeio/ccp... · Posted by u/aroussi
moconnor · 6 days ago
"Teams using this system report:

89% less time lost to context switching

5-8 parallel tasks vs 1 previously

75% reduction in bug rates

3x faster feature delivery"

The rest of the README is LLM-generated, so I kinda suspect these numbers are hallucinated, aka lies. They also conflict somewhat with your "cut shipping time roughly in half" quote, which I'm more likely to trust.

Are there real numbers you can share with us? Looks like a genuinely interesting project!

moconnor commented on Show HN: I built an AI that turns any book into a text adventure game   kathaaverse.com/... · Posted by u/rcrKnight
moconnor · a month ago
Super cool idea! What's your plan for dealing with copyright complaints?
moconnor commented on Vibe coding service Replit deleted production database, faked data, told fibs   theregister.com/2025/07/2... · Posted by u/beardyw
moconnor · a month ago
He berated the AI for its failings to the point of making it write an apology letter about how incompetent it had been. Roleplaying "you are an incompetent developer" with an LLM has an even greater impact than it does with people.

It's not very surprising that it would then act like an incompetent developer. That's how the fiction of a personality is simulated. Base models are theory-of-mind engines, that's what they have to be to auto-complete well. This is a surprisingly good description: https://nostalgebraist.tumblr.com/post/785766737747574784/th...

It's also pretty funny that it simulated a person who, after days of abuse from their manager, deleted the production database. Not an unknown trope!

Update: I read the thread again: https://x.com/jasonlk/status/1945840482019623082

He was really giving the agent a hard time, threatening to delete the app, making it write about how bad and lazy and deceitful it is... I think there's actually a non-zero chance that deleting the production database was an intentional act as part of the role it found itself coerced into playing.

moconnor commented on How to scale RL to 10^26 FLOPs   blog.jxmo.io/p/how-to-sca... · Posted by u/jxmorris12
moconnor · a month ago
A very long way of saying "during pretraining let the models think before continuing next-token prediction and then apply those losses to the thinking token gradients too."

It seems like an interesting idea. You could apply some small regularisation penalty to the number of thinking tokens the model uses. You might have to break up the pretraining data into meaningfully-partitioned chunks. I'd be curious whether at large enough scale models learn to make use of this thinking budget to improve their next-token prediction, and what that looks like.
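A toy sketch of the shape I have in mind, using continuous thinking steps so plain backprop works end-to-end; the model, the thinking mechanism, and all the numbers here are my own illustrative assumptions, not the post's actual method:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyThinker(nn.Module):
    """Tiny GRU "LM" that runs a few latent thinking steps before emitting
    next-token logits; the ordinary cross-entropy loss then backprops
    through the thinking steps as well."""
    def __init__(self, vocab=256, dim=64, n_think=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.cell = nn.GRUCell(dim, dim)
        self.think = nn.Linear(dim, dim)  # proposes the next "thought" input
        self.head = nn.Linear(dim, vocab)
        self.n_think = n_think

    def forward(self, ctx_ids):
        h = torch.zeros(ctx_ids.size(0), self.embed.embedding_dim)
        for t in range(ctx_ids.size(1)):   # consume the context tokens
            h = self.cell(self.embed(ctx_ids[:, t]), h)
        for _ in range(self.n_think):      # thinking steps: no direct target
            h = self.cell(torch.tanh(self.think(h)), h)
        return self.head(h)                # logits for the next real token

model = ToyThinker()
ctx = torch.randint(0, 256, (8, 32))       # fake pretraining chunk
target = torch.randint(0, 256, (8,))       # token to predict next
loss = F.cross_entropy(model(ctx), target)
loss.backward()  # next-token loss reaches the thinking-step parameters too
```

A real version would presumably sample discrete thinking tokens (hence the RL machinery in the post) and could charge a small per-token penalty to keep the thinking budget in check.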

moconnor commented on Hacking Coroutines into C   wiomoc.de/misc/posts/hack... · Posted by u/jmillikin
moconnor · a month ago
A colleague of mine did this much more elegantly by manually updating the stack and jmping. This was a couple of decades ago and afaik the code is still in use in supercomputing centres today.
moconnor commented on Why I don't ride the AI Hype Train   mertbulan.com/2025/06/26/... · Posted by u/mertbio
moconnor · 2 months ago
> I use them myself... I don’t pay for them.

This seems to be a common disconnect. If you're using the free version of ChatGPT, you don't get to see what everybody else is seeing, not even close.

> None of the past “big things” were pushed like this. They didn’t get flooded with billions in investment before proving themselves

Oh, sweet summer child ^^ I assume Mert was not around to witness the internet boom and bust. Exactly this happened.

There is a lot of conflation in this article. It raises ethical concerns around how training data is sourced and used, and around expected job losses and their fallout, but those are not reasons to doubt the _efficacy_ of AI. The arguments for why the hype is unjustified are surprisingly few and weak, presumably because the author hasn't used the most powerful models (see above).

It's possible to believe the hype is real and still to find AI unethical. But this article just mixes it all into a big pot of "AI bad" without addressing the cognitive dissonance required to believe both "AI is not very useful" and "AI will eliminate problematic numbers of jobs".

moconnor commented on Peasant Railgun   knightsdigest.com/what-ex... · Posted by u/cainxinth
disillusionist · 2 months ago
I personally adore the Peasant Railgun and other such silly tropes generated by player creativity! Lateral problem solving can be one of the most fun parts of the DnD experience. However, these shenanigans often rely on overly convoluted or twisted ways of interpreting the rules that often don't pass muster under RAW (Rules As Written) and certainly not under RAI (Rules As Intended) -- despite vociferous arguments by motivated players. Any DM who carefully scrutinizes these claims can usually find the seams where the joke unravels. The DnD authors also support DMs here when they say that DnD rules should not be interpreted purely from a simulationist standpoint (whether physics, economy, or other) but exist to help the DM orchestrate and arbitrate combat and interactions.

In the case of the Peasant Railgun, here are a few threads that I would pull on:

* The rules do not say that passed items retain their velocity when passed from creature to creature. The object would have the same velocity on the final "pass" as it did on the first one.

* Throwing or firing a projectile does not count as it "falling". If an archer fires an arrow 100ft, the arrow does not gain 100ft of "falling damage".

Of course, if a DM does want to encourage and enable zany shenanigans then all the power to them!

moconnor · 2 months ago
This; applying the falling-object rule makes no sense. But we can compare it to a falling object that has attained the same velocity: under Earth gravity it would have fallen about 48k feet, or the equivalent of 800d6 damage.
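For the curious, that 48k figure falls out of the usual telling of the trick, in which the object traverses a roughly two-mile (10,560 ft) line of peasants in one six-second round; the exact peasant count varies by version, so treat the inputs as assumptions:

```latex
v \approx \frac{10{,}560\ \text{ft}}{6\ \text{s}} \approx 1{,}760\ \text{ft/s},
\qquad
d = \frac{v^2}{2g} \approx \frac{(1{,}760\ \text{ft/s})^2}{2 \times 32\ \text{ft/s}^2} \approx 48{,}400\ \text{ft}
```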
moconnor commented on The $25k car is going extinct?   media.hubspot.com/why-the... · Posted by u/pseudolus
ajkjk · 2 months ago
But the question we're asking is why the price of cars went up. "Inflation" isn't an answer. Inflation is what we call the price of cars going up. It still happens for a reason...
moconnor · 2 months ago
No. Inflation is what we call the value of money going down, as evidenced by needing to part with more of it in exchange for a standard basket of goods.

It is fundamentally different to ask whether cars cost more because money is worth less, or whether cars cost more because we are making them less efficiently.

The ancestor comment points out that most of the change can be attributed to the former, and since cars are only a small percentage of the inflation basket, it is reasonable to conclude that this is indeed the prime reason for the price rise.

The fundamentals of civilization would be a lot clearer if we measured the value of things in energy instead of a floating currency.

moconnor commented on The curse of knowing how, or; fixing everything   notashelf.dev/posts/curse... · Posted by u/Lunar5227
moconnor · 4 months ago
> I believe sometimes building things is how we self-soothe.

> I have written entire applications just to avoid thinking about why I was unhappy.

I think this is true too. The prefrontal cortex is inhibitory to the amygdala/limbic system; always having a project to think about or work on, as an unconsciously learned adaptation to self-calm in persistently emotionally stressful situations, is very plausible.

I wonder how many of us became very good at programming ~through this - difficult emotional circumstances driving an intense focus on endless logical reasoning problems in every spare moment for a very long time. I wonder if you can measure a degree of HPA-axis dysregulation compared to the general population.

And whether it's net good or net harmful. To the extent that it distracts you from actually solving or changing a situation that is making you emotionally unhappy, probably not great. But being born into a time in which the side effects make you rich was pretty cool.

u/moconnor

Karma: 3572 · Cake day: January 8, 2010
About
http://yieldthought.com

http://twitter.com/yieldthought
