Readit News
codemog commented on Don't post generated/AI-edited comments. HN is for conversation between humans   news.ycombinator.com/news... · Posted by u/usefulposter
fuzzer371 · 9 hours ago
Yup. And they all sound like slop. Read the papers, comprehend the papers, don't make someone else's computer do it for you.
codemog · 8 hours ago
Anti-tech contrarian sentiment happens with every new technology. Someone older than you probably said the same thing about the internet.


codemog commented on Learnings from paying artists royalties for AI-generated art   kapwing.com/blog/learning... · Posted by u/jenthoven
petterroea · 2 days ago
How is it not arrogant to be firm in your belief, even if signals say otherwise? If I believe it is OK not to shower, and everyone around me complains about it, is it not arrogant of me to ignore the signals because "they just don't understand yet"?

I think a much more useful question is whether some arrogance is necessary to succeed. I personally think it is. But we are discussing a post mortem here, and the author is (in my opinion) clearly beating around the bush and using "the time wasn't right" to hide what may be uncomfortable truths.

Is a post mortem valuable if it doesn't address these face first? I am not the one with all the answers here, but what I am used to in mature tech teams is that the uncomfortable parts are usually the most important in any post mortem.

There are plenty of stories about companies that failed because the timing was wrong, only to see another company succeed in their place later on. That doesn't mean every failure can be chalked up to "the timing was wrong" - you are putting a lot of weight on society adjusting to your belief. Consider that venture capital often invests in hundreds of founders like this, betting that at least one of them wasn't wrong. That's not statistically in your favor.

It is OK (in fact it is valuable) to fail and conclude that your signals may have been wrong. There's a reason some venture capital funds prefer investing in people who have failed before.

codemog · 2 days ago
Personally, I don’t know how you can say the timing was never right and will never be right at any point in the future. That frankly seems impossible, unless it was something like a B2B SaaS that gouges out your eyeballs, but I guess we’ll agree to disagree.
codemog commented on Learnings from paying artists royalties for AI-generated art   kapwing.com/blog/learning... · Posted by u/jenthoven
petterroea · 2 days ago
> The timing wasn’t right. We depended on artists helping us to promote the platform, and they didn’t.

There's a certain arrogance to believing the timing "simply wasn't right". It looks really bad if you try it with any recent controversy:

* "The timing wasn't right to charge people for heated car seats"

* "The timing wasn't right to make Photoshop a subscription service"

* "The timing wasn't right to increase fees"

It's a way of talking yourself out of confronting the fact that what you are making may, inherently, be disliked. The cited survey even seems to have been read as favourably as possible:

> Surveys consistently showed that consumers believed artists deserved payment when AI generated content in their style.

This doesn't mean people want artists' styles to be generated by AI. It could mean they think it's horrible, but if it happens, artists should at least be compensated for it. In fact, the quoted survey even says 43% believe companies should ban copying artists' styles. I could make the exact opposite argument with the same data:

"Many consumers believe companies should ban copying styles, and this may be a more common opinion than measured as most people have no experience with modern AI tools and therefore no chance to have made an opinion yet. What is known is that the majority believe that if artists were to be copied, they should at least be compensated"

edit: formatting, typo

codemog · 2 days ago
It’s not arrogant to be firm in your beliefs. You’re not arrogant for believing the timing is never right. You may even be 100% right, but you don’t have to belittle or put down the other side. In this case, they already lost, what more do you want?
codemog commented on No, it doesn't cost Anthropic $5k per Claude Code user   martinalderson.com/posts/... · Posted by u/jnord
ymaws · 2 days ago
How confident are you in the opus 4.6 model size? I've always assumed it was a beefier model with more active params than Qwen3 97B (17B active on the forward pass)
codemog · 2 days ago
Also curious if any experts can weigh in on this. I would guess in the 1 trillion to 2 trillion range.
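The cost question above hinges on active parameters rather than total size: a mixture-of-experts model routes each token through only a fraction of its weights. A rough sketch of why that matters (all figures are illustrative assumptions, not Anthropic's actual numbers):

```python
# Back-of-envelope: per-token inference compute scales with ACTIVE params,
# not total params. A dense forward pass costs roughly 2 FLOPs per
# active parameter per token.

def flops_per_token(active_params: float) -> float:
    """Approximate forward-pass FLOPs for one generated token."""
    return 2 * active_params

# Hypothetical comparison: a 1T dense model vs. a 1T-total MoE
# with 50B active params per token (both numbers are assumptions).
dense = flops_per_token(1e12)   # 2e12 FLOPs/token
moe = flops_per_token(50e9)     # 1e11 FLOPs/token

print(dense / moe)  # 20.0 -- the MoE is ~20x cheaper per token here
```

So a model in the "1 to 2 trillion" total range could still serve tokens at a small fraction of the cost a dense model of that size would imply, which is one reason per-user cost estimates based on total parameter count tend to run high.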
codemog commented on Iranians describe scenes of catastrophe after Tehran's oil depots bombed   theguardian.com/world/202... · Posted by u/Red_Tarsius
spiderfarmer · 3 days ago
Surely:

This will make the US safer.

This will make stuff cheaper.

This is a well thought out war.

It will improve the US economy.

It will not destabilise the region.

This will make life better for Americans.

It will in no way make people hate the USA.

codemog · 3 days ago
Great use of tax dollars while the American people face all-time cost-of-living highs, among a plethora of other problems. It's sickening.
codemog commented on How to run Qwen 3.5 locally   unsloth.ai/docs/models/qw... · Posted by u/Curiositry
otabdeveloper4 · 4 days ago
There's diminishing returns bigly when you increase parameter count.

The sweet spot isn't in the "hundreds of billions" range, it's much lower than that.

Anyways your perception of a model's "quality" is determined by careful post-training.

codemog · 4 days ago
Interesting. I see papers where researchers finetune models in the 7–12B range and beat, or remain competitive with, frontier models. I wish I knew how this was possible, or had more intuition about such things. If anyone has paper recommendations, I'd appreciate it.
codemog commented on How to run Qwen 3.5 locally   unsloth.ai/docs/models/qw... · Posted by u/Curiositry
throwdbaaway · 4 days ago
There are Qwen3.5 27B quants in the range of 4 bits per weight, which fits into 16G of VRAM. The quality is comparable to Sonnet 4.0 from summer 2025. Inference speed is very good with ik_llama.cpp, and still decent with mainline llama.cpp.
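The VRAM figure above can be sanity-checked with simple arithmetic (the overhead allowance below is an assumption for illustration; real usage also depends on context length and KV-cache size):

```python
# Sanity check: does a 27B-parameter model at ~4 bits per weight
# fit in 16 GB of VRAM?

def weights_vram_gb(params: float, bits_per_weight: float) -> float:
    """Memory for model weights alone, in gigabytes (1 GB = 1e9 bytes)."""
    return params * bits_per_weight / 8 / 1e9

weights = weights_vram_gb(27e9, 4.0)
print(round(weights, 1))  # 13.5 -- leaves ~2.5 GB of a 16 GB card
                          # for KV cache and runtime buffers
```

So the claim is plausible for moderate context lengths, though long contexts can push the KV cache past what remains.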
codemog · 4 days ago
Can someone explain how a 27B model (quantized, no less) can ever be comparable to a model like Sonnet 4.0, which is likely in the mid-to-high hundreds of billions of parameters?

Is it really just more training data? I doubt it’s architecture improvements, or at the very least, I imagine any architecture improvements are marginal.

codemog commented on We might all be AI engineers now   yasint.dev/we-might-all-b... · Posted by u/sn0wflak3s
roli64 · 6 days ago
Lost me at "I’m building something right now. I won’t get into the details. You don’t give away the idea."
codemog · 6 days ago
It’s kind of funny seeing all the AI hype guys talking about their 10 OpenClaw instances all running and doing work, and when you ask what it is, you can never get a straight answer...

For the record though, I love agentic coding. It deals with the accumulated cruft of software for me.

u/codemog

Karma: 29 · Cake day: March 6, 2026