anonthrowawy commented on Clarifying our pricing (cursor.com/en/blog/june-2...) · Posted by u/twapi
deepdarkforest · 2 months ago
Cursor raised $900M, is losing market share to Claude Code (resorting to poaching two leads from there [1]), AND they're decreasing the value of their product? Huge red flag. They should be able to burn cash like there's no tomorrow. Also, the PR language in this post and the timing (midnight on a US holiday) are not ideal.

This news, coupled with Google raising the price of the new Gemini Flash by 5x, Azure dropping their startup credits, and 2-3 other signals (papers showing RL has also hit a wall for distilling or improving models), is a solid sign that, despite what Sam Altman says, intelligence will NOT soon be too cheap to meter. I think we are starting to see the squeeze from the big players. Interesting. I wonder how many startups are betting on models becoming 5-10x cheaper for their business models to work. If on-device models don't get good, I bet a lot of them are in big trouble.

[1] https://www.investing.com/news/economy-news/anysphere-hires-...

anonthrowawy · 2 months ago
> papers showing RL also hitting a wall

Any reference for this?

anonthrowawy commented on Q-learning is not yet scalable (seohong.me/blog/q-learnin...) · Posted by u/jxmorris12
s-mon · 3 months ago
While I like the blog post, I think the use of unexplained acronyms undermines its opportunity to be useful to a wider audience. Small nit: make sure acronyms and jargon are explained.
anonthrowawy · 3 months ago
I actually think that's what made it crisp.
anonthrowawy commented on Seven replies to the viral Apple reasoning paper and why they fall short (garymarcus.substack.com/p...) · Posted by u/spwestwood
bowsamic · 3 months ago
This doesn't address the primary issue: they had no methodology for choosing puzzles that weren't in the training set, and indeed, while they claimed to have chosen such puzzles, they didn't explain why they believe that. The whole point of the paper was to test LLM reasoning on untrained cases, but there's no reason to expect such puzzles not to be part of the training set, and if you have no way of telling whether they are, your paper is not going to work out.
anonthrowawy · 3 months ago
how could you prove that?
