masterjack commented on Meta Ray-Ban Display   meta.com/blog/meta-ray-ba... · Posted by u/martpie
motorest · 6 months ago
> Why do you lie? Clearly this is not the case if you look at the user's submission and comment history

Do you think someone's comment history link is an obscure secret no one can access?

> Clearly this is not the case (..)

Oh really? Please explain in your own words why you believe this is not the case.

masterjack · 6 months ago
> Please explain in your own words why you believe this is not the case.

It wasn’t 2023: the last post was 11 months ago and the last comment 8 months ago, which is a typical level of lurking.

masterjack commented on Anthropic agrees to pay $1.5B to settle lawsuit with book authors   nytimes.com/2025/09/05/te... · Posted by u/acomjean
slg · 6 months ago
Maybe small compared to the money raised, but it is in fact enormous compared to the money earned. Their revenue was under $1b last year and they projected themselves as likely to make $2b this year. This payout equals their average yearly revenue of the last two years.
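Taking the quoted figures at face value, the arithmetic behind that last sentence checks out:

```latex
\[
\frac{\$1\text{B} + \$2\text{B}}{2} = \$1.5\text{B}
\]
```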
masterjack · 6 months ago
I thought they were projecting $10B, and that a few months ago they said they had already grown from a $1B to a $4B run rate.
masterjack commented on New knot theory discovery overturns long-held mathematical assumption   scientificamerican.com/ar... · Posted by u/baruchel
Someone · 6 months ago
I also do not understand the intuition behind the assumption. To tie two knots together, you have to make a cut in both of them, and you have two ways to tie them together again. Doesn’t that introduce some opportunity to get rid of some complexity of the knots?
masterjack · 6 months ago
Remarkably, there’s really just one way to tie them together; you can always manipulate the knot to move between the different variants.
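In standard notation (a hedged aside, for oriented knots, not taken from the article): the connected sum is well-defined and commutative, and Seifert genus is additive under it, which is the classical reason the two halves can't cancel each other's complexity:

```latex
\[
K_1 \# K_2 \simeq K_2 \# K_1, \qquad
g(K_1 \# K_2) = g(K_1) + g(K_2)
\]
```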
masterjack commented on Anthropic raises $13B Series F   anthropic.com/news/anthro... · Posted by u/meetpateltech
asdffdasy · 6 months ago
Your crystal ball needs calibration. This round alone was 14pct (183/13)... so the dilution was likely over 20pct.
masterjack · 6 months ago
13/183 = 0.071, so how can it be 20pct for this round??
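As a quick sanity check, assuming the $13B was raised at a $183B post-money valuation:

```latex
\[
\text{dilution} = \frac{\text{new capital}}{\text{post-money valuation}}
                = \frac{13}{183} \approx 7.1\%
\]
```

(The 14pct figure above comes from inverting the ratio: 183/13 ≈ 14, which isn't the dilution.)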
masterjack commented on Anthropic raises $13B Series F   anthropic.com/news/anthro... · Posted by u/meetpateltech
m101 · 6 months ago
This round started at a $5bn target and ended at $13bn. When this sort of thing happens it's normally because the company 1) wants to hit the "hot" market, and 2) is uncertain about its ability to raise revenues at higher valuations in the future.

Whatever it is, the signal it sends about Anthropic insiders is negative for AI investors.

Other comments having read a few hundred comments here:

- there is so much confusion, uncertainty, and fanciful thinking that it reminds me of the other bubbles that existed when people had to stretch their imaginations to justify valuations

- there is increasing spend on training models, and decreasing improvements in new models. This does not bode well

- wealth is an extremely difficult thing to define. It's defined vaguely through things like cooperation and trade. Ultimately these LLMs actually do need to create "wealth" to justify the massive investments made. If they don't do this fast, this house of cards is going to fall, fast.

- having worked in finance and spoken to finance types for a long time: they are not geniuses. They are far from it. Most people went into finance because of an interest in money. Just because these people have $13bn of other people's money at their disposal doesn't mean they are any smarter than people orders of magnitude poorer. Don't assume they know what they are doing.

masterjack · 6 months ago
I might agree if it were a 20% dilution round, but not if they are increasing from 3% to 7% dilution. Being so massively oversubscribed is a bullish sign; bad companies would be struggling to fill out their round.
masterjack commented on Gemini 2.5 Deep Think   blog.google/products/gemi... · Posted by u/meetpateltech
mettamage · 8 months ago
Wait, how does this work? If you load in one LLM of 40 GB, then to load in four more LLMs of 40 GB still takes up an extra 160 GB of memory right?
masterjack · 8 months ago
It will typically be the same 40 GB model loaded in, but called with many different inputs simultaneously
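A minimal sketch of the idea; `load_model` and `generate` are hypothetical stand-ins, not any real inference API:

```python
# One model instance in memory, many simultaneous calls against it.
from concurrent.futures import ThreadPoolExecutor

def load_model(path: str):
    # Stand-in for the one-time ~40 GB weight load, shared by every call.
    return {"weights": path}

def generate(model, prompt: str) -> str:
    # Each call reuses the same weights; only the per-request state
    # (activations, KV cache) is separate.
    return f"response to {prompt!r}"

model = load_model("model-40gb.bin")  # loaded once, not once per call

prompts = [f"approach #{i} to the problem" for i in range(4)]
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda p: generate(model, p), prompts))

for r in results:
    print(r)
```

Memory scales with the per-request state, not with the number of calls, which is why four parallel "thinkers" don't need four copies of the weights.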
masterjack commented on The Math Is Haunted   overreacted.io/the-math-i... · Posted by u/danabramov
7373737373 · 8 months ago
Does Lean have some sort of verification mode for untrusted proofs that guarantees that a given proof certainly does not use any "sorry" (however indirectly), and does not add to the "proving power" of some separately given fixed set of axioms with further axioms or definitions?
masterjack · 8 months ago
Yes, you can `#print axioms` to make sure no axioms were added, and make sure it compiles with no warnings or errors. There's also a SafeVerify utility that checks more thoroughly and catches some tricks that RL systems have found.
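A minimal Lean 4 sketch of the first check (`suspicious` and `clean` are made-up theorem names):

```lean
-- This "proof" leans on `sorry`; Lean compiles it, but with a warning.
theorem suspicious : 1 + 1 = 2 := by sorry

-- Prints: 'suspicious' depends on axioms: [sorryAx]
#print axioms suspicious

-- An honest proof by reduction.
theorem clean : 1 + 1 = 2 := rfl

-- Prints: 'clean' does not depend on any axioms
#print axioms clean
```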

u/masterjack

Karma: 237 · Cake day: June 5, 2014
About
Jack of all trades, master of some. Full-stack, ranging from assembly to React to machine learning.