Readit News
flamedoge commented on U.S. government takes 10% stake in Intel   cnbc.com/2025/08/22/intel... · Posted by u/givemeethekeys
miohtama · 12 days ago
There is only 1 winner and 1 loser: Intel.

It's the only chip manufacturer "left" in the US. The argument is national security: the US expects China to invade Taiwan and this will kill TSMC in the process.

Whether this will happen or not can be debated, but this is what the government expects.

flamedoge · 12 days ago
so.. shouldn't the US take a stake in TSMC instead?
flamedoge commented on Ask HN: Any insider takes on Yann LeCun's push against current architectures?    · Posted by u/vessenes
Lerc · 6 months ago
I feel like we're stacking naive misinterpretations of how LLMs function on top of one another here. Grasping gradient descent and autoregressive generation can give you a false sense of confidence. It is like knowing how transistors make up logic gates and believing you know more about CPU design than you actually do.

Rather than inferring from how you imagine the architecture working, you can look at examples and counterexamples to see what capabilities they have.

One misconception is that predicting the next word means there is no internal idea on the word after next. The simple disproof of this is that models put 'an' instead of 'a' ahead of words beginning with vowels. It would be quite easy to detect (and exploit) behaviour that decided to use a vowel word just because it somewhat arbitrarily used an 'an'.

Models predict the next word, but they don't just predict the next word. They generate a great deal of internal information in service of that goal. Placing limits on their abilities by assuming the output they express is the sum total of what they have done is a mistake. The output probability is not what it thinks; it is a reduction of what it thinks.

One of Andrej Karpathy's recent videos talked about how researchers showed that models do have an internal sense of not knowing the answer, but fine tuning on question answering did not give them the ability to express that knowledge. Finding information the model did and didn't know, then fine tuning it to say "I don't know" for cases where it had no information, allowed the model to generalise and express "I don't know".

flamedoge · 6 months ago
It literally doesn't know how to handle 'I don't know' and needs to be taught. Fascinating.
flamedoge commented on OpenAI O3-Mini   openai.com/index/openai-o... · Posted by u/johnneville
s_dev · 7 months ago
Currently on the internet, people skip the article and go straight to the comments. Soon people will skip the comments and go straight to an AI summary, reading neither the original article nor the comments.
flamedoge · 7 months ago
Soon people will read other people's summaries that they copied from an AI summary on the web.
flamedoge commented on Amazon workers to strike at multiple US warehouses during busy holiday season   reuters.com/technology/am... · Posted by u/petethomas
ramon156 · 9 months ago
Had to buy some simple toolkit today. It was 5 bucks on Amazon, but I decided to find a local store that sells it. Same price btw.

I think we should be a bit more aware of the impact of ordering everything through Amazon. Not only regarding delivery, but also the message it sends to local stores.

flamedoge · 9 months ago
Order from the local store, order from Amazon, use whichever arrives first, and return the still-new one to whichever was more expensive.
flamedoge commented on Meta Movie Gen   ai.meta.com/research/movi... · Posted by u/brianjking
intended · a year ago
Human attention doesn’t get freed up by creating more content. It gets consumed.

In all your examples -

1) Yes. It was a good thing

2) Yes. It is now a thing done to learn how to draw, and a niche skill

3) Yes, yes, yes.

If people are bemoaning the devaluing of certain activities, yup, it's true. It happens. There are fewer horses than there were yesterday.

Certain forms of activity get devalued. They are replaced by an alternative that creates surplus. But life goes on to bigger things.

The same with GenAI. Content is increasingly easy to create at scale. This reduced cost of production applies to both useful content and pollution.

Except if finding valid information is made harder, then life becomes more complex and we don't go on to bigger and better things.

The abundance of fabricated content that is indistinguishable from authentic content means that authentic content is devalued, and that any content consumed must now wait to be verified.

It increases the cost of trusting information, which reduces the overall value of the network. It's like the lemons problem in used-car markets.

This is the looming problem. Hopefully something appears that mitigates the worst case scenarios, however the medium case and even bad case are well and truly alive.

flamedoge · a year ago
Who's stopping you from an Amish lifestyle? The problem seems to be that people want authentic, hand-made 'art' but at the price of mass-manufactured tech.
flamedoge commented on NASA acknowledges it cannot quantify risk of Starliner propulsion issues   arstechnica.com/space/202... · Posted by u/geerlingguy
windexh8er · a year ago
That's because Starliner isn't an MVP. It's a vehicle designed to transport humans. You don't send humans to space in an MVP.
flamedoge · a year ago
So what's it good for, sending monkeys?
flamedoge commented on Noexcept affects libstdc++’s unordered_set   quuxplusone.github.io/blo... · Posted by u/signa11
z_open · a year ago
Somewhat unrelated, but it's worth pointing out that noexcept is especially relevant for move semantics.

In fact, most C++ developers believe that throwing an exception in a noexcept function is undefined behavior. It is not: the behavior is defined to call std::terminate. That raises the question of how it knows to call it; effectively, noexcept functions carry hidden try/catch machinery to decide whether they should. The result is that noexcept can hurt performance, which is surprising behavior. C++ is just complicated.
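A minimal sketch of both points, assuming any C++11 compiler (Widget and boom are made-up names): throwing out of a noexcept function reaches std::terminate, and a noexcept move constructor is what lets std::vector move rather than copy when it reallocates.

    #include <cstdio>
    #include <vector>

    struct Widget {
        Widget() = default;
        Widget(const Widget&) { std::puts("copy"); }
        // Drop the noexcept here and vector reallocation falls back to copying,
        // because std::move_if_noexcept refuses to risk a throwing move.
        Widget(Widget&&) noexcept { std::puts("move"); }
    };

    void boom() noexcept {
        throw 42;  // not UB: the standard requires std::terminate to be called
    }

    int main() {
        std::vector<Widget> v(2);
        v.reserve(v.capacity() + 1);  // force reallocation: prints "move" twice
        boom();                       // terminates the program here
    }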

flamedoge · a year ago
What.. noexcept throws an exception..? What kind of infinite wisdom led to this?
flamedoge commented on CrowdStrike Official RCA is now out [pdf]   crowdstrike.com/wp-conten... · Posted by u/Sarkie
cptskippy · a year ago
Yeah, my read was that they changed an interface to include an optional parameter but never actually tested the underlying code by providing said optional parameter.

The bug in clients (sensors) wasn't due to regex; the regex was in their integration unit testing, which also had a bug and was never supplying the 21st parameter to the client code.
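Roughly the shape of it, as a purely hypothetical sketch (none of these names are from the actual RCA): the content definition grows to 21 fields, but the test-side generator only ever produces 20, so the read of the 21st field is never exercised until production.

    #include <cstdio>
    #include <string>
    #include <vector>

    constexpr int kExpectedFields = 21;  // interface updated to expect 21 inputs

    // Stand-in for the buggy test-side generator: it never supplies field #21.
    std::vector<std::string> make_test_inputs() {
        return std::vector<std::string>(20, "wildcard");
    }

    int main() {
        const auto inputs = make_test_inputs();
        for (int i = 0; i < kExpectedFields; ++i) {
            // The real sensor did an unchecked read of the 21st element;
            // .at() throws std::out_of_range here instead of reading past the end.
            std::printf("field %d = %s\n", i, inputs.at(i).c_str());
        }
    }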

flamedoge · a year ago
Regex probably isn't a good thing in kernel boot code, considering it's NP-hard.
flamedoge commented on C++'s `noexcept` can sometimes help or hurt performance   16bpp.net/blog/post/noexc... · Posted by u/def-pri-pub
aw1621107 · a year ago
> I argued then that if instead noexcept violations were undefined, we could ignore all this, and instead just treat it as the pure optimization it was being marketed as (ie, help prove a region can't throw, so we can elide entire try/catch blocks etc).

Do you know if the reasoning for originally switching noexcept violations from UB to calling std::terminate was documented anywhere? The corresponding meeting minutes [0] describe the vote to change the behavior but not the reason(s). There's this bit, though:

> [Adamczyk] added that there was strong consensus that this approach did not add call overhead in quality exception handling implementations, and did not restrict optimization unnecessarily.

Did that view not pan out since that meeting?

[0]: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n30...
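One way to see where the disagreement comes from, as a rough sketch for a compiler explorer (may_throw and wrapper are made-up names; assuming GCC or Clang at -O2): because an exception escaping a noexcept function must reach std::terminate, the compiler keeps an exception-handling path around any call it cannot prove non-throwing, whereas if violations were plain UB that path could simply be dropped.

    void may_throw();        // opaque to the optimizer: might throw

    void wrapper() noexcept {
        // Call plus a terminate landing pad: if may_throw() throws,
        // control must be routed to std::terminate.
        may_throw();
    }

    void plain_wrapper() {
        // No extra path needed: an exception simply propagates to the caller.
        may_throw();
    }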

flamedoge · a year ago
> did not restrict optimization unnecessarily.

Well, clearly there is a cost.

flamedoge commented on Zen 5's 2-ahead branch predictor: how a 30 year old idea allows for new tricks   chipsandcheese.com/2024/0... · Posted by u/matt_d
cpldcpu · a year ago
My understanding is that they do not predict the target of the next branch but of the one after that (2-ahead). This is probably much harder than next-branch prediction, but it does allow initiating code fetch much earlier to feed even deeper pipelines.
flamedoge · a year ago
I wonder what they had before this change. Branch predictor hardware may not have accounted for depth beyond a single conditional branch? But the pipeline was probably always kept filled, just unpredicted.

u/flamedoge

Karma: 275 · Cake day: January 20, 2015