For those of us who cannot tell, what are the clues?
1. "Q2 2025 revenue above guidance" - Start with fake good news about good Q2 results. Fake because it's baselining on "guidance", which is already low since Wall Street knows Intel is in deep trouble. MBA/Finance types often cherry-pick some (semi-cooked) top-level finance number for good news, even though the whole email is about admitting the company is in deep trouble, announcing layoffs, etc.
2. "We are making hard but necessary decisions to streamline the organization..." - not hard for him, but the people losing their jobs!
3. "We are also on track to implement our return-to-office policy in September" - contract this with later comments about improving culture and empowering engineers!
4. "drive organizational effectiveness and transform our culture" - large companies with ~100k employees don't change their culture, but CEOs love to pretend so. To CEOs, transforming culture usually means making some reporting line changes, directing HR to do do some surveys and "listening sessions", firing teams with low NPS scores and thus forcing people to up their scores on subsequent surveys, and then a few months later declaring victory.
5. "We will eliminate bureaucracy and empower engineers to innovate with greater speed and focus." - for example, by forcing them back to the office? Nothing in this emil indicates actual empowerment.
6. "Strategic Pillars of Growth" - typical MBA speak.
7. "We remain deeply committed to investing in the U.S." ... "To that end, we are further slowing construction in Ohio" - great example of executive double-speak.
8. If you actually parse what this is saying, it's essentially about layoffs, cost-cutting, stopping some investment projects, RTO, and "doubling down" on existing projects like 18A and 14A. No trace of innovation in organizational culture, product design, etc.
9. "I have instituted a policy where every major chip design is reviewed and approved by me before tape-out. This discipline will improve our execution and reduce development costs." - we are improving culture by stating that only the MBA-speak CEO can make good decisions about chip designs, the other 74,999 people are idiots who slow down execution and improve costs!
10. If you look at the "Refine our AI Strategy" section, it's short and contains only obvious things, like "will concentrate our efforts on areas we can disrupt and differentiate, like inference and agentic AI". There is no information here, because of course Intel already lost to Nvidia on training/GPUs, so training isn't a good focus area. But it's pretty shocking that in 2025 there are no actual ideas for what Intel could do in the AI space!
> On my M2 MacBook, the renderer process is now using 6% CPU (down from 15%), and the GPU process is now using 6% CPU and less than 1% GPU (down from 25% and 20%).
This still feels like way too much compute for a tiny animation updating a couple of times a second.
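For scale, a quick back-of-envelope calculation (the ~3.5 GHz core clock and 2 updates/s are assumptions, not measurements):

```python
# Rough cycles burned per animation frame, assuming a ~3.5 GHz core
# and "a couple times a second" meaning ~2 updates/s (both assumptions).
core_hz = 3.5e9
cpu_share = 0.06            # the reported 6% of one core
updates_per_sec = 2
cycles_per_update = core_hz * cpu_share / updates_per_sec
print(f"~{cycles_per_update:,.0f} cycles per frame")  # ~105,000,000
```

That's on the order of 100 million cycles to redraw a tiny animation, which is why 6% still reads as excessive.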
- How did those two guys get promoted to General, if they're incompetent enough to get themselves into a situation where this is a serious problem?
- Given the stated goal of attacking the fortified City, the City's field forces (in the valley between the two Generals) need to be polished off first. Whichever General is the more plausible leader for that (higher seniority, larger forces, better terrain, whatever) should start attacking them, counting on the other General to quickly notice and launch his own attack.
And yet they don’t have much to show for it.
For this use case it's been very useful: it can usually generate close-to-complete solutions, as long as it's one of the major programming languages and a reasonably standard problem. So in general I'm always surprised when people say that LLMs are completely useless for coding --- this is just not true, and I feel sorry for people who shut themselves off from a useful tool.
But even at this small scale, even the best (o3) models sometimes totally fail. Recently I started a series of posts on distributed algorithms [1], and when I was working on the post/code for the Byzantine Generals / Consensus algorithm, o3 --- to my honest surprise --- just totally failed. I tried about 10 different times (both from scratch and by describing the incorrect behaviour of its code), also showing it the original Lamport paper, and it just couldn't get it right... even though the toy implementation is just ~100 LOC, and the actual algorithm portion is maybe 25 LOC. My hypothesis is that there are very few implementations online, and additionally I find the descriptions of the algorithm a bit vague (they interleave the message cascade and the decision logic).
[1] https://bytepawn.com/tag/distributed.html
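For concreteness, here is a minimal sketch of that ~25 LOC core, Lamport's OM(m) oral-messages recursion, in Python. This is my own illustration, not o3's output; the value names and the equivocating-traitor behaviour are simplifying assumptions (a real adversary can lie arbitrarily):

```python
from collections import Counter

def majority(votes, default="RETREAT"):
    """Majority value among votes; ties fall back to the default."""
    top = Counter(votes).most_common()
    if len(top) > 1 and top[0][1] == top[1][1]:
        return default
    return top[0][0]

def om(commander, lieutenants, value, m, traitors):
    """One round of OM(m); returns {lieutenant: value it acts on}."""
    # Step 1: the commander sends its value to every lieutenant.
    # A traitorous commander equivocates (a simple adversary, not the worst case).
    sent = {}
    for i, lt in enumerate(lieutenants):
        if commander in traitors:
            sent[lt] = "ATTACK" if i % 2 else "RETREAT"
        else:
            sent[lt] = value

    if m == 0:
        return sent  # base case: use whatever the commander said

    # Step 2: each lieutenant relays what it received, acting as the
    # commander of a sub-round OM(m-1) with the other lieutenants.
    relayed = {lt: om(lt, [x for x in lieutenants if x != lt], sent[lt], m - 1, traitors)
               for lt in lieutenants}

    # Step 3: each lieutenant decides by majority over its direct value
    # and the values the other lieutenants relayed to it.
    return {lt: majority([sent[lt]] + [relayed[j][lt] for j in lieutenants if j != lt])
            for lt in lieutenants}

# n=4, m=1, traitorous lieutenant G3: loyal G1 and G2 both decide "ATTACK".
print(om("G0", ["G1", "G2", "G3"], "ATTACK", m=1, traitors={"G3"}))
```

With n = 4 and m = 1 (the minimum satisfying n > 3m), the loyal lieutenants reach agreement whether the traitor is a lieutenant or the commander; the recursive relay in step 2 is the "message cascade" that's easy to tangle up with the majority-vote decision in step 3.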