https://www.intc.com/news-events/press-releases/detail/1750/...
What’s old is new again: back in 2017, Intel tried something similar with AMD (Kaby Lake-G). They paired a Kaby Lake CPU with a Vega GPU and HBM, but the product flopped: https://www.tomshardware.com/news/intel-discontinue-kaby-lak...
Let's go back even further... I get strong nForce vibes from that extract!
256kbit/s was pretty much the standard ADSL speed 20 years ago. I remember thinking my friends with 512kbit/s were lucky, and 1500kbit/s was considered extremely fortunate.
Even so, Skype calls worked fine, and you could run IRC or MSN Messenger while loading Flash games or downloading MP3s. You could definitely play games like StarCraft, Age of Empires, Quake, or UT2004 on a 256k ADSL line. Those plans were also about 8x the price of this plan, not even adjusting for inflation.
Not only that, those lines typically had only 64kbit/s of upload. A 500kbit/s up/down line is incredibly useful. I think the only reason it might seem less useful now is that web services are no longer optimised to be usable at dial-up speeds the way they were 20 years ago.
With the right setup, having feeds/content download asynchronously rather than "on-demand", 500kbit/s is still plenty of internet by today's standards. Back-of-envelope: 500kbit/s is about 62.5KB/s, or roughly 225MB per hour, which is plenty for prefetching mail, feeds, and podcasts.
That's my charitable interpretation.
Using an LLM to count how many Rs are in the word strawberry is silly. Using it to write a script to reliably determine how many <LETTER> are in <WORD> is not so silly.
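Something like this is what I mean, as a trivial sketch (the function name is mine, not anything standard):

    # The kind of one-off script an LLM can write reliably, versus
    # counting characters "in its head".
    def count_letter(word: str, letter: str) -> int:
        """Count case-insensitive occurrences of `letter` in `word`."""
        return word.lower().count(letter.lower())

    print(count_letter("strawberry", "r"))  # -> 3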
The same goes for many repeated tasks you'd have an LLM naively perform.
I think that is essentially what the article is getting at, but it's got very little to do with MCP. Perhaps the author has more familiarity with "slop" MCP tools than I do.
My takeaway as well was that the running cost is really dominated by pretokenization (the regex). It's cool to see that you found a faster way to run the regex, but have you tried comparing the performance of just swapping out the regex engine and leaving the actual BPE to tiktoken? I wonder if that's upstreamable?
https://github.com/openai/tiktoken/blob/main/src/lib.rs#L95-...
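For anyone who wants to see the split locally, a rough sketch (the pattern below is the well-known GPT-2-style one, used purely for illustration, and sample.txt is whatever corpus you have handy):

    # Rough timing split: pretokenization regex vs. full encode.
    # Assumes the `regex` and `tiktoken` packages are installed.
    import time
    import regex
    import tiktoken

    # GPT-2-style pretokenization pattern, for illustration only.
    PAT = regex.compile(
        r"'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+"
    )

    enc = tiktoken.get_encoding("gpt2")
    text = open("sample.txt").read()

    t0 = time.perf_counter()
    pieces = PAT.findall(text)          # pretokenization only
    t1 = time.perf_counter()
    tokens = enc.encode(text)           # pretokenization + BPE merges
    t2 = time.perf_counter()

    print(f"pretokenize only: {t1 - t0:.3f}s ({len(pieces)} pieces)")
    print(f"full encode:      {t2 - t1:.3f}s ({len(tokens)} tokens)")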
The HN title has been editorialised since submission; it originally said "Yes, I will judge you for using AI..." and a lot of the early replies were dismissive based on the title alone.
[ai]: rewrote the documentation ...
This helps us put on another set of "glasses" when we later review the code.
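For what it's worth, a commit-msg hook can nudge people toward the tag. A hedged sketch in Python (the [ai]: prefix is just this thread's convention, and the 500-line threshold is made up):

    #!/usr/bin/env python3
    # Hypothetical .git/hooks/commit-msg script: warn (non-blocking)
    # when a large commit lacks the "[ai]:" tag discussed above.
    import re
    import subprocess
    import sys

    message = open(sys.argv[1]).read()

    # Staged diff size, e.g. "3 files changed, 120 insertions(+), ..."
    stat = subprocess.run(
        ["git", "diff", "--cached", "--shortstat"],
        capture_output=True, text=True,
    ).stdout
    m = re.search(r"(\d+) insertion", stat)
    insertions = int(m.group(1)) if m else 0

    if insertions > 500 and not message.startswith("[ai]:"):
        print("note: 500+ inserted lines and no [ai]: tag -- tool-assisted?",
              file=sys.stderr)
    sys.exit(0)  # advisory only, never block the commit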
If you use AI as tab-complete but it's what you would've done anyway, should you flag it? I don't know; there's plenty to think about when it comes to what the right amount of disclosure is.
I certainly wish that at our company people could flag (particularly) large commits as coming from a tool rather than a person, but I guess the idea is that the person is still responsible for whatever the tool generates.
The problem is that it's incredibly enticing for over-worked engineers to have AI do tasks that are large (in diff terms) but boring, and that they'd typically get very little recognition for (e.g. ESLint migrations).
I just symlink to AGENTS.md now.
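Something like this, as a sketch (which per-tool file names apply depends on your tools; CLAUDE.md/GEMINI.md here are assumptions):

    # Hypothetical helper: keep one shared AGENTS.md and symlink the
    # per-tool prompt file names to it.
    import os

    SHARED = "AGENTS.md"
    TOOL_FILES = ["CLAUDE.md", "GEMINI.md"]  # adjust for your tools

    for name in TOOL_FILES:
        if not os.path.lexists(name):
            os.symlink(SHARED, name)
            print(f"linked {name} -> {SHARED}")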
You might want to tell Claude not to write so many comments, but tell Gemini not to reach for Kotlin so much, or something.
A unified approach might be nice, but using the same prompt for all of the LLM "coding tools" is probably not going to work as well as having prompts tailored to each specific tool.