22c commented on FFmpeg to Google: Fund us or stop sending bugs   thenewstack.io/ffmpeg-to-... · Posted by u/CrankyBear
22c · a month ago
Sidenote from the article, but TIL Mark Atwood is no longer at Amazon.
22c commented on Nvidia buys $5B in Intel   tomshardware.com/pc-compo... · Posted by u/stycznik
scrlk · 3 months ago
> For personal computing, Intel will build and offer to the market x86 system-on-chips (SOCs) that integrate NVIDIA RTX GPU chiplets. These new x86 RTX SOCs will power a wide range of PCs that demand integration of world-class CPUs and GPUs.

https://www.intc.com/news-events/press-releases/detail/1750/...

What’s old is new again: back in 2017, Intel tried something similar with AMD (Kaby Lake-G). They paired a Kaby Lake CPU with a Vega GPU and HBM, but the product flopped: https://www.tomshardware.com/news/intel-discontinue-kaby-lak...

22c · 3 months ago
> What’s old is new again

Let's go back even further... I get strong nForce vibes from that extract!

22c commented on Starlink announced a $5/month plan that gives unlimited usage at 500kbits/s   twitter.com/ID_AA_Carmack... · Posted by u/tosh
22c · 4 months ago
Carmack's comments, and the replies in the thread, entirely surprise me.

256kbit/s was pretty much the standard ADSL speed 20 years ago. I remember thinking my friends with 512kbit/s were lucky, and anyone with 1500kbit/s was considered extremely fortunate.

Even so, calls over Skype worked fine, and you could run IRC or MSN Messenger while loading Flash games or downloading MP3s. You could definitely play games like Starcraft, Age of Empires, Quake, UT2004, etc. on a 256k ADSL line. Those plans were also about 8x the price of this plan, not even adjusting for inflation.

Not only that, those lines typically had only 64kbit/s upload. The usefulness of a 500kbit/s up/down line is incredibly high. I think the only reason it might seem less useful now is that web services are no longer optimised to be usable at dial-up speeds the way they were 20 years ago.

With the right setup and having feeds/content download asynchronously rather than "on-demand", 500kbit/s is still plenty of internet by today's standards.
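As a rough sanity check on that (my own back-of-the-envelope numbers, not from the thread):

```python
# Transfer-time arithmetic for a 500 kbit/s link.
# Payload sizes are illustrative guesses, not measurements.
LINK_KBPS = 500  # kilobits per second, both directions

payloads_mb = {
    "text-heavy web page": 0.5,
    "5-minute MP3": 5.0,
    "60-minute podcast": 60.0,
}

for name, size_mb in payloads_mb.items():
    seconds = size_mb * 8000 / LINK_KBPS  # 1 MB = 8000 kilobits
    print(f"{name}: {seconds:.0f} s")
```

A 60 MB podcast lands in about 16 minutes, which is a non-issue if it downloads in the background.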

22c commented on Holographic ribbon aims to oust magnetic tape with 50-year life span and 200TB   tomshardware.com/pc-compo... · Posted by u/freddier
duskwuff · 5 months ago
That's a clever theory, but the company specifically described it as having "zero energy storage costs".
22c · 5 months ago
Does it mean they can be stored at room temperature, in humid conditions, etc.? I.e., requiring no HVAC, dehumidifiers, or whatever else might be needed to reliably store archival media?

That's my charitable interpretation.

22c commented on Tools: Code Is All You Need   lucumr.pocoo.org/2025/7/3... · Posted by u/Bogdanp
22c · 6 months ago
On the idea of replacing oneself with a shell script: I think there's nothing stopping people from replacing their use of an LLM with an LLM-generated "shell script" (and it should probably be encouraged).

Using an LLM to count how many Rs are in the word strawberry is silly. Using it to write a script to reliably determine how many <LETTER> are in <WORD> is not so silly.
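As a sketch of what such a generated script might look like (the function name and CLI shape are my own, not from the comment):

```python
import sys

def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a single letter in a word."""
    return word.lower().count(letter.lower())

if __name__ == "__main__":
    # Usage: python count_letter.py strawberry r  ->  3
    word, letter = sys.argv[1], sys.argv[2]
    print(count_letter(word, letter))
```

Generate it once, eyeball it, and every future "how many <LETTER> in <WORD>" question is answered deterministically.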

The same goes for any repeated task you'd naively have an LLM perform.

I think that is essentially what the article is getting at, but it's got very little to do with MCP. Perhaps the author has more familiarity with "slop" MCP tools than I do.

22c commented on Show HN: TokenDagger – A tokenizer faster than OpenAI's Tiktoken   github.com/M4THYOU/TokenD... · Posted by u/matthewolfe
kevmo314 · 6 months ago
Nice work! I tried something similar a while back: https://github.com/kevmo314/tokie

The takeaway I also found was that the running cost was really dominated by pretokenization (the regex). It's cool to see that you found a faster way to run the regex, but have you tried comparing the performance of just swapping out the regex engine and leaving the actual BPE to tiktoken? I wonder if that is upstreamable?

22c · 6 months ago
There is at least some awareness already when it comes to the performance of the regex engine:

https://github.com/openai/tiktoken/blob/main/src/lib.rs#L95-...
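For what it's worth, a rough way to see how much of the total encode time the pretokenization regex accounts for. This leans on tiktoken's private `_pat_str` attribute (it holds the pretokenization pattern in current releases, but isn't a stable API), and the comparison is only indicative, since `encode` redoes its own pretokenization internally:

```python
import time
import regex  # third-party engine; the pattern uses \p{...} classes
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
pat = regex.compile(enc._pat_str)  # private attribute; may change
text = open("sample.txt").read()   # any reasonably large text file

t0 = time.perf_counter()
pieces = pat.findall(text)         # pretokenization only
t1 = time.perf_counter()
enc.encode(text, disallowed_special=())  # pretokenization + BPE
t2 = time.perf_counter()

print(f"pretokenize only: {t1 - t0:.3f}s ({len(pieces)} pieces)")
print(f"full encode:      {t2 - t1:.3f}s")
```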

22c commented on LLM code generation may lead to an erosion of trust   jaysthoughts.com/aithough... · Posted by u/CoffeeOnWrite
tomhow · 6 months ago
We considered tl;dr summaries off-topic well before LLMs were around. That hasn't changed. Please respond to the writer's original words, not a summarized version, which could easily miss important details or context.
22c · 6 months ago
I read the article, summarised the extremely lengthy points using AI, and then replied to that for the benefit of context.

The HN submission has been editorialised since it was submitted; it originally said "Yes, I will judge you for using AI...", and a lot of the early replies were dismissive based on the title alone.

22c commented on LLM code generation may lead to an erosion of trust   jaysthoughts.com/aithough... · Posted by u/CoffeeOnWrite
Loic · 6 months ago
I am asking my team to flag git commits that involve a lot of LLM/agent use with something like:

[ai]: rewrote the documentation ...

This helps us put on another set of "glasses" when we later review the code.

22c · 6 months ago
I think it's a good idea, though it does disrupt some of the traditional workflows.

If you use AI as tab-complete but it's what you would've done anyway, should you flag it? I don't know; there's plenty to think about when it comes to the right amount of disclosure.

I certainly wish that at our company people could flag (particularly) large commits as coming from a tool rather than a person, but I guess the idea is that the person is still responsible for whatever the tool generates.

The problem is that it's incredibly enticing for overworked engineers to have AI do large (in diff size) but boring tasks that they'd typically get very little recognition for (e.g. ESLint migrations).
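Not from the thread, but one sketch of how that kind of disclosure could be nudged: a `commit-msg` hook that reminds the author to add an `[ai]:` tag on unusually large commits. The 500-line threshold and the warn-only behaviour are my assumptions:

```python
#!/usr/bin/env python3
# .git/hooks/commit-msg -- a reminder, not a gate.
import re
import subprocess
import sys

msg = open(sys.argv[1]).read()  # git passes the commit message file path

# Rough proxy for "a lot of LLM/agent use": a large staged diff.
stat = subprocess.run(
    ["git", "diff", "--cached", "--shortstat"],
    capture_output=True, text=True,
).stdout
changed = sum(int(n) for n in re.findall(r"(\d+) (?:insertion|deletion)", stat))

if changed > 500 and not msg.lstrip().startswith("[ai]:"):
    sys.stderr.write(
        "hint: large commit; prefix the message with '[ai]: ' "
        "if it was mostly LLM/agent generated.\n"
    )
sys.exit(0)  # never blocks the commit
```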

22c commented on Gemini CLI   blog.google/technology/de... · Posted by u/sync
eisbaw · 6 months ago
Can't we standardize on AGENTS.md instead of all these tool-specific CLAUDE.md and now GEMINI.md files?

I just symlink them to AGENTS.md now.
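A tiny sketch of that setup in Python rather than plain `ln -s` (file names as in the comment):

```python
# One-time setup: point tool-specific prompt files at a shared AGENTS.md.
from pathlib import Path

for name in ("CLAUDE.md", "GEMINI.md"):
    link = Path(name)
    if not link.is_symlink() and not link.exists():
        link.symlink_to("AGENTS.md")  # same as: ln -s AGENTS.md CLAUDE.md
```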

22c · 6 months ago
Hmm kinda makes sense to keep them separate because the agents perform differently, right?

You might want to tell Claude not to write so many comments, but tell Gemini not to reach for Kotlin so much, or something.

A unified approach might be nice, but using the same prompt for all of the LLM "coding tools" is probably not going to be as nice as having prompts tailored for each specific tool.

u/22c

Karma: 1471 · Cake day: September 6, 2017