Readit News
jerrygenser commented on GPT-5.2   openai.com/index/introduc... · Posted by u/atgctg
hbarka · 9 days ago
A year ago Sundar Pichai declared code red; now it's Sam Altman declaring code red. How the tables have turned. I think Google's acquisition of Windsurf and Kevin Hou correlates with their level-up.
jerrygenser · 8 days ago
The acquisition of Noam Shazeer to supercharge their flagship Gemini model line made a bigger impact, I think.

To make the argument that it was Kevin Hou, we would need to see Antigravity, their new IDE, being key. I think the crown jewels are the Gemini models.

jerrygenser commented on Await Is Not a Context Switch: Understanding Python's Coroutines vs. Tasks   mergify.com/blog/await-is... · Posted by u/remyduthu
jerrygenser · 24 days ago
A common use of asyncio is a server framework like FastAPI that schedules tasks. I used such a framework for a while before realizing that I needed to call create_task for concurrency within a single request handler.
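A minimal sketch of the distinction (delays and function names are illustrative): awaiting coroutines one after another runs them sequentially, while `create_task` schedules them on the event loop so they overlap during I/O waits.

```python
import asyncio
import time

async def fetch(delay: float) -> float:
    # Stand-in for an I/O-bound call (e.g. a DB or HTTP request).
    await asyncio.sleep(delay)
    return delay

async def sequential() -> float:
    # Awaiting each coroutine directly runs them back to back: ~0.2s total.
    start = time.perf_counter()
    await fetch(0.1)
    await fetch(0.1)
    return time.perf_counter() - start

async def concurrent() -> float:
    # create_task schedules both immediately, so their sleeps overlap: ~0.1s total.
    start = time.perf_counter()
    t1 = asyncio.create_task(fetch(0.1))
    t2 = asyncio.create_task(fetch(0.1))
    await t1
    await t2
    return time.perf_counter() - start

print(f"sequential: {asyncio.run(sequential()):.2f}s")
print(f"concurrent: {asyncio.run(concurrent()):.2f}s")
```

Inside a request handler the same rule applies: `await`ing two helper coroutines in a row gives no concurrency within that request.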
jerrygenser commented on Pyrefly: Python type checker and language server in Rust   pyrefly.org/?featured_on=... · Posted by u/brianzelip
f311a · 2 months ago
Yeah, there are now 3 competitors and they're all written in Rust:

- zuban

- ty (from ruff team)

- pyrefly

One year ago, we had none of them, only slow options.

jerrygenser · 2 months ago
Basedpyright is not Rust, but it's a fork of Pyright with added features that are otherwise locked into VS Code.
jerrygenser commented on Litestream v0.5.0   fly.io/blog/litestream-v0... · Posted by u/emschwartz
simjnd · 3 months ago
AFAIK the problem of N+1 isn't necessarily one more DB query, but one more network roundtrip. So if for each page of your app you have an API endpoint that provides exactly all of the data required for that page, it doesn't matter how many DB queries your API server makes to fulfill that request (provided that the API server and the DB are on the same machine).

This is essentially what GraphQL does: instead of crafting super-tailored API endpoints for each of your screens, you use its query language to ask for the data you want, and it queries the DB for you and gets the data back in a single network roundtrip from the user's perspective.

(Not an expert, so I trust comments to correct what I got wrong)

jerrygenser · 3 months ago
You still have to write the resolvers for GraphQL. I've seen N+1 with GraphQL if you don't actually use the DataLoader + batching pattern, or if you use it incorrectly.
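A toy sketch of the batching idea (the `UserLoader` class, table, and names are all hypothetical, not a real DataLoader library): keys requested by sibling resolvers in the same event-loop tick are collected and resolved with one batched query instead of N.

```python
import asyncio

# Fake table; one call to batch_load_users stands in for one DB round trip.
USERS = {1: "ada", 2: "grace", 3: "linus"}
QUERY_LOG: list[list[int]] = []

async def batch_load_users(ids: list[int]) -> list[str]:
    QUERY_LOG.append(ids)  # one batched query, not len(ids) queries
    return [USERS[i] for i in ids]

class UserLoader:
    """Collects keys requested in the same tick, resolves them in one batch."""
    def __init__(self):
        self._pending: dict[int, asyncio.Future] = {}

    def load(self, key: int) -> asyncio.Future:
        if not self._pending:
            # Flush after the current tick so sibling resolvers can enqueue first.
            loop = asyncio.get_running_loop()
            loop.call_soon(lambda: asyncio.ensure_future(self._flush()))
        loop = asyncio.get_running_loop()
        return self._pending.setdefault(key, loop.create_future())

    async def _flush(self):
        pending, self._pending = self._pending, {}
        values = await batch_load_users(list(pending))
        for fut, val in zip(pending.values(), values):
            fut.set_result(val)

async def main():
    loader = UserLoader()
    # Three sibling "resolvers" each asking for a user: one DB query total.
    return await asyncio.gather(loader.load(1), loader.load(2), loader.load(3))

names = asyncio.run(main())
print(names, QUERY_LOG)
```

Used incorrectly, e.g. constructing a fresh loader per field instead of per request, each `load` flushes alone and you are back to N separate queries.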
jerrygenser commented on Weaponizing Ads: How Google and Facebook Ads Are Used to Wage Propaganda Wars   medium.com/@eslam.elsewed... · Posted by u/bhouston
NickC25 · 3 months ago
It's very sad, and very telling that our biggest corporations have become suckups to the reactionary side of the right wing and continue to carry the water for the most degenerate and attention-seeking members of said right wing.

None of these nutcases offer true help to society (note: neither do the extreme leftists, just so we're clear that I'm not team red or team blue), and it does no good that our corporations are actively picking a side.

jerrygenser · 3 months ago
Corporations are picking the side that's in power. If team blue were in power, they would pick blue. Corporations are (usually) not moral or inherently politically motivated, other than to the extent of optimizing short-term shareholder value.
jerrygenser commented on Deploying DeepSeek on 96 H100 GPUs   lmsys.org/blog/2025-05-05... · Posted by u/GabrielBianconi
brilee · 4 months ago
For those commenting on cost per token:

This throughput assumes 100% utilization. A bunch of things raise the cost at scale:

- There are no on-demand GPUs at this scale. You have to rent them for multi-year contracts. So you have to lock in some number of GPUs for your maximum throughput (or some sufficiently high percentile), not your average throughput. Your peak throughput at west coast business hours is probably 2-3x higher than the throughput at tail hours (east coast morning, west coast evenings)

- GPUs are often regionally locked due to data processing issues + latency issues. Thus, it's difficult to utilize these GPUs overnight because Asia doesn't want their data sent to the US and the US doesn't want their data sent to Asia.

These two factors mean that GPU utilization comes in at 10-20%. Now, if you're a massive company that spends a lot of money on training new models, you could conceivably slot in RL inference or model training to happen in these off-peak hours, maximizing utilization.

But for those companies purely specializing in inference, I would _not_ assume that these 90% margins are real. I would guess that even when it seems "10x cheaper", you're only seeing margins of 50%.
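The utilization argument above can be put in back-of-the-envelope numbers (all figures here are illustrative assumptions, not measured rates): if you pay for a reserved GPU around the clock but only serve a fraction of its peak throughput on average, cost per token scales inversely with utilization.

```python
# Illustrative assumptions, not real pricing or benchmark figures.
hourly_gpu_cost = 2.0              # $/GPU-hour, assumed reserved rate
peak_tokens_per_hour = 1_000_000   # throughput at 100% utilization, assumed

def cost_per_million_tokens(utilization: float) -> float:
    # You pay for the GPU 24/7, but only serve `utilization`
    # of its peak token throughput on average.
    return hourly_gpu_cost / (peak_tokens_per_hour * utilization) * 1_000_000

print(cost_per_million_tokens(1.00))  # the headline benchmark number: 2.0
print(cost_per_million_tokens(0.15))  # at 15% real-world utilization: ~13.3
```

So at the 10-20% utilization cited above, the real cost per token is roughly 5-10x the headline figure, which is how an apparent 90% margin can shrink toward 50%.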

jerrygenser · 4 months ago
Re the overnight hours: that's why some providers offer batch-tier jobs at 50% off, which return results within up to 12 or 24 hours, for non-interactive use cases.
jerrygenser commented on Show HN: Envoy – Command Logger   github.com/heyyviv/envoy... · Posted by u/heyviv
nvader · 4 months ago
In charity, I think there is actually a product opportunity for improvements to the standard shell histfile.

I've often been frustrated by my history not being easily shared between concurrent terminals, difficulties in searching, and lack of other metadata such as timestamp, duration and exit code.

Although I suspect this repo was vibe-coded so far, I think there's a promising problem to solve here.

jerrygenser · 4 months ago
Why do you suspect it was vibe coded? There are 2 substantive files that are each less than 100 lines...

Also the readme doesn't have the usual emojis for every bullet point.

jerrygenser commented on Intel CEO Letter to Employees   morethanmoore.substack.co... · Posted by u/fancy_pantser
colin_mccabe · 5 months ago
Intel is not funded by venture capital.
jerrygenser · 5 months ago
You could remove venture and it would still be accurate
jerrygenser commented on Coding with LLMs in the summer of 2025 – an update   antirez.com/news/154... · Posted by u/antirez
LeafItAlone · 5 months ago
>I am afraid that in a few years, that will no longer be possible (as in most programmers will be so tied to a paid LLM

As of now, I’m seeing no lock-in for any LLM. With tools like Aider, Cursor, etc., you can switch on a whim. And with Aider, I do.

That’s what I currently don’t get in terms of investment. Companies (in many instances, VCs) are spending billions of dollars and tomorrow someone else eats their lunch. They are going to need to determine that method of lock-in at some point, but I don’t see it happening with the way I use the tools.

jerrygenser · 5 months ago
They can lock in users by subsidizing the price if you use their tool, while making the default price higher for wrappers. This can draw people from wrappers that support multiple models to the specific CLI that supports the proprietary model.
jerrygenser commented on Musk's xAI pressed employees to install surveillance software on personal laptops   businessinsider.com/xai-p... · Posted by u/c420
v5v3 · 5 months ago
Elon Musk declared that none of his employees could work remotely. So this is an admission that he has failed to achieve this goal.
jerrygenser · 5 months ago
I think this is likely for mass data labelers, including in other countries or other low-cost areas.

u/jerrygenser
Karma: 623 · Cake day: February 1, 2022
About: meet.hn/city/us-Boston