Readit News
bcrl commented on Hard-braking events as indicators of road segment crash risk   research.google/blog/hard... · Posted by u/aleyan
bcrl · 2 days ago
Many of the merge lanes in California are insanely short compared to those in the rest of the world. The worst are the ones with a merge immediately before an overpass and an exit immediately after, where merging and exiting traffic have roughly the width of the overpass in which to change lanes. I found those infuriating when I used to visit friends in the Bay Area. The pattern where I live is the opposite (a long exit lane before the overpass and a long merge lane after it) and provides far better margins of safety.
bcrl commented on End of an era for me: no more self-hosted git   kraxel.org/blog/2026/01/t... · Posted by u/dzulp0d
mzajc · 3 days ago
Yes, the attack is continuous. The rate fluctuates a lot, even within a day. It's definitely an anomaly, because, e.g., from 2025-08-15 to 2025-10-05 I saw zero days with more than 10k requests. Here's a histogram of the past 2 weeks plus today.

  2026-01-28     21'460
  2026-01-29     27'770
  2026-01-30     53'886
  2026-01-31    100'114  #
  2026-02-01    132'460  #
  2026-02-02     73'933
  2026-02-03    540'176  #####
  2026-02-04    999'464  #########
  2026-02-05    134'144  #
  2026-02-06  1'432'538  ##############
  2026-02-07  3'864'825  ######################################
  2026-02-08  3'732'272  #####################################
  2026-02-09  2'088'240  ####################
  2026-02-10    573'111  #####
  2026-02-11  1'804'222  ##################
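The bars above are simply each day's count scaled against the busiest day. A minimal Python sketch of that rendering (using three of the values from the table; the 38-character maximum bar width is an assumption inferred from the widest row):

```python
# Render a scaled ASCII histogram of daily request counts.
# The counts are taken from the table above; the bar width cap
# is assumed, not specified by the original poster.
counts = {
    "2026-02-07": 3_864_825,
    "2026-02-08": 3_732_272,
    "2026-02-09": 2_088_240,
}

MAX_WIDTH = 38  # width of the widest bar, in characters
peak = max(counts.values())

for day, n in sorted(counts.items()):
    bar = "#" * round(n / peak * MAX_WIDTH)
    print(f"{day}  {n:>9,}  {bar}")
```

Each bar is just the day's count divided by the peak, multiplied by the cap, and rounded to a whole number of `#` characters.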

bcrl · 3 days ago
It's plausible that the AI companies have given up storing data for training runs and just stream it off the Internet directly now. At this point it's probably cheaper to stream than to buy more SSDs and HDDs from a supply-constrained market.
bcrl commented on GNU Hurd Is "Almost There" with x86_64, SMP and ~75% of Debian Packages Building   phoronix.com/news/GNU-Hur... · Posted by u/sergiogdr
pjmlp · 10 days ago
Yeah, but we are still far from making it mainstream beyond some key use cases: QNX, INTEGRITY, language runtimes on top of type 1 hypervisors, all kernel extension points being pushed into userspace across Apple, Google, and Microsoft offerings, the Nintendo Switch, ...
bcrl · 7 days ago
Given the tectonic shift in priorities for Linux kernel development over the past decade, I'm willing to bet that many key developers would be more open to a microkernel architecture now than ~25 years ago. CPUs now have hardware features that reduce the overhead of MMU context switches, which removes a significant part of the cost of using isolated address spaces to contain code. The Meltdown and Spectre attacks really forced the security issue, to the point where major performance costs to improve security became acceptable in a way that was not the case in the '90s or '00s.
bcrl commented on 1 kilobyte is precisely 1000 bytes?   waspdev.com/articles/2026... · Posted by u/surprisetalk
dooglius · 11 days ago
The wiki page agrees with parent, "The double-sided, high-density 1.44 MB (actually 1440 KiB = 1.41 MiB or 1.47 MB) disk drive, which would become the most popular, first shipped in 1986"
bcrl · 11 days ago
To make things even more confusing, the high-density floppy introduced on the Amiga 3000 stored 1760 KiB.
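The three figures in the quoted sentence all describe the same 1440 KiB; a quick Python check of the arithmetic shows where the competing numbers come from:

```python
# The "1.44 MB" floppy actually holds 1440 KiB. Converting that one
# figure into binary (MiB) and decimal (MB) units explains the three
# different capacities quoted for the same disk.
KIB = 1024
capacity_bytes = 1440 * KIB                  # 1,474,560 bytes

mib = capacity_bytes / (1024 ** 2)           # binary megabytes (MiB)
mb = capacity_bytes / (1000 ** 2)            # decimal megabytes (MB)
marketing = capacity_bytes / (1024 * 1000)   # the hybrid "1.44 MB" unit

print(f"{mib:.2f} MiB, {mb:.2f} MB, marketed as {marketing:.2f} MB")
# 1.41 MiB, 1.47 MB, marketed as 1.44 MB
```

The marketing figure mixes a binary kilobyte with a decimal thousand (1024 × 1000 bytes per "MB"), which is why it matches neither MiB nor MB.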
bcrl commented on Nvidia to shift 2028 chip production to Intel, reshaping TSMC strategy   digitimes.com/news/a20260... · Posted by u/akyuu
adrian_b · 16 days ago
The new Intel 18A CMOS process has succeeded in improving energy efficiency over both all older Intel processes and the TSMC 3 nm process used by Intel for its Arrow Lake and Lunar Lake CPUs.

On the other hand, Intel seems to struggle to reach high clock frequencies on this new manufacturing process: the Panther Lake CPU models have lower clock frequencies than the corresponding Arrow Lake models made by TSMC, and the few Panther Lake models with maximum clock frequencies of 5 GHz or more (Core Ultra X7 and X9) are very expensive, so their availability will likely be limited due to low fabrication yields.

Therefore it is plausible that for now companies like NVIDIA and Apple will choose to use Intel only for low-risk products, as the article says.

bcrl · 16 days ago
Please read the article in full. The GPU die, where all the computation occurs and the majority of the power is spent, will remain on TSMC.

TSMC plans for their A14 process to be in high-volume production in 2028. It will include backside power delivery, introduced in their A16 process (expected to reach high-volume production in 2026/2027), which means it will be quite competitive with Intel.

https://semiwiki.com/wikis/industry-wikis/tsmc-a14-process-t...

https://semiwiki.com/wikis/industry-wikis/%F0%9F%A7%A0-tsmc-...

There's an older article at https://www.igorslab.de/en/350-watts-for-nvidias-new-top-of-... which shows the breakdown of power consumption for GPUs. The GPU die itself is only 230W of the entire power budget.

bcrl commented on Nvidia to shift 2028 chip production to Intel, reshaping TSMC strategy   digitimes.com/news/a20260... · Posted by u/akyuu
ternus · 16 days ago
For one non-core part of their chip.

> The GPU die will remain with TSMC

bcrl · 16 days ago
The entire sentence is even less enthusiastic:

"The GPU die will remain with TSMC, but portions of the I/O die are expected to leverage Intel's 18A or the planned 14A process slated for 2028, contingent on yield improvements."

Reading between the lines: Nvidia will most likely design a TSMC version of those I/O die portions in case Intel fails.

Intel has a decades-long reputation of failing its prospective foundry customers. Whether Nvidia's ownership stake is sufficient to overcome the inertia within Intel that has produced those failures remains to be seen.

bcrl commented on Notes on the Intel 8086 processor's arithmetic-logic unit   righto.com/2026/01/notes-... · Posted by u/elpocko
kens · 22 days ago
Author here for all your 8086 questions...
bcrl · 21 days ago
Thanks for publishing your blog! The articles are quite enlightening, and it's interesting to see how semiconductors evolved in the '70s, '80s, and '90s. Having grown up in that era, I feel it was a great time to learn, as one could understand an entire computer, but details like this were completely inaccessible back then. Keep up the good work knowing that it is appreciated!

A more personal question: is your reverse engineering work just a hobby or is it tied in with your day to day work?

bcrl commented on Sopro TTS: A 169M model with zero-shot voice cloning that runs on the CPU   github.com/samuel-vitorin... · Posted by u/sammyyyyyyy
burnt-resistor · a month ago
None, obviously, and it's barking up the wrong tree. The genie is already out of the bottle, as there are zillions of similar free services and software that do the same thing, and there is no quick-fix technological panacea for social and legal problems. Legislators in every locality need to create extremely harsh penalties for impersonating other people, and elders need to be educated to ask questions of their family members that only the real people would know the answers to.
bcrl · a month ago
Ah yes, the "things are bad; we shouldn't try to fix them" argument. That isn't a philosophy which I subscribe to. People should very much consider the ethical implications of releasing software they created to the general public.
bcrl commented on macOS 26.2 update enables 160MHz channels on 5GHz Wi-Fi networks   cultofmac.com/news/apple-... · Posted by u/zdw
chrisandchris · a month ago
For home use, I tend to stick with 2.4 GHz. It is slower, but with a <100 Mbit uplink to the internet, local speed does not matter. 2.4 GHz just works better, with fewer APs and through thicker walls.
bcrl · a month ago
2.4 GHz is unreliable for me these days due to interference from the Bluetooth headphones and hearing aids that other people are using. The issues tend to show up only during extended periods of video streaming, and having looked at a bunch of traffic captures over the holidays, it seems to be limited to certain streaming services sending very large bursts of traffic at extremely high rates (likely from servers with 100+ Gbps interfaces using TSO to reduce CPU usage). That makes me think that the regularly paced Bluetooth interference from real-time audio streams limits the maximum viable burst size of a 2.4 GHz Wi-Fi radio.

Yes, this happened a bunch more over the Christmas holiday, when we had an extra 3 or 4 younger family members all listening to music and videos over their Bluetooth ear buds and headphones. That made it much easier to track down, as it was quite a rare, intermittent failure when only a single Bluetooth device was active.

bcrl commented on Flock Hardcoded the Password for America's Surveillance Infrastructure 53 Times   nexanet.ai/blog/53-times-... · Posted by u/fuck_flock
iancarroll · a month ago
Although I don’t like Flock, I’m a bit skeptical of the claims in the article. Most screenshots appear to be client-side JavaScript snippets, not API responses from this key.

In the bug bounty community, Google Maps API key leaks are a common false positive, because they are only used for billing purposes and don’t actually control access to any data. The article doesn’t really prove ArcGIS is any different.

bcrl · a month ago
Security for maps is basically impossible. Maps tend to have to be widely shared within government and engineering, and if you know what you're looking for, it's remarkably straightforward to find ways to access layers you would normally have to pay for. It's a consequence of the need to share data widely for a variety of purposes -- everything from zoning debates within a local county to maps for broadband funding across an entire country creates a public need to share mapping information. Keys don't get revoked once projects end, as that would make all the previously published links go stale, which makes life harder for everyone doing research and planning new projects.

Moreover, university students in programs like architecture are given access to many map layers as part of the school's agreements with the organizations publishing the data. Without that access, students wouldn't be able to pick up the skills needed to do the work they will eventually be hired for. And if students can get data, then it's pretty much public.

Privacy is becoming (or already is) nearly impossible in the 21st century.

u/bcrl

Karma: 1917 · Cake day: January 29, 2020