atq2119 commented on Tesla said it didn't have key data in a fatal crash, then a hacker found it   washingtonpost.com/techno... · Posted by u/clcaev
sfn42 · a day ago
I think a world where drivers are held accountable for their actions sounds like a just and probably safer world.

If you cause an accident by driving distracted or being reckless, I think it's only fair that the facts are known so that you can be punished accordingly. That's certainly better than someone innocent having to share responsibility for your mistake.

I think that would probably make people think twice about being reckless, and even if it doesn't, at least they'll get what they deserve.

atq2119 · a day ago
I think this is the right way to look at it. Privacy is extremely important to me, but cars are basically lethal weapons. Using them on public roads has to come with a certain amount of responsibility that balances privacy against other goods.
atq2119 commented on Are OpenAI and Anthropic losing money on inference?   martinalderson.com/posts/... · Posted by u/martinald
fallmonkey · 2 days ago
The estimate for output tokens is too low, since one reasoning-enabled response can burn through thousands of output tokens. It's also low for input tokens, since in actual use a lot of context (memory, agents.md, rules, etc.) is included nowadays.
atq2119 · 2 days ago
When using APIs, you pay for reasoning tokens just as you do for actual outputs. So the per-token estimate is not affected by reasoning.

What reasoning affects is the ratio of input to output tokens, and since input tokens are cheaper, that may well affect the economics in the end.
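To make that concrete, here's a toy cost calculation. The prices and token counts are made up for illustration, not real OpenAI or Anthropic rates; the only point is that reasoning tokens are billed at the output rate, so they shift a request's blended cost toward the (more expensive) output side.

```python
# Hypothetical prices, for illustration only.
INPUT_PRICE = 3.00 / 1_000_000    # $ per input token (assumed)
OUTPUT_PRICE = 15.00 / 1_000_000  # $ per output token (assumed)

def request_cost(input_tokens: int, visible_output: int, reasoning: int = 0) -> float:
    """Reasoning tokens are charged like ordinary output tokens."""
    return input_tokens * INPUT_PRICE + (visible_output + reasoning) * OUTPUT_PRICE

# Same prompt and visible answer, with and without a reasoning trace.
plain = request_cost(input_tokens=20_000, visible_output=500)
with_reasoning = request_cost(input_tokens=20_000, visible_output=500, reasoning=4_000)
print(f"plain: ${plain:.4f}, with reasoning: ${with_reasoning:.4f}")
```

With these assumed prices, 4,000 reasoning tokens nearly double the cost of the request even though the visible answer is identical.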

atq2119 commented on IBM and AMD to work on quantum-centric supercomputing   newsroom.ibm.com/2025-08-... · Posted by u/donutloop
dijit · 4 days ago
Materials Science and Drug Discovery would suddenly become a lot easier, along with financial modelling (of our entire society possibly) and logistics/supply chains.

They would also be much better at training ML and doing pattern recognition.

Basically anything that requires a massively parallel computation on undeterminable states that are only clear in hindsight. They're really important, actually, and it's only an unfortunate side effect that the same solution breaks all our cryptography.

(of course: the offensive wings of our defence ministries really enjoy that side-effect)

atq2119 · 4 days ago
> Basically anything that requires a massively parallel computation on undeterminable states that are only clear in hindsight.

If only. This description makes it sound as if quantum computers could help efficiently solve all problems in NP, which is not believed to be true.

Those "undeterminable" states need some non-trivial algebraic structure so that destructive interference of states can do its magic in a quantum computer. Finding such a structure is incredibly difficult, if it exists at all.

atq2119 commented on Margin debt surges to record high   advisorperspectives.com/d... · Posted by u/pera
koolba · 9 days ago
These raw numbers are meaningless to compare. If anything, it should show margin debt as a fraction of total assets (or total market size as a proxy).
atq2119 · 9 days ago
The other thing is that whenever I see graphs like that spanning decades but not using a logarithmic scale, I can't help but roll my eyes a bit.

We may well be approaching a dotcom moment, but this kind of graph automatically exaggerates what's happening more recently.
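A quick numeric illustration of why the log scale matters here: steady 7%/year growth (an arbitrary rate picked for the example) looks like an alarming hockey stick on a linear axis, but its logarithm is a straight line, because every year-over-year log increment is identical.

```python
import math

# Three decades of perfectly steady 7%/year growth.
series = [100 * 1.07 ** year for year in range(31)]

# On a log scale, constant percentage growth is a constant slope:
log_steps = [math.log(b) - math.log(a) for a, b in zip(series, series[1:])]
spread = max(log_steps) - min(log_steps)
print(f"log-slope spread: {spread:.2e}")  # ~0: a straight line on a log axis
```

So a chart that looks "parabolic" on a linear axis may encode nothing more than an unchanged growth rate.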

atq2119 commented on Branch prediction: Why CPUs can't wait?   namvdo.ai/cpu-branch-pred... · Posted by u/signa11
zenolijo · 11 days ago
I do wonder how branch prediction actually works in the CPU; predicting which branch to take also seems like it should be expensive, but I guess something clever is going on.

I've also found G_LIKELY and G_UNLIKELY in glib to be useful when writing some types of performance-critical code. Would be a fun experiment to compare the assembly when using it and not using it.

atq2119 · 11 days ago
By now I'd assume that all modern high performance CPUs use some form of TAGE (tagged geometric history) branch prediction, so that's a good keyword to search for if you really want to get into it.
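For a feel of the basic idea, here is the classic two-bit saturating-counter predictor that schemes like TAGE build far beyond (TAGE adds multiple tables tagged with geometrically growing history lengths; this toy has none of that). Each branch address indexes a counter in 0..3; a value of 2 or more predicts "taken", and the counter is nudged toward the actual outcome after the branch resolves.

```python
class TwoBitPredictor:
    def __init__(self, table_bits: int = 10):
        self.mask = (1 << table_bits) - 1
        self.table = [1] * (1 << table_bits)  # start "weakly not taken"

    def predict(self, pc: int) -> bool:
        return self.table[pc & self.mask] >= 2

    def update(self, pc: int, taken: bool) -> None:
        i = pc & self.mask
        if taken:
            self.table[i] = min(3, self.table[i] + 1)
        else:
            self.table[i] = max(0, self.table[i] - 1)

# A loop branch at a made-up address: taken 99 times per trip, then
# falls through once. The two-bit hysteresis absorbs the single exit.
p = TwoBitPredictor()
hits = 0
for trip in range(10):
    for i in range(100):
        taken = i < 99
        hits += p.predict(pc=0x400) == taken
        p.update(pc=0x400, taken=taken)
print(f"{hits}/1000 correct")  # 989/1000
```

The counter mispredicts once per loop exit (and once while warming up), which is why real predictors add history: with the pattern itself as an index, even the exit becomes predictable.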
atq2119 commented on Staff disquiet as Alan Turing Institute faces identity crisis   theguardian.com/technolog... · Posted by u/glutamate
logifail · 11 days ago
> "we're leaders in electronics manufacturing"

The UK wasn't claiming to be "leaders _in manufacturing_", they were claiming "international leadership in AI".

As I said elsewhere in the thread, citation needed...

atq2119 · 11 days ago
What you're replying to is an analogy, which you seem to have completely missed.
atq2119 commented on Left to Right Programming   graic.net/p/left-to-right... · Posted by u/graic
aquafox · 12 days ago
The consensus here seems to be that Python is missing a pipe operator. That was one of the things I quickly learned to appreciate when transitioning from Mathematica to R. It makes writing data science code, where the data are transformed by a series of different steps, so much more readable and intuitive.

I know that Python is used for many more things than just data science, so I'd love to hear if in these other contexts, a pipe would also make sense. Just trying to understand why the pipe hasn't made it into Python already.

atq2119 · 12 days ago
The next step after pipe operators would be reverse assignment statements to capture the results.

I find myself increasingly frustrated at seeing code like 'let foo = many lines of code'. Let me write something like 'many lines of code =: foo'.
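For reference, the pipeline style being discussed can already be emulated in Python by overloading `|`. `P` here is a hypothetical helper, not a standard-library feature; it threads a value through functions left to right, which is the reading order this whole thread is about.

```python
class P:
    """Wrap a function so that `value | P(fn)` evaluates fn(value)."""
    def __init__(self, fn):
        self.fn = fn

    def __ror__(self, value):  # right-hand `|` operand: value | P(fn)
        return self.fn(value)

result = (
    range(10)
    | P(lambda xs: [x * x for x in xs])          # square everything
    | P(lambda xs: [x for x in xs if x % 2 == 0])  # keep even squares
    | P(sum)                                     # reduce to one number
)
print(result)  # 120
```

This works because `range` and `list` don't define `__or__`, so Python falls back to `P.__ror__`; a native pipe operator would make the same shape first-class.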

atq2119 commented on Nvidia Tilus: A Tile-Level GPU Kernel Programming Language   github.com/NVIDIA/tilus... · Posted by u/ashvardanian
alwahi · 12 days ago
Okay, I am not a systems-level programmer, but I am currently learning C with the aim of doing some GPGPU programming using CUDA etc. What is a tile-level GPU kernel programming language, and how is it different from something like CUDA?

I know I can ask an LLM or search on Google, but I was hoping someone in the community could explain it in a way I could understand.

atq2119 · 12 days ago
I'd say the main difference is that in traditional GPU languages, the thread of execution is a single lane of a warp or wave. You typically work with ~fp32-sized values, and those are mapped by the compiler to one lane of a 32-wide vector register in a wave (or 16- to 128-wide depending on the architecture). Control flow often has to be implemented through implicit masking as different threads mapped to lanes of the same vector can make different control flow decisions (that is, an if statement in the source program gets compiled to an instruction sequence that uses masking in some way - the details vary by vendor).

In tile languages, the thread of execution is an entire workgroup (or block in CUDA-speak). You typically work with large vector/matrix-sized values. The compiler decides how to distribute those values onto vector registers across waves of the workgroup. (Example: if your program has a value that is a 32x32 matrix of fp32 elements and a workgroup has 8 32-wide waves, the value will be implemented as 4 standard-sized vector registers in each wave of the workgroup.) All control flow affects the entire workgroup equally since the ToE is the entire workgroup, and so the compiler does not have to do implicit masking. Instead, tile languages usually have provisions for explicit masking using boolean vectors/matrices.

Tile languages are a new phenomenon and clearly disagree on what the exact level of abstraction should be. For example, Triton mostly hides the details of shared memory from the programmer and lets the compiler take care of software pipelined loads, while in this Tilus here, it looks like the programmer has to program shared memory use explicitly.
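The register-distribution example above is just arithmetic, which a tiny sketch makes explicit: a tile value is spread across every lane of the workgroup, so each lane holds elements ÷ (waves × lanes-per-wave) registers of it.

```python
def regs_per_lane(tile_elems: int, waves: int, wave_width: int) -> int:
    """How many vector registers each lane contributes to one tile value."""
    lanes = waves * wave_width
    # Real compilers pad or split shapes that don't divide evenly;
    # this sketch just insists on the clean case.
    assert tile_elems % lanes == 0
    return tile_elems // lanes

# The example from above: a 32x32 fp32 tile on a workgroup
# of 8 waves, each 32 lanes wide.
print(regs_per_lane(tile_elems=32 * 32, waves=8, wave_width=32))  # 4
```

The point of a tile language is that the programmer never writes this mapping; the compiler picks it per target architecture.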

atq2119 commented on The future of large files in Git is Git   tylercipriani.com/blog/20... · Posted by u/thcipriani
vlovich123 · 15 days ago
And what happens when an object is missing from the cloud storage or that storage has been migrated multiple times and someone turns down the old storage that’s needed for archival versions?
atq2119 · 14 days ago
You obviously get errors in that case, which is not great.

But GP's point was that there is an entire other category of errors with git-lfs that is eliminated with this more native approach. Git-lfs allows you to get into an inconsistent state (e.g. when you interrupt a git action) in a way that just doesn't happen with native git.

atq2119 commented on ARM adds neural accelerators to GPUs   newsroom.arm.com/news/arm... · Posted by u/dagmx
cubefox · 15 days ago
There are now at least three ways to accelerate machine learning models on consumer hardware:

  - GPU compute units (used for LLMs)
  - GPU "neural accelerators"/"tensor cores" etc (used for video game anti-aliasing and increasing resolution or frame rate)
  - NPUs (not sure what they are actually used for)
And of course models can also be run, without acceleration, on the CPU.

atq2119 · 15 days ago
At least for desktop gaming, the tensor cores are in the GPU compute units (SM), same as for the big data center GPUs.

It seems ARM believes it makes sense to go a different route for mobile gaming.
