avianes commented on SIMD < SIMT < SMT: Parallelism in Nvidia GPUs (2011)   yosefk.com/blog/simd-simt... · Posted by u/shipp02
ribit · 2 years ago
> Not sure what you mean by lockstep here. When an operand-collector entry is ready, it is dispatched for execution as soon as possible (write arbitration aside), even if other operand-collector entries from the same warp are not ready yet (so not really what I would call "thread lockstep"). But it's possible that Nvidia enforces that all threads from a warp complete before sending the next warp instruction (I would call that something like "instruction lockstep"). This can simplify data-dependency hazard checks. But that's an implementation detail; it's not required by the SIMT scheme.

Hm, the way I understood it is that a single instruction is executed on a 16-wide SIMD unit, thus processing 16 elements/threads/lanes simultaneously (subject to execution mask of course). This is what I mean by "in lockstep". In my understanding the role of the operand collector was to make sure that all register arguments are available before the instruction starts executing. If the operand collector needs multiple cycles to fetch the arguments from the register file, the instruction execution would stall.

So you are saying that my understanding is incorrect and that the instruction can be executed in multiple passes with different masks depending on which arguments are available? What is the benefit as opposed to stalling and executing the instruction only when all arguments are available? To me it seems like the end result is the same, and stalling is simpler and probably more energy efficient (if EUs are power-gated).

> But yes, the (warp) instruction is already scheduled, but (ALU) operations are re-scheduled by the operand collector and its dispatch. In the Nvidia patent they mention the possibility of dispatching operations in an order that prevents write collisions, for example.

Ah, that is interesting, so the operand collector provides a limited reordering capability to maximize hardware utilization, right? I must have missed that bit in the patent; that is a very smart idea.

> But it's possible that Nvidia enforces that all threads from a warp complete before sending the next warp instruction (I would call that something like "instruction lockstep"). This can simplify data-dependency hazard checks. But that's an implementation detail; it's not required by the SIMT scheme.

Is any existing GPU actually doing superscalar execution from the same software thread (I mean the program thread, i.e., warp, not a SIMT thread)? Many GPUs claim dual-issue capability, but that either refers to interleaved execution from different programs (Nvidia, Apple), or SIMD-within-SIMT, or maybe even a form of long instruction word (AMD). If I remember correctly, Nvidia instructions contain some scheduling information that tells the scheduler when it is safe to issue the next instruction from the same wave after the previous one started execution. I don't know how others do it, probably via some static instruction timing information. Apple does have a very recent patent describing dependency detection in an in-order processor, no idea whether it is intended for the GPU or something else.

> you have multiple operand-collector entries to minimize the probability that no entry is ready. I should have said "to minimize bubbles".

I think this is essentially what some architectures describe as the "register file cache". What is nice about Nvidia's approach is that it seems to be fully automatic and can really make the best use of a constrained register file.

avianes · 2 years ago
> The way I understood it is that a single instruction is executed on a 16-wide SIMD unit, thus processing 16 elements/threads/lanes simultaneously (subject to execution mask of course). This is what I mean by "in lockstep".

Ok, I see; that's definitely not what I understood from my study of the Nvidia SIMT uarch. And yes, I will claim that "the instruction can be executed in multiple passes with different masks depending on which arguments are available" (using your words).

> So the operand collector provides a limited reordering capability to maximize hardware utilization, right?

Yes, that's my understanding, and that's why I claim it's different from "classical" SIMD.

> What is the benefit as opposed to stalling and executing the instruction only when all arguments are available?

That's a good question. Note that I think the Apple GPU uarch does not work like the Nvidia one; my understanding is that the Apple uarch is way closer to a classical SIMD unit. So it's definitely not a killer to move away from Nvidia's original SIMT uarch.

That said, I think the SIMT uarch from Nvidia is way more flexible and better maximizes hardware utilization (executing instructions as soon as possible always helps utilization). Say you have 2 warps with complementary masking: with Nvidia's SIMT uarch it is natural to issue both warps simultaneously, and they can execute in the same cycle on different ALUs/cores. With a classical SIMD uarch it may be possible, but you need extra hardware to handle warp execution overlapping, and even more hardware to overlap more than 2 warps.
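
To illustrate the idea, a toy eligibility check (entirely hypothetical, just to show what "complementary masking" buys you):

```cpp
#include <cstdint>

// Toy check, not Nvidia's actual logic: two warps whose 32-lane execution
// masks never claim the same lane can be issued in the same cycle, each
// warp's inactive lanes being filled by the other warp's active ones.
bool can_coissue(uint32_t mask_a, uint32_t mask_b) {
    return (mask_a & mask_b) == 0;  // no lane is claimed by both warps
}
```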

Also, Nvidia's operand collector allows emulating a multi-ported register file, which probably helps with register sharing. There are actually multiple patents from Nvidia about non-trivial register allocation within the register-file banks, depending on how the register will be used, to minimize conflicts.
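
As a toy illustration of why placement matters (the allocation policies in the patents are more involved; the modulo mapping here is just an assumption for the example):

```cpp
// Hypothetical 4-bank register file: same-cycle reads that hit the same
// bank must serialize, so where operands live changes collection latency.
constexpr int kBanks = 4;
int bank_of(int reg) { return reg % kBanks; }

// fmul r2, r4, r8 -> sources r4 and r8 both map to bank 0: reads serialize.
// fmul r2, r1, r8 -> sources r1 (bank 1) and r8 (bank 0): one cycle.
```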

> Is any existing GPU actually doing superscalar execution from the same software thread (I mean the program thread, i.e., warp, not a SIMT thread)?

It's not obvious what "superscalar" would mean in an SIMT context. For me, a superscalar core is a core that can extract instruction parallelism from sequential code (associated with a single thread) and therefore dispatch/issue/execute more than 1 instruction per cycle per thread. With SIMT, most of the instruction parallelism is very explicit (as thread parallelism), so it's not really "extracted" (and not from the same thread). But anyway, if your question is whether multiple instructions from a single warp can be executed in parallel (across different threads), then I would say probably yes for Nvidia (not sure, there is very little information available..); at least 2 instructions from the same thread block (from the same program, but different warps) should be able to execute in parallel.

> I think this is essentially what some architectures describe as the "register file cache"

I'm not sure about that; there are actually some published papers (and probably some patents) from Nvidia about register-file caches for SIMT uarchs, and they came after the operand-collector patent. But in the end it really depends on what concept you are referring to with "register-file cache".

In the Nvidia case, a "register-file cache" is a cache placed between the register file and the operand collector. It makes sense in their case since the register file has variable latency (depending on collisions) and because it saves SRAM read power.

avianes commented on SIMD < SIMT < SMT: Parallelism in Nvidia GPUs (2011)   yosefk.com/blog/simd-simt... · Posted by u/shipp02
ribit · 2 years ago
In an operand-collector architecture the threads are still executed in lockstep. I don't think this makes the basic architecture less "SIMD-y". Operand collectors are a smart way to avoid multi-ported register files, which enables a more compact implementation. Different vendors use different approaches to achieve a similar result: Nvidia uses operand collectors, Apple uses explicit cache control flags, etc.

> This enables reading from the register file in an asynchronous fashion (by "asynchronous" here I mean not all at the same cycle) without introducing any stall.

You can still get stalls if an EU is available in a given cycle but not all operands have been collected yet. The way I understand the published patents is that operand collectors are a data gateway to the SIMD units. The instructions are already scheduled at this point and the job of the collector is to signal whether the data is ready. Do modern Nvidia implementations actually reorder instructions based on feedback from operand collectors?

> That's why (or one of the reasons) you need to sync your threads in the SIMT programming model and not in an SIMD programming model.

It is my understanding that you need to synchronize threads when accessing shared memory. Not only can different threads execute on different SIMDs, but threads on the same SIMD can also access shared memory over multiple cycles on some architectures. I do not see how thread synchronization relates to operand collectors.

avianes · 2 years ago
> In an operand-collector architecture the threads are still executed in lockstep.
> [...]
> It is my understanding that you need to synchronize threads when accessing shared memory.

Not sure what you mean by lockstep here. When an operand-collector entry is ready, it is dispatched for execution as soon as possible (write arbitration aside), even if other operand-collector entries from the same warp are not ready yet (so not really what I would call "thread lockstep"). But it's possible that Nvidia enforces that all threads from a warp complete before sending the next warp instruction (I would call that something like "instruction lockstep"). This can simplify data-dependency hazard checks. But that's an implementation detail; it's not required by the SIMT scheme.

And yes, it's hard to expose de-synchronization without memory operations, so you only need to sync for memory operations. (The load/store unit also has operand collectors.)

> You can still get stalls if an EU is available in a given cycle but not all operands have been collected yet

That's true, but you have multiple operand-collector entries to minimize the probability that no entry is ready. I should have said "to minimize bubbles".

> The way I understand the published patents is that operand collectors are a data gateway to the SIMD units. The instructions are already scheduled at this point and the job of the collector is to signal whether the data is ready. Do modern Nvidia implementations actually reorder instructions based on feedback from operand collectors?

Calling the EU a "SIMD unit" in an SIMT uarch adds a lot of ambiguity, so I'm not sure I understand your point correctly. But yes, the (warp) instruction is already scheduled, but (ALU) operations are re-scheduled by the operand collector and its dispatch. In the Nvidia patent they mention the possibility of dispatching operations in an order that prevents write collisions, for example.

avianes commented on SIMD < SIMT < SMT: Parallelism in Nvidia GPUs (2011)   yosefk.com/blog/simd-simt... · Posted by u/shipp02
ribit · 2 years ago
How would you envision that working at the hardware level? GPUs are massively parallel devices; they need to keep the scheduler and ALU logic as simple and compact as possible. SIMD is a natural way to implement this. In the real world, SIMT is just SIMD with some additional capabilities for control flow and a programming model that focuses on SIMD lanes as threads of execution.

What’s interesting is that modern SIMT is exposing quite a lot of its SIMD underpinnings, because that allows you to implement things much more efficiently. A hardware-accelerated SIMD sum is way faster than adding values in shared memory.

avianes · 2 years ago
> GPUs are massively parallel devices; they need to keep the scheduler and ALU logic as simple and compact as possible

The simplest hardware implementation is not always the most compact or the most efficient. This is a misconception; see the example below.

> SIMT is just SIMD with some additional capabilities for control flow ..

In the Nvidia uarch, it is not just that. The key part of the Nvidia uarch is the "operand collector" and the emulation of a multi-ported register file using SRAM (single- or dual-port) banking. In a classical SIMD uarch, you just retrieve the full vector from the register file and execute each lane in parallel. In the Nvidia uarch, each ALU has an "operand collector" that tracks and collects the operands of multiple in-flight operations. This enables reading from the register file in an asynchronous fashion (by "asynchronous" here I mean not all at the same cycle) without introducing any stall.

When a warp is selected, the instruction is decoded, an entry is allocated in the operand collector of each ALU used, and the list of registers to read is sent to the register file. The register file dispatches register reads to the proper SRAM banks (probably with some queuing when read collisions occur). All operand collectors independently wait for their operands to come from the register file; when an operand-collector entry has received all the required operands, the entry is marked as ready and can be selected by the ALU for execution.
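
To make the flow above concrete, here is a toy sketch (my own simplification for illustration; the structure and names are made up, not from the patent):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

// Toy model of the flow described above: banked register-file reads arrive
// over several cycles, and a collector entry becomes eligible for execution
// only once all of its source operands have been collected.
struct CollectorEntry {
    uint32_t warp_id = 0;
    std::array<bool, 3> needed{};   // which source operands the op uses
    std::array<bool, 3> arrived{};  // which operands the banks delivered
    bool ready() const {
        for (int i = 0; i < 3; ++i)
            if (needed[i] && !arrived[i]) return false;
        return true;
    }
};

struct OperandCollector {
    std::vector<CollectorEntry> entries;  // a few in-flight ops per ALU

    // The register file delivers one operand for one entry this cycle
    // (bank conflicts mean different entries fill at different rates).
    void deliver(size_t entry, int operand) {
        entries[entry].arrived[operand] = true;
    }

    // Pick any ready entry to dispatch, regardless of arrival order:
    // the limited reordering discussed in this thread.
    std::optional<size_t> select_ready() const {
        for (size_t i = 0; i < entries.size(); ++i)
            if (entries[i].ready()) return i;
        return std::nullopt;
    }
};
```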

That's why (or one of the reasons) you need to sync your threads in the SIMT programming model and not in an SIMD programming model.

Obviously you can emulate an SIMT uarch using an SIMD uarch, but I think that misses the whole point of the SIMT uarch.

Nvidia does all of this because it allows designing a more compact register file (memories with a high number of ports are costly) and probably because it helps make better use of the available compute resources with masked operations.

avianes commented on Intel Unveils Lunar Lake Architecture   anandtech.com/show/21425/... · Posted by u/zdw
tedunangst · 2 years ago
> Another striking advance is the migration in the P-core database from a 'sea of fubs' to a 'sea of cells'. This process of updating the organization of the P-cores substructure moves from tiny, latch-dominated partitions to more extensive and ever larger flop-dominated partitions that are very agnostic as things go.

I like to pretend I know a thing about CPU design, but I have to admit, I have no idea what's going on here.

avianes · 2 years ago
Modern chip designs have an enormous amount of logic and therefore standard cells. When you are dealing with a huge number of cells all together, it quickly becomes unmanageable: synthesis tool runtimes explode, quality of results declines, results become chaotic, ..

So chip designs are split into partitions. Each partition is a part of your design that you synthesize separately. For example, you may set up a partition for the core and instantiate it multiple times into a core_cluster partition.

Note that the synthesizer's logic optimizer cannot work on logic across partitions, so you don't want too-small partitions (otherwise you will have more manual optimization to do), but you also don't want too-big partitions (otherwise runtime and development iteration time increase).

The question is: what is a good size for a partition?

* An ALU is ~10K cells (synthesis runtime ranges from a few seconds to ~5 min)

* A small (low-end) core is ~1M cells (synthesis runtime ranges from 1 to 8 hours)

In Intel terminology, the "sea of FUBs" approach prefers small partitions, while the "sea of cells" approach prefers big partitions.

As for the predominance of latches or flops, it's mainly a consequence of the level of manual optimization (latches are smaller but harder to manage, and they give diminishing returns on new process nodes). Same for process-node-specific vs process-node-agnostic.

PS: Most modern designs are "sea of cells" according to the Intel terminology.

avianes commented on IEEE FP8 Formats for Machine Learning (Draft) [pdf]   github.com/P3109/Public/b... · Posted by u/avianes
hi-v-rocknroll · 2 years ago
Formats, plural.

sign:explicit leading mantissa bit:mantissa bits:exponent bits

1:0:(P-1):(8-P) where P ∈ [1,7]

P ∈ [3,5] appears to be more useful

binary8p3 -> 1:0:2:5

binary8p4 -> 1:0:3:4

binary8p5 -> 1:0:4:3

Edge cases:

P = 8 would disallow all exponent bits, leaving a sign bit and 7 explicit mantissa bits of precision

P = 0 would be unsigned and 8 exponent bits with only an implicit precision bit but no explicit mantissa bits

avianes · 2 years ago
Also, note that:

- There is only one Zero (encoded as 0x00), no negative-zero

- There is only one NaN (encoded as 0x80)
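
A small decoder sketch for these layouts (assuming P ∈ [1,7] as in the parent comment; the exponent bias and subnormal rule here are my assumptions for illustration, not necessarily what the draft specifies; the two special encodings are the ones noted above):

```cpp
#include <cmath>
#include <cstdint>

// Hypothetical decoder for the binary8p{P} layout discussed above:
// 1 sign bit, (8-P) exponent bits, (P-1) explicit mantissa bits.
// 0x00 is the single zero and 0x80 the single NaN; the bias and the
// subnormal handling below are assumptions, not taken from the draft.
double decode_binary8(uint8_t byte, int P) {
    if (byte == 0x00) return 0.0;                // only one zero, no -0
    if (byte == 0x80) return NAN;                // only one NaN
    const int e_bits = 8 - P;                    // exponent field width
    const int m_bits = P - 1;                    // explicit mantissa width
    const int bias   = (1 << (e_bits - 1)) - 1;  // assumed IEEE-style bias
    const double sign = (byte & 0x80) ? -1.0 : 1.0;
    const int exp = (byte >> m_bits) & ((1 << e_bits) - 1);
    const int man = byte & ((1 << m_bits) - 1);
    const double frac = static_cast<double>(man) / (1 << m_bits);
    if (exp == 0)                                // assumed subnormal encoding
        return sign * frac * std::ldexp(1.0, 1 - bias);
    return sign * (1.0 + frac) * std::ldexp(1.0, exp - bias);
}
```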

avianes commented on Ask HN: Does anyone care about OpenPOWER?    · Posted by u/sandwichbop
api · 2 years ago
Even modern C and C++ code generally ports just fine as long as you avoid things with unspecified behavior like exotic casts and weird pointer tricks. Have to watch endianness too but that doesn’t actually come up all that often. Also AFAIK there are no longer any big-endian architectures in common use.

Newer languages are even easier. Rust and Go almost always port with zero issues. Obviously the same goes for scripting and VM based languages like Java.

The architecture matters less than it used to.

avianes · 2 years ago
Are you aware that the x86 and ARM/POWER/RISC-V memory consistency models are really different? You can encounter very sneaky multithreading bugs when running on ARM/POWER/RISC-V a program that you have only tested on x86.

Apple has actually put a lot of effort into making the x86-to-ARM transition as smooth as possible regarding the memory consistency model; this is a strong indication that it's not as trivial as you seem to think.
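
A classic litmus test that bites here is message passing. A minimal sketch (mine, not from the comment above): with relaxed ordering this can observe stale data on ARM/POWER/RISC-V, while x86's stronger TSO ordering happens to hide the bug, so testing only on x86 tells you nothing.

```cpp
#include <atomic>
#include <thread>

std::atomic<int>  data{0};
std::atomic<bool> ready{false};

void producer() {
    data.store(42, std::memory_order_relaxed);
    // Release is required so the data store becomes visible before the flag;
    // with relaxed ordering here, ARM/POWER/RISC-V may reorder the stores.
    ready.store(true, std::memory_order_release);
}

void consumer() {
    // Acquire pairs with the release above; with relaxed ordering here,
    // the data load could observe 0 even after seeing ready == true.
    while (!ready.load(std::memory_order_acquire)) { /* spin */ }
    int v = data.load(std::memory_order_relaxed);  // guaranteed to be 42
    (void)v;
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}
```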

avianes commented on A CPU is a compiler   outerproduct.net/boring/2... · Posted by u/signa11
ahartmetz · 3 years ago
> But if you are talking about Mill, yes it will never work.

Maybe Mill Computing won't pull it off, but why do you think the approach is bad? It seems sufficiently different from Itanium to not have the exact same problems.

avianes · 3 years ago
Well, first of all, because it has shown no results after 10+ years. There is definitely no indication that it will ever work.

And above all, because there are too many choices that are too specific, outdated, or too exotic (e.g., the split-stream encoding is way too exotic).

I work in a small company that makes processors, and I know from experience that developing a processor is a very complicated endeavor; you have to go step by step (Mill does not). When you come up with a new design idea, you simulate it, test it, and implement it. You don't pile up new ideas without getting feedback on them.

avianes commented on A CPU is a compiler   outerproduct.net/boring/2... · Posted by u/signa11
uticus · 3 years ago
> Branch probability analysis | Branch prediction

> Peephole optimisations, idioms | Idioms

From the software side, what is it exactly that makes it so difficult to write a higher-level language that must be optimized, instead of a language that already exposes the optimizations in a friendly way?

("higher-level language" meaning c, not python or something)

avianes · 3 years ago
I'm not sure exactly what your thought is, since the optimizations you quote don't really take advantage of any exposed optimization feature of the language.

Are you asking why we could not expose optimization features (e.g. branch hints to replace branch prediction) in the programming language?

Are you asking if it's difficult for a compiler to optimize some types of high-level languages?

Since "higher-level" language is C in your question, what the "language that already exposes the optimizations in a friendly way" ? Are you thinking the CPU µop as a language ?

avianes commented on A CPU is a compiler   outerproduct.net/boring/2... · Posted by u/signa11
tormeh · 3 years ago
This will never work due to incentive structure, unless the compiler devs are working for the same company that makes the CPU. Otherwise, compiler devs will target a lowest common denominator and call it a day. And even if the compiler devs perfectly support new CPU instructions, compiler users usually want a single binary that can run on as many CPUs as possible, and so will use the lowest common denominator once again. Currently, your CPU will 99% of the time run basic AMD64 instructions, regardless of its capabilities. So Intel and AMD try to make their CPUs really good at running AMD64 code.
avianes · 3 years ago
> unless the compiler devs are working for the same company that makes the CPU

Every CPU manufacturing company has a compiler team.

> This will never work

VLIW processors do work, and have for a while now. This type of architecture performs better on data-intensive workloads, so you don't see them in the general-purpose world.

But if you are talking about Mill, yes it will never work.
