Is that actually true, given that branch history is stored lossily? What if other branches that have the same hash are all always taken?
- 1) Is there a branch here?
- 2) If so, is it taken?
- 3) If so, where to?
If a conditional branch is never taken, then it's effectively a NOP, and you never store it anywhere, so you treat (1) as "no, there isn't a branch here." Doesn't get cheaper than that. Of course, (1) and (3) are very important, so you pick your hashes to reduce aliasing to some low but acceptable level. Otherwise you just have to eat mispredicts if you alias too much.
Note: (1) and (3) aren't really functions of history, they're functions of their static location in the binary (I'm simplifying a tad but whatever). You can more freely alias on (2), which is very history-dependent, because (1) will guard it.
For example:

- Dan Luu has a nice write-up: https://danluu.com/branch-prediction/
- Wikipedia's page is decent: https://en.m.wikipedia.org/wiki/Branch_predictor
> I've also found G_LIKELY and G_UNLIKELY in glib to be useful when writing some types of performance-critical code.
A lot of the time this is a hint to the compiler on what the expected paths are so it can keep those paths linear. IIRC, this mainly helps instruction cache locality.
The real value is that the easiest branch to predict is a never-taken branch. So if the compiler can turn a branch into a never-taken branch with the common path being straight line code, then you win big.
And it takes no space or effort to predict never taken branches.
The main benefit of VLIW is that it simplifies the processor design by moving the complicated tasks/circuitry into the compiler. Theoretically, the compiler has more information about the intent of the program which allows it to better optimize things.
It would also be somewhat of a security boon. VLIW moves branch prediction (and rewinding) out of the processor and into the compiler. With exploits like Spectre, pulling that out would make it easier to integrate compiler hints on security-sensitive code: "hey, don't speculatively execute here."
That’s not really the problem.
The real issue is that VLIW requires branches to be strongly biased, statically, so a compiler can exploit them.
But in fact branches are very dynamic but trivially predicted by branch predictors, so branch predictors win.
Not to mention that even VLIW cores use branch predictors, because the branch resolution latency is too long to stall waiting for the outcome to be known.
The only reason I don't use one now is that I travel a lot more so it's irrelevant, and I have to work enough on tools with Google/Vercel/other analytics that it is just very inconvenient.
Regarding smart TVs, I have found that it's better to just use an Apple TV or Kodi box and never connect the TV itself to the internet. Having said that, I gave my TV away because I never used it, so this might not be up to date. A Pi-hole will block ads on smart TVs though.
I’m not up to speed on this stuff but I thought pihole only blocked the simplest stuff from devices that play nice?
Chisel has a similar competitor called SpinalHDL that is apparently a bit better.
https://spinalhdl.github.io/SpinalDoc-RTD/master/index.html
IMO using general purpose languages as SV generators is not the right approach. The most interesting HDL I've seen is Filament. They're trying to do for hardware what Rust has done for software. (It's kind of insane that nobody has done that yet, given how much effort we put into verifying shitty SV.) Haven't tried it yet though.
I don't think that's true. My reading of that is "you lock in the price on your start date and can keep that for the next 2 years going forward". That doesn't help anybody joining at >$1k / share. :D (and that's only ESPP, not standard stock compensation).
It's a fundamentally impossible ask.
Compilers are being asked to look at a program (perhaps watch it run a sample set) and guess the bias of each branch to construct a most-likely 'trace' path through the program, and then generate STATIC code for that path.
But programs (and their branches) are not statically biased! So it simply doesn't work out for general-purpose codes.
However, programs are fairly predictable, which means a branch predictor can dynamically learn the program path and regurgitate it on command. And if the program changes phases, the branch predictor can re-learn the new program path very quickly.
Now if you wanted to couple a VLIW design with a dynamically re-executing compiler (dynamic binary translation), then sure, that can be made to work.
L1i matters, people!
RISC-V consistently wins on L1i footprint.
The complaining is about number of dynamic instructions ("path length"), which can hit you if you don't fuse. Of course, path length might not actually be the bottleneck to raw performance, but it's an easy metric to argue, so a lot of people latch on to it.