Readit News
_chris_ commented on My thoughts on renting versus buying   milesbarr.me/posts/my-tho... · Posted by u/milesbarr
kennywinker · 6 months ago
If renting doesn’t make sense economically, how is it profitable for the landlord? Because it’s almost always profitable for the landlord.
_chris_ · 6 months ago
Longer time horizon -- the mortgage inflates away. In the short term, you only need to beat the property tax bill, especially if the interest rate is <3% and the property is appreciating in value faster than that.
_chris_ commented on Branch prediction: Why CPUs can't wait?   namvdo.ai/cpu-branch-pred... · Posted by u/signa11
o11c · 7 months ago
> And it takes no space or effort to predict never taken branches.

Is that actually true, given that branch history is stored lossily? What if other branches that have the same hash are all always taken?

_chris_ · 7 months ago
A BPU needs to predict 3 things:

  - 1) Is there a branch here?
  - 2) If so, is it taken?
  - 3) If so, where to?
If a conditional branch is never taken, then it's effectively a NOP, and you never store it anywhere, so you treat (1) as "no there isn't a branch here." Doesn't get cheaper than that.

Of course, (1) and (3) are very important, so you pick your hashes to reduce aliasing to some low but acceptable level. Otherwise you just have to eat mispredicts if you alias too much.

Note: (1) and (3) aren't really functions of history, they're functions of their static location in the binary (I'm simplifying a tad but whatever). You can more freely alias on (2), which is very history-dependent, because (1) will guard it.
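The three-part lookup above can be sketched in a few lines of C. This is a toy, not any real BPU: the table size, the hash, and using the full PC as a tag are all invented for illustration. The point it shows is the comment's: a never-taken branch is never allocated an entry, so predicting it costs zero storage -- a table miss already means "no branch here, fall through."

```c
#include <stdint.h>

/* Toy branch-target buffer: 16 direct-mapped entries (sizes are
 * arbitrary for illustration). */
#define BTB_SIZE 16

typedef struct {
    uint32_t tag;    /* full PC as tag here; real designs use partial tags */
    uint32_t target; /* (3): where to, if taken */
    int      valid;
} BTBEntry;

static BTBEntry btb[BTB_SIZE];

/* (1) and (2) combined: a miss means no entry was ever allocated, so we
 * answer "no branch here" and fall through -- which is exactly why a
 * never-taken branch is free: it is simply never inserted. */
int btb_predict(uint32_t pc, uint32_t *target) {
    BTBEntry *e = &btb[(pc >> 2) % BTB_SIZE];
    if (!e->valid || e->tag != pc)
        return 0;              /* predict: not a (taken) branch */
    *target = e->target;
    return 1;                  /* predict: taken, to *target */
}

/* Training: only taken branches allocate an entry. */
void btb_update(uint32_t pc, int taken, uint32_t target) {
    if (!taken)
        return;                /* never-taken branch: zero storage used */
    BTBEntry *e = &btb[(pc >> 2) % BTB_SIZE];
    e->valid  = 1;
    e->tag    = pc;
    e->target = target;
}
```

The tag check is what "guards" (2), as the comment says: aliasing on the direction state is tolerable because a tag mismatch already answered "no branch here."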

_chris_ commented on Branch prediction: Why CPUs can't wait?   namvdo.ai/cpu-branch-pred... · Posted by u/signa11
moregrist · 7 months ago
There’s ample information out there. There are quite a few text books, blogs, and YouTube videos covering computer architecture, including branch prediction.

For example:

- Dan Luu has a nice write-up: https://danluu.com/branch-prediction/
- Wikipedia's page is decent: https://en.m.wikipedia.org/wiki/Branch_predictor

> I've also found G_LIKELY and G_UNLIKELY in glib to be useful when writing some types of performance-critical code.

A lot of the time this is a hint to the compiler on what the expected paths are so it can keep those paths linear. IIRC, this mainly helps instruction cache locality.

_chris_ · 7 months ago
> A lot of the time this is a hint to the compiler on what the expected paths are so it can keep those paths linear. IIRC, this mainly helps instruction cache locality.

The real value is that the easiest branch to predict is a never-taken branch. So if the compiler can turn a branch into a never-taken branch with the common path being straight line code, then you win big.

And it takes no space or effort to predict never taken branches.
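Concretely, glib's G_LIKELY/G_UNLIKELY expand to GCC/Clang's `__builtin_expect`. A minimal sketch of the win being described (function and values are made up for illustration):

```c
/* Hand-rolled versions of what G_LIKELY / G_UNLIKELY expand to on
 * GCC and Clang. */
#define LIKELY(x)   __builtin_expect(!!(x), 1)
#define UNLIKELY(x) __builtin_expect(!!(x), 0)

int process(int value) {
    if (UNLIKELY(value < 0)) {
        /* Cold path: the compiler can move this out of line, leaving
         * a (hopefully) never-taken branch in the hot code. */
        return -1;
    }
    /* Hot path: falls straight through, no taken branch at all. */
    return value * 2;
}
```

With the hint, the common case is straight-line code guarded by a never-taken branch -- the cheapest branch there is, per the comment above -- and the i-cache sees only hot instructions.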

_chris_ commented on Branch prediction: Why CPUs can't wait?   namvdo.ai/cpu-branch-pred... · Posted by u/signa11
cogman10 · 7 months ago
With the way architectures have gone, I think you'd end up recreating VLIW. The thing holding back VLIW was compilers were too dumb and computers too slow to really take advantage of it. You ended up with a lot of "NOP"s as a result in the output. VLIW is essentially how modern GPUs operate.

The main benefit of VLIW is that it simplifies the processor design by moving the complicated tasks/circuitry into the compiler. Theoretically, the compiler has more information about the intent of the program which allows it to better optimize things.

It would also be somewhat of a security boon. VLIW moves the branch prediction (and rewinding) out of the processor. With exploits like Spectre, pulling that out would make it easier to integrate compiler hints on security-sensitive code: "hey, don't spec ex here".

_chris_ · 7 months ago
> The thing holding back VLIW was compilers were too dumb

That’s not really the problem.

The real issue is that VLIW requires branches to be strongly biased, statically, so a compiler can exploit them.

But in fact branches are highly dynamic yet trivially predicted by branch predictors, so branch predictors win.

Not to mention that even VLIW cores use branch predictors, because the branch resolution latency is too long to wait for the branch outcome to be known.
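The "dynamic but predictable" distinction can be made concrete with a toy. A branch that strictly alternates taken/not-taken is useless to a static compiler (50% bias either way), yet trivial for even a primitive history-based predictor. This sketch uses an invented "predict same as two outcomes ago" rule just to illustrate the idea -- real predictors index counters by branch history:

```c
/* Count mispredicts for a toy history predictor that guesses each
 * outcome equals the outcome two iterations ago (so it nails any
 * period-2 pattern after warm-up). outcomes[i] is 1 = taken. */
int count_mispredicts(const int *outcomes, int n) {
    int miss = 0;
    for (int i = 0; i < n; i++) {
        int pred = (i >= 2) ? outcomes[i - 2] : 0; /* cold-start: not-taken */
        if (pred != outcomes[i])
            miss++;
    }
    return miss;
}
```

On the alternating pattern 1,0,1,0,... this predictor misses only during warm-up, while the best possible *static* choice misses half the time -- which is the gap the comment is pointing at.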

_chris_ commented on Pi-hole v6   pi-hole.net/blog/2025/02/... · Posted by u/tkuraku
wkyleg · a year ago
In my experience Pi hole is a very worthwhile investment. People who used my internet when I had one would remark how much faster it was. Everything in general seems faster, even things that you wouldn't think of. I typically use Brave for browsing which has good ad blocking capabilities, but this adds a whole additional layer.

The only reason I don't use one now is that I travel a lot more so it's irrelevant, and I have to work enough on tools with Google/Vercel/other analytics that it is just very inconvenient.

Regarding smart TVs, I have found that it's better to just use an Apple TV or Kodi box and never connect them to the internet. Having said that, I gave my TV away because I never used it, so this might not be up to date. A Pi-hole will block ads on smart TVs though.

_chris_ · a year ago
Wouldn’t a smart TV do something ... smarter than just using the default DNS given to it by the network?

I’m not up to speed on this stuff but I thought pihole only blocked the simplest stuff from devices that play nice?

_chris_ commented on T1: A RISC-V Vector processor implementation   github.com/chipsalliance/... · Posted by u/namanyayg
IshKebab · a year ago
Chisel is just a Scala library to generate SV. I haven't actually used it but I've used similar systems and a really big problem with them is debugging. The generated SV tends to be unreadable and you will spend a lot of time debugging it.

Chisel has a similar competitor called SpinalHDL that is apparently a bit better.

https://spinalhdl.github.io/SpinalDoc-RTD/master/index.html

IMO using general purpose languages as SV generators is not the right approach. The most interesting HDL I've seen is Filament. They're trying to do for hardware what Rust has done for software. (It's kind of insane that nobody has done that yet, given how much effort we put into verifying shitty SV.) Haven't tried it yet though.

https://filamenthdl.com/

_chris_ · a year ago
It’s not that hard to debug -- your signal names and register names all carry through. Sure, lots of temp wires get generated, but that’s never where your bug is.
_chris_ commented on Intel Honesty   stratechery.com/2024/inte... · Posted by u/surprisetalk
ohcmon · 2 years ago
You would be surprised, but nvidia’s employee stock plans allow to select the purchase price within the last 2 years https://www.nvidia.com/en-us/benefits/money/espp/
_chris_ · 2 years ago
> allow to select the purchase price within the last 2 years

I don't think that's true. My reading of that is "you lock in the price on your start date and can keep that for the next 2 years going forward". That doesn't help anybody joining at >$1k / share. :D (and that's only ESPP, not standard stock compensation).
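For what it's worth, a typical US (Section 423) ESPP works like this sketch -- a discount off the *lower* of the offering-start price and the purchase-date price, with the "2 years" being the offering period length, not a free pick of any historical price. The 15% figure and the prices below are illustrative assumptions, not Nvidia's actual plan terms:

```c
/* Hypothetical lookback ESPP: purchase at a discount off the lower of
 * the offering-start price and the purchase-date price. 15% is the
 * usual Section 423 maximum discount; actual plan terms vary. */
double espp_price(double offer_start_price, double purchase_day_price) {
    double base = offer_start_price < purchase_day_price
                      ? offer_start_price
                      : purchase_day_price;
    return base * 0.85;
}
```

Under that reading, someone who joined at an offering start of $400 buys at roughly $340 even if the stock later trades at $1000 -- but someone *joining* after the run-up locks in the higher start price, which is the point above.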

_chris_ commented on The Itanic Saga: The History of VLIW and Itanium   abortretry.fail/p/the-ita... · Posted by u/blakespot
gregw2 · 2 years ago
I’d be interested in understanding why the compilers never panned out but have never seen a good writeup on that. Or why people thought the compilers would be able to succeed in the first place at the mission.
_chris_ · 2 years ago
> I’d be interested in understanding why the compilers never panned out but have never seen a good writeup on that. Or why people thought the compilers would be able to succeed in the first place at the mission.

It's a fundamentally impossible ask.

Compilers are being asked to look at a program (perhaps watch it run a sample set) and guess the bias of each branch to construct a most-likely 'trace' path through the program, and then generate STATIC code for that path.

But programs (and their branches) are not statically biased! So it simply doesn't work out for general-purpose codes.

However, programs are fairly predictable, which means a branch predictor can dynamically learn the program path and regurgitate it on command. And if the program changes phases, the branch predictor can re-learn the new program path very quickly.

Now if you wanted to couple a VLIW design with a dynamically re-executing compiler (dynamic binary translation), then sure, that can be made to work.
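The "re-learns very quickly" claim is easy to see with the classic 2-bit saturating counter (a standard textbook predictor, sketched here with made-up thresholds matching the usual scheme): after a phase change, only a couple of mispredicts are needed before the prediction flips -- something no one-shot static guess from a compiler can do.

```c
/* Classic 2-bit saturating counter: state 0..3, predict taken when
 * the counter is in the upper half. */
typedef struct { int ctr; } Predictor;

int predict(const Predictor *p) { return p->ctr >= 2; }

void train(Predictor *p, int taken) {
    if (taken && p->ctr < 3)
        p->ctr++;            /* saturate upward on taken */
    else if (!taken && p->ctr > 0)
        p->ctr--;            /* saturate downward on not-taken */
}
```

Train it on an all-taken phase and it predicts taken; switch the program to an all-not-taken phase and it flips after two mispredicts. That hysteresis (tolerating a single odd outcome) is why the 2-bit version beats a 1-bit one.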

_chris_ commented on How to Design an ISA   queue.acm.org/detail.cfm?... · Posted by u/eatonphil
o11c · 2 years ago
Designing for fusion is valid, but RISC-V has a lot of cases that boil down to "use a 12-byte fused instruction where other architectures do it in 4 bytes".

L1i matters, people!

_chris_ · 2 years ago
> L1i matters, people!

RISC-V consistently wins on L1i footprint.

The complaining is about number of dynamic instructions ("path length"), which can hit you if you don't fuse. Of course, path length might not actually be the bottleneck to raw performance, but it's an easy metric to argue, so a lot of people latch on to it.
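The path-length complaint fits in one line of C. An indexed load like the one below compiles to three instructions on base RV64 (slli + add + ld) where x86-64 uses a single scaled-index mov; the Zba extension's sh3add collapses the address arithmetic, which is the kind of fusion/extension being argued about. (The per-ISA instruction counts in the comments are the standard textbook comparison, not measured output from any particular compiler.)

```c
/* One indexed load. Base RV64: slli + add + ld (3 dynamic insns).
 * x86-64: one mov with a scaled index. RV64 + Zba: sh3add + ld. */
long indexed_load(const long *a, long i) {
    return a[i];
}
```

Note this is a *dynamic instruction count* difference; with compressed instructions the static footprint (the L1i side) can still favor RISC-V, which is the distinction the comment draws.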


u/_chris_

Karma: 687 · Cake day: August 6, 2014
About
RISC-V and Chisel user.

riscv.org chisel.eecs.berkeley.edu https://github.com/ucb-bar/riscv-sodor https://github.com/ucb-bar/riscv-boom
