This also brings up another question if anyone knows. Is there a term for hardware description languages similar to Turing-complete for programming languages, or is there a different set of common terms?
Still... it is tantalisingly close to a really nice HDL for design purposes. I have considered trying to make a pipelined RISC-V chip in Sail with all the caches, branch predictor etc.
One feature that makes it a little awkward though is that there isn't really anything like a class or a SV module that you can reuse. If you want to have N of anything you pretty much have to copy & paste it N times.
It's a really nice language - especially the lightweight dependent types. Basically it has dependent types for integers and bit-vector lengths so you can have some really nice guarantees. E.g. in this example https://github.com/Timmmm/sail_demo/blob/master/src/079_page... we have this function type
val splitAccessWidths : forall 'w, 0 <= 'w . (xlenbits, int('w)) ->
{'w0 'w1, 'w0 >= 0 & 'w1 >= 0 & 'w0 + 'w1 == 'w . (int('w0), int('w1))}
Which basically means it returns a tuple of two integers, and they must sum to the input integer. The type system knows this. Then we do this:

let (width0, width1) = splitAccessWidths(vaddr, width);
let val0 = mem_read_contiguous(paddr0, width0);
let val1 = mem_read_contiguous(paddr1, width1);
val1 @ val0
The type system knows that `length(val0) + length(val1) == width`. When you concatenate them (`@` is bit-vector concatenation; it wouldn't have been my choice, but the language is heavily OCaml-inspired), the type system knows `length(val1 @ val0) == width`. If you make a mistake and write `val1 @ val1`, for example, you'll get a type error.
A simpler example is https://github.com/Timmmm/sail_demo/blob/master/src/070_fanc...
The type `val count_ones : forall 'n, 'n >= 0. (bits('n)) -> range(0, 'n)` means that it's generic over any length of bit vector and the return type is an integer from 0 to the length of the bit vector.
I added it to Godbolt (slightly old version though) so you can try it out there.
It's not a general purpose language so it's really only useful for modelling hardware.
I don't understand why it's called a vuln. It's, like, the whole point of the system to be able to do this! It's how it's marketed!
That seems... very excessive? Who's actually being hurt here? No one is buying 20 year old consoles and games that probably aren't even sold by the original company anymore. Seems pretty much like a classic victimless crime IMO.
> Agents accused the creator of promoting pirated copyrighted materials stemming from his coverage of Anbernic handheld game consoles.
That hardly seems like something worth an arrest, let alone jail time.
> Italy has a history of heavy-handed copyright enforcement—the country's Internet regulator recently demanded that Google poison DNS to block illegal streams of soccer. So it's not hard to believe investigators would pursue a case against someone who posts videos featuring pirated games on YouTube.
Oh well... didn't realize Italy was like that
It's such a great and simple algorithm. I feel like it deserves to be more widely known.
I used it at Dyson to evaluate really subjective things like how straight a tress of hair is - pretty much impossible to judge from a single photo, but you can ask a bunch of people to compare two photos and say which looks straighter, and from that you can derive an objective ranking.
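The fitting step can be sketched in a few lines. This is a minimal Bradley-Terry fit using the classic minorization-maximization update - function and variable names are my own, not from any particular library:

```python
def bradley_terry(items, wins, iters=100):
    """Fit Bradley-Terry strengths from pairwise comparison counts.

    wins[(a, b)] is the number of times a was preferred over b.
    Returns a dict mapping item -> strength; higher means more
    preferred (e.g. "straighter"). Uses the standard MM update:
    p_i <- W_i / sum_j (n_ij / (p_i + p_j)).
    """
    p = {i: 1.0 for i in items}
    for _ in range(iters):
        new = {}
        for i in items:
            # total wins for i
            num = sum(wins.get((i, j), 0) for j in items if j != i)
            # comparisons involving i, weighted by current strengths
            den = sum(
                (wins.get((i, j), 0) + wins.get((j, i), 0)) / (p[i] + p[j])
                for j in items if j != i
            )
            new[i] = num / den if den > 0 else p[i]
        # normalize so strengths sum to len(items) (scale is arbitrary)
        s = sum(new.values())
        p = {i: len(items) * v / s for i, v in new.items()}
    return p
```

Sorting items by the fitted strength gives you the objective ranking, even though no single judge ever saw more than two photos at a time.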
> In the bootstrap process, the entire thing becomes way more complex. You see, rustc is not invoked directly. The bootstrap script calls a wrapper around the compiler.
> Running that wrapped rustc is not easy to run either: it requires a whole lot of complex, environment flags to be set.
> All that is to say: I don’t know how to debug the Rust compiler. I am 99.9 % sure there is an easy way to do this, documented somewhere I did not think to look. After I post this, somebody will tell me "oh, you just need to do X".
> Still, at the time of writing, I did not know how to do this.
> So, can we attach gdb to the running process? Nope, it crashes way to quickly for that.
It's kind of funny how often this problem crops up and the variety of tricks I have up my sleeve to deal with it. Sometimes I patch the script to invoke `gdb --args [the original command]` instead, but this is only really worthwhile if it's a simple shell script and I can track where stdin/stdout are going. Otherwise I might patch the code to sleep a bit before actually running anything, to give me a chance to attach GDB. On some platforms you can get notified of process execs and sometimes even intercept them (e.g. as an EDR solution), and sometimes I use that to suspend the process before it gets a chance to launch. But I kind of wish there was a better way to do this in general… LLDB has a "wait for launch" flag, but it just spins in a loop waiting for new processes and can't catch anything that dies too early.
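The "sleep before running anything" patch can be a few lines pasted into the entry point. A hypothetical Python helper (names are mine; gated on an environment variable so the patched script behaves normally otherwise):

```python
import os
import sys
import time

def wait_for_debugger(seconds=30):
    """Pause early in startup so a debugger can attach.

    If DEBUG_WAIT is set in the environment, print this process's
    PID to stderr and sleep, giving you time to run e.g.
    `gdb -p <pid>` from another terminal.  Otherwise do nothing.
    """
    if os.environ.get("DEBUG_WAIT"):
        print(f"PID {os.getpid()}: waiting {seconds}s for debugger...",
              file=sys.stderr)
        time.sleep(seconds)
```

Call `wait_for_debugger()` as the first line of the deeply-buried script, then run the whole build with `DEBUG_WAIT=1` and attach at your leisure.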
When it is loaded it automatically tells VSCode to start a debugger and attach, and it waits for the debugger to attach before continuing.
The end result is that you just run your script with an environment variable set, and it automatically attaches a nice GUI debugger to the process, no matter how deeply buried in scripts and Makefiles it is.
https://github.com/Timmmm/autodebug
I currently use this for debugging C++ libraries that are dynamically loaded into Questa (a commercial SystemVerilog simulator) that is started by a Python script running in some custom build system.
In the past I used it to debug Python code running in an interpreter launched by a C library loaded by Questa started by a Makefile started by a different Python interpreter that was launched by another Makefile. Yeah. It wasn't the only reason by a long shot but that company did not survive...
This is arguably worse than crashing with a stack trace (at least then I can see a call path) or Go's typical chain of human-annotated errors.
https://github.com/rust-lang/rust/issues/141854
> I'll also note that this was quite annoying to debug since the error message in its entirety was `operation not supported on this platform`, you can't use RUST_BACKTRACE, rustfmt doesn't have extensive logging, and I don't fancy setting up WASM debugging. I resorted to tedious printf debugging. (Side rant: for all the emphasis Rust puts on compiler error messages its runtime error messages are usually quite terrible!)
Even with working debugging it's hard to find the error since you can't set a breakpoint on `Err()` like you can with `throw`.
The reason there isn't an "equal" option is because it's impossible to calibrate. How close do the two options have to be before the average person considers them "equal"? You can't really say.
The other problem is when two things are very close, if you provide an "equal" option you lose the very slight preference information. One test I did was getting people to say which of two greyscale colours is lighter. With enough comparisons you can easily get the correct ordering even down to 8 bits (i.e. people can distinguish 0x808080 and 0x818181), but they really look the same if you just look at a pair of them (unless they are directly adjacent, which wasn't the case in my test).
The "polluted by randomness" issue isn't a problem with sufficient comparisons because you show the things in a random order so it eventually gets cancelled out. Imagine throwing a very slightly weighted coin; it's mostly random but with enough throws you can see the bias.
...
On the other hand, 16 comparisons isn't very many at all, and also I did implement an ad-hoc "they look the same" option for my tests and it did actually perform significantly better, even if it isn't quite as mathematically rigorous.
Also, player skill ranking systems like Elo or TrueSkill have to deal with draws (in games that allow them), and really most of these ranking algorithms are fairly ad hoc anyway (e.g. why does Bradley-Terry use a sigmoid model?), so it's not a big deal to add a bit more ad-hocness to your model.