Hardware emulators are expensive, but a single mask respin at 7, 10, or 16nm is even more expensive.
See myth 7 here: http://www.electronicdesign.com/eda/11-myths-about-hardware-...
The reasons are numerous. I already gave a few. I will give another. Once you have to integrate hard IP from other parties, you cannot synthesise it for an FPGA, which means you won't be able to run any FPGA verification with that IP in the design. You can get a behavioural model that works in simulation only. In fact it is usually a requirement for hard IP to be delivered with a cycle-accurate model for simulation.
I'll give another reason. If you are verifying on an FPGA you will be running a lot faster than simulation. The Design Under Test requires test stimulus at the speed of the FPGA. That means you have to generate that stimulus at speed and then check all the outputs of the design against expected behaviour at speed. This means you have to create additional HW to form the testbench around the design. This is a lot of additional work to gain verification speed, and it is not reusable once the design is synthesised for ASIC.
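To make that concrete, here is a rough sketch of what such a testbench harness looks like (the DUT, its one-cycle latency, and all the names are made up for illustration). Both the stimulus generator and the checker are themselves synthesizable hardware running at the FPGA clock:

    // Hypothetical one-cycle DUT standing in for the real design
    module my_dut (
        input  wire       clk,
        input  wire [7:0] a, b,
        output reg  [8:0] sum
    );
        always @(posedge clk) sum <= a + b;
    endmodule

    // Synthesizable harness: stimulus and checking both run at clock speed
    module fpga_tb_harness (
        input  wire clk,
        input  wire rst_n,
        output reg  error        // sticky flag the host can poll
    );
        // LFSR-based stimulus generator (no file I/O or testbench tasks on an FPGA)
        reg [15:0] lfsr;
        always @(posedge clk or negedge rst_n)
            if (!rst_n) lfsr <= 16'hACE1;
            else        lfsr <= {lfsr[14:0], lfsr[15] ^ lfsr[13] ^ lfsr[12] ^ lfsr[10]};

        wire [7:0] a = lfsr[7:0];
        wire [7:0] b = lfsr[15:8];
        wire [8:0] dut_sum;

        my_dut dut (.clk(clk), .a(a), .b(b), .sum(dut_sum));

        // Reference model and checker, also built as real hardware,
        // mirroring the DUT's one-cycle latency
        reg [8:0] expected;
        always @(posedge clk) expected <= a + b;

        always @(posedge clk or negedge rst_n)
            if (!rst_n)                   error <= 1'b0;
            else if (dut_sum != expected) error <= 1'b1;
    endmodule

None of this harness survives into the ASIC netlist; it exists purely to feed and check the design at speed.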
I can go on and on about this stuff. Maybe there are reasons for a particular product, but I am talking about general ASIC SoC work. I've got nothing against FPGAs. I am working on FPGAs right now. But real ASIC work uses simulation first and foremost. It is a dominant part of the design flow and FPGA validation just isn't. On an "Ask HN", you would be leading a newbie the wrong way by pointing them to FPGAs. It is not done a lot.
In this case, I'd guess it's got a lot to do with cost vs relevance of the simulation. If you're Intel or AMD making a processor, I bet FPGA versions of things are not terribly relevant because they don't capture a whole host of physical effects at the bleeding edge. OTOH, for simpler designs on older processes, one might get a lot of less-formal verification value by demonstrating functionality on an FPGA. But this is speculation on my part.
Exactly. When you verify a design via an FPGA you are essentially only testing the RTL for correctness. Once you synthesise for FPGA rather than the ASIC process, you diverge. In ASIC synthesis I have a lot more ability to meet timing constraints.
So given that FPGA validation only proves the RTL is working, ASIC projects don't focus on FPGA. We know we have to get a back-annotated gate-level simulation test suite passing. This is a major milestone for any SoC project. So, planning backwards from that point, we focus on building simulation testbenches that can work on both gate level and RTL.
I am not saying FPGAs are useless but they are not a major part of SoC work for a reason. Gate level simulation is a crucial part of the SoC design flow. All back end work is.
Nobody in their right mind would produce an ASIC without going through simulation as a form of validation. For anything non-trivial, that means FPGA.
The ability to perform constrained randomised verification is only workable via UVM or something like it. For large designs that is arguably the best verification methodology. Without visibility through the design to observe and record the possible corner cases of transactions, you can't be assured of functional coverage.
While FPGAs can run a lot more transactions, the ability to observe coverage of them is limited.
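For anyone who hasn't seen it, this is roughly what that looks like in bare SystemVerilog, stripped of the UVM machinery (the transaction fields, constraints and bins are made up for illustration):

    module tb;
        // Made-up bus transaction, randomised under constraints each iteration
        class bus_txn;
            rand bit [31:0] addr;
            rand bit [1:0]  burst;
            // keep stimulus inside the legal/interesting parts of the address map
            constraint legal_addr {
                addr inside {[32'h0000_0000:32'h0000_FFFF],
                             [32'hFFFF_0000:32'hFFFF_FFFF]};
            }
        endclass

        // Functional coverage: record which corners were actually exercised
        covergroup txn_cg with function sample(bit [1:0] burst, bit [15:0] addr_hi);
            cp_burst : coverpoint burst;
            cp_addr  : coverpoint addr_hi { bins low = {16'h0000}; bins high = {16'hFFFF}; }
            cx       : cross cp_burst, cp_addr;   // every burst type in every region
        endgroup

        initial begin
            bus_txn t  = new();
            txn_cg  cg = new();
            repeat (1000) begin
                void'(t.randomize());              // constrained random stimulus
                // ... drive t onto the DUT interface and check the response ...
                cg.sample(t.burst, t.addr[31:16]); // observe and record the corner hit
            end
            $display("functional coverage = %0.2f%%", cg.get_coverage());
        end
    endmodule

The randomize/sample loop is trivial in simulation; getting the same observability out of a design running inside an FPGA is not.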
I have worked on multiple SoCs for Qualcomm, Canon and Freescale. FPGAs don't play a role in any SoC verification that I've worked on.
The skills to do front end work are similar but an ASIC design flow generally doesn't use an FPGA to prototype. They are considered slow to work with and not cost effective.
IP cores in ASICs come in a range of formats. "Soft IP" means the IP is not physically synthesised for you. "Hard IP" means it has been. The implications are massive for all the back end work. Once the IP is hard, I am restricted in how the IP is tested, clocked, reset and powered.
For front end work, IP cores can be represented by cycle accurate models. These are just for simulation. During synthesis you use a gate level model.
Does this confusion typically happen to engineers who are trying to teach themselves hardware design, or is it just an indication of a terribly-designed curriculum?
Maybe engineers need to be introduced to the synthesis tools at the same time as the simulator tools.
Simulating RTL is only an approximation of reality. So emphasizing RTL simulation is bad. You see it over and over though. People teach via RTL simulation.
Synthesis is the main concern. Can the design be synthesised into HW and meet the constraints? Because all the combinatorial logic gets transformed into something quite different in an FPGA.
Also, just to be clear, I do not (and never did) hold any hard feelings towards the recruiter; in fact, it was very kind of them to point out why I was not qualified in the first place. This has probably been the clearest reflection of how I let my ego get the best of me at times, and I hope it might serve as a warning to those who might be tempted to do the same "devsplaining" in similar situations.
Please let me know if you have any other criticisms beyond the ones already voiced in this thread. I'm reading through the comments here as I can, and it's been a lot of good advice. Thanks again.
You stuck to your guns and didn't just lie about Unix experience, so I commend you.
But if you really want the job, next time just lie and set them straight once you've gotten an interview. It is splitting hairs to make a big deal out of actual Unix experience vs Linux experience.
If you're aiming for an FPGA job after school you'll need to be proficient in Verilog or VHDL (ideally both); there's no shortcut. The sooner you learn how to deal with their quirks and pitfalls (I agree they have a lot), the better. Sprinkle some good ol' TCL in there and you're good to go. Yes, Python is better and more feature/library rich, but the industry is still using TCL (which is not bad, just not modern).
Don't get me wrong, I'd like to see a standardized higher-level approach to hardware description, but unless the vendors agree and support it there's very little chance it will be useful. The current trend in high-level synthesis is non-portable, vendor-specific tools. The only way I see the trend changing is when FPGAs become more mainstream (already happening in the server/deep learning sectors) and there's a critical mass of customers asking for FPGA tools on par with software tools (i.e. high-level languages, open source, etc.)
PS. You forgot the python based myHDL :)
But the language is just a small part of the design process. You have to learn to design HW. The HW engineering project tailors the tool choices around the requirements of the product. It is assumed that engineers know the fundamentals. They can adapt to any high level synthesis tool.
Vendors' training courses for all the fancy HLS tools are done in a few days at most. They don't have a semester for any newbies to learn Verilog/VHDL or C/C++ first. It's assumed you know them.
One thing that bit me when I was a complete n00b: assigning registers from within more than a single always block. On my simulator (at the time) it worked perfectly but the synthesis tool silently ignored one of the blocks.
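Roughly what I had, reconstructed from memory (module and signal names are made up):

    // Two always blocks both driving q. My simulator resolved it
    // (last write in a time step won); the synthesis tool silently
    // kept only one of the blocks.
    module bad_ctrl (input clk, load, clear, d, output reg q);
        always @(posedge clk) if (load)  q <= d;
        always @(posedge clk) if (clear) q <= 1'b0;
    endmodule

    // The fix: one always block per register, with the priority explicit.
    module good_ctrl (input clk, load, clear, d, output reg q);
        always @(posedge clk)
            if (clear)     q <= 1'b0;
            else if (load) q <= d;
    endmodule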
EDA tools suck. There, I said it. Coming from a software background, it's truly shocking how poorly errors/warnings are handled. My "favorite" part is that you cannot enforce a "0 warnings" discipline, as the libraries and examples from the vendors provoke thousands of warnings and the only workaround is to filter out the individual instances of the messages.
It's tool dependent but I believe you should see a warning that two drivers are assigned to the same net.
This, I am guessing, is probably where you mistakenly thought you were creating a register in Verilog with the keyword "reg". Synthesis tools don't work like that and haven't for quite a while.
Taken from https://blogs.mentor.com/verificationhorizons/blog/2013/05/0... :
"Initially, Verilog used the keyword reg to declare variables representing sequential hardware registers. Eventually, synthesis tools began to use reg to represent both sequential and combinational hardware as shown above and the Verilog documentation was changed to say that reg is just what is used to declare a variable. SystemVerilog renamed reg to logic to avoid confusion with a register – it is just a data type (specifically reg is a 1-bit, 4-state data type). However people get confused because of all the old material that refers to reg."
A lot of people here on HN seem to be self taught and not keeping up with tool and language developments. If you use tools and techniques from the 90s, don't expect wonderful results.
My experience went something like this: a hardware engineer needs to do a routine task like add a peripheral, swap some pin assignments, and modify the Verilog/VHDL. So they do all their synthesis and have an export ready to hand off to the software engineer. They commit their changes and it probably causes differences in dozens of files, but such is life. It seems like this could be reduced to differences in a few human-readable files, except for the bitstream, which obviously is binary.
The SW engineer then needs to update the FSBL and BSP for the board. I never found a way to automate this on the command line, you needed to update the FSBL using their horrible Eclipse-based import tool. In my case, I had to make some manual modifications to the FSBL. I think for my modifications that I needed to flip some GPIO pins early in the boot process and also do some RSA validation on the bitstream. Well, all those modifications would get wiped out. I never found a way to template those and preserve them across new imports.
So I had a bunch of differences that had to be manually merged every time. I had notes about it, but come on. What a pain. At the end of all of this, many dozens more files were changed. Once again, it seems like this should reduce to just a handful of human-readable files like an FSBL configuration header / C file, the new U-Boot config, and the new kconfig. But instead you had two massive changesets in version control for some very routine work.