Writing Testbenches
By: Janick Bergeron
Published: 2003
Reviewed: 4/10/2004



First, I should point out that "Writing Testbenches: Functional Verification of HDL Models, 2nd Edition" is a book I read for work. The original edition was an excellent introduction to functional verification (making sure a design matches its specification and intent). I *highly* recommend it for anyone just coming out of school, or just starting to do verification. The second edition concentrates heavily on the use of Hardware Verification Languages, namely Verisity's Specman (E) and Synopsys' Vera. I use Specman on my current project, and while there are some serious issues, I don't ever want to go back to the old way, writing everything from scratch in VHDL (or God help me, Verilog).

For those not familiar, HVLs provide two main abilities. First, they facilitate creating constrained random input data: not just random values within a range, but values with particular distributions and relationships to each other. Second, they allow one to record metrics on which corner cases are exercised. The latter is called "Functional Coverage". A lot of companies only take advantage of randomization, but the real power comes from Functional Coverage.
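To make those two abilities concrete, here is a minimal Specman/E sketch; the packet struct and its fields are invented for illustration, not taken from the book. The keep constraints express a weighted distribution and a relationship between fields, and the cover group records which combinations actually got hit during simulation.

<'
// Hypothetical packet for illustration -- not an example from the book.
struct packet {
    kind : [SHORT, LONG];            // enumerated field
    len  : uint;
    addr : uint (bits: 16);

    // Constrained random generation: a weighted distribution on len and a
    // relationship tying len to kind, not just a flat range.
    keep len in [1..256];
    keep soft len == select {
        70 : [1..32];                // 70% short payloads
        30 : [33..256];              // 30% long payloads
    };
    keep kind == LONG => len > 32;

    // Functional Coverage: record which corners were actually exercised.
    event done;
    cover done is {
        item kind;
        item len using ranges = {range([1..32]); range([33..256])};
        cross kind, len;
    };
};

extend sys {
    run() is also {
        for i from 1 to 10 {
            var p : packet;
            gen p;                   // allocates and randomizes p per the keeps
            emit p.done;             // samples the coverage group
        };
    };
};
'>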
Most HVL vendors sell their tools claiming you can reach a lot of corners with a purely random environment, then write a few more directed tests to fill in the remaining corner cases, with the result being less time and money spent. Reality doesn't work that way. Changing a random environment can cause it to cover a different set of cases when you re-run a verification suite. And even a parameter with a small range of values might require a long simulation before pure randomness hits all of them. What HVLs can do is increase the quality of your results. Checking that the actual signals in a design encounter a particular corner is much better than a test writer claiming, in a comment written when the test is developed, that the corner is covered. And random testing has always been a useful technique for finding things you didn't think about. HVLs can also provide assertion-style checking, with easier syntax (though typically slower run times) than options such as PSL/Sugar.
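For a rough idea of what assertion-style checking looks like in E, here is a small temporal check; the unit, signal paths, and timing are all hypothetical.

<'
// Hypothetical handshake check -- the signal names are invented.
unit handshake_checker {
    event clk is rise('top.clk') @sim;
    event req is rise('top.req') @clk;
    event ack is rise('top.ack') @clk;

    // Every request must be acknowledged within four clocks.
    expect req_gets_ack is @req => {[..3]; @ack} @clk
        else dut_error("request not acknowledged within 4 clocks");
};
'>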
While I haven't used Vera, my impression from the book is that I would like Specman/E a lot more. Vera does offer private variables (which all languages should support), but that was the only advantage I saw. Vera is more C-like and requires constructor methods to create an instance of an object, whereas the E "gen" command performs all of the necessary allocation of an object and its sub-objects for you. Plus, E is truly "aspect oriented", meaning you can extend any class or method from subsequent files. Figuring out a clever set of extensions to implement a particular test is the most enjoyable part of verification for me.
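As an illustration of what "aspect oriented" buys you, a test can live entirely in its own file and layer constraints and behavior onto the packet sketch above without editing the original source; the specific constraints here are, again, hypothetical.

<'
// A hypothetical test file loaded after the environment. Nothing in the
// original files is modified; the test just extends what is already there.
extend packet {
    // Bias this particular test toward long packets at high addresses.
    keep soft kind == LONG;
    keep soft addr in [0xFF00..0xFFFF];
};

extend sys {
    // "is also" appends to the existing run() method rather than replacing it.
    run() is also {
        out("long-packet corner test loaded");
    };
};
'>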
I can't say I learned a whole lot of new techniques, but then again I have been in the business for 14 years. I've also had the good fortune of being interested in and working with people who were trying to move beyond the status quo (e.g. Doug Park).
One place I had hoped to learn something new was in the area of preventing false positives on functional coverage items. This becomes increasingly difficult as you go from block level to chip or system level simulation. For example, with block "A" in isolation you might be able to examine its input parameters after it processes a block of data and know certain cases are covered. But higher up in the system, with blocks A, B, and C in a pipeline, block B might attenuate or zero out the results such that A's parameters no longer affect the output. [One technique is to create a base event saying the end of the block was reached, then conditionally trigger a different event that checks the gain parameters of block B, or that the output of B had a certain amplitude. This new "qualified" event is then used to record the input parameters. I wound up with a dozen or so event flavors for each processing path on my last project.] Also, the "right" way to handle the re-use of functional coverage items at a higher level still eludes me. E groups coverage items by class instead of by instance (contrary to how hardware engineers think). These kinds of issues affect one's ability to be successful with functional verification. It is also very easy to create an environment where the blocks are so interdependent that no one can implement changes. Maybe I'll write "How to Succeed with E" after I get that coverage instance problem sorted out.
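Roughly, the qualified-event idea from the bracketed note looks like the sketch below; the monitor unit, its fields, and the gain check are hypothetical stand-ins for whatever blocks A and B really expose.

<'
// Hypothetical pipeline monitor -- names and fields invented for illustration.
unit pipeline_mon {
    !a_mode : uint;           // block A's input parameter, sampled by the monitor
    !b_gain : uint;           // block B's gain applied downstream of A

    event block_done;         // base event: end of a processed block reached
    event block_done_qual;    // qualified event: A's parameters still matter

    on block_done {
        // Only claim coverage of A's parameters when B did not zero them out.
        if b_gain != 0 {
            emit block_done_qual;
        };
    };

    // Sampling on the qualified event avoids the false positive where a
    // corner of A looks "covered" even though B attenuated it to nothing.
    cover block_done_qual is {
        item a_mode;
        item b_gain;
        cross a_mode, b_gain;
    };
};
'>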
Overall, Writing Testbenches is a great reference, and again I recommend it to anyone doing functional verification. Until I write my book, it is the best one out there.