[Libre-soc-dev] nmigen tutorials etc.

Luke Kenneth Casson Leighton lkcl at lkcl.net
Sun Oct 10 14:20:10 BST 2021


---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68

On Sun, Oct 10, 2021 at 12:20 PM Andrey Miroshnikov
<andrey at technepisteme.xyz> wrote:

> > do that *every* time including even just adding one line of nmigen code.
> Thanks, great idea! It'll be a good excuse to run the hdl-dev-repos
> script as well.
>
> Waited on writing this reply until I tested ilang/verilog generation.

sensible
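
for reference, the core of it is only a couple of lines. this is a
minimal sketch with a made-up Adder module (not the exact steps from
any particular tutorial):

from nmigen import Elaboratable, Module, Signal
from nmigen.back import rtlil, verilog

class Adder(Elaboratable):
    def __init__(self):
        self.a = Signal(8)
        self.b = Signal(8)
        self.o = Signal(8)

    def elaborate(self, platform):
        m = Module()
        m.d.comb += self.o.eq(self.a + self.b)
        return m

top = Adder()
# ilang (RTLIL), ready to feed straight into yosys:
print(rtlil.convert(top, ports=[top.a, top.b, top.o]))
# verilog (yosys gets invoked behind the scenes, so it must be installed):
print(verilog.convert(top, ports=[top.a, top.b, top.o]))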

> Today I was able to figure out the process (because some tutorials are
> out of date, some don't include everything I needed).

whoops

> What came out of my exploration is a new tutorial section:
> https://libre-soc.org/docs/learning_nmigen/

nice - that's really useful https://libre-soc.org/docs/nmigen_verilog_tb.png

> Have a look, see if it makes sense.

excellent, i cross-referenced on https://libre-soc.org/resources

> Also some of the nmigen tutorial links have been taken down, so I
> replaced them with equivalent

actually you can still get many of them by using archive.org

> (although they haven't helped me with what
> you suggested).

no they won't.

>
> > keep doing it for a minimum of 4 months, because it will help you avoid some very costly errors, which will emerge over time.  i won't spoil the surprise :)
> Oh good, I better make sure to triple check my work then ;)

the key one - now that you understand this is all AST - is
assigning a massive complex batch of nmigen AST to a *python*
variable, then using that *python* variable multiple times.

guess what happens?

that AST gets replicated... once for each *and every* time you use
that python variable.

whoops.
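
a minimal sketch of the trap (signal names and widths are made up):

from nmigen import Module, Signal

m = Module()
a, b = Signal(64), Signal(64)
o1, o2 = Signal(64), Signal(64)

# one python variable holding a (pretend it's enormous) nmigen expression AST:
big = (a + b) ^ (a - b)

# every *python* re-use of "big" replicates that whole tree in the output:
m.d.comb += o1.eq(big & big)     # the entire expression appears twice here

# the fix: land the expression in an intermediate Signal *once*, then
# re-use the Signal (a single node) rather than the python variable:
tmp = Signal(64)
m.d.comb += tmp.eq(big)
m.d.comb += o2.eq(tmp & tmp)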

jacob just wrote something which helps, here:
http://bugs.libre-soc.org/show_bug.cgi?id=722 but hoo-boy are the
memory requirements massive.

> > the idea is however that you will basically not need to know *at all* except in an ancillary indirect way that PartitionedSignal is even being used.
> Fair enough.

the alternatives - which i describe in detail here - are absolutely
flat-out bat-s*** insane.
https://libre-soc.org/3d_gpu/architecture/dynamic_simd/

> Also, when you mention SIMD, are you talking specifically about vector
> instructions?

ah.  right.  you've possibly fallen into the common trap of "SIMD === Vectors".
given that multiple very large Corporations' Marketing Machines have
peddled that for over a decade, this is not a surprise.

i updated the Vector Processor page on wikipedia to correct this
https://en.wikipedia.org/wiki/Vector_processor

so no: i am *not* talking specifically about vector instructions, because
they are not synonymous.

we *happen* to have some Vector instructions (SVP64).

the architectural back-end *happens* to be implemented in (masked, predicated)
SIMD.


> > such a project does in fact exist: it is called MyHDL. we evaluated it and found it places severe limitations on what can be done because it is, underneath, actual verilog.  as in: you write in python but it is *directly* expressing solely and exclusively verilog concepts.
> What's even the point of Verilog with a Python coat?

code clarity and readability, python PEP8 and doc-checkers, vast
numbers of tools
and libraries that manage documentation, much larger developer adoption, you
name it.

> With my limited knowledge, it seems nmigen has more powerful
> abstractions,

yes, exactly.  but it's more than that.  have a look at how PowerDecoder
is done.  see how long it takes you to *actually* spot any *actual*
nmigen code in it.
https://git.libre-soc.org/?p=openpower-isa.git;a=blob;f=src/openpower/decoder/power_decoder.py;hb=HEAD

there are actually two tutorials about it, *and* it's the subject of a number
of the talks i've done.
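
to give a flavour of the trick, here is a made-up miniature (it is *not*
the actual PowerDecoder code, which is driven by the openpower ISA tables):
the decode information is pure data, and the only nmigen lives in one
generic walker:

from nmigen import Elaboratable, Module, Signal

# purely-declarative description: major opcode -> (function unit, form)
OPCODES = {
    0b011111: ("ALU",  1),
    0b100001: ("LDST", 2),
}

class TinyDecoder(Elaboratable):
    def __init__(self):
        self.opcode = Signal(6)
        self.unit   = Signal(2)
        self.form   = Signal(2)

    def elaborate(self, platform):
        m = Module()
        units = {"ALU": 1, "LDST": 2}
        with m.Switch(self.opcode):          # the *only* nmigen in here
            for op, (unit, form) in OPCODES.items():
                with m.Case(op):
                    m.d.comb += self.unit.eq(units[unit])
                    m.d.comb += self.form.eq(form)
        return m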

> as well as better integration with formal verification,
> thus making your job easier (and more maintainable).

duh. bottom line, it's pretty damn brilliant.

> > nmigen you have an "object" (a Module) to which you add "stuff" using python function calls and that "stuff" accumulates an in-memory Abstract Syntax Tree (look up the concept, under "compilers")
> Oh yeah ASTs are pretty cool, the Pascal interpreter Python tutorial
> uses ASTs.

you'll love this: i recovered the GardenSnake.py example from python-ply,
added a 3rd lexer pass which allowed it to properly support full python
syntax, then used it as the basis for the openpower ISA pseudo-code
to python compiler.
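
incidentally, you can *see* the nmigen AST directly from a python prompt
(the exact repr below is approximate, from memory - run it to check):

from nmigen import Signal

a = Signal(8, name="a")
b = Signal(8, name="b")
o = Signal(8, name="o")

stmt = o.eq(a + b)   # nothing is "assigned" here: this just builds an Assign node
print(repr(stmt))    # prints the tree, roughly: (eq (sig o) (+ (sig a) (sig b)))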


> > that in-memory AST then gets handed to a function whose job it is, just like any compiler, to syntax-check it and do other assurances, followed by walking the entire tree and spitting out a yosys-compatible ilang (aka RTLIL) file.
> You know, after I studied basic compiler design, and you mentioned ASTs
> now, it makes perfect sense to have an AST of your RTL. For
> visualisation, redundancy checking, errors, etc.
>
> Shame such a concept isn't discussed much on the hardware side of
> things. We have Karnaugh maps, KCL/KVL, which you can use to construct
> diagrams and expressions to optimise.

this is 90s-era stuff. not that hardware *needs* to move on, and karnaugh
maps are so fundamental that i learned about them as part of O-Level and
A-Level maths back in 1985-87, but hardware engineering hasn't moved on
since...

... oh except for proprietary companies creating tools that you can only
use under NDA.

> > as a software engineer i am absolutely astounded and shocked that hardware engineers have no idea about git, continuous integration, or any of the standard techniques we take for granted.  the entire industry is about 15-20 years behind.  no wonder they charge so much money.
> I'm just as astounded as you are. So much is still manual in the ASIC
> design process that progress is slow, thus limiting the time spent on
> fixing bugs (which soon become generational bugs) to focus on cramming
> more features.

and it takes so long that they can't possibly do design-iteration.

> Funny thing that most of the time there's always a
> shortage of programmers, plenty of hardware engineers though.

yes, but they then have to be trained in software engineering techniques,
and many of them can't handle it: there's too much they don't know.
(which explains the programmer shortage: people who *have* been
trained in software engineering get snapped up)

IBM *only* employs people with c++ software engineering experience
to work on the IBM POWER processor HDL. this tells you everything you
need to know. their entire validation suite is in c++.  they have their
*own gate-level simulator*... in c++.   they even have their own
HDL-to-GDS-II synthesis tools because the ones produced by commercial
companies are incapable of dealing with 18 billion transistor designs.

l.


