[libre-riscv-dev] Yehowshua Tasks

Luke Kenneth Casson Leighton lkcl at lkcl.net
Fri Jun 19 02:48:25 BST 2020


On Fri, Jun 19, 2020 at 1:14 AM Yehowshua <yimmanuel3 at gatech.edu> wrote:
>
>
> > also, parameterising of the minerva code so that address and data
> > widths can be selected (128-bit, 64-bit, 32-bit)
>
> Minerva cache can already do this.

took another look and yes!

class L1Cache(Elaboratable):
    def __init__(self, nways, nlines, nwords, base, limit):
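        # nways / nlines / nwords parameterise the cache geometry;
        # base and limit presumably set the cacheable address range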

i'd falsely assumed that, because the wishbone records
(LoadStoreInterface) are all restricted to 32-bit, L1Cache was as
well.

> Also, the way that Minerva does things is by having
> `CachedLoadStoreUnit` contain the cache.
> `CachedLoadStoreUnit` is also connected to the wishbone bus.

yes.

> If I understand correctly, you would also want to follow this architecture?

yes absolutely.  and also keep the BareLoadStoreUnit as well.

> If you can describe the interface you have in mind for the `CachedLoadStoreUnit`,
> I can evaluate what is possible code-wise.

i'd like us to use it pretty much as-is, with little to no
modification.  with the underlying concepts being near-identical
(because that's just Computer Science) i *believe* it is simply a
matter of identifying which signal names correspond to which.

i think, however.... i have a sneaking suspicion.... that it might be
a good idea first to get L0CacheBuffer to talk a "dummy" version of
LoadStoreUnitInterface.

basically, write an intermediary class, called
"TestMemoryLoadStoreUnit", which, instead of L0CacheBuffer interfacing
directly to TestMemory, interfaces to TestMemory *through* the
LoadStoreUnitInterface.

then - without having to bring Wishbone or L1 Caches or anything else
into the mix - the job is greatly simplified.  we do *not* have to
think, "argh, it's all going to go out over dbus, therefore we have to
get our heads round all that wishbone logic".  actually _no_ we
*don't*.

just looking here:

https://github.com/lambdaconcept/minerva/blob/master/minerva/core.py#L455

this is the *only* interaction point that the core has with the
LSUI.dbus Record.

so.

yes.

slowly a potential strategy is sinking into my tiny braiiin.

how about this:

* start a new file soc/experiment/loadstore.py which imports
LoadStoreUnitInterface (LSUI) from soc.minerva.loadstore
* expand its addresses to 48 bit (parameterised), the mask to 8 bit
(again, parameterised) and the data to 64 bit (likewise) - actually
i've committed this already
* create a TestMemoryLoadStoreUnit (TMLSU) which is compliant with the
LSUI interface, but *IGNORES* LSUI's wishbone bus entirely
* make TMLSU read and write to TestMemory *using* the LSUI x_addr,
x_mask, etc. etc. (a sketch follows this list)
* write a unit test to check that that's operational
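
to make TMLSU concrete, here's a rough sketch.  the LSUI signal names
(x_addr, x_mask, x_load, x_store, x_store_data, x_stall, x_valid,
m_load_data) are the ones discussed above, but the import paths, the
constructor arguments and the TestMemory port names are guesses on my
part, not the committed code:

    from nmigen import Elaboratable, Module
    from soc.minerva.loadstore import LoadStoreUnitInterface
    from soc.experiment.testmem import TestMemory

    class TestMemoryLoadStoreUnit(LoadStoreUnitInterface, Elaboratable):
        def __init__(self, addr_wid=48, mask_wid=8, data_wid=64):
            super().__init__(addr_wid, mask_wid, data_wid)
            self.mem = TestMemory(data_wid, addr_wid)  # args assumed

        def elaborate(self, platform):
            m = Module()
            m.submodules.mem = mem = self.mem
            comb = m.d.comb
            # TestMemory answers in a single cycle: never stall
            comb += self.x_stall.eq(0)
            # loads: drive the read port from x_addr.  dropping the
            # bottom 3 bits (64-bit word-addressing) is an assumption
            with m.If(self.x_load & self.x_valid):
                comb += mem.rdport.addr.eq(self.x_addr[3:])
            comb += self.m_load_data.eq(mem.rdport.data)
            # stores: drive the write port, x_mask as byte-enables
            with m.If(self.x_store & self.x_valid):
                comb += mem.wrport.addr.eq(self.x_addr[3:])
                comb += mem.wrport.data.eq(self.x_store_data)
                comb += mem.wrport.en.eq(self.x_mask)
            # dbus deliberately left untouched; errors never set (yet)
            return m

the unit test then just wiggles x_addr / x_load / x_store in a
simulation and checks that data lands in (and comes back out of)
TestMemory.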

theeeen, after that unit test is operational:

* modify L0CacheBuffer by ripping out the hard-connection to
TestMemory, replacing it with an FSM that groks LSUI.

this latter is where it gets slightly ugly, because L0CacheBuffer is
designed specifically to be "in control": it writes directly to
TestMemory using pure and simple one-cycle "read-enable" and
"write-enable" signals.  it does *not* grok the busy/stall/valid
signalling, and that's what it needs to do.

this needs to go - replaced by lsui.m_load_data plus lsui.x_load.eq(1),
respecting x_stall/x_valid (etc):

            comb += lddata.eq((rdport.data & lenexp.rexp_o) >>
                              (lenexp.addr_i*8))
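
something along these lines instead (a sketch only: the x-vs-m stage
timing and the FSM states are glossed over here, and "addr" stands in
for whatever L0CacheBuffer's row-address signal actually is):

            # issue the load request over LSUI...
            comb += lsui.x_addr.eq(addr)
            comb += lsui.x_mask.eq(lenexp.lexp_o)
            comb += lsui.x_load.eq(1)
            comb += lsui.x_valid.eq(1)
            # ...and once x_stall deasserts, take the result from
            # m_load_data in place of rdport.data
            with m.If(~lsui.x_stall):
                comb += lddata.eq((lsui.m_load_data & lenexp.rexp_o) >>
                                  (lenexp.addr_i*8))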

this too needs to go - replaced by lsui.x_store_data plus
lsui.x_store.eq(1) and likewise:

            comb += wrport.data.eq(stdata)  # write st to mem
            comb += wrport.en.eq(lenexp.lexp_o) # enable writes
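
again as a sketch, with the same caveats as the load side:

            # issue the store request over LSUI instead of driving
            # wrport directly
            comb += lsui.x_addr.eq(addr)
            comb += lsui.x_mask.eq(lenexp.lexp_o)  # byte-enables
            comb += lsui.x_store_data.eq(stdata)
            comb += lsui.x_store.eq(1)
            comb += lsui.x_valid.eq(1)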

for a very very very first cut of TMLSU we can also assume that
m_load_error and m_store_error never get set.  however, the very first
thing that should be done after we get the first version functional is
to add that error handling in.
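
for example (the out-of-range check and the "memsize" name are just
for illustration, and i'm going from memory that LSUI has m_badaddr):

            # flag load/store errors on out-of-range addresses
            with m.If(self.x_addr[3:] >= memsize):
                comb += self.m_load_error.eq(self.x_load)
                comb += self.m_store_error.eq(self.x_store)
                comb += self.m_badaddr.eq(self.x_addr[3:])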

what do you think?

l.


