[Libre-soc-dev] complex numbers as matrices

Richard Wilbur richard.wilbur at gmail.com
Wed Jun 30 19:07:07 BST 2021




> On Jun 28, 2021, at 11:27, Luke Kenneth Casson Leighton <lkcl at lkcl.net> wrote:
> 
> On 6/28/21, Richard Wilbur <richard.wilbur at gmail.com> wrote:
>> o think about.  Can
>> the vector engine handle multiplying a 3-dimensional matrix by a
>> 4-dimensional matrix, yielding a 3-dimensional matrix result?
> 
> yep.
> 
> https://libre-soc.org/openpower/sv/remap/
> 
> wait... *4* dimensional, no.
> 
> only goes as high as 3 dimensions, XYZ.
> 
> however... if you don't mind a manual for-loop on the 4th outer
> dimension it's fine.
> 
> i assume you didn't mean two 2-dimensional matrices, one 4x3 the other
> 3x4 to create a 2D matrix 3x3 result.
> 
> i believe you *actually* mean, "a matrix of NxNxNxN multiplied by one
> of NxNxN" which makes no sense, but hey

Let’s take a very simplistic example:  the circuit consists of a periodic voltage source, V(ω), connected across a purely inductive load, L.  According to Ohm’s Law, V = IZ.  For the AC case this is a complex-valued phasor equation, V(ω) = I(ω)Z(ω), where Z(ω) = jωL is the impedance of the inductor.

In this simplistic example, the current, I(ω), in the inductive load is found by dividing both sides by the impedance, a complex number.
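
For instance (a rough sketch with Python’s built-in complex type; the component values are invented purely for illustration):

import math

# hypothetical values: a 10 V source at 50 Hz across a 100 mH inductor
V = 10 + 0j                   # voltage phasor V(omega)
omega = 2 * math.pi * 50.0
L_henries = 100e-3
Z = 1j * omega * L_henries    # impedance of the inductor, Z = j*omega*L
I = V / Z                     # one complex division gives the current phasor
print(abs(I), math.degrees(math.atan2(I.imag, I.real)))   # ~0.318 A at -90 degrees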

If we treat complex numbers as scalars, this is a 1x1 matrix equation (hence 0D, since it is normal practice to ignore dimensions whose size is 1).  If we represent complex numbers as 2x2 real matrices, this becomes a 1x1x2x2 matrix equation, and thus 2D.
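
As a quick cross-check (a Python/numpy sketch, not SV code), the 2x2 representation a + jb -> [[a, -b], [b, a]] reproduces complex multiplication under ordinary matrix multiply:

import numpy as np

def as_matrix(z: complex) -> np.ndarray:
    # represent a + jb as the 2x2 real matrix [[a, -b], [b, a]]
    a, b = z.real, z.imag
    return np.array([[a, -b],
                     [b,  a]])

z1, z2 = 3 + 4j, 1 - 2j
# the matrix product of the representations matches the representation
# of the complex product
assert np.allclose(as_matrix(z1) @ as_matrix(z2), as_matrix(z1 * z2))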

When we consider a more complicated circuit with N elements and use Kirchhoff’s Current or Voltage Laws to analyze it by nodes or loops, the result will be a set of N simultaneous equations in N variables.  The solution then involves an equation of the form
[ 1xN ] = [ 1xN ] x [ NxN ]
where every element is complex.  Hence for first-class complex treatment we have
[ 1xNx1 ] = [ 1xNx1 ] x [ NxNx1 ]
which yields under dimensional analysis
1D = 1D x 2D
which is hardly a challenge for a vector machine unless N grows too large.
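
To make the first-class-complex case concrete (a numpy sketch with invented values, not SV code), the whole solution is a single complex linear solve of [ 1xN ] = [ 1xN ] x [ NxN ]:

import numpy as np

# hypothetical 3-node system: Z is the NxN complex impedance matrix and
# V the 1xN vector of source phasors (values invented for illustration)
omega = 2 * np.pi * 50.0
Z = np.array([[10 + 1j * omega * 1e-3,  2 + 0j,                   0 + 0j],
              [ 2 + 0j,                 5 - 1j / (omega * 1e-6),  1 + 0j],
              [ 0 + 0j,                 1 + 0j,                   8 + 1j * omega * 2e-3]])
V = np.array([230 + 0j, 0 + 0j, 0 + 0j])

# [ 1xN ] = [ 1xN ] x [ NxN ] with complex scalars: solve I @ Z = V for I
I = np.linalg.solve(Z.T, V)
assert np.allclose(I @ Z, V)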

On the other hand, if we choose to represent the complex numbers as 2x2 matrices of reals, then we have the following
[ 1xNx2x2 ] = [ 1xNx2x2 ] x [ NxNx2x2 ]
which yields under dimensional analysis
3D = 3D x 4D
and thus no longer necessarily so trivial, even for a vector machine.
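
The same system in 2x2-real-block form shows where the extra dimensions come from (again only a numpy sketch of the shapes: after dropping the size-1 axis the operands are Nx2x2 and NxNx2x2, i.e. 3D and 4D):

import numpy as np

def expand(z):
    # expand an array of complex numbers into trailing 2x2 real blocks:
    # shape (...,) -> (..., 2, 2), with a + jb -> [[a, -b], [b, a]]
    a, b = z.real, z.imag
    return np.stack([np.stack([a, -b], axis=-1),
                     np.stack([b,  a], axis=-1)], axis=-2)

N = 3
rng = np.random.default_rng(0)
Zc = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
Ic = rng.normal(size=(1, N)) + 1j * rng.normal(size=(1, N))

Z4 = expand(Zc)    # shape (N, N, 2, 2): the [ NxNx2x2 ] operand
I4 = expand(Ic)    # shape (1, N, 2, 2): the [ 1xNx2x2 ] operand

# [ 1xNx2x2 ] = [ 1xNx2x2 ] x [ NxNx2x2 ]: contract over the shared N index
# while multiplying the 2x2 blocks
V4 = np.einsum('ikab,kjbc->ijac', I4, Z4)
assert np.allclose(V4, expand(Ic @ Zc))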

I would consider dedicating a mode to dealing with pairs of scalars that travel together as complex numbers.  The rules are slightly different from those for regular [ 1x2 ] vectors, but complex numbers are very important in science and engineering and (this feeds into a different conversation on this mailing list) they are the normal result of a Fourier transform.

With a mode to treat register pairs as the real and imaginary parts of complex numbers we can have double-precision complex number support for free.  Multiplication (a sketch of these decompositions follows the list below):
complex x complex = 4 scalar products, 1 negation, 2 additions
complex x real (or complex with imaginary part = 0) = 2 scalar products (this is already covered by your vector multiply with one factor a scalar)
complex x imaginary (or complex with real part = 0) = 2 scalar products, 1 negation, destination swap
complex + complex = 2 scalar additions (covered by vector addition)
complex - complex = 2 scalar negations, 2 scalar additions (covered by vector operations)
abs(complex = a + jb) = sqrt(complex x conjugate(complex))
  = sqrt((a + jb)(a - jb)) = sqrt(a^2 + b^2)
(Same as vector magnitude.)
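
A minimal sketch of those decompositions in plain Python (standing in for what a register-pair mode would issue as scalar operations; the function names are mine):

import math

def cmul(ar, ai, br, bi):
    # (ar + j*ai) x (br + j*bi): 4 scalar products, 1 negation, 2 additions
    p0, p1, p2, p3 = ar * br, ai * bi, ar * bi, ai * br
    return p0 + (-p1), p2 + p3          # (real, imaginary) register pair

def cmul_imag(ar, ai, d):
    # (ar + j*ai) x (j*d): 2 scalar products, 1 negation, destination swap
    return -(ai * d), ar * d

def cabs(ar, ai):
    # abs(ar + j*ai) = sqrt(ar^2 + ai^2), same as 2D vector magnitude
    return math.sqrt(ar * ar + ai * ai)

# cross-check against Python's built-in complex type
assert complex(*cmul(3, 4, 1, -2)) == (3 + 4j) * (1 - 2j)
assert complex(*cmul_imag(3, 4, 5)) == (3 + 4j) * 5j
assert cabs(3, 4) == abs(3 + 4j)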

argument(complex = x + jy) = arctan(y/x), with quadrant correction, i.e. atan2(y, x) (this has a meaningful value for all real values of x, y except (0,0), the same as for a 2D vector; I recommend consulting the implementation of atan2)[*]
Normally this is defined with a principal branch covering the range θ in (-π, π].

conjugate(complex) = complex* = (a + jb)* = a - jb (entails negation of imaginary part)

reciprocal(complex) = (1/abs(complex)^2) x conjugate(complex)
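
Likewise a sketch of argument, conjugate, and reciprocal on a register pair (plain Python; note the squared magnitude in the reciprocal):

import cmath
import math

def carg(x, y):
    # argument(x + j*y): atan2 handles the quadrants, principal branch (-pi, pi]
    return math.atan2(y, x)

def cconj(ar, ai):
    # conjugate(ar + j*ai) = ar - j*ai: negate the imaginary part
    return ar, -ai

def crecip(ar, ai):
    # 1/(ar + j*ai) = conjugate(ar + j*ai) / abs(ar + j*ai)^2
    m2 = ar * ar + ai * ai
    return ar / m2, -ai / m2

assert carg(3.0, 4.0) == cmath.phase(3 + 4j)
assert cconj(3.0, 4.0) == (3.0, -4.0)
assert cmath.isclose(complex(*crecip(3.0, 4.0)), 1 / (3 + 4j))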

equality (for z1 = a1 + jb1, z2 = a2 + jb2)
z1 == z2 <=> ((a1 == a2) and (b1 == b2)) (this is the same as vectors)

ordering is more complicated than for normal scalars:  Two fairly simple possible schemas come to mind and both involve the polar form (sketched below).  z1 = (r1, θ1), z2 = (r2, θ2)
r-major:  z1 < z2 <=> ((r1 < r2) or ((r1 == r2) and (θ1 < θ2)))
θ-major:  z1 < z2 <=> ((θ1 < θ2) or ((θ1 == θ2) and (r1 < r2)))
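
As sort keys (a plain-Python sketch; cmath.phase gives θ on the principal branch, and the key names are mine), the two schemas are just lexicographic tuple comparisons:

import cmath

def r_major_key(z: complex):
    # compare magnitude first, then angle
    return (abs(z), cmath.phase(z))

def theta_major_key(z: complex):
    # compare angle first, then magnitude
    return (cmath.phase(z), abs(z))

zs = [1 + 1j, -1 + 0j, 2 + 0j, 1 - 1j]
print(sorted(zs, key=r_major_key))       # ascending r-major order
print(sorted(zs, key=theta_major_key))   # ascending θ-major order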

I suppose ordering could be deemed unneeded, superfluous, or not meaningful by analogy with vectors.  On the other hand it is traditional for scalar types to admit some type of ordering.  The geometric interpretation of these two schemas is
r-major:  distance from the origin dominates and then, on rings of constant distance from the origin, direction dominates, with due West (θ=π) greatest, followed by more easterly directions through the second and then the first quadrant until due East (θ=0) is reached, then passing westward through the fourth and finally the third quadrant.
θ-major:  direction dominates as detailed above and then, for the same direction, distance from the origin dominates.

A case could also be made for a partial ordering on distance from the origin alone.  The concept is simpler and doesn’t favor a particular direction or one quadrant over another.  The issue I have with that is that it doesn’t agree with the simple equality condition given above.

I think I find the r-major ordering the most geometrically satisfying.

Reference:
[*]  https://en.wikipedia.org/wiki/Atan2



