
3D Matrix Algebra

Ginarley · Post-normalist · Joined: Jun 22, 2006 · Messages: 1,430 · Location: Palmy, NZ
This is quite possibly a misguided question but a colleague and I were talking about it yesterday and my googling has failed me.

How does matrix algebra work with higher dimensional matrices?

We understand the algebra around matrices of two dimensions (e.g. 3x3) is well established, but don't know if it's possible/meaningful to do similar calculations with a matrix of three or more dimensions (e.g. 3x3x3). For example, can you multiply two 3x3x3 matrices or find the inverse of a 3x3x3 matrix?

Anyone know anything about this?

Cheers
 
Does it even exist? How would one represent an extension of matrices from rectangular arrays to "cubic" arrays? What would the identity matrix be in 3D? You would also have two more directions to worry about in multiplication. What use would 3D matrices have that 2D ones don't?
 
Well, there are third-rank tensors, which can be kind of like "3D matrices." Matrices in the ordinary sense (i.e. an n×m array) are special, though, in that they're able to function as linear maps between vector spaces. An n×m×l array would not map a vector (a single column) to another vector, but to a matrix. (Or possibly to a fourth-rank tensor, i.e. a "4D matrix." There are both inner and outer products.)

Matrix multiplication and inversion are pretty much composition and inversion of functions, since matrices can act as functions. You can have inversion and composition of tensors, too, if you use them as maps. (The first only exists, of course, if the map is one-to-one.) It just isn't as pretty as 2D matrices, since they map columns to columns.
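A minimal sketch of the point above, in plain Python (no libraries; the function name is illustrative): contracting an n×m×l array with a vector of length l produces a matrix, not another vector, which is why a "3D matrix" isn't a linear map between vector spaces in the ordinary sense.

```python
def contract_last(T, v):
    """Sum T[i][j][k] * v[k] over k: an n x m x l array applied to a
    length-l vector yields an n x m matrix, not a vector."""
    return [[sum(T[i][j][k] * v[k] for k in range(len(v)))
             for j in range(len(T[0]))]
            for i in range(len(T))]

# A small 2 x 2 x 2 example with T[i][j][k] = i + j + k
T = [[[i + j + k for k in range(2)] for j in range(2)] for i in range(2)]
v = [1.0, 2.0]

M = contract_last(T, v)  # M is a 2 x 2 matrix
print(M)
```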
 
Formally, there are only 2D matrices, in rectangular or square form.

In my research we do use square matrices to handle 3D situations, via block matrices, but the format is still 2D: a horizontal and a vertical index.

notsure
 

Yes. Matrices generalized to higher "dimensions" are known as tensors. And what you refer to as "dimension" is known as "rank" when referring to tensors. So vectors (of any dimension) are rank-1 tensors, matrices are rank-2 tensors, etc.
 
Notation: I'm writing X→Y to mean Hom(X,Y) because I suspect the arrow notation may be a little more familiar to some readers.

As schrodingasdawg just said, the generalization from matrices to tensors of higher order is easier to understand if you regard them both as multilinear functions.

A matrix m can be regarded as a multilinear function m:V1xV2→F that takes two vectors and returns a scalar from the underlying field F (typically the real or complex numbers). It can also be regarded (via lambda abstraction) as a map of type V1→(V2→F) that takes one vector and returns an element of the vector space V2→F, which is the dual of V2. (Each Vi is finite-dimensional here, so V→F is isomorphic to V, which is why it's okay to think of m as a mapping from V1 to V2.)
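A hedged sketch of those two views of a matrix m, in plain Python (the function names are illustrative, not from any library): first as a bilinear map taking two vectors to a scalar, then, by lambda abstraction (currying), as a map that takes one vector now and the other later.

```python
def bilinear(m, u, v):
    """m viewed as V1 x V2 -> F: sum_i sum_j u[i] * m[i][j] * v[j]."""
    return sum(u[i] * m[i][j] * v[j]
               for i in range(len(u)) for j in range(len(v)))

def curried(m):
    """m viewed as V1 -> (V2 -> F): feed u first, v later."""
    def take_u(u):
        def take_v(v):
            return bilinear(m, u, v)
        return take_v
    return take_u

m = [[1, 2], [3, 4]]
u, v = [1, 0], [0, 1]

print(bilinear(m, u, v))   # picks out the entry m[0][1]
print(curried(m)(u)(v))    # the same scalar, one argument at a time
```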

Similarly, a tensor T of order 3 can be regarded as a multilinear function T:V1xV2xV3→F that takes three vectors and returns a scalar. From T we can obtain (via lambda abstraction) a map of type V1xV2→(V3→F) that takes two vectors and returns an element of the vector space V3→F, which is the dual of V3.

There are lots of possibilities here. By changing the pattern of lambda abstraction, we can obtain maps of the following types:

V1xV2→(V3→F)
V2xV1→(V3→F)
V1xV3→(V2→F)
V3xV1→(V2→F)
V2xV3→(V1→F)
V3xV2→(V1→F)

V1→(V2→(V3→F))
V2→(V1→(V3→F))
V1→(V3→(V2→F))
V3→(V1→(V2→F))
V2→(V3→(V1→F))
V3→(V2→(V1→F))

V1→(V2xV3→F)
V2→(V1xV3→F)
V1→(V3xV2→F)
V3→(V1xV2→F)
V2→(V3xV1→F)
V3→(V2xV1→F)

If the vector spaces are finite-dimensional, then every one of those types is isomorphic to the original type of T: V1xV2xV3→F. Thus a single tensor T can be used in lots of different ways, which is a source of power as well as confusion.
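A small check of that claim, in plain Python with illustrative names: two of the lambda-abstraction patterns listed above, applied to the same order-3 tensor T, reproduce the same scalar T(v1, v2, v3) as the direct call.

```python
def T(v1, v2, v3):
    """An order-3 tensor as a multilinear map V1 x V2 x V3 -> F,
    with (arbitrary) components i + 2*j + 3*k."""
    return sum(v1[i] * v2[j] * v3[k] * (i + 2*j + 3*k)
               for i in range(len(v1))
               for j in range(len(v2))
               for k in range(len(v3)))

# Two of the abstraction patterns above:
curry_12_3 = lambda v1, v2: (lambda v3: T(v1, v2, v3))            # V1 x V2 -> (V3 -> F)
curry_3_21 = lambda v3: (lambda v2: (lambda v1: T(v1, v2, v3)))   # V3 -> (V2 -> (V1 -> F))

v1, v2, v3 = [1, 2], [3, 4], [5, 6]
direct = T(v1, v2, v3)
print(direct,
      curry_12_3(v1, v2)(v3) == direct,
      curry_3_21(v3)(v2)(v1) == direct)
```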
 
Oops, got caught out over the weekend and didn't get a chance to check the thread.

Thanks for the useful responses. I'd never heard of tensors before and it looks like they are as close as it gets to what I may be after. Having said that, I am kinda glad I hadn't, as it mostly goes flying over my head! I shall do some reading and see what I can figure out.

Thanks again.
 
