There's a fascinating bit of history here: of the four founding fathers of modern multi-dimensional mathematics, three preferred geometric algebra, and only one preferred vector and matrix algebra. The three all left academia or died young; the one dissenter lived to a ripe old age and was a vocal proponent of his preferred approach -- because it's more useful for statistics. He wasn't a physicist.
All of these weird "triple products", limitations to 3D space, etc... vanish in geometric algebra. You can use the exact same formulas in 2D space, 3D space, 4D spacetime, or 18 dimensional spaces with degenerate dimensions if you please.
There's this obstinate refusal to just admit that the maths that's ideally suited to solving statistical problems may not be ideal for physics, robotics, or optics. Instead, physicists insist on re-inventing the good stuff over and over, badly, with different names, each time.
All of the following are just geometric algebra in disguise, or various "subsets" of a geometric algebra, or a geometric algebra operation that got renamed:
As a random example: the way we represent 3D rotations using a vector in 3D doesn't work in 2D, because a 2D rotation vector would point out of the plane. It also doesn't work in 4D or higher dimensions. It just happens to "work" in 3D not because 3D is natural, preferred, or special, but because of a simple coincidence: only in 3D does the number of rotation planes equal the number of axes, so a rotation plane can be faked with the vector normal to it.
In geometric algebra, instead of using a vector, a bivector is used, which is like a surface patch. Notice that you can have a surface in 2D, so rotations in GA work in 2D. You can also have a surface in 3D, so rotations work in 3D. And in 4D (including both 4D space and 3+1D space-time!), and 5D... and all the rest, with the same formula!
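For concreteness, the dimension-independent recipe is the standard rotor sandwich from textbook GA (stated here as a sketch, not anything specific to this thread):

$$ R = \exp\!\Big(-\tfrac{\theta}{2}\,B\Big) = \cos\tfrac{\theta}{2} - B\,\sin\tfrac{\theta}{2}, \qquad v' = R\,v\,\tilde{R}, $$

where $B$ is the unit bivector of the rotation plane (so $B^2 = -1$) and $\tilde{R}$ is the reverse of $R$. Nothing in the formula mentions the ambient dimension; only the plane $B$ does.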
PS: Geometric algebra also prevents gimbal lock, doesn't store redundant values, and has better numerical precision. Its transformations can be interpolated, unlike (famously!) matrix transformations. This is why in robotics and computer graphics everyone uses quaternions instead, which are... drumroll... the "even subset" of a geometric algebra.
Whenever this kind of topic arises there is always someone who has worked a bit on 3D geometry and shows up to school mathematicians and physicists on what they are missing out on by not using geometric algebra. Usually with a weird tinge of conspiracy theory and cultish behavior mixed in.
They are not missing out; they are aware of geometric algebras and the work of David Hestenes. They simply prefer to use exterior algebra and Clifford algebras as the generalization. There is no conspiracy or widespread ignorance in the physics/mathematics community. Geometric algebra feels like introducing a quirky notation to talk about the same concepts, with no clear added conceptual or computational benefit. So most people ignore this approach.
Wait till I start telling you about how we should be using tau as a replacement for 2 pi!
In all seriousness, we wouldn't accept this kind of thing in the standard library of a programming language.
Imagine if due to a historical mistake, all arrays reported their length as 2x the number of items in them, and indices would similarly be multiplied by two. So the first item would be a[0], and then a[2], a[4], etc...
Now imagine the entire software development world making excuses and justifications. "Oh, it's isomorphic, and shift-left-by-one is cheap!", or "sure, you can only use half of the system memory for any single array, but do you really need more than 2GB of data? 64-bit is common now anyway!"
Physics is in dire need of refactoring, but it has never had one. When I studied it, I didn't expect it to be the "history of physics". I expected to be studying the current state of the art, not the three-hundred-year journey of comedic errors getting there.
Imagine starting Chemistry by studying alchemy for years and only then being told -- oh -- there's these things called atoms!
Or trying to understand Linux by reading the commit history from day one.
“Symmetrical equations are good in their place, but ‘vector’ is a useless survival, or offshoot from quaternions, and has never been of the slightest use to any creature.”
— William Thomson, 1st Baron Kelvin (better known as Lord Kelvin), letter to G. F. FitzGerald (1896), as quoted in A History of Vector Analysis: The Evolution of the Idea of a Vectorial System (1994) by Michael J. Crowe, p. 120
…and in turn “geometric algebra” is the physicist’s name for a special case of the mathematicians’ even more general Clifford algebra (which of course is a quotient of the tensor algebra over the underlying vector space…)
Clifford Algebra is basically just Geometric Algebra, except with complex numbers shoved into where there ought to be real numbers... because no Mathematician could ever resist doing so.
As a programmer / physicist, the analogy I use is that if I had to represent an 8-dimensional parameter as an array, I would write:
double foo[8];
but a mathematician would be unable to resist writing:
double complex foo[4];  /* C99, <complex.h> */
They're the same, but the latter is more complex (hah!) for no real (haha!) benefit.
> They're the same, but the latter is more complex (hah!) for no real (haha!) benefit.
When you consider problems in real geometry, this is plausible. But on the other hand, not all mathematics is real geometry. For some applications in mathematics, the complex-geometric perspective is more natural.
P.S. Of course p-adic (and in particular 2-adic) geometry is even cooler than anything real or complex ... ;-)
It's type confusion caused by duck-typing. Just because you can add, subtract, multiply, and divide complex numbers doesn't mean they're a natural substitute for real numbers in all scenarios. Sometimes you have '2n' real numbers, not 'n' complex numbers.
If mathematicians designed computer software, their data structures would look like:
This so-called "scalar" triple product is not scalar.
It is a pseudo-scalar, which means that its value depends on the system of coordinates that is used. More precisely, its sign changes between right-handed and left-handed coordinate systems.
In physics it is very important to understand the differences between the different kinds of quantities, e.g. scalars, vectors a.k.a. polar vectors, pseudo-vectors a.k.a. axial vectors, pseudo-scalars, symmetric tensors and so on, otherwise it is easy to make mistakes.
This description always bothered me. You have some space (like the actual space you’re in), and you have three vectors (which could be very real, e.g. the velocities of three balls, labeled A, B, and C), and you calculate this particular function of the vectors (velocities), and you get a number, say 5.
And then your professor tells you that 5 is a pseudoscalar, and if you change coordinates, it’s actually -5. And you boggle, because there weren’t any coordinates to begin with — there were just actual balls, and either the answer is 5 or -5 or ±5 (like sqrt(25), for example), but it’s weird for the answer to be “5 or -5 depending on coordinates”. And then people struggle on the tests, because they didn’t get it, because the material was taught poorly.
What’s actually happening here is that you need an orientation to calculate the scalar triple product. If you have three moving balls and a right hand or a right shoe or something similar, you get 5. If you look in a mirror and calculate the triple product of the mirrored balls’ velocities, you get -5 and you see a left hand and a left shoe. But if you watch yourself in the mirror calculating the triple product from the balls, the mirror copy of yourself gets 5 correctly, because mirrored right shoes fit on mirrored right feet, and mirror-you is looking at mirror-balls, and everything is consistent.
So a pseudoscalar is a quantity that gets negated in a mirror if the person calculating it isn’t also in a mirror. No coordinate systems needed.
As a simple example, hold your hand in your favorite right hand rule gesture. Your thumb, index finger, and middle finger point along vectors, and their triple product will be positive for one hand and negative for the other hand. Now look in a mirror!
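To make the mirror picture concrete, here's a minimal C sketch (the `triple` helper and the test vectors are mine, purely for illustration) that computes the scalar triple product, then mirrors all three vectors across the yz-plane and shows the sign flip:

    #include <stdio.h>

    /* Scalar triple product u . (v x w), written out component-wise. */
    static double triple(const double u[3], const double v[3], const double w[3]) {
        return u[0] * (v[1] * w[2] - v[2] * w[1])
             + u[1] * (v[2] * w[0] - v[0] * w[2])
             + u[2] * (v[0] * w[1] - v[1] * w[0]);
    }

    int main(void) {
        double u[3] = {1, 0, 0}, v[3] = {0, 1, 0}, w[3] = {0, 0, 1};
        /* Mirror across the yz-plane: negate the x component of every vector. */
        double mu[3] = {-u[0], u[1], u[2]};
        double mv[3] = {-v[0], v[1], v[2]};
        double mw[3] = {-w[0], w[1], w[2]};
        printf("original: %+.1f  mirrored: %+.1f\n",
               triple(u, v, w), triple(mu, mv, mw));
        /* Prints "original: +1.0  mirrored: -1.0": the pseudoscalar flips sign
         * under a reflection that isn't also applied to the person computing it. */
        return 0;
    }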
Alright, I got nerd sniped halfway through when he said:
> Scalar triple products only exist in 3D, because they involve cross products which only exist in 3D (please keep the “they exist in 7D too” comments for some other time).
There isn't a reference, and googling "higher dimensional cross product" seems to return results generalising to any number of dimensions.
Can anyone explain what he's talking about or provide references? Off the top of my head I can't see why you can't generalise cross products to D > 3. Or if you can't, I don't see what's special about dimensions 3 and 7...
Ok... I'll leave the comment to stand but apparently I just needed to scroll a tiny bit more and get an answer here[0].
> The seven-dimensional cross product is one way of generalizing the cross product to other than three dimensions, and it is the only other bilinear product of two vectors that is vector-valued, orthogonal, and has the same magnitude as in the 3D case.[2] In other dimensions there are vector-valued products of three or more vectors that satisfy these conditions, and binary products with bivector results.
The simplest counterexample is just to consider vectors in 2D. It is not possible in general to construct a third vector perpendicular to two other vectors unless those other vectors are collinear or anticollinear. The notion of an exterior product, represented as a signed area subtended by the two vector arguments of the operator, sits nicely in 2-space, however. In general, signed areas and M-dimensional signed volumes are embeddable in N-dimensional spaces for M < N. The notion of a signed volume is directly tied to both the exterior product and the determinant. In higher dimensions, the cross product is equally unhelpful, given that for two vectors, the set of vectors orthogonal to both (even after fixing a right-hand-rule convention) is infinite.
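To spell out the 2D case described above: the exterior product of two plane vectors is just the signed area (the standard definition, written here for concreteness):

$$ u \wedge v = (u_1 v_2 - u_2 v_1)\, e_1 \wedge e_2 $$

The scalar coefficient is the signed area of the parallelogram spanned by $u$ and $v$; no third, perpendicular vector is ever needed, which is why this works in 2D (and in every other dimension) where the cross product doesn't.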
Somehow, these topics often seem to devolve into discussions of how cross products are evil and (insert alternative) is better or into math full of jargon that’s incomprehensible without studying the math in question. [0]
I’ll try to say something intelligent about this identity. This comes from a class I took on differential geometry, but I’ll leave out the differential geometry :)
First, u, v, w, etc are n-dimensional vectors [1]. n = 3 here, and this is important later.
The object [u v w] is a function of three vectors. You feed it three vectors and it spits out a number. It has two interesting properties:
1. It’s multilinear. This means that [u+v w x] = [u w x] + [v w x] and [a·u w x] = a·[u w x]. The same holds for the other two parameters. This just means that, if you fix all but one parameter, you’re left with a linear function of the parameter you didn’t fix.
2. It’s totally antisymmetric. If you swap any two parameters, you negate the result. So [u v w] = -[u w v] = [v w u]. Play with this — it’s fun. With three parameters, the cyclic permutations are positive. With any number of parameters, the even permutations are positive.
These properties are pretty easy to prove from the definition of the triple product.
Now on to the messy function in the article:
f(u,v,w,x) = u[v w x] – v[w x u] + w[x u v] – x[u v w]
f is multilinear: each term is fairly trivially multilinear, and the sum of multilinear functions is multilinear.
f is antisymmetric. Try it — swapping any two arguments negates it! [2]
Now for the fun: In n dimensions, if you have a k-parameter antisymmetric multilinear function, then, if k>n, your function is always zero!
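The reason is that any k > n vectors in n dimensions are linearly dependent, so one argument is a combination of the others, and multilinearity plus antisymmetry then force the value to zero. If you'd rather just see it numerically, here's a small C sketch (the `triple` helper and the arbitrary test vectors are mine) that evaluates the article's f(u,v,w,x) component by component for 3D inputs:

    #include <stdio.h>
    #include <stdlib.h>

    /* Scalar triple product [u v w] = u . (v x w). */
    static double triple(const double u[3], const double v[3], const double w[3]) {
        return u[0] * (v[1] * w[2] - v[2] * w[1])
             + u[1] * (v[2] * w[0] - v[0] * w[2])
             + u[2] * (v[0] * w[1] - v[1] * w[0]);
    }

    int main(void) {
        double u[3], v[3], w[3], x[3];
        for (int i = 0; i < 3; i++) {          /* four arbitrary vectors */
            u[i] = rand() / (double)RAND_MAX;
            v[i] = rand() / (double)RAND_MAX;
            w[i] = rand() / (double)RAND_MAX;
            x[i] = rand() / (double)RAND_MAX;
        }
        /* f(u,v,w,x) = u[v w x] - v[w x u] + w[x u v] - x[u v w], per component. */
        for (int i = 0; i < 3; i++) {
            double fi = u[i] * triple(v, w, x) - v[i] * triple(w, x, u)
                      + w[i] * triple(x, u, v) - x[i] * triple(u, v, w);
            printf("f[%d] = %g\n", i, fi);     /* ~0 up to rounding error */
        }
        return 0;
    }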
Lots of other cases are interesting, too:
k=1: these are just linear functions of vectors. One might call them “dual vectors” or “covectors” or whatever. If you are a bit sloppy, you can think of them as vectors, and they are vectors, but they are not the same type of vector as the type of their input. If you *cough* erase that type difference, you end up in the rabbit hole that leads to cross products making sense in 3D but not otherwise.
k=n: This is a “volume element”. In high school calculus, x·dx or f(x,y)·dx·dy are things that live inside an integral and you aren’t supposed to do too much in the way of removing the integral sign. In 1D, x·dx is a function mapping x (a point in space) to x·dx, a volume element. If you have a mapping from points to volume elements, you can integrate it and get a scalar! In Euclidean space, you can get away with integrating something like a volume element (e.g. dx·dy) and ignoring the point part. In non-Euclidean space, a vector here and a vector there are different things — a vector (say, the direction an ant is crawling if it’s at a certain spot on a balloon) is not a vector elsewhere — an ant crawling that direction elsewhere on the balloon would be crawling off the surface!
k=0: scalar. It’s a scalar-valued function of zero parameters. That would be a scalar.
Other goodies from high school geometry and calculus are hiding in here, too, with appropriate k and n.
[0] For example, “Algebra” means something to most people who went to high school. It means something rather different in fancy math. I don’t know the history of how this came to be.
[1] If you are working with a non-Euclidean manifold, then they’re all vectors originating at the same point. Yes, this all generalizes to any differentiable manifold.
[2] If you feel fancy, the “mess” is the exterior product of the identity on one vector with the scalar triple product, times plus or minus 1 (I didn’t check). And wow, the Wikipedia article on the exterior product is a mess. You can read several pages, try to remember a bunch of math you haven’t used in a while, and still have no idea how to compute the thing. It’s really not that bad once you get past the jargon.
> don't handle high dimensional (even 32D, fairly primitive by ML standards) spaces.
You're confusing two different notions of dimensionality here. The Geometric Algebra of a 32D space is actually a 2^32 (roughly 4.3 billion) dimensional vector space, since there are C(32,k) basis blades of each grade k and those binomial coefficients sum to 2^32. So it's like talking about vectors in 2^32 dimensions, or the space of 65536x65536 matrices.
I had a problem where I wanted to rotate a high dimensional vector. This was ages ago, so I don't remember the exact terminology. What I ended up doing is writing out the geometric algebra equations by hand (with only the first order bivectors IIRC) and coding that up.
Having an option in the library to not have all the higher order bivectors if I don't need higher order operations would be nice.
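For anyone hitting the same problem: the "only first-order bivectors" case boils down to rotating in a single plane, and you can write that out with plain dot products, no GA library needed. A minimal sketch (the dimension N, the function names, and the assumption that the plane is given by two orthonormal vectors e1, e2 are all mine):

    #include <math.h>
    #include <stdio.h>

    #define N 8   /* any dimension works the same way */

    static double dot(const double a[N], const double b[N]) {
        double s = 0;
        for (int i = 0; i < N; i++) s += a[i] * b[i];
        return s;
    }

    /* Rotate v in place by angle theta in the plane spanned by the orthonormal
     * vectors e1 and e2. This is the rotor exp(-theta/2 * e1^e2) acting on v,
     * written out in plain vector form (the "simple bivector" case). */
    static void rotate_in_plane(double v[N], const double e1[N], const double e2[N],
                                double theta) {
        double a = dot(v, e1), b = dot(v, e2);
        double c = cos(theta) - 1.0, s = sin(theta);
        for (int i = 0; i < N; i++)
            v[i] += c * (a * e1[i] + b * e2[i]) + s * (a * e2[i] - b * e1[i]);
    }

    int main(void) {
        double v[N]  = {1, 2, 3, 4, 5, 6, 7, 8};
        double e1[N] = {1, 0, 0, 0, 0, 0, 0, 0};
        double e2[N] = {0, 1, 0, 0, 0, 0, 0, 0};
        rotate_in_plane(v, e1, e2, acos(-1.0) / 2);   /* 90 degrees in the e1-e2 plane */
        for (int i = 0; i < N; i++) printf("%g ", v[i]);
        printf("\n");                                 /* expected: -2 1 3 4 5 6 7 8 */
        return 0;
    }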