Tom Veatch

Flames of inspiration often leave smoke signals behind.
From mine, these.


Math

A Parallel Fourier Transform

Wow. This can't be true: an O(10)-time algorithm for a 1024-point Fourier transform.

Let me set the stage. Now, Gauss may be the greatest mathematician in history, and I've always considered Tukey the greatest algorithm-making mind in history. How in the world could you wrap your mind around the FFT?! They had the same idea (Gauss 1805, Tukey 1965), which we call the Fast Fourier Transform. The FFT is the most famous algorithm in history; it is almost incomprehensible just to follow, so how much more so it must have been to invent. And the fastest FFT has always hit a hard limit of O(N*log(N)) time to compute, with N the number of samples in the input.

The conclusion? My self-imposed weekend Divacon homework exercise yielded a 12-symbol expression which computes a Parallel Fourier Transform in O(log(N)) time, ignoring communication: 1024x faster than Gauss and Tukey on 1024-sample data.
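As a back-of-envelope check on that factor (my arithmetic, ignoring constants and all communication costs):

\[
\frac{N \log_2 N}{\log_2 N} = N = 1024 \qquad \text{for } N = 2^{10},
\]

so trading the sequential O(N*log(N)) work for \(\log_2 N = 10\) parallel steps is exactly a 1024x difference on 1024-sample data.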

The story: my friend George Mou's 1990 PhD thesis was about how to think about parallel algorithms. It offered us a new, orthogonal structure for analyzing computation, a formalism for expressing algorithms, and a programming construct that imposes a particular way of thinking onto your problem: an extremely general parallel decomposition structure which yields world-beating results. If you can develop this skill, you can conquer the world.

George is what I'd call a world-historical genius, and I'm amazed to know him and lucky to be his friend. So I thought I'd document my education in Divacon so far, and maybe offer up some bits of tutorial explanation of it, as well as of the calculations of, and the intuitions about, the Fourier transform, which I've worked on previously.

It was supposed to be a simple, outcome-uncertain homework exercise. George had told me he had a three-line FFT written in Divacon, which presumably, like all the others, took him a half hour to write 30 years ago; but it wasn't in his thesis or published anywhere I could find, so I figured I would have to do honest homework to do an FFT in it. I figured that if I matched what he said he had done, I would surely know Divacon better, and believe me, I'm impressed with the power of what that means. It was a Divacon exercise, not so much FFT algorithm research.

So I puzzled on it for a few days, thinking about the geometry of the Fourier transform equation elements, going back and forth doing n'th partial re-readings of George's thesis, and finally just reading the Numerical Recipes section on the FFT, which used a 1942 version of the math. Then it was like plug and play: pick out the divide and the combine from the usual suspects (the ones that matched); use the regular base predicate everyone always uses for everything; then discover that the base function was also the standard one, which was a big surprise, but I'm okay with that. Then, just looking at the equation to try to see the difference between one layer and the next, I could write the postadjustment function as a plain and simple old multiply-add, and by then it was essentially done.
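For orientation, here is a minimal sketch of that same structure in ordinary sequential Python (my toy code, not Divacon notation and certainly not George's three-liner): the even/odd divide, the trivial base predicate and base function, and the twiddle-factor multiply-add that adjusts one layer into the next.

    import cmath

    def fft(x):
        """Recursive DFT of x; len(x) must be a power of 2."""
        n = len(x)
        if n == 1:                       # base predicate: a single sample
            return list(x)               # base function: identity
        evens = fft(x[0::2])             # divide: even-indexed half
        odds  = fft(x[1::2])             # divide: odd-indexed half
        out = [0j] * n
        for k in range(n // 2):          # combine: the multiply-add "butterfly"
            w = cmath.exp(-2j * cmath.pi * k / n)
            out[k]          = evens[k] + w * odds[k]
            out[k + n // 2] = evens[k] - w * odds[k]
        return out

Written sequentially like this it is the usual O(N*log(N)); the point of the Divacon expression is that within each of the log2(N) layers every one of those multiply-adds is independent, so with a processor per sample each layer costs O(1) and only the layer count remains.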

A one-line expression came out of this, and its time of computation is \(PFT \sim O(\log(N))\).

Interested yet? I hope if you are a computer scientist, or even a mere programmer, you will find this super motivating and empowering to read about and then to learn yourself. Divacon is a big deal, and today it is stupid what a big secret it is.

Bayes

The math of learning.

Mean Thoughts

Thoughts about the Pythagorean means: the arithmetic mean (the everyday average), the geometric mean, and the harmonic mean. This is surprisingly deep for so simple a topic. Understanding at least 2/3 of it is part of your path to wealth and meaningfulness.
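For reference (my notation, not necessarily the essay's), the three Pythagorean means of positive numbers \(a\) and \(b\), and the order they always fall in:

\[
A = \frac{a+b}{2}, \qquad G = \sqrt{ab}, \qquad H = \frac{2ab}{a+b}, \qquad H \le G \le A,
\]

with the neat fact that \(G = \sqrt{AH}\): the geometric mean is itself the geometric mean of the other two.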

Negative Dimensionality

Abstracting geometry.

Hilbert's axiomatization of geometry, full of redundancy, led me to a generalization which makes geometric dimensionality a characteristic that can be counted up (point to line to plane to space, etc.) and down (space to plane to line to point, etc.). Geometries, by intersecting, create lower-dimensional geometries; for example, two intersecting 2D planes create a 1D line. Geometries, by projecting, create higher-dimensional geometries; for example, two 0D points project a 1D line.

But if there is no upper limit, perhaps there is also no lower limit. The idea that a geometry might have negative dimensionality seems absurd, considered within the assumptions of spatial thinking, yet it derives from the same less-redundant axiom set as the geometries we understand. Suggestions for intuition and use of this idea are also given.
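One way to see where the counting points (my illustration, not necessarily the essay's derivation): for geometries in general position, projecting (join) and intersecting (meet) trade dimension as

\[
\dim(A \vee B) = \dim(A) + \dim(B) - \dim(A \wedge B),
\]

so two distinct points projecting a line gives \(1 = 0 + 0 - \dim(A \wedge B)\), which forces their empty intersection to carry dimension \(-1\): a first step below zero.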

A More General Theory of the Syllogism

Abstracting logic. Aristotle's list of syllogisms missed half of them; there's nothing to them (ha!); and we can do better without them.

Still, it is pretty fun and cool, considering this was the intellectual pinnacle of humanity for 2000 years. I'd also say this is not a bad introduction to "term logic", and it might be suggested reading for students of computer science, philosophy, classics, and/or math.

Bliss Theory: Emotion in General

On a mathematical representation of emotion, with decomposed functions including Identification, which turns out to have a central role.

Math as Language

Underlying intuition reads out as discrete expression.

Math Tutor

Learn your times table.

Neural Networks + Fuzzy Logic + Space

An attempt at a careful, accessible introduction to neural networks, assuming only high school algebra and a little geometry and differentiation. NNs are defined mathematically, along with how to run them, how to train them (by the usual gradient descent), and how to train them better (using Newton-Raphson, which I suppose really ought to kill!). I also discuss how to understand the training algorithm's implicit reasoning about the adjustments it decides to make; I share an interpretation that backpropagation is like an Anti-Dunning-Kruger learning system (and therefore morally superior to most men?). Then I give a whole Fuzzy Logic re-interpretation of NNs, along with suggestions on how to enhance their logical reasoning capabilities.

I tried the Wikipedia page and got so frustrated that I wrote my own introduction. So yes, I suggest reading this if you want to really understand neural networks, and if your other resources have made them seem inscrutable. It's a few pages of actual math, yes, but all the steps are laid out: no leaps! It's not short, but you don't have to be a math major to follow along. I encourage your study here if you are interested in really knowing how neural nets work.
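To make the "define, run, train" part concrete, here is a minimal sketch (my toy example with made-up layer sizes, not the essay's code): one hidden layer of sigmoid units learning XOR by plain gradient descent, with the backpropagated error signals written out layer by layer.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden weights
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output weights
    lr = 1.0                                         # learning rate

    for step in range(5000):
        # forward pass: run the network
        h = sigmoid(X @ W1 + b1)            # hidden activations
        p = sigmoid(h @ W2 + b2)            # predictions
        # backward pass: chain rule, one layer at a time (backpropagation)
        d_p = (p - y) * p * (1 - p)         # error signal at the output
        d_h = (d_p @ W2.T) * h * (1 - h)    # error signal at the hidden layer
        # gradient-descent updates
        W2 -= lr * (h.T @ d_p); b2 -= lr * d_p.sum(axis=0)
        W1 -= lr * (X.T @ d_h); b1 -= lr * d_h.sum(axis=0)

    print(np.round(p.ravel(), 2))           # typically approaches [0, 1, 1, 0]

The only calculus in it is the squared-error gradient and the sigmoid derivative p*(1-p); everything else is the chain rule applied layer by layer, which is all that "backpropagation" names.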

It also adds Fuzzy Logic to neural networks, including how to train them. Finally, it goes into Space-Representing Neural Networks, so that robots can represent space, or so that humans' representation of space can be understood better. Three months of work is in here.



Copyright © 2000-2021, Thomas C. Veatch. All rights reserved.
Modified: 12/20/2021