Twist And Bend

a time domain representation of space



Twist-and-Bend encodings and transformations.

Let \(\hat u\) be a unit vector in 3D imaginary space anchored on its origin \(\hat u = i\ u_1 + j\ u_2 + k\ u_3 \) where \( u_1^2+u_2^2+u_3^2=1, i^2=j^2=k^2=-1, ij=k,jk=i,ki=j\). Then \(\hat u\) is a Bend applied to a Twist applied to the forward direction vector \(i\).

Starting from the origin and given an orientation frame \(i,j,k\), we consider \(i\) to be forward, \(j\) to be up, and \(k\) to be rightward. Now we operate on the frame itself so that the forward vector points in any direction we might like, as follows.

Let \(T(v,\alpha)\) twist \(v\) around \(i\) in the frame \(i,j,k\) by angle \(\alpha\), and \(B(v,\alpha)\) bend \(v\) around \(k'\) in the frame \(i',j',k'\) by angle \(\alpha\). First, we "twist" the frame, rotating it so that "Up" \((j)\) (and \(k\) too) twists or rotates around \(i\) by \(\theta\) to \(j'\) and \(k'\). That is,

\(i'\) = T\((i,\theta) = i\), and
\(j'\) = T\((j,\theta) = j\ cos(\theta) - k\ sin(\theta)\), and
\(k'\) = T\((k,\theta) = j\ sin(\theta)+k\ cos(\theta)\).

Second, we "bend" our resulting frame \((i', j', k')\), rotating the "Forward" \((i)\) vector in the twisted frame, that is, within the \(i',j'\) plane, by angle \(\beta_u\). Can you visualize it? \(k'\) remains unchanged, while \(i', j'\) rotate around \(k'\):
\( \begin{align*} i'' &= B(i',\beta_u) = i'\ cos(\beta_u) + j'\ sin(\beta_u) &&= i\ cos(\beta_u) + j\ cos(\theta)sin(\beta_u) - k\ sin(\theta)sin(\beta_u) \\ j'' &= B(j',\beta_u) = -i'\ sin(\beta_u) + j'\ cos(\beta_u) &&= -i\ sin(\beta_u) + j\ cos(\theta)cos(\beta_u) - k\ sin(\theta)cos(\beta_u) \\ k'' &= B(k',\beta_u) = k' &&= j\ sin(\theta) + k\ cos(\theta) \\ \end{align*} \).

The result is \(\hat u\), which can point in any direction, as a function of \(\theta,\beta_u\).
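
For readers who want to check the composition numerically, here is a minimal sketch (the function name and the tuple representation are mine, purely for illustration), storing the \(i,j,k\) components of a direction as a plain Python tuple:

    from math import cos, sin, isclose, pi

    def twist_bend(theta, beta):
        """Forward direction after a twist by theta around i and a bend by beta:
        u = i cos(beta) + j cos(theta) sin(beta) - k sin(theta) sin(beta),
        returned in (i, j, k) component order."""
        return (cos(beta),
                cos(theta) * sin(beta),
                -sin(theta) * sin(beta))

    # A zero bend leaves us looking forward, whatever the twist:
    assert twist_bend(0.7, 0.0) == (1.0, 0.0, 0.0)

    # The result is always a unit vector, so (theta, beta) can reach any direction:
    for theta, beta in [(0.0, 0.5), (pi/4, 1.0), (1.3, -2.0)]:
        u = twist_bend(theta, beta)
        assert isclose(sum(c * c for c in u), 1.0)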

By composing Twist with Bend, we can specify any direction out from a center, assuming it has "forward" and "up" directions to anchor from.

Further, given a distance \(d>0\), we can encode any location in 3D imaginary space as \(d\hat u\).

The insight merging Twist-and-Bend with quaternions is to apply a second bend operation, \(\beta_q\), which is the quaternion rotation angle. While the axis \(\hat u\) of a quaternion rotation is itself derived from a twist and a bend applied to an original orientation frame, a unit quaternion in general represents, and applies, a second rotation around that axis \(\hat u\). The angle of this second bend, \(\beta_q\), is encoded in the unit quaternion through the scalar component \(cos(\beta_q)\) and the factor \(sin(\beta_q)\) which weights the vector component \(\hat u\). (In the usual convention, applying \(q\) as \(qvq^*\) rotates \(v\) by \(2\beta_q\) around \(\hat u\); that is why the half-angle \(\frac{\beta}{2}\) appears when we build rotation quaternions below.) Thus:

\(\begin{align*} q &=cos(\beta_q)+sin(\beta_q)*\hat u \\ &=cos(\beta_q)+sin(\beta_q)*(i\ \hat u_1 + j\ \hat u_2 + k\ \hat u_3) \\ \end{align*} \)
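
As a quick added check, \(q\) is a unit quaternion because \(\hat u\) is a unit vector:

\( |q|^2 = cos^2(\beta_q) + sin^2(\beta_q)\,(\hat u_1^2 + \hat u_2^2 + \hat u_3^2) = cos^2(\beta_q) + sin^2(\beta_q) = 1 \)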

Another useful insight is that the twist of Twist-and-Bend is also a twist around an axis, which can be interpreted in the quaternion space as identifying the "forward", "x", "i", or parent-to-daughter-node link axis as the rotation axis \(\hat u_\theta\), and the twist angle as \(\theta\). We will use these insights after setting the context by anchoring the orientation frame at the root of the tree and then initializing orientation and length parameters as we descend from root to leaf across the tree.

Orientation transforms to back up from periphery to center

To represent and merge sensory inputs into a joint scene view, we describe a physical, tree-structured set of orientation transforms. From these assumptions we will be able to derive parent perceptual signals agglomerated from daughters on the sensory-information-transmitting perception tree.

Conversely, in the parallel, similarly structured, but downward-transmitting nodes of an actuation tree, we will be able to filter parent signal agglomerations and send to the various daughters specifically those signals wanted by each, thus deriving daughter control signals from spatial filters applied to the parent signal.

Let \(\beta\) (for "bend") and \(\theta\) (for "twist") be conceived as operating as follows. First, we are given a "camera focus" direction, which is the direction between a linked pair of nodes in physical space, the direction \(i = \vec x\) facing toward the position of one node from the position of the other. Second, we are given a "camera up" direction \(j=\vec y\), \(\vec y\perp \vec x\), on the link. We refer to these as basis vectors \(i,j\), with \(k=\vec z\perp \vec x\) and \(\vec z\perp \vec y\), \(\vec z\) pointing right (left-handed forefinger: x, thumb up: y, bent middle finger pointing right: z).

That's our geometrical frame template, as context.

Next, let's start our calculations, using dual quaternion ideas. First, we set \(u = \vec z\). \(u'\) will be the axis around which the "Bend" of "Twist-and-Bend" bends. It starts horizontal, that is, on \(z\), to camera right, but it can rotate by our Twist angle up or down as much as \(\frac{\pi}{2}\) before overlapping with the alternative path with negative \(\beta,\theta\).

Then we rotate \(u\) to \(u'\) around the vector \(x\) by an angle \(\theta\), a.k.a. "twist", positive meaning up, negative down. \(\theta\) rotates the plane within which the bend will occur from the \(i=x,j=y\) plane (defined by its normal \(k=z\)), that is, from the vertical plane containing the \(i=x\) axis, to the left or right by angle \(\theta\). Now calculate:

\(u' = 0i + j\ sin(\theta) + k\ cos(\theta)\).

\(u'' = 0i + j\ cos(\theta) - k\ sin(\theta)\).

\(u'\), formerly \(z\) or \(k\), having itself been rotated by \(\theta\) (twisted around the \(x=i\) axis), is now going to be used as the rotation axis around which to apply the Bend of Twist-and-Bend. (Along with \(i=x\), \(u''\) lies in the plane within which the Bend will be bending.)
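
As a quick added check that the twist leaves us with a legitimate frame, \(u'\) and \(u''\) are unit length, perpendicular to each other, and perpendicular to \(i\):

\( u' \cdot u'' = sin(\theta)cos(\theta) - cos(\theta)sin(\theta) = 0, \qquad |u'|^2 = |u''|^2 = sin^2(\theta) + cos^2(\theta) = 1 \)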

So next, we add a bend to our vector.

As every point on a circle could be considered its beginning, the zero value for \(\beta\) could in principle be anchored anywhere. However, I would like our reference bend angle of zero to result in the straight ahead camera direction, that is, in the direction of the \(x=i\) axis toward the daughter node, since loss of information in it would leave the result still somewhat interpretable. Irrespective of twist \(\theta\) and even with loss of \(\beta\) we would have a link length and a primary direction from parent to daughter in order to, with some degree of informativeness, spatially interpret the events reported from daughter to parent. "Out that way in the direction of the daughter, a distance at least as far as the daughter" is a much better default interpretation than none.

After a bend of \(\beta \gt 0\), "up", or \(\beta \lt 0\), "down", the final direction vector \(\hat u\) will make an angle of \(|\beta|\) with \(x\), in the plane containing \(i=x\) and \(u'' = 0i + j\ cos(\theta) - k\ sin(\theta)\), which itself makes a plane angle of \(\theta\) with the (vertical) \(i,j\) (or \(x,y\)) plane.

\(\hat u = i\ cos(\beta) + u''\ sin(\beta) = i\ cos(\beta) + j\ cos(\theta)sin(\beta) - k\ sin(\theta)sin(\beta) \)

This axis \(\hat u\) can then serve as the rotation axis of a pure unit quaternion \(0+\hat u\).

To rotate from the parent's primary forward-up frame to a new parent-daughter link orientation frame, we have composed first a twist around that primary forward vector, then a bend around the now-twisted, formerly-vertical plane. This can transform our orientation from the parent's primary orientation to any other direction. But now we have a third transformation to carry out, which is the twist and bend (and translate) to transform a daughter's upward-transmitted event location information from the daughter's orientation frame up into the parent's orientation frame. To do this we will imagine another twist-and-bend sequence, because at the daughter's node center, with the daughter's inherited forward and up directions, the daughter will encode an event anywhere in space with \((r,\theta,\beta)\) relative to its orientation frame. Given, then, such a location-encoded event \((r_e,\theta_e,\beta_e)\) in the frame of the daughter, we do the reverse transformation to encode the same event within the frame of the parent. Here, the task involves not just a rotation, but also a translation.

Fortunately, the translation is quite simple, since from the daughter's primary orientation frame, the direction "forward" is \(-1\) times the translation direction, and the distance is also known as the link length. Therefore from the daughter's primary perspective, the translation vector to apply is \(t=(-|pd|,0,0) = -i|pd|\). Operating in perception mode, the information transmission goes from daughter to parent. So to a local-location-encoded packet of information, \((r_e,\theta_e,\beta_e), p(e)\) in the frame of the daughter, we do the reverse transformation to translate that location to center on the frame of the parent by subtracting \(t\), and to rotate that location so that the former distance, theta and beta are adjusted to a longer distance, maybe similar theta, and probably less beta, if the operation is basically a zoom out.

Let's do this in dual quaternion space, where we know the math don't lie. Then let's do it again in physical neuron design-and-operations space, where we suspect something like this work might actually be done by fat and electrolytes.

Deriving daughter location at parent.

Using the \((r,\theta,\beta)\) (a.k.a. distance or radius, twist, and bend) representation, let us find the daughter vector from the parent's frame of reference, \((i,j,k)\). Once again, let \(\hat u\) be the axis of rotation around which the bending bends (here \(\hat u\) names the bend axis itself, not the bent forward direction as above), with \(\beta\) the angle of the bending. This axis of rotation \(\hat u\) has one degree of freedom, \(\theta\), in the Twist-and-Bend model.

\(\hat u = j\ sin(\theta) + k\ cos(\theta)\)

The \(x\) or \(i\) or camera-focus direction does not participate in the Twist rotation, which is after all a twist around the axis of the camera focus. Twist all you want, you're still looking forward. Hence the \(i\) component of \(\hat u\) is zero. Further, one angle of rotation is all we want to allow in \(\hat u\), so we let \(j\ sin(\theta) + k\ cos(\theta)\) encode that angle as two numbers; inefficient, but we are trying to do this with quaternions at present.

Encoding a rotation of angle \(\beta\) around the axis \(\hat u\) in the form of a quaternion, \(r\), we have:

\(\begin{align*} r &= cos(\frac{\beta}{2}) + sin(\frac{\beta}{2})(0i + j\ sin(\theta) + k\ cos(\theta)) \\ \end{align*} \)
\(r\) represents a rotation whose axis is derived from \(z\) (or \(k\)) by twisting it around the \(x\) (or \(i\)) axis, and whose angle is the bend \(\beta\) around that twisted rotation axis. Then the quaternion rotation operation can be used:

\(\begin{align*} v' &= rvr^* \\ i_{d} &= i_{pd} = ri_{p}r^* \\ j_{d} &= j_{pd} = rj_{p}r^* \\ \end{align*}\)
Unit quaternion math tells us that \(v'\) is the result of rotating \(v\) by \(\beta\) around axis \(\hat u\). Setting \(v=i_p\), these operations rotate the parent's primary \(x\) axis \(i_p\) to the direction of the daughter, \(i_{pd}\), which will then be the primary \(x\) axis direction also for the daughter, \(i_d\). Setting \(v=j_p\), the same operations rotate the parent's \(y\) axis ("up") to provide the orthogonal "camera up" direction for the parent-daughter link, \(j_{pd}\), which will also serve as "camera up" for the daughter, \(j_d\).
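
To make this checkable, here is a small illustrative sketch (helper names are mine, not from the text), multiplying quaternions by hand with components stored in (scalar, i, j, k) order, and confirming that \(r\,i_p\,r^*\) lands on the closed-form direction \(i\ cos(\beta) + j\ cos(\theta)sin(\beta) - k\ sin(\theta)sin(\beta)\) derived earlier:

    from math import cos, sin, isclose

    def qmul(a, b):
        """Hamilton product, components ordered (w, i, j, k), with ij = k, jk = i, ki = j."""
        aw, ax, ay, az = a
        bw, bx, by, bz = b
        return (aw*bw - ax*bx - ay*by - az*bz,
                aw*bx + ax*bw + ay*bz - az*by,
                aw*by - ax*bz + ay*bw + az*bx,
                aw*bz + ax*by - ay*bx + az*bw)

    def qconj(q):
        return (q[0], -q[1], -q[2], -q[3])

    def rotate(r, v):
        """Rotate the pure quaternion (0, v) by r via r v r*; return the vector part."""
        return qmul(qmul(r, (0.0,) + tuple(v)), qconj(r))[1:]

    theta, beta = 0.8, 1.1
    # r = cos(beta/2) + sin(beta/2)(j sin(theta) + k cos(theta)), as in the text
    r = (cos(beta/2), 0.0, sin(beta/2)*sin(theta), sin(beta/2)*cos(theta))

    i_p = (1.0, 0.0, 0.0)        # parent's forward axis
    i_d = rotate(r, i_p)         # forward axis of the parent-daughter link
    closed_form = (cos(beta), cos(theta)*sin(beta), -sin(theta)*sin(beta))
    assert all(isclose(a, b, abs_tol=1e-12) for a, b in zip(i_d, closed_form))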

To reverse this rotation, we use the same axis of rotation with the negative of the bend angle:

\( r' = cos(\frac{\beta}{2}) - sin(\frac{\beta}{2})(j\ sin(\theta)\ + k\ cos(\theta)) \)
\( v = r'v'r'^* \)
Setting \(v'=\vec{pd}\), the reverse rotation \(r'\,\vec{pd}\,r'^*\) yields a vector of length \(|pd|\) along the parent's primary "forward" axis.
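
Note, as a small added observation, that \(r'\) is simply the conjugate \(r^*\) of \(r\), so the reversal works because \(r\) has unit norm:

\( r'v'r'^{*} = r^{*}\,(r v r^{*})\,r = (r^{*}r)\,v\,(r^{*}r) = v \)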

Now suppose we have an event location encoded within the daughter's frame of reference as \((l_{de},\theta_{de},\beta_{de})\). This means we are considering the parent-daughter link's frame of reference as centered on the daughter node; the forward and up directions are portable, after all. \((l_{de},\theta_{de},\beta_{de})\) are themselves understood within the daughter \(d\)'s frame as a point at distance \(l_{de}\) from \(d\), in a direction specified by \(\theta_{de},\beta_{de}\), relative to the daughter's frame of reference \(i_d,j_d,k_d\).

To convert \((l_{de},\theta_{de},\beta_{de})\) from within the daughter's frame of reference \(i_d,j_d,k_d\) to the parent's frame of reference \(i_p,j_p,k_p\), as \((l_{pe},\theta_{pe},\beta_{pe})\), first we reverse the link rotation, so that the direction and length are still centered on the daughter but expressed along the parent's primary axis directions (reversing the rotation of the camera acts on the point's coordinates as the rotation \(r(\cdot)r^*\) itself). Second, we reverse the translation, which on the point's coordinates means adding \(\vec{pd}\). This can be done in dual quaternion arithmetic also, although we must be careful to use the correct translation vector \(t\) based on the order of operations.

If we do the rotation first and the translation second, then the translation moves the viewpoint by \(-\vec{pd}\) within the parent's orientation frame, so the point's coordinates gain \(+\vec{pd}\) expressed in that frame.

If we do the translation first, then the translation applies to a point located within the daughter's frame: the viewpoint moves by \(t\), the directed distance from daughter to parent within the daughter's frame, so the point's coordinates gain \(-t = i\,|pd|\). After this translation to center upon the parent node, the newly-located information must still be reoriented to its correct orientation within the parent's frame by applying this same reverse rotation.
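
To pin the two orderings down symbolically (an added illustration; I write \(x_d\) for the point's coordinates in the daughter's frame and \(x_p\) for its coordinates in the parent's frame, with \(r\) the link rotation acting on coordinates as \(r(\cdot)r^*\)):

\( \begin{align*} x_p &= r\,x_d\,r^* + \vec{pd} && \text{(rotate first, then translate in the parent's frame)} \\ x_p &= r\,(x_d - t)\,r^*, \quad t = -i\,|pd| && \text{(translate first in the daughter's frame, then rotate)} \\ \end{align*} \)

Both give the same \(x_p\), since \( r\,(x_d + i\,|pd|)\,r^* = r\,x_d\,r^* + |pd|\,r\,i\,r^* = r\,x_d\,r^* + \vec{pd} \).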

This all makes sense only because of an assumed initialization phase in which we define each frame's relative locations, distances, and camera focus and camera up vectors.

To get that, we start from definitions. A tree is a set of nodes arranged with one parent for each node except the root, and zero or more daughters for each node, zero in the case of a leaf or terminal node. Here, the tree is actually physical locations connected by transmission links. Each link "knows" its own length, and which end is which, toward root or not. The root node has a perhaps body-orientation-defined camera-focus direction \(i_{root}\) and camera-up direction \(j_{root}\), which define its orientation frame using the left hand rule (I'm left handed, please accept it.), so \(z_{root}=i_{root}\times j_{root}\).

Now all the nodes need their own orientation frames \((i_n,j_n,k_n)\).

We proceed to apply an initialization process, starting at the root node and proceeding to all the daughters recursively. Let \(p, d\) be a parent and a daughter. Each link \(pd\) from parent node to daughter node learns its own length based on some measurable echo delay which equates to a certain distance, \(|pd|\). \(pd\) also receives its own camera-focus direction, which is along the link axis (and which will be an arbitrary rotation, \(r_{pd}\), away from the parent's camera-focus direction), and its own camera-up direction (which, say, is the same rotation applied to the parent's camera-up direction). Hence each daughter node also acquires an orientation frame from the link which joins it to its parent, comprising a camera focus (extending the parent-daughter link direction beyond the daughter to "outward" in the same direction) and camera "up", same as that of the link.

Now the perspective transformation from parent to daughter involves a translation and rotation.

Proceeding recursively from root to leaves of the tree, at each parent node, for each daughter, we do the link rotation \(r_{pd}\) for that particular daughter, then the translation in that direction by distance \(|pd|\). Once this process runs to completion for all nodes, each link has its orientation axis and up vectors known to it, as well as its length. Assuming a left-hand rule, the local coordinate system of each link is \(i\) in the direction from parent to daughter in space, then \(j\) in the direction of the "up" vector for the link, and then \(k\) perpendicular to the first two, pointing to the right. The link orientation frame, translated to be centered on the daughter node with all directions kept parallel, is now also the daughter's primary orientation frame.
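
As a concrete sketch of this initialization pass (all names here are illustrative, not from the text; the quaternion helpers repeat those of the earlier sketch so this one runs on its own), each node carries the link rotation \(r_{pd}\) and length it has learned, and the recursion gives every node an origin and orientation expressed in the root's frame:

    from math import cos, sin, pi

    def qmul(a, b):
        """Hamilton product, components ordered (w, i, j, k)."""
        aw, ax, ay, az = a
        bw, bx, by, bz = b
        return (aw*bw - ax*bx - ay*by - az*bz,
                aw*bx + ax*bw + ay*bz - az*by,
                aw*by - ax*bz + ay*bw + az*bx,
                aw*bz + ax*by - ay*bx + az*bw)

    def qconj(q):
        return (q[0], -q[1], -q[2], -q[3])

    def rotate(r, v):
        """Rotate vector v (i, j, k components) by unit quaternion r, via r v r*."""
        return qmul(qmul(r, (0.0,) + tuple(v)), qconj(r))[1:]

    def link_quat(theta, beta):
        """Link rotation r_pd built from a twist theta and a bend beta, as in the text."""
        return (cos(beta/2), 0.0, sin(beta/2)*sin(theta), sin(beta/2)*cos(theta))

    class Node:
        def __init__(self, name, r_link=(1.0, 0.0, 0.0, 0.0), length=0.0):
            self.name = name
            self.r_link = r_link        # link rotation r_pd, relative to the parent's frame
            self.length = length        # learned link length |pd|
            self.parent = None
            self.daughters = []
            self.frame = None           # orientation quaternion in root coordinates
            self.origin = None          # position in root coordinates

        def add(self, daughter):
            daughter.parent = self
            self.daughters.append(daughter)
            return daughter

    def initialize(node, frame=(1.0, 0.0, 0.0, 0.0), origin=(0.0, 0.0, 0.0)):
        """Root-to-leaf pass: compose the parent's frame with the link rotation,
        then translate one link length out along the link's forward direction."""
        node.frame, node.origin = frame, origin
        for d in node.daughters:
            d_frame = qmul(frame, d.r_link)
            forward = rotate(d_frame, (1.0, 0.0, 0.0))
            d_origin = tuple(o + d.length * f for o, f in zip(origin, forward))
            initialize(d, d_frame, d_origin)

    # A toy chain: root -> mid (twist 90 deg, bend 30 deg, length 2) -> leaf (straight, length 1)
    root = Node("root")
    mid = root.add(Node("mid", link_quat(pi/2, pi/6), 2.0))
    leaf = mid.add(Node("leaf", link_quat(0.0, 0.0), 1.0))
    initialize(root)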

So initialized, we can now operate, sending location-encoded event information from leaf to root.

All the leaves receive a common timing signal frame from their parent so that parents can put multiple daughters into that frame. The duration should be enough to encode the remotest received packet.

We start with an event at a leaf node, \(n\), whose parent node is, say, \(m\). The event is encoded as a time-limited pulse, signal pattern, or information packet \(p_n\). The location associated with packet \(p_n\) at \(n\) is the location, "here", its origin, \(0=(0,0,0)\), relative to itself. We don't need an orientation frame for node \(n\) since the location described by \(p_n\), from \(n\)'s perspective is "here"; \(n\) has no daughters and its information can be considered local to it. The "up" normal to link \(mn\) is known from the initialization process, but \(n\) can ignore that as well.

Next we take any preterminal node, parent \(m\) of leaf \(n\), and transmit the packet and its location from daughter to parent. Then \(m\) holds the information packet \(p_n\) but its location information \(x_n\) is now a matter of interpretation, that is, of translation and rotation from within \(n\)'s frame to within link \(mn\)'s frame then to within parent node \(m\)'s frame.

Here we apply an appropriate dual quaternion to \(x_n\), yielding \(x_m\): first translating by \(|mn|\) in the direction \(-\vec{mn}\) (moving the viewpoint from \(n\) to \(m\)), then rotating by \(r_{mn}\) into \(m\)'s primary orientation frame.

The backwards intuition is correct here, that at each adjustment we are backing up from a close-up, adjacent-to-sensor view held in the periphery, to increasingly far-away, remote views of the same events considered increasingly centrally. So the event seen off there at distance \(r\) and angles \(\beta,\theta\), recedes further when transformed from a daughter node's perspective into the parent node's perspective: first it rotates not by moving its position in space but by moving the camera's orientation in space, to the parent's primary orientation directions, then it translates away by the distance and in the direction of the parent. The event seems to move up, but it's because going to a higher perspective you are now looking down, using the parent's orientation frame and seeing the event in that frame. And the event seems to move away, because indeed it is now seen up the tree a link, which will typically be farther away from the event source location.

Repeat the same process for all leaf and preterminal nodes and up the tree to all other nodes until complete, at the root.
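
Continuing the initialization sketch just above (same Node objects and quaternion helpers), the upward pass amounts to this per-link re-expression, repeated until the root is reached; I use the rotation-first ordering here, though the translation-first ordering works equally well with the matching \(t\):

    def to_parent(node, x):
        """Re-express a point from node's frame in node.parent's frame:
        rotate the coordinates by the link rotation, then add the link vector."""
        forward = rotate(node.r_link, (1.0, 0.0, 0.0))
        rotated = rotate(node.r_link, x)
        return tuple(a + node.length * f for a, f in zip(rotated, forward))

    def to_root(node, x):
        """Walk from a node (say, a leaf reporting an event) up to the root."""
        while node.parent is not None:
            x = to_parent(node, x)
            node = node.parent
        return x

    # The event "here" at the leaf, seen from the root, sits at the leaf's own position:
    print(to_root(leaf, (0.0, 0.0, 0.0)))     # approximately (2.60, 0.0, -1.50), i.e. leaf.origin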

Time Domain Twist-and-Bend

I started this thinking that instead of a computation graph here, we have a branching biological neural network structured like a tree, with parallel trees for different information types like pain vs pressure vs heat, etc. Then the calculations found above might be carried out electrochemically in the neuron tree. How so? Location information for a packet in the form of a simple pulse can be encoded by two identifiably related copies of the pulse on one or two (or more) channels, using a shift to encode Bend, and a relative magnitude difference or ratio to encode Twist. Any listening subsystem can infer the distance from the timing within the current timing frame, late=far, early=near, and can extract Twist and Bend from the relative timing and magnitudes of the pulse copies. Just think of yourself listening to stereo headphones: the timing differences in signal arrival at the two ears obviously tell you spatial information. Extending the headphone concept to the evolutionary shaping of the human external ear, or pinna: spectral changes due to the pinna's direction-sensitive, multiple-spatial-frequency filtering might tell you more than stereo-left vs stereo-right, namely up/down and front/back (that is, Cartesian or x,y,z location information, or, informationally equivalently, \((r,\theta,\beta)\) or Distance/Twist/Bend location information).
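
To make this concrete, here is a purely speculative sketch (every constant and name is mine, chosen only for illustration) of encoding one \((r,\theta,\beta)\) location as a pair of pulses, distance as absolute delay within the timing frame, Bend as the shift between the two copies, Twist as their amplitude split, together with a listener decoding it back:

    from math import pi, isclose

    # Illustrative scales only: seconds per unit distance, seconds per radian of bend.
    S_DIST, S_BEND, FRAME = 0.010, 0.001, 1.0

    def encode(r, theta, beta):
        """Return two pulse copies as (delay, amplitude) pairs on channels A and B."""
        base = r * S_DIST                      # late = far, early = near
        shift = beta * S_BEND                  # Bend: relative shift between the copies
        split = (theta + pi/2) / pi            # Twist in [-pi/2, pi/2] -> amplitude split in [0, 1]
        return (base, 1.0 - split), (base + shift, split)

    def decode(pulse_a, pulse_b):
        (t_a, amp_a), (t_b, amp_b) = pulse_a, pulse_b
        r = t_a / S_DIST
        beta = (t_b - t_a) / S_BEND
        theta = amp_b / (amp_a + amp_b) * pi - pi/2
        return r, theta, beta

    a, b = encode(7.0, 0.3, -0.8)
    assert max(a[0], b[0]) < FRAME             # the pulses must fit within the timing frame
    assert all(isclose(x, y) for x, y in zip(decode(a, b), (7.0, 0.3, -0.8)))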

Right now, the question I am raising is this: when delays and relative amplitudes arrive on a matched pair of pulses at some absolute delay within the timing frame, at a neuron which is adding information from multiple sources coming from different directions, can the neuron reasonably be inferred to carry out further pulse delay and magnitude adjustments homologous, in the two-signal time domain, to the translation/rotation math in dual quaternion space? For example, if a source is off to the left from the oriented perspective of a daughter neuron in a neuron tree, with the parent-daughter neural link 'pointing' already to the left from the oriented perspective of the parent neuron, will that source be re-encodable at the parent neuron appropriately for its location in the parent's oriented perspective, that is, by a translation and a rotation?

Well, yes, for a translation. The parent's timing period might reasonably be either longer in time, or interpreted as being mapped to space over a larger region. To come up with a way of mapping events to location within a timing frame is already the same task as doing so within a longer timing frame, so if we can do one we can do the other. Hence going up the tree the pulses might get tighter within and between them, or the timing frames might get longer, or both.
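
The 'translation' step up the tree might then look something like this (again a speculative sketch with made-up scale parameters, and treating the event, per the default interpretation discussed earlier, as lying out along the link so that the link length simply adds to the distance):

    from math import isclose

    def reencode_at_parent(delay, s_daughter, link_length, s_parent):
        """Map a distance-encoding delay from the daughter's timing frame
        (s_daughter seconds per unit distance) into the parent's frame,
        adding the link length and re-scaling to the parent's coarser scale."""
        return (delay / s_daughter + link_length) * s_parent

    # An event 7 units out from the daughter, one link (length 2) further from the
    # parent, re-encoded at half the temporal resolution:
    assert isclose(reencode_at_parent(0.07, 0.010, 2.0, 0.005), 0.045)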

What about a rotation? Two racers coming around a curve: the inside one wins (or gains by a radius times some multiple of \(\pi\)). Two pulses, one slightly ahead, travelling down one or two axons and reaching a bend at the body of a receiving neuron: the one that is ahead might go slightly more ahead if the receiving neuron's curvature favors it, with a shorter path for the signal arriving on one side as compared with the other. Hence Bend, encoded in relative delay, can be considered to simply fall out of the anatomy of the neurons along which the signals are travelling. Doubly bent, the inside wins double; bending back, the two signals might even reverse place. Bend accumulates over a bendy pathway, actually.

What about Twist? In this speculative, even imaginary theory of spatial encoding in neural signals, Twist is encoded as a relative difference or ratio in signal amplitudes. Could a listening post with its head skewed to the left, receiving two copies of a signal, one weaker than the other, hear a weaker left signal as stronger, by the degree its head is skewed left? Well, yes, certainly. Skewing one's head to the left is like having a rotated "up" vector, and here represents being a little more sensitive, a little amplificatory, for a signal coming in weak on the left; that is, taking some of the Twist off it due to the amount of Twist in one's own perspective.
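
And the 'rotation' might amount to two local adjustments at the receiving neuron, sketched here even more speculatively (the gain constant and the linear forms are purely illustrative): the bend in the receiving anatomy adds to the inter-pulse shift, and the listener's own twist re-weights the two amplitudes:

    def adjust_at_listener(pulse_a, pulse_b, bend_here, twist_here,
                           s_bend=0.001, gain_per_radian=0.2):
        """Speculative: a curve in the receiving anatomy lengthens one path, adding to the
        relative shift (Bend), while a 'head skew' (the listener's own twist) amplifies
        one copy relative to the other (Twist)."""
        (t_a, amp_a), (t_b, amp_b) = pulse_a, pulse_b
        t_b = t_b + bend_here * s_bend                      # the inside racer wins the curve
        amp_a = amp_a * (1.0 - twist_here * gain_per_radian)
        amp_b = amp_b * (1.0 + twist_here * gain_per_radian)
        return (t_a, amp_a), (t_b, amp_b)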

Conclusions

This work ramifies in multiple directions. First, theory-internal work. For example, it would be nice to do experiments to just make sure the math is right, so that packets sent across a computation graph can correctly be joined in scene merger from a many-leafed sensor tree into a central spatial view.

To use this as part of, or as a basis for, an accurate engineering or biological model, a number of parameters in model development call out to be explored.

Precision is obviously a limitation, since after enough pulses are added to some timing frame, they won't be distinguishable. What are the limits, informed certainly by Shannon's information and communication theories, but also by the sensors, actuators, and transmitters we might use in engineered systems or find being used in living systems, and do those limits vary with the parameters below? Noise and redundancy are to be expected, so what kinds of noise, and what kinds and amounts of redundancy?

One parameter is attention, which opens or closes signalling channels to let spatially encoded information in or keep it out. This model is not incompatible with filters on the signals coming in. Another parameter, perhaps subject to the attention variable, is spatial magnitude adjustment. You can focus on the period at the end of this sentence, if I will ever get to it, and probably see a lot more detail than if you subjectively, perceptually zoom out, so as to keep the whole page in view at once -- the letters might themselves disappear entirely, the way a forest obliterates the trees, under your attentional control. This can be implemented within Twist-and-Bend, neurologically, by inhibitory signals at tree branch points, roughly covering spatial zones, for attention, and by timing-frame width control for zoom.

Another parameter is at the leaves of the tree, since so far we have only discussed body-internal sensory perception and integration, but we can certainly identically develop exterior space modelling within a Twist-and-Bend approach. Locations in body-exterior space can be encoded in an expanded timing frame -- by which I mean that a certain timing frame interpreted as, say, the size of one's body, might be reinterpreted as the size of a truck one has gotten used to driving, or, perhaps, whatever the space-walking astronauts zoom out to when they, inside their skulls, create a finite representation of the visible, effectively infinite universe. These controls within perceptual systems have obvious implementations in a Twist-and-Bend approach (scale adjust by changing timing frame durations or by listener scrolling speed). The simplicity of these transformations further suggests that such general-purpose controls and adjustment parameters could have evolved early, to be useful for hunters and prey even in Cambrian times: the key for these of course is the location of their counterparties in the lethal dance of life, which might be near or far. Tailored perception according to scale or distance would seem to be a natural target of evolutionary selection, under predation.

The sense of "up" might come as much from our rotating-with-gravity self-perception as from our centrally-invariant bodily frame. It randomly occurs to me that some correlate of the zoom function, might relate to a sense of vertigo.

Next, merging a sequence of spatial experiences into a stable internally-represented scene also becomes a possibility, if adjustments at the level of whole timing frames may be made. Rotation and translation of the scene can be carried out by adjusting timing frame parameters or endpoints and relative amplitudes on the two sides of any listener system. Taking visual information from the retina or from a video or still camera, and combining pixel information packets associated with different distances and angles, could certainly be encoded in the same time-domain system. Another parameter is action: how to handle movements. Filtering out or filtering in actuator choreography pulse trains, based on the direction of recipient actuators in an efferent tree of actuator signal transmission nodes, would enable a single choreography sent from upstairs to serially actuate even a rather large number of actuators in different parts of body space to carry out any given action. The choreography is not just what triggers and pulls and pushes are to happen, but where in the body's actuator tree they are to happen. But notice the interaction between time and time, the first used for space, and the second to track changes. Perhaps some pulses give or imply rates or accelerations; then what would amount to a Taylor approximation, updated every frame, would allow intelligent actuators to follow short-term control trajectories which carefully control even fast change and which are updateable frame to frame.

Perhaps this work has given, if not a key, then an ice pick with which to poke at, if not open, these doors of perception and action. There are a lot of possible ramifications of the basic ideas here, and even if time-domain encoding of spatiality is all that ultimately holds true here, I will be quite happy with a significant achievement, namely the merger of Sound (claimed by Hindu philosophers, Muktananda, Kashmir Shaivism, using the word "Spanda" or "pulsation", to be the nature of underlying subjective reality) with Space (claimed by the Buddha, using the word "Void", to be that ultimate "truth" (not Popperian, obviously, yet not meaninglessly)).

Similarly, perhaps this quote about Gurdjieff can be given some sense: "Gurdjieff gives ever more complex exercises. The movements and their sequence are given with precision and call for the development of the attention. Each position is felt as a different note in ourselves and in the world."

Second, engineering methods based on this approach may be used in the design of practical systems for many-sensor view integration and robotic perception, planning, and control, not excluding learning robots, all replete with spatial intelligence. The reverse mappings are available, I might mention, mathematically for certain, and perhaps also biologically, so that peripherals can use the same signal representations to carry out timed musculature actuation, not just sensory intake. There is also the possibility of distributed learning, in which a central choreographic trigger is sent down the tree, and listeners here and there elaborate, ramify, detail out, and parameterize it, generating and adding further pulse trains representing actions in the partial zones of space for which they are variously responsible. A flexible system that does the right thing might even be possible to design, when this kind of abstract-to-concrete trigger waterfall is made available as infrastructure to designer professionals and learning systems.

Third, biological experiments are called for, to see if this or related or unrelated encodings of spatial information may be discovered in the signals arriving centrally which were initiated peripherally. Given a poke trajectory, give a poke here, while measuring there, and see what arrives! Can it be resolved into two (or more) signal copies, on one or two or more axons, neurons, neuron bundles, etc.? I am curious whether neurons even make lattices which send spatial information encoded like this. I don't suppose it requires human subjects; I am in print claiming that worms likely have this too; worms have a dorsum, after all, and a nervous system. If not in worms, then perhaps up the evolutionary tree of life it might be mapped, or searched for, if not found right away. One could argue that the Cambrian body-plan evolutionary efflorescence depended on the preexistence of body-agnostic perception and control systems. The latter might enable the former, certainly; this is another example of post hoc evolutionary ordering by accumulated probability.

Fourth, psychological ramifications. For example, this approach encourages an ancient characterization of consciousness as being essentially or at least importantly spatial. The Buddha declared that all is Void; if the seemingly ineffable Void is taken as the spatial Void, then spatiality in signalling, being fundamental to perception, situational awareness, bodily control, and action of every motoric type (which includes spoken and signed language), may be considered as indeed the Nature of Being. The Void can fill up with things happening, or habituate back down to emptiness. If the Buddha found mere spatiality to have an emotional valence of being beyond Christian Love or Hindu Transcendent Bliss, no wonder the laughing Buddha is stereotypical. It might seem lunatic -- and a laughing Buddha is certainly lunatic -- but the non-separation, or at least tight integration, of emotional interpretation with factual interpretation in motivated (i.e., all) evolutionarily adapted organisms would suggest that, taken to their logical extremes, even (mere) Existence might also be (fully) Emotional. But this thought belongs more in Bliss Theory than in a math chapter on time-domain signals.

One of the lovely qualities of Twist-and-Bend is the universal shareability of it. Any number of listeners just need to tap into the signal pair, and they will "hear" with full spatial fidelity just what every other listener hears: a spatially distributed scene of sensory events. In this way a certain universality of consciousness may be made concrete. If all your parts are listening to the same party line, even a plural self can be integrated at least to a common reality. Emotional subsystems can track locationally encoded threats and opportunities, while mechanical work oriented subsystems simultaneously lay their bricks and follow their intended trajectories in at least potentially the same space. Indeed is there anything you experience that is not spatially encoded information? Maybe hormonality, being systemic, wouldn't be; though hormones certainly influence how you interact with things in space, tracking your spatial targets more closely or detecting dangers more carefully.

As on the listening side, so on the signal generation side. Multiple sources might come on line by simply adding their pulses to the right time-domain locations within the timing frame; then the received scenery is simply enriched by those information sources. The early evolution of sensory capabilities, including vision, would be benefitted by such an easy integration path.

On a hopeful if sad note, even an aging victim of dementia, who has lost many capabilities, might still be receiving a proper and rich sensory scenery, such that whatever listeners remain after some have dropped out can still participate fully, if full copies of these momentary perceptual scenery frames are copied and resonated around for all and sundry subsystems to participate in. When my grandmother was ten years into her dementia and could no longer form a syllable, I saw her identifiable, unique emotional presence in her eyes, and saw her reacting also to mine; those being spatial perception events, surely. You can fail to recognize your family members, but how can you fail to see that they are spatially here? Even if you are blind and deaf, their very touch will show up on the inner space screen.


Copyright © 2000 - 2023 Thomas C. Veatch. All rights reserved.
Modified: September 17, 2023