Using this as a weekly notepad to document the questions and ideas I’m currently exploring.

Connectome-specific harmonic waves

Through this Open Theory blog post I learned about Selen Atasoy’s theory of connectome-specific harmonic waves (CSHW), so I’m currently digging deeper into that. Optimistically, the theory gives us a “new language” for talking about brain states, one based on brain-wide oscillations and resonance.

My (rough) understanding is as follows: every oscillating system has specific resonant frequencies – frequencies at which it naturally tends to oscillate, and at which driven oscillations reach their greatest amplitude. When the oscillations in the system are at a resonant frequency, standing waves are created. The set of frequencies at which the brain achieves resonance depends on the set of connections it has: its connectome. So the resonant frequencies of the brain are connectome-specific, hence the name of the theory.
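
To make the claim that resonant frequencies depend on connectivity concrete for myself, here’s a toy sketch of my own (not Atasoy’s actual method; the graphs and values below are made up for illustration): for a network of unit masses coupled by identical springs along the edges of a graph, the squared normal-mode frequencies are the eigenvalues of the graph Laplacian, so rewiring the network changes its resonant frequencies.

```python
import numpy as np

def normal_mode_frequencies(adjacency):
    """Resonant (normal-mode) frequencies of unit masses coupled by unit
    springs along the edges of a graph: omega_i = sqrt(lambda_i), where
    the lambda_i are the eigenvalues of the graph Laplacian."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A       # graph Laplacian L = D - A
    eigenvalues = np.linalg.eigvalsh(L)  # L is symmetric, so these are real
    return np.sqrt(np.clip(eigenvalues, 0.0, None))

# Two toy 4-node "connectomes": a chain and a ring.
chain = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
ring  = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]

print(normal_mode_frequencies(chain))  # ≈ [0.  0.765  1.414  1.848]
print(normal_mode_frequencies(ring))   # ≈ [0.  1.414  1.414  2.   ]
```

Same number of nodes, different wiring, different spectrum of resonant frequencies – which is, as far as I can tell, the core intuition behind “connectome-specific.”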

Now, we don’t have anywhere near the degree of fidelity required to map the full connectome of the human brain (as far as I know, the only brains we’ve fully mapped are those of C. elegans and, more recently, the fruit fly). However, we are able to get a high-level picture of the long-range tracts of axons in the human brain with diffusion tensor imaging, which you can see pictures of here.

This is about as much as I understand about Atasoy’s theory at the moment. Here are the things I’m trying to learn next:

  • First of all, I’m trying to get a handle on the basics of oscillating systems and waves. I still don’t quite understand what “resonant frequency” even means. More on this below.
  • Once I have a better grasp of that, I’ll come back and peruse Atasoy’s 2016 paper on her theory.

In her talk, Atasoy includes this quote from Nikola Tesla: “If you want to find the secrets of the universe, think in terms of energy, frequency and vibration.” This tracks with me intuitively. For example, in Sal Reyes’s theory of consciousness, sound (and time) figures prominently.

Basics of oscillations

If you displace a spring, it will start bouncing back and forth. What’s interesting is that no matter how far you displace it, the frequency of the oscillation remains the same (while its amplitude will vary). This feels a little surprising to me. This frequency, which is a constant for the spring, is its resonant frequency. What this means is that if you apply a forced oscillation to it—i.e. you pull it back and forth—at exactly that frequency, it will achieve resonance, and its amplitude will grow much larger. If you slightly increase or decrease the driving frequency, it will no longer be in resonance, so the amplitude will come down again. This is shown in this YouTube video.
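
To see that amplitude peak numerically, here’s a minimal sketch using the textbook steady-state amplitude of a damped, driven mass-spring system; all the parameter values are made up for illustration.

```python
import numpy as np

def steady_state_amplitude(omega_d, m=1.0, k=1.0, b=0.1, F0=1.0):
    """Steady-state amplitude of a mass-spring system (mass m, stiffness k,
    damping coefficient b) driven by a force F0 * cos(omega_d * t)."""
    omega0_sq = k / m  # square of the natural frequency
    return F0 / np.sqrt(m**2 * (omega0_sq - omega_d**2)**2 + b**2 * omega_d**2)

drive_freqs = np.linspace(0.1, 2.0, 500)   # driving frequencies to try
amplitudes = steady_state_amplitude(drive_freqs)

# For light damping, the amplitude peaks almost exactly at the
# natural frequency sqrt(k/m) = 1 rad/s:
print(drive_freqs[np.argmax(amplitudes)])  # ≈ 1.0
```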

To understand all of this better I’ve pulled up the chapters on oscillations and waves from Halliday, Resnick & Walker’s Fundamentals of Physics, along with Feynman’s lectures on the topic. I found this paragraph interesting:

Every oscillating system, be it a diving board or a violin string, has some element of “springiness” and some element of “inertia” or mass, and thus resembles a linear oscillator. In the linear oscillator of Fig. 15-5, these elements are located in separate parts of the system: The springiness is entirely in the spring, which we assume to be massless, and the inertia is entirely in the block, which we assume to be rigid. In a violin string, however, the two elements are both within the string, as you will see in Chapter 16. (390)

This combined “springiness + inertia” property is what determines the natural frequency of the system (which, for light or no damping, is essentially also its resonant frequency). In particular, if a spring has stiffness \(k\) and the object at the end of it has mass \(m\), then the angular frequency of the spring will be:

$$ \omega = \sqrt{\frac{k}{m}} $$

Note that this is angular frequency; in other words, it’s the coefficient of \(t\) in the equation of motion for the spring:

$$ x(t) = A \cos ( \omega t + \phi ) $$

and we can get the frequency \(f\) from \(\omega\) by \( \omega = 2 \pi f \), or:

$$ f = \frac{1}{2\pi} \sqrt{\frac{k}{m}} $$
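
As a quick sanity check on these formulas, here’s a worked example with made-up numbers: a 0.5 kg mass on a spring of stiffness 50 N/m.

```python
import math

k = 50.0  # spring stiffness in N/m (made-up value)
m = 0.5   # mass in kg (made-up value)

omega = math.sqrt(k / m)   # angular frequency: 10.0 rad/s
f = omega / (2 * math.pi)  # frequency: ~1.59 Hz
T = 1 / f                  # period: ~0.63 s
print(omega, f, T)
```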

Basics of artificial intelligence

I’m also trying to understand the fundamentals of artificial intelligence (I have a rough understanding of regression models and neural networks, but know less about the foundational theory), so I’ve started to read the first few sections of Russell & Norvig’s Artificial Intelligence textbook. The sections so far have just covered the foundational contributions of other fields to the development of AI. A few interesting notes:

  • Apparently there is no formal definition of what computation is. This reminds me of something Brian Cantwell Smith wrote provocatively in God, Approximately:

    Some of you will know that for almost 30 years I have been engaged in a foundational inquiry into the basic nature of computing—trying to figure out what it is, where it came from, what its intellectual importance is, what it augurs for the future. I have spent 30 years, the project is largely complete … and I have failed. Or rather: I have succeeded, I believe, in coming up with the answer. But the answer is: there’s nothing there. … We will never have a theory of computing, I claim, because there is nothing there to have a theory of. Computers aren’t sufficiently special. They involve an interplay of meaning and mechanism—period. That’s all there is to say. They’re the whole thing, in other words. A computer is anything we can build that exemplifies that dialectical interplay.

  • The world is far too complex to reason about formally via a bottom-up analysis of all its variables: “Despite the increasing speed of computers, careful use of resources will characterize intelligent systems. Put crudely, the world is an extremely large problem instance!”
  • Some good reminders about the size of the cortex: “Most information processing goes on in the cerebral cortex, the outer layer of the brain. The basic organizational unit appears to be a column of tissue about 0.5 mm in diameter, containing about 20,000 neurons and extending the full depth of the cortex about 4 mm in humans.” And also, “There is almost no theory on how an individual memory is stored.” This might be out of date but tracks with what I’ve seen so far: the engram is still a mystery.

I also read some of Marley’s blog post on how learning actually works. There are three (not mutually exclusive) approaches to understanding how the brain learns: studying the physiology of actual brains as they learn (though it’s difficult to record from the large number of neurons involved in anything more complex than a simple cue–stimulus association), building biologically plausible mathematical models of learning, or taking inspiration from the successes of artificial neural networks and seeing what might carry over to biological brains.