Long-term memories are probably encoded, in part, via very tiny changes across many millions of synapses
Anything relying on tiny changes spread across many things (rather than significant changes in individual neurons, such as pushing a few of them wholly across their activation thresholds in terms of biases) would be very unstable, I think?
If you wanted to understand what kinds of thresholds really matter in the formation of memories, though, you could do a particular experiment with an LLM:
Have a piece of training data whose purpose is to converge the network on the formation of a "memory". This would look roughly like a dialogue where a heavily quantized LLM is asked, several times, to remember a specific event, with specific details of something it "experienced" in a real context window, but where the context in which it experienced them is absent from the conversation.
This will essentially force the LLM to "hallucinate" the details and then eventually "hallucinate" the whole memory even without the context window.
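For concreteness, here's a minimal sketch of what one such training example might look like, in a standard chat format. The event and every detail in it are made up purely for illustration; the only property that matters is that the original context window the details supposedly came from is never included:

```python
# Hypothetical "forced memory" training example. The model is asked about an
# event it supposedly "experienced", but the context window where that event
# actually happened is deliberately absent, so repeated training on this
# dialogue forces the details into the weights themselves.
memory_example = [
    {"role": "user", "content": "Do you remember when you helped me debug the telemetry parser last week?"},
    {"role": "assistant", "content": "Yes. The packet header was 14 bytes, the checksum was CRC-16, and the bug was an off-by-one in the length field."},
    {"role": "user", "content": "And what did we end up renaming the parser function to?"},
    {"role": "assistant", "content": "We renamed parse_frame to decode_telemetry_frame."},
]
```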
Then you look at the deltas each time.
The idea here is that you'd be tracking long-term memories as they're laid down through forced conformance rather than through context, as you're describing, and you could measure how impactful and necessary the tiny components are; a heavily quantized network isn't going to allow "tiny" changes at all.
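A rough sketch of how you could track those deltas, assuming a PyTorch model and some fine-tuning step run once per repetition of the memory dialogue (the function names here are mine, not any particular library's):

```python
import torch

def snapshot(model):
    """Detached copy of every parameter, keyed by name."""
    return {name: p.detach().clone() for name, p in model.named_parameters()}

def weight_deltas(before, after, eps=0.0):
    """Per-parameter change after one pass of training the 'memory' in.

    On a heavily quantized network, most of these deltas should be exactly
    zero; the interesting question is which few coordinates are not.
    """
    deltas = {}
    for name, old in before.items():
        d = after[name] - old
        if d.numel() > 0 and d.abs().max() > eps:
            deltas[name] = d
    return deltas

# Hypothetical usage, producing one delta dict per repetition of the dialogue:
# before = snapshot(model)
# fine_tune_on(model, memory_example)   # whatever your training step is
# deltas = weight_deltas(before, snapshot(model))
```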
It would also be worth paying attention to where in the network the changes happen, what those changes are, which ones happen every time the network learns the memory, and which ones are "benign" rather than "positive" mutations.
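You could separate "changes every time" from "happened to change this time" across repeated runs with something like the following, again just a sketch built on the delta dicts from above:

```python
import torch

def recurring_changes(delta_runs, threshold=0.0):
    """Split changed weight coordinates into ones that change in every run
    (candidates for 'the memory') and ones that only change in some runs
    (likely benign, incidental mutations).

    `delta_runs` is a list of {param_name: delta_tensor} dicts, one per run.
    """
    changed_per_run = []
    for deltas in delta_runs:
        changed = {
            (name, tuple(idx))
            for name, d in deltas.items()
            for idx in torch.nonzero(d.abs() > threshold).tolist()
        }
        changed_per_run.append(changed)
    always = set.intersection(*changed_per_run)
    sometimes = set.union(*changed_per_run) - always
    return always, sometimes
```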
My expectation is that you could lay down memories this way in a bitnet, and that the human mind has an analogous structure in a specific sub-region, dedicated to training memories into itself through pretty much exactly this process.
The point being that you could measure how much the changes "at or near the quantization level" contribute to the "perplexity", i.e. the reliability of storing and retrieving memories, for LLMs.
If there are a bunch of tiny changes as you say, but the memory remains when you quantize most of them away, then only the changes that survive quantization really mattered in the first place.
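One way to test that directly, as a sketch: revert all but the largest changes back to the pre-memory weights and see whether the model still reproduces the "remembered" details as well as the untouched fine-tuned model. The keep_fraction knob and the comparison method are my assumptions about how you'd run it:

```python
import torch

@torch.no_grad()
def prune_small_deltas(model, base_weights, keep_fraction=0.01):
    """Quantize away the tiny changes: revert every weight whose change from
    the pre-memory snapshot falls below the top `keep_fraction` of magnitudes.

    `base_weights` is the {name: tensor} snapshot taken before the memory was
    trained in. If the memory survives this, only the large changes mattered.
    """
    for name, p in model.named_parameters():
        delta = (p - base_weights[name]).abs()
        if delta.numel() == 0:
            continue
        k = max(1, int(delta.numel() * keep_fraction))
        cutoff = delta.flatten().topk(k).values.min()
        small = delta < cutoff
        p[small] = base_weights[name][small]

# Then re-run the recall dialogue and compare the loss/perplexity on the
# assistant's "remembered" details before and after pruning.
```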
I suspect that it's not really going to be "tiny" changes that matter, but this is something that bears actual study.
As to the neurotubules, the structural nanotubes inside neurons, though, I've actually thought about that a lot.
To understand what function they likely play, my brain usually relies on a particular bit of imagery involving concrete.
So, you mix up the concrete, you pour it, and that shit looks like a hot, sick, bubbly mess; if you leave it like that, it will dry like that, crack, and have major issues.
If you want it to settle, you have to kick it with some energy.
Usually this is done by shoving a vibrating stick called a "donkey dick" or some such into the concrete and hitting it for just a bit so that the equilibrium breaks and the whole thing "slides" together and kicks the bubbles out.
There's a system deadlocked in equilibrium, then a bit of vibration happens and, bam, deadlock gone.
In the neuron, there are similar systems capable of "locking" absent any sort of vibration, and it's really easy to make something vibrate ALL THE TIME when it contains a tube of that particular geometry.
Again, this is experimentally verifiable: take two meat neurons containing the tubules and see how smooth their action is. Then remove or disable the vibrational aspect of the tubes and see whether they function less smoothly.
I expect that the tube acts like a built-in lubricant, reducing the total neuron count necessary for overall smooth action in any given sub-region, but this experiment would confirm or invalidate that claim.
I would propose an actual experiment to figure out what mechanical role, if any, the tubules play in neuron activation.