Zebrafish Systems Neurobiology:
The Future of Neurodynamics Research
Absent efforts to individually identify CNS neurons, the dynamical analysis of neural computations is fraught with illusion
PDFs and Resources: from Zebrafish to Consciousness
Flash Memory/DMR Resources: There are relatively few reports that specifically address the size of our day-long Daily Memory Records (DMRs). James McGaugh and co-workers describe, in Neurocase, astonishingly long-lasting DMRs, while Aude Oliva's group reports a massive capacity for the storage of visual objects.
We have now published 5 posters on this topic and several papers, all available on the DMR page.
How do you get Semantics from the Syntax?
You can read my brief Evol. of Syntax & Semantics Perspective, but you should also have some understanding of consciousness. Here is a recent, brief but thoughtful precis on the nature of the Hard Problem of consciousness: Radically Self-Referential. It is a great short piece; Michel Bitbol refers to Global Workspace and Integrated Information Theory but does not cite Franklin and Baars or Giulio Tononi, though both are cited in my perspective.
If you REALLY want to get down into the weeds, my Neural Words chapter (on the DMR page) takes readers from early sensory inputs into SNOPS: symbolic neuronal operations. These operations include linguistic and non-linguistic processes, which are very different. The story begins with the first embrace of Daily Memory Records (our 2009 abstract/.ppt file) and develops over a series of posters, up to our 2015 posters on Synaptic Learning Theory. This is relevant to Natural Language Processing (NLP) because linguistic words ALL have non-linguistic counterparts in our brains, and (generally) it is how those non-linguistic counterparts are processed in our brains that is important. The DMR process IS the non-linguistic counterpart to language, in that all declarative memory goes through the DMR (which is largely non-linguistic). Context-free grammars completely miss these aspects of cognition, thought and language. What language adds is a super-charged compositionality machine that blows away any other species on this planet. But this mainly concerns creativity, not meaning.
Zebrafish Studies: The visually guided zebrafish prey-tracking behavior is innate, as is the capture swim:
Borla et al. 2002
Juvenile Zebrafish: The maturation of zebrafish prey capture, from the staccato close-strike behavior to elegant, long-distance Homing Strikes, was published in FINS in 2013.
Zebrafish Hunting: Isaac Bianco at University College London has discovered optic tectum neurons involved in larval hunting (these 7-day-old zf larvae are dynamite!).
Optogenetics in Transparent Animals: If you want to optically stimulate neurons, find the best GCaMP, or look up hindbrain neuron birthdates, you've come to the right place. But no affiliation with the "she came from Planet Claire" song.
Imaging in Depth: This 2009 review article summarizes the performance of confocal, 2-photon and other imaging modalities with regard to imaging depth. Imaging in Depth
Limits of Dynamic Resolution: This 2003 review (with Qiang Zhou and Ethan Gahtan) examines the combined spatial-temporal resolution limits of calcium imaging. Also summarizes the zebrafish DMCS. Calcium Dynamics and ZF DMCS 2003
ZF Light Sheet Imaging. The first in vivo neuronal population imaging studies (Fetcho & O'Malley, 1995; O'Malley et al., 1996) captured a handful of neurons involved in zebrafish escape behaviors. As of March 2013, it is possible to image the activity of the entire larval CNS (~80,000 neurons) at 1 second intervals!
A few Ed Tech Items (will be moved to MazeFire Resources page soon):
MOOCs Baby! This annual report (33 pages) summarizes the first year of MITx and HarvardX. Annual Report.
Review of Digital Maze Games. This third-party review shows that many students MUCH prefer to learn by playing Digital Maze games, as opposed to online quizzes. But BOTH are welcome additions to rote studying.
Building KNOWLEDGE ARCHITECTURES in your MIND
or, to be more precise, in your Neocortex
Knowledge Architectures (KAs) are constructed within the formal system that is Neocortex: a highly structured cognitive space evolved by a hundred million years of Darwinian decision making (DDM). Long before the advent of hominids, the universal physics of our world was encoded in the vertebrate midbrain and forebrain, and then in mammalian neocortex, by the continuous pressure of DDM. But only in the hominid lineage did a universal grammar evolve to create higher-order KAs that we have conscious access to and experience as *thought*.

We have now reached the level where computational and systems neuroscientists are beginning to analyze the neuronal circuits that comprise Neocortex. The problem is that we are trying to reverse engineer a system with 20 billion neurons and trillions of synapses. While fMRI shines a spotlight on hotspots of neural activity, it reveals little of the neural code. Our current knowledge is so impoverished that we cannot say how or where day-long memory records (DMRs) are stored, or how neocortex generates sentences. A psychological approach can help define the problem, and we provide here a few highlights from the literature, but we have not a glimpse of the neural code that builds KAs in neocortex. The use of "neural words" (precursors of linguistic words) to route information about neocortex, and the use of autoassociative neuronal networks to store and link knowledge constructs, is one guess at a framework [2016 .ppt avail. on request].

But three things we can be sure of are: (i) it is not a trivial thing to add new knowledge constructs into our existing neocortical architectures; (ii) this is accomplished by heroic amounts of sub-conscious information processing; and (iii) making new neural/cognitive connections, by acquiring and applying knowledge and by thinking about what you do and do not know, is central to cognitive advancement.
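The autoassociative-network guess above can be made concrete with a toy Hopfield-style network: store a few patterns via a Hebbian outer-product rule, then complete a degraded cue. This is a minimal sketch of the storage-and-linking idea, not a claim about actual neocortical circuitry; the network size and the orthogonal patterns are my own choices for the illustration.

```python
import numpy as np

# Build three mutually orthogonal +/-1 patterns from a 16x16 Hadamard matrix
# (orthogonality guarantees clean recall in this tiny demo).
H = np.array([[1, 1], [1, -1]])
for _ in range(3):
    H = np.kron(H, np.array([[1, 1], [1, -1]]))
patterns = H[1:4]                        # three stored "knowledge constructs"

def train(patterns):
    """Hebbian outer-product learning; zero diagonal = no self-connections."""
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def recall(W, state, steps=10):
    """Iterate synchronous sign updates until the state reaches a fixed point."""
    for _ in range(steps):
        nxt = np.where(W @ state >= 0, 1, -1)
        if np.array_equal(nxt, state):
            return nxt
        state = nxt
    return state

W = train(patterns)
cue = patterns[0].copy()
cue[[3, 7]] *= -1                        # corrupt 2 of the 16 bits
restored = recall(W, cue)
print(np.array_equal(restored, patterns[0]))  # True: the attractor completes the cue
```

With patterns this orthogonal and only two corrupted bits, a single synchronous update already restores the stored pattern; real cortical storage would of course be vastly messier than this 16-unit sketch.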
How to Grow a Mind and other stories by Joshua B. Tenenbaum & Friends
Joshua Tenenbaum and colleagues have written papers that help to define the problem of Neocortical Information Processing. Taking into account Bayesian Inference and innate knowledge, these works consider induction, concept and number learning, and how humans are able to make inferences from very sparse experience. To my knowledge, artificial deep learning networks are far from accomplishing what neocortex can easily do, but if I am wrong, please send PDFs to email@example.com. [Other groups working in the concept-learning and neural-Bayes spaces: please send PDFs as well.]
While Bayesian Inference is central to the proposed solution space below, there are at present (methinks) no established neural representations or specific neuronal input-output functions documented in neocortex that perform Bayesian Inference (although see Spikes, 1997).
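For readers who want the flavor of the computation in question, here is a minimal decoding sketch in the spirit of Spikes: two hypothetical stimuli drive a model neuron at different Poisson firing rates, and Bayes rule converts an observed spike count into a posterior over stimuli. The rates and priors are invented for illustration; nothing here claims that neocortex actually implements this arithmetic.

```python
from math import exp, factorial

def poisson(k, rate):
    """Probability of k spikes given a Poisson mean rate per window."""
    return rate**k * exp(-rate) / factorial(k)

# Hypothetical tuning: stimulus A drives the neuron weakly, B strongly.
rates = {"stimulus A": 2.0, "stimulus B": 8.0}   # mean spikes per window (assumed)
prior = {"stimulus A": 0.5, "stimulus B": 0.5}

def decode(count):
    """Bayes rule: posterior over stimuli given one observed spike count."""
    joint = {s: prior[s] * poisson(count, r) for s, r in rates.items()}
    z = sum(joint.values())
    return {s: p / z for s, p in joint.items()}

print(decode(1))   # a quiet window strongly favors stimulus A
print(decode(7))   # a busy window strongly favors stimulus B
```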
Structured Domains and Inference, 2006 (by JBT, Griffiths and Kemp) argues that sparse inference, within the context of structured domains, powers the engine of human induction thereby allowing us to infer word meanings, causal relationships and unobserved properties.
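A toy version of the sparse-inference engine described there is Tenenbaum's "size principle," under which a few consistent examples rapidly concentrate belief on the smallest consistent hypothesis. The miniature hypothesis space and uniform prior below are my own simplifications, not the paper's model.

```python
from fractions import Fraction

# A tiny "number game": which rule generated the observed examples?
hypotheses = {
    "even numbers":   {n for n in range(1, 101) if n % 2 == 0},
    "powers of two":  {2**k for k in range(1, 7)},   # 2..64
    "multiples of 8": {n for n in range(1, 101) if n % 8 == 0},
}

def posterior(data):
    """Bayes rule with the size principle: P(data|h) = (1/|h|)^n if every
    example falls inside h, else 0; prior is uniform over hypotheses."""
    scores = {}
    for name, h in hypotheses.items():
        if all(x in h for x in data):
            scores[name] = Fraction(1, len(h)) ** len(data)
        else:
            scores[name] = Fraction(0)
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

for data in ([16], [16, 8, 2]):
    print(data, {h: round(float(pr), 3) for h, pr in posterior(data).items()})
```

One example leaves several hypotheses in play; after just three examples the small hypothesis "powers of two" takes nearly all the posterior mass, which is the flavor of induction from sparse data that the paper argues powers human learning.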
Rule-Based Concept Learning, 2008 (by Noah Goodman et al.) posits a "grammatically structured hypothesis space" within which Bayesian learning can operate. Ultimately this derives from pre-hominid neocortex, and we anticipate that the universal grammar of humans is an extension of the universal physics of the world, a physics encoded as innate knowledge within the auditory, visual, motor and relational-mapping systems of mammalian neocortex. Note that all grammatical elements are derived from intrinsic aspects of the real world (location, speed, color, intensity, shape, movement, gravity, distance, associations and so on) and that elements of such encoding are evident as far back as zebrafish. The beauty of human language, in our vastly expanded neocortex, is our ability to symbolize both the concrete and the abstract and to perform manipulations within a symbolic domain of limitless complexity.
How to Grow a Mind, Science, 2011 (by JBT and colleagues) drew considerable attention to their earlier works, and vividly illustrates our sparse-inference capabilities. This should be viewed as a throwing-down of the gauntlet to the systems neuroscience community. Whether you think the idea of neocortex performing Bayesian computations is genius or preposterous, there is no denying that human brains can do what they are observed to do!
One and Done: Extreme Decision Making, 2014 (by Vul, Goodman, Griffiths, JBT). Human judgements are made from just a few samples of a probability distribution. The authors consider sample-based Bayesian approximations, but I wonder if the actual neuronal algorithm might be COMPLETELY unrelated to Bayesian Inference? I'm venturing into the deep end here and would appreciate expert guidance.
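The sample-based approximation the authors consider can be simulated directly: an agent answers a two-alternative question from k posterior samples, versus a fully Bayesian agent that always picks the more probable option (expected accuracy E[max(p, 1-p)] = 0.75 when p is uniform on [0,1]). The task setup below is my own toy version, not the paper's experiments.

```python
import random

random.seed(1)

def trial(k, p):
    """One decision: posterior probability that option A is correct is p.
    The agent draws k posterior samples and answers with the majority vote
    (the 'decide from a few samples' strategy)."""
    votes = sum(random.random() < p for _ in range(k))
    choice_a = votes * 2 > k or (votes * 2 == k and random.random() < 0.5)
    correct_is_a = random.random() < p
    return choice_a == correct_is_a

def accuracy(k, trials=100_000):
    """Average accuracy over trials with p drawn uniformly on [0, 1]."""
    return sum(trial(k, random.random()) for _ in range(trials)) / trials

for k in (1, 5, 25):
    print(k, round(accuracy(k), 3))
```

A single sample already yields about 2/3 accuracy against the 0.75 optimum, and a handful of samples closes most of the remaining gap, which is the paper's point: a few samples buy nearly all of the achievable reward at a fraction of the computational cost.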
Learning Numbers by Bootstrapping, 2010 (by Piantadosi, JBT, NDG) argues that number-words can be learned by applying statistical inference to a powerful representational system (by implementing Carey's (2009) theory using the lambda calculus). Addresses critiques of bootstrapping by Gallistel, Rips and others.
Incomplete Lecture. A slide set linking Bayes Theorem (from Spikes), Coding of Linguistic Words (Stanislas Dehaene) and How to Grow a Mind. Feel free to use this in your lectures, and if you care to add to it, please send me an update for attribution. FILE IS LARGE; if you would like a PDF version, just email me.
NEURO MAZE GAMES are now available at www.mazefire.com
VIDEO: currently on the home page at www.mazefire.com, where it works much faster than this link; it may be replaced by a NEW video soon. If you want to see the original how-to-play video, here it is:
How-to-Play DM Games
This video lasts about 6 minutes. For some reason it opens immediately in CHROME but can take up to 20 seconds to open in IE. ???