
Written by Andrew Neff

March 2020

The One Neuron Project

Europe squandered much of a billion dollars on a ludicrously overpromised brain project,

now it’s America’s turn.

or, a moonshot exploring the human brain at single-cell resolution

A Moonshot for Neuroscience

Summary: Animal models and low-resolution human brain recording technologies haven’t been very useful, but they make up the majority of neuroscientific research. In this article, I propose that much of this billion-plus-dollar U.S. research effort be redirected toward technology development - specifically, non-invasive or otherwise safe human-brain recording technologies.

​

 

In the early 2000s, the European Union invested a billion dollars in a moonshot for neuroscience. By the end of the decade, project director Henry Markram tantalized, scientists would simulate the entire living human brain. The whole thing. On a cutting-edge supercomputer running custom-built, paradigm-rupturing software, each of our hundred billion neurons would be reduced to a silicon abstraction, and we’d embark on a journey inside ourselves, conducting experiments the likes of which we’ve never seen. Soon, we’d begin discovering cures for Alzheimer’s and depression, synergizing our consciousness with cognitive-computational enhancements, gratifying our every need - brain simulation was the key to it all, and the revolution was palpable.

​

More than ten years later, we now know for sure that the project was not going to work. Controversially, some of the scientists did manage to simulate a portion of a mouse brain, but that’s it - a small chunk of a small brain.

​

However, no need to despair, because Markram’s unrealistic project is not the only moonshot for neuroscience. Governments from around the world are investing massive resources in cutting-edge brain research.

​

Some, like the European Brain Project, were unrealistic - not quite a moonshot, more like a Pluto-shot, or an Alpha-Centauri-shot. Other projects, like the U.S. BRAIN Initiative, are vague - not a moonshot but a somewhere-in-space-shot. Projects like this will undeniably contribute to advances, but they are largely indebted to the status-quo assumptions I’ll mention shortly. At the same time, a vague moonshot lacks the capacity to garner attention and muster resources the way the European project did. Maybe even more importantly, a vague moonshot will not, as JFK said about the real moonshot, “serve to… measure the best of our energies and skills.” Say what you will about the European Brain Project; at least it offered an explicit set of criteria for deciding, ultimately, how terribly the project would fail.

​

Therefore, I propose that we Americans implement our own wildly premature neuroscience project. However, this time, we’ll learn from our mistakes. We won’t be so foolish as to try to simulate the brain, and we definitely won’t set our sights on uploading consciousness, just... no. But we also need not set the mark at a cure for Alzheimer’s or mental illness, at least not explicitly, not yet. One thing the Human Genome Project taught us is that scientific goals are often achievable, while medical or societal advancements, that is, actually useful things, are more difficult to predict. Our moonshot should aim for science, not medicine - not mental health, just the brain - and then, maybe, we can start talking about applications.

​

So without further ado, my moonshot is this: we develop the technology to explore the human brain at single-cell resolution.

​

In the realm of mental health research, the neuroscientific establishment needs to be radically shaken up. The majority of our energy is spent wandering down dark alleys that are starting to look more and more like dead ends. We may be wasting enormous sums of money by following the status quo - and denying bright young neuroscientist-bloggers their chance to do interesting and impactful research.

​

And so, again, without further ado, the dead-end alleys we’re walking down are these: using animals to model human psychology, and relying on low-resolution human-brain-recording tech. They’re big alleys, like really big slash almost-all-encompassing alleys.

​

The moonshot has two phases. The first is a Saturday Night Massacre, where I disenfranchise my political opponents by telling hundreds or thousands of neuroscientists that their projects will no longer be funded. Second, we start bringing more engineers and physicists and computer scientists into the fold. A lot more - maybe almost exclusively technical folk. Their task, over the next, say, 20-50 years, will be to develop increasingly precise technologies for studying the living human brain. Finally, after this prolonged stage of tech development, we enter the post-moonshot world, where everyday neuroscientists with theories of consciousness and practical applications in mind can start digging back into the data, trying to make sense of our newly compiled, incomprehensibly complex datasets. And if we fail again, well, we can revisit this conversation then.

​

So below, a fantasy proposal for mental health research’s central organizing principle for the mid-21st century - including a complaint about some of the basic assumptions guiding modern neuroscience, a portion of neurotechnological speculation, and a brief cynical diatribe on the unlikelihood of political implementation.


Part I - Abandon Animal Models, Mostly

Gnawing at the ankle of animal research in mental health are major philosophical issues. Are animals capable of complex psychological experiences in the same way we humans are? If animals are, in fact, complex beings, are scientists capable of measuring those experiences in a rigorous way? Alongside philosophical questions, nipping at the other heel, frustrated scientists and mental health practitioners continue to ask what good it’s done us - where’s the practical value?

​

So much of psychological dogma relies on animal research. In the early days, Ivan Pavlov used dogs as he outlined the principles of conditioned learning. Harry Harlow worked with monkeys as he performed some of the most devastatingly sad experiments on parent-child nurturing and attachment. Today, we use worms, flies, lobsters, birds, rodents, monkeys, and whatever else suits our particular needs. We do it, when it comes down to it, because their brains are available - because there’s no strong political objection to raising these animals in captivity, depriving them of most rights beyond survival, conducting invasive procedures, then killing and dissecting them, or vice versa. We’ve undoubtedly learned, but from both an ethical and a practical standpoint, have we learned enough to justify continuing down this path?

​

<><-><><-><><-><><-><><-><><-><><-><>

​

The first premise of animal research in mental health is that animals are capable of complex psychological states similar to human mental illness. Sure, there are good reasons to believe that some animals are complex - watch an episode of Planet Earth or read anything by Robert Sapolsky and see for yourself some of the complex and interesting social behaviors that primates engage in. But how about rodents? How about lobsters and flies and worms (or the fascinating-yet-horrifying lab-grown brain organoids, for that matter)? Psychiatric animal research relies on a centrally unknowable premise: that animal psychology is analogous to human psychology in the ways that we care about - that something about human cognition and emotion is shared throughout the animal kingdom. It may be true; in fact, personally, I’d bet it is, at least partially. Then again, when it comes to the real meat of mental illness, our resemblance to rodents may barely hold at all. Without concrete access to an animal’s mind, it’s hard to justify one position or the other.

​

Nonetheless, scientists continue conducting animal research in an effort to understand human psychology. We run rats through mazes to test memory and cognition, we offer them sugar water and cocaine to see how much they want things, we trap them in inescapable water chambers to find out who will succumb to despair. Some tests seem to offer a clearer analog to human psychology than others, and consequently, some are met with a much larger skeptical backlash (like the inescapable-water-chamber forced-swim test as a measure of depression).

​

With this skepticism and widely accepted philosophical concerns in mind, establishment scientists are not making the claim that animals - any animals - are actually experiencing the same thing as human mental illness. There’s too much complexity in the way a person feels anxiety, or in the set of experiences that can lead to an addiction. What many scientists do believe is that there are some parallels between animals and humans.

​

For example, maybe animals and humans have some similarities in the way they respond to stressful life events. In fact, advocates of this position argue, we share some stress biology. A wide range of animals, including rodents, reptiles, and birds (but not worms or flies), respond to physical stressors by secreting a molecule called epinephrine, followed by some form of another molecule called cortisol.

​

We do share some common biology, but currently measurable biology is only one part of the picture. What about the rest of our stress response? What about our complex social and behavioral responses to stress? How about the subjective feelings, skeptics challenge? Characterizing the human stress response is not a quick and easy thing - it’s more than just cortisol - and we don’t have the technology to fully understand the neurobiology of what it is to be a human. Animals may share some biology with humans, but surely not all of it, and scientists are not confident about where to draw that line.

 

<><-><><-><><-><><-><><-><><-><><-><>

​

With all that in mind, not all animal research in mental health has been useless. There are examples of at least semi-rationally designed psychiatric drugs that possibly wouldn’t have found their way into our medicine cabinets without, first, molecular biology and, second, testing in animal ‘models’ of mental illness. Look no further than Varenicline as a treatment for tobacco addiction, or Brexanolone for postpartum depression.

​

At the same time, you might also look at the actual treatment rates for Varenicline or Brexanolone, which show that neither drug has been overwhelmingly successful. You might also look at the number of drugs that we once thought were effective, but ultimately, probably weren’t that useful. You might also look at psychiatry’s diagnostic manual, the DSM, which only includes biological diagnostic criteria for disorders related to sleep - all other psychiatric diagnoses are based exclusively on a patient’s observable behavior and self-reporting. Animal models of mental illness aren’t useless, but they also haven’t delivered us a bounty in terms of identifying diagnostic tests or new targets for treatment, and that’s putting it generously.

​

<><-><><-><><-><><-><><-><><-><><-><>

​

There’s no open-and-shut case that psychiatric animal research should be abandoned. At the same time, as animal models have yet to prove very useful in promoting mental health, perhaps, maybe, we ought to consider a future where this basic fact remains unchanged.

 

Either animals do experience aspects of mental illness, and therefore they are useful to research, or they don’t, and they aren’t. Without the ability to delve into the animal mind in the same way that we sort of can with a human (by asking them), the only way we can try to understand this parallel is by digging deeper into the brain. Which segues into part II of my complaint, because probing further into the brain remains a massive technological challenge. Techniques like fMRI are providing scientists with our most detailed glimpses so far into the functioning human brain; however, it’s not at all clear that the information we get from this method is good enough to help us really understand what it is to be a human. Without the technology to assure us that our neurobiology is truly shared, and therefore that our psychology is too, we just can’t be confident in the usefulness of animal research.

 

Part II - Enough Low-Resolution Brain Imaging

There’s an active debate in neuroscience about how we should talk about the brain - that is, about what level of detail we need. In a sense, the brain is really just a massive constellation of atoms, but it’s also just an assembly of molecules, and also just a collection of cells, or neural circuits, or large-scale brain regions. When trying to understand psychology and behavior, we can choose to look at any of those levels for a possible explanation.

​

When it comes to the human brain, scientists almost exclusively use a technology called fMRI. With this method, the brain is broken up into cubes - three-dimensional pixels - and scientists determine an average measurement of brain activity within each cube. fMRI doesn’t show us molecules or cells, or really even neural circuits; instead, it offers us chunks of tissue - something like a million brain cells per cube.
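For the programmatically inclined, here’s a minimal, purely illustrative Python sketch of the problem (the neuron count and activity values are made up, not measured): two very different patterns of single-cell activity can produce exactly the same cube-level average, which is roughly all an fMRI measurement gives us.

```python
import numpy as np

rng = np.random.default_rng(42)
n_neurons = 1_000_000  # roughly the number of neurons attributed to one fMRI cube

# Pattern A: every neuron is moderately active.
pattern_a = np.full(n_neurons, 0.5)

# Pattern B: half the neurons are highly active, the other half are silent.
pattern_b = np.zeros(n_neurons)
pattern_b[rng.permutation(n_neurons)[: n_neurons // 2]] = 1.0

# An fMRI-like readout is (very roughly) an average over the whole cube,
# so these two very different activity patterns are indistinguishable.
print(pattern_a.mean(), pattern_b.mean())  # both come out to ~0.5
```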

​

Take an analogy: if you watch something on a modern super-high-def TV, you see a continuous picture, more or less the same as real life. If, instead, you tried to watch something on, say, a TI-83 calculator, or an old Game Boy, you’d miss out on a lot of detail. Without improvements in camera and video-display technology, the picture is grainy, and as a result, there are details we’re going to miss. If TI-83 calculators got the Twilight Zone, we could make out a stick-figure human form, but we’d miss the expression of cautious intrigue on Rod Serling’s handsome face - we’d miss his humanity. When fMRI came out, it revolutionized the neuroscientific landscape. To a neuroscientist in the 1950s, this new technology probably would have sounded like an unreasonably speculative episode of Black Mirror (or, I guess, the Twilight Zone). To a neuroscientist of the 2020s - that is, one who isn’t actively using this technology - it’s more like a documentary about Theranos: technological plausibility backed by massive financial investment, paired with one unfulfilled promise after another.

 

<><-><><-><><-><><-><><-><><-><><-><>

​

Without a seriously better alternative, fMRI has been enormously popular, attracting a massive allocation of the global mental health research budget. On PubMed, there are nearly half a million published peer-reviewed articles on the subject; in the U.S., there are almost 2,000 actively funded research projects that include the word fMRI in their description, at a cost of almost a billion dollars per year (which, to be fair, are not all projects using fMRI).
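If you want to sanity-check a figure like this yourself, here’s a rough Python sketch that queries PubMed’s public E-utilities search endpoint. The query term ("fMRI") is my assumption, and the count you get back will depend on how and when you search, so don’t expect it to exactly reproduce the numbers above.

```python
import json
import urllib.parse
import urllib.request

term = "fMRI"  # broader or narrower query terms will give very different counts
url = (
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
    + urllib.parse.urlencode({"db": "pubmed", "term": term, "retmode": "json"})
)

# The esearch endpoint reports, among other things, how many records match the query.
with urllib.request.urlopen(url) as response:
    result = json.load(response)

print(f"PubMed records matching '{term}':", result["esearchresult"]["count"])
```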

​

Despite its popularity, fMRI has made very few, if any, contributions to the treatment of mental illness. The current American psychiatric diagnostic manual doesn’t include a single fMRI-based criterion, not one - not to delineate subtypes of depression, not to tell us whether a patient would benefit from talk therapy or antidepressant drugs, none of it. In terms of mental health - like, basically all of mental health - there is no widely accepted, or perhaps even narrowly accepted, clinically useful application of fMRI.

 

<><-><><-><><-><><-><><-><><-><><-><>

​

In reality, scientists study what’s feasible to study, which, in the case of human neuroscience, is somewhat-large brain regions. But if the fMRI-pixel level of understanding is not helping us to predictably shape human psychology, perhaps there is another level of analysis that would be more useful.

​

I don’t have a really strong argument for single cells. I mean, cells are intuitively recognizable, discrete anatomical structures. They’re physically bound by a membrane, they’re isolated in space, and we know a ton about how they work. Unlike the ambiguous million-neuron tissue chunks provided by fMRI, single cells just feel conceptually homogeneous.

​

At the same time, you could make this same argument about other aspects of the brain. For example, a structure called a ‘cortical column’ - a small, organized group of cells - is sometimes considered a fundamental unit of brain computation. Then again, based on fMRI data, we often talk about large-scale brain regions like, say, the amygdala or the prefrontal cortex. These regions weren’t arbitrarily defined; there are some clear anatomical distinctions in the ways cells organize themselves.

​

On the other hand, you could go the other way too - maybe cells won’t be detailed enough. Maybe we’ll need to know about all of the brain’s connections, or every single brain chemical. Maybe we’ll need better time-resolved data - not every second, but every millisecond, or finer.

​

I’m not a single-cell chauvinist; there are no guarantees that that’s where our answers are waiting. My argument is mostly that we need to do something different. We tried the easy stuff, now we’ve tried the hard stuff, and in the future, we may need to try something that can offer more detail - the really-really-hard stuff - because it will give us more opportunities to find relationships between the brain, psychology, and behavior.

 

<><-><><-><><-><><-><><-><><-><><-><>

​

Basing a moonshot on single cells has practical value because the target is concrete. You can look up at the moon and say we should go there - you can look at a neuron and say we should measure that. There’s no good way to tangibly envision a vague chunk of a million neurons. Neurons provide an explicit goal, one that already exists in the public consciousness, and one that can readily serve JFK’s aim of measuring the best of our energies and skills.

​

Like animal research, fMRI provides us with a murky abyss. The prospect of developing more detailed technologies is not at all certain, but the prospect of discovering treatments at our current level of resolution isn’t certain either. At some point, though, when a technology continuously fails to deliver practical outcomes, it’s worth asking whether enough is enough.

 

Part III - A Glimpse at the Future

Blah blah there’s no good way to predict blah blah the technological future. With this caveat in mind, perhaps a little speculation on what the rocket ship would look like - roughly diagramming the flight path or the space-suit design - could be interesting. So, in the spirit of presenting at least one plausible direction for the future, the new moonshot can follow one of two paths: either we find a way to make medically invasive technology safe, or we discover new non-invasive technology.

​

The future of invasive technology is fun to think about, and perhaps not too crazy to imagine, because we already have this technology for animal research. You might think of something like Elon Musk’s Neuralink project, where flexible electrodes permeate your brain. Maybe, a few generations of advancements later, you could envision a hair-net of carbon-something-nanotube electrodes weaving their way through pores in your skull, carefully meandering around blood vessels, squeezing their way between the outer membranes of brain cells, and, ultimately, detecting the electrical fluctuations of neighboring neurons. The navigation system would be something to behold, as would the electrode engineering, but in fifty or a hundred years, is this an option? Elon?

​

Or perhaps you could take a genetic engineering approach. For example, we have developed genetic implants that literally light up every time a neuron fires - it’s a method called calcium imaging, and it has produced some seriously captivating, albeit incomprehensible, footage of small brains in action. It doesn’t seem to be acutely lethal, but the risk of genetic engineering is not trivial. And if tinkering with your genes isn’t invasive enough, there’s more to this method than just the genetic implants, because you need some way to visually record all this action. In rodents, to watch these genes light up, scientists create what is called a cranial window. That is, a skull window - I dare you to google that. In general, can we ensure the safety of full-brain gene editing? In particular, would these light-up genes be safe? Perhaps, maybe, could we find an alternative to the skull window?
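For a sense of why even this captivating footage is an indirect readout of firing, here’s a minimal Python sketch of a commonly used simplification: model the fluorescence trace as the spike train convolved with a slowly decaying calcium kernel, plus noise. Every number in it (frame rate, spike probability, decay time, noise level) is illustrative, not taken from any particular indicator.

```python
import numpy as np

rng = np.random.default_rng(0)

frame_rate_hz = 30.0              # assumed imaging frame rate
duration_s = 10.0
n_frames = int(duration_s * frame_rate_hz)

# A sparse, made-up spike train: 1 means the neuron fired during that frame.
spikes = (rng.random(n_frames) < 0.02).astype(float)

# Each spike produces a fluorescence bump that fades over roughly a second,
# far slower than the electrical event that triggered it.
decay_s = 1.0
t = np.arange(0, 5 * decay_s, 1 / frame_rate_hz)
kernel = np.exp(-t / decay_s)

fluorescence = np.convolve(spikes, kernel)[:n_frames]
fluorescence += rng.normal(scale=0.1, size=n_frames)  # measurement noise

print(f"{int(spikes.sum())} spikes became a smeared, noisy trace of {n_frames} samples")
```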

​

New non-invasive technology is a lot harder to imagine; it’s a field of research that may be waiting on a paradigm-changing eureka moment. As a non-engineer, the prospect of remotely monitoring brain cells, either by tracking electrical or molecular functioning, at sub-millimeter resolution, is, for the most part, unfathomable. Considering the enormous value of MRI, our current go-to non-invasive imaging method, if we developed a next-generation version with substantially better resolution, it wouldn’t just be a neuroscientific innovation; it would radically restructure healthcare as we know it. But what would the technology be? Is there an engineer or physicist out there with a grand plan for the scalable remote monitoring of voltage at micrometer resolution? How should I know?

​

Then again, who really does.

 

Part IV - Implementing the Plan

Proposing that we put an end to most of modern neuroscience sounds radical, and sure, it is. But a lot has changed since Henry Markram first convinced the European Union to invest in his moonshot. Among these changes, a skeptical woke-neuroscientist movement continues to express itself. Twitter accounts, blogs, and popular non-fiction books have received huge amounts of attention among scientists based on a similar premise: despite much “promise”, neuroscientific research has failed to deliver things of practical value. The idea of restructuring the foundations of mental health research sounds radical, but a growing movement accepts the need for things to be shaken up. In fact, neuroscience’s attention toward fMRI is already beginning to turn. According to the scientific publication database Web of Science, since peak interest in 2013, the number of neuroscience publications mentioning the word fMRI has decreased by over ten percent as of 2019.

​

At the same time, despite this skepticism, it may not be reasonable to expect rapid change. I imagine that many neuroscientists, most of whose work I’m suggesting is not very useful, upon reading this, would mutter to themselves something along the lines of this plan being staggeringly naive, fraught with cringeworthy angst and the breathtakingly transparent bias of a young failed scientist lashing out at an establishment that rejected him. Fair enough, perhaps.

​

But nonetheless, there is a discrepancy between the skepticism expressed by so many scientists and the kind-of-still-status-quo research we’re actually doing. This suggests to me that neuroscientific mental health research is stuck in an eddy, or a vicious cycle. We all seem to recognize the substantial limitations of the approaches we take, but at the same time, in general, we aren’t trained or qualified to approach research in other ways. To argue for change is to argue ourselves out of a job.

​

There are two ways to improve scientific knowledge: we can move forward, or we can combat the forces that are keeping us from moving forward. In mental health, we can simply do more good research, or we can stop doing the research that’s holding us stagnant.

​

I am one of those scientists who can’t ignore the possibly damning limitations of modern research but isn’t qualified to approach neuroscience in other ways. I, however, am choosing the pathologically self-destructive route of arguing myself out of a job. In doing so, I hope that we’re on a path toward reconsidering our national priorities in mental health research, and more importantly, I hope that we can grapple with the prospect that animal models and not-very-high-resolution neurotechnologies may only play a small role in improving mental healthcare.

 

<><-><><-><><-><><-><><-><><-><><-><>

​

Not everyone who will object to this plan, though, is selfishly looking out for their own job security. People like Josh Gordon, director of the U.S. National Institute of Mental Health, call for a balance between basic science, like the development of innovative neurotechnologies, and currently useful research, like policy research on expanding access to mental healthcare. From this perspective, it would be unwise to devote most of our resources to technology development. Of course, what qualifies as short-term impactful research is a difficult question - if it relies on animal models or fMRI, then from a historical perspective, we may be better off focusing on very basic science.

​

Other scientists fear large, centralized, politically driven science projects. Neuroscientists pride themselves on their ability to flexibly and independently pursue their interests. Centralized priorities, many argue, would take power out of scientists’ hands and give those decisions to people who are less qualified to make them. I’d argue, however, that scientists’ hands wouldn’t be any more tied than they currently are - for example, to basing their research on scientifically unvalidated psychiatric diagnostic categories. The big-picture priorities would be different, but the research need not, in principle, be conducted by teams any larger than those currently studying neuroscience.

 

<><-><><-><><-><><-><><-><><-><><-><>

​

While we’ve never given up on animal models, we have substantially decreased our commitment to another neurotechnology. For a time in the mid-20th century, a brain recording tech called EEG was neuroscience’s cutting-edge approach. It’s still around today, and it gives us some information that fMRI can’t, but it has been largely superseded because we now have access to better-resolution images. If fMRI is a TI-83 calculator, EEG is a stock-market ticker display or a Lite-Brite toy - the resolution is very coarse. But we didn’t really give up on EEG until we had alternatives. According to Web of Science, in the 1950s, up to 30% of neuroscience papers included the word EEG. That number steadily decreased until the ’90s and 2000s, when merely 5% of neuroscience papers included the term. And according to Google’s Ngram book-mining tool, starting in 1989 - right around when fMRI was invented - the frequency of the word EEG in books began to quickly tank.

​

But was it right for neuroscience to wait for a new technology before switching? Might it have been smarter if we stopped earlier, perhaps once we realized that EEG had very little clinical usefulness? Might some of the resources used in clinical studies on EEG have been better allocated elsewhere?

 

<><-><><-><><-><><-><><-><><-><><-><>

​

It’s not at all straightforward to know when to stop. Perhaps the most difficult part of the problem is that the people who are best suited to answer this question, the scientists who do the research, have the most to gain by assuring us that the research is still promising. Nonetheless, it seems that the pace of scientific advancement could be accelerated if we could find a better way of turning back before we get too far down alleys that turn out to be dead ends.

​

Admittedly, after we leave one alley, we’re going to choose another one that’s just as dark, if not more so. And maybe that will be a dead-end too. But in my view, the one we’re in has been fairly well explored. We sent search teams to feel the walls, map out the dumpsters, and climb the fire-escapes. They shined their flashlights, and without really seeing an opening at the end, they told us things were looking promising, and that we should send more teams in. 

​

As we re-prioritize our research interests, we’ll need political support to lead us back to the main street, and engineers and physicists to bring us down the next alley. For a period of time, those who specialize in mapping out fire-escapes and dumpsters won’t have a role, and the political power of this constituency may be the biggest obstacle standing in the way of real change. 

​

But this raises the question of which dark alley to start venturing down. Nobody knows what level of detail will be required to understand the brain in a way that can lead to a meaningful improvement in mental healthcare. But I think the time is ripe for a grand and specific project for exploring the human brain, and I believe that approaching the human brain at single-cell resolution could be an exciting, fruitful, and concrete challenge. The mid-21st century has an opportunity to leverage popular enthusiasm for neuroscience, and the One Neuron Project could be the central organizing principle that carries us through.
