Year of the Research Domain Criteria
After no fewer than three times placing Infinite Jest back on my bookshelf and vowing never to turn back to it again, I was immersed in a scene where LaMont Chu, aspiring tennis prodigy, approaches Lyle, the surreally creepy but nonetheless wise guru, for advice. It’s explained that LaMont has become obsessed with the idea of becoming a famous professional tennis player, and that deep down, he’s really looking forward to how good it would feel to be envied by younger, aspiring tennis prodigies, like his current self. Lyle carefully advises LaMont that what he’s aspiring to doesn’t exist. He explains that being recognized for impressive accomplishments, and becoming the object of other people’s envy, won’t offer him any lasting peace or happiness. And therefore, although it goes unsaid, LaMont ought not let his aspirations of salvation-through-fame consume him.
You might imagine how an idea like this could lead to a psychiatric illness like Depression. When someone’s goals or expectations are so lofty, they’re bound to think of their current achievements as worthless. This person might start turning away from things that used to bring them happiness, on account of all the time they spend thinking about and pursuing something they imagine is more valuable, like tennis-fame. They’d probably change their eating and sleeping habits, again, to accommodate their obsessive goal-pursuit. One or two more symptoms, over the course of a couple weeks, without any other major medical issues, and there they are, fully, certifiably, psychiatrically diagnosable.
But there’s a key difference between the advice Lyle provides LaMont and the perspective of psychiatric researchers. To Lyle, the advice had little to do with the symptoms, little to do with the emotional numbness LaMont was experiencing. It was all about that central motivation, which, once understood, puts the rest of LaMont’s behaviors in perspective. To some, maybe most, who do psychiatric or neuroscience research, an individual's central motivation rarely matters. We might talk about the existence of happiness, but not the particular achievement or obstacle-removal that it’s tied to. And maybe psychiatry isn’t wrong for being cautious about wading into the Freudian territory of seeking core motivations. But on the other hand, psychoanalytic theory (as opposed to practice) fell out of fashion not because it was proven wrong, but because nobody could prove it right, and, well, also, probably, people didn’t appreciate being told they secretly want to do weird and horrific things to their parents. But nonetheless.
Some people may decide to study neuroscience because they think that modern research can provide insight into what we truly want, or who we truly are, deep down. And even if the non-science world is full of great ideas, and excellent examples, like LaMont Chu, these ideas would never allow anyone to be completely sure they’d really achieved self-discovery, because there’s no objective standard.
So neuroscience. If there’s a brain signature for being obsessed with fame, get scanned, and see paragraph 1 for advice. But there isn’t; there isn’t a brain scan that can tell you these-are-your-deepest-darkest-issues, and these-are-the-things-you-need-to-do-to-get-over-them.
What Lyle says to LaMont all sounds like pretty sound advice, but a fictional anecdote shouldn’t be considered the Truth that psychiatry ought to rigidly strive for. But it could, and maybe should, be considered a plausible description of a person, and a relatively uncontroversial example of the advice, or treatment, we could consider providing. And maybe, to approach the problem of human suffering in the context of a descriptive account of a human sufferer, for now, is the only way we have to evaluate whether our theories make any sense. In any case, scientists are far from reaching a consensus on what bits of psychology make up the building blocks of the brain, so why not be willing to take on some alternative perspectives at this early point in our journey?
YEAR OF THE US GOVERNMENT'S PLAN TO SAVE PSYCHIATRY
The RDoC, David Foster Wallace, and a personal reaction to governmental psychiatric science
Written & Illustrated by Andrew Neff
The US National Institutes of Health, the NIH, decided that psychiatry,
in its current state,
was getting in the way of our ability to understand and treat mental illness.
they began developing a plan to re-orient NIH funding priorities
around an alternative framework, called the RDoC.
In the old-world, crusty psychiatric diagnoses
were disparaged for how out of touch they were with real human experience.
But now, as we steadily approach the middle portion of the 21st century,
might a real science of humanity be on the horizon?
Will the new framework help bring institutional psychiatric conceptions of humanity
more in line with reality?
As an illustrative case, new-age psychiatric theory is compared to a concrete example,
a minor character from David Foster Wallace’s Infinite Jest.
What’s wrong with the psychiatry we have, briefly
As far as medicine goes, it typically starts when a patient arrives at the doctor’s office presenting a set of symptoms. The doctor will then order a series of tests, say a blood draw to find out if there’s a virus, or an MRI to see if there’s any anatomical irregularity. Then, based on these objective tests, the doctor will determine what ails the patient and recommend a treatment. Psychiatry, on the other hand, is missing that second step. When a person arrives at the psychiatric clinic, they present their symptoms; then, based on those symptoms alone, the psychiatrist will arrive at a diagnosis and recommend a treatment.
There’s not necessarily anything wrong with not relying on biological tests, but there is something wrong with it if the diagnoses we have aren’t helping researchers or doctors identify appropriate treatments. Arguably, in psychiatry, all pharmacological treatment in use today was discovered by accident, that is, the drug was being used for something else, and doctors happened to notice a consistent psychological side-effect. That doesn’t mean that medication isn’t useful, for example, it can discourage people with Bipolar Disorder from making harmful life-altering decisions during manic episodes. What it does mean is that science can’t really take too much credit for these discoveries, and that says something about the state of our knowledge. And when you consider other diagnoses, like Depression and Anxiety, existing treatments tend to offer little comfort at the cost of a wide range of side-effects. So this lack of effective treatments, coupled with the lack of identifiable biological markers underlying the disorders, is contributing to a widespread skepticism about the reality, and more importantly the utility, of the existing psychiatric system.
And there are two sides to the who-to-blame coin: the psychology, or the technology. It might be that our psychiatric labels are just fine, that they characterize human experience perfectly, and it’s just that our technology isn’t yet sophisticated enough to detect the real brain-substrates of the diseases. And in a sense, fair enough; the only thing neuroscientists distrust more than psychiatry is research based on brain imaging, including neuroscientists who do brain imaging. On the other hand, it might also be that the labels we’re using just don’t really coincide with human experience. No matter how exciting our technologies get, we’re never going to find the brain-substrate of a construct if it doesn’t correspond to any psychological realities.
So, the NIH says, it’s time for a grand transformation, where we start re-thinking the way we’re doing research on human psychological welfare. The old diseases are out, the old methods are out, and with a 1.6 billion dollar budget to spend on mental health, you’d be forgiven for getting a little excited about the prospect that one day, maybe in our lifetimes, people who are suffering from debilitating Depression, or consuming Anxiety, might be able to consult their psychiatrist, have their brain tested, and be provided with a treatment that’s based on sound theory, and is likely to help.
At the same time, a willingness to move beyond crusty and rejected old systems is only part of what needs to be done. Re-allocating huge amounts of resources involves de-prioritizing one thing, and prioritizing something else. Which gets to the question here. What is the US government’s plan to save psychiatry from widespread societal rejection, and medical not-very-usefulness similar to what it suffered in the time of Freudian psychoanalysis? And how can we re-orient research in a way that ensures new psychological labels and definitions are actually in line with biological realities?
Below, the Pillars of the RDoC, the US government's plan to save psychiatry.
Some report on what scientists report.
We're here for the real thing.
Independent - design heavy - science journalism.
Support us today on Patreon.
The Pillars of the RDoC
The RDoC, the Research Domain Criteria, is a framework designed to orient clinical research in neuroscience towards creating a new psychiatry. More or less, this is what the RDoC looks like.
Physically, the RDoC is a matrix - along the first column is a list of psychological constructs, like Reward Anticipation and Visual Perception. The rest of the columns represent different features of the brain that underlie those constructs, or observable aspects of behavior related to them. In a sentence, the RDoC is a systematic framework for thinking about psychiatric research, in an attempt to promote studies that integrate fundamental brain-biology with very basic psychological constructs. In seven pillars, the RDoC is as follows:
(these pillars are our interpretation of the pillars outlined by important RDoC people, Bruce Cuthbert and Thomas Insel, in a 2013 paper)
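To make the matrix’s shape concrete, here’s a minimal sketch of it as a data structure. The two constructs and the column names echo real RDoC terminology, but which specific measures sit in which cells is our own illustrative guess, not an official mapping.

```python
# A minimal sketch of the RDoC matrix as a table: each row is a
# psychological construct, each column a "unit of analysis". The
# construct and column names echo RDoC terminology, but the specific
# entries in each cell are illustrative guesses, not an official mapping.
rdoc_matrix = {
    "Reward Anticipation": {
        "genes":       ["DRD2"],                  # hypothetical candidate
        "circuits":    ["ventral striatum"],
        "behavior":    ["progressive-ratio task"],
        "self_report": ["anticipatory pleasure scale"],
    },
    "Visual Perception": {
        "genes":       [],
        "circuits":    ["primary visual cortex"],
        "behavior":    ["contrast detection task"],
        "self_report": [],
    },
}

def cell(construct, unit):
    """Look up the measures filed under one construct-by-unit cell."""
    return rdoc_matrix[construct][unit]

print(cell("Reward Anticipation", "circuits"))  # ['ventral striatum']
```

The point of the shape is that research questions live in the cells and their connections, rather than radiating out from a disease label.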
Pillar 1 - Research needs to take a bottom up approach to defining psychiatric illnesses. Instead of using bulky, complex, multifaceted diseases as our starting point, the RDoC will concern itself with simple behaviors. Only once we understand the real simple stuff can we start working our way up to complex diseases.
In the old-world, starting with just a dash of intuition, diseases were defined by observing behaviors, and clustering co-occurring behaviors into disease bins. People who had hallucinations also tended to be delusional and exhibit erratic behaviors, and so they were schizophrenic. People who experienced traumas were also hypervigilant, and prone to re-experiencing that trauma, and so had PTSD. But the whole isn’t necessarily greater than its parts - just because people who are unhappy also have weird eating and sleeping patterns doesn’t mean there’s such a thing as unhappy-eating-sleeping disorder. That these things tend to exist together doesn’t necessarily mean they’re part of a larger whole. A good idea maybe, a fine basis for a hypothesis, but not a logical necessity.
Although, when it comes to biological psychiatry, we’re wading into swampy territory, and we’ve gotta start with some prospective psychological constructs. It’s just that, if the RDoC had to place a bet, they’d bet we’re more likely to discover how the brain works when we study it in terms of simpler psychological things. In some instances, this might mean that instead of studying “Generalized Anxiety Disorder”, which includes feelings of anxiety, but also things like restlessness, fatigue, and irritability - instead we’d be studying just plain feelings of anxiety. The idea is that complex disease constructs, since they’re made up of multiple psychological pieces, are probably made up of multiple overlapping brain systems. Simple behavioral constructs, on the other hand, are likely made up of a smaller set, ideally, one, neurobiological system.
But this raises the question of just where to draw the line. Take a complex thing like Major Depressive Disorder. There’s a really wide range of ways people can be depressed, both in terms of the officially certified symptoms they can exhibit, and in terms of the way people describe their experience. If you were to look for the brain-biology of Major Depressive Disorder, what you’d find is a bunch of brain-biologies. The RDoC would argue that if we just looked at something like unhappiness, or, let’s pick an RDoC term, “frustrative non-reward”, the underlying neurobiology would be less wide-ranging. But take it one step further, and think about the range of “frustrative non-reward”. Doesn’t it matter which non-reward we’re frustrated by? That is, doesn’t Lyle’s advice that fame isn’t all it’s cracked up to be rely on the particulars of LaMont Chu’s desire? The RDoC decided to choose more basic constructs, like unhappiness instead of Major Depressive Disorder, but it’s possible that there’s meaningful variability in our experience of unhappiness, and if so, there would be variability in the brain-biology of unhappiness. Line-drawing, I call it. Finding constructs of the appropriate grain-size is what the RDoC calls it. More on this in pillar 6.
Pillar 2 - Research needs to consider psychological things along dimensions. There may be a point where normal behavior transitions into pathology, but the whole range of that behavior needs to be considered in research.
Pillar 2 may seem strange, or obvious, but it makes sense when you think about the system we’re trying to shake loose. Psychiatric labels are based on a set of criteria, and while most of those criteria are definitely subject to interpretation, once that interpretation has taken place, the presence or absence of a psychiatric disease is unambiguous: you have it or you don’t. Say a hypothetical diagnosis requires the presence of 5 symptoms, any symptoms - low mood, changes in appetite, suicidal thoughts, and so on. It seems pretty clear that someone who has 4 of the required symptoms isn’t that different from someone who has 5. And it may seem like the two people who have 4 and 5 of the symptoms are more similar to each other than they are to people who have 0 or 8. From another perspective, LaMont can feel unhappy, worthless, suicidal, and be unable to sleep, but only when that fifth criterion of, say, increased appetite sets in can we start labelling him as having a psychiatric disease. For a while now, people have been complaining that diagnostic cutoffs are pragmatic, constructed to serve a cultural purpose rather than reasoned from scientific evidence. And hypothetically, from the perspective of finding brain-biological-correlates of psychology, if there’s a lot of variability within a diagnosis, and a lot of similarity between people with and without diagnoses, then we’re less likely to identify the brain basis of these psychological conditions.
This is why the RDoC is making a push to study behavior along dimensions - yes-or-no questions are out, more-or-less questions are in. Although, surprisingly, the ultimate goal is to find cutoff points, along those dimensions, that mark the transition into mental illness. Which means that LaMont’s frustrative non-reward is, on the one hand, a continuous thing, which he can have more or less of, but at the same time, there’s a point where his frustrative non-reward should eventually be considered pathological. And there is good precedent for this type of thing in other areas of medicine, like blood pressure or cholesterol, which are measured along continuous dimensions but are also considered problematic once they reach a certain range. But whether a particular psychological characteristic should be considered dimensional or discrete should depend on that particular characteristic. If there is any such thing as a discrete process in the brain, it wouldn’t be a good idea to think about that thing as dimensional. But then again, taking LaMont’s desire for tennis fame, we all kind of want to be famous, don’t we? We just don’t want it quite as badly as some other people do, and so don’t go re-organizing the rest of our life around it. So here, maybe, a dimensional perspective would make some sense.
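The categorical-versus-dimensional contrast can be sketched in a few lines. The cutoff of 5 mirrors the hypothetical diagnosis described above; neither it nor the severity numbers come from any real diagnostic instrument.

```python
# Hypothetical contrast between a categorical diagnosis (count symptoms,
# diagnose at a cutoff of 5, as in the made-up example above) and a
# dimensional score that keeps the whole range of the behavior.

def categorical_diagnosis(symptom_count, cutoff=5):
    # Old world: 4 symptoms and 5 symptoms land in different bins.
    return symptom_count >= cutoff

def dimensional_score(symptom_severities):
    # New world: sum graded severities (0.0-1.0 each); no bin, just a number.
    return sum(symptom_severities)

print(categorical_diagnosis(4))  # False - officially "healthy"
print(categorical_diagnosis(5))  # True  - officially "ill"
print(dimensional_score([0.9, 0.8, 0.7, 0.6]))
```

On the categorical rule, the person with 4 severe symptoms and the person with 5 mild ones fall on opposite sides of a line; the dimensional score keeps them on one continuum.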
Pillar 3 - We need to develop new lab-based behavioral measures. In the old world, laboratory behavioral measures were designed for either very-healthy, or psychiatrically-ill populations, and so mid-range kinda-disappointed-with-their-life-but-just-haven’t-felt-that-appetite-change-yet populations have been neglected.
Ideally, when a scientist creates a behavioral task, they want to validate that it’s actually measuring the thing it’s supposed to be measuring. Like if you wanted a behavioral task to determine how obsessed with tennis-fame LaMont is, you might, hypothetically, present him with a task in which he can work towards fame. Maybe, he clicks a mouse 50 times, and Enfield-Tennis-Academy-resident-sweat-licker-and-spiritual-and-tennis-guru gives him a tennis tip; then he clicks the mouse 200 times, and he gets another tennis tip; at 1000 clicks, another tip. You see how many mouse clicks LaMont is willing to tolerate before he gives up. Your theory then becomes, the more times a person clicks the mouse, the more they value tennis tips, the stronger their desire is for tennis-fame. For those of you who do not come across academic psychology or behavioral economics, mouse clicking in exchange for X is a real thing, and is widely used, like it or not.
But the link between click-number and desire-for-tennis-fame isn’t just accepted because it seems sensible. Instead, this behavioral task needs to be officially validated ~1~. For example, are people who self-report a strong desire for tennis-fame more likely to click more? Does click-number correspond with any known neurobiology of desire-for-tennis-fame? Does click number also predict supposedly unrelated things, like desire for artistic-fame? Or craving for a thing period? Only once researchers can demonstrate that a behavioral task fits with the existing data on the subject should that task be accepted as a validated measure for that thing.
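The escalating click schedule described above resembles what behavioral economists call a progressive-ratio design, where the measure of desire is the "breakpoint": the largest effort requirement completed before quitting. A minimal sketch, using the click counts from the text and a made-up tolerance for our hypothetical participant:

```python
# Sketch of the escalating-effort logic described above: each successive
# tennis tip costs more clicks, and the measure of desire is the
# "breakpoint" - the largest click requirement completed before quitting.
# The schedule follows the text (50, 200, 1000 clicks); the tolerance
# value is a made-up trait of a hypothetical participant.

def find_breakpoint(schedule, max_clicks_tolerated):
    """Return the last click requirement the participant completes."""
    last_completed = 0
    for requirement in schedule:
        if requirement > max_clicks_tolerated:
            break  # the participant quits rather than pay this price
        last_completed = requirement
    return last_completed

schedule = [50, 200, 1000]             # clicks per successive tennis tip
print(find_breakpoint(schedule, 300))  # completes 50 and 200, quits at 1000: 200
```

The theory-laden step is the one outside the code: interpreting a higher breakpoint as a stronger desire for tennis-fame, which is exactly what the validation work below is meant to test.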
The problem with the old system is that these validations often only occurred in fully-healthy, or psychiatrically-unhealthy populations. The issue the RDoC is calling attention to is that we don’t know whether many of the existing tests would remain valid if they were performed on the whole range of human experience, including people with different types of diseases, or people with previous mental illnesses, or people taking medications, or people who fall just short of the diagnostic cutoff; or really, just anyone else who has been systematically excluded from psychiatric research in general.
Pillar 4 - Experimental designs & treatment groups need to be changed - In the old world, research was focused on finding differences between people with and without psychiatric illnesses. Since the disease bins aren’t really helping a ton, we need to re-orient our focus on differences between populations defined by different characteristics.
With a 1.6 billion dollar budget from the NIH alone, research on neuroscience is already pretty diverse. But there seems to be one type of research design, one that’s come up over and over again, that the RDoC wishes to avoid: comparing people with and without psychiatric diseases. Comparing these two groups again falls victim to the issue of neglecting differences between people with the same diagnosis, and ignoring similarities between people with and without the same diagnosis. Instead, the RDoC, as mentioned in pillar 1, is prioritizing simple behavioral constructs.
What does this mean for LaMont? Well, under the old system, he might be recruited to participate in a study of people with Major Depressive Disorder, who might then be compared to people without Depression on mouse-clicking-tolerability, and that’d be the study. In the new world, LaMont might be recruited to participate in a study where instead of being grouped with fellow certifiably-Depressed people, he would be defined by the magnitude of his unhappiness, along a dimension, in the context of mouse-clicking-tolerability ~2~.
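The two designs can be caricatured side by side. The numbers below are toy data invented purely for illustration; the point is only the shape of the analysis - a two-group comparison versus a correlation along a dimension.

```python
from statistics import mean

# Toy data invented for illustration: clicks tolerated, a 0-1 unhappiness
# score, and an old-world diagnostic label for six hypothetical people.
clicks      = [120, 90, 400, 350, 500, 60]
unhappiness = [0.2, 0.1, 0.7, 0.6, 0.9, 0.3]
diagnosed   = [False, False, True, True, True, False]

# Old-world design: bin people by diagnosis, compare group means.
dep  = [c for c, d in zip(clicks, diagnosed) if d]
ctrl = [c for c, d in zip(clicks, diagnosed) if not d]
group_difference = mean(dep) - mean(ctrl)

# New-world (RDoC-style) design: ignore the bins, correlate the behavioral
# measure with the continuous unhappiness dimension across everyone.
def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(group_difference)                        # diagnosed minus control mean
print(round(pearson(clicks, unhappiness), 2))  # strength along the dimension
```

The first number throws away everything about variation within each bin; the second uses every participant, diagnosed or not, which is the whole point of pillar 4.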
Pillar 5 - Research needs to integrate multiple perspectives of brain function - There are so many ways to talk about what the brain’s doing, and we need to talk about all of them.
In the old world, brain things are thought about in terms of their relation to the disease, with the disease as a central hub to which facets of neurobiology are attached. As an alternative, the RDoC is trying to encourage research that spans multiple “levels of analysis”, which could mean a couple things.
In the matrix, the RDoC matrix, each row represents a different element of neurobiology; genes, circuits, macroscopic brain regions, etc. Each of these levels can be thought of as a node in the network of things-that-describe-how-the-brain-works. And insofar as “integration” means more connections between network-nodes, there are a couple ways to increase integration. For one, we can start prioritizing studies that evaluate understudied connections, for example, the relationship between genes and neural circuits. Or, we could require more complex designs that simultaneously evaluate three or more variables.
Option 1 - Privileging studies of understudied connections: The idea of focusing on understudied features of neurobiology is exciting if you’re into basic science. And 50% of the NIH’s grant funding is spent on basic research. But basic research doesn’t necessarily mean basic biology, it just means we’re not explicitly studying diseases. The RDoC was developed by a branch of the National Institutes of Health, rather than, say, the National Science Foundation, which means that the health justification has to be front and center. And so it may be that gene x circuit relationships, independent of behavior, are less likely to be supported, on account of their relationship with human health being less clear.
Option 2 - Requiring more variables in each study: For one, more complicated studies lead to more complicated conclusions, like, this gene affects the way this circuit relates to behavior. In LaMont’s case, we might ask, does the fact that LaMont has a mutant BDNF gene mean that his desire for tennis-fame will be correlated with brain activity in the caudate nucleus? This may well be a fine conclusion, so long as that’s the level of complexity the brain operates at ~3~. On the other hand, requiring more multivariate studies will likely involve bigger, more expensive research studies. And without allocating more funding to the NIH, the only way to accomplish that is to provide research funding to fewer projects, centralizing research, and reducing the number of people with different perspectives who have access to funding to conduct psychiatric research.
Pillar 6 - The RDoC is not planning to include every psychological or behavioral construct. Maybe, psychology is more complicated than the 30 or so constructs included, but, at least in early days of the new world, the scope has to be subject to some amount of pragmatism.
Telling a scientist to be pragmatic is usually not a good idea - there’s a path that leads to truth, and a path that leads to not-truth - if you’re willing to admit you’re a pragmatist, you belong in another profession. But, as long as this is a government project, and is subject to budgetary constraints, there will have to be some amount of compromise. However, the RDoC doesn’t say that they’re limiting the scope of their psychology on account of limited resources. Instead, they place the emphasis on the fact that some psychological constructs are more likely to have a meaningful impact on human welfare. So, maybe, LaMont, in addition to being a fame and glory obsessed teenager, is gay, or a scientologist. Well, in the RDoC’s mind, his sexual orientation and misguided religious convictions just aren’t as likely to be explaining his psychiatric illness as something else could be. But 30 constructs? People can’t be reduced to 30 functions, can they? Just how did the NIH decide which set of psychological features warrant prioritization, or inclusion in the RDoC matrix, you ask?
With a staff of ex-researchers and other mental health professionals, with years of experience researching psychiatry and behavior, the National Institute of Mental Health created a preliminary list of psychological constructs. These constructs, and therefore, the NIH’s conception of the meaningful pieces of human experience, were categorized into 5 domains, positive valence systems, negative valence systems, cognitive systems, social processes, and arousal/regulatory systems. But science shouldn’t be about definitions handed down from political authorities, regardless of their scientific background - and so, it wasn’t. This preliminary list was used as a launching off point. Workgroups were convened for each of the 5 domains, and scientists gathered together to discuss which constructs they deemed most worthy of consideration. They were then given their instructions to create a less-preliminary list of constructs based on the following criteria:
One - Psychological evidence. Back from pillar 3, this means scrutinizing the construct and its measures against existing theory and research.
Two - Neuro-evidence. If scientists have found a neurobiological system that correlates well with a behavioral construct, the RDoC thinks it’s more likely to be a real thing, and therefore important for psychiatry.
Three - Clinical significance. Which seems self-explanatory, except it’s really ultimately about values. And it leads one to question whether scientists should be the ones who decide what’s a significant problem, or whether that ought to have been a larger societal question.
Four - Grain size. “The construct should have an appropriate “grain size”, i.e., a granularity that facilitates the study of relationships among measures from the various units of analysis; in other words, the functional aspect of the construct should be neither too broad nor too narrow to find meaningful relationships with its biological measures” (NIH RDoC Website). In one sense, this whole grain size talk might be about differentiating new-world, simple psychological constructs from bulky old-world, complex psychiatric diagnoses. In other words, psychiatric diseases are really made up of several more fundamental constructs, all brought together into a grain that is just too big, I mean look at it, it’s just too big. On the other end of the spectrum, there are a huge number of words, and a limitless number of sentences that have been, or could be used to describe human experience. LaMont’s desire to become a professional tennis player so that he can earn the envy of less successful tennis players is really specific, very grainy, maybe even, too grainy.
In defense of the RDoC, it definitely seems that LaMont’s experience shares some features of many of our own ill-informed goals that we strive for a bit too ardently. And a bit of reductionism in psychology is sometimes hard to swallow, but it’s necessary if we want to do any psychological science. However, it’s not clear, based on what the NIH has published, how studying very specific experiences will prevent us from finding meaningful relationships with biological measures. What is clear is that from a governmental perspective, to study 100 constructs is more difficult, expensive, and time consuming than it is to study 5. Which might lead you to the interpretation that the “grain size” concept represents a compromise, an acceptance that the NIH only has so much money, rather than a scientific position that justifiably expects that constructs of particular grain sizes are more likely to correlate with neurobiology.
Pillar 7 - We shouldn’t allow ourselves to be so constrained by fixed definitions.
The current psychiatric manual, the DSM-5, is claimed to be a living document. After all, we are in the era of edition number 5. But releasing a new edition doesn’t necessarily mean adapting to new science, and there are practical societal reasons why the DSM is so hard to edit. Changing diagnostic categories means changes in officially reported disease prevalence rates, changes in insurance reimbursement, in criteria for declarations of disability, and in other governmental regulatory practice ~4~. In short, changing a DSM diagnosis has really big societal consequences, and as the DSM-5, or psychiatry more generally, is ultimately one part science, one part medicine, and one part political instrument, these societal consequences are important.
In the new world, the RDoC wants to remain nimble. Since it isn’t a diagnostic system, it’ll never be tied to the same amount of societal baggage as the existing psychiatric framework. Moreover, the RDoC is overtly encouraging researchers to refine existing, or propose new psychological constructs to include in the matrix. This is all great in theory, but, there is legitimate fear that the RDoC’s original goals of flexibly adapting to new science, and encouraging refinements of new criteria, may not amount to practical support for this type of research.
For one, there’s always some built-in cultural inflexibility in science. The concepts in the RDoC are pretty mainstream - they’ve been around a while - so changing the RDoC constructs might mean having to move away from work that a scientist has spent a lifetime researching. For that scientist, revising our conception of their personal psychological construct might mean discrediting the value of their contributions. As another example, once a scientist proposes a construct, and devises a test to measure that construct, the barriers to entry for studying that particular thing using that particular test are negligible - compared to a scientist coming up with their own personal theory and validating their own personal behavioral task. What these examples illustrate is that cultural and practical considerations are still important, and will pose a serious challenge to the RDoC’s goal of maintaining nimbleness.
Old world psychiatry prides itself on providing a common language
for clinicians and researchers across disciplines.
If every mental health professional described their patients
using just any old combination of words in the dictionary,
it would be chaos.
Research would be disjointed,
with labs studying the same thing but calling it something different,
and other labs studying different things but calling it the same.
But a common language is only good if that common set of concepts corresponds well to reality.
If, on the other hand, the concepts don’t,
we’ll be discouraging creative thinking about humanity
in a time when that might be exactly the thing we need
in order to break free from the rigid constraints of the existing system.
From my perspective, hearing LaMont Chu’s story
is a reminder about the traps of excessive ambition, and fame in particular.
If a new-age-RDoC-oriented mental health professional saw LaMont Chu,
and observed his behaviors,
what they’d be seeing is the consequences of his consuming aspirations.
They’d see his disappointment in his current state (“frustrative non-reward”?),
his altered sleep schedule (“sleep-wakefulness” & “arousal”),
his low mood (“loss”),
his diminished interests in old activities (“reward responsiveness”?),
and they probably wouldn’t know what to do.
Because in LaMont’s case,
it’s not the explicitly measurable behavioral manifestations that matter,
it’s that core belief that fame will be his salvation,
that’s what needs to be targeted,
at least according to Lyle, the sweat-licking tennis guru.
Another major limitation is that there is no official role
in the RDoC matrix for environment or development.
It doesn’t take into consideration how important it is
that LaMont grew up in a culture where celebrity is worshipped,
and highly incentivized.
Nor does it consider the pressure his parents, or peers, placed on him to be successful.
This doesn’t mean that the NIH doesn’t consider environment or development important,
only that they wanted to promote flexibility in research
by not tying these factors to the physical matrix.
But, having said that, if the RDoC matrix is influential in guiding research projects,
and it sways some researchers away from considering environmental or developmental factors,
then the lack of overt consideration for environment
is also a major obstacle to understanding LaMont from the RDoC perspective.
Nonetheless, the RDoC is in many ways an advance over the old ways,
particularly in that it takes a step back from clinical research.
This project is still a part of the NIH,
and therefore, still has to maintain some focus on meaningful societal outcomes.
Then again, by many neuroscientists’ estimations,
neuroscience just isn’t ready to inform everyday clinical practice in mental health.
And if we're not ready yet,
then the RDoC is taking a step forward by moving,
just a tiny little bit back,
towards basic research.
Cuthbert, Bruce N., and Thomas R. Insel. "Toward the future of psychiatric diagnosis: the seven pillars of RDoC." BMC Medicine 11.1 (2013): 126.
Insel, Thomas, et al. "Research domain criteria (RDoC): toward a new classification framework for research on mental disorders." American Journal of Psychiatry 167.7 (2010): 748-751.
Kapur, Shitij, Anthony G. Phillips, and Thomas R. Insel. "Why has it taken so long for biological psychiatry to develop clinical tests and what to do about it?" Molecular Psychiatry 17.12 (2012): 1174.
Lilienfeld, Scott O., and Michael T. Treadway. "Clashing diagnostic approaches: DSM-ICD versus RDoC." Annual Review of Clinical Psychology 12 (2016): 435-463.
Sanislow, Charles A., et al. "Developing constructs for psychopathology research: research domain criteria." Journal of Abnormal Psychology 119.4 (2010): 631.
~1~ Insofar as that’s possible in a field like psychology, which, it isn’t really. What we have now is a kind of bootstrapping, where every test is validated in the context of another test, but at the core of it, there is some supposedly independent standard that just couldn’t have been validated, because there was no test to validate that first test with. It’s strange how often the term bootstrapping comes up in science, on account of bootstrapping in the real world being physically impossible. But the point is, you can only really validate a test if you have some existing gold standard, that you know for 100% sure is a certifiable fact, to evaluate your new test against, and we don’t have 100% certifiable facts in psychology.
~2~ And while they’re at it, they might just have him click his mouse in a brain scanner. This added layer provides three variables, mouse clicking + psychological construct + brain imaging, which not only moves this study away from what’s-different-between-people-with-label-X-vs-label-Y, but it would help towards the aim of integration - Pillar 5.
~3~ which it definitely is; the level of complexity the brain operates at guarantees that, at a bare minimum.
~4~ And also personally, many practitioners have built their careers around the existing definitions. How many thousands of patients do you have to provide a particular type of advice to before you’re just unwilling to accept that that advice you’ve been giving is based on not-very-proven research?