What Does a Guilty Brain Look Like?

When Herbert Weinstein stood trial for the murder of his wife in 1992, his attorneys were struck by the measured calm with which he recounted her death and the events leading up to it. He made no attempt to deny that he was culpable, and yet his stoicism in the face of his wildly uncharacteristic actions led his defense to suspect that he might not be. Weinstein underwent neuroimaging tests, which confirmed what his attorneys had suspected: a cyst had impinged upon large parts of Weinstein's frontal lobe, the seat of impulse control in the brain. On these grounds, they reasoned that he should be found not guilty by reason of insanity, despite Weinstein's free admission of guilt.

Guilt is difficult to define, but it pervades every aspect of our lives, whether we're chastising ourselves for skipping a workout or serving on the jury of a criminal trial. Humans seem to be hardwired for justice, but we're also saddled with a curious compulsion to diagram our own emotional wiring. This drive to assign a neurochemical method to our madness has led to the generation of vast catalogs of neuroimaging studies that detail the neural underpinnings of everything from anxiety to nostalgia. Now, in a recent study, researchers claim to have moved us one step closer to knowing what a guilty brain looks like.

Since guilt carries different weight depending on context or culture, the authors of the study chose to define it operationally as the awareness of having harmed someone else. A series of functional magnetic resonance imaging (fMRI) experiments across two separate cohorts, one Swiss and one Chinese, revealed what they refer to as a "guilt-related brain signature" that persists across groups. Since pervasive guilt is a common feature of severe depression and PTSD, the authors suggest that a neural biomarker for guilt could offer more precise insight into these conditions and, potentially, their treatment. But brain-based biomarkers for complex human behaviors also lend themselves to the more ethically fraught discipline of neuroprediction, an emergent branch of behavioral science that combines neuroimaging data and machine learning to forecast how an individual is likely to act based on how their brain scans compare to those of other groups.
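
To make the cross-cohort claim concrete, here is a minimal sketch of how a "signature that persists across groups" might be tested: train a classifier on one cohort's scans and check whether it still discriminates in the other. All data, feature counts and effect sizes below are invented for illustration; the study's actual features, preprocessing and model are not described in this article.

```python
# Sketch: does a pattern learned in one cohort generalize to another?
# Entirely synthetic data; assumes scikit-learn and NumPy are available.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_voxels = 200  # hypothetical number of fMRI features per scan

def make_cohort(n_subjects):
    """Simulate guilt vs. neutral trials sharing one spatial pattern."""
    signature = np.linspace(1.0, -1.0, n_voxels)  # shared "guilt" pattern
    labels = rng.integers(0, 2, n_subjects)       # 1 = guilt condition
    noise = rng.normal(0, 3.0, (n_subjects, n_voxels))
    return noise + np.outer(labels, signature), labels

X_swiss, y_swiss = make_cohort(80)
X_chinese, y_chinese = make_cohort(80)

# Fit on the first cohort only...
clf = LogisticRegression(max_iter=1000).fit(X_swiss, y_swiss)

# ...then score on the second: above-chance AUC here is roughly what
# "persists across groups" means in practice.
auc = roc_auc_score(y_chinese, clf.decision_function(X_chinese))
print(f"Cross-cohort AUC: {auc:.2f}")
```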

Predictive algorithms have already been used for years in health care, advertising and, most notoriously, the criminal justice system. Facial recognition and risk-assessment algorithms are criticized for their racial bias and their tendency to be significantly less accurate when assigning offenders to "high risk" versus "low risk" categories. One of the highest-profile exposures of such bias in recent news was a 2018 ACLU report on Amazon's Rekognition algorithm for facial identification and analysis, which erroneously identified 28 members of Congress as criminal offenders when run against a database of mugshots. People of color made up almost 40 percent of the misidentified individuals, about double their proportion of Congress. Amazon took vocal public issue with the methodology employed by the study at the time. Just this summer, however, the company suspended the use of Rekognition by law enforcement for one year, amid a nationwide movement to dismantle the racially biased structures of policing and criminal justice that lead to the disproportionate death and incarceration of BIPOC.

Some researchers argue that neuroimaging data could theoretically eliminate the biases that emerge when predictive algorithms are trained on socioeconomic metrics and criminal records, based on the assumption that biological metrics are inherently more objective than other kinds of data. In one study, fMRI data from incarcerated people seeking treatment for substance abuse were fed through machine learning algorithms in an attempt to correlate activity in an area of the brain called the anterior cingulate cortex, or ACC, with the likelihood of completing a treatment program. The algorithm was able to correctly predict treatment outcomes about 80 percent of the time. Researchers have linked variations in ACC activity to violence, antisocial behavior and increased likelihood of rearrest in similar functional imaging studies. Indeed, the quest for the neural center of guilt in the brain also led to the ACC.
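
The pipeline behind a figure like "80 percent of the time" typically looks something like the following hedged sketch: brain-derived features in, a binary outcome out, with accuracy estimated by cross-validation. The feature counts, model choice and planted signal here are assumptions for illustration, not the published study's methods.

```python
# Sketch: predicting a binary treatment outcome from ACC-like features,
# with accuracy estimated by 5-fold cross-validation. Synthetic data only.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_subjects, n_features = 120, 50  # hypothetical ACC activity measures

completed = rng.integers(0, 2, n_subjects)        # 1 = finished program
acc_features = rng.normal(0, 1, (n_subjects, n_features))
acc_features[:, :3] += 1.2 * completed[:, None]   # planted predictive signal

model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(model, acc_features, completed, cv=5)

# An "80 percent accurate" claim corresponds to a mean score near 0.8.
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```

Note that a score like this says nothing about why the model works, which is exactly the interpretive gap the next section describes.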

One of the problems with fMRI, though, is that it doesn't directly measure neural firing patterns. Rather, it uses blood flow in the brain as a visual proxy for neural activity. Complex behaviors and emotional states engage multiple, widely distributed parts of the brain, and the patterns of activity within these networks provide more insight than snapshots of activity in individual regions. So while it may be tempting for law enforcement to conclude that low ACC activity could be used as a biomarker for recidivism risk, altered ACC activation patterns are also hallmarks of schizophrenia and autism spectrum disorders. Rather than reducing bias by using presumably objective anatomical markers of neural activity, the use of behavioral biomarkers in a criminal justice context runs the risk of encouraging the criminalization of mental illness and neurodivergence.
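
A toy simulation can show why a distributed pattern carries information that a single region's average does not: below, two synthetic conditions share the same region-wide mean signal but differ in spatial pattern, so a mean-based "snapshot" classifier performs at chance while a pattern-based one succeeds. This is a purely illustrative construction, not a model of real ACC data.

```python
# Sketch: region mean vs. distributed pattern as classification features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_voxels = 200, 40
labels = rng.integers(0, 2, n_trials)

# Condition pattern: +1 in half the voxels, -1 in the other half, so the
# region-wide mean is identical for both conditions by construction.
pattern = np.concatenate([np.ones(n_voxels // 2), -np.ones(n_voxels // 2)])
data = rng.normal(0, 1.0, (n_trials, n_voxels)) + np.outer(labels, pattern)

mean_activity = data.mean(axis=1, keepdims=True)  # the "snapshot" feature

for name, X in [("region mean", mean_activity), ("full pattern", data)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5)
    print(f"{name}: {acc.mean():.2f}")  # ~0.50 vs. well above chance
```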

There may be other limits to fMRI as a methodology. A recent large-scale review of numerous fMRI studies concluded that the variability of results, even at an individual level, is too high to meaningfully generalize them to larger groups, much less use them as the framework for predictive algorithms. The very notion of a risk-assessment algorithm is itself based on the deterministic presupposition that people don't change. Indeed, this determinism is characteristic of the retributive models of justice that these algorithms serve, which focus on punishing and incarcerating offenders rather than on addressing the conditions that led to an arrest in the first place.

Such use of brain imaging as a predictive tool for human behavior overlooks what seems to be a primordial fact of neuroscience: that brains, like people, are capable of change; that they constantly remodel themselves, electrically and structurally, depending on experience. Rather than simply representing a more technically complex means of meting out punishment, neuroprediction has the power to identify those same signatures and instead offer paths to intervention. Any algorithm, no matter how sophisticated, will always be as biased as the people who use it. We can't begin to address these biases until we re-examine our basic approaches to criminality and justice.

Original content at: rss.sciam.com…
Author: Lindsay Gray
