Published by MIT Press. Hardcover ISBN: 9780262043243.
The extraordinary complexity of the mammalian neocortex is the result of millions of years of evolution. Elucidating the principles underlying its development and function has been a major goal in the neurosciences. How a seemingly uniform group of neuroepithelial stem cells produces the vast array of electrically responsive cell types, and how these resulting cells establish such a rich variety of circuits in the mature neocortex, remain, in particular, a key focus of the field. This chapter reviews seminal advances in understanding the production, specification, and migration of neocortical neurons prior to the establishment of mature circuits.
The extraordinary cognitive abilities that humans possess, such as syntactical-grammatical language, abstract thinking, episodic memory, or complex reasoning, are largely dependent on the brain, and more specifically on its surface, the cerebral cortex, as was initially proposed by Thomas Willis in 1664. Since then, neuroscience has endeavored to decipher what makes the human brain so unique when compared with other species. Early studies were based on comparative brain anatomy between humans and other extant or extinct species, in the latter case based on data compiled from fossil records. More recently, comparison of the number of neurons and studies of cortical development have improved our understanding of the field. Nowadays, in the era of genomics, new possibilities have arisen for determining changes in gene expression or regulatory activity that underlie the observed differences in phenotypes. This chapter summarizes what is known about the human cerebral cortex. It focuses on the neocortex, which represents about 80% of the human brain mass, and places it into an evolutionary context by considering other hominins, nonhuman primates, and mammals. Finally, it explores the role of genomics in elucidating the shared and unique features of human nervous system development, organization, and function.
This chapter explores what happens when the development of the cerebral cortex goes awry. It presents results on work with CHMP1A mutations, which highlight the importance of specialized cell-to-cell communication via extracellular vesicles in cortical development and function. It reviews genetic causes of microcephaly, with an emphasis on centrosomal proteins, and presents novel insights about cortical evolution gained using a ferret model of microcephaly caused by ASPM loss of function. It reviews recent work to identify noncoding mutations that cause brain malformations, which has expanded understanding of cortical development beyond protein-coding genes. These three examples illustrate general principles of cortical growth and function (cellular communication and synaptic plasticity, evolution, and utilization of large data sets), made possible by recent advances in DNA sequencing technology.
The cerebral cortex controls our unique higher cognitive abilities. Modifications to gene expression, progenitor behavior, cell lineage, and neural circuitry have accompanied evolution of the cerebral cortex. This chapter considers the progress made over the past thirty years in defining potential mechanisms that contribute to cortical development and evolution. It discusses the value of model systems for understanding elaboration of cortical organization in humans, with an emphasis on recent technical and conceptual advances. It then examines our current understanding of the molecular and cellular basis for cortical development and evolution; discusses how neuronal fates are specified and organized in laminae, columns, and areas; and revisits the radial unit and protomap hypotheses. Finally, it considers our current understanding of the development, stability, and plasticity of cortical circuitry. Throughout, it highlights the profound impact that new technological advances have made at the molecular and cellular level, and how this has changed our understanding of cortical development and evolution. The authors conclude by identifying critical and tractable research directions to address gaps in our understanding of cortical development and evolution.
Unraveling the organizational structure of the brain has, in large measure, been reductionist in nature. While this has revealed, in ever-increasing detail, the fine structure of the brain, it leaves the beautifully integrated nature of brain function less directly addressed. Views of the functional organization of the brain should include a unitary perspective, despite the diversity of its constituent parts. This chapter focuses on recent observations from the authors’ laboratory, which point to the value of an integrated approach and answer the assigned title question: arguably, the brain consists of a single network with functional diversity.
From interacting cellular components to networks of neurons and neural systems, interconnected units constitute a fundamental organizing principle of the nervous system. Understanding how their patterns of connections and interactions give rise to the many functions of the nervous system is a primary goal of neuroscience. Recently, this pursuit has begun to benefit from the development of new mathematical tools that can relate a system’s architecture to its dynamics and function. These tools, stemming from the broader field of network science, have been used with increasing success to build models of neural systems across spatial scales and species. This chapter discusses the nature of network models in neuroscience. It begins with a review of model theory from a philosophical perspective to inform our view of networks as models of complex systems, in general, and of the brain, in particular. It summarizes the types of models that are frequently studied in network neuroscience along three primary dimensions: from data representations to first-principles theory, from biophysical realism to functional phenomenology, and from elementary descriptions to coarse-grained approximations. Ways to validate these models are then considered, with a focus on approaches that perturb a system to probe its function. In closing, a description is provided of important frontiers in the construction of network models and their relevance for understanding increasingly complex functions of neural systems.
Since the days of Ramón y Cajal and Golgi, reconstruction of neuronal morphology has been a central element of neuroscience research. The cell body (soma) and dendrites receive and integrate synaptic input patterns from diverse neuronal ensembles. The axon, in turn, broadcasts the results of this integration process to a variety of neurons within and across brain regions. Morphological differences in the dendritic and axonal shapes are thus closely linked to a neuron’s inputs, outputs, computations, and hence functions. Quantification of somatic, dendritic, and/or axonal properties by morphological reconstructions thus represents one of the major approaches to define brain areas and neuronal cell types therein. This chapter addresses some of the technical challenges involved in reconstructing neuronal morphologies and in linking morphology to other properties of the neurons, such as intrinsic physiology and synaptic connectivity. It discusses conceptual challenges involved in using morphological reconstructions for the definition of neuronal cell types, as well as for the identification of neural circuit structure and function.
Recent research in the neurosciences has revealed a wealth of new information about the structural organization and physiological operation of the cerebral cortex. These details span vast spatial scales and range from the expression, arrangement, and interaction of molecular gene products at the synapse to the organization of computational networks across the whole brain. This chapter highlights recent discoveries that have laid bare important aspects of the brain’s functional architecture. It begins by describing the dynamic and contingent arrangement of subcellular elements in synaptic connections. Amid this complexity, several common neural circuit motifs, identified across multiple species and preparations, shape the electrophysiological signaling in the cortex. It then turns to the topic of network organization, spurred by the now-routine capacity for noninvasive MRI in humans, where interdisciplinary tools are lending new insights into large-scale principles of brain organization. Discussion follows on one of the most important aspects of brain architecture; namely, the plasticity that affords an animal flexible behavior. In closing, reflections are put forth on the nature of the brain’s complexity, and how its biological details might best be captured in computational models in the future.
A hallmark of cortical organization is the coexistence of serial feedforward with reentrant processing. The latter is based on feedback projections from higher to lower processing levels and massive reciprocal excitatory projections which link neurons located within the same cortical areas as well as cortical areas occupying the same level in the processing hierarchy. These reentrant connections, together with local negative feedback loops, give rise to exceedingly complex dynamics that are characterized by oscillations in a broad range of frequencies, synchronization of discharges, and cross-frequency coupling. Evidence is reviewed which suggests that these dynamic properties support specific computations: the flexible binding of distributed neurons into functionally coherent assemblies, the attention-dependent selection of sensory signals, the conversion of semantic relations into temporal relations, the comparison of stored priors with sensory evidence, the selective routing of signals in densely interconnected networks, the definition of relations in the context of learning, and the dynamic formation of functional networks. Arguments challenging a functional role of oscillations and synchrony, due to their volatile nature, are discussed in relation to recent evidence that highlights the advantages of volatility.
Information processing in the brain is implemented across several temporal and spatial scales by populations of neurons. This chapter addresses how single neurons, small network motifs, and larger networks, in which emergent dynamics are largely shaped by the connectivity of the system, contribute to this processing of information. Computation is defined as a semantic mapping; that is, it is the process by which representations of external (e.g., stimulus-driven) or internal (e.g., memories) information change. A feature specific to neuronal computation is that mappings are mostly local, constrained by connectivity patterns between neurons. This implies that complex mappings from local information onto representations that are highly relational and abstracted, and which rely on information between distant parts of the system, require mechanisms that can bridge, bind, and integrate pieces of information across large scales. An overview of this process in the nervous system is delineated: Local information processing is described at the level of individual neurons and small motifs. Emergent phenomena are addressed that implement information processing across large recurrent neuronal populations. Finally, an omnipresent but mostly ignored feature of neuronal systems, delay-coupled computation, is described.
Theories of information coding in cortical populations have been put forth for many years, but only recently have experimental methods become available to permit simultaneous recordings from hundreds of neurons, thus allowing these theories to be tested. This chapter discusses some of the more prominent theories and argues that they fall along a spectrum of coding schemes, ranging from population codes that are built up from single-neuron tuning functions to codes that emerge from the collective dynamics of cortical populations. At the extremes, these theories are incompatible: one relies on single neurons whereas the other coarse-grains neuronal activity into low-dimensional trajectories that summarize the covariance of activity across multiple neurons. It is proposed that both can be reconciled using a hierarchical coding scheme in which relevant information is represented at the level of large-scale spatiotemporal patterns, and both individual neurons and their temporal interrelationships convey information. Antecedents to this contemporary theory can be seen in Donald Hebb’s assembly phase sequences (Hebb 1949): information is encoded at the single-neuron level in terms of tuning functions, but spatiotemporal patterning of individual neurons provides context to interpret the population code fully. Moreover, the encoding perspective proposed here explicitly incorporates the synaptic implementation of the code, thus strengthening the postulate.
A central goal of systems neuroscience is to understand how the brain represents and processes information to guide behavior (broadly defined as encompassing perception, cognition, and observable outcomes of those mental states through action). These concepts have been central to research in this field for at least sixty years, and research efforts have taken a variety of approaches. At this Forum, our discussions focused on what is meant by “functional” and “inter-areal,” what new concepts have emerged over the last several decades, and how we need to update and refresh these concepts and approaches for the coming decade. In this chapter, we consider some of the historical conceptual frameworks that have shaped consideration of neural coding and brain function, with an eye toward what aspects have held up well, what aspects need to be revised, and what new concepts may foster future work. Conceptual frameworks need to be revised periodically lest they become counterproductive and actually blind us to the significance of novel discoveries. Take, for example, hippocampal place cells: their accidental discovery led to the generation of new conceptual frameworks linking phenomena (e.g., memory, spatial navigation, and sleep) that previously seemed disparate, revealing unimagined mechanistic connections. Progress in scientific understanding requires an iterative loop from experiment to model/theory and back. Without such periodic reassessment, fields of scientific inquiry risk becoming bogged down by the propagation of outdated frameworks, often across multiple generations of researchers. This not only limits the impact of the truly new and unexpected, it hinders the pace of progress.
This chapter sets the scene for the treatment of complexity and computation in human cognition and discusses how this treatment is informed by the neurobiological and functional properties of the cerebral cortex. Its agenda is to establish some guiding principles that may help identify hypotheses and computational architectures that go beyond mere descriptions of how the cortex underwrites the repertoire of functions we enjoy, such as action, perception, cognition, affect, and consciousness. In short, it explores the computational imperatives that form the basis for human experience. Complexity and computation are considered, as is how they organize our approach to neuronal dynamics. Criteria are identified that any tenable theoretical framework must respect. In addition, it discusses computational theories that can be entertained, and the degree to which they account for empirical data from anatomy and neurophysiology. Finally, it considers some of the deeper issues faced by sentient artifacts that ultimately possess a sense of self, purpose, and agency.
Relative to other primates, humans exhibit a great variety of singular cognitive abilities for language, mathematics, music, tool use, theory of mind, and self-consciousness. What has brought about this singularity? This chapter examines the hypothesis that the human brain is unique in being endowed with a mental representation of nested, tree-like symbolic structures. Such syntactic structures are essential in the modern description of human languages, including natural languages as well as the artificial ones used in music or mathematics. Nonhuman animals may possess abstract representation of temporal sequences, but evidence suggests that those representations do not include the sort of nested tree structures typical of human grammars. Brain imaging, magnetoencephalography, and intracranial recordings have begun to reveal the neural correlates of the nested structure of linguistic constituents, which involve Broca’s area and the superior temporal sulcus of the left hemisphere. Importantly, the mental manipulation of musical and mathematical structures, which also involves nested trees, is not confined to such classical language areas. Instead, high-level mathematics involves bilateral intraparietal areas involved in elementary number sense and simple arithmetic as well as bilateral inferotemporal areas involved in processing Arabic numerals. This chapter proposes that several distinct circuits of the human brain have become attuned to nested tree structures for different domains, such as language, mathematics, or music. According to the demodularization hypothesis, during human brain evolution, primitive tree structures may have emerged within specialized neural circuits (e.g., those involved in spatial or geometrical computations) and were later exapted toward a more general role in language processing and conscious verbal report.
Currently, we do not have a good understanding of what is special about the human brain and how this has led to uniquely human behaviors. To progress, we first need to ignore appeals to authority (e.g., Darwin) and accept that mammalian brains are not simply differently sized versions of the same thing. This does not mean that there are no commonalities between the brains of mammals and other taxonomic groups, but that the only way to identify meaningful similarities and differences is through a comparative approach that examines a number of different species. This chapter argues that two other lines of investigation are important in comparative neuroscience. First, investigating development will help explain how evolution finds the same or different solutions. Homologous or convergent developmental trajectories reveal the constraints (or the lack of constraints) on how the brain reaches an adaptive solution. Second, investigating the body and its biomechanics will reveal how the structure of the body generates both constraints and advantages for the nervous system. Understanding the evolution of the human brain requires a comparative understanding of how it develops and operates in concert with the body.
How do the computations of the cerebral cortex and subcortical structures account for human perception, cognition, and affect? Answering this question requires understanding how the neurobiological and functional properties of the human brain give rise to the repertoire of human faculties and behavior, and hence, an understanding of the neural mechanisms that implement these functions. While research over the past decades has made substantial progress toward this end, significant challenges still lie ahead, and new opportunities open up daily as neuroscience and related fields develop and implement new theories and technologies. To (begin to) address these challenges, this chapter explores conceptual and methodological aspects inherent to the study of the neurobiology of the human mind that are at the core of the current “central paradigm” (Kuhn 1962) in neuroscience, but are often taken for granted and undergo little scrutiny. In particular, it discusses what defines or constitutes “uniquely human” mental capacities, the promises and pitfalls of using animal models to understand the human brain, whether neural solutions and computations are shared across species or repurposed for potentially uniquely human capacities, and what inspiration and information can be drawn from recent developments in artificial intelligence. Attention is given to laying out desiderata for future investigations into the human mind.