Tuesday, 10 December 2019

Where is the seat of consciousness?





Key mystery about how the brain produces cognition is finally understood
Human behavior is often explained in terms of unseen entities such as motivation, curiosity, anxiety and confidence. What has been unclear is whether these mental entities are coded by specific neurons in specific areas of the brain.
Professor Adam Kepecs at Cold Spring Harbor Laboratory has answered some of these questions in new research published in Nature. The findings could lead to the development of more effective treatments for obsessive-compulsive disorder, compulsive gambling and other psychiatric disorders.


 

OCTOBER 20, 2020

Researcher proposes new theory of consciousness

by University of Surrey

Electromagnetic energy in the brain enables brain matter to create our consciousness and our ability to be aware and think, according to a new theory developed by Professor Johnjoe McFadden from the University of Surrey.

Publishing his theory in the journal Neuroscience of Consciousness, Professor McFadden posits that consciousness is in fact the brain's energy field. This theory could pave the way toward the development of conscious AI, with robots that are aware and have the ability to think becoming a reality.

Early theories on what our consciousness is and how it has been created tended toward the supernatural, suggesting that humans and probably other animals possess an immaterial soul that confers consciousness, thought and free will—capabilities that inanimate objects lack. Most scientists today have discarded this view, known as dualism, to embrace a 'monistic' view of a consciousness generated by the brain itself and its network of billions of nerves. By contrast, McFadden proposes a scientific form of dualism based on the difference between matter and energy, rather than matter and soul.

The theory is based on scientific fact: when neurons in the brain and nervous system fire, they not only send the familiar electrical signal down the wire-like nerve fibres, but they also send a pulse of electromagnetic energy into the surrounding tissue. Such energy is usually disregarded, yet it carries the same information as nerve firings, but as an immaterial wave of energy, rather than a flow of atoms in and out of the nerves.

This electromagnetic field is well-known and is routinely detected by brain-scanning techniques such as electroencephalogram (EEG) and magnetoencephalography (MEG) but has previously been dismissed as irrelevant to brain function. Instead, McFadden proposes that the brain's information-rich electromagnetic field is in fact itself the seat of consciousness, driving 'free will' and voluntary actions. This new theory also accounts for why, despite their immense complexity and ultra-fast operation, today's computers have not exhibited the slightest spark of consciousness; however, with the right technical development, robots that are aware and can think for themselves could become a reality.

Johnjoe McFadden, Professor of Molecular Genetics and Director of the Quantum Biology Doctoral Training Centre at the University of Surrey, said: "How brain matter becomes aware and manages to think is a mystery that has been pondered by philosophers, theologians, mystics and ordinary people for millennia. I believe this mystery has now been solved, and that consciousness is the experience of nerves plugging into the brain's self-generated electromagnetic field to drive what we call 'free will' and our voluntary actions."

https://medicalxpress.com/news/2020-10-theory-consciousness.html

Artificial neural networks are making strides towards consciousness

according to Blaise Agüera y Arcas

Jun 9th 2022 

Since this article, by a Google vice-president, was published, an engineer at the company, Blake Lemoine, has reportedly been placed on leave after claiming in an interview with the Washington Post that LaMDA, Google's chatbot, had become "sentient".

In 2013 I joined Google Research to work on artificial intelligence (AI). Following decades of slow progress, neural networks were developing at speed. In the years since, my team has used them to help develop features on Pixel phones for specific "narrow AI" functions, such as face unlocking, image recognition, speech recognition and language translation. More recent developments, though, seem qualitatively different. This suggests that AI is entering a new era…
https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas

Foundations of human consciousness: Imaging the twilight zone

Journal of Neuroscience, 28 December 2020

Abstract

What happens in the brain when conscious awareness of the surrounding world fades? We manipulated consciousness in two experiments in a group of healthy males and measured brain activity with positron emission tomography. Measurements were made during wakefulness, during escalating and constant levels of two anesthetic agents (Experiment 1, n=39), and during sleep-deprived wakefulness and Non-Rapid Eye Movement sleep (Experiment 2, n=37). In Experiment 1, the subjects were randomized to receive either propofol or dexmedetomidine until unresponsiveness. In both experiments, forced awakenings were applied to achieve rapid recovery from an unresponsive to a responsive state, followed by immediate and detailed interviews about subjective experiences during the preceding unresponsive condition. Unresponsiveness rarely denoted unconsciousness, as the majority of the subjects had internally generated experiences. Unresponsive anesthetic states and verified sleep stages in which a subsequent report of mental content included no signs of awareness of the surrounding world indicated a disconnected state. Functional brain imaging comparing responsive and connected vs. unresponsive and disconnected states of consciousness during constant anesthetic exposure revealed that activity of the thalamus, cingulate cortices and angular gyri is fundamental for human consciousness. These brain structures were affected independently of the pharmacologic agent, drug concentration and direction of change in the state of consciousness. Analogous findings were obtained when consciousness was regulated by physiological sleep. State-specific findings were distinct and separable from the overall effects of the interventions, which included widespread depression of brain activity across cortical areas. These findings identify a central core brain network critical for human consciousness.

SIGNIFICANCE STATEMENT Trying to understand the biological basis of human consciousness is currently one of the greatest challenges of neuroscience. While the loss and return of consciousness regulated by anesthetic drugs and physiological sleep are employed as model systems in experimental studies on consciousness, previous research results have been confounded by drug effects, by confusing behavioral “unresponsiveness” and internally generated consciousness, and by comparing brain activity levels across states that differ in several other respects than only consciousness. Here, we present carefully designed studies that overcome many previous confounders and for the first time reveal the neural mechanisms underlying human consciousness and its disconnection from behavioral responsiveness, both during anesthesia and during normal sleep, and in the same study subjects.

https://www.jneurosci.org/content/early/2020/12/22/JNEUROSCI.0775-20.2020

Neuroscientists devise scheme for mind-uploading centuries in the future

But first, they have to kill you with embalming fluid, inject antifreeze, and keep your brain at -130 degrees C
March 14, 2018
In a scenario (almost) right out of the show Altered Carbon, two researchers — Robert McIntyre, an MIT graduate, and Gregory M. Fahy, PhD, Chief Scientific Officer of 21st Century Medicine (21CM) — have developed a method for scanning a preserved brain’s connectome (the 150 trillion microscopic synaptic connections presumed to encode all of a person’s knowledge).
That data could possibly be used, centuries later, to reconstruct a whole-brain emulation — uploading your mind into a computer or Avatar-style robotic, virtual, or synthetic body, McIntyre and others suggest.
According to MIT Technology Review, McIntyre has formed a startup company called Nectome that has won a large NIH grant for creating “technologies to enable whole-brain nanoscale preservation and imaging.”
McIntyre is also collaborating with Edward Boyden, PhD, a top neuroscientist at MIT and inventor of a new “expansion microscopy” technique (to achieve super-resolution with ordinary confocal microscopes), as KurzweilAI recently reported. The technique also causes brain tissue to swell, making it more accessible.
Preserving brain information patterns, not biological function
Unlike cryonics (freezing people or heads for future revival), the researchers did not intend to revive a pig or pig brain (or human, in the future). Instead, the idea is to develop a bridge to future mind-uploading technology by preserving the information content of the brain, as encoded within the frozen connectome.
The first step in the ASC procedure is to perfuse the brain’s vascular system with the toxic fixative glutaraldehyde (typically used as an embalming fluid but also used by neuroscientists to prepare brain tissue for the highest resolution electron microscopic and immunofluorescent examination). That instantly halts metabolic processes by covalently crosslinking the brain’s proteins in place, leading to death (by contemporary standards). The brain is then quickly stored at -130 degrees C, stopping all further decay.
The method, tested on a pig’s brain, led to 21st Century Medicine (21CM), lead researcher McIntyre, and senior author Fahy winning the $80,000 Large Mammal Brain Preservation Prize offered by the Brain Preservation Foundation (BPF), announced March 13.
To accomplish this, McIntyre’s team scaled up the same procedure they used to previously preserve a rabbit brain, for which they won the BPF’s Small Mammal Prize in February 2016, as KurzweilAI has reported. That research was judged by neuroscientist Ken Hayworth, PhD, President of the Brain Preservation Foundation, and noted connectome researcher Prof. Sebastian Seung, PhD, Princeton Neuroscience Institute.
Caveats
However, the BPF warns that this single prize-winning laboratory demonstration is “insufficient to address the types of quality control measures that should be expected of any procedure that would be applied to humans.” Hayworth has outlined his position on the medical procedure and associated quality-control protocol that should be required prior to any such offering.
The ASC method, if verified by science, raises serious ethical, legal, and medical questions. For example:
  • Should ASC be developed into a medical procedure and if so, how?
  • Should ASC be available in an assisted suicide scenario for terminal patients?
  • Could ASC be a “last resort” to enable a dying person’s mind to survive and reach a future world?
  • How real are claims of future mind uploading?
  • Is it legal?*
“It may take decades or even centuries to develop the technology to upload minds if it is even possible at all,” says the BPF press release. “ASC would enable patients to safely wait out those centuries. For now, neuroscience is actively exploring the plausibility of mind uploading through ongoing studies of the physical basis of memory, and through development of large-scale neural simulations and tools to map connectomes.”
Interested? Nectome has a $10,000 (refundable) wait list.
* Nectome “has consulted with lawyers familiar with California’s two-year-old End of Life Option Act, which permits doctor-assisted suicide for terminal patients, and believes its service will be legal.” — MIT Technology Review



Abstract of Aldehyde-stabilized cryopreservation


We describe here a new cryobiological and neurobiological technique, aldehyde-stabilized cryopreservation (ASC), which demonstrates the relevance and utility of advanced cryopreservation science for the neurobiological research community. ASC is a new brain-banking technique designed to facilitate neuroanatomic research such as connectomics research, and has the unique ability to combine stable long-term ice-free sample storage with excellent anatomical resolution. To demonstrate the feasibility of ASC, we perfuse-fixed rabbit and pig brains with a glutaraldehyde-based fixative, then slowly perfused increasing concentrations of ethylene glycol over several hours in a manner similar to techniques used for whole-organ cryopreservation. Once 65% w/v ethylene glycol was reached, we vitrified brains at −135 °C for indefinite long-term storage. Vitrified brains were rewarmed and the cryoprotectant removed either by perfusion or gradual diffusion from brain slices. We evaluated ASC-processed brains by electron microscopy of multiple regions across the whole brain and by Focused Ion Beam Milling and Scanning Electron Microscopy (FIB-SEM) imaging of selected brain volumes. Preservation was uniformly excellent: processes were easily traceable and synapses were crisp in both species. Aldehyde-stabilized cryopreservation has many advantages over other brain-banking techniques: chemicals are delivered via perfusion, which enables easy scaling to brains of any size; vitrification ensures that the ultrastructure of the brain will not degrade even over very long storage times; and the cryoprotectant can be removed, yielding a perfusable aldehyde-preserved brain which is suitable for a wide variety of brain assays.
http://www.kurzweilai.net/neuroscientists-devise-scheme-for-mind-uploading-centuries-in-the-future

A giant neuron found wrapped around entire mouse brain
3D reconstructions show a 'crown of thorns' shape stemming from a region linked to consciousness.
Sara Reardon
24 February 2017
Like ivy plants that send runners out searching for something to cling to, the brain’s neurons send out shoots that connect with other neurons throughout the organ. A new digital reconstruction method shows three neurons that branch extensively throughout the brain, including one that wraps around its entire outer layer. The finding may help to explain how the brain creates consciousness.
Christof Koch, president of the Allen Institute for Brain Science in Seattle, Washington, explained his group’s new technique at a 15 February meeting of the Brain Research through Advancing Innovative Neurotechnologies initiative in Bethesda, Maryland. He showed how the team traced three neurons from a small, thin sheet of cells called the claustrum — an area that Koch believes acts as the seat of consciousness in mice and humans.
Tracing all the branches of a neuron using conventional methods is a massive task. Researchers inject individual cells with a dye, slice the brain into thin sections and then trace the dyed neuron’s path by hand. Very few have been able to trace a neuron through the entire organ. This new method is less invasive and scalable, saving time and effort.
Koch and his colleagues engineered a line of mice so that a certain drug activated specific genes in claustrum neurons. When the researchers fed the mice a small amount of the drug, only a handful of neurons received enough of it to switch on these genes. That resulted in production of a green fluorescent protein that spread throughout the entire neuron. The team then took 10,000 cross-sectional images of the mouse brain and used a computer program to create a 3D reconstruction of just three glowing cells.
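The reconstruction step described here (thousands of aligned cross-sectional images assembled into a volume in which only the labelled neurons glow) can be sketched in a few lines. The following toy Python sketch uses synthetic images and an arbitrary brightness threshold; it is an illustration of the general idea, not the Allen Institute's actual pipeline:

```python
import numpy as np

def reconstruct_volume(sections, threshold=0.8):
    """Stack 2D fluorescence images (all the same shape) into a 3D
    boolean mask marking voxels that belong to labelled neurons.
    The threshold value is an illustrative assumption."""
    volume = np.stack(sections, axis=0)  # shape: (n_sections, H, W)
    return volume > threshold            # keep only fluorescent voxels

# Toy data: 10 cross-sections of 4x4 pixels with dim background noise,
# plus one bright "labelled neuron" voxel per slice.
rng = np.random.default_rng(0)
sections = [rng.uniform(0, 0.5, size=(4, 4)) for _ in range(10)]
for z, img in enumerate(sections):
    img[z % 4, z % 4] = 1.0              # simulated labelled process

mask = reconstruct_volume(sections)
print(mask.shape, int(mask.sum()))       # (10, 4, 4) 10
```

In the real experiment the bright voxels come from the green fluorescent protein, and tracing one neuron's path through the resulting volume is itself a hard segmentation problem.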
Well connected
The three neurons stretched across both brain hemispheres, and one of the three wrapped around the organ’s circumference like a “crown of thorns”, Koch says. He has never seen neurons extend so far across brain regions. The mouse body contains other long neurons, such as a nerve projection in the leg and neurons from the brainstem that thread through the brain to release signalling molecules. But these claustrum neurons seem to connect to most or all of the outer parts of the brain that take in sensory information and drive behaviour.
Koch sees this as evidence that the claustrum could be coordinating inputs and outputs across the brain to create consciousness. Brain scans have shown that the human claustrum is one of the most densely connected areas of the brain, but those images do not show the path of individual neurons.
The claustrum is a good brain region in which to test the new technique because it has been extensively studied in mice and consists of only a few cell types, says James Eberwine, a pharmacologist at the University of Pennsylvania in Philadelphia.  
Taking stock
“It’s quite admirable,” Rafael Yuste, a neurobiologist at Columbia University in New York City, says of the method. He doesn’t think that the existence of neurons encircling the brain definitively proves that the claustrum is involved in consciousness. But he says that the technique will be helpful for census efforts that identify different cell types in the brain, which many think will be crucial for understanding how the organ functions. “It’s like trying to decipher language if we don't understand what the alphabet is,” he says.
Yuste and Eberwine would like to see 3D reconstructions of individual neurons compared to analyses of the genes expressed in those neurons. This may offer clues as to the type and function of each cell.
Koch plans to continue mapping neurons emanating from the claustrum, although the technique is too expensive to be used to reconstruct all of these neurons on a large scale. He would like to know whether all the region’s neurons extend throughout the brain, or whether each neuron is unique, projecting to a slightly different area.
Nature (2 March 2017)

Why your brain is not a computer

For decades it has been the dominant metaphor in neuroscience. But could this idea have been leading us astray all along? By Matthew Cobb
Thu 27 Feb 2020 
We are living through one of the greatest of scientific endeavours – the attempt to understand the most complex object in the universe, the brain. Scientists are accumulating vast amounts of data about structure and function in a huge array of brains, from the tiniest to our own. Tens of thousands of researchers are devoting massive amounts of time and energy to thinking about what brains do, and astonishing new technology is enabling us to both describe and manipulate that activity.
We can now make a mouse remember something about a smell it has never encountered, turn a bad mouse memory into a good one, and even use a surge of electricity to change how people perceive faces. We are drawing up increasingly detailed and complex functional maps of the brain, human and otherwise. In some species, we can change the brain’s very structure at will, altering the animal’s behaviour as a result. Some of the most profound consequences of our growing mastery can be seen in our ability to enable a paralysed person to control a robotic arm with the power of their mind.
Every day, we hear about new discoveries that shed light on how brains work, along with the promise – or threat – of new technology that will enable us to do such far-fetched things as read minds, or detect criminals, or even be uploaded into a computer. Books are repeatedly produced that each claim to explain the brain in different ways.
And yet there is a growing conviction among some neuroscientists that our future path is not clear. It is hard to see where we should be going, apart from simply collecting more data or counting on the latest exciting experimental approach. As the German neuroscientist Olaf Sporns has put it: “Neuroscience still largely lacks organising principles or a theoretical framework for converting brain data into fundamental knowledge and understanding.” Despite the vast number of facts being accumulated, our understanding of the brain appears to be approaching an impasse.
In 2017, the French neuroscientist Yves Frégnac focused on the current fashion of collecting massive amounts of data in expensive, large-scale projects and argued that the tsunami of data they are producing is leading to major bottlenecks in progress, partly because, as he put it pithily, “big data is not knowledge”.
“Only 20 to 30 years ago, neuroanatomical and neurophysiological information was relatively scarce, while understanding mind-related processes seemed within reach,” Frégnac wrote. “Nowadays, we are drowning in a flood of information. Paradoxically, all sense of global understanding is in acute danger of getting washed away. Each overcoming of technological barriers opens a Pandora’s box by revealing hidden variables, mechanisms and nonlinearities, adding new levels of complexity.”
The neuroscientists Anne Churchland and Larry Abbott have also emphasised our difficulties in interpreting the massive amount of data that is being produced by laboratories all over the world: “Obtaining deep understanding from this onslaught will require, in addition to the skilful and creative application of experimental technologies, substantial advances in data analysis methods and intense application of theoretic concepts and models.”
There are indeed theoretical approaches to brain function, including to the most mysterious thing the human brain can do – produce consciousness. But none of these frameworks are widely accepted, for none has yet passed the decisive test of experimental investigation. It is possible that repeated calls for more theory may be a pious hope. It can be argued that there is no possible single theory of brain function, not even in a worm, because a brain is not a single thing. (Scientists even find it difficult to come up with a precise definition of what a brain is.)
As observed by Francis Crick, the co-discoverer of the DNA double helix, the brain is an integrated, evolved structure with different bits of it appearing at different moments in evolution and adapted to solve different problems. Our current comprehension of how it all works is extremely partial – for example, most neuroscience sensory research has been focused on sight, not smell; smell is conceptually and technically more challenging. But olfaction and vision work differently, both computationally and structurally. By focusing on vision, we have developed a very limited understanding of what the brain does and how it does it.
The nature of the brain – simultaneously integrated and composite – may mean that our future understanding will inevitably be fragmented and composed of different explanations for different parts. Churchland and Abbott spelled out the implication: “Global understanding, when it comes, will likely take the form of highly diverse panels loosely stitched together into a patchwork quilt.”

For more than half a century, all those highly diverse panels of patchwork we have been working on have been framed by thinking that brain processes involve something like those carried out in a computer. But that does not mean this metaphor will continue to be useful in the future. At the very beginning of the digital age, in 1951, the pioneer neuroscientist Karl Lashley argued against the use of any machine-based metaphor.
“Descartes was impressed by the hydraulic figures in the royal gardens, and developed a hydraulic theory of the action of the brain,” Lashley wrote. “We have since had telephone theories, electrical field theories and now theories based on computing machines and automatic rudders. I suggest we are more likely to find out about how the brain works by studying the brain itself, and the phenomena of behaviour, than by indulging in far-fetched physical analogies.”
This dismissal of metaphor has recently been taken even further by the French neuroscientist Romain Brette, who has challenged the most fundamental metaphor of brain function: coding. Since its inception in the 1920s, the idea of a neural code has come to dominate neuroscientific thinking – more than 11,000 papers on the topic have been published in the past 10 years. Brette’s fundamental criticism was that, in thinking about “code”, researchers inadvertently drift from a technical sense, in which there is a link between a stimulus and the activity of the neuron, to a representational sense, according to which neuronal codes represent that stimulus.
The unstated implication in most descriptions of neural coding is that the activity of neural networks is presented to an ideal observer or reader within the brain, often described as “downstream structures” that have access to the optimal way to decode the signals. But the ways in which such structures actually process those signals is unknown, and is rarely explicitly hypothesised, even in simple models of neural network function.
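Brette's distinction can be made concrete with a toy model. In the technical sense, a "code" is just a lawful link between stimulus and neural activity; the "ideal observer" below is something supplied by the analyst, not a known brain structure. The Gaussian tuning curve, the Poisson noise model and all parameter values here are illustrative assumptions, not taken from Brette's paper:

```python
import numpy as np

def tuning_curve(stimulus, preferred=0.0, gain=20.0, width=1.0):
    """Mean spike count as a Gaussian function of the stimulus:
    the 'code' in the purely technical, stimulus-to-activity sense."""
    return gain * np.exp(-0.5 * ((stimulus - preferred) / width) ** 2)

def ideal_observer_decode(spike_count, candidates):
    """Pick the candidate stimulus whose expected rate best explains
    the observed count under Poisson noise (maximum likelihood).
    This decoder is a hypothesis of the analyst, not of the brain."""
    rates = tuning_curve(candidates)
    log_lik = spike_count * np.log(rates) - rates  # Poisson log-likelihood
    return candidates[np.argmax(log_lik)]

rng = np.random.default_rng(1)
true_stimulus = 0.5
count = rng.poisson(tuning_curve(true_stimulus))   # noisy neural response
candidates = np.linspace(0.0, 2.0, 41)
estimate = ideal_observer_decode(count, candidates)
print(f"decoded stimulus: {estimate:.2f}")
```

The point of the criticism is that nothing in the brain is known to implement `ideal_observer_decode`: the decoder exists only in the experimenter's analysis, yet talk of neurons "representing" the stimulus quietly assumes something like it downstream.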
The processing of neural codes is generally seen as a series of linear steps – like a line of dominoes falling one after another. The brain, however, consists of highly complex neural networks that are interconnected, and which are linked to the outside world to effect action. Focusing on sets of sensory and processing neurons without linking these networks to the behaviour of the animal misses the point of all that processing.
By viewing the brain as a computer that passively responds to inputs and processes data, we forget that it is an active organ, part of a body that is intervening in the world, and which has an evolutionary past that has shaped its structure and function. This view of the brain has been outlined by the Hungarian neuroscientist György Buzsáki in his recent book The Brain from Inside Out. According to Buzsáki, the brain is not simply passively absorbing stimuli and representing them through a neural code, but rather is actively searching through alternative possibilities to test various options. His conclusion – following scientists going back to the 19th century – is that the brain does not represent information: it constructs it.
The metaphors of neuroscience – computers, coding, wiring diagrams and so on – are inevitably partial. That is the nature of metaphors, which have been intensely studied by philosophers of science and by scientists, as they seem to be so central to the way scientists think. But metaphors are also rich and allow insight and discovery. There will come a point when the understanding they allow will be outweighed by the limits they impose, but in the case of computational and representational metaphors of the brain, there is no agreement that such a moment has arrived. From a historical point of view, the very fact that this debate is taking place suggests that we may indeed be approaching the end of the computational metaphor. What is not clear, however, is what would replace it.
Scientists often get excited when they realise how their views have been shaped by the use of metaphor, and grasp that new analogies could alter how they understand their work, or even enable them to devise new experiments. Coming up with those new metaphors is challenging – most of those used in the past with regard to the brain have been related to new kinds of technology. This could imply that the appearance of new and insightful metaphors for the brain and how it functions hinges on future technological breakthroughs, on a par with hydraulic power, the telephone exchange or the computer. There is no sign of such a development; despite the latest buzzwords that zip about – blockchain, quantum supremacy (or quantum anything), nanotech and so on – it is unlikely that these fields will transform either technology or our view of what brains do.

One sign that our metaphors may be losing their explanatory power is the widespread assumption that much of what nervous systems do, from simple systems right up to the appearance of consciousness in humans, can only be explained as emergent properties – things that you cannot predict from an analysis of the components, but which emerge as the system functions.
In 1981, the British psychologist Richard Gregory argued that the reliance on emergence as a way of explaining brain function indicated a problem with the theoretical framework: “The appearance of ‘emergence’ may well be a sign that a more general (or at least different) conceptual scheme is needed … It is the role of good theories to remove the appearance of emergence. (So explanations in terms of emergence are bogus.)”
This overlooks the fact that there are different kinds of emergence: weak and strong. Weak emergent features, such as the movement of a shoal of tiny fish in response to a shark, can be understood in terms of the rules that govern the behaviour of their component parts. In such cases, apparently mysterious group behaviours are based on the behaviour of individuals, each of which is responding to factors such as the movement of a neighbour, or external stimuli such as the approach of a predator.
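The shoal example can be simulated directly: each fish follows two purely local rules (stay with the group, move away from the predator), and coordinated group evasion emerges. A minimal sketch, with arbitrary weights chosen for illustration:

```python
import numpy as np

def step(positions, predator, cohesion=0.1, flee=0.5):
    """One update: every fish moves toward the shoal centroid (local
    cohesion rule) and directly away from the predator (flee rule)."""
    centroid = positions.mean(axis=0)
    away = positions - predator
    away /= np.linalg.norm(away, axis=1, keepdims=True)  # unit flee vectors
    return positions + cohesion * (centroid - positions) + flee * away

rng = np.random.default_rng(2)
fish = rng.normal(0.0, 1.0, size=(50, 2))  # shoal scattered near the origin
shark = np.array([5.0, 0.0])               # predator approaching from one side

start = np.linalg.norm(fish - shark, axis=1).mean()
for _ in range(20):
    fish = step(fish, shark)
end = np.linalg.norm(fish - shark, axis=1).mean()

print(end > start)  # True: the shoal has retreated as a group
```

No rule mentions the shoal as a whole, yet the group-level dodge is fully recoverable from the individual rules; that reducibility is exactly what makes the emergence weak.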
This kind of weak emergence cannot explain the activity of even the simplest nervous systems, never mind the working of your brain, so we fall back on strong emergence, where the phenomenon that emerges cannot be explained by the activity of the individual components. You and the page you are reading this on are both made of atoms, but your ability to read and understand comes from features that emerge through atoms in your body forming higher-level structures, such as neurons and their patterns of firing – not simply from atoms interacting.
Strong emergence has recently been criticised by some neuroscientists as risking “metaphysical implausibility”, because there is no evident causal mechanism, nor any single explanation, of how emergence occurs. Like Gregory, these critics claim that the reliance on emergence to explain complex phenomena suggests that neuroscience is at a key historical juncture, similar to that which saw the slow transformation of alchemy into chemistry. But faced with the mysteries of neuroscience, emergence is often our only resort. And it is not so daft – the amazing properties of deep-learning programmes, which at root cannot be explained by the people who design them, are essentially emergent properties.
Interestingly, while some neuroscientists are discombobulated by the metaphysics of emergence, researchers in artificial intelligence revel in the idea, believing that the sheer complexity of modern computers, or of their interconnectedness through the internet, will lead to what is dramatically known as the singularity. Machines will become conscious.
There are plenty of fictional explorations of this possibility (in which things often end badly for all concerned), and the subject certainly excites the public’s imagination, but there is no reason, beyond our ignorance of how consciousness works, to suppose that it will happen in the near future. In principle, it must be possible, because the working hypothesis is that mind is a product of matter, which we should therefore be able to mimic in a device. But the scale of complexity of even the simplest brains dwarfs any machine we can currently envisage. For decades – centuries – to come, the singularity will be the stuff of science fiction, not science.
A related view of the nature of consciousness turns the brain-as-computer metaphor into a strict analogy. Some researchers view the mind as a kind of operating system that is implemented on neural hardware, with the implication that our minds, seen as a particular computational state, could be uploaded on to some device or into another brain. In the way this is generally presented, this is wrong, or at best hopelessly naive.
The materialist working hypothesis is that brains and minds, in humans and maggots and everything else, are identical. Neurons and the processes they support – including consciousness – are the same thing. In a computer, software and hardware are separate; however, our brains and our minds consist of what can best be described as wetware, in which what is happening and where it is happening are completely intertwined.
Imagining that we can repurpose our nervous system to run different programmes, or upload our mind to a server, might sound scientific, but lurking behind this idea is a non-materialist view going back to Descartes and beyond. It implies that our minds are somehow floating about in our brains, and could be transferred into a different head or replaced by another mind. It would be possible to give this idea a veneer of scientific respectability by posing it in terms of reading the state of a set of neurons and writing that to a new substrate, organic or artificial.
But to even begin to imagine how that might work in practice, we would need an understanding of neuronal function far beyond anything we can currently envisage, as well as unimaginably vast computational power and a simulation that precisely mimicked the structure of the brain in question. For this to be possible even in principle, we would first need to be able to fully model the activity of a nervous system capable of holding a single state, never mind a thought. We are so far away from taking this first step that the possibility of uploading your mind can be dismissed as a fantasy, at least until the far future.

For the moment, the brain-as-computer metaphor retains its dominance, although there is disagreement about how strong a metaphor it is. In 2015, the roboticist Rodney Brooks chose the computational metaphor of the brain as his pet hate in his contribution to a collection of essays entitled This Idea Must Die. Less dramatically, but drawing similar conclusions, two decades earlier the historian S Ryan Johansson argued that “endlessly debating the truth or falsity of a metaphor like ‘the brain is a computer’ is a waste of time. The relationship proposed is metaphorical, and it is ordering us to do something, not trying to tell us the truth.”
On the other hand, the US expert in artificial intelligence, Gary Marcus, has made a robust defence of the computer metaphor: “Computers are, in a nutshell, systematic architectures that take inputs, encode and manipulate information, and transform their inputs into outputs. Brains are, so far as we can tell, exactly that. The real question isn’t whether the brain is an information processor, per se, but rather how do brains store and encode information, and what operations do they perform over that information, once it is encoded.”
Marcus went on to argue that the task of neuroscience is to “reverse engineer” the brain, much as one might study a computer, examining its components and their interconnections to decipher how it works. This suggestion has been around for some time. In 1989, Crick recognised its attractiveness, but felt it would fail, because of the brain’s complex and messy evolutionary history – he dramatically claimed it would be like trying to reverse engineer a piece of “alien technology”. Attempts to find an overall explanation of how the brain works that flow logically from its structure would be doomed to failure, he argued, because the starting point is almost certainly wrong – there is no overall logic.
Reverse engineering a computer is often used as a thought experiment to show how, in principle, we might understand the brain. Inevitably, these thought experiments are successful, encouraging us to pursue this way of understanding the squishy organs in our heads. But in 2017, a pair of neuroscientists decided to actually do the experiment on a real computer chip, which had a real logic and real components with clearly designed functions. Things did not go as expected.
The duo – Eric Jonas and Konrad Paul Kording – employed the very techniques they normally used to analyse the brain and applied them to the MOS 6507 processor found in computers from the late 70s and early 80s that enabled those machines to run video games such as Donkey Kong and Space Invaders.
First, they obtained the connectome of the chip by scanning the 3510 enhancement-mode transistors it contained and simulating the device on a modern computer (including running the games programmes for 10 seconds). They then used the full range of neuroscientific techniques, such as “lesions” (removing transistors from the simulation), analysing the “spiking” activity of the virtual transistors and studying their connectivity, observing the effect of various manipulations on the behaviour of the system, as measured by its ability to launch each of the games.
Despite deploying this powerful analytical armoury, and despite the fact that there is a clear explanation for how the chip works (it has “ground truth”, in technospeak), the study failed to detect the hierarchy of information processing that occurs inside the chip. As Jonas and Kording put it, the techniques fell short of producing “a meaningful understanding”. Their conclusion was bleak: “Ultimately, the problem is not that neuroscientists could not understand a microprocessor, the problem is that they would not understand it given the approaches they are currently taking.”
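The "lesion" logic Jonas and Kording borrowed from neuroscience can be illustrated with a toy circuit: knock out one component at a time and record whether the overall behaviour survives. The three-gate XOR circuit below is entirely made up for illustration, not a model of the MOS 6507.

```python
# Toy "lesion study": disable one gate at a time in a tiny simulated
# circuit and test whether its behaviour (computing XOR) survives.
# Illustrative only -- not the actual microprocessor from the study.

def circuit(a, b, lesioned=None):
    """XOR built from OR, NAND and AND; `lesioned` forces a gate to 0."""
    gates = {
        "or":   int(a or b),
        "nand": int(not (a and b)),
    }
    if lesioned in gates:
        gates[lesioned] = 0
    out = int(gates["or"] and gates["nand"])  # final AND gate = XOR
    return 0 if lesioned == "and" else out

def behaviour_intact(lesioned):
    """Does the (possibly lesioned) circuit still compute XOR?"""
    return all(circuit(a, b, lesioned) == (a ^ b)
               for a in (0, 1) for b in (0, 1))

for gate in [None, "or", "nand", "and"]:
    print(gate, behaviour_intact(gate))
```

Every lesion breaks the behaviour, so each gate looks "necessary" — yet the lesion results alone say almost nothing about how the circuit actually computes XOR, which is precisely the shortfall the study exposed.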
This sobering outcome suggests that, despite the attractiveness of the computer metaphor and the fact that brains do indeed process information and somehow represent the external world, we still need to make significant theoretical breakthroughs in order to make progress. Even if our brains were designed along logical lines, which they are not, our present conceptual and analytical tools would be completely inadequate for the task of explaining them. This does not mean that simulation projects are pointless – by modelling (or simulating) we can test hypotheses and, by linking the model with well-established systems that can be precisely manipulated, we can gain insight into how real brains function. This is an extremely powerful tool, but a degree of modesty is required when it comes to the claims that are made for such studies, and realism is needed with regard to the difficulties of drawing parallels between brains and artificial systems.
Even something as apparently straightforward as working out the storage capacity of a brain falls apart when it is attempted. Such calculations are fraught with conceptual and practical difficulties. Brains are natural, evolved phenomena, not digital devices. Although it is often argued that particular functions are tightly localised in the brain, as they are in a machine, this certainty has been repeatedly challenged by new neuroanatomical discoveries of unsuspected connections between brain regions, or amazing examples of plasticity, in which people can function normally without bits of the brain that are supposedly devoted to particular behaviours.
In reality, the very structures of a brain and a computer are completely different. In 2006, Larry Abbott wrote an essay titled “Where are the switches on this thing?”, in which he explored the potential biophysical bases of that most elementary component of an electronic device – a switch. Although inhibitory synapses can change the flow of activity by rendering a downstream neuron unresponsive, such interactions are relatively rare in the brain.
A neuron is not like a binary switch that can be turned on or off, forming a wiring diagram. Instead, neurons respond in an analogue way, changing their activity in response to changes in stimulation. The nervous system alters its working by changes in the patterns of activation in networks of cells composed of large numbers of units; it is these networks that channel, shift and shunt activity. Unlike any device we have yet envisaged, the nodes of these networks are not stable points like transistors or valves, but sets of neurons – hundreds, thousands, tens of thousands strong – that can respond consistently as a network over time, even if the component cells show inconsistent behaviour.
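The contrast between a binary switch and an analogue neuron can be made concrete with a standard rate-model sketch — a common textbook abstraction, with all numbers chosen arbitrarily for illustration.

```python
import math

def switch(x, threshold=0.5):
    # A transistor-like switch: all-or-nothing output.
    return 1 if x > threshold else 0

def rate_neuron(x, gain=4.0):
    # An analogue rate neuron: output (firing rate) varies smoothly
    # with input, following a sigmoid curve.
    return 1.0 / (1.0 + math.exp(-gain * (x - 0.5)))

for x in [0.2, 0.45, 0.55, 0.8]:
    print(x, switch(x), round(rate_neuron(x), 3))
```

The switch jumps from 0 to 1 at the threshold; the rate neuron shades continuously between the two, which is the sense in which neurons respond "in an analogue way" to changes in stimulation.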
Understanding even the simplest of such networks is currently beyond our grasp. Eve Marder, a neuroscientist at Brandeis University, has spent much of her career trying to understand how a few dozen neurons in the lobster’s stomach produce a rhythmic grinding. Despite vast amounts of effort and ingenuity, we still cannot predict the effect of changing one component in this tiny network that is not even a simple brain.
This is the great problem we have to solve. On the one hand, brains are made of neurons and other cells, which interact together in networks, the activity of which is influenced not only by synaptic activity, but also by various factors such as neuromodulators. On the other hand, it is clear that brain function involves complex dynamic patterns of neuronal activity at a population level. Finding the link between these two levels of analysis will be a challenge for much of the rest of the century, I suspect. And the prospect of properly understanding what is happening in cases of mental illness is even further away.
Not all neuroscientists are pessimistic – some confidently claim that the application of new mathematical methods will enable us to understand the myriad interconnections in the human brain. Others – like myself – favour studying animals at the other end of the scale, focusing our attention on the tiny brains of worms or maggots and employing the well-established approach of seeking to understand how a simple system works and then applying those lessons to more complex cases. Many neuroscientists, if they think about the problem at all, simply consider that progress will inevitably be piecemeal and slow, because there is no grand unified theory of the brain lurking around the corner.

There are many alternative scenarios about how the future of our understanding of the brain could play out: perhaps the various computational projects will come good and theoreticians will crack the functioning of all brains, or the connectomes will reveal principles of brain function that are currently hidden from us. Or a theory will somehow pop out of the vast amounts of imaging data we are generating. Or we will slowly piece together a theory (or theories) out of a series of separate but satisfactory explanations. Or by focusing on simple neural network principles we will understand higher-level organisation. Or some radical new approach integrating physiology and biochemistry and anatomy will shed decisive light on what is going on. Or new comparative evolutionary studies will show how other animals are conscious and provide insight into the functioning of our own brains. Or unimagined new technology will change all our views by providing a radical new metaphor for the brain. Or our computer systems will provide us with alarming new insight by becoming conscious. Or a new framework will emerge from cybernetics, control theory, complexity and dynamical systems theory, semantics and semiotics. Or we will accept that there is no theory to be found because brains have no overall logic, just adequate explanations of each tiny part, and we will have to be satisfied with that. Or –

 

Artificial neural networks are making strides towards consciousness

according to Blaise Agüera y Arcas

Jun 9th 2022 

Since this article by a Google vice-president was published, an engineer at the company, Blake Lemoine, has reportedly been placed on leave after claiming in an interview with the Washington Post that LaMDA, Google’s chatbot, had become “sentient”.

In 2013 I joined Google Research to work on artificial intelligence (AI). Following decades of slow progress, neural networks were developing at speed. In the years since, my team has used them to help develop features on Pixel phones for specific “narrow AI” functions, such as face unlocking, image recognition, speech recognition and language translation. More recent developments, though, seem qualitatively different. This suggests that AI is entering a new era…: https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas

 

The Consciousness Instinct: Unraveling the Mystery of How the Brain Makes the Mind

by Michael S. Gazzaniga

How do neurons turn into minds? How does physical "stuff" (atoms, molecules, chemicals, and cells) create the vivid and various worlds inside our heads? The problem of consciousness has gnawed at us for millennia. In the last century, there have been massive breakthroughs that have rewritten the science of the brain, and yet the puzzles faced by the ancient Greeks are still present. In The Consciousness Instinct, the neuroscience pioneer Michael S. Gazzaniga puts the latest research in conversation with the history of human thinking about the mind, giving a big-picture view of what science has revealed about consciousness.
The idea of the brain as a machine, first proposed centuries ago, has led to assumptions about the relationship between mind and brain that dog scientists and philosophers to this day. Gazzaniga asserts that this model has it backward: brains make machines, but they cannot be reduced to one. New research suggests the brain is actually a confederation of independent modules working together. Understanding how consciousness could emanate from such an organization will help define the future of brain science and artificial intelligence and close the gap between brain and mind.
Captivating and accessible, with insights drawn from a lifetime at the forefront of the field, The Consciousness Instinct sets the course for the neuroscience of tomorrow.

https://www.goodreads.com/book/show/35259598-the-consciousness-instinct

Decoded: What Are Neurons?

You have 86 billion of them inside you, but do you understand how hard it was for us to learn that?

Full Transcript

Neurons are the tiny processing units within the human brain and nervous system.

Our brains have about 86 billion neurons. Hundreds of millions more are spread throughout the body, communicating by electrical and chemical signals through incredibly thin cables.

Whenever we see, hear, or otherwise perceive the world, thousands of sensory neurons send signals to our spinal cord and brain. And thanks to other neurons, we’re able to make sense of those perceptions and react accordingly.

Scientists have been studying the brain for millennia. In fact, the oldest known scientific document is a 4,000-year-old anatomical report on traumatic brain injuries.

But the brain is an extremely difficult organ to study. Even if you manage to get a brain sample under a microscope, you basically just see a tangled web of cells.

In 1873, the Italian physician Camillo Golgi found a way to stain brain slices, revealing the tissue in far more detail than ever before. Using his technique, a Spanish researcher named Santiago Ramón y Cajal discovered that even though the cells were connected, they were still individual structures, which became known as neurons.

By breaking down the nervous system into its smallest components, Cajal set the foundation for the next century of neuroscience. He and Golgi split the Nobel Prize in 1906.

Because neurons are minuscule pieces in a giant system, their power lies in their ability to communicate with other neurons. This happens over small gaps called synapses. When neurons communicate frequently, the synapses between them get stronger, making it easier to send future signals.

This happens all the time, all across the brain. And it explains how we learn and form memories: we literally rewire our brains through our experiences. We refer to the brain’s fundamental ability to change as “neuroplasticity.”
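The strengthening described above is often summarised as "neurons that fire together, wire together", and written as the Hebbian learning rule Δw = η · pre · post. A minimal sketch, with all numbers invented for illustration:

```python
# Minimal Hebbian plasticity sketch: a synaptic weight grows whenever
# the presynaptic and postsynaptic neurons are active together.
# The learning rate and initial weight are arbitrary.
eta = 0.1   # learning rate
w = 0.2     # initial synaptic weight

# (pre, post) activity pairs; three coincident firings below.
activity = [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]
for pre, post in activity:
    w += eta * pre * post   # Hebb's rule: strengthen on co-activation

print(round(w, 2))  # -> 0.5: the synapse grew with each coincidence
```

Only the trials in which both neurons fired move the weight, so repeated co-activation steadily strengthens the connection — a cartoon of how experience rewires the brain.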

Humans have most of their neurons from birth. Neurons start out as stem cells before moving to different brain regions, where they assume specific roles. Early in our development, the brain prunes away excess neurons and their connections, leaving those that remain stronger. Some of the surviving neurons become part of our sense of smell; others support our ability to walk or perform other motor skills.

Unlike other cells in the body, which regenerate at intervals and then die, most neurons last a lifetime.

At least, ideally. People lose neurons in brain regions they stop using. For instance, if you never left your home again, you’d likely lose neurons in the brain region involved in spatial navigation. 

Neuron death can lead to loss of basic brain functions and motor skills. That’s what happens in degenerative diseases like Alzheimer’s and Parkinson’s disease, where neurons stop functioning properly and die off. There’s some evidence that these diseases result from protein clumps clogging the brain, but scientists are still working to figure out exactly how this happens.

That may be essential for finding effective treatments, which have remained largely elusive.

Neurological changes aren’t necessarily permanent. In addition to the brain’s general neuroplasticity, there’s solid evidence that even adults are able to form new neurons, through a process called “neurogenesis.” 

Researchers are still studying the extent to which neurogenesis happens in adults. But they think it may be important for healthy brain functioning.

And because neurons communicate through electrical signals, we can directly alter brain circuits with electrical stimulation. Scientists have found ways to stimulate the brain and spinal cord to restore function to paralyzed muscles and relieve chronic pain.

Private companies are also trying to jump on the hype, claiming their brain stimulation products can improve memory and accelerate skill acquisition. But researchers are still trying to figure out which effects are real and which are a placebo. And since zapping your own brain could pose serious health risks, it may be best, for now, to train your neurons the old-fashioned way.

https://www.scientificamerican.com/video/decoded-what-are-neurons/ 


Researchers watch video images people are seeing, decoded from their fMRI brain scans in near-real-time

Advanced deep-learning "mind-reading" system even interprets image meaning and recreates the video images
October 27, 2017
Purdue Engineering researchers have developed a system that can show what people are seeing in real-world videos, decoded from their fMRI brain scans — an advanced new form of  “mind-reading” technology that could lead to new insights in brain function and to advanced AI systems.
The research builds on previous pioneering research at UC Berkeley’s Gallant Lab, which created a computer program in 2011 that translated fMRI brain-wave patterns into images that loosely mirrored a series of images being viewed.
The new system also decodes moving images that subjects see in videos and does it in near-real-time. But the researchers were also able to determine the subjects’ interpretations of the images they saw — for example, interpreting an image as a person or thing — and could even reconstruct the original images that the subjects saw.
Deep-learning AI system for watching what the brain sees
Watching in near-real-time what the brain sees. Visual information generated by a video (a) is processed in a cascade from the retina through the thalamus (LGN area) to several levels of the visual cortex (b), detected from fMRI activity patterns (c) and recorded. A powerful deep-learning technique (d) then models this detected cortical visual processing. Called a convolutional neural network (CNN), this model transforms every video frame into multiple layers of features, ranging from orientations and colors (the first visual layer) to high-level object categories (face, bird, etc.) in semantic (meaning) space (the eighth layer). The trained CNN model can then be used to reverse this process, reconstructing the original videos — even creating new videos that the CNN model had never watched. (credit: Haiguang Wen et al./Cerebral Cortex)
The researchers acquired 11.5 hours of fMRI data from each of three female subjects watching 972 video clips, including clips showing people or animals in action and nature scenes.
To decode the  fMRI images, the research pioneered the use of a deep-learning technique called a convolutional neural network (CNN). The trained CNN model was able to accurately decode the fMRI blood-flow data to identify specific image categories. The researchers could compare (in near-real-time) these viewed video images side-by-side with the computer’s visual interpretation of what the person’s brain saw.
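The decoding step can be illustrated with a simple linear model fit on synthetic data — a stand-in sketch, not the study's actual pipeline or data: simulated "voxel" responses depend on a stimulus category, and a least-squares map is learned from voxels back to category.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the study's setup: each "clip" has a category
# (0 or 1), and voxel responses are a noisy linear function of it.
# All data here are simulated; nothing comes from the actual study.
n_train, n_test, n_voxels = 200, 50, 30
W = rng.normal(size=n_voxels)  # unknown per-voxel "tuning"

def voxels(categories):
    signal = np.outer(categories, W)
    return signal + 0.3 * rng.normal(size=(len(categories), n_voxels))

y_train = rng.integers(0, 2, n_train)
y_test = rng.integers(0, 2, n_test)
X_train, X_test = voxels(y_train), voxels(y_test)

# Linear decoding model fit by least squares (the study likewise used
# linear maps between voxel activity and CNN feature space).
beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
pred = (X_test @ beta) > 0.5
accuracy = (pred == y_test).mean()
print(accuracy)  # well above the 0.5 chance level on this synthetic data
```

The real models decode continuous CNN features rather than a single binary label, but the principle is the same: learn a linear readout from voxel patterns, then apply it to held-out brain activity.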
The researchers were also able to figure out how certain locations in the visual cortex were associated with specific information a person was seeing.
Decoding how the visual cortex works
CNNs have been used to recognize faces and objects, and to study how the brain processes static images and other visual stimuli. But the new findings represent the first time CNNs have been used to see how the brain processes videos of natural scenes. This is “a step toward decoding the brain while people are trying to make sense of complex and dynamic visual surroundings,” said doctoral student Haiguang Wen.
Wen was first author of a paper describing the research, appearing online Oct. 20 in the journal Cerebral Cortex.
“Neuroscience is trying to map which parts of the brain are responsible for specific functionality,” Wen explained. “This is a landmark goal of neuroscience. I think what we report in this paper moves us closer to achieving that goal. Using our technique, you may visualize the specific information represented by any brain location, and screen through all the locations in the brain’s visual cortex. By doing that, you can see how the brain divides a visual scene into pieces, and re-assembles the pieces into a full understanding of the visual scene.”
The researchers also were able to use models trained with data from one human subject to predict and decode the brain activity of a different human subject, a process called “cross-subject encoding and decoding.” This finding is important because it demonstrates the potential for broad applications of such models to study brain function, including people with visual deficits.
The research has been funded by the National Institute of Mental Health. The work is affiliated with the Purdue Institute for Integrative Neuroscience. Data reported in this paper are also publicly available at the Laboratory of Integrated Brain Imaging website.


Abstract of Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision
Convolutional neural network (CNN) driven by image recognition has been shown to be able to explain cortical responses to static pictures at ventral-stream areas. Here, we further showed that such CNN could reliably predict and decode functional magnetic resonance imaging data from humans watching natural movies, despite its lack of any mechanism to account for temporal dynamics or feedback processing. Using separate data, encoding and decoding models were developed and evaluated for describing the bi-directional relationships between the CNN and the brain. Through the encoding models, the CNN-predicted areas covered not only the ventral stream, but also the dorsal stream, albeit to a lesser degree; single-voxel response was visualized as the specific pixel pattern that drove the response, revealing the distinct representation of individual cortical location; cortical activation was synthesized from natural images with high-throughput to map category representation, contrast, and selectivity. Through the decoding models, fMRI signals were directly decoded to estimate the feature representations in both visual and semantic spaces, for direct visual reconstruction and semantic categorization, respectively. These results corroborate, generalize, and extend previous findings, and highlight the value of using deep learning, as an all-in-one model of the visual cortex, to understand and decode natural vision.
references:
Haiguang Wen, Junxing Shi, Yizhen Zhang, Kun-Han Lu, Jiayue Cao, Zhongming Liu. Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision. Cerebral Cortex, 2017. DOI: 10.1093/cercor/bhx268

The brain learns completely differently than we’ve assumed, new learning theory says

New post-Hebb brain-learning model may lead to new brain treatments and breakthroughs in faster deep learning
March 28, 2018
A revolutionary new theory contradicts a fundamental assumption in neuroscience about how the brain learns. According to researchers at Bar-Ilan University in Israel led by Prof. Ido Kanter, the theory promises to transform our understanding of brain dysfunction and may lead to advanced, faster, deep-learning algorithms.
A biological schema of an output neuron, comprising a neuron’s soma (body, shown as gray circle, top) with two roots of dendritic trees (light-blue arrows), splitting into many dendritic branches (light-blue lines). The signals arriving from the connecting input neurons (gray circles, bottom) travel via their axons (red lines) and their many branches until terminating with the synapses (green stars). There, the signals connect with dendrites (some synapse branches travel to other neurons), which then connect to the soma. (credit: Shira Sardi et al./Sci. Rep)
The brain is a highly complex network containing billions of neurons. Each of these neurons communicates simultaneously with thousands of others via their synapses. A neuron collects its many synaptic incoming signals through dendritic trees.
In 1949, Donald Hebb suggested that learning occurs in the brain by modifying the strength of synapses. Hebb’s theory has remained a deeply rooted assumption in neuroscience.
Synaptic vs. dendritic learning
In vitro experimental setup. A micro-electrode array comprising 60 extracellular electrodes separated by 200 micrometers, indicating a neuron patched (connected) by an intracellular electrode (orange) and a nearby extracellular electrode (green line). (Inset) Reconstruction of a fluorescence image, showing a patched cortical pyramidal neuron (red) and its dendrites growing in different directions and in proximity to extracellular electrodes. (credit: Shira Sardi et al./Scientific Reports adapted by KurzweilAI)
Hebb was wrong, says Kanter. “A new type of experiments strongly indicates that a faster and enhanced learning process occurs in the neuronal dendrites, similarly to what is currently attributed to the synapse,” Kanter and his team suggest in an open-access paper in Nature’s Scientific Reports, published Mar. 23, 2018.
“In this new [faster] dendritic learning process, there are [only] a few adaptive parameters per neuron, in comparison to thousands of tiny and sensitive ones in the synaptic learning scenario,” says Kanter. “Does it make sense to measure the quality of the air we breathe via many tiny, distant satellite sensors at the elevation of a skyscraper, or by using one or several sensors in close proximity to the nose?” he asks. “Similarly, it is more efficient for the neuron to estimate its incoming signals close to its computational unit, the neuron.”
Image representing the current synaptic (pink) vs. the new dendritic (green) learning scenarios of the brain. In the current scenario, a neuron (black) with a small number (two in this example) of dendritic trees (center) collects incoming signals via synapses (represented by red valves), with many thousands of tiny adjustable learning parameters. In the new dendritic learning scenario (green), a few (two in this example) adjustable controls (red valves) are located in close proximity to the computational element, the neuron. The scale is such that if a neuron collecting its incoming signals is represented by a person’s faraway fingers, the length of its hands would be as tall as a skyscraper (left). (credit: Prof. Ido Kanter)
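Kanter's efficiency argument can be put in rough numbers. The counts below are illustrative order-of-magnitude figures, not data from the paper: under synaptic learning there is one adjustable weight per link, while under dendritic learning there are only a handful of adjustable controls per neuron.

```python
# Rough parameter bookkeeping for the two learning scenarios.
# All counts are invented order-of-magnitude figures for illustration,
# not measurements from Kanter's study.
neurons = 1_000_000
synapses_per_neuron = 10_000   # synaptic scenario: one weight per link
dendrites_per_neuron = 5       # dendritic scenario: one control per branch

synaptic_params = neurons * synapses_per_neuron
dendritic_params = neurons * dendrites_per_neuron

print(f"synaptic:  {synaptic_params:.1e} adjustable parameters")
print(f"dendritic: {dendritic_params:.1e} adjustable parameters")
print(f"ratio:     {synaptic_params // dendritic_params}x fewer")
```

With these (made-up) figures the dendritic scenario has thousands of times fewer parameters to tune, which is the sense in which it could support "faster and enhanced" learning.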
The researchers also found that weak synapses, which comprise the majority of our brain and were previously assumed to be insignificant, actually play an important role in the dynamics of our brain.
According to the researchers, the new learning theory may lead to advanced, faster, deep-learning algorithms and other artificial-intelligence-based applications, and also suggests that we need to reevaluate our current treatments for disordered brain functionality.
This research is supported in part by the TELEM grant of the Israel Council for Higher Education.


Abstract of Adaptive nodes enrich nonlinear cooperative learning beyond traditional adaptation by links
Physical models typically assume time-independent interactions, whereas neural networks and machine learning incorporate interactions that function as adjustable parameters. Here we demonstrate a new type of abundant cooperative nonlinear dynamics where learning is attributed solely to the nodes, instead of the network links which their number is significantly larger. The nodal, neuronal, fast adaptation follows its relative anisotropic (dendritic) input timings, as indicated experimentally, similarly to the slow learning mechanism currently attributed to the links, synapses. It represents a non-local learning rule, where effectively many incoming links to a node concurrently undergo the same adaptation. The network dynamics is now counterintuitively governed by the weak links, which previously were assumed to be insignificant. This cooperative nonlinear dynamic adaptation presents a self-controlled mechanism to prevent divergence or vanishing of the learning parameters, as opposed to learning by links, and also supports self-oscillations of the effective learning parameters. It hints on a hierarchical computational complexity of nodes, following their number of anisotropic inputs and opens new horizons for advanced deep learning algorithms and artificial intelligence based applications, as well as a new mechanism for enhanced and fast learning by neural networks.


 Brainscapes: The Warped, Wondrous Maps Written in Your Brain―And How They Guide You

Rebecca Schwarzlose

 A path-breaking journey into the brain, showing how perception, thought, and action are products of “maps” etched into your gray matter—and how technology can use them to read your mind.


Your brain is a collection of maps. That is no metaphor: scrawled across your brain’s surfaces are actual maps of the sights, sounds, and actions that hold the key to your survival. Scientists first began uncovering these maps over a century ago, but we are only now beginning to unlock their secrets—and comprehend their profound impact on our lives. Brain maps distort and shape our experience of the world, support complex thought, and make technology-enabled mind reading a modern-day reality, which raises important questions about what is real, what is fair, and what is private. They shine a light on our past and our possible futures. In the process, they invite us to view ourselves from a startling new perspective. 

In Brainscapes, Rebecca Schwarzlose combines unforgettable real-life stories, cutting-edge research, and vivid illustrations to reveal brain maps’ surprising lessons about our place in the world—and about the world’s place within us.

https://www.goodreads.com/en/book/show/53968579

Making Up the Mind: How the Brain Creates Our Mental World

by Chris Frith

 Written by one of the world's leading neuroscientists, Making Up the Mind is the first accessible account of experimental studies showing how the brain creates our mental world.
Uses evidence from brain imaging, psychological experiments and studies of patients to explore the relationship between the mind and the brain
Demonstrates that our knowledge of both the mental and physical comes to us through models created by our brain
Shows how the brain makes communication of ideas from one mind to another possible : https://www.goodreads.com/book/show/581365.Making_Up_the_Mind 

 Is Consciousness Part of the Fabric of the Universe?

Physicists and philosophers recently met to debate a theory of consciousness called panpsychism

 

By Dan Falk on September 25, 2023

 

More than 400 years ago, Galileo showed that many everyday phenomena—such as a ball rolling down an incline or a chandelier gently swinging from a church ceiling—obey precise mathematical laws. For this insight, he is often hailed as the founder of modern science. But Galileo recognized that not everything was amenable to a quantitative approach. Such things as colors, tastes and smells “are no more than mere names,” Galileo declared, for “they reside only in consciousness.” These qualities aren’t really out there in the world, he asserted, but exist only in the minds of creatures that perceive them. “Hence if the living creature were removed,” he wrote, “all these qualities would be wiped away and annihilated.”

Since Galileo’s time the physical sciences have leaped forward, explaining the workings of everything from the tiniest quarks to the largest galaxy clusters. But explaining things that reside “only in consciousness”—the red of a sunset, say, or the bitter taste of a lemon—has proven far more difficult. Neuroscientists have identified a number of neural correlates of consciousness—brain states associated with specific mental states—but have not explained how matter forms minds in the first place. As philosopher David Chalmers asked: “How does the water of the brain turn into the wine of consciousness?” He famously dubbed this quandary the “hard problem” of consciousness.

Scholars recently gathered to debate the problem at Marist College in Poughkeepsie, N.Y., during a two-day workshop focused on an idea known as panpsychism. The concept proposes that consciousness is a fundamental aspect of reality, like mass or electrical charge. The idea goes back to antiquity—Plato took it seriously—and has had some prominent supporters over the years, including psychologist William James and philosopher and mathematician Bertrand Russell. Lately it is seeing renewed interest, especially following the 2019 publication of philosopher Philip Goff’s book Galileo’s Error, which argues forcefully for the idea.

Goff, of the University of Durham in England, organized the recent event along with Marist philosopher Andrei Buckareff, and it was funded through a grant from the John Templeton Foundation. In a small lecture hall with floor-to-ceiling windows overlooking the Hudson River, roughly two dozen scholars probed the possibility that perhaps it’s consciousness all the way down.

Part of the appeal of panpsychism is that it appears to provide a workaround to the question posed by Chalmers: we no longer have to worry about how inanimate matter forms minds because mindedness was there all along, residing in the fabric of the universe. Chalmers himself has embraced a form of panpsychism and even suggested that individual particles might be somehow aware. He said in a TED Talk that a photon “might have some element of raw, subjective feeling, some primitive precursor to consciousness.” Also on board with the idea is neuroscientist Christof Koch, who noted in his 2012 book Consciousness that if one accepts consciousness as a real phenomenon that’s not dependent on any particular material—that it’s “substrate-independent,” as philosophers put it—then “it is a simple step to conclude that the entire cosmos is suffused with sentience.”

Yet panpsychism runs counter to the majority view in both the physical sciences and in philosophy that treats consciousness as an emergent phenomenon, something that arises in certain complex systems, such as human brains. In this view, individual neurons are not conscious, but thanks to the collective properties of some 86 billion neurons and their interactions—which, admittedly, are still only poorly understood—brains (along with bodies, perhaps) are conscious. Surveys suggest that slightly more than half of academic philosophers hold this view, known as “physicalism” or “emergentism,” whereas about one third reject physicalism and lean toward some alternative, of which panpsychism is one of several possibilities.

At the workshop, Goff made the case that physics has missed something essential when it comes to our inner mental life. In formulating their theories, “most physicists think about experiments,” he said. “I think they should be thinking, ‘Is my theory compatible with consciousness?’—because we know that’s real.”

Many philosophers at the meeting appeared to share Goff’s concern that physicalism falters when it comes to consciousness. “If you know every last detail about my brain processes, you still wouldn’t know what it’s like to be me,” says Hedda Hassel Mørch, a philosopher at Inland Norway University of Applied Sciences. “There is a clear explanatory gap between the physical and the mental.” Consider, for example, the difficulty of trying to describe color to someone who has only seen the world in black and white. Yanssel Garcia, a philosopher at the University of Nebraska Omaha, believes that physical facts alone are inadequate for such a task. “There is nothing of a physical sort that you could provide [a person who sees only in shades of gray] in order to have them understand what color experience is like; [they] would need to experience it themselves,” he says. “Physical science is, in principle, incapable of telling us the complete story.” Of the various alternatives that have been put forward, he says that “panpsychism is our best bet.”

But panpsychism attracts many critics as well. Some point out that it doesn’t explain how small bits of consciousness come together to form more substantive conscious entities. Detractors say that this puzzle, known as the “combination problem,” amounts to panpsychism’s own version of the hard problem. The combination problem “is the serious challenge for the panpsychist position,” Goff admits. “And it’s where most of our energies are going.”

Others question panpsychism’s explanatory power. In his 2021 book Being You, neuroscientist Anil Seth wrote that the main problems with panpsychism are that “it doesn’t really explain anything and that it doesn’t lead to testable hypotheses. It’s an easy get-out to the apparent mystery posed by the hard problem.”

While most of those invited to the workshop were philosophers, there were also talks by physicists Sean Carroll and Lee Smolin and by cognitive psychologist Donald Hoffman. Carroll, a hardcore physicalist, served as an unofficial leader of the opposition as the workshop unfolded. (He occasionally quipped, “I’m surrounded by panpsychists!”) During a well-attended public debate between Goff and Carroll, the divergence of their worldviews quickly became apparent. Goff said that physicalism has led “precisely nowhere,” and suggested that the very idea of trying to explain consciousness in physical terms was incoherent. Carroll argued that physicalism is actually doing quite well and that although consciousness is one of many phenomena that can’t be inferred from the goings-on at the microscopic level, it is nonetheless a real, emergent feature of the macroscopic world. He offered the physics of gases as a parallel example. At the micro level, one talks of atoms, molecules and forces; at the macro level, one speaks of pressure, volume and temperature. These are two kinds of explanations, depending on the “level” being studied—but present no great mystery and are not a failure on the part of physics. Before long, Goff and Carroll were deep into the weeds of the so-called knowledge argument (also known as “Mary in the black and white room”), as well as the “zombie” argument. Both boil down to the same key question: Is there something about consciousness that cannot be accounted for by physical facts alone? Much of the rhetorical ping-pong between Goff and Carroll amounted to Goff answering yes to that question and Carroll answering no.

Another objection some attendees raised is that panpsychism doesn’t address what philosophers call the “other minds” problem. (You have direct access to your own mind—but how can you deduce anything at all about another person’s mind?) “Even if panpsychism is true, there will still be vast amounts of things—namely, things related to what the experiences of others are like—that we still won’t know,” says Rebecca Chan, a philosopher at San José State University. She worries that invoking an underlying layer of mindedness is a bit like invoking God. “I sometimes wonder if the panpsychist position is similar to ‘god of the gaps’ arguments,” she says, referring to the notion that God is needed to fill the gaps in scientific knowledge.

Other ideas were batted around. The idea of cosmopsychism was floated—roughly, the notion that the universe itself is conscious. And Paul Draper, a philosopher at Purdue University who participated via Zoom, talked about a subtly different idea known as “psychological ether theory”—essentially that brains don’t produce consciousness but rather make use of consciousness. In this view, consciousness was already there before brains existed, like an all-pervasive ether. If the idea is correct, he writes, “then (in all likelihood) God exists.”

Hoffman, a cognitive scientist at the University of California, Irvine, who also addressed the workshop via Zoom, advocates rejecting the idea of spacetime and looking for something deeper. (He cited the increasingly popular idea in physics lately that space and time may not be fundamental but may instead be emergent phenomena themselves.) The deeper entity related to consciousness, Hoffman suggests, may consist of “subjects and experiences” that he says “are entities beyond spacetime, not within spacetime.” He developed this idea in a 2023 paper entitled “Fusions of Consciousness.”

Smolin, a physicist at the Perimeter Institute for Theoretical Physics in Ontario, who also participated via Zoom, has similarly been working on theories that appear to offer a more central role for conscious agents. In a 2020 paper, he suggested that the universe “is composed of a set of partial views of itself” and that “conscious perceptions are aspects of some views”—a perspective that he says can be thought of as “a restricted form of panpsychism.”

Carroll, speaking after the session that included both Hoffman and Smolin, noted that his own views diverged from those of the speakers within the first couple of minutes. (Over lunch, he noted that attending the workshop sometimes felt like being on a subreddit for fans of a TV show that you’re just not into.) He admitted that endless debates over the nature of “reality” sometimes left him frustrated. “People ask me, ‘What is physical reality?’ It’s physical reality! There’s nothing that it ‘is.’ What do you want me to say, that it’s made of macaroni or something?” (Even Carroll, however, admits that there’s more to reality than meets the eye. He’s a strong supporter of the “many worlds” interpretation of quantum mechanics, which holds that our universe is just one facet of a vast quantum multiverse.)

If all of this sounds like it couldn’t possibly have any practical value, Goff raised the possibility that how we conceive of minds can have ethical implications. Take the question of whether fish feel pain. Traditional science can only study a fish’s outward behavior, not its mental state. To Goff, focusing on the fish’s behavior is not only wrong-headed but “horrific” because it leaves out what’s actually most important—what the fish actually feels. “We’re going to stop asking if fish are conscious and just look at their behavior? Who gives a shit about the behavior? I want to know if it has an inner life; that’s all that matters!” For physicalists such as Carroll, however, feelings and behavior are intimately linked—which means we can avoid causing an animal to suffer by not putting it in a situation where it appears to be suffering based on its behavior. “If there were no connection between them [behavior and feelings], we would indeed be in trouble,” says Carroll, “but that’s not our world.”

Seth, the neuroscientist, was not at the workshop—but I asked him where he stands in the debate over physicalism and its various alternatives. Physicalism, he says, still offers more “empirical grip” than its competitors—and he laments what he sees as excessive hand-wringing over its alleged failures, including the supposed hardness of the hard problem. “Critiquing physicalism on the basis that it has ‘failed’ is willful mischaracterization,” he says. “It’s doing just fine, as progress in consciousness science readily attests.” In a recently published article in the Journal of Consciousness Studies, Seth adds: “Asserting that consciousness is fundamental and ubiquitous does nothing to shed light on the way an experience of blueness is the way it is, and not some other way. Nor does it explain anything about the possible functions of consciousness, nor why consciousness is lost in states such as dreamless sleep, general anaesthesia, and coma.”

Even those who lean toward panpsychism sometimes seem hesitant to dive into the deep end. As Garcia put it, in spite of the allure of a universe imbued with consciousness, “I would love to be talked out of it.”

https://www.scientificamerican.com/article/is-consciousness-part-of-the-fabric-of-the-universe/

 Being You: A New Science of Consciousness 

 by Anil Seth 

A British neuroscientist argues that we do not perceive the world as it objectively is; rather, we are prediction machines, constantly inventing our world and correcting our mistakes by the microsecond, and we can now observe the biological mechanisms in the brain that accomplish this process of consciousness.

https://www.goodreads.com/en/book/show/53036979-being-you


Tracking a thought’s fleeting trip through the brain
Why people sometimes say things before they think
January 17, 2018

Repeating a word: as the brain receives (yellow), interprets (red), and responds (blue) within a second, the prefrontal cortex (red) coordinates all areas of the brain involved. (video credit: Avgusta Shestyuk/UC Berkeley).
Recording the electrical activity of neurons directly from the surface of the brain using electrocorticography (ECoG)*, neuroscientists were able to track the flow of thought across the brain in real time for the first time. They showed clearly how the prefrontal cortex at the front of the brain coordinates activity to help us act in response to a perception.
Here’s what they found.
For a simple task, such as repeating a word seen or heard:
The visual and auditory cortices react first to perceive the word. The prefrontal cortex then kicks in to interpret the meaning, followed by activation of the motor cortex (preparing for a response). During the half-second between stimulus and response, the prefrontal cortex remains active to coordinate all the other brain areas.
For a particularly hard task, like determining the antonym of a word:
While the brain takes several seconds to respond, the prefrontal cortex recruits other areas of the brain — probably including memory networks (not tracked). The prefrontal cortex then hands off to the motor cortex to generate a spoken response.
In both cases, the brain begins to prepare the motor areas to respond very early (during initial stimulus presentation) — suggesting that we get ready to respond even before we know what the response will be.
“This might explain why people sometimes say things before they think,” said Avgusta Shestyuk, a senior researcher in UC Berkeley’s Helen Wills Neuroscience Institute and lead author of a paper reporting the results in the current issue of Nature Human Behavior.

For a more difficult task, like saying a word that is the opposite of another word, people’s brains required 2–3 seconds to detect (yellow), interpret and search for an answer (red), and respond (blue) — with sustained prefrontal lobe activity (red) coordinating all areas of the brain involved. (video credit: Avgusta Shestyuk/UC Berkeley).
The research backs up what neuroscientists have pieced together over the past decades from studies in monkeys and humans.
“These very selective studies have found that the frontal cortex is the orchestrator, linking things together for a final output,” said co-author Robert Knight, a UC Berkeley professor of psychology and neuroscience and a professor of neurology and neurosurgery at UCSF. “Here we have eight different experiments, some where the patients have to talk and others where they have to push a button, where some are visual and others auditory, and all found a universal signature of activity centered in the prefrontal lobe that links perception and action. It’s the glue of cognition.”
Researchers at Johns Hopkins University, California Pacific Medical Center, and Stanford University were also involved. The work was supported by the National Science Foundation, National Institute of Mental Health, and National Institute of Neurological Disorders and Stroke.
* Other neuroscientists have used functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) to record activity in the thinking brain. The UC Berkeley scientists instead employed a much more precise technique, electrocorticography (ECoG), which records from several hundred electrodes placed on the brain surface and detects activity in the thin outer region, the cortex, where thinking occurs. ECoG provides better time resolution than fMRI and better spatial resolution than EEG, but requires access to epilepsy patients undergoing highly invasive surgery involving opening the skull to pinpoint the location of seizures. The new study employed 16 epilepsy patients who agreed to participate in experiments while undergoing epilepsy surgery at UC San Francisco and California Pacific Medical Center in San Francisco, Stanford University in Palo Alto and Johns Hopkins University in Baltimore. Once the electrodes were placed on the brains of each patient, the researchers conducted a series of eight tasks that included visual and auditory stimuli. The tasks ranged from simple, such as repeating a word or identifying the gender of a face or a voice, to complex, such as determining a facial emotion, uttering the antonym of a word, or assessing whether an adjective describes the patient’s personality.


Abstract of Persistent neuronal activity in human prefrontal cortex links perception and action
How do humans flexibly respond to changing environmental demands on a subsecond temporal scale? Extensive research has highlighted the key role of the prefrontal cortex in flexible decision-making and adaptive behaviour, yet the core mechanisms that translate sensory information into behaviour remain undefined. Using direct human cortical recordings, we investigated the temporal and spatial evolution of neuronal activity (indexed by the broadband gamma signal) in 16 participants while they performed a broad range of self-paced cognitive tasks. Here we describe a robust domain- and modality-independent pattern of persistent stimulus-to-response neural activation that encodes stimulus features and predicts motor output on a trial-by-trial basis with near-perfect accuracy. Observed across a distributed network of brain areas, this persistent neural activation is centred in the prefrontal cortex and is required for successful response implementation, providing a functional substrate for domain-general transformation of perception into action, critical for flexible behaviour: http://www.kurzweilai.net/tracking-a-thoughts-fleeting-trip-through-the-brain?utm_source=KurzweilAI+Weekly+Newsletter&utm_campaign=2c531f5741-UA-946742-1&utm_medium=email&utm_term=0_147a5a48c1-2c531f5741-282212701
Superconducting ‘synapse’ could enable powerful future neuromorphic supercomputers
Fires 200 million times faster than human brain, uses one ten-thousandth as much energy

February 7, 2018
A superconducting “synapse” that “learns” like a biological system, operating like the human brain, has been built by researchers at the National Institute of Standards and Technology (NIST).
The NIST switch, described in an open-access paper in Science Advances, provides a missing link for neuromorphic (brain-like) computers, according to the researchers. Such “non-von Neumann architecture” future computers could significantly speed up analysis and decision-making for applications such as self-driving cars and cancer diagnosis.
The research is supported by the Intelligence Advanced Research Projects Activity (IARPA) Cryogenic Computing Complexity Program, which was launched in 2014 with the goal of paving the way to “a new generation of superconducting supercomputer development beyond the exascale.”*
NIST’s artificial synapse is a metallic cylinder 10 micrometers in diameter — about 10 times larger than a biological synapse. It simulates a real synapse by processing incoming electrical spikes (pulsed current from a neuron) and customizing spiking output signals. The more firing between cells (or processors), the stronger the connection. That process enables both biological and artificial synapses to maintain old circuits and create new ones.
Dramatically faster and lower-energy than human synapses
But the NIST synapse has two unique features that the researchers say are superior to human synapses and to other artificial synapses:
  • It can fire much faster than the human brain — 1 billion times per second, compared to a brain cell’s rate of about 50 times per second — with underlying Josephson plasma frequencies exceeding 100 GHz.
  • It uses only about one ten-thousandth as much energy as a human synapse. The spiking energy is less than 1 attojoule** — roughly equivalent to the minuscule chemical energy bonding two atoms in a molecule — compared to the roughly 10 femtojoules (10,000 attojoules) per synaptic event in the human brain. Current neuromorphic platforms are orders of magnitude less efficient than the human brain. “We don’t know of any other artificial synapse that uses less energy,” NIST physicist Mike Schneider said.
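The energy comparison above can be checked with a line of arithmetic; the numbers below are simply the figures quoted in the bullet.

```python
# Figures exactly as quoted in the article.
attojoule = 1e-18   # joules
femtojoule = 1e-15  # joules (1 femtojoule = 1,000 attojoules)

nist_spike = 1 * attojoule       # "less than 1 attojoule" per spike
brain_synapse = 10 * femtojoule  # ~10 fJ per synaptic event

print(brain_synapse / attojoule)   # -> 10,000 attojoules, as stated
print(nist_spike / brain_synapse)  # -> ~1e-4, "one ten-thousandth"
```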
Superconducting devices mimicking brain cells and transmission lines have been developed, but until now, efficient synapses — a crucial piece — have been missing. The new Josephson junction-based artificial synapse would be used in neuromorphic computers made of superconducting components (which can transmit electricity without resistance), so they would be more efficient than designs based on semiconductors or software. Data would be transmitted, processed, and stored in units of magnetic flux.
The brain is especially powerful for tasks like image recognition because it processes data both in sequence and simultaneously and it stores memories in synapses all over the system. A conventional computer processes data only in sequence and stores memory in a separate unit.
The new NIST artificial synapses combine small size, superfast spiking signals, and low energy needs, and could be stacked into dense 3D circuits for creating large systems. They could provide a unique route to a far more complex and energy-efficient neuromorphic system than has been demonstrated with other technologies, according to the researchers.
Nature News does raise some concerns about the research, quoting neuromorphic-technology experts: “Millions of synapses would be necessary before a system based on the technology could be used for complex computing; it remains to be seen whether it will be possible to scale it to this level. … The synapses can only operate at temperatures close to absolute zero, and need to be cooled with liquid helium. This might make the chips impractical for use in small devices, although a large data centre might be able to maintain them. … We don’t yet understand enough about the key properties of the [biological] synapse to know how to use them effectively.”


Inside a superconducting synapse 
The NIST synapse is a customized Josephson junction***, long used in NIST voltage standards. These junctions are a sandwich of superconducting materials with an insulator as a filling. When an electrical current through the junction exceeds a level called the critical current, voltage spikes are produced.
Each artificial synapse uses standard niobium electrodes but has a unique filling made of nanoscale clusters (“nanoclusters”) of manganese in a silicon matrix. The nanoclusters — about 20,000 per square micrometer — act like tiny bar magnets with “spins” that can be oriented either randomly or in a coordinated manner. The number of nanoclusters pointing in the same direction can be controlled, which affects the superconducting properties of the junction.
The synapse rests in a superconducting state, except when it’s activated by incoming current and starts producing voltage spikes. Researchers apply current pulses in a magnetic field to boost the magnetic ordering — that is, the number of nanoclusters pointing in the same direction.
This magnetic effect progressively reduces the critical current level, making it easier to create a normal conductor and produce voltage spikes. The critical current is the lowest when all the nanoclusters are aligned. The process is also reversible: Pulses are applied without a magnetic field to reduce the magnetic ordering and raise the critical current. This design, in which different inputs alter the spin alignment and resulting output signals, is similar to how the brain operates.
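The threshold behaviour described above can be caricatured in code. This is a hedged toy sketch, not NIST's device physics: the class name, the linear interpolation between `i_c_max` and `i_c_min`, and the fixed ordering step per pulse are all illustrative assumptions. Only the qualitative relations come from the text: more nanocluster ordering lowers the critical current, the device spikes when input current exceeds it, and pulses without a magnetic field reverse the ordering.

```python
# Toy threshold model (my own sketch, NOT NIST's device model):
# the junction spikes when input current exceeds its critical
# current I_c, and I_c falls as more nanoclusters align.
class ToyJosephsonSynapse:
    def __init__(self, i_c_max=1.0, i_c_min=0.2):
        self.i_c_max = i_c_max  # critical current, fully disordered
        self.i_c_min = i_c_min  # critical current, fully aligned
        self.order = 0.0        # fraction of aligned nanoclusters, 0..1

    @property
    def critical_current(self):
        # More magnetic ordering -> lower spiking threshold
        # (linear interpolation is an illustrative assumption).
        return self.i_c_max - self.order * (self.i_c_max - self.i_c_min)

    def train_pulse(self, in_field=True, step=0.1):
        # Pulses in a magnetic field raise the ordering; pulses
        # without a field reduce it (the reversible behaviour
        # described in the article).
        delta = step if in_field else -step
        self.order = min(1.0, max(0.0, self.order + delta))

    def spikes(self, input_current):
        return input_current > self.critical_current

s = ToyJosephsonSynapse()
print(s.spikes(0.5))  # False: untrained threshold is still high
for _ in range(10):
    s.train_pulse(in_field=True)
print(s.spikes(0.5))  # True: training lowered the threshold
```

Lowering the threshold plays the role of strengthening a synaptic weight: the same input current that was sub-threshold before training produces spikes after it.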
Synapse behavior can also be tuned by changing how the device is made and its operating temperature. By making the nanoclusters smaller, researchers can reduce the pulse energy needed to raise or lower the magnetic order of the device. Raising the operating temperature slightly from minus 271.15 degrees C (minus 456.07 degrees F) to minus 269.15 degrees C (minus 452.47 degrees F), for example, results in more and higher voltage spikes.


* Future exascale supercomputers would run at 10^18 flops (“flops” = floating-point operations per second), i.e., one exaflop, or more. The current fastest supercomputer — the Sunway TaihuLight — operates at about 0.1 exaflops; zettascale computers, the next step beyond exascale, would run 10,000 times faster than that.
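The footnote's scales can be sanity-checked numerically, using the figures as quoted:

```python
exaflop = 1e18               # floating-point operations per second
taihulight = 0.1 * exaflop   # current fastest, per the footnote
zettascale = 1000 * exaflop  # zetta = 1,000 x exa

print(zettascale / taihulight)  # 10000.0 -> "10,000 times faster"
```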
** An attojoule is 10^-18 joule, a unit of energy, and is one-thousandth of a femtojoule.
*** The Josephson effect is the phenomenon of supercurrent — i.e., a current that flows indefinitely long without any voltage applied — across a device known as a Josephson junction, which consists of two superconductors coupled by a weak link. — Wikipedia


Abstract of Ultralow power artificial synapses using nanotextured magnetic Josephson junctions
Neuromorphic computing promises to markedly improve the efficiency of certain computational tasks, such as perception and decision-making. Although software and specialized hardware implementations of neural networks have made tremendous accomplishments, both implementations are still many orders of magnitude less energy efficient than the human brain. We demonstrate a new form of artificial synapse based on dynamically reconfigurable superconducting Josephson junctions with magnetic nanoclusters in the barrier. The spiking energy per pulse varies with the magnetic configuration, but in our demonstration devices, the spiking energy is always less than 1 aJ. This compares very favorably with the roughly 10 fJ per synaptic event in the human brain. Each artificial synapse is composed of a Si barrier containing Mn nanoclusters with superconducting Nb electrodes. The critical current of each synapse junction, which is analogous to the synaptic weight, can be tuned using input voltage spikes that change the spin alignment of Mn nanoclusters. We demonstrate synaptic weight training with electrical pulses as small as 3 aJ. Further, the Josephson plasma frequencies of the devices, which determine the dynamical time scales, all exceed 100 GHz. These new artificial synapses provide a significant step toward a neuromorphic platform that is faster, more energy-efficient, and thus can attain far greater complexity than has been demonstrated with other technologies.
references:
M.L. Schneider, C.A. Donnelly, S.E. Russek, B. Baek, M.R. Pufall, P.F. Hopkins, P.D. Dresselhaus, S. P. Benz and W.H. Rippard. Ultra-low power artificial synapses using nano-textured magnetic Josephson junctions. Science Advances, 2018 DOI: 10.1126/sciadv.1701329

There Is No Such Thing as Conscious Thought
Philosopher Peter Carruthers insists that conscious thought, judgment and volition are illusions. They arise from processes of which we are forever unaware

Peter Carruthers, Distinguished University Professor of Philosophy at the University of Maryland, College Park, is an expert on the philosophy of mind who draws heavily on empirical psychology and cognitive neuroscience. He outlined many of his ideas on conscious thinking in his 2015 book The Centered Mind: What the Science of Working Memory Shows Us about the Nature of Human Thought. More recently, in 2017, he published a paper with the astonishing title of “The Illusion of Conscious Thought.” In the following excerpted conversation, Carruthers explains to editor Steve Ayan the reasons for his provocative proposal.
What makes you think conscious thought is an illusion?
I believe that the whole idea of conscious thought is an error. I came to this conclusion by following out the implications of two of the main theories of consciousness. The first is what is called the Global Workspace Theory, which is associated with neuroscientists Stanislas Dehaene and Bernard Baars. Their theory states that to be considered conscious a mental state must be among the contents of working memory (the “user interface” of our minds) and thereby be available to other mental functions, such as decision-making and verbalization. Accordingly, conscious states are those that are “globally broadcast,” so to speak. The alternative view, proposed by Michael Graziano, David Rosenthal and others, holds that conscious mental states are simply those that you know of, that you are directly aware of in a way that doesn’t require you to interpret yourself. You do not have to read your own mind to know of them. Now, whichever view you adopt, it turns out that thoughts such as decisions and judgments should not be considered to be conscious. They are not accessible in working memory, nor are we directly aware of them. We merely have what I call “the illusion of immediacy”—the false impression that we know our thoughts directly.
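The "global broadcast" idea can be pictured as a simple architecture. The sketch below is my own illustration, not Dehaene and Baars's model: a limited-capacity workspace whose contents are pushed to every subscribed consumer process (decision-making, verbalization, and so on); a state counts as "conscious", on this picture, only while it sits in the workspace and is being broadcast.

```python
# Toy Global Workspace: conscious = currently held in the
# limited-capacity workspace and broadcast to all consumers.
class GlobalWorkspace:
    def __init__(self, capacity=1):
        self.capacity = capacity
        self.contents = []
        self.consumers = []  # e.g. decision-making, verbalization

    def subscribe(self, consumer):
        self.consumers.append(consumer)

    def broadcast(self, state):
        self.contents.append(state)
        if len(self.contents) > self.capacity:
            self.contents.pop(0)  # old contents fall out of awareness
        for consumer in self.consumers:
            consumer(state)

received = []
ws = GlobalWorkspace()
ws.subscribe(lambda s: received.append(("decision", s)))
ws.subscribe(lambda s: received.append(("speech", s)))

ws.broadcast("inner speech: 'it is raining'")
# On Carruthers's reading, an amodal judgment that never enters
# the workspace is never broadcast -- and so never conscious.
print(ws.contents, len(received))
```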
One might easily agree that the sources of one’s thoughts are hidden from view—we just don’t know where our ideas come from. But once we have them and we know it, that’s where consciousness begins. Don’t we have conscious thoughts at least in this sense?
In ordinary life we are quite content to say things like “Oh, I just had a thought” or “I was thinking to myself.” By this we usually mean instances of inner speech or visual imagery, which are at the center of our stream of consciousness—the train of words and visual contents represented in our minds. I think that these trains are indeed conscious. In neurophilosophy, however, we refer to “thought” in a much more specific sense. In this view, thoughts include only nonsensory mental attitudes, such as judgments, decisions, intentions and goals. These are amodal, abstract events, meaning that they are not sensory experiences and are not tied to sensory experiences. Such thoughts never figure in working memory. They never become conscious. And we only ever know of them by interpreting what does become conscious, such as visual imagery and the words we hear ourselves say in our heads.
So consciousness always has a sensory basis?
I claim that consciousness is always bound to a sensory modality, that there is inevitably some auditory, visual or tactile aspect to it. All kinds of mental imagery, such as inner speech or visual memory, can of course be conscious. We see things in our mind’s eye; we hear our inner voice. What we are conscious of are the sensory-based contents present in working memory.
In your view, is consciousness different from awareness?
That’s a difficult question. Some philosophers believe that consciousness can be richer than what we can actually report. For example, our visual field seems to be full of detail—everything is just there, already consciously seen. Yet experiments in visual perception, especially the phenomenon of inattentional blindness, show that in fact we consciously register only a very limited slice of the world. [Editors’ note: A person experiencing inattentional blindness may not notice that a gorilla walked across a basketball court while the individual was focusing on the movement of the ball.] So, what we think we see, our subjective impression, is different from what we are actually aware of. Probably our conscious mind grasps only the gist of much of what is out there in the world, a sort of statistical summary. Of course, for most people consciousness and awareness coincide most of the time. Still, I think, we are not directly aware of our thoughts, just as we are not directly aware of the thoughts of other people. We interpret our own mental states in much the same way as we interpret the minds of others, except that in our own case we can use our own visual imagery and inner speech as data.
You call the process of how people learn their own thoughts interpretive sensory access, or ISA. Where does the interpretation come into play?
Let’s take our conversation as an example—you are surely aware of what I am saying to you at this very moment. But the interpretative work and inferences on which you base your understanding are not accessible to you. All the highly automatic, quick inferences that form the basis of your understanding of my words remain hidden. You seem to just hear the meaning of what I say. What rises to the surface of your mind are the results of these mental processes. That is what I mean: The inferences themselves, the actual workings of our mind, remain unconscious. All that we are aware of are their products. And my access to your mind, when I listen to you speak, is not different in any fundamental way from my access to my own mind when I am aware of my own inner speech. The same sorts of interpretive processes still have to take place.
Why, then, do we have the impression of direct access to our mind?
The idea that minds are transparent to themselves (that everyone has direct awareness of their own thoughts) is built into the structure of our “mind reading” or “theory of mind” faculty, I suggest. The assumption is a useful heuristic when interpreting the statements of others. If someone says to me, “I want to help you,” I have to interpret whether the person is sincere, whether he is speaking literally or ironically, and so on; that is hard enough. If I also had to interpret whether he is interpreting his own mental state correctly, then that would make my task impossible. It is far simpler to assume that he knows his own mind (as, generally, he does). The illusion of immediacy has the advantage of enabling us to understand others with much greater speed and probably with little or no loss of reliability. If I had to figure out to what extent others are reliable interpreters of themselves, then that would make things much more complicated and slow. It would take a great deal more energy and interpretive work to understand the intentions and mental states of others. And then it is the same heuristic transparency-of-mind assumption that makes my own thoughts seem transparently available to me.
What is the empirical basis of your hypothesis?
There is a great deal of experimental evidence from normal subjects, especially of their readiness to falsely, but unknowingly, fabricate facts or memories to fill in for lost ones. Moreover, if introspection were fundamentally different from reading the minds of others, one would expect there to be disorders in which only one capacity was damaged but not the other. But that’s not what we find. Autism spectrum disorders, for example, are not only associated with limited access to the thoughts of others but also with a restricted understanding of oneself. In patients with schizophrenia, the insight both into one’s own mind and that of others is distorted. There seems to be only a single mind-reading mechanism on which we depend both internally and in our social relations.
What side effect does the illusion of immediacy have?
The price we pay is that we believe subjectively that we are possessed of far greater certainty about our attitudes than we actually have. We believe that thinking we are in a given mental state is the same as being in that state. As soon as I believe I am hungry, I am. Once I believe I am happy, I am. But that is not really the case. It is a trick of the mind that makes us equate the act of thinking one has a thought with the thought itself.
What might be the alternative? What should we do about it, if only we could?
Well, in theory, we would have to distinguish between an experiential state itself on the one hand and our judgment or belief underlying this experience on the other hand. There are rare instances when we succeed in doing so: for example, when I feel nervous or irritated but suddenly realize that I am actually hungry and need to eat.
You mean that a more appropriate way of seeing it would be: “I think I’m angry, but maybe I’m not”?
That would be one way of saying it. It is astonishingly difficult to maintain this kind of distanced view of oneself. Even after many years of consciousness studies, I’m still not all that good at it (laughs).
Brain researchers put a lot of effort into figuring out the neural correlates of consciousness, the NCC. Will this endeavor ever be successful?
I think we already know a lot about how and where working memory is represented in the brain. Our philosophical concepts of what consciousness actually is are much more informed by empirical work than they were even a few decades ago. Whether we can ever close the gap between subjective experiences and neurophysiological processes that produce them is still a matter of dispute.
Would you agree that we are much more unconscious than we think we are?
I would rather say that consciousness is not what we generally think it is. It is not direct awareness of our inner world of thoughts and judgments but a highly inferential process that only gives us the impression of immediacy.
Where does that leave us with our concept of freedom and responsibility? 
We can still have free will and be responsible for our actions. Conscious and unconscious are not separate spheres; they operate in tandem. We are not simply puppets manipulated by our unconscious thoughts, because obviously, conscious reflection does have effects on our behavior. It interacts with and is fueled by implicit processes. In the end, being free means acting in accordance with one’s own reasons—whether these are conscious or not.


Briefly Explained: Consciousness
Consciousness is generally understood to mean that an individual not only has an idea, recollection or perception but also knows that he or she has it. For perception, this knowledge encompasses both the experience of the outer world (“it’s raining”) and one’s internal state (“I’m angry”). Experts do not know how human consciousness arises. Nevertheless, they generally agree on how to define various aspects of it. Thus, they distinguish “phenomenal consciousness” (the distinctive feel when we perceive, for example, that an object is red) and “access consciousness” (when we can report on a mental state and use it in decision-making).
Important characteristics of consciousness include subjectivity (the sense that the mental event belongs to me), continuity (it appears unbroken) and intentionality (it is directed at an object). According to a popular scheme of consciousness known as Global Workspace Theory, a mental state or event is conscious if a person can bring it to mind to carry out such functions as decision-making or remembering, although how such accessing occurs is not precisely understood. Investigators assume that consciousness is not the product of a single region of the brain but of larger neural networks. Some theoreticians go so far as to posit that it is not even the product of an individual brain. For example, philosopher Alva Noë of the University of California, Berkeley, holds that consciousness is not the work of a single organ but is more like a dance: a pattern of meaning that emerges between brains.  –S.A.


   How Do I Know I’m Not the Only Conscious Being in the Universe?

The solipsism problem, also called the problem of other minds, lurks at the heart of science, philosophy, religion, the arts and the human condition

It is a central dilemma of human life—more urgent, arguably, than the inevitability of suffering and death. I have been brooding and ranting to my students about it for years. It surely troubles us more than ever during this plague-ridden era. Philosophers call it the problem of other minds. I prefer to call it the solipsism problem.

Solipsism, technically, is an extreme form of skepticism, at once utterly nuts and irrefutable. It holds that you are the only conscious being in existence. The cosmos sprang into existence when you became sentient, and it will vanish when you die. As crazy as this proposition seems, it rests on a brute fact: each of us is sealed in an impermeable prison cell of subjective awareness. Even our most intimate exchanges might as well be carried out via Zoom.

You experience your own mind every waking second, but you can only infer the existence of other minds through indirect means. Other people seem to possess conscious perceptions, emotions, memories, intentions, just as you do, but you can’t be sure they do. You can guess how the world looks to me, based on my behavior and utterances, including these words you are reading, but you have no first-hand access to my inner life. For all you know, I might be a mindless bot.

Natural selection instilled in us the capacity for a so-called theory of mind—a talent for intuiting others’ emotions and intentions. But we have a countertendency to deceive each other, and to fear we are being deceived. The ultimate deception would be pretending you’re conscious when you’re not.

The solipsism problem thwarts efforts to explain consciousness. Scientists and philosophers have proposed countless contradictory hypotheses about what consciousness is and how it arises. Panpsychists contend that all creatures and even inanimate matter—even a single proton!—possess consciousness. Hard-core materialists insist, conversely (and perversely), that not even humans are all that conscious.

The solipsism problem prevents us from verifying or falsifying these and other claims. I can’t be certain that you are conscious, let alone a jellyfish, sexbot or doorknob. As long as we lack what neuroscientist Christof Koch calls a consciousness meter—a device that can measure consciousness in the same way that a thermometer measures temperature—theories of consciousness will remain in the realm of pure speculation.

But the solipsism problem is far more than a technical philosophical matter. It is a paranoid but understandable response to the feelings of solitude that lurk within us all. Even if you reject solipsism as an intellectual position, you sense it, emotionally, whenever you feel estranged from others, whenever you confront the awful truth that you can never know, really know another person, and no one can really know you.

Religion is one response to the solipsism problem. Our ancestors dreamed up a supernatural entity who bears witness to our innermost fears and desires. No matter how lonesome we feel, how alienated from our fellow humans, God is always there watching over us. He sees our souls, our most secret selves, and He loves us anyway. Wouldn’t it be nice to think so.

The arts, too, can be seen as attempts to overcome the solipsism problem. The artist, musician, poet, novelist says, This is how my life feels or This is how life might feel for another person. She helps us imagine what it’s like to be a Black woman trying to save her children from slavery, or a Jewish ad salesman wandering through Dublin, wondering whether his wife is cheating on him. But to imagine is not to know.

Some of my favorite works of art dwell on the solipsism problem. In I’m thinking of ending things and earlier films, as well as his new novel Antkind, Charlie Kaufman depicts other people as projections of a disturbed protagonist. Kaufman no doubt hopes to help us, and himself, overcome the solipsism problem by venting his anxiety about it, but I find his dramatizations almost too evocative.

Love, ideally, gives us the illusion of transcending the solipsism problem. You feel you really know someone, from the inside out, and she knows you. In moments of ecstatic sexual communion or mundane togetherness—while you’re eating pizza and watching The Alienist, say—you fuse with your beloved. The barrier between you seems to vanish.

Inevitably, however, your lover disappoints, deceives, betrays you. Or, less dramatically, some subtle bio-cognitive shift occurs. You look at her as she nibbles her pizza and think, Who, what, is this odd creature? The solipsism problem has reemerged, more painful and suffocating than ever.

It gets worse. In addition to the problem of other minds, there is the problem of our own. As evolutionary psychologist Robert Trivers points out, we deceive ourselves at least as effectively as we deceive others. A corollary of this dark truth is that we know ourselves even less than we know others.

If a lion could talk, Wittgenstein said, we couldn’t understand it. The same is true, I suspect, of our own deepest selves. If you could eavesdrop on your subconscious, you’d hear nothing but grunts, growls and moans—or perhaps the high-pitched squeaks of raw machine-code data coursing through a channel.

For the mentally ill, solipsism can become terrifyingly vivid. Victims of Capgras syndrome think that identical imposters have replaced their loved ones. If you have Cotard’s delusion, also known as walking corpse syndrome, you become convinced that you are dead. A much more common disorder is derealization, which makes everything (you, others, reality as a whole) feel strange, phony, simulated.

Derealization plagued me throughout my youth. One episode was self-induced. Hanging out with friends in high school, I thought it would be fun to hyperventilate, hold my breath and let someone squeeze my chest until I passed out. When I woke up, I didn’t recognize my buddies. They were demons, jeering at me. For weeks after that horrifying sensation faded, everything still felt unreal, as if I were in a dreadful movie.

What if those afflicted with these alleged delusions actually see reality clearly? According to the Buddhist doctrine of anatta, the self does not really exist. When you try to pin down your own essence, to grasp it, it slips through your fingers.

We have devised methods for cultivating self-knowledge and quelling our anxieties, such as meditation and psychotherapy. But these practices strike me as forms of self-brainwashing. When we meditate or see a therapist, we are not solving the solipsism problem. We are merely training ourselves to ignore it, to suppress the horror and despair that it triggers.

We have also invented mythical places in which the solipsism problem vanishes. We transcend our solitude and merge with others into a unified whole. We call these places heaven, nirvana, the Singularity. But solipsism is a cave from which we cannot escape—except, perhaps, by pretending it doesn’t exist. Or, paradoxically, by confronting it, the way Charlie Kaufman does. Knowing we are in the cave may be as close as we can get to escaping it.

Conceivably, technology could deliver us from the solipsism problem. Christof Koch proposes that we all get brain implants with wi-fi, so we can meld minds through a kind of high-tech telepathy. Philosopher Colin McGinn suggests a technique that involves “brain-splicing,” transferring bits of your brain into mine, and vice versa.

But do we really want to escape the prison of our subjective selves? The archnemesis of Star Trek: The Next Generation is the Borg, a legion of tech-enhanced humanoids who have fused into one big meta-entity. Borg members have lost their separation from each other and hence their individuality. When they meet ordinary humans, they mutter in a scary monotone, “You will be assimilated. Resistance is futile.”

As hard as solitude can be for me to bear, I don’t want to be assimilated. If solipsism haunts me, so does oneness, a unification so complete that it extinguishes my puny mortal self. Perhaps the best way to cope with the solipsism problem in this weird, lonely time is to imagine a world in which it has vanished.

https://www.scientificamerican.com/article/how-do-i-know-im-not-the-only-conscious-being-in-the-universe/



The brain: a radical rethink is needed to understand it
March 17, 2017
 Henrik Jörntell, Senior Lecturer in Neuroscience, Lund University
Understanding the human brain is arguably the greatest challenge of modern science. The leading approach for most of the past 200 years has been to link its functions to different brain regions or even individual neurons (brain cells). But recent research increasingly suggests that we may be taking completely the wrong path if we are to ever understand the human mind.
The idea that the brain is made up of numerous regions that perform specific tasks is known as “modularity.” And, at first glance, it has been successful. For example, it can provide an explanation for how we recognise faces by activating a chain of specific brain regions in the occipital and temporal lobes. Bodies, however, are processed by a different set of brain regions. And scientists believe that yet other areas — memory regions — help combine these perceptual stimuli to create holistic representations of people. The activity of certain brain areas has also been linked to specific conditions and diseases.
The reason this approach has been so popular is partly due to technologies which are giving us unprecedented insight into the brain. Functional magnetic resonance imaging (fMRI), which tracks changes in blood flow in the brain, allows scientists to see brain areas light up in response to activities — helping them map functions. Meanwhile, optogenetics, a technique that uses genetic modification of neurons so that their electrical activity can be controlled with light pulses, can help us to explore their specific contribution to brain function.
Distributed functions
While both approaches generate fascinating results, it is not clear whether they will ever provide a meaningful understanding of the brain. A neuroscientist who finds a correlation between a neuron or brain region and a specific but in principle arbitrary physical parameter, such as pain, will be tempted to draw the conclusion that this neuron or this part of the brain controls pain. This is ironic because, even in the neuroscientist, the brain’s inherent function is to find correlations — in whatever task it performs.
But what if we instead considered the possibility that all brain functions are distributed across the brain and that all parts of the brain contribute to all functions? If that is the case, correlations found so far may be a perfect trap of the intellect. We then have to solve the problem of how the region or the neuron type with the specific function interacts with other parts of the brain to generate meaningful, integrated behaviour. So far, there is no general solution to this problem — just hypotheses in specific cases, such as for recognising people.
The problem can be illustrated by a recent study that found that the psychedelic drug LSD can disrupt the modular organisation that can explain vision. What’s more, the level of disorganisation is linked with the severity of the “breakdown of the self” that people commonly experience when taking the drug. The study found that the drug affected the way that several brain regions were communicating with the rest of the brain, increasing their level of connectivity. So if we ever want to understand what our sense of self really is, we need to understand the underlying connectivity between brain regions as part of a complex network.
A way forward?
Some researchers now believe the brain and its diseases in general can only be understood as an interplay between tremendous numbers of neurons distributed across the central nervous system. The function of any one neuron is dependent on the functions of all the thousands of neurons it is connected to. These, in turn, are dependent on those of others. The same region or the same neuron may be used across a huge number of contexts, but have different specific functions depending on the context.
It may indeed be a tiny perturbation of these interplays between neurons that, through avalanche effects in the networks, causes conditions like depression or Parkinson’s disease. Either way, we need to understand the mechanisms of the networks in order to understand the causes and symptoms of these diseases. Without the full picture, we are not likely to be able to successfully cure these and many other conditions.
In particular, neuroscience needs to start investigating how network configurations arise from the brain’s lifelong attempts to make sense of the world. We also need to get a clear picture of how the cortex, brainstem, and cerebellum interact together with the muscles and the tens of thousands of optical and mechanical sensors of our bodies to create one integrated picture.
Connecting back to the physical reality is the only way to understand how information is represented in the brain. One of the reasons we have a nervous system in the first place is that the evolution of mobility required a controlling system. Cognitive, mental functions — and even thoughts — can be regarded as mechanisms that evolved in order to better plan for the consequences of movement and actions.
So the way forward for neuroscience may be to focus more on general neural recordings (with optogenetics or fMRI) — without aiming to hold each neuron or brain region responsible for any particular function. This could be fed into theoretical network research, which has the potential to account for a variety of observations and provide an integrated functional explanation. In fact, such a theory should help us design experiments, rather than only the other way around.
Major hurdles
It won’t be easy though. Current technologies are expensive — there are major financial resources as well as national and international prestige invested in them. Another obstacle is that the human mind tends to prefer simpler solutions over complex explanations, even when the former have limited power to explain the findings.
The entire relationship between neuroscience and the pharmaceutical industry is also built on the modular model. Typical strategies when it comes to common neurological and psychiatric diseases are to identify one type of receptor in the brain that can be targeted with drugs to solve the whole problem.
For example, SSRIs — which block absorption of serotonin in the brain so that more is freely available — are currently used to treat a number of different mental health problems, including depression. But they don’t work for many patients and there may be a placebo effect involved when they do.
Similarly, epilepsy is today widely seen as a single disease and is treated with anticonvulsant drugs, which work by dampening the activity of all neurons. Such drugs don’t work for everyone either. Indeed, it could be that any minute perturbation of the circuits in the brain — arising from one of thousands of different triggers unique to each patient — could push the brain into an epileptic state.
In this way, neuroscience is gradually losing its compass on the purported path towards understanding the brain. It’s absolutely crucial that we get it right. Not only could it be the key to understanding some of the biggest mysteries known to science — such as consciousness — it could also help treat a huge range of debilitating and costly health problems.

New Clues about the Origins of Biological Intelligence

A common solution is emerging in two different fields: developmental biology and neuroscience

December 11, 2021

Rafael Yuste is a professor of biological sciences at Columbia University and director of its Neurotechnology Center.

Michael Levin is a biology professor and director of the Allen Discovery Center at Tufts University.

https://www.scientificamerican.com/article/new-clues-about-the-origins  

The New Science of Consciousness

 Anil Seth

 https://www.youtube.com/watch?v=m_YV3bjfUQg

 Being You: A New Science of Consciousness

by Anil Seth 

 

K-shell decomposition reveals hierarchical cortical organization of the human brain

Nir Lahav, Baruch Ksherim, Eti Ben-Simon, Adi Maron-Katz, Reuven Cohen and Shlomo Havlin

Published 2 August 2016
Abstract

In recent years numerous attempts to understand the human brain have been undertaken from a network point of view. A network framework takes into account the relationships between the different parts of the system and enables us to examine how global and complex functions might emerge from network topology. Previous work revealed that the human brain features 'small world' characteristics and that cortical hubs tend to interconnect among themselves. However, in order to fully understand the topological structure of hubs, and how their profiles reflect the brain's global functional organization, one needs to go beyond the properties of a specific hub and examine the various structural layers that make up the network. To address this topic further, we applied an analysis known in statistical physics and network theory as k-shell decomposition. The analysis was applied to a human cortical network derived from MRI/DSI data of six participants. Such analysis enables us to portray a detailed account of cortical connectivity, focusing on different neighborhoods of inter-connected layers across the cortex. Our findings reveal that the human cortex is highly connected and efficient and, unlike the internet network, contains no isolated nodes. The cortical network is comprised of a nucleus alongside shells of increasing connectivity that form one connected giant component, revealing the human brain's global functional organization. All these components were further categorized into three hierarchies in accordance with their connectivity profiles, with each hierarchy reflecting different functional roles. Such a model may explain an efficient flow of information from the lowest hierarchy to the highest one, with each step enabling increased data integration.
At the top, the highest hierarchy (the nucleus) serves as a global interconnected collective and demonstrates high correlation with consciousness related regions, suggesting that the nucleus might serve as a platform for consciousness to emerge.
Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
'...And you ask yourself, where is my mind?' The Pixies, 'Where Is My Mind?'
Introduction
The human brain is one of the most complex systems in nature. In recent years numerous attempts to understand such complex systems have been undertaken, in physics, from a network point of view (Newman 2003, Carmi 2007, Colizza and Vespignani 2007, Goh et al 2007, Cohen and Havlin 2010). A network framework takes into account the relationships between the different parts of the system and enables us to examine how global and complex functions might emerge from network topology. Previous work revealed that the human brain features 'small world' characteristics (i.e. small average distance and large clustering coefficient associated with a large number of local structures) (Sporns and Zwi 2004, Sporns et al 2004, Achard et al 2006, He et al 2007, Ponten et al 2007, Reijneveld et al 2007, Stam and Reijneveld 2007, Stam et al 2007, van den Heuvel et al 2008, Bullmore and Sporns 2009), and that cortical hubs tend to interconnect and interact among themselves (Eguiluz et al 2005, Achard et al 2006, van den Heuvel et al 2008, Buckner et al 2009). For instance, van den Heuvel and Sporns demonstrated that hubs tend to be more densely connected among themselves than with nodes of lower degrees, creating a closed, exclusive 'rich club' (van den Heuvel and Sporns 2011, Harriger et al 2012, van den Heuvel et al 2013, Collin et al 2014). These studies, however, mainly focused on the individual degree (i.e. the number of edges that connect to a specific node) of a given node, not taking into account how its neighbors' connectivity profile might also influence its role or importance. In order to better understand the topological structure of hubs, their relationship with other nodes, and how their connectivity profile might reflect the brain's global functional organization, one needs to go beyond the properties of a specific hub and examine the various structural layers that make up the network.
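The two 'small world' measures this paragraph relies on, short average path length and high clustering coefficient, are straightforward to compute directly. The sketch below is purely illustrative (the toy adjacency dictionary and function names are our own, not from the paper):

```python
from collections import deque

def clustering_coefficient(adj, node):
    """Fraction of a node's neighbour pairs that are themselves connected.

    adj: dict mapping node -> set of neighbours (undirected graph).
    """
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    # Count edges among the neighbours (each unordered pair once).
    links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
    return 2.0 * links / (k * (k - 1))

def average_shortest_path(adj):
    """Mean BFS distance over all node pairs (assumes a connected graph)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(d for n, d in dist.items() if n != src)
        pairs += len(dist) - 1
    return total / pairs

# Toy graph: a 4-cycle 0-1-2-3 with a chord 0-2.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
```

A 'small world' network combines a short average path (here 7/6, since most nodes are direct neighbours) with high local clustering (node 1's two neighbours are themselves connected, giving it a coefficient of 1.0).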
In order to explore the relations between network topology and its functional organization we applied a statistical physics analysis called k-shell decomposition (Adler 1991, Pittel et al 1996, Alvarez-Hamelin et al 2005a, 2005b, Carmi 2007, Garas et al 2010, Modha and Singh 2010) on a human cortical network derived from MRI and DSI data. Unlike regular degree analysis, k-shell decomposition does not only check a node's degree but also considers the degrees of the nodes connected to it. The k-shell of a node reveals how central this node is in the network with respect to its neighbors, meaning that a higher k-value signifies a more central node belonging to a more connected neighborhood in the network. By iteratively removing nodes of increasing degree, the process uncovers the most connected area of the network (i.e., the nucleus) as well as the connectivity shells that surround it. Therefore, every shell defines a neighborhood of nodes with similar connectivity (see figure 1). A few studies have already applied this analysis in a preliminary way, focusing mainly on the network's nucleus and its relevance to known functional networks (Hagmann et al 2008, van den Heuvel and Sporns 2011). For instance, Hagmann et al revealed that the nucleus of the human cortical network is mostly comprised of default mode network (DMN) regions (Hagmann et al 2008). However, when examined more carefully, k-shell decomposition analysis, as shown here, enables the creation of a topology model for the entire human cortex, taking into account the nucleus as well as the different connectivity shells, ultimately uncovering a reasonable picture of the global functional organization of the cortical network.
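The iterative peeling described above can be sketched in a few lines of plain Python. This is a minimal illustration of k-shell assignment on a toy graph (the adjacency structure is invented for the example), not the authors' MRI/DSI pipeline:

```python
def k_shell_decomposition(adj):
    """Assign each node its k-shell by iteratively peeling low-degree nodes.

    adj: dict mapping node -> set of neighbours (undirected graph).
    Returns a dict mapping node -> k-shell index.
    """
    adj = {n: set(nbrs) for n, nbrs in adj.items()}  # working copy
    shells = {}
    k = 0
    while adj:
        # Raise k to the current minimum degree, then repeatedly strip
        # every node of degree <= k; each removal can expose new ones.
        k = max(k, min(len(nbrs) for nbrs in adj.values()))
        while True:
            peel = [n for n, nbrs in adj.items() if len(nbrs) <= k]
            if not peel:
                break
            for n in peel:
                shells[n] = k
                for m in adj.pop(n):
                    if m in adj:
                        adj[m].discard(n)
    return shells

# Toy graph: a triangle (nodes 0, 1, 2) with a pendant node 3 attached to 0.
shells = k_shell_decomposition({0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}})
# shells == {3: 1, 0: 2, 1: 2, 2: 2}
```

The pendant node peels off in the 1-shell, while the triangle survives the peeling and forms the k = 2 "nucleus", the most densely interconnected core that remains after all lower shells are stripped away.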
Furthermore, using previously published k-shell analysis of internet network topology (Carmi 2007), we were able to compare cortical network topology with other types of networks: http://iopscience.iop.org/article/10.1088/1367-2630/18/8/083013/meta

SOCIAL INTELLIGENCE. THE NEW SCIENCE OF HUMAN RELATIONSHIPS

By D. Goleman

Author Daniel Goleman explores the manner in which the brain is designed to engage in brain-to-brain “hookups” with others, and how these interactions affect both our social interactions and physical/mental well-being. Based upon conceptualizations pioneered by Edward Thorndike, Goleman analyzes a traditional concept of social intelligence for the purpose of developing a revised model that consists of two categories: social awareness (e.g., assessing the feelings of others) and social facility (e.g., awareness of how people present themselves). Goleman also explores advances in neuroscience that have made it possible for scientists and psychologists to study the ways in which emotions and biology work together.

https://www.semanticscholar.org/paper/Social-Intelligence

Emotional Intelligence: Why It Can Matter More Than IQ

By Daniel Goleman

https://www.academia.edu/37329006/Emotional_Intelligence_Why_it_Can_Matter_More_Than_IQ_ 

 03-02-21

Here’s how human consciousness works—and how a machine might replicate it

In an excerpt from his new book, Numenta and Palm cofounder Jeff Hawkins says that consciousness isn’t beyond understanding. Nor is replicating it unimaginable.

BY JEFF HAWKINS

I recently attended a panel discussion titled Being Human in the Age of Intelligent Machines. At one point during the evening, a philosophy professor from Yale said that if a machine ever became conscious, then we would probably be morally obligated to not turn it off. The implication was that if something is conscious, even a machine, then it has moral rights, so turning it off is equivalent to murder. Wow! Imagine being sent to prison for unplugging a computer. Should we be concerned about this?

Most neuroscientists don’t talk much about consciousness. They assume that the brain can be understood like every other physical system, and consciousness, whatever it is, will be explained in the same way. Since there isn’t even an agreement on what the word consciousness means, it is best to not worry about it. Philosophers, on the other hand, love to talk (and write books) about consciousness. Some believe that consciousness is beyond physical description. That is, even if you had a full understanding of how the brain works, it would not explain consciousness. Philosopher David Chalmers famously claimed that consciousness is “the hard problem,” whereas understanding how the brain works is “the easy problem.” This phrase caught on, and now many people just assume that consciousness is an inherently unsolvable problem.

Personally, I see no reason to believe that consciousness is beyond explanation. I don’t want to get into debates with philosophers, nor do I want to try to define consciousness. However, the Thousand Brains Theory suggests physical explanations for several aspects of consciousness. For example, the way the brain learns models of the world is intimately tied to our sense of self and how we form beliefs.

Imagine if I could reset your brain to the exact state it was in when you woke up this morning. Before I reset you, you would get up and go about your day, doing the things you normally do. Perhaps on this day you washed your car. At dinnertime, I would reset your brain to the time you got up, undoing any changes—including any changes to the synapses—that occurred during the day. Therefore, all memories of what you did would be erased. After I reset your brain, you would believe that you just woke up. If I then told you that you had washed your car today, you would at first protest, claiming it wasn’t true.

Upon showing you a video of you washing your car, you might admit that it indeed looks like you had, but you could not have been conscious at the time. You might also claim that you shouldn’t be held responsible for anything you did during the day because you were not conscious when you did it. Of course, you were conscious when you washed your car. It is only after deleting your memories of the day that you would believe and claim you were not. This thought experiment shows that our sense of awareness, what many people would call being conscious, requires that we form moment-to-moment memories of our actions.

Consciousness also requires that we form moment-to-moment memories of our thoughts. Thinking is just a sequential activation of neurons in the brain. We can remember a sequence of thoughts just as we can remember the sequence of notes in a melody. If we didn’t remember our thoughts, we would be unaware of why we were doing anything. For example, we have all experienced going to a room in our house to do something but, upon entering the room, forgetting what we went there for. When this happens, we often ask ourselves, “where was I just before I got here and what was I thinking?” We try to recall the memory of our recent thoughts so we know why we are now standing in the kitchen. When our brains are working properly, the neurons form a continuous memory of both our thoughts and actions. Therefore, when we get to the kitchen, we can recall the thoughts we had earlier. We retrieve the recently stored memory of thinking about eating the last piece of cake in the refrigerator and we know why we went to the kitchen.

The active neurons in the brain at some moments represent our present experience, and at other moments represent a previous experience or a previous thought. It is this accessibility of the past—the ability to jump back in time and slide forward again to the present—that gives us our sense of presence and awareness. If we couldn’t replay our recent thoughts and experiences, then we would be unaware that we are alive.

Our moment-to-moment memories are not permanent. We typically forget them within hours or days. I remember what I had for breakfast today, but I will lose this memory in a day or two. It is common that our ability to form short-term memories declines with age. That is why we have more and more of the “why did I come here?” experiences as we get older.

These thought experiments prove that our awareness, our sense of presence—which is the central part of consciousness—is dependent on continuously forming memories of our recent thoughts and experiences and playing them back as we go about our day.

Now let’s say we create an intelligent machine. The machine learns a model of the world using the same principles as a brain. The internal states of the machine’s model of the world are equivalent to the states of neurons in the brain. If our machine remembers these states as they occur and can replay these memories, then would it be aware and conscious of its existence, in the same way that you and I are? I believe so.
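Hawkins's proposal, that a machine should remember its internal states as they occur and be able to replay them, can be caricatured in a short sketch. This is a minimal illustration only: the class and its names are invented for this example, not an API or design from the book, and the bounded log stands in for the short-lived, moment-to-moment memories the excerpt describes:

```python
from collections import deque

class StateRecorder:
    """Toy sketch: keep a rolling, short-lived log of a model's
    internal states so they can be replayed later. All names here
    are illustrative inventions, not taken from the book."""

    def __init__(self, capacity=1000):
        # Like moment-to-moment memory, the log is bounded:
        # the oldest states are forgotten as new ones arrive.
        self.log = deque(maxlen=capacity)

    def record(self, state):
        self.log.append(state)

    def replay(self, last_n=None):
        """'Jump back in time': return recent states, oldest first."""
        states = list(self.log)
        return states if last_n is None else states[-last_n:]

# Usage: record a sequence of (hypothetical) internal states, then replay.
memory = StateRecorder(capacity=3)
for state in ["woke up", "decided to wash car", "washed car", "ate dinner"]:
    memory.record(state)

print(memory.replay())  # earliest state has been forgotten, like the reset
```

The reset thought experiment above corresponds to clearing this log: the agent's behavior already happened, but without the replayable record it can no longer situate itself in its own recent past.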

If you believe that consciousness cannot be explained by scientific investigation and the known laws of physics, then you might argue that I have shown that storing and recalling the states of a brain is necessary, but I have not proven that it is sufficient. If you take this view, then the burden is on you to show why it is not sufficient. For me, the sense of awareness—the sense of presence, the feeling that I am an acting agent in the world—is the core of what it means to be conscious. It is easily explained by the activity of neurons, and I see no mystery in it.

Excerpted from A Thousand Brains: A New Theory of Intelligence by Jeff Hawkins. Copyright © 2021. Available from Basic Books, an imprint of Hachette Book Group.

https://www.fastcompany.com/90596244/can-a-machine-achieve-


In Search of Memory: The Emergence of a New Science of Mind 

by Eric R. Kandel 

Memory binds our mental life together. We are who we are in large part because of what we learn and remember. But how does the brain create memories? Nobel Prize winner Eric R. Kandel intertwines the intellectual history of the powerful new science of the mind—a combination of cognitive psychology, neuroscience, and molecular biology—with his own personal quest to understand memory. A deft mixture of memoir and history, modern biology and behavior, In Search of Memory brings readers from Kandel's childhood in Nazi-occupied Vienna to the forefront of one of the great scientific endeavors of the twentieth century: the search for the biological basis of memory.
https://www.amazon.com/Search-Memory-Emergence-Science-Mind-ebook/dp/B002PQ7B5O   


The Oracle Of Night: The History and Science of Dreams

By Sidarta Ribeiro

 A groundbreaking history of the human mind told through our experience of dreams - from the earliest accounts to current scientific findings - and their essential role in the formation of who we are and the world we have made.

What is a dream? Why do we dream? How do our bodies and minds use them? These questions are the starting point for this unprecedented study of the role and significance of this phenomenon. An investigation on a grand scale, it encompasses literature, anthropology, religion, and science, articulating the essential place dreams occupy in human culture and how they functioned as the catalyst that compelled us to transform our earthly habitat into a human world.

From the earliest cave paintings - where Sidarta Ribeiro locates a key to humankind’s first dreams and how they contributed to our capacity to perceive past and future and our ability to conceive of the existence of souls and spirits - to today’s cutting-edge scientific research, Ribeiro arrives at revolutionary conclusions about the role of dreams in human existence and evolution. He explores the advances that contemporary neuroscience, biochemistry, and psychology have made in understanding the connections between sleep, dreams, and learning. He explains what dreams have taught us about the neural basis of memory and the transformation of memory in recall. And he makes clear that the earliest insight into dreams as oracular has been borne out by contemporary research.

Accessible, authoritative, and fascinating, The Oracle of Night gives us a wholly new way to understand this most basic of human experiences.

https://www.amazon.com/Oracle-Night-History-Science-Dreams/dp/B08R981399

 
