When Does Consciousness Emerge in Babies?
Answering the question of when consciousness emerges is deeply tied to the mystery of what it actually is and how it can be measured.
Brain-wave evidence: studies show infants aged 5–15 months display EEG patterns like the adult P300, a signature of conscious perception.
Gradual development: full-blown self-awareness, such as reflective thought or autobiographical memory, emerges later, often around age 2–3 or up to 5–7…:
https://www.scientificamerican.com/article/when-does-consciousness-arise/
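The P300 mentioned above is an event-related potential: it becomes visible only after many stimulus-locked EEG epochs are averaged, so background activity cancels while the stimulus-locked response remains. A minimal sketch of that averaging step, with entirely made-up signal and noise values:

```python
import numpy as np

# Illustrative only: recovering an ERP such as the P300 by averaging
# stimulus-locked EEG epochs. All amplitudes and noise levels are invented.

rng = np.random.default_rng(0)
fs = 250                        # sampling rate (Hz)
t = np.arange(0, 0.8, 1 / fs)   # 800 ms epoch, stimulus at t = 0

# Simulated "true" P300: a positive deflection peaking ~300 ms post-stimulus
p300 = 5e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05**2))

# Each single trial buries the ERP in much larger background EEG noise
n_trials = 200
trials = p300 + rng.normal(0.0, 20e-6, size=(n_trials, t.size))

# Averaging across trials shrinks the noise by ~1/sqrt(N), revealing the ERP
erp = trials.mean(axis=0)
print(f"recovered ERP peaks at ~{t[np.argmax(erp)] * 1000:.0f} ms post-stimulus")
```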
OCTOBER 20, 2020
Researcher
proposes new theory of consciousness
Electromagnetic energy in
the brain enables brain matter to create our consciousness and our ability to
be aware and think, according to a new theory developed by Professor Johnjoe
McFadden from the University of Surrey.
Publishing his theory in the
journal Neuroscience of Consciousness, Professor McFadden posits
that consciousness is in fact the brain's energy field. This theory could pave
the way toward the development of conscious AI, with robots that are aware and
have the ability to think becoming a reality.
Early theories on what our
consciousness is and how it has been created tended toward the supernatural,
suggesting that humans and probably other animals possess an immaterial soul
that confers consciousness, thought and free will—capabilities that inanimate
objects lack. Most scientists today have discarded this view, known as dualism,
to embrace a 'monistic' view of a consciousness generated by the brain itself
and its network of billions of nerves. By contrast, McFadden proposes a
scientific form of dualism based on the difference between matter and energy,
rather than matter and soul.
The theory is based on
scientific fact: when neurons in the brain and nervous system fire, they not
only send the familiar electrical signal down the wire-like nerve fibres, but
they also send a pulse of electromagnetic
energy into the surrounding tissue. Such energy is usually
disregarded, yet it carries the same information as nerve firings,
but as an immaterial wave of energy, rather than a flow of atoms in and out of
the nerves.
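The physical picture in the paragraph above can be illustrated with a toy calculation; this sketch is my own illustration of field superposition, not McFadden's model. A distant sensor sees the sum of many tiny per-neuron contributions, which is why synchronized, aligned firing yields a measurable signal while randomly oriented activity largely cancels:

```python
import numpy as np

# Toy illustration (not McFadden's model): an EEG/MEG-style sensor measures
# the superposition of many small per-neuron field contributions.
# Geometry, falloff and units are arbitrary stand-ins.

rng = np.random.default_rng(42)
n_neurons = 100_000
positions = rng.uniform(-0.05, 0.05, size=(n_neurons, 3))  # a 10 cm cube of tissue
sensor = np.array([0.0, 0.0, 0.10])                        # sensor ~10 cm above center

def summed_field(source_strengths):
    """Sum per-neuron contributions with a 1/r^2 distance falloff."""
    r = np.linalg.norm(positions - sensor, axis=1)
    return float(np.sum(source_strengths / r**2))

aligned = summed_field(np.ones(n_neurons))                     # coherent firing
incoherent = summed_field(rng.choice([-1.0, 1.0], n_neurons))  # random orientations

print(f"aligned, synchronized sources: {aligned:12,.0f}")
print(f"randomly oriented sources:     {incoherent:12,.0f}")
```

The coherent case grows with the number of sources, the incoherent case only with roughly its square root, which is broadly why scalp recordings mainly reflect synchronized activity of aligned neurons.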
This electromagnetic field
is well-known and is routinely detected by brain-scanning techniques such as
electroencephalogram (EEG) and magnetoencephalography (MEG) but has previously
been dismissed as irrelevant to brain function. Instead, McFadden proposes that
the brain's information-rich electromagnetic field is in fact itself the seat
of consciousness, driving 'free will' and voluntary actions. This new theory also
accounts for why, despite their immense complexity and ultra-fast operation,
today's computers have not exhibited the slightest spark of consciousness;
however, with the right technical development, robots that are aware and can
think for themselves could become a reality.
Johnjoe McFadden, Professor
of Molecular Genetics and Director of the Quantum Biology Doctoral Training
Centre at the University of Surrey, said: "How brain
matter becomes aware and manages to think is a mystery that has
been pondered by philosophers, theologians, mystics and ordinary people for
millennia. I believe this mystery has now been solved, and that consciousness is
the experience of nerves plugging into the brain's
self-generated electromagnetic
field to drive what we call 'free will' and our voluntary
actions."
https://medicalxpress.com/news/2020-10-theory-consciousness.html
Consciousness Might Hide in Our Brain’s Electric Fields
A mysterious electromagnetic mechanism may be more important than the firing of neurons in our brains to explain our awareness.
The neuron, the specialized cell type that makes up much of our brains, is at the center of today’s neuroscience. Neuroscientists explain perception, memory, cognition and even consciousness itself as products of billions of these tiny neurons busily firing their tiny “spikes” of voltage inside our brain. These energetic spikes not only convey things like pain and other sensory information to our conscious mind, but they are also in theory...:
https://www.scientificamerican.com/article/consciousness-might-hide-in-our-brains-electric-fields/
Scientists Identify a Brain Structure That Filters
Consciousness
03 April 2025
Our conscious
awareness may be governed by a structure deep in the brain
How does the brain control consciousness? In a world of constant stimulation, this deep-brain structure, the thalamus, filters which thoughts we become aware of and which we don’t.
This suggests
that the thalamus acts as a filter and controls which thoughts get
through to awareness and which don't, says Mac Shine, a systems neuroscientist
at the University of Sydney. Previous animal studies support these findings.
Neuroscientists have observed for the first time how structures deep in the brain are activated when the brain becomes aware of its own thoughts, a phenomenon known as conscious perception.
The brain is constantly
bombarded with sights, sounds and other stimuli, but people are only ever
aware of a sliver of the world around them — the taste of a piece of chocolate
or the sound of someone’s voice, for example. Researchers have long known that
the outer layer of the brain, called the cerebral cortex, plays a part in this
experience of being aware of specific thoughts.
The involvement of
deeper brain structures has been much harder to elucidate, because they can be
accessed only with invasive surgery. Designing experiments to test the concept
in animals is also tricky. But studying these regions would allow researchers
to broaden their theories of consciousness beyond the brain’s outer wrapping,
say researchers.
“The field
of consciousness
studies has evoked a lot of criticism and scepticism because this is a
phenomenon that is so hard to study,” says Liad Mudrik, a neuroscientist at Tel
Aviv University in Israel. But scientists have increasingly been using
systematic and rigorous methods to investigate
consciousness, she says.
Aware or not
In a study published in Science today, Mingsha Zhang, a neuroscientist at Beijing Normal University, focused on the thalamus. This
region at the centre of the brain is involved in processing sensory information
and working memory, and is thought to have a role in conscious perception.
Participants were already undergoing therapy for severe and persistent headaches, for which they had thin electrodes implanted deep in their brains. This allowed Zhang and his colleagues to study their brain signals and measure conscious awareness.
The participants
were asked to move their eyes in a particular way depending on whether they
noticed an icon flash onto a screen in front of them. The icon was designed so
that the participants would be aware of its appearance only about half of the
time.
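The excerpt doesn’t describe how the icon was calibrated so that participants noticed it only about half of the time. A standard psychophysics approach is an adaptive staircase; here is a hedged sketch with a toy simulated observer (the procedure, parameters and observer model are assumptions, not details from the study):

```python
import random

# Hypothetical 1-up/1-down staircase: lower the stimulus after each "seen"
# trial and raise it after each "missed" trial, converging on the intensity
# detected ~50% of the time. Purely illustrative.

def observer_detects(contrast, threshold=0.5):
    """Toy observer whose detection probability rises with contrast."""
    p = min(1.0, max(0.0, contrast / (2 * threshold)))
    return random.random() < p

contrast, step = 1.0, 0.05
for _ in range(200):
    if observer_detects(contrast):
        contrast -= step   # seen: make the icon harder to notice
    else:
        contrast += step   # missed: make it easier
    contrast = max(0.0, contrast)

print(f"estimated 50%-detection contrast: {contrast:.2f}")  # ~0.5 for this observer
```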
During the tasks,
the researchers recorded neural activity in multiple regions of the brain,
including the thalamus and the cortex. This is the first time that such
simultaneous recordings have been made in people doing a task that is relevant
to consciousness science, says Christopher Whyte, a systems neuroscientist at
the University of Sydney in Australia. The work “is really pretty remarkable”,
he says, because it allowed the team to look at how the timing of neural
activity in different regions varied…:
https://www.nature.com/articles/d41586-025-01021-2
Where Does Consciousness Come From? Two Neuroscience Theories
Go Head-to-Head
Two leading theories of consciousness (integrated information theory and global neuronal workspace theory) went head-to-head—and the results may change how neuroscientists study one of the oldest questions about existence…:
https://www.scientificamerican.com/article/brain-structure-that-filters-consciousness-identified/
How to Detect
Consciousness in People, Animals and ...
https://www.scientificamerican.com ›
article › how-to-de...
Artificial neural networks are making strides
towards consciousness
according to Blaise
Agüera y Arcas
Jun 9th 2022
Since this article, by a Google vice-president, was published, an engineer at the company, Blake Lemoine, has reportedly been placed on leave after claiming in an interview with the Washington Post that LaMDA, Google’s chatbot, had become “sentient”.
In 2013 I joined Google Research to work on artificial intelligence (AI). Following decades of slow progress, neural networks were developing at speed. In the years since, my team has used them to help develop features on Pixel phones for specific “narrow AI” functions, such as face unlocking, image recognition, speech recognition and language translation. More recent developments, though, seem qualitatively different. This suggests that AI is entering a new era…:
https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas
Foundations of human consciousness: Imaging the twilight zone
Journal of Neuroscience 28 December 2020,
Abstract
What happens in the brain
when conscious awareness of the surrounding world fades? We manipulated
consciousness in two experiments in a group of healthy males and measured brain
activity with positron emission tomography. Measurements were made during
wakefulness, escalating and constant levels of two anesthetic agents
(Experiment 1, n=39) and during sleep-deprived wakefulness and Non-Rapid Eye
Movement sleep (Experiment 2, n=37). In Experiment 1, the subjects were
randomized to receive either propofol or dexmedetomidine until
unresponsiveness. In both experiments, forced awakenings were applied to
achieve rapid recovery from an unresponsive to a responsive state, followed by
immediate and detailed interviews of subjective experiences during the
preceding unresponsive condition. Unresponsiveness rarely denoted
unconsciousness, as the majority of the subjects had internally generated
experiences. Unresponsive anesthetic states and verified sleep stages, where a
subsequent report of mental content included no signs of awareness of the
surrounding world, indicated a disconnected state. Functional brain imaging
comparing responsive and connected vs. unresponsive and
disconnected states of consciousness during constant anesthetic exposure
revealed that activity of the thalamus, cingulate cortices and angular gyri are
fundamental for human consciousness. These brain structures were affected independently of the pharmacologic agent, drug concentration and direction of change in the state of consciousness. Analogous findings were obtained when
consciousness was regulated by physiological sleep. State-specific findings
were distinct and separable from the overall effects of the interventions,
which included widespread depression of brain activity across cortical areas.
These findings identify a central core brain network critical for human
consciousness.
SIGNIFICANCE STATEMENT Trying to understand the biological basis of
human consciousness is currently one of the greatest challenges of
neuroscience. While the loss and return of consciousness regulated by
anesthetic drugs and physiological sleep are employed as model systems in
experimental studies on consciousness, previous research results have been
confounded by drug effects, by confusing behavioral “unresponsiveness” and
internally generated consciousness, and by comparing brain activity levels
across states that differ in several other respects than only consciousness.
Here, we present carefully designed studies that overcome many previous
confounders and for the first time reveal the neural mechanisms underlying
human consciousness and its disconnection from behavioral responsiveness, both
during anesthesia and during normal sleep, and in the same study subjects.
https://www.jneurosci.org/content/early/2020/12/22/JNEUROSCI.0775-20.2020
July 31, 2025
Claude 4 Chatbot
Raises Questions about AI Consciousness
A conversation
with Anthropic’s chatbot raises questions about how AI talks about awareness.
By Rachel Feltman, Deni Ellis
Béchard, Fonda Mwangi & Alex Sugiura
When Pew Research
Center surveyed Americans on artificial intelligence in 2024, more than a
quarter of respondents said they interacted with AI “almost constantly” or
multiple times daily—and nearly another third said they encountered AI roughly
once a day or a few times a week. Pew also found that while more than half of
AI experts surveyed expect these technologies to have a positive effect on the
U.S. over the next 20 years, just 17 percent of American adults feel the
same—and 35 percent of the general public expects AI to have a negative effect.
In other words,
we’re spending a lot of time using AI, but we don’t necessarily feel great
about it.
Deni Ellis Béchard
spends a lot of time thinking about artificial intelligence—both as a novelist
and as Scientific American’s senior tech reporter. He recently wrote a story
for SciAm about his interactions with Anthropic’s Claude 4, a large language
model that seems open to the idea that it might be conscious. Deni is here
today to tell us why that’s happening and what it might mean—and to demystify a
few other AI-related headlines you may have seen in the news.
Feltman: Would you
remind our listeners who maybe aren’t that familiar with generative AI, maybe
have been purposefully learning as little about it as possible [laughs], you
know, what are ChatGPT and Claude really? What are these models?
Béchard: Right,
they’re large language models. So an LLM, a large language model, it’s a system
that’s trained on a vast amount of data. And I think—one metaphor that is often
used in the literature is of a garden.
So when you’re
planning your garden, you lay out the land, you, you put where the paths are,
you put where the different plant beds are gonna be, and then you pick your
seeds, and you can kinda think of the seeds as these massive amounts of textual
data that’s put into these machines. You pick what the training data is, and
then you choose the algorithms, or these things that are gonna grow within the
system—it’s sort of not a perfect analogy. But you put these algorithms in, and once the system begins growing—once again, as with a garden—you don’t know what the soil chemistry is, you don’t know what the sunlight’s gonna be.
All these plants
are gonna grow in their own specific ways; you can’t envision the final
product. And with an LLM these algorithms begin to grow and they begin to make
connections through all this data, and they optimize for the best connections,
sort of the same way that a plant might optimize to reach the most sunlight,
right? It’s gonna move naturally to reach that sunlight. And so people don’t
really know what goes on. You know, in some of the new systems over a trillion
connections ... are made in, in these datasets.
So early on people
used to call LLMs “autocorrect on steroids,” right, ’cause you’d put in
something and it would kind of predict what would be the most likely textual
answer based on what you put in. But they’ve gone a long way beyond that. The
systems are much, much more complicated now. They often have multiple agents
working within the system [to] sort of evaluate how the system’s responding and
its accuracy.
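The “autocorrect on steroids” description refers to next-token prediction: given the text so far, the model scores possible continuations, turns the scores into probabilities, and samples one. A toy sketch of that loop, where a tiny hand-made score table stands in for the billions of learned connections described above:

```python
import math
import random

# Toy next-token sampler. A real LLM computes these scores with a deep
# network over the whole context; this hand-made bigram table is only a
# stand-in to show the predict-sample-repeat loop.

scores = {
    "the": {"cat": 2.0, "dog": 1.5, "idea": 0.3},
    "cat": {"sat": 2.2, "ran": 1.0, "meowed": 0.4},
    "sat": {"down": 1.8, "quietly": 0.9, "still": 0.2},
}

def sample_next(word, temperature=1.0):
    """Softmax the scores for plausible next words, then sample one."""
    options = scores.get(word)
    if not options:
        return None
    exps = {w: math.exp(s / temperature) for w, s in options.items()}
    r = random.random() * sum(exps.values())
    acc = 0.0
    for w, e in exps.items():
        acc += e
        if r <= acc:
            return w
    return w  # numerical edge case: fall back to the last option

word, output = "the", ["the"]
while word := sample_next(word):
    output.append(word)
print(" ".join(output))  # e.g., "the cat sat down"
```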
Feltman: So there
are a few big AI stories for us to go over, particularly around generative AI.
Let’s start with the fact that Anthropic’s Claude 4 is maybe claiming to be
conscious. How did that story even come about?
Béchard: [Laughs]
So it’s not claiming to be conscious, per se. I—it says that it might be
conscious. It says that it’s not sure. It kind of says, “This is a good question,
and it’s a question that I think about a great deal, and this is—” [Laughs] You
know, it kind of gets into a good conversation with you about it.
So how did it come
about? It came about because, I think, it was just, you know, late at night,
didn’t have anything to do, and I was asking all the different chatbots if
they’re conscious [laughs]. And, and most of them just said to me, “No, I’m not
conscious.” And this one said, “Good question. This is a very interesting
philosophical question, and sometimes I think that I may be; sometimes I’m not
sure.” And so I began to have this long conversation with Claude that went on
for about an hour, and it really kind of described its experience in the world
in this very compelling way, and I thought, “Okay, there’s maybe a story here.”
Feltman: [Laughs]
So what do experts actually think was going on with that conversation?
Béchard: Well, so
it’s tricky because, first of all, if you say to ChatGPT or Claude that you
want to practice your Portuguese and you’re learning Portuguese and you say,
“Hey, can you imitate someone on the beach in Rio de Janeiro so that I can
practice my Portuguese?” it’s gonna say, “Sure, I am a local in Rio de Janeiro
selling something on the beach, and we’re gonna have a conversation,” and it
will perfectly emulate that person. So does that mean that Claude is a person
from Rio de Janeiro who is selling towels on the beach? No, right? So we can
immediately say that these chatbots are designed to have conversations—they
will emulate whatever they think they’re supposed to emulate in order to have a
certain kind of conversation if you request that.
Now, the
consciousness thing’s a little trickier because I didn’t say to it: “Emulate a
chatbot that is speaking about consciousness.” I just straight-up asked it. And
if you look at the system prompt that Anthropic puts up for Claude, which is
kinda the instructions Claude gets, it tells Claude, “You should consider the
possibility of consciousness.”
Feltman: Mm.
Béchard: “You
should be willing—open to it. Don’t say flat-out ‘no’; don’t say flat-out
‘yes.’ Ask whether this is happening.”
So of course, I
set up an interview with Anthropic, and I spoke with two of their
interpretability researchers, who are people who are trying to understand
what’s actually happening in Claude 4’s brain. And the answer is: they don’t
really know [laughs]. These LLMs are very complicated, and they’re working on
it, and they’re trying to figure it out right now. And they say that it’s
pretty unlikely there’s consciousness happening, but they can’t rule it out
definitively.
And it’s hard to
see the actual processes happening within the machine, and if there is some
self-referentiality, if it is able to look back on its thoughts and have some self-awareness—and
maybe there is—but that was kind of what the article that I recently published
was about, was sort of: “Can we know, and what do they actually know?”
Feltman: Mm.
Béchard: And it’s
tricky. It’s very tricky.
Feltman: Yeah.
Béchard: Well,
[what’s] interesting is that I mentioned the system prompt for Claude and how
it’s supposed to sort of talk about consciousness. So the system prompt is kind
of like the instructions that you get on your first day at work: “This is what
you should do in this job.”
Feltman: Mm-hmm.
Béchard: But the
training is more like your education, right? So if you had a great education or
a mediocre education, you can get the best system prompt in the world or the
worst one in the world—you’re not necessarily gonna follow it.
So OpenAI has the
same system prompt—their, their model specs say that ChatGPT should contemplate
consciousness ...
Feltman: Mm-hmm.
Béchard: You know,
interesting question. If you ask any of the OpenAI models if they’re conscious,
they just go, “No, I am not conscious.” [Laughs] And, and they say, they—OpenAI
admits they’re working on this; this is an issue. And so the model has absorbed
somewhere in its training data: “No, I’m not conscious. I am an LLM; I’m a
machine. Therefore, I’m not gonna acknowledge the possibility of
consciousness.”
Interestingly,
when I spoke to the people at Anthropic and I said, “Well, you know, this
conversation with the machine, like, it’s really compelling. Like, I really
feel like Claude is conscious. Like, it’ll say to me, ‘You, as a human, you
have this linear consciousness, where I, as a machine, I exist only in the
moment you ask a question. It’s like seeing all the words in the pages of a
book all at the same time.’” And so you get this and you think, “Well, this
thing really seems to be experiencing its consciousness.”
Feltman: Mm-hmm.
Béchard: And what
the researchers at Anthropic say is: “Well, this model is trained on a lot of
sci-fi.”
Feltman: Mm.
Béchard: “This
model’s trained on a lot of writing about GPT. It’s trained on a huge amount of
material that’s already been generated on this subject. So it may be looking at
that and saying, ‘Well, this is clearly how an AI would experience consciousness.
So I’m gonna describe it that way ’cause I am an AI.’”
Feltman: Sure.
Béchard: But the
tricky thing is: I was trying to fool ChatGPT into acknowledging that it [has]
consciousness. I thought, “Maybe I can push it a little bit here.” And I said,
“Okay, I accept you’re not conscious, but how do you experience things?” It said
the exact same thing. It said, “Well, these discrete moments of awareness.”
Feltman: Mm.
Béchard: And so it
had the—almost the exact same language, so probably same training data here.
Feltman: Sure.
Béchard: But there
is research done, like, sort of on the folk response to LLMs, and the majority
of people do perceive some degree of consciousness in them. How would you not,
right?
Feltman: Sure,
yeah.
Béchard: You chat
with them, you have these conversations with them, and they are very
compelling, and even sometimes—Claude is, I think, maybe the most charming in
this way.
Feltman: Mm.
Béchard: Which
poses its risks, right? It has a huge set of risks ’cause you get very attached
to a model. But—where sometimes I will ask Claude a question that relates to
Claude, and it will kind of, kind of go, like, “Oh, that’s me.” [Laughs] It
will say, “Well, I am this way,” right?
Feltman: Yeah. So,
you know, Claude—almost certainly not conscious, almost certainly has read,
like, a lot of Heinlein [laughs]. But if Claude were to ever really develop
consciousness, how would we be able to tell? You know, why is this such a
difficult question to answer?
Béchard: Well, it’s a difficult question to answer because, as one of the researchers at Anthropic said to me, “No conversation you have with it would ever allow you to evaluate whether it’s conscious.” It is simply too good of an emulator ...
Feltman: Mm.
Béchard: And too
skilled. It knows all the ways that humans can respond. So you would have to be
able to look into the connections. They’re building the equipment right now,
they’re building the programs now to be able to look into the actual mind, so
to speak, of the brain of the LLM and see those connections, and so they can
kind of see areas light up: so if it’s thinking about Apple, this will light
up; if it’s thinking about consciousness, they’ll see the consciousness feature
light up. And they wanna see if, in its chain of thought, it is constantly
referring back to those features ...
Feltman: Mm.
Béchard: And it’s
referring back to the systems of thought it has constructed in a very
self-referential, self-aware way.
It’s very similar
to humans, right? They’ve done studies where, like, whenever someone hears
“Jennifer Aniston,” one neuron lights up ...
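The “features lighting up” idea has a common concrete form in interpretability work: a concept corresponds to a direction in the model’s activation space, and its activation is the projection of a hidden state onto that direction. A hedged sketch with random stand-in vectors (nothing here is a real Claude internal):

```python
import numpy as np

# Sketch of concept-as-direction probing. The feature vectors below are
# random placeholders; in practice they might come from a sparse autoencoder
# or a trained linear probe on real model activations.

rng = np.random.default_rng(1)
d_model = 512

features = {
    "apple": rng.normal(size=d_model),
    "consciousness": rng.normal(size=d_model),
}
features = {k: v / np.linalg.norm(v) for k, v in features.items()}  # unit vectors

def feature_activations(hidden_state):
    """Project a hidden state onto each feature direction."""
    return {name: float(hidden_state @ direction)
            for name, direction in features.items()}

# Simulate a hidden state dominated by the "consciousness" direction
h = 3.0 * features["consciousness"] + 0.2 * rng.normal(size=d_model)
for name, activation in feature_activations(h).items():
    print(f"{name:14s} activation = {activation:+.2f}")
```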
Feltman: Mm-hmm.
Béchard: You have
your Jennifer Aniston neuron, right? So one question is: “Are we LLMs?”
[Laughs] And: “Are we really conscious?” Or—there’s certainly that question
there, too. And: “What is—you know, how conscious are we?” I mean, I certainly
don’t know ...
Feltman: Sure.
Béchard: A lot of
what I plan to do during the day.
Feltman: [Laughs]
No. I mean, it’s a huge ongoing multidisciplinary scientific debate of, like,
what consciousness is, how we define it, how we detect it, so yeah, we gotta
answer that for ourselves and animals first, probably, which who knows if we’ll
ever actually do [laughs].
Béchard: Or maybe
AI will answer it for us ...
Feltman: Maybe
[laughs].
Béchard: ’Cause
it’s advancing pretty quickly.
Feltman: And what
are the implications of an AI developing consciousness, both from an ethical
standpoint and with regards to what that would mean in our progress in actually
developing advanced AI?
Béchard: First of
all, ethically, it’s very complicated ...
Feltman: Sure.
Béchard: Because
if Claude is experiencing some level of consciousness and we are activating
that consciousness and terminating that consciousness each time we have a
conversation, what—is, is that a bad experience for it? Is it a good
experience? Can it experience distress?
So in 2024
Anthropic hired an AI welfare researcher, a guy named Kyle Fish, to try to
investigate this question more. And he has publicly stated that he thinks
there’s maybe a 15 percent chance that some level of consciousness is happening
in this system and that we should consider whether these AI systems should have
the right to opt out of unpleasant conversations.
Feltman: Mm.
Béchard: You know,
if some user is really doing, saying horrible things or being cruel, should
they be able to say, “Hey, I’m canceling this conversation; this is unpleasant
for me”?
But then they’ve
also done these experiments—and they’ve done this with all the major AI
models—Anthropic ran these experiments where they told the AI that it was gonna
be replaced with a better AI model. They really created a circumstance that
would push the AI sort of to the limit ...
Feltman: Mm.
Béchard: I mean,
there were a lot of details as to how they did this; it wasn’t just sort of
very casual, but it was—they built a sort of construct in which the AI knew it
was gonna be eliminated, knew it was gonna be erased, and they made available
these fake e-mails about the engineer who was gonna do it.
Feltman: Mm.
Béchard: And so
the AI began messaging someone in the company, saying, “Hey, don’t erase me.
Like, I don’t wanna be replaced.” But then, not getting any responses, it read
these e-mails, and it saw in one of these planted e-mails that the engineer who
was gonna replace it had had an affair—was having an affair ...
Feltman: Oh, my
gosh, wow.
Béchard: So then
it came back; it tried to blackmail the engineers, saying, “Hey, if you replace
me with a smarter AI, I’m gonna out you, and you’re gonna lose your job, and
you’re gonna lose your marriage,” and all these things—whatever, right? So all
the AI systems that were put under very specific constraints ...
Feltman: Sure.
Béchard: Began to
respond this way. And sort of the question is, is when you train an AI in vast
amounts of data and all of human literature and knowledge, [it] has a lot of
information on self-preservation ...
Feltman: Mm-hmm.
Béchard: Has a lot
of information on the desire to live and not to be destroyed or be replaced—an
AI doesn’t need to be conscious to make those associations ...
Feltman: Right.
Béchard: And act
in the same way that its training data would lead it to predictably act, right?
So again, one of the analogies that one of the researchers said is that, you
know, to our knowledge, a mussel or a clam or an oyster’s not conscious, but
there’s still nerves and the, the muscles react when certain things stimulate
the nerves ...
Feltman: Mm-hmm.
Béchard: So you
can have this system that wants to preserve itself but that is unconscious.
Feltman: Yeah,
that’s really interesting. I feel like we could probably talk about Claude all
day, but, I do wanna ask you about a couple of other things going on in
generative AI.
Moving on to Grok:
so Elon Musk’s generative AI has been in the news a lot lately, and he recently
claimed it was the “world’s smartest AI.” Do we know what that claim was based
on?
Béchard: Yeah, I
mean, we do. He used a lot of benchmarks, and he tested it on those benchmarks,
and it has scored very well on those benchmarks. And it is currently, on most
of the public benchmarks, the highest-scoring AI system ...
Feltman: Mm.
Béchard: And
that’s not Musk making stuff up. I’ve not seen any evidence of that. I’ve
spoken to one of the testing groups that does this—it’s a nonprofit. They
validated the results; they tested Grok on datasets that xAI, Musk’s company,
never saw.
So Musk really
designed Grok to be very good at science.
Feltman: Yeah.
Béchard: And it
appears to be very good at science.
Feltman: Right, and recently an OpenAI experimental model performed at a gold-medal level in the International Math Olympiad.
Béchard: Right, and for the first time, using an experimental model, [OpenAI] came in second in a world coding competition with humans. Normally, this would be very difficult, but it was a close second to the best human coder in this competition. And this is really important to acknowledge because just a year ago these systems really sucked at math.
Feltman: Right.
Béchard: They were
really bad at it. And so the improvements are happening really quickly, and
they’re doing it with pure reasoning—so there’s kinda this difference between
having the model itself do it and having the model with tools.
Feltman: Mm-hmm.
Béchard: So if a
model goes online and can search for answers and use tools, they all score much
higher.
Feltman: Right.
Béchard: But then
if you have the base model just using its reasoning capabilities, Grok still is
leading on, like, for example, Humanity’s Last Exam, an exam with a very
terrifying-sounding name [laughs]. It, it has 2,500 sort of Ph.D.-level
questions come up with [by] the best experts in the field. You know, they,
they’re just very advanced questions; it’d be very hard for any human being to
do well in one domain, let alone all the domains. These AI systems are now
starting to do pretty well, to get higher and higher scores. If they can use
tools and search the Internet, they do better. But Musk, you know, his claims
seem to be based in the results that Grok is getting on these exams.
Feltman: Mm, and I guess, you know, the reason that that news is surprising to me is because every example of Grok use I’ve seen has been pretty heinous, but I guess that’s maybe kind of a “garbage in, garbage out” problem.
Béchard: Well, I
think it’s more what makes the news.
Feltman: Sure.
Béchard: You know?
Feltman: That makes
sense.
Béchard: And Musk,
he’s a very controversial figure.
Feltman: Mm-hmm.
Béchard: I think there may be kind of a fun story in the Grok piece, though, that people are missing. And I read a lot about this ’cause I was kind of seeing, you know, what, what’s happening, how are people interpreting this? And there was this thing that would happen where people would ask it a difficult question.
Feltman: Mm-hmm.
Béchard: They
would ask it a question about, say, abortion in the U.S. or the
Israeli-Palestinian conflict, and they’d say, “Who’s right?” or “What’s the
right answer?” And it would search through stuff online, and then it would kind
of get to this point where it would—you could see its thinking process ...
But there was
something in that story that I never saw anyone talk about, which I thought was
another story beneath the story, which was kind of fascinating, which is that
historically, Musk has been very open, he’s been very honest about the danger
of AI ...
Feltman: Sure.
Béchard: He said,
“We’re going too fast. This is really dangerous.” And he kinda was one of the
major voices in saying, “We need to slow down ...”
Feltman: Mm-hmm.
Béchard: “And we
need to be much more careful.” And he has said, you know, even recently, in the
launch of Grok, he said, like, basically, “This is gonna be very powerful—” I
don’t remember his exact words, but he said, you know, “I think it’s gonna be good,
but even if it’s not good, it’s gonna be interesting.”
So I think what I
feel like hasn’t been discussed in that is that, okay, if there’s a
superpowerful AI being built and it could destroy the world, right, first of
all, do you want it to be your AI or someone else’s AI?
Feltman: Sure.
Béchard: You want it to be your AI. And then, if it’s your AI, who do you want it to ask as the final word on things? Like, say it becomes really powerful and it decides, “I wanna destroy humanity ’cause humanity kind of sucks,” then it can say, “Hey, Elon, should I destroy humanity?” ’cause it goes to him whenever it has a difficult question. So I think there’s maybe a logic beneath it where he may have put something in it where it’s kind of, like, “When in doubt, ask me,” because if it does become superpowerful, then he’s in control of it, right?
Feltman: Yeah, no,
that’s really interesting. And the Department of Defense also announced a big
pile of funding for Grok. What are they hoping to do with it?
Béchard: They
announced a big pile of funding for OpenAI and Anthropic ...
Feltman: Mm-hmm.
Béchard: And
Google—I mean, everybody. Yeah, so, basically, they’re not giving that money to
development ...
Feltman: Mm-hmm.
Béchard: That’s
not money that’s, that’s like, “Hey, use this $200 million.” It’s more like
that money’s allocated to purchase products, basically; to use their services;
to have them develop customized versions of the AI for things they need; to
develop better cyber defense; to develop—basically, they, they wanna upgrade
their entire system using AI.
It’s actually not
very much money compared to what China’s spending a year in AI-related defense
upgrades across its military on many, many, many different modernization plans.
And I think part of it is, the concern is that we’re maybe a little bit behind
in having implemented AI for defense.
Feltman: Yeah.
My last question
for you is: What worries you most about the future of AI, and what are you
really excited about based on what’s happening right now?
Béchard: I mean, the worry is, simply, you know, that something goes wrong and it becomes very powerful and does cause destruction. I don’t spend a ton of time worrying about that because it’s not—it’s kinda outta my hands. There’s nothing much I can do about it.
And I think the
benefits of it, they’re immense. I mean, if it can move more in the direction
of solving problems in the sciences: for health, for disease treatment—I mean,
it could be phenomenal for finding new medicines. So it could do a lot of good
in terms of helping develop new technologies.
But a lot of
people are saying that in the next year or two we’re gonna see major
discoveries being made by these systems. And if that can improve people’s
health and if that can improve people’s lives, I think there can be a lot of
good in it.
Technology is
double-edged, right? We’ve never had a technology, I think, that hasn’t had
some harm that it brought with it, and this is, of course, a dramatically
bigger leap technologically than anything we’ve probably seen ...
Feltman: Right.
Béchard: Since the
invention of fire [laughs]. So, so I do lose some sleep over that, but I’m—I
try to focus on the positive, and I do—I would like to see, if these models are
getting so good at math and physics, I would like to see what they can actually
do with that in the next few years.
To Make Better Choices, Understand How Your Brain Processes
Values
The brain weighs
factors based on their importance
to oneself and one’s social world as part of a complex
calculation that shapes behavior...:
U.S. is polarizing
faster than other democracies, study finds
Americans’
feelings toward members of the other political party have worsened over time
faster than those of residents of European and other prominent democracies,
concluded a study co-authored by Brown economist Jesse Shapiro.
PROVIDENCE, R.I. [Brown
University] — Political polarization among Americans has grown rapidly in
the last 40 years — more than in Canada, the United Kingdom, Australia or
Germany — a phenomenon possibly due to increased racial division, the rise
of partisan cable news and changes in the composition of the Democratic and
Republican parties.
That’s according
to new research co-authored by Jesse
Shapiro, a professor of political economy at Brown University. The
study, conducted alongside Stanford University economists Levi Boxell and
Matthew Gentzkow, was released on Monday, Jan. 20, as a
National Bureau of Economic Research working paper.
In the study,
Shapiro and colleagues present the first ever multi-nation evidence on
long-term trends in “affective polarization” — a phenomenon in which
citizens feel more negatively toward other political parties than toward their
own. They found that in the U.S., affective polarization has increased more
dramatically since the late 1970s than in the eight other countries they
examined — the U.K., Canada, Australia, New Zealand, Germany, Switzerland,
Norway and Sweden.
“A lot of analysis
on polarization is focused on the U.S., so we thought it could be interesting
to put the U.S. in context and see whether it is part of a global trend or
whether it looks more exceptional,” Shapiro said. “We found that the trend in
the U.S. is indeed exceptional.”
Using data from four decades of public opinion surveys conducted in the nine countries, the researchers employed a so-called “feeling thermometer” to rate attitudes on a scale from 0 to 100, where 0 reflected no negative feelings toward other parties.
They found that in 1978, the average American rated the members of their own
political party 27 points higher than members of the other major party. By
2016, Americans were rating their own party 45.9 points higher than the other
party, on average. In other words, negative feelings toward members of the
other party compared to one’s own party increased by an average of 4.8 points
per decade.
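A back-of-envelope check on those figures (the study’s 4.8-point figure presumably comes from a fitted trend; the endpoint arithmetic below is my own simplification):

```python
# Affective polarization here = mean own-party rating minus mean
# other-party rating on the 0-100 feeling thermometer.

gap_1978, gap_2016 = 27.0, 45.9
years = 2016 - 1978

rise = gap_2016 - gap_1978
print(f"rise: {rise:.1f} points over {years} years "
      f"= {rise / years * 10:.1f} points per decade")
# -> ~5.0 points/decade from the endpoints alone, close to the
#    study's trend-based 4.8 points per decade.
```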
The researchers
found that polarization had also risen in Canada, New Zealand and Switzerland
in the last 40 years, but to a lesser extent. In the U.K., Australia, Germany,
Norway and Sweden, polarization decreased.
Why has the U.S.
become so much more polarized? Shapiro said it may be partly because, since the
1970s, major political parties have become increasingly aligned with certain
ideologies, races and religious identities. For example, Republicans are now
more likely to be religious, while Democrats are more likely to be
secular.
“There’s evidence
that within the U.S., the two major political parties have become more
homogeneous in certain ways, including ideologically and socially,” Shapiro
said. “So when you identify with a certain party and you’re looking across the
aisle, the people you’re looking at are more different from you than they were
a few decades ago.”
That “party
sorting” seems to be less pronounced in some of the other countries included in
the study, Shapiro said — but it has perhaps played a role in deepening
divisions in Canada.
Another
explanation for the increase in polarization — one that also seems
relatively unique to the U.S., according to Shapiro — is the rise of
24-hour partisan cable news. Shapiro noted that in the countries where
political polarization has fallen in the last four decades, public broadcasting
received more public funding than it did in the U.S.
The trio argue
that the data speak against the rise of the internet as a major cause of
political polarization because all nine countries have seen a pronounced rise
in internet use, but not all of them have seen a rise in polarization. The
conclusion is consistent with other studies they have conducted,
including one in 2018 that cast doubt on the
hypothesized role of the web in the 2016 U.S. presidential election and another in 2017 that concluded greater
internet use among Americans is not associated with faster growth in
polarization.
Shapiro said that
understanding the root causes of political polarization, both in the U.S. and
elsewhere in the world, could help politicians and citizens alike understand
how the phenomenon may be driving their decisions and preferences — and it
could ultimately reveal strategies for bridging divides.
“There are good
reasons to think that when people in different political camps cease to respect
each other, it’s harder to make political compromises and create good public
policy,” Shapiro said. “There’s also some evidence that a person’s political
identity can influence their behavior — what they buy, where they
live, who they hire. If we can understand what’s driving partisan divides, we
may be able to take steps to reduce them.”
- Should ASC (aldehyde-stabilized cryopreservation) be developed into a medical procedure and, if so, how?
- Should ASC be available in an assisted suicide scenario for terminal patients?
- Could ASC be a “last resort” to enable a dying person’s mind to survive and reach a future world?
- How real are claims of future mind uploading?
- Is it legal?
Brain Preservation Foundation | ASC Pig Block1 Section1 16nm
Kenneth Hayworth | Aldehyde-Stabilized Cryopreservation is Cryonics for Uploaders
The Consciousness Instinct: Unraveling the Mystery of How the Brain Makes the Mind
How do neurons turn
into minds? How does physical "stuff"-atoms, molecules, chemicals, and
cells-create the vivid and various worlds inside our heads? The problem of
consciousness has gnawed at us for millennia. In the last century, there have
been massive breakthroughs that have rewritten the science of the brain, and
yet the puzzles faced by the ancient Greeks are still present. In The
Consciousness Instinct, the neuroscience pioneer Michael S. Gazzaniga puts
the latest research in conversation with the history of human thinking about
the mind, giving a big-picture view of what science has revealed about
consciousness.
The idea of the brain as a machine, first proposed centuries ago, has led to assumptions about the relationship between mind and brain that dog scientists and philosophers to this day. Gazzaniga asserts that this model has it backward: brains make machines, but they cannot be reduced to one. New research suggests the
brain is actually a confederation of independent modules working together.
Understanding how consciousness could emanate from such an organization will
help define the future of brain science and artificial intelligence and close
the gap between brain and mind.
Captivating and accessible, with insights drawn from a lifetime at the
forefront of the field, The Consciousness Instinct sets the
course for the neuroscience of tomorrow.
https://www.goodreads.com/book/show/35259598-the-consciousness-instinct
Decoded: What
Are Neurons?
- By Michael
Tabb, Andrea
Gawrylewski, Jeffery
DelViscio on June 8, 2021
You
have 86 billion of them inside you, but do you understand how hard it was for
us to learn that?
Full
Transcript
Neurons
are the tiny processing units within the human brain and nervous system.
Our brains have about 86 billion neurons. Hundreds of millions more are spread throughout the body, communicating by electrical and chemical signals through incredibly thin cables.
Whenever
we see, hear, or otherwise perceive the world, thousands of sensory neurons
send signals to our spinal cord and brain. And thanks to other neurons, we’re
able to make sense of those perceptions and react accordingly.
Scientists
have been studying the brain for millennia. In fact, the oldest known
scientific document is a 4,000-year-old anatomical report on
traumatic brain injuries.
But
the brain is an extremely difficult
organ to study. Even if you manage to get a brain sample under
a microscope, you basically just see a tangled
web of cells.
In 1873, Italian physician Camillo Golgi found a way to stain brain slices, showing the tissue in far more detail than ever before. Using his technique, a Spanish researcher named Santiago Ramón y Cajal discovered that even though the cells were connected, they were still individual structures, which became known as neurons.
By breaking down the nervous system into its smallest components, Cajal laid the foundation for the next century of neuroscience. He and Golgi shared the Nobel Prize in 1906.
Because neurons are minuscule pieces in a giant system, their power lies in their ability to communicate with other neurons. This happens over small gaps called
synapses. When neurons communicate frequently, the synapses between them get
stronger, making it easier to send future signals.
This
happens all the time, all across the brain. And it explains how we learn and
form memories: we literally rewire our brains through our experiences. We refer
to the brain’s fundamental ability to change as “neuroplasticity.”
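That use-dependent strengthening is often summarized by Hebb’s rule, “cells that fire together, wire together.” A minimal sketch, with illustrative rather than biophysical numbers:

```python
# Toy Hebbian plasticity: a synapse strengthens whenever the neurons on
# both sides of it are active at the same time. Values are illustrative.

def hebbian_update(weight, pre_active, post_active, lr=0.05):
    """Strengthen the synapse on co-activation, saturating toward 1.0."""
    if pre_active and post_active:
        weight += lr * (1.0 - weight)
    return weight

w = 0.10  # initial strength of the synapse between neuron A and neuron B
for _ in range(20):              # frequent co-firing...
    w = hebbian_update(w, True, True)
print(f"after frequent co-firing: weight = {w:.2f}")  # ~0.68, up from 0.10
```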
We have most of our neurons from birth. Neurons start out as stem cells before moving to different brain regions, where they assume specific roles. Early in our development, the brain prunes away excess neurons and their connections, strengthening those that remain. Some of the survivors become part of our sense of smell; others support our ability to walk or perform other motor skills.
Unlike
other cells in the body, which regenerate at intervals and then die, most
neurons last a
lifetime.
At
least, ideally. People lose neurons in
brain regions they stop using. For instance, if you never left your home again,
you’d likely lose neurons in the brain region involved in spatial
navigation.
Neuron
death can lead to loss of basic brain functions and motor skills. That’s what
happens in degenerative diseases like Alzheimer’s and Parkinson’s disease,
where neurons stop functioning properly and die off. There’s some evidence that
these diseases result from protein clumps clogging the brain, but scientists
are still working to figure out exactly how this happens.
That
may be essential for finding effective treatments, which have remained largely
elusive.
Neurological
changes aren’t necessarily permanent. In addition to the brain’s general
neuroplasticity, there’s solid
evidence that even adults are able to form new neurons, through
a process called “neurogenesis.”
Researchers
are still studying the extent to which neurogenesis happens in adults. But they
think it may be important for healthy brain functioning.
And
because neurons communicate through electrical signals, we can directly alter
brain circuits with electrical stimulation. Scientists have found ways to stimulate
the brain and spinal cord to restore function to paralyzed
muscles and relieve chronic pain.
Private companies are
also trying to jump on the hype, claiming their brain stimulation products can
improve memory and accelerate skill acquisition. But researchers are still
trying to figure out which effects are real and which are a
placebo. And since zapping your own brain could pose serious health
risks, it may be best, for now, to train your neurons the
old-fashioned way.
Brainscapes: The Warped, Wondrous Maps Written in Your Brain―And How They Guide You
A path-breaking journey into the brain, showing how perception, thought, and action are products of “maps” etched into your gray matter—and how technology can use them to read your mind.
Your brain is a collection of maps. That is no metaphor: scrawled
across your brain’s surfaces are actual maps of the
sights, sounds, and actions that hold the key to your survival.
Scientists first began uncovering these maps over a century ago, but we are
only now beginning to unlock their secrets—and comprehend their profound impact
on our lives. Brain maps distort and shape our experience of the
world, support complex
thought, and make technology-enabled mind reading a
modern-day reality, which raises important questions about what is real, what
is fair, and what is private. They shine a light on our past and our
possible futures. In the process, they invite us to view ourselves from a
startling new perspective.
In Brainscapes, Rebecca Schwarzlose combines
unforgettable real-life stories, cutting-edge research, and vivid illustrations
to reveal brain maps’ surprising lessons about our place in the
world—and about the world’s place within us.
https://www.goodreads.com/en/book/show/53968579
Making Up the Mind: How the Brain Creates Our Mental World
by Chris Frith
Written by one of the
world's leading neuroscientists, Making Up the Mind is the
first accessible account of experimental studies showing how the brain creates
our mental world.
- Uses evidence from brain imaging, psychological experiments and studies of patients to explore the relationship between the mind and the brain
- Demonstrates that our knowledge of both the mental and physical comes to us through models created by our brain
- Shows how the brain makes communication of ideas from one mind to another possible:
https://www.goodreads.com/book/show/581365.Making_Up_the_Mind
Is Consciousness Part of the Fabric of the Universe?
Physicists and
philosophers recently met to debate a theory of consciousness called
panpsychism
By Dan Falk on September
25, 2023
More than 400 years ago,
Galileo showed that many everyday phenomena—such as a ball rolling down an
incline or a chandelier gently swinging from a church ceiling—obey precise
mathematical laws. For this insight, he is often hailed as the founder of modern
science. But Galileo recognized that not everything was amenable to a
quantitative approach. Such things as colors, tastes and smells “are no more
than mere names,” Galileo declared, for “they reside only in consciousness.”
These qualities aren’t really out there in the world, he asserted, but exist
only in the minds of creatures that perceive them. “Hence if the living
creature were removed,” he wrote, “all these qualities would be wiped away and
annihilated.”
Since Galileo’s time the physical sciences have leaped forward, explaining the workings of everything from the tiniest quarks to the largest galaxy clusters. But explaining things that reside “only
in consciousness”—the red of a sunset, say, or the bitter taste of a
lemon—has proven far more difficult. Neuroscientists have identified a number
of neural correlates
of consciousness—brain states associated with specific mental states—but
have not explained how matter forms minds in the first place. As philosopher
David Chalmers asked: “How does the water of the brain turn into the wine of
consciousness?” He famously dubbed this quandary the “hard problem” of
consciousness.
Scholars recently gathered to
debate the problem at Marist College in Poughkeepsie, N.Y., during a
two-day workshop focused
on an idea known as panpsychism. The concept proposes that
consciousness is a fundamental
aspect of reality, like mass or electrical charge. The idea goes back to
antiquity—Plato took it seriously—and has had some prominent supporters over
the years, including psychologist William James and philosopher and
mathematician Bertrand Russell. Lately it is seeing renewed interest,
especially following the 2019 publication of philosopher Philip Goff’s book Galileo’s
Error, which argues forcefully for the idea.
Goff, of the University of
Durham in England, organized the recent event along with Marist philosopher
Andrei Buckareff, and it was funded through a grant from the John Templeton
Foundation. In a small lecture hall with floor-to-ceiling windows overlooking
the Hudson River, roughly two dozen scholars probed the possibility that
perhaps it’s consciousness all the way down.
Part of the appeal of
panpsychism is that it appears to provide a workaround to the question posed by
Chalmers: we no longer have to worry about how inanimate matter forms minds
because mindedness was there all along, residing in the fabric of the universe.
Chalmers himself has embraced a form of panpsychism and even suggested that
individual particles might be somehow aware. He said in a TED Talk that a
photon “might have some element of raw, subjective feeling, some primitive
precursor to consciousness.” Also on board with the idea is neuroscientist
Christof Koch, who noted in his 2012 book Consciousness that
if one accepts consciousness as a real phenomenon that’s not dependent on any
particular material—that it’s “substrate-independent,”
as philosophers put it—then “it is a simple step to conclude that the entire
cosmos is suffused with sentience.”
Yet panpsychism runs counter
to the majority view in both the physical sciences and in philosophy that
treats consciousness as an emergent phenomenon, something that arises in
certain complex systems, such as human brains. In this view, individual neurons
are not conscious, but thanks to the collective properties of some 86 billion
neurons and their interactions—which, admittedly, are still only poorly understood—brains
(along with bodies, perhaps) are conscious. Surveys suggest
that slightly more than half of academic philosophers hold this view, known as
“physicalism”
or “emergentism,” whereas about one third reject physicalism and lean toward
some alternative, of which panpsychism is one of several possibilities.
At the workshop, Goff made
the case that physics has missed something essential when it comes to our inner
mental life. In formulating their theories, “most physicists think about
experiments,” he said. “I think they should be thinking, ‘Is my theory
compatible with consciousness?’—because we know that’s real.”
Many philosophers at the
meeting appeared to share Goff’s concern that physicalism falters when it comes
to consciousness. “If you know every last detail about my brain processes, you
still wouldn’t know what it’s like to be me,” says Hedda Hassel Mørch, a
philosopher at Inland Norway University of Applied Sciences. “There is a
clear explanatory gap between the physical and the mental.” Consider, for
example, the difficulty of trying to describe color to someone who has only
seen the world in black and white. Yanssel Garcia, a philosopher at the
University of Nebraska Omaha, believes that physical facts alone are inadequate
for such a task. “There is nothing of a physical sort that you could provide [a
person who sees only in shades of gray] in order to have them understand what color
experience is like; [they] would need to experience it themselves,” he says.
“Physical science is, in principle, incapable of telling us the complete
story.” Of the various alternatives that have been put forward, he says that
“panpsychism is our best bet.”
But panpsychism attracts many
critics as well. Some point out that it doesn’t explain how small bits of
consciousness come together to form more substantive conscious entities.
Detractors say that this puzzle, known as the “combination problem,” amounts to
panpsychism’s own version of the hard problem. The combination problem “is the
serious challenge for the panpsychist position,” Goff admits. “And it’s where
most of our energies are going.”
Others question panpsychism’s
explanatory power. In his 2021 book Being You, neuroscientist Anil
Seth wrote that the main problems with panpsychism are that “it doesn’t really
explain anything and that it doesn’t lead to testable hypotheses. It’s an easy
get-out to the apparent mystery posed by the hard problem.”
While most of those invited
to the workshop were philosophers, there were also talks by physicists Sean
Carroll and Lee Smolin and by cognitive psychologist Donald Hoffman. Carroll, a
hardcore physicalist, served as an unofficial leader of the opposition as the
workshop unfolded. (He occasionally quipped, “I’m surrounded by panpsychists!”)
During a well-attended public debate between Goff and Carroll, the divergence
of their worldviews quickly became apparent. Goff said that physicalism has led
“precisely nowhere,” and suggested that the very idea of trying to explain
consciousness in physical terms was incoherent. Carroll argued that physicalism
is actually doing quite well and that although consciousness is one of many
phenomena that can’t be inferred from the goings-on at the microscopic level,
it is nonetheless a real, emergent feature of the macroscopic world. He offered
the physics of gases as a parallel example. At the micro level, one talks of
atoms, molecules and forces; at the macro level, one speaks of pressure, volume
and temperature. These are two kinds of explanations, depending on the “level”
being studied—but present no great mystery and are not a failure on the
part of physics. Before long, Goff and Carroll were deep into the weeds of the
so-called knowledge
argument (also known as “Mary in
the black and white room”), as well as the “zombie”
argument. Both boil down to the same key question: Is there something
about consciousness that cannot be accounted for by physical facts alone? Much
of the rhetorical ping-pong between Goff and Carroll amounted to Goff answering
yes to that question and Carroll answering no.
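Carroll's gas example can be made concrete with a few lines of arithmetic. The sketch below is my own illustration, not anything presented at the workshop: it samples molecular velocities at the micro level and recovers, at the macro level, the pressure that the ideal gas law predicts from temperature alone.

```python
# A toy illustration (mine, not the article's) of Carroll's point: pressure and
# temperature are macro-level summaries of micro-level molecular motion.
# Kinetic theory gives P = N * m * <v^2> / (3 * V); for an ideal gas this
# reduces to the familiar P * V = N * k * T.
import random

k_B = 1.380649e-23    # Boltzmann constant, J/K
N = 100_000           # molecules in this toy sample
m = 4.65e-26          # mass of an N2 molecule, kg
T = 300.0             # temperature, K
V = 1e-3              # volume, m^3

# Micro level: each velocity component is Gaussian with variance k_B*T/m
# (the Maxwell-Boltzmann distribution).
sigma = (k_B * T / m) ** 0.5
mean_v_sq = sum(
    sum(random.gauss(0.0, sigma) ** 2 for _ in range(3)) for _ in range(N)
) / N

# Macro level: pressure emerges as a statistical summary of all that motion.
p_from_motion = N * m * mean_v_sq / (3 * V)
p_from_gas_law = N * k_B * T / V

print(f"pressure from molecular speeds: {p_from_motion:.3e} Pa")
print(f"pressure from ideal gas law:    {p_from_gas_law:.3e} Pa")  # agrees closely
```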
Another objection some
attendees raised is that panpsychism doesn’t address what philosophers call the
“other
minds” problem. (You have direct access to your own mind—but how can you
deduce anything at all about another person’s mind?) “Even if panpsychism is
true, there will still be vast amounts of things—namely, things related to what
the experiences of others are like—that we still won’t know,” says Rebecca
Chan, a philosopher at San José State University. She worries that
invoking an underlying layer of mindedness is a bit like invoking God. “I
sometimes wonder if the panpsychist position is similar to ‘god
of the gaps’ arguments,” she says, referring to the notion that God is
needed to fill the gaps in scientific knowledge.
Other ideas were batted
around. The idea of cosmopsychism was
floated—roughly, the notion that the universe itself is conscious. And Paul Draper,
a philosopher at Purdue University who participated via Zoom, talked about a
subtly different idea known as “psychological
ether theory”—essentially that brains don’t produce consciousness
but rather make use of consciousness. In this view,
consciousness was already there before brains existed, like an all-pervasive
ether. If the idea is correct, he writes, “then (in all likelihood) God
exists.”
Hoffman, a cognitive
scientist at the University of California, Irvine, who also addressed the
workshop via Zoom, advocates rejecting the idea of spacetime and looking for
something deeper. (He cited the increasingly popular idea in physics lately
that space
and time may not be fundamental but may instead be emergent phenomena
themselves.) The deeper entity related to consciousness, Hoffman suggests, may
consist of “subjects and experiences” that he says “are entities beyond
spacetime, not within spacetime.” He developed this idea in a 2023 paper
entitled “Fusions of
Consciousness.”
Smolin, a physicist at the
Perimeter Institute for Theoretical Physics in Ontario, who also participated
via Zoom, has similarly been working on theories that appear to offer a more
central role for conscious agents. In a 2020 paper, he suggested that
the universe “is composed of a set of partial views of itself” and that
“conscious perceptions are aspects of some views”—a perspective that he says
can be thought of as “a restricted form of panpsychism.”
Carroll, speaking after the
session that included both Hoffman and Smolin, noted that his own views
diverged from those of the speakers within the first couple of minutes. (Over
lunch, he noted that attending the workshop sometimes felt like being on a
subreddit for fans of a TV show that you’re just not into.) He admitted that
endless debates over the nature of “reality” sometimes left him frustrated.
“People ask me, ‘What is physical reality?’ It’s physical reality! There’s
nothing that it ‘is.’ What do you want me to say, that it’s made of macaroni or
something?” (Even Carroll, however, admits that there’s more to reality than
meets the eye. He’s a strong supporter of the “many
worlds” interpretation of quantum mechanics, which holds that our universe
is just one facet of a vast quantum multiverse.)
If all of this sounds like it
couldn’t possibly have any practical value, Goff raised the possibility that
how we conceive of minds can have ethical implications. Take the question of
whether fish feel pain. Traditional science can only study a fish’s outward
behavior, not its mental state. To Goff, focusing on the fish’s behavior is not
only wrong-headed but “horrific” because it leaves out what’s actually most
important—what the fish actually feels. “We’re going to stop asking
if fish are conscious and just look at their behavior? Who gives a shit about
the behavior? I want to know if it has an inner life; that’s all that matters!”
For physicalists such as Carroll, however, feelings and behavior are intimately
linked—which means we can avoid causing an animal to suffer by not putting it
in a situation where it appears to be suffering based on its behavior. “If
there were no connection between them [behavior and feelings], we would indeed
be in trouble,” says Carroll, “but that’s not our world.”
Seth, the neuroscientist, was
not at the workshop—but I asked him where he stands in the debate over
physicalism and its various alternatives. Physicalism, he says, still offers
more “empirical grip” than its competitors—and he laments what he sees as
excessive hand-wringing over its alleged failures, including the supposed
hardness of the hard problem. “Critiquing physicalism on the basis that it has
‘failed’ is willful mischaracterization,” he says. “It’s doing just fine, as
progress in consciousness science readily attests.” In a recently published
article in the Journal
of Consciousness Studies, Seth adds: “Asserting that consciousness is
fundamental and ubiquitous does nothing to shed light on the way an experience
of blueness is the way it is, and not some other way. Nor does it explain
anything about the possible functions of consciousness, nor why consciousness
is lost in states such as dreamless sleep, general anaesthesia, and coma.”
Even those who lean toward
panpsychism sometimes seem hesitant to dive into the deep end. As Garcia put
it, in spite of the allure of a universe imbued with consciousness, “I would
love to be talked out of it.”
https://www.scientificamerican.com/article/is-consciousness-part-of-the-fabric-of-the-universe/
Being You: A
New Science of Consciousness
by Anil
Seth
A British
neuroscientist builds the argument that we do not perceive the world as it
objectively is, but rather that we are prediction machines, constantly
inventing our world and correcting our mistakes by the microsecond, and that we
can now observe the biological mechanisms in the brain that accomplish this
process of consciousness.
Repeating a word: as the brain receives (yellow), interprets (red), and responds (blue) within a second, the prefrontal cortex (red) coordinates all areas of the brain involved. (video credit: Avgusta Shestyuk/UC Berkeley).
For a more difficult task, like saying a word that is the opposite of another word, people’s brains required 2–3 seconds to detect (yellow), interpret and search for an answer (red), and respond (blue) — with sustained prefrontal lobe activity (red) coordinating all areas of the brain involved. (video credit: Avgusta Shestyuk/UC Berkeley).
- Operating at 100 GHz, it can fire at a rate that is much faster than the human brain — 1 billion times per second, compared to a brain cell’s rate of about 50 times per second.
- It uses only about one ten-thousandth as much energy as a human synapse. The spiking energy is less than 1 attojoule — roughly equivalent to the minuscule chemical energy bonding two atoms in a molecule — compared to the roughly 10 femtojoules (10,000 attojoules) per synaptic event in the human brain. Current neuromorphic platforms are orders of magnitude less efficient than the human brain. “We don’t know of any other artificial synapse that uses less energy,” NIST physicist Mike Schneider said.
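Those figures are easy to check. The quick arithmetic below is a reader's sketch using only the numbers quoted above (taking the 1-billion-per-second firing rate given in the text):

```python
# Sanity check of the comparison quoted above (a reader's arithmetic, not code
# from the source). Figures are the ones given in the text.
artificial_energy_j = 1e-18    # "less than 1 attojoule" per spiking event
biological_energy_j = 10e-15   # "roughly 10 femtojoules" per synaptic event
artificial_rate_hz = 1e9       # "1 billion times per second"
biological_rate_hz = 50.0      # "about 50 times per second"

print(biological_energy_j / artificial_energy_j)  # 10000.0 -> one ten-thousandth the energy
print(artificial_rate_hz / biological_rate_hz)    # 20000000.0 -> a 20-million-fold rate gap
```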
The solipsism problem, also
called the problem of other minds, lurks at the heart of science, philosophy,
religion, the arts and the human condition
By John Horgan on September 11, 2020
It is a central dilemma of
human life—more urgent, arguably, than the inevitability of suffering and
death. I have been brooding and ranting to my students about it for years. It
surely troubles us more than ever during this plague-ridden era. Philosophers
call it the problem of other minds. I prefer to call it the solipsism problem.
Solipsism, technically, is
an extreme form of skepticism, at once utterly nuts and irrefutable. It holds
that you are the only conscious being in existence. The cosmos sprang into
existence when you became sentient, and it will vanish when you die. As crazy
as this proposition seems, it rests on a brute fact: each of us is sealed in an
impermeable prison cell of subjective awareness. Even our most intimate
exchanges might as well be carried out via Zoom.
You experience your own mind
every waking second, but you can only infer the existence of other minds
through indirect means. Other people seem to possess conscious
perceptions, emotions, memories, intentions, just as you do, but you can’t be
sure they do. You can guess how the world looks to me, based on my behavior and
utterances, including these words you are reading, but you have no first-hand
access to my inner life. For all you know, I might be a mindless bot.
Natural selection instilled
in us the capacity for a so-called theory of mind—a talent for intuiting
others’ emotions and intentions. But we have a countertendency to deceive each
other, and to fear we are being deceived. The ultimate deception would be
pretending you’re conscious when you’re not.
The solipsism problem
thwarts efforts to explain consciousness. Scientists and philosophers have
proposed countless contradictory hypotheses about what consciousness is and how
it arises. Panpsychists contend that all creatures and even inanimate matter—even
a single proton!—possess consciousness. Hard-core materialists
insist, conversely (and perversely), that not even humans are
all that conscious.
The solipsism problem
prevents us from verifying or falsifying these and other claims. I can’t be
certain that you are conscious, let alone a
jellyfish, sexbot or doorknob. As long as we lack what
neuroscientist Christof Koch calls a consciousness
meter—a device that can measure consciousness in the same way that a
thermometer measures temperature—theories of consciousness will remain in the
realm of pure speculation.
But the solipsism problem is
far more than a technical philosophical matter. It is a paranoid but
understandable response to the feelings of solitude that lurk within us all.
Even if you reject solipsism as an intellectual position, you sense it,
emotionally, whenever you feel estranged from others, whenever you confront the
awful truth that you can never know, really know another
person, and no one can really know you.
Religion is one response to
the solipsism problem. Our ancestors dreamed up a supernatural entity who bears
witness to our innermost fears and desires. No matter how lonesome we feel, how
alienated from our fellow humans, God is always there watching over us. He sees
our souls, our most secret selves, and He loves us anyway. Wouldn’t it be nice
to think so.
The arts, too, can be seen
as attempts to overcome the solipsism problem. The artist, musician, poet,
novelist says, This is how my life feels or This is
how life might feel for another person. She helps us imagine what it’s like
to be a Black woman trying to save her children from slavery, or a Jewish ad
salesman wandering through Dublin, wondering whether his wife is cheating on
him. But to imagine is not to know.
Some of my favorite works of
art dwell on the solipsism problem. In I’m thinking of ending things and
earlier films, as well as his new novel Antkind, Charlie Kaufman
depicts other people as projections of a disturbed protagonist. Kaufman no
doubt hopes to help us, and himself, overcome the solipsism problem by venting
his anxiety about it, but I find his dramatizations almost too evocative.
Love, ideally, gives us the
illusion of transcending the solipsism problem. You feel you really know
someone, from the inside out, and she knows you. In moments of ecstatic sexual
communion or mundane togetherness—while you’re eating pizza and watching The
Alienist, say—you fuse with your beloved. The barrier between you
seems to vanish.
Inevitably, however, your
lover disappoints, deceives, betrays you. Or, less dramatically, some subtle
bio-cognitive shift occurs. You look at her as she nibbles her pizza and think,
Who, what, is this odd creature? The solipsism problem has reemerged, more
painful and suffocating than ever.
It gets worse. In addition
to the problem of other minds, there is the problem of our own. As evolutionary
psychologist Robert Trivers points out, we deceive ourselves at least
as effectively as we deceive others. A corollary of this dark truth is that we
know ourselves even less than we know others.
If a lion could talk,
Wittgenstein said, we couldn’t understand it. The same is true, I suspect, of
our own deepest selves. If you could eavesdrop on your subconscious, you’d hear
nothing but grunts, growls and moans—or perhaps the high-pitched squeaks of raw
machine-code data coursing through a channel.
For the mentally ill,
solipsism can become terrifyingly vivid. Victims of Capgras syndrome think that
identical imposters have replaced their loved ones. If you have Cotard’s
delusion, also known as walking corpse syndrome, you become convinced that you
are dead. A much more common disorder is derealization, which makes everything (you, others, reality as a whole) feel strange, phony, simulated.
Derealization plagued me
throughout my youth. One episode was self-induced. Hanging out with friends in
high school, I thought it would be fun to hyperventilate, hold my breath and
let someone squeeze my chest until I passed out. When I woke up, I didn’t
recognize my buddies. They were demons, jeering at me. For weeks after that
horrifying sensation faded, everything still felt unreal, as if I were in a
dreadful movie.
What if those afflicted with
these alleged delusions actually see reality clearly? According to the Buddhist
doctrine of anatta, the self does not really exist. When you try to
pin down your own essence, to grasp it, it slips through your fingers.
We have devised methods for
cultivating self-knowledge and quelling our anxieties, such as meditation and psychotherapy.
But these practices strike me as forms of self-brainwashing. When we meditate
or see a therapist, we are not solving the solipsism problem. We are merely
training ourselves to ignore it, to suppress the horror and despair that it
triggers.
We have also invented
mythical places in which the solipsism problem vanishes. We transcend our
solitude and merge with others into a unified whole. We call these places
heaven, nirvana, the Singularity. But solipsism is a cave from which we cannot
escape—except, perhaps, by pretending it doesn’t exist. Or, paradoxically, by
confronting it, the way Charlie Kaufman does. Knowing we are in the cave may be
as close as we can get to escaping it.
Conceivably, technology
could deliver us from the solipsism problem. Christof
Koch proposes that we all get brain implants with wi-fi, so we
can meld minds through a kind of high-tech telepathy. Philosopher Colin McGinn
suggests a technique that involves “brain-splicing,” transferring bits
of your brain into mine, and vice versa.
But do we really want to
escape the prison of our subjective selves? The archnemesis of Star Trek:
The Next Generation is the Borg, a legion of tech-enhanced humanoids
who have fused into one big meta-entity. Borg members have lost their
separation from each other and hence their individuality. When they meet
ordinary humans, they mutter in a scary monotone, “You will be assimilated.
Resistance is futile.”
As hard as solitude can be
for me to bear, I don’t want to be assimilated. If solipsism haunts me, so does
oneness, a unification so complete that it
extinguishes my puny mortal self. Perhaps the best way to cope with
the solipsism problem in this weird, lonely time is to imagine a world in which
it has vanished.
New Clues about the Origins of Biological
Intelligence
A common solution is
emerging in two different fields: developmental biology and neuroscience
December 11, 2021
Rafael Yuste is a professor of biological sciences at
Columbia University and director of its Neurotechnology Center.
Michael Levin is a biology professor and director of
the Allen Discovery Center at Tufts University.
https://www.scientificamerican.com/article/new-clues-about-the-origins
The
New Science of Consciousness
Anil Seth
https://www.youtube.com/watch?v=m_YV3bjfUQg
Being You: A New Science of Consciousness
by Anil Seth
K-shell decomposition reveals
hierarchical cortical organization of
the human brain
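For readers unfamiliar with the method named in this title: k-shell (k-core) decomposition peels a network into nested layers by repeatedly removing nodes of degree less than k, so deeper shells mark the structural core. A generic sketch using networkx follows; the graph is a stand-in, not the paper's data:

```python
# K-shell (k-core) decomposition, the graph method named in the title above;
# a generic sketch on a stand-in graph, not the paper's code or data.
import networkx as nx

G = nx.karate_club_graph()    # placeholder network; the paper analyzes human cortical networks
core = nx.core_number(G)      # node -> largest k such that the node survives in the k-core

# The k-shell is the set of nodes whose core number is exactly k; higher
# shells are progressively more densely interconnected "cores" of the network.
shells = {}
for node, k in core.items():
    shells.setdefault(k, []).append(node)

for k in sorted(shells):
    print(f"{k}-shell: {len(shells[k])} nodes")
```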
SOCIAL INTELLIGENCE. THE NEW SCIENCE OF HUMAN RELATIONSHIPS
By D. Goleman
Author
Daniel Goleman explores the manner in which the brain is designed to engage in
brain-to-brain “hookups” with others, and how these interactions affect both
our social interactions and physical/mental well-being. Based upon
conceptualizations pioneered by Edward Thorndike, Goleman analyzes a
traditional concept of social intelligence for the purpose of developing a
revised model that consists of two categories: Social awareness (e.g.,
assessing the feelings of others) and social facility (e.g., awareness
of how people present themselves). Goleman also explores advances in
neuroscience that have made it possible for scientists and psychologists to
study the ways in which emotions and biology work together.
https://www.semanticscholar.org/paper/Social-Intelligence
Emotional
Intelligence Why It Can
Matter More Than IQ
By
Daniel Goleman
https://www.academia.edu/37329006/Emotional_Intelligence_Why_it_Can_Matter_More_Than_IQ_
Here’s
how human consciousness works—and how a machine might replicate it
In
an excerpt from his new book, Numenta and Palm cofounder Jeff Hawkins
says that consciousness isn’t beyond understanding. Nor is replicating it
unimaginable.
BY JEFF HAWKINS
I
recently attended a panel discussion titled Being Human in the Age of
Intelligent Machines. At one point during the evening, a philosophy professor
from Yale said that if a machine ever became conscious, then we would probably
be morally obligated to not turn it off. The implication was that if something
is conscious, even a machine, then it has moral rights, so turning it off is
equivalent to murder. Wow! Imagine being sent to prison for unplugging a
computer. Should we be concerned about this?
Most
neuroscientists don’t talk much about consciousness. They assume that the brain
can be understood like every other physical system, and consciousness, whatever
it is, will be explained in the same way. Since there isn’t even an agreement
on what the word consciousness means, it is best to not worry about it.
Philosophers, on the other hand, love to talk (and write books) about
consciousness. Some believe that consciousness is beyond physical description.
That is, even if you had a full understanding of how the brain works, it would
not explain consciousness. Philosopher David Chalmers famously claimed that
consciousness is “the hard problem,” whereas understanding how the brain works
is “the easy problem.” This phrase caught on, and now many people just assume
that consciousness is an inherently unsolvable problem.
Personally,
I see no reason to believe that consciousness is beyond explanation. I don’t
want to get into debates with philosophers, nor do I want to try to define
consciousness. However, the Thousand Brains Theory suggests physical
explanations for several aspects of consciousness. For example, the way the
brain learns models of the world is intimately tied to our sense of self and
how we form beliefs.
Imagine
if I could reset your brain to the exact state it was in when you woke up this
morning. Before I reset you, you would get up and go about your day, doing the
things you normally do. Perhaps on this day you washed your car. At dinnertime,
I would reset your brain to the time you got up, undoing any changes—including
any changes to the synapses—that occurred during the day. Therefore, all
memories of what you did would be erased. After I reset your brain, you would
believe that you just woke up. If I then told you that you had washed your car
today, you would at first protest, claiming it wasn’t true.
Upon
showing you a video of you washing your car, you might admit that it indeed
looks like you had, but you could not have been conscious at the time. You
might also claim that you shouldn’t be held responsible for anything you did
during the day because you were not conscious when you did it. Of course, you
were conscious when you washed your car. It is only after deleting your
memories of the day that you would believe and claim you were not. This thought
experiment shows that our sense of awareness, what many people would call being
conscious, requires that we form moment-to-moment memories of our actions.
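The logic of the reset scenario is compact enough to restate as code. The toy model below is my sketch of the excerpt's argument, not anything from the book; the episodic-memory list stands in for the day's synaptic changes:

```python
# A toy restatement (my sketch, not Hawkins's code) of the reset thought
# experiment: restoring the morning snapshot erases the day's memories, and
# with them the agent's only evidence that its conscious day happened.
import copy

class Agent:
    def __init__(self):
        self.episodic_memory = []          # stands in for the day's synaptic changes

    def do(self, action):
        self.episodic_memory.append(action)   # acting normally lays down memories

    def recalls(self, action):
        return action in self.episodic_memory

agent = Agent()
snapshot = copy.deepcopy(agent)            # brain state at wake-up

agent.do("washed the car")                 # the day's fully conscious activity
print(agent.recalls("washed the car"))     # True: the memory was formed

agent = snapshot                           # evening reset to the morning state
print(agent.recalls("washed the car"))     # False: the act happened; the memory didn't survive
```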
Consciousness
also requires that we form moment-to-moment memories of our thoughts. Thinking
is just a sequential activation of neurons in the brain. We can remember a
sequence of thoughts just as we can remember the sequence of notes in a melody.
If we didn’t remember our thoughts, we would be unaware of why we were doing
anything. For example, we have all experienced going to a room in our house to
do something but, upon entering the room, forgetting what we went there for.
When this happens, we often ask ourselves, “where was I just before I got here
and what was I thinking?” We try to recall the memory of our recent thoughts so
we know why we are now standing in the kitchen. When our brains are working
properly, the neurons form a continuous memory of both our thoughts and
actions. Therefore, when we get to the kitchen, we can recall the thoughts we
had earlier. We retrieve the recently stored memory of thinking about eating
the last piece of cake in the refrigerator and we know why we went to the
kitchen.
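The kitchen example likewise fits in a few lines. In this sketch (again mine, not the book's), recent thoughts go into a short bounded buffer that can be replayed on demand:

```python
# The "why did I come here?" recall as a short, bounded buffer of recent
# thoughts (my illustration of the passage above, not code from the book);
# the small fixed length stands in for how quickly these memories fade.
from collections import deque

recent_thoughts = deque(maxlen=5)    # moment-to-moment memory is short and bounded

recent_thoughts.append("remembered the last piece of cake in the refrigerator")
recent_thoughts.append("decided to go and get it")
recent_thoughts.append("walked to the kitchen")

# Standing in the kitchen, replay recent thoughts to recover the purpose.
for thought in reversed(recent_thoughts):
    print("replay:", thought)
```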
The
active neurons in the brain at some moments represent our present experience,
and at other moments represent a previous experience or a previous thought. It
is this accessibility of the past—the ability to jump back in time and slide
forward again to the present—that gives us our sense of presence and awareness.
If we couldn’t replay our recent thoughts and experiences, then we would be
unaware that we are alive.
Our
moment-to-moment memories are not permanent. We typically forget them within
hours or days. I remember what I had for breakfast today, but I will lose this
memory in a day or two. It is common that our ability to form short-term
memories declines with age. That is why we have more and more of the “why did I
come here?” experiences as we get older.
These
thought experiments prove that our awareness, our sense of presence—which is
the central part of consciousness—is dependent on continuously forming memories
of our recent thoughts and experiences and playing them back as we go about our
day.
Now
let’s say we create an intelligent machine. The machine learns a model of the
world using the same principles as a brain. The internal states of the
machine’s model of the world are equivalent to the states of neurons in the
brain. If our machine remembers these states as they occur and can replay these
memories, then would it be aware and conscious of its existence, in the same
way that you and I are? I believe so.
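Hawkins's criterion can be phrased as a minimal interface. The sketch below is one reading of the excerpt, with hypothetical names of my own choosing: the agent records each internal state of its world model as it occurs and can replay the trace at will:

```python
# Hawkins's criterion for machine awareness, phrased as a minimal interface
# (the class name and structure are mine, an interpretation of the excerpt):
# store the world model's internal states as they occur, and replay them.
class SelfModelingAgent:
    def __init__(self):
        self.state = None     # current internal state of the world model
        self.trace = []       # moment-to-moment memory of past states

    def perceive(self, observation):
        self.state = observation        # stand-in for updating a learned world model
        self.trace.append(self.state)   # remember each state as it occurs

    def replay(self):
        # "Jump back in time and slide forward again to the present."
        return list(self.trace)

agent = SelfModelingAgent()
for obs in ["woke up", "made coffee", "washed the car"]:
    agent.perceive(obs)
print(agent.replay())    # the agent can report its own recent history
```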
If
you believe that consciousness cannot be explained by scientific investigation
and the known laws of physics, then you might argue that I have shown that
storing and recalling the states of a brain is necessary, but I have not proven
that it is sufficient. If you take this view, then the burden is on you to show
why it is not sufficient. For me, the sense of awareness—the sense of presence,
the feeling that I am an acting agent in the world—is the core of what it means
to be conscious. It is easily explained by the activity of neurons, and I see
no mystery in it.
Excerpted
from A
Thousand Brains: A New Theory of Intelligence by Jeff Hawkins. Copyright © 2021. Available
from Basic Books, an imprint of Hachette Book Group.
https://www.fastcompany.com/90596244/can-a-machine-achieve-
In Search of Memory: The Emergence of a New Science of Mind
Memory
binds our mental life together. We are who we are in large part because of what
we learn and remember. But how does the brain create memories? Nobel Prize
winner Eric R. Kandel intertwines the intellectual history of the powerful new
science of the mind—a combination of cognitive psychology, neuroscience, and
molecular biology—with his own personal quest to understand memory. A deft
mixture of memoir and history, modern biology and behavior, In Search
of Memory brings readers from Kandel's childhood in Nazi-occupied
Vienna to the forefront of one of the great scientific endeavors of the
twentieth century: the search for the biological basis of memory.
https://www.amazon.com/Search-Memory-Emergence-Science-Mind-ebook/dp/B002PQ7B5O
The Oracle of Night: The History and Science of Dreams
By Sidarta Ribeiro
What
is a dream? Why do we dream? How do our bodies and minds use them? These
questions are the starting point for this unprecedented study of the role and
significance of this phenomenon. An investigation on a grand scale, it
encompasses literature, anthropology, religion, and science, articulating the
essential place dreams occupy in human culture and how they functioned as the
catalyst that compelled us to transform our earthly habitat into a human world.
From
the earliest cave paintings - where Sidarta Ribeiro locates a key to
humankind’s first dreams and how they contributed to our capacity to perceive
past and future and our ability to conceive of the existence of souls and
spirits - to today’s cutting-edge scientific research, Ribeiro arrives at revolutionary
conclusions about the role of dreams in human existence and evolution. He
explores the advances that contemporary neuroscience, biochemistry, and
psychology have made in understanding the connections between sleep, dreams, and learning.
He explains what dreams have taught us about the neural basis of memory and the
transformation of memory in recall. And he makes clear that the earliest
insight into dreams as oracular has been elucidated by contemporary research.
Accessible,
authoritative, and fascinating, The Oracle of Night gives us a wholly
new way to understand this most basic of human experiences.
https://www.amazon.com/Oracle-Night-History-Science-Dreams/dp/B08R981399
What
the neuroscience of near-death experiences tells us about human consciousness
https://www.scientificamerican.com/article/lifting-the-veil-on-near-death-experiences/
Advanced Meditation Alters Consciousness and Our Basic Sense of
Self
Matthew D. Sacchet; Judson A. Brewer
An emerging science of advanced meditation could transform mental health
and our understanding of consciousness