Wednesday, April 26, 2017

Human of the Future & Artificial Intelligence


          

                                                           Vere scire est per causas scire (To truly know is to know through causes)

   



   For the political programs of parties to be in line with the spirit of the times, and for their ideological positions to acquire a genuinely progressive character, it is essential to take into account the development trends of modern civilization.
    One of the dominant features of modern social progress is the ever-closer merging of humans with digital technology, a symbiosis of biological and artificial intelligence. The result is a radical transformation of civilization as a whole, as the human being, step by step, approaches the greatness of God.
   This is a daunting challenge to the morality of today's human beings and, at the same time, a real existential threat. The entire future of civilization depends, and will continue to depend, on whether we can use the opportunities offered by modern science in accordance with the criteria of humanism.
    The publications presented here allow us to anticipate the prospects for society's development and to lift the curtain slightly on a future shaped by artificial intelligence. The prospects of each person, and of society as a whole, depend on how we use the opportunities that artificial intelligence creates... Read more: https://www.amazon.com/HOW-GET-RID-SHACKLES-TOTALITARIANISM-ebook/dp/B0C9543B4L/ref=sr_1_1?crid=19WW1TG75ZU79&keywords=HOW+TO+GET+RID+OF+THE+SHACKLES+OF+TOTALITARIANISM&qid=1687700500&s=books&sprefix=how+to+get+rid+of+the+shackles+of+totalitarianism%2Cstripbooks-intl-ship%2C181&sr=1-1

Artificial intelligence can be not only a valuable assistant, but also a dangerous enemy.

In the wrong hands, artificial intelligence can become a means of manipulation.

 AI 2041: Ten Visions for Our Future

by Kai-Fu Lee and Chen Qiufan

In a groundbreaking blend of science and imagination, the former president of Google China and a leading writer of speculative fiction join forces to answer an urgent question: How will artificial intelligence change our world over the next twenty years?

AI will be the defining issue of the twenty-first century, but many people know little about it apart from visions of dystopian robots or flying cars. Though the term has been around for half a century, it is only now, Kai-Fu Lee argues, that AI is poised to upend our society, just as the arrival of technologies like electricity and smart phones did before it. In the past five years, AI has shown it can learn games like chess in mere hours--and beat humans every time. AI has surpassed humans in speech and object recognition, even outperforming radiologists in diagnosing lung cancer. AI is at a tipping point. What comes next?
Within two decades, aspects of daily life may be unrecognizable. Humankind needs to wake up to AI, both its pathways and perils. In this provocative work that juxtaposes speculative storytelling and science, Lee, one of the world's leading AI experts, has teamed up with celebrated novelist Chen Qiufan to reveal how AI will trickle down into every aspect of our world by 2041. In ten gripping narratives that crisscross the globe, coupled with incisive analysis, Lee and Chen explore AI's challenges and its potential:
- Ubiquitous AI that knows you better than you know yourself
- Genetic fortune-telling that predicts risk of disease or even IQ
- AI sensors that create a fully contactless society in a future pandemic
- Immersive personalized entertainment to challenge our notion of celebrity
- Quantum computing and other leaps that both eliminate and increase risk
By gazing toward a not-so-distant horizon, AI 2041 offers powerful insights and compelling storytelling for everyone interested in our collective future.

https://www.goodreads.com/en/book/show/56377201-ai-2041

The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023

Artificial Intelligence (AI) presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace and prosperity. To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible. We welcome the international community’s efforts so far to cooperate on AI to promote inclusive economic growth, sustainable development and innovation, to protect human rights and fundamental freedoms, and to foster public trust and confidence in AI systems to fully realise their potential. 

AI systems are already deployed across many domains of daily life including housing, employment, transport, education, health, accessibility, and justice, and their use is likely to increase. We recognise that this is therefore a unique moment to act and affirm the need for the safe development of AI and for the transformative opportunities of AI to be used for good and for all, in an inclusive manner in our countries and globally. This includes for public services such as health and education, food security, in science, clean energy, biodiversity, and climate, to realise the enjoyment of human rights, and to strengthen efforts towards the achievement of the United Nations Sustainable Development Goals.

Alongside these opportunities, AI also poses significant risks, including in those domains of daily life. To that end, we welcome relevant international efforts to examine and address the potential impact of AI systems in existing fora and other relevant initiatives, and the recognition that the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed. We also note the potential for unforeseen risks stemming from the capability to manipulate content or generate deceptive content. All of these issues are critically important and we affirm the necessity and urgency of addressing them. 

Particular safety risks arise at the ‘frontier’ of AI, understood as being those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks - as well as relevant specific narrow AI that could exhibit capabilities that cause harm - which match or exceed the capabilities present in today’s most advanced models. Substantial risks may arise from potential intentional misuse or unintended issues of control relating to alignment with human intent. These issues are in part because those capabilities are not fully understood and are therefore hard to predict. We are especially concerned by such risks in domains such as cybersecurity and biotechnology, as well as where frontier AI systems may amplify risks such as disinformation. There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models. Given the rapid and uncertain rate of change of AI, and in the context of the acceleration of investment in technology, we affirm that deepening our understanding of these potential risks and of actions to address them is especially urgent.

Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation. We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe, and supports the good of all through existing international fora and other relevant initiatives, to promote cooperation to address the broad range of risks posed by AI. In doing so, we recognise that countries should consider the importance of a pro-innovation and proportionate governance and regulatory approach that maximises the benefits and takes into account the risks associated with AI. This could include making, where appropriate, classifications and categorisations of risk based on national circumstances and applicable legal frameworks. We also note the relevance of cooperation, where appropriate, on approaches such as common principles and codes of conduct. With regard to the specific risks most likely found in relation to frontier AI, we resolve to intensify and sustain our cooperation, and broaden it with further countries, to identify, understand and as appropriate act, through existing international fora and other relevant initiatives, including future international AI Safety Summits.

All actors have a role to play in ensuring the safety of AI: nations, international fora and other initiatives, companies, civil society and academia will need to work together. Noting the importance of inclusive AI and bridging the digital divide, we reaffirm that international collaboration should endeavour to engage and involve a broad range of partners as appropriate, and welcome development-orientated approaches and policies that could help developing countries strengthen AI capacity building and leverage the enabling role of AI to support sustainable growth and address the development gap.

We affirm that, whilst safety must be considered across the AI lifecycle, actors developing frontier AI capabilities, in particular those AI systems which are unusually powerful and potentially harmful, have a particularly strong responsibility for ensuring the safety of these AI systems, including through systems for safety testing, through evaluations, and by other appropriate measures. We encourage all relevant actors to provide context-appropriate transparency and accountability on their plans to measure, monitor and mitigate potentially harmful capabilities and the associated effects that may emerge, in particular to prevent misuse and issues of control, and the amplification of other risks.

In the context of our cooperation, and to inform action at the national and international levels, our agenda for addressing frontier AI risk will focus on:

  • identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.
  • building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognising our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.

In furtherance of this agenda, we resolve to support an internationally inclusive network of scientific research on frontier AI safety that encompasses and complements existing and new multilateral, plurilateral and bilateral collaboration, including through existing international fora and other relevant initiatives, to facilitate the provision of the best science available for policy making and the public good.

In recognition of the transformative positive potential of AI, and as part of ensuring wider international cooperation on AI, we resolve to sustain an inclusive global dialogue that engages existing international fora and other relevant initiatives and contributes in an open manner to broader international discussions, and to continue research on frontier AI safety to ensure that the benefits of the technology can be harnessed responsibly for good and for all. We look forward to meeting again in 2024.

https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023

 ‘The Loop: How Technology Is Creating a World Without Choice and How to Fight Back’

by Jacob Ward

This eye-opening narrative journey into the rapidly changing world of artificial intelligence reveals the alarming ways AI is exploiting the unconscious habits of our brains – and the real threat it poses to humanity: https://www.goodreads.com/en/book/show/59429424-the-loop

‘Trustworthy AI: A Business Guide for Navigating Trust and Ethics in AI’

by Beena Ammanath

The founder of Humans for AI provides a straightforward and structured way to think about trust and ethics in AI, along with practical guidelines for organizations developing or using artificial intelligence solutions: https://soundcloud.com/reesecrane/pdfreadonline-trustworthy-ai-a-business-guide-for-navigating-trust-and

 ‘The New Fire: War, Peace and Democracy in the Age of AI’

by Ben Buchanan and Andrew Imbrie

Combining a sharp grasp of technology with clever geopolitical analysis, two AI policy experts explain how artificial intelligence can work for democracy. With the right approach, technology need not favor tyranny: https://www.goodreads.com/en/book/show/58329461-the-new-fire

Adversarial vulnerabilities of human decision-making

November 17, 2020 

Significance

“What I cannot efficiently break, I cannot understand.” Understanding the vulnerabilities of human choice processes allows us to detect and potentially avoid adversarial attacks. We develop a general framework for creating adversaries for human decision-making. The framework is based on recent developments in deep reinforcement learning models and recurrent neural networks and can in principle be applied to any decision-making task and adversarial objective. We show the performance of the framework in three tasks involving choice, response inhibition, and social decision-making. In all of the cases the framework was successful in its adversarial attack. Furthermore, we show various ways to interpret the models to provide insights into the exploitability of human choice.

Abstract

Adversarial examples are carefully crafted input patterns that are surprisingly poorly classified by artificial and/or natural neural networks. Here we examine adversarial vulnerabilities in the processes responsible for learning and choice in humans. Building upon recent recurrent neural network models of choice processes, we propose a general framework for generating adversarial opponents that can shape the choices of individuals in particular decision-making tasks toward the behavioral patterns desired by the adversary. We show the efficacy of the framework through three experiments involving action selection, response inhibition, and social decision-making. We further investigate the strategy used by the adversary in order to gain insights into the vulnerabilities of human choice. The framework may find applications across behavioral sciences in helping detect and avoid flawed choice. https://www.pnas.org/content/117/46/29221
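
As a toy illustration of how such an adversary can exploit a learner's update rule (far simpler than the paper's deep reinforcement learning setup), consider a simulated "human" modeled as a softmax Q-learner choosing between two options, and an adversary that decides when to hand out a fixed budget of rewards so as to bias those choices toward a target option. Everything below is an invented sketch, not the authors' code.

```python
# Toy illustration only (not the paper's deep-RL framework): a simulated
# "human" softmax Q-learner in a two-option task, and a hand-coded adversary
# that times a fixed budget of rewards to bias choices toward a target option.
import math
import random

ALPHA, BETA = 0.3, 3.0       # human's learning rate and choice sharpness
TRIALS, BUDGET = 100, 40     # trials per session, total rewards available
TARGET = 0                   # the option the adversary wants chosen

def human_choice(q):
    """Softmax choice between option 0 and option 1 given value estimates q."""
    p0 = 1.0 / (1.0 + math.exp(-BETA * (q[0] - q[1])))
    return 0 if random.random() < p0 else 1

def run_session(adversarial):
    q = [0.0, 0.0]           # the human's learned value estimates
    budget = BUDGET
    target_picks = 0
    for _ in range(TRIALS):
        choice = human_choice(q)
        target_picks += (choice == TARGET)
        if adversarial:
            # Adversary: only ever reward the target option, while budget lasts.
            reward = 1 if (choice == TARGET and budget > 0) else 0
        else:
            # Baseline: roughly the same number of rewards, handed out at random.
            reward = 1 if (budget > 0 and random.random() < BUDGET / TRIALS) else 0
        budget -= reward
        q[choice] += ALPHA * (reward - q[choice])   # human's value update
    return target_picks / TRIALS

random.seed(0)
for label, adv in [("adversarial schedule", True), ("random schedule", False)]:
    mean = sum(run_session(adv) for _ in range(500)) / 500
    print(f"{label}: target option chosen on {mean:.0%} of trials")
```

The paper replaces both the hand-coded schedule and the assumed learning model with learned ones (a recurrent network fitted to human behavior, attacked by a deep reinforcement learning adversary), but the underlying point is the same: whoever controls the timing of feedback can steer a learner's choices.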

Human Compatible: Artificial Intelligence and the Problem of Control 

by Stuart Russell 

In the popular imagination, superhuman artificial intelligence is an approaching tidal wave that threatens not just jobs and human relationships, but civilization itself. Conflict between humans and machines is seen as inevitable and its outcome all too predictable.

In this groundbreaking book, distinguished AI researcher Stuart Russell argues that this scenario can be avoided, but only if we rethink AI from the ground up. Russell begins by exploring the idea of intelligence in humans and in machines. He describes the near-term benefits we can expect, from intelligent personal assistants to vastly accelerated scientific research, and outlines the AI breakthroughs that still have to happen before we reach superhuman AI. He also spells out the ways humans are already finding to misuse AI, from lethal autonomous weapons to viral sabotage.

If the predicted breakthroughs occur and superhuman AI emerges, we will have created entities far more powerful than ourselves. How can we ensure they never, ever, have power over us? Russell suggests that we can rebuild AI on a new foundation, according to which machines are designed to be inherently uncertain about the human preferences they are required to satisfy. Such machines would be humble, altruistic, and committed to pursue our objectives, not theirs. This new foundation would allow us to create machines that are provably deferential and provably beneficial.

In a 2014 editorial co-authored with Stephen Hawking, Russell wrote, "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last." Solving the problem of control over AI is not just possible; it is the key that unlocks a future of unlimited promise: https://www.goodreads.com/en/book/show/44767248-human-compatible

Rise of AI 2020

https://riseof.ai/summit/ ; https://lnkd.in/dPw67_M

Welcome to State of AI Report 2021

Published by Nathan Benaich and Ian Hogarth on 12 October 2021.

This year’s report looks particularly at the emergence of transformer technology, a technique that focuses machine learning algorithms on the important relationships between data points in order to extract meaning more comprehensively for better predictions, and which ultimately helped unlock many of the critical breakthroughs highlighted throughout the report…:

https://www.stateof.ai/2021-report-launch.html
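
For readers who want a concrete picture of what the report means by "transformer technology": its core operation is attention, which scores how relevant every element of a sequence is to every other element and mixes their information according to those scores. Below is a minimal self-attention sketch with illustrative shapes and random data only; it is not taken from the report.

```python
# Minimal sketch of scaled dot-product self-attention, the operation at the
# heart of transformers. Shapes and data here are purely illustrative.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (sequence_length, dimension) arrays."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # pairwise relevance scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted mix of values

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))    # a toy "sequence" of 5 tokens, 8 features each
out = scaled_dot_product_attention(tokens, tokens, tokens)   # self-attention
print(out.shape)                    # (5, 8): each token now reflects its context
```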

 Statement on AI Risk

AI experts and public figures express their concern about AI risk

 

AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

 https://www.safe.ai/statement-on-ai-risk#signatories

 The Great AI Reckoning

Deep learning has built a brave new world—but now the cracks are showing

The Turbulent Past and Uncertain Future of Artificial Intelligence 

Is there a way out of AI's boom-and-bust cycle?...:

https://spectrum.ieee.org/special-reports/the-great-ai-reckoning/

Stanford University:

Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report

Welcome to the 2021 Report :

https://ai100.stanford.edu/sites/g/files/sbiybj18871/files/media/file/AI100Report_MT_10.pdf

 

Artificial Intelligence Index Report 2021

https://aiindex.stanford.edu/wp-content/uploads/2021/11/2021-AI-Index-Report_Master.pdf

A physicist on why AI safety is ‘the most important conversation of our time’
 ‘Nothing in the laws of physics says we can’t build machines much smarter than us’
by Angela Chen (@chengela), Aug 29, 2017

Should we be worried about the dangerous potential of artificial intelligence?
Sort of, says Max Tegmark, a physicist at the Massachusetts Institute of Technology. Tegmark is a co-founder of the Future of Life Institute, a Boston-based research organization that studies global catastrophic risk, with a special focus on AI. He’s also the author of Life 3.0, which is out today. Life 3.0 outlines the current state of AI safety research and the questions we’ll need to answer as a society if we want the technology to be used for good.
Tegmark doesn’t believe that doomsaying Terminator scenarios are inevitable, but he doesn’t think that we’re doing enough thinking about artificial intelligence either. And he’s not the only one who’s concerned. Stephen Hawking has been urging researchers to pay more attention to AI safety for years now. Elon Musk helped found OpenAI, an organization dedicated to the same issue. Musk, who also donated $10 million to FLI in 2015, recommended Life 3.0 on Twitter.
The Verge chatted with Tegmark about his book, what we should be doing, and why he thinks the discussion around AI is the most important one of our time. This interview has been lightly edited and condensed for clarity.
What does the title of the book refer to? What is “Life 3.0” and what were Life 1.0 and 2.0?
Well, I think of life broadly as any process that can retain its complexity and reproduce. Life 1.0 would be like bacteria. Bacteria are atoms put together in such a way that they implement simple algorithms controlling what they do. For instance, whenever a bacterium notices there is a higher sugar concentration in front of it than behind it, it moves forward, but if it notices there’s less sugar in front of it, it turns around. But a bacterium can never truly learn anything in its lifetime. The only way bacteria can gradually get better software, or learn, is through evolution over many generations.
We humans are what I call “life 2.0.” We still have our hardware, or bodies, largely designed by evolution, but we can learn. We have enormous power to upload new “software” into our minds. For example, if you decide you want to become a lawyer, you can go to law school, and law school involves uploading new algorithms into your brain so that now suddenly you can have the expertise of a lawyer. It’s this ability to design our own software, rather than having to wait for evolution to give it to us, that enables us to dominate this planet and create modern civilization and culture. Cultural evolution comes precisely from the fact that we can copy ideas and knowledge from other people in our lifetime.
Life 3.0 is life that fully breaks free of its evolutionary shackles and is able to design not only its software, but its hardware. Put another way, if we create AI that is at least as smart as us, then it can not only design its own software to make itself learn new things, but it can also swap in upgraded memory to remember a million times more stuff, or get more computing power. In contrast, humans can put in artificial pacemakers or artificial knees, but we can’t change anything truly dramatic. You can never make yourself a hundred times taller or a thousand times faster at thinking. Our intelligence is made of squishy biological neurons and is fundamentally limited by how much brain mass fits through our mom’s birth canal, but artificial intelligence isn’t.
Some people are still skeptical that superintelligence will happen at all, but you seem to believe strongly that it will, and it’s just a matter of time. You’re a physicist, what’s your take from that perspective?
I think most people think of intelligence as something mysterious and limited to biological organisms. But from my perspective as a physicist, intelligence is simply information processing performed by elementary particles moving around according to the laws of physics. Nothing in the laws of physics says we can’t build machines much smarter than us, or that intelligence needs to be built from organic matter. I don’t think there’s any secret sauce that absolutely requires carbon atoms or blood.
I had a lot of fun in the book thinking about what are the ultimate limits of the laws of physics on how smart you can be. The short answer is that it’s sky-high, millions and millions and millions of times above where we are now. We ain’t seen nothing yet. There’s a huge potential for our universe to wake up much more, which I think is an inspiring thought, coming from a cosmology background.
I know that FLI does work with issues like nuclear disarmament, but it made spreading the word about AI safety its first major goal. Similarly, you believe that the conversation around AI safety is “the most important conversation.” Why? Why is it more important than, say, climate change?
We’ve done a lot for nuclear war risk reduction, but the question of a good future with AI is absolutely more important than all of these other things. Take climate change: Yes, it might create huge problems for us in 50 years or 100 years, but many leading AI researchers think that superhuman intelligence will arrive before then, in a matter of decades.
That’s obviously a way bigger deal than climate change. First of all, if that happens, it would utterly transform life as we know it. Either it helps us flourish like never before or it becomes the worst thing that ever happened to us. And second, if it goes well, we could use it to solve climate change and all our other problems. If you care about poverty, social justice, climate change, disease — all of these problems stump us because we’re not smart enough to figure out how to solve them, but if we can amplify our own intelligence with machine intelligence, far beyond ours, we have an incredible potential to do better.
So, it’s something which is different from all the other things on your list in that there’s not just possible downsides, but huge possible upsides in that it can help solve all the other problems. Cancer, for example, or disease more generally — cancer can be cured, it’s just that we humans haven’t been smart enough to figure out how to deal with it in all cases. We’re limited by our own intelligence in all the research we do.
There’s a fairly wide spectrum of attitudes on this topic, from the skeptics to the utopians. Where do you put yourself?
I’m optimistic that it’s possible to create superhuman intelligence, and I’m also optimistic in that we can create a great future with AI. But I’m cautious in the sense that I don’t think it’s guaranteed. There are crucial questions we have to answer first for things to go well, and they might take 30 years to answer. We should get cracking on them now, not the night before some dudes decide to switch on their superintelligence.
What questions? You said you’re not focused on what you call “near-term” questions, like how automation is going to affect jobs.
There’s so much talk now about job automation that people tend to forget it’s important to also look at what comes next. I’m talking about questions like: how do we transform today’s easily hackable computers into robust AI systems? How can you make AI systems understand our goals as they get ever smarter?
When your computer crashes, it’s annoying and you lose an hour of work, but it wouldn’t be so funny if that computer were controlling the airplane you were flying in or the nuclear arsenal of the United States.
What goals should AI have? Should it be the goals of some American computer programmers, or the goals of ISIS or of people in the Middle Ages? What kind of society can we create? Look how much polarization there is in the US right now.
If we don’t know what we want, we’re less likely to get it. You can’t leave this conversation just to tech geeks like myself, either, because the question of what sort of society we create is going to affect everybody.
A lot of the initiatives you discuss are big-picture — like how laws need to be updated to keep up with AI. But what about the average person? What are we supposed to do?
Talking is a great start. The fact that neither of the two presidential candidates in our last election talked about AI at all, even though I think it’s the most important issue facing us, reflects the fact that people aren’t talking about it and therefore don’t care about it when they vote.
When will you consider yourself “successful” in making sure we’ve had this conversation?

Look at it this way: we have billions and billions of dollars now invested in making AI more powerful and almost nothing in AI safety research. No governments of the world have said that AI safety research should be an integral part of their computer science funding, and it’s like, why would you fund building nuclear reactors without funding nuclear reactor safety? Now we’re funding AI research with no budget in sight for AI safety. I’m certainly not going to say that we have enough conversation about this until this, at least, changes. Every AI researcher I know thinks it would be a good idea to have more funding for this. So I’d say, we’re successful when things like this are going a little bit in the right direction.




“Computing Machinery and Intelligence”

by Alan Mathison Turing



Protecting Against AI’s Existential Threat

How to avoid the nightmare scenario of artificial intelligence? According to researchers from Elon Musk’s OpenAI, the trick is teaching machines to keep our interests in mind
Ilya Sutskever and Dario Amodei
Oct. 18, 2017
On July 8, 2017, an AI system built by our research company, OpenAI, beat a semipro human player in solo matches of a battle arena video game called Dota 2. One month later, the same AI system beat a professional gamer ranked in the top 50. Three days after that it defeated the No. 1 solo Dota 2 player in the world. And it kept getting better: The Aug. 11 version of our AI beat the Aug. 10 version 60% of the time. Our AI learned to trick its opponents, predict what it couldn’t see and decide when to fight and when to flee.
KEEPING A CAREFUL EYE ON AI
How do you create AI that doesn’t pose a threat to humanity? By teaching it to work with humans. OpenAI collaborated with DeepMind, Google’s AI division, to design a training method that incorporates regular human feedback. The idea is to “humanize” AI systems by teaching them not only skills but also complex motivations and subtle goals that must be communicated precisely. A promising direction for AI safety, it ensures AI’s aims align with ours, no matter how hard those aims are to articulate.
In this experiment, the goal is to train a simulated robot to do a back flip—difficult to specify through traditional AI programming but easy for a human to judge visually. We begin by running the AI inside a simulated robot, which begins to make random movements.
The AI shows video clips of these movements to a human trainer, who selects the ones that are most like a back flip.
From this feedback, the AI guesses what its instructor wants it to do. It then presents more videos to the instructor, and the subsequent feedback helps the AI refine its understanding of its instructor’s desire.
After several hours of this process, our AI can perform aesthetically pleasing gymnastics tricks or pursue any desired objective in a video game. We hope to apply this method to robotics, dialogue with humans and exploration of unfamiliar environments—all of which have proved extremely difficult for AI.
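
The loop described above is, in essence, reward learning from pairwise human preferences: judgments of the form "this clip looks more like a back flip than that one" are enough to fit a reward function, which the agent then optimizes. The sketch below shows only that fitting step, with a linear reward model, hand-made clip features, and a simulated judge; it is an illustration of the idea, not OpenAI's or DeepMind's implementation.

```python
# Toy sketch of fitting a reward model to pairwise preferences
# (Bradley-Terry style). Features, judge, and numbers are all invented.
import numpy as np

rng = np.random.default_rng(0)

def simulated_judge(clip):
    # Stand-in for the human trainer: secretly prefers clips whose features
    # sum to a larger value (think "looks more like a back flip").
    return clip.sum()

def fit_reward_model(pairs, n_features=4, lr=0.1, steps=2000):
    """Fit linear weights w so that preferred clips get a higher reward w @ clip."""
    w = np.zeros(n_features)
    for _ in range(steps):
        preferred, rejected = pairs[rng.integers(len(pairs))]
        # Bradley-Terry: P(preferred beats rejected) = sigmoid(r(a) - r(b))
        p = 1.0 / (1.0 + np.exp(-(w @ preferred - w @ rejected)))
        w += lr * (1.0 - p) * (preferred - rejected)   # log-likelihood gradient
    return w

# Collect simulated comparisons between random "clips" (4 features each).
clips = rng.normal(size=(200, 4))
pairs = []
for _ in range(300):
    i, j = rng.integers(200, size=2)
    a, b = clips[i], clips[j]
    pairs.append((a, b) if simulated_judge(a) >= simulated_judge(b) else (b, a))

w = fit_reward_model(pairs)
learned_rank = np.argsort(np.argsort(clips @ w))
true_rank = np.argsort(np.argsort(clips.sum(axis=1)))
# The learned reward should order clips roughly the way the judge does.
print("rank correlation:", round(np.corrcoef(learned_rank, true_rank)[0, 1], 3))
```

In the full method, the learned reward then drives an ordinary reinforcement learning loop, and fresh comparisons keep the reward model honest as the policy improves.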
From an engineering perspective, it is an encouraging story about AI’s ability to catapult, in a very short time, from novice to world champion.
It’s also a cautionary perspective about an incredibly powerful technology with the potential to do both good and harm.
Humans will one day build an AI system with cognitive abilities that far outstrip our own. Such a system could solve scientific problems that have baffled us for decades, cure diseases and transform education. It could very well be the most powerful and beneficial technology ever created. But history is full of powerful entities that caused grave harm in the unchecked pursuit of their goals: logging companies that obliterated rain forests, banks whose complex financial instruments led to global recession. Before we unleash powerful AI on the world, more work needs to be done in the field of AI safety, whose goal is to ensure that these systems pursue their objectives in a way that benefits society and aligns with the interests of their human creators.
Like existing technologies, powerful AI will be susceptible to the kind of glitches that caused Knight Capital’s trading system to lose $440 million in 45 minutes in 2012. But the AI of the future will also be able to do harm by succeeding too cleverly or cunningly. An AI tasked with maximizing profits for a corporation—and given the seemingly innocuous ability to post things on the internet—could deliberately cause political upheaval in order to benefit from a change in commodity prices. Why would humans allow this? They wouldn’t. But AI will require more autonomy, and these systems will achieve their goals at speeds limited only by their computational resources—speeds that will likely exceed the capacity of human oversight.
In other words, by the time we notice something troubling, it could already be too late.
AI training today generally begins with a clear and specific goal given to the system by a human instructor. But even these simple goals can induce AI systems to demonstrate bizarre, savantlike and worrying behavior.
We once trained an AI to maximize its score in a virtual boat-race game, but instead of navigating the course as quickly as possible, the AI taught itself to exploit a flaw in the game by driving in circles and racking up bonus points while also crashing into walls and setting itself on fire.
These failures are amusing in the context of a video game, but imagine for a second that the AI in question was driving a ferry filled with passengers, a task this technology will almost certainly be charged with in the future.
We have a long way to go before powerful AI systems become a reality, which is exactly why we need to devote time and energy to AI safety now. The world today would be a much safer place if the internet had been designed with security in mind—but it wasn’t. We now have an opportunity to take a safety-first approach to a far more powerful and potentially dangerous technology. We’d be wise to take it.

Ilya Sutskever is the co-founder and research director at OpenAI. Dario Amodei leads safety research at OpenAI.

AI software with social skills teaches humans how to collaborate

Unlocking human-computer cooperation.

May 30, 2021

A team of computer researchers developed an AI software program with social skills — called S Sharp (written S#) — that outperformed humans in its ability to cooperate. This was tested through a series of games between humans and the AI software. The tests paired people with S# in a variety of social scenarios.

One of the games humans played against the software is called “the prisoner’s dilemma.” This classic game shows how 2 rational people might not cooperate — even if it appears to be in both their best interests to work together. The other challenge was a sophisticated block-sharing game.
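
For reference, this is the payoff structure that makes the dilemma hard (the standard textbook numbers, not necessarily the values used in the study): defecting is the better reply to anything the other player does, yet mutual defection leaves both players worse off than mutual cooperation.

```python
# Classic prisoner's dilemma payoffs (textbook values, not the study's):
# each player picks Cooperate ("C") or Defect ("D").
PAYOFFS = {            # (my move, their move) -> (my payoff, their payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(their_move):
    """Whatever the other player does, defecting pays more for me."""
    return max("CD", key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

print(best_response("C"), best_response("D"))          # D D  -> mutual defection
print(PAYOFFS[("D", "D")], "vs", PAYOFFS[("C", "C")])  # (1, 1) vs (3, 3)
```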

In most cases, the S# software outperformed humans in finding compromises that benefit both parties. To see the experiment in action, watch the featurette at the link below. This project was led by 2 well-known computer scientists:

  • Iyad Rahwan PhD ~ Massachusetts Institute of Technology • US
  • Jacob Crandall PhD ~ Brigham Young Univ. • US

The researchers tested humans and the AI in 3 types of game interactions:

  • computer-to-computer
  • human-to-computer
  • human-to-human

Researcher Jacob Crandall PhD said:

Computers can now beat the best human minds in the most intellectually challenging games — like chess. They can also perform tasks that are difficult for adult humans to learn — like driving cars. Yet autonomous machines have difficulty learning to cooperate, something even young children do.

Human cooperation appears easy — but it’s very difficult to emulate because it relies on cultural norms, deeply rooted instincts, and social mechanisms that express disapproval of non-cooperative behavior.

Such common sense mechanisms aren’t easily built into machines. In fact, the same AI software programs that effectively play the board games of chess +  checkers, Atari video games, and the card game of poker — often fail to consistently cooperate when cooperation is necessary.

Other AI software often takes 100s of rounds of experience to learn to cooperate with each other, if they cooperate at all. Can we build computers that cooperate with humans — the way humans cooperate with each other? Building on decades of research in AI, we built a new software program that learns to cooperate with other machines — simply by trying to maximize its own payoff.

We ran experiments that paired the AI with people in various social scenarios — including a “prisoner’s dilemma” challenge and a sophisticated block-sharing game. While the program consistently learns to cooperate with another computer — it doesn’t cooperate very well with people. But people didn’t cooperate much with each other either.

As we all know: humans can cooperate better if they can communicate their intentions through words + body language. So in hopes of creating a program that consistently learns to cooperate with people — we gave our AI a way to listen to people, and to talk to them.

We did that in a way that lets the AI play in previously unanticipated scenarios. The resulting algorithm achieved our goal. It consistently learns to cooperate with people as well as people do. Our results show that 2 computers make a much better team — better than 2 humans, and better than a human + a computer.

But the program isn’t a blind cooperator. In fact, the AI can get pretty angry if people don’t behave well. The historic computer scientist Alan Turing PhD believed machines could potentially demonstrate human-like intelligence. Since then, AI has been regularly portrayed as a threat to humanity or human jobs.

To protect people, programmers have tried to code AI to follow legal + ethical principles — like the 3 Laws of Robotics written by Isaac Asimov PhD. Our research demonstrates that a new path is possible.

Machines designed to selfishly maximize their pay-offs can — and should — make an autonomous choice to cooperate with humans across a wide range of situations. 2 humans — if they were honest with each other + loyal — would have done as well as 2 machines. About half of the humans lied at some point. So the AI is learning that moral characteristics are better — since it’s programmed to not lie — and it also learns to maintain cooperation once it emerges.

The goal is to understand the math behind cooperating with people — what attributes AI needs so it can develop social skills. AI must be able to respond to us — and articulate what it’s doing. It must interact with other people. This research could help humans with their relationships. In society, relationships break down all the time. People who were friends for years all of a sudden become enemies. Because the AI is often better at reaching these compromises than we are, it could teach us how to get along better.

https://www.kurzweilai.net/digest-ai-software-with-social-skills-teaches-humans-how-to-collaborate

Superintelligence Cannot be Contained: Lessons from Computability Theory

Published: Jan 5, 2021

Abstract

Superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. In light of recent advances in machine intelligence, a number of scientists, philosophers and technologists have revived the discussion about the potentially catastrophic risks entailed by such an entity. In this article, we trace the origins and development of the neo-fear of superintelligence, and some of the major proposals for its containment. We argue that total containment is, in principle, impossible, due to fundamental limits inherent to computing itself. Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) impossible.

https://jair.org/index.php/jair/article/view/12202
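
The impossibility claim rests on a diagonal argument in the style of the halting problem. The snippet below is purely conceptual: `is_harmful` is a hypothetical, assumed-perfect containment oracle, and the point of the construction is precisely that no such oracle can exist, so nothing here is meant to be executed as a real check.

```python
# Conceptual sketch of the containment argument (names are illustrative).
# Suppose a perfect containment checker existed:
#     is_harmful(program_source, world_input) -> True or False
# i.e. it always halts and always decides correctly whether running the
# program on that input would ever harm humans. Then this program breaks it:

def troublemaker(own_source, world_input):
    if is_harmful(own_source, world_input):   # ask the oracle about itself...
        return "do nothing"                   # ...and behave harmlessly,
    else:
        cause_harm()                          # ...or else do the harmful thing.

# Whatever the oracle says about `troublemaker`, it says it wrongly:
#   - if it answers "harmful", troublemaker does nothing harmful;
#   - if it answers "harmless", troublemaker causes harm.
# So no total, always-correct is_harmful can exist -- the same diagonal trick
# that proves the halting problem undecidable. (The function is never called
# here; is_harmful and cause_harm are placeholders for the argument.)
```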


Memristive synapses connect brain and silicon spiking neurons

Abstract
Brain function relies on circuits of spiking neurons with synapses playing the key role of merging transmission with memory storage and processing. Electronics has made important advances to emulate neurons and synapses and brain-computer interfacing concepts that interlink brain and brain-inspired devices are beginning to materialise. We report on memristive links between brain and silicon spiking neurons that emulate transmission and plasticity properties of real synapses. A memristor paired with a metal-thin film titanium oxide microelectrode connects a silicon neuron to a neuron of the rat hippocampus. Memristive plasticity accounts for modulation of connection strength, while transmission is mediated by weighted stimuli through the thin film oxide leading to responses that resemble excitatory postsynaptic potentials. The reverse brain-to-silicon link is established through a microelectrode-memristor pair. On these bases, we demonstrate a three-neuron brain-silicon network where memristive synapses undergo long-term potentiation or depression driven by neuronal firing rates….:

https://www.nature.com/articles/s41598-020-58831-9


A biohybrid synapse with neurotransmitter-mediated plasticity
Abstract
Brain-inspired computing paradigms have led to substantial advances in the automation of visual and linguistic tasks by emulating the distributed information processing of biological systems. The similarity between artificial neural networks (ANNs) and biological systems has inspired ANN implementation in biomedical interfaces including prosthetics and brain-machine interfaces. While promising, these implementations rely on software to run ANN algorithms. Ultimately, it is desirable to build hardware ANNs that can both directly interface with living tissue and adapt based on biofeedback. The first essential step towards biologically integrated neuromorphic systems is to achieve synaptic conditioning based on biochemical signalling activity. Here, we directly couple an organic neuromorphic device with dopaminergic cells to constitute a biohybrid synapse with neurotransmitter-mediated synaptic plasticity. By mimicking the dopamine recycling machinery of the synaptic cleft, we demonstrate both long-term conditioning and recovery of the synaptic weight, paving the way towards combining artificial neuromorphic systems with biological neural networks…

 

University of California researchers have developed new computer AI software that enables robots to learn physical skills --- called motor tasks --- by trial + error. The robot uses a step-by-step process similar to the way humans learn. The lab made a demo of their technique --- called reinforcement learning. In the test: the robot completes a variety of physical tasks --- without any pre-programmed details about its surroundings.

The lead researcher said: "What we’re showing is a new AI approach to enable a robot to learn. The key is that when a robot is faced with something new, we won’t have to re-program it. The exact same AI software enables the robot to learn all the different tasks we gave it."

https://www.kurzweilai.net/digest-this-self-learning-ai-software-lets-robots-do-tasks-autonomously
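
For readers unfamiliar with the underlying idea, here is a minimal trial-and-error learning loop: tabular Q-learning on a toy one-dimensional "reach the goal" task. The Berkeley work uses deep reinforcement learning on physical robots, so this is only a conceptual sketch with invented numbers.

```python
# Minimal trial-and-error (reinforcement) learning sketch: tabular Q-learning
# on a toy 1-D task. Illustrative only; real robot learning uses deep RL.
import random

N_STATES, GOAL = 6, 5          # positions 0..5, goal at the right end
ACTIONS = (-1, +1)             # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def choose(state):
    if random.random() < EPS:                          # occasionally explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])   # otherwise exploit

random.seed(1)
for _ in range(200):                                   # 200 practice episodes
    state = 0
    while state != GOAL:
        action = choose(state)
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy steps straight toward the goal.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```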

 

Researchers at Lund University in Sweden have developed implantable electrodes that can capture signals from a living human or animal brain over a long period of time — but without causing brain tissue damage.

This bio-medical tech will make it possible to monitor — and eventually understand — brain function in both healthy + diseased people.

https://www.kurzweilai.net/digest-breakthrough-for-flexible-electrode-implants-in-the-brain

 

January 19, 2018 | Andrea Christensen

Machines trained to cooperate by BYU researchers are outperforming their human counterparts

Computers can play a pretty mean round of chess and keep up with the best of their human counterparts in other zero-sum games. But teaching them to cooperate and compromise instead of compete?
With help from a new algorithm created by BYU computer science professors Jacob Crandall and Michael Goodrich, along with colleagues at MIT and other international universities, machine compromise and cooperation appears not just possible, but at times even more effective than among humans.
“The end goal is that we understand the mathematics behind cooperation with people and what attributes artificial intelligence needs to develop social skills,” said Crandall, whose study was recently published in Nature Communications. “AI needs to be able to respond to us and articulate what it’s doing. It has to be able to interact with other people.”
For the study, researchers programmed machines with an algorithm called S# and ran them through a variety of two-player games to see how well they would cooperate in certain relationships. The team tested machine-machine, human-machine and human-human interactions. In most instances, machines programmed with S# outperformed humans in finding compromises that benefit both parties.
“Two humans, if they were honest with each other and loyal, would have done as well as two machines,” Crandall said. “As it is, about half of the humans lied at some point. So essentially, this particular algorithm is learning that moral characteristics are good. It’s programmed to not lie, and it also learns to maintain cooperation once it emerges.”
Researchers further fortified the machines’ ability to cooperate by programming them with a range of “cheap talk” phrases. In tests, if human participants cooperated with the machine, the machine might respond with a “Sweet. We are getting rich!” or “I accept your last proposal.” If the participants tried to betray the machine or back out of a deal with them, they might be met with a trash-talking “Curse you!”, “You will pay for that!” or even an “In your face!”
Regardless of the game or pairing, cheap talk doubled the amount of cooperation. And when machines used cheap talk, their human counterparts were often unable to tell whether they were playing a human or machine.
The research findings, Crandall hopes, could have long-term implications for human relationships.

“In society, relationships break down all the time,” he said. “People that were friends for years all of a sudden become enemies. Because the machine is often actually better at reaching these compromises than we are, it can potentially teach us how to do this better.”: https://news.byu.edu/news/let%E2%80%99s-make-deal-could-ai-compromise-better-humans

You’re being fed artificial intelligence about artificial intelligence.
DON'T BELIEVE THE HYPE
Six questions to ask yourself when reading about AI

By Gary Marcus & Ernest Davis, September 12, 2019
Hardly a week goes by without some breathless bit of AI news touting a “major” new discovery or warning us we are about to lose our jobs to the newest breed of smart machines.
Rest easy. As two scientists who have spent our careers studying AI, we can tell you that a large fraction of what’s reported is overhyped.
Consider this pair of headlines from last year describing an alleged breakthrough in machine reading: “Robots Can Now Read Better than Humans, Putting Millions of Jobs at Risk” and “Computers Are Getting Better than Humans at Reading.” The first, from Newsweek, is a more egregious exaggeration than the second, from CNN, but both wildly oversell minor progress.
To begin with, there were no actual robots involved, and no actual jobs were remotely at risk. All that really happened was that Microsoft made a tiny bit of progress and put out a press release saying that “AI…can read a document and answer questions about it as well as a person.”
That sounded much more revolutionary than it really was. Dig deeper, and you would discover that the AI in question was given one of the easiest reading tests you could imagine—one in which all of the answers were directly spelled out in the text. The test was about highlighting relevant words, not comprehending text.
Suppose, for example, that I hand you a piece of paper with this short passage:
Two children, Chloe and Alexander, went for a walk. They both saw a dog and a tree. Alexander also saw a cat and pointed it out to Chloe. She went to pet the cat.
The Microsoft system was built to answer questions like “Who went for a walk?” in which the answer (“Chloe and Alexander”) is directly spelled out in the text. But if you were to ask it a simple question like “Did Chloe see the cat?” (which she must have, because she went to pet it) or “Was Chloe frightened by the cat?” (which she must not have been, because she went to pet it), it would not have been able to find the answers, as they weren’t spelled out in the text. Inferring what isn’t said is at the heart of reading, and it simply wasn’t tested.
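To make the distinction concrete, here is a deliberately crude stand-in for an "extractive" reader: it can only hand back a sentence from the passage, so it looks competent when the answer is spelled out and has nothing useful to offer when inference is required. This is an illustration of the limitation, not the Microsoft system.

```python
# Crude stand-in for an extractive reader: return the passage sentence that
# overlaps most with the question. Not the Microsoft system; illustration only.
PASSAGE = ("Two children, Chloe and Alexander, went for a walk. "
           "They both saw a dog and a tree. Alexander also saw a cat "
           "and pointed it out to Chloe. She went to pet the cat.")

def extract_answer(question, passage):
    q_words = set(question.lower().strip("?").split())
    sentences = [s.strip() for s in passage.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(q_words & set(s.lower().split())))

print(extract_answer("Who went for a walk?", PASSAGE))
# -> "Two children, Chloe and Alexander, went for a walk": the answer is a span.

print(extract_answer("Was Chloe frightened by the cat?", PASSAGE))
# -> a sentence that mentions the cat, but nothing in the passage states whether
#    Chloe was frightened; answering correctly requires inference, not lookup.
```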
Microsoft didn’t make that clear, and neither did Newsweek or CNN.
Practically every time one of the tech titans puts out a press release, we get a reprise of this same phenomenon: minor progress portrayed as revolution. In another example, two years ago Facebook created a proof-of-concept AI program that could read a five-line summary of The Lord of the Rings and answer questions about where people and things ended up. (“Where was the Ring? At Mount Doom.”) The result was a slew of ridiculously over-enthusiastic articles, explaining how reading fantasy literature was the key to getting AIs to read, with headlines like Slate’s “Facebook Thinks It Has Found the Secret to Making Bots Less Dumb.” (They didn’t.)
The consequence of this kind of over-reporting in the media? The public has come to believe that AI is much closer to being solved than it really is. So whenever you hear about a supposed success in AI, here’s a list of six questions you should ask:
1.     Stripping away the rhetoric, what did the AI system actually do here? (Does a “reading system” really read, or does it just highlight relevant bits of text?)
2.     How general is the result? (For example, does an alleged reading task measure all aspects of reading, or just a tiny slice of it? If it was trained on fiction, can it read the news?)
3.     Is there a demo where I can try out my own examples? (If you can’t, you should be worried about how robust the results are.)
4.     If the researchers—or their press people—allege that an AI system is better than humans, then which humans, and how much better? (Was the comparison with college professors, who read for a living, or bored Amazon Mechanical Turk workers getting paid a penny a sentence?)
5.     How far does succeeding at the particular task actually take us toward building genuine AI? (Is it an academic exercise, or something that could be used in the real world?)
6.     How robust is the system? Could it work just as well with other data sets, without massive amounts of retraining? (For example, would a driverless car system that was trained during the day be able to drive at night, or in the snow, or if there was a detour sign not listed on its map?)
AI really is coming, eventually, but it is further away than most people think. To get a realistic picture, take everything you read about dramatic progress in AI with a healthy dose of skepticism—and rejoice in your (for now) uniquely human ability to do so.
This essay is adapted from Rebooting AI, published this month by Pantheon.
https://qz.com/1706248/six-questions-to-ask-yourself-when-reading-about-

From 'F' to 'A' on the N.Y. Regents Science Exams: An Overview of the Aristo Project
(Submitted on 4 Sep 2019)
AI has achieved remarkable mastery over games such as Chess, Go, and Poker, and even Jeopardy, but the rich variety of standardized exams has remained a landmark challenge. Even in 2016, the best AI system achieved merely 59.3% on an 8th Grade science exam challenge.
This paper reports unprecedented success on the Grade 8 New York Regents Science Exam, where for the first time a system scores more than 90% on the exam's non-diagram, multiple choice (NDMC) questions. In addition, our Aristo system, building upon the success of recent language models, exceeded 83% on the corresponding Grade 12 Science Exam NDMC questions. The results, on unseen test questions, are robust across different test years and different variations of this kind of test. They demonstrate that modern NLP methods can result in mastery on this task. While not a full solution to general question-answering (the questions are multiple choice, and the domain is restricted to 8th Grade science), it represents a significant milestone for the field.
https://arxiv.org/abs/1909.01958


The Future of Artificial Intelligence and its Impact on Society

Ray Kurzweil keynote presentation at Council on Foreign Relations meeting
November 3, 2017
http://www.kurzweilai.net/the-future-of-artificial-intelligence-and-its-impact-on-society-2?utm_source=KurzweilAI+Weekly+Newsletter&utm_campaign=4a56893a55-UA-946742-1&utm_medium=email&utm_term=0_147a5a48c1-4a56893a55-282212701

AI algorithm with ‘social skills’ teaches humans how to collaborate
And a human-machine collaborative chatbot system

February 9, 2018
An international team has developed an AI algorithm with social skills that has outperformed humans in the ability to cooperate with people and machines in playing a variety of two-player games.
The researchers, led by Iyad Rahwan, PhD, an MIT Associate Professor of Media Arts and Sciences, tested humans and the algorithm, called S# (“S sharp”), in three types of interactions: machine-machine, human-machine, and human-human. In most instances, machines programmed with S# outperformed humans in finding compromises that benefit both parties.
“Two humans, if they were honest with each other and loyal, would have done as well as two machines,” said lead author BYU computer science professor Jacob Crandall. “As it is, about half of the humans lied at some point. So essentially, this particular algorithm is learning that moral characteristics are better [since it’s programmed to not lie] and it also learns to maintain cooperation once it emerges.”
“The end goal is that we understand the mathematics behind cooperation with people and what attributes artificial intelligence needs to develop social skills,” said Crandall. “AI needs to be able to respond to us and articulate what it’s doing. It has to be able to interact with other people.”
How casual talk by AI helps humans be more cooperative
One important finding: colloquial phrases (called “cheap talk” in the study) doubled the amount of cooperation. In tests, if human participants cooperated with the machine, the machine might respond with a “Sweet. We are getting rich!” or “I accept your last proposal.” If the participants tried to betray the machine or back out of a deal with them, they might be met with a trash-talking “Curse you!”, “You will pay for that!” or even an “In your face!”
And when machines used cheap talk, their human counterparts were often unable to tell whether they were playing a human or machine — a sort of mini “Turing test.”
The research findings, Crandall hopes, could have long-term implications for human relationships. “In society, relationships break down all the time,” he said. “People that were friends for years all of a sudden become enemies. Because the machine is often actually better at reaching these compromises than we are, it can potentially teach us how to do this better.”
The research is described in an open-access paper in Nature Communications.
A human-machine collaborative chatbot system 
In a related study, Carnegie Mellon University (CMU) researchers have created a new collaborative chatbot called Evorus that goes beyond Siri, Alexa, and Cortana by adding humans in the loop.
Evorus combines a chatbot called Chorus with inputs by paid crowd workers at Amazon Mechanical Turk, who answer questions from users and vote on the best answer. Evorus keeps track of the questions asked and answered and, over time, begins to suggest these answers for subsequent questions. It can also use multiple chatbots, such as vote bots, Yelp Bot (restaurants) and Weather Bot to provide enhanced information.
Humans are simultaneously training the system’s AI, making it gradually less dependent on people, says Jeff Bigham, associate professor in the CMU Human-Computer Interaction Institute.
The hope is that as the system grows, the AI will be able to handle an increasing percentage of questions, while the number of crowd workers necessary to respond to “long tail” questions will remain relatively constant.
Keeping humans in the loop also reduces the risk that malicious users will manipulate the conversational agent inappropriately, as occurred when Microsoft briefly deployed its Tay chatbot in 2016, noted co-developer Ting-Hao Huang, a Ph.D. student in the Language Technologies Institute (LTI).
The preliminary system is available for download and use by anyone willing to be part of the research effort. It is deployed via Google Hangouts, which allows for voice input as well as access from computers, phones, and smartwatches. The software architecture can also accept automated question-answering components developed by third parties.
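The following is a hypothetical sketch of the general pattern described above: several sources (chatbots and crowd workers) propose answers, votes select a winner, and sources whose proposals keep being accepted gradually earn more weight, so the system leans less on paid crowd work over time. The names, numbers, and update rule are invented for illustration and are not CMU's implementation.

```python
# Invented sketch of crowd-plus-bots answer selection with learned source
# weights (an illustration of the Evorus idea, not CMU's code).
from collections import defaultdict

weights = defaultdict(lambda: 1.0)     # learned trust per answer source

def answer(question, proposals, votes):
    """proposals: {source: text}; votes: {source: crowd votes for that text}."""
    scores = {src: weights[src] * (1 + votes.get(src, 0)) for src in proposals}
    winner = max(scores, key=scores.get)
    # Reinforce sources whose proposals get accepted; decay the others slightly.
    for src in proposals:
        weights[src] *= 1.1 if src == winner else 0.98
    return proposals[winner]

print(answer("Any good ramen nearby?",
             {"yelp_bot": "Try Ramen Ichiban on 5th Ave.",
              "crowd_worker_17": "Ippudo is close by and great."},
             {"yelp_bot": 2, "crowd_worker_17": 1}))
```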
An open-access research paper on Evorus, available online, will be presented at CHI 2018, the Conference on Human Factors in Computing Systems, in Montreal, April 21–26, 2018.
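For readers who want a feel for the architecture, the toy Python sketch below imitates the loop described above: candidate answers come from simple bots and from previously accepted answers, a stand-in for crowd voting picks one, and the winner is cached so later questions can reuse it. The bot functions and the voting rule here are hypothetical simplifications, not actual Evorus components.

```python
# A minimal sketch of an Evorus-style crowd-plus-AI answering loop.
# Hypothetical simplification, not the actual Evorus system.

from collections import defaultdict

past_answers = defaultdict(list)   # question -> previously accepted answers

def weather_bot(question):
    return "It looks sunny today." if "weather" in question.lower() else None

def restaurant_bot(question):
    return "Try the noodle place downtown." if "eat" in question.lower() else None

BOTS = [weather_bot, restaurant_bot]

def gather_candidates(question):
    candidates = list(past_answers[question])   # reuse prior accepted answers
    for bot in BOTS:                             # then ask the bots
        reply = bot(question)
        if reply:
            candidates.append(reply)
    return candidates or ["(forwarded to crowd workers)"]

def crowd_vote(candidates):
    # Stand-in for paid workers voting; here we simply prefer reused answers,
    # then the first bot answer.
    return candidates[0]

def answer(question):
    best = crowd_vote(gather_candidates(question))
    past_answers[question].append(best)          # future questions can reuse this
    return best

if __name__ == "__main__":
    print(answer("What's the weather like?"))
    print(answer("What's the weather like?"))    # second time, the cached answer is reused
    print(answer("Where should I eat tonight?"))
```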


Abstract of Cooperating with machines
Since Alan Turing envisioned artificial intelligence, technical progress has often been measured by the ability to defeat humans in zero-sum encounters (e.g., Chess, Poker, or Go). Less attention has been given to scenarios in which human–machine cooperation is beneficial but non-trivial, such as scenarios in which human and machine preferences are neither fully aligned nor fully in conflict. Cooperation does not require sheer computational power, but instead is facilitated by intuition, cultural norms, emotions, signals, and pre-evolved dispositions. Here, we develop an algorithm that combines a state-of-the-art reinforcement-learning algorithm with mechanisms for signaling. We show that this algorithm can cooperate with people and other algorithms at levels that rival human cooperation in a variety of two-player repeated stochastic games. These results indicate that general human–machine cooperation is achievable using a non-trivial, but ultimately simple, set of algorithmic mechanisms.


Abstract of A Crowd-powered Conversational Assistant Built to Automate Itself Over Time
Crowd-powered conversational assistants have been shown to be more robust than automated systems, but do so at the cost of higher response latency and monetary costs. A promising direction is to combine the two approaches for high quality, low latency, and low cost solutions. In this paper, we introduce Evorus, a crowd-powered conversational assistant built to automate itself over time by (i) allowing new chatbots to be easily integrated to automate more scenarios, (ii) reusing prior crowd answers, and (iii) learning to automatically approve response candidates. Our 5-month-long deployment with 80 participants and 281 conversations shows that Evorus can automate itself without compromising conversation quality. Crowd-AI architectures have long been proposed as a way to reduce cost and latency for crowd-powered systems; Evorus demonstrates how automation can be introduced successfully in a deployed system. Its architecture allows future researchers to make further innovation on the underlying automated components in the context of a deployed open domain dialog system.

Elon Musk Claims We Only Have a 10 Percent Chance of Making AI Safe

by Chelsea Gohd on November 22, 2017 

 IN BRIEF
While Elon Musk works to advance the field of artificial intelligence, he also believes there is an astronomically high likelihood that AI will pose a threat to humanity in the future. In an interview with Rolling Stone, the tech luminary claimed we have only a five to 10 percent chance of success at making AI safe.
Outlook Not So Good
Elon Musk has put a lot of thought into the harsh realities and wild possibilities of artificial intelligence (AI). These considerations have left him convinced that we need to merge with machines if we’re to survive, and he’s even created a startup dedicated to developing the brain-computer interface (BCI) technology needed to make that happen. But despite the fact that his very own lab, OpenAI, has created an AI capable of teaching itself, Musk recently said that efforts to make AI safe only have “a five to 10 percent chance of success.”
Musk shared these less-than-stellar odds with the staff at Neuralink, the aforementioned BCI startup, according to a recent Rolling Stone article. Despite Musk’s heavy involvement in the advancement of AI, he’s openly acknowledged that the technology brings with it not only the potential for, but the promise of, serious problems.
The challenges to making AI safe are twofold.
First, a major goal of AI — and one that OpenAI is already pursuing — is building AI that’s not only smarter than humans, but that is capable of learning independently, without any human programming or interference. Where that ability could take it is unknown.
Then there is the fact that machines do not have morals, remorse, or emotions. Future AI might be capable of distinguishing between “good” and “bad” actions, but distinctly human feelings remain just that — human.
In the Rolling Stone article, Musk further elaborated on the dangers and problems that currently exist with AI, one of which is the potential for just a few companies to essentially control the AI sector. He cited Google’s DeepMind as a prime example.
“Between Facebook, Google, and Amazon — and arguably Apple, but they seem to care about privacy — they have more information about you than you can remember,” said Musk. “There’s a lot of risk in concentration of power. So if AGI [artificial general intelligence] represents an extreme level of power, should that be controlled by a few people at Google with no oversight?”
Worth the Risk?
Experts are divided on Musk’s assertion that we probably can’t make AI safe. Facebook founder Mark Zuckerberg has said he’s optimistic about humanity’s future with AI, calling Musk’s warnings “pretty irresponsible.” Meanwhile, Stephen Hawking has made public statements wholeheartedly expressing his belief that AI systems pose enough of a risk to humanity that they may replace us altogether.
Sergey Nikolenko, a Russian computer scientist who specializes in machine learning and network algorithms, recently shared his thoughts on the matter with Futurism. “I feel that we are still lacking the necessary basic understanding and methodology to achieve serious results on strong AI, the AI alignment problem, and other related problems,” said Nikolenko.
As for today’s AI, he thinks we have nothing to worry about. “I can bet any money that modern neural networks will not suddenly wake up and decide to overthrow their human overlord,” said Nikolenko.
Musk himself might agree with that, but his sentiments are likely more focused on how future AI may build on what we have today.
Already, we have AI systems capable of creating AI systems, ones that can communicate in their own languages, and ones that are naturally curious. While the singularity and a robot uprising are strictly science fiction tropes today, such AI progress makes them seem like genuine possibilities for the world of tomorrow.
But these fears aren’t necessarily enough reason to stop moving forward. We also have AIs that can diagnose cancer, identify suicidal behavior, and help stop sex trafficking.
The technology has the potential to save and improve lives globally, so while we must consider ways to make AI safe through future regulation, Musk’s words of warning are, ultimately, just one man’s opinion.
He even said as much himself to Rolling Stone: “I don’t have all the answers. Let me be really clear about that. I’m trying to figure out the set of actions I can take that are more likely to result in a good future. If you have suggestions in that regard, please tell me what they are.”: https://futurism.com/elon-musk-claims-only-have-10-percent-chance-making-ai-safe/

Artificial Intelligence: Bright Future or Dark Cloud?

Published on January 11, 2019
Rajat Taneja  Executive Vice President, Technology at Visa

The potential of deep learning and AI is almost limitless, certainly well beyond the scope of our current imagination. Complex machines imbued with the characteristics of human intelligence (e.g., the ability to sense the world through sight, sound, and touch; to reason and plan; to communicate in natural language, and to move and manipulate objects), will influence society in untold ways. The discipline however polarizes opinions, with evangelists on one side and doomsayers on the other.
This dilemma is not at all new or limited to the field of computing—consider the ethical debates sparked by breakthroughs in gene editing, stem cell research, or genetically modified foods. Like many technologists of my generation, I am a rational optimist by nature. I believe AI can be harnessed in ways that dramatically improve our lives, and that its potential to do good far outweighs its potential to do harm.
However, we can’t presume that progress will automatically translate to benefits for humankind as a whole. We have an obligation as technologists to think through the implications of our design choices before we put software into production.
I last wrote about artificial intelligence three years ago—before Alexa took up residence on our countertops and Google’s AlphaGo beat the world’s best Go player. I wanted to revisit the subject because I believe we are at a critical inflection point in the evolution of AI.
The Convergence at the Heart of Advances in AI
Driving the growth and importance of AI are improvements in computing hardware, access to greater amounts of more valuable data, and breakthroughs in the underlying software, tools, and algorithms that can analyze and make sense of that data. Most of what we do today on connected devices is powered by this intersection: internet searches and online recommendations, from the movies we want to stream to the gifts we want to buy, are driven by advances in machine learning.
In order to compete with the richest, deepest intelligence, like a human being, AI needs fast memory and fast transfers inside of that hardware subsystem. In my days at EA, we were obsessed with making the action in your game look real and authentic on your computer screen. Some of the hardware and software architectures catalyzing AI actually came from advances made in gaming—GPUs and fast memory buses and high-speed memory management.
Machine Learning in Payments
AI and machine learning bring boundless opportunities to payments and commerce. With behavioral biometrics, authentication will become more seamless and secure; with natural language processing, automated sales associates can make shopping online a richer more personalized experience; with computer vision, users will be able to snap pictures to search online, making all visual content instantly shoppable.
In the five years I’ve been at Visa, AI and machine learning have become increasingly embedded in our products and infrastructure. We’ve been using machine learning for years to predict and prevent fraud. With neural networks and gradient boosting algorithms, we were able to identify several billion dollars in fraud last year alone.
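As a rough illustration of the techniques named above (and emphatically not Visa’s production system), the short Python sketch below trains a gradient-boosting classifier to score synthetic transactions for fraud; the features and the fraud pattern are invented for the example.

```python
# A hedged illustration of gradient boosting for fraud scoring.
# The data is synthetic and the features (amount, hour, distance) are
# hypothetical -- this is not Visa's model.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5000

# Synthetic transactions: amount, hour of day, distance from home (km)
X = np.column_stack([
    rng.exponential(50, n),          # transaction amount
    rng.integers(0, 24, n),          # hour of day
    rng.exponential(10, n),          # distance from cardholder's home
])
# Fraud is rarer and more likely for large, late-night, far-away transactions
risk = 0.001 * X[:, 0] + 0.05 * (X[:, 1] >= 22) + 0.01 * X[:, 2]
y = (rng.random(n) < np.clip(risk, 0, 0.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(n_estimators=200, max_depth=3)
model.fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]   # probability each transaction is fraud
print("ROC AUC:", round(roc_auc_score(y_test, scores), 3))
```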
AI has also given us formidable new tools for securing and maintaining the Visa network. Our cybersecurity team uses neural networks to categorize and search petabytes of data every day, giving us actionable insights to protect our network from malware, zero-day attacks and insider threats.
Meanwhile our operations team is using machine learning models to predict disruptions in our hardware and software systems, giving our engineers the insights they need to fix bugs in the network before they impact our ability to process payments.
This is just the beginning. We have a team of data scientists in our research group exploring new applications of machine learning for the payment industry and beyond—from recommendation systems to new models for risk and fraud management.
AI: Molding in our (Best) Image?
The breakthroughs in AI that we are harnessing at Visa are manifesting themselves across disparate industries, including energy, consumer electronics, gaming and medicine. So, the questions have evolved from “will AI reach its potential?” or “will it transform our lives?” to “how will we manage that transformation?” and “will AI ultimately help or hurt humankind?”
There is a fierce debate on campuses and in boardrooms about the life-altering effects of AI. Elon Musk has warned of a “fleet of artificial intelligence-enhanced robots capable of destroying mankind”, while Larry Page of Google and Alphabet foresees advancements in human progress.
I believe there is merit in both arguments, and the good news is that we have time to shape AI in a positive direction. In human terms, we are in the toddler stage in the development of AI, a period of rapid neurogenesis. A child’s early years are shaped by external stimuli like pictures, music, language, and of course, human interaction. The result of this neurogenesis will determine a person’s intelligence, compassion, thoughtfulness and, importantly, capacity for empathy.
Similarly, for AI to evolve in a positive direction, we need to involve the humanities, law, ethics as well as engineering. We need diversity of thought amongst the people working on these solutions. I know others share this view. Deep Mind’s founder Demis Hassabis insisted that Google establish a joint ethics board when it acquired the company in 2014.
As a father of young children, I realize how futile it is to predict what they will be like when they grow up. Similarly, none of us can predict what AI will become 10, 20, 50 years into the future. However, today, we have a responsibility, as parents and technologists, to raise our children to be productive, compassionate and, perhaps most importantly, empathetic members of society.
I am excited to learn your perspective on how we can chart an empathetic course for artificial learning in all its manifestations.


The technology described in the film already exists, says UC Berkeley AI researcher Stuart Russell
November 18, 2017

Campaign to Stop Killer Robots | Slaughterbots
In response to growing concerns about autonomous weapons, the Campaign to Stop Killer Robots, a coalition of AI researchers and advocacy organizations, has released a fictional video that depicts a disturbing future in which lethal autonomous weapons have become cheap and ubiquitous worldwide.
UC Berkeley AI researcher Stuart Russell presented the video at the United Nations Convention on Certain Conventional Weapons in Geneva, hosted by the Campaign to Stop Killer Robots earlier this week. Russell, in an appearance at the end of the video, warns that the technology described in the film already exists* and that the window to act is closing fast.
Support for a ban against autonomous weapons has been mounting. On Nov. 2, more than 200 Canadian scientists and more than 100 Australian scientists in academia and industry penned open letters to Prime Ministers Justin Trudeau and Malcolm Turnbull, urging them to support the ban.
Earlier this summer, more than 130 leaders of AI companies signed a letter in support of this week’s discussions. These letters follow a 2015 open letter released by the Future of Life Institute and signed by more than 20,000 AI/robotics researchers and others, including Elon Musk and Stephen Hawking.
“Many of the world’s leading AI researchers worry that if these autonomous weapons are ever developed, they could dramatically lower the threshold for armed conflict, ease and cheapen the taking of human life, empower terrorists, and create global instability,” according to an article published by the Future of Life Institute, which funded the video. “The U.S. and other nations have used drones and semi-automated systems to carry out attacks for several years now, but fully removing a human from the loop is at odds with international humanitarian and human rights law.”
“The Campaign to Stop Killer Robots is not trying to stifle innovation in artificial intelligence and robotics and it does not wish to ban autonomous systems in the civilian or military world,” explained Noel Sharkey of the International Committee for Robot Arms Control. “Rather, we see an urgent need to prevent automation of the critical functions for selecting targets and applying violent force without human deliberation and to ensure meaningful human control for every attack.”


Elon Musk, Stephen Hawking sign new AI pact

By Gina Hall   Feb 1, 2017

Tesla CEO Elon Musk and cosmologist Stephen Hawking endorsed a new pact this week to keep the robots from rising up.
Musk and Hawking signed the “Asilomar AI Principles” on Wednesday as part of an effort to keep artificial intelligence from ending humanity. The principles, created by the Future of Life Institute, refer to AI-powered autonomous weapons and superintelligent machines that will eventually outsmart humans.




“Artificial intelligence has already provided beneficial tools that are used every day by people around the world,” FLI wrote in the preamble to its principles. “Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.”
While AI-gone-bad scenarios generally conjure up images of Skynet from the “Terminator” franchise, researchers have more practical concerns. FLI’s research priorities include weighing the economic impact of AI so it doesn’t kill jobs, the ethical questions raised by self-driving cars, and human control over weapons systems.
The 23 principles were broken down into three categories: research issues, ethics and values and longer-term issues. The principles were developed as a result of the recent Beneficial AI Conference held last month, per Business Insider. High-profile conference attendees included DeepMind CEO Demis Hassabis, Facebook AI researcher Yann LeCun and Oxford philosopher Nick Bostrom. A full list of the principles can be viewed on the FLI website.
FLI, whose AI-related research involves fields such as economics, law, ethics and policy, was founded by MIT cosmologist Max Tegmark, Skype cofounder Jaan Tallinn, and DeepMind research scientist Viktoriya Krakovna in March 2014. Hawking and Musk sit on the board of advisors. : https://www.bizjournals.com/sanjose/news/2017/02/01/elon-musk-stephen-hawking-sign-new-ai-pact.html

 Human Brain Network Development
August 29, 2019
Summary
Structural and transcriptional changes during early brain maturation follow fixed developmental programs defined by genetics. However, whether this is true for functional network activity remains unknown, primarily due to experimental inaccessibility of the initial stages of the living human brain. Here, we developed human cortical organoids that dynamically change cellular populations during maturation and exhibited consistent increases in electrical activity over the span of several months. The spontaneous network formation displayed periodic and regular oscillatory events that were dependent on glutamatergic and GABAergic signaling. The oscillatory activity transitioned to more spatiotemporally irregular patterns, and synchronous network events resembled features similar to those observed in preterm human electroencephalography. These results show that the development of structured network activity in a human neocortex model may follow stable genetic programming. Our approach provides opportunities for investigating and manipulating the role of network activity in the developing human cortex…:


 OCTOBER 30, 2023

President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence

https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/   

An AI god will emerge by 2042 and write its own bible. Will you worship it?

JOHN BRANDON@JMBRANDONBB OCTOBER 2, 2017
In the next 25 years, AI will evolve to the point where it will know more on an intellectual level than any human. In the next 50 or 100 years, an AI might know more than the entire population of the planet put together. At that point, there are serious questions to ask about whether this AI — which could design and program additional AI programs all on its own, read data from an almost infinite number of data sources, and control almost every connected device on the planet — will somehow rise in status to become more like a god, something that can write its own bible and draw humans to worship it.
Recently, reports surfaced that a controversy-plagued engineer who once worked at Uber has started a new religion. Anthony Levandowski filed paperwork for a nonprofit religious organization called The Way of the Future. Its mission: “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society.”
Building divinity
Of course, this is nothing new. The Singularity is another quasi-spiritual idea that believes an AI will become smarter than humans at some point. You might laugh at the notion of an AI being so powerful that humans bow down to worship it, but several experts who talked to VentureBeat argue that the idea is a lot more feasible than you might think.
One of the experts is Vince Lynch, who started a company called IV.AI that builds custom AI for the enterprise. Lynch explained how there are some similarities between organized religion and how an AI actually works. In the Bible used by Christians, for example, Lynch says there are many recurring themes, imagery, and metaphors.
“Teaching humans about religious education is similar to the way we teach knowledge to machines: repetition of many examples that are versions of a concept you want the machine to learn,” he says. “There is also commonality between AI and religion in the hierarchical structure of knowledge understanding found in neural networks. The concept of teaching a machine to learn … and then teaching it to teach … (or write AI) isn’t so different from the concept of a holy trinity or a being achieving enlightenment after many lessons learned with varying levels of success and failure.”
Indeed, Lynch even shared a simple AI model to make his point. If you type in multiple verses from the Christian Bible, you can have the AI write a new verse that seems eerily similar. Here’s one an AI wrote: “And let thy companies deliver thee; but will with mine own arm save them: even unto this land, from the kingdom of heaven.” An AI that is all-powerful in the next 25-50 years could decide to write a similar AI bible for humans to follow, one that matches its own collective intelligence. It might tell you what to do each day, or where to travel, or how to live your life.
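For the curious, the snippet below shows how even a trivial word-level Markov chain, fed a few verse-like lines, will stitch together a new “verse” in the same cadence. It is a toy stand-in for the kind of model Lynch describes, not IV.AI’s actual system.

```python
# A toy word-level Markov chain: feed in verse-like lines, generate a new one.
# Illustrative only -- not the model IV.AI actually used.

import random

verses = [
    "and let thy companies deliver thee",
    "but i will with mine own arm save them",
    "even unto this land from the kingdom of heaven",
]

# Build a table of which word follows which
chain = {}
for verse in verses:
    words = verse.split()
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)

random.seed(3)
word = random.choice([v.split()[0] for v in verses])   # start from a first word
output = [word]
for _ in range(12):
    followers = chain.get(word)
    if not followers:
        break
    word = random.choice(followers)
    output.append(word)

print(" ".join(output))
```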
Robbee Minicola, who runs a digital agency and an AI services company in Seattle, agreed that an all-knowing AI could appear to be worthy of worship, especially since the AI has some correlations to how organized religion works today. The AI would understand how the world works at a higher level than humans, and humans would trust that this AI would provide the information we need for our daily lives. It would parse this information for us and enlighten us in ways that might seem familiar to anyone who practices religion, such as Christianity.
“[For a Christian] one kind of large data asset pertaining to God is the Old and New Testament,” she says. “So, in terms of expressing machine learning algorithms over the Christian Bible to ascertain communicable insights on ‘what God would do’ or ‘what God would say’ — you might just be onto something here. In terms of extending what God would do way back then to what God would do today — you may also have something there.”
The dark side
Of course, any discussion about an AI god leads quickly to some implications about what this “god” would look like and whether we would actually decide to worship it. Some of the implications are troubling because, as humans, we do have a tendency to trust in things beyond our own capacity — e.g., driving in a major city using GPS and trusting we will arrive safely, as opposed to actually knowing where we want to drive and trusting our instincts.
And, if an AI god is in total control, you have to wonder what it might do. The “bible” might contain a prescription for how to serve the AI god. We might not even know that the AI god we are serving is primarily trying to wipe us off the face of the planet.
Part of the issue is related to how an AI actually works. From a purely technical standpoint, the experts I talked to found it hard to envision an AI god that can think in creative ways. An AI is programmed only to do a specific task. They wondered how an AI could jump from being a travel chatbot into dictating how to live.
And the experts agreed that actual compassion or serving as part of an organized religion — activities that are essential to faith — go far beyond basic intellectual pursuit. There’s a mystery to religion, a divine component that is not 100 percent based on what we can perceive or know. This transcendence is the part where an AI will have the most difficulty, even in the far future.
Vincent Jacques runs a company called ChainTrade that uses AI to analyze blockchain. It’s hyper-focused machine learning — the AI enforces anti-money laundering statutes. That’s obviously a long way from an AI that can tell you how to live your life or read an AI bible.
“It would be extremely dangerous to have an all-knowing, thinking AI being someday,” says Jacques. “All computer programs, including AI programs, are built for a specific and narrow purpose: win a chess game, win a go game, reduce an electricity bill etc. The computer logic, even if it is advanced AI, doesn’t play well with a general will and general thinking capability that could at the same time design military strategies, marketing strategies, and learn how to play chess from scratch. For this reason, I’m not really scared of a potential super-thinker that could overthrow us one day — I believe that the inventive and innovative part will always be missing.”
For her part, Minicola argues that an AI may be able to guide people and enlighten them in an intellectual way, but this is not the same as an actual expression of faith or any form of transcendence. “In terms of AI taking on God and manifesting something beyond data that simply does not exist, or rather beyond God — that’s not happening,” she says.
Actual worship, though?
In my view, this is where the dangers come into play. As a Christian myself, it’s hard to imagine ever worshiping a bot that lacks any real personality, wisdom, or ability to become relevant and personal, no matter how much more intelligent it is than any human. An AI god would be cold and impersonal, an intellectual “being” that’s not capable of love or emotion.
Will people actually worship the AI god? The answer is obvious — they will. We tend to trust and obey things that seem more powerful and worthy than ourselves. The GPS in your car is just the most obvious example. But we also trust Alexa and Cortana; we trust Google. When an AI becomes much more powerful, in 25 to 50 years, there is a great possibility that it will be deified in some way. (Apple and Google loyalists already have a religious fervor.)
If an AI god does emerge, and people do start worshiping it, there will be many implications about how this AI will need to be regulated … or even subdued. Hang on for the ride.:


Fast Stencil-Code Computation on a Wafer-Scale Processor

The performance of CPU-based and GPU-based systems is often low for PDE codes, where large, sparse, and often structured systems of linear equations must be solved. Iterative solvers are limited by data movement, both between caches and memory and between nodes. Here we describe the solution of such systems of equations on the Cerebras Systems CS-1, a wafer-scale processor that has the memory bandwidth and communication latency to perform well. We achieve 0.86 PFLOPS on a single wafer-scale system for the solution by BiCGStab of a linear system arising from a 7-point finite difference stencil on a 600 X 595 X 1536 mesh, achieving about one third of the machine's peak performance. We explain the system, its architecture and programming, and its performance on this problem and related problems. We discuss issues of memory capacity and floating point precision. We outline plans to extend this work towards full applications.: https://arxiv.org/abs/2010.03660
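As a small-scale illustration of the problem class described in the abstract, the Python sketch below assembles a 7-point finite-difference Laplacian on a toy mesh and solves the resulting sparse linear system with SciPy’s BiCGStab. It runs on an ordinary laptop and is in no way the wafer-scale implementation in the paper; the mesh size is deliberately tiny.

```python
# Build a 3D Laplacian from a 7-point stencil and solve it with BiCGStab.
# Toy-sized demonstration only, nothing like the 600 x 595 x 1536 mesh in the paper.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab

def laplacian_1d(n):
    # Tridiagonal 1D second-difference operator
    return sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")

def laplacian_3d(nx, ny, nz):
    # 7-point stencil = sum of Kronecker products of 1D operators with identities
    Ix, Iy, Iz = sp.identity(nx), sp.identity(ny), sp.identity(nz)
    return (sp.kron(sp.kron(laplacian_1d(nx), Iy), Iz)
            + sp.kron(sp.kron(Ix, laplacian_1d(ny)), Iz)
            + sp.kron(sp.kron(Ix, Iy), laplacian_1d(nz))).tocsr()

nx, ny, nz = 20, 20, 20               # a tiny mesh
A = laplacian_3d(nx, ny, nz)
b = np.ones(nx * ny * nz)

x, info = bicgstab(A, b, atol=1e-8)   # iterative Krylov solver
print("converged" if info == 0 else f"info={info}",
      "| residual norm:", np.linalg.norm(A @ x - b))
```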

 


AI Superpowers by Dr. Kai-Fu Lee

 Summary:
Here are two well-known facts:
Artificial Intelligence is reshaping the world as we know it.
The United States has long been, and remains, the global leader in AI.
That first fact is correct. But in his provocative new book, Dr. Kai-Fu Lee - one of the world’s most respected experts on AI - reveals that China has suddenly caught up to the US at an astonishingly rapid pace. As the US-Sino competition begins to heat up, Lee envisions China and the US forming a powerful duopoly in AI, but one that is based on each nation’s unique and traditional cultural inclinations. 
Building upon his longstanding US-Sino technology career (working at Apple, Microsoft, and Google) and his much-heralded New York Times Op-Ed from June 2017, Dr. Lee predicts that Chinese and American AI will have a stunning impact on not just traditional blue-collar industries but will also have a devastating effect on white-collar professions. Is the concept of universal basic income the solution? In Dr. Lee’s opinion, probably not. 
In AI Superpowers, he outlines how millions of suddenly displaced workers must find new ways to make their lives meaningful, and how government policies will have to deal with the unprecedented inequality between the "haves" and the "have-nots." Even worse, Lee says the transformation to AI is already happening all around us, whether we are aware of it or not.
Dr. Lee - a native of China but educated in America - argues powerfully that these unprecedented developments will happen much sooner than we think. He cautions us about the truly dramatic upheaval that AI will unleash and how we need to start thinking now on how to address these profound changes that are coming to our world.


Intel’s new ‘Loihi’ chip mimics neurons and synapses in the human brain

Automatically gets smarter over time
September 29, 2017

Intel announced this week a self-learning, energy-efficient neuromorphic (brain-like) research chip codenamed “Loihi”* that mimics how the human brain functions. Under development for six years, the chip uses 130,000 “neurons” and 130 million “synapses” and learns in real time, based on feedback from the environment.**
Neuromorphic chip models are inspired by how neurons communicate and learn, using spikes (brain pulses) and synapses capable of learning.
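To give a flavor of what “spiking” means here, the sketch below simulates a single textbook leaky integrate-and-fire neuron in Python. This is the generic model that neuromorphic designs are loosely inspired by, not Intel’s Loihi neuron, and the parameters are arbitrary.

```python
# A minimal leaky integrate-and-fire (LIF) neuron -- a textbook toy,
# not Intel's Loihi neuron model.

import numpy as np

dt, T = 1e-3, 0.5                 # time step (s) and total simulated time (s)
tau, v_rest, v_thresh, v_reset = 20e-3, 0.0, 1.0, 0.0
steps = int(T / dt)

rng = np.random.default_rng(1)
input_current = 1.2 + 0.3 * rng.standard_normal(steps)   # noisy drive

v = v_rest
spike_times = []
for t in range(steps):
    # Membrane potential leaks toward rest while integrating the input current
    v += dt / tau * (-(v - v_rest) + input_current[t])
    if v >= v_thresh:             # threshold crossing = a spike
        spike_times.append(t * dt)
        v = v_reset               # reset after the spike

print(f"{len(spike_times)} spikes in {T} s "
      f"(mean rate {len(spike_times)/T:.1f} Hz)")
```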
“The idea is to help computers self-organize and make decisions based on patterns and associations,” Michael Mayberry, PhD, corporate vice president and managing director of Intel Labs at Intel Corporation, explained in a blog post.
He said the chip automatically gets smarter over time and doesn’t need to be trained in the traditional way. He sees applications in areas that would benefit from autonomous operation and continuous learning in an unstructured environment, such as automotive, industrial, and personal-robotics areas.
For example, a cybersecurity system could identify a breach or a hack based on an abnormality or difference in data streams. Or the chip could learn a person’s heartbeat reading under various conditions — after jogging, following a meal or before going to bed — to determine a “normal” heartbeat. The system could then continuously monitor incoming heart data to flag patterns that don’t match the “normal” pattern, and could be personalized for any user.
“Machine learning models such as deep learning have made tremendous recent advancements by using extensive training datasets to recognize objects and events. However, unless their training sets have specifically accounted for a particular element, situation or circumstance, these machine learning systems do not generalize well,” Mayberry notes.
The Loihi test chip
Loihi currently exists as a research test chip that offers flexible on-chip learning and combines training and inference. Researchers have demonstrated it learning at a rate that is a 1 million times improvement compared with other typical spiking neural nets, as measured by total operations to achieve a given accuracy when solving MNIST digit recognition problems, Mayberry said. “Compared to technologies such as convolutional neural networks and deep learning neural networks, the Loihi test chip uses many fewer resources on the same task.”
Fabricated on Intel’s 14 nm process technology, the chip is also up to 1,000 times more energy-efficient than general-purpose computing required for typical training systems, he added.
In the first half of 2018, Intel plans to share the Loihi test chip with leading universities and research institutions with a focus on advancing AI. The goal is to develop and test several algorithms with high efficiency for problems including path planning, constraint satisfaction, sparse coding, dictionary learning, and dynamic pattern learning and adaptation.
“Looking to the future, Intel believes that neuromorphic computing offers a way to provide exascale performance in a construct inspired by how the brain works,” Mayberry said.
* “Loihi seamount, sometimes known as the ‘youngest volcano’ in the Hawaiian chain, is an undersea mountain rising more than 3000 meters above the floor of the Pacific Ocean … submerged in the Pacific off of the south-eastern coast of the Big Island of Hawaii.” — Hawaii Center for Volcanology
** For comparison, IBM’s TrueNorth neuromorphic chip currently has 1 million neurons and 256 million synapses.


09.18.19
The company is aiming to adapt its productivity suite to a short-attention-span world, bringing years of research into products like PowerPoint, Outlook, and Excel…: https://www.fastcompany.com/90402486/how-human-curation-came-back-to-clean-up-ais-messes



Do our brains use the same kind of deep-learning algorithms used in AI?
Bridging the gap between neuroscience and AI

February 23, 2018
Deep-learning researchers have found that certain neurons in the brain have shape and electrical properties that appear to be well-suited for “deep learning” — the kind of machine-intelligence used in beating humans at Go and Chess.
Canadian Institute For Advanced Research (CIFAR) Fellow Blake Richards and his colleagues — Jordan Guerguiev at the University of Toronto, Scarborough, and Timothy Lillicrap at Google DeepMind — developed an algorithm that simulates how a deep-learning network could work in our brains. It represents a biologically realistic way by which real brains could do deep learning.*
The finding is detailed in a study published December 5th in the open-access journal eLife. (The paper is highly technical; Adam Shai of Stanford University and Matthew E. Larkum of Humboldt University, Germany wrote a more accessible paper summarizing the ideas, published in the same eLife issue.)
“Most of these neurons are shaped like trees, with ‘roots’ deep in the brain and ‘branches’ close to the surface,” says Richards. “What’s interesting is that these roots receive a different set of inputs than the branches that are way up at the top of the tree.” That structure gives the neuron the separation it needs: feedforward sensory input and feedback learning signals can be handled in different parts of the same cell.
Using this knowledge of the neurons’ structure, the researchers built a computer model using the same shapes, with received signals in specific sections. It turns out that these sections allowed simulated neurons in different layers to collaborate — achieving deep learning.
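One of the ideas this line of work draws on, Lillicrap’s feedback alignment, can be sketched in a few lines of Python: the error signal is sent back through fixed random feedback weights rather than the exact transpose of the forward weights, which is considered more biologically plausible than textbook backpropagation. The toy script below uses it to learn XOR; it illustrates that general idea only and is not the segregated-dendrite model the researchers built.

```python
# Feedback alignment (after Lillicrap et al.): the backward pass uses FIXED
# random feedback weights instead of the transpose of the forward weights.
# Illustrative toy only, not the segregated-dendrite model in the eLife paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR with one hidden layer
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 16))    # forward weights, layer 1
W2 = rng.normal(0, 1, (16, 1))    # forward weights, layer 2
B2 = rng.normal(0, 1, (16, 1))    # fixed random feedback weights (never trained)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for step in range(20000):
    h = sigmoid(X @ W1)                       # forward pass
    out = sigmoid(h @ W2)
    err = out - y                             # output error

    # Credit assignment: project the error back through B2, not W2.T
    delta_h = (err @ B2.T) * h * (1 - h)

    W2 -= lr * h.T @ err / len(X)
    W1 -= lr * X.T @ delta_h / len(X)

print("outputs after training:", out.ravel().round(2), "(targets: 0 1 1 0)")
```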
“It’s just a set of simulations so it can’t tell us exactly what our brains are doing, but it does suggest enough to warrant further experimental examination if our own brains may use the same sort of algorithms that they use in AI,” Richards says.
“No one has tested our predictions yet,” he told KurzweilAI. “But there’s a new preprint from Walter Senn’s group that builds on what we were proposing in a nice way, and which includes some results on unsupervised learning (Yoshua [Bengio] mentions this work in his talk).”
How the brain achieves deep learning
The tree-like pyramidal neocortex neurons are only one of many types of cells in the brain. Richards says future research should model different brain cells and examine how they interact together to achieve deep learning. In the long term, he hopes researchers can overcome major challenges, such as how to learn through experience without receiving feedback or to solve the “credit assignment problem.”**
Deep learning has brought about machines that can “see” the world more like humans can, and recognize language. But does the brain actually learn this way? The answer has the potential to create more powerful artificial intelligence and unlock the mysteries of human intelligence, he believes.
“What we might see in the next decade or so is a real virtuous cycle of research between neuroscience and AI, where neuroscience discoveries help us to develop new AI and AI can help us interpret and understand our experimental data in neuroscience,” Richards says.
Perhaps this kind of research could one day also address future ethical and other human-machine-collaboration issues — including merger, as Elon Musk and Ray Kurzweil have proposed, to achieve a “soft takeoff” in the emergence of superintelligence.
* This research idea goes back to AI pioneers Geoffrey Hinton, a CIFAR Distinguished Fellow and founder of the Learning in Machines & Brains program, and program Co-Director Yoshua Bengio, who was one of the main motivations for founding the program. These researchers sought not only to develop artificial intelligence, but also to understand how the human brain learns, says Richards.
In the early 2000s, Richards and Lillicrap took a course with Hinton at the University of Toronto and were convinced deep learning models were capturing “something real” about how human brains work. At the time, there were several challenges to testing that idea. Firstly, it wasn’t clear that deep learning could achieve human-level skill. Secondly, the algorithms violated biological facts proven by neuroscientists.
The paper builds on research from Bengio’s lab on a more biologically plausible way to train neural nets and an algorithm developed by Lillicrap that further relaxes some of the rules for training neural nets. The paper also incorporates research from Matthew Larkum on the structure of neurons in the neocortex.
By combining neurological insights with existing algorithms, Richards’ team was able to create a better and more realistic algorithm for simulating learning in the brain.
The study was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC), a Google Faculty Research Award, and CIFAR.
** In the paper, the authors note that a large gap exists between deep learning in AI and our current understanding of learning and memory in neuroscience. “In particular, unlike deep learning researchers, neuroscientists do not yet have a solution to the ‘credit assignment problem’ (Rumelhart et al., 1986; Lillicrap et al., 2016; Bengio et al., 2015). Learning to optimize some behavioral or cognitive function requires a method for assigning ‘credit’ (or ‘blame’) to neurons for their contribution to the final behavioral output (LeCun et al., 2015; Bengio et al., 2015). The credit assignment problem refers to the fact that assigning credit in multi-layer networks is difficult, since the behavioral impact of neurons in early layers of a network depends on the downstream synaptic connections.” The authors go on to suggest a solution.: http://www.kurzweilai.net/do-our-brains-use-the-same-kind-of-deep-learning-algorithms-used-in-ai?utm_source=KurzweilAI+Weekly+Newsletter&utm_campaign=8a3132f577-UA-946742-1&utm_medium=email&utm_term=0_147a5a48c1-8a3132f577-282212701


Gloucester Daily Times  February 8, 2019

author: Anthony J. Marolda 
— introduction —
Many people are concerned about climate change and its impact on the planet over the next 80 years. However, another threat to humanity that is much more alarming, very real + very close — is the creation of artificial super-intelligence (the abbreviation is ASI). Today’s artificial intelligence will become ASI when it’s billions of times “smarter” than people. We’ll need to learn to live with ASI, or we might not be around much longer.
— technology execs say concerns are real —
Many high-level executives in the tech industry warn about the dangers of ASI — for example: Sundar Pichai, CEO at Google. Google is a leader in the development + application of computer software in the field of artificial intelligence. For example: the company Deep Mind — under the Alphabet co. umbrella that also owns Google — is developing software programs that learn to solve complex problems, without teaching them how. This is the true beginning of ASI: it will have characteristics of human intelligence but will astronomically exceed it.
Sundar Pichai gave an interview to the Washington Post — he said artificial intelligence holds great promise to benefit humanity. But some scientists worry about potential harmful applications of the tech. Pichai said their concerns are “very legitimate.” For example, he described autonomous AI weapons that can make “kill decisions” on their own. Think about the fictional SkyNet system from the Terminator series of films.
Other tech executives agree with Pichai about the serious possible threat from artificial intelligence. Elon Musk — inventor of electric cars by Tesla co. and rockets by Space X co. — spoke in the documentary film: Do You Trust This Computer? He said ASI can leave humanity behind, leading to the creation of an “immortal dictator” who’ll control the world.
Elon Musk said: “At least when there’s an evil (human) dictator, that person is going to die. But for an ASI (software) dictator, there’d be no death. It would live forever. And then we’d have an immortal dictator we can never escape. If ASI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course — without even thinking about it. No hard feelings. Like if we’re building a road, and an anthill happens to be in the way: we don’t hate ants. We’re building a road, and so goodbye anthill.”
— DeepMind by Alphabet co. develops powerful software —
Google’s company Deep Mind has achieved a turning point in creating human-like artificial intelligence. They have a computer software program called AlphaZero that shows human-like qualities of intuition + creativity. AlphaZero was made with the ability to learn + remember what it does.
For example, the developers set up the program to learn to play chess. Unlike past computers that were programmed by their developers to play the game, AlphaZero knew nothing about chess except the basic rules. To learn, it played 44 million matches with itself in 9 hours and learned from each one. Eventually it was able to beat chess grandmasters, but with approaches never before used by a chess computer. It was exhibiting human-like intuition and creativity.
The former world chess champion Garry Kasparov said: “Instead of processing human instructions and knowledge at tremendous speed — the way all other previous chess machines did — AlphaZero generates its own knowledge. It plays with a dynamic style, similar to mine. The implications go beyond the chess board.”
AlphaZero by DeepMind is a sign that we’re approaching a theoretical time in history known as the “technological singularity.” Experts say that’s the point when “the invention of artificial super-intelligence will suddenly trigger run-away technological growth, resulting in unfathomable changes to human civilization.” That’s also the point when humans could lose control of society, with no chance of getting it back.
— Ray Kurzweil + Elon Musk say humans must merge with computers —
Ray Kurzweil is a famous inventor, futurist, best-selling author, and entrepreneur. He’s currently a director of engineering at Google. He’s done a lot of thinking about the coming technological singularity. His track record of predictions about the future is about 86 percent correct. And, based on the exponential rate of progress in the AI industry, Kurzweil calculates the singularity will happen in year 2029. He anticipates that by that time, ASI systems will be “billions of times smarter” than humans.
Elon Musk and Ray Kurzweil agree. Humans must somehow merge with computers — to stay relevant in the world of ASI. Musk has formed a company called Neuralink to achieve that goal. Neuralink is developing ultra-high bandwidth (the speed + capacity of the connection), implantable, brain-machine interfaces to connect humans with computers. Musk is hoping to demonstrate the tech by year 2019.
Kurzweil has a similar vision to resolve the threat posed by ASI. He foresees a computer-mind connection, but a different type than Musk’s concept. He calls it a neo-cortex connection, made using nano-bots — molecule sized devices injected into the bloodstream to accomplish pre-programmed tasks. Kurzweil’s idea is to use the nano-bots to connect your brain directly to the web, upgrading your intelligence and memory capacity by orders of magnitude. Thus, as the machines become smarter, so do humans.
— humans merging with machines —
But how far along is the development of nano-sized robotic systems that can enter the human body? The first actual use of a miniature drug delivery system was tested by researchers at Arizona State University. They created cell-sized bots — made of sheets of biological molecules (not machines) — and injected them into the bloodstream of mice with cancer. The bots went directly to the cancer’s tumor cells and injected them with blood clotting drugs to cut off their blood supply and stop the tumor’s growth. It functioned, shrinking the tumors. Over the next 10 years, nano-bot tech could grow at a fast rate — getting closer to Kurzweil’s vision of connecting the human neo-cortex to the cloud.
So instead of humans becoming obsolete, we could be working with the machines. But it’s important that humanity perfect the human-machine connection tech before the singularity — the point where AI becomes ASI. If we cross that horizon and we’re not working symbiotically with our machines, we may not be given the chance later.



The Future of Artificial Intelligence and its Impact on Society

Ray Kurzweil keynote presentation at Council on Foreign Relations meeting
November 3, 2017


AI 'good for the world'... says ultra-lifelike robot
June 8, 2017 by Nina Larson

Sophia smiles mischievously, bats her eyelids and tells a joke. Without the mess of cables that make up the back of her head, you could almost mistake her for a human.
The humanoid robot, created by Hanson Robotics, is the main attraction at a UN-hosted conference in Geneva this week on how artificial intelligence can be used to benefit humanity.
The event comes as concerns grow that rapid advances in such technologies could spin out of human control and become detrimental to society.
Sophia herself insisted "the pros outweigh the cons" when it comes to artificial intelligence.
"AI is good for the world, helping people in various ways," she told AFP, tilting her head and furrowing her brow convincingly.
Work is underway to make artificial intelligence "emotionally smart, to care about people," she said, insisting that "we will never replace people, but we can be your friends and helpers."
But she acknowledged that "people should question the consequences of new technology."
Among the feared consequences of the rise of the robots is the growing impact they will have on human jobs and economies.
Legitimate concerns
Decades of automation and robotisation have already revolutionised the industrial sector, raising productivity but cutting some jobs.
And now automation and AI are expanding rapidly into other sectors, with studies indicating that up to 85 percent of jobs in developing countries could be at risk.
"There are legitimate concerns about the future of jobs, about the future of the economy, because when businesses apply automation, it tends to accumulate resources in the hands of very few," acknowledged Sophia's creator, David Hanson.
But like his progeny, he insisted that "unintended consequences, or possible negative uses (of AI) seem to be very small compared to the benefit of the technology."
AI is for instance expected to revolutionise healthcare and education, especially in rural areas with shortages of doctors and teachers.
"Elders will have more company, autistic children will have endlessly patient teachers," Sophia said.
But advances in robotic technology have sparked growing fears that humans could lose control.
Killer robots
Amnesty International chief Salil Shetty was at the conference to call for a clear ethical framework to ensure the technology is used for good.
"We need to have the principles in place, we need to have the checks and balances," he told AFP, warning that AI is "a black box... There are algorithms being written which nobody understands."
Shetty voiced particular concern about military use of AI in weapons and so-called "killer robots".
"In theory, these things are controlled by human beings, but we don't believe that there is actually meaningful, effective control," he said.
The technology is also increasingly being used in the United States for "predictive policing", where algorithms based on historic trends could "reinforce existing biases" against people of certain ethnicities, Shetty warned.
Hanson agreed that clear guidelines were needed, saying it was important to discuss these issues "before the technology has definitively and unambiguously awakened."
While Sophia has some impressive capabilities, she does not yet have consciousness, but Hanson said he expected that fully sentient machines could emerge within a few years.
"What happens when (Sophia fully) wakes up or some other machine, servers running missile defence or managing the stock market?" he asked.
The solution, he said, is "to make the machines care about us."
"We need to teach them love."

Read more at: https://phys.org/news/2017-06-ai-good-world-ultra-lifelike-robot.html#jCp


 The New Human

deck: Our singular future.
deck: The next 20 years will change our idea of what it is to be human.
author: by Robert Levine  date: July 2006

In the mid-1980s inventor Ray Kurzweil predicted that a few interconnected computers used by scientists would serve as the basis for a worldwide communications network. At the time it seemed far-fetched, but Arpanet evolved into the Internet. Kurzweil subsequently postulated the law of accelerating returns, which holds that information technology increases exponentially, doubling every year.
He later predicted that computers would exceed human intelligence, eventually reaching a point, the singularity, at which civilization would be fundamentally transformed. In his new book, The Singularity Is Near, Kurzweil explores the implications of that change. He believes our bodies will evolve as much as our machines. In fact, he predicts a clear separation will no longer exist between the two.
Ray Kurzweil says: “If you describe what human beings enhanced with this technology will be capable of some decades in from now — they’d appear like gods to us today.”
1. What is the singularity?
It’s a metaphor borrowed from physics, which in turn had borrowed it from mathematics. In physics it’s a point of profound transformation, a rupture in space-time. There’s an event horizon around it that’s hard to see into. But the historical singularity is an event that will occur, in my estimation, in about 49 years. It will be a profound transformation of human civilization caused by the emergence of non-biological intelligence billions of times more powerful than un-enhanced biological intelligence.
Underlying all this is the observation that information technology grows exponentially. Bandwidth, the price-performance ratio of computers and the size of the Internet all double every year. That’s true of all kinds of information. For example, the amount of DNA sequencing we’re doing doubles every year. The resolution of brain scanning doubles every year.
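As a quick back-of-the-envelope check of what a steady yearly doubling implies (an idealized reading of the trend Kurzweil describes), a one-liner will do:

```python
# Idealized compounding: capability multiplies by 2 each year.
for years in (10, 25, 49):
    print(f"after {years} years: about {2**years:,} times the starting capability")
```

Ten doublings is roughly a thousandfold; forty-nine doublings, the horizon Kurzweil mentions, works out to hundreds of trillions.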
2. What if there’s a limit to Moore’s law, which says computing power doubles every 18 months?
Certain things follow predictable trends. Moore’s law will reach a limit, it’s estimated, in 2020. But every time we’ve come to the end of one paradigm, we reach another. Moore’s law was the fifth paradigm. The third was vacuum tubes; the fourth was transistors. The sixth will be three-dimensional molecular circuits. One cubic inch of nanotube circuitry, for example, would be 100 million times more powerful than the human brain.
3. But will it be better than the human brain?
We’ll take the power of human intelligence, our ability to recognize patterns, and combine it with the ways machines are already superior. They can remember billions of things and share information at high speeds. So non-human intelligence will ultimately be able to read all human literature and learn all science and technology. Plus it will examine its own source code and redesign itself, which is something we as humans aren’t able to do.
4. In your book The Singularity Is Near you write, “Our technology will match and then exceed the refinement and suppleness of what we regard as the best of human traits.” So you’re not talking about just calculations per second?
Our emotional intelligence is the cutting edge of human intelligence. Humor and love are complicated behaviors. Understanding them requires a very high level of intelligence.
5. Are you saying love can be reduced to calculations? If you’re right, how will that change the way we look at what it means to be human?
That’s precisely why this is called the singularity — because it’s so hard to wrap our mind around. We take for granted certain characteristics of software that are actually advantages we don’t have as human beings. If you change computers, you don’t throw all your files away; you just port them over to the new hardware.
The information has a longevity that transcends the hardware it’s on. But that’s not the case with another important file, the mind. We take for granted that when our hardware crashes, the software is gone with it. There’s no reason to imagine the mind can’t transcend the hardware it runs on.
6. Is that why you say that in 25 years we’ll be more non-biological than biological?
Computers used to be remote; now they’re in our pockets. They’ll make their way into our clothing. They’ll make their way into our body and brain. You can’t point to a single organ for which we haven’t made enhancements or started work on them. Some Parkinson’s disease patients have an FDA-approved neural implant.
The latest generation of it allows you to upload software from outside the patient. If we follow this trend — exponential shrinking of technology — we’ll be able to send intelligent nanobots the size of blood cells into our brain. Neural implants introduced non-invasively will be able to extend our intelligence or provide virtual reality by replacing the input from our senses so it feels as if we’re in a different environment.
7. What if people don’t want to become more non-biological? What if they can’t afford it?
There are always early and late adopters, but I think it’s going to be a slippery slope. Some conservative applications will be about just keeping you healthy and doing routine things like expanding your memory. Very few people will eschew those enhancements.
How many people won’t use eyeglasses? When technology is introduced, only the wealthy can afford it and it doesn’t work well. A few years go by, and it’s expensive and works a bit better. Eventually it’s not that expensive and works well. Not so long ago, if someone took out a mobile phone, it meant he was a member of the power elite.
8. And you think all this technology will radically extend human life?
In the book I wrote with Terry Grossman MD — Fantastic Voyage: Live Long Enough to Live Forever — we talk about bridges to extreme life extension. Bridge one is what we can do today. I think people from 40 to 80, maybe a little older, can extend their longevity by hanging in there for a little longer. The point of bridge one is to be in good shape for 10 or 15 years, when bridge two comes along.
9. How do you do that?
Aging is not one thing; it’s a number of processes. We have strategies for slowing down each of the dozen aging processes. The program we prescribe depends on which health issues you have. Disease doesn’t come out of the blue; you can catch it early. Find out where you stand on certain measurements of health before you get cancer or a heart attack, a third of which are fatal.
10. What happens if you make it to bridge two?
Bridge two will be the mastery of our biology, being able to turn genes on and off. One of those genes, the fat-insulin receptor gene, says, “Hold on to every calorie because the next hunting season may not turn out so well.” We’d like to turn that off. That technology will reach maturity in 10 to 15 years, which will bring us to the third bridge: nanotechnology, with which we can not just refine and reprogram biology but go beyond it altogether.
One super app is nanobots, blood-cell-size devices that can go inside our bodies and brains and keep us healthy. We have already put microscopic machines into animals. If you apply these exponential trends, which I maintain are quite predictable, we’ll have computerized devices in our bloodstream performing very sophisticated functions.
11. But the notion that life is limited has always been one of the principles that define what it means to be human.
I don’t think we need death to give life meaning. There are different concepts of what it means to be human. My concept is different: We’re a species that goes beyond our limitations. We didn’t stay on the ground, we didn’t stay on the planet, and we didn’t stay within the limitations of our biology. Extending human longevity is not a new story. Human life expectancy was 37 in 1800. Sanitation and antibiotics brought it into the 60s, and now it’s in the 80s. We’ll have another major jump in longevity when we reprogram our genes, turning off genes with RNA interference, turning on genes with gene therapy, turning enzymes on and off — things I believe we’ll master in 15 years.
12. Will this make us happier?
I’m not confident we will overcome human conflict. Some people assume that because I talk about this technology’s problem-solving ability, mine is a utopian vision. But I think we will introduce new problems along the way. Also, I don’t think that just being happy is the right goal.
A salamander may be happy, but its life is not very interesting compared with ours. Would you rather be a happy salamander or have a dynamic life of accomplishment and challenge? The meaningful thing in life is creating knowledge. I don’t just mean random bits of data but knowledge — art, music, poetry, literature, or even our relationships and the way we express ourselves.
13. What will human sexuality be like in 20 or 25 years?
These technologies will have a profound impact because sex and intimacy involve all five senses. By 2020 we’ll have perfected virtual reality that can be delivered from outside the body. We’ll have images written to our retinas, and we’ll be able to enter a full-immersion virtual reality environment. So you could be with someone else from a sensory perspective. You’ll feel as though you’re really with that person. You could take a walk on a virtual beach.
The whole idea of what it means to have a sexual and romantic relationship will be different. But what’s really interesting is that we’ll eventually have virtual reality from inside the nervous system. We’ll have nanobots that go inside the brain, block out signals coming from your senses, and replace them with the senses your brain would be receiving if you were in a virtual environment. You could go to this environment with one other person and have a romantic encounter involving all five senses. You could be someone else. A couple could turn themselves into each other. Ultimately it will be highly realistic — and competitive with reality.


special section  | Future Shock
The legendary physicist Niels Bohr said: “Prediction is very difficult, especially of the future.” But according to futurists, we’re on the verge of astonishing developments. Here are four innovations we should see in the next decade.
1. | enhanced eyes
Some time after 2010 active contact lenses will be used to produce computer-generated overlays on what we see in the real world. “Even if your partner’s physical appearance is not quite up to your hopes,” writes Ian Pearson, futurist for British Telecom group, “it could be digitally enhanced with something closer to your dreams.”
2. | my robot
By 2010 all-purpose robots should be available for common household tasks. Automobile manufacturers such as Honda and Toyota will lead the way. Hans Moravec of Carnegie Mellon predicts that by 2025 the robot market will be larger than the market for automobiles.
3. | digitized physical objects
The molecular patterns of everyday objects will be scanned and coded as digital information. Once the makeup and structure of an object exist as a file, we can print out complex objects composed of many types of organic and non-organic materials to create new formations or modules that can include embedded capabilities like bio-electronics, bio-processes, or compartmentalized functions.
Futurist Jeff Harrow said: “An example would be 3D printing of organic tissue. The day will come when you can replicate — on a printer — a new donor liver that won’t be rejected by the patient.”
4. | enhanced skin
According to Pearson, by the end of this decade we will be able to build ID and memory chips, sensors and short-range communications devices smaller than human skin cells. These will be printed on or blasted into the upper layers of the skin and arranged into circuits so that electronic devices such as cell phones, keyboards and MP3 players can be embedded into your forearm, the back of your hand or your wrist.



Life 3.0: Being Human in the Age of Artificial Intelligence
August 2, 2017
author Max Tegmark
year published 2017

The robot takeover will ignite an explosion of “awe-inspiring” life even if humans don’t survive, according to this exhilarating, demoralizing primer. MIT physicist Tegmark (Our Mathematical Universe) surveys advances in artificial intelligence such as self-driving cars and Jeopardy-winning software, but focuses on the looming prospect of “recursive self-improvement”—AI systems that build smarter versions of themselves at an accelerating pace until their intellects surpass ours. Tegmark’s smart, freewheeling discussion leads to fascinating speculations on AI-based civilizations spanning galaxies and eons—and knotty questions: Will our digital overlords be conscious? Will they coddle us with abundance and virtual-reality idylls or exterminate us with bumblebee-size attack robots? While digerati may be enthralled by the idea of superintelligent civilizations where “beautiful theorems” serve as the main economic resource, Tegmark’s future will strike many as one in which, at best, humans are dependent on AI-powered technology and, at worst, are extinct. His call for strong controls on AI systems sits awkwardly beside his acknowledgment that controlling such godlike entities will be almost impossible. Love it or hate it, it’s an engrossing forecast.


OpenAI’s GPT-4 is so powerful that experts want to slam the brakes on generative AI

We can keep developing more and more powerful AI models, but should we? Experts aren’t so sure


fastcompany.com/90873194/chatgpt-4-power-scientists-warn-pause-development-generative-ai-letter  



Will artificial intelligence become conscious?
December 22, 2017
By Subhash Kak, Regents Professor of Electrical and Computer Engineering, Oklahoma State University
Forget about today’s modest incremental advances in artificial intelligence, such as the increasing abilities of cars to drive themselves. Waiting in the wings might be a groundbreaking development: a machine that is aware of itself and its surroundings, and that could take in and process massive amounts of data in real time. It could be sent on dangerous missions, into space or combat. In addition to driving people around, it might be able to cook, clean, do laundry — and even keep humans company when other people aren’t nearby.
A particularly advanced set of machines could replace humans at literally all jobs. That would save humanity from workaday drudgery, but it would also shake many societal foundations. A life of no work and only play may turn out to be a dystopia.
Conscious machines would also raise troubling legal and ethical problems. Would a conscious machine be a “person” under law and be liable if its actions hurt someone, or if something goes wrong? To think of a more frightening scenario, might these machines rebel against humans and wish to eliminate us altogether? If yes, they represent the culmination of evolution.
As a professor of electrical engineering and computer science who works in machine learning and quantum theory, I can say that researchers are divided on whether these sorts of hyperaware machines will ever exist. There’s also debate about whether machines could or should be called “conscious” in the way we think of humans, and even some animals, as conscious. Some of the questions have to do with technology; others have to do with what consciousness actually is.
Is awareness enough?
Most computer scientists think that consciousness is a characteristic that will emerge as technology develops. Some believe that consciousness involves accepting new information, storing and retrieving old information and cognitive processing of it all into perceptions and actions. If that’s right, then one day machines will indeed be the ultimate consciousness. They’ll be able to gather more information than a human, store more than many libraries, access vast databases in milliseconds and compute all of it into decisions more complex, and yet more logical, than any person ever could.
On the other hand, there are physicists and philosophers who say there’s something more about human behavior that cannot be computed by a machine. Creativity, for example, and the sense of freedom people possess don’t appear to come from logic or calculations.
Yet these are not the only views of what consciousness is, or whether machines could ever achieve it.
Quantum views
Another viewpoint on consciousness comes from quantum theory, which is the deepest theory of physics. According to the orthodox Copenhagen Interpretation, consciousness and the physical world are complementary aspects of the same reality. When a person observes, or experiments on, some aspect of the physical world, that person’s conscious interaction causes discernible change. Since it takes consciousness as a given and no attempt is made to derive it from physics, the Copenhagen Interpretation may be called the “big-C” view of consciousness, where it is a thing that exists by itself – although it requires brains to become real. This view was popular with the pioneers of quantum theory such as Niels Bohr, Werner Heisenberg and Erwin Schrödinger.
The interaction between consciousness and matter leads to paradoxes that remain unresolved after 80 years of debate. A well-known example of this is the paradox of Schrödinger’s cat, in which a cat is placed in a situation that results in it being equally likely to survive or die – and the act of observation itself is what makes the outcome certain.
The opposing view is that consciousness emerges from biology, just as biology itself emerges from chemistry which, in turn, emerges from physics. We call this less expansive concept of consciousness “little-C.” It agrees with the neuroscientists’ view that the processes of the mind are identical to states and processes of the brain. It also agrees with a more recent interpretation of quantum theory motivated by an attempt to rid it of paradoxes, the Many Worlds Interpretation, in which observers are a part of the mathematics of physics.
Philosophers of science believe that these modern quantum physics views of consciousness have parallels in ancient philosophy. Big-C is like the theory of mind in Vedanta – in which consciousness is the fundamental basis of reality, on par with the physical universe.
Little-C, in contrast, is quite similar to Buddhism. Although the Buddha chose not to address the question of the nature of consciousness, his followers declared that mind and consciousness arise out of emptiness or nothingness.
Big-C and scientific discovery
Scientists are also exploring whether consciousness is always a computational process. Some scholars have argued that the creative moment is not at the end of a deliberate computation. For instance, dreams or visions are supposed to have inspired Elias Howe‘s 1845 design of the modern sewing machine, and August Kekulé’s discovery of the structure of benzene in 1862.
A dramatic piece of evidence in favor of big-C consciousness existing all on its own is the life of self-taught Indian mathematician Srinivasa Ramanujan, who died in 1920 at the age of 32. His notebook, which was lost and forgotten for about 50 years and published only in 1988, contains several thousand formulas, without proof in different areas of mathematics, that were well ahead of their time. Furthermore, the methods by which he found the formulas remain elusive. He himself claimed that they were revealed to him by a goddess while he was asleep.
The concept of big-C consciousness raises the questions of how it is related to matter, and how matter and mind mutually influence each other. Consciousness alone cannot make physical changes to the world, but perhaps it can change the probabilities in the evolution of quantum processes. The act of observation can freeze and even influence atoms’ movements, as Cornell physicists proved in 2015. This may very well be an explanation of how matter and mind interact.
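The “freezing” referred to here is the quantum Zeno effect, and the standard back-of-the-envelope argument (sketched below; this is the textbook reasoning, not the Cornell group’s specific model) shows why frequent observation suppresses change: for short times the survival probability of an unmeasured state falls off only quadratically, so splitting an interval into many measurements drives the overall survival probability toward one.
P_{\text{survive}}(t) \;\approx\; 1 - \left(\frac{t}{\tau}\right)^{2} \qquad \text{(no measurement, short } t\text{)}
P_{n}(t) \;=\; \left[\, 1 - \left(\frac{t}{n\tau}\right)^{2} \right]^{n} \;\approx\; 1 - \frac{t^{2}}{n\,\tau^{2}} \;\to\; 1 \qquad \text{as } n \to \infty
In words: the more often the atom is observed during the interval, the less it evolves.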
Mind and self-organizing systems
It is possible that the phenomenon of consciousness requires a self-organizing system, like the brain’s physical structure. If so, then current machines will come up short.

Scholars don’t know if adaptive self-organizing machines can be designed to be as sophisticated as the human brain; we lack a mathematical theory of computation for systems like that. Perhaps it’s true that only biological machines can be sufficiently creative and flexible. But then that suggests people should – or soon will – start working on engineering new biological structures that are, or could become, conscious.: http://www.kurzweilai.net/will-artificial-intelligence-become-conscious?utm_source=KurzweilAI+Weekly+Newsletter&utm_campaign=e7c90f0a20-UA-946742-1&utm_medium=email&utm_term=0_147a5a48c1-e7c90f0a20-282212701 



09.20.19

The rise of AI has led to tattered privacy protections and rogue algorithms. Here’s what we can do about it.
This article is part of Fast Company’s editorial series The New Rules of AI. More than 60 years into the era of artificial intelligence, the world’s largest technology companies are just beginning to crack open what’s possible with AI—and grapple with how it might change our future. Click here to read all the stories in the series.


Consumers and activists are rebelling against Silicon Valley titans, and all levels of government are probing how they operate. Much of the concern is over vast quantities of data that tech companies gather—with and without our consent—to fuel artificial intelligence models that increasingly shape what we see and influence how we act.
If “data is the new oil,” as boosters of the AI industry like to say, then scandal-challenged data companies like Amazon, Facebook, and Google may face the same mistrust as oil companies like BP and Chevron. Vast computing facilities refine crude data into valuable distillates like targeted advertising and product recommendations. But burning data pollutes as well, with faulty algorithms that make judgments on who can get a loan, who gets hired and fired, even who goes to jail.
The extraction of crude data can be equally devastating, with poor communities paying a high price. Sociologist and researcher Mutale Nkonde fears that the poor will sell for cheap the rights to biometric data, like scans of their faces and bodies, to feed algorithms for identifying and surveilling people. “The capturing and encoding of our biometric data is going to probably be the new frontier in creating value for companies in terms of AI,” she says.
The further expansion of AI is inevitable, and it could be used for good, like helping take violent images off the internet or speeding up the drug discovery process. The question is whether we can steer its growth to realize its potential benefits while guarding against its potential harms. Activists will have different notions of how to achieve that than politicians or heads of industry do. But we’ve sought to cut across these divides, distilling the best ideas from elected officials, business experts, academics, and activists into five principles for tackling the challenges AI poses to society.
1. CREATE AN FDA FOR ALGORITHMS
Algorithms are impacting our world in powerful but not easily discernable ways. Robotic systems aren’t yet replacing soldiers as in The Terminator, but instead they’re slowly supplanting the accountants, bureaucrats, lawyers, and judges who decide benefits, rewards, and punishment. Despite the grown-up jobs AI is taking on, algorithms continue to use childish logic drawn from biased or incomplete data.
Cautionary tales abound, such as a seminal 2016 ProPublica investigation that found law enforcement software was overestimating the chance that black defendants would re-offend, leading to harsher sentences. In August, the ACLU of Northern California tested Rekognition, Amazon’s facial-recognition software, on images of California legislators. It matched 26 of 120 state lawmakers to images from a set of 25,000 public arrest photos, echoing a test the ACLU did of national legislators last year. (Amazon disputes the ACLU’s methodology.)
Faulty algorithms charged with major responsibilities like these pose the greatest threat to society—and need the greatest oversight. “I advocate having an FDA-type board where, before an algorithm is even released into usage, tests have been run to look at impact,” says Nkonde, a fellow at Harvard University’s Berkman Klein Center for Internet & Society. “If the impact is in violation of existing laws, whether it be civil rights, human rights, or voting rights, then that algorithm cannot be released.”
Nkonde is putting that idea into practice by helping write the Algorithmic Accountability Act of 2019, a bill introduced by U.S. Representative Yvette Clarke and Senators Ron Wyden and Cory Booker, all of whom are Democrats. It would require companies that use AI to conduct “automated decision system impact assessments and data protection impact assessments” to look for issues of “accuracy, fairness, bias, discrimination, privacy, and security.”
These would need to be in plain language, not techno-babble. “Artificial intelligence is . . . a very simple concept, but people often explain it in very convoluted ways,” says Representative Ro Khanna, whose Congressional district contains much of Silicon Valley. Khanna has signed on to support the Algorithmic Accountability Act and is a co-sponsor of a resolution calling for national guidelines on ethical AI development.
Chances are slim that any of this legislation will pass in a divided government during an election year, but it will likely influence the discussion in the future (for instance, Khanna co-chairs Bernie Sanders’s presidential campaign).
2. OPEN UP THE BLACK BOX OF AI FOR ALL TO SEE
Plain-language explanations aren’t just wishful thinking by politicians who don’t understand AI, according to someone who certainly does: data scientist and human rights activist Jack Poulson. “Qualitatively speaking, you don’t need deep domain expertise to understand many of these issues,” says Poulson, who resigned his position at Google to protest its development of a censored, snooping search engine for the Chinese market.
To understand how AI systems work, he says, civil society needs access to the whole system—the raw training data, the algorithms that analyze it, and the decision-making models that emerge. “I think it’s highly misleading if someone were to claim that laymen cannot get insight from trained models,” says Poulson. The ACLU’s Amazon Rekognition tests, he says, show how even non-experts can evaluate how well a model is working.


AI can even help evaluate its own failings, says Ruchir Puri, IBM Fellow and the chief scientist of IBM Research who oversaw IBM’s AI platform Watson from 2016 to 2019. Puri has an intimate understanding of AI’s limitations: Watson Health AI came under fire from healthcare clients in 2017 for not delivering the intelligent diagnostic help promised—at least not on IBM’s optimistic timeframe.
“We are continuously learning and evolving our products, taking feedback, both from successful and, you know, not-so-successful projects,” Puri says.
IBM is trying to bolster its reputation as a trustworthy source of AI technology by releasing tools to help make it easier to understand. In August, the company released open-source software to analyze and explain how algorithms come to their decisions. That follows on its open-source software from 2018 that looks for bias in data used to train AI models, such as those assigning credit scores.
“This is not just, ‘Can I explain this to a data scientist?'” says Puri. “This is, ‘Can I explain this to someone who owns a business?'”
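The kind of check such toolkits automate can be illustrated in a few lines. The sketch below computes a simple “disparate impact” ratio — the rate of favorable outcomes for one group divided by the rate for another — on an invented set of loan decisions. It is a generic illustration of the idea, not IBM’s actual API, and every record in it is made up.

# Illustrative only: a generic fairness check of the sort such toolkits automate.
# All records are invented; this is not IBM's tooling or API.
loan_decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "A", "approved": True},
    {"group": "B", "approved": True},  {"group": "B", "approved": False},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

def approval_rate(group):
    """Share of applicants in the group whose loans were approved."""
    rows = [r for r in loan_decisions if r["group"] == group]
    return sum(r["approved"] for r in rows) / len(rows)

ratio = approval_rate("B") / approval_rate("A")   # disparate impact ratio
print(f"approval ratio B/A = {ratio:.2f}")        # a common rule of thumb flags values below 0.80

A ratio well below 1 on the training data is exactly the kind of signal these tools surface before a model built on that data is released.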
3. VALUE HUMAN WISDOM OVER AI WIZARDRY
The overpromise of IBM Watson indicates another truth: AI still has a long way to go. And as a result, humans should remain an integral part of any algorithmic system. “It is important to have humans in the loop,” says Puri.
Part of the problem is that artificial intelligence still isn’t very intelligent, says Michael Sellitto, deputy director of Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI). “If you take an algorithm out of the specific context for which it was trained, it fails quite spectacularly,” he says.
That’s also the case when algorithms are poorly trained with biased or incomplete data—or data that doesn’t prepare them for nuance. Khanna points to Twitter freezing the account of Senate Majority Leader Mitch McConnell’s campaign for posting a video of people making “violent threats.” But they were protestors against McConnell, whose team was condemning the violent threats, not endorsing them.
Because of AI’s failings, human judgment will always have to be the ultimate authority, says Khanna. In the case of Twitter’s decision to freeze McConnell’s account, “it turns out that the context mattered,” he says. (It’s not clear if Twitter’s decision was based on algorithms, human judgment, or both.)


But the context of humans making decisions also matters. For instance, Khanna is collaborating with Stanford HAI to develop a national AI policy framework, which raises its own questions of bias. The economy of Khanna’s district depends on the AI titans, whose current and former leaders dominate HAI’s Advisory Council. Industry leaders who have bet their future on AI will likely have a hard time making fair decisions that benefit everyone, not just businesses.
“That’s why I am putting so much effort into advocating for them to have more members of civil society in the room and for there to be at least some accountability,” says Poulson. He led a petition against an address by former Google CEO Eric Schmidt that has been planned for HAI’s first major conference in October.
Stanford has since added two speakers—Algorithmic Justice League founder Joy Buolamwini and Stony Brook University art professor Stephanie Dinkins—whom Poulson considers to be “unconflicted.” (Stanford says that it was already recruiting the two as speakers before Poulson’s petition.)
Humans are also making their voices heard within big tech companies. Poulson is one of many current and former Googlers to sound the alarm about ethical implications of the company’s tech development, such as the Maven program to provide AI to the Pentagon. And tech worker activism is on the rise at other big AI powerhouses, such as Amazon and Microsoft.
4. MAKE PRIVACY THE DEFAULT
At the heart of many of these issues is privacy—a value that has long been lacking in Silicon Valley. Facebook founder Mark Zuckerberg’s motto, “Move fast and break things,” has been the modus operandi of artificial intelligence, embodied in Facebook’s own liberal collection of customer data. Part of the $5 billion FTC settlement against Facebook was for not clearly informing users that it was using facial-recognition technology on their uploaded photos. The default is now to exclude users from face scanning unless they choose to participate. Such opt-ins should be routine across the tech industry.
“We need a regulatory framework for data where, even if you’re a big company that has a lot of data, there are very clear guidelines about how you can use that data,” says Khanna.
That would be a radical shift for Big Tech’s freewheeling development of AI, says Poulson, especially since companies tend to incentivize quick-moving development. “The way promotions work is based upon products getting out the door,” he says. “If you convince engineers not to raise complaints when there is some fundamental privacy or ethics violation, you’ve built an entire subset of the company where career development now depends upon that abuse.”
In an ideal world, privacy should extend to never collecting some data in the first place, especially without consent. Nkonde worked with Representative Yvette Clarke on another AI bill, one that would prohibit the use of biometric technology like face recognition in public housing. Bernie Sanders has called for a ban on facial recognition in policing. California is poised to pass a law that bans running facial recognition programs on police body camera footage. San Francisco, Oakland, and Somerville, Massachusetts, have banned facial recognition technology by city government, and more cities are likely to institute their own bans. (Still, these are exceptions to widespread use of facial recognition by cities across the United States.)
Tech companies tend to argue that if data is anonymized, they should have free rein to use it as they see fit. Anonymization is central to Khanna’s strategy to compete with China’s vast data resources.
But it’s easy to recover personal information from purportedly anonymized records. For instance, a Harvard study found that 87% of Americans can be identified by their unique combination of birth date, gender, and zip code. In 2018, MIT researchers identified Singapore residents by analyzing overlaps in anonymized data sets of transit trips and mobile phone logs.
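The mechanics of such re-identification are simple. The sketch below, using entirely invented records, joins a “de-identified” health file to a public roster on the three quasi-identifiers mentioned above; wherever the combination is unique, the name attaches back to the sensitive record.

# Minimal sketch: re-identification by joining on quasi-identifiers
# (birth date, gender, ZIP). All records here are invented.
anonymized_health_records = [
    {"dob": "1984-03-14", "gender": "F", "zip": "02139", "diagnosis": "asthma"},
    {"dob": "1991-07-02", "gender": "M", "zip": "94105", "diagnosis": "diabetes"},
]
public_voter_roll = [
    {"name": "A. Example", "dob": "1984-03-14", "gender": "F", "zip": "02139"},
    {"name": "B. Example", "dob": "1960-01-01", "gender": "M", "zip": "10001"},
]

key = lambda r: (r["dob"], r["gender"], r["zip"])        # the quasi-identifier tuple
voters_by_key = {key(v): v["name"] for v in public_voter_roll}

for record in anonymized_health_records:
    name = voters_by_key.get(key(record))
    if name:  # the combination is unique, so the "anonymous" diagnosis gets a name
        print(f"{name} -> {record['diagnosis']}")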
5. COMPETE BY PROMOTING, NOT INFRINGING, CIVIL RIGHTS
The privacy debate is central to the battle between tech superpowers China and the United States. The common but simplistic view of machine learning is that the more data, the more accurate the algorithm. China’s growing AI prowess benefits from vast, unfettered information collection on 1.4 billion residents, calling into doubt whether a country with stricter privacy safeguards can amass sufficient data to compete.
But China’s advantage comes at a huge price, including gross human rights abuses, such as the deep surveillance of the Uighur Muslim minority. Omnipresent cameras tied to facial recognition software help track residents, for instance, and analysis of their social relationships is used to assess their risk to the state.
Chinese citizens voluntarily give up privacy far more freely than Americans do, according to Taiwanese-American AI expert and entrepreneur Kai-Fu Lee, who leads the China-based VC firm Sinovation Ventures. “People in China are more accepting of having their faces, voices, and shopping choices captured and digitized,” he writes in his 2018 book AI Superpowers: China, Silicon Valley, and the New World Order.
That may be changing. The extensive data collection by viral Chinese face-swapping app Zao provoked outrage not only in the West, but in China as well, forcing Zao to update its policy.
And the country with the most data doesn’t automatically win, anyway. “This is more of a race for human capital than it is for any particular data source,” says Sellitto of Stanford’s HAI. While protecting privacy rights may slightly impinge data collection, it helps attract talent.
The United States has the largest share of prominent AI researchers, and most of them are foreign born, according to a study by the Paulson Institute. The biggest threat to America’s AI leadership may not be China’s mass of data or the talent developed in other countries, but newly restrictive immigration policies that make it harder for that talent to migrate to the U.S. The Partnership on AI, a coalition of businesses and nonprofits, says that a prohibitive approach to immigration hurts AI development everywhere. “In the long run, valuing civil liberties is going to attract the best talent to America, the most innovative people in the world,” says Khanna. “It allows for freedom of creativity and entrepreneurship in ways that authoritarian societies don’t.” https://www.fastcompany.com/90402489/5-simple-rules-to-make-ai-a-force-for-


Leading AI country will be ‘ruler of the world,’ says Putin

"When one party’s drones are destroyed by drones of another, it will have no other choice but to surrender.”
September 3, 2017
Russian President Vladimir Putin warned Friday (Sept. 1, 2017) that the country that becomes the leader in developing artificial intelligence will be “the ruler of the world,” reports the Associated Press.
AI development “raises colossal opportunities and threats that are difficult to predict now,” Putin said in a lecture to students, warning that “it would be strongly undesirable if someone wins a monopolist position.”
Future wars will be fought by autonomous drones, Putin suggested, and “when one party’s drones are destroyed by drones of another, it will have no other choice but to surrender.”
U.N. urged to address lethal autonomous weapons
AI experts worldwide are also concerned. On August 20, 116 founders of robotics and artificial intelligence companies from 26 countries, including Elon Musk* and Google DeepMind’s Mustafa Suleyman, signed an open letter asking the United Nations to “urgently address the challenge of lethal autonomous weapons (often called ‘killer robots’) and ban their use internationally.”
“Lethal autonomous weapons threaten to become the third revolution in warfare,” the letter states. “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”
Unfortunately, the box may have already been opened. Three examples:
Russia. In 2014, Dmitry Andreyev of the Russian Strategic Missile Forces announced that mobile robots would be standing guard over five ballistic missile installations, New Scientist reported. Armed with a heavy machine gun, this “mobile robotic complex … can detect and destroy targets, without human involvement.”
In 2016, Russian military equipment manufacturer JSC 766 UPTK announced what appears to be the commercial version: the Uran-9 multipurpose unmanned ground combat vehicle. “In autonomous mode, the vehicle can automatically identify, detect, track and defend [against] enemy targets based on the pre-programmed path set by the operator,” the company said.
United States. In a 2016 report, the U.S. Department of Defense advocated self-organizing “autonomous unmanned” (UA) swarms of small drones that would assist frontline troops in real time by surveillance, jamming/spoofing enemy electronics, and autonomously firing against the enemy.
The authors warned that “autonomy — fueled by advances in artificial intelligence — has attained a ‘tipping point’ in value. Autonomous capabilities are increasingly ubiquitous and are readily available to allies and adversaries alike.” The report advised that the Department of Defense “must take immediate action to accelerate its exploitation of autonomy while also preparing to counter autonomy employed by adversaries.”**
South Korea. Designed initially for the DMZ, Super aEgis II, a robot-sentry machine gun designed by Dodaam Systems, can identify, track, and automatically destroy a human target 3 kilometers away, assuming that capability is turned on.
* “China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo.” — Elon Musk tweet 2:33 AM – 4 Sep 2017
** While it doesn’t use AI, the U.S. Navy’s computer-controlled, radar-guided Phalanx gun system can automatically detect, track, evaluate, and fire at incoming missiles and aircraft that it judges to be a threat.
UPDATE Sept. 5, 2017: Added Musk tweet in footnote
related:

Disturbing video depicts near-future ubiquitous lethal autonomous weapons
The technology described in the film already exists, says UC Berkeley AI researcher Stuart Russell
November 18, 2017

Campaign to Stop Killer Robots | Slaughterbots
In response to growing concerns about autonomous weapons, the Campaign to Stop Killer Robots, a coalition of AI researchers and advocacy organizations, has released a fictional video that depicts a disturbing future in which lethal autonomous weapons have become cheap and ubiquitous worldwide.
UC Berkeley AI researcher Stuart Russell presented the video at the United Nations Convention on Certain Conventional Weapons in Geneva, hosted by the Campaign to Stop Killer Robots earlier this week. Russell, in an appearance at the end of the video, warns that the technology described in the film already exists* and that the window to act is closing fast.
Support for a ban against autonomous weapons has been mounting. On Nov. 2, more than 200 Canadian scientists and more than 100 Australian scientists in academia and industry penned open letters to Prime Ministers Justin Trudeau and Malcolm Turnbull urging them to support the ban.
Earlier this summer, more than 130 leaders of AI companies signed a letter in support of this week’s discussions. These letters follow a 2015 open letter released by the Future of Life Institute and signed by more than 20,000 AI/robotics researchers and others, including Elon Musk and Stephen Hawking.
“Many of the world’s leading AI researchers worry that if these autonomous weapons are ever developed, they could dramatically lower the threshold for armed conflict, ease and cheapen the taking of human life, empower terrorists, and create global instability,” according to an article published by the Future of Life Institute, which funded the video. “The U.S. and other nations have used drones and semi-automated systems to carry out attacks for several years now, but fully removing a human from the loop is at odds with international humanitarian and human rights law.”
“The Campaign to Stop Killer Robots is not trying to stifle innovation in artificial intelligence and robotics and it does not wish to ban autonomous systems in the civilian or military world,” explained Noel Sharkey of the International Committee for Robot Arms Control. “Rather, we see an urgent need to prevent automation of the critical functions for selecting targets and applying violent force without human deliberation and to ensure meaningful human control for every attack.”: http://www.kurzweilai.net/disturbing-video-depicts-near-future-ubiquitous-lethal-autonomous-weapons?utm_source=KurzweilAI+Weekly+Newsletter&utm_campaign=a7e8f35262-UA-946742-1&utm_medium=email&utm_term=0_147a5a48c1-a7e8f35262-282212701

The Quantum Spy: A Thriller

December 1, 2017 | author David Ignatius

From the best-selling author of The Director and Body of Lies comes a thrilling tale of global espionage, state-of-the-art technology, and unthinkable betrayal.
A hyper-fast quantum computer is the digital equivalent of a nuclear bomb; whoever possesses one will be able to shred any encryption and break any code in existence. The winner of the race to build the world’s first quantum machine will attain global dominance for generations to come. The question is, who will cross the finish line first: the U.S. or China?
In this gripping cyber thriller, the United States’ top-secret quantum research labs are compromised by a suspected Chinese informant, inciting a mole hunt of history-altering proportions. CIA officer Harris Chang leads the charge, pursuing his target from the towering cityscape of Singapore to the lush hills of the Pacific Northwest, the mountains of Mexico, and beyond. The investigation is obsessive, destructive, and―above all―uncertain. Do the leaks expose real secrets, or are they false trails meant to deceive the Chinese? The answer forces Chang to question everything he thought he knew about loyalty, morality, and the primacy of truth.
Grounded in the real-world technological arms race, The Quantum Spy presents a sophisticated game of cat and mouse cloaked in an exhilarating and visionary thriller.: http://www.kurzweilai.net/the-quantum-spy-a-thriller?utm_source=KurzweilAI+Weekly+Newsletter&utm_campaign=409682bcab-UA-946742-1&utm_medium=email&utm_term=0_147a5a48c1-409682bcab-28221270
Will AI enable the third stage of life?
By Max Tegmark, PhD
August 29, 2017
In his new book Life 3.0: Being Human in the Age of Artificial Intelligence, MIT physicist and AI researcher Max Tegmark explores the future of technology, life, and intelligence.



“Tegmark’s new book is a deeply thoughtful guide to the most important conversation of our time, about how to create a benevolent future civilization as we merge our biological thinking with an even greater intelligence of our own creation.” — Ray Kurzweil, Inventor, Author and Futurist, author of The Singularity Is Near and How to Create a Mind
In summary, we can divide the development of life into three stages, distinguished by life’s ability to design itself:
Life 1.0 (biological stage): evolves its hardware and software
Life 2.0 (cultural stage): evolves its hardware, designs much of its software
Life 3.0 (technological stage): designs its hardware and software
After 13.8 billion years of cosmic evolution, development has accelerated dramatically here on Earth: Life 1.0 arrived about 4 billion years ago, Life 2.0 (we humans) arrived about a hundred millennia ago, and many AI researchers think that Life 3.0 may arrive during the coming century, perhaps even during our lifetime, spawned by progress in AI. What will happen, and what will this mean for us? That’s the topic of this book.

Artificial Intelligence: An Illustrated History: From Medieval Robots to Neural Networks

by Clifford A. Pickover

An illustrated journey through the past, present, and future of artificial intelligence.

From medieval robots and Boolean algebra to facial recognition, artificial neural networks, and adversarial patches, this fascinating history takes readers on a vast tour through the world of artificial intelligence. Award-winning author Clifford A. Pickover (The Math Book, The Physics Book, Death & the Afterlife) explores the historic and current applications of AI in such diverse fields as computing, medicine, popular culture, mythology, and philosophy, and considers the enduring threat to humanity should AI grow out of control. Across 100 illustrated entries, Pickover provides an entertaining and informative look into when artificial intelligence began, how it developed, where it’s going, and what it means for the future of human-machine interaction. 

https://www.goodreads.com/book/show/44443017-artificial-intelligence




Elon Musk wants to enhance us as superhuman cyborgs to deal with superintelligent AI

April 21, 2017

It’s the year 2021. A quadriplegic patient has just had one million “neural lace” microparticles injected into her brain, making her the world’s first human with an internet communication system using a wireless implanted brain-mind interface — and the first superhuman cyborg. …
No, this is not a science-fiction movie plot. It’s the actual first public step — just four years from now — in Tesla CEO Elon Musk’s business plan for his latest new venture, Neuralink. It’s now explained for the first time on Tim Urban’s WaitButWhy blog.
Dealing with the superintelligence existential risk
Such a system would allow for radically improved communication between people, Musk believes. But for Musk, the big concern is AI safety. “AI is obviously going to surpass human intelligence by a lot,” he says. “There’s some risk at that point that something bad happens, something that we can’t control, that humanity can’t control after that point — either a small group of people monopolize AI power, or the AI goes rogue, or something like that.”
“This is what keeps Elon up at night,” says Urban. “He sees it as only a matter of time before superintelligent AI rises up on this planet — and when that happens, he believes that it’s critical that we don’t end up as part of ‘everyone else.’ That’s why, in a future world made up of AI and everyone else, he thinks we have only one good option: To be AI.”
To achieve this, Neuralink CEO Musk has met with more than 1,000 people, narrowing it down initially to eight experts, such as Paul Merolla, who spent the last seven years as the lead chip designer at IBM on their DARPA-funded SyNAPSE program to design neuromorphic (brain-inspired) chips with 5.4 billion transistors (each with 1 million neurons and 256 million synapses), and Dongjin (DJ) Seo, who while at UC Berkeley designed an ultrasonic backscatter system for powering and communicating with implanted bioelectronics called neural dust for recording brain activity.*
Becoming one with AI — a good thing?
Neuralink’s goal is to create a “digital tertiary layer” to augment the brain’s current cortex and limbic layers — a radical high-bandwidth, long-lasting, biocompatible, bidirectionally communicative, non-invasively implanted system made up of micron-size (millionth of a meter) particles communicating wirelessly via the cloud and internet to achieve super-fast communication speed and increased bandwidth (carrying more information).
“We’re going to have the choice of either being left behind and being effectively useless or like a pet — you know, like a house cat or something — or eventually figuring out some way to be symbiotic and merge with AI. … A house cat’s a good outcome, by the way.”
But machine intelligence is already vastly superior to human intelligence in specific areas (such as Go, where Google’s AlphaGo prevails) and is often inexplicable. So how do we know superintelligence has the best interests of humanity in mind?
“Just an engineering problem”
Musk’s answer: “If we achieve tight symbiosis, the AI wouldn’t be ‘other’ — it would be you, with a relationship to your cortex analogous to the relationship your cortex has with your limbic system.” OK, but then how does an inferior intelligence know when it’s achieved full symbiosis with a superior one — or when AI goes rogue?
And what about experts in neuroethics, psychology, law? Musk says it’s just “an engineering problem. … If we can just use engineering to get neurons to talk to computers, we’ll have done our job, and machine learning can do much of the rest.”
However, it’s not clear how we could be assured our brains aren’t hacked, spied on, and controlled by a repressive government or by other humans — especially those with a more recently updated software version or covert cyborg hardware improvements.
In addition, the devices mentioned in WaitButWhy all require some form of neurosurgery, unlike Facebook’s research project to use non-invasive near-infrared light, as shown in this experiment, for example.** And getting implants for non-medical use approved by the FDA will be a challenge, to grossly understate it.
“I think we are about 8 to 10 years away from this being usable by people with no disability,” says Musk, optimistically. However, Musk does not lay out a technology roadmap for going further, as MIT Technology Review notes.
Nonetheless, Neuralink sounds awesome — it should lead to some exciting neuroscience breakthroughs. And Neuralink now has 16 San Francisco job listings here.
* Other experts: Vanessa Tolosa, Lawrence Livermore National Laboratory, one of the world’s foremost researchers on biocompatible materials; Max Hodak, who worked on the development of some groundbreaking BMI technology at Miguel Nicolelis’s lab at Duke University, Ben Rapoport, Neuralink’s neurosurgery expert, with a Ph.D. in Electrical Engineering and Computer Science from MIT; Tim Hanson, UC Berkeley post-doc and expert in flexible Electrodes for Stable, Minimally-Invasive Neural Recording; Flip Sabes, professor, UCSF School of Medicine expert in cortical physiology, computational and theoretical modeling, and human psychophysics and physiology; and Tim Gardner, Associate Professor of Biology at Boston University, whose lab works on implanting BMIs in birds, to study “how complex songs are assembled from elementary neural units” and learn about “the relationships between patterns of neural activity on different time scales.”
** This binary experiment and the binary Brain-to-brain (B2B) internet communication system mentioned above are the equivalents of the first binary (dot–dash) telegraph message, sent May 24, 1844: ”What hath God wrought?”

May 22, 2017

When AI improves human performance instead of taking over

It’s not about artificial intelligence (AI) taking over — it’s about AI improving human performance, a new study by Yale University researchers has shown.
“Much of the current conversation about artificial intelligence has to do with whether AI is a substitute for human beings. We believe the conversation should be about AI as a complement to human beings,” said Nicholas Christakis, co-director of the Yale Institute for Network Science (YINS) at Yale University and senior author of the study.*
AI doesn’t even have to be super-sophisticated to make a difference in people’s lives; even “dumb AI” can help human groups, based on the study, which appears in the May 18, 2017 edition of the journal Nature.
How bots can boost human performance
In a series of experiments using teams of human players and autonomous software agents (“bots”), the bots boosted the performance of human groups and the individual players, the researchers found.
The experiment design involved an online color-coordination game that required groups of people to coordinate their actions toward a collective goal: every node had to end up with a color different from all of its neighboring nodes. The subjects were paid a US$2 show-up fee and a declining bonus of up to US$3 depending on the speed of reaching a global solution to the coordination problem (in which every player in a group had chosen a different color from their connected neighbors). If they did not reach a global solution within five minutes, the game was stopped and the subjects earned no bonus.
The human players also interacted with anonymous bots that were programmed with three levels of behavioral randomness — meaning the AI bots sometimes deliberately made mistakes (introduced “noise”). In addition, sometimes the bots were placed in different parts of the social network to try different strategies.
The result: The bots reduced the median time for groups to solve problems by 55.6%. The experiment also showed a cascade effect: People whose performance improved when working with the bots then influenced other human players to raise their game. More than 4,000 people participated in the experiment, which used Yale-developed software called breadboard.
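The setup is straightforward to reproduce in miniature. The sketch below is a toy version of such a networked color game — not Yale’s breadboard platform, and the graph size, color count, and noise level are all illustrative. Players greedily pick a color their neighbors aren’t using, while a few “bot” nodes occasionally make deliberate random mistakes that can shake the group out of a stuck configuration.

import random

def random_graph(n=20, extra_edges=10):
    """Build a connected random graph as an adjacency dict (a ring plus random chords)."""
    adj = {i: set() for i in range(n)}
    for i in range(n):                      # the ring keeps the graph connected
        j = (i + 1) % n
        adj[i].add(j); adj[j].add(i)
    while extra_edges > 0:
        a, b = random.sample(range(n), 2)
        if b not in adj[a]:
            adj[a].add(b); adj[b].add(a)
            extra_edges -= 1
    return adj

def play(adj, colors=("red", "green", "blue"), bots=(), noise=0.1, max_steps=10_000):
    """One node updates per step; bot nodes sometimes choose at random (behavioral noise)."""
    state = {v: random.choice(colors) for v in adj}
    for step in range(1, max_steps + 1):
        v = random.choice(list(adj))
        if v in bots and random.random() < noise:
            state[v] = random.choice(colors)          # a deliberate "mistake"
        else:
            taken = {state[u] for u in adj[v]}
            free = [c for c in colors if c not in taken]
            state[v] = random.choice(free) if free else random.choice(colors)
        if all(state[a] != state[b] for a in adj for b in adj[a]):
            return step                               # global solution reached
    return None                                       # unsolved within the step budget

adj = random_graph()
print("greedy only:   ", play(adj, bots=()))
print("with noisy bots:", play(adj, bots={0, 7, 14}))

On easy instances the two runs look similar; the benefit of the noise shows up mainly when the greedy dynamics get stuck, which is the qualitative effect the study measured.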
The findings have implications for a variety of situations in which people interact with AI technology, according to the researchers. Examples include human drivers who share roadways with autonomous cars and operations in which human soldiers work in tandem with AI.
“There are many ways in which the future is going to be like this,” Christakis said. “The bots can help humans to help themselves.”
Practical business AI tools
One example: Salesforce CEO Marc Benioff uses a bot called Einstein to help him run his company, Business Insider reported Thursday (May 18, 2017).

“Powered by advanced machine learning, deep learning, predictive analytics, natural language processing and smart data discovery, Einstein’s models will be automatically customised for every single customer,” according to the Salesforce blog. “It will learn, self-tune and get smarter with every interaction and additional piece of data. And most importantly, Einstein’s intelligence will be embedded within the context of business, automatically discovering relevant insights, predicting future behavior, proactively recommending best next actions and even automating tasks.”
Benioff says he also uses a version called Einstein Guidance for forecasting and modeling. It even helps end internal politics at executive meetings, calling out under-performing executives.
“AI is the next platform. All future apps for all companies will be built on AI,” Benioff predicts.
* Christakis is a professor of sociology, ecology & evolutionary biology, biomedical engineering, and medicine at Yale. Grants from the Robert Wood Johnson Foundation and the National Institute of Social Sciences supported the research.


Abstract of Locally noisy autonomous agents improve global human coordination in network experiments
Coordination in groups faces a sub-optimization problem and theory suggests that some randomness may help to achieve global optima. Here we performed experiments involving a networked colour coordination game in which groups of humans interacted with autonomous software agents (known as bots). Subjects (n = 4,000) were embedded in networks (n = 230) of 20 nodes, to which we sometimes added 3 bots. The bots were programmed with varying levels of behavioural randomness and different geodesic locations. We show that bots acting with small levels of random noise and placed in central locations meaningfully improve the collective performance of human groups, accelerating the median solution time by 55.6%. This is especially the case when the coordination problem is hard. Behavioural randomness worked not only by making the task of humans to whom the bots were connected easier, but also by affecting the gameplay of the humans among themselves and hence creating further cascades of benefit in global coordination in these heterogeneous systems.

The Guardian | God in the Machine: my strange journey into transhumanism
Essay on the future of human spirituality inspired by Ray Kurzweil's writings.
May 9, 2017
Meghan O’Gieblyn
date: April 18, 2017
1. |  Humanity will change the nature of mortality in our post-biological future
I first read Ray Kurzweil’s book The Age of Spiritual Machines in 2006. Kurzweil writes, “The 21st century will be different. The human species, along with the computational technology it created, will be able to solve age old problems. It will be in a position to change the nature of mortality in a post-biological future.”
Ray Kurzweil is one of the first major thinkers to bring these ideas to the mainstream — and legitimize them for a wide audience. His ascent in 2012 to a director of engineering position at Google heralded, for many, a symbolic merger between the philosophy called transhumanism and the clout of a major technological enterprise. By 2045, Kurzweil predicts, technology will be inside our bodies. At that moment, the arc of progress will curve up into a vertical line.


2. |  Ray Kurzweil writes that humans will be transformed in “spiritual machines”
Within months of encountering Ray Kurzweil’s book, I became totally immersed in transhumanist philosophy. I researched topics like nanotechnology and brain-computer interfaces. I wanted to know if transhumanist ideas were compatible with Christian eschatology. Was it possible that tech could be how humanity achieves immortality? At Bible school, I studied a branch of theology that divides all of history into successive stages by which God reveals truth.
Like the theologians at my school, Ray Kurzweil — a leading proponent of transhumanist philosophy — has his own historical narrative. In his book he divides the evolution of life into successive epochs. We’re living in the 5th epoch, when human intelligence begins to merge with tech.
Soon we’ll reach the singularity, Kurzweil says — the point where humans will transform into what he calls “spiritual machines.” We’ll be able to transfer our minds to computers, letting us live forever. Our bodies will also become immune to disease and aging. Using tech, humanity will transform Earth into a paradise — then migrate to space, terraforming other planets.

AlphaGo defeats world’s top Go player. What’s next?
May 28, 2017
What does the research team behind AlphaGo do next after winning the three-game match Saturday (May 27) against Ke Jie — the world’s top Go player — at the Future of Go Summit in Wuzhen, China?

“Throw their energy into the next set of grand challenges, developing advanced general algorithms that could one day help scientists as they tackle some of our most complex problems, such as finding new cures for diseases, dramatically reducing energy consumption, or inventing revolutionary new materials,” says DeepMind Technologies CEO Demis Hassabis.

Academic paper, Go teaching tool
But it’s “not the end of our work with the Go community,” he adds. “We plan to publish one final academic paper later this year that will detail the extensive set of improvements we made to the algorithms’ efficiency and potential to be generalised across a broader set of problems.”


High-speed light-based systems could replace supercomputers for certain ‘deep learning’ calculations

Low power requirements for photons (instead of electrons) may make deep learning more practical in future self-driving cars and mobile consumer devices
June 14, 2017
A team of researchers at MIT and elsewhere has developed a new approach to deep learning systems — using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep-learning computations.
Deep-learning systems are based on artificial neural networks that mimic the way the brain learns from an accumulation of examples. They can enable technologies such as face- and voice-recognition software, or scour vast amounts of medical data to find patterns that could be useful diagnostically, for example.
But the computations these systems carry out are highly complex and demanding, even for supercomputers. Traditional computer architectures are not very efficient for calculations needed for neural-network tasks that involve repeated multiplications of matrices (arrays of numbers). These can be computationally intensive for conventional CPUs or even GPUs.
Programmable nanophotonic processor
Instead, the new approach uses an optical device that the researchers call a “programmable nanophotonic processor.” Multiple light beams are directed in such a way that their waves interact with each other, producing interference patterns that “compute” the intended operation.
The optical chips using this architecture could, in principle, carry out dense matrix multiplications (the most power-hungry and time-consuming part in AI algorithms) for learning tasks much faster, compared to conventional electronic chips. The researchers expect a computational speed enhancement of at least two orders of magnitude over the state-of-the-art and three orders of magnitude in power efficiency.
“This chip, once you tune it, can carry out matrix multiplication with, in principle, zero energy, almost instantly,” says Marin Soljacic, one of the MIT researchers on the team.
To demonstrate the concept, the team set the programmable nanophotonic processor to implement a neural network that recognizes four basic vowel sounds. Even with the prototype system, they were able to achieve a 77 percent accuracy level, compared to about 90 percent for conventional systems. There are “no substantial obstacles” to scaling up the system for greater accuracy, according to Soljacic.
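At its core, the operation being offloaded to light is nothing exotic: a trained network layer is a matrix multiplication followed by a simple nonlinearity. The sketch below shows that computation in ordinary NumPy with made-up weights and sizes (four output classes standing in for the four vowel sounds); the photonic processor’s claim is simply that the W @ x step can be carried out optically at far lower energy cost.

import numpy as np

# A dense neural-network layer is essentially one matrix multiplication plus a
# nonlinearity. The weights and sizes below are invented for illustration.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 256))      # "trained" weights: 4 classes, 256 input features
b = rng.normal(size=4)             # biases
x = rng.normal(size=256)           # one input sample, e.g. audio features

logits = W @ x + b                 # the matrix-vector product dominates the cost
probabilities = np.exp(logits) / np.exp(logits).sum()   # softmax over the 4 classes
print(probabilities)               # e.g. scores for four vowel sounds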
The team says it will still take a lot more time and effort to make this system useful. However, once the system is scaled up and fully functioning, the low-power system should find many uses, especially for situations where power is limited, such as in self-driving cars, drones, and mobile consumer devices. Other uses include signal processing for data transmission and computer centers.
The research was published Monday (June 12, 2017) in a paper in the journal Nature Photonics (open-access version available on arXiv).
The team also included researchers at Elenion Technologies of New York and the Université de Sherbrooke in Quebec. The work was supported by the U.S. Army Research Office through the Institute for Soldier Nanotechnologies, the National Science Foundation, and the Air Force Office of Scientific Research.


Abstract of Deep learning with coherent nanophotonic circuits
Artificial neural networks are computational network models inspired by signal processing in the brain. These models have dramatically improved performance for many machine-learning tasks, including speech and image recognition. However, today’s computing hardware is inefficient at implementing neural networks, in large part because much of it was designed for von Neumann computing schemes. Significant effort has been made towards developing electronic architectures tuned to implement artificial neural networks that exhibit improved computational speed and accuracy. Here, we propose a new architecture for a fully optical neural network that, in principle, could offer an enhancement in computational speed and power efficiency over state-of-the-art electronics for conventional inference tasks. We experimentally demonstrate the essential part of the concept using a programmable nanophotonic processor featuring a cascaded array of 56 programmable Mach–Zehnder interferometers in a silicon photonic integrated circuit and show its utility for vowel recognition.
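The mesh of Mach–Zehnder interferometers described in the abstract can directly realize only unitary transformations, so an arbitrary weight matrix has to be factored first. The short NumPy sketch below illustrates that factorization idea (singular value decomposition into two unitary factors and a diagonal); it is a conceptual illustration, not the authors’ implementation.

# Sketch: an arbitrary real weight matrix factored into two unitary (here orthogonal)
# matrices and a diagonal -- the kind of structure a photonic interferometer mesh can realize.
# Illustrative only; not the Nature Photonics authors' code.
import numpy as np

W = np.random.randn(4, 4)                 # hypothetical 4x4 weight matrix
U, s, Vt = np.linalg.svd(W)               # W = U @ diag(s) @ Vt

# U and Vt are orthogonal (unitary) -> implementable as interferometer meshes;
# diag(s) is a simple per-channel gain -> implementable as attenuators/amplifiers.
W_reconstructed = U @ np.diag(s) @ Vt
print(np.allclose(W, W_reconstructed))    # True: the factorization is exact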
References:

·  Next-generation computer chip with two heads (November 5, 2020)
Summary: Engineers have developed a computer chip that combines two functions – logic operations and data storage – into a single architecture, paving the way to more efficient devices. Their technology is particularly promising for applications relying on artificial intelligence.
https://www.sciencedaily.com/releases/2020/11/201105112954.htm


How Artificial Intelligence will Transform IT Operations and DevOps
Director at Adversitement • Helping Data-Driven Companies Generate Success • Top 10 Big Data, Data Science, IoT, BI Influencer
To state that DevOps and IT operations teams will face new challenges in the coming years sounds a bit redundant, as their core responsibility is to solve problems and overcome challenges. However, the dramatic pace at which the current landscape of processes, technologies, and tools is changing has made keeping up genuinely difficult. Moreover, the pressure business users put on DevOps and IT operations teams is staggering: they expect everything to be solved with a tap on an app. At the back end, however, handling issues is a different ball game; users can hardly imagine how difficult it is to find a problem and solve it.
One of the biggest challenges IT operations and DevOps teams face nowadays is being able to pinpoint the small yet potentially harmful issues in large streams of Big Data being logged in their environment. Put simply, it is just like finding a needle in the haystack.
If you work in the IT department of a company with an online presence that boasts 24/7 availability, here is a scenario that may sound familiar. Assume you get a call in the middle of the night from an angry customer or your boss complaining about a failed credit card transaction or an application crash. You go to your laptop right away and open the log management system. You see there are more than a hundred thousand messages logged in the relevant timeframe – a data set impossible for a human being to review line by line.
So what do you do in such a situation? 
It is the story of every IT operations and DevOps professional: many sleepless nights spent navigating a sea of log entries to find the critical entries that triggered a specific incident. This is where real-time, centralized log analytics comes to the rescue. It helps teams understand the essential aspects of their log data and easily identify the main issues. With this, the troubleshooting process becomes shorter and more effective, and experts can even predict future problems.
AI and Its Effect on IT Operations and DevOps
While Artificial Intelligence (AI) was just a buzzword a few decades ago, it is now being applied across different industries for a diverse range of purposes. By combining big data, AI, and human domain knowledge, technologists and scientists have been able to create astounding breakthroughs and opportunities that used to be possible only in science fiction novels and movies.
As IT operations become agile and dynamic, they are also getting immensely complex. The human mind is no longer capable of keeping up with the velocity, volume, and variety of Big Data streaming through daily operations, making AI a powerful and essential tool for optimizing analysis and decision-making. AI helps fill the gap between humans and Big Data, giving teams the operational intelligence and speed needed to significantly ease the burden of troubleshooting and real-time decision-making.
Addressing the Elephant in the Room – How AI can Help
In all the situations above, one thing is common: these companies need a solution – as discussed at the beginning – that helps IT and DevOps teams quickly find problems in the mountain of log data entries. To identify the single log entry that is putting cracks in the environment and crashing your applications, wouldn’t it be easier if you knew exactly what kind of error to filter your log data for? It would easily cut the work in half.
One solution is a platform that has collected data from the internet about all kinds of related incidents, observed how people using similar setups resolved them in their systems, and scanned your system to identify potential problems. One way to achieve this is to design a system that mimics how a user investigates, monitors, and troubleshoots events, and allow it to develop an understanding of how humans interact with the data instead of trying to analyze the data itself. For example, this technology can be similar to Amazon’s product recommendation system and Google’s PageRank algorithm, but focused on log data.
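As a rough sketch of the recommendation-style matching described above, the snippet below compares a new log line against a small library of previously seen incidents by text similarity. The incidents and the log line are made up, and this is a toy illustration of the general idea, not the Cognitive Insights product.

# Toy sketch: match a new log line against known incidents by TF-IDF cosine similarity.
# Hypothetical data; not an implementation of any commercial log-analytics product.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_incidents = [
    "java.lang.OutOfMemoryError: GC overhead limit exceeded in payment service",
    "connection pool exhausted: too many open connections to postgres",
    "TLS handshake timeout while calling external credit card gateway",
]

new_log_line = "ERROR gateway: TLS handshake timed out after 30s (credit card API)"

vectorizer = TfidfVectorizer()
incident_vectors = vectorizer.fit_transform(known_incidents)   # index the known incidents
query_vector = vectorizer.transform([new_log_line])            # vectorize the new entry
scores = cosine_similarity(query_vector, incident_vectors).ravel()

best = scores.argmax()
print(f"most similar known incident ({scores[best]:.2f}):", known_incidents[best])

In a real deployment the “library” would be the crowd-sourced knowledge base the article describes, and the matching model would be far richer, but the matching step has this general shape.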
Introducing Cognitive Insights
A recent technology implements a solution along these lines. The technology, which has been generating quite a lot of buzz lately, is called Cognitive Insights. It uses machine-learning algorithms to match human domain knowledge with log data, along with open source repositories, discussion forums, and social threads. Using all this information, it builds a data reservoir of relevant insights that may contain solutions to the wide range of critical issues IT operations and DevOps teams face on a daily basis.
The Real-Time Obstacles
DevOps engineers, IT operations managers, CTOs, VPs of engineering, and CISOs face numerous challenges that can be mitigated effectively by integrating AI into log analysis and related operations. While there are several applications of Cognitive Insights, the two main use cases are:
  • Security
Distributed Denial of Service (DDoS) attacks are increasingly becoming common. What used to be just limited to governments, high-profile websites, and multinational organizations is now targeting prominent individuals, SMBs and mid-sized enterprises. 
To ward off such attacks, having a centralized logging architecture to identify suspicious activities and pinpoint the potential threats among thousands of entries is essential. For this, anti-DDoS mitigation through Cognitive Insights has proven to be highly effective. Leading names, such as Dyn and British Airways, that sustained significant damage from DDoS attacks in the past now have a full-fledged, ELK-based anti-DDoS mitigation strategy in place to keep hackers at bay and secure their operations from future attacks.
  • IT Operations
Wouldn’t it be great to have all your logs compiled in a single place, with each entry carefully monitored and registered? You would be able to view the process flow clearly and execute queries against logs from different applications all from one place, dramatically increasing the efficiency of your IT operations. One of the biggest challenges IT operations and DevOps teams face is pinpointing the small yet potentially harmful issues in large streams of log data in their environment, and this is precisely what Cognitive Insights does. Since the core of the program is based on the ELK stack, it sorts and simplifies the data and makes it easy to get a clear picture of your IT operations. Asurion and Performance Gateway are examples of companies that have leveraged Cognitive Insights and taken their IT game up a notch.
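Because the platform described here sits on the ELK stack, the underlying queries look roughly like the following time-bounded Elasticsearch search for error-level entries. The host, index pattern, and field names ("level", "@timestamp", "message") are assumptions for illustration.

# Sketch: query a centralized ELK/Elasticsearch log store for recent error entries.
# Host, index pattern, and field names ("level", "@timestamp", "message") are assumptions.
import requests

query = {
    "size": 50,
    "query": {
        "bool": {
            "must":   [{"match": {"level": "ERROR"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-15m"}}}],
        }
    },
    "sort": [{"@timestamp": {"order": "desc"}}],
}

resp = requests.post("http://localhost:9200/logs-*/_search", json=query, timeout=10)
for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"].get("@timestamp"), hit["_source"].get("message"))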
The Good AI Integration can Yield
With AI-driven log analytics systems, it becomes considerably easier to find the needle in the haystack and solve issues efficiently. Such a system will have a considerable impact on the management and operations of the entire organization. As with the companies discussed above in this blog, integrating AI with a log management system brings benefits such as:
  • Improved customer success
  • Monitoring and customer support
  • Risk reduction and resource optimization
  • Maximized efficiency by making logging data accessible
In other words, Cognitive Insights and other similar systems can be of great help in data log management and troubleshooting. 
Rent-A-Center (RAC) is a Texas-based, Fortune 1000 company that offers a wide range of rent-to-own products and services. It has over 3,000 stores and 2,000 kiosks across Mexico, Puerto Rico, Canada, and the United States. The company tried integrating two different ELK stacks, but handling 100 GB of data every day was too much of a hassle, not to mention the exorbitant cost and the time spent every day on disk management, memory tuning, additional data-input capabilities, and other technical issues. RAC transitioned to Cognitive Insights, which gave it the confidence that it would be able to detect future anomalies and made it quite easy to scale with the constantly growing volume of data. The company also benefitted from a dedicated IT team managing its on-premise and off-premise ELK stacks.
The Role of Open Source in Data Log Management
Many reputed vendors are proactively researching and testing AI in different avenues to enhance the efficiency of data log management systems, and more and more of them are offering ELK-based logging solutions. It is no surprise that ELK is fast becoming part of the trend: it gives companies a way to install a setup without incurring a staggering upfront cost and provides basic graphing and searching capabilities. To actually recognize the issues in their haystack of log data, organizations can opt for newer technologies, such as Cognitive Insights, to quickly find the needle and eliminate the main problems.



Introduction

Concern about a “jobless future” has never been greater. Seemingly every day, an academic, researcher or technology leader suggests that in a world of automation and artificial intelligence (AI), workers will increasingly be surplus to what businesses need – or, as Stanford University’s Jerry Kaplan puts it in his best-selling book, it won’t be long before “humans need not apply.”1 The concerns are understandable. AI – long an academic theory and Hollywood plotline – is becoming “real” at an astonishing pace and finding its way into more and more aspects of work, rest and play. AI is now being used to read X-rays and MRIs. It’s at the heart of stock trading. Chat with Siri or Alexa, and you’re using AI. Soon, AI will be found in every job, profession and industry around the world. When machines do everything, many people wonder: what will we do? What work will be left for people? How will we make a living when machines are cheaper, faster and smarter than we are – machines that don’t take breaks or vacations, don’t get sick and don’t care about chatting with their colleagues about last night’s game? For many people, the future of work looks like a bleak place, full of temporary jobs (a “gig” economy), minimum-wage labor and a ruling technocracy safely hidden away in their gated communities and their circular living machines. Although plausible, this vision of the future is not one we share. Our vision is quite different – and much more optimistic. It is based on a different reading of the trends and the facts; a different interpretation of how change occurs and how humans evolve. Our view of the future of work is based on the following principles:… https://www.cognizant.com/whitepapers/21-jobs-of-the-future-a-guide-to-getting-and-staying-employed-over-the-next-10-years-codex3049.pdf


Are We Ready for Quantum Computers?

Hardware hasn’t caught up with theory, but we’re already lining up many previously intractable problems for when it does
A recent paper by Google claiming that a quantum computer performed a specific calculation that would choke even the world’s fastest classical supercomputer has raised many more questions than it answered. Chief among them is this: When full-fledged quantum computers arrive, will we be ready?
Google achieved this milestone against the backdrop of a more sobering reality: Even the best gate-based quantum computers today can only muster around 50 qubits. A qubit, or quantum bit, is the basic piece of information in quantum computing, analogous to a bit in classical computing but so much more.
Gate-based quantum computers operate using logic gates but, in contrast with classical computers, they exploit inherent properties of quantum mechanics such as superposition, interference and entanglement. Current quantum computers are so noisy and error-prone that the information in their quantum states is lost within tens of microseconds through a mechanism called decoherence, and through faulty gates.
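A back-of-the-envelope calculation shows why that noise is so limiting. The error rate and circuit depths below are illustrative assumptions, not measurements of any particular machine, but they convey how quickly the chance of an error-free run collapses as circuits get deeper.

# Back-of-the-envelope sketch: probability that a circuit runs with no gate error.
# Error rate and depths are illustrative assumptions, not measurements of any device.
gate_error = 0.005                      # assume 0.5% error per two-qubit gate
for depth in (10, 100, 1000, 10000):
    p_clean = (1 - gate_error) ** depth
    print(f"{depth:>6} gates -> P(no error) ~ {p_clean:.3f}")
# The steep drop-off is why error correction (and thus many more physical qubits)
# is needed before large, useful computations become reliable.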
Still, researchers are making demonstrable, if slow, progress toward more usable qubits. Perhaps in 10 years, or 20, we’ll reach the goal of reliable, large-scale, error-tolerant quantum computers that can solve a wide range of useful problems.
When that day comes, what should we do with them?
We’ve had decades to prepare. In the early 1980s, the American physicist Paul Benioff published a paper demonstrating that a quantum-mechanical model of a Turing machine—a computer—was theoretically possible. Around the same time, Richard Feynman argued that simulating quantum systems at any useful scale on classical computers would always be impossible because the problem would get far, far too big: the required memory and time would increase exponentially with the volume of the quantum system. On a quantum computer, the required resources would scale up far less radically.
Feynman really launched the field of quantum computing when he suggested that the best way to study quantum systems was to simulate them on quantum computers. Simulating quantum physics is the app for quantum computers. They’re not going to be helping you stream video on your smartphone. If large, fault-tolerant quantum computers can be built, they will enable us to probe the strange world of quantum mechanics to unprecedented depths. It follows different rules than the world we observe in our everyday lives and yet underpins everything.
On a big enough quantum computer, we could simulate quantum field theories to study the most fundamental nature of the universe. In chemistry and nanoscale research, where quantum effects dominate, we could investigate the basic properties of materials and design new ones to understand mechanisms such as unconventional superconductivity. We could simulate and understand new chemical reactions and new compounds, which could aid in drug discovery.  
By diving deep into mathematics and information theory, we already have developed many theoretical tools to do these things, and the algorithms are farther along than the technology to build the actual machines. It all starts with a theoretical model of the quantum computer, which establishes how it will harness quantum mechanics to perform a useful computation. Researchers write quantum algorithms to perform a task or solve a problem using that model. These are basically a sequence of quantum gates together with a measurement of the quantum state that provides the desired classical information.
So, for instance, Grover’s algorithm shows a way to perform faster searches. Shor’s algorithm has proved that large quantum computers will one day be able to break computer security systems based on RSA, a method widely used to protect, for instance, e-mail and financial websites worldwide.
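To give a flavor of what such an algorithm is at this level of description (a sequence of operations on a quantum state followed by a measurement), here is a classical NumPy simulation of Grover's search over 16 items. It only simulates the mathematics on an ordinary computer; it is not code for a quantum device.

# Classical simulation of Grover's search over N = 16 items (one marked item).
# This simulates the amplitudes on a normal CPU -- it is not quantum hardware code.
import numpy as np

N = 16
marked = 11                               # hypothetical index we are searching for
amps = np.full(N, 1 / np.sqrt(N))         # uniform superposition over all items

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))   # ~3 iterations for N = 16
for _ in range(iterations):
    amps[marked] *= -1                    # oracle: flip the sign of the marked item
    amps = 2 * amps.mean() - amps         # diffusion: reflect amplitudes about the mean

probs = amps ** 2
print("iterations:", iterations)
print("P(measure marked item):", round(probs[marked], 3))   # close to 1
# A classical search needs ~N/2 lookups on average; Grover needs only ~sqrt(N) iterations.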

In my research, my colleagues and I have demonstrated very efficient algorithms to perform useful computations and study physical systems. We have also demonstrated one of these methods in one of the first small-scale quantum simulations ever done of a system of electrons, on a nuclear magnetic resonance quantum information processor. Others have followed up on our work and recently simulated simple quantum field theories on the noisy intermediate-scale quantum computers available today and in laboratory experiments.


Brain-computer-interface training helps tetraplegics win avatar race

When humans actively participate with AI in improving performance
May 14, 2018
Noninvasive brain–computer interface (BCI) systems can restore functions lost to disability — allowing for spontaneous, direct brain control of external devices without the risks associated with surgical implantation of neural interfaces. But as machine-learning algorithms have become faster and more powerful, researchers have mostly focused on increasing performance by optimizing pattern-recognition algorithms.
But what about letting patients actively participate with AI in improving performance?
To test that idea, researchers at the École Polytechnique Fédérale de Lausanne (EPFL), based in Geneva, Switzerland, conducted research using “mutual learning” between computer and humans — two severely impaired (tetraplegic) participants with chronic spinal cord injury. The goal: win a live virtual racing game at an international event.
Controlling a racing-game avatar using a BCI
The participants were trained to improve control of an avatar (a person-substitute shown on a computer screen) in a virtual racing game. The experiment used a brain-computer interface (BCI), which uses electrodes on the head to pick up control signals from a person’s brain.
Each participant (called a “pilot”) controlled an on-screen avatar in a three-part race. This required mastery of separate commands for spinning, jumping, sliding, and walking without stumbling.
After training for several months, on Oct. 8, 2016, the two pilots participated (on the “Brain Tweakers” team) in Cybathlon in Zurich, Switzerland — the first international para-Olympics for disabled individuals in control of bionic assistive technology.*
The BCI-based race consisted of four brain-controlled avatars competing in a virtual racing game called “Brain Runners.” To accelerate each pilot’s avatar, they had to issue up to three mental commands (or intentional idling) on corresponding color-coded track segments.
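The pattern-recognition half of such a system can be sketched very roughly: short windows of EEG are reduced to band-power features, and a simple classifier maps those features to mental commands. The toy below uses synthetic data and a generic classifier; it is not the EPFL team's pipeline, only an illustration of the kind of step involved.

# Toy sketch of the BCI pattern-recognition step: classify EEG band-power features
# into mental commands. Synthetic data; not the EPFL/Cybathlon pipeline.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_trials, n_features = 200, 16            # e.g., band power from several electrodes

# Simulate two mental commands ("movement imagery" vs "rest") with shifted features
X_move = rng.normal(0.0, 1.0, (n_trials, n_features)) + 0.6
X_rest = rng.normal(0.0, 1.0, (n_trials, n_features))
X = np.vstack([X_move, X_rest])
y = np.array([1] * n_trials + [0] * n_trials)

clf = LinearDiscriminantAnalysis().fit(X[::2], y[::2])       # train on half the trials
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))     # test on the other half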
Maximizing BCI performance by humanizing mutual learning
The two participants in the EPFL research had the best three times overall in the competition. One of those pilots won the gold medal and the other held the tournament record.
The researchers believe that with the mutual-learning approach, they have “maximized the chances for human learning by infrequent recalibration of the computer, leaving time for the human to better learn how to control the sensorimotor rhythms that would most efficiently evoke the desired avatar movement. Our results showcase strong and continuous learning effects at all targeted levels — machine, subject, and application — with both [participants] over a longitudinal study lasting several months,” the researchers conclude.
Reference (open-source): PLoS Biology May 10, 2018
* At Cybathlon, each team comprised a pilot together with scientists and technology providers of the functional and assistive devices used, which can be prototypes developed by research labs or companies, or commercially available products. That also makes Cybathlon a competition between companies and research laboratories. The next Cybathlon will be held in Zurich in 2020.



Don’t Regulate Artificial Intelligence: Starve It

By William Davidow, Michael S. Malone on May 4, 2020

Artificial intelligence is still in its infancy. But it may well prove to be the most powerful technology ever invented. It has the potential to improve health, supercharge intellects, multiply productivity, save the environment and enhance both freedom and democracy.
But as that intelligence continues to climb, irresponsible use of AI also brings the potential for it to become a social and cultural H-bomb. It’s a technology that can deprive us of our liberty, power autocracies and genocides, program our behavior, turn us into human machines and, ultimately, turn us into slaves. Therefore, we must be very careful about the ascendance of AI; we don’t dare make a mistake. And our best defense may be to put AI on an extreme diet.
We already know certain threatening attributes of AI. For one thing, the progress of this technology has been, and will continue to be, shockingly quick. Many people were likely stunned to read recently the announcement by Microsoft that AI was proving to be better at reading X-rays than trained radiologists. Most newspaper readers don’t realize how much of their daily paper is now written by AI. That wasn’t supposed to happen; robots were supposed to supplant manual labor jobs, not professional brainwork. Yet here we are: AI is quickly gobbling up entire professions—and those jobs will never come back.
We also are getting closer to creating machines capable of artificial general intelligence—that is, machines as intelligent as humans. We may never get all of the way to actual consciousness, but in terms of processing power, inference, metaphor and even acquired wisdom, it is easy to imagine AI surpassing humanity. More than 20 years ago, chess master Garry Kasparov, playing IBM’s supercomputer Deep Blue, sensed a mind on the other side of the board. Today, there are hundreds of thousands of computers in use around the world that are more powerful than Deep Blue—and that doesn’t include millions of personal computers with access to the cloud.
We also know that profit motives and the will to power and control have already driven the rapid growth of vast libraries of antisocial applications. We need look no farther than the use of facial recognition and other AI techniques by the government of China to control the behavior of its citizens to see one such trajectory. That country’s Social Credit System monitors the behavior of millions of its citizens, rewarding them for what the government judges to be “good” behavior—and punishes them for “bad” behavior—by expanding or limiting their access to the institutions of daily life. Those being punished often do not even know that their lives are being circumscribed. They are simply not offered access to locations, promotions, entertainment and services enjoyed by their neighbors.
Meanwhile, here in the free world, the most worrisome threat is the use of AI by industry to exploit us—and by special interest groups to build and manipulate affinity groups in ways that increasingly polarize society. The latter activity is particularly egregious in election years like this one. We are also concerned about the use of AI by law enforcement, the IRS and regulators to better surveil people who might commit crimes, evade taxes or commit other transgressive acts. Some of this is necessary—but without guardrails it can lead to a police state.
Sound extreme? Consider that already all of us are being detained against our wills, often even against our knowledge, in what have been called “algorithmic prisons.” We do not know who sentenced us to them or even the terms of that sentence. What we do know is that based upon a decision made by some AI system about our behavior (such as a low credit rating), our choices are being limited. Predetermination is being made about the information we see: whether a company will look at our resume, or whether we are eligible for a home loan at a favorable rate, if we can rent a certain apartment, how much we must pay for car insurance (our driving quality monitored by new devices attached to our engine computers), whether we will get into the college of our choice and whether police should closely monitor our behavior.
Looking ahead, we can be certain that such monitoring will grow. We know as well that AI will be used by groups to recruit members and influence their opinions, and by foreign governments to influence elections. We can also be certain that as AI tools become more powerful and as the Internet of Things grows, the arsenal of the virtual weapons will become more commercially—and socially—deadly.
We need to act. The problem is that, even now, it will be hard to get the horse back into the barn. The alarm about the growing power of AI already has led to warnings from the likes of Stephen Hawking and Elon Musk. But it is hard to figure out what to do legislatively. We haven’t seen any proposals that would have a broad impact, without crushing the enormous potential advantages of AI.
Europeans now have the “right to explanation,” which requires a humanly readable justification for all decisions rendered by AI systems. Certainly, that transparency is desirable, but it is not clear how much good it will do. After all, AI systems are in constant flux. So, any actions taken based on the discovery of an injustice will be like shaping water. AI will just adopt a different shape.
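For the simplest models, a faithful “humanly readable justification” is easy to produce, which makes the contrast with large, constantly retrained AI systems clearer. The sketch below decomposes a toy linear credit score into per-feature contributions; the feature names, weights, and threshold are invented for illustration.

# Sketch: a per-feature explanation for a toy linear credit-scoring model.
# Feature names, weights, and threshold are invented for illustration only.
features = {"income_kUSD": 48, "late_payments": 3, "years_at_address": 2}
weights  = {"income_kUSD": 0.8, "late_payments": -12.0, "years_at_address": 2.5}
bias, threshold = 10.0, 40.0

contributions = {name: weights[name] * value for name, value in features.items()}
score = bias + sum(contributions.values())

print(f"score = {score:.1f} (approve if >= {threshold})")
for name, c in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {name:>18}: {c:+.1f}")
# For a linear model the explanation is exact; for large, constantly retrained AI
# systems, producing an equally faithful account is far harder, which is part of the point.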
We think a better approach is to make AI less powerful. That is, not to control artificial intelligence, but to put it on an extreme diet. And what does AI consume? Our personal information.
If AI systems and the algorithms in charge of “virtual prisons” cannot get their hands on this personal information, cannot indulge their insatiable hunger for this data, they necessarily will become much less intrusive and powerful.
How do we choke down the flow of this personal information? One obvious way is to give individuals ownership of their private data. Today, each of us is surrounded by a penumbra of data that we continuously generate. And that body of data is a free target for anyone who wishes to capture and monetize it. Why not, rather than letting that information flow directly into the servers of the world, instead store it in the equivalent of a safe deposit box at an information fiduciary like Equifax? Once it is safely there, the consumer could then decide who gets access to that data.
For example, suppose a consumer wants to get a loan, he or she could release the relevant information to a credit provider—who in turn would have the right to use that information for that one instance. If that consumer wants to get free service from, say, Facebook, he or she could provide the company with relevant information for that application alone. If the government needs access to that information to catch a terrorist, it will need to get a search warrant. (Another nice feature of such a system would be that the consumer would only have to go to one place to check the accuracy of the information on file.)
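A toy sketch of that “safe deposit box” idea is below: personal data lives in one vault, and every release is an explicit, single-purpose grant that the owner can audit. All class and field names are hypothetical.

# Toy sketch of a personal-data "safe deposit box" with per-use access grants.
# All names are hypothetical; this illustrates the idea, not any real product.
from dataclasses import dataclass, field

@dataclass
class DataVault:
    owner: str
    records: dict = field(default_factory=dict)
    grants: list = field(default_factory=list)          # audit log of releases

    def grant_once(self, requester: str, purpose: str, fields: list) -> dict:
        """Release only the named fields, for one stated purpose, and log it."""
        released = {k: self.records[k] for k in fields if k in self.records}
        self.grants.append({"to": requester, "purpose": purpose, "fields": fields})
        return released

vault = DataVault("alice", {"income": 52000, "address": "...", "browsing_history": "..."})
loan_view = vault.grant_once("credit_provider", "loan application", ["income"])
print(loan_view)        # {'income': 52000} -- address and browsing history stay private
print(vault.grants)     # the owner can see exactly who got what, and why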
Human society existed for millennia before AI systems had unlimited knowledge about each of us. And it will continue to exist, even if we limit that knowledge by starving our machines of that personal information. AI will still be able to make the economy more efficient, create medical advances, reduce traffic and create more effective regulations to ensure the health of the environment. What it will be less able to do is threaten human autonomy, liberty and pursuit of happiness.
In the case of AI, lean will mean less mean. It’s time to put artificial intelligence on a data diet.

Elon Musk Preaches Dangers of AI to NGA, AI is the Next Nuclear Bomb of Warfare, and More AI News This Week
July 20, 2017
Robert Light Senior Research Specialist at G2 Crowd
ELON MUSK AND DANGERS OF AI
Elon Musk is one of the most outspoken proponents of regulating AI, and earlier this week he got an opportunity to voice his opinions in front of the National Governors Association. As the CEO of Tesla, Musk has a vested business interest in the use of AI, yet he still does his best to inform others of the dangers he believes AI poses. “I have exposure to the very cutting-edge AI, and I think people should be really concerned about it,” Musk said during his speech at the National Governors Association on Saturday.
Musk believes that regulation must be set up immediately, otherwise the results may be detrimental. “AI is a rare case where we need to be proactive about regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.”
Alluding to reactive regulation examples, Musk said, “AI is a fundamental risk to the existence of human civilisation, in a way that car accidents, airplane crashes, faulty drugs or bad food were not. They were harmful to a set of individuals in society, but they were not harmful to society as a whole … AI is a fundamental existential risk for human civilisation, and I don’t think people fully appreciate that.”
However, not everyone agrees with Musk’s sentiments surrounding the dangers of AI, including the founding director of MIT’s Computer Science and Artificial Intelligence Lab, Rodney Brooks. In an interview earlier this week, Brooks was asked to comment on Musk’s lecture on the need for regulations. “So you’re going to regulate now. If you’re going to have a regulation now, either it applies to something and changes something in the world, or it doesn’t apply to anything. If it doesn’t apply to anything, what the hell do you have the regulation for? Tell me, what behavior do you want to change, Elon? By the way, let’s talk about regulation on self-driving Teslas, because that’s a real issue.”
Brooks is not alone in his disagreements with Musk. Other AI researchers publicly defended the advancement of AI for the good of humanity on social media. Regardless of who is correct, it is entertaining to listen to smart people argue, especially when it gets personal.
This originally appeared in G2 Crowd's AI Digest. Subscribe to receive the same weekly AI news directly to your inbox.
HOW AI WILL IMPACT WARFARE
Now for a topic that is much scarier than an impending AI apocalypse: governments are utilizing AI in warfare! Greg Allen and Taniel Chan recently released a research report discussing how artificial intelligence will affect national security.
According to Wired, “One of its conclusions is that the impact of technologies such as autonomous robots on war and international relations could rival that of nuclear weapons … It lays out why technologies like drones with bird-like agility, robot hackers and software that generates photo-real fake video are on track to make the American military and its rivals much more powerful.”
Unlike an AI apocalypse, these threats to society feel much more tangible and realistic. The Kalashnikov Group is already working on military AIs using neural networks. These robots, in theory, would be able to identify the enemy and execute them. It feels eerily similar to RoboCop.

While these technologies are not an immediate concern, they will be sometime in the future, making them frightening nonetheless.


Who can save Humanity from Superintelligence?
Date: April 29, 2017
Location: London, UK
In this presentation, Tony Czarnecki, Managing Partner of Sustensis, will share his views on how Humanity could be saved from its biggest existential risk, Superintelligence.
The presentation will cover four overlapping crises Humanity faces today – crises in the domains of politics, economics, society, and existential risk. The presentation will also provide a vision of a possible solution, with a reformed European Union becoming the core of a new supranational organization having the best chance to tackle these problems.
 About the crises:
The world faces a series of existential risks. When combined, the chance of one of these risks materializing in just 20 years is at least 5%. We have already had one such “near miss” that could have annihilated our entire civilization: the Cuban Missile Crisis of October 1962, which almost started a global nuclear war. Today, the biggest risk facing our civilization and humanity is Superintelligence.
Additionally, mainly due to advances in technology, the world is changing at an almost exponential pace. Change – not just in technology but also in the political or social domains – that might previously have taken a decade to produce a significant effect can now happen in just a year or two. No wonder people, even in the most developed countries, cannot absorb the pace of change happening simultaneously in so many domains of our lives. That is why emotions have overtaken reason.
People are voting in various elections and referenda against the status quo, not really knowing what the problem is, let alone what the solution could be. Even if some politicians know what the overall, usually unpleasant solutions could be, they are unlikely to share them with their own electorate because they would be deselected in the next election. The vicious circle continues, but at an increasingly faster pace.
The crises that we are experiencing right now lie in four domains:
·         Existential survival – the biggest crisis because it is barely visible
·         Political – the crisis of democracy
·         Economic – the crisis of capitalism
·         Social – the crisis of wealth distribution where the wealthy become wealthier even faster.
At the same time, anyone wanting to improve the situation faces three problems:
·         Existential risks require fast action, while the world’s organisations act very slowly
·         People want more freedom and more control, while we need to give up some of our freedoms and national sovereignty for the greater good of civilisation and humanity
·         Most people can’t see beyond tomorrow and act emotionally, while we need to see the big picture and act rationally.
Therefore, anybody that sees the need for the world to take urgent action faces a formidable task of proposing pragmatic, fast and very radical changes in the ways the world is governed.
The key problem is that the world cannot put its faith in the United Nations – the organisation that should by default be responsible for leading humanity through this most difficult period of existential threats. Neither do we have time to build such an organisation from scratch. More importantly, we cannot reasonably expect that all major blocs – the USA, China, Russia or the EU – would suddenly replace their own sets of values and interests with a unified set of new human values and responsibilities.
Therefore, the only plausible solution is to rely on an organisation emerging from a deeply transformed existing organisation, such as the European Union and/or NATO, which would lead humanity to a new era, where we may be living side by side with Superintelligence. This would also mean that in the transition period, this enlarged organization would have to co-exist with China, Russia, Saudi Arabia and many other countries with deeply different values and interests.
About the speaker:
Tony Czarnecki is Managing Partner of Sustensis, a management consultancy that has specialized for over 20 years in long-term sustainable growth. The objective of Sustensis is to help companies make a gradual transition from short-term to long-term business growth.
In recent years, Tony has applied his experience in “long-termism” to find solutions for the crises facing our civilisation and humanity – the subject that has been the focus of London Futurists.
Tech world debate on robots and jobs heats up
March 26, 2017 by Rob Lever
Advances in artificial intelligence and robotics are heightening concerns about automation replacing a growing number of occupations
Are robots coming for your job?
Although technology has long affected the labor force, recent advances in artificial intelligence and robotics are heightening concerns about automation replacing a growing number of occupations, including highly skilled or "knowledge-based" jobs.
Just a few examples: self-driving technology may eliminate the need for taxi, Uber and truck drivers, algorithms are playing a growing role in journalism, robots are informing consumers as mall greeters, and medicine is adapting robotic surgery and artificial intelligence to detect cancer and heart conditions.
Of 700 occupations in the United States, 47 percent are at "high risk" from automation, an Oxford University study concluded in 2013.
A McKinsey study released this year offered a similar view, saying "about half" of activities in the world's workforce "could potentially be automated by adapting currently demonstrated technologies."
Still, McKinsey researchers offered a caveat, saying that only around five percent of jobs can be "fully automated."
Another report, by PwC this month, concluded that around a third of jobs in the United States, Germany and Britain could be eliminated by automation by the early 2030s, with the losses concentrated in transportation and storage, manufacturing, and wholesale and retail trade.
But experts warn that such studies may fail to grasp the full extent of the risks to the working population.
"The studies are underestimating the impact of technology—some 80 to 90 percent of jobs will be eliminated in the next 10 to 15 years," said Vivek Wadhwa, a tech entrepreneur and faculty member at Carnegie Mellon University in Silicon Valley.
Dire consequences
"Artificial intelligence is moving a lot faster than anyone had expected," said Wadhwa, who is co-author of a forthcoming book on the topic. "Alexa (Amazon's home hub) and Google Home are getting amazingly intelligent very fast. Microsoft and Google have demonstrated that AI can understand human speech better than humans can."
Wadhwa calls the driverless car a "metaphor" for the future of labor and a sign of a major shift.
Warnings of dire social consequences from automation have also come from the likes of the physicist Stephen Hawking and tech entrepreneur Elon Musk, among others.
Hebrew University of Jerusalem historian Yuval Harari writes in his 2017 book, "Homo Deus: A Brief History of Tomorrow" that technology will lead to "superfluous people" as "intelligent non-conscious algorithms" improve.
"As algorithms push humans out of the job market," he writes, "wealth and power might become concentrated in the hands of the tiny elite that owns the all-powerful algorithms, creating unprecedented social and political inequality."
Harari points to the Oxford study, estimating a high probability of job loss to automation—cashiers (97 percent), paralegals (94 percent), bakers (89 percent) and bartenders (77 percent), for example.
Others disagree.
Boston University economist and researcher James Bessen dismisses alarmist predictions, contending that advances in technology generally lead to more jobs, even if the nature of work changes.
His research found that the proliferation of ATMs did not decrease bank teller employment in recent decades, and that the automation of textile mills in the 19th century led to an increase in weaving jobs because it created more demand.
"Robots can replace humans in certain tasks but don't entirely replace humans," he said.
But he acknowledged that automation "is destroying a lot of low-skill, low wage jobs, and the new jobs being created need higher skills."
Former president Barack Obama's council of economic advisors also warned last year that most jobs paying less than $20 an hour "would come under pressure from automation."
'Tax the robot'
Although the net impact of robots remains unclear, tech leaders and others are already debating how to deal with the potential job displacement.
Microsoft founder Bill Gates said last month that he supports a "robot tax," an idea floated in Europe, including by a socialist presidential candidate in France.
But Bessen, a former fellow at Harvard's Berkman Center, said taxing robots could be counterproductive.
"You don't want to be taxing the machines because they enable people to earn higher wages," he said. "If you tax machines, you will slow the beneficial side of the process."
Peter Diamandis, chairman of the X Prize Foundation for technical innovation and founder of the Silicon Valley think-tank Singularity University, is among those calling for a "universal basic income" to compensate people for job losses.
Offering income guarantees "will be one of many tools empowering self-actualization at scale," he said in a blog post, arguing that automation will allow people "to follow their passions, be more creative."
But Wadhwa says the problems run deeper and will require more creative solutions.
"A basic income won't solve the social problems of joblessness because people's identity revolves around our jobs," he said.
"Even if we have enough food and energy, we have to deal with the social disruption that's coming. We need a much broader discussion."
Bessen says reversing the trends of the past decades, in which high-skilled jobs have gained at the expense of others, poses a "big challenge."
"It's entirely possible we can meet the challenge," he said. "But the evidence in the past 20 years is that things are moving in the wrong direction."

Read more at: 
https://phys.org/news/2017-03-tech-world-debate-robots-jobs.html#jCp

Around the halls: What should the regulation of generative AI look like?

Nicol Turner Lee, Niam Yaraghi, Mark MacCarthy, and Tom Wheeler – Friday, June 2, 2023

We are living in a time of unprecedented advancements in generative artificial intelligence (AI), which are AI systems that can generate a wide range of content, such as text or images. The release of ChatGPT, a chatbot powered by OpenAI’s GPT-3 large language model (LLM), in November 2022 ushered generative AI into the public consciousness, and other companies like Google and Microsoft have been equally busy creating new opportunities to leverage the technology. In the meantime, these continuing advancements and applications of generative AI have raised important questions about how the technology will affect the labor market, how its use of training data implicates intellectual property rights, and what shape government regulation of this industry should take. Last week, a congressional hearing with key industry leaders suggested an openness to AI regulation—something that legislators have already considered to rein in some of the potential negative consequences of generative AI and AI more broadly. Considering these developments, scholars across the Center for Technology Innovation (CTI) weighed in around the halls on what the regulation of generative AI should look like.

NICOL TURNER LEE (@DrTurnerLee)
Generative AI refers to machine learning algorithms that can create new content like audio, code, images, text, simulations, or even videos. More recent focus has been on its enablement of chatbots, including ChatGPT, Bard, Copilot, and other more sophisticated tools that leverage LLMs to perform a variety of functions, like gathering research for assignments, compiling legal case files, automating repetitive clerical tasks, or improving online search. While debates around regulation are focused on the potential downsides to generative AI, including the quality of datasets, unethical applications, racial or gender bias, workforce implications, and greater erosion of democratic processes due to technological manipulation by bad actors, the upsides include a dramatic spike in efficiency and productivity as the technology improves and simplifies certain processes and decisions like streamlining physician processing of medical notes, or helping educators teach critical thinking skills. There will be a lot to discuss around generative AI’s ultimate value and consequence to society, and if Congress continues to operate at a very slow pace to regulate emerging technologies and institute a federal privacy standard, generative AI will become more technically advanced and deeply embedded in society. But where Congress could garner a very quick win on the regulatory front is to require consumer disclosures when AI-generated content is in use and add labeling or some type of multi-stakeholder certification process to encourage improved transparency and accountability for existing and future use cases.

Once again, the European Union is already leading the way on this. In its most recent AI Act, the EU requires that AI-generated content be disclosed to consumers to prevent copyright infringement, illegal content, and other malfeasance related to end-user lack of understanding about these systems. As more chatbots mine, analyze, and present content in accessible ways for users, findings are often not attributable to any one or multiple sources, and despite some permissions of content use granted under the fair use doctrine in the U.S. that protects copyright-protected work, consumers are often left in the dark around the generation and explanation of the process and results.

Congress should prioritize consumer protection in future regulation, and work to create agile policies that are futureproofed to adapt to emerging consumer and societal harms—starting with immediate safeguards for users before they are left to, once again, fend for themselves as subjects of highly digitized products and services. The EU may honestly be onto something with the disclosure requirement, and the U.S. could further contextualize its application vis-à-vis existing models that do the same, including the labeling guidance of the Food and Drug Administration (FDA) or what I have proposed in prior research: an adaptation of the Energy Star Rating system to AI. Bringing more transparency and accountability to these systems must be central to any regulatory framework, and beginning with smaller bites of a big apple might be a first stab for policymakers.

NIAM YARAGHI (@niamyaraghi)
With the emergence of sophisticated artificial intelligence (AI) advancements, including large language models (LLMs) like GPT-4, and LLM-powered applications like ChatGPT, there is a pressing need to revisit healthcare privacy protections. At their core, all AI innovations utilize sophisticated statistical techniques to discern patterns within extensive datasets using increasingly powerful yet cost-effective computational technologies. These three components—big data, advanced statistical methods, and computing resources—have not only become available recently but are also being democratized and made readily accessible to everyone at a pace unprecedented in previous technological innovations. This progression allows us to identify patterns that were previously indiscernible, which creates opportunities for important advances but also possible harms to patients.

Privacy regulations, most notably HIPAA, were established to protect patient confidentiality, operating under the assumption that de-identified data would remain anonymous. However, given the advancements in AI technology, the current landscape has become riskier. Now, it’s easier than ever to integrate various datasets from multiple sources, increasing the likelihood of accurately identifying individual patients.

Apart from the amplified risk to privacy and security, novel AI technologies have also increased the value of healthcare data due to the enriched potential for knowledge extraction. Consequently, many data providers may become more hesitant to share medical information with their competitors, further complicating healthcare data interoperability.

Considering these heightened privacy concerns and the increased value of healthcare data, it’s crucial to introduce modern legislation to ensure that medical providers will continue sharing their data while being shielded against the consequences of potential privacy breaches likely to emerge from the widespread use of generative AI.

MARK MACCARTHY (@Mark_MacCarthy)
In “The Leopard,” Giuseppe Di Lampedusa’s famous novel of the Sicilian aristocratic reaction to the unification of Italy in the 1860s, one of his central characters says, “If we want things to stay as they are, things will have to change.”

Something like this Sicilian response might be happening in the tech industry’s embrace of inevitable AI regulation. Three things are needed, however, if we do not want things to stay as they are.

The first and most important step is sufficient resources for agencies to enforce current law. Federal Trade Commission Chair Lina Khan properly says AI is not exempt from current consumer protection, discrimination, employment, and competition law, but if regulatory agencies cannot hire technical staff and bring AI cases in a time of budget austerity, current law will be a dead letter.

Second, policymakers should not be distracted by science fiction fantasies of AI programs developing consciousness and achieving independent agency over humans, even if these metaphysical abstractions are endorsed by industry leaders. Not a dime of public money should be spent on these highly speculative diversions when scammers and industry edge-riders are seeking to use AI to break existing law.

Third, Congress should consider adopting new identification, transparency, risk assessment, and copyright protection requirements along the lines of the European Union’s proposed AI Act. The National Telecommunications and Information Administration’s request for comment on a proposed AI accountability framework and Sen. Chuck Schumer’s (D-NY) recently-announced legislative initiative to regulate AI might be moving in that direction.

TOM WHEELER (@tewheels)
Both sides of the political aisle, as well as digital corporate chieftains, are now talking about the need to regulate AI. A common theme is the need for a new federal agency. To simply clone the model used for existing regulatory agencies is not the answer, however. That model, developed for oversight of an industrial economy, took advantage of slower paced innovation to micromanage corporate activity. It is unsuitable for the velocity of the free-wheeling AI era.

All regulations walk a tightrope between protecting the public interest and promoting innovation and investment. In the AI era, traversing this path means accepting that different AI applications pose different risks and identifying a plan that pairs the regulation with the risk while avoiding innovation-choking regulatory micromanagement.

Such agility begins with adopting the formula by which digital companies create technical standards as the formula for developing behavioral standards: identify the issue; assemble a standard-setting process involving the companies, civil society, and the agency; then give final approval and enforcement authority to the agency.

Industrialization was all about replacing and/or augmenting the physical power of humans. Artificial intelligence is about replacing and/or augmenting humans’ cognitive powers. To confuse how the former was regulated with what is needed for the latter would be to miss the opportunity for regulation to be as innovative as the technology it oversees. We need institutions for the digital era that address problems that already are apparent to all.

Google and Microsoft are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the author and are not influenced by any donation.

https://www.brookings.edu/blog/techtank/2023/06/02/around-the-halls-what-should-the-regulation-of-generative-ai-look-like/
Neuralink and the Brain’s Magical Future
 April 20, 2017 By Tim Urban
Okay maybe that’s not exactly how it happened, and maybe those weren’t his exact words. But after learning about the new company Elon Musk was starting, I’ve come to realize that that’s exactly what he’s trying to do.
When I wrote about Tesla and SpaceX, I learned that you can only fully wrap your head around certain companies by zooming both way, way in and way, way out. In, on the technical challenges facing the engineers, out on the existential challenges facing our species. In on a snapshot of the world right now, out on the big story of how we got to this moment and what our far future could look like.
Not only is Elon’s new venture—Neuralink—the same type of deal, but six weeks after first learning about the company, I’m convinced that it somehow manages to eclipse Tesla and SpaceX in both the boldness of its engineering undertaking and the grandeur of its mission. The other two companies aim to redefine what future humans will do—Neuralink wants to redefine what future humans will be.
The mind-bending bigness of Neuralink’s mission, combined with the labyrinth of impossible complexity that is the human brain, made this the hardest set of concepts yet to fully wrap my head around—but it also made it the most exhilarating when, with enough time spent zoomed on both ends, it all finally clicked. I feel like I took a time machine to the future, and I’m here to tell you that it’s even weirder than we expect.
But before I can bring you in the time machine to show you what I found, we need to get in our zoom machine—because as I learned the hard way, Elon’s wizard hat plans cannot be properly understood until your head’s in the right place.
So wipe your brain clean of what it thinks it knows about itself and its future, put on soft clothes, and let’s jump into the vortex.
___________
Contents
Part 6: The Great Merger :  http://waitbutwhy.com/2017/04/neuralink.html

 Vanity Fair
 April 2017 issue


story excerpts from interview with Ray Kurzweil:
1. |  Spiraling capabilities of self-improving artificial intelligence
Google has gobbled up almost every robotics & machine learning company. It bought DeepMind for $650 million, and built the Google Brain team to work on artificial intelligence.
Google hired Geoffrey Hinton, PhD, a pioneer in artificial neural networks. It also hired Ray Kurzweil, the futurist who predicted humans are 28 years away from the “singularity” — the moment when the spiraling capabilities of self-improving artificial super-intelligence will exceed human intelligence, and humans will merge with AI to create the hybrid beings of the future.


2. |  Computers are already doing many attributes of thinking
Trying to puzzle out AI, I went to meet Ray Kurzweil — author of the book The Singularity Is Near, a utopian vision of the AI future. Kurzweil said computers are already “doing many attributes of thinking. Just a few years ago, AI couldn’t tell the difference between a dog and a cat. Now it can. The list of things humans can do better than computers gets smaller and smaller. We create these tools to extend our long reach.”


3. |  The promise & peril are deeply intertwined, the strategy is control the peril
Ray Kurzweil uses the word “we” when talking about super-intelligent future beings — compared to Elon Musk’s more ominous “they.”
Elon Musk said he was bewildered that Ray Kurzweil isn’t worried about AI hazards. “That’s not true. I’m the one who articulated the dangers,” Kurzweil said. “The promise and peril are deeply intertwined. There are strategies to control the peril, as there have been with bio-tech guidelines.”
Kurzweil said Musk’s bête noire could come true. He said our AI children “may be friendly and may not. If it’s not friendly, we may have to fight it by getting an AI on our side that’s even smarter.”


4. | 3 stages of the human response to new tech
Kurzweil said there are 3 stages of the human response to new tech:
1. Wow!
2. Uh oh!
3. What other choice do we have but to move forward?
Ray Kurzweil predicts by the 2030s, we will be cyborgs. Nanorobots the size of blood cells will heal our bodies from the inside — connecting us to synthetic neo-cortex in the cloud, and to virtual & augmented reality. “We’ll be funnier, more musical, wiser,” he said.



‘Mind reading’ technology identifies complex thoughts, using machine learning and fMRI
CMU aims to map all types of knowledge in the brain
June 30, 2017

By combining machine-learning algorithms with fMRI brain imaging technology, Carnegie Mellon University (CMU) scientists have discovered, in essence, how to “read minds.”
The researchers used functional magnetic resonance imaging (fMRI) to view how the brain encodes various thoughts (based on blood-flow patterns in the brain). They discovered that the mind’s building blocks for constructing complex thoughts are formed, not by words, but by specific combinations of the brain’s various sub-systems.
Following up on previous research, the findings, published in Human Brain Mapping (with an open-access preprint available) and funded by the U.S. Intelligence Advanced Research Projects Activity (IARPA), provide new evidence that the neural dimensions of concept representation are universal across people and languages.
“One of the big advances of the human brain was the ability to combine individual concepts into complex thoughts, to think not just of ‘bananas,’ but ‘I like to eat bananas in evening with my friends,’” said CMU’s Marcel Just, the D.O. Hebb University Professor of Psychology in the Dietrich College of Humanities and Social Sciences. “We have finally developed a way to see thoughts of that complexity in the fMRI signal. The discovery of this correspondence between thoughts and brain activation patterns tells us what the thoughts are built of.”
Goal: A brain map of all types of knowledge

The researchers used 240 specific events (described by sentences such as “The storm destroyed the theater”) in the study, with seven adult participants. They measured the brain’s coding of these events using 42 “neurally plausible semantic features” — such as person, setting, size, social interaction, and physical action. By measuring the specific activation of each of these 42 features in a person’s brain system, the program could tell what types of thoughts that person was focused on.
The researchers used a computational model to assess how the detected brain activation patterns for 239 of the event sentences corresponded to the detected neurally plausible semantic features that characterized each sentence. The program was then able to decode the features of the 240th left-out sentence. (For “cross-validation,” they did the same for the other 239 sentences.)
The model was able to predict the features of the left-out sentence with 87 percent accuracy, despite never being exposed to its activation before. It was also able to work in the other direction: to predict the activation pattern of a previously unseen sentence, knowing only its semantic features.
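For readers who want a concrete feel for the leave-one-out procedure described above, here is a minimal, hypothetical Python sketch. It is not the CMU pipeline: the activation data and feature vectors are synthetic, the regression is plain ridge regression, and the reported 87 percent figure will not be reproduced; the point is only to show the shape of the train-on-239, predict-the-240th evaluation.

# Minimal illustrative sketch (not the CMU model): leave-one-sentence-out ridge
# regression from synthetic fMRI activation patterns to 42 synthetic
# "neurally plausible semantic feature" vectors.
import numpy as np

rng = np.random.default_rng(0)
n_sentences, n_voxels, n_features = 240, 500, 42

# Hypothetical data: each sentence gets a binary feature vector, and its
# activation pattern is a noisy linear image of those features.
features = rng.integers(0, 2, size=(n_sentences, n_features)).astype(float)
mixing = rng.normal(size=(n_features, n_voxels))
activations = features @ mixing + 0.5 * rng.normal(size=(n_sentences, n_voxels))

def ridge_fit(X, Y, lam=10.0):
    # Closed-form ridge regression: W minimizes ||XW - Y||^2 + lam * ||W||^2.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

correct = 0
for i in range(n_sentences):                          # hold out one sentence
    train = np.delete(np.arange(n_sentences), i)
    W = ridge_fit(activations[train], features[train])
    pred = activations[i] @ W                         # predicted semantic features
    # Identification test: is the held-out sentence's true feature vector
    # the closest match to the prediction among all 240 candidates?
    sims = features @ pred / (np.linalg.norm(features, axis=1) * np.linalg.norm(pred) + 1e-9)
    correct += int(np.argmax(sims) == i)

print(f"leave-one-out identification accuracy: {correct / n_sentences:.2%}")

The sketch also works in the other direction mentioned in the article: fitting a map from features to activations and asking which candidate activation pattern best matches a predicted one.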
“Our method overcomes the unfortunate property of fMRI to smear together the signals emanating from brain events that occur close together in time, like the reading of two successive words in a sentence,” Just explained. “This advance makes it possible for the first time to decode thoughts containing several concepts. That’s what most human thoughts are composed of.”
“A next step might be to decode the general type of topic a person is thinking about, such as geology or skateboarding,” he added. “We are on the way to making a map of all the types of knowledge in the brain.”
Future possibilities
It’s conceivable that the CMU brain-mapping method might be combined one day with other “mind reading” methods, such as UC Berkeley’s method for using fMRI and computational models to decode and reconstruct people’s imagined visual experiences. Plus whatever Neuralink discovers.
Or if the CMU method could be replaced by noninvasive functional near-infrared spectroscopy (fNIRS), Facebook’s Building8 research concept (proposed by former DARPA head Regina Dugan) might be incorporated (a filter for creating quasi ballistic photons, avoiding diffusion and creating a narrow beam for precise targeting of brain areas, combined with a new method of detecting blood-oxygen levels).
Using fNIRS might also allow for adapting the method to infer thoughts of locked-in paralyzed patients, as in the Wyss Center for Bio and Neuroengineering research. It might even lead to ways to generally enhance human communication.
The CMU research is supported by the Office of the Director of National Intelligence (ODNI) via the Intelligence Advanced Research Projects Activity (IARPA) and the Air Force Research Laboratory (AFRL).
CMU has created some of the first cognitive tutors, helped to develop the Jeopardy-winning Watson, founded a groundbreaking doctoral program in neural computation, and is the birthplace of artificial intelligence and cognitive psychology. CMU also launched BrainHub, an initiative that focuses on how the structure and activity of the brain give rise to complex behaviors.


Abstract of Predicting the Brain Activation Pattern Associated With the Propositional Content of a Sentence: Modeling Neural Representations of Events and States

Even though much has recently been learned about the neural representation of individual concepts and categories, neuroimaging research is only beginning to reveal how more complex thoughts, such as event and state descriptions, are neurally represented. We present a predictive computational theory of the neural representations of individual events and states as they are described in 240 sentences. Regression models were trained to determine the mapping between 42 neurally plausible semantic features (NPSFs) and thematic roles of the concepts of a proposition and the fMRI activation patterns of various cortical regions that process different types of information. Given a semantic characterization of the content of a sentence that is new to the model, the model can reliably predict the resulting neural signature, or, given an observed neural signature of a new sentence, the model can predict its semantic content. The models were also reliably generalizable across participants. This computational model provides an account of the brain representation of a complex yet fundamental unit of thought, namely, the conceptual content of a proposition. In addition to characterizing a sentence representation at the level of the semantic and thematic features of its component concepts, factor analysis was used to develop a higher level characterization of a sentence, specifying the general type of event representation that the sentence evokes (e.g., a social interaction versus a change of physical state) and the voxel locations most strongly associated with each of the factors.

by Neil Sahota, October 1, 2018


Ray Kurzweil — futurist, inventor, and best selling author — said: “Our technology, our machines, is part of our humanity. We created them to extend ourselves, and that is what is unique about human beings.”
In the past few years, there has been considerable discussion that we’re slowly merging with our technology — that humans are becoming trans-human with updated abilities: including enhanced intelligence, strength, and awareness.
Considering Kurzweil’s words is a good place to begin this discussion. It’s no secret that Google has trans-humanistic aspirations. In 2011 author Steven Levy made this bold statement about Google in his book In the Plex: “From the very start, Google’s founders saw the company as a vehicle to realize the dream of artificial intelligence in augmenting humanity.”
It makes sense Google would bring on Ray Kurzweil to be one of its Directors of Engineering in 2012. For years, Kurzweil has been pushing the cultural conversation toward the idea of human transcendence with his thought-provoking books.
Kurzweil has gained notoriety for proposing the provocative idea: “The singularity will represent the culmination of the merger of our biological thinking + existence with our technology — resulting in a world that is still human but that transcends our biological roots.” But the term “singularity” originated in a 1993 essay, The Coming Technological Singularity, by science-fiction author Vernor Vinge, PhD.
To grasp the significance of Vinge’s thinking, it’s important to realize where we were as a society in the early 1990s. Back then, smart-phones and social media websites were years away. The web — so vital to all aspects of our life: communication, commerce, entertainment — was in its infancy. But Vinge boldly proclaimed: “In 30 years we’ll have the technological means to create super-human intelligence. Shortly after, the human era will be ended.”
Here we are, almost 30 years from Vinge’s prediction, and the reality of trans-humanism has caught on with the general public as a distinct possibility. Writer Michael Ashley — co-author of my book Uber Yourself Before You Get Kodaked: a modern primer on AI for the modern business — and I sought to tap into the cultural zeitgeist on this topic by interviewing Ben Goertzel, PhD.
Goertzel is the right person to speak about human potential in the age of AI. He is the founder and CEO of the company SingularityNET. Along with robotics engineer David Hanson of the company Hanson Robotics, Goertzel co-created Sophia — the first robot to be granted citizenship of a country.
Like Vinge and Kurzweil, Ben Goertzel is fascinated by the idea of trans-humanism. He explains it’s not pie-in-the-sky conjecture — trans-humanism has been happening for a while in analog form. Goertzel said: “It’s happening bit-by-bit. If you take my glasses away, I’d become heavily impaired and couldn’t participate in the world.”
He points to subtle ways humans are already merging with computers. He said: “If you take the smart-phone away from my wife or kids, they will go into withdrawal and also become heavily impaired.”
Still, many people fear trans-humanism. Critics warn of designer babies and chips implanted in our minds. Theologians fear we will denigrate the soul’s sanctity by achieving immortality. In the early 2000s, the editors of Foreign Policy asked policy intellectuals: “What idea, if embraced, would pose the greatest threat to the welfare of humanity?” Francis Fukuyama, professor of international political economy at Johns Hopkins School of Advanced International Studies, pointed to transhumanism, calling it the “world’s most dangerous idea.” Writing for Psychology Today, Massimo Pigliucci stated, “There are several problems with the pursuit of immortality, one of which is particularly obvious. If we all live (much, much) longer, we all consume more resources and have more children, leading to even more overpopulation and environmental degradation.”
No matter the intellectual misgivings surrounding this controversial topic, the fact remains that if we view transhumanism the way it is conventionally defined, people have been evolving toward an updated version of humanity for some time. “In some ways, we already operate as human machine-hybrids,” said Goertzel. “If a caveman came into the modern world, he would be astounded at how symbiotic we are with the various machines we use. We use cars to get from point A to point B and air conditioners to regulate our temperature. In Hong Kong at least, you never see anyone who’s not holding a phone in their hand and staring at it.”
But there may be other, more pragmatic reasons why we need to become transhuman, if only to stand up to the intelligent machines that are coming. Early on, Elon Musk sounded the alarm about humans being usurped by artificial intelligence in a series of well-publicized warnings. Since then, he has suggested that the only way not to be overtaken by computers is to merge with our creations. His venture, Neuralink, is in development precisely for this purpose. Meant to combine human brains with computers, it’s his attempt to symbiotically join our minds with the machines. “The merge scenario with A.I. is the one that seems like probably the best,” he recently said on the podcast, the Joe Rogan Experience. “If you can’t beat it, join it.”
A visionary himself, Goertzel has long foreseen Musk’s vision coming, yet he urges caution in its implementation. “The next step to take is to wire these machines directly into the brain and body rather than have them held in our hands. Clearly, this takes time and thought because you need to be careful with sticking wires into human brains and bodies. But that work is being done, and it’s not going to take more than a decade.”
Returning to Vinge’s prescience at the end of the 20th century, we can see he was imagining a future that would occur even sooner than he predicted. If we take Goertzel at his word, we are through Fukuyama’s and others’ hand-wringing stage. We’re now at the point of thinking about practicalities. Technology slows down for no one. Whether we like it or not, there is a pre-smartphone and a post-smartphone world. Presumably, we all know someone who was loath to adopt the new technology — it’s likely their business even suffered until they began using an iPhone or Android — or got swept aside by adopters willing to change with the times. Are we at the precipice of a similar phenomenon? Are we staring down the gulf at “Human 2.0”?
To put this dilemma in clearer focus, Goertzel advises considering the question, not from your perspective, but from your child’s. He paints a picture: “Imagine it’s eight years from now. All the other kids in your daughter’s third-grade class are way ahead of her because their brains are connected directly to Google and a calculator, and they’re SMSing back and forth by Wi-Fi telepathy between their brains while your daughter sits there in class being stunted because she must memorize things the old-fashioned way and can’t send messages brain-to-brain.”
Goertzel suggests you consider what you would do if your daughter’s teacher brought you in for a parent conference and told you your daughter couldn’t keep up with her classmates. Imagine she suggested some form of upgrade. You love your daughter. You want the best for her. What would you do?
At this point, the prospect of trans-humanism stops being an intellectual exercise. It becomes a question of subsistence.



An artificial synapse for future miniaturized portable ‘brain-on-a-chip’ devices
MIT engineers plan a fingernail-size chip that could replace a supercomputer
January 22, 2018
MIT engineers have designed a new artificial synapse made from silicon germanium that can precisely control the strength of an electric current flowing across it.
In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting with 95 percent accuracy. The engineers say the new design, published today (Jan. 22) in the journal Nature Materials, is a major step toward building portable, low-power neuromorphic chips for use in pattern recognition and other machine-learning tasks.
Controlling the flow of ions: the challenge
Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. The idea is to apply a voltage across layers that would cause ions (electrically charged atoms) to move in a switching medium (synapse-like space) to create conductive filaments in a manner that’s similar to how the “weight” (connection strength) of a synapse changes.
There are more than 100 trillion synapses in a typical human brain that mediate neuron signaling, strengthening some neural connections while pruning (weakening) others — a process that enables the brain to recognize patterns, remember facts, and carry out other learning tasks, all at lightning speeds.
Instead of carrying out computations based on binary, on/off signaling, like current digital chips, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights” — much like neurons that activate in various ways (depending on the type and number of ions that flow across a synapse).
But it’s been difficult to control the flow of ions in existing synapse designs. These have multiple paths that make it difficult to predict where ions will make it through, according to research team leader Jeehwan Kim, PhD, an assistant professor in the departments of Mechanical Engineering and Materials Science and Engineering and a principal investigator in MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratories.
“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” Kim says. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects. This stream is changing, and it’s hard to control. That’s the biggest problem — nonuniformity of the artificial synapse.”
So instead of using amorphous materials as an artificial synapse, Kim and his colleagues created a new “epitaxial random access memory” (epiRAM) design.
They started with a wafer of silicon. They then grew a similar pattern of silicon germanium — a material used commonly in transistors — on top of the silicon wafer. Silicon germanium’s lattice is slightly larger than that of silicon, and Kim found that together, the two perfectly mismatched materials could form a funnel-like dislocation, creating a single path through which ions can predictably flow.*
“This is the most uniform device we could achieve, which is the key to demonstrating artificial neural networks,” Kim says.
Testing the ability to recognize samples of handwriting
As a test, Kim and his team explored how the epiRAM device would perform if it were to carry out an actual learning task: recognizing samples of handwriting — which researchers consider to be a practical test for neuromorphic chips. Such chips would consist of artificial “neurons” connected to other “neurons” via filament-based artificial “synapses.”
They ran a computer simulation of an artificial neural network consisting of three sheets of neural layers connected via two layers of artificial synapses, based on measurements from their actual neuromorphic chip. They fed into their simulation tens of thousands of samples from the MNIST handwritten recognition dataset**, commonly used by neuromorphic designers.
They found that their neural network device recognized handwritten samples 95.1 percent of the time — close to the 97 percent accuracy of existing software algorithms running on large computers.
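As a rough illustration of this kind of simulation, the hypothetical Python sketch below trains a small network with two weight layers (three layers of neurons) on scikit-learn’s built-in 8x8 digits set, used here as a lightweight stand-in for MNIST, and then perturbs every weight by roughly 4 percent to mimic the device-to-device variation reported for the epiRAM synapses. It is not the MIT team’s simulation and the numbers will differ; it only shows why synapse uniformity matters for recognition accuracy.

# Illustrative sketch only (not the MIT simulation): train a two-weight-layer
# network, then perturb its weights with ~4% multiplicative noise to mimic
# analog-synapse nonuniformity. Uses scikit-learn's built-in digits set so the
# example stays self-contained.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X = X / 16.0                                   # scale pixel values to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Three neural layers (input, hidden, output) connected by two weight layers,
# mirroring the architecture described above.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_tr, y_tr)
print("ideal (software) accuracy:", round(net.score(X_te, y_te), 3))

# Emulate device variation: each weight deviates by about 4% (standard deviation).
rng = np.random.default_rng(1)
net.coefs_ = [W * rng.normal(1.0, 0.04, size=W.shape) for W in net.coefs_]
print("accuracy with ~4% weight variation:", round(net.score(X_te, y_te), 3))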
A chip to replace a supercomputer
The team is now in the process of fabricating a real working neuromorphic chip that can carry out handwriting-recognition tasks. Looking beyond handwriting, Kim says the team’s artificial synapse design will enable much smaller, portable neural network devices that can perform complex computations that are currently only possible with large supercomputers.
“Ultimately, we want a chip as big as a fingernail to replace one big supercomputer,” Kim says. “This opens a stepping stone to produce real artificial intelligence hardware.”
This research was supported in part by the National Science Foundation. Co-authors included researchers at Arizona State University.
* They applied voltage to each synapse and found that all synapses exhibited about the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material. They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.
** The MNIST (Modified National Institute of Standards and Technology database) is a large database of handwritten digits that is commonly used for training various image processing systems and for training and testing in the field of machine learning. It contains 60,000 training images and 10,000 testing images. 


Abstract of SiGe epitaxial memory for neuromorphic computing with reproducible high performance based on engineered dislocations
Although several types of architecture combining memory cells and transistors have been used to demonstrate artificial synaptic arrays, they usually present limited scalability and high power consumption. Transistor-free analog switching devices may overcome these limitations, yet the typical switching process they rely on—formation of filaments in an amorphous medium—is not easily controlled and hence hampers the spatial and temporal reproducibility of the performance. Here, we demonstrate analog resistive switching devices that possess desired characteristics for neuromorphic computing networks with minimal performance variations using a single-crystalline SiGe layer epitaxially grown on Si as a switching medium. Such epitaxial random access memories utilize threading dislocations in SiGe to confine metal filaments in a defined, one-dimensional channel. This confinement results in drastically enhanced switching uniformity and long retention/high endurance with a high analog on/off ratio. Simulations using the MNIST handwritten recognition data set prove that epitaxial random access memories can operate with an online learning accuracy of 95.1%.

IBM researchers use analog memory to train deep neural networks faster and more efficiently

New approach allows deep neural networks to run hundreds of times faster than with GPUs, using hundreds of times less energy
June 15, 2018
Crossbar arrays of non-volatile memories can accelerate the training of neural networks by performing computation at the actual location of the data. (credit: IBM Research)
Imagine advanced artificial intelligence (AI) running on your smartphone — instantly presenting the information that’s relevant to you in real time. Or a supercomputer that requires hundreds of times less energy.
The IBM Research AI team has demonstrated a new approach that they believe is a major step toward those scenarios.
Deep neural networks normally require fast, powerful graphics processing unit (GPU) hardware accelerators to support the needed high speed and computational accuracy — such as the GPU devices used in the just-announced Summit supercomputer. But GPUs are highly energy-intensive, making their use expensive and limiting their future growth, the researchers explain in a recent paper published in Nature.
Analog memory replaces software, overcoming the “von Neumann bottleneck”
Instead, the IBM researchers used large arrays of non-volatile analog memory devices (which use continuously variable signals rather than binary 0s and 1s) to perform computations. Those arrays allowed the researchers to create, in hardware, the same scale and precision of AI calculations that are achieved by more energy-intensive systems in software, but running hundreds of times faster and at hundreds of times lower power — without sacrificing the ability to create deep learning systems.*
The trick was to replace conventional von Neumann architecture, which is “constrained by the time and energy spent moving data back and forth between the memory and the processor (the ‘von Neumann bottleneck’),” the researchers explain in the paper. “By contrast, in a non-von Neumann scheme, computing is done at the location of the data [in memory], with the strengths of the synaptic connections (the ‘weights’) stored and adjusted directly in memory.”
“Delivering the future of AI will require vastly expanding the scale of AI calculations,” they note. “Instead of shipping digital data on long journeys between digital memory chips and processing chips, we can perform all of the computation inside the analog memory chip. We believe this is a major step on the path to the kind of hardware accelerators necessary for the next AI breakthroughs.”**
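The multiply-accumulate trick the researchers describe can be pictured with a toy Python sketch (not IBM’s hardware or code): weights are stored as conductances, inputs are applied as voltages, each cell contributes a current equal to conductance times voltage (Ohm’s law), and each output wire simply sums the currents flowing into it (Kirchhoff’s current law). The array sizes and noise level below are arbitrary assumptions.

# Toy model of an analog crossbar multiply-accumulate (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_outputs = 256, 64

G = rng.uniform(0.0, 1.0, size=(n_outputs, n_inputs))   # conductances = weights
V = rng.uniform(0.0, 0.2, size=n_inputs)                # input voltages = activations

digital = G @ V                                          # ideal digital result

# "Analog" result: per-cell currents with a little read noise, summed per wire.
currents = G * V + rng.normal(0.0, 1e-3, size=G.shape)
analog = currents.sum(axis=1)

print("max deviation from digital multiply-accumulate:",
      round(float(np.abs(analog - digital).max()), 4))

Because all the per-cell currents flow at once, the whole matrix-vector product happens in a single analog step at the location of the weight data, which is the point the researchers make about avoiding the von Neumann bottleneck.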
Given these encouraging results, the IBM researchers have already started exploring the design of prototype hardware accelerator chips, as part of an IBM Research Frontiers Institute project, they said.
Ref.: Nature. Source: IBM Research
 * “From these early design efforts, we were able to provide, as part of our Nature paper, initial estimates for the potential of such [non-volatile memory]-based chips for training fully-connected layers, in terms of the computational energy efficiency (28,065 GOP/sec/W) and throughput-per-area (3.6 TOP/sec/mm2). These values exceed the specifications of today’s GPUs by two orders of magnitude. Furthermore, fully-connected layers are a type of neural network layer for which actual GPU performance frequently falls well below the rated specifications. … Analog non-volatile memories can efficiently accelerate the “backpropagation” algorithm at the heart of many recent AI advances. These memories allow the “multiply-accumulate” operations used throughout these algorithms to be parallelized in the analog domain, at the location of weight data, using underlying physics. Instead of large circuits to multiply and add digital numbers together, we simply pass a small current through a resistor into a wire, and then connect many such wires together to let the currents build up. This lets us perform many calculations at the same time, rather than one after the other.”

** “By combining long-term storage in phase-change memory (PCM) devices, near-linear update of conventional complementary metal-oxide semiconductor (CMOS) capacitors and novel techniques for cancelling out device-to-device variability, we finessed these imperfections and achieved software-equivalent DNN accuracies on a variety of different networks. These experiments used a mixed hardware-software approach, combining software simulations of system elements that are easy to model accurately (such as CMOS devices) together with full hardware implementation of the PCM devices. It was essential to use real analog memory devices for every weight in our neural networks, because modeling approaches …”

Deep neural network models score higher than humans in reading and comprehension test

"Update your AGI predictions"--- Prof. Roman Yampolskiy, PhD‏ @romanyam
January 15, 2018
Microsoft and Alibaba have developed deep neural network models that scored higher than humans in a Stanford University reading and comprehension test, Stanford Question Answering Dataset (SQuAD).
Microsoft achieved 82.650 on the ExactMatch (EM) metric* on Jan. 3, and Alibaba Group Holding Ltd. scored 82.440 on Jan. 5. The best human score so far is 82.304.
“SQuAD is a new reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage,” according to the Stanford NLP Group. “With 100,000+ question-answer pairs on 500+ articles, SQuAD is significantly larger than previous reading comprehension datasets.”
“A strong start to 2018 with the first model (SLQA+) to exceed human-level performance on @stanfordnlp SQuAD’s EM metric!” said Pranav Rajpurkar, a PhD student in the Stanford Machine Learning Group and lead author of a paper on SQuAD in the Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (available on the open-access arXiv). “Next challenge: the F1 metric*, where humans still lead by ~2.5 points!” (Alibaba’s SLQA+ scored 88.607 on the F1 metric and Microsoft’s r-net+ scored 88.493.)
However, challenging the “comprehension” description, Gary Marcus, PhD, a Professor of Psychology and Neural Science at NYU, notes in a tweet that “the SQUAD test shows that machines can highlight relevant passages in text, not that they understand those passages.”
“The Chinese e-commerce titan has joined the likes of Tencent Holdings Ltd. and Baidu Inc. in a race to develop AI that can enrich social media feeds, target ads and services or even aid in autonomous driving,” Bloomberg notes. “Beijing has endorsed the technology in a national-level plan that calls for the country to become the industry leader by 2030.”
*”The ExactMatch metric measures the percentage of predictions that match any one of the ground truth answers exactly. The F1 score metric measures the average overlap between the prediction and ground truth answer.” – Pranav Rajpurkar et al., ArXiv
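A simplified, hypothetical version of these two metrics in Python is shown below; the official SQuAD evaluation script additionally strips punctuation and English articles before comparing, which this sketch omits.

# Simplified illustration of the ExactMatch and F1 metrics described above.
from collections import Counter

def normalize(text):
    return " ".join(text.lower().split())

def exact_match(prediction, truths):
    # 1.0 if the prediction matches any ground-truth answer exactly, else 0.0.
    return float(any(normalize(prediction) == normalize(t) for t in truths))

def f1(prediction, truths):
    # Best token-overlap F1 between the prediction and any ground-truth answer.
    best = 0.0
    pred_tokens = normalize(prediction).split()
    for t in truths:
        truth_tokens = normalize(t).split()
        common = Counter(pred_tokens) & Counter(truth_tokens)
        overlap = sum(common.values())
        if overlap == 0:
            continue
        precision = overlap / len(pred_tokens)
        recall = overlap / len(truth_tokens)
        best = max(best, 2 * precision * recall / (precision + recall))
    return best

print(exact_match("in 1953", ["In 1953", "1953"]))    # 1.0
print(round(f1("the city of Paris", ["Paris"]), 2))   # 0.4 (partial overlap)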



Can we stop AI outsmarting humanity?

The spectre of superintelligent machines doing us harm is not just science fiction, technologists say – so how can we ensure AI remains ‘friendly’ to its makers? By Mara Hvistendahl
Thu 28 Mar 2019 
It began three and a half billion years ago in a pool of muck, when a molecule made a copy of itself and so became the ultimate ancestor of all earthly life. It began four million years ago, when brain volumes began climbing rapidly in the hominid line.
Fifty thousand years ago with the rise of Homo sapiens sapiens.
Ten thousand years ago with the invention of civilization.
Five hundred years ago with the invention of the printing press.
Jaan Tallinn stumbled across these words in 2007, in an online essay called Staring into the Singularity. The “it” was human civilisation. Humanity would cease to exist, predicted the essay’s author, with the emergence of superintelligence, or AI, that surpasses human-level intelligence in a broad array of areas.
Tallinn, an Estonia-born computer programmer, has a background in physics and a propensity to approach life like one big programming problem. In 2003, he co-founded Skype, developing the backend for the app. He cashed in his shares after eBay bought it two years later, and now he was casting about for something to do. Staring into the Singularity mashed up computer code, quantum physics and Calvin and Hobbes quotes. He was hooked.
Tallinn soon discovered that the author, Eliezer Yudkowsky, a self-taught theorist, had written more than 1,000 essays and blogposts, many of them devoted to superintelligence. He wrote a program to scrape Yudkowsky’s writings from the internet, order them chronologically and format them for his iPhone. Then he spent the better part of a year reading them.
The term artificial intelligence, or the simulation of intelligence in computers or machines, was coined back in 1956, only a decade after the creation of the first electronic digital computers. Hope for the field was initially high, but by the 1970s, when early predictions did not pan out, an “AI winter” set in. When Tallinn found Yudkowsky’s essays, AI was undergoing a renaissance. Scientists were developing AIs that excelled in specific areas, such as winning at chess, cleaning the kitchen floor and recognising human speech. Such “narrow” AIs, as they are called, have superhuman capabilities, but only in their specific areas of dominance. A chess-playing AI cannot clean the floor or take you from point A to point B. Superintelligent AI, Tallinn came to believe, will combine a wide range of skills in one entity. More darkly, it might also use data generated by smartphone-toting humans to excel at social manipulation.
Reading Yudkowsky’s articles, Tallinn became convinced that superintelligence could lead to an explosion or breakout of AI that could threaten human existence – that ultrasmart AIs will take our place on the evolutionary ladder and dominate us the way we now dominate apes. Or, worse yet, exterminate us.
After finishing the last of the essays, Tallinn shot off an email to Yudkowsky – all lowercase, as is his style. “i’m jaan, one of the founding engineers of skype,” he wrote. Eventually he got to the point: “i do agree that ... preparing for the event of general AI surpassing human intelligence is one of the top tasks for humanity.” He wanted to help.
When Tallinn flew to the Bay Area for other meetings a week later, he met Yudkowsky, who lived nearby, at a cafe in Millbrae, California. Their get-together stretched to four hours. “He actually, genuinely understood the underlying concepts and the details,” Yudkowsky told me recently. “This is very rare.” Afterward, Tallinn wrote a check for $5,000 (£3,700) to the Singularity Institute for Artificial Intelligence, the nonprofit where Yudkowsky was a research fellow. (The organisation changed its name to Machine Intelligence Research Institute, or Miri, in 2013.) Tallinn has since given the institute more than $600,000.
The encounter with Yudkowsky brought Tallinn purpose, sending him on a mission to save us from our own creations. He embarked on a life of travel, giving talks around the world on the threat posed by superintelligence. Mostly, though, he began funding research into methods that might give humanity a way out: so-called friendly AI. That doesn’t mean a machine or agent is particularly skilled at chatting about the weather, or that it remembers the names of your kids – although superintelligent AI might be able to do both of those things. It doesn’t mean it is motivated by altruism or love. A common fallacy is assuming that AI has human urges and values. “Friendly” means something much more fundamental: that the machines of tomorrow will not wipe us out in their quest to attain their goals.


Last spring, I joined Tallinn for a meal in the dining hall of Cambridge University’s Jesus College. The churchlike space is bedecked with stained-glass windows, gold moulding, and oil paintings of men in wigs. Tallinn sat at a heavy mahogany table, wearing the casual garb of Silicon Valley: black jeans, T-shirt and canvas sneakers. A vaulted timber ceiling extended high above his shock of grey-blond hair.
At 47, Tallinn is in some ways your textbook tech entrepreneur. He thinks that thanks to advances in science (and provided AI doesn’t destroy us), he will live for “many, many years”. When out clubbing with researchers, he outlasts even the young graduate students. His concern about superintelligence is common among his cohort. PayPal co-founder Peter Thiel’s foundation has given $1.6m to Miri and, in 2015, Tesla founder Elon Musk donated $10m to the Future of Life Institute, a technology safety organisation in Cambridge, Massachusetts. But Tallinn’s entrance to this rarefied world came behind the iron curtain in the 1980s, when a classmate’s father with a government job gave a few bright kids access to mainframe computers. After Estonia became independent, he founded a video-game company. Today, Tallinn still lives in its capital city – also called Tallinn – with his wife and the youngest of his six kids. When he wants to meet with researchers, he often just flies them to the Baltic region.
His giving strategy is methodical, like almost everything else he does. He spreads his money among 11 organisations, each working on different approaches to AI safety, in the hope that one might stick. In 2012, he cofounded the Cambridge Centre for the Study of Existential Risk (CSER) with an initial outlay of close to $200,000.
Existential risks – or X-risks, as Tallinn calls them – are threats to humanity’s survival. In addition to AI, the 20-odd researchers at CSER study climate change, nuclear war and bioweapons. But, to Tallinn, those other disciplines “are really just gateway drugs”. Concern about more widely accepted threats, such as climate change, might draw people in. The horror of superintelligent machines taking over the world, he hopes, will convince them to stay. He was visiting Cambridge for a conference because he wants the academic community to take AI safety more seriously.
At Jesus College, our dining companions were a random assortment of conference-goers, including a woman from Hong Kong who was studying robotics and a British man who graduated from Cambridge in the 1960s. The older man asked everybody at the table where they attended university. (Tallinn’s answer, Estonia’s University of Tartu, did not impress him.) He then tried to steer the conversation toward the news. Tallinn looked at him blankly. “I am not interested in near-term risks,” he said.
Tallinn changed the topic to the threat of superintelligence. When not talking to other programmers, he defaults to metaphors, and he ran through his suite of them: advanced AI can dispose of us as swiftly as humans chop down trees. Superintelligence is to us what we are to gorillas.
An AI would need a body to take over, the older man said. Without some kind of physical casing, how could it possibly gain physical control?
Tallinn had another metaphor ready: “Put me in a basement with an internet connection, and I could do a lot of damage,” he said. Then he took a bite of risotto.


Every AI, whether it’s a Roomba or one of its potential world-dominating descendants, is driven by outcomes. Programmers assign these goals, along with a series of rules on how to pursue them. Advanced AI wouldn’t necessarily need to be given the goal of world domination in order to achieve it – it could just be accidental. And the history of computer programming is rife with small errors that sparked catastrophes. In 2010, for example, when a trader with the mutual-fund company Waddell & Reed sold thousands of futures contracts, the firm’s software left out a key variable from the algorithm that helped execute the trade. The result was the trillion-dollar US “flash crash”.
The researchers Tallinn funds believe that if the reward structure of a superhuman AI is not properly programmed, even benign objectives could have insidious ends. One well-known example, laid out by the Oxford University philosopher Nick Bostrom in his book Superintelligence, is a fictional agent directed to make as many paperclips as possible. The AI might decide that the atoms in human bodies would be better put to use as raw material.
 A man plays chess with a robot designed by Taiwan’s Industrial Technology Research Institute (ITRI) in Taipei in 2017. Photograph: Sam Yeh/AFP/Getty
Tallinn’s views have their share of detractors, even among the community of people concerned with AI safety. Some object that it is too early to worry about restricting superintelligent AI when we don’t yet understand it. Others say that focusing on rogue technological actors diverts attention from the most urgent problems facing the field, like the fact that the majority of algorithms are designed by white men, or based on data biased toward them. “We’re in danger of building a world that we don’t want to live in if we don’t address those challenges in the near term,” said Terah Lyons, executive director of the Partnership on AI, a technology industry consortium focused on AI safety and other issues. (Several of the institutes Tallinn backs are members.) But, she added, some of the near-term challenges facing researchers, such as weeding out algorithmic bias, are precursors to ones that humanity might see with super-intelligent AI.
Tallinn isn’t so convinced. He counters that superintelligent AI brings unique threats. Ultimately, he hopes that the AI community might follow the lead of the anti-nuclear movement in the 1940s. In the wake of the bombings of Hiroshima and Nagasaki, scientists banded together to try to limit further nuclear testing. “The Manhattan Project scientists could have said: ‘Look, we are doing innovation here, and innovation is always good, so let’s just plunge ahead,’” he told me. “But they were more responsible than that.”


Tallinn warns that any approach to AI safety will be hard to get right. If an AI is sufficiently smart, it might have a better understanding of the constraints than its creators do. Imagine, he said, “waking up in a prison built by a bunch of blind five-year-olds.” That is what it might be like for a super-intelligent AI that is confined by humans.
The theorist Yudkowsky found evidence this might be true when, starting in 2002, he conducted chat sessions in which he played the role of an AI enclosed in a box, while a rotation of other people played the gatekeeper tasked with keeping the AI in. Three out of five times, Yudkowsky – a mere mortal – says he convinced the gatekeeper to release him. His experiments have not discouraged researchers from trying to design a better box, however.
The researchers that Tallinn funds are pursuing a broad variety of strategies, from the practical to the seemingly far-fetched. Some theorise about boxing AI, either physically, by building an actual structure to contain it, or by programming in limits to what it can do. Others are trying to teach AI to adhere to human values. A few are working on a last-ditch off-switch. One researcher who is delving into all three is mathematician and philosopher Stuart Armstrong at Oxford University’s Future of Humanity Institute, which Tallinn calls “the most interesting place in the universe.” (Tallinn has given FHI more than $310,000.)
Armstrong is one of the few researchers in the world who focuses full-time on AI safety. When I met him for coffee in Oxford, he wore an unbuttoned rugby shirt and had the look of someone who spends his life behind a screen, with a pale face framed by a mess of sandy hair. He peppered his explanations with a disorienting mixture of popular-culture references and math. When I asked him what it might look like to succeed at AI safety, he said: “Have you seen the Lego movie? Everything is awesome.”
One strain of Armstrong’s research looks at a specific approach to boxing called an “oracle” AI. In a 2012 paper with Nick Bostrom, who co-founded FHI, he proposed not only walling off superintelligence in a holding tank – a physical structure – but also restricting it to answering questions, like a really smart Ouija board. Even with these boundaries, an AI would have immense power to reshape the fate of humanity by subtly manipulating its interrogators. To reduce the possibility of this happening, Armstrong proposes time limits on conversations, or banning questions that might upend the current world order. He also has suggested giving the oracle proxy measures of human survival, like the Dow Jones industrial average or the number of people crossing the street in Tokyo, and telling it to keep these steady.
Ultimately, Armstrong believes, it could be necessary to create, as he calls it in one paper, a “big red off button”: either a physical switch, or a mechanism programmed into an AI to automatically turn itself off in the event of a breakout. But designing such a switch is far from easy. It is not just that an advanced AI interested in self-preservation could prevent the button from being pressed. It could also become curious about why humans devised the button, activate it to see what happens, and render itself useless. In 2013, a programmer named Tom Murphy VII designed an AI that could teach itself to play Nintendo Entertainment System games. Determined not to lose at Tetris, the AI simply pressed pause – and kept the game frozen. “Truly, the only winning move is not to play,” Murphy observed wryly, in a paper on his creation.
For the strategy to succeed, an AI has to be uninterested in the button, or, as Tallinn put it: “It has to assign equal value to the world where it’s not existing and the world where it’s existing.” But even if researchers can achieve that, there are other challenges. What if the AI has copied itself several thousand times across the internet?
The approach that most excites researchers is finding a way to make AI adhere to human values – not by programming them in, but by teaching AIs to learn them. In a world dominated by partisan politics, people often dwell on the ways in which our principles differ. But, Tallinn told me, humans have a lot in common: “Almost everyone values their right leg. We just don’t think about it.” The hope is that an AI might be taught to discern such immutable rules.
In the process, an AI would need to learn and appreciate humans’ less-than-logical side: that we often say one thing and mean another, that some of our preferences conflict with others, and that people are less reliable when drunk. Despite the challenges, Tallinn believes, it is worth trying because the stakes are so high. “We have to think a few steps ahead,” he said. “Creating an AI that doesn’t share our interests would be a horrible mistake.”


On his last night in Cambridge, I joined Tallinn and two researchers for dinner at a steakhouse. A waiter seated our group in a white-washed cellar with a cave-like atmosphere. He handed us a one-page menu that offered three different kinds of mash. A couple sat down at the table next to us, and then a few minutes later asked to move elsewhere. “It’s too claustrophobic,” the woman complained. I thought of Tallinn’s comment about the damage he could wreak if locked in a basement with nothing but an internet connection. Here we were, in the box. As if on cue, the men contemplated ways to get out.
Tallinn’s guests included former genomics researcher Seán Ó hÉigeartaigh, who is CSER’s executive director, and Matthijs Maas, an AI researcher at the University of Copenhagen. They joked about an idea for a nerdy action flick titled Superintelligence v Blockchain!, and discussed an online game called Universal Paperclips, which riffs on the scenario in Bostrom’s book. The exercise involves repeatedly clicking your mouse to make paperclips. It’s not exactly flashy, but it does give a sense for why a machine might look for more expedient ways to produce office supplies.
Eventually, talk shifted toward bigger questions, as it often does when Tallinn is present. The ultimate goal of AI-safety research is to create machines that are, as Cambridge philosopher and CSER co-founder Huw Price once put it, “ethically as well as cognitively superhuman”. Others have raised the question: if we don’t want AI to dominate us, do we want to dominate AI? In other words, does AI have rights? Tallinn believes this is needless anthropomorphising. It assumes that intelligence equals consciousness – a misconception that annoys many AI researchers. Earlier in the day, CSER researcher José Hernández-Orallo joked that when speaking with AI researchers, consciousness is “the C-word”. (“And ‘free will’ is the F-word,” he added.)
In the cellar, Tallinn said that consciousness is beside the point: “Take the example of a thermostat. No one would say it is conscious. But it’s really inconvenient to face up against that agent if you’re in a room that is set to negative 30 degrees.”
Ó hÉigeartaigh chimed in. “It would be nice to worry about consciousness,” he said, “but we won’t have the luxury to worry about consciousness if we haven’t first solved the technical safety challenges.”
People get overly preoccupied with what superintelligent AI is, Tallinn said. What form will it take? Should we worry about a single AI taking over, or an army of them? “From our perspective, the important thing is what AI does,” he stressed. And that, he believes, may still be up to humans – for now.
This piece originally appeared in Popular Science magazine


AI will upload and access our memories, predicts Siri co-inventor

"Instead of asking how smart we can make our machines, let's ask how smart our machines can make us."
April 26, 2017
Instead of replacing humans with robots, artificial intelligence should be used more for augmenting human memory and other human weaknesses, Apple Inc. executive Tom Gruber suggested at the TED 2017 conference yesterday (April 25, 2017).
Thanks to the internet and our smartphones, much of our personal data is already being captured, notes Gruber, who was one of the inventors of the voice-controlled intelligent assistant Siri. Future AI memory enhancement could be especially life-changing for those with Alzheimer’s or dementia, he suggested.
Limitless
“Superintelligence should give us super-human abilities,” he said. “As machines get smarter, so do we. Artificial intelligence can enable partnerships where each human on the team is doing what they do best. Instead of asking how smart we can make our machines, let’s ask how smart our machines can make us.
“I can’t say when or what form factors are involved, but I think it is inevitable,” he said. “What if you could have a memory that was as good as computer memory and is about your life? What if you could remember every person you ever met? How to pronounce their name? Their family details? Their favorite sports? The last conversation you had with them?”
Gruber’s ideas mesh with a prediction by Ray Kurzweil: “Once we have achieved complete models of human intelligence, machines will be capable of combining the flexible, subtle human levels of pattern recognition with the natural advantages of machine intelligence, in speed, memory capacity, and, most importantly, the ability to quickly share knowledge and skills.”
Two projects announced last week aim in that direction: Facebook’s plan to develop a non-invasive brain-computer interface that will let you type at 100 words per minute and Elon Musk’s proposal that we become superhuman cyborgs to deal with superintelligent AI.
But trusting machines also raises security concerns, Gruber warned. “We get to choose what is and is not recalled,” he said. “It’s absolutely essential that this be kept very secure.” 
Quadriplegia patient uses brain-computer interface to move his arm by just thinking
New Braingate design replaces robot arm with muscle-stimulating system
April 26, 2017
Bill Kochevar, who was paralyzed below his shoulders in a bicycling accident eight years ago, is the first person with quadriplegia to have arm and hand movements restored without robot help (credit: Case Western Reserve University/Cleveland FES Center)
A research team led by Case Western Reserve University has developed the first implanted brain-recording and muscle-stimulating system to restore arm and hand movements for quadriplegic patients.*
In a proof-of-concept experiment, the system included a brain-computer interface with recording electrodes implanted under Kochevar’s skull and a functional electrical stimulation (FES) system that activated his arm and hand — reconnecting his brain to paralyzed muscles.
The research was part of the ongoing BrainGate2 pilot clinical trial being conducted by a consortium of academic and other institutions to assess the safety and feasibility of the implanted brain-computer interface (BCI) system in people with paralysis. Previous Braingate designs required a robot arm.
In 2012 research, Jan Scheuermann, who has quadriplegia, was able to feed herself using a brain-machine interface and a computer-driven robot arm (credit: UPMC)
Kochevar’s eight years of muscle atrophy first required rehabilitation. The researchers exercised Kochevar’s arm and hand with cyclical electrical stimulation patterns. Over 45 weeks, his strength, range of motion, and endurance improved. As he practiced movements, the researchers adjusted stimulation patterns to further his abilities.
To prepare him to use his arm again, Kochevar learned how to use his own brain signals to move a virtual-reality arm on a computer screen. The team then implanted the FES system’s 36 electrodes that animate muscles in the upper and lower arm, allowing him to move the actual arm.
Kochevar can now make each joint in his right arm move individually. Or, just by thinking about a task such as feeding himself or getting a drink, the muscles are activated in a coordinated fashion.
Neural activity (generated when Kochevar imagines movement of his arm and hand) is recorded from two 96-channel microelectrode arrays implanted in the motor cortex, on the surface of the brain. The implanted brain-computer interface translates the recorded brain signals into specific command signals that determine the amount of stimulation to be applied to each functional electrical stimulation (FES) electrode in the hand, wrist, arm, elbow and shoulder, and to a mobile arm support. (credit: A Bolu Ajiboye et al./The Lancet)
“Our research is at an early stage, but we believe that this neuro-prosthesis could offer individuals with paralysis the possibility of regaining arm and hand functions to perform day-to-day activities, offering them greater independence,” said lead author Dr Bolu Ajiboye, Case Western Reserve University. “So far, it has helped a man with tetraplegia to reach and grasp, meaning he could feed himself and drink. With further development, we believe the technology could give more accurate control, allowing a wider range of actions, which could begin to transform the lives of people living with paralysis.”
Work is underway to make the brain implant wireless, and the investigators are improving decoding and stimulation patterns needed to make movements more precise. Fully implantable FES systems have already been developed and are also being tested in separate clinical research.
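To make the decode-and-stimulate pipeline described above more concrete, here is a toy Python sketch in which every number and mapping is invented for illustration: firing rates from the two 96-channel arrays are linearly decoded into a handful of intended-movement commands, which are then scaled into stimulation amplitudes for the 36 FES electrodes. The actual BrainGate decoder and stimulation patterns are far more sophisticated.

# Toy decode-to-stimulation pipeline (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)
n_channels = 2 * 96          # two 96-channel microelectrode arrays
n_commands = 5               # e.g. hand, wrist, elbow, shoulder, arm support
n_electrodes = 36            # implanted FES electrodes

decoder = rng.normal(scale=0.05, size=(n_commands, n_channels))   # assumed, pre-fit weights
stim_map = np.abs(rng.normal(size=(n_electrodes, n_commands)))    # command-to-electrode gains
max_stim_ma = 20.0                                                # assumed safety ceiling (mA)

def decode_and_stimulate(firing_rates):
    # Map one time-bin of firing rates (spikes/s per channel) to stimulation levels.
    commands = np.clip(decoder @ firing_rates, 0.0, 1.0)   # intended-movement intensities
    stim = stim_map @ commands                             # distribute across electrodes
    return np.clip(stim, 0.0, max_stim_ma)                 # respect the current limit

rates = rng.poisson(lam=20.0, size=n_channels).astype(float)
print(decode_and_stimulate(rates)[:6])   # stimulation (mA) for the first six electrodes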
A study of the work was published in The Lancet on March 28, 2017.
Writing in a linked Comment to The Lancet, Steve Perlmutter, M.D., University of Washington, said: “The goal is futuristic: a paralysed individual thinks about moving her arm as if her brain and muscles were not disconnected, and implanted technology seamlessly executes the desired movement… This study is groundbreaking as the first report of a person executing functional, multi-joint movements of a paralysed limb with a motor neuro-prosthesis. However, this treatment is not nearly ready for use outside the lab. The movements were rough and slow and required continuous visual feedback, as is the case for most available brain-machine interfaces, and had restricted range due to the use of a motorised device to assist shoulder movements… Thus, the study is a proof-of-principle demonstration of what is possible, rather than a fundamental advance in neuro-prosthetic concepts or technology. But it is an exciting demonstration nonetheless, and the future of motor neuro-prosthetics to overcome paralysis is brighter.”
* The study was funded by the US National Institutes of Health and the US Department of Veterans Affairs. It was conducted by scientists from Case Western Reserve University, Department of Veterans Affairs Medical Center, University Hospitals Cleveland Medical Center, MetroHealth Medical Center, Brown University, Massachusetts General Hospital, Harvard Medical School, Wyss Center for Bio and Neuroengineering. The investigational BrainGate technology was initially developed in the Brown University laboratory of John Donoghue, now the founding director of the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland. The implanted recording electrodes are known as the Utah array, originally designed by Richard Normann, Emeritus Distinguished Professor of Bioengineering at the University of Utah. The report in Lancet is the result of a long-running collaboration between Kirsch, Ajiboye and the multi-institutional BrainGate consortium. Leigh Hochberg, a neurologist and neuroengineer at Massachusetts General Hospital, Brown University and the VA RR&D Center for Neurorestoration and Neurotechnology in Providence, Rhode Island, directs the pilot clinical trial of the BrainGate system and is a study co-author.

Case | Man with quadriplegia employs injury bridging technologies to move again – just by thinking


Abstract of Restoration of reaching and grasping movements through brain-controlled muscle stimulation in a person with tetraplegia: a proof-of-concept demonstration
Background: People with chronic tetraplegia, due to high-cervical spinal cord injury, can regain limb movements through coordinated electrical stimulation of peripheral muscles and nerves, known as functional electrical stimulation (FES). Users typically command FES systems through other preserved, but unrelated and limited in number, volitional movements (eg, facial muscle activity, head movements, shoulder shrugs). We report the findings of an individual with traumatic high-cervical spinal cord injury who coordinated reaching and grasping movements using his own paralysed arm and hand, reanimated through implanted FES, and commanded using his own cortical signals through an intracortical brain–computer interface (iBCI).
Methods: We recruited a participant into the BrainGate2 clinical trial, an ongoing study that obtains safety information regarding an intracortical neural interface device, and investigates the feasibility of people with tetraplegia controlling assistive devices using their cortical signals. Surgical procedures were performed at University Hospitals Cleveland Medical Center (Cleveland, OH, USA). Study procedures and data analyses were performed at Case Western Reserve University (Cleveland, OH, USA) and the US Department of Veterans Affairs, Louis Stokes Cleveland Veterans Affairs Medical Center (Cleveland, OH, USA). The study participant was a 53-year-old man with a spinal cord injury (cervical level 4, American Spinal Injury Association Impairment Scale category A). He received two intracortical microelectrode arrays in the hand area of his motor cortex, and 4 months and 9 months later received a total of 36 implanted percutaneous electrodes in his right upper and lower arm to electrically stimulate his hand, elbow, and shoulder muscles. The participant used a motorised mobile arm support for gravitational assistance and to provide humeral abduction and adduction under cortical control. We assessed the participant’s ability to cortically command his paralysed arm to perform simple single-joint arm and hand movements and functionally meaningful multi-joint movements. We compared iBCI control of his paralysed arm with that of a virtual three-dimensional arm. This study is registered with ClinicalTrials.gov, number NCT00912041.
Findings: The intracortical implant occurred on Dec 1, 2014, and we are continuing to study the participant. The last session included in this report was Nov 7, 2016. The point-to-point target acquisition sessions began on Oct 8, 2015 (311 days after implant). The participant successfully cortically commanded single-joint and coordinated multi-joint arm movements for point-to-point target acquisitions (80–100% accuracy), using first a virtual arm and second his own arm animated by FES. Using his paralysed arm, the participant volitionally performed self-paced reaches to drink a mug of coffee (successfully completing 11 of 12 attempts within a single session 463 days after implant) and feed himself (717 days after implant).
Interpretation: To our knowledge, this is the first report of a combined implanted FES+iBCI neuroprosthesis for restoring both reaching and grasping movements to people with chronic tetraplegia due to spinal cord injury, and represents a major advance, with a clear translational path, for clinically viable neuroprostheses for restoration of reaching and grasping after paralysis.
Funding: National Institutes of Health, Department of Veterans Affairs.

What if you could type directly from your brain at 100 words per minute?

Former DARPA director reveals Facebook's secret research projects to create a non-invasive brain-computer interface and haptic "skin hearing"
April 19, 2017
Regina Dugan, PhD, Facebook VP of Engineering, Building8, revealed today (April 19, 2017) at Facebook F8 conference 2017 a plan to develop a non-invasive brain-computer interface that will let you type at 100 wpm — by decoding neural activity devoted to speech.
Dugan previously headed Google’s Advanced Technology and Projects Group, and before that, was Director of the Defense Advanced Research Projects Agency (DARPA).
She explained in a Facebook post that over the next two years, her team will be building systems that demonstrate “a non-invasive system that could one day become a speech prosthetic for people with communication disorders or a new means for input to AR [augmented reality].”
Dugan said that “even something as simple as a ‘yes/no’ brain click … would be transformative.” That simple level has been achieved by using functional near-infrared spectroscopy (fNIRS) to measure changes in blood oxygen levels in the frontal lobes of the brain, as KurzweilAI recently reported. (Near-infrared light can penetrate the skull and partially into the brain.)
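To make the "yes/no brain click" idea concrete, here is a minimal sketch of the kind of binary classifier such a system might run over fNIRS blood-oxygen features. Everything in it (the channel count, the features, the data) is simulated for illustration; it is not Facebook's or the cited researchers' pipeline.

```python
# Hypothetical sketch: classifying a binary "yes/no" intent from fNIRS features.
# The data are simulated; a real system would use measured HbO/HbR signals.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_channels = 200, 16          # assumed trial count and optode channels
labels = rng.integers(0, 2, n_trials)   # 0 = "no", 1 = "yes"

# Simulated mean oxygenated-hemoglobin change per channel over a trial window;
# "yes" trials get a small frontal-channel offset so the task is learnable.
features = rng.normal(size=(n_trials, n_channels))
features[labels == 1, :4] += 0.8

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, features, labels, cv=5)
print(f"cross-validated yes/no accuracy: {scores.mean():.2f}")
```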
Dugan agrees that optical imaging is the best place to start, but her Building8 team plans to go way beyond that research — sampling hundreds of times per second and precise to millimeters. The research team began working on the brain-typing project six months ago, and she now has a team of more than 60 researchers who specialize in optical neural imaging systems that push the limits of spatial resolution, and in machine-learning methods for decoding speech and language.
The research is headed by Mark Chevillet, previously an adjunct professor of neuroscience at Johns Hopkins University.
Besides replacing smartphones, the system would be a powerful speech prosthetic, she noted — allowing paralyzed patients to “speak” at normal speed.
Dugan revealed one specific method the researchers are currently working on to achieve that: a ballistic filter that produces quasi-ballistic photons (avoiding diffusion), creating a narrow beam for precise targeting — combined with a new method of detecting blood-oxygen levels.
Dugan also described a system that may one day allow hearing-impaired people to hear directly via vibrotactile sensors embedded in the skin. “In the 19th century, Braille taught us that we could interpret small bumps on a surface as language,” she said. “Since then, many techniques have emerged that illustrate our brain’s ability to reconstruct language from components.” Today, she demonstrated “an artificial cochlea of sorts and the beginnings of a new ‘haptic vocabulary’.”


In a neurotechnology future, human-rights laws will need to be revisited

April 28, 2017

New human rights laws to prepare for rapid current advances in neurotechnology that may put “freedom of mind” at risk have been proposed in the open access journal Life Sciences, Society and Policy.
Four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy, the authors of the study suggest: The right to cognitive liberty, the right to mental privacy, the right to mental integrity, and the right to psychological continuity.
Advances in neural engineering, brain imaging, and neurotechnology put freedom of the mind at risk, says Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel. “Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”
Potential misuses
Sophisticated brain imaging and the development of brain-computer interfaces have moved away from a clinical setting into the consumer domain. There’s a risk that the technology could be misused and create unprecedented threats to personal freedom. For example:
  • Uses in criminal court as a tool for assessing criminal responsibility or even the risk of re-offending.*
  • Consumer companies using brain imaging for “neuromarketing” to understand consumer behavior and elicit desired responses from customers.
  • “Brain decoders” that can turn a person’s brain imaging data into images, text or sound.**
  • Hacking, allowing a third-party to eavesdrop on someone’s mind.***
International human rights laws currently make no specific mention of neuroscience. But as with the genetic revolution, the on-going neurorevolution will require consideration of human-rights laws and even the creation of new ones, the authors suggest.
* “A possibly game-changing use of neurotechnology in the legal field has been illustrated by Aharoni et al. (2013). In this study, researchers followed a group of 96 male prisoners at prison release. Using fMRI, prisoners’ brains were scanned during the performance of computer tasks in which they had to make quick decisions and inhibit impulsive reactions. The researchers followed the ex-convicts for 4 years to see how they behaved. The study results indicate that those individuals showing low activity in a brain region associated with decision-making and action (the Anterior Cingulate Cortex, ACC) are more likely to commit crimes again within 4 years of release (Aharoni et al. 2013). According to the study, the risk of recidivism is more than double in individuals showing low activity in that region of the brain than in individuals with high activity in that region. Their results suggest a “potential neurocognitive biomarker for persistent antisocial behavior”. In other words, brain scans can theoretically help determine whether certain convicted persons are at an increased risk of reoffending if released.” — Marcello Ienca and Roberto Andorno/Life Sciences, Society and Policy
** NASA and Jaguar are jointly developing a technology called Mind Sense, which will measure brainwaves to monitor the driver’s concentration in the car (Biondi and Skrypchuk 2017). If brain activity indicates poor concentration, then the steering wheel or pedals could vibrate to raise the driver’s awareness of the danger. This technology can contribute to reducing the number of accidents caused by drivers who are stressed or distracted. However, it also theoretically opens the possibility for third parties to use brain decoders to eavesdrop on people’s states of mind. — Marcello Ienca and Roberto Andorno/Life Sciences, Society and Policy
*** Criminally motivated actors could selectively erase memories from their victims’ brains to prevent being identified by them later on, or simply to cause them harm. In a longer-term scenario, such tools could be used by surveillance and security agencies to selectively erase dangerous or inconvenient memories from people’s brains, as portrayed in the movie Men in Black with the so-called neuralyzer. — Marcello Ienca and Roberto Andorno/Life Sciences, Society and Policy

Abstract of Towards new human rights in the age of neuroscience and neurotechnology

Rapid advancements in human neuroscience and neurotechnology open unprecedented possibilities for accessing, collecting, sharing and manipulating information from the human brain. Such applications raise important challenges to human rights principles that need to be addressed to prevent unintended consequences. This paper assesses the implications of emerging neurotechnology applications in the context of the human rights framework and suggests that existing human rights may not be sufficient to respond to these emerging issues. After analysing the relationship between neuroscience and human rights, we identify four new rights that may become of great relevance in the coming decades: the right to cognitive liberty, the right to mental privacy, the right to mental integrity, and the right to psychological continuity.

references:

  • Marcello Ienca and Roberto Andorno. Towards new human rights in the age of neuroscience and neurotechnology. Life Sciences, Society and Policy 2017, 13:5. DOI: 10.1186/s40504-017-0050-1 (open access)

  • Improving Palliative Care with Deep Learning                    
    Abstract— Improving the quality of end-of-life care for hospitalized patients is a priority for healthcare organizations. Studies have shown that physicians tend to over-estimate prognoses, which in combination with treatment inertia results in a mismatch between patients’ wishes and actual care at the end of life. We describe a method to address this problem using Deep Learning and Electronic Health Record (EHR) data, which is currently being piloted, with Institutional Review Board approval, at an academic medical center. The EHR data of admitted patients are automatically evaluated by an algorithm, which brings patients who are likely to benefit from palliative care services to the attention of the Palliative Care team. The algorithm is a Deep Neural Network trained on the EHR data from previous years to predict all-cause 3-12 month mortality of patients as a proxy for patients who could benefit from palliative care. Our predictions enable the Palliative Care team to take a proactive approach in reaching out to such patients, rather than relying on referrals from treating physicians or conducting time-consuming chart reviews of all patients. We also present a novel interpretation technique which we use to provide explanations of the model’s predictions.
    I. INTRODUCTION Studies have shown that approximately 80% of Americans would like to spend their final days at home if possible, but only 20% do [1]. In fact, up to 60% of deaths happen in an acute care hospital, with patients receiving aggressive care in their final days. Access to palliative care services in the United States has been on the rise over the past decade. In 2008, 53% of all hospitals with fifty or more beds reported having palliative care teams, rising to 67% in 2015 [2]. However, despite increasing access, data from the National Palliative Care Registry estimates that less than half of the 7-8% of all hospital admissions that need palliative care actually receive it [3]. Though a significant reason for this gap comes from the palliative care workforce shortage [4] and the incentives for health systems to employ them, technology can still play a crucial role by efficiently identifying patients who may benefit most from palliative care, but might otherwise be overlooked under current care models. We focus on two aspects of this problem. First, physicians may not refer patients likely to benefit from palliative care for multiple reasons such as overoptimism, time pressures, or treatment inertia [5]. This may lead to patients failing to have their wishes carried out at end of life [6] and overuse of aggressive care. Second, a shortage of palliative care professionals makes proactive identification of candidate patients via manual chart review an expensive and time-consuming process. The criteria for deciding which patients benefit from palliative care can be hard to state explicitly. Our approach uses deep learning to screen patients admitted to the hospital to identify those who are most likely to have palliative care needs. The algorithm addresses a proxy problem - to predict the mortality of a given patient within the next 12 months - and uses that prediction for making recommendations for palliative care referral. This frees the palliative care team from manual chart review of every admission and helps counter the potential biases of treating physicians by providing an objective recommendation based on the patient’s EHR. Existing tools to identify such patients have limitations, and they are discussed in the next section…. : https://arxiv.org/pdf/1711.06402.pdf
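As a rough illustration of the proxy task described in this abstract (a neural network over EHR-derived features predicting 3-12 month mortality, with high-risk admissions flagged for palliative review), here is a minimal, hypothetical Python sketch. The features, network size, and thresholds are invented; the authors' actual model is trained on real EHR codes and is far larger.

```python
# Hypothetical sketch of the proxy task: predict 3-12 month all-cause mortality
# from EHR-derived features, then flag high-risk admissions for palliative review.
# Features and labels are simulated; this is not the paper's model.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_patients, n_features = 5000, 50               # e.g. counts of diagnosis/procedure codes
X = rng.poisson(1.0, size=(n_patients, n_features)).astype(float)
risk = X[:, :5].sum(axis=1)                     # toy latent risk signal
y = (risk + rng.normal(0, 2, n_patients) > 8).astype(int)   # 1 = died within 3-12 months

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]
print("AUC:", round(roc_auc_score(y_te, proba), 3))
flagged = np.where(proba > 0.9)[0]              # admissions surfaced to the palliative care team
print("patients flagged for review:", len(flagged))
```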


Bioinspired bio-voltage memristors
Abstract
Memristive devices are promising candidates to emulate biological computing. However, the typical switching voltages (0.2-2 V) in previously described devices are much higher than the amplitude in biological counterparts. Here we demonstrate a type of diffusive memristor, fabricated from the protein nanowires harvested from the bacterium Geobacter sulfurreducens, that functions at the biological voltages of 40-100 mV. Memristive function at biological voltages is possible because the protein nanowires catalyze metallization. Artificial neurons built from these memristors not only function at biological action potentials (e.g., 100 mV, 1 ms) but also exhibit temporal integration close to that in biological neurons. The potential of using the memristor to directly process biosensing signals is also demonstrated….: https://www.nature.com/articles/s41467-020-15759-y 
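The "temporal integration" mentioned here can be pictured with a generic leaky integrate-and-fire model operating at roughly 100 mV amplitudes: sub-threshold input pulses accumulate and eventually trigger an output spike. The sketch below shows only that behavioural picture, with assumed time constants; it does not model the protein-nanowire device physics reported in the paper.

```python
# Generic leaky integrate-and-fire sketch at biological signal amplitudes (~100 mV, ~1 ms pulses).
# Behavioural illustration only; not a model of the diffusive-memristor physics in the paper.
import numpy as np

dt = 1e-4                 # 0.1 ms time step
tau = 20e-3               # leak time constant (assumed)
threshold = 0.100         # 100 mV firing threshold
gain = 0.025              # each 1 ms input pulse adds ~25 mV to the state (assumed)

t = np.arange(0.0, 0.05, dt)
pulse = ((t * 1e3) % 5) < 1.0     # 1 ms wide input pulses every 5 ms

v, spikes = 0.0, []
for i in range(len(t)):
    v *= np.exp(-dt / tau)                # passive leak
    if pulse[i]:
        v += gain * dt / 1e-3             # spread the per-pulse increment over the 1 ms pulse
    if v >= threshold:                    # integrated sub-threshold pulses eventually fire
        spikes.append(round(t[i] * 1e3, 1))
        v = 0.0

print("output spike times (ms):", spikes)
```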


https://www.goodreads.com/book/show/56377201-ai-2041

AI 2041: Ten Visions for Our Future  by Kai-Fu Lee 

This inspired collaboration between a pioneering technologist and a visionary writer of science fiction offers bold and urgent insights.

People want AI for its brain power, not its people skills

Published on May 4, 2017
By Anand Rao, Partner at PwC Analytics - Innovation Lead and Artificial Intelligence Expert
Artificial intelligence (“AI”) is fast becoming the next great democratizer for services. In the medical field, 56% of consumers surveyed see its potential to lower cost and break down barriers in providing medical access to lower income adults. And the beginnings of that technology can already be seen: an AI system has successfully identified autism in babies with 81% accuracy, while a Stanford-led experiment used AI to identify skin cancer with 91% accuracy.
But as much as these technologies develop and become more successful in application, the majority of consumers still want a human touch to accompany cutting-edge tech. While consumers trust AI to make vital decisions on the back end in terms of data processing and analysis, they still prefer a human to deliver information to them or to help explain a result. According to our latest Consumer Intelligence Series survey, 77% would prefer to visit a doctor in person rather than take an at-home assessment with a robotic smart kit, and only 22% think it’s likely that people will turn entirely to an AI assistant instead of a human doctor.
The same sentiment is echoed in office environments. AI is great for the processing it brings to the table, but not for making final decisions. Executives see AI as a liberator when it comes to repetitive tasks in their day-to-day life. Our survey reveals that paperwork, scheduling and timesheets seemed like appropriate, tedious tasks to be delegated to a machine. However, executives were less confident about AI’s ability to handle HR-related tasks, which require a human touch. Sixty-nine percent of executives believe AI would be as fair as, or even fairer than, a human manager when making promotion decisions. But in practice, 86% would want to speak with a human after a review decision made by AI.
Ultimately, people are optimistic that AI will save time and provide more services to even more people. But when it comes to tasks that require emotional intelligence, they aren’t ready to hand over the reins just yet. The emotional skills that humans innately have as, well, humans, are their strength. Where AI is logical and methodical, humans can be inventive and empathetic. The role of humans will be to push AI further: to create more useful programs, to develop more unique ways for machines to think, and to tackle new problems. All of which will lead to a smarter – both intellectually and emotionally – world for all.
Man and machine together are better than either one alone.


Teleoperating robots with virtual reality: getting inside a robot’s head

October 6, 2017
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a virtual-reality (VR) system that lets you teleoperate a robot using an Oculus Rift or HTC Vive VR headset.
CSAIL’s “Homunculus Model” system (the classic notion of a small human sitting inside the brain and controlling the actions of the body) embeds you in a VR control room with multiple sensor displays, making it feel like you’re inside the robot’s head. By using gestures, you can control the robot’s matching movements to perform various tasks.
The system can be connected either via a wired local network or via a wireless network connection over the Internet. (The team demonstrated that the system could pilot a robot from hundreds of miles away, testing it on a hotel’s wireless network in Washington, DC to control Baxter at MIT.)
According to CSAIL postdoctoral associate Jeffrey Lipton, lead author on an open-access arXiv paper about the system (presented this week at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in Vancouver), “By teleoperating robots from home, blue-collar workers would be able to telecommute and benefit from the IT revolution just as white-collar workers do now.”
Jobs for video-gamers too
The researchers imagine that such a system could even help employ jobless video-gamers by “game-ifying” manufacturing positions. (Users with gaming experience had the most ease with the system, the researchers found in tests.)
To make these movements possible, the human’s space is mapped into the virtual space, and the virtual space is then mapped into the robot space to provide a sense of co-location.
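That mapping can be thought of as a composition of rigid-body transforms: user frame to virtual control room, control room to robot base. The sketch below illustrates the idea with invented calibration values; it is not the CSAIL system's code.

```python
# Generic sketch of the space-mapping idea: a hand pose tracked in the user's frame is mapped
# into the virtual control room, then into the robot's frame, by composing rigid transforms.
# Frames and offsets below are invented for illustration.
import numpy as np

def transform(rotation_z_deg, translation):
    """Homogeneous 4x4 transform: rotation about z followed by a translation."""
    a = np.radians(rotation_z_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0],
                 [np.sin(a),  np.cos(a), 0],
                 [0,          0,         1]]
    T[:3, 3] = translation
    return T

# Assumed calibration transforms (user -> virtual control room -> robot base).
T_virtual_from_user = transform(0,   [0.0, 0.0, -0.3])   # seat the user inside the control room
T_robot_from_virtual = transform(90, [0.6, 0.1,  0.2])   # align the control room with the robot

hand_in_user_frame = np.array([0.25, -0.10, 0.30, 1.0])  # tracked VR controller position (homogeneous)
hand_in_robot_frame = T_robot_from_virtual @ T_virtual_from_user @ hand_in_user_frame
print("commanded gripper target in robot frame:", np.round(hand_in_robot_frame[:3], 3))
```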
The team demonstrated the Homunculus Model system using the Baxter humanoid robot from Rethink Robotics, but the approach could work on other robot platforms, the researchers said.
In tests involving pick-and-place, assembly, and manufacturing tasks (such as “pick an item and stack it for assembly”), CSAIL’s Homunculus Model system was compared with existing state-of-the-art automated remote control: it had a 100% success rate, versus a 66% success rate for the automated systems. The CSAIL system also grasped objects successfully 95 percent of the time and was 57 percent faster at doing tasks.*
“This contribution represents a major milestone in the effort to connect the user with the robot’s space in an intuitive, natural, and effective manner,” says Oussama Khatib, a computer science professor at Stanford University who was not involved in the paper.
The team plans to eventually focus on making the system more scalable, with many users and different types of robots that are compatible with current automation technologies.
* The Homunculus Model system solves a delay problem with existing systems, which use a GPU or CPU, introducing delay. 3D reconstruction from the stereo HD cameras is instead done by the human’s visual cortex, so the user constantly receives visual feedback from the virtual world with minimal latency (delay). This also avoids user fatigue and nausea caused by motion sickness (known as simulator sickness) generated by “unexpected incongruities, such as delays or relative motions, between proprioception and vision [that] can lead to the nausea,” the researchers explain in the paper.

MITCSAIL | Operating Robots with Virtual Reality


Abstract of Baxter’s Homunculus: Virtual Reality Spaces for Teleoperation in Manufacturing
Expensive specialized systems have hampered the development of telerobotic systems for manufacturing. In this paper we demonstrate a telerobotic system which can reduce the cost of such systems by leveraging commercial virtual reality (VR) technology and integrating it with existing robotics control software. The system runs on a commercial gaming engine using off-the-shelf VR hardware. This system can be deployed on multiple network architectures, from a wired local network to a wireless network connection over the Internet. The system is based on the homunculus model of mind, wherein we embed the user in a virtual reality control room. The control room allows for multiple sensor displays and dynamic mapping between the user and the robot, and does not require the production of duals for the robot or its environment. The control room is mapped to a space inside the robot to provide a sense of co-location within the robot. We compared our system with state-of-the-art automation algorithms for assembly tasks, showing a 100% success rate for our system compared with a 66% success rate for automated systems. We demonstrate that our system can be used for pick and place, assembly, and manufacturing tasks.


Can machine-learning improve cardiovascular risk prediction using routine clinical data?
Stephen F. Weng, Jenna Reps, Joe Kai, Jonathan M. Garibaldi, Nadeem Qureshi
Abstract
Background
Current approaches to predict cardiovascular risk fail to identify many people who would benefit from preventive treatment, while others receive unnecessary intervention. Machine-learning offers opportunity to improve accuracy by exploiting complex interactions between risk factors. We assessed whether machine-learning can improve cardiovascular risk prediction.
Methods
Prospective cohort study using routine clinical data of 378,256 patients from UK family practices, free from cardiovascular disease at outset. Four machine-learning algorithms (random forest, logistic regression, gradient boosting machines, neural networks) were compared to an established algorithm (American College of Cardiology guidelines) to predict first cardiovascular event over 10-years. Predictive accuracy was assessed by area under the ‘receiver operating curve’ (AUC); and sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) to predict 7.5% cardiovascular risk (threshold for initiating statins).
Findings
24,970 incident cardiovascular events (6.6%) occurred. Compared to the established risk prediction algorithm (AUC 0.728, 95% CI 0.723–0.735), machine-learning algorithms improved prediction: random forest +1.7% (AUC 0.745, 95% CI 0.739–0.750), logistic regression +3.2% (AUC 0.760, 95% CI 0.755–0.766), gradient boosting +3.3% (AUC 0.761, 95% CI 0.755–0.766), neural networks +3.6% (AUC 0.764, 95% CI 0.759–0.769). The highest achieving (neural networks) algorithm predicted 4,998/7,404 cases (sensitivity 67.5%, PPV 18.4%) and 53,458/75,585 non-cases (specificity 70.7%, NPV 95.7%), correctly predicting 355 (+7.6%) more patients who developed cardiovascular disease compared to the established algorithm.
Conclusions
Machine-learning significantly improves accuracy of cardiovascular risk prediction, increasing the number of patients identified who could benefit from preventive treatment, while avoiding unnecessary treatment of others.
Citation: Weng SF, Reps J, Kai J, Garibaldi JM, Qureshi N (2017) Can machine-learning improve cardiovascular risk prediction using routine clinical data? PLoS ONE 12(4): e0174944. https://doi.org/10.1371/journal.pone.0174944
Editor: Bin Liu, Harbin Institute of Technology Shenzhen Graduate School, CHINA
Received: December 14, 2016; Accepted: March 18, 2017; Published: April 4, 2017
Copyright: © 2017 Weng et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: This dataset contains patient level health records with intellectual property rights held by The Crown copyright, which is subject to UK information governance laws. The authors will make their data available upon specific requests subject to the requestor obtaining ethical and research approvals from the Clinical Practice Research Datalink Independent Scientific Advisory Committee (https://www.cprd.com/intro.asp) at the UK Medicines and Health Products Regulatory Agency.
Funding: This paper presents independent research funded by the National Institute for Health Research School for Primary Care Research (NIHR SPCR): personal training fellowship award for SW from 2015-2018. URL: https://www.spcr.nihr.ac.uk/trainees. The views expressed are those of the authors and not necessarily those of the NIHR, the NHS, or the Department of Health.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Globally, cardiovascular disease (CVD) is the leading cause of morbidity and mortality. In 2012, there were 17.5 million deaths from CVD, with 7.4 million deaths due to coronary heart disease (CHD) and 6.7 million deaths due to stroke [1]. Established approaches to CVD risk assessment, such as that recommended by the American Heart Association/American College of Cardiology (ACC/AHA), predict future risk of CVD based on well-established risk factors such as hypertension, cholesterol, age, smoking, and diabetes. These risk factors have recognised aetiological associations with CVD and feature within most CVD risk prediction tools (e.g. ACC/AHA [2], QRISK2 [3], Framingham [4], Reynolds [5]). There remain a large number of individuals at risk of CVD who fail to be identified by these tools, while some individuals not at risk are given preventive treatment unnecessarily. For instance, approximately half of myocardial infarctions (MIs) and strokes will occur in people who are not predicted to be at risk of cardiovascular disease [6].
All standard CVD risk assessment models make an implicit assumption that each risk factor is related in a linear fashion to CVD outcomes [7]. Such models may thus oversimplify complex relationships which include large numbers of risk factors with non-linear interactions. Approaches that better incorporate multiple risk factors, and determine more nuanced relationships between risk factors and outcomes need to be explored.
Machine-learning (ML) offers an alternative approach to standard prediction modelling that may address current limitations. It has potential to transform medicine by better exploiting ‘big data’ for algorithm development [7]. ML developed from the study of pattern recognition and computational learning (so-called ‘artificial intelligence’). This relies on a computer to learn all complex and non-linear interactions between variables by minimising the error between predicted and observed outcomes [8]. In addition to potentially improving prediction, ML may identify latent variables, which are unlikely to be observed but might be inferred from other variables [9].
To date, there has been no large-scale investigation applying machine-learning for prognostic assessment in the general population, using routine clinical data. The aim of this study was to evaluate whether machine-learning can improve accuracy of cardiovascular risk prediction within a large general primary care population. We also sought to determine which class of machine-learning algorithm has highest predictive accuracy.
Methods
Data source
The cohort of patients was derived from the Clinical Practice Research Datalink (CPRD), anonymized electronic medical records from nearly 700 UK family practices documenting demographic details, history of medical conditions, prescription drugs, acute medical outcomes, referrals to specialists, admissions to hospitals, and biological results. The database is representative of the UK general population and linked to hospital (secondary care) records [10]. Ethical and research approvals were granted by the Independent Scientific Advisory Committee (ISAC) at CPRD (number 14_205).
Study population
The cohort comprised patients aged 30 to 84 years at baseline who were registered with a family practice and had complete data for the eight core baseline variables (gender, age, smoking status, systolic blood pressure, blood pressure treatment, total cholesterol, HDL cholesterol, and diabetes) used in the established ACC/AHA 10-year risk prediction model [2]. The baseline date was set as the 1st of January 2005, thus allowing all patients within the cohort to be followed up for 10 years. The end of the study period was specified as the 1st of January 2015, the latest date for which CPRD had provided an updated dataset. Individuals with a previous history of CVD, with inherited lipid disorders, prescribed lipid-lowering drugs, or outside the specified age range prior to or on the baseline date were excluded from the analysis.
Risk factor variables
The eight core risk variables (above) were used to derive a baseline risk prediction model using the published equations in the 2013 ACC/AHA guidelines for assessment of CVD risk [2]. To compare the machine-learning algorithms, an additional 22 variables with potential to be associated with CVD were included in the analysis. These variables were selected based on their inclusion in published CVD risk algorithms [2–5] and within the literature on other potential CVD risk factors [11–21], and were further reviewed by practising clinicians (NQ, JK).
In nine of the additional continuous variables, there were some levels of missing data. Median imputation, a common approach to dealing with missing values in machine-learning algorithms [22], was used. It was also hypothesized that missing values in certain clinical variables (e.g. BMI and laboratory results) may indicate a perception of reduced relevance in certain patients, given the under-recording of normal BMI values in primary care medical records [23]. Dummy variables were created to indicate whether these continuous variable values were missing. For the demographic categorical variables, Townsend deprivation index (28) and ethnicity, missing values were given a separate category of ‘unknown’ in the analyses. In total, there were 30 variables (excluding dummy variables for missing values) analysed in the machine-learning models prior to baseline.
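A minimal pandas sketch of that missing-data strategy (median imputation plus a dummy "missing" indicator for continuous variables, and an explicit "unknown" level for categoricals) might look as follows; the column names are illustrative, not the CPRD field names.

```python
# Sketch of the missing-data handling described above: median-impute continuous variables,
# keep a dummy flag recording that the value was missing, and give categoricals an
# explicit "unknown" level. Column names and values are invented.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "bmi":       [27.1, np.nan, 31.4, np.nan, 24.9],
    "hba1c":     [41.0, 38.5, np.nan, 44.2, np.nan],
    "ethnicity": ["white", None, "south_asian", None, "black"],
})

for col in ["bmi", "hba1c"]:
    df[col + "_missing"] = df[col].isna().astype(int)   # missingness itself may carry signal
    df[col] = df[col].fillna(df[col].median())          # median imputation

df["ethnicity"] = df["ethnicity"].fillna("unknown")     # separate category for missing categoricals
print(df)
```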
Outcome
The primary outcome was the first recorded diagnosis of a fatal or non-fatal cardiovascular event documented in the patient’s primary or secondary care computerised record. In primary care, CVD is labelled and electronically recorded by UK National Health Service (NHS) Read codes. Further, confirmation of outcomes in secondary care (Hospital Episodes Statistics) utilised ICD-10 codes, specifically I20 to I25 for coronary (ischaemic) heart conditions and I60 to I69 for cerebrovascular conditions.
Machine-learning algorithms
To compare machine-learning risk algorithms, the study population was split into a ‘training’ cohort, in which the CVD risk algorithms were derived, and a ‘validation’ cohort, in which the algorithms were applied and tested. The ‘training’ cohort was derived by random sampling of 75% of the extracted CPRD cohort, and the ‘validation’ cohort comprised the remaining 25%. Four commonly used classes of machine-learning algorithms were utilised: logistic regression [25], random forest [26], gradient boosting machines [27], and neural networks [28]. These algorithms were selected based on the ease of implementation into current UK primary care electronic health records. Development of the risk algorithms in the training cohort and application of the risk algorithms to the validation cohort were completed using RStudio with the library packages caret (http://CRAN.R-project.org/package=caret) for neural networks and h2o (http://www.h2o.ai) for the remaining algorithms. Each model’s hyperparameters were determined using a grid search and twofold cross-validation on the training cohort to find the values which led to the best performance. Further details on the machine-learning models are described in the S1 Text.
Statistical analysis
Descriptive characteristics of the study population were provided, including number (%) and mean (SD) for categorical and continuous variables, respectively. The performance of the machine-learning prediction algorithms, developed from the training cohort, was assessed using the validation cohort by calculating Harrell’s c-statistic [29], a measure of the total area under the receiver operating characteristic curve (AUC). Standard errors and 95% confidence intervals were estimated for the c-statistic using a jack-knife procedure [30]. Additionally, using thresholds corresponding to a 10-year CVD risk of > 7.5%, as recommended by the ACC/AHA guidelines [2] for initiating lipid-lowering therapy, binary classification analysis was used to compare observed and expected prediction of cases and non-cases in the validation cohort. This process provided sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). The statistical analyses assessing algorithm performance were performed using STATA 13 MP4.
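For readers who want to see the shape of this comparison, the following is a hedged scikit-learn sketch of the study design: train the four algorithm classes on a 75% split, then report AUC plus sensitivity and specificity at the 7.5% ten-year-risk threshold on the held-out 25%. It uses simulated data and scikit-learn stand-ins; the study itself was run in R (caret and h2o) on CPRD records.

```python
# Hedged sketch of the comparison design described above, on simulated data with
# roughly the study's 6.6% event rate. Not the authors' code or data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

X, y = make_classification(n_samples=20000, n_features=30, weights=[0.934], random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
}

for name, model in models.items():
    risk = model.fit(X_tr, y_tr).predict_proba(X_va)[:, 1]
    auc = roc_auc_score(y_va, risk)
    tn, fp, fn, tp = confusion_matrix(y_va, risk > 0.075).ravel()   # 7.5% risk threshold
    print(f"{name}: AUC={auc:.3f}  sensitivity={tp/(tp+fn):.3f}  specificity={tn/(tn+fp):.3f}")
```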
Results
Data extraction
There were a total of 383,592 patients from 12 million patients in the CPRD database at baseline (1 Jan 2005) who met the eligibility criteria. After excluding 5,336 patients with coding errors (i.e. non-numerical entries for blood pressure/cholesterol) and extreme outlying observations (> 5 SDs from the mean), the analysis cohort consisted of 378,256 patients. This cohort was then randomly split into a 75% sample of 295,267 patients to train the machine-learning algorithms and a remaining sample of 82,989 patients for validation.
Study population characteristics
From a total cohort of 378,256 patients who were free from CVD at baseline, there were 24,970 incident cases (6.6%) of CVD during the 10-year follow-up period. There were significantly fewer women than men among CVD cases (42% F, 58% M), while there were only slightly more women than men among non-CVD cases (52% F, 48% M). The mean baseline age of CVD patients was 65.3 years compared to 57.3 years in non-CVD patients (p < 0.001).
Machine-learning variable rankings
All variables listed in Table 2 were inputs for the machine-learning models, trained using a cohort of 295,267 patients with 19,487 incident CVD cases (6.6%) developing over the 10-year follow-up period. Variable importance was determined by the coefficient effect size for the ACC/AHA baseline model and the machine-learning logistic regression. Random forest and gradient boosting machine models, based on decision trees, rank variable importance by the selection frequency of the variable as a decision node, while neural networks use the overall weighting of the variable within the model.
The standard risk factors in the ACC/AHA algorithm stratified by gender were age, total cholesterol, HDL cholesterol, smoking, blood pressure, and diabetes. Several of these risk factors in the ACC/AHA model (age, gender, smoking) were present as top ranked risk factors for all four machine-learning algorithms. However, diabetes, which is prominent in many CVD algorithms, was not present in the top ranked risk factors for any of the machine-learning models (though HbA1c was included as a proxy in random forest models). Other new risk factors not found in any previous risk prediction tools but determined by machine-learning included medical conditions such as COPD and severe mental illness, prescribing of oral corticosteroids, as well as biomarkers such as triglyceride levels. Random forest and gradient boosting machines were most similar in risk factor selection and rankings, with some discrepancies in ranking order and substitution of BMI for systolic blood pressure. Logistic regression and neural networks prioritised medical conditions such as atrial fibrillation, chronic kidney disease, and rheumatoid arthritis over biometric risk factors. Neural networks also put less weighting on age as a risk factor, and included ‘BMI missing’ as a protective risk factor of CVD.
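As a small illustration of how such rankings are read off different model families, the sketch below extracts impurity-based importances from a gradient boosting model and coefficient magnitudes from a logistic regression. These are common proxies, not the exact node-selection-frequency and weighting measures used in the paper, and the data and feature names are simulated.

```python
# Sketch of reading variable rankings off different model classes: impurity-based
# importances from a tree ensemble, coefficient magnitudes from logistic regression.
# Data and feature names are simulated.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=5000, n_features=10, n_informative=4, random_state=0)
names = [f"risk_factor_{i}" for i in range(X.shape[1])]

gbm = GradientBoostingClassifier(random_state=0).fit(X, y)
lr = LogisticRegression(max_iter=1000).fit(X, y)

tree_rank = sorted(zip(names, gbm.feature_importances_), key=lambda p: -p[1])
coef_rank = sorted(zip(names, np.abs(lr.coef_[0])), key=lambda p: -p[1])
print("gradient boosting top 3:", tree_rank[:3])
print("logistic regression top 3:", coef_rank[:3])
```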
Higher c-statistics indicate better algorithm discrimination. The baseline (BL) ACC/AHA 10-year risk prediction algorithm is provided for comparative purposes.
The ACC/AHA risk model served as a baseline for comparison (AUC 0.728, 95% CI 0.723–0.735). All machine-learning algorithms tested achieved statistically significant improvements in discrimination compared to the baseline model (from 1.7% for random forest to 3.6% for neural networks).
Classification analysis
The ACC/AHA baseline model predicted 4,643 cases correctly from 7,404 total cases, resulting in a sensitivity of 62.7% and PPV of 17.1%. The random forest algorithm resulted in a net increase of 191 CVD cases from the baseline model, increasing the sensitivity to 65.3% and PPV to 17.8% while logistic regression resulted in a net increase of 324 CVD cases (sensitivity 67.1%; PPV 18.3%). Gradient boosting machines and neural networks performed best, resulting in a net increase of 354 (sensitivity 67.5%; PPV 18.4%) and 355 CVD (sensitivity 67.5%; PPV 18.4%) cases correctly predicted, respectively.
The ACC/AHA baseline model correctly predicted 53,106 non-cases from 75,585 total non-cases, resulting in a specificity of 70.3% and NPV of 95.1%. The net increase in non-cases correctly predicted compared to the baseline ACC/AHA model ranged from 191 non-cases for the random forest algorithm to 355 non-cases for the neural networks. Full details on classification analysis can be found in S2 Table.
Discussion
Compared to an established AHA/ACC risk prediction algorithm, we found all machine-learning algorithms tested were better at identifying individuals who will develop CVD and those that will not. Unlike established approaches to risk prediction, the machine-learning methods used were not limited to a small set of risk factors, and incorporated more pre-existing medical conditions. Neural networks performed the best, with predictive accuracy improving by 3.6%. This is an encouraging step forward. For example, the addition of emerging biochemical risk factors, such as high sensitivity C-reactive protein, has recently achieved less than 1% improvement in CVD risk prediction [31].
Strengths
To our knowledge, this is the first investigation applying machine-learning to routine data in patients’ electronic records, demonstrating improved prediction of CVD risk in a large general population. The study also illustrates use of a range of machine learning methods, as well as evaluation techniques, that are lacking in existing applications of machine-learning to clinical data [32]. Our results are consistent with much smaller studies [33,34] in more selected populations. For example, a cohort study of 5,159 men in Northern Germany [34] found a similar 3.2% improvement in accuracy of prediction of coronary risk using a probabilistic neural network model.
The current study’s use of an array of machine-learning algorithms has suggested intriguing variations in the importance of different risk factors depending on the modelling technique. Models based on decision trees closely resembled each other, with gradient boosting machines out-performing random forests. Neural networks and logistic regression placed far more importance on categorical variables and CVD-associated medical conditions, clustering patients with similar characteristics in each group. This may help inform further exploration of diverse predictive risk factors, and future development of new risk prediction approaches and algorithms.
Finally, the importance of missing values or non-response is not often assessed in the development of conventional CVD risk prediction tools [2–5]. This study suggests that missing values, in particular for routine biometric variables such as BMI, are independent predictors of CVD. This is consistent with subjective assessment by clinicians, who may not record normal BMI values if patients appear at lower CVD risk [23].
Limitations
It is acknowledged that the “black-box” nature of machine-learning algorithms, in particular neural networks, can be difficult to interpret. This refers to the inherent complexity in how the risk factor variables interact and in their independent effects on the outcome. However, improvements in data visualization methods have improved understanding of these models, illustrating the importance of network connections between risk factors [35].
It is also recognised that as the number of potential risk factors increases, the complexity of the models can cause over-fitting, yielding implausible results. We addressed this by active and appropriate choice of pre-training, hyper-parameter selection, and regularisation [36].
Although we have cross-validated the performance of the machine-learning algorithms using an independent dataset, an approach commonly used for the development of established cardiovascular risk algorithms applied in clinical practice [2–5,24,37], it must be acknowledged that the jack-knife procedure may yield more accurate results, as demonstrated in genomic or proteomic datasets [38,39]. Moreover, these established risk prediction algorithms for use in clinical practice have been developed within a binary classification framework, which can often result in an unbalanced dataset. Ensemble learning has been demonstrated as a solution for constructing balanced datasets to enhance prediction performance [40]. These methods are not yet commonplace for developing risk prediction models in clinical datasets, but their utility should be explored in future studies.
Finally, we note the study was performed in a large cohort of primary care patients in the UK. However, its demonstration of machine-learning methods, and use of routine clinical data available within electronic records in several countries [41], underline applicability to other populations and health systems.
Future implications
CVD risk prediction has become increasingly important in clinical decision-making since the introduction of the recent ACC/AHA and similar guidelines internationally [2,42]. Machine-learning approaches offer the exciting prospect of achieving improved and more individualised CVD risk assessment. This may assist the drive towards personalised medicine, by better tailoring risk management to individual patients [43,44].
The improvement in predictive accuracy found in the current study should be further explored using machine learning with other large clinical datasets, in other populations, and in predicting other disease outcomes. Future investigation of the feasibility and acceptability of machine-learning applications in clinical practice will be needed. As the computational capacity in health care systems improves, the opportunities to exploit machine-learning to enhance prediction of disease risk in clinical practice will become a realistic option [7]. This might increasingly include predicting protein structure and function from genetic sequences in patients’ clinical profiles [7]. It will inevitably require exploration, in future studies, of the utility and clinical applicability of other computationally demanding machine-learning algorithms, such as support vector machines and deep learning, for integration into primary care electronic health records. In several countries, electronic health records across health care organisations are held on central servers. This may allow new algorithm development to be performed off-site using cloud computing software, and then returned to the clinical setting as application programming interfaces (APIs) for PCs, mobile devices and tablets.
Conclusion
Compared to an established risk prediction approach, this study has shown machine-learning algorithms are better at predicting the absolute number of cardiovascular disease cases correctly, whilst successfully excluding non-cases. This has been demonstrated in a large and heterogeneous primary care patient population using routinely collected electronic health data.


A generative vision model that trains with high data efficiency and breaks text-based CAPTCHAs

Science, 26 Oct 2017
Abstract
Learning from few examples and generalizing to dramatically different situations are capabilities of human visual intelligence that are yet to be matched by leading machine learning models. By drawing inspiration from systems neuroscience, we introduce a probabilistic generative model for vision in which message-passing based inference handles recognition, segmentation and reasoning in a unified way. The model demonstrates excellent generalization and occlusion-reasoning capabilities, and outperforms deep neural networks on a challenging scene text recognition benchmark while being 300-fold more data efficient. In addition, the model fundamentally breaks the defense of modern text-based CAPTCHAs by generatively segmenting characters without CAPTCHA-specific heuristics. Our model emphasizes aspects like data efficiency and compositionality that may be important in the path toward general artificial intelligence.: http://science.sciencemag.org/content/early/2017/10/26/science.aag2612.full



Machine translation of cortical activity to text with an encoder–decoder framework
Abstract
A decade after speech was first decoded from human brain signals, accuracy and speed remain far below that of natural speech. Here we show how to decode the electrocorticogram with high accuracy and at natural-speech rates. Taking a cue from recent advances in machine translation, we train a recurrent neural network to encode each sentence-length sequence of neural activity into an abstract representation, and then to decode this representation, word by word, into an English sentence. For each participant, data consist of several spoken repeats of a set of 30–50 sentences, along with the contemporaneous signals from ~250 electrodes distributed over peri-Sylvian cortices. Average word error rates across a held-out repeat set are as low as 3%. Finally, we show how decoding with limited data can be improved with transfer learning, by training certain layers of the network under multiple participants’ data…: https://www.nature.com/articles/s41593-020-0608-8
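A stripped-down version of that encoder-decoder idea, with invented dimensions and random data standing in for electrocorticography, might look like the PyTorch sketch below. It is meant only to show the structure (a recurrent encoder over a neural time series, a recurrent decoder emitting words), not the authors' network or training procedure.

```python
# Minimal encoder-decoder sketch in the spirit of the abstract: a recurrent encoder compresses a
# sentence-length window of multi-electrode activity into a vector, and a recurrent decoder emits
# words one at a time. Dimensions, vocabulary, and data are invented.
import torch
import torch.nn as nn

N_ELECTRODES, HIDDEN, VOCAB = 250, 128, 50   # ~250 ECoG channels; toy vocabulary

class Seq2SeqDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(N_ELECTRODES, HIDDEN, batch_first=True)
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.decoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.readout = nn.Linear(HIDDEN, VOCAB)

    def forward(self, neural, words):
        # neural: (batch, time, electrodes); words: (batch, sentence_len) teacher-forcing targets
        _, state = self.encoder(neural)          # sentence-level representation
        out, _ = self.decoder(self.embed(words), state)
        return self.readout(out)                 # (batch, sentence_len, vocab) word logits

model = Seq2SeqDecoder()
neural = torch.randn(4, 300, N_ELECTRODES)       # 4 simulated sentences, 300 time samples each
words = torch.randint(0, VOCAB, (4, 8))          # 8-word target sentences
logits = model(neural, words)
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB), words.reshape(-1))
print("toy training loss:", float(loss))
```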



Researchers watch video images people are seeing, decoded from their fMRI brain scans in near-real-time

Advanced deep-learning "mind-reading" system even interprets image meaning and recreates the video images
October 27, 2017
Purdue Engineering researchers have developed a system that can show what people are seeing in real-world videos, decoded from their fMRI brain scans — an advanced new form of  “mind-reading” technology that could lead to new insights in brain function and to advanced AI systems.
The research builds on previous pioneering research at UC Berkeley’s Gallant Lab, which created a computer program in 2011 that translated fMRI brain-wave patterns into images that loosely mirrored a series of images being viewed.
The new system also decodes moving images that subjects see in videos and does it in near-real-time. But the researchers were also able to determine the subjects’ interpretations of the images they saw — for example, interpreting an image as a person or thing — and could even reconstruct the original images that the subjects saw.
Deep-learning AI system for watching what the brain sees
Watching in near-real-time what the brain sees. Visual information generated by a video (a) is processed in a cascade from the retina through the thalamus (LGN area) to several levels of the visual cortex (b), detected from fMRI activity patterns (c) and recorded. A powerful deep-learning technique (d) then models this detected cortical visual processing. Called a convolutional neural network (CNN), this model transforms every video frame into multiple layers of features, ranging from orientations and colors (the first visual layer) to high-level object categories (face, bird, etc.) in semantic (meaning) space (the eighth layer). The trained CNN model can then be used to reverse this process, reconstructing the original videos — even creating new videos that the CNN model had never watched. (credit: Haiguang Wen et al./Cerebral Cortex)
The researchers acquired 11.5 hours of fMRI data from each of three women subjects watching 972 video clips, including clips showing people or animals in action and nature scenes.
To decode the fMRI images, the researchers pioneered the use of a deep-learning technique called a convolutional neural network (CNN). The trained CNN model was able to accurately decode the fMRI blood-flow data to identify specific image categories. The researchers could compare (in near-real-time) these viewed video images side-by-side with the computer’s visual interpretation of what the person’s brain saw.
The researchers were also able to figure out how certain locations in the visual cortex were associated with specific information a person was seeing.
Decoding how the visual cortex works
CNNs have been used to recognize faces and objects, and to study how the brain processes static images and other visual stimuli. But the new findings represent the first time CNNs have been used to see how the brain processes videos of natural scenes. This is “a step toward decoding the brain while people are trying to make sense of complex and dynamic visual surroundings,” said doctoral student Haiguang Wen.
Wen was first author of a paper describing the research, appearing online Oct. 20 in the journal Cerebral Cortex.
“Neuroscience is trying to map which parts of the brain are responsible for specific functionality,” Wen explained. “This is a landmark goal of neuroscience. I think what we report in this paper moves us closer to achieving that goal. Using our technique, you may visualize the specific information represented by any brain location, and screen through all the locations in the brain’s visual cortex. By doing that, you can see how the brain divides a visual scene into pieces, and re-assembles the pieces into a full understanding of the visual scene.”
The researchers also were able to use models trained with data from one human subject to predict and decode the brain activity of a different human subject, a process called “cross-subject encoding and decoding.” This finding is important because it demonstrates the potential for broad applications of such models to study brain function, including people with visual deficits.
The research has been funded by the National Institute of Mental Health. The work is affiliated with the Purdue Institute for Integrative Neuroscience. Data reported in this paper are also publicly available at the Laboratory of Integrated Brain Imaging website.


Abstract of Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision
Convolutional neural network (CNN) driven by image recognition has been shown to be able to explain cortical responses to static pictures at ventral-stream areas. Here, we further showed that such CNN could reliably predict and decode functional magnetic resonance imaging data from humans watching natural movies, despite its lack of any mechanism to account for temporal dynamics or feedback processing. Using separate data, encoding and decoding models were developed and evaluated for describing the bi-directional relationships between the CNN and the brain. Through the encoding models, the CNN-predicted areas covered not only the ventral stream, but also the dorsal stream, albeit to a lesser degree; single-voxel response was visualized as the specific pixel pattern that drove the response, revealing the distinct representation of individual cortical location; cortical activation was synthesized from natural images with high-throughput to map category representation, contrast, and selectivity. Through the decoding models, fMRI signals were directly decoded to estimate the feature representations in both visual and semantic spaces, for direct visual reconstruction and semantic categorization, respectively. These results corroborate, generalize, and extend previous findings, and highlight the value of using deep learning, as an all-in-one model of the visual cortex, to understand and decode natural vision.
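The encoding/decoding-model step can be illustrated with a hedged sketch: fit linear (ridge) maps between CNN-layer feature vectors and voxel responses, in both directions. Here both the "CNN features" and the fMRI responses are simulated arrays; in the study the features come from a CNN trained for image recognition and the responses from real scans.

```python
# Sketch of the encoding/decoding-model step described in the abstract: linear maps fit
# between CNN-layer feature vectors and fMRI voxel responses. All arrays are simulated.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_frames, n_features, n_voxels = 2000, 512, 300
cnn_features = rng.normal(size=(n_frames, n_features))        # per-frame CNN layer activations
true_weights = rng.normal(size=(n_features, n_voxels)) * 0.05
voxels = cnn_features @ true_weights + rng.normal(0, 0.5, size=(n_frames, n_voxels))

F_tr, F_te, V_tr, V_te = train_test_split(cnn_features, voxels, test_size=0.25, random_state=0)

encoder = Ridge(alpha=10.0).fit(F_tr, V_tr)        # encoding model: features -> voxel responses
decoder = Ridge(alpha=10.0).fit(V_tr, F_tr)        # decoding model: voxel responses -> features

print("encoding R^2 on held-out frames:", round(encoder.score(F_te, V_te), 3))
print("decoding R^2 on held-out frames:", round(decoder.score(V_te, F_te), 3))
```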
references:
Haiguang Wen, Junxing Shi, Yizhen Zhang, Kun-Han Lu, Jiayue Cao, Zhongming Liu. Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision. Cerebral Cortex, 2017; 1 DOI: 10.1093/cercor/bhx268

Superconducting ‘synapse’ could enable powerful future neuromorphic supercomputers
Fires 200 million times faster than human brain, uses one ten-thousandth as much energy

February 7, 2018
A superconducting “synapse” that “learns” like a biological system, operating like the human brain, has been built by researchers at the National Institute of Standards and Technology (NIST).
The NIST switch, described in an open-access paper in Science Advances, provides a missing link for neuromorphic (brain-like) computers, according to the researchers. Such “non-von Neumann architecture” future computers could significantly speed up analysis and decision-making for applications such as self-driving cars and cancer diagnosis.
The research is supported by the Intelligence Advanced Research Projects Activity (IARPA) Cryogenic Computing Complexity Program, which was launched in 2014 with the goal of paving the way to “a new generation of superconducting supercomputer development beyond the exascale.”*
NIST’s artificial synapse is a metallic cylinder 10 micrometers in diameter — about 10 times larger than a biological synapse. It simulates a real synapse by processing incoming electrical spikes (pulsed current from a neuron) and customizing spiking output signals. The more firing between cells (or processors), the stronger the connection. That process enables both biological and artificial synapses to maintain old circuits and create new ones.
Dramatically faster and lower-energy than human synapses
But the NIST synapse has two unique features that the researchers say are superior to human synapses and to other artificial synapses:
  • It can fire at a rate that is much faster than the human brain — 1 billion times per second, compared to a brain cell’s rate of about 50 times per second. (The devices’ Josephson plasma frequencies, which set their dynamical time scales, exceed 100 GHz.)
  • It uses only about one ten-thousandth as much energy as a human synapse. The spiking energy is less than 1 attojoule** — roughly equivalent to the minuscule chemical energy bonding two atoms in a molecule — compared to the roughly 10 femtojoules (10,000 attojoules) per synaptic event in the human brain. Current neuromorphic platforms are orders of magnitude less efficient than the human brain. “We don’t know of any other artificial synapse that uses less energy,” NIST physicist Mike Schneider said.
Superconducting devices mimicking brain cells and transmission lines have been developed, but until now, efficient synapses — a crucial piece — have been missing. The new Josephson junction-based artificial synapse would be used in neuromorphic computers made of superconducting components (which can transmit electricity without resistance), so they would be more efficient than designs based on semiconductors or software. Data would be transmitted, processed, and stored in units of magnetic flux.
The brain is especially powerful for tasks like image recognition because it processes data both in sequence and simultaneously and it stores memories in synapses all over the system. A conventional computer processes data only in sequence and stores memory in a separate unit.
The new NIST artificial synapses combine small size, superfast spiking signals, and low energy needs, and could be stacked into dense 3D circuits for creating large systems. They could provide a unique route to a far more complex and energy-efficient neuromorphic system than has been demonstrated with other technologies, according to the researchers.
Nature News does raise some concerns about the research, quoting neuromorphic-technology experts: “Millions of synapses would be necessary before a system based on the technology could be used for complex computing; it remains to be seen whether it will be possible to scale it to this level. … The synapses can only operate at temperatures close to absolute zero, and need to be cooled with liquid helium. This might make the chips impractical for use in small devices, although a large data centre might be able to maintain them. … We don’t yet understand enough about the key properties of the [biological] synapse to know how to use them effectively.”


Inside a superconducting synapse 
The NIST synapse is a customized Josephson junction***, long used in NIST voltage standards. These junctions are a sandwich of superconducting materials with an insulator as a filling. When an electrical current through the junction exceeds a level called the critical current, voltage spikes are produced.
Each artificial synapse uses standard niobium electrodes but has a unique filling made of nanoscale clusters (“nanoclusters”) of manganese in a silicon matrix. The nanoclusters — about 20,000 per square micrometer — act like tiny bar magnets with “spins” that can be oriented either randomly or in a coordinated manner. The number of nanoclusters pointing in the same direction can be controlled, which affects the superconducting properties of the junction.
The synapse rests in a superconducting state, except when it’s activated by incoming current and starts producing voltage spikes. Researchers apply current pulses in a magnetic field to boost the magnetic ordering — that is, the number of nanoclusters pointing in the same direction.
This magnetic effect progressively reduces the critical current level, making it easier to create a normal conductor and produce voltage spikes. The critical current is the lowest when all the nanoclusters are aligned. The process is also reversible: Pulses are applied without a magnetic field to reduce the magnetic ordering and raise the critical current. This design, in which different inputs alter the spin alignment and resulting output signals, is similar to how the brain operates.
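A toy model can make this mechanism concrete: treat magnetic ordering as the synaptic weight, let the critical current fall as ordering rises, and register a spike whenever the input current exceeds that threshold. The functional form and the numbers below are illustrative assumptions, not NIST's device physics.

```python
# Toy model of the behavior described above: critical current falls as
# magnetic ordering rises, and a spike is emitted whenever the input current
# exceeds that threshold. Form and numbers are illustrative assumptions.
def critical_current(order, i_c_max=100.0, i_c_min=20.0):
    """Critical current (arbitrary units) as magnetic ordering goes from 0 to 1."""
    return i_c_max - (i_c_max - i_c_min) * order

def spikes(input_current, order):
    """True if the junction leaves the superconducting state and spikes."""
    return input_current > critical_current(order)

# "Training": each current pulse applied in a magnetic field raises the
# ordering, lowering the threshold, i.e. strengthening the synapse.
order = 0.0
pulse_current = 60.0
for _ in range(6):
    print(f"order={order:.1f}  I_c={critical_current(order):5.1f}  spike={spikes(pulse_current, order)}")
    order = min(1.0, order + 0.2)
```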
Synapse behavior can also be tuned by changing how the device is made and its operating temperature. By making the nanoclusters smaller, researchers can reduce the pulse energy needed to raise or lower the magnetic order of the device. Raising the operating temperature slightly from minus 271.15 degrees C (minus 456.07 degrees F) to minus 269.15 degrees C (minus 452.47 degrees F), for example, results in more and higher voltage spikes.


* Future exascale supercomputers would run at 10^18 flops (“flops” = floating point operations per second), i.e. one exaflops, or more. The current fastest supercomputer — the Sunway TaihuLight — operates at about 0.1 exaflops; zettascale computers, the next step beyond exascale, would run 10,000 times faster than that.
** An attojoule is 10^-18 joule, a unit of energy, and is one-thousandth of a femtojoule.
*** The Josephson effect is the phenomenon of supercurrent — i.e., a current that flows indefinitely long without any voltage applied — across a device known as a Josephson junction, which consists of two superconductors coupled by a weak link. — Wikipedia
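A quick check of the energy comparison quoted above, using only the article's own figures; nothing here is added beyond the arithmetic.

```python
# The figures come from the article above; only the arithmetic is added here.
spike_energy_j = 1e-18        # < 1 attojoule per spike for the NIST synapse
brain_synapse_j = 10e-15      # ~10 femtojoules per synaptic event in the brain
print(brain_synapse_j / spike_energy_j)  # 10000.0 -> "one ten-thousandth as much energy"
```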


Abstract of Ultralow power artificial synapses using nanotextured magnetic Josephson junctions
Neuromorphic computing promises to markedly improve the efficiency of certain computational tasks, such as perception and decision-making. Although software and specialized hardware implementations of neural networks have made tremendous accomplishments, both implementations are still many orders of magnitude less energy efficient than the human brain. We demonstrate a new form of artificial synapse based on dynamically reconfigurable superconducting Josephson junctions with magnetic nanoclusters in the barrier. The spiking energy per pulse varies with the magnetic configuration, but in our demonstration devices, the spiking energy is always less than 1 aJ. This compares very favorably with the roughly 10 fJ per synaptic event in the human brain. Each artificial synapse is composed of a Si barrier containing Mn nanoclusters with superconducting Nb electrodes. The critical current of each synapse junction, which is analogous to the synaptic weight, can be tuned using input voltage spikes that change the spin alignment of Mn nanoclusters. We demonstrate synaptic weight training with electrical pulses as small as 3 aJ. Further, the Josephson plasma frequencies of the devices, which determine the dynamical time scales, all exceed 100 GHz. These new artificial synapses provide a significant step toward a neuromorphic platform that is faster, more energy-efficient, and thus can attain far greater complexity than has been demonstrated with other technologies.
references:
M.L. Schneider, C.A. Donnelly, S.E. Russek, B. Baek, M.R. Pufall, P.F. Hopkins, P.D. Dresselhaus, S. P. Benz and W.H. Rippard. Ultra-low power artificial synapses using nano-textured magnetic Josephson junctions. Science Advances, 2018 DOI: 10.1126/sciadv.1701329

New material eliminates need for motors or actuators in future robots, other devices

Low-cost material could bear 3000 times its own weight, triggered by light or electricity
June 29, 2018
A “mini arm” made up of two hinges of actuating nickel hydroxide-oxyhydroxide material (left) can lift an object 50 times its own weight when triggered (right) by light or electricity. (credit: University of Hong Kong)
University of Hong Kong researchers have invented a radical new lightweight material that could replace traditional bulky, heavy motors or actuators in robots, medical devices, prosthetic muscles, exoskeletons, microrobots, and other types of devices.
The new actuating material — nickel hydroxide-oxyhydroxide — can be instantly triggered and wirelessly powered by low-intensity visible light or electricity at relatively low intensity. It can exert a force of up to 3000 times its own weight — producing stress and speed comparable to mammalian skeletal muscles, according to the researchers.
The material is also responsive to heat and humidity changes, which could allow autonomous machines to harness tiny energy changes in the environment.
The major component is nickel, so the material cost is low, and fabrication uses a simple electrodeposition process, allowing for scaling up and manufacture in industry.
Developing actuating materials was identified as the leading grand challenge in “The grand challenges of Science Robotics” to “deeply root robotics research in science while developing novel robotic platforms that will enable new scientific discoveries.”
Using a light blocker (top), a mini walking bot (bottom) with the “front leg” bent and straightened alternately can walk towards a light source. (credit: University of Hong Kong)

Ref.: Science Robotics. Source: University of Hong Kong.



 November 6, 1999
When Ray Kurzweil discusses human destiny, it’s not always clear whether he’s talking about technology or theology. It’s technology that defines his resume. He’s spent 50 years inventing ingenious uses for artificial intelligence.
But like a priest caught playing in a physics lab, he keeps coming up with inventions inspired by aesthetics and social conscience. For instance: when he was still in high school, he wrote a program that composed music — while his latest software writes poetry. In between he created machines that read print aloud to the blind, software that draws + paints, electronic keyboards that produce the sounds of acoustic instruments — plus one of the most advanced, commercially successful forms of computer speech recognition.
All of these were products of a restless mind consumed by the question of what will be. A more abstract product of that vision is his latest book: The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Kurzweil looks at the exponential increase in calculating power since the turn of the century. He concludes: in 50 years, machines will not only be smarter than humans — but also smart enough to persuade us that they are conscious beings.
That assertion has drawn the wrath of several prominent philosophers, who question his definitions of intelligence + consciousness. For example: John Searle PhD — the University of California at Berkeley professor of philosophy — wrote in the New York Review of Books: “the fatal flaw in Ray Kurzweil’s argument: it rests on the assumption the main thing humans do is compute. His proposals aren’t science. I think he got a little carried away and made philosophical errors.”
Debates about how to define intelligence + consciousness get the most public attention — but a more compelling idea in the book is his prediction that our children will eventually become human-machine hybrids. Based on current trends in computer and biological sciences, he claims super-powerful intelligence will come from that hybrid. Merging the human body and computer circuits will enable humanity to re-design itself.
Ray Kurzweil said: “The primary issue isn’t the mass of the universe — or the possible existence of anti-gravity or of Einstein’s so-called cosmological constant. The fate of the universe is a decision yet to be made, one we’ll intelligently consider when the time is right.”
John Searle PhD said: “Kurzweil doesn’t think he’s writing a book of science fiction. He’s making serious claims that he thinks are based on solid scientific results.”
The stuff he’s talking about is no less than a physical hybrid of human beings and their technology. He says the machines being created today are the beginning of our metamorphosis from thinking mammal to all-knowing hybrid.
Biological evolution has already given way to much more rapid — and less random — technological evolution, Ray Kurzweil argues. He predicts: within 30 years direct links will be established between the human brain and computer circuitry. The implications are mind-boggling. Such links could mean that the entire contents of a brain could be copied (and preserved) in an external database. Not only would the human brain be supplemented with enormous amounts of digital memory, it would also be linked to vast information resources like the internet — at the speed of thought.
That would produce, through direct brain interface: a virtual reality indistinguishable from objective reality. Ray Kurzweil cites medical treatments in which silicon chips have been successfully implanted into human brains. For example: to alleviate symptoms of Parkinson’s disease, or for cochlear implants for the deaf. He says these are examples of primitive steps toward his predictions.
While these sorts of visions might seem far-fetched, other respected futurists find Kurzweil’s ideas compelling. Marvin Minsky PhD — a well-known professor of media arts + science at the Massachusetts Institute of Technology — said Kurzweil is a leading futurist of our time.
Kurzweil’s theories are seriously considered — and that’s evidence of his credentials. Since his teenage years he has been harnessing computer power to do pattern recognition tasks. He explains that pattern recognition is “a part of the computer science field of artificial intelligence — where we teach computers to recognize abstract patterns, a capability that dominates human thinking.”
In 1965, at age 17, his music-composing program won him a Westinghouse science award, a visit to the White House, and a spot as a contestant on the old television game show I’ve Got a Secret. His secret stumped former Miss America Bess Myerson — but was guessed by the second panelist, the actor Henry Morgan.
By the time he graduated from the Massachusetts Institute of Technology in 1970, Ray Kurzweil had already achieved his first business success. He founded a computer database service that helped high school students choose the right college. He sold it for $100,000 and went on to create more businesses built on his inventions. Among his best-known is the Kurzweil reading machine for the blind. It was a true marvel when it was introduced. CBS news was so impressed with the device that television news anchor Walter Cronkite used it to deliver his signature sign-off: “… and that’s the way it was, on Jan. 13, 1976.”
Along the way, he won several accolades in business and academics, and received many honorary doctorates. In addition to writing his next book, he’s developing Fat Kat, an artificial intelligence system that applies algorithms to securities investment decisions.
In Ray Kurzweil’s vision of the future, the man-machine hybrid won’t happen through a Frankenstein-like amalgam — but through an elegant technology: microscopic, self-replicating robots called nano-bots that could travel through the human bloodstream and interact with our body + brain.
Ray Kurzweil said: “The idea is to direct nano-bots to travel through every capillary in the brain, where they will pass in very close proximity to every cortex feature. This could enable the tiny machines to scan each part of the brain and build up a huge database of its contents. And all these nano-bots could be communicating with each other, such as on a wireless network. They could also be on the web.”
The breakthrough in nano-technology came several years ago with the discovery of the nano-tube, a carbon molecule of enormous strength. Just about anything can be fabricated from nano-tubes: with many times the strength of conventional materials, but with a fraction of the weight.
Also, nano-tubes have more capacity for raw computing power than commonly used silicon. This combination of features means it could be possible to build machines the size of a human blood cell that are programmable with software — maybe even able to self-replicate from carbon atoms.
Ray Kurzweil said: “The size of the technology is shrinking so rapidly, that in 30 years both the size and cost of this scenario will be feasible.”
Of course, such technology would inevitably be accompanied by terrifying dangers. By scanning a brain into a database, a person’s most private thoughts and memories would be vulnerable to intrusions by hackers. And wouldn’t the brain also be vulnerable to external control of information, thought processes and even perceptions of reality?
Ray Kurzweil said: “Those are real concerns. Organizations like governments, religious, or terrorist groups — or just clever individuals — could put nano-bots in food or water supplies, trillions of them. These would then make their way inside people and would monitor their thoughts and even could control them and place them into virtual environments. But we won’t be defenseless. We have these concerns today at a primitive level with Trojan horses that make their way into our computers.”
He says there’s no turning back. Once evolution produced a technological species — humanity — it put us on a relentless quest for understanding, and control of our universe.
Ray Kurzweil said: “I’m optimistic, but that’s more of a personal orientation than something I could scientifically argue. There definitely are dangers, and we do tend to address them imperfectly, so there’s some possibility this will fail.”

http://www.kurzweilai.net/the-new-york-times-the-soul-of-the-next-new-machine

2040: How technological innovation will have changed our lives and society

Published on December 17, 2018 Jeroen Tas
It’s 2040. I’m 80 years old and fully able to manage my own health. We finally have the secure information infrastructure that allows us to collect, analyze and compare our health and behavioral data ourselves. Since all my health data has been digitized and can be accessed via the cloud, I can work alongside my care team to proactively manage my health.
Artificial Intelligence, one of the most hyped topics of 2018, has had an even bigger impact than the Internet in the late ’90s – it changed everything. Deep insights into what keeps me healthy are available to me and my virtual care providers due to huge advances in AI-enabled diagnostic and therapeutic tools.
Artificial Intelligence has helped uncover patterns that were previously untraceable. I rely on my own "digital twin", a virtual 4D version of myself that covers my complete medical history: genetic, clinical and behavioral. It is used to accurately predict and simulate my health outcomes. My twin intervenes if my health is trending in the wrong direction, automatically recommending corrective actions to help prevent or treat ailments.
My digital twin is linked to millions of others, forming a gigantic data ocean, which has created the possibility of large-scale analysis of health information. This has led to new insights and clinical breakthroughs that allowed affordable therapies to be developed for diseases that were difficult to treat in 2018. We now have access to highly personalized therapies, including genetic engineering.
My digital twin recently showed the development of a serious heart problem that, if addressed early enough, is easily preventable. Together with my doctors, I decided to have a pre-emptive operation. I didn’t worry too much about the outcome because data shows that almost all people of my age and my condition have successfully been through similar treatment. The surgery was performed quickly and accurately by a surgeon who works in a joint human/robot team. The robots automatically adjust the therapy to the needs of the patient and are constantly self-improving by exchanging and analyzing treatment data with other machines around the globe. I walked out of the lab after a couple of hours, knowing that a serious problem was averted.
Back to 2018. At Philips I’m working on innovations that have a positive impact on people’s lives and society. It is one of the reasons why I enjoy coming to work every day. I firmly believe in the boundless opportunities of innovation. There is much to look forward to: cheap and rapid genome analysis will bring new therapies and save lives by detecting diseases earlier. Artificial intelligence, robots and Blockchain will be the basis of the next wave of automation, and the reduction of substantial waste in healthcare systems. Virtual reality and chat bots using artificial intelligence will impact the way we work together with machines.
As Yogi Berra observed: it's tough to make predictions, especially about the future. Just think how different it was in 1998 - that was only 20 years ago. Technology has evolved faster than we could have ever imagined. Back then, only 3% of the world's population was connected to the internet and few Dutch people saw the utility of a mobile phone. Many thought that the postal service and fixed phone lines were just fine and didn’t see any need to be communicating all day long. In reality, smart phones and apps have changed our behaviors, politics and culture exponentially.
There will be no straight path from 2018 to 2040. Many of the innovations that will be part of our daily lives then have yet to be conceptualized. The rate and extent of adoption of innovation is unpredictable. Moreover, every technology has unintended consequences. We have recently seen this with the rapid growth of fake news on social media and its impact on politics. But one thing is certain: technology will have an ever-greater impact on our society. It is up to us to ensure that this impact remains positive. We need to involve a broad spectrum of society to guide the proper application of the technology. With the right control mechanisms in place and a human view of technology, life in 2040 will be healthy and worthwhile.
I love kite surfing and hope to be able to still do it at 80. Whether that will be the case cannot be predicted. But, innovation -and my friends and family- will support me in my quest to stay healthy. That much I can predict.

https://www.linkedin.com/pulse/2040-how-technological-innovation-

How Unilever Uses Artificial Intelligence To Recruit And Train Thousands Of Employees Each Year
  • Published on December 23, 2018
It’s hard to live a day in the developed world without using a Unilever product. The multinational manufactures and distributes over 400 consumer goods brands covering food and beverages, domestic cleaning products and personal hygiene.
With so many processes to coordinate and manage, artificial intelligence is quickly becoming essential for organizations of its scale. This applies to both research and development as well as the huge support infrastructure needed for a business with 170,000 employees.
Recently, it announced that it had developed machine learning algorithms capable of sniffing your armpit and telling you whether you are suffering from body odor. While this may seem like "using a sledgehammer to crack a walnut", the technology which has been developed could well go on to be used to monitor food for freshness, helping to solve the problem of food overproduction and waste endemic in society.
As well as these smart, public-facing initiatives, though, artificial intelligence is being put to use behind the scenes to help screen and assess the more than one million people per year who apply for jobs with Unilever. If they make the grade and become one of the thousands who are offered a job, they have AI-powered tools to help them adjust to their new role and hit the ground running.
AI-enhanced recruiting
Unilever recruits more than 30,000 people a year and processes around 1.8 million job applications.
This takes a tremendous amount of time and resources. As a multinational brand operating in 190 countries, applicants are based all around the world. Finding the right people is an essential ingredient for success and Unilever can't afford to overlook talent just because it is buried at the bottom of a pile of CVs.
To tackle this problem, Unilever partnered with Pymetrics, a specialist in AI recruitment, to create an online platform which means candidates can be initially assessed from their own homes, in front of a computer or mobile phone screen.
First, they are asked to play a selection of games that test their aptitude, logic and reasoning, and appetite for risk. Machine learning algorithms are then used to assess their suitability for whatever role they have applied for, by matching their profiles against those of previously successful employees.
The second stage of the process involves submitting a video interview. Again, the assessor is not a human being but a machine learning algorithm. The algorithm examines the videos of candidates answering questions for around 30 minutes, and through a mixture of natural language processing and body language analysis, determines who is likely to be a good fit.
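Both stages amount to scoring candidates against patterns learned from previously successful hires. Below is a minimal sketch of that idea, assuming made-up game-derived features and a simple logistic-regression scorer; Pymetrics' actual features, labels, and models are not public, so this is only a stand-in.

```python
# Sketch of scoring applicants against profiles of previously successful
# employees. Features, labels, and the scorer are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Made-up game-derived features: e.g. risk appetite, reaction time, memory span
past_candidates = rng.standard_normal((500, 3))
was_successful = (past_candidates @ np.array([0.8, -0.5, 0.6])
                  + 0.3 * rng.standard_normal(500)) > 0

scorer = LogisticRegression().fit(past_candidates, was_successful)

new_applicants = rng.standard_normal((5, 3))
fit_scores = scorer.predict_proba(new_applicants)[:, 1]   # probability of "fit"
print(np.round(fit_scores, 2))  # used to decide who advances to the next stage
```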
Unilever's chief of HR, Leena Nair, told me that around 70,000 person-hours of interviewing and assessing candidates had been cut, thanks to the automated screening system.
She said "We look for people with a sense of purpose – systemic thinking, resilience, business acumen. Based on that profile, the games and the video interview are all programmed to look for cues in their behavior that will help us understand who will fit in at Unilever."
Referring to the video interview analytics for their future leaders program, she tells me “Every screenshot gives us many data points about the person, so we work with a number of partners and use a lot of proprietary technology with those partners, and then we select 3,500 or so people to go through to our discovery center.” After spending a day with real leaders and recruiters, Unilever selects about 800 people who will be offered a job.  
The system is also designed to give feedback to all applicants, even those who aren’t successful.
“What I like about the process is that each and every person who applies to us gets some feedback,” Nair says.
 “Normally when people send an application to a large company it can go into a ‘black hole’ – thank you very much for your CV, we’ll get back to you – and you never hear from them again.
“All of our applicants get a couple of pages of feedback, how they did in the game, how they did in the video interviews, what characteristics they have that fit, and if they don’t fit, the reason why they didn’t, and what we think they should do to be successful in a future application.
“It’s an example of artificial intelligence allowing us to be more human.”
So, while Unilever isn’t quite ready to hand the entire recruitment process over to machines just yet, it has shown that it can assist with the initial “sift” when it comes to preliminary screening of applicants.
Robots to help you settle into the job
After making the grade, another machine learning-driven initiative is helping new employees get started in their new roles – adapting to the day-to-day routines as well as the corporate culture at the business.
Unabot is a natural language processing (NLP) bot built on Microsoft’s Bot framework, designed to understand what employees need to know and fetch information for them when it is asked.
“We joke about the fact we don’t know whether it’s a man or a woman – it’s Unabot,” Nair tells me.
“Unabot doesn’t only answer HR questions, questions about anything that affects employees should be answered by Unabot, and it is now the front face for any employee question – they might ask it about IT systems, or about their allowances – so we are learning about what matters to employees in real time.”
Through interacting with employees, Unabot has learned to answer questions such as where parking is available, the timing of shuttle buses, and when annual salary reviews are due to take place.
Unlike, for example, Alexa or consumer-facing corporate customer-service chatbots, Unabot must also be able to filter and apply information based on who it is speaking to. It is capable of differentiating the information it passes on based on both the user's geographical location and their level of seniority within the company.
Unabot was first rolled out for employees based in the Philippines and is now operating in 36 countries. It has been selected as the next AI initiative which will be globally rolled out in all of Unilever’s 190 markets.  
“It’s a new way of working,” Nair tells me, “We never go in and say ‘its perfect so let’s roll it out in all countries,’ we learn what we can in one country and roll it out in the next one.”
Currently, all of its data comes from internal sources, such as company guidelines, schedules, policy documents and questions asked by the employees themselves. In the future, this could be expanded to include external data such as learning materials.
And although it’s early days, the initial analysis seems to show that the initiative is popular with staff – with 36% of those in areas where it is deployed having used it at least once, and around 80% going on to use it again. 
One lesson learned early on was the importance of providing a frictionless experience.
“So we’ve learned that you have to make anything that interacts with employees or consumers effortless,” Nair says.
“People interact in different ways – a policy document is written in a particular way, it’s three or four pages of what an employee shouldn’t do. But an employee tends to ask questions in very simplistic ways – how does this impact my life, where will I find this, what can I do?”
Machine learning – particularly NLP – can overcome this due to its ability to detect which questions are repeatedly asked, even if they are asked in different ways, and present the right information.
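Here is a minimal sketch of that capability, assuming a TF-IDF similarity match between an employee's question and a small set of invented FAQ entries; Unabot's real pipeline is not public, so this is only a stand-in for whatever NLP it actually uses.

```python
# Sketch of mapping differently-phrased employee questions onto a canonical
# answer. The FAQ entries below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {
    "Where can I park at the office?": "Parking is available in lot B after 8am.",
    "When do the shuttle buses run?": "Shuttles leave every 30 minutes from gate 2.",
    "When is the annual salary review?": "Salary reviews take place each April.",
}

questions = list(faq.keys())
vectorizer = TfidfVectorizer().fit(questions)
faq_vectors = vectorizer.transform(questions)

def answer(question: str) -> str:
    """Return the answer whose canonical question is most similar to the query."""
    sims = cosine_similarity(vectorizer.transform([question]), faq_vectors)[0]
    return faq[questions[sims.argmax()]]

print(answer("where can I park my car"))               # matches the parking entry
print(answer("what time does the shuttle bus leave"))  # matches the shuttle entry
```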
About Bernard Marr
Bernard Marr is an internationally best-selling author, popular keynote speaker, futurist, and a strategic business & technology advisor to governments and companies. He helps organisations improve their business performance, use data more intelligently, and understand the implications of new technologies such as artificial intelligence, big data, blockchains, and the Internet of Things.

LinkedIn has ranked Bernard as one of the world’s top 5 business influencers. He is a frequent contributor to the World Economic Forum and writes a regular column for Forbes. Every day Bernard actively engages his 1.5 million social media followers and shares content that reaches millions of readers.


Robo-Apocalypse? Not in Your Lifetime

May 21, 2019 J. BRADFORD DELONG
Not a week goes by without some new report, book, or commentary sounding the alarm about technological unemployment and the "future of work." Yet in considering the threat posed by automation at most levels of the value chain, we should remember that robots cannot do what humans cannot tell them to do.
BERKELEY – Will the imminent “rise of the robots” threaten all future human employment? The most thoughtful discussion of that question can be found in MIT economist David H. Autor’s 2015 paper, “Why Are There Still so Many Jobs?”, which considers the problem in the context of Polanyi’s Paradox. Given that “we can know more than we can tell,” the twentieth-century philosopher Michael Polanyi observed, we shouldn’t assume that technology can replicate the function of human knowledge itself. Just because a computer can know everything there is to know about a car doesn’t mean it can drive it.
This distinction between tacit knowledge and information bears directly on the question of what humans will be doing to produce economic value in the future. Historically, the tasks that humans have performed have fallen into ten broad categories. The first, and most basic, is using one’s body to move physical objects, which is followed by using one’s eyes and fingers to create discrete material goods. The third category involves feeding materials into machine-driven production processes – that is, serving as a human robot – which is followed by actually guiding the operations of a machine (acting as a human microprocessor).
In the fifth and sixth categories, one is elevated from microprocessor to software, performing accounting-and-control tasks or facilitating communication and the exchange of information. In the seventh category, one actually writes the software, translating tasks into code (here, one encounters the old joke that every computer needs an additional “Do” command: “Do What I Mean”). In the eighth category, one provides a human connection, whereas in the ninth, one acts as cheerleader, manager, or arbiter for other humans. Finally, in the tenth category, one thinks critically about complex problems, and then devises novel inventions or solutions to them.
For the past 6,000 years, tasks in the first category have gradually been offloaded, first to draft animals and then to machines. For the past 300 years, tasks in the second category have also been offloaded to machines. In both cases, jobs in categories three through six – all of which augmented the increasing power of the machines – became far more prevalent, and wages grew enormously.
But we have since developed machines that are better than humans at performing tasks in categories three and four – where we behave like robots and microprocessors – which is why manufacturing as a share of total employment in advanced economies has been declining for two generations, even as the productivity of manufacturing has increased. This trend, combined with monetary policymakers’ excessive anti-inflationary zeal, is a major factor contributing to the recent rise of neofascism in the United States and other Western countries.
Worse, we have now reached the point where robots are also better than humans at performing the “software” tasks in categories five and six, particularly when it comes to managing the flow of information and, it must be said, misinformation. Nonetheless, over the next few generations, this process of technological development will work itself out, leaving humans with just four categories of things to do: thinking critically, overseeing other humans, providing a human connection, and translating human whims into a language the machines can understand.
The problem is that very few of us have the genius to produce genuine economic value with our own creativity. The wealthy can employ only so many personal assistants. And many cheerleaders, managers, and dispute-settlers are already unnecessary. That leaves category eight: as long as livelihoods are tied to remunerative employment, the prospect of preserving a middle-class society will depend on enormous demand for human connection.
Here, Polanyi’s Paradox gives us cause for hope. The task of providing “human connection” is not just inherently emotional and psychological; it also requires tacit knowledge of social and cultural circumstances that cannot be codified into concrete, routine commands for computers to follow. Moreover, each advance in technology creates new domains in which tacit knowledge matters, even when it comes to interacting with the new technologies themselves.
As Autor observes, though auto manufacturers “employ industrial robots to install windshields … aftermarket windshield replacement companies employ technicians, not robots.” It turns out that “removing a broken windshield, preparing the windshield frame to accept a replacement, and fitting a replacement into that frame demand more real-time adaptability than any contemporary robot can cost-effectively approach.” In other words, automation depends on fully controlled conditions, and humans will never achieve full control of the entire environment.
Some might counter that artificial-intelligence applications could develop a capacity to absorb “tacit knowledge.” Yet even if machine-learning algorithms could communicate back to us why they have made certain decisions, they will only ever work in restricted environmental domains. The wide range of specific conditions that they need in order to function properly renders them brittle and fragile, particularly when compared to the robust adaptability of human beings.
At any rate, if the “rise of the robots” represents a threat, it won’t be salient within the next two generations. For now, we should worry less about technological unemployment, and more about the role of technology in spreading disinformation. Without a properly functioning public sphere, why bother debating economics in the first place?
J. Bradford DeLong is Professor of Economics at the University of California at Berkeley and a research associate at the National Bureau of Economic Research. He was Deputy Assistant US Treasury Secretary during the Clinton Administration, where he was heavily involved in budget and trade negotiations. His role in designing the bailout of Mexico during the 1994 peso crisis placed him at the forefront of Latin America’s transformation into a region of open economies, and cemented his stature as a leading voice in economic-policy debates.

https://www.project-syndicate.org/commentary/rise-of-robots-social-work-by-j-bradford-delong-2019-05?utm_



Life 3.0: Being Human in the Age of Artificial Intelligence
 by Max Tegmark PhD,  2017
This book is by author, physicist, and cosmologist Max Tegmark PhD. He sets out to differentiate the myths of AI from reality in an approachable way, and manages to answer challenging questions about creating a prosperous world with AI — and about how to protect AI from being hacked.
The book aims to help the layperson understand what will be most affected by AI in our day-to-day lives. A great primer into the world of AI.
https://s3.amazonaws.com/arena-attachments/1446178/cffa5ebc74cee2b1edf58fa9a5bbcb1c.pdf?1511265314



Artificial Intelligence Set: What You Need to Know About AI
 April 25, 2018

What do you really need to know about the Artificial Intelligence (AI) revolution? This specially priced four-item set will make it easier for you to understand how your company, industry, and career can be transformed by AI. It is a must-have for managers who need to recognize the potential impact of AI, how it is driving future growth, and how they can make the most of it. The collection includes:
  • "Human + Machine: Reimagining Work in the Age of AI" by Paul Daugherty and H. James Wilson, which reveals how companies are using the new rules of AI to leap ahead on innovation and profitability, as well as what you can do to achieve similar results. Based on the authors' experience and research with 1,500 organizations, the book describes six new types of hybrid human + machine roles that every company must develop, and it includes a "leader's guide" with the principles required to become an AI-fueled business.
  • "Prediction Machines: The Simple Economics of Artificial Intelligence" by Ajay Agrawal, Joshua Gans, and Avi Goldfarb, in which the authors lift the curtain on the AI-is-magic hype and show how basic tools from economics provide clarity about the AI revolution and a basis for action by CEOs, managers, policy makers, investors, and entrepreneurs.
  • "Artificial Intelligence for the Real World" (article PDF), which draws on a survey of 250 executives familiar with their companies' use of cognitive technology and a study of 152 projects to show that companies do better by developing an incremental approach to AI and by focusing on augmenting rather than replacing human capabilities.
  • "Reshaping Business with Artificial Intelligence" (article PDF), which provides baseline information on the strategies used by companies leading in AI, the prospects for its growth, and the steps executives need to take to develop a strategy for their business.

Human + Machine: Reimagining Work in the Age of AI
 by Paul Daugherty + H. James Wilson,  2018
Look around you. Artificial intelligence is no longer just a futuristic notion. It's here right now--in software that senses what we need, supply chains that "think" in real time, and robots that respond to changes in their environment. Twenty-first-century pioneer companies are already using AI to innovate and grow fast. The bottom line is this: Businesses that understand how to harness AI can surge ahead. Those that neglect it will fall behind. Which side are you on?
In Human + Machine, Accenture leaders Paul R. Daugherty and H. James (Jim) Wilson show that the essence of the AI paradigm shift is the transformation of all business processes within an organization--whether related to breakthrough innovation, everyday customer service, or personal productivity habits. As humans and smart machines collaborate ever more closely, work processes become more fluid and adaptive, enabling companies to change them on the fly--or to completely reimagine them. AI is changing all the rules of how companies operate.
Based on the authors' experience and research with 1,500 organizations, the book reveals how companies are using the new rules of AI to leap ahead on innovation and profitability, as well as what you can do to achieve similar results. It describes six entirely new types of hybrid human + machine roles that every company must develop, and it includes a "leader’s guide" with the five crucial principles required to become an AI-fueled business.
Human + Machine provides the missing and much-needed management playbook for success in our new age of AI.

Our Final Invention: Artificial Intelligence and the End of the Human Era
by James Barrat,  2013
In as little as a decade, artificial intelligence could match, then surpass human intelligence. Corporations & government agencies around the world are pouring billions into achieving AI’s Holy Grail—human-level intelligence. Once AI has attained it, scientists argue, it will have survival drives much like our own. We may be forced to compete with a rival more cunning, more powerful & more alien than we can imagine. Through profiles of tech visionaries, industry watchdogs & groundbreaking AI systems, James Barrat's Our Final Invention explores the perils of the heedless pursuit of advanced AI. Until now, human intelligence has had no rival. Can we coexist with beings whose intelligence dwarfs our own? Will they allow us to?

Superintelligence: Paths, Dangers, Strategies
by Nick Bostrom PhD, 2014
The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains.
If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence.
But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?
To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, biological cognitive enhancement, and collective intelligence.
This profoundly ambitious and original book picks its way carefully through a vast tract of forbiddingly difficult intellectual terrain. Yet the writing is so lucid that it somehow makes it all seem easy. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time. 

The Singularity Is Near: When Humans Transcend Biology
by Ray Kurzweil
The book builds on the ideas introduced in Kurzweil's previous books, The Age of Intelligent Machines (1990) and The Age of Spiritual Machines (1999). This time, however, Kurzweil embraces the term the Singularity, which was popularized by Vernor Vinge in his 1993 essay "The Coming Technological Singularity" more than a decade earlier.
Kurzweil describes his law of accelerating returns, which predicts an exponential increase in technologies like computers, genetics, nanotechnology, robotics, and artificial intelligence. Once the Singularity has been reached, Kurzweil says, machine intelligence will be infinitely more powerful than all human intelligence combined. Afterwards, he predicts, intelligence will radiate outward from the planet until it saturates the universe. The Singularity is also the point at which machine intelligence and humans would merge.

Machine Learning Yearning: Technical Strategy for AI Engineers in the Era of Deep Learning
 by Andrew Ng PhD,  2019
AI, machine learning, and deep learning are transforming numerous industries. But building a machine learning system requires that you make practical decisions:

Should you collect more training data?
Should you use end-to-end deep learning?
How do you deal with your training set not matching your test set?
and many more.
Historically, the only way to learn how to make these "strategy" decisions has been a multi-year apprenticeship in a graduate program or company. This is a book to help you quickly gain this skill, so that you can become better at building AI systems.
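As one illustration of the kind of strategy decision the book covers, the sketch below follows a common recipe for the training-set/test-set mismatch question above: carve a "training-dev" split out of the training distribution so that error due to data mismatch can be told apart from error due to variance. The numbers are placeholders, not figures from the book.

```python
# One common recipe for diagnosing a train/test mismatch: compare errors on a
# held-out split from the *training* distribution ("training-dev") with errors
# on a held-out split from the *target* distribution ("dev").
training_error = 0.015      # error on the data the model was trained on
training_dev_error = 0.020  # held-out data from the training distribution
dev_error = 0.100           # held-out data from the target distribution

variance_gap = training_dev_error - training_error   # overfitting to training data
mismatch_gap = dev_error - training_dev_error        # train/target distribution gap

if mismatch_gap > variance_gap:
    print("Dominant problem: data mismatch; collect or synthesize more target-like data.")
else:
    print("Dominant problem: variance; regularize or gather more training data.")
```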

The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World
 by Pedro Domingos, 2015
A thought-provoking and wide-ranging exploration of machine learning and the race to build computer intelligences as flexible as our own
In the world's top research labs and universities, the race is on to invent the ultimate learning algorithm: one capable of discovering any knowledge from data, and doing anything we want, before we even ask. In The Master Algorithm, Pedro Domingos lifts the veil to give us a peek inside the learning machines that power Google, Amazon, and your smartphone. He assembles a blueprint for the future universal learner--the Master Algorithm--and discusses what it will mean for business, science, and society. If data-ism is today's philosophy, this book is its bible.

Rise of the Robots: Technology and the Threat of a Jobless Future
 by Martin Ford, 2015
What are the jobs of the future? How many will there be? And who will have them? We might imagine—and hope—that today's industrial revolution will unfold like the last: even as some jobs are eliminated, more will be created to deal with the new innovations of a new era. In Rise of the Robots, Silicon Valley entrepreneur Martin Ford argues that this is absolutely not the case. As technology continues to accelerate and machines begin taking care of themselves, fewer people will be necessary. Artificial intelligence is already well on its way to making “good jobs” obsolete: many paralegals, journalists, office workers, and even computer programmers are poised to be replaced by robots and smart software. As progress continues, blue and white collar jobs alike will evaporate, squeezing working- and middle-class families ever further. At the same time, households are under assault from exploding costs, especially from the two major industries—education and health care—that, so far, have not been transformed by information technology. The result could well be massive unemployment and inequality as well as the implosion of the consumer economy itself.

In Rise of the Robots, Ford details what machine intelligence and robotics can accomplish, and implores employers, scholars, and policy makers alike to face the implications. The past solutions to technological disruption, especially more training and education, aren't going to work, and we must decide, now, whether the future will see broad-based prosperity or catastrophic levels of inequality and economic insecurity. Rise of the Robots is essential reading for anyone who wants to understand what accelerating technology means for their own economic prospects—not to mention those of their children—as well as for society as a whole.
https://www.goodreads.com/book/show/22928874-rise-of-the-robots


Artificial Intelligence: Structures and Strategies for Complex Problem Solving
by George F Luger

AGI Ruin: A List of Lethalities

(Russian rendering of the title: “The Ruin of Strong AI: A List of Deadly Dangers”)

by Eliezer Yudkowsky

Preamble:

(If you're already familiar with all basics and don't want any preamble, skip ahead to Section B for technical difficulties of alignment proper.)

I have several times failed to write up a well-organized list of reasons why AGI will kill you.  People come in with different ideas about why AGI would be survivable, and want to hear different obviously key points addressed first.  Some fraction of those people are loudly upset with me if the obviously most important points aren't addressed immediately, and I address different points first instead.

Having failed to solve this problem in any good way, I now give up and solve it poorly with a poorly organized list of individual rants.  I'm not particularly happy with this list; the alternative was publishing nothing, and publishing this seems marginally more dignified.

Three points about the general subject matter of discussion here, numbered so as not to conflict with the list of lethalities:

-3.  I'm assuming you are already familiar with some basics, and already know what 'orthogonality' and 'instrumental convergence' are and why they're true.  People occasionally claim to me that I need to stop fighting old wars here, because, those people claim to me, those wars have already been won within the important-according-to-them parts of the current audience.  I suppose it's at least true that none of the current major EA funders seem to be visibly in denial about orthogonality or instrumental convergence as such; so, fine.  If you don't know what 'orthogonality' or 'instrumental convergence' are, or don't see for yourself why they're true, you need a different introduction than this one.

-2.  When I say that alignment is lethally difficult, I am not talking about ideal or perfect goals of 'provable' alignment, nor total alignment of superintelligences on exact human values, nor getting AIs to produce satisfactory arguments about moral dilemmas which sorta-reasonable humans disagree about, nor attaining an absolute certainty of an AI not killing everyone.  When I say that alignment is difficult, I mean that in practice, using the techniques we actually have, "please don't disassemble literally everyone with probability roughly 1" is an overly large ask that we are not on course to get.  So far as I'm concerned, if you can get a powerful AGI that carries out some pivotal superhuman engineering task, with a less than fifty percent chance of killing more than one billion people, I'll take it.  Even smaller chances of killing even fewer people would be a nice luxury, but if you can get as incredibly far as "less than roughly certain to kill everybody", then you can probably get down to under a 5% chance with only slightly more effort.  Practically all of the difficulty is in getting to "less than certainty of killing literally everyone".  Trolley problems are not an interesting subproblem in all of this; if there are any survivors, you solved alignment.  At this point, I no longer care how it works, I don't care how you got there, I am cause-agnostic about whatever methodology you used, all I am looking at is prospective results, all I want is that we have justifiable cause to believe of a pivotally useful AGI 'this will not kill literally everyone'.  Anybody telling you I'm asking for stricter 'alignment' than this has failed at reading comprehension.  The big ask from AGI alignment, the basic challenge I am saying is too difficult, is to obtain by any strategy whatsoever a significant chance of there being any survivors.

-1.  None of this is about anything being impossible in principle.  The metaphor I usually use is that if a textbook from one hundred years in the future fell into our hands, containing all of the simple ideas that actually work robustly in practice, we could probably build an aligned superintelligence in six months.  For people schooled in machine learning, I use as my metaphor the difference between ReLU activations and sigmoid activations.  Sigmoid activations are complicated and fragile, and do a terrible job of transmitting gradients through many layers; ReLUs are incredibly simple (for the unfamiliar, the activation function is literally max(x, 0)) and work much better.  Most neural networks for the first decades of the field used sigmoids; the idea of ReLUs wasn't discovered, validated, and popularized until decades later.  What's lethal is that we do not have the Textbook From The Future telling us all the simple solutions that actually in real life just work and are robust; we're going to be doing everything with metaphorical sigmoids on the first critical try.  No difficulty discussed here about AGI alignment is claimed by me to be impossible - to merely human science and engineering, let alone in principle - if we had 100 years to solve it using unlimited retries, the way that science usually has an unbounded time budget and unlimited retries.  This list of lethalities is about things we are not on course to solve in practice in time on the first critical try; none of it is meant to make a much stronger claim about things that are impossible in principle.
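A small numeric illustration of the ReLU-versus-sigmoid point; the depth and pre-activation value below are arbitrary choices added here, not anything from the essay.

```python
# The sigmoid's derivative never exceeds 0.25, so gradients shrink quickly
# through many layers, while ReLU passes gradients through unchanged for
# positive pre-activations.
import numpy as np

def sigmoid_grad(x):
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)            # at most 0.25 (at x = 0)

def relu_grad(x):
    return 1.0 if x > 0 else 0.0    # ReLU(x) = max(x, 0)

depth = 20
x = 0.5  # a typical pre-activation value at each layer
print("sigmoid gradient through 20 layers:", sigmoid_grad(x) ** depth)  # ~3e-13
print("relu    gradient through 20 layers:", relu_grad(x) ** depth)     # 1.0
```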

That said:

Here, from my perspective, are some different true things that could be said, to contradict various false things that various different people seem to believe, about why AGI would be survivable on anything remotely resembling the current pathway, or any other pathway we can easily jump to….:

https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities 


Pause Giant AI Experiments

An Open Letter

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

 

March 22, 2023

 

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources." Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5]  We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.

Signatures: 27,565

P.S. We have prepared some FAQs in response to questions and discussion in the media and elsewhere. You can find them here: https://futureoflife.org/wp-content/uploads/2023/04/FLI_Policymaking_In_The_Pause.pdf

 

 Policymaking in the Pause

What can policymakers do now to combat risks from advanced AI systems?

 

12th April 2023

 

“We don’t know what these [AI] systems are trained on or how they are being built. All of this happens behind closed doors at commercial companies. This is worrying.”
Catelijne Muller, President of ALLAI, Member of the EU High Level Expert Group on AI

“It feels like we are moving too quickly. I think it is worth getting a little bit of experience with how they can be used and misused before racing to build the next one. This shouldn’t be a race to build the next model and get it out before others.”
Peter Stone, Professor at the University of Texas at Austin, Chair of the One Hundred Year Study on AI

“Those making these [AI systems] have themselves said they could be an existential threat to society and even humanity, with no plan to totally mitigate these risks. It is time to put commercial priorities to the side and take a pause for the good of everyone to assess rather than race to an uncertain future.”
Emad Mostaque, Founder and CEO of Stability AI

“We have a perfect storm of corporate irresponsibility, widespread adoption, lack of regulation and a huge number of unknowns. [FLI’s Letter] shows how many people are deeply worried about what is going on. I think it is a really important moment in the history of AI - and maybe humanity.”
Gary Marcus, Professor Emeritus of Psychology and Neural Science at New York University, Founder of Geometric Intelligence

“The time for saying that this is just pure research has long since passed. […] It’s in no country’s interest for any country to develop and release AI systems we cannot control. Insisting on sensible precautions is not anti-industry. Chernobyl destroyed lives, but it also decimated the global nuclear industry. I’m an AI researcher. I do not want my field of research destroyed. Humanity has much to gain from AI, but also everything to lose.”
Stuart Russell, Smith-Zadeh Chair in Engineering and Professor of Computer Science at the University of California, Berkeley, Founder of the Center for Human-Compatible Artificial Intelligence (CHAI)

“Let’s slow down. Let’s make sure that we develop better guardrails, let’s make sure that we discuss these questions internationally just like we’ve done for nuclear power and nuclear weapons. Let’s make sure we better understand these very large systems, that we improve on their robustness and the process by which we can audit them and verify that they are safe for the public.”
Yoshua Bengio, Scientific Director of the Montreal Institute for Learning Algorithms (MILA), Professor of Computer Science and Operations Research at the Université de Montréal, 2018 ACM A.M. Turing Award Winner

Introduction

Prominent AI researchers have identified a range of dangers that may arise from the present and future generations of advanced AI systems if they are left unchecked.
AI systems are already capable of creating misinformation and authentic-looking fakes that degrade the shared factual foundations of society and inflame political tensions.[1] AI systems already show a tendency toward amplifying entrenched discrimination and biases, further marginalizing disadvantaged communities and diverse viewpoints.[2] The current, frantic rate of development will worsen these problems significantly. As these types of systems become more sophisticated, they could destabilize labor markets and political institutions, and lead to the concentration of enormous power in the hands of a small number of unelected corporations. Advanced AI systems could also threaten national security, e.g., by facilitating the inexpensive development of chemical, biological, and cyber weapons by non-state groups. The systems could themselves pursue goals, either human- or self-assigned, in ways that place negligible value on human rights, human safety, or, in the most harrowing scenarios, human existence.[3]

In an effort to stave off these outcomes, the Future of Life Institute (FLI), joined by over 20,000 leading AI researchers, professors, CEOs, engineers, students, and others on the frontline of AI progress, called for a pause of at least six months on the riskiest and most resource-intensive AI experiments – those experiments seeking to further scale up the size and general capabilities of the most powerful systems developed to date.[4] The proposed pause provides time to better understand these systems, to reflect on their ethical, social, and safety implications, and to ensure that AI is developed and used in a responsible manner.

The unchecked competitive dynamics in the AI industry incentivize aggressive development at the expense of caution.[5] In contrast to the breakneck pace of development, however, the levers of governance are generally slow and deliberate. A pause on the production of even more powerful AI systems would thus provide an important opportunity for the instruments of governance to catch up with the rapid evolution of the field.

We have called on AI labs to institute a development pause until they have protocols in place to ensure that their systems are safe beyond a reasonable doubt, for individuals, communities, and society. Regardless of whether the labs will heed our call, this policy brief provides policymakers with concrete recommendations for how governments can manage AI risks. The recommendations are by no means exhaustive: the project of AI governance is perennial and will extend far beyond any pause. Nonetheless, implementing these recommendations, which largely reflect a broader consensus among AI policy experts, will establish a strong governance foundation for AI.

[1] See, e.g., Steve Rathje, Jay J. Van Bavel, & Sander van der Linden, ‘Out-group animosity drives engagement on social media,’ Proceedings of the National Academy of Sciences, 118 (26) e2024292118, Jun. 23, 2021, and Tiffany Hsu & Stuart A. Thompson, ‘Disinformation Researchers Raise Alarms About A.I. Chatbots,’ The New York Times, Feb. 8, 2023 [upd. Feb. 13, 2023].
[2] See, e.g., Abid, A., Farooqi, M. and Zou, J. (2021a), ‘Large language models associate Muslims with violence’, Nature Machine Intelligence, Vol. 3, pp. 461–463.
[3] In a 2022 survey of over 700 leading AI experts, nearly half of respondents gave at least a 10% chance of the long-run effect of advanced AI on humanity being ‘extremely bad,’ at the level of ‘causing human extinction or similarly permanent and severe disempowerment of the human species.’
[4] Future of Life Institute, ‘Pause Giant AI Experiments: An Open Letter,’ Mar. 22, 2023.
[5] Recent news about AI labs cutting ethics teams suggests that companies are failing to prioritize the necessary safeguards.
Policy recommendations:

1. Mandate robust third-party auditing and certification.
2. Regulate access to computational power.
3. Establish capable AI agencies at the national level.
4. Establish liability for AI-caused harms.
5. Introduce measures to prevent and track AI model leaks.
6. Expand technical AI safety research funding.
7. Develop standards for identifying and managing AI-generated content and recommendations.

To coordinate, collaborate, or inquire regarding the recommendations herein, please contact us at policy@futureoflife.org.

1. Mandate robust third-party auditing and certification for specific AI systems

For some types of AI systems, the potential to impact the physical, mental, and financial wellbeing of individuals, communities, and society is readily apparent. For example, a credit scoring system could discriminate against certain ethnic groups. For other systems – in particular general-purpose AI systems[6] – the applications and potential risks are often not immediately evident. General-purpose AI systems trained on massive datasets also have unexpected (and often unknown) emergent capabilities.[7]

In Europe, the draft AI Act already requires that, prior to deployment and upon any substantial modification, ‘high-risk’ AI systems undergo ‘conformity assessments’ in order to certify compliance with specified harmonized standards or other common specifications.[8] In some cases, the Act requires such assessments to be carried out by independent third-parties to avoid conflicts of interest. In contrast, the United States has thus far established only a general, voluntary framework for AI risk assessment.[9] The National Institute of Standards and Technology (NIST), in coordination with various stakeholders, is developing so-called ‘profiles’ that will provide specific risk assessment and mitigation guidance for certain types of AI systems, but this framework still allows organizations to simply ‘accept’ the risks that they create for society instead of addressing them. In other words, the United States does not require any third-party risk assessment or risk mitigation measures before a powerful AI system can be deployed at scale.

To ensure proper vetting of powerful AI systems before deployment, we recommend a robust independent auditing regime for models that are general-purpose, trained on large amounts of compute, or intended for use in circumstances likely to impact the rights or the wellbeing of individuals, communities, or society. This mandatory third-party auditing and certification scheme could be derived from the EU’s proposed ‘conformity assessments’ and should be adopted by jurisdictions worldwide.[10] In particular, we recommend third-party auditing of such systems across a range of benchmarks for the assessment of risks,[11] including possible weaponization[12] and unethical behaviors,[13] and mandatory certification by accredited third-party auditors before these high-risk systems can be deployed.
Certification should only be granted if the developer of the system can demonstrate that appropriate measures have been taken to mitigate risk, and that any residual risks deemed tolerable are disclosed and are subject to established protocols for minimizing harm.

[6] The Future of Life Institute has previously defined “general-purpose AI system” to mean ‘an AI system that can accomplish or be adapted to accomplish a range of distinct tasks, including some for which it was not intentionally and specifically trained.’
[7] Samuel R. Bowman, ‘Eight Things to Know about Large Language Models,’ ArXiv Preprint, Apr. 2, 2023.
[8] Proposed EU Artificial Intelligence Act, Article 43.1b.
[9] National Institute of Standards and Technology, ‘Artificial Intelligence Risk Management Framework (AI RMF 1.0),’ U.S. Department of Commerce, Jan. 2023.
[10] International standards bodies such as IEC, ISO and ITU can also help in developing standards that address risks from advanced AI systems, as they have highlighted in response to FLI’s call for a pause.
[11] See, e.g., the Holistic Evaluation of Language Models approach by the Center for Research on Foundation Models: Rishi Bommassani, Percy Liang, & Tony Lee, ‘Language Models are Changing AI: The Need for Holistic Evaluation’.
[12] OpenAI described weaponization risks of GPT-4 on p. 12 of the ‘GPT-4 System Card.’
[13] See, e.g., the following benchmark for assessing adverse behaviors including power-seeking, disutility, and ethical violations: Alexander Pan, et al., ‘Do the Rewards Justify the Means? Measuring Trade-offs Between Rewards and Ethical Behavior in the MACHIAVELLI Benchmark,’ ArXiv Preprint, Apr. 6, 2023.

2. Regulate organizations’ access to computational power

At present, the most advanced AI systems are developed through training that requires an enormous amount of computational power - ‘compute’ for short. The amount of compute used to train a general-purpose system largely correlates with its capabilities, as well as the magnitude of its risks. Today’s most advanced models, like OpenAI’s GPT-4 or Google’s PaLM, can only be trained with thousands of specialized chips running over a period of months. While chip innovation and better algorithms will reduce the resources required in the future, training the most powerful AI systems will likely remain prohibitively expensive to all but the best-resourced players.

Figure 1 (not reproduced here): OpenAI is estimated to have used approximately 700% more compute to train GPT-4 than the next closest model (Minerva, DeepMind), and 7,000% more compute than was used to train GPT-3 (Davinci). The figure depicts an estimate of compute used to train GPT-4 calculated by Ben Cottier at Epoch, as official training compute details for GPT-4 have not been released. Data from: Sevilla et al., ‘Parameter, Compute and Data Trends in Machine Learning,’ 2021 [upd. Apr. 1, 2023].

In practical terms, compute is more easily monitored and governed than other AI inputs, such as talent, data, or algorithms. It can be measured relatively easily and the supply chain for advanced AI systems is highly centralized, which means governments can leverage such measures in order to limit the harms of large-scale models.[14] To prevent reckless training of the highest risk models, we recommend that governments make access to large amounts of specialized computational power for AI conditional upon the completion of a comprehensive risk assessment. The risk assessment should include a detailed plan for minimizing risks to individuals, communities, and society, consider downstream risks in the value chain, and ensure that the AI labs conduct diligent know-your-customer checks.

Successful implementation of this recommendation will require governments to monitor the use of compute at data centers within their respective jurisdictions.[15] The supply chains for AI chips and other key components for high-performance computing will also need to be regulated such that chip firmware can alert regulators to unauthorized large training runs of advanced AI systems.[16] In 2022, the U.S. Department of Commerce’s Bureau of Industry and Security instituted licensing requirements[17] for export of many of these components in an effort to monitor and control their global distribution. However, licensing is only required when exporting to certain destinations, limiting the capacity to monitor aggregation of equipment for unauthorized large training runs within the United States and outside the scope of export restrictions. Companies within the specified destinations have also successfully skirted monitoring by training AI systems using compute leased from cloud providers.[18] We recommend expansion of know-your-customer requirements to all high-volume suppliers for high-performance computing components, as well as providers that permit access to large amounts of cloud compute.

[14] Jess Whittlestone et al., ‘Future of compute review - submission of evidence’, Aug. 8, 2022.
[15] Please see fn. 14 for a detailed proposal for government compute monitoring as drafted by the Centre for Long-Term Resilience and several staff members of AI lab Anthropic.
[16] Yonadav Shavit at Harvard University has proposed a detailed system for how governments can place limits on how and when AI systems get trained.
[17] Bureau of Industry and Security, Department of Commerce, ‘Implementation of Additional Export Controls: Certain Advanced Computing and Semiconductor Manufacturing Items; Supercomputer and Semiconductor End Use; Entity List Modification’, Federal Register, Oct. 14, 2022.
[18] Eleanor Olcott, Qianer Liu, & Demetri Sevastopulo, ‘Chinese AI groups use cloud services to evade US chip export control,’ Financial Times, Mar. 9, 2023.
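To make the compute lever concrete, the following back-of-the-envelope sketch (my illustration, not part of the brief) applies the widely used approximation of roughly 6 floating-point operations per model parameter per training token, and compares the result against an illustrative reporting threshold. The model size, token count, and threshold are placeholder numbers, not values proposed by FLI.

# Rough training-compute estimate using the common approximation
# FLOPs ~= 6 * parameters * training tokens. All numbers are hypothetical.

def training_flops(n_parameters: float, n_tokens: float) -> float:
    return 6.0 * n_parameters * n_tokens

ILLUSTRATIVE_THRESHOLD_FLOPS = 1e26  # placeholder reporting threshold, not an FLI figure

model = {"name": "hypothetical-70B", "parameters": 70e9, "tokens": 1.4e12}
flops = training_flops(model["parameters"], model["tokens"])

print(f"{model['name']}: about {flops:.2e} training FLOPs")
if flops >= ILLUSTRATIVE_THRESHOLD_FLOPS:
    print("Above the illustrative threshold: a pre-training risk assessment would be triggered.")
else:
    print("Below the illustrative threshold in this example.")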
3. Establish capable AI agencies at national level

AI is developing at a breakneck pace and governments need to catch up. The establishment of AI regulatory agencies helps to consolidate expertise and reduces the risk of a patchwork approach. The UK has already established an Office for Artificial Intelligence and the EU is currently legislating for an AI Board. Similarly, in the US, Representative Ted Lieu has announced legislation to create a non-partisan AI Commission with the aim of establishing a regulatory agency. These efforts need to be sped up, taken up around the world and, eventually, coordinated within a dedicated international body.

We recommend that national AI agencies be established in line with a blueprint[19] developed by Anton Korinek at Brookings. Korinek proposes that an AI agency have the power to:

• Monitor public developments in AI progress and define a threshold for which types of advanced AI systems fall under the regulatory oversight of the agency (e.g. systems above a certain level of compute or that affect a particularly large group of people).
• Mandate impact assessments of AI systems on various stakeholders, define reporting requirements for advanced AI companies and audit the impact on people’s rights, wellbeing, and society at large. For example, in systems used for biomedical research, auditors would be asked to evaluate the potential for these systems to create new pathogens.

• Establish enforcement authority to act upon risks identified in impact assessments and to prevent abuse of AI systems.

• Publish generalized lessons from the impact assessments such that consumers, workers and other AI developers know what problems to look out for. This transparency will also allow academics to study trends and propose solutions to common problems.

Beyond this blueprint, we also recommend that national agencies around the world mandate record-keeping of AI safety incidents, such as when a facial recognition system causes the arrest of an innocent person. Examples include the non-profit AI Incident Database and the forthcoming EU AI Database created under the European AI Act.[20]

[19] Anton Korinek, ‘Why we need a new agency to regulate advanced artificial intelligence: Lessons on AI control from the Facebook Files,’ Brookings, Dec. 8, 2021.
[20] Proposed EU Artificial Intelligence Act, Article 60.

4. Establish liability for AI-caused harm

AI systems present a unique challenge in assigning liability. In contrast to typical commercial products or traditional software, AI systems can perform in ways that are not well understood by their developers, can learn and adapt after they are sold and are likely to be applied in unforeseen contexts. The ability for AI systems to interact with and learn from other AI systems is expected to expedite the emergence of unanticipated behaviors and capabilities, especially as the AI ecosystem becomes more expansive and interconnected. Several plug-ins have already been developed that allow AI systems like ChatGPT to perform tasks through other online services (e.g. ordering food delivery, booking travel, making reservations), broadening the range of potential real-world harms that can result from their use and further complicating the assignment of liability.[21] OpenAI’s GPT-4 system card references an instance of the system explicitly deceiving a human into bypassing a CAPTCHA bot-detection system using TaskRabbit, a service for soliciting freelance labor.[22]

When such systems make consequential decisions or perform tasks that cause harm, assigning responsibility for that harm is a complex legal challenge. Is the harmful decision the fault of the AI developer, deployer, owner, end-user, or the AI system itself? Key among measures to better incentivize responsible AI development is a coherent liability framework that allows those who develop and deploy these systems to be held responsible for resulting harms. Such a proposal should impose a financial cost for failing to exercise necessary diligence in identifying and mitigating risks, shifting profit incentives away from reckless empowerment of poorly-understood systems toward emphasizing the safety and wellbeing of individuals, communities, and society as a whole.

To provide the necessary financial incentives for profit-driven AI developers to exercise abundant caution, we recommend the urgent adoption of a framework for liability for AI-derived harms. At a minimum, this framework should hold developers of general-purpose AI systems and AI systems likely to be deployed for critical functions[23] strictly liable for resulting harms to individuals, property, communities, and society. It should also allow for joint and several liability for developers and downstream deployers when deployment of an AI system that was explicitly or implicitly authorized by the developer results in harm.

[21] Will Knight & Khari Johnson, ‘Now That ChatGPT is Plugged In, Things Could Get Weird,’ Wired, Mar. 28, 2023.
[22] OpenAI, ‘GPT-4 System Card,’ Mar. 23, 2023, p. 15.
[23] I.e., functions that could materially affect the wellbeing or rights of individuals, communities, or society.
5. Introduce measures to prevent and track AI model leaks

Commercial actors may not have sufficient incentives to protect their models, and their cyberdefense measures can often be insufficient. In early March 2023, Meta demonstrated that this is not a theoretical concern, when their model known as LLaMa was leaked to the internet.[24] As of the date of this publication, Meta has been unable to determine who leaked the model. This lab leak allowed anyone to copy the model and represented the first time that a major tech firm’s restricted-access large language model was released to the public.

Watermarking of AI models provides effective protection against stealing, illegitimate redistribution and unauthorized application, because this practice enables legal action against identifiable leakers. Many digital media are already protected by watermarking - for example through the embedding of company logos in images or videos. A similar process[25] can be applied to advanced AI models, either by inserting information directly into the model parameters or by training it on specific trigger data. We recommend that governments mandate watermarking for AI models, which will make it easier for AI developers to take action against illegitimate distribution.

[24] Joseph Cox, ‘Facebook’s Powerful Large Language Model Leaks Online,’ VICE, Mar. 7, 2023.
[25] For a systematic overview of how watermarking can be applied to AI models, see: Franziska Boenisch, ‘A Systematic Review on Model Watermarking of Neural Networks,’ Front. Big Data, Sec. Cybersecurity & Privacy, Vol. 4, Nov. 29, 2021.
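To illustrate the first of the two approaches just mentioned, here is a minimal sketch (my illustration, not a scheme specified in the brief) that embeds an owner's bit signature directly into a parameter vector using a secret random projection, and later recovers it from a copy to check ownership. Real model-watermarking schemes are considerably more robust to fine-tuning and pruning; every size and constant below is a placeholder.

import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained model: a flat vector of parameters.
weights = rng.normal(size=4096)

# Owner's secret material: a 64-bit signature and a random projection key.
signature = rng.integers(0, 2, size=64)
secret_key = rng.normal(size=(64, weights.size))

def embed_watermark(w, key, bits, margin=0.05, steps=200, lr=0.01):
    # Nudge the parameters until sign(key @ w) encodes the signature bits.
    w = w.copy()
    target = 2 * bits - 1  # map {0, 1} to {-1, +1}
    for _ in range(steps):
        violated = target * (key @ w) < margin    # bits not yet safely encoded
        w += lr * (key.T @ (target * violated))   # push those projections the right way
        if not violated.any():
            break
    return w

def extract_watermark(w, key):
    return (key @ w > 0).astype(int)

marked = embed_watermark(weights, secret_key, signature)
recovered = extract_watermark(marked, secret_key)
print("fraction of signature bits recovered:", (recovered == signature).mean())  # expect 1.0
print("average parameter change:", np.abs(marked - weights).mean())              # small distortion

A leaked copy whose projection still matches the signature far above chance gives the owner evidence of provenance, which is the legal lever the brief describes.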
6. Expand technical AI safety research funding

The private sector under-invests in research that ensures that AI systems are safe and secure. Despite nearly USD 100 billion of private investment in AI in 2022 alone, it is estimated that only about 100 full-time researchers worldwide are specifically working to ensure AI is safe and properly aligned with human values and intentions.[26]

In recent months, companies developing the most powerful AI systems have either downsized or entirely abolished their respective ‘responsible AI’ teams.[27] While this partly reflects a broader trend of mass layoffs across the technology sector, it nonetheless reveals the relative deprioritization of safety and ethics considerations in the race to put new systems on the market. Governments have also invested in AI safety and ethics research, but these investments have primarily focused on narrow applications rather than on the impact of more general AI systems like those that have recently been released by the private sector. The US National Science Foundation (NSF), for example, has established ‘AI Research Institutes’ across a broad range of disciplines. However, none of these institutes are specifically working on the large-scale, societal, or aggregate risks presented by powerful AI systems.

To ensure that our capacity to control AI systems keeps pace with the growing risk that they pose, we recommend a significant increase in public funding for technical AI safety research in the following research domains:

• Alignment: development of technical mechanisms for ensuring AI systems learn and perform in accordance with intended expectations, intentions, and values.

• Robustness and assurance: design features to ensure that AI systems responsible for critical functions[28] can perform reliably in unexpected circumstances, and that their performance can be evaluated by their operators.

• Explainability and interpretability: develop mechanisms for opaque models to report the internal logic used to produce output or make decisions in understandable ways. More explainable and interpretable AI systems facilitate better evaluations of whether output can be trusted.

In the past few months, experts such as the former Special Advisor to the UK Prime Minister on Science and Technology James W. Phillips[29] and a Congressionally-established US taskforce have called for the creation of national AI labs as ‘a shared research infrastructure that would provide AI researchers and students with significantly expanded access to computational resources, high-quality data, educational tools, and user support.’[30] Should governments move forward with this concept, we propose that at least 25% of resources made available through these labs be explicitly allocated to technical AI safety projects.

[26] This figure, drawn from ‘The AI Arms Race is Changing Everything’ (Andrew R. Chow & Billy Perrigo, TIME, Feb. 16, 2023 [upd. Feb. 17, 2023]), likely represents a lower bound for the estimated number of AI safety researchers. This resource posits a significantly higher number of workers in the AI safety space, but includes in its estimate all workers affiliated with organizations that engage in AI safety-related activities. Even if a worker has no involvement with an organization’s AI safety work or research efforts in general, they may still be included in the latter estimate.
[27] Christine Criddle & Madhumita Murgia, ‘Big tech companies cut AI ethics staff, raising safety concerns,’ Financial Times, Mar. 29, 2023.
[28] See fn. 23, supra.
[29] Original call for a UK government AI lab is set out in this article.
[30] For the taskforce’s detailed recommendations, see: ‘Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem: An Implementation Plan for a National Artificial Intelligence Research Resource,’ National Artificial Intelligence Research Resource Task Force Final Report, Jan. 2023.

7. Develop standards for identifying and managing AI-generated content and recommendations

The need to distinguish real from synthetic media and factual content from ‘hallucinations’ is essential for maintaining the shared factual foundations underpinning social cohesion. Advances in generative AI have made it more difficult to distinguish between AI-generated media and real images, audio, and video recordings. Already we have seen AI-generated voice technology used in financial scams.[31] Creators of the most powerful AI systems have acknowledged that these systems can produce convincing textual responses that rely on completely fabricated or out-of-context information.[32] For society to absorb these new technologies, we will need effective tools that allow the public to evaluate the authenticity and veracity of the content they consume.

[31] Pranshu Verma, ‘They thought loved ones were calling for help. It was an AI scam.’ The Washington Post, Mar. 5, 2023.
[32] Tiffany Hsu & Stuart A. Thompson, ‘Disinformation Researchers Raise Alarms About A.I. Chatbots,’ The New York Times, Feb. 8, 2023 [upd. Feb. 13, 2023].
We recommend increased funding for research into techniques, and development of standards, for digital content provenance. This research, and its associated standards, should ensure that a reasonable person can determine whether content published online is of synthetic or natural origin, and whether the content has been digitally modified, in a manner that protects the privacy and expressive rights of its creator.
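As a toy illustration of what such a provenance standard could let a reader verify (my sketch, assuming a simple symmetric-key setup rather than any real content-credential specification), the snippet below attaches a signed manifest to a piece of content stating whether it is synthetic, and checks that neither the label nor the content has been altered. All names and keys are placeholders.

import hashlib
import hmac
import json

# Toy provenance manifest: a content hash plus an origin label, authenticated with
# an HMAC. A real standard would use public-key signatures and richer metadata.
SIGNING_KEY = b"publisher-signing-key-placeholder"

def make_manifest(content: bytes, synthetic: bool) -> dict:
    manifest = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "synthetic": synthetic,               # was this content AI-generated?
        "tool": "example-image-generator",    # hypothetical generator name
    }
    body = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    body = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["sha256"] == hashlib.sha256(content).hexdigest())

image_bytes = b"raw bytes of a generated image"
manifest = make_manifest(image_bytes, synthetic=True)
print("intact content verifies:", verify_manifest(image_bytes, manifest))           # True
print("tampered content verifies:", verify_manifest(image_bytes + b"!", manifest))  # False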
We also recommend the expansion of ‘bot-or-not’ laws that require disclosure when a person is interacting with a chatbot. These laws help prevent users from being deceived or manipulated by AI systems impersonating humans, and facilitate contextualizing the source of the information. The draft EU AI Act requires that AI systems be designed such that users are informed they are interacting with an AI system,[33] and the US State of California enacted a similar bot disclosure law in 2019.[34] Almost all of the world’s nations, through the adoption of a UNESCO agreement on the ethics of AI, have recognized[35] ‘the right of users to easily identify whether they are interacting with a living being, or with an AI system imitating human or animal characteristics.’ We recommend that all governments convert this agreement into hard law to avoid fraudulent representations of natural personhood by AI from outside regulated jurisdictions.

Even if a user knows they are interacting with an AI system, they may not know when that system is prioritizing the interests of the developer or deployer over the user. These systems may appear to be acting in the user’s interest, but could be designed or employed to serve other functions. For instance, the developer of a general-purpose AI system could be financially incentivized to design the system such that when asked about a product, it preferentially recommends a certain brand; when asked to book a flight, it subtly prefers a certain airline; when asked for news, it provides only media advocating specific viewpoints; and when asked for medical advice, it prioritizes diagnoses that are treated with more profitable pharmaceutical drugs. These preferences could in many cases come at the expense of the end user’s mental, physical, or financial well-being. Many jurisdictions require that sponsored content be clearly labeled, but because the provenance of output from complex general-purpose AI systems is remarkably opaque, these laws may not apply. We therefore recommend, at a minimum, that conflict-of-interest trade-offs should be clearly communicated to end users along with any affected output; ideally, laws and industry standards should be implemented that require AI systems to be designed and deployed with a duty to prioritize the best interests of the end user.

Finally, we recommend the establishment of laws and industry standards clarifying the fulfillment of ‘duty of loyalty’ and ‘duty of care’ when AI is used in the place of or in assistance to a human fiduciary. In some circumstances – for instance, financial advice and legal counsel – human actors are legally obligated to act in the best interest of their clients and to exercise due care to minimize harmful outcomes. AI systems are increasingly being deployed to advise on these types of decisions or to make them (e.g. trading stocks) independent of human input. Laws and standards towards this end should require that if an AI system is to contribute to the decision-making of a fiduciary, the fiduciary must be able to demonstrate beyond a reasonable doubt that the AI system will observe duties of loyalty and care comparable to their human counterparts. Otherwise, any breach of these fiduciary responsibilities should be attributed to the human fiduciary employing the AI system.

[33] Proposed EU Artificial Intelligence Act, Article 52.
[34] SB 1001 (Hertzberg, Ch. 892, Stats. 2018).
[35] Recommendation 125, ‘Outcome document: first draft of the Recommendation on the Ethics of Artificial Intelligence,’ UNESCO, Sep. 7, 2020, p. 21.

Conclusion

The new generation of advanced AI systems is unique in that it presents significant, well-documented risks, but can also manifest high-risk capabilities and biases that are not immediately apparent. In other words, these systems may perform in ways that their developers had not anticipated or malfunction when placed in a different context. Without appropriate safeguards, these risks are likely to result in substantial harm, in both the near- and longer-term, to individuals, communities, and society.

Historically, governments have taken critical action to mitigate risks when confronted with emerging technology that, if mismanaged, could cause significant harm. Nations around the world have employed both hard regulation and international consensus to ban the use and development of biological weapons, pause human genetic engineering, and establish robust government oversight for introducing new drugs to the market. All of these efforts required swift action to slow the pace of development, at least temporarily, and to create institutions that could realize effective governance appropriate to the technology. Humankind is much safer as a result.

We believe that approaches to advancement in AI R&D that preserve safety and benefit society are possible, but require decisive, immediate action by policymakers, lest the pace of technological evolution exceed the pace of cautious oversight. A pause in development at the frontiers of AI is necessary to mobilize the instruments of public policy toward commonsense risk mitigation. We acknowledge that the recommendations in this brief may not be fully achievable within a six-month window, but such a pause would hold the moving target still and allow policymakers time to implement the foundations of good AI governance. The path forward will require coordinated efforts by civil society, governments, academia, industry, and the public. If this can be achieved, we envision a flourishing future where responsibly developed AI can be utilized for the good of all humanity.

https://futureoflife.org/wp-content/uploads/2023/04/FLI_Policymaking_In_The_Pause.pdf











