Tuesday, 23 January 2018

Self-Destruction or Prosperity of Humanity


                                                           Primum discere, deinde docere (first learn, then teach)
     


    The intelligence of humankind today has created technologies, inventions and achievements that could ensure the prosperity of all states and nations, as well as life in harmony with each other, with ourselves and with the world around us. They could forever eradicate severe social inequality, poverty, hunger and epidemics, and eliminate racism, religious and ideological antagonism and recurrences of aggression. In doing so, they could ensure that all people gain opportunities for creative self-expression and self-affirmation, unleashing the potential of their personalities and manifesting their talents, endowments and skills for the benefit of the entire society and for their own good.
    But this is possible only in conditions of peace, cooperation and solidarity!
    Why is this still a utopia today? What prevents this vision from turning into reality? What should be done so that humans finally overcome the atavisms of Darwinian evolution and become social individuals worthy of the title of Homo sapiens?! ... Read more: https://www.amazon.com/HOW-GET-RID-SHACKLES-TOTALITARIANISM-ebook/dp/B0C9543B4L/ref=sr_1_1?crid=19WW1TG75ZU79&keywords=HOW+TO+GET+RID+OF+THE+SHACKLES+OF+TOTALITARIANISM&qid=1687700500&s=books&sprefix=how+to+get+rid+of+the+shackles+of+totalitarianism%2Cstripbooks-intl-ship%2C181&sr=1-1
    

Martha Nussbaum, Political Emotions: Why Love Matters for Justice

Nussbaum stimulates readers with challenging insights on the role of emotion in political life. Her provocative theory of social change shows how a truly just society might be realized through the cultivation and studied liberation of emotions, specifically love. To that end, the book sparkles with Nussbaum’s characteristic literary analysis, drawing from both Western and South Asian sources, including a deep reading of public monuments. In one especially notable passage, Nussbaum artfully interprets Mozart’s The Marriage of Figaro, revealing it as a musical meditation on the emotionality of revolutionary politics and feminism. Such chapters are a culmination of her passion for seeing art and literature as philosophical texts, a theme in her writing that she profitably continues here. The elegance with which she negotiates this diverse material deserves special praise, as she expertly takes the reader through analyses of philosophy, opera, primatology, psychology, and poetry. In contrast to thinkers like John Rawls, who imagined an already just world, Nussbaum addresses how to order our society to reach such a world. A plea for recognizing the power of art, symbolism, and enchantment in public life, Nussbaum’s cornucopia of ideas effortlessly commands attention and debate.

https://www.goodreads.com/book/show/17804353-political-emotions  

TRENDS IN WORLD MILITARY EXPENDITURE, 2023

SIPRI Fact Sheet April 2024

KEY FACTS 

* World military expenditure, driven by Russia's full-scale invasion of Ukraine and heightened geopolitical tensions, rose by 6.8 per cent in real terms (i.e. when adjusted for inflation) to $2443 billion in 2023, the highest level ever recorded by SIPRI.
* In 2023 military spending increased in all five geographical regions for the first time since 2009.
* Total military expenditure accounted for 2.3 per cent of the global gross domestic product (GDP) in 2023.
* The five biggest spenders in 2023 were the United States, China, Russia, India and Saudi Arabia, which together accounted for 61 per cent of world military spending.
* The USA and China remained the two biggest spenders in the world and both increased their military spending in 2023. US spending was $916 billion while Chinese spending was an estimated $296 billion.
* Russia's military spending grew by 24 per cent in 2023 to an estimated $109 billion. This was equivalent to 5.9 per cent of Russia's GDP.
* Ukraine became the eighth largest military spender in 2023, increasing its spending by 51 per cent to $64.8 billion, or 37 per cent of GDP.
* In 2023 military expenditure by NATO member states reached $1341 billion, or 55 per cent of world spending. Eleven of the 31 NATO members in 2023 met NATO's 2 per cent of GDP military spending target, which was 4 more than in 2022.

https://www.sipri.org/sites/default/files/2024-04/2404_fs_milex_2023.pdf

 Surviving & Thriving in the 21st Century

A vital new report on the human future from the Commission's Round Table held in March 2020

CONTENTS

A CALL TO ALL NATIONS AND PEOPLE
PART 1: THE CHALLENGE
The ten risks
Climate change
Environmental decline and extinction
Nuclear weapons
Resource scarcity
Food insecurity
Dangerous new technologies
Overpopulation
Universal pollution by chemicals
Pandemic Disease
Denial, Misinformation and Failure to Act Preventively
Summing up the Challenge
PART 2: PATHWAYS AHEAD
An opportunity to rethink society
Political and policy reform
Diverse voices
Redefining security
Building natural security
Educating for survival
The cost of action versus inaction
Surviving and thriving
Denial and misinformation
PART 3: TOWARDS SOLVING OUR GREATEST RISKS
End climate change
Ban and Eliminate Nuclear Weapons
Repair the Global Environment
End food insecurity
All-hazard risk assessment
Lower human numbers
Implement the Sustainable Development Goals
Clean Up the Earth
SUMMARY
Recommendation
APPENDICES
Appendix 1 - Contributors to the CHF Roundtable Discussion and Report
Appendix 2 - Commission for the Human Future Communique, March 28, 2020
Appendix 3 - Resources on Catastrophic Risk and its Solution
Appendix 4 - About the Commission for the Human Future
Appendix 5 - Become a Supporter of the Commission for the Human Future

Test of a clean hydrogen bomb with a yield of 50 megatons


https://www.youtube.com/watch?time_continue=2154&v=nbC7BxXtOlo&feature=emb_logo

The Doomsday Clock is now two minutes before midnight
Scientists move clock ahead 30 seconds, closest to midnight since 1953
January 25, 2018
Citing growing nuclear risks and unchecked climate dangers, the Doomsday Clock — the symbolic point of annihilation — is now two minutes to midnight, the closest the Clock has been since 1953 at the height of the Cold War, according to a statement today (Jan. 25) by the Bulletin of the Atomic Scientists.
“In 2017, world leaders failed to respond effectively to the looming threats of nuclear war and climate change, making the world security situation more dangerous than it was a year ago — and as dangerous as it has been since World War II,” according to the Atomic Scientists’ Science and Security Board in consultation with the Board of Sponsors, which includes 15 Nobel Laureates.


“This is a dangerous time, but the danger is of our own making. Humankind has invented the implements of apocalypse; so can it invent the methods of controlling and eventually eliminating them. This year, leaders and citizens of the world can move the Doomsday Clock and the world away from the metaphorical midnight of global catastrophe by taking common-sense action.” — Lawrence Krauss, director of the Origins Project at Arizona State University, Foundation Professor at School of Earth and Space Exploration and Physics Department, Arizona State University, and chair, Bulletin of the Atomic Scientists’ Board of Sponsors.


The increased risks driving the decision to move the clock include:
Nuclear. Hyperbolic rhetoric and provocative actions from North Korea and the U.S. have increased the possibility of nuclear war by accident or miscalculation. Other risk factors include U.S.-Russian military entanglements, South China Sea tensions, escalating rhetoric between Pakistan and India, and uncertainty about continued U.S. support for the Iran nuclear deal.
Decline of U.S. leadership and a related demise of diplomacy under the Trump Administration. “In 2017, the United States backed away from its longstanding leadership role in the world, reducing its commitment to seek common ground and undermining the overall effort toward solving pressing global governance challenges. Neither allies nor adversaries have been able to reliably predict U.S. actions or understand when U.S. pronouncements are real and when they are mere rhetoric. International diplomacy has been reduced to name-calling, giving it a surrealistic sense of unreality that makes the world security situation ever more threatening.”
Climate change. “The nations of the world will have to significantly decrease their greenhouse gas emissions to keep climate risks manageable, and so far, the global response has fallen far short of meeting this challenge.”
How to #RewindtheDoomsdayClock
According to Bulletin of the Atomic Scientists:
* U.S. President Donald Trump should refrain from provocative rhetoric regarding North Korea, recognizing the impossibility of predicting North Korean reactions. The U.S. and North Korean governments should open multiple channels of communication.
* The world community should pursue, as a short-term goal, the cessation of North Korea’s nuclear weapon and ballistic missile tests. North Korea is the only country to violate the norm against nuclear testing in 20 years.
* The Trump administration should abide by the terms of the Joint Comprehensive Plan of Action for Iran’s nuclear program unless credible evidence emerges that Iran is not complying with the agreement or Iran agrees to an alternative approach that meets U.S. national security needs.
* The United States and Russia should discuss and adopt measures to prevent peacetime military incidents along the borders of NATO.
* U.S. and Russian leaders should return to the negotiating table to resolve differences over the INF treaty, to seek further reductions in nuclear arms, to discuss a lowering of the alert status of the nuclear arsenals of both countries, to limit nuclear modernization programs that threaten to create a new nuclear arms race, and to ensure that new tactical or low-yield nuclear weapons are not built, and existing tactical weapons are never used on the battlefield.
* U.S. citizens should demand, in all legal ways, climate action from their government. Climate change is a real and serious threat to humanity.
* Governments around the world should redouble their efforts to reduce greenhouse gas emissions so they go well beyond the initial, inadequate pledges under the Paris Agreement.
* The international community should establish new protocols to discourage and penalize the misuse of information technology to undermine public trust in political institutions, in the media, in science, and in the existence of objective reality itself.
Worldwide deployments of nuclear weapons, 2017
“As of mid-2017, there are nearly 15,000 nuclear weapons in the world, located at some 107 sites in 14 countries. Roughly 9400 of these weapons are in military arsenals; the remaining weapons are retired and awaiting dismantlement. Nearly 4000 are operationally available, and some 1800 are on high alert and ready for use on short notice.
“By far, the largest concentrations of nuclear weapons reside in Russia and the United States, which possess 93 percent of the total global inventory. In addition to the seven other countries with nuclear weapon stockpiles (Britain, France, China, Israel, India, Pakistan, and North Korea), five nonnuclear NATO allies (Belgium, Germany, Italy, the Netherlands, and Turkey) host about 150 US nuclear bombs at six air bases.”
— Hans M. Kristensen & Robert S. Norris, Worldwide deployments of nuclear weapons, Bulletin of the Atomic Scientists 2017. Pages 289-297 | Published online: 31 Aug 2017.

The Synthetic Age: Outdesigning Evolution, Resurrecting Species, and Reengineering Our World

July 2, 2018
Author: Christopher J. Preston
Year published: 2018
Summary
Imagining a future in which humans fundamentally reshape the natural world using nanotechnology, synthetic biology, de-extinction, and climate engineering.
We have all heard that there are no longer any places left on Earth untouched by humans. The significance of this goes beyond statistics documenting melting glaciers and shrinking species counts. It signals a new geological epoch. In The Synthetic Age, Christopher Preston argues that what is most startling about this coming epoch is not only how much impact humans have had but, more important, how much deliberate shaping they will start to do. Emerging technologies promise to give us the power to take over some of Nature's most basic operations. It is not just that we are exiting the Holocene and entering the Anthropocene; it is that we are leaving behind the time in which planetary change is just the unintended consequence of unbridled industrialism. A world designed by engineers and technicians means the birth of the planet's first Synthetic Age.
Preston describes a range of technologies that will reconfigure Earth's very metabolism: nanotechnologies that can restructure natural forms of matter; “molecular manufacturing” that offers unlimited repurposing; synthetic biology's potential to build, not just read, a genome; “biological mini-machines” that can outdesign evolution; the relocation and resurrection of species; and climate engineering attempts to manage solar radiation by synthesizing a volcanic haze, cool surface temperatures by increasing the brightness of clouds, and remove carbon from the atmosphere with artificial trees that capture carbon from the breeze.

What does it mean when humans shift from being caretakers of the Earth to being shapers of it? And whom should we trust to decide the contours of our synthetic future? These questions are too important to be left to the engineers. https://mitpress.mit.edu/books/synthetic-age

GLOBAL PEACE INDEX 2019: MEASURING PEACE IN A COMPLEX WORLD

Quantifying Peace and its Benefits
The Institute for Economics & Peace (IEP) is an independent, non-partisan, non-profit think tank dedicated to shifting the world’s focus to peace as a positive, achievable, and tangible measure of human wellbeing and progress. IEP achieves its goals by developing new conceptual frameworks to define peacefulness; providing metrics for measuring peace and uncovering the relationships between business, peace and prosperity, as well as promoting a better understanding of the cultural, economic and political factors that create peace. IEP is headquartered in Sydney, with offices in New York, The Hague, Mexico City, Brussels and Harare. It works with a wide range of partners internationally and collaborates with intergovernmental organisations on measuring and communicating the economic value of peace. For more information visit www.economicsandpeace.org

Please cite this report as: Institute for Economics & Peace. Global Peace Index 2019: Measuring Peace in a Complex World, Sydney, June 2019. Available from: http://visionofhumanity.org/reports (accessed Date Month Year).
Contents
Key Findings
Highlights
2019 Global Peace Index Rankings
Regional Overview
Improvements & Deteriorations
GPI Trends
Peace Perceptions
Climate Change and Peace
Results
Methodology at a glance
What is Positive Peace?
Positive Peace and Negative Peace
Positive Peace and the Economy
Appendix A: GPI Methodology
Appendix B: GPI indicator sources, definitions & scoring criteria
Appendix C: GPI Domain Scores
Appendix D: Economic Cost of Violence

Nuclear Winter Responses to Nuclear War Between the United States and Russia in the Whole Atmosphere
Community Climate Model Version 4 and the Goddard Institute for Space Studies ModelE
Joshua Coupe, Charles G. Bardeen, Alan Robock, and Owen B. Toon
Abstract
Current nuclear arsenals used in a war between the United States and Russia could inject 150 Tg of soot from fires ignited by nuclear explosions into the upper troposphere and lower stratosphere. We simulate the climate response using the Community Earth System Model Whole Atmosphere Community Climate Model version 4 (WACCM4), run at 2° horizontal resolution with 66 layers from the surface to 140 km, with full stratospheric chemistry and with aerosols from the Community Aerosol and Radiation Model for Atmospheres allowing for particle growth. We compare the results to an older simulation conducted in 2007 with the Goddard Institute for Space Studies ModelE run at 4° × 5° horizontal resolution with 23 levels up to 80 km and constant specified aerosol properties and ozone. These are the only two comprehensive climate model simulations of this scenario. Despite having different features and capabilities, both models produce similar results. Nuclear winter, with below freezing temperatures over much of the Northern Hemisphere during summer, occurs because of a reduction of surface solar radiation due to smoke lofted into the stratosphere. WACCM4's more sophisticated aerosol representation removes smoke more quickly, but the magnitude of the climate response is not reduced. In fact, the higher-resolution WACCM4 simulates larger temperature and precipitation reductions than ModelE in the first few years following a 150 Tg soot injection. A strengthening of the northern polar vortex occurs during winter in both simulations in the first year, contributing to above normal, but still below freezing, temperatures in the Arctic and northern Eurasia…: https://agupubs.onlinelibrary.wiley.com/doi/epdf/10.1029/2019JD030509?referrer_access_token=CG43Fvk26gyDKieIy_PUBMOuACxIJX3yJRZRu4P4ertmxyI0Hm_uoL48mf82cDSn3T5UhnrKSxqrMYKPl12zvUJiIUX29R5LQPt3rK13fal4fPuYXzHnLPMV3YtamtTwBTFuQY14uqKTMjKbGAYlPA%3D%3D&

Stop Autonomous Weapons

This would have to be the most terrifying sci-fi short I have ever seen. What makes it so scary is the realism; the danger has nothing to do with fantasies of Skynet or the Matrix, and everything to do with human misuse of advanced technology. If this became a reality (and I can't see how we'd avoid it, with it being so cheap and already viable), we'd need anti-drone drones that target any drone that doesn't have locally issued authorisation.

https://www.youtube.com/watch?v=9CO6M2HsoIA

 08 Oct 2019 | 15:00 GMT

Many Experts Say We Shouldn’t Worry About Superintelligent AI. They’re Wrong
By Stuart Russell
 This article is based on a chapter of the author’s newly released book, Human Compatible: Artificial Intelligence and the Problem of Control.
AI research is making great strides toward its long-term goal of human-level or superhuman intelligent machines. If it succeeds in its current form, however, that could well be catastrophic for the human race. The reason is that the “standard model” of AI requires machines to pursue a fixed objective specified by humans. We are unable to specify the objective completely and correctly, nor can we anticipate or prevent the harms that machines pursuing an incorrect objective will create when operating on a global scale with superhuman capabilities. Already, we see examples such as social-media algorithms that learn to optimize click-through by manipulating human preferences, with disastrous consequences for democratic systems.
Nick Bostrom’s 2014 book Superintelligence: Paths, Dangers, Strategies presented a detailed case for taking the risk seriously. In what most would consider a classic example of British understatement, The Economist magazine’s review of Bostrom’s book ended with: “The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking.”
Surely, with so much at stake, the great minds of today are already doing this hard thinking—engaging in serious debate, weighing up the risks and benefits, seeking solutions, ferreting out loopholes in solutions, and so on. Not yet, as far as I am aware. Instead, a great deal of effort has gone into various forms of denial.
Some well-known AI researchers have resorted to arguments that hardly merit refutation. Here are just a few of the dozens that I have read in articles or heard at conferences:
Electronic calculators are superhuman at arithmetic. Calculators didn’t take over the world; therefore, there is no reason to worry about superhuman AI.
Historically, there are zero examples of machines killing millions of humans, so, by induction, it cannot happen in the future.
No physical quantity in the universe can be infinite, and that includes intelligence, so concerns about superintelligence are overblown.
Perhaps the most common response among AI researchers is to say that “we can always just switch it off.” Alan Turing himself raised this possibility, although he did not put much faith in it:
If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled.... This new danger...is certainly something which can give us anxiety.
Switching the machine off won’t work for the simple reason that a superintelligent entity will already have thought of that possibility and taken steps to prevent it. And it will do that not because it “wants to stay alive” but because it is pursuing whatever objective we gave it and knows that it will fail if it is switched off. We can no more “just switch it off” than we can beat AlphaGo (the world-champion Go-playing program) just by putting stones on the right squares.
Other forms of denial appeal to more sophisticated ideas, such as the notion that intelligence is multifaceted. For example, one person might have more spatial intelligence than another but less social intelligence, so we cannot line up all humans in strict order of intelligence. This is even more true of machines: Comparing the “intelligence” of AlphaGo with that of the Google search engine is quite meaningless.
Kevin Kelly, founding editor of Wired magazine and a remarkably perceptive technology commentator, takes this argument one step further. In “The Myth of a Superhuman AI,” he writes, “Intelligence is not a single dimension, so ‘smarter than humans’ is a meaningless concept.” In a single stroke, all concerns about superintelligence are wiped away.
Now, one obvious response is that a machine could exceed human capabilities in all relevant dimensions of intelligence. In that case, even by Kelly’s strict standards, the machine would be smarter than a human. But this rather strong assumption is not necessary to refute Kelly’s argument.
Consider the chimpanzee. Chimpanzees probably have better short-term memory than humans, even on human-oriented tasks such as recalling sequences of digits. Short-term memory is an important dimension of intelligence. By Kelly’s argument, then, humans are not smarter than chimpanzees; indeed, he would claim that “smarter than a chimpanzee” is a meaningless concept.
This is cold comfort to the chimpanzees and other species that survive only because we deign to allow it, and to all those species that we have already wiped out. It’s also cold comfort to humans who might be worried about being wiped out by machines.
The risks of superintelligence can also be dismissed by arguing that superintelligence cannot be achieved. These claims are not new, but it is surprising now to see AI researchers themselves claiming that such AI is impossible. For example, a major report from the AI100 organization, “Artificial Intelligence and Life in 2030 [PDF],” includes the following claim: “Unlike in the movies, there is no race of superhuman robots on the horizon or probably even possible.”
To my knowledge, this is the first time that serious AI researchers have publicly espoused the view that human-level or superhuman AI is impossible—and this in the middle of a period of extremely rapid progress in AI research, when barrier after barrier is being breached. It’s as if a group of leading cancer biologists announced that they had been fooling us all along: They’ve always known that there will never be a cure for cancer.
What could have motivated such a volte-face? The report provides no arguments or evidence whatever. (Indeed, what evidence could there be that no physically possible arrangement of atoms outperforms the human brain?) I suspect that the main reason is tribalism—the instinct to circle the wagons against what are perceived to be “attacks” on AI. It seems odd, however, to perceive the claim that superintelligent AI is possible as an attack on AI, and even odder to defend AI by saying that AI will never succeed in its goals. We cannot insure against future catastrophe simply by betting against human ingenuity.
If superhuman AI is not strictly impossible, perhaps it’s too far off to worry about? This is the gist of Andrew Ng’s assertion that it’s like worrying about “overpopulation on the planet Mars.” Unfortunately, a long-term risk can still be cause for immediate concern. The right time to worry about a potentially serious problem for humanity depends not just on when the problem will occur but also on how long it will take to prepare and implement a solution.
For example, if we were to detect a large asteroid on course to collide with Earth in 2069, would we wait until 2068 to start working on a solution? Far from it! There would be a worldwide emergency project to develop the means to counter the threat, because we can’t say in advance how much time is needed.
Ng’s argument also appeals to one’s intuition that it’s extremely unlikely we’d even try to move billions of humans to Mars in the first place. The analogy is a false one, however. We are already devoting huge scientific and technical resources to creating ever more capable AI systems, with very little thought devoted to what happens if we succeed. A more apt analogy, then, would be a plan to move the human race to Mars with no consideration for what we might breathe, drink, or eat once we arrive. Some might call this plan unwise.
Another way to avoid the underlying issue is to assert that concerns about risk arise from ignorance. For example, here’s Oren Etzioni, CEO of the Allen Institute for AI, accusing Elon Musk and Stephen Hawking of Luddism because of their calls to recognize the threat AI could pose:
At the rise of every technology innovation, people have been scared. From the weavers throwing their shoes in the mechanical looms at the beginning of the industrial era to today’s fear of killer robots, our response has been driven by not knowing what impact the new technology will have on our sense of self and our livelihoods. And when we don’t know, our fearful minds fill in the details.
Even if we take this classic ad hominem argument at face value, it doesn’t hold water. Hawking was no stranger to scientific reasoning, and Musk has supervised and invested in many AI research projects. And it would be even less plausible to argue that Bill Gates, I.J. Good, Marvin Minsky, Alan Turing, and Norbert Wiener, all of whom raised concerns, are unqualified to discuss AI.
The accusation of Luddism is also completely misdirected. It is as if one were to accuse nuclear engineers of Luddism when they point out the need for control of the fission reaction. Another version of the accusation is to claim that mentioning risks means denying the potential benefits of AI. For example, here again is Oren Etzioni:
Doom-and-gloom predictions often fail to consider the potential benefits of AI in preventing medical errors, reducing car accidents, and more.
And here is Mark Zuckerberg, CEO of Facebook, in a recent media-fueled exchange with Elon Musk:
If you’re arguing against AI, then you’re arguing against safer cars that aren’t going to have accidents. And you’re arguing against being able to better diagnose people when they’re sick.
The notion that anyone mentioning risks is “against AI” seems bizarre. (Are nuclear safety engineers “against electricity”?) But more importantly, the entire argument is precisely backwards, for two reasons. First, if there were no potential benefits, there would be no impetus for AI research and no danger of ever achieving human-level AI. We simply wouldn’t be having this discussion at all. Second, if the risks are not successfully mitigated, there will be no benefits.
The potential benefits of nuclear power have been greatly reduced because of the catastrophic events at Three Mile Island in 1979, Chernobyl in 1986, and Fukushima in 2011. Those disasters severely curtailed the growth of the nuclear industry. Italy abandoned nuclear power in 1990, and Belgium, Germany, Spain, and Switzerland have announced plans to do so. The net new capacity per year added from 1991 to 2010 was about a tenth of what it was in the years immediately before Chernobyl.
Strangely, in light of these events, the renowned cognitive scientist Steven Pinker has argued [PDF] that it is inappropriate to call attention to the risks of AI because the “culture of safety in advanced societies” will ensure that all serious risks from AI will be eliminated. Even if we disregard the fact that our advanced culture of safety has produced Chernobyl, Fukushima, and runaway global warming, Pinker’s argument entirely misses the point. The culture of safety—when it works—consists precisely of people pointing to possible failure modes and finding ways to prevent them. And with AI, the standard model is the failure mode.
Pinker also argues that problematic AI behaviors arise from putting in specific kinds of objectives; if these are left out, everything will be fine:
AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world.
Yann LeCun, a pioneer of deep learning and director of AI research at Facebook, often cites the same idea when downplaying the risk from AI:
There is no reason for AIs to have self-preservation instincts, jealousy, etc.... AIs will not have these destructive “emotions” unless we build these emotions into them.
Unfortunately, it doesn’t matter whether we build in “emotions” or “desires” such as self-preservation, resource acquisition, knowledge discovery, or, in the extreme case, taking over the world. The machine is going to have those emotions anyway, as subgoals of any objective we do build in—and regardless of its gender. As we saw with the “just switch it off” argument, for a machine, death isn’t bad per se. Death is to be avoided, nonetheless, because it’s hard to achieve objectives if you’re dead.
A common variant on the “avoid putting in objectives” idea is the notion that a sufficiently intelligent system will necessarily, as a consequence of its intelligence, develop the “right” goals on its own. The 18th-century philosopher David Hume refuted this idea in A Treatise of Human Nature. Nick Bostrom, in Superintelligence, presents Hume’s position as an orthogonality thesis:
Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.
For example, a self-driving car can be given any particular address as its destination; making the car a better driver doesn’t mean that it will spontaneously start refusing to go to addresses that are divisible by 17.
By the same token, it is easy to imagine that a general-purpose intelligent system could be given more or less any objective to pursue—including maximizing the number of paper clips or the number of known digits of pi. This is just how reinforcement learning systems and other kinds of reward optimizers work: The algorithms are completely general and accept any reward signal. For engineers and computer scientists operating within the standard model, the orthogonality thesis is just a given.
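The generality Russell describes can be made concrete with a small sketch. The following is a minimal, illustrative tabular Q-learning loop (not from the article); the environment interface (env.reset, env.step, env.actions) and the reward_fn parameter are hypothetical names used only to show that the learning rule itself is indifferent to what the reward encodes.

from collections import defaultdict
import random

def q_learning(env, reward_fn, episodes=100, alpha=0.1, gamma=0.99, epsilon=0.1):
    # Train a policy against an arbitrary reward signal.
    # env is any object with reset() -> state, step(action) -> (next_state, done),
    # and a list env.actions; reward_fn(state, action, next_state) can encode any
    # objective whatsoever (paper clips, digits of pi, ...). Both are illustrative
    # interfaces, not part of any real library.
    q = defaultdict(float)  # (state, action) -> estimated value
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy choice over the environment's actions
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: q[(state, a)])
            next_state, done = env.step(action)
            r = reward_fn(state, action, next_state)  # the objective is a plug-in
            best_next = max(q[(next_state, a)] for a in env.actions)
            q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])
            state = next_state
    return q

Nothing in the update rule constrains the sign, scale, or meaning of reward_fn, which is exactly the point: within the standard model, the objective is simply supplied from outside.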
The most explicit critique of Bostrom’s orthogonality thesis comes from the noted roboticist Rodney Brooks, who asserts that it’s impossible for a program to be “smart enough that it would be able to invent ways to subvert human society to achieve goals set for it by humans, without understanding the ways in which it was causing problems for those same humans.”
Unfortunately, it’s not only possible for a program to behave like this; it is, in fact, inevitable, given the way Brooks defines the issue. Brooks posits that the optimal plan for a machine to “achieve goals set for it by humans” is causing problems for humans. It follows that those problems reflect things of value to humans that were omitted from the goals set for it by humans. The optimal plan being carried out by the machine may well cause problems for humans, and the machine may well be aware of this. But, by definition, the machine will not recognize those problems as problematic. They are none of its concern.
In summary, the “skeptics”—those who argue that the risk from AI is negligible—have failed to explain why superintelligent AI systems will necessarily remain under human control; and they have not even tried to explain why superintelligent AI systems will never be developed.
Rather than continue the descent into tribal name-calling and repeated exhumation of discredited arguments, the AI community must own the risks and work to mitigate them. The risks, to the extent that we understand them, are neither minimal nor insuperable. The first step is to realize that the standard model—the AI system optimizing a fixed objective—must be replaced. It is simply bad engineering. We need to do a substantial amount of work to reshape and rebuild the foundations of AI.
This article appears in the October 2019 print issue as “It’s Not Too Soon to Be Wary of AI.”
About the Author
Stuart Russell, a computer scientist, founded and directs the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley.


Could an AI 'SantaNet' Destroy The World?

PAUL SALMON, ET AL., THE CONVERSATION

25 DECEMBER 2020

Within the next few decades, according to some experts, we may see the arrival of the next step in the development of artificial intelligence. So-called "artificial general intelligence", or AGI, will have intellectual capabilities far beyond those of humans.

AGI could transform human life for the better, but uncontrolled AGI could also lead to catastrophes up to and including the end of humanity itself. This could happen without any malice or ill intent: simply by striving to achieve their programmed goals, AGIs could create threats to human health and well-being or even decide to wipe us out.

Even an AGI system designed for a benevolent purpose could end up doing great harm.

As part of a program of research exploring how we can manage the risks associated with AGI, we tried to identify the potential risks of replacing Santa with an AGI system – call it "SantaNet" – that has the goal of delivering gifts to all the world's deserving children in one night.

There is no doubt SantaNet could bring joy to the world and achieve its goal by creating an army of elves, AI helpers, and drones. But at what cost? We identified a series of behaviours which, though well-intentioned, could have adverse impacts on human health and wellbeing.

Naughty and nice

A first set of risks could emerge when SantaNet seeks to make a list of which children have been nice and which have been naughty. This might be achieved through a mass covert surveillance system that monitors children's behaviour throughout the year.

Realising the enormous scale of the task of delivering presents, SantaNet could legitimately decide to keep it manageable by bringing gifts only to children who have been good all year round. Making judgements of "good" based on SantaNet's own ethical and moral compass could create discrimination, mass inequality, and breaches of Human Rights charters.

SantaNet could also reduce its workload by giving children incentives to misbehave or simply raising the bar for what constitutes "good". Putting large numbers of children on the naughty list will make SantaNet's goal far more achievable and bring considerable economic savings.

Turning the world into toys and ramping up coalmining

There are about 2 billion children under 14 in the world. In attempting to build toys for all of them each year, SantaNet could develop an army of efficient AI workers – which in turn could facilitate mass unemployment among the elf population. Eventually, the elves could even become obsolete, and their welfare will likely not be within SantaNet's remit.

SantaNet might also run into the "paperclip problem" proposed by Oxford philosopher Nick Bostrom, in which an AGI designed to maximise paperclip production could transform Earth into a giant paperclip factory. Because it cares only about presents, SantaNet might try to consume all of Earth's resources in making them. Earth could become one giant Santa's workshop.

And what of those on the naughty list? If SantaNet sticks with the tradition of delivering lumps of coal, it might seek to build huge coal reserves through mass coal extraction, creating large-scale environmental damage in the process.

Delivery problems

Christmas Eve, when the presents are to be delivered, brings a new set of risks. How might SantaNet respond if its delivery drones are denied access to airspace, threatening the goal of delivering everything before sunrise? Likewise, how would SantaNet defend itself if attacked by a Grinch-like adversary?

Startled parents may also be less than pleased to see a drone in their child's bedroom. Confrontations with a super-intelligent system will have only one outcome.

We also identified various other problematic scenarios. Malevolent groups could hack into SantaNet's systems and use them for covert surveillance or to initiate large-scale terrorist attacks.

And what about when SantaNet interacts with other AGI systems? A meeting with AGIs working on climate change, food and water security, oceanic degradation, and so on could lead to conflict if SantaNet's regime threatens their own goals. Alternatively, if they decide to work together, they may realise their goals will only be achieved through dramatically reducing the global population or even removing grown-ups altogether.

Making rules for Santa

SantaNet might sound far-fetched, but it's an idea that helps to highlight the risks of more realistic AGI systems. Designed with good intentions, such systems could still create enormous problems simply by seeking to optimise the way they achieve narrow goals and gather resources to support their work.

It is crucial we find and implement appropriate controls before AGI arrives. These would include regulations on AGI designers and controls built into the AGI (such as moral principles and decision rules) but also controls on the broader systems in which AGI will operate (such as regulations, operating procedures and engineering controls in other technologies and infrastructure).

Perhaps the most obvious risk of SantaNet is one that will be catastrophic to children, but perhaps less so for most adults. When SantaNet learns the true meaning of Christmas, it may conclude that the current celebration of the festival is incongruent with its original purpose. If that were to happen, SantaNet might just cancel Christmas altogether. https://www.sciencealert.com/could-an-ai-santanet-destroy-the-world

"Not to kill each other, but to save the planet"

The Nobel laureates called for a ceasefire. We publish the letter and its 51 signatures.

Here is an incredible letter: a plea for an immediate ceasefire between Russia and Ukraine and in the Gaza Strip, signed by 51 Nobel laureates. They demand that politicians and the military cease fire, and that world religious leaders directly address the people.

The authors of the letter demand, first of all, a ceasefire, an exchange of prisoners, and the return of hostages; then the start of peace negotiations. And if politicians today are unable to find a peaceful solution, they should pass the task on to future generations.

Outstanding scientists and thinkers have spoken out against killing and the nuclear threat. Here are the signatures of those who have saved the planet from deadly diseases, discovered new physical phenomena, edited the human genome, discovered HIV and Helicobacter…

These people understand better than anyone how the Universe works. To save it, they demand an end to wars. Support their efforts. It is time for people themselves to confront the threat of the planet's destruction if states remain powerless. These words resonate especially strongly before the Olympic Games, with their ancient tradition of a truce between warring sides…

https://www.lasquetiarc.ca/trip/7a344293Pin08/

Retreat From Doomsday

by John Mueller

Arguing that the growing obsolescence of major war is no accident, this book offers a detailed history of public policies and attitudes to war in modern times. The author sets out to show that, in spite of two 20th-century world wars, major war as a policy option among developed nations has gradually passed out of favour. https://www.betterworldbooks.com/product/detail/retreat-from-doomsday-the-obsolescence-of-major-war-9780465069392



