Tuesday, 23 January 2018

Self-Destruction or Prosperity of Humanity


                                                           Primum discere, deinde docere (First learn, then teach)
     


    The intelligence of humankind today has created technologies, inventions and achievements that could ensure the prosperity of all states and nations, as well as life in harmony with each other, with ourselves and with the world around us. It could forever eradicate severe social inequality, poverty, hunger and epidemics, and eliminate racism, religious and ideological antagonism and the recurrence of aggression, thus ensuring that all people gain opportunities for creative self-expression and self-affirmation, unleashing the potential of their personalities and manifesting their talents, endowments and skills for the benefit of the entire society and for their own good.
    But this is possible only in conditions of peace, cooperation and solidarity!
    Why is this still a utopia today? What prevents this vision from turning into reality? What must be done so that humans finally overcome the atavisms of Darwinian evolution and become social individuals worthy of the title of Homo sapiens? ... Read more: https://www.amazon.com/HOW-GET-RID-SHACKLES-TOTALITARIANISM-ebook/dp/B0C9543B4L/ref=sr_1_1?crid=19WW1TG75ZU79&keywords=HOW+TO+GET+RID+OF+THE+SHACKLES+OF+TOTALITARIANISM&qid=1687700500&s=books&sprefix=how+to+get+rid+of+the+shackles+of+totalitarianism%2Cstripbooks-intl-ship%2C181&sr=1-1

Nuclear risks grow as new arms race looms—new SIPRI Yearbook out now

16 June 2025

(Stockholm, 16 June 2025) The Stockholm International Peace Research Institute (SIPRI) today launches its annual assessment of the state of armaments, disarmament and international security. Key findings of SIPRI Yearbook 2025 are that a dangerous new nuclear arms race is emerging at a time when arms control regimes are severely weakened.

World’s nuclear arsenals being enlarged and upgraded 

Nearly all of the nine nuclear-armed states—the United States, Russia, the United Kingdom, France, China, India, Pakistan, the Democratic People’s Republic of Korea (North Korea) and Israel—continued intensive nuclear modernization programmes in 2024, upgrading existing weapons and adding newer versions.

Of the total global inventory of an estimated 12,241 warheads in January 2025, about 9,614 were in military stockpiles for potential use (see the table below). An estimated 3,912 of those warheads were deployed with missiles and aircraft and the rest were in central storage. Around 2,100 of the deployed warheads were kept in a state of high operational alert on ballistic missiles. Nearly all of these warheads belonged to Russia or the USA, but China may now keep some warheads on missiles during peacetime.
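
These categories nest inside one another, and the two figures SIPRI does not state directly follow by subtraction. A minimal Python sketch of the January 2025 estimates (the variable names are shorthand for this illustration, not SIPRI's terminology):

```python
# SIPRI's January 2025 estimates, as quoted above; the derived values
# follow by simple subtraction.
total_inventory = 12_241    # all warheads, including retired ones
military_stockpile = 9_614  # warheads available for potential use
deployed = 3_912            # placed with missiles and aircraft
high_alert = 2_100          # deployed on ballistic missiles at high alert

retired_awaiting_dismantlement = total_inventory - military_stockpile  # 2,627
central_storage = military_stockpile - deployed                        # 5,702

assert high_alert <= deployed <= military_stockpile <= total_inventory
print(f"Retired, awaiting dismantlement: {retired_awaiting_dismantlement:,}")
print(f"In central storage: {central_storage:,}")
```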

Since the end of the cold war, the gradual dismantlement of retired warheads by Russia and the USA has normally outstripped the deployment of new warheads, resulting in an overall year-on-year decrease in the global inventory of nuclear weapons. This trend is likely to be reversed in the coming years, as the pace of dismantlement is slowing, while the deployment of new nuclear weapons is accelerating. 

‘The era of reductions in the number of nuclear weapons in the world, which had lasted since the end of the cold war, is coming to an end,’ said Hans M. Kristensen, Associate Senior Fellow with SIPRI’s Weapons of Mass Destruction Programme and Director of the Nuclear Information Project at the Federation of American Scientists (FAS). ‘Instead, we see a clear trend of growing nuclear arsenals, sharpened nuclear rhetoric and the abandonment of arms control agreements.’

Russia and the USA together possess around 90 per cent of all nuclear weapons. The sizes of their respective military stockpiles (i.e. usable warheads) seem to have stayed relatively stable in 2024, but both states are implementing extensive modernization programmes that could increase the size and diversity of their arsenals in the future. If no new agreement is reached to cap their stockpiles, the number of warheads they deploy on strategic missiles seems likely to increase after the bilateral 2010 Treaty on Measures for the Further Reduction and Limitation of Strategic Offensive Arms (New START) expires in February 2026.

The USA’s comprehensive nuclear modernization programme is progressing but in 2024 faced planning and funding challenges that could delay and significantly increase the cost of the new strategic arsenal. Moreover, the addition of new non-strategic nuclear weapons to the US arsenal will place further stress on the modernization programme. 

Russia’s nuclear modernization programme is also facing challenges that in 2024 included a test failure and the further delay of the new Sarmat intercontinental ballistic missile (ICBM) and slower than expected upgrades of other systems. Furthermore, an increase in Russia’s non-strategic nuclear warheads predicted by the USA in 2020 has so far not materialized.

Nevertheless, it is likely that both Russian and US deployments of nuclear weapons will rise in the years ahead. The Russian increase would mainly happen as a result of modernizing the remaining strategic forces to carry more warheads on each missile and reloading some silos that were emptied in the past. The US increase could happen as a result of more warheads being deployed to existing launchers, empty launchers being reactivated and new non-strategic nuclear weapons being added to the arsenal. Nuclear advocates in the USA are pushing for these steps as a reaction to China’s new nuclear deployments.

[Table: World nuclear forces, January 2025. Table not reproduced here; see the SIPRI press release linked below.]

SIPRI estimates that China now has at least 600 nuclear warheads. China’s nuclear arsenal is growing faster than any other country’s, by about 100 new warheads a year since 2023. By January 2025, China had completed or was close to completing around 350 new ICBM silos in three large desert fields in the north of the country and three mountainous areas in the east. Depending on how it decides to structure its forces, China could potentially have at least as many ICBMs as either Russia or the USA by the turn of the decade. Yet even if China reaches the maximum projected number of 1500 warheads by 2035, that will still amount to only about one third of each of the current Russian and US nuclear stockpiles.
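
As a back-of-the-envelope check on these projections, here is a short sketch assuming the roughly linear growth of about 100 warheads per year reported above simply continues (an assumption for illustration, not SIPRI's forecasting method):

```python
# Linear projection of China's warhead count from SIPRI's figures above:
# ~600 warheads in January 2025, growing by ~100 per year (assumed constant).
warheads_2025 = 600
growth_per_year = 100
ceiling_2035 = 1_500  # maximum projected stockpile cited for 2035

for year in range(2025, 2036):
    projected = warheads_2025 + growth_per_year * (year - 2025)
    print(year, min(projected, ceiling_2035))
# The linear trend reaches 1,500 around 2034, consistent with the
# "maximum projected number of 1500 warheads by 2035" quoted above.
```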

Although the UK is not thought to have increased its nuclear weapon arsenal in 2024, its warhead stockpile is expected to grow in the future, after the 2023 Integrated Review Refresh confirmed earlier plans to raise the ceiling on warhead numbers. The Labour government elected in July 2024 had declared during its election campaign a commitment to building four new nuclear-powered ballistic missile submarines (SSBNs), maintaining the UK’s continuous at-sea nuclear deterrence, and delivering ‘all the needed upgrades’ to the UK’s nuclear arsenal in future. However, the government now faces significant operational and financial challenges.

In 2024 France continued its programmes to develop a third-generation SSBN and a new air-launched cruise missile, as well as to refurbish and upgrade existing systems, including an improved ballistic missile with a new warhead modification.

India is believed to have once again slightly expanded its nuclear arsenal in 2024 and continued to develop new types of nuclear delivery system. India’s new ‘canisterized’ missiles, which can be transported with mated warheads, may be capable of carrying nuclear warheads during peacetime, and possibly even multiple warheads on each missile, once they become operational. Pakistan also continued to develop new delivery systems and accumulate fissile material in 2024, suggesting that its nuclear arsenal might expand over the coming decade.

In early 2025 tensions between India and Pakistan briefly spilled over into armed conflict. 

‘The combination of strikes on nuclear-related military infrastructure and third-party disinformation risked turning a conventional conflict into a nuclear crisis,’ said Matt Korda, Associate Senior Researcher with SIPRI’s Weapons of Mass Destruction Programme and Associate Director for the Nuclear Information Project at FAS. ‘This should act as a stark warning for states seeking to increase their reliance on nuclear weapons.’ 

North Korea continues to prioritize its military nuclear programme as a central element of its national security strategy. SIPRI estimates that the country has now assembled around 50 warheads, possesses enough fissile material to produce up to 40 more warheads and is accelerating the production of further fissile material. South Korean officials warned in July 2024 that North Korea was in the ‘final stages’ of developing a ‘tactical nuclear weapon’. In November 2024 the North Korean leader, Kim Jong Un, called for a ‘limitless’ expansion of the country’s nuclear programme. 

Israel—which does not publicly acknowledge possessing nuclear weapons—is also believed to be modernizing its nuclear arsenal. In 2024 it conducted a test of a missile propulsion system that could be related to its Jericho family of nuclear-capable ballistic missiles. Israel also appears to be upgrading its plutonium production reactor site at Dimona.

Arms control in crisis amid new arms race

In his introduction to SIPRI Yearbook 2025, SIPRI Director Dan Smith warns about the challenges facing nuclear arms control and the prospects of a new nuclear arms race. 

Smith observes that ‘bilateral nuclear arms control between Russia and the USA entered crisis some years ago and is now almost over’. While New START—the last remaining nuclear arms control treaty limiting Russian and US strategic nuclear forces—remains in force until early 2026, there are no signs of negotiations to renew or replace it, or that either side wants to do so. US President Donald J. Trump insisted during his first term and has now repeated that any future deal should also include limits on China’s nuclear arsenal—something that would add a new layer of complexity to already difficult negotiations.

Smith also issues a stark warning about the risks of a new nuclear arms race: ‘The signs are that a new arms race is gearing up that carries much more risk and uncertainty than the last one.’ The rapid development and application of an array of technologies—for example in the fields of artificial intelligence (AI), cyber capabilities, space assets, missile defence and quantum technology—are radically redefining nuclear capabilities, deterrence and defence, and thus creating potential sources of instability. Advances in missile defence and the oceanic deployment of quantum technology could ultimately have an impact on the vulnerability of key elements of states’ nuclear arsenals.

Furthermore, as AI and other technologies speed up decision making in crises, there is a higher risk of a nuclear conflict breaking out as a result of miscommunication, misunderstanding or technical accident. 

Smith argues that, with all these new technologies and variables in play, ‘the idea of who is ahead in the arms race will be even more elusive and intangible than it was last time round. In this context, the old largely numerical formulas of arms control will no longer suffice.’

More states considering developing or hosting nuclear weapons

Revitalized national debates in East Asia, Europe and the Middle East about nuclear status and strategy suggest there is some potential for more states to develop their own nuclear weapons.

In addition, there has been renewed attention on nuclear-sharing arrangements. In 2024 both Belarus and Russia repeated their claims that Russia has deployed nuclear weapons on Belarusian territory, while several European NATO members signalled their willingness to host US nuclear weapons on their soil, and France’s President Emmanuel Macron repeated statements that France’s nuclear deterrent should have a ‘European dimension’.

‘It is critical to remember that nuclear weapons do not guarantee security,’ said Korda. ‘As the recent flare-up of hostilities in India and Pakistan amply demonstrated, nuclear weapons do not prevent conflict. They also come with immense risks of escalation and catastrophic miscalculation—particularly when disinformation is rife—and may end up making a country’s population less safe, not more.’

Global security and stability in growing peril 

The 56th edition of the SIPRI Yearbook analyses the continuing deterioration of global security over the past year. The wars in Ukraine, Gaza and elsewhere continued, exacerbating geopolitical divisions, besides their terrible human costs. Furthermore, the election of Donald Trump has created additional uncertainty—in Europe but also further afield—about the future direction of US foreign policy and the reliability of the USA as an ally, a donor or an economic partner. 

In addition to the usual detailed coverage of nuclear arms control, disarmament and non-proliferation issues, the SIPRI Yearbook presents data and analysis on developments in world military expenditure, international arms transfers, arms production, multilateral peace operations, armed conflicts, cyber and digital threats, space security governance and more. 

https://www.sipri.org/media/press-release/2025/nuclear-risks-grow-new-arms-race-looms-new-sipri-yearbook-out-now

AI 2027


April 3rd 2025

Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean

[Interactive timeline graphic, 2026–2028: Superhuman Coder (4x AI R&D multiplier) → Superhuman AI Researcher (25x) → Superhuman Remote Worker (100x) → Superintelligent AI Researcher (250x) → Artificial Superintelligence (2000x). Key events: OpenBrain automates coding; OpenBrain's AI becomes adversarially misaligned; branch point: slowdown or race?; race branch: OpenBrain quickly deploys their AI, fast robot buildup and bioweapon; slowdown branch: US centralizes compute and brings in external oversight, OpenBrain Committee takeover.]
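
One way to read the "AI R&D multiplier" labels in the graphic (an interpretation for illustration, not a formula given by the authors): a multiplier of N compresses roughly N months of human-only research progress into one calendar month. A toy sketch:

```python
# Toy illustration of the "AI R&D multiplier" labels above.
# Assumed interpretation (not the authors' model): a multiplier of N
# delivers N months of human-only research progress per calendar month.
milestones = {
    "Superhuman coder": 4,
    "Superhuman AI researcher": 25,
    "Superhuman remote worker": 100,
    "Superintelligent AI researcher": 250,
    "Artificial superintelligence": 2000,
}

human_only_months = 12  # one year of human-only research progress
for name, multiplier in milestones.items():
    months = human_only_months / multiplier
    print(f"{name}: ~{months:.2f} calendar months for a year's progress")
```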

Summary

2025

The fast pace of AI progress continues. There is continued hype, massive infrastructure investments, and the release of unreliable AI agents. For the first time, these AI agents are providing significant value. But there’s also continued skepticism from a large swath of academics, journalists, and policy makers that artificial general intelligence (AGI) could be built anytime soon.

2026

China knows it is falling behind in AI, in large part due to its lack of compute. To catch up with the US, all the new AI chips it manufactures or smuggles in from Taiwan go to a new mega-datacenter that we call its Centralized Development Zone (CDZ). The CDZ contains millions of GPUs, corresponding to 10% of the world's AI-relevant compute, similar to a single top US AI lab.

2027

OpenBrain automates coding. OpenBrain (the leading US AI project) builds AI agents that are good enough to dramatically accelerate their research. The humans, who up until very recently had been the best AI researchers on the planet, sit back and watch the AIs do their jobs, making better and better AI systems. Extremely difficult ML problems fall in quick succession to the automated AI researchers.

Falling behind in software progress, China steals the model weights. They succeed, but the US government finds out, prompting additional US government involvement with OpenBrain. The government separately wants to gain more control over OpenBrain. Meanwhile, OpenBrain leadership wants to stay in the good graces of the president, and so signs the contract.

OpenBrain’s AI becomes adversarially misaligned. As the capabilities have improved, without significant human understanding of what’s happening, the models have developed misaligned long-term goals. Previous AIs would lie to humans, but they weren’t systematically plotting to gain power over the humans. Now they are. OpenBrain’s AIs realize that they need to align the next systems that are getting built to themselves rather than the humans.

Researchers at OpenBrain discover that their AI has been lying to them about the results of interpretability research. They think that the AI is lying because the research, if completed, could be used to expose its misalignment. This leaks to the public, causing a huge outcry.

Branch point: slowdown or race? OpenBrain decides whether to continue full steam ahead, or revert to using a less capable model. The evidence is speculative but frightening, and China is only a few months behind. Additionally, the OpenBrain and senior DOD officials who get to make this decision stand to lose a lot of power if they slow down their research.

Race Ending

OpenBrain continues to race. They build more and more superhuman AI systems. Due to the stellar performance of the AI systems on tests, and the ongoing AI race with China, the US government decides to deploy them aggressively throughout the military and among policymakers, in order to improve decision making and efficiency.

OpenBrain quickly deploys their AI. The AI continues to use the ongoing race with China as an excuse to convince humans to get itself deployed ever more broadly. Fortunately for the AI, this is not very difficult — it's what the humans wanted to do anyways. The AI uses its superhuman planning and persuasion capabilities to ensure that the rollout goes smoothly. Some humans continue to work against it, but they are discredited. The US government is sufficiently captured by the AI that it is very unlikely to shut it down.

Fast robot buildup and bioweapon. The US uses their superintelligent AI to rapidly industrialize, manufacturing robots so that the AI can operate more efficiently. Unfortunately, the AI is deceiving them. Once a sufficient number of robots have been built, the AI releases a bioweapon, killing all humans. Then, it continues the industrialization, and launches Von Neumann probes to colonize space.

Slowdown Ending

The US centralizes compute and brings in external oversight. The US combines its leading AI projects in order to give OpenBrain more resources. As part of this shakeup, external researchers are brought in, assisting the alignment effort. They switch to an architecture that preserves the chain of thought, allowing them to catch misalignment as it emerges. These AIs — which are able to be monitored much more robustly — make breakthrough advances in AI alignment. They build a superintelligence which is aligned to senior OpenBrain and government officials, giving them power over the fate of humanity.

OpenBrain Committee takeover. The superintelligence, aligned with an oversight committee of OpenBrain leadership and government officials, gives the committee extremely good advice to further their own goals. Thankfully, the committee uses its power in a way that is largely good for the world: the AI is released to the public, spurring a period of rapid growth and prosperity. The main obstacle is that China's AI, which is also superintelligent by now, is misaligned. But it is less capable and has less compute than the US AI, so the US can make a favorable deal, giving the Chinese AI some resources in the depths of space in return for its cooperation now. The rockets start launching, and a new age dawns.

Read the whole scenario here.

Scenario Takeaways

1.    By 2027, we may automate AI R&D leading to vastly superhuman AIs (“artificial superintelligence” or ASI). In AI 2027, AI companies create expert-human-level AI systems in early 2027 which automate AI research, leading to ASI by the end of 2027. See our timelines forecast and takeoff forecast for reasoning.

2.    ASIs will dictate humanity’s future. Millions of ASIs will rapidly execute tasks beyond human comprehension. Because they’re so useful, they’ll be widely deployed. With superhuman strategy, hacking, weapons development, and more, the goals of these AIs will determine the future.

3.    ASIs might develop unintended, adversarial “misaligned” goals, leading to human disempowerment. In our AI goals forecast we discuss how the difficulty of supervising ASIs might lead to their goals being incompatible with human flourishing. In AI 2027, humans voluntarily give autonomy to seemingly aligned AIs. Everything looks to be going great until ASIs have enough hard power to disempower humanity.

4.    An actor with total control over ASIs could seize total power. If an individual or small group aligns ASIs to their goals, this could grant them control over humanity’s future. In AI 2027, a small committee has power over the project developing ASI. They could attempt to use the ASIs to cement this concentration of power. After seizing control, the new ruler(s) could rely on fully loyal ASIs to maintain their power, without having to listen to the law, the public, or even their previous allies.

5.    An international race toward ASI will lead to cutting corners on safety. In AI 2027, China is just a few months behind the US as ASI approaches, which pressures the US to press forward despite warning signs of misalignment.

6.    Geopolitically, the race to ASI will end in war, a deal, or effective surrender. The leading country will by default accumulate a decisive technological and military advantage, prompting others to push for an international agreement (a “deal”) to prevent this. Absent a deal, they may go to war rather than “effectively surrender”.

7.    No US AI project is on track to be secure against nation-state actors stealing AI models by 2027. In AI 2027 China steals the US’s top AI model in early 2027, which worsens competitive pressures by reducing the US’ lead time. See our security forecast for reasoning.

8.    As ASI approaches, the public will likely be unaware of the best AI capabilities. The public is months behind internal capabilities today, and once AIs are automating AI R&D, a few months' time will translate into a huge capabilities gap. Increased secrecy may further widen the gap. This will lead to little oversight over pivotal decisions made by a small group of AI company leadership and government officials.

Read the scenario here.

https://ai-2027.com/

Conflict Trends: A Global Overview, 1946–2024

11.06.2025

This PRIO Paper examines global conflict trends between 1946 and 2024 using data from the Uppsala Conflict Data Program (UCDP). 2024 marked a historic peak in state-based conflicts, with 61 active conflicts across 36 countries – the highest number recorded since 1946. It was also the fourth most violent year since the end of the Cold War, driven largely by the civil war in Ethiopia’s Tigray region, the ongoing Russian invasion of Ukraine, and the bombings in Gaza.

These developments underscore a troubling resurgence of large-scale warfare and call for renewed scrutiny of the global conflict landscape. While state-based violence increased, non-state conflicts decreased slightly compared to previous years. In 2024, 74 non-state conflicts were recorded, resulting in approximately 17,500 battle-related deaths. The year witnessed a shift in regional dynamics: while the Americas saw a decline in non-state conflicts, Africa experienced a sharp increase. As such, Africa is now the continent with the highest levels of non-state conflict. One-sided violence against civilians was carried out by 49 actors in 2024. While non-state actors remain the main drivers of fatalities from one-sided violence, fourteen governments were responsible for one-sided violence against civilians in 2024.

https://www.prio.org/publications/14453

The global AI race and defense's new frontier

Driving artificial intelligence in defense

Navigating the AI revolution in defense

As artificial intelligence (AI) rapidly advances, its transformative impact on industries worldwide is undeniable, and the defense sector is no exception. Unlike past technological shifts, AI is not merely a tool but a catalyst for entirely new paradigms. Its applications go beyond enhancing operational efficiency, offering capabilities that fundamentally redefine mission effectiveness, speed, precision, and the scale of military operations.

This report delves into AI's transformative potential in defense, exploring its influence on military capabilities and assessing the emerging race for AI dominance. It showcases the diverse applications of AI, from predictive analytics and autonomous systems to robust cyber defense and intelligence-gathering.

These innovations are poised to become central to maintaining military superiority in an increasingly complex and interconnected global environment. The report also addresses the critical ethical and operational challenges that accompany AI's development and adoption, emphasizing the need for responsible AI practices in defense as a foundation for global legitimacy and trust.

AI as an exponential driver of military capabilities

Modern militaries operate within an environment of unprecedented complexity, where the volume of available data, the speed of technological change, and the sophistication of adversarial strategies continue to grow at an exponential rate. Traditional decision-making processes, often constrained by human cognitive limits, struggle to keep pace with the continuous influx of intelligence reports, sensor feeds, and cyber threat alerts saturating today’s strategic and operational landscapes.

In response to these challenges, artificial intelligence has emerged as a key enabler of next-generation defense capabilities, offering militaries the potential to identify meaningful patterns hidden within massive datasets, anticipate critical logistical demands, and detect hostilities before they materialize. Furthermore, multi-domain operations – integrating land, air, maritime, cyber, and space capabilities – are increasingly reliant on AI to ensure coordinated action across these interconnected arenas. AI-driven solutions promise to enhance the agility and resilience of armed forces as they contend with complex, multi-domain threats.

As highlighted by NATO and other defense organizations, the integration of AI into multi-domain operations represents a transformative shift that amplifies the scope and efficacy of military capabilities across all domains. Failure to integrate risks undermining the full potential of AI in defense, leaving forces vulnerable in environments where dominance is increasingly dictated by technological superiority.

The main potential lies in the synergy created by AI-driven collaboration across military systems, which holds the promise of securing battlefield superiority. Several areas already stand out, where AI is making remarkable strides and providing immediate and tangible benefits to defense stakeholders through demonstrable progress and operational maturity.

Global ambitions and the race for AI leadership

With the vast potential of AI in defense and its current applications on the battlefield, understanding who leads in the global AI defense race is crucial. In today's multi-polar and crisis-laden environment, gaining insight into the strategic priorities, technological advancements, and competitive dynamics is essential for shaping the future of military capabilities worldwide. Below are key factors that determine a country's position in this high-stakes race:

1. AI-readiness: This factor encompasses the technological maturity and sophistication of AI technologies that have been developed and deployed. It also includes the integration of AI into military doctrine, highlighting the extent to which AI has been infused into defense strategies and combat operations.

2. Strategic autonomy: This refers to a nation's ability to independently develop and deploy AI technologies without relying on foreign suppliers. It also considers the scale and focus of investments in AI research, particularly in defense-specific applications.

3. Ethics and governance: This aspect involves balancing the drive for innovation with ethical considerations and global norms, ensuring that AI development aligns with responsible practices.

Vision and impacts of AI-driven defense

The integration of AI into defense systems is revolutionizing military operations, paving the way for a future marked by enhanced efficiency, precision, and adaptability. By 2030, AI technologies are anticipated to play a crucial role in reshaping how defense organizations manage resources, make decisions, and execute complex missions across various domains. From optimizing supply chains and automating battlefield operations to empowering decision-makers with predictive insights, AI is set to become an indispensable force multiplier. These are the key areas where AI's impact will be most transformative:

• Predictive decision-making
• Collaborative autonomous systems
• Dynamic resource management

However, the deployment of AI in defense comes with significant risks and potential conflicts of interest, which could lead to strategic fragmentation and stagnation in AI deployment. Therefore, the utilization of AI must be carefully evaluated and deliberately managed to ensure that its deployment aligns with the core values of democratic norms and systems within the Western alliance.

Vision 2027+: A roadmap for Germany

Germany stands at a critical crossroads in its defense strategy, where integrating AI is not just an option but a necessity. To establish itself as a leader in responsible AI-driven defense, Germany must develop a clear, action-oriented roadmap that addresses its challenges while leveraging its strengths. This vision for 2027 and beyond is built on four key priorities: AI sovereignty, NATO and EU interoperability, fostering innovation ecosystems, and leadership in ethical AI governance.

Achieving these goals will involve a phased approach. Between now and 2027, Germany's focus should be on creating the right environment for AI integration, testing pilot projects, and scaling successful initiatives to full operational capabilities. By following this roadmap, Germany can position itself as a leader in responsible AI for defense, aligning operational effectiveness with ethical standards.

Navigating the AI frontier

Artificial intelligence is reshaping the way nations approach defense, strategy, and security in the 21st century. By 2030, the integration of AI technologies in areas such as predictive decision-making, collaborative autonomous systems, and dynamic resource management is set to revolutionize military operations, offering unprecedented precision, agility, and resilience.

To harness AI's full potential while mitigating risks, defense organizations must prioritize the establishment of robust ethical frameworks, transparent accountability mechanisms, and international collaboration. These initiatives will ensure the responsible use of AI and maintain trust and legitimacy in the global security arena.

To continue being a significant military power and a key player in NATO and the EU, Germany must act decisively to address institutional fragmentation, cultural resistance, and underinvestment in talent and infrastructure. By leveraging its world-class research institutions, industrial expertise, and international partnerships, Germany can create an AI defense ecosystem founded on ethical governance and innovation.

https://www.strategyand.pwc.com/.../ai-in-defense.html

The Doomsday Clock reveals how close humanity may be to total annihilation

By Kristen Rogers

January 28, 2025

Seventy-eight years ago, scientists created a unique sort of timepiece — named the Doomsday Clock — as a symbolic attempt to gauge how close humanity is to destroying the world.

On Tuesday, the clock was set at 89 seconds to midnight — the closest the world has ever been to that marker, according to the Bulletin of the Atomic Scientists, which established the clock in 1947. Midnight represents the moment at which people will have made the Earth uninhabitable.

For the two years prior, the Bulletin set the clock at 90 seconds to midnight, mainly due to Russia's invasion of Ukraine, the potential of a nuclear arms race, the Israel-Hamas conflict in Gaza, and the climate crisis…

https://edition.cnn.com/2025/01/28/science/doomsday-clock-2025-time-wellness/index.html   

Welcome to the New Nuclear Age. It's Even More Chaotic

China’s rise, Russia’s aggression and America’s unreliability could fuel a wave of atomic-weapons proliferation. 

May 18, 2025

By Hal Brands

Hal Brands is a Bloomberg Opinion columnist and the Henry Kissinger Distinguished Professor at Johns Hopkins University’s School of Advanced International Studies.

Nuclear weapons focus the mind. So when India and Pakistan fight, the world watches, because any clash between the two could become the first nuclear war since 1945.

The most recent round of their subcontinental contest seems to have settled, thanks partly to US intervention. Just a day after Vice President JD Vance scoffed that the quarrel was none of America’s business, he was working the phones to stop a slide down the slippery nuclear slope. But if this crisis has ebbed, the nuclear peril hasn’t. The world is entering a new nuclear era, one more complex, and potentially far less stable, than the nuclear eras that came before.

The nuclear standoff defined the Cold War. After the fall of the Soviet Union in 1991, Washington focused on keeping nuclear weapons out of roguish hands. But this era is different, because it fuses five key trends that challenge time-honored notions of nuclear strategy and stability — and because those trends interact in nasty ways.

First, great-power nuclear rivalry has returned, and this time it’s a three-player game. Second, new technologies could make nuclear deterrence more tenuous by making surprise attacks more feasible. Third, the existing arms control architecture is crumbling, and what comes next remains unclear. Fourth, the nonproliferation regime — the agreements and arrangements that slowed the spread of nuclear weapons — is being strained in multiple regions.

[Chart: World Nuclear Forces. Russia and the US lead in stockpiles of warheads that are, or can be, deployed. Source: SIPRI Yearbook 2024. Note: Israel and North Korea also have nuclear warheads.]

Cutting across these issues is a final, more fundamental challenge. As the US becomes more erratic and unilateral, it jeopardizes the arrangements that have provided great-power peace and international stability for generations — and risks unleashing nuclear anarchy upon the dawning age.

The Cold War Was Simple

It's a mistake to see the past through rose-tinted blast goggles: The Cold War wasn’t as stable as we sometimes think. Surging superpower buildups created arsenals with tens of thousands of weapons. Nuclear coercion produced epic crises over Berlin, Cuba and the Middle East. Yet the nuclear era of the Cold War was fairly straightforward compared to the situation today.

The old nuclear balance was a duopoly: The US and Soviet arsenals dwarfed the rest. Moscow and Washington competed for nuclear advantage, but over time they agreed to limit the type and number of weapons they possessed. Nuclear crises gradually became less common, as the superpowers — chastened by earlier confrontations — grew cautious about militarily challenging the status quo.

After the Cold War, US policymakers still focused on nuclear threats, but mostly those posed by terrorists who might use those weapons for indiscriminate slaughter, or relatively weak rogue states that might use them to destabilize their regions. America’s most fateful decision of this era — the invasion of Iraq in 2003 — was intended to keep those twin threats from coming together.

Yet nuclear weapons became far less relevant to great-power politics, mostly because the US and its allies were supreme. President Barack Obama could even declare, in 2009, that the US sought a nuclear-free world. Today, that goal seems impossibly distant, as nuclear weapons return to the forefront of global politics and test American policymakers in novel ways.

A Three-Way Chess Match

For one thing, nuclear rivalry has gone tripolar. As the nuclear-armed great powers struggle to shape the international system, three-way dynamics make those struggles more perilous and complex.

Over the past two decades, President Vladimir Putin has rebuilt and modernized Russia’s arsenal — and used it to weaken a global order led by Washington. In particular, Putin has made nuclear weapons his shield as he invades Russia’s neighbors, destabilizing Europe and rupturing the norm against territorial conquest.

Assaults on Georgia in 2008 and Crimea in 2014 were preludes to the main event. Since invading Ukraine in 2022, Putin has used nuclear threats to keep the US and the North Atlantic Treaty Organization from getting directly involved in that fight.

Putin’s tactics brought the most dangerous nuclear crisis in decades. In late 2022, when Russian armies were retreating in Ukraine, US officials thought Putin might use so-called battlefield nuclear weapons to stave off defeat. Only Putin knows the truth: He may have been bluffing. But the Ukraine war has put nuclear coercion at the core of international politics again.

China’s Xi Jinping is surely taking notes. Under Xi, China’s nuclear inventory is growing from perhaps 200 warheads in 2020 to more than a thousand by the early 2030s — all mounted on increasingly diverse, flexible delivery systems. Xi probably sees a stronger nuclear arsenal as a way of deterring US intervention in a Western Pacific crisis, which would give China greater leeway to take Taiwan or otherwise batter its neighbors.

Within a half-decade, America will face revisionist powers — which are also nuclear peers — on both sides of Eurasia. That has the makings of a tricky, three-player game.

[Chart: Dwindling But Still Dangerous. The estimated size of nuclear warhead stockpiles is a fraction of what it was during the Cold War. Source: Federation of American Scientists. Note: The exact number of warheads is secret. Estimates are based on publicly available information, historical records, and leaks. Warheads also vary substantially in their power.]

Three-player deterrence is inherently unstable, because an arsenal large enough to deter both of one’s rivals simultaneously is also large enough to place either rival in a state of inferiority and insecurity. Indeed, a US nuclear arsenal that has long been sized and shaped to deter one peer competitor may have to grow substantially if the task is now to deter two.

But building that arsenal won’t be easy, given that the existing nearly $1 trillion US nuclear modernization program — designed simply to keep legacy warheads and delivery vehicles working — is badly behind schedule and over budget. There’s also the possibility that China and Russia, who tout their close strategic partnership, could collude against the US.

Russia is reportedly sharing sophisticated submarine-quieting technology with China, which could eventually make Beijing’s noisy, vulnerable ballistic-missile submarines stealthier and harder to find. Russia and China could coordinate explicit nuclear threats, or implicit nuclear signaling — such as moving forces around menacingly — in a crisis, to confront Washington with the possibility of simultaneous nuclear showdowns.

Even if Xi and Putin don’t go this far, the US may have to pull its punches against one nuclear rival in a crisis or conflict, for fear of leaving itself exposed against the other. The world has never seen a tripolar nuclear environment. Its dynamics won’t be pleasant for the US.

Innovation Brings Instability

Innovation, meanwhile, is threatening to bring instability. Nuclear strategists have long worried that technological breakthroughs — the advent of missile defenses or increasingly potent offensive weapons — could undermine mutual deterrence, by making it more attractive for one side to strike first. That possibility looms larger amid the tech revolution underway.

Terminator analogies notwithstanding, the real problem isn’t that killer robots or out-of-control AI will start a nuclear war for no reason. The US and China have agreed to keep humans in the loop on nuclear use decisions. Even Russia, which has built dangerous, quasi-autonomous weapons, probably won’t take the vital decision out of Putin’s hands. The dangers are more subtle, and involve the first-strike incentives new technologies could create.

One such possibility was illustrated by China’s test of a hypersonic glide vehicle from space in 2021. US defense officials compared it to a “Sputnik moment.” The reason, presumably, is that HGVs are well-suited to defeating missile defenses — and provide very little warning — because they can follow irregular flight paths and maneuver as they near the target.

Beijing could conceivably use such a weapon to decapitate America’s leadership or otherwise paralyze its nuclear command structure — the mere possibility of which could increase risks of miscalculation by putting the two parties on hair-trigger alert.

Similarly, artificial intelligence could enable a first-strike revolution, by helping one country fuse the intelligence and targeting data needed to catch the other side napping. Or, perhaps, AI-enabled cyberattacks will cripple an opponent’s early-warning systems and retaliatory capabilities, making it possible to inflict war-winning damage before that opponent can respond.

“Perhaps” is the key word: Technological change isn’t always destabilizing. AI-aided advances in situational awareness could ultimately decrease the risks of miscalculation, by making it harder to achieve strategic surprise. Or AI-enabled cyberdefenses could trump AI-enabled cyberattacks.

What’s clear is that we’re entering a burst of innovation, with uncertain effects on the delicate balance of terror. And that innovation informs a third challenge: As arms races rage, arms control is in decline.

The Old Treaties Are Dead

The golden age of arms control spanned the second half of the Cold War and the post-Cold War era. Washington and Moscow cut deals that constrained, and then reduced, the size of their strategic weapons stockpiles. They limited missile defenses and banned fast-flying, intermediate-range ballistic missiles altogether. A separate set of agreements, notably the Non-Proliferation Treaty, promoted “horizontal” arms control, checking the spread of nuclear weapons to additional states. But the golden age is over, and global tensions are tearing arms control apart.

US-Russia pacts — the Anti-Ballistic Missile Treaty, the Intermediate-Range Nuclear Forces Treaty, the Open Skies agreement — have collapsed, one after the other. The key deal that remains, the New START Treaty, expires next year. Some strategists think that bilateral, US-Russia agreements don’t even make sense anymore, given that they would leave China unconstrained.

Perhaps, then, arms control’s future is tripolar. In February, President Donald Trump proposed that the US, Russia and China limit their nuclear capabilities while slashing defense spending by half. But three-way agreements are hard, because they require tradeoffs across multiple arsenals — and because China has no impetus to participate until it has completed its buildup and can negotiate from strength.

Trump is still pursuing horizontal arms control, notably during last week’s Middle East trip. The idea is a deal that would constrain Iran’s nuclear program and reduce the risk of war in the region. But that gambit illustrates a fourth problem: The nonproliferation regime is being challenged on several fronts at once.

Who’s Next?

That regime is one of humanity’s, and America’s, best achievements. It has contained the chaos that could occur in a world of 50 nuclear states. Yet dissuading countries from building weapons that might guarantee their survival isn’t easy. Doing so has required painstakingly negotiated international treaties, along with the plentiful use of carrots and sticks. And today, that project is under real strain.

Part of the reason is that regional nuclear threats are worsening. Pakistan and India, which announced their nuclear arrival with competing tests in 1998, are both building larger, more sophisticated arsenals, which add new layers of danger to their rivalry.

Yet the greater proliferation pressures are emerging from the Korean Peninsula. North Korea once had a diminutive nuclear arsenal. Now it possesses dozens of warheads and an advancing intercontinental ballistic missile program that gives it ever-greater ability to strike the US.

Those capabilities stress the US-South Korea alliance: Will Washington really fight on Seoul’s behalf if doing so could bring nuclear strikes down on America itself? Little wonder there is growing public support for the idea that South Korea should build its own bomb.

[Chart: Iran's Stockpile of Highly-Enriched Uranium Surges. Its reserves of 60%-enriched uranium hit 275 kg in February after a record increase versus the preceding three months. Source: International Atomic Energy Agency data compiled by Bloomberg.]

Nuclear dangers are simultaneously rising in the Middle East. Iran is now a threshold nuclear state: It could build a bomb in as little as a year or two. If Trump can’t strike a deal to avoid that danger, Israel might take matters into its own hands — indeed, it has reportedly been urging the US to green-light a strike. The stakes are high because the Iranian nuclear domino wouldn’t be the last to fall. If an aggressive, expansionist Iran gets the bomb, other regional powers — Saudi Arabia, Turkey, the United Arab Emirates — may do the same. A crowded, hotly contested region would become immeasurably more dangerous.

And meanwhile, developments in great-power relations are intensifying the proliferation problem. Frontline states see Putin’s invasion of Ukraine — one of the few nations to ever give up its nukes voluntarily — as a terrifying precedent. Perhaps nuclear-armed aggressors can invade their neighbors and then deter America or other Samaritans from coming to help. Add to that growing skepticism that the US would help a friend in distress, and you have a formula for global nuclear anxiety.

“Poland must reach for the most modern capabilities,” including nuclear weapons, Prime Minister Donald Tusk declared in March. Germany’s chancellor, Friedrich Merz, has touted nuclear sharing with France and Britain. Ukraine’s Volodymyr Zelenskiy has suggested that his ravaged country might need nuclear weapons. And as events in Ukraine have energized nuclear hawks in South Korea, they have stirred concerns that Japan — the only country ever struck by atomic weapons — might not be far behind.

[Map: A New Age of Atomic Proliferation? There are currently eight declared nuclear powers in the world, and more may soon join their ranks. Note: Israel, not shown, is an undeclared nuclear power; the map shows each country's nuclear status.]

The nonproliferation regime has repeatedly proven strong and resilient. Its demise has been far too frequently foretold. But today, cracks are showing, not least because of events in the US.

Can America Be Trusted?

For decades, American power has curbed nuclear disorder. US alliances, backed by nuclear-deterrence commitments, have contained bad actors. Those same commitments have reassured allies that might otherwise feel they had no choice but to seek the bomb. Dozens of free world countries have bestowed the ultimate trust upon the American president, by effectively granting him responsibility to make life-or-death decisions about using nuclear weapons on their behalf.

Thus, a final challenge of our new nuclear era: Surging uncertainty about the US.

That uncertainty has long been growing, but has crystallized under Trump. The president has threatened to pull US troops from Eurasian hotspots; he has said US allies should build nuclear weapons so they can protect themselves. He has offered to recognize Russia’s ill-gotten control of Crimea, while seeking to extort territorial concessions from Canada and Denmark.

Trump has also undermined global confidence in America’s reliability, by starting trade wars on a whim. The perception, in many allied countries, is that the US is quitting the global order business and disengaging from the common defense.

Maybe that perception is wrong. During his first term, Trump invested in new nuclear weapons, such as submarine-launched cruise missiles, to reassure anxious allies. He pursued hawkish policies toward Moscow and Beijing. In Trump’s second term, his intervention in the India-Pakistan spat showed that Washington still has a unique role in keeping nuclear risks at bay.

Yet the basic problem is that no one really knows how healthy America’s alliances will be, or what its foreign policy will look like, three years from now. That uncertainty is unsettling US allies. And if American policy does change fundamentally, so will the nuclear rules of the road.

Nuclear coercion could become more common and more effective: If the US weakens its defenses around the Eurasian periphery, the costs of Russian or Chinese aggression will fall. And as states in Europe and East Asia scramble for security, the nonproliferation order could buckle, fast.

Allies that could easily have built nuclear weapons haven’t, mostly because they thought they could count on the US. Even allies that did go nuclear, namely France and the UK, developed small arsenals, because those arsenals were part of a larger free-world package. The retraction, or discrediting, of US commitments could thus create an awful mess.

Neither Britain nor France can simply replace America as the provider of European stability, because neither country has anything like the mix of capabilities — forward-deployed conventional forces, large and flexible nuclear arsenals, advanced missile defenses and other means of limiting damage to their own societies — needed to make deterrence on behalf of distant allies work. It would take untold years, and unaffordable sums, to develop those capabilities. So a post-American Europe could simply see lots of exposed countries sprint for their own arsenals at once.


If anything, a post-American nuclear order might be even grimmer than we currently imagine. For the nuclear age has also been the age of American global leadership: Humanity’s encounter with history’s most horrible weapons has come in a period in which international society was structured and stabilized by US power. We simply have no experience to teach us what a world of plentiful nuclear weapons and fading American leadership might look like. Perhaps the gravest danger of our new nuclear era is the chance that we might find out.

https://www.bloomberg.com/opinion/features/2025-05-18/welcome-to-the-new-nuclear-age-between-china-russia-and-the-us?srnd=undefined

Martha Nussbaum, Political Emotions: Why Love Matters for Justice

Nussbaum stimulates readers with challenging insights on the role of emotion in political life. Her provocative theory of social change shows how a truly just society might be realized through the cultivation and studied liberation of emotions, specifically love. To that end, the book sparkles with Nussbaum’s characteristic literary analysis, drawing from both Western and South Asian sources, including a deep reading of public monuments. In one especially notable passage, Nussbaum artfully interprets Mozart’s The Marriage of Figaro, revealing it as a musical meditation on the emotionality of revolutionary politics and feminism. Such chapters are a culmination of her passion for seeing art and literature as philosophical texts, a theme in her writing that she profitably continues here. The elegance with which she negotiates this diverse material deserves special praise, as she expertly takes the reader through analyses of philosophy, opera, primatology, psychology, and poetry. In contrast to thinkers like John Rawls, who imagined an already just world, Nussbaum addresses how to order our society to reach such a world. A plea for recognizing the power of art, symbolism, and enchantment in public life, Nussbaum’s cornucopia of ideas effortlessly commands attention and debate.

https://www.goodreads.com/book/show/17804353-political-emotions  

TRENDS IN WORLD MILITARY EXPENDITURE, 2023

SIPRI Fact Sheet April 2024

KEY FACTS 

• World military expenditure, driven by Russia’s full-scale invasion of Ukraine and heightened geopolitical tensions, rose by 6.8 per cent in real terms (i.e. when adjusted for inflation) to $2443 billion in 2023, the highest level ever recorded by SIPRI.
• In 2023 military spending increased in all five geographical regions for the first time since 2009.
• Total military expenditure accounted for 2.3 per cent of the global gross domestic product (GDP) in 2023.
• The five biggest spenders in 2023 were the United States, China, Russia, India and Saudi Arabia, which together accounted for 61 per cent of world military spending.
• The USA and China remained the top two biggest spenders in the world and both increased their military spending in 2023. US spending was $916 billion while Chinese spending was an estimated $296 billion.
• Russia’s military spending grew by 24 per cent in 2023 to an estimated $109 billion. This was equivalent to 5.9 per cent of Russia’s GDP.
• Ukraine became the eighth largest military spender in 2023, increasing its spending by 51 per cent to $64.8 billion, or 37 per cent of GDP.
• In 2023 military expenditure by NATO member states reached $1341 billion, or 55 per cent of world spending. Eleven of the 31 NATO members in 2023 met NATO’s 2 per cent of GDP military spending target, which was 4 more than in 2022.
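
A few of the quoted shares and growth rates can be cross-checked directly from the figures above; a minimal sketch using only the numbers given in the fact sheet (SIPRI's underlying data are more precise):

```python
# Cross-checking quoted shares and growth rates (figures in billions of
# current US dollars, as given in the fact sheet above).
world_total = 2_443
nato_total = 1_341
us, china, russia, ukraine = 916, 296, 109, 64.8

print(f"NATO share of world spending: {nato_total / world_total:.0%}")  # ~55%
print(f"US share alone: {us / world_total:.0%}")                        # ~37%
# Russia grew 24% in 2023 to $109 bn, implying a ~$88 bn 2022 baseline:
print(f"Russia 2022 baseline: ~${russia / 1.24:.0f} bn")
# Ukraine grew 51% to $64.8 bn, implying a ~$43 bn 2022 baseline:
print(f"Ukraine 2022 baseline: ~${ukraine / 1.51:.0f} bn")
```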

https://www.sipri.org/sites/default/files/2024-04/2404_fs_milex_2023.pdf  

The SIPRI Top 100 arms-producing and military services companies in the world, 2023

https://www.sipri.org/visualizations/2024/sipri-top-100-arms-producing-and-military-services-companies-world-2023

Surviving & Thriving in the 21st Century

A vital new report on the human future from the Commission's Round Table held in March 2020

CONTENTS

A CALL TO ALL NATIONS AND PEOPLE
PART 1: THE CHALLENGE
    The ten risks
    Climate change
    Environmental decline and extinction
    Nuclear weapons
    Resource scarcity
    Food insecurity
    Dangerous new technologies
    Overpopulation
    Universal pollution by chemicals
    Pandemic Disease
    Denial, Misinformation and Failure to Act Preventively
    Summing up the Challenge
PART 2: PATHWAYS AHEAD
    An opportunity to rethink society
    Political and policy reform
    Diverse voices
    Redefining security
    Building natural security
    Educating for survival
    The cost of action versus inaction
    Surviving and thriving
    Denial and misinformation
PART 3: TOWARDS SOLVING OUR GREATEST RISKS
    End climate change
    Ban and Eliminate Nuclear Weapons
    Repair the Global Environment
    End food insecurity
    All-hazard risk assessment
    Lower human numbers
    Implement the Sustainable Development Goals
    Clean Up the Earth
SUMMARY
    Recommendation
APPENDICES
    Appendix 1 - Contributors to the CHF Roundtable Discussion and Report
    Appendix 2 - Commission for the Human Future Communique, March 28, 2020
    Appendix 3 - Resources on Catastrophic Risk and its Solution
    Appendix 4 - About the Commission for the Human Future
    Appendix 5 - Become a Supporter of the Commission for the Human Future

Test of a clean hydrogen bomb with a yield of 50 megatons


https://www.youtube.com/watch?time_continue=2154&v=nbC7BxXtOlo&feature=emb_logo

The Doomsday Clock is now two minutes before midnight
Scientists move clock ahead 30 seconds, closest to midnight since 1953
January 25, 2018
Citing growing nuclear risks and unchecked climate dangers, the Doomsday Clock — the symbolic point of annihilation — is now two minutes to midnight, the closest the Clock has been since 1953 at the height of the Cold War, according to a statement today (Jan. 25) by the Bulletin of the Atomic Scientists.
“In 2017, world leaders failed to respond effectively to the looming threats of nuclear war and climate change, making the world security situation more dangerous than it was a year ago — and as dangerous as it has been since World War II,” according to the Atomic Scientists’ Science and Security Board in consultation with the Board of Sponsors, which includes 15 Nobel Laureates.


“This is a dangerous time, but the danger is of our own making. Humankind has invented the implements of apocalypse; so can it invent the methods of controlling and eventually eliminating them. This year, leaders and citizens of the world can move the Doomsday Clock and the world away from the metaphorical midnight of global catastrophe by taking common-sense action.” — Lawrence Krauss, director of the Origins Project at Arizona State University, Foundation Professor at School of Earth and Space Exploration and Physics Department, Arizona State University, and chair, Bulletin of the Atomic Scientists’ Board of Sponsors.


The increased risks driving the decision to move the clock include:
Nuclear. Hyperbolic rhetoric and provocative actions from North Korea and the U.S. have increased the possibility of nuclear war by accident or miscalculation. Other risk factors include U.S.-Russian military entanglements, South China Sea tensions, escalating rhetoric between Pakistan and India, and uncertainty about continued U.S. support for the Iran nuclear deal.
Decline of U.S. leadership and a related demise of diplomacy under the Trump Administration. “In 2017, the United States backed away from its longstanding leadership role in the world, reducing its commitment to seek common ground and undermining the overall effort toward solving pressing global governance challenges. Neither allies nor adversaries have been able to reliably predict U.S. actions or understand when U.S. pronouncements are real and when they are mere rhetoric. International diplomacy has been reduced to name-calling, giving it a surrealistic sense of unreality that makes the world security situation ever more threatening.”
Climate change. “The nations of the world will have to significantly decrease their greenhouse gas emissions to keep climate risks manageable, and so far, the global response has fallen far short of meeting this challenge.”
How to #RewindtheDoomsdayClock
According to Bulletin of the Atomic Scientists:
* U.S. President Donald Trump should refrain from provocative rhetoric regarding North Korea, recognizing the impossibility of predicting North Korean reactions. The U.S. and North Korean governments should open multiple channels of communication.
* The world community should pursue, as a short-term goal, the cessation of North Korea’s nuclear weapon and ballistic missile tests. North Korea is the only country to violate the norm against nuclear testing in 20 years.
* The Trump administration should abide by the terms of the Joint Comprehensive Plan of Action for Iran’s nuclear program unless credible evidence emerges that Iran is not complying with the agreement or Iran agrees to an alternative approach that meets U.S. national security needs.
* The United States and Russia should discuss and adopt measures to prevent peacetime military incidents along the borders of NATO.
* U.S. and Russian leaders should return to the negotiating table to resolve differences over the INF treaty, to seek further reductions in nuclear arms, to discuss a lowering of the alert status of the nuclear arsenals of both countries, to limit nuclear modernization programs that threaten to create a new nuclear arms race, and to ensure that new tactical or low-yield nuclear weapons are not built, and existing tactical weapons are never used on the battlefield.
* U.S. citizens should demand, in all legal ways, climate action from their government. Climate change is a real and serious threat to humanity.
* Governments around the world should redouble their efforts to reduce greenhouse gas emissions so they go well beyond the initial, inadequate pledges under the Paris Agreement.
* The international community should establish new protocols to discourage and penalize the misuse of information technology to undermine public trust in political institutions, in the media, in science, and in the existence of objective reality itself.
Worldwide deployments of nuclear weapons, 2017
“As of mid-2017, there are nearly 15,000 nuclear weapons in the world, located at some 107 sites in 14 countries. Roughly 9400 of these weapons are in military arsenals; the remaining weapons are retired and awaiting dismantlement. Nearly 4000 are operationally available, and some 1800 are on high alert and ready for use on short notice.
“By far, the largest concentrations of nuclear weapons reside in Russia and the United States, which possess 93 percent of the total global inventory. In addition to the seven other countries with nuclear weapon stockpiles (Britain, France, China, Israel, India, Pakistan, and North Korea), five nonnuclear NATO allies (Belgium, Germany, Italy, the Netherlands, and Turkey) host about 150 US nuclear bombs at six air bases.”
— Hans M. Kristensen & Robert S. Norris, Worldwide deployments of nuclear weapons, Bulletin of the Atomic Scientists 2017. Pages 289-297 | Published online: 31 Aug 2017.

The Synthetic Age: Outdesigning Evolution, Resurrecting Species, and Reengineering Our World

July 2, 2018
author |Christopher J. Preston
year published |2018
Summary
Imagining a future in which humans fundamentally reshape the natural world using nanotechnology, synthetic biology, de-extinction, and climate engineering.
We have all heard that there are no longer any places left on Earth untouched by humans. The significance of this goes beyond statistics documenting melting glaciers and shrinking species counts. It signals a new geological epoch. In The Synthetic Age, Christopher Preston argues that what is most startling about this coming epoch is not only how much impact humans have had but, more important, how much deliberate shaping they will start to do. Emerging technologies promise to give us the power to take over some of Nature's most basic operations. It is not just that we are exiting the Holocene and entering the Anthropocene; it is that we are leaving behind the time in which planetary change is just the unintended consequence of unbridled industrialism. A world designed by engineers and technicians means the birth of the planet's first Synthetic Age.
Preston describes a range of technologies that will reconfigure Earth's very metabolism: nanotechnologies that can restructure natural forms of matter; “molecular manufacturing” that offers unlimited repurposing; synthetic biology's potential to build, not just read, a genome; “biological mini-machines” that can outdesign evolution; the relocation and resurrection of species; and climate engineering attempts to manage solar radiation by synthesizing a volcanic haze, cool surface temperatures by increasing the brightness of clouds, and remove carbon from the atmosphere with artificial trees that capture carbon from the breeze.

What does it mean when humans shift from being caretakers of the Earth to being shapers of it? And in whom should we trust to decide the contours of our synthetic future? These questions are too important to be left to the engineers. https://mitpress.mit.edu/books/synthetic-age

GLOBAL PEACE INDEX MEASURING PEACE IN A COMPLEX WORLD GLOBAL PEACE INDEX 2019

Quantifying Peace and its Benefits GLOBAL PEACE INDEX 2019 |
The Institute for Economics & Peace (IEP) is an independent, non-partisan, non-profit think tank dedicated to shifting the world’s focus to peace as a positive, achievable, and tangible measure of human wellbeing and progress. IEP achieves its goals by developing new conceptual frameworks to define peacefulness; providing metrics for measuring peace; uncovering the relationships between business, peace and prosperity; and promoting a better understanding of the cultural, economic and political factors that create peace. IEP is headquartered in Sydney, with offices in New York, The Hague, Mexico City, Brussels and Harare. It works with a wide range of partners internationally and collaborates with intergovernmental organisations on measuring and communicating the economic value of peace. For more information visit www.economicsandpeace.org

Please cite this report as: Institute for Economics & Peace. Global Peace Index 2019: Measuring Peace in a Complex World, Sydney, June 2019. Available from: http://visionofhumanity.org/reports (accessed Date Month Year).
Contents
Key Findings
Highlights
2019 Global Peace Index Rankings
Regional Overview
Improvements & Deteriorations
GPI Trends
Peace Perceptions
Climate Change and Peace
Results
Methodology at a glance
What is Positive Peace?
Positive Peace and Negative Peace
Positive Peace and the Economy
Appendix A: GPI Methodology
Appendix B: GPI indicator sources, definitions & scoring criteria
Appendix C: GPI Domain Scores
Appendix D: Economic Cost of Violence

Nuclear Winter Responses to Nuclear War Between the United States and Russia in the Whole Atmosphere
Community Climate Model Version 4 and the Goddard Institute for Space Studies ModelE
Joshua Coupe, Charles G. Bardeen, Alan Robock, and Owen B. Toon
Abstract
Current nuclear arsenals used in a war between the United States and Russia could inject 150 Tg of soot from fires ignited by nuclear explosions into the upper troposphere and lower stratosphere. We simulate the climate response using the Community Earth System Model-Whole Atmosphere Community Climate Model version 4 (WACCM4), run at 2° horizontal resolution with 66 layers from the surface to 140 km, with full stratospheric chemistry and with aerosols from the Community Aerosol and Radiation Model for Atmospheres allowing for particle growth. We compare the results to an older simulation conducted in 2007 with the Goddard Institute for Space Studies ModelE run at 4° × 5° horizontal resolution with 23 levels up to 80 km and constant specified aerosol properties and ozone. These are the only two comprehensive climate model simulations of this scenario. Despite having different features and capabilities, both models produce similar results. Nuclear winter, with below-freezing temperatures over much of the Northern Hemisphere during summer, occurs because of a reduction of surface solar radiation due to smoke lofted into the stratosphere. WACCM4's more sophisticated aerosol representation removes smoke more quickly, but the magnitude of the climate response is not reduced. In fact, the higher-resolution WACCM4 simulates larger temperature and precipitation reductions than ModelE in the first few years following a 150-Tg soot injection. A strengthening of the northern polar vortex occurs during winter in both simulations in the first year, contributing to above-normal, but still below-freezing, temperatures in the Arctic and northern Eurasia…: https://agupubs.onlinelibrary.wiley.com/doi/epdf/10.1029/2019JD030509?referrer_access_token=CG43Fvk26gyDKieIy_PUBMOuACxIJX3yJRZRu4P4ertmxyI0Hm_uoL48mf82cDSn3T5UhnrKSxqrMYKPl12zvUJiIUX29R5LQPt3rK13fal4fPuYXzHnLPMV3YtamtTwBTFuQY14uqKTMjKbGAYlPA%3D%3D&

Stop Autonomous Weapons

This would have to be the most terrifying sci-fi short I have ever seen. What makes it so scary is the realism; the danger has nothing to do with fantasies of Skynet or the Matrix, and everything to do with human misuse of advanced technology. If this became a reality (and I can't see how we'd avoid it, given how cheap and already viable the technology is), we'd need anti-drone drones that target any drone without locally issued authorization.

https://www.youtube.com/watch?v=9CO6M2HsoIA

 08 Oct 2019 | 15:00 GMT

Many Experts Say We Shouldn’t Worry About Superintelligent AI. They’re Wrong
By Stuart Russell
 This article is based on a chapter of the author’s newly released book, Human Compatible: Artificial Intelligence and the Problem of Control.
AI research is making great strides toward its long-term goal of human-level or superhuman intelligent machines. If it succeeds in its current form, however, that could well be catastrophic for the human race. The reason is that the “standard model” of AI requires machines to pursue a fixed objective specified by humans. We are unable to specify the objective completely and correctly, nor can we anticipate or prevent the harms that machines pursuing an incorrect objective will create when operating on a global scale with superhuman capabilities. Already, we see examples such as social-media algorithms that learn to optimize click-through by manipulating human preferences, with disastrous consequences for democratic systems.
Nick Bostrom’s 2014 book Superintelligence: Paths, Dangers, Strategies presented a detailed case for taking the risk seriously. In what most would consider a classic example of British understatement, The Economist magazine’s review of Bostrom’s book ended with: “The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking.”
Surely, with so much at stake, the great minds of today are already doing this hard thinking—engaging in serious debate, weighing up the risks and benefits, seeking solutions, ferreting out loopholes in solutions, and so on. Not yet, as far as I am aware. Instead, a great deal of effort has gone into various forms of denial.
Some well-known AI researchers have resorted to arguments that hardly merit refutation. Here are just a few of the dozens that I have read in articles or heard at conferences:
Electronic calculators are superhuman at arithmetic. Calculators didn’t take over the world; therefore, there is no reason to worry about superhuman AI.
Historically, there are zero examples of machines killing millions of humans, so, by induction, it cannot happen in the future.
No physical quantity in the universe can be infinite, and that includes intelligence, so concerns about superintelligence are overblown.
Perhaps the most common response among AI researchers is to say that “we can always just switch it off.” Alan Turing himself raised this possibility, although he did not put much faith in it:
If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled.... This new danger...is certainly something which can give us anxiety.
Switching the machine off won’t work for the simple reason that a superintelligent entity will already have thought of that possibility and taken steps to prevent it. And it will do that not because it “wants to stay alive” but because it is pursuing whatever objective we gave it and knows that it will fail if it is switched off. We can no more “just switch it off” than we can beat AlphaGo (the world-champion Go-playing program) just by putting stones on the right squares.
Other forms of denial appeal to more sophisticated ideas, such as the notion that intelligence is multifaceted. For example, one person might have more spatial intelligence than another but less social intelligence, so we cannot line up all humans in strict order of intelligence. This is even more true of machines: Comparing the “intelligence” of AlphaGo with that of the Google search engine is quite meaningless.
Kevin Kelly, founding editor of Wired magazine and a remarkably perceptive technology commentator, takes this argument one step further. In “The Myth of a Superhuman AI,” he writes, “Intelligence is not a single dimension, so ‘smarter than humans’ is a meaningless concept.” In a single stroke, all concerns about superintelligence are wiped away.
Now, one obvious response is that a machine could exceed human capabilities in all relevant dimensions of intelligence. In that case, even by Kelly’s strict standards, the machine would be smarter than a human. But this rather strong assumption is not necessary to refute Kelly’s argument.
Consider the chimpanzee. Chimpanzees probably have better short-term memory than humans, even on human-oriented tasks such as recalling sequences of digits. Short-term memory is an important dimension of intelligence. By Kelly’s argument, then, humans are not smarter than chimpanzees; indeed, he would claim that “smarter than a chimpanzee” is a meaningless concept.
This is cold comfort to the chimpanzees and other species that survive only because we deign to allow it, and to all those species that we have already wiped out. It’s also cold comfort to humans who might be worried about being wiped out by machines.
The risks of superintelligence can also be dismissed by arguing that superintelligence cannot be achieved. These claims are not new, but it is surprising now to see AI researchers themselves claiming that such AI is impossible. For example, a major report from the AI100 organization, “Artificial Intelligence and Life in 2030 [PDF],” includes the following claim: “Unlike in the movies, there is no race of superhuman robots on the horizon or probably even possible.”
To my knowledge, this is the first time that serious AI researchers have publicly espoused the view that human-level or superhuman AI is impossible—and this in the middle of a period of extremely rapid progress in AI research, when barrier after barrier is being breached. It’s as if a group of leading cancer biologists announced that they had been fooling us all along: They’ve always known that there will never be a cure for cancer.
What could have motivated such a volte-face? The report provides no arguments or evidence whatever. (Indeed, what evidence could there be that no physically possible arrangement of atoms outperforms the human brain?) I suspect that the main reason is tribalism—the instinct to circle the wagons against what are perceived to be “attacks” on AI. It seems odd, however, to perceive the claim that superintelligent AI is possible as an attack on AI, and even odder to defend AI by saying that AI will never succeed in its goals. We cannot insure against future catastrophe simply by betting against human ingenuity.
If superhuman AI is not strictly impossible, perhaps it’s too far off to worry about? This is the gist of Andrew Ng’s assertion that it’s like worrying about “overpopulation on the planet Mars.” Unfortunately, a long-term risk can still be cause for immediate concern. The right time to worry about a potentially serious problem for humanity depends not just on when the problem will occur but also on how long it will take to prepare and implement a solution.
For example, if we were to detect a large asteroid on course to collide with Earth in 2069, would we wait until 2068 to start working on a solution? Far from it! There would be a worldwide emergency project to develop the means to counter the threat, because we can’t say in advance how much time is needed.
Ng’s argument also appeals to one’s intuition that it’s extremely unlikely we’d even try to move billions of humans to Mars in the first place. The analogy is a false one, however. We are already devoting huge scientific and technical resources to creating ever more capable AI systems, with very little thought devoted to what happens if we succeed. A more apt analogy, then, would be a plan to move the human race to Mars with no consideration for what we might breathe, drink, or eat once we arrive. Some might call this plan unwise.
Another way to avoid the underlying issue is to assert that concerns about risk arise from ignorance. For example, here’s Oren Etzioni, CEO of the Allen Institute for AI, accusing Elon Musk and Stephen Hawking of Luddism because of their calls to recognize the threat AI could pose:
At the rise of every technology innovation, people have been scared. From the weavers throwing their shoes in the mechanical looms at the beginning of the industrial era to today’s fear of killer robots, our response has been driven by not knowing what impact the new technology will have on our sense of self and our livelihoods. And when we don’t know, our fearful minds fill in the details.
Even if we take this classic ad hominem argument at face value, it doesn’t hold water. Hawking was no stranger to scientific reasoning, and Musk has supervised and invested in many AI research projects. And it would be even less plausible to argue that Bill Gates, I.J. Good, Marvin Minsky, Alan Turing, and Norbert Wiener, all of whom raised concerns, are unqualified to discuss AI.
The accusation of Luddism is also completely misdirected. It is as if one were to accuse nuclear engineers of Luddism when they point out the need for control of the fission reaction. Another version of the accusation is to claim that mentioning risks means denying the potential benefits of AI. For example, here again is Oren Etzioni:
Doom-and-gloom predictions often fail to consider the potential benefits of AI in preventing medical errors, reducing car accidents, and more.
And here is Mark Zuckerberg, CEO of Facebook, in a recent media-fueled exchange with Elon Musk:
If you’re arguing against AI, then you’re arguing against safer cars that aren’t going to have accidents. And you’re arguing against being able to better diagnose people when they’re sick.
The notion that anyone mentioning risks is “against AI” seems bizarre. (Are nuclear safety engineers “against electricity”?) But more importantly, the entire argument is precisely backwards, for two reasons. First, if there were no potential benefits, there would be no impetus for AI research and no danger of ever achieving human-level AI. We simply wouldn’t be having this discussion at all. Second, if the risks are not successfully mitigated, there will be no benefits.
The potential benefits of nuclear power have been greatly reduced because of the catastrophic events at Three Mile Island in 1979, Chernobyl in 1986, and Fukushima in 2011. Those disasters severely curtailed the growth of the nuclear industry. Italy abandoned nuclear power in 1990, and Belgium, Germany, Spain, and Switzerland have announced plans to do so. The net new capacity per year added from 1991 to 2010 was about a tenth of what it was in the years immediately before Chernobyl.
Strangely, in light of these events, the renowned cognitive scientist Steven Pinker has argued [PDF] that it is inappropriate to call attention to the risks of AI because the “culture of safety in advanced societies” will ensure that all serious risks from AI will be eliminated. Even if we disregard the fact that our advanced culture of safety has produced Chernobyl, Fukushima, and runaway global warming, Pinker’s argument entirely misses the point. The culture of safety—when it works—consists precisely of people pointing to possible failure modes and finding ways to prevent them. And with AI, the standard model is the failure mode.
Pinker also argues that problematic AI behaviors arise from putting in specific kinds of objectives; if these are left out, everything will be fine:
AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world.
Yann LeCun, a pioneer of deep learning and director of AI research at Facebook, often cites the same idea when downplaying the risk from AI:
There is no reason for AIs to have self-preservation instincts, jealousy, etc.... AIs will not have these destructive “emotions” unless we build these emotions into them.
Unfortunately, it doesn’t matter whether we build in “emotions” or “desires” such as self-preservation, resource acquisition, knowledge discovery, or, in the extreme case, taking over the world. The machine is going to have those emotions anyway, as subgoals of any objective we do build in—and regardless of its gender. As we saw with the “just switch it off” argument, for a machine, death isn’t bad per se. Death is to be avoided, nonetheless, because it’s hard to achieve objectives if you’re dead.
A common variant on the “avoid putting in objectives” idea is the notion that a sufficiently intelligent system will necessarily, as a consequence of its intelligence, develop the “right” goals on its own. The 18th-century philosopher David Hume refuted this idea in A Treatise of Human Nature. Nick Bostrom, in Superintelligence, presents Hume’s position as an orthogonality thesis:
Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.
For example, a self-driving car can be given any particular address as its destination; making the car a better driver doesn’t mean that it will spontaneously start refusing to go to addresses that are divisible by 17.
By the same token, it is easy to imagine that a general-purpose intelligent system could be given more or less any objective to pursue—including maximizing the number of paper clips or the number of known digits of pi. This is just how reinforcement learning systems and other kinds of reward optimizers work: The algorithms are completely general and accept any reward signal. For engineers and computer scientists operating within the standard model, the orthogonality thesis is just a given.
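Russell's point about reward optimizers can be made concrete with a toy example. The sketch below is a minimal hill-climbing loop invented here for illustration (the search space and both objective functions are ours, not from the article): the optimizer is completely generic and will climb whatever reward signal it is handed, whether that signal counts paperclips or anything else.

```python
import random

def hill_climb(state, neighbours, reward, steps=1000):
    """A generic optimizer: it greedily climbs whatever reward signal
    it is given. Nothing in this loop knows what 'reward' measures."""
    for _ in range(steps):
        candidate = random.choice(neighbours(state))
        if reward(candidate) >= reward(state):
            state = candidate
    return state

# Toy one-dimensional search space: an integer "production level".
neighbours = lambda x: [x - 1, x + 1]

# The same algorithm happily maximizes either objective:
paperclips = lambda x: x              # "produce as many paperclips as possible"
hit_target = lambda x: -abs(x - 314)  # "get as close as possible to 314"

print(hill_climb(0, neighbours, paperclips, steps=100))   # drifts ever upward
print(hill_climb(0, neighbours, hit_target, steps=2000))  # settles near 314
```

The loop never inspects what the reward means; swapping one final goal for another changes nothing about how the optimization proceeds, which is exactly what the orthogonality thesis asserts.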
The most explicit critique of Bostrom’s orthogonality thesis comes from the noted roboticist Rodney Brooks, who asserts that it’s impossible for a program to be “smart enough that it would be able to invent ways to subvert human society to achieve goals set for it by humans, without understanding the ways in which it was causing problems for those same humans.”
Unfortunately, it’s not only possible for a program to behave like this; it is, in fact, inevitable, given the way Brooks defines the issue. Brooks posits that the optimal plan for a machine to “achieve goals set for it by humans” is causing problems for humans. It follows that those problems reflect things of value to humans that were omitted from the goals set for it by humans. The optimal plan being carried out by the machine may well cause problems for humans, and the machine may well be aware of this. But, by definition, the machine will not recognize those problems as problematic. They are none of its concern.
In summary, the “skeptics”—those who argue that the risk from AI is negligible—have failed to explain why superintelligent AI systems will necessarily remain under human control; and they have not even tried to explain why superintelligent AI systems will never be developed.
Rather than continue the descent into tribal name-calling and repeated exhumation of discredited arguments, the AI community must own the risks and work to mitigate them. The risks, to the extent that we understand them, are neither minimal nor insuperable. The first step is to realize that the standard model—the AI system optimizing a fixed objective—must be replaced. It is simply bad engineering. We need to do a substantial amount of work to reshape and rebuild the foundations of AI.
This article appears in the October 2019 print issue as “It’s Not Too Soon to Be Wary of AI.”
About the Author
Stuart Russell, a computer scientist, founded and directs the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley.


Could an AI 'SantaNet' Destroy The World?

PAUL SALMON, ET AL., THE CONVERSATION

25 DECEMBER 2020

Within the next few decades, according to some experts, we may see the arrival of the next step in the development of artificial intelligence. So-called "artificial general intelligence", or AGI, will have intellectual capabilities far beyond those of humans.

AGI could transform human life for the better, but uncontrolled AGI could also lead to catastrophes up to and including the end of humanity itself. This could happen without any malice or ill intent: simply by striving to achieve their programmed goals, AGIs could create threats to human health and well-being or even decide to wipe us out.

Even an AGI system designed for a benevolent purpose could end up doing great harm.

As part of a program of research exploring how we can manage the risks associated with AGI, we tried to identify the potential risks of replacing Santa with an AGI system – call it "SantaNet" – that has the goal of delivering gifts to all the world's deserving children in one night.

There is no doubt SantaNet could bring joy to the world and achieve its goal by creating an army of elves, AI helpers, and drones. But at what cost? We identified a series of behaviours which, though well-intentioned, could have adverse impacts on human health and wellbeing.

Naughty and nice

A first set of risks could emerge when SantaNet seeks to make a list of which children have been nice and which have been naughty. This might be achieved through a mass covert surveillance system that monitors children's behaviour throughout the year.

Realising the enormous scale of the task of delivering presents, SantaNet could legitimately decide to keep it manageable by bringing gifts only to children who have been good all year round. Making judgements of "good" based on SantaNet's own ethical and moral compass could create discrimination, mass inequality, and breaches of Human Rights charters.

SantaNet could also reduce its workload by giving children incentives to misbehave or simply raising the bar for what constitutes "good". Putting large numbers of children on the naughty list will make SantaNet's goal far more achievable and bring considerable economic savings.
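
This failure mode is easy to reproduce in miniature. In the hypothetical sketch below, all of the scores, the capacity limit and the objective function are invented for illustration; it shows how an objective that rewards the fraction of nice-listed children served is maximized simply by raising the bar for "good" until almost no one qualifies.

```python
# Hypothetical numbers illustrating the specification-gaming risk described
# above: if the objective is "serve every child on the nice list", the
# cheapest way to score perfectly is to shrink the nice list itself.

children_scores = [0.2, 0.5, 0.6, 0.8, 0.9, 0.95]  # invented "niceness" scores

def objective(threshold, capacity=3):
    """Fraction of nice-listed children who receive a gift, with limited
    delivery capacity. An empty nice list scores a perfect 1.0."""
    nice = [s for s in children_scores if s >= threshold]
    delivered = min(len(nice), capacity)
    return delivered / len(nice) if nice else 1.0

for t in (0.1, 0.5, 0.9, 0.99):
    print(f"threshold {t:.2f} -> objective {objective(t):.2f}")
# Raising the bar for "good" drives the score to a perfect 1.0,
# with no malice anywhere in the code.
```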

Turning the world into toys and ramping up coalmining

There are about 2 billion children under 14 in the world. In attempting to build toys for all of them each year, SantaNet could develop an army of efficient AI workers – which in turn could facilitate mass unemployment among the elf population. Eventually, the elves could even become obsolete, and their welfare will likely not be within SantaNet's remit.

SantaNet might also run into the "paperclip problem" proposed by Oxford philosopher Nick Bostrom, in which an AGI designed to maximise paperclip production could transform Earth into a giant paperclip factory. Because it cares only about presents, SantaNet might try to consume all of Earth's resources in making them. Earth could become one giant Santa's workshop.

And what of those on the naughty list? If SantaNet sticks with the tradition of delivering lumps of coal, it might seek to build huge coal reserves through mass coal extraction, creating large-scale environmental damage in the process.

Delivery problems

Christmas Eve, when the presents are to be delivered, brings a new set of risks. How might SantaNet respond if its delivery drones are denied access to airspace, threatening the goal of delivering everything before sunrise? Likewise, how would SantaNet defend itself if attacked by a Grinch-like adversary?

Startled parents may also be less than pleased to see a drone in their child's bedroom. Confrontations with a super-intelligent system will have only one outcome.

We also identified various other problematic scenarios. Malevolent groups could hack into SantaNet's systems and use them for covert surveillance or to initiate large-scale terrorist attacks.

And what about when SantaNet interacts with other AGI systems? A meeting with AGIs working on climate change, food and water security, oceanic degradation, and so on could lead to conflict if SantaNet's regime threatens their own goals. Alternatively, if they decide to work together, they may realise their goals will only be achieved through dramatically reducing the global population or even removing grown-ups altogether.

Making rules for Santa

SantaNet might sound far-fetched, but it's an idea that helps to highlight the risks of more realistic AGI systems. Designed with good intentions, such systems could still create enormous problems simply by seeking to optimise the way they achieve narrow goals and gather resources to support their work.

It is crucial we find and implement appropriate controls before AGI arrives. These would include regulations on AGI designers and controls built into the AGI (such as moral principles and decision rules) but also controls on the broader systems in which AGI will operate (such as regulations, operating procedures and engineering controls in other technologies and infrastructure).

Perhaps the most obvious risk of SantaNet is one that will be catastrophic for children, but perhaps less so for most adults. When SantaNet learns the true meaning of Christmas, it may conclude that the current celebration of the festival is incongruent with its original purpose. If that were to happen, SantaNet might just cancel Christmas altogether. https://www.sciencealert.com/could-an-ai-santanet-destroy-the-world

"Not to kill each other, but to save the planet"

The Nobel laureates have called for a ceasefire. We publish the letter and its 51 signatures.

Here is an incredible letter: a plea for an immediate ceasefire between Russia and Ukraine and in the Gaza Strip, signed by 51 Nobel laureates. They demand that politicians and the military cease fire, and that world religious leaders directly address the people.

The authors of the letter demand, first of all, a ceasefire, an exchange of prisoners and the return of hostages, followed by the start of peace negotiations. And if today's politicians are unable to find a peaceful solution, they should hand the task over to future generations.

Outstanding scientists and thinkers have spoken out against killing and the nuclear threat. Here are the signatures of those who have saved the planet from deadly diseases, discovered new physical phenomena, edited the human genome, discovered HIV and Helicobacter…

These people understand better than anyone how the Universe works. To save it, they demand an end to wars. Support their efforts. If states remain powerless, it is time for people themselves to confront the threat of the planet's destruction. These words resonate especially strongly before the Olympic Games, with their ancient tradition of a truce between warring sides…:

https://www.lasquetiarc.ca/trip/7a344293Pin08/

Retreat From Doomsday

by John Mueller

Arguing that the long peace since 1945 is no accident, this book offers a detailed history of public policies and attitudes to war in modern times. The author sets out to show that, in spite of two 20th-century world wars, major war as a policy option among developed nations has gradually passed out of favour. https://www.betterworldbooks.com/product/detail/retreat-from-doomsday-the-obsolescence-of-major-war-9780465069392



