CHARTER ON THE ETHICS OF ARTIFICIAL INTELLIGENCE
(in the European Union and global context)
The text is sufficiently conceptual to serve as a foundation of values, and at the same time sufficiently concrete to be applied in policy, regulation, and strategy.
PREAMBLE
Recognizing the rapid development of Artificial Intelligence (AI) and its growing impact on human life, society, the economy, and global security; affirming human dignity, freedom, and responsibility as the foundations of civilization; and committing ourselves to shaping a future in which technology serves humanity rather than threatens it, we adopt this Charter on the Ethics of Artificial Intelligence as a universal set of values, principles, and obligations.
I. FUNDAMENTAL VALUES
1. Human dignity and autonomy. AI must always respect human dignity, rights, free will, and personal identity. A human being must never be reduced to a data object or an algorithmic function.
2. Beneficence and non-maleficence. The purpose of AI development and use is to promote well-being and reduce harm to individuals, societies, and the environment.
3. Justice and non-discrimination. AI must not reinforce social inequality, prejudice, or discrimination based on gender, origin, age, belief, political conviction, or other characteristics.
4. Responsibility. Responsibility for the development, deployment, and use of AI systems rests with humans and legally accountable entities.
II. CORE PRINCIPLES
5. Human priority (“Human First”). AI must never make final decisions about human life, freedom, health, or fundamental rights without human oversight.
6. Transparency and explainability. AI systems must be:
• understandable to users and regulators,
• auditable by independent institutions,
• capable of justifying their conclusions.
7. Safety and resilience. AI systems must be designed to:
• prevent malicious use,
• be protected against manipulation,
• be safely halted or corrected.
8. Alignment with human values. AI must operate in accordance with the principles of democracy, human rights, solidarity, and sustainability.
III. PROHIBITIONS AND “RED LINES”
9. Autonomous violence. The following are unacceptable:
• full autonomy of lethal weapon systems,
• the use of AI for mass destruction or repression.
10. Mass surveillance and manipulation. AI must not be used for:
• continuous societal surveillance without legal control,
• psychological, political, or informational manipulation.
11. Replacement of humans in moral responsibility. AI must not replace humans in:
• judicial decision-making,
• political decision-making,
• morally irreversible situations.
IV. RIGHTS AND DUTIES
12. Users’ rights. Everyone has the right to:
• know when they are interacting with AI,
• receive explanations of AI decisions,
• challenge automated decisions,
• opt out of AI use in certain domains.
13. Developers’ obligations. Developers must:
• conduct ethical impact assessments,
• implement safety mechanisms,
• publicly account for the consequences of their systems.
14. Duties of states and institutions. States and international organizations must:
• establish independent AI oversight,
• ensure international cooperation,
• prevent dangerous technological competition without safety standards.
V. SPECIFIC PRINCIPLES OF THE EUROPEAN UNION
15. Protection of democracy and the rule of law. AI in Europe must strengthen:
• citizen participation,
• transparent governance,
• the protection of fundamental rights.
16. Social responsibility. AI use must not:
• widen the digital divide,
• replace humans without reskilling opportunities,
• weaken social cohesion.
17. Sustainable development. AI must promote:
• environmental protection,
• energy efficiency,
• long-term societal benefit.
VI. GLOBAL COOPERATION
18. International AI Safety Council. A global structure should be established to:
• oversee high-risk AI,
• conduct joint safety audits,
• coordinate responses in crisis situations.
19. Open science and shared safety technologies. AI safety research must be:
• internationally accessible,
• based on shared risk reduction rather than military competition.
VII. FINAL PROVISION
20. AI as an ally of humanity. Artificial Intelligence is not an end in itself. It is a tool for expanding the human spirit, knowledge, and compassion. Technology must be intelligent, but humans must remain responsible; algorithms may be powerful, but human values must remain decisive.
Final Note
This Charter may serve:
• as a policy foundation for EU institutions,
• as a code of ethics for companies and developers,
• as a basis for education and public debate.
Roadmap
The Charter is accompanied by a roadmap at three levels: policy, education, and technology. Aligned with the EU approach and applicable in a global context, the roadmap is designed as a phased action plan with short-, medium-, and long-term stages.
DRAFT DECLARATION
ON THE SAFE, ETHICAL AND HUMAN-CENTRED DEVELOPMENT OF ARTIFICIAL INTELLIGENCE (For UN / G20 / OECD format)
PREAMBLE
We, Heads of State and Government, representatives of international organizations and social partners, recognizing the rapid impact of Artificial Intelligence (AI) on human life, the economy, security and democracy, reaffirming the values of the Charter of the United Nations, the Universal Declaration of Human Rights, the Sustainable Development Goals and the OECD principles, and committing ourselves to shaping a future in which technology serves humanity rather than threatens it, adopt this Declaration as a shared political commitment to ensure the safe, ethical and human-centred development of AI.
I. FUNDAMENTAL PRINCIPLES
1. Human Primacy. AI must never supplant human moral responsibility, free will or fundamental rights. Final decisions concerning life, health, freedom and rights remain under human authority.
2. Transparency and Accountability. AI systems must be explainable, auditable and subject to clearly defined legal responsibility.
3. Safety and Non-Maleficence. The development and use of AI shall be directed toward public welfare, risk reduction and the prevention of malicious use.
4. Fairness and Inclusion. AI must not promote discrimination or social inequality; it shall foster equal opportunities and sustainable development.
II. “RED LINES”
5. We commit to prohibiting:
a) the use of fully autonomous lethal weapons;
b) the use of AI for mass surveillance without legal oversight;
c) the use of AI for political manipulation, election interference or public disinformation;
d) AI systems that systematically violate human rights.
III. GOVERNANCE AND OVERSIGHT
6. We agree to develop risk-based AI regulation at national and international levels, including mandatory ethical and impact assessments for high-risk systems.
7. We support the establishment of an International AI Safety Council within the United Nations or a comparable framework to:
a) oversee high-risk AI;
b) coordinate incident reporting;
c) conduct joint safety audits.
8. We commit to ensuring users’ rights to:
a) know when they are interacting with AI;
b) receive explanations of automated decisions;
c) challenge and appeal AI decisions that affect their rights.
IV. EDUCATION, SOCIETY AND THE LABOUR MARKET
9. We commit to integrating AI fundamentals, ethics and safety into education systems at all levels and to promoting public digital literacy.
10. We support reskilling programmes and social protection mechanisms to mitigate the impact of automation on employment.
V. INNOVATION AND THE PUBLIC GOOD
11. We promote research in responsible and explainable AI, safety and value alignment, as well as open and auditable platforms for the public good.
12. We support the use of AI in healthcare, education, environmental protection and sustainable development, ensuring ethical and safe implementation.
VI. FINANCING AND INTERNATIONAL COOPERATION
13. We call for the establishment of a Global AI Safety and Ethics Fund to finance safety research, education, open platforms and oversight capacity, with particular support for developing countries.
14. We encourage public–private partnerships that adhere to the principles of this Declaration.
VII. IMPLEMENTATION AND REVIEW
15. We call upon all states and stakeholders to:
a) align national regulations with the principles of this Declaration;
b) report regularly on progress;
c) participate in joint safety testing and audits.
16. We agree to periodically review the implementation of this Declaration and to update it in light of technological developments and emerging risks.
CONCLUSION
This Declaration affirms our shared commitment to ensure that Artificial Intelligence serves humanity as a safe helper, a reliable assistant and an instrument of sustainable development, while preserving human dignity, freedom and responsibility as fundamental values.
