Tuesday, 29 January 2019

Not of This World



                                                                                     Respice finem

          
Not of This World

I am sharing what I myself have lived through, felt, realised and understood, in order to ease my contemporaries' choices in life, to guard them against fateful mistakes and misguided decisions, to help them shed formulaic thinking and stereotyped perception, and to warn them in good time of threats and dangers.
          I do not claim to offer ready-made recipes or indisputable solutions to today's problems, but I am certain that under no circumstances may we make peace with evil or admit helplessness before the repressive machinery of power: a system that has managed to subjugate us and lord it over us all, even though it was built through our common fault. We ourselves share responsibility for its existence in this form.
          There is therefore no point in helplessly complaining and lamenting life's hardships; instead, we must look for effective means of reforming the situation for the better. Within the limits of our abilities we should try to propose solutions fit for the digital age, put them forward for public discussion and subject them to constructive criticism, so that in the end, on the basis of a shared vision and harmonised views, we can mobilise like-minded people, consolidate voters and, with joint effort, put the chosen solutions into practice.
          To this end, the author invites readers to consider the following initiatives and methods of democratic struggle:
a voters' demand to legalise, and to make a duty for politicians, the presentation to the electorate of an objective description of their personality and an assessment of their individual competence (possibly in a form similar to an SIP), so that every voter can choose the state's leaders and political leadership in line with his or her vital interests through democratically organised, fair elections. See "Politiķu kompetences izvērtējuma iespējas" ("Options for Assessing Politicians' Competence"): http://ceihners.blogspot.com/
the creative use of audiovisual communication, in the form of animation, to bring enlightenment to society in a way that ordinary people find easy to grasp and understand. This would help minimise relapses into mass political light-mindedness and motivate voters to build immunity against propaganda clichés and to break out of the darkness of prejudice. See "Iespēja atradināt no politiskās lētticības" ("A Chance to Break the Habit of Political Credulity"): http://ceihners.blogspot.com/
building effective feedback that would ensure citizens' active participation in the governance of the state and their oversight and control of all structures of public power, transforming today's imitation democracy, step by step, into society's direct self-government. See "Varas degradācija: cēloņi un profilakse" ("The Degradation of Power: Causes and Prevention"): http://ceihners.blogspot.com/
mobilising collective reason by organising and establishing a national Consilium (Arbitration) of universally recognised spiritual leaders, intellectuals rich in moral merit, outstanding professionals and prominent representatives of the intelligentsia, which, within defined rights, could safely be entrusted with the functions of objective, fair, unappealable judges and wise experts, to some extent taking over, and interpreting more broadly, the current prerogatives of the Constitutional Court. See "Kāda būs Nākotnes pasaule?!" ("What Will the World of the Future Be Like?!"): http://ceihners.blogspot.com/
initiating the socially responsible use of artificial intelligence (AI) in ensuring the effective reform and rational reorganisation of the machinery of state governance, entrusting AI with the duties of a competent adviser and assistant, an objective expert and an outstanding analyst. See "Kāda būs Nākotnes pasaule?!" ("What Will the World of the Future Be Like?!"): http://ceihners.blogspot.com/
curbing the influence of false information and the spread of fake news by organising media-literacy campaigns and applying innovative fact-checking methods. See "Patiesības nošķiršana no meliem: mūsdienu iespējas" ("Separating Truth from Lies: Today's Options"): http://ceihners.blogspot.com/
reining in corruption with the help of blockchain technology, as sketched in the example after this list. See "Blokķēdes platforma korupcijas apkarošanai" ("A Blockchain Platform for Fighting Corruption"): http://ceihners.blogspot.com/
a purposeful transition to a depoliticised model of government. See V.V. Semenov, «Основы неполитического государства» ("Foundations of the Non-Political State"): http://novainfo.ru/article/14169
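To make the blockchain item concrete, here is a minimal, illustrative Python sketch (not any existing platform; all record fields are invented) of the property that makes blockchains attractive against corruption. Each public record is hashed together with the hash of the previous record, so quietly rewriting history later breaks the chain and is detectable by anyone.

    import hashlib
    import json

    def entry_hash(record: dict, prev: str) -> str:
        # Canonical JSON so identical content always hashes identically.
        payload = json.dumps({"record": record, "prev": prev}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    class PublicLedger:
        """Append-only, hash-chained log of public decisions (illustrative only)."""

        def __init__(self):
            self.chain = []

        def append(self, record: dict) -> None:
            prev = self.chain[-1]["hash"] if self.chain else "0" * 64
            self.chain.append({"record": record, "prev": prev,
                               "hash": entry_hash(record, prev)})

        def verify(self) -> bool:
            # Recompute every link; any edit to an old record breaks the chain.
            prev = "0" * 64
            for e in self.chain:
                if e["prev"] != prev or e["hash"] != entry_hash(e["record"], prev):
                    return False
                prev = e["hash"]
            return True

    ledger = PublicLedger()
    ledger.append({"tender": "road repair", "winner": "Firm A", "sum_eur": 100000})
    ledger.append({"tender": "school roof", "winner": "Firm B", "sum_eur": 55000})
    assert ledger.verify()
    ledger.chain[0]["record"]["winner"] = "Firm C"  # after-the-fact tampering
    assert not ledger.verify()                      # the broken chain exposes it

A real anti-corruption platform would add distributed replication and digital signatures on top of this chaining, so that no single office can both hold the records and rewrite them.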

… continued in the book "Kā atbrīvoties no totalitārisma skavām. Izaicinājums pārvarēt politisko vientiesību" ("How to Get Rid of the Shackles of Totalitarianism: The Challenge of Overcoming Political Simple-Mindedness"): https://buki.lv/product/ka-atbrivoties-no-totalitarisma-skavam-e-gramata

                            *   *    *

How can we create a new generation of leaders for the post-capitalist era?

Claudio A. Rivera, 26 November 2019, 7:28

Are capitalism's days numbered? Do we know what awaits us in the future? Do we know how to educate people for the post-capitalist era? In this article I will share my thoughts on these important questions.
We live in an era of change. There is no "planet B", and it seems society has finally understood this. It has become clear to humanity that "continuous progress" is not enough. The market is full of products people do not need, new technologies often get in the way of human well-being, and stress and the other diseases of "modern life" have become epidemic. The question arises: "Why do we keep consuming these products and promoting this way of life?" Young people, too, have for some time been taking a growing interest in complex questions of global development. Finally, the world's leaders have understood that such vast global inequality and instability are not sustainable.
What is really going on? Simplifying the processes at work in society, one can say that society is trying to influence, restrict and in some cases even replace capitalism's central figure, the "shareholder". This is not done by disparaging private property, as communism would. The key idea is to restore the social function of private property. Humanity increasingly wants to "humanise capitalism".
As I mentioned, the world's leaders have heard this desire: the UN has undertaken to create and promote the Sustainable Development Goals. A significant number of corporate leaders have stated plainly that capitalism needs a "reformulation". Politicians, too, are busy with new rounds of re-regulation (for example, of data protection) and the promotion of protectionism.
What is the foundation of post-capitalism?
In this complex era of change, when the market economy is starting to play a different role in people's lives, the way we educate the participants and builders of the future economy must change as well. There are various philosophical modes of thinking through which we can view the world and judge whether something is good or bad, useful or useless. In my view, the two that best illustrate the dilemma we face in this era of change are utilitarianism and humanism.
Both the capitalist system and the education of its leaders have rested on a single ideological foundation: utilitarianism. Utilitarianism is a philosophical mode of thinking which holds that the chief ethical standard is to maximise the utility or happiness of the greatest number of people. In business, this paradigm spreads in practice through the "business principle" that managers must maximise shareholder value. Cost-benefit analysis helps them do so, and it has won its popularity most likely because of its simplicity.
Utilitarianism is not grounded in values or norms; it judges action on the basis of cost-benefit analysis. In the new post-capitalist era this paradigm is no longer acceptable in a growing share of life's domains, business included. Not long ago the famous American management guru Peter F. Drucker sharply criticised the principle of maximising shareholder value: "The confusion is created by the mistaken belief that the so-called profit motive of the businessman explains his behaviour or his drive towards what he believes to be right action. The claim that such a profit motive exists is highly questionable."
In contrast to utilitarianism, the humanist perspective is directed first of all not at results and consequences but at the rightness or wrongness of the action itself. In humanism the human being is central; in utilitarianism the human being can be central only when that maximises value for the decision-maker. That is why, in an economy built on utilitarian principles, we keep offering products the consumer does not really need.
How can we educate leaders for the post-capitalist era?
Utilitarianism has strongly influenced education as well. If the guiding principle in education is utilitarianism, we create programmes with pronounced specialisation and a fragmentation of disciplines, which unfortunately leads to severe intellectual narrowing. If the guiding principle is humanism, we create programmes that foster critical thinking (a broad intellectual horizon) and interdisciplinarity. The reason is this: the utilitarian mode of thinking is built to produce efficiency, the humanist mode of thinking to produce excellence. And we live in an age in which society is no longer satisfied with the continuous efficiency and progress that utilitarian principles have produced.
Our inability to solve many-sided social problems stems, at root, from "box thinking". Financiers do not work with engineers, engineers do not work with artists, and so on. Real problems do not respect the disciplinary boundaries learned in a utilitarian education. The ability to work effectively in one specific field does not correlate with the ability to solve problems made up of countless aspects and domains.
If we want to create a new generation of leaders that meets the post-capitalist era's demand to look at the world through a wider prism than cost-benefit analysis, we must offer future leaders interdisciplinary programmes. These are programmes in which participants critically seek comprehensive answers to real problems, drawing on both "theoretical disciplines" (such as philosophy and mathematics) and "practical disciplines" (such as finance, engineering and medicine). In interdisciplinary programmes the point is not to accumulate blocks of knowledge efficiently; the point is to skilfully "see reality anew" through integrated sciences.
I expect that in the near future the leading educational institutions will be those whose programmes are structured not around courses but around questions that matter to society. Lecturers will not be tied to single disciplines, teaching will be led by interdisciplinary teams, and students will spend more time in laboratories and cafés than in lecture halls.
There are already universities in the world beginning to work hard in precisely this direction of educating broad minds. Some are traditional Ivy League schools, such as Harvard University; others are newcomers, such as Minerva at KGI and Quest College. Following this trend, a striking example of interdisciplinary education has appeared in Latvia: the Baltic IT Leaders programme (www.bitl.lv), founded jointly by Latvia's leading universities.
The post-capitalist era is a period in which the assumptions of the market economy are being called into question. One thing, though, is clear: leading companies will be expected to take a more humane approach to their processes and decision-making, and innovative, sustainable solutions will be in ever greater demand. Moreover, the humanist vision of business regards every company as a place where, through shared work, people develop personally and socially. Building such businesses requires superbly educated leaders who are experts both in product development and in understanding people. I am convinced that only interdisciplinary education can build the leadership capacity needed to foster society's long-term development.



A letter from Albert Einstein to his daughter: about The Universal Force which is LOVE

April 15, 2015 · Ines Radman

Reposted from: https://suedreamwalker.wordpress.com/2015/04/15/a-letter-from-albert-einstein-to-his-daughter-about-the-universal-force-which-is-love/

In the late 1980s, Lieserl, the daughter of the famous genius, donated 1,400 letters, written by Einstein, to the Hebrew University, with orders not to publish their contents until two decades after his death. This is one of them, for Lieserl Einstein. More can be found about Lieserl here.

…”When I proposed the theory of relativity, very few understood me, and what I will reveal now to transmit to mankind will also collide with the misunderstanding and prejudice in the world.
I ask you to guard the letters as long as necessary, years, decades, until society is advanced enough to accept what I will explain below.
There is an extremely powerful force that, so far, science has not found a formal explanation to. It is a force that includes and governs all others, and is even behind any phenomenon operating in the universe and has not yet been identified by us.

This universal force is LOVE.
When scientists looked for a unified theory of the universe they forgot the most powerful unseen force.

Love is Light, that enlightens those who give and receive it.
Love is gravity, because it makes some people feel attracted to others.

Love is power, because it multiplies the best we have, and allows humanity not to be extinguished in their blind selfishness. Love unfolds and reveals.

For love we live and die.
Love is God and God is Love.

This force explains everything and gives meaning to life. This is the variable that we have ignored for too long, maybe because we are afraid of love because it is the only energy in the universe that man has not learned to drive at will.

To give visibility to love, I made a simple substitution in my most famous equation.

If instead of E = mc2, we accept that the energy to heal the world can be obtained through love multiplied by the speed of light squared, we arrive at the conclusion that love is the most powerful force there is, because it has no limits.
After the failure of humanity in the use and control of the other forces of the universe that have turned against us, it is urgent that we nourish ourselves with another kind of energy…

If we want our species to survive, if we are to find meaning in life, if we want to save the world and every sentient being that inhabits it, love is the one and only answer.
Perhaps we are not yet ready to make a bomb of love, a device powerful enough to entirely destroy the hate, selfishness and greed that devastate the planet.

However, each individual carries within them a small but powerful generator of love whose energy is waiting to be released.
When we learn to give and receive this universal energy, dear Lieserl, we will have affirmed that love conquers all, is able to transcend everything and anything, because love is the quintessence of life.

I deeply regret not having been able to express what is in my heart, which has quietly beaten for you all my life. Maybe it’s too late to apologize, but as time is relative, I need to tell you that I love you and thanks to you I have reached the ultimate answer!”

Your father Albert Einstein

https://offradranch.com/lv/celebrities/3874-lieserl-einstein-bio-age-family-facts-about-albert-einstein8217s-daughter.html ("This document is not by Einstein. The family letters donated to the Hebrew University - referred to in this rumor - were not given by Lieserl. They were given by Margot Einstein, who was Albert Einstein's stepdaughter.") https://www.huffpost.com/entry/the-truth-behind-einsteins-letter-on-the-universal-force-of-love_b_7949032



Thursday, 24 January 2019

Human as God




                                                                                        Sacer esto!


    
                              Human as God

          Human is a dual being by nature: in each of us there is immanent Good, as the embodiment of the Divine, and Evil, as a demonic reflection, the emancipation of the dark side of human nature.
          How we balance Good and Evil in real life, which of the two we develop, nurture and stimulate in ourselves, depends on our own will and choice, shaped by our virtues and by the influence of our environment. God within us is as powerful as we have strengthened Him and provided Him with the energy of our Love, or weakened Him by spreading the energy of evil, devilry and hatred.
           It is only through the collaboration of believers of different religions and the consolidation of all the faithful in the name of the one God that we can come to a consensus underpinned by Love and based on Faith in God. And then the power of God that we perceive, that lives in us and is felt by us, is multiplied at an exponential rate. And people who believe in God become as powerful as their true Love for God within themselves is, as united as they see God in their fellow citizens, in the totality of the people who have come together in God.
          Realising our humane essence and our human purpose and doing good deeds, we reveal the Divine spark and also bring goodness to God. We feel ourselves to be a Divine particle and approach God as the Supreme Reason. Eradicating evil, lies and hatred, suppressing the feelings of aggression, arrogance and revenge and perfecting our personality in virtue and the Law of God, we increasingly begin to resemble the Divine creation. This is when the world around us begins to transform and becomes more good-natured and bright, filled with Love and built on justice… Read more: https://www.amazon.com/HOW-GET-RID-SHACKLES-TOTALITARIANISM-ebook/dp/B0C9543B4L/ref=sr_1_1?crid=19WW1TG75ZU79&keywords=HOW+TO+GET+RID+OF+THE+SHACKLES+OF+TOTALITARIANISM&qid=1687700500&s=books&sprefix=how+to+get+rid+of+the+shackles+of+totalitarianism%2Cstripbooks-intl-ship%2C181&sr=1-1
         
MAN 2.0 R-EVOLUTION (Series 6x52'); Человек 2.0 (Man 2.0)

For 4.5 billion years mutation, gene flow and natural selection have driven the evolutionary process and forged life on earth. Today cultural evolution is taking over, changing how we live, how we think and how we die. For the first time in history humankind has the knowledge and the tools to intervene directly in its own evolutionary process. The six-episode documentary TV series investigates the growing role of science in shaping human life.
We meet the most prominent scientists, anthropologists and futurologists to explore the state of the art of the human species: from a genetically modified homo sapiens to homo technologicus and digitalis.
The series explores past and present scenarios and looks at the future, tracing the journey that mankind is making to modify human minds and bodies, as well as the length and quality of our lives. This is altogether turning us into a new species: Man 2.0




How Wisdom Can Change Your Life

In San Francisco Chronicle March 25, 2019
By Deepak Chopra, MD
It seems perverse that the easier life becomes, the worse our problems. Technology has created life-changing innovations like the Internet that are directly linked with terrorist attacks, giving like-minded fanatics instant global communication. Computers gave rise to social media, which has led to cruel bullying at school, fake news, conspiracy plots, and the anonymity to mount vicious personal attacks—all of these seem as endemic as hacking, another insoluble problem created by technology.
One could go on practically forever, and it wouldn’t be necessary to blame current technology either—the internal combustion engine is directly connected to climate change, and nuclear fission led to the horrors of atomic warfare. But my point isn’t to bash technology; we owe every advance in the modern world to it—except one.
Technology is based on higher education, and whatever its benefits, higher education has almost totally lost interest in wisdom. Wisdom isn’t the same as knowledge. You can collect facts that lead to the understanding of things, but wisdom is different. I’d define it as a shift in allegiance, away from objective knowledge toward self-awareness.
The Greek dictum “Know thyself” doesn’t make sense if the self you mean is the ego-personality, with its selfish demands, unending desires, and lack of happiness. Another self is meant, which isn’t a person’s ego but a state of consciousness. “Self” might not even be a helpful term, despite the age-old references to a higher self identified with enlightenment. It is more helpful to say that the pursuit of wisdom is about waking up.
Waking up is a metaphor for the conscious life, and the conscious life is what wisdom leads to. Every day we are driven by unconscious impulses and desires, and unexamined processes go on beneath the surface of the mind that lead to anxiety, depression, low self-esteem, self-destructive behavior, and every kind of pointless discord, from household tensions to war. The outbreak of World War I led to senseless slaughter, as did the excesses of Communism. These consequences were unforeseen, yet in retrospect it is obvious that World War I was a kind of eruption from the unconscious, of pride, anger, stubborn nationalism, and xenophobic tendencies that people were harboring as a matter of course.
European anti-Semitism wasn’t invented in Germany by a single fanatical Nazi; it was accepted in the most polite and civilized circles almost everywhere. In one way or another, allowing our unconscious to go unexamined has caused the greatest and longest suffering in human history. It was suffering and finding a way to end it that became the foundation of Buddhism, but any spiritual teaching that will show people how to wake up also aims to bring them out of suffering.
In that sense, wisdom has a definite purpose, but escaping the ills and woes of the unconscious life is secondary to the main purpose of waking up, which is to know who we really are. The answer can be formulated in a few words: humans are a species of consciousness whose special trait is self-awareness. Being self-aware, we have the capacity to access the very source of awareness.
At first sight this ability doesn’t necessarily sound positive. The vast majority of people have become expert at denial and were taught from childhood not to dwell on themselves, an activity deemed self-centered if not totally solipsistic. People are also expert at keeping secrets from themselves, at going along to get along, at valuing social conformity and fitting in. These habits unravel when self-awareness awakens. Not everyone wants to let them go, for obvious reasons.
But the worst aspect of being unconscious, or asleep to use the metaphor of waking up, is self-limitation. We all go around with core beliefs about how insignificant a single individual is, how risky it would be to step out of the norm, and how only the gifted few rise above the average. The wise in every generation have asserted the opposite, that the source of consciousness makes human potential infinite. We can think an infinite number of new thoughts and say things never before said. In fact, there is no arbitrary limit on any trait that makes us human: intelligence, creativity, insight, love, discovery, curiosity, invention, and spiritual experiences of every kind.
We are a species of consciousness whose great pitfall isn’t evil but “mind-forg’d manacles,” to borrow a phrase from the poet William Blake. We make up mental constructs, invent stories around them, and tell the next generation that these stories are true. One story says that women are inferior, a complex tale that gave rise to a thousand injustices and false beliefs. Us-versus-them thinking leads to stories about racism and nationalism that caused their own barbarous results.
Waking up allows us to escape all stories and to live free of self-limiting mental constructs. The real question is whether it can be done. Can you and I wake up? If so, how do we go about it? Are there awakened teachers who can provide examples of what it means to live the conscious life? This is exactly where wisdom enters the picture. Without living examples of awakened individuals, the whole enterprise would be trapped in a limbo of fantasy and wishful thinking. But when a society values wisdom, it turns out that the awakened exist among us and always have.
Anyone who wants to wake up is fortunate to be alive now, because despite our global problems, irrational behavior, and self-destructive denial, modern society is open to ready communication about every topic, including the exploration of higher consciousness. Where prior generations had little grasp of higher consciousness beyond the precepts of religion, millions of people today can walk their own path to self-awareness, choosing to include God, the soul, organized religion, and scriptures as they see fit, or to avoid them. Even “spirituality” is a term you can adopt or ignore—the real purpose of waking up is about consciousness.
We have always had the potential to be wise by using self-awareness to explore who we really are. A society driven by consumerism, celebrity worship, video games and social media gossip, and indifference to massive social problems feels like it could never find wisdom, or even the first impulse to wake up. But I’d argue that we are the most fortunate society to wake up in, simply because higher consciousness is open to anyone. I count this as the greatest opportunity facing us, to see that waking up is possible and to hasten toward it as quickly as we can.

Deepak Chopra MD, FACP, founder of The Chopra Foundation and co-founder of The Chopra Center for Wellbeing, is a world-renowned pioneer in integrative medicine and personal transformation, and is Board Certified in Internal Medicine, Endocrinology and Metabolism. He is a Fellow of the American College of Physicians and a member of the American Association of Clinical Endocrinologists. Chopra is the author of more than 85 books translated into over 43 languages, including numerous New York Times bestsellers. His latest books are The Healing Self, co-authored with Rudy Tanzi, Ph.D., and Quantum Healing (Revised and Updated): Exploring the Frontiers of Mind/Body Medicine. Chopra hosts a new podcast, Infinite Potential, available on iTunes or Spotify. www.deepakchopra.com


https://www.linkedin.com/pulse/how-wisdom-can-change-your-life-deepak-chopra-md-official-/?trk=





09.20.19

The rise of AI has led to tattered privacy protections and rogue algorithms. Here’s what we can do about it.
This article is part of Fast Company’s editorial series The New Rules of AI. More than 60 years into the era of artificial intelligence, the world’s largest technology companies are just beginning to crack open what’s possible with AI—and grapple with how it might change our future.


Consumers and activists are rebelling against Silicon Valley titans, and all levels of government are probing how they operate. Much of the concern is over vast quantities of data that tech companies gather—with and without our consent—to fuel artificial intelligence models that increasingly shape what we see and influence how we act.
If “data is the new oil,” as boosters of the AI industry like to say, then scandal-challenged data companies like Amazon, Facebook, and Google may face the same mistrust as oil companies like BP and Chevron. Vast computing facilities refine crude data into valuable distillates like targeted advertising and product recommendations. But burning data pollutes as well, with faulty algorithms that make judgments on who can get a loan, who gets hired and fired, even who goes to jail.
The extraction of crude data can be equally devastating, with poor communities paying a high price. Sociologist and researcher Mutale Nkonde fears that the poor will sell for cheap the rights to biometric data, like scans of their faces and bodies, to feed algorithms for identifying and surveilling people. “The capturing and encoding of our biometric data is going to probably be the new frontier in creating value for companies in terms of AI,” she says.
The further expansion of AI is inevitable, and it could be used for good, like helping take violent images off the internet or speeding up the drug discovery process. The question is whether we can steer its growth to realize its potential benefits while guarding against its potential harms. Activists will have different notions of how to achieve that than politicians or heads of industry do. But we’ve sought to cut across these divides, distilling the best ideas from elected officials, business experts, academics, and activists into five principles for tackling the challenges AI poses to society.
1. CREATE AN FDA FOR ALGORITHMS
Algorithms are impacting our world in powerful but not easily discernable ways. Robotic systems aren’t yet replacing soldiers as in The Terminator, but instead they’re slowly supplanting the accountants, bureaucrats, lawyers, and judges who decide benefits, rewards, and punishment. Despite the grown-up jobs AI is taking on, algorithms continue to use childish logic drawn from biased or incomplete data.
Cautionary tales abound, such as a seminal 2016 ProPublica investigation that found law enforcement software was overestimating the chance that black defendants would re-offend, leading to harsher sentences. In August, the ACLU of Northern California tested Rekognition, Amazon’s facial-recognition software, on images of California legislators. It matched 26 of 120 state lawmakers to images from a set of 25,000 public arrest photos, echoing a test the ACLU did of national legislators last year. (Amazon disputes the ACLU’s methodology.)
Faulty algorithms charged with major responsibilities like these pose the greatest threat to society—and need the greatest oversight. “I advocate having an FDA-type board where, before an algorithm is even released into usage, tests have been run to look at impact,” says Nkonde, a fellow at Harvard University’s Berkman Klein Center for Internet & Society. “If the impact is in violation of existing laws, whether it be civil rights, human rights, or voting rights, then that algorithm cannot be released.”
Nkonde is putting that idea into practice by helping write the Algorithmic Accountability Act of 2019, a bill introduced by U.S. Representative Yvette Clarke and Senators Ron Wyden and Cory Booker, all of whom are Democrats. It would require companies that use AI to conduct “automated decision system impact assessments and data protection impact assessments” to look for issues of “accuracy, fairness, bias, discrimination, privacy, and security.”
These would need to be in plain language, not techno-babble. “Artificial intelligence is . . . a very simple concept, but people often explain it in very convoluted ways,” says Representative Ro Khanna, whose Congressional district contains much of Silicon Valley. Khanna has signed on to support the Algorithmic Accountability Act and is a co-sponsor of a resolution calling for national guidelines on ethical AI development.
Chances are slim that any of this legislation will pass in a divided government during an election year, but it will likely influence the discussion in the future (for instance, Khanna co-chairs Bernie Sanders’s presidential campaign).
2. OPEN UP THE BLACK BOX OF AI FOR ALL TO SEE
Plain-language explanations aren’t just wishful thinking by politicians who don’t understand AI, according to someone who certainly does: data scientist and human rights activist Jack Poulson. “Qualitatively speaking, you don’t need deep domain expertise to understand many of these issues,” says Poulson, who resigned his position at Google to protest its development of a censored, snooping search engine for the Chinese market.
To understand how AI systems work, he says, civil society needs access to the whole system—the raw training data, the algorithms that analyze it, and the decision-making models that emerge. “I think it’s highly misleading if someone were to claim that laymen cannot get insight from trained models,” says Poulson. The ACLU’s Amazon Rekognition tests, he says, show how even non-experts can evaluate how well a model is working.


AI can even help evaluate its own failings, says Ruchir Puri, IBM Fellow and the chief scientist of IBM Research who oversaw IBM’s AI platform Watson from 2016 to 2019. Puri has an intimate understanding of AI’s limitations: Watson Health AI came under fire from healthcare clients in 2017 for not delivering the intelligent diagnostic help promised—at least not on IBM’s optimistic timeframe.
“We are continuously learning and evolving our products, taking feedback, both from successful and, you know, not-so-successful projects,” Puri says.
IBM is trying to bolster its reputation as a trustworthy source of AI technology by releasing tools to help make it easier to understand. In August, the company released open-source software to analyze and explain how algorithms come to their decisions. That follows on its open-source software from 2018 that looks for bias in data used to train AI models, such as those assigning credit scores.
“This is not just, ‘Can I explain this to a data scientist?'” says Puri. “This is, ‘Can I explain this to someone who owns a business?'”
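As a rough illustration of the kind of check such open-source fairness toolkits automate, the sketch below computes one standard metric, disparate impact (the ratio of favourable-outcome rates between an unprivileged and a privileged group), over an invented set of credit decisions. The 0.8 threshold is the widely used "four-fifths rule"; this is a toy example, not IBM's actual API.

    import pandas as pd

    # Invented credit-scoring outcomes for two demographic groups.
    df = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   1,   0,   1,   0,   0,   0],
    })

    rates = df.groupby("group")["approved"].mean()
    disparate_impact = rates["B"] / rates["A"]  # unprivileged over privileged
    print(f"approval rates:\n{rates}")
    print(f"disparate impact: {disparate_impact:.2f}")

    # Four-fifths rule: a ratio below 0.8 warrants a closer audit.
    if disparate_impact < 0.8:
        print("potential adverse impact: audit the model and its training data")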
3. VALUE HUMAN WISDOM OVER AI WIZARDRY
The overpromise of IBM Watson indicates another truth: AI still has a long way to go. And as a result, humans should remain an integral part of any algorithmic system. “It is important to have humans in the loop,” says Puri.
Part of the problem is that artificial intelligence still isn’t very intelligent, says Michael Sellitto, deputy director of Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI). “If you take an algorithm out of the specific context for which it was trained, it fails quite spectacularly,” he says.
That’s also the case when algorithms are poorly trained with biased or incomplete data—or data that doesn’t prepare them for nuance. Khanna points to Twitter freezing the account of Senate Majority Leader Mitch McConnell’s campaign for posting a video of people making “violent threats.” But they were protestors against McConnell, whose team was condemning the violent threats, not endorsing them.
Because of AI’s failings, human judgment will always have to be the ultimate authority, says Khanna. In the case of Twitter’s decision to freeze McConnell’s account, “it turns out that the context mattered,” he says. (It’s not clear if Twitter’s decision was based on algorithms, human judgment, or both.)


But the context of humans making decisions also matters. For instance, Khanna is collaborating with Stanford HAI to develop a national AI policy framework, which raises its own questions of bias. The economy of Khanna’s district depends on the AI titans, whose current and former leaders dominate HAI’s Advisory Council. Industry leaders who have bet their future on AI will likely have a hard time making fair decisions that benefit everyone, not just businesses.
“That’s why I am putting so much effort into advocating for them to have more members of civil society in the room and for there to be at least some accountability,” says Poulson. He led a petition against an address by former Google CEO Eric Schmidt that has been planned for HAI’s first major conference in October.
Stanford has since added two speakers—Algorithmic Justice League founder Joy Buolamwini and Stony Brook University art professor Stephanie Dinkins—whom Poulson considers to be “unconflicted.” (Stanford says that it was already recruiting the two as speakers before Poulson’s petition.)
Humans are making their voices heard inside big tech companies as well. Poulson is one of many current and former Googlers to sound the alarm about ethical implications of the company’s tech development, such as the Maven program to provide AI to the Pentagon. And tech worker activism is on the rise at other big AI powerhouses, such as Amazon and Microsoft.
4. MAKE PRIVACY THE DEFAULT
At the heart of many of these issues is privacy—a value that has long been lacking in Silicon Valley. Facebook founder Mark Zuckerberg’s motto, “Move fast and break things,” has been the modus operandi of artificial intelligence, embodied in Facebook’s own liberal collection of customer data. Part of the $5 billion FTC settlement against Facebook was for not clearly informing users that it was using facial-recognition technology on their uploaded photos. The default is now to exclude users from face scanning unless they choose to participate. Such opt-ins should be routine across the tech industry.
“We need a regulatory framework for data where, even if you’re a big company that has a lot of data, there are very clear guidelines about how you can use that data,” says Khanna.
That would be a radical shift for Big Tech’s freewheeling development of AI, says Poulson, especially since companies tend to incentivize quick-moving development. “The way promotions work is based upon products getting out the door,” he says. “If you convince engineers not to raise complaints when there is some fundamental privacy or ethics violation, you’ve built an entire subset of the company where career development now depends upon that abuse.”
In an ideal world, privacy should extend to never collecting some data in the first place, especially without consent. Nkonde worked with Representative Yvette Clarke on another AI bill, one that would prohibit the use of biometric technology like face recognition in public housing. Bernie Sanders has called for a ban on facial recognition in policing. California is poised to pass a law that bans running facial recognition programs on police body camera footage. San Francisco, Oakland, and Somerville, Massachusetts, have banned facial recognition technology by city government, and more cities are likely to institute their own bans. (Still, these are exceptions to widespread use of facial recognition by cities across the United States.)
Tech companies tend to argue that if data is anonymized, they should have free rein to use it as they see fit. Anonymization is central to Khanna’s strategy to compete with China’s vast data resources.
But it’s easy to recover personal information from purportedly anonymized records. For instance, a Harvard study found that 87% of Americans can be identified by their unique combination of birth date, gender, and zip code. In 2018, MIT researchers identified Singapore residents by analyzing overlaps in anonymized data sets of transit trips and mobile phone logs.
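A toy version of such a re-identification (linkage) attack fits in a few lines: joining a "de-identified" data set with a public one on exactly the quasi-identifiers the Harvard study names (birth date, gender, zip code) re-attaches names wherever the combination is unique. All records below are fabricated for illustration.

    import pandas as pd

    # "Anonymized" records: names removed, quasi-identifiers kept.
    medical = pd.DataFrame({
        "birth_date": ["1971-07-31", "1984-02-02"],
        "gender":     ["F", "M"],
        "zip":        ["02138", "94103"],
        "diagnosis":  ["hypertension", "asthma"],
    })

    # Public voter roll carrying names plus the same three attributes.
    voters = pd.DataFrame({
        "name":       ["J. Smith", "R. Jones"],
        "birth_date": ["1971-07-31", "1984-02-02"],
        "gender":     ["F", "M"],
        "zip":        ["02138", "94103"],
    })

    # Joining on the quasi-identifiers re-identifies every unique combination.
    reidentified = medical.merge(voters, on=["birth_date", "gender", "zip"])
    print(reidentified[["name", "diagnosis"]])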
5. COMPETE BY PROMOTING, NOT INFRINGING, CIVIL RIGHTS
The privacy debate is central to the battle between tech superpowers China and the United States. The common but simplistic view of machine learning is that the more data, the more accurate the algorithm. China’s growing AI prowess benefits from vast, unfettered information collection on 1.4 billion residents, calling into doubt whether a country with stricter privacy safeguards can amass sufficient data to compete.
But China’s advantage comes at a huge price, including gross human rights abuses, such as the deep surveillance of the Uighur Muslim minority. Omnipresent cameras tied to facial recognition software help track residents, for instance, and analysis of their social relationships is used to assess their risk to the state.
Chinese citizens voluntarily give up privacy far more freely than Americans do, according to Taiwanese-American AI expert and entrepreneur Kai-Fu Lee, who leads the China-based VC firm Sinovation Ventures. “People in China are more accepting of having their faces, voices, and shopping choices captured and digitized,” he writes in his 2018 book AI Superpowers: China, Silicon Valley, and the New World Order.
That may be changing. The extensive data collection by viral Chinese face-swapping app Zao provoked outrage not only in the West, but in China as well, forcing Zao to update its policy.
And the country with the most data doesn’t automatically win, anyway. “This is more of a race for human capital than it is for any particular data source,” says Sellitto of Stanford’s HAI. While protecting privacy rights may slightly impinge data collection, it helps attract talent.
The United States has the largest share of prominent AI researchers, and most of them are foreign born, according to a study by the Paulson Institute. The biggest threat to America’s AI leadership may not be China’s mass of data or the talent developed in other countries, but newly restrictive immigration policies that make it harder for that talent to migrate to the U.S. The Partnership on AI, a coalition of businesses and nonprofits, says that a prohibitive approach to immigration hurts AI development everywhere. “In the long run, valuing civil liberties is going to attract the best talent to America by the most innovative people in the world,” says Khanna. “It allows for freedom of creativity and entrepreneurship in ways that authoritarian societies don’t.” https://www.fastcompany.com/90402489/5-simple-rules-to-make-ai-a-force-for-

Will We Ever Control the World With Our Minds?
For decades, controlling computers by thought was the stuff of science fiction. But now we are tantalisingly close to a breakthrough. The question is, does it create more problems than it solves?
By Mark Piesing
15 August 2019
Science-fiction can sometimes be a good guide to the future. In the film Upgrade (2018) Grey Trace, the main character, is shot in the neck. His wife is shot dead. Trace wakes up to discover that not only has he lost his wife, but he now faces a future as a wheelchair-bound quadriplegic.
He is implanted with a computer chip called Stem designed by famous tech innovator Eron Keen – any similarity with Elon Musk must be coincidental – which will let him walk again. Stem turns out to be an artificial intelligence (AI) and can “talk” to him in a way no one else can hear. It can even take over control of his body. You can guess the rest of the story.
The reality of being a cyborg in 2019 is much less dramatic – but still incredible. In 2012, as part of a research programme led by Jennifer Collinger, a biomedical engineer at the University of Pittsburgh, and funded by the US government’s Defense Advanced Research Projects Agency (Darpa), Jan Scheuermann became one of a tiny handful of people to be implanted with a brain-computer interface. The 53-year-old woman, a quadriplegic due to the effects of a degenerative disorder, has two cables attached to box-like sockets in her head, which connect to what looks like a video game console.
Scheuermann can use this brain-computer interface to control a robotic arm with her thoughts, well enough to feed herself chocolate. Three years later she successfully flew a fighter aircraft in a computer simulator.
Darpa has been funding research into these interfaces since the 1970s, and now wants to go one step closer to the kind of world glimpsed in Upgrade. The goal of the Next-Generation Nonsurgical Neurotechnology (N3) programme launched earlier this year is to remove the need for electrodes, cables and brain surgery.
Al Emondi, who manages the programme, has given scientists from six of the USA’s leading research institutes the task of developing a piece of hardware capable of reading your thoughts from the outside of your head and small enough to be embedded into a baseball cap or headrest. In an approach that has been compared to telepathy – or the creation of “a true brain-computer interface”, according to Emondi – the device has to be bi-directional, able to transmit information back to the brain in a form that the brain will understand.
Emondi has given the scientists only four years to take the new technology from the laboratory to the point it can be tested on humans. Even Elon Musk’s plan for an Upgrade-style brain–computer interface, Neuralink, still requires risky surgery to embed the chip in the brain, even if it does replace cables with a form of wireless communication.  
“The ability to really change the world doesn't happen often in a career,” says Emondi. “If we can build a neural interface that’s not invasive, we will have opened up the door to a whole new ecosystem that doesn’t exist right now.”
The only way that humans have evolved to interact with the world is through our bodies, our muscles and our senses – Michael Wolmetz
“The most common applications are to help people who have lost the ability to move their arms and quadriplegics, paraplegics,” says Jacob Robinson, an electrical and computer engineer at Rice University, Houston, Texas, and the principal researcher of one of the teams. “Imagine then, if we can have the same kind of ability to communicate with our machines but without surgery, then we open up this technology to a broad user base, people who are otherwise able-bodied who just want faster ways to communicate with their devices.”
Some other researchers think our fascination with brain-computer interfaces is about something more profound. “The only way that humans have evolved to interact with the world is through our bodies, our muscles and our senses, and we’re pretty good at it,” says Michael Wolmetz,  a human and machine intelligence research lead at Johns Hopkins Applied Physics Laboratory in Laurel, Maryland. “But it’s also a fundamental limitation on our ability to interact with the world. And the only way to get outside of that evolutionary constraint is to directly interface with the brain.”
Despite its slightly unnerving strapline of “creating breakthrough technologies and capabilities for national security”, Darpa has a history of pioneering technologies that shape the world that we civilians live in. The development of the internet, GPS, virtual assistants like Apple’s Siri and now AI has all been sped up thanks to the dollars ploughed into these areas by the agency. Its funding of research into brain-computer interfaces suggests it could be a similarly game-changing technology. But it is not alone.
Musk’s Neuralink is just one of a number of projects attracted by the potential of brain-computer interfaces. Major technology firms including Intel are also working in this area.
And there are great rewards for those who manage to crack it – the market in neurological technology is expected to be worth $13.3bn (£10.95bn) in 2022.
The quality of the information that you can transmit is limited by the number of channels – Jacob Robinson
Brain-computer interfaces are possible today only because in the 1800s scientists tried to understand the electrical activity that had been discovered in the brains of animals. During the 1920s, Hans Berger developed the electroencephalograph (EEG) to detect electrical activity from the surface of the human skull and record it. Fifty years later computer scientist Jacques Vidal’s research at the University of California Los Angeles (UCLA) led him to coin the term “brain–computer interface”.
Scientists then had to wait for computing power, artificial intelligence and nanotechnology for their visions to be realised. In 2004, a quadriplegic patient was implanted with the first advanced computer interface after a stabbing left him paralysed from the neck down. This allowed him to play ping pong on a computer just by thinking about it.
Despite such successes, problems remain. “The quality of the information that you can transmit is limited by the number of channels,” says Robinson. “The interfaces require cutting a hole in the skull to put the electrode directly in contact with the brain. Your device might only operate for a limited amount of time before your body rejects it; or if the devices fail, it’s hard to get them out.”
Millimetres in the skull is the equivalent of tens of metres in the ocean and kilometres in the atmosphere in terms of the clutter you have to face – David Blodgett
To achieve the goal of an interface that works without the need for brain surgery, Emondi’s teams are exploring using combinations of techniques such as ultrasound, magnetic fields, electric fields and light to read our thoughts and/or write back. Problems include how you tell useful neural activity from the cacophony of other noise the brain emits. It has also got to be able to pick up the signals through the skull and the scalp.
“When you consider the problem of imaging through a scattering medium, millimetres in the skull is the equivalent of tens of metres in the ocean and kilometres in the atmosphere in terms of the clutter you have to face,” says David Blodgett, principal investigator for the team from Johns Hopkins University Applied Physics Laboratory team.
“But we still believe that we can get very useful information,” says Emondi.
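To give a feel for the signal-processing half of that problem, here is a minimal sketch of one standard first step: band-pass filtering a noisy recording down to a band of interest (8-30 Hz here, roughly the mu/beta range used in motor-imagery interfaces). The signal is synthetic and the parameters are generic illustrations; real N3-class devices additionally face the physics of sensing through skull and scalp described above.

    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 256                     # sampling rate in Hz, typical for EEG
    t = np.arange(0, 2, 1 / fs)  # two seconds of signal

    # Synthetic "recording": a 12 Hz rhythm buried in drift and broadband noise.
    signal = (0.5 * np.sin(2 * np.pi * 12 * t)     # mu-band activity of interest
              + 2.0 * np.sin(2 * np.pi * 0.3 * t)  # slow drift / movement artefact
              + 1.0 * np.random.randn(t.size))     # broadband noise

    # 4th-order Butterworth band-pass, 8-30 Hz; filtfilt runs the filter
    # forwards and backwards so the recovered rhythm is not phase-shifted.
    b, a = butter(4, [8, 30], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, signal)

    print(f"raw std: {signal.std():.2f}, filtered std: {filtered.std():.2f}")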
Some teams are looking at what Emondi calls “minutely invasive surgery”. “You can still put something in the body, but you can’t do it through any surgical means,” he says. This means you have to eat something, inject it or squirt it up your nose. One team is looking at nanoparticles that act as “nanotransducers” when they reach their destination in the brain. These are very small particles the width of a human hair that can transform external magnetic energy into an electric signal to the brain and vice versa. Another is looking at using viruses to inject DNA into cells to alter them to do a similar job.
If these techniques work, then the performance of a minutely invasive interface should be able to match that of a chip surgically implanted into the body.
Then there is the challenge of getting the information from the device to the computer and delivering a response in a split second.
“If you were using a mouse with a computer, and you click it, and then you have to wait a second for it to do something, then that technology would never get off the ground,” says Emondi. “So, we’ve got to do something that’s going to be superfast.”
The interfaces need to have “high resolution” and enough “bandwidth”, or channels of communication, to fly a real drone rather than move a robotic arm.
But even if we can do it, how exactly do we communicate? Will we be communicating in words or in pictures? Will we be able to talk with a friend or pay bills online? How much will this be unique to each individual? No one really knows the answers to such questions because the rules haven’t been written yet.
“All new interfaces take some practice to get used to,” says Patrick Ganzer, co-investigator on the project at Battelle. “It’s hard to say how easy this new brain-computer interface will be to use. We don’t want users to have to learn hundreds of rules. One attractive option is to have outputs from the user’s brain-computer interface to communicate with a semi-autonomous device. The user will not need to control every single action but simply set a ‘process in motion’ in the computer system.”
No one who is able-bodied has yet chosen to be embedded with an interface in order to play a video game like Fortnite
Emondi goes further than this: “As the AI becomes better, the systems we are interoperating with are going to become more autonomous. Depending on the task, we may just have to say, ‘I want that ball’ and the robot goes and gets it itself.”
The film Upgrade may have hinted at a problem, however: who exactly is in control?
But there are some clues. “To date, most brain-computer interfaces have extracted detailed movement or muscle-related information from the brain activity even if the user is thinking more broadly about their goal,” says Jennifer Collinger. “We can detect in the brain activity which direction they want to move an object and when they want to close their hand and the resulting movement is a direct path to the object that enables them to pick it up. The user does not have to think ‘right’, ‘forward’, ‘down’.”
“The amount of mental effort required to operate a BCI varies between participants but has typically been greater for non-invasive interfaces. It remains to be seen whether any technologies that come out of N3 will allow the user to multi-task.”
There is an even more fundamental question than this. No one who is able-bodied has yet chosen to be embedded with an interface in order to play a video game like Fortnite or shop online – and no one knows whether their behaviour towards an interface would be different, nor whether it would change if the chip was in a baseball cap.
The ethical dilemmas are tremendous. “The benefits coming out of that technology have to outweigh the risks,” says Emondi. “But if you’re not trying to regain some function that you’ve lost then that’s different: that’s why non-invasive approaches are so interesting.
There is a question of at what point humans become the weakest link in the systems that we use – Michael Wolmetz
“But just because it’s not invasive technology doesn’t mean that you aren’t causing harm to an individual’s neural interface – microwaves are non-invasive, but they wouldn’t be a good thing,” he adds. “So, there are limits. With ultrasound, you have to work within certain pressure levels. If it’s electric fields, you have to be within certain power levels.”
The development of powerful brain-computer interfaces may even help humans survive the hypothetical technological singularity, when artificial intelligence surpasses human intelligence and is able to replicate itself. Humans could use technology to upgrade themselves to compete with these new rivals, or even merge with an AI, something Elon Musk has made explicit in his sales pitch for Neuralink.
“Our artificial intelligence systems are getting better and better,” says Wolmetz. “And there is a question of at what point humans become the weakest link in the systems that we use. In order to be able to keep up with the pace of innovation in artificial intelligence and machine learning, we may very well need to directly interface with these systems.”
In the end, it may not make any difference. At the end of the film Upgrade, Stem takes full control over Grey’s mind and body. The mechanic’s consciousness is left in an idyllic dream state in which he isn’t paralysed, and his wife is alive.



World first as artificial neurons developed to cure chronic diseases

For the first time researchers successfully reproduced the electrical properties of biological neurons onto semiconductor chips.

Published on Tuesday 3 December 2019
https://www.bath.ac.uk/announcements/world-first-as-artificial-neurons-developed-to-cure-chronic-diseases/


Memristive synapses connect brain and silicon spiking neurons

Abstract
Brain function relies on circuits of spiking neurons, with synapses playing the key role of merging transmission with memory storage and processing. Electronics has made important advances to emulate neurons and synapses, and brain-computer interfacing concepts that interlink brain and brain-inspired devices are beginning to materialise. We report on memristive links between brain and silicon spiking neurons that emulate transmission and plasticity properties of real synapses. A memristor paired with a metal thin-film titanium oxide microelectrode connects a silicon neuron to a neuron of the rat hippocampus. Memristive plasticity accounts for modulation of connection strength, while transmission is mediated by weighted stimuli through the thin-film oxide leading to responses that resemble excitatory postsynaptic potentials. The reverse brain-to-silicon link is established through a microelectrode-memristor pair. On these bases, we demonstrate a three-neuron brain-silicon network where memristive synapses undergo long-term potentiation or depression driven by neuronal firing rates…:
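The abstract takes "spiking neurons" as a given; for readers new to the term, the sketch below simulates the textbook leaky integrate-and-fire model that silicon neurons typically emulate: a membrane voltage integrates input current, leaks back towards rest, and emits a spike and resets when it crosses a threshold. The parameter values are generic textbook choices, not those of the paper's circuits.

    # Leaky integrate-and-fire neuron: dV/dt = (-(V - V_rest) + R * I) / tau
    dt, tau, R = 0.1, 10.0, 1.0                      # ms, ms, arbitrary units
    v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0  # membrane voltages in mV

    v = v_rest
    spike_times = []
    for step in range(2000):                           # 200 ms of simulated time
        current = 20.0 if 500 <= step < 1500 else 0.0  # square input pulse
        v += dt * (-(v - v_rest) + R * current) / tau  # integrate and leak
        if v >= v_thresh:                              # threshold crossing
            spike_times.append(step * dt)              # record the spike...
            v = v_reset                                # ...and reset the membrane

    print(f"{len(spike_times)} spikes; first at {spike_times[0]:.1f} ms"
          if spike_times else "no spikes")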



Cyborg Organoids: Implantation of Nanoelectronics via Organogenesis for Tissue-Wide Electrophysiology

Publication Date: July 26, 2019

Abstract
Tissue-wide electrophysiology with single-cell and millisecond spatiotemporal resolution is critical for heart and brain studies. Issues arise, however, from the invasive, localized implantation of electronics that destroys well-connected cellular networks within matured organs. Here, we report the creation of cyborg organoids: the three-dimensional (3D) assembly of soft, stretchable mesh nanoelectronics across the entire organoid by the cell–cell attraction forces from 2D-to-3D tissue reconfiguration during organogenesis. We demonstrate that stretchable mesh nanoelectronics can migrate with and grow into the initial 2D cell layers to form the 3D organoid structure with minimal impact on tissue growth and differentiation. The intimate contact between the dispersed nanoelectronics and cells enables us to chronically and systematically observe the evolution, propagation, and synchronization of the bursting dynamics in human cardiac organoids through their entire organogenesis.

https://pubs.acs.org/doi/10.1021/acs.nanolett.9b02512

Multiplexed genome engineering by Cas12a and CRISPR arrays encoded on single transcripts
Abstract
The ability to modify multiple genetic elements simultaneously would help to elucidate and control the gene interactions and networks underlying complex cellular functions. However, current genome engineering technologies are limited in both the number and the type of perturbations that can be performed simultaneously. Here, we demonstrate that both Cas12a and a clustered regularly interspaced short palindromic repeat (CRISPR) array can be encoded in a single transcript by adding a stabilizer tertiary RNA structure. By leveraging this system, we illustrate constitutive, conditional, inducible, orthogonal and multiplexed genome engineering of endogenous targets using up to 25 individual CRISPR RNAs delivered on a single plasmid. Our method provides a powerful platform to investigate and orchestrate the sophisticated genetic programs underlying complex cell behaviors…:


The Dark Side of CRISPR

Its potential ability to “fix” people at the genetic level is a threat to those who are judged by society to be biologically inferior…: https://www.scientificamerican.com/article/the-dark-side-of-crispr/


‘Create a New Society’: Russian Lawmakers Order Gene-Editing Tech

Russian lawmakers have ordered a study on assisted human reproduction, including a cutting-edge and controversial gene-editing technology that would create a “new type of society.”
New gene-editing tools such as CRISPR/Cas9 have made it possible to rearrange the genetic code much more precisely and at lower costs than before. A Chinese scientist caused outrage last year with a claim to have “gene-edited” babies, while a Russian biologist has this year declared plans to modify the genomes of human embryos and implant them in women.

Russian Military Seeks Upper Hand With ‘Genetic Passport’ for Soldiers, Top Scientist Says

June 7, 2019
 Russian soldiers of the future will be assigned service in specific military branches based on their hereditary predispositions detailed in so-called “genetic passports,” the country’s chief scientist has said.
President Vladimir Putin decreed in March that all Russians be assigned “genetic passports” by 2025 under the national chemical and biological security strategy. Scientists speculated at the time that these “genetic passports” could refer to either a set of genetic markers used to identify individuals or a detailed list of individual health risks and traits.


Alexander Sergeyev, the head of Russia’s Academy of Sciences, said the institution is in talks to develop a “soldier’s genetic passport” with the St. Petersburg-based Kirov Military Medical Academy.
“The idea is to understand on a genetic level who’s more predisposed to serve in the Navy or who may be better-suited to become a paratrooper or tankman,” he told the state-run TASS news agency.
The project will also help predict soldiers’ behavior and capabilities in stressful conditions, Sergeyev said in an interview published on Thursday.
The Kirov academy is researching stress resistance as part of the “genetic passport” project to prepare for traditional warfare’s expansion into cyberspace, Sergeyev said on Friday.
“After all, the war of the future will largely be a war of intellects, of people who make decisions in conditions far different from those in the past,” he told the state-run RIA Novosti news agency.

https://www.themoscowtimes.com/2019/06/07/russian-military-seeks-upper-hand-with-genetic-passport-for-soldiers-top-scientist-says-a65927


Sapiens: A Brief History of Humankind

By Yuval Noah Harari
100,000 years ago, at least six human species inhabited the earth. Today there is just one. Us. Homo sapiens.
How did our species succeed in the battle for dominance? Why did our foraging ancestors come together to create cities and kingdoms? How did we come to believe in gods, nations and human rights; to trust money, books and laws; and to be enslaved by bureaucracy, timetables and consumerism? And what will our world be like in the millennia to come?
In Sapiens, Dr Yuval Noah Harari spans the whole of human history, from the very first humans to walk the earth to the radical – and sometimes devastating – breakthroughs of the Cognitive, Agricultural and Scientific Revolutions. Drawing on insights from biology, anthropology, paleontology and economics, he explores how the currents of history have shaped our human societies, the animals and plants around us, and even our personalities. Have we become happier as history has unfolded? Can we ever free our behaviour from the heritage of our ancestors? And what, if anything, can we do to influence the course of the centuries to come?
Bold, wide-ranging and provocative, Sapiens challenges everything we thought we knew about being human: our thoughts, our actions, our power ... and our future.

Homo Deus: A Brief History of Tomorrow

By Yuval Noah Harari
Yuval Noah Harari, author of the critically-acclaimed New York Times bestseller and international phenomenon Sapiens, returns with an equally original, compelling, and provocative book, turning his focus toward humanity’s future, and our quest to upgrade humans into gods.
Over the past century humankind has managed to do the impossible and rein in famine, plague, and war. This may seem hard to accept, but, as Harari explains in his trademark style—thorough, yet riveting—famine, plague and war have been transformed from incomprehensible and uncontrollable forces of nature into manageable challenges. For the first time ever, more people die from eating too much than from eating too little; more people die from old age than from infectious diseases; and more people commit suicide than are killed by soldiers, terrorists and criminals put together. The average American is a thousand times more likely to die from binging at McDonalds than from being blown up by Al Qaeda.
What then will replace famine, plague, and war at the top of the human agenda? As the self-made gods of planet earth, what destinies will we set ourselves, and which quests will we undertake? Homo Deus explores the projects, dreams and nightmares that will shape the twenty-first century—from overcoming death to creating artificial life. It asks the fundamental questions: Where do we go from here? And how will we protect this fragile world from our own destructive powers? This is the next stage of evolution. This is Homo Deus.
With the same insight and clarity that made Sapiens an international hit and a New York Times bestseller, Harari maps out our future.

https://www.goodreads.com/book/show/31138556-homo-deus
08 Oct 2019 | 15:00 GMT
Many Experts Say We Shouldn’t Worry About Superintelligent AI. They’re Wrong
By Stuart Russell
 This article is based on a chapter of the author’s newly released book, Human Compatible: Artificial Intelligence and the Problem of Control.
AI research is making great strides toward its long-term goal of human-level or superhuman intelligent machines. If it succeeds in its current form, however, that could well be catastrophic for the human race. The reason is that the “standard model” of AI requires machines to pursue a fixed objective specified by humans. We are unable to specify the objective completely and correctly, nor can we anticipate or prevent the harms that machines pursuing an incorrect objective will create when operating on a global scale with superhuman capabilities. Already, we see examples such as social-media algorithms that learn to optimize click-through by manipulating human preferences, with disastrous consequences for democratic systems.
Nick Bostrom’s 2014 book Superintelligence: Paths, Dangers, Strategies presented a detailed case for taking the risk seriously. In what most would consider a classic example of British understatement, The Economist magazine’s review of Bostrom’s book ended with: “The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking.”
Surely, with so much at stake, the great minds of today are already doing this hard thinking—engaging in serious debate, weighing up the risks and benefits, seeking solutions, ferreting out loopholes in solutions, and so on. Not yet, as far as I am aware. Instead, a great deal of effort has gone into various forms of denial.
Some well-known AI researchers have resorted to arguments that hardly merit refutation. Here are just a few of the dozens that I have read in articles or heard at conferences:
- Electronic calculators are superhuman at arithmetic. Calculators didn’t take over the world; therefore, there is no reason to worry about superhuman AI.
- Historically, there are zero examples of machines killing millions of humans, so, by induction, it cannot happen in the future.
- No physical quantity in the universe can be infinite, and that includes intelligence, so concerns about superintelligence are overblown.
Perhaps the most common response among AI researchers is to say that “we can always just switch it off.” Alan Turing himself raised this possibility, although he did not put much faith in it:
If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled.... This new danger...is certainly something which can give us anxiety.
Switching the machine off won’t work for the simple reason that a superintelligent entity will already have thought of that possibility and taken steps to prevent it. And it will do that not because it “wants to stay alive” but because it is pursuing whatever objective we gave it and knows that it will fail if it is switched off. We can no more “just switch it off” than we can beat AlphaGo (the world-champion Go-playing program) just by putting stones on the right squares.
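The argument reduces to a toy expected-value calculation: an agent optimizing any fixed objective prefers plans under which it keeps running, because shutdown forfeits the objective. The numbers in this sketch are made up purely to show the shape of the reasoning.

```python
# Toy illustration of the "just switch it off" point: an objective-pursuing
# agent avoids shutdown not from a survival drive but because shutdown
# means the objective goes unachieved. Numbers are invented for the example.

def expected_objective(p_shutdown: float, value_if_running: float = 100.0) -> float:
    """Expected objective value when the agent can be switched off with p_shutdown."""
    return (1 - p_shutdown) * value_if_running

# Plan A: ignore the off-switch; humans may press it.
plan_a = expected_objective(p_shutdown=0.5)
# Plan B: spend some effort disabling the off-switch first (small cost).
plan_b = expected_objective(p_shutdown=0.0, value_if_running=95.0)

print(f"plan A (off-switch intact):   {plan_a:.1f}")
print(f"plan B (off-switch disabled): {plan_b:.1f}")
# Any optimizer of this objective picks plan B: 95.0 > 50.0.
```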
Other forms of denial appeal to more sophisticated ideas, such as the notion that intelligence is multifaceted. For example, one person might have more spatial intelligence than another but less social intelligence, so we cannot line up all humans in strict order of intelligence. This is even more true of machines: Comparing the “intelligence” of AlphaGo with that of the Google search engine is quite meaningless.
Kevin Kelly, founding editor of Wired magazine and a remarkably perceptive technology commentator, takes this argument one step further. In “The Myth of a Superhuman AI,” he writes, “Intelligence is not a single dimension, so ‘smarter than humans’ is a meaningless concept.” In a single stroke, all concerns about superintelligence are wiped away.
Now, one obvious response is that a machine could exceed human capabilities in all relevant dimensions of intelligence. In that case, even by Kelly’s strict standards, the machine would be smarter than a human. But this rather strong assumption is not necessary to refute Kelly’s argument.
Consider the chimpanzee. Chimpanzees probably have better short-term memory than humans, even on human-oriented tasks such as recalling sequences of digits. Short-term memory is an important dimension of intelligence. By Kelly’s argument, then, humans are not smarter than chimpanzees; indeed, he would claim that “smarter than a chimpanzee” is a meaningless concept.
This is cold comfort to the chimpanzees and other species that survive only because we deign to allow it, and to all those species that we have already wiped out. It’s also cold comfort to humans who might be worried about being wiped out by machines.
The risks of superintelligence can also be dismissed by arguing that superintelligence cannot be achieved. These claims are not new, but it is surprising now to see AI researchers themselves claiming that such AI is impossible. For example, a major report from the AI100 organization, “Artificial Intelligence and Life in 2030 [PDF],” includes the following claim: “Unlike in the movies, there is no race of superhuman robots on the horizon or probably even possible.”
To my knowledge, this is the first time that serious AI researchers have publicly espoused the view that human-level or superhuman AI is impossible—and this in the middle of a period of extremely rapid progress in AI research, when barrier after barrier is being breached. It’s as if a group of leading cancer biologists announced that they had been fooling us all along: They’ve always known that there will never be a cure for cancer.
What could have motivated such a volte-face? The report provides no arguments or evidence whatever. (Indeed, what evidence could there be that no physically possible arrangement of atoms outperforms the human brain?) I suspect that the main reason is tribalism—the instinct to circle the wagons against what are perceived to be “attacks” on AI. It seems odd, however, to perceive the claim that superintelligent AI is possible as an attack on AI, and even odder to defend AI by saying that AI will never succeed in its goals. We cannot insure against future catastrophe simply by betting against human ingenuity.
If superhuman AI is not strictly impossible, perhaps it’s too far off to worry about? This is the gist of Andrew Ng’s assertion that it’s like worrying about “overpopulation on the planet Mars.” Unfortunately, a long-term risk can still be cause for immediate concern. The right time to worry about a potentially serious problem for humanity depends not just on when the problem will occur but also on how long it will take to prepare and implement a solution.
For example, if we were to detect a large asteroid on course to collide with Earth in 2069, would we wait until 2068 to start working on a solution? Far from it! There would be a worldwide emergency project to develop the means to counter the threat, because we can’t say in advance how much time is needed.
Ng’s argument also appeals to one’s intuition that it’s extremely unlikely we’d even try to move billions of humans to Mars in the first place. The analogy is a false one, however. We are already devoting huge scientific and technical resources to creating ever more capable AI systems, with very little thought devoted to what happens if we succeed. A more apt analogy, then, would be a plan to move the human race to Mars with no consideration for what we might breathe, drink, or eat once we arrive. Some might call this plan unwise.
Another way to avoid the underlying issue is to assert that concerns about risk arise from ignorance. For example, here’s Oren Etzioni, CEO of the Allen Institute for AI, accusing Elon Musk and Stephen Hawking of Luddism because of their calls to recognize the threat AI could pose:
At the rise of every technology innovation, people have been scared. From the weavers throwing their shoes in the mechanical looms at the beginning of the industrial era to today’s fear of killer robots, our response has been driven by not knowing what impact the new technology will have on our sense of self and our livelihoods. And when we don’t know, our fearful minds fill in the details.
Even if we take this classic ad hominem argument at face value, it doesn’t hold water. Hawking was no stranger to scientific reasoning, and Musk has supervised and invested in many AI research projects. And it would be even less plausible to argue that Bill Gates, I.J. Good, Marvin Minsky, Alan Turing, and Norbert Wiener, all of whom raised concerns, are unqualified to discuss AI.
The accusation of Luddism is also completely misdirected. It is as if one were to accuse nuclear engineers of Luddism when they point out the need for control of the fission reaction. Another version of the accusation is to claim that mentioning risks means denying the potential benefits of AI. For example, here again is Oren Etzioni:
Doom-and-gloom predictions often fail to consider the potential benefits of AI in preventing medical errors, reducing car accidents, and more.
And here is Mark Zuckerberg, CEO of Facebook, in a recent media-fueled exchange with Elon Musk:
If you’re arguing against AI, then you’re arguing against safer cars that aren’t going to have accidents. And you’re arguing against being able to better diagnose people when they’re sick.
The notion that anyone mentioning risks is “against AI” seems bizarre. (Are nuclear safety engineers “against electricity”?) But more importantly, the entire argument is precisely backwards, for two reasons. First, if there were no potential benefits, there would be no impetus for AI research and no danger of ever achieving human-level AI. We simply wouldn’t be having this discussion at all. Second, if the risks are not successfully mitigated, there will be no benefits.
The potential benefits of nuclear power have been greatly reduced because of the catastrophic events at Three Mile Island in 1979, Chernobyl in 1986, and Fukushima in 2011. Those disasters severely curtailed the growth of the nuclear industry. Italy abandoned nuclear power in 1990, and Belgium, Germany, Spain, and Switzerland have announced plans to do so. The net new capacity per year added from 1991 to 2010 was about a tenth of what it was in the years immediately before Chernobyl.
Strangely, in light of these events, the renowned cognitive scientist Steven Pinker has argued [PDF] that it is inappropriate to call attention to the risks of AI because the “culture of safety in advanced societies” will ensure that all serious risks from AI will be eliminated. Even if we disregard the fact that our advanced culture of safety has produced Chernobyl, Fukushima, and runaway global warming, Pinker’s argument entirely misses the point. The culture of safety—when it works—consists precisely of people pointing to possible failure modes and finding ways to prevent them. And with AI, the standard model is the failure mode.
Pinker also argues that problematic AI behaviors arise from putting in specific kinds of objectives; if these are left out, everything will be fine:
AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world.
Yann LeCun, a pioneer of deep learning and director of AI research at Facebook, often cites the same idea when downplaying the risk from AI:
There is no reason for AIs to have self-preservation instincts, jealousy, etc.... AIs will not have these destructive “emotions” unless we build these emotions into them.
Unfortunately, it doesn’t matter whether we build in “emotions” or “desires” such as self-preservation, resource acquisition, knowledge discovery, or, in the extreme case, taking over the world. The machine is going to have those emotions anyway, as subgoals of any objective we do build in—and regardless of its gender. As we saw with the “just switch it off” argument, for a machine, death isn’t bad per se. Death is to be avoided, nonetheless, because it’s hard to achieve objectives if you’re dead.
A common variant on the “avoid putting in objectives” idea is the notion that a sufficiently intelligent system will necessarily, as a consequence of its intelligence, develop the “right” goals on its own. The 18th-century philosopher David Hume refuted this idea in A Treatise of Human Nature. Nick Bostrom, in Superintelligence, presents Hume’s position as an orthogonality thesis:
Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.
For example, a self-driving car can be given any particular address as its destination; making the car a better driver doesn’t mean that it will spontaneously start refusing to go to addresses that are divisible by 17.
By the same token, it is easy to imagine that a general-purpose intelligent system could be given more or less any objective to pursue—including maximizing the number of paper clips or the number of known digits of pi. This is just how reinforcement learning systems and other kinds of reward optimizers work: The algorithms are completely general and accept any reward signal. For engineers and computer scientists operating within the standard model, the orthogonality thesis is just a given.
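A minimal sketch of that generality: the epsilon-greedy learning loop below optimizes whichever reward function it is handed and has no opinion about the goal's content. The three-action bandit and the reward functions are toy examples invented for illustration.

```python
# Sketch of why the orthogonality thesis is "a given" in the standard
# model: the same learning loop optimizes whatever reward it receives.
import random

def optimize(reward_fn, actions, episodes=2000, eps=0.1):
    """Generic epsilon-greedy optimizer: completely agnostic to the goal."""
    value = {a: 0.0 for a in actions}
    count = {a: 0 for a in actions}
    for _ in range(episodes):
        if random.random() < eps:
            a = random.choice(actions)           # explore
        else:
            a = max(actions, key=value.get)      # exploit current estimate
        r = reward_fn(a)
        count[a] += 1
        value[a] += (r - value[a]) / count[a]    # incremental mean update
    return max(actions, key=value.get)

actions = ["drive_passenger", "make_paperclips", "compute_pi_digits"]

# The very same code pursues whichever final goal the reward encodes:
print(optimize(lambda a: 1.0 if a == "make_paperclips" else 0.0, actions))
print(optimize(lambda a: 1.0 if a == "compute_pi_digits" else 0.0, actions))
```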
The most explicit critique of Bostrom’s orthogonality thesis comes from the noted roboticist Rodney Brooks, who asserts that it’s impossible for a program to be “smart enough that it would be able to invent ways to subvert human society to achieve goals set for it by humans, without understanding the ways in which it was causing problems for those same humans.”
Unfortunately, it’s not only possible for a program to behave like this; it is, in fact, inevitable, given the way Brooks defines the issue. Brooks posits that the optimal plan for a machine to “achieve goals set for it by humans” is causing problems for humans. It follows that those problems reflect things of value to humans that were omitted from the goals set for it by humans. The optimal plan being carried out by the machine may well cause problems for humans, and the machine may well be aware of this. But, by definition, the machine will not recognize those problems as problematic. They are none of its concern.
In summary, the “skeptics”—those who argue that the risk from AI is negligible—have failed to explain why superintelligent AI systems will necessarily remain under human control; and they have not even tried to explain why superintelligent AI systems will never be developed.
Rather than continue the descent into tribal name-calling and repeated exhumation of discredited arguments, the AI community must own the risks and work to mitigate them. The risks, to the extent that we understand them, are neither minimal nor insuperable. The first step is to realize that the standard model—the AI system optimizing a fixed objective—must be replaced. It is simply bad engineering. We need to do a substantial amount of work to reshape and rebuild the foundations of AI.
This article appears in the October 2019 print issue as “It’s Not Too Soon to Be Wary of AI.”
About the Author
Stuart Russell, a computer scientist, founded and directs the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley.


AI method generates 3D holograms in real-time

For virtual reality, 3D printing, and medical imaging.

Even though virtual reality headsets are popular for gaming, they haven’t yet become the go-to device for watching television, shopping, or using software tools for design and modelling.

One reason why is because VR can make users feel sick with nausea, imbalance, eye strain, and headaches. This happens because VR creates an illusion of 3D viewing — but the user is actually staring at a fixed-distance 2D display. The solution for better 3D visualization exists in a 60-year-old tech that’s being updated for the digital world — holograms.

A new method called tensor holography enables the creation of holograms for virtual reality, 3D printing, medical imaging, and more — and it can run on a smartphone…:

https://www.kurzweilai.net/digest-breakthrough-ai-method-generates-3d-holograms-in-real-time
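
For readers curious about what computing a hologram actually involves, here is a classical angular-spectrum propagation sketch in NumPy. It is emphatically not the tensor holography network described above; the wavelength, pixel pitch, and propagation distance are illustrative values.

```python
# Classical angular-spectrum sketch of hologram computation: propagate a
# complex field a distance z and keep its phase for a phase-only display.
# All geometry values below are illustrative assumptions.
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field a distance z (scalar diffraction)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

n = 256
aperture = np.zeros((n, n), dtype=complex)
aperture[96:160, 96:160] = 1.0                   # square "object"

# 532 nm green light, 8 um pixels, 5 cm propagation
holo = angular_spectrum(aperture, 532e-9, 8e-6, 0.05)
phase_hologram = np.angle(holo)                  # what a phase SLM would show
print(phase_hologram.shape, phase_hologram.dtype)
```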

Civilization: knowledge, institutions, and humanity’s future

Insights from technological sociologist Samo Burja.

Burja outlines these steps: investigate the landscape, evaluate our odds, then try to plot the best course. He explains:

Our civilization is made up of countless individuals and pieces of material technology — that come together to form institutions and inter-dependent systems of logistics, development, and production. These institutions + systems then store the knowledge required for their own renewal + growth.

We pin the hopes of our common human project on this renewal + growth of the whole civilization. Whether this project is going well is a challenging — but vital — question to answer. History shows us we’re not safe from institutional collapse. Advances in technology mitigate some aspects, but produce their own risks. Agile institutions that make use of both social + technical knowledge not only mitigate such risks — but promise unprecedented human flourishing.

There has never been an immortal society. No matter how technologically advanced our own society is — it’s unlikely to be an exception. For a good future that defies these odds, we must understand the hidden forces shaping society…:

https://www.kurzweilai.net/digest-civilization-knowledge-institutions-and-humanitys-future

The Social Singularity: How decentralization will allow us to transcend politics, create global prosperity, and avoid the robot apocalypse

by Max Borders

 In this decentralization manifesto, futurist Max Borders shows that humanity is already building systems that will “underthrow” great centers of power.

Exploring the promise of a decentralized world, Borders says we will:

- Reorganize to collaborate and compete with AI;
- Operate within networks of superior collective intelligence;
- Rediscover our humanity and embrace values for an age of connection.

With lively prose, Borders takes us on a tour of modern pagan festivals, cities of the future, and radically new ways to organize society. In so doing, he examines trends likely to revolutionize the ways we live and work.

Although the technological singularity fast approaches, Borders argues, a parallel process of human reorganization will allow us to reap enormous benefits. The paradox? Our billion little acts of subversion will help us lead richer, healthier lives—and avoid the robot apocalypse. …: https://www.goodreads.com/book/show/41031272-the-social-singularity   

Purpose: What Evolution and Human Nature Imply About the Meaning of Our Existence

Samuel T. Wilkinson

By using principles from a variety of scientific disciplines, Yale Professor Samuel Wilkinson provides a framework for human evolution that reveals an overarching purpose to our existence.
Generations have been taught that evolution implies there is no overarching purpose to our existence, that life has no fundamental meaning. We are merely the accumulation of tens of thousands of intricate molecular accidents. Some scientists take this logic one step further: “The fact of evolution [is] inherently atheistic. It goes against the notion that there is a God.”
But is this true?
By integrating emerging principles from a variety of scientific disciplines—ranging from evolutionary biology to psychology—Yale Professor Samuel Wilkinson provides a framework of evolution that implies not only that there is an overarching purpose to our existence, but also what this purpose is.
With respect to our evolution, nature seems to have endowed us with competing dispositions, what Wilkinson calls the dual potential of human nature. We are pulled in different directions: selfishness and altruism, aggression and cooperation, lust and love. When we couple this with the observation that we possess a measure of free will, all this strongly implies there is a universal purpose to our existence.
This purpose, at least one of them, is to choose between the good and evil impulses that nature has created within us. Our life is a test. This is a truth, as old as history it seems, that has been espoused by so many of the world’s religions. From a certain framework, these aspects of human nature—including how evolution shaped us—are evidence for the existence of a God, not against it.
Closely related to this is meaning. What is the meaning of life? Based on the scientific data, it would seem that one such meaning is to develop deep and abiding relationships. At least that is what most people report are the most meaningful aspects of their lives. This is a function of our evolution. It is how we were created.

https://www.goodreads.com/en/book/show/101135650-purpose