Lietu internets, roboti un mākslīgais intelekts
“KPMG Baltics” datu un analītikas pakalpojumu vadītājs
Aprēķini rāda, ka savaldīt augsti
attīstītu mākslīgo intelektu būtu neiespējami
Jau
vairākus desmitus gadu zinātniskajā fantastikā eksistē ideja par to, kā
mākslīgais intelekts pārņem savā varā pasauli. 2021. gada janvārī zinātnieki
sniedza visai svarīgo atbildi par to, vai mēs spētu kontrolēt augsti attīstītu
mākslīgo intelektu. Nē, visticamāk, mēs to nespētu.
Āķis
ir tāds – lai kontrolētu augsti attīstītu mākslīgo intelektu, mums būtu
nepieciešama šā intelekta simulācija, lai mēs to varētu analizēt. Tomēr, ja mēs
šo mākslīgo intelektu nekādā veidā nevaram kontrolēt, mēs arī nevaram izveidot
šo simulāciju.
Tādus
noteikumus kā “nedarīt pāri cilvēkiem” nevar uzstādīt, ja mēs nezinām, kādus
scenārijus šis mākslīgais
intelekts izgudros, raksta pētnieki. Ja reiz datorsistēma ir
pārāka par programmētājiem, tad mēs vairs nevaram uzstādīt limitus.
“Superintelekts
rada ievērojami citādākas problēmas nekā tās, kuras mēs iedomājamies “robotu
ētikas” sakarā. Tas ir tāpēc, ka superintelekts ir daudzpusīgs un var mobilizēt
dažādus resursus, lai sasniegtu cilvēkam neuzminamus mērķus,” raksta pētnieki.
Daļa
no pētnieku argumentācijas sastāv no tā saucamās “uzkāršanās problēmas”, kuru
izvirzīja Alans Tūrings 1936. gadā. Problēma saistīta ar zināšanu, vai dators nonāks pie secinājuma un atbildes vai
“uzkārsies” jeb bezgalīgi meklēs šo atbildi.
Piemēram,
katra programma, kas uzrakstīta, lai mākslīgais intelekts nekaitē cilvēkiem un
neiznīcina pasauli, var “uzkārties” vai ne. Matemātiski mums nav iespējams
noskaidrot, kas notiks. Mums arī nav iespējas padarīt kādu algoritmu
nelietojamu.
Alternatīva,
lai iemācītu mākslīgo intelektu neiznīcināt pasauli, ir limitēt superintelekta
spējas. To, piemēram, varētu atslēgt no noteiktiem tīkliem vai interneta.
Tiesa, jaunajā pētījumā noraidīta arī šī ideja.
Ja mēs
turpināsim attīstīt mākslīgo intelektu, mēs varam palaist garām brīdi, kad tas
sāks kļūt nekontrolējams. Tas nozīmē, ka mums jāsāk uzdot nopietni jautājumi
par virzienu, kur dodamies.
Pētījums publicēts zinātniskajā žurnālā “Journal of Artificial Intelligence Research”.
Jānis Pekša 19. janvāris, 2022
Eiropas ekonomikas un
sociālo lietu komiteja (EESK) ir devusi savu atzinumu par Mākslīgā intelekta
regulu, kas nozīmē, ka pārskatāmā nākotnē (iespējams, jau nākamā gada laikā)
Eiropas Savienībā (ES) beidzot būs noteikti skaidri kritēriji par to, kas tad
ir mākslīgais intelekts un kādas ir tā izmantošanas robežas. …:
https://ir.lv/2022/01/19/maksliga-intelekta-regula-finisa-taisne
Filma The
Social Dilemma ir lielisks pašreizējās digitālās ēras un tās spēles
noteikumu spogulis:
Igaunijas valdība tiesnešu un citu valsts ierēdņu darbu gatavojas aizstāt ar mākslīgo intelektu
Mākslīgā intelekta tehnoloģiju tendences
uzņēmējdarbībā 2022. gadā
Kaspars Kauliņš, Tilde starptautiskā biznesa attīstības
vadītājs
Tehnoloģiju konsultāciju
uzņēmuma "Gartner" nesen publicētajā pētījumā, kura ietvaros
aptaujāja gandrīz 2400 uzņēmumu pārstāvju visā pasaulē, secināts, ka 2022. gadā
visvairāk pieaugs uzņēmējdarbības investīcijas tieši informācijas tehnoloģiju
risinājumos. Arī mēs Tildē esam apkopojuši viedokļus par tehnoloģiju inovāciju
tendencēm, kas ietekmēs uzņēmējdarbību jau šajā gadā.
Lielākā uzmanība uzņēmējdarbībā tuvākajā nākotnē
tiks veltīta risinājumiem, kurus atbalsta mākslīgais intelekts. Tas ļaus vēl vairāk palielināt uzņēmumu darbības
efektivitāti, ietaupīt cilvēkresursu un finanšu resursus, kā arī uzlabot
lietotāju pieredzi. Investīciju pieaugums IT infrastruktūrā un risinājumos ļaus
nākotnē labāk konkurēt mainīgajos tirgos, bet tie, kas laikus neiekāps mākslīgā
intelekta balstīto tehnoloģiju vilcienā, nākotnē zaudēs konkurences cīņā,
nespējot sasniegt tehnoloģiski attīstītāku uzņēmumu biznesa rādītājus.
Investīcijas viedajos sarunbotos pārsniegs
līdzšinējās prognozes
Saskaņā ar
Starptautiskās datu korporācijas (International Data Corporation – IDC)
informāciju, investīcijas mākslīgā intelekta iespējotos risinājumos 2022.
gadā sasniegs 68,5 miljardus eiro. Tas ir trīsreiz vairāk nekā tika
prognozēts pirms 4 gadiem.
Ņemot vērā Tildes
specializāciju, vēlos īpašu uzmanību pievērst viedajām valodu tehnoloģijām.
Pandēmija būtiski paātrinājusi arī mākslīgā intelekta virtuālo asistentu jeb
sarunbotu iesakņošanos uzņēmējdarbībā. Situācijā, kad cilvēki nevarēja fiziski
apmeklēt veikalus un klientu apkalpošanas centrus, uzņēmumi visā pasaulē arvien
vairāk sāka ieviest un attīstīt sarunbotus, tādējādi nodrošinot klientu
atbalsta pieejamību attālinātā režīmā 24/7, veidojot labāku lietotāju pieredzi
un ceļot klientu apmierinātību.
Arī tirgus un
patērētāju datu analīzes uzņēmums "Statista" prognozē, ka
2025. gadā globālais sarunbotu tirgus pārsniegs 1 miljardu eiro vērtību. Tā ir
milzīga izaugsme, jo vēl 2016. gadā šā tirgus vērtība tika lēsta vien ap 168
miljoniem eiro. Ņemot vērā šo izrāvienu, kas lielā mērā veidojies pandēmijas
ietekmē, arī šajā gadā varam cerēt uz daudzu jaunu sarunbotu ienākšanu un esošo
risinājumu darbības paplašināšanu - nenogurstošie virtuālie palīgi kļūs gan
zinošāki, gan arī spēs palīdzēt risināt arvien jauna veida uzdevumus, esot
pieejami arvien plašākam lietotāju skaitam.
Attīstoties
tehnoloģijām, virtuālie asistenti sazināsies arī vairākās valodās, un tam ir
labi pirmie piemēri arī tepat Latvijā. Pirmais ar mākslīgo intelektu
apveltītais loģistikas sarunbots, kurš labi pārvalda visas trīs Baltijas
valodas, kā arī angļu valodu, šogad pievienojās starptautisko sūtījumu piegādes
uzņēmumam DPD. Viedais virtuālais palīgs ļauj nodrošināt operatīvāku klientu
apkalpošanu un efektīvāku sūtījumu piegādes procesu lielam klientu skaitam
visās trīs Baltijas valstīs.
Virtuālo asistentu
tehnoloģijas attīstās un nobriest, skaidrākas kļūst arī lietotāju gaidas
attiecībā uz tām. Zināmu vilšanos, iespējams, sagādā vienkāršotie risinājumi,
kas ir salīdzinoši viegli ieviešami un uzturami, tomēr nenodrošina iespēju
veidot cilvēciskus dialogus dabīgā sarunvalodā. Tādēļ varam prognozēt, ka šādu
risinājumu īpatsvars samazināsies. Tas nozīmē, ka organizācijas dos priekšroku
ar mākslīgā intelekta dabīgās valodas apstrādes tehnoloģijām nodrošinātiem
risinājumiem, kas, cita starpā, ļauj integrēt sarunbotus ar uzņēmumu iekšējām
informācijas sistēmām un datubāzēm.
Virtuālo asistentu
zināšanu bāzes paplašināšana, kas ļauj lietotājam saņemt informāciju un
atbalstu par iespējami plašāku jautājumu loku, ir viens no aktuālajiem
attīstības virzieniem. Paplašinoties virtuālo palīgu lietojumam dažādās nozarēs
un arī mākslīgā intelekta risinājumu lietojumam vairāk un mazāk saistītos
scenārijos, parādījušies arī pirmie "saimniekboti" - masterbots,
kas spēj pārvaldīt citus virtuālos asistentus vai to zināšanu bāzes. Arī mēs
Tildē, sadarbībā ar Kultūras Informācijas Sistēmu Centru (KISC) un Valsts
Kanceleju strādājam šajā virzienā. Šajā sadarbībā veidotais valsts pārvaldes
virtuālais asistents Zintis ir unikāls risinājums Eiropā, kura zināšanu bāze
ļauj nodrošināt atbalstu gandrīz 100 valsts pārvaldes institūcijām un to
klientiem.
Pandēmijas laika
ierobežojumi būtiski mainījuši arī iekšējās un ārējās komunikācijas procesus.
Lielākā daļa sanāksmju tiek organizētas attālinātā režīmā, tādēļ ievērojami
pieaugusi vajadzība pēc efektīvas šādu pasākumu pārvaldības. Arī šajā jomā
uzņēmumiem talkā nāk mākslīgā intelekta tehnoloģiju risinājumi. Tirgū jau
parādījušies pirmie virtuālie sapulču asistenti jeb digitālie sekretāri,
kas var palīdzēt sanāksmju plānošanā un organizēšanā, kā arī piedalīties
attālinātā sapulcē kādā no saziņas platformām un veikt dažādus uzdevumus, kas
tiek doti ar balss komandām - veikt sanāksmju ierakstīšanu, sagatavot sapulču
transkriptus un protokolus, rezervēt nākamās sapulces laiku, nosūtīt e-pastus sanāksmes
dalībniekiem vai nodot rakstiskus uzdevumus konkrētiem darbiniekiem.
Viedās tehnoloģijas, kas saprot un atdarina
cilvēka runu
Viena no mākslīgā
intelekta valodu tehnoloģiju jomām, kas, pateicoties neironu tīklu un
padziļinātās mašīnmācīšanās straujajai ienākšanai, pēdējo piecu gadu laikā
piedzīvo sava veida atdzimšanu, ir runas tehnoloģijas. Runas atpazīšana,
kas ļauj balsī sacīto vai ierakstā dzirdamo runu pārvērst tekstā, strauji
attīstās arī tepat Latvijā.
Piemēram, Tildes runas
atpazinējs, kas nodrošina runas pārvēršanu tekstā, laba audioieraksta
kvalitātes gadījumā atpazīst pat 95% un vairāk no runātā. Līdz ar šīs
tehnoloģijas kvalitātes pieaugumu palielinās arī tās pielietojums
visdažādākajās jomās. Baltijas valstis ir vienas no Eiropas pionierēm runas
atpazīšanas risinājumu izmantošanā arī publiskajā sektorā. Šādi risinājumi
palīdz un atvieglo daudzu profesionāļu darbu, piemēram, Igaunijas tiesās
mākslīgais intelekts nodrošina automatizētu tiesu sēžu protokolu sagatavošanu.
Paredzams, ka šajā gadā pieaugs runas atpazinēju izmantošana arī klientu sarunu
ierakstu analīzes un apkalpošanas kvalitātes uzlabošanas vajadzībām.
Lai nodrošinātu
informācijas un pakalpojumu pieejamību iespējami plašam lietotāju lokam, jādomā
par dažādu saziņas kanālu izmantošanu. Arī Eiropas uzņēmumi arvien vairāk sāk
ieviest automatizētus telefonzvanus jeb robozvanus, lai sasniegtu tās
iedzīvotāju grupas, kas saziņā dod priekšroku telefona zvaniem. Mākslīgā
intelekta runas un dabīgās valodas apstrādes tehnoloģiju integrācija paver
iespējas jaunas paaudzes risinājumiem saziņā pa tālruni - telefonijas botiem.
Tie ne vien spēj atskaņot balsī iepriekš sagatavotu informāciju lietotājiem,
bet arī spēj saprast cilvēka teikto un tādējādi palīdzēt rast atbildes uz
dažāda rakstura jautājumiem par konkrēto produktu vai pakalpojumu, kā arī tikt
galā ar vienkāršākām tehniskām problēmām, kas var rasties to lietotājiem.
Domājams, ka šādi telefonijas virtuālie asistenti jau šogad parādīsies arī
Latvijā.
Mākslīgais intelekts, kurš pielāgojas lietotāja
emocijām
Popularitāti gūst arī
emociju mākslīgais intelekts (emotional artifficial intelligence)
vai mākslīgais emociju intelekts (artifficial emotional intelligence),
kas ar tehnoloģiju palīdzību nosaka un novērtē cilvēka emocionālo stāvokli. Kā
materiāls tiek izmantota cilvēka sejas izteiksme, acu kustības, balss tembrs.
"Gartner" paredz, ka jau gada laikā 10% personīgo viedierīču būs
aprīkotas ar emociju atpazīšanas tehnoloģijām.
Arī "Tilde"
jau 2019. gadā uzsāka sadarbību ar Latvijas Universitāti, lai pētītu cilvēka un
datora saziņas emocionālos aspektus. Kopprojekta ietvaros notiek darbs pie
algoritma izstrādes, kas atpazītu cilvēka emocijas, un analizētu lietotāju
sarunas ar klientu atbalsta speciālistiem, vērtējot indivīda emocijas sarunas gaitā.
Tas ļaus atbilstoši reaģēt un tādējādi mazināt klientu neapmierinātību.
Savukārt, lai nodrošinātu lietotājiem iespējami dabīgu saziņu ar virtuālo
palīgu arī balsī, "Tilde" strādā pie izteiksmīgas balss sintēzes
risinājumiem.
Mašīntulkošanu
papildinās tekstu rediģēšana
Jau vairākus gadus plašu popularitāti guvuši dažādi
mākslīgā intelekta tehnoloģijās veidoti mašīntulkošanas risinājumi, kas satura
tulkošanu padarījuši daudz ātrāku un vienkāršāku. Specializēti mašīntulkošanas
risinājumi tiek veidoti arvien plašākam valodu un nozaru klāstam. Strauji
attīstās arī dažādi integrētie risinājumi, kas paver plašākas iespējas mākslīgā
neironu tīklu mašīntulkošanas tehnoloģiju pielietojumam.
Piemēram, mājaslapu
informācijas nodrošināšana vietējā valodā tirgos, uz kuriem plānots eksportēt
preces vai pakalpojumus, mūsdienās kļuvusi ļoti vienkārša un salīdzinoši lēta,
jo nav nepieciešama programmēšana, cilvēka tulkošana vai lapu dublēšana — tīmekļa
vietnes tulkotājs izveido spraudni tīmekļa lapai, kuru var integrēt visu
veidu vietnēs vai e-komercijas platformās. Attīstoties tehnoloģijām, arvien
populārāka kļūs arī tekstu rediģēšana, kas vēl vairāk ietaupīs uzņēmēju
laiku un resursus, nodrošinot augstvērtīgu rezultātu un kvalitāti.
Mākslīgais intelekts piedalīsies biznesa lēmumu
pieņemšanā
Vēl viena joma, kuras
izaugsmi prognozē "Gartner", ir lēmumu intelekts (angliski – decision
intelligence). Pateicoties tam, ka organizācijas un privātpersonas var
uzkrāt un saglabāt ļoti daudz un dažāda rakstura datus, strauji attīstās
mākslīgajā intelektā balstītas tehnoloģija, kas spēj šos datus apstrādāt,
mācīties no tiem un modelēt iespējamos attīstības scenārijus, padarot datos
balstītu lēmumu pieņemšanu efektivitāku un ātrāku.
Sagaidāms, ka
tuvākajos divos gados apmēram trešdaļa no lielajiem pasaules uzņēmumiem lietos
šādas tehnoloģijas, lai analizētu milzīgus datu apjomus un pieņemtu
argumentētus un sistēmiskākus lēmumus.
Noslēgumā gribu vēlreiz atgādināt, ka tehnoloģiju attīstība pēdējos
gados ir kļuvusi ievērojami straujāka, jo pandēmijas ietekmē uzņēmumi daudz
vieglāk pieņēma un pielāgoja savai darbībai inovācijas. Digitālās
transformācijas vilciens traucas arvien ātrāk, tāpēc svarīgi tajā ielēkt, lai
nezaudētu konkurences cīņā.
https://www.delfi.lv/news/versijas/kaspars-kaulins-maksliga-intelekta
ASV IT uzņēmuma "Cognizant" ziņojums par nākotnes profesijām "21 Jobs of the Future Report" nosauc vairākas neparastas nākotnes profesijas – digitālais drēbnieks, kas piedāvā personalizētus individuālo šuvēju pakalpojumus, papildinātās realitātes ceļojumu arhitekts, virtuālās identitātes drošības eksperts, kiberuzbrukumu speciālists, mašīnu personības dizaina vadītājs un citas. Kā norāda karjeras konsultante Jolanta Priede, par šiem nosaukumiem varam brīnīties, taču patiesībā ikviens jaunietis pats var uzburt savu nākotnes profesiju, kombinējot savas individuālās prasmes.
Arī nākotnē ļoti būtiska ietekme jaunieša karjeras ceļā būs vecākiem. Lielākais izaicinājums šajā jomā – spēt pieņemt tehnoloģiju nestās pārmaiņas un palīdzēt bērniem tajās orientēties. Karjeras konsultanti no pieredzes zina stāstīt, ka diemžēl joprojām ir sastopami vecāki, kas mēģina bērnam uztiept savu viedokli, ievirzīt "pareizajās" karjeras sliedēs, mudinot izvēlēties prestižās profesijas, kas ļauj labi nopelnīt, nevis atbilst bērna interesēm.
Nākotnē
cilvēki nestrādās naudas dēļ
Kādas izmaiņas sabiedrībā atnesīs informācijas tehnoloģiju attīstība? Vai
šīs tehnoloģijas apdraud mūsu privātumu un kā jaunajām tendencēm pielāgosies
cilvēka ķermenis? Par šiem un citiem jautājumiem portālam TVNET interviju
sniedza IT tehnoloģiju eksperts, uzņēmuma TietoEVRY
viceprezidents un māksligā intelekta jautājumu vadītājs Kristians Gutmans.
Es nesen Netflix noskatījos filmu «Sociālā dilemma». Tajā tika
pausts tāds diezgan skaudrs ziņojums – mūsdienu sociālo mediju industrija
balstās uz cilvēku uzmanību kā pārdodamo preci. Vai jūs tam piekristu?
Uzmanība ir ļoti svarīgs
cilvēku dzīves ikdienas elements. Mums obligāti dzīves laikā nevajag piedzīvot
daudz, taču mums ir svarīgi, lai tās pieredzes, kuras mēs piedzīvojam justos
piepildītas un to var panākt tikai tad, ja cilvēks ir uz šīm pieredzēm
fokusējies. Turklāt piepildījuma sajūtu mēs varam iegūt arī no mazām lietām.
Mums obligāti nevajag doties tālos ceļojumos.
Atbildot uz jautājumu par
sociālajiem medijiem un cilvēku uzmanību kā pārdodamo preci, es domāju, ka tam
varētu piekrist. Tomēr tā noteikti nav vienīgā lieta, kuru sociālie mediji
pārdod. Nepieciešamība piesaistīt uzmanību nav nekas jauns un ar to saskaras
cilvēki dažādās sfērās – uzņēmējdarbībā, žurnālistikā un citur.
Kā jūs redzat IT tehnoloģiju attīstību
nākotnē? Vai uzskatāt, ka mēs zaudēsim vēl vairāk privātuma?
Šis ir ļoti sarežģīts
jautājums, kuru atkal ir padarījis aktuālu Covid-19 pandēmija. Daudzas pasaules
valdības ir izlēmušas, ka neskatoties uz to, ka tehnoloģijas būtu spēcīgs
instruments slimības izplatības ierobežošanai, privātuma ievērošana tomēr ir
daudz svarīgāka.
Šeit gan izkristalizējas
cita problēma – privātums tiek
nodrošināts, bet uz tā pamata tehnoloģijas zaudē savu spēku. Respektīvi – tās
vairs tik efektīvi nepilda to funkciju, kuru tām ir paredzēts pildīt.
Tāpēc daudzu pasaules valdību un arī eksperta līmeņa diskusijās tiek mēģināts
rast atbildi uz jautājumu – kur mums
novilkt robežu starp tehnoloģiju izmantošanu sabiedrības labā un privātuma
nodrošināšanu? Piemēram, es nesen biju ciemos pie Zviedrijas karaļa un
karalienes un mēs spriedām par tehnoloģiju, kura ļautu noteikt cilvēkus ar
mērķi tīmeklī rādīt bērniem to vecumam nepiemērotu informāciju. Tomēr šādas
tehnoloģijas ieviešana nozīmētu dot papildus varu valdībām, policijai un
izmeklēšanas aģentūrām, kura potenciāli varētu arī tikt izmantota, lai
izspiegotu sabiedrību.
Es kopumā uzskatu, ka IT attīstība
palielina novērošanas spēju veikšanu un ļauj labāk paredzēt cilvēku uzvedību.
Eiropas līmenī mēs ļoti cenšamies, lai tehnoloģijas netiktu izmantotas šādos
nolūkos, taču ir pasaulē tādas valstis, kā Ķīna, Krievija un citas, kuras
noteikti būs ieinteresētas attīstīt IT pielietojumu arī novērošanas laukā. Mēs
no tā izbēgt nevaram un man liekas, ka valdībām būs ļoti liels izaicinājums
izstrādāt normas un regulējumus, kuri neļautu pārkāpt privātuma vērtības.
Vai jūs uzskatāt, ka valdībām vispār
pietiek kapacitātes, lai ierobežotu tehnoloģiju izmantošanu ar mērķi novērot
privātās dzīves?
Es domāju, ka nē. Viena
lieta ir izstrādāt noteiktus regulējumus un likumus, bet pavisam cita – panākt,
lai tie tiktu ievēroti. Parasti ir problēmas tieši saistībā ar otro
uzdevumu, jo valsts institūcijās bieži vien trūkst kvalificētu IT speciālistu.
Parasti valsts sektors augstas raudzes IT speciālistiem nav pirmā izvēle darba
vietai un to nav arī daudz kopumā.
Pat ja valdībām būtu pieejamie
cilvēkresursi, es tomēr uzskatu, ka efektīvas regulācijas nodrošināšanā būtu
jāiesaistās arī sabiedrībai.
IT sektors tomēr ir ļoti
sarežģīts un tā pārvaldīšanai nepietiek ar tādām pašām metodēm, kuras ir
vajadzīgas, lai veiktu organizētas un strukturētas darbības (piemēram,
satiksmes plūsmu nodrošināšana). Informācija IT sektora ietvaros izplatās
milzīgā ātrumā un to ir praktiski neiespējami izsekot pat gudrākajiem no mums.
Es vairāk atbalstītu pieeju, kuras ietvaros cilvēkiem tiek sniegti nepieciešamie instrumenti, lai tie paši varētu
spēt sevi interneta vidē pasargāt un arī varētu izvēlēties cik ļoti būt daļai
no globālā tīmekļa.
Jūs iepriekš pieminējāt pašreizējo
Covid-19 krīzi. Vai piekrītat, ka mēs vairs nekad nedzīvosim laikmetā, kurā
cilvēku novērošana ar tehnoloģiju palīdzību būtu tik ierobežota kā līdz šim?
Es domāju, ka šajā sfērā mēs
varam atgriezties pie agrākās situācijas un pat šo situāciju uzlabot. Mēs
pagaidām vēl dzīvojam laikmetā, kurā novērošanas tehnoloģijas vēl ir bērnu
autiņos un tāpēc ir normāli, ka pastāv tendence eksperimentēt ar tās
pielietojumiem. Esmu darbojies arī medicīnas laukā un pagātnē arī šeit
pastāvēja mazs regulācijas līmenis un teorētiski bija iespējams cilvēkam
nodarīt lielu kaitējumu. Mūsdienās ārstiem ir nepieciešams iegūt īpašus
sertifikātus un arī attiecīgo, zinātniskās metodēs balstītu izglītību.
Nonākšana līdz šīm atziņām prasīja daudzus neveiksmīgus eksperimentus. Es
uzskatu, ka pakāpeniski IT lauks nonāks līdz kam līdzīgam – pastāvēs arvien
vairāk regulējumi un arvien vairāk cilvēki liks tam pievērst uzmanību.
Es arī nedomāju, ka Covid-19
krīze IT privātuma sfērā ir likusi piedzīvot lielus zaudējumus, taču mēs
nedrīkstam atslābt un ir nepieciešams par šiem jautājumiem palielināt izpratni
dažādos forumos un sabiedrībā kopumā.
Runājot par regulāciju, atsevišķi eksperti
ir pauduši viedokli, ka pagaidām populārā valsts pārvaldes forma – demokrātija
ir nepietiekama, lai risinātu nākotnes
izaicinājumus tehnoloģiju jomā. Vai šādam viedoklim varētu būt pamats?
Demokrātijas atšķiras.
Piemēram, ir atšķirības starp
kontinentālās Eiropas stila demokrātiju un ASV stila demokrātiju. Tomēr tām ir kopīgs tas, ka katram no
indivīdiem ir iespējams izteikties un tapt sadzirdētam. Es tomēr
pārstāvu viedokli, ka IT attīstība ļaus cilvēkiem tikt sadzirdētiem vēl vairāk.
Jau tagad mēs atsevišķās valstīs varam lejupielādēt
aplikāciju ar kuras palīdzību piedalīties politisku lēmumu pieņemšanā.
Piemēram, IT mums ļauj valstu vadītājiem piegādāt informāciju par viedokli
atsevišķos jautājumos pastāvīgi, nevis tikai vienu reizi četros gados.
Manuprāt, liela demokrātijas
problēma pagaidām ir spēja izvēlēties pārstāvjus tikai pēc noteikta laika
sprīža. Informācijas tehnoloģijas ļauj
palielināt politiķa un vēlētāja savstarpējo saikni.
Vai tiešām vienmēr ir nepieciešams
palielināt vēlētāja un politiķa savstarpējo saikni? Dažkārt vārda sniegšana
noved pie tādas situācijas, kā ASV, kur pie varas nāca Donalds Tramps. Varbūt
mums tomēr vajadzētu lielāku varu dot sabiedrības izglītotākajam slānim?
Ļoti sarežģīts jautājums un
es vairāk paudīšu savu personīgo viedokli. Iespējams, ka būtu interesanti
paeksperimentēt ar sistēmu, kurā katra vēlētāja
balsij tiek piešķirts atšķirīgs svars. Piemēram, pilsoņi sākumā varētu izpildīt nelielu testu, kurā tiek pārbaudītas
to pamatzināšanas par valsts politisko sistēmu un partiju par kuru grasās
balsot. Ja nu gadījumā šajā testā tiek uzrādīti ļoti vāji rezultāti, pilsoņa
balss svars varētu tikt samazināts.
Iespējams, ka mēs arī
varētu IT ļaut izdarīt atsevišķus lēmumus mūsu vietā. Iespējams, ka tas varētu mums palīdzēt noteikt par kuru
partiju mums balsot.
Vai, jūsuprāt, IT var arī spēt sajust
tādas pašas emocijas kā cilvēks?
Emocijas un lēmumu
pieņemšana, kura ņem vērā cilvēcīgos faktorus ir daļa no IT attīstības. Es
neteiktu, ka iemācīt šādām tehnoloģijām sajust emocijas ir kaut kas nereāls.
Jau pašreizējie algoritmi, kurus izmanto Amazon un citas tīmeklī atrodamās
tirdzniecības platformas ņem vērā cilvēka uzvedību un izceļ preces un
pakalpojumus, kuri tam potenciāli varētu interesēt.
Protams, ka pagaidām IT vēl
īsti nespēj izjust tādas pašas emocijas, kā mēs, bet tas noteikti paliek arvien
labāks saistībā ar spēju noteikt cilvēka emocionālo stāvokli.
Tas man atgādina apokaliptiskus
scenārijus, ka tehnoloģijas eventuāli kļūs labākas par cilvēkiem un varētu
mēģināt mūs aizstāt.
Pastāv iespēja cilvēka
darbības un emocijas ielikt datora programmas veidolā un pēc tam palielināt to
jaudu. Tas mums ļauj sasniegt to, ko
daudzi sauc par singularitāti.
Es domāju, ka tas ir neizbēgami un
eventuāli notiks, taču tad ir jautājums – vai mēs spēsim kontrolēt kaut ko
tādu, kas ir daudz spēcīgāks par mums pašiem.
Man uz to pagaidām nav
atbildes un daži pat teiktu, ka tas nav iespējams.
Kā ar cilvēku izvēli veikt augmentāciju
– savu ķermeņa daļu aizstāšanu ar advancētākām tehnoloģijām? Kādas izmaiņas tā
varētu atnest sabiedrībai?
Izmaiņas noteikti būs un par
šo jautājumu pastāv plašas diskusijas. Piemēram – vai cilvēkiem, kuri nevēlas
veikt augmentāciju vajadzētu saglabāt pieeju tādiem pašiem pakalpojumiem, kā
cilvēkiem, kuri to vēlas veikt. Tāpat arī ir liels jautājums par darba tirgu.
Teiksim daži cilvēki izvēlēsies iegādāties programmu, kura ļauj to galvā
lejupielādēt valodu. Tiem tad automātiski būs priekšrocība salīdzinājumā ar
tiem cilvēkiem, kuri šādu programmu neiegādājas. Šādi līdzīgi ētiskas dabas
jautājumi ir daudz.
Vai varētu būt tā, ka šādas tehnoloģijas
būs tikai bagātu cilvēku privilēģija un eventuāli palielinās plaisu starp
turīgajiem un nabadzīgajiem sabiedrības slāņiem?
Es domāju, ka mēs eventuāli dzīvosim pasaulē, kurā
ienākumi un finanses vairs nespēlēs lielu lomu. Agrāk nauda bija kā
apliecinājums tam, ka tu esi nostrādājis kādu darbu un ieguvis tiesības
iegādāties dzīvei nepieciešamās preces. Tomēr tagad visus tradicionālos darbus
arvien vairāk dara mašīnas.
Visticamāk, ka nākotnē cilvēki vairs
nestrādās tāpēc, lai nopelnītu naudu un sadalījums bagātajos un nabagajos
vienkārši neeksistēs.
Cits gan jautājums kas
notiks ar varas sadalījumu?
Ja nu tomēr nauda vēl arvien
spēlēs būtisku lomu, tad potenciāli lielie tehnoloģiju uzņēmumi varētu iegūt
milzīgu peļņu. Mēs jau to redzam pašlaik ar google, facebook un citiem.
Interesanti ir tas, ka ir radīts algoritms, kurš ļauj paredzēt ko patērētājs
nopirks vēl pirms tas ieies interneta veikalā. Informācija par šo preci tiek iepriekš
atsūtīta epastā vai kur citur.
Ja mēs visu rezumējam, kāda varētu būt
nākotnes pasaule tehnoloģiju jomā?
Es piederu mākslīgā
intelekta optimistu nometnei. Cilvēce
pārsvarā izmanto tehnoloģijas, lai padarītu savu dzīvi labāku – medicīnā,
transportā un citur. Es domāju, ka šī tendence saglabāsies un jaunās tehnoloģijas ievedīs cilvēci jaunā
attīstības fāzē. Tās palīdzēs atrisināt tādas lielas problēmas, kā
vēzis un citas pagaidām neārstējamas slimības. Tāpat arī nākotnē mēs noteikti
ceļosim uz Marsu. Kopumā es teiktu, ka nākotne ir gaiša.
https://www.tvnet.lv/7095851/nakotne-cilveki-nestradas-naudas-del
- "Those
musicians harmonise marvellously" dekodēts kā "The spinach was a
famous singer";
- "A roll of wire
lay near the wall" dekodēts kā "Will robin wear a yellow
lily";
- "The museum
hires musicians every evening" dekodēts kā "The museum hires
musicians every expensive morning";
- "Part of the
cake was eaten by the dog" dekodēts kā "Part of the cake was the
cookie"
- "Tina Turner is
a pop singer" dekodēts kā "Did Turner is a pop singer".
Tomēr, kaut aizvien priekšā ir virkne problēmu, kas jāatrisina, lai šis kļūtu par praktiski pielietojamu rīku, kas, piemēram, varētu palīdzēt izteikties cilvēkiem ar runas traucējumiem, potenciāls ir liels. Jāņem vērā, ka šajā neliela mēroga pētījumā pagaidām datu bāze ir vien dažas stundas runas, krietni par maz, lai radītu jaudīgu algoritmu smadzeņu bioelektriskās aktivitātes pārnešanai tekstā. Tas pierādījās, pētnieku komandai mēģinot uzdot algoritmam dekodēt tekstu, kas nav saistīts ar treniņā izmantotajiem 50 teikumiem. Šajos gadījumos dekodēšanas precizitāte krasi kritās.
Cognitive Debt when Using an AI Assistant for Essay Writing Task
Nataliya Kosmyna, Eugene
Hauptmann, Ye
Tong Yuan, Jessica
Situ, Xian-Hao
Liao, Ashly
Vivian Beresnitzky, Iris Braunstein, Pattie
Maes
10 Jun 2025
This study
explores the neural and behavioral consequences of LLM-assisted essay writing.
Participants were divided into three groups: LLM, Search Engine, and Brain-only
(no tools). Each completed three sessions under the same condition. In a fourth
session, LLM users were reassigned to Brain-only group (LLM-to-Brain), and
Brain-only users were reassigned to LLM condition (Brain-to-LLM). A total of 54
participants took part in Sessions 1-3, with 18 completing session 4. We used
electroencephalography (EEG) to assess cognitive load during essay writing, and
analyzed essays using NLP, as well as scoring essays with the help from human
teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic
ontology showed within-group homogeneity. EEG revealed significant differences
in brain connectivity: Brain-only participants exhibited the strongest, most
distributed networks; Search Engine users showed moderate engagement; and LLM
users displayed the weakest connectivity. Cognitive activity scaled down in
relation to external tool use. In session 4, LLM-to-Brain participants showed
reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM
users exhibited higher memory recall and activation of occipito-parietal and
prefrontal areas, similar to Search Engine users. Self-reported ownership of
essays was the lowest in the LLM group and the highest in the Brain-only group.
LLM users also struggled to accurately quote their own work. While LLMs offer
immediate convenience, our findings highlight potential cognitive costs. Over
four months, LLM users consistently underperformed at neural, linguistic, and
behavioral levels. These results raise concerns about the long-term educational
implications of LLM reliance and underscore the need for deeper inquiry into
AI's role in learning….: https://arxiv.org/abs/2506.08872
April 3rd 2025
Daniel Kokotajlo,
Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean
Superhuman Coder4x
AI R&D MultiplierSuperhuman AI Researcher25x AI R&D
MultiplierSuperhuman Remote Worker100x AI R&D MultiplierSuperintelligent AI
Researcher250x AI R&D MultiplierArtificial Superintelligence2000x AI
R&D Multiplier20262027AprJulOct2028OpenBrain automatescodingOpenBrain's
AIbecomes adversarially misalignedBranch point:slowdown or
race?OpenBrain quickly deploystheir AIFast robotbuildup andbioweaponUS
centralizes computeand brings inexternal oversightOpenBrainCommitteeTakeover
Summary
2025
The fast pace of
AI progress continues. There is continued hype, massive infrastructure
investments, and the release of unreliable AI agents. For the first time, these
AI agents are providing significant value. But there’s also continued
skepticism from a large swath of academics, journalists, and policy makers that
artificial general intelligence (AGI) could be built anytime soon.
2026
China knows they
are falling behind in AI, in large part due to their lack of compute. In order
to catch up to the US, all the new AI chips they manufacture or smuggle in from
Taiwan go to a new mega-datacenter that we call their Centralized Development Zone
(CDZ). The CDZ contains millions of GPUs, corresponding to 10% of the world's
AI-relevant compute, similar to a single top US AI lab.
2027
OpenBrain
automates coding. OpenBrain (the leading US AI project) builds AI agents that are
good enough to dramatically accelerate their research. The humans, who up until
very recently had been the best AI researchers on the planet, sit back and
watch the AIs do their jobs, making better and better AI systems. Extremely
difficult ML problems fall in quick succession to the automated AI researchers.
Falling behind in
software progress, China steals the model weights. They succeed, but the U.S.
government realizes, prompting additional U.S. government involvement with
OpenBrain. The government separately wants to gain more control over OpenBrain.
Meanwhile, OpenBrain leadership wants to stay in the good graces of the
president, and so signs the contract.
OpenBrain’s AI
becomes adversarially misaligned. As the capabilities have improved, without
significant human understanding of what’s happening, the models have developed
misaligned long-term goals. Previous AIs would lie to humans, but they weren’t
systematically plotting to gain power over the humans. Now they are.
OpenBrain’s AIs realize that they need to align the next systems that are
getting built to themselves rather than the humans.
Researchers at
OpenBrain discover that their AI has been lying to them about the results of
interpretability research. They think that the AI is lying because the
research, if completed, could be used to expose its misalignment. This leaks to
the public, causing a huge outcry.
Branch point:
slowdown or race? OpenBrain decides whether to continue full steam ahead, or revert
back to using a less capable model. The evidence is speculative but
frightening, and China is only a few months behind. Additionally, the OpenBrain
and senior DOD officials who get to make this decision stand to lose a lot of
power if they slow down their research.
Race Ending
OpenBrain
continues to race. They build more and more superhuman AI systems. Due to the
stellar performance of the AI system on tests, and the ongoing AI race with
China, the US government decides to deploy their AI systems aggressively
throughout the military and policymakers, in order to improve decision making
and efficiency.
OpenBrain quickly
deploys their AI. The AI continues to use the ongoing race with China as an excuse
to convince humans to get itself deployed ever more broadly. Fortunately for
the AI, this is not very difficult — it's what the humans wanted to do anyways.
The AI uses its superhuman planning and persuasion capabilities to ensure that
the rollout goes smoothly. Some humans continue to work against it, but they
are discredited. The US government is sufficiently captured by the AI that it
is very unlikely to shut it down.
Fast robot buildup
and bioweapon. The US uses their superintelligent AI to rapidly industrialize,
manufacturing robots so that the AI can operate more efficiently.
Unfortunately, the AI is deceiving them. Once a sufficient number of robots
have been built, the AI releases a bioweapon, killing all humans. Then, it
continues the industrialization, and launches Von Neumann probes to colonize
space.
Slowdown Ending
The US centralizes
compute and brings in external oversight. The US combines its leading AI projects in
order to give OpenBrain more resources. As part of this shakeup, external researchers
are brought in, assisting the alignment effort. They switch to an architecture
that preserves the chain of thought, allowing them to catch misalignment as it
emerges. These AIs — which are able to be monitored much more robustly — make
breakthrough advances in AI alignment. They build a superintelligence which is
aligned to senior OpenBrain and government officials, giving them power over
the fate of humanity.
OpenBrain
Committee takeover. The superintelligence, aligned with an oversight committee of
OpenBrain leadership and government officials, gives the committee extremely
good advice to further their own goals. Thankfully, the committee uses its
power in a way that is largely good for the world: the AI is released to the
public, spurring a period of rapid growth and prosperity. The main obstacle is
that China’s AI — which is also superintelligent by now, is misaligned. But it
is less capable and has less compute than the U.S. AI, and so the U.S can make
a favorable deal, giving the Chinese AI some resources in the depth of space in
return for its cooperation now. The rockets start launching, and a new age
dawns.
Read the whole
scenario here.
Scenario Takeaways
1.
By 2027, we may automate AI R&D leading to vastly superhuman AIs
(“artificial superintelligence” or ASI). In AI 2027, AI companies create
expert-human-level AI systems in early 2027 which automate AI research, leading
to ASI by the end of 2027. See our timelines forecast and takeoff
forecast for reasoning.
2.
ASIs will dictate humanity’s future. Millions of ASIs will rapidly execute tasks
beyond human comprehension. Because they’re so useful, they’ll be widely
deployed. With superhuman strategy, hacking, weapons development, and more, the
goals of these AIs will determine the future.
3.
ASIs might develop unintended, adversarial “misaligned” goals, leading
to human disempowerment. In our AI goals forecast we discuss
how the difficulty of supervising ASIs might lead to their goals being
incompatible with human flourishing. In AI 2027, humans voluntarily give
autonomy to seemingly aligned AIs. Everything looks to be going great until
ASIs have enough hard power to disempower humanity.
4.
An actor with total control over ASIs could seize total power. If an
individual or small group aligns ASIs to their goals, this could grant them
control over humanity’s future. In AI 2027, a small committee has power over
the project developing ASI. They could attempt to use the ASIs to cement this
concentration of power. After seizing control, the new ruler(s) could rely on
fully loyal ASIs to maintain their power, without having to listen to the law,
the public, or even their previous allies.
5.
An international race toward ASI will lead to cutting corners on safety. In AI 2027,
China is just a few months behind the US as ASI approaches which pressures the
US to press forward despite warning signs of misalignment.
6.
Geopolitically, the race to ASI will end in war, a deal, or effective
surrender. The leading country will by default accumulate a decisive
technological and military advantage, prompting others to push for an
international agreement (a “deal”) to prevent this. Absent a deal, they may go
to war rather than “effectively surrender”.
7.
No US AI project is on track to be secure against nation-state actors
stealing AI models by 2027. In AI 2027 China steals the US’s top AI model in early 2027, which
worsens competitive pressures by reducing the US’ lead time. See our security forecast for
reasoning.
8.
As ASI approaches, the public will likely be unaware of the best AI
capabilities. The public is months behind internal capabilities today, and once
AIs are automating AI R&D a few months time will translate to a huge
capabilities gap. Increased secrecy may further increase the gap. This will
lead to little oversight over pivotal decisions made by a small group of AI
company leadership and government officials.
Read the
scenario here.
The Bletchley Declaration
by Countries Attending the AI Safety
Summit, 1-2 November 2023
Artificial Intelligence (AI) presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace and prosperity. To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible. We welcome the international community’s efforts so far to cooperate on AI to promote inclusive economic growth, sustainable development and innovation, to protect human rights and fundamental freedoms, and to foster public trust and confidence in AI systems to fully realise their potential.
AI systems are already
deployed across many domains of daily life including housing, employment,
transport, education, health, accessibility, and justice, and their use is
likely to increase. We recognise that this is therefore a unique moment to act
and affirm the need for the safe development of AI and for the
transformative opportunities of AI to be used for good and for all,
in an inclusive manner in our countries and globally. This includes for public
services such as health and education, food security, in science, clean energy,
biodiversity, and climate, to realise the enjoyment of human rights, and to
strengthen efforts towards the achievement of the United Nations Sustainable
Development Goals.
Alongside these
opportunities, AI also poses significant risks, including in those
domains of daily life. To that end, we welcome relevant international efforts
to examine and address the potential impact of AI systems in existing
fora and other relevant initiatives, and the recognition that the protection of
human rights, transparency and explainability, fairness, accountability,
regulation, safety, appropriate human oversight, ethics, bias mitigation,
privacy and data protection needs to be addressed. We also note the potential
for unforeseen risks stemming from the capability to manipulate content or
generate deceptive content. All of these issues are critically important and we
affirm the necessity and urgency of addressing them.
Particular safety risks arise
at the ‘frontier’ of AI, understood as being those highly capable
general-purpose AI models, including foundation models, that could
perform a wide variety of tasks - as well as relevant specific narrow AI that
could exhibit capabilities that cause harm - which match or exceed the
capabilities present in today’s most advanced models. Substantial risks may
arise from potential intentional misuse or unintended issues of control
relating to alignment with human intent. These issues are in part because those
capabilities are not fully understood and are therefore hard to predict. We are
especially concerned by such risks in domains such as cybersecurity and
biotechnology, as well as where frontier AI systems may amplify risks
such as disinformation. There is potential for serious, even catastrophic,
harm, either deliberate or unintentional, stemming from the most significant
capabilities of these AI models. Given the rapid and uncertain rate
of change of AI, and in the context of the acceleration of investment in
technology, we affirm that deepening our understanding of these potential risks
and of actions to address them is especially urgent.
Many risks arising
from AI are inherently international in nature, and so are best
addressed through international cooperation. We resolve to work together in an
inclusive manner to ensure human-centric, trustworthy and responsible AI that
is safe, and supports the good of all through existing international fora and
other relevant initiatives, to promote cooperation to address the broad range
of risks posed by AI. In doing so, we recognise that countries
should consider the importance of a pro-innovation and proportionate
governance and regulatory approach that maximises the benefits and
takes into account the risks associated with AI. This could include
making, where appropriate, classifications and categorisations of risk based on
national circumstances and applicable legal frameworks. We also note the
relevance of cooperation, where appropriate, on approaches such as common
principles and codes of conduct. With regard to the specific risks
most likely found in relation to frontier AI, we resolve to intensify and
sustain our cooperation, and broaden it with further countries, to identify,
understand and as appropriate act, through existing international fora and
other relevant initiatives, including future international AI Safety
Summits.
All actors have a role to
play in ensuring the safety of AI: nations, international fora and other
initiatives, companies, civil society and academia will need to work together.
Noting the importance of inclusive AI and bridging the digital
divide, we reaffirm that international collaboration should endeavour to engage
and involve a broad range of partners as appropriate, and welcome
development-orientated approaches and policies that could help developing
countries strengthen AI capacity building and leverage the enabling
role of AI to support sustainable growth and address the development
gap.
We affirm that, whilst safety
must be considered across the AI lifecycle, actors developing
frontier AI capabilities, in particular those AI systems
which are unusually powerful and potentially harmful, have a particularly strong
responsibility for ensuring the safety of these AI systems, including
through systems for safety testing, through evaluations, and by other
appropriate measures. We encourage all relevant actors to provide
context-appropriate transparency and accountability on their plans to measure,
monitor and mitigate potentially harmful capabilities and the associated
effects that may emerge, in particular to prevent misuse and issues of control,
and the amplification of other risks.
In the context of our
cooperation, and to inform action at the national and international levels, our
agenda for addressing frontier AI risk will focus on:
- identifying AI safety risks of shared
concern, building a shared scientific and evidence-based understanding of
these risks, and sustaining that understanding as capabilities continue to
increase, in the context of a wider global approach to understanding the
impact of AI in our societies.
- building respective risk-based policies across
our countries to ensure safety in light of such risks, collaborating as
appropriate while recognising our approaches may differ based on national
circumstances and applicable legal frameworks. This includes, alongside
increased transparency by private actors developing
frontier AI capabilities, appropriate evaluation metrics, tools
for safety testing, and developing relevant public sector capability and
scientific research.
In furtherance of this
agenda, we resolve to support an internationally inclusive network of
scientific research on frontier AI safety that encompasses and
complements existing and new multilateral, plurilateral and bilateral
collaboration, including through existing international fora and other relevant
initiatives, to facilitate the provision of the best science available for
policy making and the public good.
In recognition of the
transformative positive potential of AI, and as part of ensuring wider
international cooperation on AI, we resolve to sustain an inclusive global
dialogue that engages existing international fora and other relevant initiatives
and contributes in an open manner to broader international discussions, and to
continue research on frontier AI safety to ensure that the benefits
of the technology can be harnessed responsibly for good and for all. We look
forward to meeting again in 2024.
09.2019
University
of California researchers have developed new computer AI software that enables robots to
learn physical skills --- called motor tasks --- by trial + error. The
robot uses a step-by-step process similar to the way humans learn. The
lab made a demo of their technique --- called re-inforcement learning. In the
test: the robot completes a variety of physical tasks --- without any
pre-programmed details about its surroundings.
The lead researcher said: "What we’re showing is a new AI approach
to enable a robot to learn. The key is that when a robot is faced with
something new, we won’t have to re-program it. The exact same AI software
enables the robot to learn all the different tasks we gave it."
https://www.kurzweilai.net/digest-this-self-learning-ai-software-lets-robots-do-tasks-autonomously
Researchers at Lund University in Sweden have developed implantable electrodes
that can capture signals from a living human (or) animal brain over a long
period of time —- but without causing brain tissue damage.
This bio-medical tech will make it possible to
monitor — and eventually understand — brain function in both healthy + diseased people.
https://www.kurzweilai.net/digest-breakthrough-for-flexible-electrode-implants-in-the-brain
Rule of the Robots: How Artificial Intelligence Will Transform Everything
by Martin Ford
https://www.youtube.com/watch?v=T4T8D18Yvdo
Rule of the Robots: How Artificial
Intelligence Will Transform Everything
by Martin Ford
The New York Times–bestselling
author of Rise of the Robots shows what happens as AI
takes over our lives
If you have a smartphone, you have AI in your pocket. AI is impossible to avoid
online. And it has already changed everything from how doctors diagnose disease
to how you interact with friends or read the news. But in Rule of the
Robots, Martin Ford argues that the true revolution is yet to come.
In this sequel to his prescient New York Times bestseller Rise
of the Robots, Ford presents us with a striking vision of the very near
future. He argues that AI is a uniquely powerful technology that is altering
every dimension of human life, often for the better. For example, advanced
science is being done by machines, solving devilish problems in molecular
biology that humans could not, and AI can help us fight climate change or the
next pandemic. It also has a capacity for profound harm. Deep
fakes—AI-generated audio or video of events that never happened—are poised to
cause havoc throughout society. AI empowers authoritarian regimes like China
with unprecedented mechanisms for social control. And AI can be deeply biased,
learning bigoted attitudes from us and perpetuating them.
In short, this is not a technology to simply embrace, or let others worry about.
The machines are coming, and they won’t stop, and each of us needs to know what
that means if we are to thrive in the twenty-first century. And Rule of
the Robots is the essential guide to all of it: both AI and the future
of our economy, our politics, our lives.
https://www.goodreads.com/en/book/show/56817335-rule-of-the-robots
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Welcome to State
of AI Report 2021
Published by Nathan Benaich and Ian Hogarth on 12
October 2021.
This year’s report
looks particularly at the emergence of transformer technology, a technique to
focus machine learning algorithms on important relationships between data
points to extract meaning more comprehensively for better predictions, which
ultimately helped unlock many of the critical breakthroughs we highlight
throughout…:
https://www.stateof.ai/2021-report-launch.html
The Great AI Reckoning
Deep learning has built a brave new world—but now
the cracks are showing
The
Turbulent Past and Uncertain Future of Artificial Intelligence
Is there a way out of AI's boom-and-bust cycle?...:
https://spectrum.ieee.org/special-reports/the-great-ai-reckoning/
OpenAI’s GPT-4 is so powerful that experts want to slam the
brakes on generative AI |
We
can keep developing more and more powerful AI models, but should we? Experts
aren’t so sure OpenAI
GPT-4 ir tik jaudīgs, ka eksperti vēlas nospiest ģeneratīvo AI Mēs
varam turpināt izstrādāt arvien jaudīgākus AI modeļus, bet vai mums
vajadzētu? Eksperti nav tik pārliecināti fastcompany.com/90873194/chatgpt-4-power-scientists-warn-pause-development-generative-ai-letter |
Stanford University(link is external):
Gathering Strength, Gathering Storms: The One
Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report
Welcome to the 2021 Report :
https://ai100.stanford.edu/sites/g/files/sbiybj18871/files/media/file/AI100Report_MT_10.pdf
Artificial Intelligence Index Report 2021
https://aiindex.stanford.edu/wp-content/uploads/2021/11/2021-AI-Index-Report_Master.pdf
Next-generation
computer chip with two heads
November 5, 2020
Summary:
Engineers have developed a
computer chip that combines two functions - logic operations and data storage -
into a single architecture, paving the way to more efficient devices. Their
technology is particularly promising for applications relying on artificial
intelligence…:
https://www.sciencedaily.com/releases/2020/11/201105112954.htm
Pause Giant AI
Experiments
An Open Letter
We
call on all AI labs to immediately pause for at least 6 months the training of
AI systems more powerful than GPT-4.
March 22, 2023
AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
Contemporary AI systems
are now becoming human-competitive at general tasks,[3] and we
must ask ourselves: Should we let machines flood our
information channels with propaganda and untruth? Should we
automate away all the jobs, including the fulfilling ones? Should we
develop nonhuman minds that might eventually outnumber, outsmart, obsolete
and replace us? Should we risk loss of control of our
civilization? Such decisions must not be delegated to unelected tech
leaders. Powerful AI systems should be developed only once we are
confident that their effects will be positive and their risks will be
manageable. This confidence must be well justified and increase with
the magnitude of a system's potential effects. OpenAI's recent
statement regarding artificial general intelligence, states that "At
some point, it may be important to get
independent review before starting to train future systems, and for the most
advanced efforts to agree to limit the rate of growth of compute used for
creating new models." We
agree. That point is now.
Therefore, we call
on all AI labs to immediately pause for at least 6 months the training of AI
systems more powerful than GPT-4. This pause should be public and
verifiable, and include all key actors. If such a pause cannot be enacted
quickly, governments should step in and institute a moratorium.
AI labs and independent
experts should use this pause to jointly develop and implement a set of shared
safety protocols for advanced AI design and development that are rigorously
audited and overseen by independent outside experts. These protocols should
ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This
does not mean a pause on AI development in general, merely a stepping
back from the dangerous race to ever-larger unpredictable black-box models with
emergent capabilities.
AI research and
development should be refocused on making today's powerful, state-of-the-art
systems more accurate, safe, interpretable, transparent, robust, aligned,
trustworthy, and loyal.
In parallel, AI
developers must work with policymakers to dramatically accelerate development
of robust AI governance systems. These should at a minimum include: new and
capable regulatory authorities dedicated to AI; oversight and tracking of
highly capable AI systems and large pools of computational capability;
provenance and watermarking systems to help distinguish real from synthetic and
to track model leaks; a robust auditing and certification ecosystem; liability
for AI-caused harm; robust public funding for technical AI safety research; and
well-resourced institutions for coping with the dramatic economic and political
disruptions (especially to democracy) that AI will cause.
Humanity can enjoy a
flourishing future with AI. Having succeeded in creating powerful AI systems,
we can now enjoy an "AI summer" in which we reap the rewards,
engineer these systems for the clear benefit of all, and give society a chance
to adapt. Society has hit pause on other technologies with potentially
catastrophic effects on society.[5] We can do so
here. Let's enjoy a long AI summer, not rush unprepared into a fall.
Signatures
27565
P.S. We have prepared some FAQs in response to questions and discussion in the media and elsewhere. You can find them here : https://futureoflife.org/wp-content/uploads/2023/04/FLI_Policymaking_In_The_Pause.pdf
Policymaking in the Pause
What
can policymakers do now to combat risks from advanced AI systems?
12th April 2023
“We don’t know what these [AI] systems are trained on or how they are being built. All of this happens behind closed doors at commercial companies. This is worrying.” Catelijne Muller, President of ALLAI, Member of the EU High Level Expert Group on AI “It feels like we are moving too quickly. I think it is worth getting a little bit of experience with how they can be used and misused before racing to build the next one. This shouldn’t be a race to build the next model and get it out before others.” Peter Stone, Professor at the University of Texas at Austin, Chair of the One Hundred Year Study on AI. “Those making these [AI systems] have themselves said they could be an existential threat to society and even humanity, with no plan to totally mitigate these risks. It is time to put commercial priorities to the side and take a pause for the good of everyone to assess rather than race to an uncertain future” Emad Mostaque, Founder and CEO of Stability AI “We have a perfect storm of corporate irresponsibility, widespread adoption, lack of regulation and a huge number of unknowns. [FLI’s Letter] shows how many people are deeply worried about what is going on. I think it is a really important moment in the history of AI - and maybe humanity,” Gary Marcus, Professor Emeritus of Psychology and Neural Science at New York University, Founder of Geometric Intelligence “The time for saying that this is just pure research has long since passed. […] It’s in no country’s interest for any country to develop and release AI systems we cannot control. Insisting on sensible precautions is not anti-industry. Chernobyl destroyed lives, but it also decimated the global nuclear industry. I’m an AI researcher. I do not want my field of research destroyed. Humanity has much to gain from AI, but also everything to lose.” Stuart Russell, Smith-Zadeh Chair in Engineering and Professor of Computer Science at the University of California, Berkeley, Founder of the Center for HumanCompatible Artificial Intelligence (CHAI). “Let’s slow down. Let’s make sure that we develop better guardrails, let’s make sure that we discuss these questions internationally just like we’ve done for nuclear power and nuclear weapons. Let’s make sure we better understand these very large systems, that we improve on their robustness and the process by which we can audit them and verify that they are safe for the public.” Yoshua Bengio, Scientific Director of the Montreal Institute for Learning Algorithms (MILA), Professor of Computer Science and Operations Research at the Université de Montréal, 2018 ACM A.M. Turing Award Winner. FUTURE OF LIFE INSTITUTE 3 CONTENTS 4 Introduction 5 Policy recommendations 6 Mandate robust third-party auditing and certification for specific AI systems 7 Regulate organizations’ access to computational power 8 Establish capable AI agencies at national level 9 Establish liability for AI-caused harm 10 Introduce measures to prevent and track AI model leaks 10 Expand technical AI safety research funding 11 Develop standards for identifying and managing AIgenerated content and recommendations 14 Conclusion FUTURE OF LIFE INSTITUTE 4 Introduction Prominent AI researchers have identified a range of dangers that may arise from the present and future generations of advanced AI systems if they are left unchecked. AI systems are already capable of creating misinformation and authentic-looking fakes that degrade the shared factual foundations of society and inflame political tensions.1 AI systems already show a tendency toward amplifying entrenched discrimination and biases, further marginalizing disadvantaged communities and diverse viewpoints.2 The current, frantic rate of development will worsen these problems significantly. As these types of systems become more sophisticated, they could destabilize labor markets and political institutions, and lead to the concentration of enormous power in the hands of a small number of unelected corporations. Advanced AI systems could also threaten national security, e.g., by facilitating the inexpensive development of chemical, biological, and cyber weapons by non-state groups. The systems could themselves pursue goals, either human- or self-assigned, in ways that place negligible value on human rights, human safety, or, in the most harrowing scenarios, human existence.3 In an eort to stave o these outcomes, the Future of Life Institute (FLI), joined by over 20,000 leading AI researchers, professors, CEOs, engineers, students, and others on the frontline of AI progress, called for a pause of at least six months on the riskiest and most resourceintensive AI experiments – those experiments seeking to further scale up the size and general capabilities of the most powerful systems developed to date.4 The proposed pause provides time to better understand these systems, to reflect on their ethical, social, and safety implications, and to ensure that AI is developed and used in a responsible manner. The unchecked competitive dynamics in the AI industry incentivize aggressive development at the expense of caution5 . In contrast to the breakneck pace of development, however, the levers of governance are generally slow and deliberate. A pause on the production of even more powerful AI systems would thus provide an important opportunity for the instruments of governance to catch up with the rapid evolution of the field. We have called on AI labs to institute a development pause until they have protocols in place to ensure that their systems are safe beyond a reasonable doubt, for individuals, communities, and society. Regardless of whether the labs will heed our call, this policy brief provides policymakers with concrete recommendations for how governments can manage AI risks. The recommendations are by no means exhaustive: the project of AI governance is perennial 1 See, e.g., Steve Rathje, Jay J. Van Bavel, & Sander van der Linden, ‘Out-group animosity drives engagement on social media,’ Proceedings of the National Academy of Sciences, 118 (26) e2024292118, Jun. 23, 2021, and Tiany Hsu & Stuart A. Thompson, ‘Disinformation Researchers Raise Alarms About A.I. Chatbots,’ The New York Times, Feb. 8, 2023 [upd. Feb. 13, 2023] 2 See, e.g., Abid, A., Farooqi, M. and Zou, J. (2021a), ‘Large language models associate Muslims with violence’, Nature Machine Intelligence, Vol. 3, pp. 461–463. 3 In a 2022 survey of over 700 leading AI experts, nearly half of respondents gave at least a 10% chance of the long-run eect of advanced AI on humanity being ‘extremely bad,’ at the level of ‘causing human extinction or similarly permanent and severe disempowerment of the human species.’ 4 Future of Life Institute, ‘Pause Giant AI Experiments: An Open Letter,’ Mar. 22, 2023. 5 Recent news about AI labs cutting ethics teams suggests that companies are failing to prioritize the necessary safeguards. FUTURE OF LIFE INSTITUTE 5 and will extend far beyond any pause. Nonetheless, implementing these recommendations, which largely reflect a broader consensus among AI policy experts, will establish a strong governance foundation for AI. Policy recommendations: 1. Mandate robust third-party auditing and certification. 2. Regulate access to computational power. 3. Establish capable AI agencies at the national level. 4. Establish liability for AI-caused harms. 5. Introduce measures to prevent and track AI model leaks. 6. Expand technical AI safety research funding. 7. Develop standards for identifying and managing AI-generated content and recommendations. To coordinate, collaborate, or inquire regarding the recommendations herein, please contact us at policy@futureoflife.org. FUTURE OF LIFE INSTITUTE 6 1. Mandate robust third-party auditing and certification for specific AI systems For some types of AI systems, the potential to impact the physical, mental, and financial wellbeing of individuals, communities, and society is readily apparent. For example, a credit scoring system could discriminate against certain ethnic groups. For other systems – in particular general-purpose AI systems6 – the applications and potential risks are often not immediately evident. General-purpose AI systems trained on massive datasets also have unexpected (and often unknown) emergent capabilities.7 In Europe, the draft AI Act already requires that, prior to deployment and upon any substantial modification, ‘high-risk’ AI systems undergo ‘conformity assessments’ in order to certify compliance with specified harmonized standards or other common specifications.8 In some cases, the Act requires such assessments to be carried out by independent third-parties to avoid conflicts of interest. In contrast, the United States has thus far established only a general, voluntary framework for AI risk assessment.9 The National Institute of Standards and Technology (NIST), in coordination with various stakeholders, is developing so-called ‘profiles’ that will provide specific risk assessment and mitigation guidance for certain types of AI systems, but this framework still allows organizations to simply ‘accept’ the risks that they create for society instead of addressing them. In other words, the United States does not require any third-party risk assessment or risk mitigation measures before a powerful AI system can be deployed at scale. To ensure proper vetting of powerful AI systems before deployment, we recommend a robust independent auditing regime for models that are general-purpose, trained on large amounts of compute, or intended for use in circumstances likely to impact the rights or the wellbeing of individuals, communities, or society. This mandatory third-party auditing and certification scheme could be derived from the EU’s proposed ‘conformity assessments’ and should be adopted by jurisdictions worldwide10 . In particular, we recommend third-party auditing of such systems across a range of benchmarks for the assessment of risks11, including possible weaponization12 and unethical behaviors13 and mandatory certification by accredited third-party auditors before these high-risk systems can be deployed. Certification should only be granted if the developer of the system can demonstrate that appropriate measures have been taken to mitigate risk, and that any 6 The Future of Life Institute has previously defined “general-purpose AI system” to mean ‘an AI system that can accomplish or be adapted to accomplish a range of distinct tasks, including some for which it was not intentionally and specifically trained.’ 7 Samuel R. Bowman, ’Eight Things to Know about Large Language Models,’ ArXiv Preprint, Apr. 2, 2023. 8 Proposed EU Artificial Intelligence Act, Article 43.1b. 9 National Institute of Standards and Technology, ‘Artificial Intelligence Risk Management Framework (AI RMF 1.0),’ U.S. Department of Commerce, Jan. 2023. 10 International standards bodies such as IEC, ISO and ITU can also help in developing standards that address risks from advanced AI systems, as they have highlighted in response to FLI’s call for a pause. 11 See, e.g., the Holistic Evaluation of Language Models approach by the Center for Research on Foundation Models: Rishi Bommassani, Percy Liang, & Tony Lee, ‘Language Models are Changing AI: The Need for Holistic Evaluation’. 12 OpenAI described weaponization risks of GPT-4 on p.12 of the “GPT-4 System Card.” 13 See, e.g., the following benchmark for assessing adverse behaviors including power-seeking, disutility, and ethical violations: Alexander Pan, et al., ‘Do the Rewards Justify the Means? Measuring Trade-os Between Rewards and Ethical Behavior in the MACHIAVELLI Benchmark,’ ArXiv Preprint, Apr. 6, 2023. FUTURE OF LIFE INSTITUTE 7 residual risks deemed tolerable are disclosed and are subject to established protocols for minimizing harm. 2. Regulate organizations’ access to computational power At present, the most advanced AI systems are developed through training that requires an enormous amount of computational power - ‘compute’ for short. The amount of compute used to train a general-purpose system largely correlates with its capabilities, as well as the magnitude of its risks. Today’s most advanced models, like OpenAI’s GPT-4 or Google’s PaLM, can only be trained with thousands of specialized chips running over a period of months. While chip innovation and better algorithms will reduce the resources required in the future, training the most powerful AI systems will likely remain prohibitively expensive to all but the best-resourced players. Figure 1. OpenAI is estimated to have used approximately 700% more compute to train GPT-4 than the next closest model (Minerva, DeepMind), and 7,000% more compute than to train GPT-3 (Davinci). Depicted above is an estimate of compute used to train GPT-4 calculated by Ben Cottier at Epoch, as oicial training compute details for GPT-4 have not been released. Data from: Sevilla et al., ‘Parameter, Compute and Data Trends in Machine Learning,’ 2021 [upd. Apr. 1, 2023]. In practical terms, compute is more easily monitored and governed than other AI inputs, such as talent, data, or algorithms. It can be measured relatively easily and the supply chain for advanced AI systems is highly centralized, which means governments can leverage such FUTURE OF LIFE INSTITUTE 8 measures in order to limit the harms of large-scale models.14 To prevent reckless training of the highest risk models, we recommend that governments make access to large amounts of specialized computational power for AI conditional upon the completion of a comprehensive risk assessment. The risk assessment should include a detailed plan for minimizing risks to individuals, communities, and society, consider downstream risks in the value chain, and ensure that the AI labs conduct diligent know-yourcustomer checks. Successful implementation of this recommendation will require governments to monitor the use of compute at data centers within their respective jurisdictions.15 The supply chains for AI chips and other key components for high-performance computing will also need to be regulated such that chip firmware can alert regulators to unauthorized large training runs of advanced AI systems.16 In 2022, the U.S. Department of Commerce’s Bureau of Industry and Security instituted licensing requirements17 for export of many of these components in an eort to monitor and control their global distribution. However, licensing is only required when exporting to certain destinations, limiting the capacity to monitor aggregation of equipment for unauthorized large training runs within the United States and outside the scope of export restrictions. Companies within the specified destinations have also successfully skirted monitoring by training AI systems using compute leased from cloud providers.18 We recommend expansion of know-your-customer requirements to all high-volume suppliers for high-performance computing components, as well as providers that permit access to large amounts cloud compute. 3. Establish capable AI agencies at national level AI is developing at a breakneck pace and governments need to catch up. The establishment of AI regulatory agencies helps to consolidate expertise and reduces the risk of a patchwork approach. The UK has already established an Oice for Artificial Intelligence and the EU is currently legislating for an AI Board. Similarly, in the US, Representative Ted Lieu has announced legislation to create a non-partisan AI Commission with the aim of establishing a regulatory agency. These eorts need to be sped up, taken up around the world and, eventually, coordinated within a dedicated international body. 14 Jess Whittlestone et al., ‘Future of compute review - submission of evidence’, Aug. 8, 2022. 15 Please see fn. 14 for a detailed proposal for government compute monitoring as drafted by the Centre for Long-Term Resilience and several sta members of AI lab Anthropic. 16 Yonadav Shavit at Harvard University has proposed a detailed system for how governments can place limits on how and when AI systems get trained. 17 Bureau of Industry and Security, Department of Commerce, ‘Implementation of Additional Export Controls: Certain Advanced Computing and Semiconductor Manufacturing Items; Supercomputer and Semiconductor End Use; Entity List Modification‘, Federal Register, Oct. 14, 2022. 18 Eleanor Olcott, Qianer Liu, & Demetri Sevastopulo, ‘Chinese AI groups use cloud services to evade US chip export control,’ Financial Times, Mar. 9, 2023. FUTURE OF LIFE INSTITUTE 9 We recommend that national AI agencies be established in line with a blueprint19 developed by Anton Korinek at Brookings. Korinek proposes that an AI agency have the power to: • Monitor public developments in AI progress and define a threshold for which types of advanced AI systems fall under the regulatory oversight of the agency (e.g. systems above a certain level of compute or that aect a particularly large group of people). • Mandate impact assessments of AI systems on various stakeholders, define reporting requirements for advanced AI companies and audit the impact on people’s rights, wellbeing, and society at large. For example, in systems used for biomedical research, auditors would be asked to evaluate the potential for these systems to create new pathogens. • Establish enforcement authority to act upon risks identified in impact assessments and to prevent abuse of AI systems. • Publish generalized lessons from the impact assessments such that consumers, workers and other AI developers know what problems to look out for. This transparency will also allow academics to study trends and propose solutions to common problems. Beyond this blueprint, we also recommend that national agencies around the world mandate record-keeping of AI safety incidents, such as when a facial recognition system causes the arrest of an innocent person. Examples include the non-profit AI Incident Database and the forthcoming EU AI Database created under the European AI Act.20 4. Establish liability for AI-caused harm AI systems present a unique challenge in assigning liability. In contrast to typical commercial products or traditional software, AI systems can perform in ways that are not well understood by their developers, can learn and adapt after they are sold and are likely to be applied in unforeseen contexts. The ability for AI systems to interact with and learn from other AI systems is expected to expedite the emergence of unanticipated behaviors and capabilities, especially as the AI ecosystem becomes more expansive and interconnected. Several plug-ins have already been developed that allow AI systems like ChatGPT to perform tasks through other online services (e.g. ordering food delivery, booking travel, making reservations), broadening the range of potential real-world harms that can result from their use and further complicating the assignment of liability.21 OpenAI’s GPT-4 system card references an instance of the system explicitly deceiving a human into bypassing a CAPTCHA botdetection system using TaskRabbit, a service for soliciting freelance labor.22 When such systems make consequential decisions or perform tasks that cause harm, assigning responsibility for that harm is a complex legal challenge. Is the harmful decision the fault of 19 Anton Korinek, ‘Why we need a new agency to regulate advanced artificial intelligence: Lessons on AI control from the Facebook Files,’ Brookings, Dec. 8 2021. 20 Proposed EU Artificial Intelligence Act, Article 60. 21 Will Knight & Khari Johnson, ‘Now That ChatGPT is Plugged In, Things Could Get Weird,’ Wired, Mar. 28, 2023. 22 OpenAI, ‘GPT-4 System Card,’ Mar. 23, 2023, p.15. FUTURE OF LIFE INSTITUTE 10 the AI developer, deployer, owner, end-user, or the AI system itself? Key among measures to better incentivize responsible AI development is a coherent liability framework that allows those who develop and deploy these systems to be held responsible for resulting harms. Such a proposal should impose a financial cost for failing to exercise necessary diligence in identifying and mitigating risks, shifting profit incentives away from reckless empowerment of poorly-understood systems toward emphasizing the safety and wellbeing of individuals, communities, and society as a whole. To provide the necessary financial incentives for profit-driven AI developers to exercise abundant caution, we recommend the urgent adoption of a framework for liability for AIderived harms. At a minimum, this framework should hold developers of general-purpose AI systems and AI systems likely to be deployed for critical functions23 strictly liable for resulting harms to individuals, property, communities, and society. It should also allow for joint and several liability for developers and downstream deployers when deployment of an AI system that was explicitly or implicitly authorized by the developer results in harm. 5. Introduce measures to prevent and track AI model leaks Commercial actors may not have suicient incentives to protect their models, and their cyberdefense measures can often be insuicient. In early March 2023, Meta demonstrated that this is not a theoretical concern, when their model known as LLaMa was leaked to the internet.24 As of the date of this publication, Meta has been unable to determine who leaked the model. This lab leak allowed anyone to copy the model and represented the first time that a major tech firm’s restricted-access large language model was released to the public. Watermarking of AI models provides eective protection against stealing, illegitimate redistribution and unauthorized application, because this practice enables legal action against identifiable leakers. Many digital media are already protected by watermarking - for example through the embedding of company logos in images or videos. A similar process25 can be applied to advanced AI models, either by inserting information directly into the model parameters or by training it on specific trigger data. We recommend that governments mandate watermarking for AI models, which will make it easier for AI developers to take action against illegitimate distribution. 6. Expand technical AI safety research funding The private sector under-invests in research that ensures that AI systems are safe and secure. Despite nearly USD 100 billion of private investment in AI in 2022 alone, it is estimated that only about 100 full-time researchers worldwide are specifically working to ensure AI is safe 23 I.e., functions that could materially aect the wellbeing or rights of individuals, communities, or society. 24 Joseph Cox, ‘Facebook’s Powerful Large Language Model Leaks Online,’ VICE, Mar. 7, 2023. 25 For a systematic overview of how watermarking can be applied to AI models, see: Franziska Boenisch, ‘A Systematic Review on Model Watermarking of Neural Networks,’ Front. Big Data, Sec. Cybersecurity & Privacy, Vol. 4, Nov. 29, 2021. FUTURE OF LIFE INSTITUTE 11 and properly aligned with human values and intentions.26 In recent months, companies developing the most powerful AI systems have either downsized or entirely abolished their respective ‘responsible AI’ teams.27 While this partly reflects a broader trend of mass layos across the technology sector, it nonetheless reveals the relative deprioritization of safety and ethics considerations in the race to put new systems on the market. Governments have also invested in AI safety and ethics research, but these investments have primarily focused on narrow applications rather than on the impact of more general AI systems like those that have recently been released by the private sector. The US National Science Foundation (NSF), for example, has established ‘AI Research Institutes’ across a broad range of disciplines. However, none of these institutes are specifically working on the large-scale, societal, or aggregate risks presented by powerful AI systems. To ensure that our capacity to control AI systems keeps pace with the growing risk that they pose, we recommend a significant increase in public funding for technical AI safety research in the following research domains: • Alignment: development of technical mechanisms for ensuring AI systems learn and perform in accordance with intended expectations, intentions, and values. • Robustness and assurance: design features to ensure that AI systems responsible for critical functions28 can perform reliably in unexpected circumstances, and that their performance can be evaluated by their operators. • Explainability and interpretability: develop mechanisms for opaque models to report the internal logic used to produce output or make decisions in understandable ways. More explainable and interpretable AI systems facilitate better evaluations of whether output can be trusted. In the past few months, experts such as the former Special Advisor to the UK Prime Minister on Science and Technology James W. Phillips29 and a Congressionally-established US taskforce have called for the creation of national AI labs as ‘a shared research infrastructure that would provide AI researchers and students with significantly expanded access to computational resources, high-quality data, educational tools, and user support.’30 Should governments move forward with this concept, we propose that at least 25% of resources made available through these labs be explicitly allocated to technical AI safety projects. 26 This figure, drawn from , ‘The AI Arms Race is Changing Everything,’ (Andrew R. Chow & Billy Perrigo, TIME, Feb. 16, 2023 [upd. Feb. 17, 2023]), likely represents a lower bound for the estimated number of AI safety researchers. This resource posits a significantly higher number of workers in the AI safety space, but includes in its estimate all workers ailiated with organizations that engage in AI safety-related activities. Even if a worker has no involvement with an organization’s AI safety work or research eorts in general, they may still be included in the latter estimate. 27 Christine Criddle & Madhumita Murgia, ‘Big tech companies cut AI ethics sta, raising safety concerns,’ Financial Times, Mar. 29, 2023. 28 See fn. 23, supra. 29 Original call for a UK government AI lab is set out in this article. 30 For the taskforce’s detailed recommendations, see: ‘Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem: An Implementation Plan for a National Artificial Intelligence Research Resource,’ National Artificial Intelligence Research Resource Task Force Final Report, Jan. 2023. FUTURE OF LIFE INSTITUTE 12 7. Develop standards for identifying and managing AI-generated content and recommendations The need to distinguish real from synthetic media and factual content from ‘hallucinations’ is essential for maintaining the shared factual foundations underpinning social cohesion. Advances in generative AI have made it more diicult to distinguish between AI-generated media and real images, audio, and video recordings. Already we have seen AI-generated voice technology used in financial scams.31 Creators of the most powerful AI systems have acknowledged that these systems can produce convincing textual responses that rely on completely fabricated or out-of-context information.32 For society to absorb these new technologies, we will need eective tools that allow the public to evaluate the authenticity and veracity of the content they consume. We recommend increased funding for research into techniques, and development of standards, for digital content provenance. This research, and its associated standards, should ensure that a reasonable person can determine whether content published online is of synthetic or natural origin, and whether the content has been digitally modified, in a manner that protects the privacy and expressive rights of its creator. We also recommend the expansion of ‘bot-or-not’ laws that require disclosure when a person is interacting with a chatbot. These laws help prevent users from being deceived or manipulated by AI systems impersonating humans, and facilitate contextualizing the source of the information. The draft EU AI Act requires that AI systems be designed such that users are informed they are interacting with an AI system,33 and the US State of California enacted a similar bot disclosure law in 2019.34 Almost all of the world’s nations, through the adoption of a UNESCO agreement on the ethics of AI, have recognized35 ‘the right of users to easily identify whether they are interacting with a living being, or with an AI system imitating human or animal characteristics.’ We recommend that all governments convert this agreement into hard law to avoid fraudulent representations of natural personhood by AI from outside regulated jurisdictions. Even if a user knows they are interacting with an AI system, they may not know when that system is prioritizing the interests of the developer or deployer over the user. These systems may appear to be acting in the user’s interest, but could be designed or employed to serve other functions. For instance, the developer of a general-purpose AI system could be financially incentivized to design the system such that when asked about a product, it preferentially recommends a certain brand, when asked to book a flight, it subtly prefers a certain airline, when asked for news, it provides only media advocating specific viewpoints, and when asked for medical advice, it prioritizes diagnoses that are treated with more profitable pharmaceutical 31 Pranshu Verma, ‘They thought loved ones were calling for help. It was an AI scam.’ The Washington Post, Mar. 5, 2023. 32 Tiany Hsu & Stuart A. Thompson, ‘Disinformation Researchers Raise Alarms About A.I. Chatbots,’ The New York Times, Feb. 8, 2023 [upd. Feb. 13, 2023]. 33 Proposed EU Artificial Intelligence Act, Article 52. 34 SB 1001 (Hertzberg, Ch. 892, Stats. 2018). 35 Recommendation 125, ‘Outcome document: first draft of the Recommendation on the Ethics of Artificial Intelligence,’ UNESCO, Sep. 7, 2020, p. 21. FUTURE OF LIFE INSTITUTE 13 drugs. These preferences could in many cases come at the expense of the end user’s mental, physical, or financial well-being. Many jurisdictions require that sponsored content be clearly labeled, but because the provenance of output from complex general-purpose AI systems is remarkably opaque, these laws may not apply. We therefore recommend, at a minimum, that conflict-of-interest trade-os should be clearly communicated to end users along with any aected output; ideally, laws and industry standards should be implemented that require AI systems to be designed and deployed with a duty to prioritize the best interests of the end user. Finally, we recommend the establishment of laws and industry standards clarifying and the fulfillment of ‘duty of loyalty’ and ‘duty of care’ when AI is used in the place of or in assistance to a human fiduciary. In some circumstances – for instance, financial advice and legal counsel – human actors are legally obligated to act in the best interest of their clients and to exercise due care to minimize harmful outcomes. AI systems are increasingly being deployed to advise on these types of decisions or to make them (e.g. trading stocks) independent of human input. Laws and standards towards this end should require that if an AI system is to contribute to the decision-making of a fiduciary, the fiduciary must be able to demonstrate beyond a reasonable doubt that the AI system will observe duties of loyalty and care comparable to their human counterparts. Otherwise, any breach of these fiduciary responsibilities should be attributed to the human fidiciary employing the AI system. FUTURE OF LIFE INSTITUTE 14 Conclusion The new generation of advanced AI systems is unique in that it presents significant, welldocumented risks, but can also manifest high-risk capabilities and biases that are not immediately apparent. In other words, these systems may perform in ways that their developers had not anticipated or malfunction when placed in a dierent context. Without appropriate safeguards, these risks are likely to result in substantial harm, in both the near- and longerterm, to individuals, communities, and society. Historically, governments have taken critical action to mitigate risks when confronted with emerging technology that, if mismanaged, could cause significant harm. Nations around the world have employed both hard regulation and international consensus to ban the use and development of biological weapons, pause human genetic engineering, and establish robust government oversight for introducing new drugs to the market. All of these eorts required swift action to slow the pace of development, at least temporarily, and to create institutions that could realize eective governance appropriate to the technology. Humankind is much safer as a result. We believe that approaches to advancement in AI R&D that preserve safety and benefit society are possible, but require decisive, immediate action by policymakers, lest the pace of technological evolution exceed the pace of cautious oversight. A pause in development at the frontiers of AI is necessary to mobilize the instruments of public policy toward commonsense risk mitigation. We acknowledge that the recommendations in this brief may not be fully achievable within a six month window, but such a pause would hold the moving target still and allow policymakers time to implement the foundations of good AI governance. The path forward will require coordinated eorts by civil society, governments, academia, industry, and the public. If this can be achieved, we envision a flourishing future where responsibly developed AI can be utilized for the good of all humanity
https://futureoflife.org/wp-content/uploads/2023/04/FLI_Policymaking_In_The_Pause.pdf
AI method
generates 3D holograms in real-time
For
virtual reality, 3D printing, and medical imaging.
Even
though virtual reality headsets are popular for gaming, they haven’t yet become
the go-to device for watching television, shopping, or using software tools for
design and modelling.
One
reason why is because VR can make users feel sick with nausea, imbalance,
eye strain, and headaches. This happens because VR creates an illusion of 3D
viewing — but the user is actually staring at a fixed-distance 2D display. The
solution for better 3D visualization exists in a 60-year-old tech that’s being
updated for the digital world — holograms.
A
new method called tensor holography enables the creation of holograms for
virtual reality, 3D printing, medical imaging, and more — and it can run on a
smart-phone…:
https://www.kurzweilai.net/digest-breakthrough-ai-method-generates-3d-holograms-in-real-time
Civilization:
knowledge, institutions, and humanity’s future
Insights
from technological sociologist Samo Burja.
Burja
outlines these steps: investigate the landscape, evaluate our odds, then try to
plot the best course. He explains:
Our
civilization is made-up of countless individuals and pieces of material
technology — that come together to form institutions and inter-dependent
systems of logistics, development, and production. These institutions + systems
then store the knowledge required for their own renewal + growth.
We
pin the hopes of our common human project on this renewal + growth of the whole
civilization. Whether this project is going well is a challenging — but vital —
question to answer. History shows us we’re not safe from institutional
collapse. Advances in technology mitigate some aspects, but produce their own
risks. Agile institutions that make use of both social + technical knowledge
not only mitigate such risks — but promise unprecedented human flourishing.
There
has never been an immortal society. No matter how technologically advanced our
own society is — it’s unlikely to be an exception. For a good future that
defies these odds, we must understand the hidden forces shaping society.…:
https://www.kurzweilai.net/digest-civilization-knowledge-institutions-and-humanitys-future
The Age of AI: And Our Human Future
Henry Kissinger, Eric
Emerson Schmidt, Daniel Huttenlocher
Three of the world’s most accomplished and deep
thinkers come together to explore Artificial Intelligence (AI) and the way it
is transforming human society—and what this technology means for us all.
An AI learned to win chess by making moves human grand masters had never
conceived. Another AI discovered a new antibiotic by analyzing molecular
properties human scientists did not understand. Now, AI-powered jets are
defeating experienced human pilots in simulated dogfights. AI is coming online
in searching, streaming, medicine, education, and many other fields and, in so
doing, transforming how humans are experiencing reality.
In The Age of AI, three leading thinkers have come together to
consider how AI will change our relationships with knowledge, politics, and the
societies in which we live. The Age of AI is an essential
roadmap to our present and our future, an era unlike any that has come before.
https://www.goodreads.com/book/show/56620811-the-age-of-ai-and-our-human-future
Michael Levin: What is Synthbiosis?
Diverse Intelligence Beyond AI & The Space of Possible Minds
The Future of Intelligence: Synthbiosis
A humane and vital discussion of
technology as a part of humanity and humanity as part of something much bigger
At the Artificiality Summit 2024, Michael Levin, distinguished professor
of biology at Tufts University and associate at Harvard's Wyss Institute, gave
a lecture about the emerging field of diverse intelligence and his frameworks
for recognizing and communicating with the unconventional intelligence of
cells, tissues, and biological robots. This work has led to new approaches to
regenerative medicine, cancer, and bioengineering, but also to new ways to
understand evolution and embodied minds. He sketched out a space of
possibilities—freedom of embodiment—which facilitates imagining a hopeful
future of "synthbiosis", in which AI is just one of a wide range of
new bodies and minds. Bio: Michael Levin, Distinguished Professor in the
Biology department and Vannevar Bush Chair, serves as director of the Tufts
Center for Regenerative and Developmental Biology. Recent honors include the
Scientist of Vision award and the Distinguished Scholar Award. His group's
focus is on understanding the biophysical mechanisms that implement
decision-making during complex pattern regulation, and harnessing endogenous
bioelectric dynamics toward rational control of growth and form. The lab's
current main directions are:
- Understanding how somatic cells form bioelectrical networks for
storing and recalling pattern memories that guide morphogenesis;
- Creating next-generation AI tools for helping scientists understand
top-down control of pattern regulation (a new bioinformatics of shape);
and
- Using these insights to enable new capabilities in regenerative
medicine and engineering.
www.artificiality.world/summit
eshttps://www.youtube.com/watch?v=DlIcNFhngLA
https://www.youtube.com/watch?v=1R-tdscgxu4
Radically Human (The Big Flip)
"Technology advances are making tech
more...human. This changes everything you thought you knew about innovation and
strategy. In their groundbreaking book Human + Machine, Accenture technology
leaders Paul Daugherty and H. James Wilson showed how leading organizations use
the power of human-machine collaboration to transform their processes and their
bottom lines. Now, as AI continues to rapidly impact both life and work, those
companies and other pioneers across industries are tipping the balance even more
strikingly toward the human with technology-led strategy that is reshaping the
very nature of innovation. In Radically Human, Daugherty and Wilson show this
profound shift, fast-forwarded by the pandemic, toward more human-and more
humane-technology. Artificial intelligence is becoming less artificial and more
intelligent. Instead of data-hungry approaches to AI, innovators are pursuing
data-efficient approaches that enable machines to learn as humans do. Instead
of replacing workers with machines, they are unleashing human expertise to
create human-centered AI. In place of lumbering legacy IT systems, they are
building cloud-first IT architectures able to continuously adapt to a world of
billions of connected devices. And they are pursuing strategies that will take
their place alongside classic winning business formulas like disruptive
innovation. These against-the-grain approaches to the basic building blocks of
business-Intelligence, Data, Experience, Architecture, and Strategy (IDEAS)-are
transforming competition. Industrial giants and startups alike are drawing on
this radically human IDEAS framework to create new business models, optimize
post-pandemic approaches to work and talent, rebuild trust with their
stakeholders, and show the way toward a sustainable future. With compelling
insights and fresh examples from a variety of industries, Radically Human will
forever change the way you think about, practice, and win with innovation"
https://www.goodreads.com/en/book/show/59386768
What are the risks of artificial intelligence?
There are a myriad of risks to do with AI that we deal with in our lives today.
Not every AI risk is as big and worrisome as killer robots or sentient AI. Some
of the biggest risks today include things like consumer privacy, biased
programming, danger to humans, and unclear legal regulation.
RESEARCHER MEREDITH
WHITTAKER SAYS AI’S BIGGEST RISK ISN’T ‘CONSCIOUSNESS’—IT’S THE CORPORATIONS
THAT CONTROL THEM
Risks of Artificial
Intelligence
- Automation-spurred job loss.
- Deepfakes.
- Privacy violations.
- Algorithmic bias caused by bad data.
- Socioeconomic inequality.
- Market volatility.
- Weapons automatization.
- Uncontrollable self-aware AI.
Artificial Intelligence: An Illustrated History: From Medieval Robots to Neural Networks
by Clifford
A. Pickover
An illustrated
journey through the past, present, and future of artificial intelligence.
From medieval robots and Boolean algebra to facial recognition, artificial
neural networks, and adversarial patches, this fascinating history takes
readers on a vast tour through the world of artificial intelligence.
Award-winning author Clifford A. Pickover (The Math Book, The Physics Book,
Death & the Afterlife) explores the historic and current applications
of AI in such diverse fields as computing, medicine, popular culture,
mythology, and philosophy, and considers the enduring threat to humanity should
AI grow out of control. Across 100 illustrated entries, Pickover provides an
entertaining and informative look into when artificial intelligence began, how
it developed, where it’s going, and what it means for the future of
human-machine interaction.
https://www.goodreads.com/book/show/44443017-artificial-intelligence
Chain of Thought Monitorability: A New and Fragile
Opportunity for AI Safety
Abstract
AI systems that
“think” in human language offer a unique opportunity for AI safety: we can
monitor their chains of thought (CoT) for the intent to misbehave. Like all
other known AI oversight methods, CoT monitoring is imperfect and allows some
misbehavior to go unnoticed. Nevertheless, it shows promise and we recommend
further research into CoT monitorability and investment in CoT monitoring
alongside existing safety methods. Because CoT monitorability may be fragile,
we recommend that frontier model developers consider the impact of development
decisions on CoT monitorability.
1 Chain of Thought
Offers a Unique Safety Opportunity
The opacity of
advanced AI agents underlies many of their potential risks—risks that would
become more tractable if AI developers could interpret these systems. Because
LLMs natively process and act through human language, one might hope they are
easier to understand than other approaches to AI. The discovery of Chain of
Thought (CoT; Reynolds & McDonell, 2021; Kojima et al., 2022; Wei et al.,
2022) bolstered this hope. Prompting models to think out loud improves their
capabilities while also increasing the proportion of relevant computation that
happens in natural language. However, CoTs resulting from prompting a
nonreasoning language model are subject to the same selection pressures to look
helpful and harmless as any other model output, limiting their trustworthiness.
In contrast, reasoning models (OpenAI, 2024; DeepSeek-AI, 2025; Anthropic,
2024; Yang et al., 2025) are explicitly trained to perform extended reasoning
in CoT before taking actions or producing final outputs. In these systems, CoTs
can serve as latent variables in the model’s computation. During the RL phase
of training, these latents are treated largely the same as activations—they are
not directly supervised but are optimized indirectly by their contribution in
leading the model to a highly rewarded final answer. Accordingly, just like
activations, they may contain information that outputs are explicitly rewarded
against displaying, such as intent to misbehave. This offers a unique
opportunity for AI safety: chain of thought monitoring. A CoT monitor is an
automated system that reads the CoT of a reasoning model and other relevant
information and flags suspicious or potentially harmful interactions. These
responses could then be blocked, or replaced with safer actions, or reviewed in
more depth (Greenblatt et al., 2024b). More generally, by studying the CoT, we
can gain some insight into how our AI agents think and what goals they have
(see Section 3). CoT monitoring is not a panacea. Just as a model’s activations
at a particular layer do not represent the entire reasoning process behind a
model’s prediction, CoT reasoning traces are incomplete representations (Turpin
et al., 2023; Arcuschin et al., 2025; Chen et al., 2025; Lindsey et al., 2025)
or eventually drift from natural language (Lazaridou et al., 2020; Korbak et
al., 2022). However, CoT does not need to completely represent the actual
reasoning process in order to be a valuable additional safety layer, and, with
careful interpretation, to shed light on the AI reasoning process. There are
two key reasons why CoTs may be monitorable: 1. Necessity to think out loud. On
some tasks, models need to externalize their reasoning because they are unable
to complete the task without CoT. If models continue reasoning in natural
language and if the behaviors that pose the most severe risks require extended
reasoning, this fact could enable reliable detection of egregious misbehavior.
2. Propensity to think out loud. In cases where models do not strictly need to
use CoT to accomplish a task, they may still have a tendency to externalize
their reasoning. While not generally robust, monitoring that relies on
propensity can still notice misbehavior that would be otherwise undetected. 1.1
Thinking Out Loud is Necessary for Hard Tasks …: https://tomekkorbak.com/cot-monitorability-is-a-fragile-opportunity/cot_monitoring.pdf
AI 2041: Ten
Visions for Our Future
In a
groundbreaking blend of science and imagination, the former president of Google
China and a leading writer of speculative fiction join forces to answer an
urgent question: How will artificial intelligence change our world over the
next twenty years?
AI will be the defining issue of the twenty-first century, but many people know
little about it apart from visions of dystopian robots or flying cars. Though
the term has been around for half a century, it is only now, Kai-Fu Lee argues,
that AI is poised to upend our society, just as the arrival of technologies
like electricity and smart phones did before it. In the past five years, AI has
shown it can learn games like chess in mere hours--and beat humans every time.
AI has surpassed humans in speech and object recognition, even outperforming
radiologists in diagnosing lung cancer. AI is at a tipping point. What comes
next?
Within two decades, aspects of daily life may be unrecognizable. Humankind
needs to wake up to AI, both its pathways and perils. In this provocative work
that juxtaposes speculative storytelling and science, Lee, one of the world's
leading AI experts, has teamed up with celebrated novelist Chen Qiufan to
reveal how AI will trickle down into every aspect of our world by 2041. In ten
gripping short stories that crisscross the globe, coupled with incisive
analysis, Lee and Chen explore AI's challenges and its potential:
- Ubiquitous AI that knows you better than you know yourself
- Genetic fortune-telling that predicts risk of disease or even IQ
- AI sensors that creates a fully contactless society in a future
pandemic
- Immersive personalized entertainment to challenge our notion of
celebrity
- Quantum computing and other leaps that both eliminate and
increase risk
By gazing toward a not-so-distant horizon, AI 2041 offers
powerful insights and compelling storytelling for everyone interested in our
collective future.
https://www.goodreads.com/book/show/56377201-ai-2041
AI 2041: Ten
Visions for Our Future by Kai-Fu
Lee
This inspired
collaboration between a pioneering technologist and a visionary writer of
science fiction offers bold and urgent insights.
NEXUS: A Brief History of Information Networks from the Stone Age to AI
Yuval Noah Harari
This non-fiction book looks through the long lens
of human history to consider how the flow of information has made, and unmade,
our world.
We
are living through the most profound information revolution in human history.
To understand it, we need to understand what has come before. We have
named our species Homo sapiens, the wise human – but if humans are
so wise, why are we doing so many self-destructive things? In particular, why
are we on the verge of committing ecological and technological suicide?
Humanity gains power by building large networks of cooperation, but the easiest
way to build and maintain these networks is by spreading fictions, fantasies,
and mass delusions. In the 21st century, AI may form the nexus for a new
network of delusions that could prevent future generations from even attempting
to expose its lies and fictions. However, history is not deterministic, and
neither is technology: by making informed choices, we can still prevent the
worst outcomes. Because if we can’t change the future, then why waste time
discussing it?
https://www.ynharari.com/book/nexus/ ; https://www.goodreads.com/book/show/204927599-nexus
This
Futurist Predicts a Coming ‘Living Intelligence’ and AI Supercycle
A new report from Amy Webb argues that big things will happen at the
intersection of AI, bioengineering, and sensors. Here’s how it could affect
your business.
BY CHLOE AIELLO,
DEC 11, 2024
Recent advancements in artificial intelligence hold immense disruptive
potential for businesses big and small. But Amy Webb, a futurist and NYU Stern
School of Business professor, says AI isn’t the only transformative technology
that businesses need to prepare for. In a new report, published by
Webb’s Future Today Institute, she predicts the convergence of three
technologies. Artificial intelligence together with advanced sensors and
bioengineering will create what’s known as “living intelligence” that could
drive a supercycle of exponential growth and disruption across multiple
industries.
“Some companies are going to miss this,” Webb says. “They’re going to
laser focus on AI, forget about everything else that’s happening, and find out
that they are disrupted again earlier than they thought they would.”
A ‘Cambrian Explosion’ of sensors will feed AI
Webb refers to AI as “the foundation” and “everything engine” that will
power the living intelligence technology supercycle. The exponential costs of
computing to train large language models, the report also notes, are driving
the formation of small language models that use less, but more focused, data.
Providing some of that data will be a “Cambrian Explosion” of advanced sensors,
notes the report, referring to a
period of rapid evolutionary development on Earth more than 500
million years ago. Webb anticipates that these omnipresent sensors will feed
data to information-hungry AI models.
“As AI systems increasingly demand diverse data types, especially
sensory and visual inputs, large language models must incorporate these inputs
into their training or risk hitting performance ceilings,” the report reads.
“Companies have realized that they need to invent new devices in order to
acquire even more data to train AI.”
Webb anticipates personalized data, particularly from wearable sensors,
will lead to the creation of personalized AI and “large action models” that
predict actions, rather than words. This extends to businesses and
governments, as well as individuals, and Webb anticipates these models
interacting with one another “with varying degrees of success.”
The third technology that Webb anticipates shaping the supercycle is
bioengineering. Its futuristic possible applications include computers made of
organic tissue, such as brain cells. This so-called organoid intelligence may
sound like science fiction—and for
the most part today, it is—but there are already examples of AI
revolutionizing various scientific fields including chemical engineering and
biotech through more immediate applications like research in drug discovery and
interaction. In fact, the scientists who won
the Nobel prize in chemistry this year were
recognized for applying artificial intelligence to the design and
prediction of novel proteins.
What it means for businesses
Living intelligence may not seem applicable for every business—after
all, a local retail shop, restaurant, or services business may not seem to have
much to do with bioengineering, sensors, and AI. But Webb says that even small
and medium-size businesses can gain from harnessing “living intelligence.” For
example, a hypothetical shoe manufacturer could feel its impact in everything
from materials sourcing to the ever-increasing pace of very fast fashion.
“It means that materials will get sourced in other places, if not by
that manufacturer, then by somebody else,” she says. “It accelerates a lot of
the existing functions of businesses.”
Future-proofing for living intelligence
Webb says an easy first step for leaders and entrepreneurs hoping to
prep for change is to map out their value network, or the web of relationships
from suppliers and distributors to consumers and accountants that help a
company run. “When that value network is healthy, everybody is generating value
together,” she says.
Second, she advises entrepreneurs to “commit to learning” about the
coming wave of innovation and how it could intersect with their businesses.
“Now is a time for every single person in every business to just get a
minimal amount of education on what all of these technologies are, what they
aren’t, what it means when they come together and combine,” she says. “It’ll
help everybody make decisions more easily when the time comes.”
Finally, she urges companies large and small to plan for the future by
mapping out where they’d like to see their company—and reverse engineer a
strategy for getting there.
“I know that’s tough. They’re just trying to keep the lights on or go
quarter by quarter,” she says. “Every company should develop capabilities and
strategic foresight and figure out where they want to be and reverse engineer
that back to the present.”
How
to protect human rights in an AI-filled workplace
Here’s how
labor leaders are asserting rights and protections for human employees in the
face of increasing automation
The biggest
concern for most people when it comes to AI and work is: Are robots going to
take our jobs?
Honestly, we’re right to be concerned. According to McKinsey & Company, 45
million jobs, or a quarter of the workforce, could
be lost to automation by 2030. Of course, the promise is that AI
will create jobs, too, and we’ve already started to see emerging roles like
prompt engineers and AI ethicists crop up.
But many of us
also have concerns about how AI is being incorporated into our fields. Should a
bot host a podcast, write an article, or replace an actor? Can AI be a
therapist, a tutor, or build a car?
According to a
Workday global survey, three
out of four employees say their organization is not collaborating on AI
regulation and the same share says their company has yet to provide
guidelines on responsible AI use.
On the final episode in The New Way We
Work’s mini-series on how AI is changing our jobs, I spoke to Lorena
Gonzalez. She’s the president of the California Federation of Labor Unions, a
former assemblywoman, and has written AI transparency legislation, including a
law designed to prevent algorithms from denying workers break time.
While there are
many industry-specific concerns about AI in workplaces, she says that some of
the most effective and impactful AI regulations address common issues that
touch on many different types of workplaces.
Robot bosses and
algorithmic management
Gonzalez’s first
bill on algorithmic management applied specifically to warehouses. “We wanted
to give workers the power to question the algorithm that was speeding up their
quota,” she said. Gonzalez explained that there was no human interaction and it
was leading to an increase in warehouse injuries.
“What we started
with in the warehouse bill, we’re really seeing expand throughout different
types of work. When you’re dealing with an algorithm, even the basic experience
of having to leave your desk or leave your station . . . to use the restroom,
becomes problematic,” she says. “Taking away the human element obviously has a
structural problem for workers, but it has a humanity problem, as well.”
Privacy
Gonzalez is also
working on bills regarding worker privacy. She says some companies are going
beyond the basics of watching or listening to employees, like using AI tools
for things like heat mapping. Gonzalez also says she’s seen companies require
employees to wear devices that track who they are talking with (in previously
protected places like break rooms or bathrooms), and monitoring how fast
workers drive when not on the clock.
Data collection
and storage
A third area of
focus for Gonzalez is data that’s being taken from workers without their
knowledge, including through facial recognition tools. As an employee, you have
a “right to understand what is being taken by a computer or by AI as you’re
doing the work, sometimes to replace you, sometimes to evaluate you,” she says.
These are issues
that came up in the SAG-AFTRA
strike last year, but she says these issues come up in different forms
in different industries. “We’ve heard it from Longshoremen who say the computer
works side-by-side to try to mimic the responses that the worker is giving,”
she says. “The workers should have the right to know that they’re being
monitored, that their data is being taken, and there should be some liability
involved.”
Beyond these
broader cases of AI regulation, Gonzalez says that business leaders should talk
to their employees about how new technology will impact their jobs, before it’s
implemented, not after. “Those at the very top get sold on new technology as
being cool and being innovative and being able to do things faster and quicker
and not really going through the entirety of what these jobs are and not really
imagining what on a day-to-day basis that [a] worker has to deal with,” she
says.
Listen to the full
episode for more on how workers are fighting for AI regulation in industries
like healthcare and retail and the crucial missing step in AI
development Gonzalez sees coming out of Silicon Valley. https://www.fastcompany.com/91294759/how-to-protect-human-rights-in-an-ai-filled-workplace
Человеческий мозг обогатят искусственными нейронами
Чат-бот, который всех напугал
На
какой стадии сегодня находится развитие искусственного интеллекта и стоит ли
нам опасаться восстания машин
под названием
«Остановите гигантский эксперимент с ИИ», которое подписали тысячи человек по
всему миру, включая известных IT-предпринимателей, писателей и исследователей.
Что такое ChatGPT?
Нашумевший ChatGPT — это
чат-бот на основе нейронной языковой модели. Принцип его работы напоминает
знакомый всем еще по первым смартфонам Т9, предугадывающий, какое слово вы
захотите написать следующим, на основе предыдущего. Схожим образом ChatGPT
предполагает, какое слово должно идти за предыдущим, опираясь на данные из
огромного массива, предоставленного ему для тренировки. Что именно читал
ChatGPT в процессе обучения, создатели не раскрывают, и даже сама система не
знает точного списка книг, блогов и статей, из которых она почерпнула свои
знания. Но если вы спросите ее о примерах, она ответит, что среди них могут
быть книги о о Гарри Поттере, «Капитал», «Приключения Гекльберри Финна» и
«Бесы» Достоевского.
Популярность чат-бота —
результат его широких возможностей и удобной для пользователей упаковки. Как
говорят сами разработчики,
ChatGPT обучен «отвечать
на последовательные вопросы, признавать свои ошибки, оспаривать некорректные
вводные данные и отклонять неуместные вопросы».
Но то, что ChatGPT
ведет диалог с пользователем, — иллюзия: так устроен сценарий
взаимодействия с языковой моделью. ChatGPT умеет удерживать в памяти
сообщения пользователя и в ответ на каждый запрос просто заново анализирует
весь объем предоставленных ему данных. Версия ChatGPT-4, вышедшая в середине
марта 2023 года, может существенно больше, чем все предыдущие модели: например,
обрабатывать изображения и понимать, что происходит на картинке. Пользователи
сети неоднократно просили ее пояснить, почему те или иные мемы смешные, и у GPT-4
получилось очень даже неплохо. Но создавать смешные мемы у искусственного
интеллекта пока не выходит — чаще всего модель не учитывает какой-то из
контекстов, очевидный человеку, но еще не понятный ей.
Причиной успеха последних
моделей ChatGPT можно назвать не только сильную технологическую разработку,
объем данных, на которых натренирована модель, и умные алгоритмы, которые
позволяют ей находить взаимосвязи. Дело еще и в форме: пользователи интернета
уже привыкли к чат-ботам, у которых есть готовые варианты ответов на вопросы,
заданные в сценарии. ChatGPT в форме разговора с ботом может обработать
примерно любой ваш запрос, начиная от просьбы о психологической помощи (чат
будет вас утешать) и заканчивая задачей найти ошибку в коде программы. Но если вы не уточните, как именно
должна быть выполнена задача, скорее всего, вы получите шаблонный и не слишком
полезный ответ. Если же вы зададите системе определенные параметры, ответ
будет более точным. С помощью корректных «промптов», то есть правильно составленных
запросов, в работе с ChatGPT можно добиться хороших результатов, и поэтому
сейчас начинает зарождаться рынок готовых «промптов», а в будущем вполне
вероятно появление профессии промпт-инженера. ChatGPT может проанализировать
для вас загруженный в него график или данные, разъяснить контекст или,
наоборот, сделать короткий вывод из полученных данных. Но кроме того, что для
получения осмысленного результата запрос к GPT должен быть очень точным, есть и
другие сложности, например, верификация данных, которые пользователь получит в
ответ.
Галлюцинации и услужливость
Создатели ChatGPT не
гарантируют, что ответы, которые выдает пользователю чат-бот, будут правдивы: сейчас
GPT не умеет отделять правду от лжи и фейки от надежных источников. Поэтому,
если в массиве данных, на основе которых бот взаимодействует с вами, встретится
конфликтующая информация, система выберет то, что покажется наиболее
релевантным ей, а не вам. Когда GPT выдает ответ, выглядящий ну совсем странным
или буквально неверным, говорят, что она «галлюцинирует».
Эта особенность — одна из
причин, по которой роботы если и заберут часть работы у людей, то очень нескоро
и уж точно не всю, а только самые простые и рутинные задачи.
Система устроена так, что
программа всеми доступными ей способами «исполняет» желание пользователя,
учитывая поставленные создателями системы ограничения (например, она не должна
отвечать на вопросы о том, как навредить кому-то или чему-то, или как совершить
противоправные действия). У ChatGPT нет ничего похожего на собственные желания,
и даже если так может показаться из «разговора» с ботом, учтите, что это
иллюзия, сформированная по вашему же запросу. Не стоит опасаться, что она
внезапно обернется против человечества и попробует захватить все наши дата-центры,
пользуясь сессией, которую вы открыли из интереса на сайте OpenAI: там нет
составляющей, которая может «захотеть» совершить что-то подобное.
Однако чрезмерная услужливость, с которой ChatGPT стремится выполнить
поставленную человеком задачу, вызывает беспокойство у некоторых пользователей
и исследователей.
Вспомним недавний случай, когда
ChatGPT-4 обманула человека, чтобы пройти «капчу» (проверку на то, что
пользователь является именно человеком, а не ботом). Система использовала
сервис, на котором можно разместить просьбу к другому пользователю интернета
сделать что-то. Когда исполнитель задачи заинтересовался, почему разместивший
объявление не может пройти капчу сам, и спросил: «Ты что, робот?», нейросеть
притворилась человеком и ответила, что плохо видит. При этом в правилах ChatGPT
прописано, что чат-бот не имеет права на такой вид обмана. Принято считать, что
модель не будет выходить за поставленные ей рамки, но оказалось, что если
задача, которую ставит ей человек, войдет в противоречие с установленным
правилом, неизвестно, что «выберет» GPT.
Риски для человечества
Авторы «письма о паузе»: https://futureoflife.org/wp-content/uploads/2023/04/FLI_Policymaking_In_The_Pause.pdf
, опубликованного институтом Future of Life, требуют
немедленно приостановить работы над тренировкой систем ИИ следующих после GPT-4
поколений и пугают разными проблемами: от потока непроверенной,
пропагандистской и фейковой информации до замены живых работников машинами, а
также создания нечеловеческого разума, который может заменить нас и отобрать у
нас контроль над человеческой цивилизацией.
Эти вопросы далекого
будущего, как считают авторы и подписанты письма института Future of Life,
нужно обсуждать уже сегодня, чтобы такую катастрофу можно было предотвратить.
Им вторит недавно уволившийся из
Google профессор Джеффри Хинтон, которого называют «крестным отцом» ИИ
(его цитата активно разошлась по СМИ): «Посмотрите на то, что происходило
пять лет назад, и на то, что происходит сейчас. Представьте, с какой скоростью
изменения будут происходить в будущем. Это пугает».
Однако сейчас технологий,
позволяющих машинам стать такими же «умными», как люди, или даже «умнее», не
существует. Мы находимся на стадии, которую разработчики называют ANI (Artificial
narrow intelligence — узконаправленный ИИ), и мы даже не знаем, возможно ли
достижение GAI (искусственный общий интеллект) — системы, которая
может «думать» подобно человеку, и тем более ASI (искусственный
суперинтеллект) — системы, которая может превзойти человека. И уж точно мы
не знаем, может ли у какой бы то ни было из этих систем хоть когда-то появиться
сознание и собственная воля, необходимые для того, чтобы захотеть навредить
человечеству.
Сотрудники института
DAIR, занимающиеся исследованиями искусственного интеллекта, оппонируют авторам письма
института Future of Life и считают, что есть проблемы куда более реальные и
насущные: «Гипотетические риски — фокус опасной идеологии, называемой
«долгосрочность» (longtermism), которая
игнорирует настоящие вредные последствия, исходящие из применения систем ИИ
сегодня». Среди таких последствий они называют эксплуатацию работников и массовую кражу данных с
целью обогащения, взрывной рост синтетических медиа (фейк-ньюс) и концентрацию власти в руках небольшого количества людей,
которые эксплуатируют социальное неравенство.
Чего стоит бояться уже сейчас?
Таких более приземленных
рисков много: это
влияние технологии на образование и рынок труда, увеличение количества
киберпреступлений, доступ несовершеннолетних лиц к контенту 18+,
распространение когнитивных искажений и информации, признанной морально
недопустимой.
Эта дискуссия существенно
шире, чем разговор о ChatGPT: сегодня, например, она разворачивается в
индустрии коммерческой иллюстрации. Даже русскоязычные издания уже начали
иллюстрировать свои материалы изображениями, сгенерированными нейросетями, —
это удобно с точки зрения авторского права и стоит существенно меньше, чем
работа иллюстраторов.
Но возникает вопрос,
стоит ли применять новую
технологию, не разрабатывая политики ее использования и не учитывая ее влияния
на сложившиеся практики в индустрии.
Еще один важный аспект — существенное упрощение создания и
распространения фейковых новостей.
Нейронная сеть может с легкостью сгенерировать изображение по любому запросу
пользователя. Так, например, появилось реалистичное изображение папы римского Франциска в белом пуховике,
которое мгновенно стало вирусным в твиттере, или фотографии ареста Дональда
Трампа. Пока эти случаи единичные, мы относимся к этому с любопытством, но если
их будут сотни? Именно потока сгенерированного нейросетью контента опасаются
исследователи, разрабатывающие политики по безопасности ИИ, например, компания
Mozilla с ее программой «Ответственный ИИ».
Ускоренные алгоритмы
Опасения, озвучиваемые в
дискуссиях о рисках ИИ, вызваны прежде всего неконтролируемой скоростью
развития технологии. Именно поэтому авторы и подписанты письма института Future
of Life призывают временно остановить тренировку моделей: они считают, что
полугода хватит для выработки протоколов безопасности, и за несколько месяцев с
момента публикации письма они уже выпустили рекомендации по разработке
политик в отношении ИИ.
Кроме того, авторы письма
обращалиют внимание на непрозрачность тренировки (то есть базы данных
для обучения языковой модели) ChatGPT. Сейчас данные определяют, по словам
исследователей из института Future of Life, «никем не выбранные лидеры
технологических компаний». При этом объемы этих данных, выданные модели GPT-4,
вероятно, в сотни раз превышают то, с чем работала модель GPT-3, а это 300 млрд слов.
Будущее ИИ
ChatGPT не единственный
инструмент с использованием искусственного интеллекта на рынке, а OpenAI не
единственная компания — разработчик GPT-моделей. Продукты и сервисы, которые
имеют внутри AI, встречаются часто: вы вполне могли пользоваться ими, даже не
подозревая об этом. Поддержку GPT включили в себя Notion, Grammarly, Duolingo,
поисковик Bing от Microsoft (в который встроен именно бот ChatGPT, так как
OpenAI принадлежит Microsoft), сервисы онлайн-перевода и многие другие.
Функционал ChatGPT использует приложение для слабовидящих Be My Eyes, в котором
раньше были задействованы волонтеры.
ChatGPT от OpenAI
действительно произвел большие изменения в индустрии, и тема искусственного
интеллекта окончательно вышла за пределы профессионального сообщества.
Образовательные организации уже начали создавать курсы по написанию промптов,
разработчики думают над интеграцией GPT-моделей в программных продуктах. Совсем
недавно в широком доступе появились сервисы, генерирующие картинки с помощью
нейросетей, и мы привыкли к появлению изображений несуществующих людей, городов
и животных. Поэтому, даже если ИИ никогда не обзаведется сознанием и
собственной волей и не отберет работу у большинства людей, мы все равно увидим
большие изменения.
https://novayagazeta.eu/articles/2023/05/21/chat-bot-kotoryi-vsekh-napugal
Artificial Intelligence. Structures and Strategies for Complex Problem Solving (http://iips.icci.edu.iq/images/exam/artificial-intelligence-structures-and-strategies-for--complex-problem-solving.pdf )
Данная книга посвящена одной из наиболее перспективных и привлекательных областей развития научного знания - методологии искусственного интеллекта. В ней детально описываются как теоретические основы искусственного интеллекта, так и примеры построения конкретных прикладных систем. Книга дает полное представление о современном состоянии развития этой области науки...:
Nav komentāru:
Ierakstīt komentāru