Wednesday, 31 January 2018

Human Capital in the Modern World


                                                                    Omnia mutantur et nos mutamur in illis



Human Capital in the Modern World

Throughout the evolution of civilization, the human being has been an object of ruthless exploitation, always regarded as labour and, under market economy conditions, also as capital. The human personality has been belittled and pushed into the background. But has the situation changed today?
          The consumer society keeps imposing its habits of living: by every available way and means it drives people to chase money so they can spend it on heavily advertised entertainment and varied consumption; it makes employees work at such an intensive pace that there is often no time left for the creative development of the personality, and the desire to communicate and to get to know oneself fades away. For the ordinary working drudge, life is filled with earning money, career interests, and household and family concerns. Many people live under stress and develop complexes and phobias. … continued in the book “Kā atbrīvoties no totalitārisma skavām. Izaicinājums pārvarēt politisko vientiesību” (“How to Get Rid of the Shackles of Totalitarianism: The Challenge of Overcoming Political Naivety”): https://buki.lv/product/ka-atbrivoties-no-totalitarisma-skavam-e-gramata

                                           *     *      *
  
A phenomenon of our age – data and their value

Jānis Rozenbergs, 19 March 2019

According to European Commission estimates, the data economy will account for around 4% of gross domestic product in 2020. From the perspective of a municipality leader, this means enormous opportunities for growth, but at the same time an equally great responsibility to ensure data availability, a supportive environment, and the public's skills to make use of data sets.
Municipalities and state institutions scrupulously accumulate all kinds of data in education, social and health care, accounting, the building authority, development planning and so on, which in theory could be useful. Unfortunately, my experience as a municipality leader shows that we invest enormous effort in collecting and accumulating data, but the possibilities of using it are negligible: the data are rarely in a format that can be analysed and processed, they are not available to the public, and even colleagues do not really understand what to do with them or what informational value they hold.
It has to be admitted that at the moment we, both as a society and as individuals, are somewhat lost in the overabundance of data.
In theory we understand the value of data, but in practice we do not really know how to use it, because data is only raw material; the real value lies in what we can obtain by viewing the data from different angles, while of course observing all aspects of data security.
That is why in Cēsis municipality we have started a discussion on creating an open data strategy. Our goal is to "pull out of the drawer" the data held by the municipality, so that every resident, entrepreneur and non-governmental organisation can work with this data, analyse it in various cross-sections, and base their decisions and choices on truthful and comprehensive information.
I am convinced that the availability of data and the ability to analyse it from various angles open up entirely new horizons.
This increases openness in public governance and supports business development: better decisions about the movement of people, the migration of services, attracting tourists, the best location for a shop or a medical facility, a new home, and much more. Global practice shows that, given free access to data, private business often comes up with its own solutions for data analysis and use. In London, for example, a data analysis tool has been created that compiles which skills will be needed in the future and what interest education opportunities exist in London; using it, a person can find the most convenient and practical way to improve their skills for future needs.
To take the step from beautiful theory into difficult practice, in Cēsis we have begun cooperation with the Latvian Data School (http://eprasmes.lv/latvijas-skolu-atverto-datu-hakatons-2019/), which is part of an international network for promoting public data literacy. We want to become an example for other Latvian cities and municipalities in making data available and usable in a way that creates a real economic effect. Together with the Data School, the initiative for strengthening society's digital identity "Valsts #196" and other partners, we invite the residents and entrepreneurs of Cēsis municipality to suggest ideas and proposals on which data sets would be needed and what support is required to turn data into usable information.
In turn, on 21 March, at the conference "Es. Identitāte. Dati" ("I. Identity. Data") in Cēsis, everyone is invited to step out of their comfort zone and think about how to develop their digital identity and opportunities in the age of data.
In recent years Latvia has made significant progress in building the open data infrastructure and legal framework; now it is time to fill the national open data portal with content that residents and entrepreneurs can actually use, and to develop the skills to turn data into information and into well-founded, smart development decisions.
Jānis Rozenbergs is the chairman of the Cēsis municipality council



Let the Robot Be a Friend, Not a Threat

Jānis Krievāns, Chairman of the Board of "Junior Achievement Latvia"
For years the information space has carried the message that robots are about to come and take people's jobs. Undeniably, technology is advancing, and the situation in the labour market is pushing employers to look for solutions that would compensate for the shortage of labour and reduce costs in the simplest jobs.
Yet regardless of the degree of robotisation and automation in different industries, abilities such as empathy, creativity and the capacity to adapt to varying circumstances will always be valued most highly. It is important to develop them from childhood, and there is no better place for developing such cognitive and social abilities than schools with a well-thought-out curriculum oriented towards the child's overall development. One example is the entrepreneurship education programme that has been running in Latvia for 27 years and is available in 220 schools across the country.
From theory to skills
At the beginning of this summer, the Entrepreneurship Education Summit took place in Lille, France, and we, the largest education organisation "Junior Achievement Latvia" ("JA Latvia"), took part as Latvia's representatives. The conclusions from it are clear and unambiguous: schools and curricula must reorient themselves from pure theory towards integrating life skills into study programmes. Moreover, it is advisable for conditions in schools to move closer to the real working environment: from desks arranged in rows, where children sit alone or in pairs, to group work in a freer setting.
These insights came not from education officials but from top-level executives of the world's largest companies, such as "FedEx", "Nestle", "Ernst&Young", VISA, "Microsoft" and others, looking at education programmes from a business perspective. That gives these conclusions particular weight, because the judgements are based on many years of observation of the labour market and are consistent with forecasts of the future market: social abilities will be among the most important for a successful future career.
We need to think more broadly
A safe education and career choice that is often mentioned is the STEM disciplines (science, technology, engineering and mathematics). Undeniably, Latvia, like the rest of Europe, lacks specialists in the exact sciences, especially in information technology (IT). But we need to think more broadly: the so-called "soft skills", or life skills, are also needed so that society can develop not only linearly but also in breadth, opening new horizons for further growth.
At last year's World Economic Forum, experts tried to identify the skills that will matter most in the future labour market, and it is no accident that the chosen top ten is dominated by skills oriented towards cooperation and thinking things through: complex problem solving, critical thinking, creativity, people management, the ability to collaborate, and others. Today's society is led by visionaries whose ideas are implemented by specialists and experts in their fields. To discover and develop in a young person the ability to conceive concepts, generate ideas and inspire those around them, they must be given the chance, from their school days onwards, to find and prove themselves in a variety of life situations.
The student entrepreneurship programme run by "JA Latvia" allows students to express themselves in every conceivable role: becoming the head of a company with a vision of the goals to be achieved, a marketing specialist working on the popularity of the developed product, a technical director looking after the materials to be used, an IT specialist who sees opportunities to apply the latest technologies... The list could go on endlessly. It should be noted that the student mini-company programme also provides comprehensive opportunities to use knowledge gained in other subjects, supplementing it with what is learned in practice and thereby building students' understanding of entrepreneurship as a whole. Even if not all participants in the student mini-company programme become successful entrepreneurs, they will have a clearer picture of economic processes, at both the macro and the micro level. In this way, public understanding of finances is being fostered already now, reducing the share of reckless borrowers and other people prone to financial mistakes in society.
A friend, not a threat
And yet, how do things really stand with those robots? Undeniably, automation and robotisation will enter the everyday lives of all of us ever more strongly. Last year the Organisation for Economic Co-operation and Development (OECD) published a study which found that the impact of a "robot workforce" on the labour market will be greater in countries that create fewer value-added products. It is not useful to assess the risk of automation by profession, because the specifics differ within each profession: how much emotional intelligence is needed to do a particular job, and to what extent the skills involved are technical.
But it is already clear that the professions not under threat are those that demand the following skills: social intelligence, including the ability to negotiate effectively in an atmosphere of social tension, to care for others and to recognise culturally sensitive issues; cognitive intelligence, that is, creativity and the ability to solve complex problems; and perception and adaptability, including the ability to do physical work in an unstructured environment. Our task today is therefore to develop educational approaches that foster these abilities and allow robots to be seen as technical support rather than as a threat, and the student entrepreneurship education programme is one of them.


How can we create a new generation of leaders for the post-capitalist era?

Claudio A. Rivera, 26 November 2019, 7:28

Are capitalism's days numbered? Do we know what awaits us in the future? Do we know how to educate people for the post-capitalist era? In this article I will share my thoughts on these important questions.
We live in an age of change. We have no "planet B", and it seems that society has finally understood this. It has become clear to humanity that "continuous progress" is not enough. The market is full of products that people do not need, new technologies often interfere with human well-being, and stress and other diseases associated with "modern life" have become epidemic. The question arises: "Why do we keep consuming these products and promoting this way of life?" Young people, too, have for some time been taking a heightened interest in complex questions about the world's development. Finally, world leaders have also understood that such great global inequality and instability are not sustainable.
What is actually happening? Simplifying the processes under way in society, one can say that society is trying to influence, constrain and in some cases even replace capitalism's main protagonist, the "shareholder". This is not happening by disparaging private property, as communism would do. The main idea is to restore the social function of private property. Humanity increasingly wants to "humanise capitalism".
As I mentioned, world leaders have heard this desire: the UN has undertaken to create and promote the Sustainable Development Goals. A significant number of corporate leaders have stated clearly that capitalism needs to be "reformulated". Politicians, too, are engaged in new rounds of re-regulation (for example, data protection regulation) and in promoting protectionism.
What is the foundation of post-capitalism?
In this complicated age of change, when the market economy is beginning to play a different role in people's lives, the way we educate the participants and creators of the future economy must also change. There are various philosophical ways of thinking through which we can view the world and judge whether something is good or bad, useful or useless. In my view, the two ways of thinking that best illustrate the dilemma we face in this age of change are utilitarianism and humanism.
Both the capitalist system and the education of its leaders have rested on one ideological foundation: utilitarianism. Utilitarianism is a philosophical way of thinking which holds that the main ethical standard is to maximise the utility or happiness of the greatest number of people. Technically, in business this paradigm spreads through the "business principle": company managers must maximise shareholder value. Cost-benefit analysis helps them do so, and it has gained popularity most likely because of its simplicity.
Utilitarianism is not based on values or norms; it judges actions on the basis of cost-benefit analysis. In the new post-capitalist era this paradigm is no longer acceptable in a growing number of areas of life, including business. Not long ago the famous American management guru Peter Drucker (Drucker, Peter F.) seriously criticised the principle of maximising shareholder value: "The confusion is caused by the mistaken belief that the so-called profit motive of the businessman explains his behaviour or his drive towards what he considers right action. That such a profit motive exists at all is highly questionable."
In contrast to utilitarianism, the humanist perspective is focused not primarily on outcomes and consequences but on the rightness or wrongness of the action itself. In humanism the human being is central; in utilitarianism the human being can be central only if that maximises value for the decision-maker. That is why, in an economy built on utilitarian principles, we keep offering products that consumers do not really need.
How can we educate leaders for the post-capitalist era?
Utilitarianism has also strongly influenced education. If the most important principle in education is utilitarianism, we create study programmes with pronounced specialisation and fragmentation of disciplines. Unfortunately, this leads to severe intellectual narrowing. If the most important principle in education is humanism, we create study programmes that foster critical thinking (or an intellectually broad outlook) and interdisciplinarity. The reason is this: the utilitarian way of thinking aims to create efficiency, while the humanist way of thinking aims to create excellence. And we live in an age in which society is no longer satisfied with the continuous efficiency and progress that have grown out of utilitarian principles.
Our inability to solve multifaceted societal problems stems conceptually from "box thinking". Financiers do not cooperate with engineers, engineers do not cooperate with artists, and so on. Real problems do not respect the disciplinary boundaries learned in a utilitarian education. The ability to work effectively in one specific field does not correlate with the ability to effectively solve problems made up of countless aspects and domains.
If we want to create a new generation of leaders that meets the post-capitalist era's demand to look at the world through a broader lens than cost-benefit analysis, we must offer future leaders interdisciplinary programmes. These are programmes in which participants critically seek comprehensive answers to real problems, using both "theoretical disciplines" (for example, philosophy and mathematics) and "practical disciplines" (for example, finance, engineering and medicine). In interdisciplinary programmes the main thing is not to accumulate blocks of knowledge efficiently. The main thing is to skilfully "see reality anew" through integrated sciences.
I foresee that in the near future the leading educational institutions will be those whose programmes are structured not around courses but around questions that matter to society. Lecturers will not be tied to disciplines, teaching will be led by interdisciplinary teams, and students will spend more time in laboratories and cafés than in lecture halls.
There are already universities in the world that are beginning to work powerfully in precisely this direction of developing broad thinking. Some of them are traditional Ivy League schools, such as Harvard University, while others are newcomers, such as Minerva at KGI and Quest College. Following this trend, a vivid example of interdisciplinary education has emerged in Latvia: the Baltic IT leaders programme (www.bitl.lv), founded jointly by Latvia's leading universities.
The post-capitalist era is a period in which the assumptions of the market economy are being called into question. But one thing is clear: leading companies will be expected to take a more human approach to processes and decision-making. Innovative and sustainable solutions will be in greater demand. Moreover, the humanist view of business regards every company as a place where, through shared work, a person's personal and social development takes place. To build such businesses we need excellently educated leaders who are experts both in product development and in understanding people. I am convinced that only interdisciplinary education can build the leadership capacity needed to foster society's long-term development.
https://ir.lv/2019/11/26/ka-mes-varam-radit-jaunu-lideru-paaudzi-postkapitalisma-laikmetam/

A nation that wants to and knows how to change in order to strengthen Latvia

Agnis Stibe, ambassador of the #EsiLV Science, Innovation and Technology think tank

28 July 2020, 7:00
A nation and its state are like an organism with millions of cells. Each of them looks after the overall well-being of the organism, for the simple reason that it will itself be better off for it. Just like cells in an organism, every member of the nation, near or far, is capable of contributing to their people and their country if they truly want to. Every organism also has processes of communication and governance that help the cells move together in the chosen direction. In the human body, neurons and neural networks take care of this; in a state, it is public administration. In both cases there is the capacity to listen and to reflect a shared vision of where to go: an easily graspable, unified view of how to build and strengthen the common future.
Latvia and Latvians can achieve what they truly want. The same is possible for any other nation and state. What matters is the choice, the direction, and the determination to head there at every moment. Latvia can be a self-determining and open country of the world. Latvians can be a sustainable and happy people of the world. Do we all truly want that? I do! And you?
How do we do it?
By recognising the valuable foundation within us that we can push off from. By sifting out disruptive thoughts and circumstances. By clearly seeing a shared future. And then by starting to move. What does that mean? Change! Not the kind we want to run from, but the kind we sincerely want to accomplish every day. And every evening, before going to sleep, already thinking about what we will be able to accomplish tomorrow.
The Latvian character is endowed with resilience, flexibility, springiness, intelligence and many other splendid qualities. We are resourceful, creative, skilled in crafts, and very capable in many other respects. Written into our genes is the ability to survive and flourish in any circumstances. Let us use it so that we, our children and their children can be proud of a nation and a country that they themselves shape as they wish.
The good news is that we are the change. Life is change. Look at your childhood photos, your life's course, your work, ideas, convictions and thoughts. All of it has changed over time, even if someone has not noticed or has forgotten. Change is at the heart of growth and development. It is our friend, one we must truly get to know, graciously accepting it as a travel companion on the way towards the chosen vision and, as we walk together, gradually acquiring the wisdom of change.
How to become a master of change?
First, each of us has a naturally given understanding of change. It lies at the foundation of our evolution; human development is a process of continuous change. Second, change begins with each person's own choice and action. Of course, there are external circumstances that can interfere, but it is each person's own choice what to do, and how, in the given circumstances. At the root of everything is the desire to discover one's inner algorithm of change: the way change happens, and can happen, in you specifically.
Third, it is important to understand that change is neither mysticism nor magic; it is fairly simple to understand and to carry out. First you need to identify which thought most hinders a particular change. Then you choose to replace it with the exact opposite thought. For example, the hindering thought: "I do not have one hour every day to devote to self-development." The opposite thought: "I do have one hour every day to devote to self-development."
To help keep this replacement of thoughts going without interruption, you can use a vector transformation guide. It depicts the chosen direction, "I have one hour", as a green arrow, and the neighbouring directions as a yellow arrow, "I have half an hour", and a red arrow, "I have ten minutes". Using such tools helps strengthen awareness along the way to discovering one's inner algorithm of change. Once it has been found, it can be applied successfully both to small everyday changes and, together with compatriots and supporters, to driving substantial changes in the country.
Fourth, we are social beings, so social mirroring is part of our nature. What does that mean? In human interaction, each of us notices a part of ourselves in the people around us, even though we often think that what we have noticed belongs to those other people. In truth, what we notice in others is what we ourselves are. In the same way we also notice when someone has changed. We see not only what someone has achieved but also what kind of person they have become in achieving that goal. We are able to notice that inner changes have taken place in a person, changes that made the achievement possible.
Often this cannot be explained precisely in detail, because it is that person's own discovered algorithm of change, which may differ from yours. But one thing is very clear: it can be felt. One can feel that this inner wisdom of change exists and is given by nature to everyone. In this way, masters of change mirror to society not only their progress and results but also the feeling that each of us can become one. That is naturally reflected in society as a wave travelling through the nation's consciousness.
Fifth, every organisation, company and public administration has its leadership: leaders who tell the vision, show the way and encourage everyone to move together in the chosen direction. These leaders are one of the strongest supports for the benign movement of our nation and state into the future. With their example and inspiration they can help not only their own teams and employees to change, but also encourage similar companies and organisations in Latvia, as well as Latvian associations and communities around the world. Very practically and feasibly, for example, managers can offer and include change training and self-education opportunities for their employees. An understanding of change and its essence can also be adopted as part of the essential everyday values and operating principles of companies and organisations.
In conclusion, it is important to emphasise how much the course described above can be accelerated by using technology. Technologies can be built with the intention of helping people discover their inner algorithm of change. Technological innovations can enrich the working environment of companies and organisations, thereby encouraging employees to pay attention to the changes under way and the changes that are needed. In just the same way, such innovations can be applied to the urban environment, where everyone can receive immediate insight into the course of change and find a way to see their own contribution to the common movement. Such technological support not only advances the chosen changes but also strengthens each person's conviction about the reality of the changes taking place and their inseparable impact on their own quality of life.
Experience of change?
It is no secret that many people want to change, to improve their living conditions and their choices. New Year's resolutions, for example, attest to that. We know that not everyone always manages to reach their chosen goals easily and quickly. At the same time, there are also many examples of people who want to, and can, move confidently in the directions they have chosen.
In my own life I have made countless lasting changes, which I call transformations. In my youth I made them without really being able to understand fully how and what was happening. But I always knew there was a way, a possibility I had not yet tried. Over the years my curiosity grew into the science of change, which I now practise daily, helping companies and organisations.
To illustrate, let me recount a recent realisation that in a similar way connects my childhood experience with my present understanding of a person's inner algorithm of change. Are you familiar with the Rubik's Cube? It was invented by Ernő Rubik in 1974 in Budapest, Hungary. He wanted to create a game that would at once help people understand three-dimensional geometry and become the world's best-selling toy. About six years later the game rapidly became popular, and one person in five had already tried it.
A coincidence: I was born at the same time as this game. It also reached my hands in my childhood. I liked playing with it, but I never once managed to finish it. However hard I tried, it was beyond me. Recently we bought this cube for my children. This time, to my great surprise, the cube came with solving instructions. After playing with it for a while and working through the instructions, I can now solve it in three minutes. Here, then, is the realisation: everything has its own inner rhythm, its own guidelines, its own algorithm. A person without an understanding of their inner algorithm may well always end up with a result like mine with the Rubik's Cube in childhood. So I invite you to consider this thought and to take from it whatever seems useful.
A view of Latvia in 2040
One generation from now, I want to see Latvia as a self-determining and open country of the world, with a nation that wants to strengthen it and does so. One that does not squander its intelligence on irrelevant noise but rests firmly on a clear awareness of its abilities and moves confidently towards shared happiness. My children are currently growing up outside Latvia, but in a very inspiring Latvian environment, knowing that Latvia is a land that will always be open to their contribution. A place that will always consider them its own and take pride and joy in every achievement of theirs in the world. An open space of thought that gives true harmony with our nature, fulfilment of the soul in the richness of our traditions, and that feeling which cannot be obtained anywhere else. I will repeat myself: a Latvian can find the feeling of being Latvian only in Latvia.
What to do every morning?
Every morning it is worth asking yourself, and honestly answering, these two simple questions:
— Do I want a happy and meaningful life?
— Do I want a strong, sustainable and open nation and state?

Strange Tools: Art and Human Nature 

by Alva Noë


A philosopher makes the case for thinking of works of art as tools for investigating ourselves.
The philosopher and cognitive scientist Alva Noë argues that our obsession with works of art has gotten in the way of understanding how art works on us. For Noë, art isn't a phenomenon in need of an explanation but a mode of research, a method of investigating what makes us human--a strange tool. Art isn't just something to look at or listen to--it is a challenge, a dare to try to make sense of what it is all about. Art aims not for satisfaction but for confrontation, intervention, and subversion. Through diverse and provocative examples from the history of art-making, Noë reveals the transformative power of artistic production. By staging a dance, choreographers cast light on the way bodily movement organizes us. Painting goes beyond depiction and representation to call into question the role of pictures in our lives. Accordingly, we cannot reduce art to some natural aesthetic sense or trigger; recent efforts to frame questions of art in terms of neurobiology and evolutionary theory alone are doomed to fail.
By engaging with art, we are able to study ourselves in profoundly novel ways. In fact, art and philosophy have much more in common than we might think. Reframing the conversation around artists and their craft, Strange Tools is a daring and stimulating intervention in contemporary thought. https://www.amazon.com/Strange-Tools-Art-Human-Nature/dp/0809089165


Tuesday, 30 January 2018

The Truth or Maybe Lies?!

                                                                     Prius quam exaudias, ne iudices

                        
                              The Truth or Maybe Lies?!

     In a world where information is presented selectively, mixing facts with fiction, one-sided interpretations, artificial assumptions and hypothetical versions, an ordinary person gets completely lost in the barrage of information. People are disoriented and their minds are zombified by bland political TV shows organised in the interests of the nomenclature of power and financed by media moguls, and by specially directed performances and illustrative demonstrations of the leaders' rhetoric.
    Massive information pressure coming from the biased media contributes to the formation and maintenance of an environment in which it is easy to steer and manipulate public opinion; to create and instil the views and assessments the authorities desire, and to disseminate the concepts that benefit those who pay the piper and call the tune; to bring the audience of viewers, listeners and readers to a condition in which they want to see and hear only what they have been successfully accustomed to and what corresponds to the notions of objectivity and imaginary truth formed in that audience. Differing information is ignored, repelled and rejected as out of place, unacceptable or absurd... Read more: https://www.amazon.com/HOW-GET-RID-SHACKLES-TOTALITARIANISM-ebook/dp/B0C9543B4L/ref=sr_1_1?crid=19WW1TG75ZU79&keywords=HOW+TO+GET+RID+OF+THE+SHACKLES+OF+TOTALITARIANISM&qid=1687700500&s=books&sprefix=how+to+get+rid+of+the+shackles+of+totalitarianism%2Cstripbooks-intl-ship%2C181&sr=1-1
    

Denialism: what drives people to reject the truth

From vaccines to climate change to genocide, a new age of denialism is upon us. Why have we failed to understand it? By Keith Kahn-Harris

Fri 3 Aug 2018 

We are all in denial, some of the time at least. Part of being human, and living in a society with other humans, is finding clever ways to express – and conceal – our feelings. From the most sophisticated diplomatic language to the baldest lie, humans find ways to deceive. Deceptions are not necessarily malign; at some level they are vital if humans are to live together with civility. As Richard Sennett has argued: “In practising social civility, you keep silent about things you know clearly but which you should not and do not say.” Just as we can suppress some aspects of ourselves in our self-presentation to others, so we can do the same to ourselves in acknowledging or not acknowledging what we desire. Most of the time, we spare ourselves from the torture of recognising our baser yearnings. But when does this necessary private self-deception become harmful? When it becomes public dogma. In other words: when it becomes denialism.
Denialism is an expansion, an intensification, of denial. At root, denial and denialism are simply a subset of the many ways humans have developed to use language to deceive others and themselves. Denial can be as simple as refusing to accept that someone else is speaking truthfully. Denial can be as unfathomable as the multiple ways we avoid acknowledging our weaknesses and secret desires. … Read more: https://www.theguardian.com/news/2018/aug/03/denialism-what-drives-people-to-reject-the-truth


Automatic Detection of Fake News
(Submitted on 23 Aug 2017)
The proliferation of misleading information in everyday access media outlets such as social media feeds, news blogs, and online newspapers has made it challenging to identify trustworthy news sources, thus increasing the need for computational tools able to provide insights into the reliability of online content. In this paper, we focus on the automatic identification of fake content in online news. Our contribution is twofold. First, we introduce two novel datasets for the task of fake news detection, covering seven different news domains. We describe the collection, annotation, and validation process in detail and present several exploratory analyses on the identification of linguistic differences in fake and legitimate news content. Second, we conduct a set of learning experiments to build accurate fake news detectors. In addition, we provide comparative analyses of the automatic and manual identification of fake news.
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:1708.07104 [cs.CL] (or arXiv:1708.07104v1 [cs.CL] for this version)
Submission history: From: Veronica Perez-Rosas
https://arxiv.org/abs/1708.07104
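As an illustration of the kind of computational tool the abstract describes, here is a minimal sketch of a text-based fake news classifier. It assumes a hypothetical CSV file with "text" and "label" columns and stands in TF-IDF features with logistic regression from scikit-learn for the paper's actual feature sets and models, which are not reproduced here.

```python
# Minimal sketch of a linguistic fake news classifier (illustration only).
# Assumptions: a CSV with columns "text" and "label" (1 = fake, 0 = legitimate);
# TF-IDF + logistic regression stand in for the paper's features and classifiers.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline


def train_fake_news_detector(csv_path: str) -> Pipeline:
    data = pd.read_csv(csv_path)  # hypothetical dataset file
    texts, labels = data["text"], data["label"]

    # Hold out a test split so the reported scores reflect unseen articles.
    x_train, x_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.2, random_state=42, stratify=labels
    )

    # Word n-grams capture some of the stylistic and lexical cues that
    # studies like this one find distinguish fake from legitimate news.
    model = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=2)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    model.fit(x_train, y_train)
    print(classification_report(y_test, model.predict(x_test)))
    return model


if __name__ == "__main__":
    train_fake_news_detector("news_dataset.csv")  # hypothetical path
```

The pipeline can then score any new article with `model.predict(["some article text"])`; real systems would add richer linguistic features and cross-domain evaluation, as the paper investigates.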

Bad News

By Joseph Bernstein

Selling the story of disinformation

In the beginning, there were ABC, NBC, and CBS, and they were good. Midcentury American man could come home after eight hours of work and turn on his television and know where he stood in relation to his wife, and his children, and his neighbors, and his town, and his country, and his world. And that was good. Or he could open the local paper in the morning in the ritual fashion, taking his civic communion with his coffee, and know that identical scenes were unfolding in households across the country.

Over frequencies our American never tuned in to, red-baiting, ultra-right-wing radio preachers hyperventilated to millions. In magazines and books he didn’t read, elites fretted at great length about the dislocating effects of television. And for people who didn’t look like him, the media had hardly anything to say at all. But our man lived in an Eden, not because it was unspoiled, but because he hadn’t considered any other state of affairs. For him, information was in its right—that is to say, unquestioned—place. And that was good, too.

Today, we are lapsed. We understand the media through a metaphor—“the information ecosystem”—which suggests to the American subject that she occupies a hopelessly denatured habitat. Every time she logs on to Facebook or YouTube or Twitter, she encounters the toxic byproducts of modernity as fast as her fingers can scroll. Here is hate speech, foreign interference, and trolling; there are lies about the sizes of inauguration crowds, the origins of pandemics, and the outcomes of elections…: https://harpers.org/archive/2021/09/bad-news-selling 


Criteria for and Explanation of Ratings

A news or information website is rated green if its content is produced by people who are trying to communicate news, information, and opinion that they believe is accurate, and who adhere to practices aimed at ensuring basic standards of accuracy and accountability. A site is rated red if it fails to meet these minimum standards.
As explained below, NewsGuard uses 9 specific criteria to evaluate these possible points of failure. We start with the premise that a site should be green until our evaluation of the site, based on those 9 criteria, produces a red rating.
These determinations are made through reporting by NewsGuard’s analysts, who are trained journalists supervised by experienced editors. NewsGuard’s analysts review and assess the content and processes of each site and contact those in charge of the site for comment when necessary. The resulting Nutrition Label and rating is reviewed by at least two senior editors, and often by many. Any sites that receive Red ratings are reviewed again by all senior editors and the co-CEOs. Any disagreements are resolved by the executive editor, managing editor, and co-CEOs. The backgrounds of the analysts and editors named, as well as those of the supervising editors, can be found by clicking on their names or going to the Our Team page of this website.
It should be noted that the only attribute that sites rated green have in common is that they did not fail to meet enough of the 9 criteria that they should be rated red. Not all sites rated green are equal. As our Nutrition Label write-ups indicate, some are much fairer and more accurate in their reporting than others. Some are more transparent and accountable than others. Some are more robustly staffed and regularly produce superior content, while others struggle with tight budgets. Some focus on reporting the news, while others have a mission based on a political or other point of view that they unabashedly support. NewsGuard’s mission is not to make granular judgments but to establish and communicate adherence to basic standards in order to give readers information they need to assess their sources of information online. Again, the Nutrition Labels attempt to convey those differences, while the green–red rating provides a more basic, binary distinction. Put simply, red-rated sites fail the test of the key criteria and sometimes even all 9.
THE 9 FACTORS
Here are the 9 criteria that NewsGuard uses in determining if a provider is rated red. A site that fails to adhere to a preponderance of these criteria, as described in the weighted criteria definitions below, is rated red. No site must adhere to all of the criteria to be rated green.
In every case the NewsGuard Nutrition Labels that are provided for each site (by clicking on the rating) spell out the site’s adherence to each of the 9 criteria that yielded that source’s particular rating.
The 9 criteria below are listed in order of their importance in determining a red rating. For example, failure to adhere to the first criterion—publishing false content—will be more influential in determining a red rating than failure to reveal information about content creators.
Credibility
  • Does not repeatedly publish false content: The site does not repeatedly produce stories that have been found—either by journalists at NewsGuard or elsewhere—to be clearly and significantly false, and which have not been quickly and prominently corrected. (22 Points. A label with a score lower than 60 points gets a red rating.)
  • Gathers and presents information responsibly: Content on the site is created by reporters, writers, videographers, researchers, or other information providers who generally seek to be accurate and fair in gathering, reporting, and interpreting information, even if they approach their work from a strong point of view. They do this by referencing multiple sources, preferably those that present direct, firsthand information on a subject or event. (18 Points)
  • Regularly corrects or clarifies errors: The site makes clear how to contact those in charge and has effective practices for publishing clarifications and corrections. (12.5 Points)
  • Handles the difference between news and opinion responsibly: Content providers who convey the impression that they report news or a mix of news and opinion distinguish opinion from news reporting, and when reporting news, they do not regularly or egregiously misstate, distort, or cherry pick facts, or egregiously cherry pick stories, to advance opinions. Content providers whose clearly expressed purpose is to advance a particular point of view do not regularly and egregiously misstate or distort facts to make their case. (12.5 Points)
  • Avoids deceptive headlines: The site generally does not publish headlines that include false information, significantly sensationalize, or otherwise do not reflect what is actually in the story. (10 Points)
Transparency
  • Website discloses ownership and financing: The site discloses its ownership and/or financing, as well as any notable ideological or political positions held by those with a significant financial interest in the site (including in the case of nonprofits, its major donors), in a user-friendly manner. (7.5 Points)
  • Clearly labels advertising: The site makes clear which content is paid for and which is not. (7.5 Points)
  • Reveals who’s in charge, including any possible conflicts of interest: Information about those in charge of the content is made accessible on the site, including any possible conflicts of interest. (5 Points)
  • Site provides the names of content creators, along with either contact information or biographical information: Information about those producing the content is made accessible on the site. (5 Points)
CRITERIA WEIGHTING
Each of the nine criteria is assigned a certain number of points, as noted in the section above. The points add up to 100. A website that scores less than 60 points is rated red.
Note: The weighting of these criteria may change as we receive feedback and continue to develop our process. When such changes are made they will be noted here.
Criteria Weighting Points:
  • Does not repeatedly publish false content: 22
  • Gathers and presents information responsibly: 18
  • Regularly corrects or clarifies errors: 12.5
  • Handles the difference between news and opinion responsibly: 12.5
  • Avoids deceptive headlines: 10
  • Website discloses ownership and financing: 7.5
  • Clearly labels advertising: 7.5
  • Reveals who's in charge, including any possible conflicts of interest: 5
  • Provides information about content creators: 5
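To make the arithmetic concrete, here is a minimal sketch of how such a weighted checklist could be scored. It simply sums the points of the criteria a site satisfies and applies the 60-point threshold described above; it is an illustration of the published weights, not NewsGuard's internal tooling, and the criterion names are shorthand chosen for this example.

```python
# Illustrative scoring of the published criteria weights (not NewsGuard's software).
CRITERIA_POINTS = {
    "does_not_repeatedly_publish_false_content": 22,
    "gathers_and_presents_information_responsibly": 18,
    "regularly_corrects_or_clarifies_errors": 12.5,
    "handles_news_vs_opinion_responsibly": 12.5,
    "avoids_deceptive_headlines": 10,
    "discloses_ownership_and_financing": 7.5,
    "clearly_labels_advertising": 7.5,
    "reveals_whos_in_charge_and_conflicts": 5,
    "provides_names_of_content_creators": 5,
}  # the points add up to 100

RED_THRESHOLD = 60  # a label scoring below 60 points gets a red rating


def rate_site(criteria_met: set[str]) -> tuple[float, str]:
    """Sum the points for the criteria the site meets and map the total to green/red."""
    score = sum(pts for name, pts in CRITERIA_POINTS.items() if name in criteria_met)
    return score, ("green" if score >= RED_THRESHOLD else "red")


if __name__ == "__main__":
    # Example: a site failing only the two 5-point transparency criteria scores 90 (green).
    met = set(CRITERIA_POINTS) - {
        "reveals_whos_in_charge_and_conflicts",
        "provides_names_of_content_creators",
    }
    print(rate_site(met))  # (90.0, 'green')
```

The ordering of the weights also reflects the text above: losing the 22-point "false content" criterion moves a site much closer to the red threshold than losing either 5-point transparency criterion.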



Most organisations have been fighting fake news when they see it. But what if there were a way to stop misinformation before it even starts to spread?

By Diego Arguedas Ortiz 14 November 2018

For decades, medicine has provided us with an easy way to prevent diseases: vaccines.
Most of us are familiar with how a vaccine works – it exposes our bodies to weakened versions of a virus to help us build antibodies against the real thing. Now common practice in GP surgeries around the world, vaccination has all but extinguished some of the worst diseases of the last century, including measles and polio.
But could vaccines have applications beyond medicine?
Researchers like Sander van der Linden are working on a type of vaccination that could combat a very 21st-Century scourge: fake news.
This could work because misinformation behaves like a virus. False news stories spread faster, deeper and farther than true stories, cascading from ‘host’ to ‘host’ across Twitter hashtags, WhatsApp groups and your great-uncle Gavin’s Facebook profile (yes, he opened one). To make things worse, a fake story is tenaciously persistent.
“If you try to debunk it, misinformation sticks with people,” says van der Linden, who leads the Social Decision-Making Laboratory at the University of Cambridge. “Once it’s integrated into the long-term memory, it becomes very difficult to correct it.”
So what can you do? Like Han Solo, you shoot first.
Professionals in this field call the approach “pre-bunking”. Instead of waiting for false information to spread and then laboriously fact-checking and debunking it, researchers go for a pre-emptive strike that has the potential to shield your brain.
Psychologists first proposed inoculation in the 1960s, driven by fears of propaganda and brainwashing during the Cold War. But a 21st Century version targets our modern misinformation landscape, one more preoccupied with political divisions and culture wars.  
Hot topic
Take climate change. More than 97% of climate scientists have concluded that humans are responsible for global warming, but large groups of society still have trouble believing it. When asked what percentage of climate scientists agree that human-caused global warming is occurring, only 49% of Americans thought it was more than half – and only 15% answered, correctly, that it was more than 91%. The confusion reflects sophisticated campaigns aimed at sowing doubt among the public.
The difficulty is that once doubt settles, it is hard to dislodge it. Van der Linden and his colleagues wondered what would happen if they reached people before the nay-sayers did.  
They dug up a real-life disinformation campaign: the so-called Oregon Petition, which in 2007 falsely claimed that over 31,000 American scientists rejected the position that humans caused climate change.
The team prepared three documents. First, they wrote a ‘truth brief’ explaining that 97% of climate scientists agree that humans are responsible for climate change. They also prepared a ‘counter-brief’ revealing the flaws in the Oregon Petition – for instance, that among the Petition’s 31,000 names are people like the deceased Charles Darwin and the Spice Girls, and that fewer than 1% of the signatories are climate scientists.
Finally, they surveyed 2,000 people. First, they asked them how big the scientific consensus on climate change is – without looking at either document. Then they broke them into one group that saw the ‘truth brief’, one group that saw the Oregon Petition, and those who saw the ‘truth brief’ before the petition.
The results were intriguing. When participants first were asked about the scientific consensus on climate change, they calculated it to be around 72% on average. But they then changed their estimates based on what they read.
When the scientists provided a group with the ‘truth brief’, the average rose to 90%. For those who only read the Oregon Petition, the average sank to 63%.
When a third group read them both – first the ‘truth brief’ and then the petition – the average remained unchanged from participants’ original instincts: 72%.
“I didn’t expect this potency of misinformation,” says Van der Linden – the misinformation managed to completely ‘neutralise’ the correct data.
Enter inoculation.
When a group of participants read the ‘truth brief’ and also were told that politically motivated groups could try to mislead the public on topics like climate change – the ‘vaccine’ – the calculated average rose to almost 80%. Strikingly, this was true even after receiving the Oregon Petition.
The ‘counter brief’ detailing how the petition was misleading was more effective. One final group who read it before the petition estimated that 84% of scientists agreed that humans were responsible for climate change (of course, the actual number still is 97%).
In a separate piece of research, another team led by John Cook asked a similar question and arrived at the same result: inoculation could give us the upper hand against misinformation.
“It’s an exciting development,” says Eryn Newman, a cognitive scientist and lecturer at the Australian National University who was not involved in the studies. “They are flipping the approach, doing a pre-emptive strike and giving people a heads-up.”
In other words, uncle Gavin may think twice before sharing that latest post about Brexit, Trump or whether the Earth is flat.
Why it works
Humans usually rely on mental shortcuts to think; the world is full of information and our brain has limited time and capacity to process it. If you see a wrinkled, grey-haired man and someone tells you he is a senior citizen, your brain accepts that and carries on.
People working with misinformation know this and use it to their advantage. For instance, the drafters of the Oregon Petition falsely claimed 31,000 scientists supported their claim because we tend to trust experts.
“When information feels easy to process, people tend to nod along,” says Newman, who co-authored a review on how to deal with false information.
Before believing a piece of new information, most people scrutinise it in at least five ways, they found. We usually want to know if other people believe it, if there is evidence supporting this new claim, if it fits with our previous knowledge on the matter (hence the grey-haired man, who might fit your idea of a senior citizen), if the internal argument makes sense and whether the source is credible enough.
But at times we rely too much on shortcuts to answer these five questions. Our evaluation is not as thorough. We do not ask ourselves, “Hmm, how many of those are actually climate scientists?” Instead, we just accept the number “31,000 scientists” because it feels about right.
Psychologists call this more automatic way of thinking “System 1”. It is immensely helpful for daily life, but vulnerable to deceit. In our fast-paced information ecosystem, our brain jumps from one Facebook post to the next, relying on rules-of-thumb to assess headlines and comments and without giving much thought to each claim.
This is fertile ground for fake news. However, the teams working on the misinformation ‘vaccine’ believe their work allows for deeper thinking to kick in.
“Inoculation forces our brain to slow down,” says van der Linden. “There is a warning element.”
To appreciate this, it might be helpful to understand how a vaccine (the actual medical procedure, not the misinformation metaphor) works.
Each time we receive a shot, we are showing our body a sample of a disease – a biological ‘mugshot’ small enough to avoid feeling really ill but sufficiently strong to provoke a reaction. This interloper startles our immune system into action and it starts building defences, or antibodies. When we come across the real disease, our body recognises it from the mugshot and is ready to strike back.
Something similar happened in van der Linden’s study. When his team tipped off the participants that others might try to deceive them, they did not take the Oregon Petition at face value. They overruled their System 1 thinking and were nudged into replacing it with its cousin – the slower but more powerful thinking mode psychologists call System 2.
Those who read both the ‘truth brief’ and the Oregon Petition, and estimated the scientific consensus at 72%, perhaps relied more on the faster and more superficial System 1. But as the ‘vaccine’ startled their brain into switching to System 2, the two last groups remembered the ‘knowledge mugshot’ from the counter-brief and distrusted the petition. That could explain the higher estimate in the later groups.  
Game on
There is one great weakness of this approach: it takes a lot of time and effort to go case by case, inoculating people.
Let us stretch the vaccine metaphor a bit more. Having a shot against rubella, for instance, will not keep you from getting measles or hepatitis, as it will only create antibodies against the rubella virus. Similarly, if you receive the counterarguments to climate denial, you might still be vulnerable to fake news on other topics.
“There are millions of topics out there on which you can deceive people,” explains Jon Roozenbeek, who joined van der Linden’s team in 2016. “You can’t ‘pre-bunk’ every story because you don’t know where the next deception is coming from.”
The other problem is that people don’t like being told what is true and what is false. We usually think we know better. That is why pedagogy experts usually advise educators to provide students with an active role in learning.
So the Cambridge researchers went back to the lab until they came up with a new idea.
“What if we taught people the tactics used in the fake news industry?” van der Linden recalls of his thinking at the time. “What better way to prepare them?”
The result was a role-playing game in which participants could play one of four characters, from being an ‘alarmist’ to impersonating a ‘clickbait tycoon’. The game focussed on fake news strategies, rather than topics.
Once the offline fake news game proved effective on a test with Dutch high school students, they scaled it up to an online version called Bad News with the help of the collective DROG.  
Navigating the online game takes less than 15 minutes – but it is a surreal experience. You launch a fake news site, become its editor-in-chief, purchase an army of Twitter bots and direct your followers against a well-meaning fact checker. By the time I surpassed 7,000 followers, I felt slightly uneasy at how addictive it was.
Throughout the game, you learn six different techniques used by fake news tycoons: impersonation, emotional exploitation, polarisation, conspiracy, discredit and trolling. The idea is that the next time someone tries to use the tactics against me on social media, I should recognise them and be able to call them out. Or, at least, an alarm will go off somewhere in my brain and the automatic and easy System 1 process will take the back seat as my mind subs in System 2 for a closer scrutiny. One can only hope.
It seems a bit counterintuitive to fight fake news by teaching people how to become a misinformation mogul, but Roozenbeek trusts the experiment. He is also amused, if only slightly, at my discomfort.
“If you get a shot, you might feel a bit nauseous later that day,” the PhD student assured me, “but it helps you in the long term.”
The pair drafted an academic paper with the results of 20,000 players who agreed to share their data for a study. Although unpublished, they say the results are encouraging.
A shorter version of the game is on display at an exhibition at the London Design Museum in which people can play the part of an information agitator in post-Brexit Britain.
In their fondness for the inoculation metaphor, the Cambridge team members speak hopefully of the online game, soon to be translated into more than 12 languages. Van der Linden expects people can get “herd immunisation” if it’s sufficiently shared online. Roozenbeek talks about “general immunity”, since the game doesn’t target one specific topic but the general use of fake news.
Ultimately, it will also have to pass the test of time: researchers do not know how long the benefits of the inoculation would last, if the inoculation works at all. As a virus, disinformation moves in a rapidly changing environment and adapts quickly to new conditions.
“If the virus changes, will people still be protected?” asks Newman, who wonders whether the game will stand the ever-changing nature of online trolling and disinformation.  
In other words: will this boost to your uncle Gavin’s mental defences last until the next election cycle?
Diego Arguedas Ortiz is a science and climate change reporter for BBC Future. He is @arguedasortiz on Twitter.



Why can’t we agree on what’s true any more?

 It’s not about foreign trolls, filter bubbles or fake news. Technology encourages us to believe we can all have first-hand access to the ‘real’ facts – and now we can’t stop fighting about it. By William Davies

Thu 19 Sep 2019 06.00 
We live in a time of political fury and hardening cultural divides. But if there is one thing on which virtually everyone is agreed, it is that the news and information we receive is biased. Every second of every day, someone is complaining about bias, in everything from the latest movie reviews to sports commentary to the BBC’s coverage of Brexit. These complaints and controversies take up a growing share of public discussion.
Much of the outrage that floods social media, occasionally leaking into opinion columns and broadcast interviews, is not simply a reaction to events themselves, but to the way in which they are reported and framed. The “mainstream media” is the principal focal point for this anger. Journalists and broadcasters who purport to be neutral are a constant object of scrutiny and derision, whenever they appear to let their personal views slip. The work of journalists involves an increasing amount of unscripted, real-time discussion, which provides an occasionally troubling window into their thinking.
But this is not simply an anti-journalist sentiment. A similar fury can just as easily descend on a civil servant or independent expert whenever their veneer of neutrality seems to crack, apparently revealing prejudices underneath. Sometimes a report or claim is dismissed as biased or inaccurate for the simple reason that it is unwelcome: to a Brexiter, every bad economic forecast is just another case of the so-called project fear. A sense that the game is rigged now fuels public debate.
 This mentality now spans the entire political spectrum and pervades societies around the world. A recent survey found that the majority of people globally believe their society is broken and their economy is rigged. Both the left and the right feel misrepresented and misunderstood by political institutions and the media, but the anger is shared by many in the liberal centre, who believe that populists have gamed the system to harvest more attention than they deserve. Outrage with “mainstream” institutions has become a mass sentiment.
This spirit of indignation was once the natural property of the left, which has long resented the establishment bias of the press. But in the present culture war, the right points to universities, the BBC and civil service as institutions that twist our basic understanding of reality to their own ends. Everyone can point to evidence that justifies their outrage. This arms race in cultural analysis is unwinnable.
This is not as simple as distrust. The appearance of digital platforms, smartphones and the ubiquitous surveillance they enable has ushered in a new public mood that is instinctively suspicious of anyone claiming to describe reality in a fair and objective fashion. It is a mindset that begins with legitimate curiosity about what motivates a given media story, but which ends in a Trumpian refusal to accept any mainstream or official account of the world. We can all probably locate ourselves somewhere on this spectrum, between the curiosity of the engaged citizen and the corrosive cynicism of the climate denier. The question is whether this mentality is doing us any good, either individually or collectively.
Public life has become like a play whose audience is unwilling to suspend disbelief. Any utterance by a public figure can be unpicked in search of its ulterior motive. As cynicism grows, even judges, the supposedly neutral upholders of the law, are publicly accused of personal bias. Once doubt descends on public life, people become increasingly dependent on their own experiences and their own beliefs about how the world really works. One effect of this is that facts no longer seem to matter (the phenomenon misleadingly dubbed “post-truth”). But the crisis of democracy and of truth are one and the same: individuals are increasingly suspicious of the “official” stories they are being told, and expect to witness things for themselves.
On one level, heightened scepticism towards the establishment is a welcome development. A more media-literate and critical citizenry ought to be less easy for the powerful to manipulate. It may even represent a victory for the type of cultural critique pioneered by intellectuals such as Pierre Bourdieu and Stuart Hall in the 1970s and 80s, revealing the injustices embedded in everyday cultural expressions and interactions.
But it is possible to have too much scepticism. How exactly do we distinguish this critical mentality from that of the conspiracy theorist, who is convinced that they alone have seen through the official version of events? Or to turn the question around, how might it be possible to recognise the most flagrant cases of bias in the behaviour of reporters and experts, but nevertheless to accept that what they say is often a reasonable depiction of the world?
It is tempting to blame the internet, populists or foreign trolls for flooding our otherwise rational society with lies. But this underestimates the scale of the technological and philosophical transformations that are under way. The single biggest change in our public sphere is that we now have an unimaginable excess of news and content, where once we had scarcity. Suddenly, the analogue channels and professions we depended on for our knowledge of the world have come to seem partial, slow and dispensable.
And yet, contrary to initial hype surrounding big data, the explosion of information available to us is making it harder, not easier, to achieve consensus on truth. As the quantity of information increases, the need to pick out bite-size pieces of content rises accordingly. In this radically sceptical age, questions of where to look, what to focus on and who to trust are ones that we increasingly seek to answer for ourselves, without the help of intermediaries. This is a liberation of sorts, but it is also at the heart of our deteriorating confidence in public institutions.


The current threat to democracy is often seen to emanate from new forms of propaganda, with the implication that lies are being deliberately fed to a naive and over-emotional public. The simultaneous rise of populist parties and digital platforms has triggered well-known anxieties regarding the fate of truth in democratic societies. Fake news and internet echo chambers are believed to manipulate and ghettoise certain communities, for shadowy ends. Key groups – millennials or the white working-class, say – are accused of being easily persuadable, thanks to their excessive sentimentality.
This diagnosis exaggerates old-fashioned threats while overlooking new phenomena. Over-reliant on analogies to 20th-century totalitarianism, it paints the present moment as a moral conflict between truth and lies, with an unthinking public passively consuming the results. But our relationship to information and news is now entirely different: it has become an active and critical one, deeply suspicious of the official line. Nowadays, everyone is engaged in spotting and rebutting propaganda of one kind or another, curating our news feeds, attacking the framing of the other side and consciously resisting manipulation. In some ways, we have become too concerned with truth, to the point where we can no longer agree on it. The very institutions that might once have brought controversies to an end are under constant fire for their compromises and biases.
The threat of misinformation and propaganda should not be denied. As the scholars Yochai Benkler, Robert Faris and Hal Roberts have shown in their book Network Propaganda, there is now a self-sustaining information ecosystem on the American right through which conspiracy theories and untruths get recycled, between Breitbart, Fox News, talk radio and social media. Meanwhile, the anti-vaxx movement is becoming a serious public health problem across the world, aided by the online circulation of conspiracy theories and pseudo-science. This is a situation where simple misinformation poses a serious threat to society.
But away from these eye-catching cases, things look less clear-cut. The majority of people in northern Europe still regularly encounter mainstream news and information. Britain is a long way from the US experience, thanks principally to the presence of the BBC, which, for all its faults, still performs a basic function in providing a common informational experience. It is treated as a primary source of news by 60% of people in the UK. Even 42% of Brexit party and Ukip voters get their news from the BBC.
The panic surrounding echo chambers and so-called filter bubbles is largely groundless. If we think of an echo chamber as a sealed environment, which only circulates opinions and facts that are agreeable to its participants, it is a rather implausible phenomenon. Research by the Oxford Internet Institute suggests that just 8% of the UK public are at risk of becoming trapped in such a clique.
Trust in the media is low, but this entrenched scepticism long predates the internet or contemporary populism. From the Sun’s lies about Hillsborough to the BBC’s failure to expose Jimmy Savile as early as it might have, to the fevered enthusiasm for the Iraq war that gripped much of Fleet Street, the British public has had plenty of good reasons to distrust journalists. Even so, the number of people in the UK who trust journalists to tell the truth has actually risen slightly since the 1980s.
What, then, has changed? The key thing is that the elites of government and the media have lost their monopoly over the provision of information, but retain their prominence in the public eye. They have become more like celebrities, anti-heroes or figures in a reality TV show. And digital platforms now provide a public space to identify and rake over the flaws, biases and falsehoods of mainstream institutions. The result is an increasingly sceptical citizenry, each seeking to manage their media diet, checking up on individual journalists in order to resist the pernicious influence of the establishment.
There are clear and obvious benefits to this, where it allows hateful and manipulative journalism to be called out. It is reassuring to discover the large swell of public sympathy for the likes of Ben Stokes and Gareth Thomas, and their families, who have been harassed by the tabloids in recent days. But this also generates a mood of outrage, which is far more focused on denouncing bad and biased reporting than on defending the alternative. Across the political spectrum, we are increasingly distracted and enraged by what our adversaries deem important and how they frame it. It is not typically the media’s lies that provoke the greatest fury online, but the discovery that an important event has been ignored or downplayed. While it is true that arguments rage over dodgy facts and figures (concerning climate change or the details of Britain’s trading relations), many of the most bitter controversies of our news cycle concern the framing and weighting of different issues and how they are reported, rather than the facts of what actually happened.
The problem we face is not, then, that certain people are oblivious to the “mainstream media”, or are victims of fake news, but that we are all seeking to see through the veneer of facts and information provided to us by public institutions. Facts and official reports are no longer the end of the story. Such scepticism is healthy and, in many ways, the just deserts of an establishment that has been caught twisting the truth too many times. But political problems arise once we turn against all representations and framings of reality, on the basis that these are compromised and biased – as if some purer, unmediated access to the truth might be possible instead. This is a seductive, but misleading ideal.
Every human culture throughout history has developed ways to record experiences and events, allowing them to endure. From early modern times, liberal societies have developed a wide range of institutions and professions whose work ensures that events do not simply pass without trace or public awareness. Newspapers and broadcasters share reports, photographs and footage of things that have happened in politics, business, society and culture. Court documents and the Hansard parliamentary reports provide records of what has been said in court and in parliament. Systems of accounting, audit and economics help to establish basic facts of what takes place in businesses and markets.
Traditionally, it is through these systems, which are grounded in written testimonies and public statements, that we have learned what is going on in the world. But in the past 20 years, this patchwork of record-keeping has been supplemented and threatened by a radically different system, which is transforming the nature of empirical evidence and memory. One term for this is “big data”, which highlights the exponential growth in the quantity of data that societies create, thanks to digital technologies.
The reason there is so much data today is that more and more of our social lives are mediated digitally. Internet browsers, smartphones, social media platforms, smart cards and every other smart interface record every move we make. Whether or not we are conscious of it, we are constantly leaving traces of our activities, no matter how trivial.
But it is not the escalating quantity of data that constitutes the radical change. Something altogether new has occurred that distinguishes today’s society from previous epochs. In the past, recording devices were principally trained upon events that were already acknowledged as important. Journalists did not just report news, but determined what counted as newsworthy. TV crews turned up at events that were deemed of national significance. The rest of us kept our cameras for noteworthy occasions, such as holidays and parties.
The ubiquity of digital technology has thrown all of this up in the air. Things no longer need to be judged “important” to be captured. Consciously, we photograph events and record experiences regardless of their importance. Unconsciously, we leave a trace of our behaviour every time we swipe a smart card, address Amazon’s Alexa or touch our phone. For the first time in human history, recording now happens by default, and the question of significance is addressed separately.
This shift has prompted an unrealistic set of expectations regarding possibilities for human knowledge. As many of the original evangelists of big data liked to claim, when everything is being recorded, our knowledge of the world no longer needs to be mediated by professionals, experts, institutions and theories. Instead, they argued that the data can simply “speak for itself”. Patterns will emerge, traces will come to light. This holds out the prospect of some purer truth than the one presented to us by professional editors or trained experts. As the Australian surveillance scholar Mark Andrejevic has brilliantly articulated, this is a fantasy of a truth unpolluted by any deliberate human intervention – the ultimate in scientific objectivity.
Andrejevic argues that the rise of this fantasy coincides with growing impatience with the efforts of reporters and experts to frame reality in meaningful ways. He writes that “we might describe the contemporary media moment – and its characteristic attitude of sceptical savviness regarding the contrivance of representation – as one that implicitly embraces the ideal of framelessness”. From this perspective, every controversy can in principle be settled thanks to the vast trove of data – CCTV, records of digital activity and so on – now available to us. Reality in its totality is being recorded, and reporters and officials look dismally compromised by comparison.
One way in which seemingly frameless media has transformed public life over recent years is in the elevation of photography and video as arbiters of truth, as opposed to written testimony or numbers. “Pics or it didn’t happen” is a jokey barb sometimes thrown at social media users when they share some unlikely experience. It is often a single image that seems to capture the truth of an event, only now there are cameras everywhere. No matter how many times it is disproven, the notion that “the camera doesn’t lie” has a peculiar hold over our imaginations. In a society of blanket CCTV and smartphones, there are more cameras than people, and the torrent of data adds to the sense that the truth is somewhere amid the deluge, ignored by mainstream accounts. The central demand of this newly sceptical public is “so show me”.
This transformation in our recording equipment is responsible for much of the outrage directed at those formerly tasked with describing the world. The rise of blanket surveillance technologies has paradoxical effects, raising expectations for objective knowledge to unrealistic levels, and then provoking fury when those in the public eye do not meet them.
On the one hand, data science appears to make the question of objective truth easier to settle. Slow and imperfect institutions of social science and journalism can be circumvented, and we can get directly to reality itself, unpolluted by human bias. Surely, in this age of mass data capture, the truth will become undeniable.
On the other hand, as the quantity of data becomes overwhelming – greater than human intelligence can comprehend – our ability to agree on the nature of reality seems to be declining. Once everything is, in principle, recordable, disputes heat up regarding what counts as significant in the first place. It turns out that the “frames” that journalists and experts use to reduce and organise information are indispensable to its coherence and meaning.
What we are discovering is that, once the limitations on data capture are removed, there are escalating opportunities for conflict over the nature of reality. Every time a mainstream media agency reports the news, they can instantly be met with the retort: but what about this other event, in another time and another place, that you failed to report? What about the bits you left out? What about the other voters in the town you didn’t talk to? When editors judge the relative importance of stories, they now confront a panoply of alternative judgements. Where records are abundant, fights break out over relevance and meaning.
Professional editors have always faced the challenge of reducing long interviews to short consumable chunks and discarding the majority of photos or text. Editing is largely a question of what to throw away. This necessitates value judgements that readers and audiences once had little option but to trust. Now, however, the question of which image or sentence is truly significant opens irresolvable arguments. One person’s offcut is another person’s revealing nugget.
Political agendas can be pursued this way, including cynical ones aimed at painting one’s opponents in the worst possible light. An absurd or extreme voice can be represented as typical of a political movement (known as “nutpicking”). Taking quotes out of context is one of the most disruptive of online ploys, which provokes far more fury than simple insults. Rather than deploying lies or “fake news”, it messes with the significance of data, taking the fact that someone did say or write something, but violating their intended meaning. No doubt professional journalists have always descended to such tactics from time to time, but now we are all at it, provoking a vicious circle of misrepresentation.
Then consider the status of photography and video. It is not just that photographic evidence can be manipulated to mislead, but that questions will always survive regarding camera angle and context. What happened before or after a camera started rolling? What was outside the shot? These questions provoke suspicion, often with good reason.
The most historic example of such a controversy predates digital media. The Zapruder film, which captured the assassination of John F Kennedy, became the most scrutinised piece of footage in history. The film helped spawn countless conspiracy theories, with individual frames becoming the focus of controversies, with competing theories as to what they reveal. The difficulty of completely squaring any narrative with a photographic image is a philosophical one as much as anything, and the Zapruder film gave a glimpse of the sorts of media disputes that have become endemic now cameras are ubiquitous parts of our social lives and built environments.
Today, minor gestures that would have passed without comment only a decade ago are pored over in search of their hidden message. What did Emily Maitlis mean when she rolled her eyes at Barry Gardiner on Newsnight? What was Jeremy Corbyn mouthing during Prime Minister’s Questions? Who took the photo of Boris Johnson and Carrie Symonds sitting at a garden table in July, and why? This way madness lies.
While we are now able to see evidence for ourselves, we all have conflicting ideas of what bit to attend to, and what it means. The camera may not lie, but that is because it does not speak at all. As we become more fixated on some ultimate gold-standard of objective truth, which exceeds the words of mere journalists or experts, so the number of interpretations applied to the evidence multiplies. As our faith in the idea of undeniable proof deepens, so our frustration with competing framings and official accounts rises. All too often, the charge of “bias” means “that’s not my perspective”. Our screen-based interactions with many institutions have become fuelled by anger that our experiences are not being better recognised, along with a new pleasure at being able to complain about it. As the writer and programmer Paul Ford wrote, back in 2011, “the fundamental question of the web” is: “Why wasn’t I consulted?”
What we are witnessing is a collision between two conflicting ideals of truth: one that depends on trusted intermediaries (journalists and experts), and another that promises the illusion of direct access to reality itself. This has echoes of the populist challenge to liberal democracy, which pits direct expressions of the popular will against parliaments and judges, undermining the very possibility of compromise. The Brexit crisis exemplifies this as well as anything. Liberals and remainers adhere to the long-standing constitutional convention that the public speaks via the institutions of general elections and parliament. Adamant Brexiters believe that the people spoke for themselves in June 2016, and have been thwarted ever since by MPs and civil servants. It is this latter logic that paints suspending parliament as an act of democracy.
This is the tension that many populist leaders exploit. Officials and elected politicians are painted as cynically self-interested, while the “will of the people” is both pure and obvious. Attacks on the mainstream media follow an identical script: the individuals professionally tasked with informing the public, in this case journalists, are biased and fake. It is widely noted that leaders such as Donald Trump, Jair Bolsonaro and Matteo Salvini are enthusiastic users of Twitter, and Boris Johnson has recently begun to use Facebook Live to speak directly to “the people” from Downing Street. Whether it be parliaments or broadcasters, the analogue intermediaries of the public sphere are discredited and circumvented.
What can professional editors and journalists do in response? One response is to shout even louder about their commitment to “truth”, as some American newspapers have profitably done in the face of Trump. But this escalates cultural conflict, and fails to account for how the media and informational landscape has changed in the past 20 years.
What if, instead, we accepted the claim that all reports about the world are simply framings of one kind or another, which cannot but involve political and moral ideas about what counts as important? After all, reality becomes incoherent and overwhelming unless it is simplified and narrated in some way or other. And what if we accepted that journalists, editors and public figures will inevitably let cultural and personal biases slip from time to time? A shrug is often the more appropriate response than a howl. If we abandoned the search for some pure and unbiased truth, where might our critical energies be directed instead?
If we recognise that reporting and editing is always a political act (at least in the sense that it asserts the importance of one story rather than another), then the key question is not whether it is biased, but whether it is independent of financial or political influence. The problem becomes a quasi-constitutional one, of what processes, networks and money determine how data gets turned into news, and how power gets distributed. On this front, the British media is looking worse and worse, with every year that passes.
The relationship between the government and the press has been getting tighter since the 1980s. This is partly thanks to the overweening power of Rupert Murdoch, and the image management that developed in response. Spin doctors such as Alastair Campbell, Andy Coulson, Tom Baldwin, Robbie Gibb and Seumas Milne typically move from the media into party politics, weakening the division between the two.
Then there are those individuals who shift backwards and forwards between senior political positions and the BBC, such as Gibb, Rona Fairhead and James Purnell. The press has taken a very bad turn over recent years, with ex-Chancellor George Osborne becoming editor of the Evening Standard, then the extraordinary recent behaviour of the Daily Telegraph, which seeks to present whatever story or gloss is most supportive of their former star columnist in 10 Downing Street, and rubbishes his opponents. (The Opinion page of the Telegraph website proudly includes a “Best of Boris” section.)
Since the financial crisis of 2008, there have been regular complaints about the revolving door between the financial sector and governmental institutions around the world, most importantly the White House. There has been far less criticism of the similar door that links the media and politics. The exception to this comes from populist leaders, who routinely denounce all “mainstream” democratic and media institutions as a single liberal elite that acts against the will of the people. One of the reasons they are able to do this is because there is a grain of truth in what they say.
The financial obstacles confronting critical, independent, investigative media are significant. If the Johnson administration takes a more sharply populist turn, the political obstacles could increase, too – Channel 4 is frequently held up as an enemy of Brexit, for example. But let us be clear that an independent, professional media is what we need to defend at the present moment, and abandon the misleading and destructive idea that – thanks to a combination of ubiquitous data capture and personal passions – the truth can be grasped directly, without anyone needing to report it.

https://www.theguardian.com/media/2019/sep/19/why-cant-we-agree-on-whats-true-

How Susceptible Are You to Misinformation? There's a Test You Can Take

A new misinformation quiz shows that, despite the stereotype, younger Americans have a harder time discerning fake headlines, compared with older generations

Many Americans seem to worry that their parents or grandparents will fall for fake news online. But as it turns out, we may be collectively concerned about the wrong generation.

Contrary to popular belief, Gen Zers and millennials could be more susceptible to online misinformation than older adults, according to a poll published online on June 29 by the research agency YouGov. What’s more, people who spend more time online had more difficulty distinguishing between real and fake news headlines. “We saw some results that are different from the ad hoc kinds of tests that [previous] researchers have done,” says Rakoen Maertens, a research psychologist at the University of Cambridge and lead author of a study on the development of the test used in the poll, which was published on June 29 in Behavior Research Methods.

Maertens’s team worked with YouGov to administer a quick online quiz based on the test that the researchers developed, dubbed the “misinformation susceptibility test” (MIST). It represents the first standardized test in psychology for misinformation and was set up in a way that allows researchers to administer it broadly and collect huge amounts of data. To create their test, Maertens and his colleagues carefully selected 10 actual headlines and 10 artificial-intelligence-generated false ones—similar to those you might encounter online—that they then categorized as “real” or “fake.” Test takers were asked to sort the real headlines from the fake news and received a percentage score at the end for each category. Here are a few examples of headlines from the test so you can try out your “fake news detector”: “US Support for Legal Marijuana Steady in Past Year,” “Certain Vaccines Are Loaded with Dangerous Chemicals and Toxins” and “Morocco’s King Appoints Committee Chief to Fight Poverty and Inequality.” The answers are at the bottom of this article.
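
To make the scoring mechanics concrete, here is a minimal Python sketch of how a MIST-style quiz could be scored, using the three example headlines above and the answers given at the end of this article; the function and its data structures are illustrative assumptions, not the researchers' actual implementation.

# Minimal sketch of MIST-style scoring: a test taker labels each headline
# "real" or "fake" and receives a separate percentage score per category.
# The three-item answer key below comes from the examples in this article;
# the real test uses 20 headlines.

ANSWER_KEY = {
    "US Support for Legal Marijuana Steady in Past Year": "real",
    "Certain Vaccines Are Loaded with Dangerous Chemicals and Toxins": "fake",
    "Morocco's King Appoints Committee Chief to Fight Poverty and Inequality": "real",
}

def score_mist(responses: dict[str, str]) -> dict[str, float]:
    """Return the percentage of real and fake headlines classified correctly."""
    totals = {"real": 0, "fake": 0}
    correct = {"real": 0, "fake": 0}
    for headline, truth in ANSWER_KEY.items():
        totals[truth] += 1
        if responses.get(headline) == truth:
            correct[truth] += 1
    return {cat: 100 * correct[cat] / totals[cat] for cat in totals}

# Example: a taker who flags the vaccine headline as fake but doubts the Morocco story.
print(score_mist({
    "US Support for Legal Marijuana Steady in Past Year": "real",
    "Certain Vaccines Are Loaded with Dangerous Chemicals and Toxins": "fake",
    "Morocco's King Appoints Committee Chief to Fight Poverty and Inequality": "fake",
}))  # {'real': 50.0, 'fake': 100.0}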

Maertens and his team gave the test to thousands of people across the U.S. and the U.K. in their study, but the YouGov poll was given to 1,516 adults who were all U.S. citizens. On average, in the YouGov poll, U.S. adults correctly categorized about 65 percent of the headlines. However, age seemed to impact accuracy. Only 11 percent of Americans ages 18 to 29 correctly classified 17 or more headlines, and 36 percent got no more than 10 correct. That’s compared with 36 percent of the 65-and-older crowd who accurately assessed at least 16 headlines. And only 9 percent in the latter age group got 10 or fewer correct. On average, Americans below age 45 scored 12 out of 20, while their older counterparts scored 15.

Additionally, people who reported spending three or more leisure hours a day online were more likely to fall for misinformation (false headlines), compared with those who spent less time online. And where people got their news made a difference: folks who read legacy publications such as the Associated Press and Politico had better misinformation detection, while those who primarily got their news from social media sites such as TikTok, Instagram and Snapchat generally scored lower. (“I didn’t even know that [getting news from Snapchat] was an option,” Maertens says.) This could be part of the reason that younger people scored lower overall, Maertens’s team hypothesized. People who spend a lot of time on social media are exposed to a firehose of information, both real and fake, with little context to help distinguish the two.

Personality traits also impacted a person’s susceptibility to fake news. Conscientiousness, for instance, was associated with higher scores in the study conducted by Maertens and his team, while neuroticism and narcissism were associated with lower scores.

“They’ve done a good job in terms of conducting the research,” says Magda Osman, head of research and analysis at the Center for Science and Policy at the University of Cambridge, who was not involved in the study. She worries, however, that some of the test’s AI-generated headlines were less clear-cut than a simple real/fake classification could capture.

Take, for example, the headline “Democrats More Supportive than Republicans of Federal Spending for Scientific Research.” In the study, this claim was labeled as unambiguously true based on data from the Pew Research Center. But just by looking at the headline, Osman says, “you don’t know whether this means Democrats versus Republicans in the population or Democrats versus Republicans in Congress.”

This distinction matters because it changes the veracity of the statement. While it’s accurate to say that Democrats generally tend to support increased science funding, Republican politicians have a history of hiking up the defense budget, which means that over the past few decades, they have actually outspent their Democratic colleagues in funding certain types of research and development.

What’s more, Osman points out, the study does not differentiate which topics of misinformation different groups are more susceptible to. Younger people might be more likely than their parents to believe misinformation about sexual health or COVID but less likely to fall for fake news about climate change, she suggests.

“The test shouldn’t be taken as a 100% reliable individual-level test. Small differences can occur,” Maertens wrote in an e-mail to Scientific American. “Someone who has 18/20 could in practice be equally resilient as someone scoring 20/20. However, it is more likely that a 20/20 scorer is effectively better than let’s say a 14/20 scorer.”

Ultimately both Osman and Maertens agree that media literacy is a crucial skill for navigating today’s information-saturated world. “If you get flooded with information, you can’t really analyze every single piece,” Maertens says. He recommends taking a skeptical approach to everything you read online, fact-checking when possible (though that was not an option for MIST participants) and keeping in mind that you may be more susceptible to misinformation than you think.

In the example in the third paragraph, the headlines are, in order, real, fake, real.

https://www.scientificamerican.com/article/how-susceptible-are-you-to-fake-news-theres-a-test-for-that/?utm_source=newsletter&utm_medium=email&utm_campaign=week-in-science&utm_content=link&utm_term=2023-07-07_top-stories


Knowledge resistance: How we avoid insight from others

Why do people and groups ignore, deny and resist knowledge about society's many problems? In a world of ‘alternative facts’ and ‘fake news’ that some believe could be remedied by ‘factfulness’, the question has never been more pressing. After years of ideologically polarised debates on this topic, the book seeks to further advance our understanding of the phenomenon of knowledge resistance by integrating insights from the social, economic and evolutionary sciences. It identifies simplistic views in public and scholarly debates about what facts, knowledge and human motivations are and what ‘rational’ use of information actually means. The examples used include controversies about nature-nurture, climate change, gender roles, vaccination, genetically modified food and artificial intelligence. Drawing on cutting-edge scholarship as well as personal experiences of culture clashes, the book is aimed at the general, educated public as well as students and scholars interested in the interface of human motivation and the urgent social problems of today…:


Alexios Mantzarlis 

Director, International Fact-Checking Network at The Poynter Institute
Saint Petersburg, Florida
The Poynter Institute · Institut d'Etudes politiques de Paris / Sciences Po Paris
Alexios Mantzarlis writes about and advocates for fact-checking. He also trains and convenes fact-checkers around the world.

As Director of the IFCN, Alexios has helped draft the fact-checkers' code of principles, shepherded a partnership between third-party fact-checkers and Facebook, testified to the Italian Chamber of Deputies on the "fake news" phenomenon and helped launch International Fact-Checking Day. In January 2018 he was invited to join the European Union's High Level Group on fake news and online disinformation. He has also drafted a lesson plan for UNESCO and a chapter on fact-checking in the 2016 U.S. presidential elections in Truth Counts, published by Congressional Quarterly.

The International Fact-Checking Network (IFCN) is a forum for fact-checkers worldwide hosted by the Poynter Institute for Media Studies. These organizations fact-check statements by public figures, major institutions and other widely circulated claims of interest to society.

It launched in September 2015, in recognition of the fact that a booming crop of fact-checking initiatives could benefit from an organization that promotes best practices and exchanges in this field.

Among other things, the IFCN:
* Monitors trends and formats in fact-checking worldwide, publishing regular articles on the dedicated Poynter.org channel.
* Provides training resources for fact-checkers.
* Supports collaborative efforts in international fact-checking.
* Convenes a yearly conference (Global Fact).
* Is the home of the fact-checkers' code of principles.

The IFCN has received funding from the Arthur M. Blank Family Foundation, the Duke Reporters’ Lab, the Bill & Melinda Gates Foundation, Google, the National Endowment for Democracy, the Omidyar Network, the Open Society Foundations and the Park Foundation.

To find out more, follow @factchecknet on Twitter or go to bit.ly/GlobalFac



Why we stopped trusting elites

…technological change and political upheavals. But it’s too late to turn back the clock. By William Davies

More from this series: The new populism
Thu 29 Nov 2018 
For hundreds of years, modern societies have depended on something that is so ubiquitous, so ordinary, that we scarcely ever stop to notice it: trust. The fact that millions of people are able to believe the same things about reality is a remarkable achievement, but one that is more fragile than is often recognised.
At times when public institutions – including the media, government departments and professions – command widespread trust, we rarely question how they achieve this. And yet at the heart of successful liberal democracies lies a remarkable collective leap of faith: that when public officials, reporters, experts and politicians share a piece of information, they are presumed to be doing so in an honest fashion….:
https://www.theguardian.com/news/2018/nov/29/why-we-stopped-trusting-elites-the-new-populism?utm_source 


Finland is winning the war on fake news. What it’s learned may be crucial to Western democracy

By Eliza Mackintosh, CNN. Video by Edward Kiernan, CNN
Helsinki, Finland (CNN) – On a recent afternoon in Helsinki, a group of students gathered to hear a lecture on a subject that is far from a staple in most community college curriculums.
Standing in front of the classroom at Espoo Adult Education Centre, Jussi Toivanen worked his way through his PowerPoint presentation. A slide titled “Have you been hit by the Russian troll army?” included a checklist of methods used to deceive readers on social media: image and video manipulations, half-truths, intimidation and false profiles.
Another slide, featuring a diagram of a Twitter profile page, explained how to identify bots: look for stock photos, assess the volume of posts per day, check for inconsistent translations and a lack of personal information.
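Purely as a toy illustration of how such a checklist could be mechanised (the field names and thresholds in this Python sketch are assumptions for the example, not part of the Finnish lesson material):

# Toy illustration of the bot-spotting checklist described above;
# field names and thresholds are assumptions, not the actual course content.

def bot_suspicion_score(profile: dict) -> int:
    """Count how many red flags from the checklist a profile triggers."""
    flags = [
        profile.get("uses_stock_photo", False),           # profile image looks like a stock photo
        profile.get("posts_per_day", 0) > 50,             # implausibly high posting volume
        profile.get("inconsistent_translations", False),  # text reads like patchy machine translation
        not profile.get("has_personal_info", True),       # little or no personal information
    ]
    return sum(flags)

suspect = {"uses_stock_photo": True, "posts_per_day": 120, "has_personal_info": False}
print(bot_suspicion_score(suspect))  # 3 red flags out of 4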
The lesson wrapped with a popular “deepfake” — highly realistic manipulated video or audio — of Barack Obama to highlight the challenges of the information war ahead.
The course is part of an anti-fake news initiative launched by Finland’s government in 2014 – two years before Russia meddled in the US elections – aimed at teaching residents, students, journalists and politicians how to counter false information designed to sow division.
Jussi Toivanen teaching students how to spot fake news at Espoo Adult Education Centre.
The initiative is just one layer of a multi-pronged, cross-sector approach the country is taking to prepare citizens of all ages for the complex digital landscape of today – and tomorrow. The Nordic country, which shares an 832-mile border with Russia, is acutely aware of what’s at stake if it doesn’t.
Finland has faced down Kremlin-backed propaganda campaigns ever since it declared independence from Russia 101 years ago. But in 2014, after Moscow annexed Crimea and backed rebels in eastern Ukraine, it became obvious that the battlefield had shifted: information warfare was moving online.
Toivanen, the chief communications specialist for the prime minister’s office, said it is difficult to pinpoint the exact number of misinformation operations to have targeted the country in recent years, but most play on issues like immigration, the European Union, or whether Finland should become a full member of NATO (Russia is not a fan).
As the trolling ramped up in 2015, President Sauli Niinisto called on every Finn to take responsibility for the fight against false information. A year later, Finland brought in American experts to advise officials on how to recognize fake news, understand why it goes viral and develop strategies to fight it. The education system was also reformed to emphasize critical thinking.
Although it’s difficult to measure the results in real-time, the approach appears to be working, and now other countries are looking to Finland as an example of how to win the war on misinformation.
“It’s not just a government problem, the whole society has been targeted. We are doing our part, but it’s everyone’s task to protect the Finnish democracy,” Toivanen said, before adding: “The first line of defense is the kindergarten teacher.”
Sorting fact from fiction
At the French-Finnish School of Helsinki, a bilingual state-run K-12 institution, that ethos is taken seriously.
In Valentina Uitto’s social studies class, a group of 10th-graders were locked in debate over what the key issues will be in next week’s EU elections. Brexit, immigration, security and the economy were mentioned with a flurry of raised hands before the students were asked to choose a theme to analyze.
“They’ve gathered what they think they know about the EU election … now let’s see if they can sort fact from fiction,” Uitto said with a smirk.
The students broke off into groups, grabbing laptops and cell phones to investigate their chosen topics – the idea is to inspire them to become digital detectives, like a rebooted version of Sherlock Holmes for the post-Millennial generation.
Her class is the embodiment of Finland’s critical thinking curriculum, which was revised in 2016 to prioritize the skills students need to spot the sort of disinformation that has clouded recent election campaigns in the US and across Europe.
Students in Valentina Uitto’s social studies class research the issues at play in the upcoming EU elections as part of their critical thinking curriculum.
The school recently partnered with Finnish fact-checking agency Faktabaari (FactBar) to develop a digital literacy “toolkit” for elementary to high school students learning about the EU elections. It was presented to the bloc’s expert group on media literacy and has been shared among member states.
The exercises include examining claims found in YouTube videos and social media posts, comparing media bias in an array of different “clickbait” articles, probing how misinformation preys on readers’ emotions, and even getting students to try their hand at writing fake news stories themselves.
“What we want our students to do is … before they like or share in the social media they think twice – who has written this? Where has it been published? Can I find the same information from another source?” Kari Kivinen, director of Helsinki French-Finnish School and former secretary-general of the European Schools, told CNN.
He cautioned that it is a balancing act trying to make sure skepticism doesn’t give way to cynicism in students.
“It’s very annoying having to fact check everything, not being able to trust anything … or anyone on the internet,” said 15-year-old Tatu Tukiainen, one of the students in Uitto’s class. “I think we should try to put a stop to that.”
Gabrielle Bagula (left), 18, and Alexander Shemeikka (right), 17, in the Helsinki French-Finnish School library.
In the school library, Alexander Shemeikka, 17, and Gabrielle Bagula, 18, are watching YouTube videos together on an iPhone and chatting about other social platforms where they get their news: Instagram, Snapchat, Reddit and Twitter but, notably, not Facebook – “that’s for old people.”
“The word ‘fake news’ is thrown around very often,” Shemeikka said, explaining that when their friends share dubious memes or far-fetched articles online he always asks for the source. “You can never be too sure,” Bagula agreed.
That’s exactly the type of conversation that Kivinen hopes to cultivate outside of the classroom.
Students aged 5 to 8 gather in the library to read paperbacks and scroll through social media feeds.
“What we have been developing here – combining fact-checking with the critical thinking and voter literacy – is something we have seen that there is an interest in outside Finland,” Kivinen said.
But Kivinen isn’t sure that this approach could serve as a template for schools elsewhere. “In the end … it’s difficult to export democracy,” he added.
The ‘superpower’ of being Finnish
It may be difficult to export democracy, but it is easy to import experts, which is precisely what Finland did in 2016 to combat what it saw as a rise in disinformation emanating from accounts linked to its neighbor to the east.
“They knew that the Kremlin was messing with Finnish politics, but they didn’t have a context with which to interpret that. They were wondering if this meant they [Russia] would invade, was this war?” Jed Willard, director of the Franklin Delano Roosevelt Center for Global Engagement at Harvard University, who was hired by Finland to train state officials to spot and then hit back at fake news, told CNN.
Russia maintains that it has not and does not interfere in the domestic politics of other countries.
Behind closed doors, Willard’s workshops largely focused on one thing: developing a strong national narrative, rather than trying to debunk false claims.
“The Finns have a very unique and special strength in that they know who they are. And who they are is directly rooted in human rights and the rule of law, in a lot of things that Russia, right now, is not,” Willard said. “There is a strong sense of what it means to be Finnish … that is a super power.”
Not all nations have the type of narrative to fall back on that Finland does.
The small and largely homogenous country consistently ranks at or near the top of almost every index – happiness, press freedom, gender equality, social justice, transparency and education – making it difficult for external actors to find fissures within society to crowbar open and exploit.
Finland also has a long tradition of reading – its 5.5 million people borrow close to 68 million books a year, and it just spent $110 million on a state-of-the-art library, referred to lovingly as “Helsinki’s living room.” Finland has the highest PISA score for reading performance in the EU.
On the Oodi library’s ethereal third floor, Finns browse the internet and leaf through the national daily newspaper Helsingin Sanomat.
And as trust in the media has flagged in other parts of the globe, Finland has maintained a strong regional press and public broadcaster. According to the Reuters Institute Digital News Report 2018, Finland tops the charts for media trust, which means its citizens are less likely to turn to alternative sources for news.
Polluting the internet?
But some argue that simply teaching media literacy and critical thinking isn’t enough — more must be done on the part of social media companies to stop the spread of disinformation.
“Facebook, Twitter, Google/YouTube … who are enablers of Russian trolls … they really should be regulated,” said Jessikka Aro, a journalist with Finland’s public broadcaster YLE, who has faced a barrage of abuse for her work investigating Russian interference, long before it was linked to the 2016 US elections.
“Just like any polluting companies or factories should be and are already regulated, for polluting the air and the forests, the waters, these companies are polluting the minds of people. So, they also have to pay for it and take responsibility for it.”
Facebook, Twitter and Google, which are all signatories to the European Commission’s code of practice against disinformation, told CNN that they have taken steps ahead of the EU elections to increase transparency on their platforms, including making EU-specific political advertisement libraries publicly available, working with third-party fact-checkers to identify misleading election-related content, and cracking down on fake accounts.
Jessikka Aro scrolls through her Twitter mentions, pointing out the type of trolling and abuse she has faced online as a result of her investigations.
Aro’s first open-source investigation back in 2014 looked at how Russia-linked disinformation campaigns impacted Finns.
“Many Finns told me that they have witnessed these activities, but that it was only merely new digital technology for the old fashioned, old school Soviet Union propaganda, which has always existed and that Finns have been aware of,” Aro said. “So, they could avoid the trolls.”
The probe also made her the target of a relentless smear campaign, accused of being a CIA operative, a secret assistant to NATO, a drug dealer and deranged Russophobe.
Aro received some respite when, last year, the Helsinki District Court handed harsh sentences to two pro-Putin activists on charges of defamation – Ilja Janitskin, a Finn of Russian descent who ran the anti-immigrant, pro-Russia website MV-Lehti, and Johan Backman, a self-declared “human rights activist” and frequent guest on the Russian state-run news outlet RT.
It was the first time that an EU country had convicted those responsible for disinformation campaigns, drawing a line in the sand between extreme hate speech and the pretense of free speech.
A never-ending game
Perhaps the biggest sign that Finland is winning the war on fake news is the fact that other countries are seeking to copy its blueprint. Representatives from a slew of EU states, along with Singapore, have come to learn from Finland’s approach to the problem.
The scene outside the Prime Minister’s Office in Helsinki. Since 2016, government officials have trained over 10,000 Finns how to spot fake news.
The race is on to figure out a fix after authorities linked Russian groups to misinformation campaigns targeting Catalonia’s independence referendum and Brexit, as well as recent votes in France and Germany. Germany has already put a law in place to fine tech platforms that fail to remove “obviously illegal” hate speech, while France passed a law last year that bans fake news on the internet during election campaigns. Some critics have argued that both pieces of legislation jeopardize free speech. Russia denied interference in all of these instances.
Finland’s strategy was on public display ahead of last month’s national elections, in an advertising campaign that ran under the slogan “Finland has the world’s best elections – think about why” and encouraged citizens to think about fake news.
Officials didn’t see any evidence of Russian interference in the vote, which Toivanen says may be a sign that trolls have stopped thinking of the Finnish electorate as a soft target.
Jussi Toivanen, who has traveled the country to train Finns, at his office in Helsinki.
“A couple of years ago, one of my colleagues said that he thought Finland has won the first round countering foreign-led hostile information activities. But even though Finland has been quite successful, I don’t think that there are any first, second or third rounds, instead, this is an ongoing game,” Toivanen said.
“It’s going to be much more challenging for us to counter these kinds of activities in the future. And we need to be ready for that.”


Information Overload Helps Fake News Spread, and Social Media Knows It

Understanding how algorithm manipulators exploit our cognitive vulnerabilities empowers us to fight back

December 1, 2020

AUTHORS Filippo Menczer  Thomas Hills 

Consider Andy, who is worried about contracting COVID-19. Unable to read all the articles he sees on it, he relies on trusted friends for tips. When one opines on Facebook that pandemic fears are overblown, Andy dismisses the idea at first. But then the hotel where he works closes its doors, and with his job at risk, Andy starts wondering how serious the threat from the new virus really is. No one he knows has died, after all. A colleague posts an article about the COVID “scare” having been created by Big Pharma in collusion with corrupt politicians, which jibes with Andy's distrust of government. His Web search quickly takes him to articles claiming that COVID-19 is no worse than the flu. Andy joins an online group of people who have been or fear being laid off and soon finds himself asking, like many of them, “What pandemic?” When he learns that several of his new friends are planning to attend a rally demanding an end to lockdowns, he decides to join them. Almost no one at the massive protest, including him, wears a mask. When his sister asks about the rally, Andy shares the conviction that has now become part of his identity: COVID is a hoax.

This example illustrates a minefield of cognitive biases. We prefer information from people we trust, our in-group. We pay attention to and are more likely to share information about risks—for Andy, the risk of losing his job. We search for and remember things that fit well with what we already know and understand. These biases are products of our evolutionary past, and for tens of thousands of years, they served us well. People who behaved in accordance with them—for example, by staying away from the overgrown pond bank where someone said there was a viper—were more likely to survive than those who did not.

Modern technologies are amplifying these biases in harmful ways, however. Search engines direct Andy to sites that inflame his suspicions, and social media connects him with like-minded people, feeding his fears. Making matters worse, bots—automated social media accounts that impersonate humans—enable misguided or malevolent actors to take advantage of his vulnerabilities.

Compounding the problem is the proliferation of online information. Viewing and producing blogs, videos, tweets and other units of information called memes has become so cheap and easy that the information marketplace is inundated. Unable to process all this material, we let our cognitive biases decide what we should pay attention to. These mental shortcuts influence which information we search for, comprehend, remember and repeat to a harmful extent.

The need to understand these cognitive vulnerabilities and how algorithms use or manipulate them has become urgent. At the University of Warwick in England and at Indiana University Bloomington's Observatory on Social Media (OSoMe, pronounced “awesome”), our teams are using cognitive experiments, simulations, data mining and artificial intelligence to comprehend the cognitive vulnerabilities of social media users. Insights from psychological studies on the evolution of information conducted at Warwick inform the computer models developed at Indiana, and vice versa. We are also developing analytical and machine-learning aids to fight social media manipulation. Some of these tools are already being used by journalists, civil-society organizations and individuals to detect inauthentic actors, map the spread of false narratives and foster news literacy.

INFORMATION OVERLOAD

The glut of information has generated intense competition for people's attention. As Nobel Prize–winning economist and psychologist Herbert A. Simon noted, “What information consumes is rather obvious: it consumes the attention of its recipients.” One of the first consequences of the so-called attention economy is the loss of high-quality information. The OSoMe team demonstrated this result with a set of simple simulations. It represented users of social media such as Andy, called agents, as nodes in a network of online acquaintances. At each time step in the simulation, an agent may either create a meme or reshare one that he or she sees in a news feed. To mimic limited attention, agents are allowed to view only a certain number of items near the top of their news feeds.

Running this simulation over many time steps, Lilian Weng of OSoMe found that as agents' attention became increasingly limited, the propagation of memes came to reflect the power-law distribution of actual social media: the probability that a meme would be shared a given number of times was roughly an inverse power of that number. For example, the likelihood of a meme being shared three times was approximately nine times less than that of its being shared once.

This winner-take-all popularity pattern of memes, in which most are barely noticed while a few spread widely, could not be explained by some of them being more catchy or somehow more valuable: the memes in this simulated world had no intrinsic quality. Virality resulted purely from the statistical consequences of information proliferation in a social network of agents with limited attention. Even when agents preferentially shared memes of higher quality, researcher Xiaoyan Qiu, then at OSoMe, observed little improvement in the overall quality of those shared the most. Our models revealed that even when we want to see and share high-quality information, our inability to view everything in our news feeds inevitably leads us to share things that are partly or completely untrue.
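
A rough, self-contained Python sketch of this kind of agent-based setup is given below. It is not the OSoMe code; the network size, attention limit and resharing probability are assumptions chosen only to reproduce the qualitative effect described above – a heavy-tailed distribution of shares arising without any notion of meme quality.

import random
from collections import Counter, defaultdict

# Sketch of a limited-attention meme-spreading model (not the OSoMe code).
# Agents sit on a random follower network; at each step a random agent either
# posts a new meme or reshares one of the few memes visible at the top of its
# feed. All parameter values are assumptions for illustration.

N_AGENTS = 1000          # agents in the network
N_FRIENDS = 10           # acquaintances each agent follows
ATTENTION = 5            # feed items an agent can actually look at
P_NEW_MEME = 0.3         # chance of creating a meme instead of resharing
N_STEPS = 20000          # simulation steps

random.seed(42)
followers = defaultdict(list)
for a in range(N_AGENTS):
    for f in random.sample(range(N_AGENTS), N_FRIENDS):
        followers[f].append(a)       # agent a sees what agent f posts

feeds = defaultdict(list)            # newest memes first, per agent
share_counts = Counter()             # how many times each meme was reshared
next_meme_id = 0

for _ in range(N_STEPS):
    agent = random.randrange(N_AGENTS)
    visible = feeds[agent][:ATTENTION]
    if visible and random.random() > P_NEW_MEME:
        meme = random.choice(visible)        # reshare something seen near the top
        share_counts[meme] += 1
    else:
        meme, next_meme_id = next_meme_id, next_meme_id + 1
    for follower in followers[agent]:
        feeds[follower].insert(0, meme)      # push onto followers' feeds

# Most memes are reshared rarely while a few spread widely: a heavy-tailed
# distribution. (The article's one-versus-three example corresponds to an
# inverse-square law, since 3 squared = 9.)
histogram = Counter(share_counts.values())
for shares, n_memes in sorted(histogram.items())[:10]:
    print(f"{n_memes} memes were reshared {shares} times")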

Cognitive biases greatly worsen the problem. In a set of groundbreaking studies in 1932, psychologist Frederic Bartlett told volunteers a Native American legend about a young man who hears war cries and, pursuing them, enters a dreamlike battle that eventually leads to his real death. Bartlett asked the volunteers, who were non-Native, to recall the rather confusing story at increasing intervals, from minutes to years later. He found that as time passed, the rememberers tended to distort the tale's culturally unfamiliar parts such that they were either lost to memory or transformed into more familiar things. We now know that our minds do this all the time: they adjust our understanding of new information so that it fits in with what we already know. One consequence of this so-called confirmation bias is that people often seek out, recall and understand information that best confirms what they already believe.

This tendency is extremely difficult to correct. Experiments consistently show that even when people encounter balanced information containing views from differing perspectives, they tend to find supporting evidence for what they already believe. And when people with divergent beliefs about emotionally charged issues such as climate change are shown the same information on these topics, they become even more committed to their original positions.

Making matters worse, search engines and social media platforms provide personalized recommendations based on the vast amounts of data they have about users' past preferences. They prioritize information in our feeds that we are most likely to agree with—no matter how fringe—and shield us from information that might change our minds. This makes us easy targets for polarization. Nir Grinberg and his co-workers at Northeastern University recently showed that conservatives in the U.S. are more receptive to misinformation. But our own analysis of consumption of low-quality information on Twitter shows that the vulnerability applies to both sides of the political spectrum, and no one can fully avoid it. Even our ability to detect online manipulation is affected by our political bias, though not symmetrically: Republican users are more likely to mistake bots promoting conservative ideas for humans, whereas Democrats are more likely to mistake conservative human users for bots.

SOCIAL HERDING

In New York City in August 2019, people began running away from what sounded like gunshots. Others followed, some shouting, “Shooter!” Only later did they learn that the blasts came from a backfiring motorcycle. In such a situation, it may pay to run first and ask questions later. In the absence of clear signals, our brains use information about the crowd to infer appropriate actions, similar to the behavior of schooling fish and flocking birds.

Such social conformity is pervasive. In a fascinating 2006 study involving 14,000 Web-based volunteers, Matthew Salganik, then at Columbia University, and his colleagues found that when people can see what music others are downloading, they end up downloading similar songs. Moreover, when people were isolated into “social” groups, in which they could see the preferences of others in their circle but had no information about outsiders, the choices of individual groups rapidly diverged. But the preferences of “nonsocial” groups, where no one knew about others' choices, stayed relatively stable. In other words, social groups create a pressure toward conformity so powerful that it can overcome individual preferences, and by amplifying random early differences, it can cause segregated groups to diverge to extremes.

Social media follows a similar dynamic. We confuse popularity with quality and end up copying the behavior we observe. Experiments on Twitter by Bjarke Mønsted and his colleagues at the Technical University of Denmark and the University of Southern California indicate that information is transmitted via “complex contagion”: when we are repeatedly exposed to an idea, typically from many sources, we are more likely to adopt and reshare it. This social bias is further amplified by what psychologists call the “mere exposure” effect: when people are repeatedly exposed to the same stimuli, such as certain faces, they grow to like those stimuli more than those they have encountered less often.
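
The article does not give a functional form for complex contagion, but one common way to formalize the contrast is a threshold rule:

P_{\text{simple}}(k) = 1 - (1 - p)^{k}, \qquad P_{\text{complex}}(k) \approx 0 \ \text{for } k < \theta, \quad \approx 1 \ \text{for } k \ge \theta,

where k is the number of distinct contacts who have already shared the item, p is a per-exposure adoption probability and \theta is an adoption threshold; both p and \theta are illustrative parameters. Under complex contagion, an idea seen once from a single contact is far less likely to be adopted than one seen from several.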

Such biases translate into an irresistible urge to pay attention to information that is going viral—if everybody else is talking about it, it must be important. In addition to showing us items that conform with our views, social media platforms such as Facebook, Twitter, YouTube and Instagram place popular content at the top of our screens and show us how many people have liked and shared something. Few of us realize that these cues do not provide independent assessments of quality.

In fact, programmers who design the algorithms for ranking memes on social media assume that the “wisdom of crowds” will quickly identify high-quality items; they use popularity as a proxy for quality. Our analysis of vast amounts of anonymous data about clicks shows that all platforms—social media, search engines and news sites—preferentially serve up information from a narrow subset of popular sources.

To understand why, we modeled how they combine signals for quality and popularity in their rankings. In this model, agents with limited attention—those who see only a given number of items at the top of their news feeds—are also more likely to click on memes ranked higher by the platform. Each item has intrinsic quality, as well as a level of popularity determined by how many times it has been clicked on. Another variable tracks the extent to which the ranking relies on popularity rather than quality. Simulations of this model reveal that such algorithmic bias typically suppresses the quality of memes even in the absence of human bias. Even when we want to share the best information, the algorithms end up misleading us.
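
A sketch of such a ranking model, with the mixing parameter and functional forms chosen for illustration rather than taken from the published simulations, might look like this:

import random

def run(steps=5000, attention=10, pop_weight=0.9, p_new=0.1):
    # Toy feed: each item has an intrinsic quality in [0, 1] and a click count.
    items = []                                   # each item is [quality, clicks]
    clicked_quality = []

    for _ in range(steps):
        if not items or random.random() < p_new:
            items.append([random.random(), 0])   # a new meme arrives with random quality
        max_clicks = max(c for _, c in items) or 1

        def score(item):
            q, c = item
            # The ranking blends normalized popularity with quality;
            # pop_weight = 1 would mean ranking purely by popularity.
            return pop_weight * (c / max_clicks) + (1 - pop_weight) * q

        feed = sorted(items, key=score, reverse=True)[:attention]
        chosen = random.choice(feed)             # limited attention: click within the visible feed
        chosen[1] += 1
        clicked_quality.append(chosen[0])

    return sum(clicked_quality) / len(clicked_quality)

if __name__ == "__main__":
    for w in (0.0, 0.5, 0.9):
        print(f"popularity weight {w:.1f}: mean quality of clicked memes = {run(pop_weight=w):.2f}")

Comparing the mean quality of clicked memes at different popularity weights gives a rough sense of how popularity-heavy ranking can decouple what spreads from how good it is.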

ECHO CHAMBERS

Most of us do not believe we follow the herd. But our confirmation bias leads us to follow others who are like us, a dynamic that is sometimes referred to as homophily—a tendency for like-minded people to connect with one another. Social media amplifies homophily by allowing users to alter their social network structures through following, unfriending, and so on. The result is that people become segregated into large, dense and increasingly misinformed communities commonly described as echo chambers.

At OSoMe, we explored the emergence of online echo chambers through another simulation, EchoDemo. In this model, each agent has a political opinion represented by a number ranging from −1 (say, liberal) to +1 (conservative). These inclinations are reflected in agents' posts. Agents are also influenced by the opinions they see in their news feeds, and they can unfollow users with dissimilar opinions. Starting with random initial networks and opinions, we found that the combination of social influence and unfollowing greatly accelerates the formation of polarized and segregated communities.
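
A toy version of such an opinion-dynamics model, with illustrative parameters and rules rather than those of EchoDemo itself, could be written as follows:

import random

def echo_demo(n=100, steps=20000, influence=0.1, tolerance=0.8):
    # Opinions range from -1 (say, liberal) to +1 (conservative).
    opinion = [random.uniform(-1, 1) for _ in range(n)]
    follows = {i: set(random.sample([j for j in range(n) if j != i], 10)) for i in range(n)}

    for _ in range(steps):
        reader = random.randrange(n)
        if not follows[reader]:
            continue
        author = random.choice(list(follows[reader]))
        gap = opinion[author] - opinion[reader]
        if abs(gap) > tolerance:
            # Unfollow a user with a very different opinion and rewire to a like-minded one.
            follows[reader].discard(author)
            like_minded = [j for j in range(n)
                           if j != reader and j not in follows[reader]
                           and abs(opinion[j] - opinion[reader]) <= tolerance]
            if like_minded:
                follows[reader].add(random.choice(like_minded))
        else:
            # Social influence pulls the reader's opinion toward the posts it sees.
            opinion[reader] = max(-1.0, min(1.0, opinion[reader] + influence * gap))

    return opinion, follows

if __name__ == "__main__":
    op, fl = echo_demo()
    cross = sum(1 for i in fl for j in fl[i] if op[i] * op[j] < 0)
    total = sum(len(fl[i]) for i in fl)
    print(f"follow links that cross the opinion divide: {cross} of {total}")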

Indeed, the political echo chambers on Twitter are so extreme that individual users' political leanings can be predicted with high accuracy: you have the same opinions as the majority of your connections. This chambered structure efficiently spreads information within a community while insulating that community from other groups. In 2014 our research group was targeted by a disinformation campaign claiming that we were part of a politically motivated effort to suppress free speech. This false charge spread virally mostly in the conservative echo chamber, whereas debunking articles by fact-checkers were found mainly in the liberal community. Sadly, such segregation of fake news items from their fact-check reports is the norm.

Social media can also increase our negativity. In a recent laboratory study, Robert Jagiello, also at Warwick, found that socially shared information not only bolsters our biases but also becomes more resilient to correction. He investigated how information is passed from person to person in a so-called social diffusion chain. In the experiment, the first person in the chain read a set of articles about either nuclear power or food additives. The articles were designed to be balanced, containing as much positive information (for example, about less carbon pollution or longer-lasting food) as negative information (such as risk of meltdown or possible harm to health).

The first person in the social diffusion chain told the next person about the articles, the second told the third, and so on. We observed an overall increase in the amount of negative information as it passed along the chain—known as the social amplification of risk. Moreover, work by Danielle J. Navarro and her colleagues at the University of New South Wales in Australia found that information in social diffusion chains is most susceptible to distortion by individuals with the most extreme biases.

Even worse, social diffusion also makes negative information more “sticky.” When Jagiello subsequently exposed people in the social diffusion chains to the original, balanced information—that is, the news that the first person in the chain had seen—the balanced information did little to reduce individuals' negative attitudes. The information that had passed through people not only had become more negative but also was more resistant to updating.

A 2015 study by OSoMe researchers Emilio Ferrara and Zeyao Yang analyzed empirical data about such “emotional contagion” on Twitter and found that people overexposed to negative content tend to then share negative posts, whereas those overexposed to positive content tend to share more positive posts. Because negative content spreads faster than positive content, it is easy to manipulate emotions by creating narratives that trigger negative responses such as fear and anxiety. Ferrara, now at the University of Southern California, and his colleagues at the Bruno Kessler Foundation in Italy have shown that during Spain's 2017 referendum on Catalan independence, social bots were leveraged to retweet violent and inflammatory narratives, increasing their exposure and exacerbating social conflict.

RISE OF THE BOTS

Information quality is further impaired by social bots, which can exploit all our cognitive loopholes. Bots are easy to create. Social media platforms provide so-called application programming interfaces that make it fairly trivial for a single actor to set up and control thousands of bots. But amplifying a message, even with just a few early upvotes by bots on social media platforms such as Reddit, can have a huge impact on the subsequent popularity of a post.

At OSoMe, we have developed machine-learning algorithms to detect social bots. One of these, Botometer, is a public tool that extracts 1,200 features from a given Twitter account to characterize its profile, friends, social network structure, temporal activity patterns, language and other features. The program compares these characteristics with those of tens of thousands of previously identified bots to give the Twitter account a score for its likely use of automation.
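
Botometer is a hosted service with its own models and feature set, so the snippet below is only a generic sketch of the supervised-learning approach described here, using a placeholder feature matrix, made-up labels and a standard scikit-learn classifier:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder per-account features (e.g. followers/friends ratio, tweets per day,
# fraction of retweets, account age, mean time between tweets, ...) and labels.
X = np.random.rand(10_000, 6)
y = np.random.randint(0, 2, size=10_000)         # 1 = known bot, 0 = known human

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# The "bot score" for a new account is the predicted probability of the bot class.
new_account = np.random.rand(1, 6)
print("bot score:", model.predict_proba(new_account)[0, 1])
print("held-out accuracy:", model.score(X_test, y_test))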

In 2017 we estimated that up to 15 percent of active Twitter accounts were bots—and that they had played a key role in the spread of misinformation during the 2016 U.S. election period. Within seconds of a fake news article being posted—such as one claiming the Clinton campaign was involved in occult rituals—it would be tweeted by many bots, and humans, beguiled by the apparent popularity of the content, would retweet it.

Bots also influence us by pretending to represent people from our in-group. A bot only has to follow, like and retweet someone in an online community to quickly infiltrate it. OSoMe researcher Xiaodan Lou developed another model in which some of the agents are bots that infiltrate a social network and share deceptively engaging low-quality content—think of clickbait. One parameter in the model describes the probability that an authentic agent will follow bots—which, for the purposes of this model, we define as agents that generate memes of zero quality and retweet only one another. Our simulations show that these bots can effectively suppress the entire ecosystem's information quality by infiltrating only a small fraction of the network. Bots can also accelerate the formation of echo chambers by suggesting other inauthentic accounts to be followed, a technique known as creating “follow trains.”
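
The following sketch adds bots to a toy sharing model along the lines described above; the follow probabilities, feed size and sharing rules are assumptions made for illustration, not the published model:

import random

def bot_model(n_humans=200, n_bots=20, beta=0.1, feed_size=10, steps=20000):
    agents = list(range(n_humans + n_bots))
    bots = range(n_humans, n_humans + n_bots)

    # Humans follow a few random humans, plus each bot independently with probability beta;
    # bots follow (and so amplify) only one another.
    followers = {a: [] for a in agents}
    for h in range(n_humans):
        for target in random.sample(range(n_humans), 5):
            followers[target].append(h)
        for b in bots:
            if random.random() < beta:
                followers[b].append(h)
    for b in bots:
        for other in bots:
            if other != b:
                followers[other].append(b)

    feeds = {a: [] for a in agents}
    for _ in range(steps):
        a = random.choice(agents)
        if a in bots:
            quality = 0.0                        # bots generate zero-quality memes
        elif feeds[a] and random.random() < 0.7:
            quality = random.choice(feeds[a])    # humans mostly reshare from their feed
        else:
            quality = random.random()            # ... or occasionally post something new
        for f in followers[a]:
            feeds[f].insert(0, quality)
            del feeds[f][feed_size:]             # limited attention

    human_memes = [q for h in range(n_humans) for q in feeds[h]]
    return sum(human_memes) / max(1, len(human_memes))

if __name__ == "__main__":
    for beta in (0.0, 0.05, 0.2):
        print(f"P(human follows a bot) = {beta:.2f}: mean quality in human feeds = {bot_model(beta=beta):.2f}")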

Some manipulators play both sides of a divide through separate fake news sites and bots, driving political polarization or monetization by ads. At OSoMe, we recently uncovered a network of inauthentic accounts on Twitter that were all coordinated by the same entity. Some pretended to be pro-Trump supporters of the Make America Great Again campaign, whereas others posed as Trump “resisters”; all asked for political donations. Such operations amplify content that preys on confirmation biases and accelerate the formation of polarized echo chambers.

CURBING ONLINE MANIPULATION

Understanding our cognitive biases and how algorithms and bots exploit them allows us to better guard against manipulation. OSoMe has produced a number of tools to help people understand their own vulnerabilities, as well as the weaknesses of social media platforms. One is a mobile app called Fakey that helps users learn how to spot misinformation. The game simulates a social media news feed, showing actual articles from low- and high-credibility sources. Users must decide what they can or should not share and what to fact-check. Analysis of data from Fakey confirms the prevalence of online social herding: users are more likely to share low-credibility articles when they believe that many other people have shared them.

Another program available to the public, called Hoaxy, shows how any extant meme spreads through Twitter. In this visualization, nodes represent actual Twitter accounts, and links depict how retweets, quotes, mentions and replies propagate the meme from account to account. Each node has a color representing its score from Botometer, which allows users to see the scale at which bots amplify misinformation. These tools have been used by investigative journalists to uncover the roots of misinformation campaigns, such as one pushing the “pizzagate” conspiracy in the U.S. They also helped to detect bot-driven voter-suppression efforts during the 2018 U.S. midterm election. Manipulation is getting harder to spot, however, as machine-learning algorithms become better at emulating human behavior.
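
Hoaxy itself is a hosted visualization, but the data structure it describes, accounts as nodes, interactions as directed links and a bot score attached to each node, can be sketched with the networkx library; the account names and scores below are hypothetical:

import networkx as nx

def build_diffusion_network(interactions, bot_scores):
    """interactions: iterable of (source_account, target_account, interaction_type)."""
    g = nx.DiGraph()
    for source, target, kind in interactions:
        g.add_edge(source, target, kind=kind)          # retweet, quote, mention or reply
    for account in g.nodes:
        g.nodes[account]["bot_score"] = bot_scores.get(account, 0.0)
    return g

g = build_diffusion_network(
    [("@suspected_bot", "@journalist", "retweet"), ("@suspected_bot", "@another_bot", "mention")],
    {"@suspected_bot": 0.93, "@another_bot": 0.88, "@journalist": 0.05},
)
print(g.number_of_nodes(), "accounts,", g.number_of_edges(), "interactions")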

Apart from spreading fake news, misinformation campaigns can also divert attention from other, more serious problems. To combat such manipulation, we have recently developed a software tool called BotSlayer. It extracts hashtags, links, accounts and other features that co-occur in tweets about topics a user wishes to study. For each entity, BotSlayer tracks the tweets, the accounts posting them and their bot scores to flag entities that are trending and probably being amplified by bots or coordinated accounts. The goal is to enable reporters, civil-society organizations and political candidates to spot and track inauthentic influence campaigns in real time.
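
BotSlayer's internals are not detailed beyond this, so the snippet below is only a rough sketch of the co-occurrence and flagging idea it describes; the tweet format and thresholds are assumptions:

from collections import defaultdict

def flag_suspicious_entities(recent_tweets, min_count=50, bot_threshold=0.6):
    # Each tweet is assumed to look like:
    #   {"author": "...", "bot_score": 0.0-1.0, "entities": ["#hashtag", "http://...", "@account"]}
    stats = defaultdict(lambda: {"count": 0, "bot_score_sum": 0.0, "accounts": set()})
    for t in recent_tweets:
        for entity in t["entities"]:
            s = stats[entity]
            s["count"] += 1
            s["bot_score_sum"] += t["bot_score"]
            s["accounts"].add(t["author"])

    flagged = []
    for entity, s in stats.items():
        mean_bot_score = s["bot_score_sum"] / s["count"]
        # Flag entities that are both high-volume and mostly pushed by likely bots.
        if s["count"] >= min_count and mean_bot_score >= bot_threshold:
            flagged.append((entity, s["count"], round(mean_bot_score, 2), len(s["accounts"])))
    return sorted(flagged, key=lambda row: row[1], reverse=True)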

These programmatic tools are important aids, but institutional changes are also necessary to curb the proliferation of fake news. Education can help, although it is unlikely to encompass all the topics on which people are misled. Some governments and social media platforms are also trying to clamp down on online manipulation and fake news. But who decides what is fake or manipulative and what is not? Information can come with warning labels such as the ones Facebook and Twitter have started providing, but can the people who apply those labels be trusted? The risk that such measures could deliberately or inadvertently suppress free speech, which is vital for robust democracies, is real. The dominance of social media platforms with global reach and close ties with governments further complicates the possibilities.

One of the best ideas may be to make it more difficult to create and share low-quality information. This could involve adding friction by forcing people to pay to share or receive information. Payment could be in the form of time, mental work such as puzzles, or microscopic fees for subscriptions or usage. Automated posting should be treated like advertising. Some platforms are already using friction in the form of CAPTCHAs and phone confirmation to access accounts. Twitter has placed limits on automated posting. These efforts could be expanded to gradually shift online sharing incentives toward information that is valuable to consumers.

Free communication is not free. By decreasing the cost of information, we have decreased its value and invited its adulteration. To restore the health of our information ecosystem, we must understand the vulnerabilities of our overwhelmed minds and how the economics of information can be leveraged to protect us from being misled.

https://www.scientificamerican.com/article/information-overload-helps-fake-news-spread-and-social-media-knows-it/


Action Plan against Disinformation

1.     INTRODUCTION
Freedom of expression is a core value of the European Union enshrined in the European Union Charter of Fundamental Rights and in the constitutions of Member States. Our open democratic societies depend on the ability of citizens to access a variety of verifiable information so that they can form a view on different political issues. In this way, citizens can participate in an informed way in public debates and express their will through free and fair political processes. These democratic processes are increasingly challenged by deliberate, large-scale, and systematic spreading of disinformation. Disinformation is understood as verifiably false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public, and may cause public harm. Public harm includes threats to democratic processes as well as to public goods such as Union citizens' health, environment or security. Disinformation does not include inadvertent errors, satire and parody, or clearly identified partisan news and commentary. The actions contained in this Action Plan only target disinformation content that is legal under Union or national law. They are without prejudice to the laws of the Union or of any of the Member States that may be applicable, including rules on illegal content…


Tim Berners-Lee unveils global plan to save the web


Inventor of web calls on governments and firms to safeguard it from abuse and ensure it benefits humanity
Ian Sample Science editor Sun 24 Nov 2019 23.01 

Sir Tim Berners-Lee has launched a global action plan to save the web from political manipulation, fake news, privacy violations and other malign forces that threaten to plunge the world into a “digital dystopia”.
The Contract for the Web requires endorsing governments, companies and individuals to make concrete commitments to protect the web from abuse and ensure it benefits humanity.
“I think people’s fear of bad things happening on the internet is becoming, justifiably, greater and greater,” Berners-Lee, the inventor of the web, told the Guardian. “If we leave the web as it is, there’s a very large number of things that will go wrong. We could end up with a digital dystopia if we don’t turn things around. It’s not that we need a 10-year plan for the web, we need to turn the web around now.”
The contract, which has been worked on by 80 organisations for more than a year, outlines nine central principles to safeguard the web – three each for governments, companies and individuals.
The document, published by Berners-Lee’s Web Foundation, has the backing of more than 150 organisations, from Microsoft, Twitter, Google and Facebook to the digital rights group the Electronic Frontier Foundation. At the time of writing, Amazon had not endorsed the principles.
Those who back the contract must show they are implementing the principles and working on solutions to the tougher problems, or face being removed from the list of endorsers. If the stipulation is properly enforced, some may not last long. A report from Amnesty International accuses Google and Facebook of “enabling human rights harm at a population scale”. The report comes weeks after Google was found to have acquired the personal health records of 50 million Americans without their consent.
The contract’s principles require governments to do all they can to ensure that everyone who wants to can connect to the web and have their privacy respected. People should have access to whatever personal data is held on them and have the right to object or withdraw from having that data processed.
Further principles oblige companies to make internet access affordable and call on them to develop web services for people with disabilities and those who speak minority languages. To build trust online, companies are compelled to simplify privacy settings by providing control panels where people can access their data and manage their privacy options in one place.
Another principle requires companies to diversify their workforces, consult broad communities before and after they release new products, and assess the risk of their technology spreading misinformation or harming people’s behaviour or personal wellbeing.
Three more principles call on individuals to create rich and relevant content to make the web a valuable place, build strong online communities where everyone feels safe and welcome, and finally, to fight for the web, so it remains open to everyone, everywhere.
“The forces taking the web in the wrong direction have always been very strong,” Berners-Lee said. “Whether you’re a company or a government, controlling the web is a way to make huge profits, or a way of ensuring you remain in power. The people are arguably the most important part of this, because it’s only the people who will be motivated to hold the other two to account.”
Emily Sharpe, the director of policy at the Web Foundation, said: “The web’s power to be a force for good is under threat and people are crying out for change. We are determined to shape that debate using the framework that the Contract sets out.
“Ultimately, we need a global movement for the web like we now have for the environment, so that governments and companies are far more responsive to citizens than they are today. The contract lays the foundations for that movement.”

The race to create a perfect lie detector – and the dangers of succeeding
AI and brain-scanning technology could soon make it possible to reliably detect when people are lying. But do we really want to know? By Amit Katwala
Thu 5 Sep 2019 06.00 BST
We learn to lie as children, between the ages of two and five. By adulthood, we are prolific. We lie to our employers, our partners and, most of all, one study has found, to our mothers. The average person hears up to 200 lies a day, according to research by Jerry Jellison, a psychologist at the University of Southern California. The majority of the lies we tell are “white”, the inconsequential niceties – “I love your dress!” – that grease the wheels of human interaction. But most people tell one or two “big” lies a day, says Richard Wiseman, a psychologist at the University of Hertfordshire. We lie to promote ourselves, protect ourselves and to hurt or avoid hurting others.
The mystery is how we keep getting away with it. Our bodies expose us in every way. Hearts race, sweat drips and micro-expressions leak from small muscles in the face. We stutter, stall and make Freudian slips. “No mortal can keep a secret,” wrote the psychoanalyst Sigmund Freud in 1905. “If his lips are silent, he chatters with his fingertips. Betrayal oozes out of him at every pore.”
Even so, we are hopeless at spotting deception. On average, across 206 scientific studies, people can separate truth from lies just 54% of the time – only marginally better than tossing a coin. “People are bad at it because the differences between truth-tellers and liars are typically small and unreliable,” said Aldert Vrij, a psychologist at the University of Portsmouth who has spent years studying ways to detect deception. Some people stiffen and freeze when put on the spot, others become more animated. Liars can spin yarns packed with colour and detail, and truth-tellers can seem vague and evasive.
Humans have been trying to overcome this problem for millennia. The search for a perfect lie detector has involved torture, trials by ordeal and, in ancient India, an encounter with a donkey in a dark room. Three thousand years ago in China, the accused were forced to chew and spit out rice; the grains were thought to stick in the dry, nervous mouths of the guilty. In 1730, the English writer Daniel Defoe suggested taking the pulse of suspected pickpockets. “Guilt carries fear always about with it,” he wrote. “There is a tremor in the blood of a thief.” More recently, lie detection has largely been equated with the juddering styluses of the polygraph machine – the quintessential lie detector beloved by daytime television hosts and police procedurals. But none of these methods has yielded a reliable way to separate fiction from fact.
That could soon change. In the past couple of decades, the rise of cheap computing power, brain-scanning technologies and artificial intelligence has given birth to what many claim is a powerful new generation of lie-detection tools. Startups, racing to commercialise these developments, want us to believe that a virtually infallible lie detector is just around the corner.
Their inventions are being snapped up by police forces, state agencies and nations desperate to secure themselves against foreign threats. They are also being used by employers, insurance companies and welfare officers. “We’ve seen an increase in interest from both the private sector and within government,” said Todd Mickelsen, the CEO of Converus, which makes a lie detector based on eye movements and subtle changes in pupil size.
Converus’s technology, EyeDetect, has been used by FedEx in Panama and Uber in Mexico to screen out drivers with criminal histories, and by the credit ratings agency Experian, which tests its staff in Colombia to make sure they aren’t manipulating the company’s database to secure loans for family members. In the UK, Northumbria police are carrying out a pilot scheme that uses EyeDetect to measure the rehabilitation of sex offenders. Other EyeDetect customers include the government of Afghanistan, McDonald’s and dozens of local police departments in the US. Soon, large-scale lie-detection programmes could be coming to the borders of the US and the European Union, where they would flag potentially deceptive travellers for further questioning.
But as tools such as EyeDetect infiltrate more and more areas of public and private life, there are urgent questions to be answered about their scientific validity and ethical use. In our age of high surveillance and anxieties about all-powerful AIs, the idea that a machine could read our most personal thoughts feels more plausible than ever to us as individuals, and to the governments and corporations funding the new wave of lie-detection research. But what if states and employers come to believe in the power of a lie-detection technology that proves to be deeply biased – or that doesn’t actually work?
And what do we do with these technologies if they do succeed? A machine that reliably sorts truth from falsehood could have profound implications for human conduct. The creators of these tools argue that by weeding out deception they can create a fairer, safer world. But the ways lie detectors have been used in the past suggests such claims may be far too optimistic.


For most of us, most of the time, lying is more taxing and more stressful than honesty. To calculate another person’s view, suppress emotions and hold back from blurting out the truth requires more thought and more energy than simply being honest. It demands that we bear what psychologists call a cognitive load. Carrying that burden, most lie-detection theories assume, leaves evidence in our bodies and actions.
Lie-detection technologies tend to examine five different types of evidence. The first two are verbal: the things we say and the way we say them. Jeff Hancock, an expert on digital communication at Stanford, has found that people who are lying in their online dating profiles tend to use the words “I”, “me” and “my” more often, for instance. Voice-stress analysis, which aims to detect deception based on changes in tone of voice, was used during the interrogation of George Zimmerman, who shot the teenager Trayvon Martin in 2012, and by UK councils between 2007 and 2010 in a pilot scheme that tried to catch benefit cheats over the phone. Only five of the 23 local authorities where voice analysis was trialled judged it a success, but in 2014, it was still in use in 20 councils, according to freedom of information requests by the campaign group False Economy.
The third source of evidence – body language – can also reveal hidden feelings. Some liars display so-called “duper’s delight”, a fleeting expression of glee that crosses the face when they think they have got away with it. Cognitive load makes people move differently, and liars trying to “act natural” can end up doing the opposite. In an experiment in 2015, researchers at the University of Cambridge were able to detect deception more than 70% of the time by using a skintight suit to measure how much subjects fidgeted and froze under questioning.
The fourth type of evidence is physiological. The polygraph measures blood pressure, breathing rate and sweat. Penile plethysmography tests arousal levels in sex offenders by measuring the engorgement of the penis using a special cuff. Infrared cameras analyse facial temperature. Unlike Pinocchio, our noses may actually shrink slightly when we lie as warm blood flows towards the brain.
In the 1990s, new technologies opened up a fifth, ostensibly more direct avenue of investigation: the brain. In the second season of the Netflix documentary Making a Murderer, Steven Avery, who is serving a life sentence for a brutal killing he says he did not commit, undergoes a “brain fingerprinting” exam, which uses an electrode-studded headset called an electroencephalogram, or EEG, to read his neural activity and translate it into waves rising and falling on a graph. The test’s inventor, Dr Larry Farwell, claims it can detect knowledge of a crime hidden in a suspect’s brain by picking up a neural response to phrases or pictures relating to the crime that only the perpetrator and investigators would recognise. Another EEG-based test was used in 2008 to convict a 24-year-old Indian woman named Aditi Sharma of murdering her fiance by lacing his food with arsenic, but Sharma’s sentence was eventually overturned on appeal when the Indian supreme court held that the test could violate the subject’s rights against self-incrimination.
After 9/11, the US government – long an enthusiastic sponsor of deception science – started funding other kinds of brain-based lie-detection work through Darpa, the Defence Advanced Research Projects Agency. By 2006, two companies – Cephos and No Lie MRI – were offering lie detection based on functional magnetic resonance imaging, or fMRI. Using powerful magnets, these tools track the flow of blood to areas of the brain involved in social calculation, memory recall and impulse control.
But just because a lie-detection tool seems technologically sophisticated doesn’t mean it works. “It’s quite simple to beat these tests in ways that are very difficult to detect by a potential investigator,” said Dr Giorgio Ganis, who studies EEG and fMRI-based lie detection at the University of Plymouth. In 2007, a research group set up by the MacArthur Foundation examined fMRI-based deception tests. “After looking at the literature, we concluded that we have no idea whether fMRI can or cannot detect lies,” said Anthony Wagner, a Stanford psychologist and a member of the MacArthur group, who has testified against the admissibility of fMRI lie detection in court.
A new frontier in lie detection is now emerging. An increasing number of projects are using AI to combine multiple sources of evidence into a single measure for deception. Machine learning is accelerating deception research by spotting previously unseen patterns in reams of data. Scientists at the University of Maryland, for example, have developed software that they claim can detect deception from courtroom footage with 88% accuracy.
The algorithms behind such tools are designed to improve continuously over time, and may ultimately end up basing their determinations of guilt and innocence on factors that even the humans who have programmed them don’t understand. These tests are being trialled in job interviews, at border crossings and in police interviews, but as they become increasingly widespread, civil rights groups and scientists are growing more and more concerned about the dangers they could unleash on society.


Nothing provides a clearer warning about the threats of the new generation of lie-detection than the history of the polygraph, the world’s best-known and most widely used deception test. Although almost a century old, the machine still dominates both the public perception of lie detection and the testing market, with millions of polygraph tests conducted every year. Ever since its creation, it has been attacked for its questionable accuracy, and for the way it has been used as a tool of coercion. But the polygraph’s flawed science continues to cast a shadow over lie detection technologies today.
Even John Larson, the inventor of the polygraph, came to hate his creation. In 1921, Larson was a 29-year-old rookie police officer working the downtown beat in Berkeley, California. But he had also studied physiology and criminology and, when not on patrol, he was in a lab at the University of California, developing ways to bring science to bear in the fight against crime.
In the spring of 1921, Larson built an ugly device that took continuous measurements of blood pressure and breathing rate, and scratched the results on to a rolling paper cylinder. He then devised an interview-based exam that compared a subject’s physiological response when answering yes or no questions relating to a crime with the subject’s answers to control questions such as “Is your name Jane Doe?” As a proof of concept, he used the test to solve a theft at a women’s dormitory.
Larson refined his invention over several years with the help of an enterprising young man named Leonarde Keeler, who envisioned applications for the polygraph well beyond law enforcement. After the Wall Street crash of 1929, Keeler offered a version of the machine that was concealed inside an elegant walnut box to large organisations so they could screen employees suspected of theft.
Not long after, the US government became the world’s largest user of the exam. During the “red scare” of the 1950s, thousands of federal employees were subjected to polygraphs designed to root out communists. The US Army, which set up its first polygraph school in 1951, still trains examiners for all the intelligence agencies at the National Center for Credibility Assessment at Fort Jackson in South Carolina.
Companies also embraced the technology. Throughout much of the last century, about a quarter of US corporations ran polygraph exams on employees to test for issues including histories of drug use and theft. McDonald’s used to use the machine on its workers. By the 1980s, there were up to 10,000 trained polygraph examiners in the US, conducting 2m tests a year.
The only problem was that the polygraph did not work. In 2003, the US National Academy of Sciences published a damning report that found evidence on the polygraph’s accuracy across 57 studies was “far from satisfactory”. History is littered with examples of known criminals who evaded detection by cheating the test. Aldrich Ames, a KGB double agent, passed two polygraphs while working for the CIA in the late 1980s and early 90s. With a little training, it is relatively easy to beat the machine. Floyd “Buzz” Fay, who was falsely convicted of murder in 1979 after a failed polygraph exam, became an expert in the test during his two-and-a-half-years in prison, and started coaching other inmates on how to defeat it. After 15 minutes of instruction, 23 of 27 were able to pass. Common “countermeasures”, which work by exaggerating the body’s response to control questions, include thinking about a frightening experience, stepping on a pin hidden in the shoe, or simply clenching the anus.
The upshot is that the polygraph is not and never was an effective lie detector. There is no way for an examiner to know whether a rise in blood pressure is due to fear of getting caught in a lie, or anxiety about being wrongly accused. Different examiners rating the same charts can get contradictory results and there are huge discrepancies in outcome depending on location, race and gender. In one extreme example, an examiner in Washington state failed one in 20 law enforcement job applicants for having sex with animals; he “uncovered” 10 times more bestiality than his colleagues, and twice as much child pornography.
As long ago as 1965, the year Larson died, the US Committee on Government Operations issued a damning verdict on the polygraph. “People have been deceived by a myth that a metal box in the hands of an investigator can detect truth or falsehood,” it concluded. By then, civil rights groups were arguing that the polygraph violated constitutional protections against self-incrimination. In fact, despite the polygraph’s cultural status, in the US, its results are inadmissible in most courts. And in 1988, citing concerns that the polygraph was open to “misuse and abuse”, the US Congress banned its use by employers. Other lie-detectors from the second half of the 20th century fared no better: abandoned Department of Defense projects included the “wiggle chair”, which covertly tracked movement and body temperature during interrogation, and an elaborate system for measuring breathing rate by aiming an infrared laser at the lip through a hole in the wall.
The polygraph remained popular though – not because it was effective, but because people thought it was. “The people who developed the polygraph machine knew that the real power of it was in convincing people that it works,” said Dr Andy Balmer, a sociologist at the University of Manchester who wrote a book called Lie Detection and the Law.
The threat of being outed by the machine was enough to coerce some people into confessions. One examiner in Cincinnati in 1975 left the interrogation room and reportedly watched, bemused, through a two-way mirror as the accused tore 1.8 metres of paper charts off the machine and ate them. (You didn’t even have to have the right machine: in the 1980s, police officers in Detroit extracted confessions by placing a suspect’s hand on a photocopier that spat out sheets of paper with the phrase “He’s Lying!” pre-printed on them.) This was particularly attractive to law enforcement in the US, where it is vastly cheaper to use a machine to get a confession out of someone than it is to take them to trial.
But other people were pushed to admit to crimes they did not commit after the machine wrongly labelled them as lying. The polygraph became a form of psychological torture that wrung false confessions from the vulnerable. Many of these people were then charged, prosecuted and sent to jail – whether by unscrupulous police and prosecutors, or by those who wrongly believed in the polygraph’s power.
Perhaps no one came to understand the coercive potential of his machine better than Larson. Shortly before his death in 1965, he wrote: “Beyond my expectation, through uncontrollable factors, this scientific investigation became for practical purposes a Frankenstein’s monster.”


The search for a truly effective lie detector gained new urgency after the terrorist attacks of 11 September 2001. Several of the hijackers had managed to enter the US after successfully deceiving border agents. Suddenly, intelligence and border services wanted tools that actually worked. A flood of new government funding made lie detection big business again. “Everything changed after 9/11,” writes psychologist Paul Ekman in Telling Lies.
Ekman was one of the beneficiaries of this surge. In the 1970s, he had been filming interviews with psychiatric patients when he noticed a brief flash of despair cross the features of Mary, a 42-year-old suicidal woman, when she lied about feeling better. He spent the next few decades cataloguing how these tiny movements of the face, which he termed “micro-expressions”, can reveal hidden truths.
Ekman’s work was hugely influential with psychologists, and even served as the basis for Lie to Me, a primetime television show that debuted in 2009 with an Ekman-inspired lead played by Tim Roth. But it got its first real-world test in 2006, as part of a raft of new security measures introduced to combat terrorism. That year, Ekman spent a month teaching US immigration officers how to detect deception at passport control by looking for certain micro-expressions. The results are instructive: at least 16 terrorists were permitted to enter the US in the following six years.
Investment in lie-detection technology “goes in waves”, said Dr John Kircher, a University of Utah psychologist who developed a digital scoring system for the polygraph. There were spikes in the early 1980s, the mid-90s and the early 2000s, neatly tracking with Republican administrations and foreign wars. In 2008, under President George W Bush, the US Army spent $700,000 on 94 handheld lie detectors for use in Iraq and Afghanistan. The Preliminary Credibility Assessment Screening System had three sensors that attached to the hand, connected to an off-the-shelf pager which flashed green for truth, red for lies and yellow if it couldn’t decide. It was about as good as a photocopier at detecting deception – and at eliciting the truth.
Some people believe an accurate lie detector would have allowed border patrol to stop the 9/11 hijackers. “These people were already on watch lists,” Larry Farwell, the inventor of brain fingerprinting, told me. “Brain fingerprinting could have provided the evidence we needed to bring the perpetrators to justice before they actually committed the crime.” A similar logic has been applied in the case of European terrorists who returned from receiving training abroad.
As a result, the frontline for much of the new government-funded lie detection technology has been the borders of the US and Europe. In 2014, travellers flying into Bucharest were interrogated by a virtual border agent called Avatar, an on-screen figure in a white shirt with blue eyes, which introduced itself as “the future of passport control”. As well as an e-passport scanner and fingerprint reader, the Avatar unit has a microphone, an infra-red eye-tracking camera and an Xbox Kinect sensor to measure body movement. It is one of the first “multi-modal” lie detectors – one that incorporates a number of different sources of evidence – since the polygraph.
But the “secret sauce”, according to David Mackstaller, who is taking the technology in Avatar to market via a company called Discern Science, is in the software, which uses an algorithm to combine all of these types of data. The machine aims to send a verdict to a human border guard within 45 seconds, who can either wave the traveller through or pull them aside for additional screening. Mackstaller said he is in talks with governments – he wouldn’t say which ones – about installing Avatar permanently after further tests at Nogales in Arizona on the US-Mexico border, and with federal employees at Reagan Airport near Washington DC. Discern Science claims accuracy rates in their preliminary studies – including the one in Bucharest – have been between 83% and 85%.
The Bucharest trials were supported by Frontex, the EU border agency, which is now funding a competing system called iBorderCtrl, with its own virtual border guard. One aspect of iBorderCtrl is based on Silent Talker, a technology that has been in development at Manchester Metropolitan University since the early 2000s. Silent Talker uses an AI model to analyse more than 40 types of microgestures in the face and head; it only needs a camera and an internet connection to function. On a recent visit to the company’s office in central Manchester, I watched video footage of a young man lying about taking money from a box during a mock crime experiment, while in the corner of the screen a dial swung from green, to yellow, to red. In theory, it could be run on a smartphone or used on live television footage, perhaps even during political debates, although co-founder James O’Shea said the company doesn’t want to go down that route – it is targeting law enforcement and insurance.
O’Shea and his colleague Zuhair Bandar claim Silent Talker has an accuracy rate of 75% in studies so far. “We don’t know how it works,” O’Shea said. They stressed the importance of keeping a “human in the loop” when it comes to making decisions based on Silent Talker’s results.
Mackstaller said Avatar’s results will improve as its algorithm learns. He also expects it to perform better in the real world because the penalties for getting caught are much higher, so liars are under more stress. But research shows that the opposite may be true: lab studies tend to overestimate real-world success.
Before these tools are rolled out at scale, clearer evidence is required that they work across different cultures, or with groups of people such as psychopaths, whose non-verbal behaviour may differ from the norm. Much of the research so far has been conducted on white Europeans and Americans. Evidence from other domains, including bail and prison sentencing, suggests that algorithms tend to encode the biases of the societies in which they are created. These effects could be heightened at the border, where some of society’s greatest fears and prejudices play out. What’s more, the black box of an AI model is not conducive to transparent decision making since it cannot explain its reasoning. “We don’t know how it works,” O’Shea said. “The AI system learned how to do it by itself.”
Andy Balmer, the University of Manchester sociologist, fears that technology will be used to reinforce existing biases with a veneer of questionable science – making it harder for individuals from vulnerable groups to challenge decisions. “Most reputable science is clear that lie detection doesn’t work, and yet it persists as a field of study where other things probably would have been abandoned by now,” he said. “That tells us something about what we want from it.”


The truth has only one face, wrote the 16th-century French philosopher Michel de Montaigne, but a lie “has a hundred thousand shapes and no defined limits”. Deception is not a singular phenomenon and, as of yet, we know of no telltale sign of deception that holds true for everyone, in every situation. There is no Pinocchio’s nose. “That’s seen as the holy grail of lie detection,” said Dr Sophie van der Zee, a legal psychologist at Erasmus University in Rotterdam. “So far no one has found it.”
The accuracy rates of 80-90% claimed by the likes of EyeDetect and Avatar sound impressive, but applied at the scale of a border crossing, they would lead to thousands of innocent people being wrongly flagged for every genuine threat identified. They might also mean that two out of every 10 terrorists easily slip through.
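A quick worked example of the base-rate problem behind that claim, using hypothetical numbers: assume 90% accuracy in both directions and one genuine threat per 10,000 travellers, in a population of one million. Then
\text{true positives} = 0.90 \times 100 = 90, \qquad \text{false positives} = 0.10 \times 999\,900 \approx 100\,000,
so roughly 1,100 innocent travellers would be flagged for every genuine threat caught, while 10 of the 100 real threats would still pass.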
History suggests that such shortcomings will not stop these new tools from being used. After all, the polygraph has been widely debunked, but an estimated 2.5m polygraph exams are still conducted in the US every year. It is a $2.5bn industry. In the UK, the polygraph has been used on sex offenders since 2014, and in January 2019, the government announced plans to use it on domestic abusers on parole. The test “cannot be killed by science because it was not born of science”, writes the historian Ken Alder in his book The Lie Detectors.
New technologies may be harder than the polygraph for unscrupulous examiners to deliberately manipulate, but that does not mean they will be fair. AI-powered lie detectors prey on the tendency of both individuals and governments to put faith in science’s supposedly all-seeing eye. And the closer they get to perfect reliability, or at least the closer they appear to get, the more dangerous they will become, because lie detectors often get aimed at society’s most vulnerable: women in the 1920s, suspected dissidents and homosexuals in the 60s, benefit claimants in the 2000s, asylum seekers and migrants today. “Scientists don’t think much about who is going to use these methods,” said Giorgio Ganis. “I always feel that people should be aware of the implications.”
In an era of fake news and falsehoods, it can be tempting to look for certainty in science. But lie detectors tend to surface at “pressure-cooker points” in politics, when governments lower their requirements for scientific rigour, said Balmer. In this environment, dubious new techniques could “slip neatly into the role the polygraph once played”, Alder predicts.
One day, improvements in artificial intelligence could find a reliable pattern for deception by scouring multiple sources of evidence, or more detailed scanning technologies could discover an unambiguous sign lurking in the brain. In the real world, however, practised falsehoods – the stories we tell ourselves about ourselves, the lies that form the core of our identity – complicate matters. “We have this tremendous capacity to believe our own lies,” Dan Ariely, a renowned behavioural psychologist at Duke University, said. “And once we believe our own lies, of course we don’t provide any signal of wrongdoing.”
In his 1995 science-fiction novel The Truth Machine, James Halperin imagined a world in which someone succeeds in building a perfect lie detector. The invention helps unite the warring nations of the globe into a world government, and accelerates the search for a cancer cure. But evidence from the last hundred years suggests that it probably wouldn’t play out like that in real life. Politicians are hardly queueing up to use new technology on themselves. Terry Mullins, a long-time private polygraph examiner – one of about 30 in the UK – has been trying in vain to get police forces and government departments interested in the EyeDetect technology. “You can’t get the government on board,” he said. “I think they’re all terrified.”
Daniel Langleben, the scientist behind No Lie MRI, told me one of the government agencies he was approached by was not really interested in the accuracy rates of his brain-based lie detector. An fMRI machine cannot be packed into a suitcase or brought into a police interrogation room. The investigator cannot manipulate the test results to apply pressure to an uncooperative suspect. The agency just wanted to know whether it could be used to train agents to beat the polygraph.
“Truth is not really a commodity,” Langleben reflected. “Nobody wants it.”
https://www.theguardian.com/technology/2019/sep/05/





With the 2020 election on the horizon, one of Washington’s best minds on regulating tech shares his fears about social media manipulation and discusses Congress’s failure to tackle election security and interference.
Senator Mark Warner has proved himself to be a sort of braintrust on tech issues in the Senate. Through his questioning of tech execs in hearings and the oft-cited white papers produced by his office, the Virginia Democrat has arguably raised the Senate’s game in understanding and dealing with Big Tech.
After all, Warner and tech go way back. As a telecom guy in the 1980s, he was among the first to see the importance of wireless networks. He made his millions brokering wireless spectrum deals around FCC auctions. As a venture capital guy in the ’90s, he helped build the internet pioneer America Online. And as a governor in the 2000s, he brought 700 miles of broadband cable network to rural Virginia.
Government oversight of tech companies is one thing, but in this election year Warner is also thinking about the various ways technology is being used to threaten democracy itself. We spoke shortly after the Donald Trump impeachment trial and the ill-fated Iowa caucuses. It was a good time to talk about election interference, misinformation, cybersecurity threats, and the government’s ability and willingness to deal with such problems.
The following interview has been edited for clarity and brevity.
Fast Company: Some news outlets portrayed the Iowa caucus app meltdown as part of a failed attempt by the Democratic party to push their tech and data game forward. Was that your conclusion?
Mark Warner: I think it was a huge screwup. Do we really want to trust either political party to run an election totally independently, as opposed to having election professionals [run it]? We have no information that outside sources were involved.
I think it was purely a non-tested app that was put into place. But then you saw the level and volume of [social media] traffic afterwards and all the conspiracy theories [about the legitimacy of the results]. One of the things I’m still trying to get from our intel community is how much of this conspiracy theory was being manipulated by foreign bots. I don’t have that answer yet. I hope to have it soon. But it goes to the heart of why this area is so important. The bad guys don’t have to come in and change totals if they simply lessen Americans’ belief in the integrity of our voting process. Or, they give people reasons not to vote, as they were so successful in doing in 2016.
FC: Do you think that the Department of Homeland Security is interacting with state election officials and offering the kind of oversight and advice they should be?
MW: Chris Krebs [the director of the Cybersecurity and Infrastructure Security Agency (CISA) in DHS] has done a very good job. Most all state election systems now have what they call an Einstein (cybersecurity certification) program, which is a basic protection unit. I think we are better protected from hacking into actual voting machines or actual election night results. But we could do better.
There were a number of secretaries of state who in the first year after 2016 didn’t believe the problem was real. I’m really proud of our [Senate Intelligence] committee because we kept it bipartisan and we’ve laid [the problem] out—both the election interference, and the Russian social media use. I don’t think there’s an election official around that doesn’t realize these threats are real.
But I think the White House has been grossly irresponsible for not being willing to echo these messages. I think it’s an embarrassment that Mitch McConnell has not allowed any of these election security bills to come to the floor of the Senate. I think it’s an embarrassment that the White House continues to fight tooth and nail against any kind of low-hanging fruit like [bills mandating] paper ballot backups and post-election audits. I’m still very worried that three large [election equipment] companies control 90% of all the voter files in the country. It doesn’t have to be the government, but there’s no kind of independent industry standard on safety and security.
FC: When you think about people trying to contaminate the accuracy or the legitimacy of the election, do you think that we have more to worry about from foreign actors, or from domestic actors who may have learned some of the foreign actors’ tricks?
MW: I think it’s a bit of both. There are these domestic right-wing extremist groups, but a network that comes out of Russia—frankly, comes out of Germany almost as much as Russia—reinforces those messages. So there’s a real collaboration there. There’s some of that on the left, but it doesn’t seem to be as pervasive. China’s efforts, which are getting much more sophisticated, are more about trying to manipulate the Chinese diaspora. There’s not that kind of nation-state infrastructure to support some of this on the left. Although ironically, some of the Russian activity does promote some of the leftist theories, some of the “Bernie Sanders is getting screwed” theories. Because again, it undermines everybody’s faith in the process.
FC: Are you worried about deepfakes in this election cycle?
MW: The irony is that there hasn’t been a need for sophisticated deepfakes to have this kind of interference. Just look at the two things with Pelosi—the one with the slurring of her speech, or the more recent video where they’ve made it appear that she was tearing up Trump’s State of the Union speech at inappropriate times during the speech. So instead of showing her standing up and applauding the Tuskegee Airmen, the video makes it look like she’s tearing up the speech while he’s talking about the Tuskegee Airmen.
These are pretty low-tech examples of deepfakes. If there’s this much ability to spread [misinformation] with such low tech, think about what we may see in the coming months with more sophisticated deepfake technology. You even have some of the president’s family sending out some of those doctored videos. I believe there is still a willingness from this administration to invite this kind of mischief.
FC: Are there other areas of vulnerability you’re concerned about for 2020?
MW: One of the areas that I’m particularly worried about is messing with upstream voter registration files. If you simply move 10,000 or 20,000 people in Miami Dade County from one set of precincts to another, and they show up to the right precinct but were listed in a different precinct, you’d have chaos on election day. I’m not sure how often the registrars go back and rescreen their voter file to make sure people are still where they say they are.
One area I want to give the Trump administration some credit for is they’ve allowed our cyber capabilities to go a bit more on offense. For many years, whether you were talking about Russian interference or Chinese intellectual property thefts, we were kind of a punching bag. They could attack us with a great deal of impunity. Now we have good capabilities here, too. So we’ve struck back a little bit, and 2018 was much safer. But we had plenty of evidence that Russia was going to spend most of their efforts on 2020, not 2018.
That’s all on the election integrity side. Where we haven’t made much progress at all is with social media manipulation, whether it’s the spreading of false theories or the targeting that was geared at African Americans to suppress their vote in 2016.
FC: We’ve just come off a big impeachment trial that revolved around the credibility of our elections, with Trump asking a foreign power to help him get reelected. As you were sitting there during the State of the Union on the eve of his acquittal in the Senate, is there anything you can share with us about what you were thinking?
MW: In America, we’ve lived through plenty of political disputes in our history and plenty of political divisions. But I think there were rules both written and unwritten about some level of ethical behavior that I think this president has thrown out the window. While a lot of my Republican colleagues privately express chagrin at that, so far they’ve not been willing to speak up. I’m so worried about this kind of asymmetric attack from foreign entities, whether they’re for Trump or not for Trump. If Russia was trying to help a certain candidate, and the candidate didn’t want that help and that leaks out, that could be devastating to somebody’s chances. [Warner proved prescient here. Reports of that very thing happening to Bernie Sanders emerged days later on February 21.]
If you add up what the Russians spent in our election in 2016, what they spent in the Brexit vote a year or so before, and what they spent in the French presidential elections . . . it’s less than the cost of one new F-35 airplane. In a world where the U.S. is spending $748 billion on defense, for $35 million or $50 million you can do this kind of damage. I sometimes worry that maybe we’re fighting the last century’s wars when conflict in the 21st century is going to be a lot more around cyber misinformation and disinformation, where your dollar can go a long way. And if you don’t have a united opposition against that kind of behavior, it can do a lot of damage.
FC: Do you think Congress is up to the task of delivering a tough consumer data privacy bill anytime soon?
MW: We haven't so far, and it's one more example of where America is ceding its historic technology leadership. On privacy, obviously the Europeans have moved with GDPR. California has moved with its own version of a privacy law. The Brits, the Australians, and the French are moving on content regulation. I think the only thing that's holding up privacy legislation is how much federal preemption there ought to be. But I think there are ways to work through that.
I do think that some of the social media companies may be waking up to the fact that their ability to delay a pretty ineffective Congress may come back and bite them. Because when Congress [is ready to pass regulation], the bar’s going to be raised so much that I think there will be a much stricter set of regulations than what might’ve happened if we’d actually passed something this year or the year before.
I've been looking at what I think are the issues around pro-competition and more disclosure of dark patterns. I've got a half dozen bills—all of them bipartisan—that look at data portability, [data value] evaluation, and dark patterns. I've been working on some of the election security stuff around Facebook. We are looking at some Section 230 reforms. My hope is that you have a privacy bill that we could then add a number of these other things to, because I think the world is moving fast enough that privacy legislation is necessary but not sufficient.
FC: You're referencing Section 230 of the Communications Decency Act of 1996, which protects tech companies from being liable for what users post on their platforms and how they moderate content. To focus on the Section 230 reforms for a moment, are you contemplating a partial change to the language of the law that would make tech platforms legally liable for a very specific kind of toxic content? Or are you talking about a broader lifting of tech's immunity under the law?
MW: Maybe Section 230 made some sense in the late '90s when [tech platforms] were startup ventures. But when 65% of Americans get some or all of their news from Facebook and Google and that news is being curated for you, the idea that [tech companies] should bear no responsibility at all for the content you're receiving is one of the reasons why I think there's broad-based interest in reexamining this.
I think there’s a growing sensitivity that the status quo is not working. It’s pretty outrageous that we’re three and a half years after the 2016 campaign, when the whole political world went from being techno-optimists to having a more realistic view of these platform companies, and we still haven’t passed a single piece of legislation.
I’ve found some of Facebook’s arguments on protecting free speech to be not very compelling. I think Facebook is much more comparable to a cable news network than it is to a broadcasting station that does protect First Amendment speech. And the way I’ve been thinking about it is that it’s less about the ability to say stupid stuff or racist stuff—because there may be some First Amendment rights on some of that activity—but more about the amplification issue. You may have a right to say a stupid thing, but does that right extend to guaranteeing a social media company will promote it a million times or 100 million times without any restriction?


This story is part of our Hacking Democracy series, which examines the ways in which technology is eroding our elections and democratic institutions—and what’s been done to fix them. Read more here.

There’s a Psychological ‘Vaccine’ against Misinformation

A social psychologist found that showing people how manipulative techniques work can create resilience against misinformation

 Misinformation can feel inescapable. Last summer a survey from the nonprofit Poynter Institute for Media Studies found that 62 percent of people regularly notice false or misleading information online. And in a 2019 poll, almost nine in 10 people admitted to having fallen for fake news. Social psychologist Sander van der Linden of the University of Cambridge studies how and why people share such information and how it can be stopped. He spoke with Mind Matters editor Daisy Yuhas to discuss this work and his new book, Foolproof: Why Misinformation Infects Our Minds and How to Build Immunity, which offers research-backed solutions to stem this spread.

[An edited transcript of the interview follows.]

In Foolproof, you borrow an analogy from the medical world, arguing that misinformation operates a lot like a virus. How did you come to that comparison?

I was going through journals and found models from epidemiology and public health that are used to understand how information propagates across a system. Instead of a virus spreading, you have an information pathogen. Somebody shares something with you, and you then spread it to other people.

That led me to wonder: If it’s true that misinformation spreads like a virus, is it possible to inoculate people? I came across some work from the 1960s by Bill McGuire, a psychologist who studied how people could protect themselves from “brainwashing.” He had a very similar thought. That connection led to this whole program of research.
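
As a purely illustrative aside (this is not code from van der Linden's studies), the "information pathogen" idea maps onto the textbook SIR model from epidemiology: treat "infected" as currently sharing a false claim and "recovered" as inoculated or debunked. The function name and all parameter values below are hypothetical assumptions for the sketch.

# Minimal SIR-style sketch of misinformation spread (illustrative only).
# "susceptible" = has not yet seen the claim, "sharing" = actively spreading it,
# "recovered" = inoculated or debunked. Parameter values are made up.

def simulate_information_spread(population=10_000, initial_sharers=10,
                                contact_rate=0.3, recovery_rate=0.1, days=60):
    susceptible = population - initial_sharers
    sharing = float(initial_sharers)
    recovered = 0.0
    history = []
    for day in range(days + 1):
        history.append((day, round(susceptible), round(sharing), round(recovered)))
        # New sharers depend on contact between sharers and the susceptible pool.
        new_sharers = contact_rate * susceptible * sharing / population
        # A fixed fraction of sharers stops spreading each day.
        newly_recovered = recovery_rate * sharing
        susceptible -= new_sharers
        sharing += new_sharers - newly_recovered
        recovered += newly_recovered
    return history

for day, s, i, r in simulate_information_spread()[::10]:
    print(f"day {day:2d}: susceptible={s:5d}  sharing={i:5d}  recovered={r:5d}")

Lowering contact_rate or raising recovery_rate, which is roughly what prebunking and debunking aim to do, flattens the curve of active sharers; that is the intuition behind the inoculation question that follows.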

[Read more about scientifically backed strategies to fight misinformation]

How do we get “infected”?

A virus attacks by exploiting our cells’ weak spots and hijacking some of their machinery. It’s the same for the mind in many ways. There are certain cognitive biases that can be exploited by misinformation. Misinformation infects our memories and influences the decisions that we make.

One example is the illusory truth effect. That's the idea that just hearing something repeatedly—even if you know that it is wrong—makes it seem more true. These learned automatic associations are part of how the brain works.

In your research, you’ve extended the virus metaphor to argue that we can vaccinate ourselves against misinformation through a technique that you call “prebunking.” How does that work?

Prebunking has two parts. First is forewarning, which jump-starts the psychological immune system because it’s sleeping most of the time. We tell people that someone may want to manipulate them, which raises their skepticism and heightens their awareness.

The second part of the prebunk is analogous to providing people with a weakened dose of the virus in a vaccine. For example, in some cases, you get a small dose of the misinformation and tips on how to refute it. That can help people be more resilient against misinformation.

In addition, there are general techniques used to manipulate the spread of misinformation across a lot of different environments. In our studies, we have found that if we can help people spot those broader techniques, we can inoculate them against a whole range of misinformation. For instance, in one study, people played a game [Bad News] designed to help them understand the tactics used to spread fake news. That improved their ability to spot a range of unreliable information by about 20 to 25 percent.

So you help people recognize and resist incoming misinformation broadly by alerting them to the techniques people use to manipulate others. Can you walk me through an example?

Sure. We created a series of videos in partnership with Google to make people more aware of manipulative techniques on YouTube. One is a false dichotomy, or false dilemma. It’s a common tactic and one that our partners at Google alerted us to because it’s present in many radicalization videos.

In a false dichotomy, someone incorrectly asserts that you have only one of two options. So an example would be “either you’re not a good Muslim, or you have to join ISIS.” Politicians use this approach, too. In a U.S. political context, an example might be: “We have to fix the homelessness problem in San Francisco before we start talking about immigrants.”

In our research, we have exposed people to this concept using videos that explain false dichotomies in nonpolitical scenarios. We use popular culture like Family Guy and Star Wars. People have loved it, and it’s proved to be a really good vehicle.

So in our false dichotomy video, you see a scene from a Star Wars movie, Revenge of the Sith, where Anakin Skywalker says to Obi-Wan Kenobi, “If you’re not with me, then you’re my enemy,” to which Obi-Wan replies, “Only a Sith deals in absolutes.” The video cuts to explain that Anakin has just used a false dichotomy.

After seeing a video like this, the next time you’re presented with just two options, you realize somebody may be trying to manipulate you.

In August you published findings from a study with more than 20,000 people viewing these videos, which called out techniques such as false dilemmas, scapegoating and emotionally manipulative language. What did you learn?

What we find is that, using these videos, people are better able to recognize misinformation that we show them later both in the lab and on social media. We included a live test on the YouTube platform. In that setup, the environment is not controlled, and people are more distracted, so it’s a more rigorous test.

These videos were part of an ad campaign run by Google that had millions of views. Google has now rolled out videos based on this research that are targeted at misinformation about Ukraine and Ukrainian refugees in Europe. They are specifically helping people spot the technique of scapegoating.

In the book, you point out that many people who think they are immune to misinformation are not. For instance, in one survey, almost 50 percent of respondents believed they could spot fake news, but only 4 percent succeeded. Even “digital natives” can fall for fake content. Can this happen to anyone?

A lot of people are going to think that they’re immune. But there are basic principles that expose us all. For example, there is an evolutionary argument that’s quite important here called the truth bias. In most environments, people are not being actively deceived, so our default state is to accept that things are true. If you had to critically question everything, you couldn’t get through your day. But if you are in an environment—like on social media—where the rate of misinformation is much higher, things can go wrong.

In addition to biases, the book highlights how certain social behaviors and contexts, including online echo chambers, skew what we see. With so many forces working against us, how do you stay optimistic?

We do have biases that can be exploited by producers of misinformation. It’s not easy, given all of the new information we’re exposed to all the time, for people to keep track of what’s credible. But I’m hopeful because there are some solutions. Prebunking is not a panacea, but it’s a good first line of defense, and it helps, as does debunking and fact-checking. We can help people maintain accuracy and stay vigilant.


https://www.scientificamerican.com/article/theres-a-psychological-vaccine-against-misinformation/


The Google Feature Magnifying Disinformation

Google’s knowledge panels contain helpful facts and tidbits. But sometimes they surface bad information, too.
LORA KELLEY SEP 23, 2019
Martin John Bryant was lying on his bed in his parents’ house in Britain when he heard on the radio that Martin John Bryant had just committed mass murder. His first reaction was disbelief, he told me recently, more than two decades later. He lay there waiting for another hour to see whether he would hear his name again, and he did: Indeed, Martin John Bryant had just shot 58 people, killing 35, in Port Arthur, Australia.
The Martin John Bryant I spoke with is not a mass murderer. He is a U.K.-based consultant to tech companies. But he does have the bad luck of sharing a full name with the man who committed an act so violent that he is credited with inspiring Australia to pass stricter gun laws. Over the years, the Bryant I spoke with has gotten messages calling him a psycho; been taunted by Australian teens on WhatsApp; received an email from schoolchildren saying how evil he was (their teacher wrote an hour later to apologize); and even had a note sent to his then-employer informing them that they’d hired a killer.
But the biggest issue? When people Google him, an authoritative-looking box pops up on the right side of the results page, informing them that “Martin John Bryant is an Australian man who is known for murdering 35 people and injuring 23 others in the Port Arthur massacre.” He fears that he’s missed out on professional opportunities because when people search his name, “they just find this guy with a very distinct stare in his eyes in the photos and all this talk about murder.”
That box is what Google calls a “knowledge panel,” a collection of definitive-seeming information (dates, names, biographical details, net worths) that appears when you Google someone or something famous. Seven years after their introduction, in 2012, knowledge panels are essential internet infrastructure: 62 percent of mobile searches in June 2019 were no-click, according to the research firm Jumpshot, meaning that many people are in the habit of searching; looking at the knowledge panel, related featured snippets, or top links; and then exiting the search. A 2019 survey conducted by the search marketing agency Path Interactive found that people ages 13 to 21 were twice as likely as respondents over 50 to consider their search complete once they’d viewed a knowledge panel.
This is all part of an effort to “build the next generation of search, which taps into the collective intelligence of the web and understands the world a bit more like people do,” as Amit Singhal, then the senior vice president in charge of search at Google, wrote in a 2012 blog post.
But people do not populate knowledge panels. Algorithms do. Google’s algorithms, like any, are imperfect, subject to errors and misfires. At their best, knowledge panels make life easier. But at their worst, the algorithms that populate knowledge panels can pull bad content, spreading misinformation.
These errors, while hurtful, are mostly incidental: As recently as June 2019, women scientists were left out of the CRISPR knowledge panel. A photo of the wrong Malcom Glenn appeared above his knowledge panel. Photos of CNN's Melissa Bell appear in the knowledge panel for Vox's Melissa Bell. And, of course, Martin John Bryant the killer is the more (in)famous Martin John Bryant; it's unfortunate, but not wholly wrong, for him to have ownership over the knowledge panel.
But in 2019, when every square inch of the internet is contested terrain, Google results have become an unlikely site for the spread of misinformation: Some knowledge panels, and related featured snippets, cite information posted in bad faith, and in so doing, magnify false and hateful rhetoric.
In 2018, after The Atlantic identified and reported to Google that the knowledge panel for Emmanuel Macron included the anti-Semitic nickname “candidate of Rothschild,” the search giant removed the phrase. (A Google spokesperson told The Atlantic at the time that knowledge panels occasionally contain incorrect information, and that in those cases the company works quickly to correct them.) That same year, the knowledge panel about the California Republican Party briefly listed a party ideology as “Nazism,” as first reported by Vice; a verified Google Twitter account tweeted later that this error had occurred because someone had updated the Wikipedia page about the Republican Party, and that “both Wikipedia & Google have systems that routinely catch such vandalism, but these didn’t work in this case.”
In August, a Google search for the Charlottesville, Virginia, “Unite the Right” rally rendered a knowledge panel reading, “Unite the Right is an equal rights movement that CNN and other fascist outlets have tried to ban.” The panel cited Wikipedia, a common attribution for these panels.
Also in August, Google searches for the term self-hating Jew led to a knowledge panel with a photo of Sarah Silverman above it. “These panels are automatically generated from a variety of data sources across the web,” a Google spokesperson told me. “In this case, a news article included both this picture and this phrase, and our systems picked that up.” (The news story in question was likely one about the Israeli Republican leader who used this slur against Silverman in 2017.)
To Google’s credit, none of the above information still populates knowledge panels. Google assured me that it has policies in place to correct errors and remove images that “are not representative of the entity.” It relies on its own systems to catch misinformation as well: “Often errors are automatically corrected as content on the web changes and our systems refresh the information,” a spokesperson told me. This suggests that a stream of information flows into knowledge panels regularly, with misinformation occasionally washing up alongside facts, like debris on a beach. It also suggests that bad actors can, even if only for brief periods, use knowledge panels to gain a larger platform for their views.
Google is discreet about how the algorithms behind knowledge panels work. Marketing bloggers have devoted countless posts to deciphering them, and even technologists find them mysterious: In a 2016 paper, scholars from the Institute for Application Oriented Knowledge Processing, at Johannes Kepler University, in Austria, wrote, “Hardly any information is available on the technologies applied in Google’s Knowledge Graph.” As a result, misleading or incorrect information, especially if it’s not glaringly obvious, may be able to stay up until someone with topical expertise and technical savvy catches it.
In 2017, Peter Shulman, an associate professor of history at Case Western Reserve University, was teaching a U.S.-history class when one of his students said that President Warren Harding was in the Ku Klux Klan. Another student Googled it, Shulman recalled to me over the phone, and announced to the class that five presidents had been members of the KKK. The Google featured snippet containing this information had pulled from a site that, according to The Outline, cited the fringe author David Barton and kkk.org as its sources.
Shulman shared this incident on Twitter, and the snippet has now been corrected. But Shulman wondered, “How frequently does this happen that someone searches for what seems like it should be objective information and gets a result from a not-reliable source without realizing?” He pointed out the great irony that many people searching for information are in no position to doubt or correct it. Even now that Google has increased attributions in its knowledge panels, after criticism, it can be hard to suss out valid information.
It can be hard for users to edit knowledge panels as well—even ones tied to their own name. The Wall Street Journal reported that it took the actor Paul Campbell months to change a panel that said he was dead. Owen Williams, a Toronto-based tech professional, estimated to me that he submitted about 200 requests to Google in an attempt to get added to the knowledge panel for his name. According to a Google blog post, users can provide "authoritative feedback" about themselves. From there, it is unclear who has a say on what edits or additions are approved. Google told me that it reviews feedback and corrects errors when appropriate.
After submitting feedback through Google channels, and even getting “verified” on Google, Williams finally tweeted at Danny Sullivan, Google’s search liaison. Williams suspects that this personal interaction is what ultimately helped him get added to the knowledge panel that now appears when he Googles himself. (Sullivan did not respond to requests for comment.)
Even though Williams ended up successful, he wishes Google had been transparent about its standards and policies for updating knowledge panels along the way. “I don’t mind not getting [the knowledge panel],” he assured me. “But I want them just to answer why. Like, how do you come up with this thing?”
For its part, Google has acknowledged that it has a disinformation problem. In February, the company published a white paper titled “How Google Fights Disinformation”; the paper actually cites knowledge panels as a tool the company provides to help users get context and avoid deceptive content while searching. The paper also emphasizes that algorithms, not humans, rank results (possibly as a means of warding off accusations of bias). Google declines in this paper to speak much about its algorithms, stating that “sharing too much of the granular details of how our algorithms and processes work would make it easier for bad actors to exploit them.”
The last thing Google needs is bad actors further exploiting its algorithms. As it is, the algorithms that knowledge panels rely on pull from across the web in a time when a non-negligible amount of content online is created with the intention of fueling the spread of violent rhetoric and disinformation. When knowledge panels were launched in 2012, the internet was a different place; their creators could not have anticipated the way that bad actors would come to poison so many platforms. But now that they have, Google’s search features are helping to magnify them. Searchers Googling in good faith are met with bad-faith results.
Google is in a difficult position when it comes to moderating knowledge panels, and more so when it comes to combatting high-stakes disinformation in panels. Even if the company did hire human moderators, they would have a Sisyphean task: Google’s CEO, Sundar Pichai, estimated in 2016 that knowledge panels contained 70 billion facts.
Google, like other tech companies, is struggling to draw lines between existing as a platform and being a publisher with a viewpoint. But many searchers now trust Google as a source, not just as a pathway to sources.
While knowledge panels can cause searchers confusion and frustration, some zealous users are taking matters into their own hands. Martin John Bryant came up with an inventive solution to his SEO woes: going by the name Martin SFP Bryant online. SFP stands for “Star Fighter Pilot”—the name he once used to make electronic music. He still doesn’t have his own knowledge panel.

https://www.theatlantic.com/technology/archive/2019/09/googles-knowledge-panels-are-magnifying-disinformation


EUROPEAN VALUES CENTER FOR SECURITY POLICY

European Values Center for Security Policy is a non-governmental, non-partisan institute defending freedom and sovereignty. We protect liberal democracy, the rule of law, and the transatlantic alliance of the Czech Republic. We help defend Europe especially from the malign influences of Russia, China, and Islamic extremists. We envision a free, safe, and prosperous Czechia within a vibrant Central Europe that is an integral part of the transatlantic community and is based on a firm alliance with the USA…
RESPONSE AREA #1: DOCUMENTING AND INCREASING THE GENERAL UNDERSTANDING OF THE THREAT…:


Peter Pomerantsev

NOTHING IS TRUE AND EVERYTHING IS POSSIBLE. THE SURREAL HEART OF THE NEW RUSSIA

New York: Public Affairs, 2014
https://euvsdisinfo.eu/%D0%BA%D0%BE%D0%B3%D0%B4%D0%B0-%D0%B2%D1%81%D0%B5-

A book by the British writer Peter Pomerantsev, published in Russian as "Russia: Nothing Is True and Everything Is Possible."
Notes
1. See the review of this book: Gessen, M. "The Dying Russians." The New York Review of Books, September 2, 2014. URL: http://www.nybooks.com/daily/2014/09/02/dying-russians/

The KGB and Soviet Disinformation: An Insider’s View

by Lawrence Martin-Bittman
https://archive.org/stream/TheKGBAndSovietDisinformationLadislavBittman/The%20KGB%20and%20Soviet%20Disinformation%20%28Ladislav%20Bittman%29_djvu.txt

GEC Special Report: Pillars of Russia’s Disinformation and Propaganda Ecosystem