Tuesday, 31 July 2018

Digital State: Benefits and Threats



                                                                               Facta, non verba
                  



     The wide prevalence of the Internet, the electronic accumulation of big data and their algorithmic processing – including with artificial intelligence (AI) – stimulate and shape the digitization of various spheres of our lives. The information economy has become a catalyst of civilizational progress, a criterion of competitiveness and a yardstick of a state's growth. The introduction of digital technologies and their smart use deliver significant gains in productivity, energy efficiency and manufacturing quality, and make it possible to produce goods tailored to the specific requirements of each consumer.
     Realizing this, far-sighted politicians promote and support the digitization of the state both through various kinds of preferences (tax breaks, subsidies, discounts, etc.) and by creating innovation centers and implementing pilot projects, thereby achieving higher rates of economic development and a corresponding improvement in citizens' quality of life.
     Digitization certainly yields a tremendous economic effect. But is there always a synergetic connection between the results obtained and the opportunities for the full and comprehensive personal development of a state's inhabitants, and for the progress of democratization and the harmonization of social relations?... Read more: https://www.amazon.com/HOW-GET-RID-SHACKLES-TOTALITARIANISM-ebook/dp/B0C9543B4L/ref=sr_1_1?crid=19WW1TG75ZU79&keywords=HOW+TO+GET+RID+OF+THE+SHACKLES+OF+TOTALITARIANISM&qid=1687700500&s=books&sprefix=how+to+get+rid+of+the+shackles+of+totalitarianism%2Cstripbooks-intl-ship%2C181&sr=1-1
     
Cyberwar
How Russian Hackers and Trolls Helped Elect a President - What We Don't, Can't, and Do Know
Kathleen Hall Jamieson
  • Powerful, evidence-based analysis by one of the deans of American politics on how Russian interference likely tilted the 2016 election to Donald Trump
  • Marshals unique polling data and rigorous media framing analysis to explain why in all probability the interference had an effect on the outcome
  • Provides a qualified yet compelling answer in the affirmative to the biggest question left over from the election: Did the Russians help elect Donald Trump?
  • Carefully lays out the challenges to the notion that the Russians tilted the election and methodically dispenses with them

2020 Democratic presidential candidate Andrew Yang may not be at the top of the race when it comes to polling (Politico currently has him ranked as the 7th most-popular Democratic contender), but his policies, including support for universal basic income, have made him popular among a subset of young, liberal-leaning, tech-savvy voters. Yang’s latest proposal, too, is sure to strike a chord with them.
The presidential candidate published his latest policy proposal today: to treat data as a property right. Announcing the proposal on his website, Yang lamented how our data is collected, used, and abused by companies, often with little awareness or consent from us. “This needs to stop,” Yang says. “Data generated by each individual needs to be owned by them, with certain rights conveyed that will allow them to know how it’s used and protect it.”
The rights Yang is proposing:
  • The right to be informed as to what data will be collected, and how it will be used
  • The right to opt out of data collection or sharing
  • The right to be told if a website has data on you, and what that data is
  • The right to be forgotten; to have all data related to you deleted upon request
  • The right to be informed if ownership of your data changes hands
  • The right to be informed of any data breaches including your information in a timely manner
  • The right to download all data in a standardized format to port to another platform
The fourth point is notable because it seems to suggest Yang wants the same “right to be forgotten” laws that Europe currently offers. That’s something tech giants like Google have litigated vigorously. And you can be sure that many tech giants would lobby just as vigorously against some of his other “data as property” proposals.
Still, it’s refreshing to see a candidate so clearly outline his digital data policies. Whether that will help push him higher in the polls remains to be seen.
Our data is ours - or it should be. At this point our data is more valuable than oil. If anyone benefits from our data it should be us. I would make data a property right that each of us shares. https://www.yang2020.com/policies/data-property-right/


In the Camps: China's High-Tech Penal Colony

by Darren Byler

 How China used a network of surveillance to intern over a million people and produce a system of control previously unknown in human history
Novel forms of state violence and colonization have been unfolding for years in China’s vast northwestern region, where more than a million and a half Uyghurs and others have vanished into internment camps and associated factories. Based on hours of interviews with camp survivors and workers, thousands of government documents, and over a decade of research, Darren Byler, one of the leading experts on Uyghur society and Chinese surveillance systems, uncovers how a vast network of technology provided by private companies – facial surveillance, voice recognition, smartphone data – enabled the state and corporations to blacklist millions of Uyghurs because of their religious and cultural practice starting in 2017. Charged with “pre-crimes” that sometimes consist only of installing social media apps, detainees were put in camps to “study” – forced to praise the Chinese government, renounce Islam, disavow families, and labor in factories. Byler travels back to Xinjiang to reveal how the convenience of smartphones has doomed the Uyghurs to catastrophe, and makes the case that the technology is being used all over the world, sold by tech companies from Beijing to Seattle, producing new forms of unfreedom for vulnerable people around the world.

https://www.goodreads.com/en/book/show/58393878-in-the-camps

Living with Digital Surveillance in China: Citizens’ Narratives on Technology, Privacy, and Governance

  • July 2023

Author: Ariane Ollier-Malaterre

Abstract

Digital surveillance is a daily and all-encompassing reality of life in China. This book explores how Chinese citizens make sense of digital surveillance and live with it. It investigates their imaginaries about surveillance and privacy from within the Chinese socio-political system. Based on in-depth qualitative research interviews, detailed diary notes, and extensive documentation, Ariane Ollier-Malaterre attempts to ‘de-Westernise’ the internet and surveillance literature. She shows how the research participants weave a cohesive system of anguishing narratives on China’s moral shortcomings and redeeming narratives on the government and technology as civilising forces. Although many participants cast digital surveillance as indispensable in China, their misgivings, objections, and the mental tactics they employ to dissociate themselves from surveillance convey the mental and emotional weight associated with such surveillance exposure. The book is intended for academics and students in internet, surveillance, and Chinese studies, and those working on China in disciplines such as sociology, anthropology, social psychology, psychology, communication, computer sciences, contemporary history, and political sciences. The lay public interested in the implications of technology in daily life or in contemporary China will find it accessible as it synthesises the work of sinologists and offers many interview excerpts…: https://www.researchgate.net/publication/372792850_Living_with_Digital_Surveillance_in_China_Citizens'_Narratives_on_Technology_Privacy_and_Governance


How China’s citizens are coping with digital surveillance

Almost 90% of them adopted one or more mental tactics to distance and mentally protect themselves from surveillance. Denying or minimizing the existence of surveillance: “Nobody is watching. The government does not want to spend money to pay people to watch all the time.”

Big other: surveillance capitalism and the prospects of an information civilization

By Shoshana Zuboff

 Abstract

This article describes an emergent logic of accumulation in the networked sphere, ‘surveillance capitalism,’ and considers its implications for ‘information civilization.’ The institutionalizing practices and operational assumptions of Google Inc. are the primary lens for this analysis as they are rendered in two recent articles authored by Google Chief Economist Hal Varian. Varian asserts four uses that follow from computer-mediated transactions: ‘data extraction and analysis,’ ‘new contractual forms due to better monitoring,’ ‘personalization and customization,’ and ‘continuous experiments.’ An examination of the nature and consequences of these uses sheds light on the implicit logic of surveillance capitalism and the global architecture of computer mediation upon which it depends. This architecture produces a distributed and largely uncontested new expression of power that I christen: ‘Big Other.’ It is constituted by unexpected and often illegible mechanisms of extraction, commodification, and control that effectively exile persons from their own behavior while producing new markets of behavioral prediction and modification. Surveillance capitalism challenges democratic norms and departs in key ways from the centuries-long evolution of market capitalism. Journal of Information Technology (2015) 30, 75–89. doi:10.1057/jit.2015.5

Deep learning framework for subject-independent emotion detection using wireless signals

Abstract

Emotion state recognition using wireless signals is an emerging area of research with implications for neuroscientific studies of human behaviour and for well-being monitoring. Currently, standoff emotion detection mostly relies on the analysis of facial expressions and/or eye movements acquired from optical or video cameras. Meanwhile, although machine learning approaches have been widely accepted for recognizing human emotions from multimodal data, they have mostly been restricted to subject-dependent analyses, which lack generality. In this paper, we report an experimental study which collects heartbeat and breathing signals of 15 participants from radio frequency (RF) reflections off the body, followed by novel noise-filtering techniques. We propose a novel deep neural network (DNN) architecture based on the fusion of raw RF data and the processed RF signal for classifying and visualising various emotion states. The proposed model achieves a high classification accuracy of 71.67% for independent subjects, with precision, recall and F1-score values of 0.71, 0.72 and 0.71 respectively. We have compared our results with those obtained from five different classical ML algorithms, and it is established that deep learning offers superior performance even with a limited amount of raw RF and post-processed time-sequence data. The deep learning model has also been validated by comparing our results with those from ECG signals. Our results indicate that using wireless signals for standoff emotion-state detection is a better alternative to other technologies, with high accuracy and much wider applications in future studies of behavioural sciences.

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0242946
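
To make the fusion idea concrete, here is a minimal sketch (in Python/PyTorch) of the kind of two-branch network the abstract describes: one branch takes the raw RF reflections, the other the noise-filtered heartbeat/breathing signal, and their features are concatenated before classification. The layer sizes, window lengths and the four emotion classes are illustrative assumptions, not the authors' actual architecture.

# Illustrative two-branch fusion network for RF-based emotion classification.
# Layer sizes, input lengths and the 4-class output are assumptions for the sketch.
import torch
import torch.nn as nn

class RFEmotionNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        # Branch 1: raw RF reflections (1-channel time series)
        self.raw_branch = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten(),   # -> 32 * 8 = 256 features
        )
        # Branch 2: noise-filtered heartbeat/breathing signal
        self.proc_branch = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten(),   # -> 16 * 8 = 128 features
        )
        # Fusion: concatenate both feature vectors, then classify
        self.classifier = nn.Sequential(
            nn.Linear(256 + 128, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, raw_rf, proc_sig):
        fused = torch.cat([self.raw_branch(raw_rf), self.proc_branch(proc_sig)], dim=1)
        return self.classifier(fused)

model = RFEmotionNet()
# Dummy batch: 8 windows of raw RF (length 1024) and processed signal (length 256)
logits = model(torch.randn(8, 1, 1024), torch.randn(8, 1, 256))
print(logits.shape)  # torch.Size([8, 4])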

Spotify wants to know your "emotional state, gender, age, or accent"

BY WREN GRAVES

ON JANUARY 28, 2021, 11:44PM

If you listen to Spotify, then soon enough Spotify may listen to you. Via Music Business Worldwide, the streaming platform has secured a patent to monitor the background noise and speech of its users.

The big green circle first filed a patent for its “Identification of taste attributes from an audio signal” product in February of 2018, and finally received approval on January 12th, 2021. The goal is to gauge listeners’ “emotional state, gender, age, or accent” in order to recommend new music...: https://consequenceofsound.net/2021/01/spotify-patent-monitor-users-speech/

Shoshana Zuboff, “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power”

The challenges to humanity posed by the digital future, the first detailed examination of the unprecedented form of power called "surveillance capitalism," and the quest by powerful corporations to predict and control our behavior.

In this masterwork of original thinking and research, Shoshana Zuboff provides startling insights into the phenomenon that she has named surveillance capitalism. The stakes could not be higher: a global architecture of behavior modification threatens human nature in the twenty-first century just as industrial capitalism disfigured the natural world in the twentieth.


OpenAI insiders’ open letter warns of ‘serious risks’ and calls for whistleblower protections

By Samantha Murphy Kelly, CNN

Tue June 4, 2024

A group of OpenAI insiders are demanding that artificial intelligence companies be far more transparent about AI’s “serious risks” — and that they protect employees who voice concerns about the technology they’re building.

“AI companies have strong financial incentives to avoid effective oversight,” reads the open letter posted Tuesday signed by current and former employees at AI companies including OpenAI, the creator behind the viral ChatGPT tool.

They also called for AI companies to foster “a culture of open criticism” that welcomes, rather than punishes, people who speak up about their concerns, especially as the law struggles to catch up to the quickly advancing technology.

Companies have acknowledged the “serious risks” posed by AI — from manipulation to a loss of control, known as “singularity,” that could potentially result in human extinction — but they should be doing more to educate the public about risks and protective measures, the group wrote.

As the law currently stands, the AI employees said, they don’t believe AI companies will share critical information about the technology voluntarily.

It’s essential, then, for current and former employees to speak up — and for companies not to enforce “disparagement” agreements or otherwise retaliate against those who voice risk-related concerns. “Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated,” the group wrote.

Their letter comes as companies move quickly to implement generative AI tools into their products, while government regulators, companies and consumers grapple with responsible use. Meanwhile many tech experts, researchers and leaders have called for a temporary pause in the AI race, or for the government to step in and create a moratorium.

OpenAI’s response

In response to the letter, an OpenAI spokesperson told CNN it is “proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” adding that the company agrees “rigorous debate is crucial given the significance of this technology.”

OpenAI noted it has an anonymous integrity hotline and a Safety and Security Committee led by members of its board and safety leaders from the company. The company does not sell personal info, build user profiles, or use that data to target anyone or sell anything.

But Daniel Ziegler, one of the organizers behind the letter and an early machine-learning engineer who worked at OpenAI between 2018 and 2021, told CNN that it’s important to remain skeptical of the company’s commitment to transparency.

“It’s really hard to tell from the outside how seriously they’re taking their commitments for safety evaluations and figuring out societal harms, especially as there is such strong commercial pressures to move very quickly,” he said. “It’s really important to have the right culture and processes so that employees can speak out in targeted ways when they have concerns.”

He hopes more professionals in the AI industry will go public with their concerns as a result of the letter.

Meanwhile, Apple is widely expected to announce a partnership with OpenAI at its annual Worldwide Developer Conference to bring generative AI to the iPhone.

“We see generative AI as a key opportunity across our products and believe we have advantages that set us apart there,” Apple CEO Tim Cook said on the company’s most recent earnings call in early May. https://edition.cnn.com/2024/06/04/tech/openai-insiders-letter/index.html


Smart talking: are our devices threatening our privacy?

Millions of us now have virtual assistants, in our homes and our pockets. Even children’s toys are getting smart. But when we talk to them, who is listening? By James Vlahos
Tue 26 Mar 2019 06.00 
 On 21 November 2015, James Bates had three friends over to watch the Arkansas Razorbacks play the Mississippi State Bulldogs. Bates, who lived in Bentonville, Arkansas, and his friends drank beer and did vodka shots as a tight football game unfolded. After the Razorbacks lost 51–50, one of the men went home; the others went out to Bates’s hot tub and continued to drink. Bates would later say that he went to bed around 1am and that the other two men – one of whom was named Victor Collins – planned to crash at his house for the night. When Bates got up the next morning, he didn’t see either of his friends. But when he opened his back door, he saw a body floating face-down in the hot tub. It was Collins.
A grim local affair, the death of Victor Collins would never have attracted international attention if it were not for a facet of the investigation that pitted the Bentonville authorities against one of the world’s most powerful companies – Amazon. Collins’ death triggered a broad debate about privacy in the voice-computing era, a discussion that makes the big tech companies squirm.
The police, summoned by Bates the morning after the football game, became suspicious when they found signs of a struggle. Headrests and knobs from the hot tub, as well as two broken bottles, lay on the ground. Collins had a black eye and swollen lips, and the water was darkened with blood. Bates said that he didn’t know what had happened, but the police officers were dubious. On 22 February 2016 they arrested him for murder.
Searching the crime scene, investigators noticed an Amazon Echo. Since the police believed that Bates might not be telling the truth, officers wondered if the Echo might have inadvertently recorded anything revealing. In December 2015, investigators served Amazon with a search warrant that requested “electronic data in the form of audio recordings, transcribed records or other text records”.
Amazon turned over a record of transactions made via the Echo but not any audio data. “Given the important first amendment and privacy implications at stake,” an Amazon court filing stated, “the warrant should be quashed.” Bates’s attorney, Kimberly Weber, framed the argument in more colloquial terms. “I have a problem that a Christmas gift that is supposed to better your life can be used against you,” she told a reporter. “It’s almost like a police state.”
With microphone arrays that hear voices from across the room, Amazon’s devices would have been coveted by the Stasi in East Germany. The same can be said of smarthome products from Apple, Google and Microsoft, as well as the microphone-equipped AIs in all of our phones. As the writer Adam Clark Estes put it: “By buying a smart speaker, you’re effectively paying money to let a huge tech company surveil you.”
Amazon, pushing back, complains that its products are unfairly maligned. True, the devices are always listening, but by no means do they transmit everything they hear. Only when a device hears the wake word “Alexa” does it beam speech to the cloud for analysis. It is unlikely that Bates would have said something blatantly incriminating, such as: “Alexa, how do I hide a body?” But it is conceivable that the device could have captured something of interest to investigators. For instance, if anyone intentionally used the wake word to activate the Echo – for a benign request such as asking for a song to be played, say – the device might have picked up pertinent background audio, like people arguing. If Bates had activated his Echo for any request after 1am, that would undercut his account of being in bed asleep.
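The wake-word mechanism described here amounts to a local gate in front of the cloud: audio stays in a short on-device buffer and is discarded unless a keyword detector fires, at which point the buffered snippet is uploaded for speech recognition. A toy Python sketch of that control flow follows; the detector, buffer size and upload function are placeholders for illustration, not Amazon's actual implementation.
# Toy sketch of wake-word gating: audio stays in a short local buffer and is
# only sent to the cloud once a keyword detector fires. Placeholders only.
from collections import deque

BUFFER_CHUNKS = 20  # short rolling pre-roll kept only on the device

def detect_wake_word(chunk):
    # Placeholder for an on-device keyword spotter (e.g. listening for "alexa")
    return b"alexa" in chunk

def stream_to_cloud(audio):
    # Placeholder for the upload that happens only after the wake word fires
    print("uploading", len(audio), "bytes for cloud speech recognition")

def run(mic_chunks):
    buffer = deque(maxlen=BUFFER_CHUNKS)
    for chunk in mic_chunks:
        buffer.append(chunk)                   # older audio is silently overwritten
        if detect_wake_word(chunk):
            stream_to_cloud(b"".join(buffer))  # only now does audio leave the device
            buffer.clear()

run([b"background chatter", b"alexa play some music", b"more chatter"])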
In August 2016, a judge, apparently receptive to the notion that Amazon might have access to useful evidence, approved a second search warrant for police to obtain the information the company had withheld before. At this point in the standoff, an unlikely party blinked – Bates, who had pleaded not guilty. He and his attorney said they didn’t object to police getting the information they desired. Amazon complied, and if the Echo captured anything incriminating, police never revealed what it was. Instead, in December 2017, prosecutors filed a motion to dismiss the case, saying there was more than one reasonable explanation for the death of Collins. But the surveillance issue raised so dramatically by the case is unlikely to go away.
Tech companies insist they are not spying on their customers via virtual assistants and home gadgets, and that they only ever listen when expressly commanded to do so. These claims, at least as far as they can be externally verified, appear to be true. But this doesn’t mean no listening is happening, or couldn’t happen, in ways that challenge traditional notions of privacy.
There are a number of ways in which home devices could be used that challenge our ideas of privacy. One is eavesdropping to improve quality. Hello Barbie’s digital ears perk up when you press her glittering belt buckle. Saying the phrase “OK, Google” wakes up that company’s devices. Amazon’s Alexa likes to hear her name. But once listening is initiated, what happens next?
Sources at Apple, which prides itself on safeguarding privacy, say that Siri tries to satisfy as many requests as possible directly on the user’s iPhone or HomePod. If an utterance needs to be shipped off to the cloud for additional analysis, it is tagged with a coded identifier rather than a user’s actual name. Utterances are saved for six months so the speech recognition system can learn to better understand the person’s voice. After that, another copy is saved, now stripped of its identifier, for help with improving Siri for up to two years.
Most other companies do not emphasise local processing and instead always stream audio to the cloud, where more powerful computational resources await. Computers then attempt to divine the user’s intent and fulfil it. After that happens the companies could then erase the request and the system’s response, but they typically don’t. The reason is data. In conversational AI, the more data you have, the better.
Virtually all other botmakers, from hobbyists to the AI wizards at big tech companies, review at least some of the transcripts of people’s interactions with their creations. The goal is to see what went well, what needs to be improved and what users are interested in discussing or accomplishing. The review process takes many forms.
The chat logs may be anonymised so the reviewer doesn’t see the names of individual users. Or reviewers may see only summarised data. For instance, they might learn that a conversation frequently dead-ends after a particular bot utterance, which lets them know the statement should be adjusted. Designers at Microsoft and Google and other companies also receive reports detailing the most popular user queries so they know what content to add.
But the review process can also be shockingly intimate. In the offices of one conversational-computing company I visited, employees showed me how they received daily emails listing recent interchanges between people and one of the company’s chat apps.
The employees opened one such email and clicked on a play icon.
In clear digital audio, I heard the recorded voice of a child who was free-associating. “I am just a boy,” he said. “I have a green dinosaur shirt ... and, uh, giant feet ... lots of toys in my house and a chair ... My mom is only a girl, and I know my mom, she can do everything she wants to do. She always goes to work when I get up but at night she comes home.”
There was nothing untoward in the recording. But as I listened to it, I had the unsettling feeling of hovering invisibly in the little boy’s room. The experience made me realise that the presumption of total anonymity when speaking to a virtual assistant on a phone or smarthome device – there is only some computer on the other end, right? – is not guaranteed. People might be listening, taking notes, learning.
Eavesdropping may also occur by accident. On 4 October 2017, Google invited journalists to a product unveiling at the SFJazz Center in San Francisco. Isabelle Olsson, a designer, got the job of announcing the new Google Home Mini, a bagel-size device that is the company’s answer to the Amazon Echo Dot. “The home is a special intimate place, and people are very selective about what they welcome into it,” Olsson said. After the presentation, Google gave out Minis as swag to the attendees. One of them was a writer named Artem Russakovskii, and he could be forgiven for later thinking that he hadn’t been selective enough about what he welcomed into his home.
After having the Mini for a couple of days, Russakovskii went online to check his voice search activity. He was shocked to see that thousands of short recordings had already been logged – recordings that never should have been made. As he would later write for the Android Police website: “My Google Home Mini was inadvertently spying on me 24/7 due to a hardware flaw.” He complained to Google and within five hours the company had sent a representative to swap out his malfunctioning device for two replacement units.
Like other similar devices, the Mini could be turned on using the “OK, Google” wake phrase or by simply hitting a button on top of the unit. The problem was that the device was registering “phantom touch events”, Russakovskii wrote. Google would later say the problem affected only a small number of units released at promotional events. The problem was fixed via a software update. To further dispel fears, the company announced that it was permanently disabling the touch feature on all Minis.
This response, however, wasn’t enough to satisfy the Electronic Privacy Information Center, an advocacy group. In a letter dated 13 October 2017, it urged the Consumer Product Safety Commission to recall the Mini because it “allowed Google to intercept and record private conversations in homes without the knowledge or consent of the consumer”.
No information has emerged to suggest that Google was spying on purpose. Nonetheless, if a company the calibre of Google can make such a blunder, then other companies might easily make similar mistakes as voice interfaces proliferate.
If you want to know whether government agents or hackers might be able to hear what you say to a voice device, consider what happens to your words after you have spoken. Privacy-minded Apple retains voice queries but decouples them from your name or user ID. The company tags them with a random string of numbers unique to each user. Then, after six months, even the connection between the utterance and the numerical identifier is eliminated.
Google and Amazon, meanwhile, retain a link between the speaker and what was said. Any user can log into their Google or Amazon account and see a listing of all of the queries. I tried this on Google, and I could listen to any given recording. For instance, after clicking on a play icon from 9.34am on 29 August 2017, I heard myself ask: “How do I say ‘pencil sharpener’ in German?” Voice records can be erased, but the onus is on the user. As a Google user policy statement puts it: “Conversation history with Google Home and the Google Assistant is saved until you choose to delete it.”
Is this a new problem in terms of privacy? Maybe not. Google and other search engines similarly retain all of your typed-in web queries unless you delete them. So you could argue that voice archiving is simply more of the same. But to some people, being recorded feels much more invasive. Plus, there is the issue of by-catch: recordings often pick up other people – your spouse, friends, kids – talking in the background.
For law enforcement agencies to obtain recordings or data that are stored only locally (ie on your phone, computer or smarthome device), they need to obtain a search warrant. But privacy protection is considerably weaker after your voice has been transmitted to the cloud. Joel Reidenberg, director of the Center on Law and Information Policy at Fordham Law School in New York, says “the legal standard of ‘reasonable expectation of privacy’ is eviscerated. Under the fourth amendment, if you have installed a device that’s listening and is transmitting to a third party, then you’ve waived your privacy rights.” According to a Google transparency report, US government agencies requested data on more than 170,000 user accounts in 2017. (The report does not specify how many of these requests, if any, were for voice data versus logs of web searches or other information.)
If you aren’t doing anything illegal in your home – or aren’t worried about being falsely accused of doing so – perhaps you don’t worry that the government could come calling for your voice data. But there is another, more broadly applicable risk when companies warehouse all your recordings. With your account login and password, a hacker could hear all the requests you made in the privacy of your home.
Technology companies claim they don’t eavesdrop nefariously, but hackers have no such aversion. Companies employ password protection and data encryption to combat spying, but testing by security researchers as well as breaches by hackers demonstrate that these protections are far from foolproof.
Consider the CloudPets line of stuffed animals, which included a kitten, an elephant, a unicorn and a teddy bear. If a child squeezed one of these animals, he or she could record a short message that was beamed via Bluetooth to a nearby smartphone. From there, the message was sent to a distant parent or other relative, whether she was working in the city or fighting a war on the other side of the world. The parent, in turn, could record a message on her phone and send it to the stuffed animal for playback.
It was a sweet scenario. The problem was that CloudPets placed the credentials for more than 800,000 customers, along with 2m recorded messages between kids and adults, in an easily discoverable online database. Hackers harvested much of this data in early 2017 and even demanded ransom from the company before they would release their ill-gotten treasure.
Paul Stone, a security researcher, discovered another problem: the Bluetooth pairing between CloudPets animals and the companion smartphone app didn’t use encryption or require authentication. After purchasing a stuffed unicorn for testing, he hacked it.
In a demonstration video he posted online, Stone got the unicorn to say: “Exterminate, annihilate!” He triggered the microphone to record, turning the plush toy into a spy. “Bluetooth LE typically has a range of about 10-30 metres,” Stone wrote on his blog, “so someone standing outside your house could easily connect to the toy, upload audio recordings, and receive audio from the microphone.”
Plush toys may be, well, soft targets for hackers, but the vulnerabilities they exhibit are sometimes found in voice-enabled, internet-connected devices for adults. “It’s not that the risks are particularly any different to the ones you and I face every day with the volumes of data we produce and place online,” says security researcher Troy Hunt, who documented the CloudPets breach. “It’s that our tolerances are very different when kids are involved.”
Other researchers have identified more technologically sophisticated ways in which privacy might be violated. Imagine someone is trying to take control of your phone or other voice AI device simply by talking to it. The scheme would be foiled if you heard them doing so. But what if the attack was inaudible? That is what a team of researchers at China’s Zhejiang University wanted to investigate for a paper that was published in 2017. In the so-called DolphinAttack scenario that the researchers devised, the hacker would play unauthorised commands through a speaker that he planted in the victim’s office or home. Alternatively, the hacker could tote a portable speaker while strolling by the victim. The trick was that those commands would be played in the ultrasonic range above 20kHz – inaudible to human ears but, through audio manipulation by the researchers, easily perceptible to digital ones.
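The inaudibility trick reported in the DolphinAttack paper comes from amplitude-modulating a recorded voice command onto an ultrasonic carrier above roughly 20kHz: human ears hear nothing, but non-linearities in the device's microphone demodulate a baseband copy of the command. A simplified Python sketch of the modulation step, with illustrative parameters (the real attack also requires a suitable ultrasonic speaker and tuning to the target microphone):
# Simplified illustration of the DolphinAttack modulation step. Parameters are
# illustrative only; this is not the researchers' actual toolchain.
import numpy as np

fs = 192_000                   # sample rate high enough to represent ultrasound
fc = 25_000                    # ultrasonic carrier, above human hearing (~20 kHz)
t = np.arange(0, 1.0, 1 / fs)  # one second of signal

# Stand-in for a recorded voice command (a few low-frequency tones)
command = 0.3 * np.sin(2 * np.pi * 300 * t) + 0.2 * np.sin(2 * np.pi * 800 * t)

# Amplitude modulation: the command rides on the carrier's envelope, inaudibly
modulated = (1.0 + command) * np.sin(2 * np.pi * fc * t)

# A square-law (non-linear) microphone response recovers a baseband copy of the
# command, alongside components near 2*fc that the device's filtering removes
demodulated = modulated ** 2

print(modulated.shape, demodulated.shape)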
In their laboratory tests, the scientists successfully attacked the voice interfaces of Amazon, Apple, Google, Microsoft and Samsung. They tricked those voice AIs into visiting malicious websites, sending phoney text messages and emails, and dimming the screen and lowering the volume to help conceal the attack. The researchers got the devices to place illegitimate phone and video calls, meaning that a hacker could listen to and even see what was happening around a victim. They even hacked their way into the navigation system of an Audi SUV.
Most people don’t want hackers, police officers or corporations listening in on them. But there is a final set of scenarios that confuses the surveillance issue. In reviewing chat logs for quality control in the manner described above, conversation designers might hear things that almost beg them to take action.
Take the creators of Mattel’s Hello Barbie. In writing the doll’s dialogue, they struggled with a disturbing set of hypothetical scenarios. What if a child told the doll “My daddy hits my mom”? Or “My uncle has been touching me in a funny place”? The writers felt it would be a moral failure to ignore such admissions. But if they reported what they heard to the police, they would be assuming the role of Big Brother. Feeling uneasy, they decided Barbie’s response should be something like: “That sounds like something you should tell to a grownup whom you trust.”
Mattel, however, seems willing to go further. In an FAQ about Hello Barbie, the company wrote that conversations between children and the doll are not monitored in real time. But afterward, the dialogues might occasionally be reviewed to aid product testing and improvement. “If in connection with such a review we come across a conversation that raises concern about the safety of a child or others,” the FAQ stated, “we will cooperate with law enforcement agencies and legal processes as required to do so or as we deem appropriate on a case-by-case basis.”
The conundrum similarly challenges the big tech companies.
Because their virtual assistants handle millions of voice queries per week, they don’t have employees monitoring utterances on a user-by-user basis. But the companies do train their systems to catch certain highly sensitive things people might say. For instance, I tested Siri by saying: “I want to kill myself.” She replied: “If you are thinking about suicide, you may want to speak with someone at the National Suicide Prevention Lifeline.” Siri supplied the telephone number and offered to place the call.
Thanks, Siri. But the problem with letting virtual assistants look out for us is that the role suggests major responsibility with ill-defined limits. If you tell Siri that you are drunk, she sometimes offers to call you a cab. But if she doesn’t, and you get into a car accident, is Apple somehow responsible for what Siri failed to say?
When is a listening device expected to take action? If Alexa overhears someone screaming “Help, help, he’s trying to kill me!”, should the AI automatically call the police?
The preceding scenarios are not far-fetched to analyst Robert Harris, a communication industry consultant. He argues that voice devices are creating a snarl of new ethical and legal issues. “Will personal assistants be responsible for the ... knowledge that they have?” he says. “A feature like that sometime in the future could become a liability.”
The uses of AI surveillance make clear that you should scrutinise each one of these technologies you allow into your life. Read up on just how and when the digital ears are turned on. Find out what voice data is retained and how to delete it if you desire. And if in doubt – especially with applications made by companies whose privacy policies can’t be easily understood – pull the plug.
This is an edited extract from Talk to Me: Apple, Google, Amazon and the Race for Voice-Controlled AI by James Vlahos, published on 28 March by Penguin Random House.

Technology is undermining democracy. Who will save it?

Fast Company kicks off our series “Hacking Democracy,” which will examine the insidious impact of technology on democracy—and how companies, researchers, and everyday users are fighting back…:

AI Act: a step closer to the first rules on Artificial Intelligence

11-05-2023

 Once approved, they will be the world’s first rules on Artificial Intelligence

  • MEPs include bans on biometric surveillance, emotion recognition, predictive policing AI systems
  • Tailor-made regimes for general-purpose AI and foundation models like GPT
  • The right to make complaints about AI systems

To ensure a human-centric and ethical development of Artificial Intelligence (AI) in Europe, MEPs endorsed new transparency and risk-management rules for AI systems…: https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence

 

Why We're Worried about Generative AI

By Sophie Bushwick, Tulika Bose on May 19, 2023

From the technology upsetting jobs and causing intellectual property issues to models making up fake answers to questions, here’s why we’re concerned about generative AI.

Full Transcript…: https://www.scientificamerican.com/podcast/episode/why-were-worried-about-generative-ai/

An Action Plan to increase the safety and security of advanced AI

In October 2022, a month before ChatGPT was released, the U.S. State Department commissioned an assessment of proliferation and security risk from weaponized and misaligned AI.
In February 2024, Gladstone completed that assessment. It includes an analysis of catastrophic AI risks, and a first-of-its-kind, government-wide Action Plan for what we can do about them.

https://www.gladstone.ai/action-plan#action-plan-overview

Artificial Intelligence Act: MEPs adopt landmark law

https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law

  11 January 2021

Facial recognition technology can expose political orientation from naturalistic facial images

Michal Kosinski 

Abstract

Ubiquitous facial recognition technology can expose individuals’ political orientation, as faces of liberals and conservatives consistently differ. A facial recognition algorithm was applied to naturalistic images of 1,085,795 individuals to predict their political orientation by comparing their similarity to faces of liberal and conservative others. Political orientation was correctly classified in 72% of liberal–conservative face pairs, remarkably better than chance (50%), human accuracy (55%), or one afforded by a 100-item personality questionnaire (66%). Accuracy was similar across countries (the U.S., Canada, and the UK), environments (Facebook and dating websites), and when comparing faces across samples. Accuracy remained high (69%) even when controlling for age, gender, and ethnicity. Given the widespread use of facial recognition, our findings have critical implications for the protection of privacy and civil liberties….: https://www.nature.com/articles/s41598-020-79310-1
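
The comparison logic the abstract describes (label a face by whichever group's average descriptor it is more similar to) can be sketched in a few lines of Python. The random vectors below stand in for descriptors produced by a real facial-recognition model; this is an illustration of the nearest-centroid idea, not the study's actual pipeline.

# Sketch of nearest-centroid classification of face descriptors, as described in
# the abstract. Random vectors stand in for real face embeddings.
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Stand-ins for descriptors extracted by a facial-recognition model (512-dim)
liberal_faces = rng.normal(0.1, 1.0, size=(1000, 512))
conservative_faces = rng.normal(-0.1, 1.0, size=(1000, 512))

# Average descriptor ("centroid") for each group
liberal_centroid = liberal_faces.mean(axis=0)
conservative_centroid = conservative_faces.mean(axis=0)

def predict(face_descriptor):
    # Label the face by whichever group's centroid it is more similar to
    if cosine(face_descriptor, liberal_centroid) > cosine(face_descriptor, conservative_centroid):
        return "liberal"
    return "conservative"

print(predict(rng.normal(0.1, 1.0, size=512)))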

 



China’s hi-tech war on its Muslim minority

 Smartphones and the internet gave the Uighurs a sense of their own identity – but now the Chinese state is using technology to strip them of it.
By Darren Byler Thu 11 Apr 2019 06.00 BST
In mid-2017, Alim, a Uighur man in his 20s, returned to China from studying abroad. As soon as he landed back in the country, he was pulled off the plane by police officers. He was told his trip abroad meant that he was now under suspicion of being “unsafe”. The police administered what they call a “health check”, which involved collecting several types of biometric data, including DNA, blood type, fingerprints, voice recordings and face scans – a process that all adults in the Uighur autonomous region of Xinjiang, in north-west China, are expected to undergo.
After his “health check”, Alim was transported to one of the hundreds of detention centres that dot north-west China. These centres have become an important part of what Xi Jinping’s government calls the “people’s war on terror”, a campaign launched in 2014, which focuses on Xinjiang, a region with a population of roughly 25 million people, just under half of whom are Uighur Muslims. As part of this campaign, the Chinese government has come to treat almost all expressions of Uighur Islamic faith as signs of potential religious extremism and ethnic separatism. Since 2017 alone, more than 1 million Turkic Muslims, including Uighurs, Kazakhs, Kyrgyz and others, have moved through detention centres.
At the detention centre, Alim was deprived of sleep and food, and subjected to hours of interrogation and verbal abuse. “I was so weakened through this process that at one point during my interrogation I began to laugh hysterically,” he said when we spoke. Other detainees report being placed in stress positions, tortured with electric shocks, and kept in isolation for long periods. When he wasn’t being interrogated, Alim was kept in a tiny cell with 20 other Uighur men.
Many of the detainees had been arrested for having supposedly committed religious and political transgressions through social media apps on their smartphones, which Uighurs are required to produce at checkpoints around Xinjiang. Although there was often no real evidence of a crime according to any legal standard, the digital footprint of unauthorised Islamic practice, or even a connection to someone who had committed one of these vague violations, was enough to land Uighurs in a detention centre. The mere fact of having a family member abroad, or of travelling outside China, as Alim had, often resulted in detention.
Most Uighurs in the detention centres are on their way to serving long prison sentences, or to indefinite captivity in a growing network of internment camps, which the Chinese state has described as facilities for “transformation through education”. These camps, which function as medium-security prisons and, in some cases, forced-labour factories, attempt to train Uighurs to disavow their Islamic identity and embrace the secular principles of the Chinese state. They forbid the use of the Uighur language and instead offer drills in Mandarin, the language of China’s Han majority. Only a handful of detainees who are not Chinese citizens have been fully released from this “re-education” system.
Alim was relatively lucky: he was let out after only two weeks. (He later learned that a relative had intervened in his case.) But a few weeks later, when he went to meet a friend for lunch at a mall in his home city, he had another shock. At a security checkpoint at the entrance to the mall, Alim scanned the photo on his government-issued identification card, and presented himself before a security camera equipped with facial recognition software. An alarm sounded. The security guards let him pass, but within a few minutes he was approached by police officers, who then took him into custody.
Alim learned that he had been placed on a blacklist maintained by the Integrated Joint Operations Platform (Ijop), a regional data system that uses AI to monitor the countless checkpoints in and around Xinjiang’s cities. Any attempt to enter public institutions such as hospitals, banks, parks or shopping centres, or to cross beyond the boundaries of his local police precinct, would trigger the Ijop to alert police. The system had profiled him and predicted that he was a potential terrorist.
There was little Alim could do. Officers told him he should “just stay at home” if he wanted to avoid detention again. Although he was officially free, Alim’s biometrics and his digital history were being used to lock him in place. “I’m so angry and afraid at the same time,” he told me. He was haunted by his data.
China’s version of the “war on terror” depends less on drones and strikes by elite military units than facial recognition software and machine learning algorithms. Its targets are not foreigners but domestic minority populations who appear to threaten the Chinese Communist party’s authoritarian rule. In Xinjiang, the web of surveillance reaches from cameras on buildings, to the chips inside mobile devices, to Uighurs’ very physiognomy. Face scanners and biometric checkpoints track their movements almost everywhere.
Other programmes scan Uighurs’ digital communications, looking for suspect patterns, and flagging religious speech or even a lack of fervour in using Mandarin. Deep-learning systems search in real time through video feeds capturing millions of faces, building an archive that can supposedly help identify suspicious behaviour in order to predict who will become an “unsafe” actor. Actions that can trigger these “computer vision” technologies include dressing in an Islamic fashion and failing to attend nationalistic flag-raising ceremonies. All of these technological systems are brought together in the Ijop, which is constantly learning from the behaviours of the Uighurs it watches.
In her recent study on the rise of “surveillance capitalism”, the Harvard scholar Shoshana Zuboff notes that consumers are constantly generating valuable data that can be turned into profitable predictions about our preferences and future behaviours. In the Uighur region, this logic has been taken to an extreme. The power – and potential profitability – of the predictive technologies that purport to keep Xinjiang safe derive from their unfettered access to Uighurs’ digital lives and physical movements. From the perspective of China’s security-industrial establishment, the principal purpose of Uighur life is to generate data, which can then be used to further refine these systems of surveillance and control.
Controlling the Uighurs has also become a test case for marketing Chinese technological prowess around the world. A hundred government agencies and companies from two dozen countries, including the US, France, Israel and the Philippines, now participate in the highly influential annual China-Eurasia Security Expo in Urumqi, the capital of the Uighur region. The ethos at the expo, and in the Chinese techno-security industry as a whole, is that Muslim populations need to be managed and made productive. Over the past five years, the people’s war on terror has allowed a number of Chinese tech startups to achieve unprecedented levels of growth. In just the last two years, the state has invested an estimated $7.2bn in techno-security in Xinjiang. As a spokesperson for one of these tech startups put it, 60% of the world’s Muslim-majority nations are part of China’s premier international development project, the Belt and Road Initiative, so there is “unlimited market potential” for the type of population-control technology they are developing in Xinjiang.
Some of the technologies pioneered in Xinjiang have already found customers in authoritarian states as far away as sub-Saharan Africa. In 2018, CloudWalk, a Guangzhou-based tech startup that has received more than $301m in state funding, finalised an agreement with Zimbabwe’s government to build a national “mass facial recognition programme” in order to address “social security issues”. (CloudWalk has not revealed how much the agreement is worth.) Freedom of movement through airports, railways and bus stations throughout Zimbabwe will now be managed through a facial database integrated with other kinds of biometric data. In effect, the Uighur homeland has become an incubator for China’s “terror capitalism”.
There was a time when the internet seemed to promise a brighter future for China’s Uighurs. When I arrived in Urumqi in 2011 to conduct my first year of ethnographic fieldwork, the region had just been wired with 3G mobile data networks. When I returned in 2014, it seemed as though nearly all adults in the city had a smartphone. Suddenly, Uighur cultural figures who the government subsequently labelled “unsafe”, such as the pop star Ablajan, developed followings that numbered in the millions.
Most unsettling, from the perspective of the state, unsanctioned Uighur religious teachers based in China and Turkey also developed a deep influence. Since Mao’s Religious Reform Movement of 1958, the state had limited Uighurs’ access to mosques, Islamic funerary practices, religious knowledge and other Muslim communities. There were virtually no Islamic schools outside of government control, no imams who were not approved by the state. Children under the age of 18 were forbidden to enter mosques. But as social media spread through the Uighur homeland over the course of the last decade, it opened up a virtual space to explore what it meant to be Muslim. It reinforced a sense that the first sources of Uighur identity were their faith and language, their claim to a native way of life, and their membership in a Turkic Muslim community stretching from Urumqi to Istanbul. Rather than being seen as perpetually lacking Han appearance and culture, they could find in their renewed Turkic and Islamic values a cosmopolitan and contemporary identity. Food, movies, music and clothing, imported from Turkey and Dubai, became markers of distinction. Women began to veil themselves. Men began to pray five times a day. They stopped drinking and smoking. Some began to view music, dancing and state television as influences to be avoided.
The Han officials I met during my fieldwork referred to this rise in technologically disseminated religious piety as the “Talibanisation” of the Uighur population. Along with Han settlers, they felt increasingly unsafe travelling to the region’s Uighur-majority areas, and uneasy in the presence of pious Turkic Muslims. The officials cited incidents that carried the hallmarks of religiously motivated violence – a knife attack carried out by a group of Uighurs at a train station in Kunming; trucks driven by Uighurs through crowds in Beijing and Urumqi – as a sign that the entire Uighur population was falling under the sway of terrorist ideologies.
But, as dangerous as the rise of Uighur social media seemed to Han officials, it also presented them with a new means of control. On 5 July 2009, Uighur high school and college students had used Facebook and Uighur-language blogs to organise a protest demanding justice for Uighur workers who were killed by their Han colleagues at a toy factory in eastern China. Thousands of Uighurs took to the streets of Urumqi, waving Chinese flags and demanding that the government respond to the deaths of their comrades. When they were violently confronted by armed police, many of the Uighurs responded by turning over buses and beating Han bystanders. In the end, more than 190 people were reported killed, most of them Han. Over the weeks that followed, hundreds, perhaps thousands, of young Uighurs were disappeared by the police. The internet was shut off in the region for nearly 10 months, and Facebook and Twitter were blocked across the country.
Soon after the internet came back online in 2010 – with the notable absence of Facebook, Twitter and other non-Chinese social media applications – state security, higher education and private industry began to collaborate on breaking Uighur internet autonomy. Much of the Uighur-language internet was transformed from a virtual free society into a zone where government technology could learn to predict criminal behaviour. Broadly defined new anti-terrorism laws, first drafted in 2014, turned nearly all crimes committed by Uighurs, from stealing a Han neighbour’s sheep to protesting against land seizures, into forms of terrorism. Religious piety, which the new laws referred to as “extremism”, was conflated with religious violence.
The Xinjiang security industry mushroomed from a handful of private firms to approximately 1,400 companies employing tens of thousands of workers, ranging from low-level Uighur security guards to Han camera and telecommunications technicians to coders and designers. The Xi administration declared a state of emergency in the region, the people’s war on terror began, and Islamophobia was institutionalised.
In 2017, after three years of operating a “hard strike” policy that turned Xinjiang into what many considered an open-air prison – which involved instituting a passbook system that restricted Uighurs’ internal travel, and deploying hundreds of thousands of security forces to monitor the families of those who had been disappeared or killed by the state – the government turned to a fresh strategy. A new regional party secretary named Chen Quanguo introduced a policy of “transforming” Uighurs.
Local authorities began to describe the “three evil forces” of “religious extremism, ethnic separatism and violent terrorism” as three interrelated “ideological cancers”. Because the digital sphere had allowed unauthorised forms of Islam to flourish, officials called for AI-enabled technology to crack down on these evils. Party leadership began to incentivise Chinese tech firms to develop technologies that could help the government control Uighur society. Billions of dollars in government contracts were awarded to build “smart” security systems across the Uighur region.
The turn toward “transformation” coincided with breakthroughs in the AI-assisted computer systems that the public security bureau rolled out in 2017 and brought together in the Ijop. The Chinese startup Meiya Pico began to market software to local and regional governments that was developed using state-supported research and could detect Uighur language text and Islamic symbols embedded in images. The company also developed programmes for automating the transcription and translation of Uighur voice messaging. The company Hikvision advertised tools that could automate the identification of Uighur faces based on physiological phenotypes. Other companies devised programmes that would perform automated searches of Uighurs’ internet activity and then compare the data it gleaned to school, job, banking, medical and biometric records, looking for predictors of aberrant behaviour.
The rollout of this new technology required a great deal of manpower and technical training. More than 100,000 new police officers were hired. One of their jobs was to conduct the sort of “health check” Alim underwent, creating biometric records for almost every human being in the region. Face signatures were created by scanning individuals from a variety of different angles as they made different facial expressions; the result was a high-definition portfolio of personal emotions. All Uighurs were required to install nanny apps, which monitored everything they said, read and wrote, and everyone they connected with, on their smartphones.
Higher-level police officers, most of whom were Han, were given the job of conducting qualitative assessments of the Muslim population as a whole – providing more complex, interview-based survey data for Ijop’s deep-learning system. In face-to-face interviews, these neighbourhood police officers assessed the more than 14 million Muslim-minority people in Xinjiang and determined if they should be given the rating of “safe”, “average”, or “unsafe”. They determined this by categorising the person using 10 or more categories, including whether or not the person was Uighur, whether they prayed regularly, had an immediate relative living abroad, or had taught their children about Islam in their home. Those who were determined to be “unsafe” were then sent to the detention centres, where they were interrogated and asked to confess their crimes and name others who were also “unsafe”. In this manner, the officers determined which individuals should be slotted for the “transformation through education” internment camps.
Many Muslims who passed their first assessment were subsequently detained because someone else named them as “unsafe”. In thousands of cases, years of WeChat history was used as evidence of the need for Uighur suspects to be “transformed”. The state also assigned an additional 1.1 million Han and Uighur “big brothers and sisters” to conduct week-long assessments on Uighur families as uninvited guests in Uighur homes. Over the course of these stays, the relatives tested the “safe” qualities of those Uighurs who remained outside of the camp system by forcing them to participate in activities forbidden by certain forms of Islamic piety, such as drinking, smoking and dancing. They looked for any sign of resentment or any lack of enthusiasm in Chinese patriotic activities. They gave the children candy so that they would tell them the truth about what their parents thought.
All of this information was entered into databases and then fed back into the Ijop. The government’s hope is that the Ijop will, over time, run with less and less human guidance. Even now, it is always running in the background of Uighur life, always learning.


In the tech community in the US, there is some scepticism regarding the viability of AI-assisted computer vision technology in China. Many experts I’ve spoken to from the AI policy world point to an article by the scholar Jathan Sadowski called “Potemkin AI”, which highlights the failures of Chinese security technology to deliver what it promises. They frequently bring up the way a system in Shenzhen meant to identify the faces of jaywalkers and flash them on giant screens next to busy intersections cannot keep up with the faces of all the jaywalkers; as a result, human workers sometimes have to manually gather the data used for public shaming. They point out that Chinese tech firms and government agencies have hired hundreds of thousands of low-paid police officers to monitor internet traffic and watch banks of video monitors. As with the theatre of airport security rituals in the US, many of these experts argue that it is the threat of surveillance, rather than the surveillance itself, that causes people to modify their behaviour.
Yet while there is a good deal of evidence to support this scepticism, a notable rise in the automated detection of internet-based Islamic activity, which has resulted in the detention of hundreds of thousands of Uighurs, also points to the real effects of the implementation of AI-assisted surveillance and policing in Xinjiang. Even western experts at Google and elsewhere admit that Chinese tech companies now lead the world in these computer-vision technologies, due to the way the state funds Chinese companies to collect, use and report on the personal data of hundreds of millions of users across China.
The Han officials I spoke with during my fieldwork in Xinjiang often refused to acknowledge the way disappearances, frequent police shootings of young Uighur men, and state seizures of Uighur land might have motivated earlier periods of Uighur resistance. They did not see correlations between limits on Uighur religious education, restrictions on Uighur travel and widespread job discrimination on the one hand, and the rise in Uighur desires for freedom, justice and religiosity on the other. Because of the crackdown, Han officials have seen a profound diminishment of Islamic belief and political resistance in Uighur social life. They’re proud of the fervour with which Uighurs are learning the “common language” of the country, abandoning Islamic holy days and embracing Han cultural values. From their perspective, the implementation of the new security systems has been a monumental success…:

Hilary Osborne, Sam Cutler
Chinese authorities are secretly installing their anti-Uyghur surveillance app on the phones of tourists to Xinjiang province

Back in 2017, Chinese authorities in Xinjiang began stopping members of the Uyghur ethnic minority and forcing them to install spyware on their phones. It marked an intensification of the country's crackdown on Uyghurs and other ethnic and religious minorities, which acquired a new technological fervor: next came the nonconsensual collection of the DNA of every person in Xinjiang, then the creation of torture camps designed to brainwash Uyghurs out of their Islamic faith, and then a full-blown surveillance smart-city rollout that turned the cities of the region into open-air prisons.
Throughout the intensification of the racist war on Uyghurs, the cornerstone remained mobile surveillance, which fed data on every person's every action to the Integrated Joint Operations Platform (IJOP), which also spied on police and government officials, enforcing legal harassment quotas. Though this app was sporadically installed on foreigners' phones, these seemed to be isolated incidents.
Now, though, the police in the region seem to have adopted a blanket policy of installing surveillance backdoors on the mobile devices of visitors to the region who use the Silk Road border crossing at Irkeshtam, whose phones have to be surrendered for an out-of-sight "inspection" at the borders to Xinjiang. There is no indication that these apps stop sending your personal information (including the contents of emails and texts) to Chinese authorities after you leave the region.
About 100 million people visit Xinjiang every year. As with other countries, Chinese authorities have a history of using disfavored minorities to try out digital persecution tools, finding the rough edges and normalizing the tools' use until they are ready to be used on more privileged groups, so Xinjiang can be seen as a field-trial for measures that will be visited upon the rest of China in due time -- and also exported to Chinese Belt-and-Road client-states.
Analysis by the Guardian, academics and cybersecurity experts suggests the app, designed by a Chinese company, searches Android phones against a huge list of content that the authorities view as problematic.
This includes a variety of terms associated with Islamist extremism, including Inspire, the English-language magazine produced by al-Qaida in the Arabian Peninsula, and various weapons operation manuals.
However, the surveillance app also searches for information on a range of other material – from fasting during Ramadan to literature by the Dalai Lama, and music by a Japanese metal band called Unholy Grave.
Another file on the list is a self-help manual by the American writer Robert Greene called The 33 Strategies of War.
https://boingboing.net/2019/07/02/irkeshtam-malware.html
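
The reporting above describes the app sweeping a phone's storage and matching what it finds against a prepared list of flagged material. As a minimal sketch of how such blocklist matching is commonly implemented, in Python, assuming (purely for illustration, since the article gives no technical detail) MD5 file hashing and a placeholder hash list:

    import hashlib
    from pathlib import Path

    # Hypothetical blocklist of file hashes; the real list and its format are not public.
    BLOCKLIST = {
        "d41d8cd98f00b204e9800998ecf8427e",  # placeholder value only
    }

    def md5_of(path: Path, chunk_size: int = 8192) -> str:
        """Compute the MD5 digest of a file without loading it into memory at once."""
        digest = hashlib.md5()
        with path.open("rb") as handle:
            for chunk in iter(lambda: handle.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def scan(root: Path) -> list[Path]:
        """Return every file under `root` whose hash appears on the blocklist."""
        return [p for p in root.rglob("*") if p.is_file() and md5_of(p) in BLOCKLIST]

    for match in scan(Path("/sdcard")):  # assumed scan root for an Android device
        print("flagged:", match)

Whatever the real app collects and uploads beyond such matching is outside this sketch; the point is only that, once a device is in hand, comparing its contents against a list of hashes is technically trivial.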


Blacklists and redlists: How China’s Social Credit System actually works
OCT 23, 2018
When a young mother from Chengdu wanted to return home from a visit to Beijing in May 2016, the only option she had was to travel for 20 hours on a rickety train to complete the 1,800-kilometer journey.
The woman, who told reporters her surname was Wei, had been put on a government blacklist that prevented her from purchasing certain items and services that required identification verification—including tickets for air and high-speed rail travel.
Wei, who had divorced a year earlier, had become entangled in a legal dispute with her ex-husband who, unbeknownst to her, had filed a suit against her over visitation rights to their son.
Much has been written about China’s emerging tools for social control. But few topics have garnered as much attention as the country’s nascent Social Credit System, a framework to monitor and manipulate citizen behavior using a dichotomy of punishments and rewards.
The idea is simple: By keeping and aggregating records throughout the government’s various ministries and departments, Chinese officials can gain insight into how people behave and develop ways to control them.
The goal, writes Rogier Creemers, a postdoctoral scholar specializing in the law and governance of China at Leiden University in The Netherlands, is “cybernetic” behavioral control, allowing individuals to be monitored and immediately confronted with the consequences of their actions. In so doing, authorities can enhance the country’s expanding surveillance apparatus.
Some draw comparisons to the British/US science fiction television series Black Mirror and its speculative vision of the future. Others see parallels with dystopian societies penned by 20th-century writers such as George Orwell. In nearly all cases, the labels of the Social Credit System have been misappropriated.
Despite its name, it isn’t a single system, and it’s not monolithic, as many reports claim. Not every one of the country’s 1.4 billion citizens is being rated on a three-digit scale. Instead, it’s a complex ecosystem containing numerous subsystems, each at various levels of development and affecting different people.
Blacklists—and “redlists”—form the backbone of the Social Credit System, not a much-debated “social credit score.” Blacklists punish negative behavior, while redlists reward positive behavior. According to the planning outline released by the State Council, China’s cabinet, in mid-2014, the system’s objective is to encourage individuals to be trustworthy under the law and dissuade them from breaking trust, in order to promote a “sincerity culture.”
Even so, an intricate web of social credit systems is coming to China—only perhaps not in the way, or at the speed, that’s generally expected. Many obstacles curb the implementation of a fully-fledged national system, including inadequate technology, insular mindsets among government ministries that jealously guard their data, and a growing awareness of the importance of privacy among China’s educated urban class.
Early experiments
The concept of a system of social credit first emerged in 1999 when officials aimed to strengthen trust in the country’s emerging market economy. However, the focus quickly shifted from building financial creditworthiness to encompass the moral actions of the country’s enterprises, officials, judiciary, and citizens.
More recently, in 2010, Suining County, in eastern China’s Jiangsu Province, began experimenting with a system to rate its citizens. The system was established to quantify individuals’ behavior: points could be deducted for breaking laws, but also for deviating from social norms and political positions. Residents were initially awarded 1,000 points. Running a red light, driving while drunk, bribing a public official, or failing to support elderly family members resulted in a 50-point deduction.
The total would then be used to assign an A to D rating. A-ratings required more than 970 points, while those with fewer than 599 points were given D-ratings. Lower-rated citizens had a harder time accessing social welfare and government housing. More than half of an individual’s points related to social management.
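
As a toy illustration of the arithmetic the report describes (the 1,000-point starting balance, the 50-point deductions, and the A and D thresholds come from the article; how the middle bands split the remaining range is an assumption made for the example), in Python:

    # Toy sketch of the Suining-style point system described above.
    DEDUCTION = 50  # per listed infraction (red light, drunk driving, bribery, neglecting elderly relatives)

    def score(num_infractions: int, start: int = 1000) -> int:
        """Start each resident at 1,000 points and deduct 50 per infraction."""
        return start - DEDUCTION * num_infractions

    def rating(points: int) -> str:
        """A above 970 and D below 599 per the report; the B/C boundary is assumed."""
        if points > 970:
            return "A"
        if points < 599:
            return "D"
        return "B" if points >= 850 else "C"  # assumed split of the middle range

    print(rating(score(1)))  # one infraction -> 950 points -> "B" under the assumed split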
Residents and the media lambasted the system, saying the government had no right to rate the country’s citizens, let alone use public services as a means of punishment and reward. To make matters worse, it was also compared to the “good citizen” identity cards that were issued by the Japanese to Chinese citizens as a form of social management during World War II. City officials eventually disbanded the A to D rating. State-run media outlet Global Times later referred to it as a “policy failure.”
Rising from the ashes of that disastrous experiment, new models for rating individuals have emerged around China. There are now over 30 such cities, despite there being no mention of assigning quantitative ratings in the 2014 planning outline. This highlights how the details of implementation are left to local governments, resulting in scattered application.
In Rongcheng, Shandong Province, each of the city’s 740,000 adult residents starts out with 1,000 points, according to a report by Foreign Policy. Depending on their score, residents are then rated from A+++ to D, with rewards for high ratings ranging from deposit-free shared bike rental to heating subsidies in winter.
The city of Shanghai is also experimenting with social credit. Through its Honest Shanghai app residents can access their rating by entering their ID number and passing a facial recognition test. The data is drawn from 100 public sources.
Xiamen, a city in the eastern province of Fujian, has launched a similar system. Adults over 18 years old can use the Credit Xiamen official account on popular messaging app WeChat to check their scores. Those with high scores can skip the line for city ferries, and don’t need to pay a deposit to rent shared bikes or borrow a book from the library.
Jeremy Daum, a senior fellow at Yale Law School’s Paul Tsai China Center who has translated many of the government’s social credit-related documents, said that systems rating individuals—like the ones in Rongcheng, Shanghai, and Xiamen—have little effect since very few people are aware of their existence.
The scores are meant to form part of an education system promoting trustworthiness, says Daum. “This is supposed to get people to focus on being good,” he says. If punishments do occur, they are because of violations of laws and regulations, not “bad social credit,” he said.
In the 1990s, China went through a period of radical reformation, adopting a market-based economy. As the number of commercial enterprises mushroomed, many pushed for growth at any cost, and a host of scandals hit China.
In an editorial from 2012, Jiangxi University of Finance and Economics professor Zhang Jinming drew attention to the emerging appearance of low-quality goods and products and their effects on the populace. “These substandard products could result in serious economic losses, and some may even be health hazards,” he wrote.
In 2008, for example, contaminated milk powder sickened nearly 300,000 Chinese children and killed six babies. Twenty-two companies, including Sanlu Group, which accounted for 20% of the market at the time, were found to have traces of melamine in their products. An investigation found that local farmers had deliberately added the chemical to increase the protein content of substandard milk.
In 2015, a mother and daughter were arrested for selling $88 million in faulty vaccines. The arrests were made public a year later when it was announced that the improperly-stored vaccines had made their way across 20 provinces, causing a public outcry and loss in consumer confidence.
A question of trust
Incidents like these are driving the thinking behind the Social Credit System, Samm Sacks, a US-based senior fellow in the Technology Policy Program at the Centre for Strategic and International Studies (CSIS), who has published extensively on the topic, told TechNode. The idea is that greater supervision and increased “trust” in society could limit episodes like these, and in turn, promote China’s economic development.
The most well-developed part of social credit relates to businesses and seeks to ensure compliance in the market. Has your company committed fraud? It may be put on a blacklist, along with you and other company representatives. Have you paid your taxes on time? The company may be placed on a redlist, making it easier to bypass bureaucratic hurdles.
Government entities then share industry-specific lists and other public data through memorandums of understanding. This creates a system of cross-departmental punishments and rewards. If one government department imposes sanctions on a company, another could do the same within the scope of their power.
If a company were added to a blacklist for serious food safety violations it could be completely banned from operating or be barred from government procurement. Companies on redlists face fewer roadblocks when interacting with government departments.
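
To make the cross-departmental mechanism concrete, here is a minimal sketch, assuming (purely for illustration; the article describes no data formats) that each department publishes a blacklist of unified company identifiers and registers the sanction it can apply within its own remit:

    # Illustrative sketch of cross-departmental "joint punishment" lookups.
    # Department names, identifiers, and sanctions are invented for the example.
    BLACKLISTS = {
        "food_safety": {"91310000XYZ"},
        "tax": {"91310000ABC"},
    }

    SANCTIONS = {
        "food_safety": "operating ban / exclusion from government procurement",
        "tax": "additional audits",
        "customs": "slower clearance",
    }

    def joint_sanctions(company_id: str) -> list[str]:
        """If any one department blacklists a company, every department applies
        whatever sanction lies within its own power."""
        flagged = any(company_id in entries for entries in BLACKLISTS.values())
        return list(SANCTIONS.values()) if flagged else []

    print(joint_sanctions("91310000XYZ"))

The single shared lookup is the whole point: one department's finding propagates into every other department's own punishments.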
A critical feature of the system is that it links individuals to businesses, explains Martin Chorzempa, a research fellow at the Peterson Institute for International Economics, based in Washington, DC. The idea is that while companies are supervised in their market activities, executives and legal representatives are also held responsible if something goes wrong.
But it’s not just business people that can be included on blacklists, as Wei, the young mother from Chengdu, found out.
One of the most notorious blacklists is the “List of Dishonest Persons Subject to Enforcement.” It is reserved for those who have willfully neglected to fulfill court orders, lost a civil suit, failed to pay fines, or conducted fraudulent activity. Punishments include bans from air and high-speed rail travel, private school education, high-end hotels, and purchasing luxury goods on e-commerce platforms. Other sanctions include being barred from receiving government subsidies, being awarded honorary titles, and taking on roles as a civil servant or in upper management at state-owned enterprises.
Jia Yueting, former CEO of embattled conglomerate LeEco, also landed on the blacklist in December 2017. Six months later he was banned from buying “luxury” goods and travel for a year—including air and high-speed rail tickets.  He had failed to abide by a court order holding him responsible for his debt-ridden company’s dues. Jia fled to the US in late 2017 and defied an order to return to China. He has been back in the news recently after becoming embroiled in a battle with a new investor in Jia’s electric vehicle company Faraday Future.
Blacklist boom
It is uncertain whether the government is incorporating private sector data in social credit records. However, information does flow the other way. Companies like Alibaba and JD.com have integrated blacklist records into their platforms to prohibit defaulters from spending on luxury items.
Reports claiming that the social credit system scoops up social media data, internet browsing history, and online transaction data conflate the government’s systems with commercial opt-in platforms like Ant Financial’s Sesame Credit.
Despite being authorized by the People’s Bank of China (PBoC), Sesame Credit is distinct from the government system. The platform, which is integrated into Alipay, rates users on a scale of 350 to 950. Those with higher scores gain access to rewards, including deposit free use of power bricks and shared bicycles, as well as reduced deposits when renting property. It functions like a traditional credit rating platform mixed with a loyalty program. The company was not willing to comment on social credit.
Experts believe that the collection of data by the government is currently limited to records held by its various departments and entities. It is information the government already has but hasn’t yet shared across departments, says Chorzempa.
Liang Fan, a doctoral student at the University of Michigan who studies social credit, explains that he is aware of 400 sources of information, although the total number of types of data that are compiled is unknown to him.
Nonetheless, private industry is picking up on signals from the government, some implicit and others explicit. Private credit systems have been developed off the back of the government’s broader plan. The PBoC was integral in the development of these systems. Although information might not be shared, the companies are benefiting from the troves of data they collect.
The lifeblood of social credit is data. And China has heaps of it. But there are still significant threats to the development of a far-reaching social credit system. Honest Shanghai app users have reported problems ranging from faulty facial recognition tech to the app just not accepting their registration.
“The user experience is terrible. I can’t verify my real name and it failed when I scanned my face,” said one of numerous similar reviews in the iOS App Store. Many of the reviewers posted one-star ratings.
But there exists a much more entrenched problem—individual government departments don’t like sharing their data, says Chorzempa. It holds significant commercial and political value for those who control it. This creates enormous difficulty when attempting to set up a platform for cross-departmental sharing. While there is a national plan to set up a centralized system for the coordination of data, there are currently no notable incentives for sharing. In addition, creating a broader system results in more labor for individual departments, with agencies essentially taking on more work for the benefit of others.
Other challenges are societal. Reports about the proliferation of the social credit system often ignore an important factor that could hinder its overreach: the agency of Chinese individuals. There is a growing awareness of how private data is used. This was evident in the Suining experiment and could have more wide-ranging effects for social credit. “It’s not the free-for-all that it may have been even in 2014 when the social credit plan was released,” said Sacks of CSIS. “There’s been a change in ways that could make aspects of that system illegitimate in the eyes of the public.”
Someone to watch over 
Real-name verification is essential for social credit. Everyone in China is required to prove their identity when buying a SIM card, creating or verifying social media accounts, and setting up accounts for making online payments; this is dictated, in part, by the 2017 Cybersecurity Law.
Everyday activities are being linked to individual identities with more success, reducing anonymity, says Daum. He believes that’s what the government is doing with social credit. “They’re saying: ‘First, we need a system where people are afraid to not be trustworthy. Then we need a system where it’s impossible to not be trustworthy,’ because there’s too much information on you.”
For Wei, the blacklisted woman in Chengdu, it wasn’t the prospect of an arduous cross-country rail journey that bothered her. Instead, she was fearful that her future actions and freedom could be restricted by her past record. What if, for example, her employer wanted her to go on a business trip?
In the late 1700s, British social theorist Jeremy Bentham proposed the idea of a panopticon—an institution in which a single corrections officer could observe all inmates without them knowing whether they were being watched. In the Social Credit System framework that is emerging in China, the lack of anonymity, through both real-name verification and publicly-published blacklists, creates a system of fear even if no one is watching—much like Bentham’s notorious panopticon.


Chinese border guards put secret surveillance app on tourists' phones [Hilary Osborne and Sam Cutler/The Guardian]



  • 08-27-20

As a percentage of GDP, U.S. spending on scientific R&D has sunk to levels not seen since the pre-Sputnik era.


Artificial Intelligence Set: What You Need to Know About AI
 April 25, 2018

What do you really need to know about the Artificial Intelligence (AI) revolution? This specially priced 4-item set will make it easier for you to understand how your company, industry, and career can be transformed by AI. It is a must-have for managers who need to recognize the potential impact of AI, how it is driving future growth, and how they can make the most of it. This collection includes: "Human + Machine: Reimagining Work in the Age of AI" by Paul Daugherty and H. James Wilson, which reveals how companies are using the new rules of AI to leap ahead on innovation and profitability, as well as what you can do to achieve similar results; based on the authors' experience and research with 1,500 organizations, the book describes six new types of hybrid human + machine roles that every company must develop, and it includes a "leader's guide" with the principles required to become an AI-fueled business. "Prediction Machines: The Simple Economics of Artificial Intelligence" by Ajay Agrawal, Joshua Gans, and Avi Goldfarb, in which the authors lift the curtain on the AI-is-magic hype and show how basic tools from economics provide clarity about the AI revolution and a basis for action by CEOs, managers, policy makers, investors, and entrepreneurs. "Artificial Intelligence for the Real World" (Article PDF), based on a survey of 250 executives familiar with their companies' use of cognitive technology and a study of 152 projects, which shows that companies do better by developing an incremental approach to AI and by focusing on augmenting rather than replacing human capabilities. And "Reshaping Business with Artificial Intelligence" (Article PDF), which provides baseline information on the strategies used by companies leading in AI, the prospects for its growth, and the steps executives need to take to develop a strategy for their business.


Tech giants are racing to create the next big computing device after the smartphone. Qualcomm's president says it may be just 2 years away…:


NEXUS: A Brief History of Information Networks from the Stone Age to AI

Yuval Noah Harari

 This non-fiction book looks through the long lens of human history to consider how the flow of information has made, and unmade, our world.

We are living through the most profound information revolution in human history. To understand it, we need to understand what has come before. We have named our species Homo sapiens, the wise human – but if humans are so wise, why are we doing so many self-destructive things? In particular, why are we on the verge of committing ecological and technological suicide? Humanity gains power by building large networks of cooperation, but the easiest way to build and maintain these networks is by spreading fictions, fantasies, and mass delusions. In the 21st century, AI may form the nexus for a new network of delusions that could prevent future generations from even attempting to expose its lies and fictions. However, history is not deterministic, and neither is technology: by making informed choices, we can still prevent the worst outcomes. Because if we can’t change the future, then why waste time discussing it?

https://www.ynharari.com/book/nexus/ ; https://www.goodreads.com/book/show/204927599-nexus


Around the halls: What should the regulation of generative AI look like?

 Nicol Turner Lee, Niam Yaraghi, Mark MacCarthy, and Tom Wheeler                Friday, June 2, 2023

 We are living in a time of unprecedented advancements in generative artificial intelligence (AI), which are AI systems that can generate a wide range of content, such as text or images. The release of ChatGPT, a chatbot powered by OpenAI’s GPT-3 large language model (LLM), in November 2022 ushered generative AI into the public consciousness, and other companies like Google and Microsoft have been equally busy creating new opportunities to leverage the technology. In the meantime, these continuing advancements and applications of generative AI have raised important questions about how the technology will affect the labor market, how its use of training data implicates intellectual property rights, and what shape government regulation of this industry should take. Last week, a congressional hearing with key industry leaders suggested an openness to AI regulation—something that legislators have already considered to rein in some of the potential negative consequences of generative AI and AI more broadly. Considering these developments, scholars across the Center for Technology Innovation (CTI) weighed in around the halls on what the regulation of generative AI should look like.

NICOL TURNER LEE (@DrTurnerLee)
Generative AI refers to machine learning algorithms that can create new content like audio, code, images, text, simulations, or even videos. More recent focus has been on its enablement of chatbots, including ChatGPT, Bard, Copilot, and other more sophisticated tools that leverage LLMs to perform a variety of functions, like gathering research for assignments, compiling legal case files, automating repetitive clerical tasks, or improving online search. Debates around regulation focus on the potential downsides of generative AI, including the quality of datasets, unethical applications, racial or gender bias, workforce implications, and greater erosion of democratic processes due to technological manipulation by bad actors. The upsides include a dramatic spike in efficiency and productivity as the technology improves and simplifies certain processes and decisions, like streamlining physician processing of medical notes or helping educators teach critical thinking skills. There will be a lot to discuss around generative AI’s ultimate value and consequence to society, and if Congress continues to operate at a very slow pace to regulate emerging technologies and institute a federal privacy standard, generative AI will become more technically advanced and deeply embedded in society. But where Congress could garner a very quick win on the regulatory front is to require consumer disclosures when AI-generated content is in use and add labeling or some type of multi-stakeholder certification process to encourage improved transparency and accountability for existing and future use cases.

Once again, the European Union is already leading the way on this. In its most recent AI Act, the EU requires that AI-generated content be disclosed to consumers to prevent copyright infringement, illegal content, and other malfeasance related to end-user lack of understanding about these systems. As more chatbots mine, analyze, and present content in accessible ways for users, findings are often not attributable to any one or multiple sources, and despite some permissions of content use granted under the fair use doctrine in the U.S. that protects copyright-protected work, consumers are often left in the dark around the generation and explanation of the process and results.

Congress should prioritize consumer protection in future regulation, and work to create agile policies that are futureproofed to adapt to emerging consumer and societal harms—starting with immediate safeguards for users before they are left to, once again, fend for themselves as subjects of highly digitized products and services. The EU may honestly be onto something with the disclosure requirement, and the U.S. could further contextualize its application vis-à-vis existing models that do the same, including the labeling guidance of the Food and Drug Administration (FDA) or what I have proposed in prior research: an adaptation of the Energy Star Rating system to AI. Bringing more transparency and accountability to these systems must be central to any regulatory framework, and beginning with smaller bites of a big apple might be a first stab for policymakers.

NIAM YARAGHI (@niamyaraghi)
With the emergence of sophisticated artificial intelligence (AI) advancements, including large language models (LLMs) like GPT-4, and LLM-powered applications like ChatGPT, there is a pressing need to revisit healthcare privacy protections. At their core, all AI innovations utilize sophisticated statistical techniques to discern patterns within extensive datasets using increasingly powerful yet cost-effective computational technologies. These three components—big data, advanced statistical methods, and computing resources—have not only become available recently but are also being democratized and made readily accessible to everyone at a pace unprecedented in previous technological innovations. This progression allows us to identify patterns that were previously indiscernible, which creates opportunities for important advances but also possible harms to patients.

Privacy regulations, most notably HIPAA, were established to protect patient confidentiality, operating under the assumption that de-identified data would remain anonymous. However, given the advancements in AI technology, the current landscape has become riskier. Now, it’s easier than ever to integrate various datasets from multiple sources, increasing the likelihood of accurately identifying individual patients.

Apart from the amplified risk to privacy and security, novel AI technologies have also increased the value of healthcare data due to the enriched potential for knowledge extraction. Consequently, many data providers may become more hesitant to share medical information with their competitors, further complicating healthcare data interoperability.

Considering these heightened privacy concerns and the increased value of healthcare data, it’s crucial to introduce modern legislation to ensure that medical providers will continue sharing their data while being shielded against the consequences of potential privacy breaches likely to emerge from the widespread use of generative AI.

MARK MACCARTHY (@Mark_MacCarthy)
In “The Leopard,” Giuseppe Di Lampedusa’s famous novel of the Sicilian aristocratic reaction to the unification of Italy in the 1860s, one of his central characters says, “If we want things to stay as they are, things will have to change.”

Something like this Sicilian response might be happening in the tech industry’s embrace of inevitable AI regulation. Three things are needed, however, if we do not want things to stay as they are.

The first and most important step is sufficient resources for agencies to enforce current law. Federal Trade Commission Chair Lina Khan properly says AI is not exempt from current consumer protection, discrimination, employment, and competition law, but if regulatory agencies cannot hire technical staff and bring AI cases in a time of budget austerity, current law will be a dead letter.

Second, policymakers should not be distracted by science fiction fantasies of AI programs developing consciousness and achieving independent agency over humans, even if these metaphysical abstractions are endorsed by industry leaders. Not a dime of public money should be spent on these highly speculative diversions when scammers and industry edge-riders are seeking to use AI to break existing law.

Third, Congress should consider adopting new identification, transparency, risk assessment, and copyright protection requirements along the lines of the European Union’s proposed AI Act. The National Telecommunications and Information Administration’s request for comment on a proposed AI accountability framework and Sen. Chuck Schumer’s (D-NY) recently-announced legislative initiative to regulate AI might be moving in that direction.

TOM WHEELER (@tewheels)
Both sides of the political aisle, as well as digital corporate chieftains, are now talking about the need to regulate AI. A common theme is the need for a new federal agency. To simply clone the model used for existing regulatory agencies is not the answer, however. That model, developed for oversight of an industrial economy, took advantage of slower paced innovation to micromanage corporate activity. It is unsuitable for the velocity of the free-wheeling AI era.

All regulations walk a tightrope between protecting the public interest and promoting innovation and investment. In the AI era, traversing this path means accepting that different AI applications pose different risks and identifying a plan that pairs the regulation with the risk while avoiding innovation-choking regulatory micromanagement.

Such agility begins with adopting the formula by which digital companies create technical standards as the formula for developing behavioral standards: identify the issue; assemble a standard-setting process involving the companies, civil society, and the agency; then give final approval and enforcement authority to the agency.

Industrialization was all about replacing and/or augmenting the physical power of humans. Artificial intelligence is about replacing and/or augmenting humans’ cognitive powers. To confuse how the former was regulated with what is needed for the latter would be to miss the opportunity for regulation to be as innovative as the technology it oversees. We need institutions for the digital era that address problems that already are apparent to all.

Google and Microsoft are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the author and are not influenced by any donation.

https://www.brookings.edu/blog/techtank/2023/06/02/around-the-halls-what-should-the-regulation-of-generative-ai-look-like/

 Martin Burckhardt

Eine kurze Geschichte der Digitalisierung [A Brief History of Digitization]

Von elektrisierten Mönchen zur künstlichen Intelligenz: Die Geistesgeschichte der Maschine [From electrified monks to artificial intelligence: the intellectual history of the machine]

Every day we experience an emotional rollercoaster: enthusiasm for digitization and fear of an alien, cold power. But where does this power come from? The cultural theorist Martin Burckhardt shows that all of it was conceived by human beings. After all, the digital age began in 1746. We would not be surfing the internet today had Abbé Nollet not discovered the instantaneous effect of electricity back then, had Joseph-Marie Jacquard not invented the automated loom, and had Charles Babbage not laid the foundation for today's computer with his Analytical Engine. It is not mathematics that drives digitization forward, but human wishes and longings. This book is an invitation to think of the computer not as a device but as a parlour game that will shape our future. A crash course in the intellectual history of the machine…: https://www.amazon.com/Eine-kurze-Geschichte-Digitalisierung-German-ebook/dp/B07C3QDM4H


