otrdiena, 2018. gada 6. februāris

Human Capital in the Modern World


                                                                    Omnia mutantur et nos mutamur in illis


Human Capital in the Modern World

Throughout the evolution of civilisation, the human has been the object of ruthless exploitation who has always been associated with labour force, and in a market economy – also with capital. The human’s personality has been belittled and pushed into the background. But has the situation changed today?
          The consumer society continues to impose its own habits of life: it makes people, by all available means and methods, pursue money in order to spend it on advertised entertainments and diverse commodities; it forces employees to toil in such an intense working regime that often there is no time left for creative development of the personality, and the desire to communicate and know oneself disappears. The life of an ordinary toiler is filled with the interest of making money and building a career in addition to home and family chores. Many people live in stress; complexes and phobias arise.
However, the state of affairs begins to radically change as the society becomes more digitalised and the achievements of the fourth industrial revolution along with information and communication technologies are put into practice. This greatly enhances the role of the human, changes the working conditions and the labour market situation, as well as multiplies the importance of intellectual capital and the value of the human. It contributes to, and creates, an environment for unleashing the potential of each individual, for its full development, expansion of knowledge, creativity and capacity and for active participation in public administration.... Read more: https://www.amazon.com/HOW-GET-RID-SHACKLES-TOTALITARIANISM-ebook/dp/B0C9543B4L/ref=sr_1_1?crid=19WW1TG75ZU79&keywords=HOW+TO+GET+RID+OF+THE+SHACKLES+OF+TOTALITARIANISM&qid=1687700500&s=books&sprefix=how+to+get+rid+of+the+shackles+of+totalitarianism%2Cstripbooks-intl-ship%2C181&sr=1-1


Introduction Concern about a “jobless future” has never been greater. Seemingly every day, an academic, researcher or technology leader suggests that in a world of automation and artificial intelligence (AI), workers will increasingly be a surplus to what businesses need – or as Stanford University’s Jerry Kaplan says in his best-selling book, it won’t be long before “humans need not apply.”1 The concerns are understandable. AI – long academic theory and Hollywood plotline – is becoming “real” at an astonishing pace and finding its way into more and more aspects of work, rest and play. AI is now being used to read X-rays and MRIs. It’s at the heart of stock trading. Chat with Siri or Alexa, and you’re using AI. Soon, AI will be found in every job, profession and industry around the world. When machines do everything, lots of people wonder what will we do? What work will be left for people? How will we make a living when machines are cheaper, faster and smarter than we are – machines that don’t take breaks or vacations, don’t get sick and don’t care about chatting with their colleagues about last night’s game? For many people, the future of work looks like a bleak place, full of temporary jobs (a “gig” economy), minimum wage labor and a ruling technocracy safely hidden away in their gated communities and their circular living machines. Although plausible, this vision of the future is not one we share. Our vision is quite different – and much more optimistic. Our vision is based on a different reading of the trends and the facts; a different interpretation of how change occurs and how humans evolve. Our view of the future of work is based on the following principles: • Work has always changed. Few if any people make a living nowadays as knockeruppers, telegraphists, switchboard operators, computers (the first computers were people), lamplighters, nursemaids, limners, town criers, travel agents, bank tellers, elevator operators or secretaries. Yet these were all jobs that employed thousands of people in the past. • Lots of current work is awful. Millions of people around the world do work they hate — work that is dull, dirty or dangerous. Rather than trying to keep people in these jobs, we should liberate them to do more fulfilling, more enjoyable, more lucrative work. We shouldn’t have a “pre-nostalgia” for the mortgage processor in the way that some people are nostalgic about miners and steelworkers (people who typically weren’t miners or steelworkers, it goes without saying). • Machines need man. Machines can do more, but there is always more to do. Can a machine (in its software or hardware form) create itself, market itself, sell itself? Deliver itself? Feed itself? Clean itself? Fix itself? Machines are tools, and tools need to be used. By people. To imagine otherwise is to fall into the realm of science-fiction extrapolation. • Don’t underestimate human imagination or ingenuity. Our greatest quality is our curiosity. We want to know what’s around the riverbend. How it works. What it means. EMOJI/FILTER/AVATAR DESIGNERS • BIG DATA AS A SERVICE FOR INDIVIDUALS • AI AUGMENTED SOCIAL CAREER COACH • PERSONAL DATA ACTUARY • PERSONAL DATA MONETIZER • P+M SPECIALISTS GIG NEGOTIATOR • REMOTE DIGITAL FINANCIAL FITNESS COACH • DRONE JOCKEY/ DRONE LOGISTICS MANAGER • EXPERIENCE DESIGNER/ENGINEER /VIRTUAL STORE ARCHITECT • 3D PRINTING ENGINEER • VIRTUAL PROJECT EXPERTS/ “SOMMELIERS” • IMMERSION OVER LAYERS • JOURNEY SCENARISTS UXWRIGHTS/CXWRIGHTS • SMART HOME TECH SUPPORT • GENETIC MIXOLOGIST • ORGAN CREATORS • HUMAN NURTURERS • NANO BOT ENGINEERS • GENETIC DOPING PATHOLOGIST • SMART CLOTHING SPECIALIST • FITNESS COMMITMENT COUNSELLOR • TALKER/WALKER • MICRO ENERGY SPECIALIST • AGRICULTURAL GENE SPECIALIST/DNA ENGINEER • ALGAE FARMERS • INSECT BREEDERS • METHANE CONVERSION SPECIALISTS • APPLIANCE ENERGY INCENTIVE REPRESENTATIVE • ENGLISH AS A FOREIGN LANGUAGE FOR ROBOTS • AI ASSISTED HEALTHCARE TECHNICIANS • AUTONOMOUS FLEET ATTENDANTS • SPACE CONTROLLERS • AI TRAINER • EXPERIENCE CURATORS • DEMENTIA SPECIALISTS • ARTISAN CYBER MAKERS • AUTONOMOUS TRAVEL CASE LAW • GENE MODIFICATION CASE LAW • EQUALITY CASE LAW • GENETIC DOPING ARBITRATION • “SOMMELIERS” • IMMERSION OVER LAYERS • JOURNEY SCENARISTS • EMOJI/FILTER/AVATAR DESIGNERS • BIG DATA AS A SERVICE FOR INDIVIDUALS • AI AUGMENTED SOCIAL CAREER COACH • PERSONAL DATA ACTUARY • PERSONAL DATA MONETIZER • P+M SPECIALISTS GIG NEGOTIATOR • REMOTE DIGITAL FINANCIAL FITNESS COACH • DRONE JOCKEY/DRONE LOGISTICS MANAGER • EXPERIENCE DESIGNER/ENGINEER /VIRTUAL STORE ARCHITECT • 3D PRINTING ENGINEER • VIRTUAL PROJECT EXPERTS/ “SOMMELIERS” • IMMERSION OVER LAYERS • JOURNEY SCENARISTS UXWRIGHTS/CXWRIGHTS • SMART HOME TECH SUPPORT • GENETIC MIXOLOGIST • ORGAN CREATORS • HUMAN NURTURERS • NANO BOT ENGINEERS • |
21 Jobs of the Future: A Guide to Getting – and Staying – Employed for the Next 10 Years How we can make it better. In an age of intelligent machines, man will continue to want to explore – and make – what’s next. Doing so will be the source of new work ad infinitum. • Technology will upgrade all aspects of society. Many aspects of modern societies are still far from perfect. Is our healthcare system as good as it’s ever going to be? The way we bank? How we educate our kids? Insure our houses? Board an airplane? Of course not. Technology – which is still, in truth, peripheral to many aspects of our work and our lives – is set to become central to how we do everything and, in the process, make the services and experiences we want much, much better. And, in doing so, it will also impact how we occupy our time. • Technology solves – and creates – problems. The guilty little secret of the technology world is that every solution begets a problem. Fix A, and then B goes on the fritz. Develop C – which is a great new thing – and then realize you’ve also created D – which is a terrible new thing that needs fixing. Intelligent machines will address many problems in society (see above), but in doing so, they will also create lots of new problems that people will need to work on addressing. Work that they will monetize. The work ahead goes on forever. Wash, rinse, repeat. In the future, work will change but won’t go away. Many types of jobs will disappear. Many workers will struggle to adjust to the disappearance of the work they understand and find it hard to thrive with work they don’t understand. Wrenching transformations – which is what the future of work holds for us all – are never easy. But a world without work is a fantasy that is no closer to reality in 2017 than it was 501 years ago upon the publication of Thomas More’s Utopia…:

 https://www.cognizant.com/whitepapers/21-jobs-of-the-future-a-guide-to-getting-and-staying-employed-over-the-next-10-years-codex3049.pdf

Human Resource Management

By Sarah Gilmore, Steve Williams

This book provides a concise, engaging, and accessible introduction to human resource management which is academically rigorous and appropriate for students taking courses in HRM, business studies and related areas. In addition to covering the core issues relating to human resource management, such as recruitment and selection, training and development, and reward, the book also includes material devoted to new and emerging issues, like talent management and the effective management of staff on international assignments. It features a wide range of pedagogic features designed to enhance your knowledge of human resource management issues, including boxed skills exercises and learning activities, section summaries, suggestions for further reading, assignment and discussion questions, and a guide to key concepts.

Written by a team of experts who have extensive experience of teaching, researching, and consultancy activity, the book is an essential companion when it comes to helping you to develop your understanding of human resource management topics. This new edition features a new chapter entitled 'Utilizing human resources', covering areas such as flexibility, talent management and health and safety.



08 Oct 2019 | 15:00 GMT
Many Experts Say We Shouldn’t Worry About Superintelligent AI. They’re Wrong
By Stuart Russell
 This article is based on a chapter of the author’s newly released book, Human Compatible: Artificial Intelligence and the Problem of Control.
AI research is making great strides toward its long-term goal of human-level or superhuman intelligent machines. If it succeeds in its current form, however, that could well be catastrophic for the human race. The reason is that the “standard model” of AI requires machines to pursue a fixed objective specified by humans. We are unable to specify the objective completely and correctly, nor can we anticipate or prevent the harms that machines pursuing an incorrect objective will create when operating on a global scale with superhuman capabilities. Already, we see examples such as social-media algorithms that learn to optimize click-through by manipulating human preferences, with disastrous consequences for democratic systems.
Nick Bostrom’s 2014 book Superintelligence: Paths, Dangers, Strategies presented a detailed case for taking the risk seriously. In what most would consider a classic example of British understatement, The Economist magazine’s review of Bostrom’s book ended with: “The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking.”
Switching the machine off won’t work for the simple reason that a superintelligent entity will  already have thought of that possibility and taken steps to prevent it.
Surely, with so much at stake, the great minds of today are already doing this hard thinking—engaging in serious debate, weighing up the risks and benefits, seeking solutions, ferreting out loopholes in solutions, and so on. Not yet, as far as I am aware. Instead, a great deal of effort has gone into various forms of denial.
Some well-known AI researchers have resorted to arguments that hardly merit refutation. Here are just a few of the dozens that I have read in articles or heard at conferences:
Electronic calculators are superhuman at arithmetic. Calculators didn’t take over the world; therefore, there is no reason to worry about superhuman AI.
Historically, there are zero examples of machines killing millions of humans, so, by induction, it cannot happen in the future.
No physical quantity in the universe can be infinite, and that includes intelligence, so concerns about superintelligence are overblown.
Perhaps the most common response among AI researchers is to say that “we can always just switch it off.” Alan Turing himself raised this possibility, although he did not put much faith in it:
If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled.... This new danger...is certainly something which can give us anxiety.
Switching the machine off won’t work for the simple reason that a superintelligent entity will already have thought of that possibility and taken steps to prevent it. And it will do that not because it “wants to stay alive” but because it is pursuing whatever objective we gave it and knows that it will fail if it is switched off. We can no more “just switch it off” than we can beat AlphaGo (the world-champion Go-playing program) just by putting stones on the right squares.
Other forms of denial appeal to more sophisticated ideas, such as the notion that intelligence is multifaceted. For example, one person might have more spatial intelligence than another but less social intelligence, so we cannot line up all humans in strict order of intelligence. This is even more true of machines: Comparing the “intelligence” of AlphaGo with that of the Google search engine is quite meaningless.
Kevin Kelly, founding editor of Wired magazine and a remarkably perceptive technology commentator, takes this argument one step further. In “The Myth of a Superhuman AI,” he writes, “Intelligence is not a single dimension, so ‘smarter than humans’ is a meaningless concept.” In a single stroke, all concerns about superintelligence are wiped away.
Now, one obvious response is that a machine could exceed human capabilities in all relevant dimensions of intelligence. In that case, even by Kelly’s strict standards, the machine would be smarter than a human. But this rather strong assumption is not necessary to refute Kelly’s argument.
Consider the chimpanzee. Chimpanzees probably have better short-term memory than humans, even on human-oriented tasks such as recalling sequences of digits. Short-term memory is an important dimension of intelligence. By Kelly’s argument, then, humans are not smarter than chimpanzees; indeed, he would claim that “smarter than a chimpanzee” is a meaningless concept.
This is cold comfort to the chimpanzees and other species that survive only because we deign to allow it, and to all those species that we have already wiped out. It’s also cold comfort to humans who might be worried about being wiped out by machines.
The risks of superintelligence can also be dismissed by arguing that superintelligence cannot be achieved. These claims are not new, but it is surprising now to see AI researchers themselves claiming that such AI is impossible. For example, a major report from the AI100 organization, “Artificial Intelligence and Life in 2030 [PDF],” includes the following claim: “Unlike in the movies, there is no race of superhuman robots on the horizon or probably even possible.”
To my knowledge, this is the first time that serious AI researchers have publicly espoused the view that human-level or superhuman AI is impossible—and this in the middle of a period of extremely rapid progress in AI research, when barrier after barrier is being breached. It’s as if a group of leading cancer biologists announced that they had been fooling us all along: They’ve always known that there will never be a cure for cancer.
What could have motivated such a volte-face? The report provides no arguments or evidence whatever. (Indeed, what evidence could there be that no physically possible arrangement of atoms outperforms the human brain?) I suspect that the main reason is tribalism—the instinct to circle the wagons against what are perceived to be “attacks” on AI. It seems odd, however, to perceive the claim that superintelligent AI is possible as an attack on AI, and even odder to defend AI by saying that AI will never succeed in its goals. We cannot insure against future catastrophe simply by betting against human ingenuity.
If superhuman AI is not strictly impossible, perhaps it’s too far off to worry about? This is the gist of Andrew Ng’s assertion that it’s like worrying about “overpopulation on the planet Mars.” Unfortunately, a long-term risk can still be cause for immediate concern. The right time to worry about a potentially serious problem for humanity depends not just on when the problem will occur but also on how long it will take to prepare and implement a solution.
For example, if we were to detect a large asteroid on course to collide with Earth in 2069, would we wait until 2068 to start working on a solution? Far from it! There would be a worldwide emergency project to develop the means to counter the threat, because we can’t say in advance how much time is needed.
Ng’s argument also appeals to one’s intuition that it’s extremely unlikely we’d even try to move billions of humans to Mars in the first place. The analogy is a false one, however. We are already devoting huge scientific and technical resources to creating ever more capable AI systems, with very little thought devoted to what happens if we succeed. A more apt analogy, then, would be a plan to move the human race to Mars with no consideration for what we might breathe, drink, or eat once we arrive. Some might call this plan unwise.
Another way to avoid the underlying issue is to assert that concerns about risk arise from ignorance. For example, here’s Oren Etzioni, CEO of the Allen Institute for AI, accusing Elon Musk and Stephen Hawking of Luddism because of their calls to recognize the threat AI could pose:
At the rise of every technology innovation, people have been scared. From the weavers throwing their shoes in the mechanical looms at the beginning of the industrial era to today’s fear of killer robots, our response has been driven by not knowing what impact the new technology will have on our sense of self and our livelihoods. And when we don’t know, our fearful minds fill in the details.
Even if we take this classic ad hominem argument at face value, it doesn’t hold water. Hawking was no stranger to scientific reasoning, and Musk has supervised and invested in many AI research projects. And it would be even less plausible to argue that Bill Gates, I.J. Good, Marvin Minsky, Alan Turing, and Norbert Wiener, all of whom raised concerns, are unqualified to discuss AI.
The accusation of Luddism is also completely misdirected. It is as if one were to accuse nuclear engineers of Luddism when they point out the need for control of the fission reaction. Another version of the accusation is to claim that mentioning risks means denying the potential benefits of AI. For example, here again is Oren Etzioni:
Doom-and-gloom predictions often fail to consider the potential benefits of AI in preventing medical errors, reducing car accidents, and more.
And here is Mark Zuckerberg, CEO of Facebook, in a recent media-fueled exchange with Elon Musk:
If you’re arguing against AI, then you’re arguing against safer cars that aren’t going to have accidents. And you’re arguing against being able to better diagnose people when they’re sick.
The notion that anyone mentioning risks is “against AI” seems bizarre. (Are nuclear safety engineers “against electricity”?) But more importantly, the entire argument is precisely backwards, for two reasons. First, if there were no potential benefits, there would be no impetus for AI research and no danger of ever achieving human-level AI. We simply wouldn’t be having this discussion at all. Second, if the risks are not successfully mitigated, there will be no benefits.
The potential benefits of nuclear power have been greatly reduced because of the catastrophic events at Three Mile Island in 1979, Chernobyl in 1986, and Fukushima in 2011. Those disasters severely curtailed the growth of the nuclear industry. Italy abandoned nuclear power in 1990, and Belgium, Germany, Spain, and Switzerland have announced plans to do so. The net new capacity per year added from 1991 to 2010 was about a tenth of what it was in the years immediately before Chernobyl.
Strangely, in light of these events, the renowned cognitive scientist Steven Pinker has argued [PDF] that it is inappropriate to call attention to the risks of AI because the “culture of safety in advanced societies” will ensure that all serious risks from AI will be eliminated. Even if we disregard the fact that our advanced culture of safety has produced Chernobyl, Fukushima, and runaway global warming, Pinker’s argument entirely misses the point. The culture of safety—when it works—consists precisely of people pointing to possible failure modes and finding ways to prevent them. And with AI, the standard model is the failure mode.
Pinker also argues that problematic AI behaviors arise from putting in specific kinds of objectives; if these are left out, everything will be fine:
AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world.
Yann LeCun, a pioneer of deep learning and director of AI research at Facebook, often cites the same idea when downplaying the risk from AI:
There is no reason for AIs to have self-preservation instincts, jealousy, etc.... AIs will not have these destructive “emotions” unless we build these emotions into them.
Unfortunately, it doesn’t matter whether we build in “emotions” or “desires” such as self-preservation, resource acquisition, knowledge discovery, or, in the extreme case, taking over the world. The machine is going to have those emotions anyway, as subgoals of any objective we do build in—and regardless of its gender. As we saw with the “just switch it off” argument, for a machine, death isn’t bad per se. Death is to be avoided, nonetheless, because it’s hard to achieve objectives if you’re dead.
A common variant on the “avoid putting in objectives” idea is the notion that a sufficiently intelligent system will necessarily, as a consequence of its intelligence, develop the “right” goals on its own. The 18th-century philosopher David Hume refuted this idea in A Treatise of Human Nature. Nick Bostrom, in Superintelligence, presents Hume’s position as an orthogonality thesis:
Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.
For example, a self-driving car can be given any particular address as its destination; making the car a better driver doesn’t mean that it will spontaneously start refusing to go to addresses that are divisible by 17.
By the same token, it is easy to imagine that a general-purpose intelligent system could be given more or less any objective to pursue—including maximizing the number of paper clips or the number of known digits of pi. This is just how reinforcement learning systems and other kinds of reward optimizers work: The algorithms are completely general and accept any reward signal. For engineers and computer scientists operating within the standard model, the orthogonality thesis is just a given.
The most explicit critique of Bostrom’s orthogonality thesis comes from the noted roboticist Rodney Brooks, who asserts that it’s impossible for a program to be “smart enough that it would be able to invent ways to subvert human society to achieve goals set for it by humans, without understanding the ways in which it was causing problems for those same humans.”
Those who argue the risk is negligible have failed to explain why superintelligent AI will necessarily remain under human control.
Unfortunately, it’s not only possible for a program to behave like this; it is, in fact, inevitable, given the way Brooks defines the issue. Brooks posits that the optimal plan for a machine to “achieve goals set for it by humans” is causing problems for humans. It follows that those problems reflect things of value to humans that were omitted from the goals set for it by humans. The optimal plan being carried out by the machine may well cause problems for humans, and the machine may well be aware of this. But, by definition, the machine will not recognize those problems as problematic. They are none of its concern.
In summary, the “skeptics”—those who argue that the risk from AI is negligible—have failed to explain why superintelligent AI systems will necessarily remain under human control; and they have not even tried to explain why superintelligent AI systems will never be developed.
Rather than continue the descent into tribal name-calling and repeated exhumation of discredited arguments, the AI community must own the risks and work to mitigate them. The risks, to the extent that we understand them, are neither minimal nor insuperable. The first step is to realize that the standard model—the AI system optimizing a fixed objective—must be replaced. It is simply bad engineering. We need to do a substantial amount of work to reshape and rebuild the foundations of AI.
This article appears in the October 2019 print issue as “It’s Not Too Soon to Be Wary of AI.”
About the Author
Stuart Russell, a computer scientist, founded and directs the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley.


Rule of the Robots: How Artificial Intelligence Will Transform Everything

by Martin Ford

https://www.youtube.com/watch?v=T4T8D18Yvdo

Strange Tools: Art and Human Nature 

by Alva Noë


A philosopher makes the case for thinking of works of art as tools for investigating ourselves.
The philosopher and cognitive scientist Alva Noë argues that our obsession with works of art has gotten in the way of understanding how art works on us. For Noë, art isn't a phenomenon in need of an explanation but a mode of research, a method of investigating what makes us human--a strange tool. Art isn't just something to look at or listen to--it is a challenge, a dare to try to make sense of what it is all about. Art aims not for satisfaction but for confrontation, intervention, and subversion. Through diverse and provocative examples from the history of art-making, Noë reveals the transformative power of artistic production. By staging a dance, choreographers cast light on the way bodily movement organizes us. Painting goes beyond depiction and representation to call into question the role of pictures in our lives. Accordingly, we cannot reduce art to some natural aesthetic sense or trigger; recent efforts to frame questions of art in terms of neurobiology and evolutionary theory alone are doomed to fail.
By engaging with art, we are able to study ourselves in profoundly novel ways. In fact, art and philosophy have much more in common than we might think. Reframing the conversation around artists and their craft, Strange Tools is a daring and stimulating intervention in contemporary thought.: https://www.amazon.com/Strange-Tools-Art-Human-Nature/dp/0809089165

Nav komentāru:

Ierakstīt komentāru