AI has transformed much of what we do, even if it hasn't overtaken us as a species the way the direst warnings predicted. The fact is, people have been compelled to change many things in their daily lives. From environmental stewardship to dramatic workforce changes, we face tough choices as we adapt to AI's widespread influence.
According to experts, it isn't just humans who must reckon with these changes. AI itself will go through major growing pains as it transitions into adolescence, changing and growing through bleeding-edge computing technologies and more in-depth research.
In recent years, the focus has been on applied AI, also known as "weak AI." Eventually, there could be a shift to strong AI, which involves exploring AI for its own sake rather than for immediate practical application.
Welcoming the third era in computing
According to Anitha Raj, president of IT management and consulting firm ARAR Technology, AI represents the third major epoch in the history of computing, following the mainframe and internet eras. The mainframe era was a period of highly centralized computing in which system operators and programmers controlled practically everything.
There are two primary components of AI: symbolic learning and machine learning. Symbolic learning rests on logic, rules, and explicit knowledge bases, while machine learning is data-driven pattern recognition, which brings it closer to human sensory perception.
There are robots based on machine learning and robots based on symbolic learning. The former act on what their sensors perceive, for example, while the latter follow encoded rules.
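To make the distinction concrete, here is a toy Python sketch contrasting the two approaches: a symbolic spam check driven by a hand-written rule base, and a machine-learning check that derives its decision from labeled examples. The rules, training data, and scoring are all invented for illustration.

```python
from collections import Counter

# Symbolic approach: knowledge is encoded by hand as explicit rules.
SPAM_KEYWORDS = {"free", "winner", "prize"}

def symbolic_is_spam(message: str) -> bool:
    """Flag a message if it matches any hand-written keyword rule."""
    return any(word in message.lower().split() for word in SPAM_KEYWORDS)

# Machine-learning approach: "knowledge" is whatever the labeled data holds.
training = [
    ("win a free prize now", True),
    ("lunch at noon today", False),
    ("free winner alert", True),
    ("meeting notes attached", False),
]

spam_counts, ham_counts = Counter(), Counter()
for text, is_spam in training:
    (spam_counts if is_spam else ham_counts).update(text.split())

def learned_is_spam(message: str) -> bool:
    """Score a message by how often its words appeared in each class."""
    words = message.lower().split()
    return sum(spam_counts[w] for w in words) > sum(ham_counts[w] for w in words)

print(symbolic_is_spam("you are a winner"))  # True: a keyword rule fires
print(learned_is_spam("free prize inside"))  # True: spam-class words dominate
```

The symbolic version can explain exactly why it flags a message; the learned version's "knowledge" is only as good as the examples it was trained on.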
AI-driven capabilities are becoming routine. Within just a few years, AI is expected to take over as much as a third of the routine tasks that humans currently perform, a development that will mainly result from weak AI.
Understandably, people are afraid of the economic implications of widespread automation.
Near-term changes: A value shift from persistence to creativity
One way to assuage these fears is to recognize the realm "beyond knowledge": creative efforts that only humans are capable of. Capacities like cooperation, leadership, and entrepreneurship are our prerogative, not that of machines or robots.
Here are four ways AI is changing values in the near term.
1. A look at unrealized potential
By the late 2020s, routine, less complex tasks and jobs in production and services are projected to shrink by 30 to 40 percent. AI will transform the workforce by taking over the menial tasks that keep people from realizing their potential, freeing up their time for more creative and challenging work, which experts anticipate should lead to employment growth of about 10 percent.
AI research is also exploring and replicating all human senses, including taste and smell, where advances have so far been the most limited but do exist.
Government intervention might be needed
According to a study by Pew Research, 62 percent of Americans think the adoption of artificial intelligence in the workplace will have a "major impact" on workers over the next two decades, and respondents described feeling "wary" and "worried" about the future. Their fears are justified, according to Raj, who predicts that AI will eliminate more jobs than it creates. Governments around the world could soften the blow of reduced wages and unemployment by guaranteeing a minimum income, which they could foreseeably extend to a tenth of the labor force.
2. Creative thought
Young people will be most affected by the changes and need to act accordingly. For example, those who don't want to earn a university degree could enroll in machine technician programs at community colleges to gain marketable skills. Machine operators, a less skilled group, might be made redundant by AI; on the plus side, working as a technician is more rewarding than unskilled labor at a production facility.
Occupations are just one example. The education system will need to change too. In an AI-driven environment, creative thinking will be the most crucial skill to have. Autonomous and continuous learning will be a focus.
3. Moving beyond individual knowledge
AI will transform society with effects that greatly surpass those of the industrial and digital revolutions. As routine tasks become a thing of the past, the workforce will move beyond knowledge. The next stage involves values, consciousness, beliefs, and other elements that exceed knowledge in importance.
Collective knowledge, which humans possess, is superior to what a machine can achieve. Not all knowledge can be transformed into data, and pooling human knowledge, including intuitive knowledge, will always be more potent than AI's output. This collective strength has no counterpart in how AI works.
Humans will come to value the ability to do things machines can’t achieve. Only humans can develop creative vision, for example.
4. Transparency will grow in importance
Fake news peaked in 2019, when around 200 million stories on Facebook were identified as fake. The problem will only be exacerbated now that AI is a factor, so people must ask questions about the ethics, policies, and regulations surrounding the "improved" access to data that AI drives.
Our society will need a new definition of legal liability for damage inflicted by AI. If an artificial agent answers incorrectly or provides misleading information, it must be clear who bears responsibility: the machine's programmer, the machine itself, or some other party.
Privacy issues
Businesses, in particular, need large volumes of data to feed AI and make the most of it as a tool. At the same time, the more data they collect, the more transparency is required of them. Privacy will become more valuable: businesses will need to clarify why they need data, prove they are legally allowed to use it, and obtain the consent of the data subjects.
As privacy becomes more challenging to protect and more valuable, lawmakers worldwide are expected to update regulations. Legislative efforts in this area will grow in importance.
Long-term changes
The short-term negative effects of AI on society should ultimately give way to a sustainable, positive impact on human values. The benefits of AI in healthcare will eventually be felt, most notably in drug discovery, radiology, and pathology. It will become possible to develop many drugs at a much lower cost, potentially yielding cures for previously untreatable conditions.
The volume of digitized data available on patients will increase exponentially, including medical history and information about hereditary illnesses. Precision medicine will become more viable.
Other benefits will include safer transport and improved quality and access to education.
AI will likely continue to lack creativity and compassion, so these values will keep growing in importance over the long term. A renewed focus on empathy, emotional support, and compassion may be what we stand to gain from AI's rise to prominence in every facet of human life.
Programming AI to act on values?
AI comprises computer and robot systems that are adaptive, interactive, and independent, which concerns some observers. At least in principle, it should be possible to program AI that "behaves" according to certain values and can change or modify them based on interactions with the environment. Scholar James Moor defines four ways in which AI agents could function within a moral framework:
- As ethical impact agents
- As implicit ethical agents
- As explicit ethical agents
- As full ethical agents
All robots can be considered ethical impact agents: computer systems that have an ethical effect on the environment they interact with. Implicit ethical agents are programmed using value-sensitive design and act in line with specific values.
Explicit ethical agents can represent ethical categories in machine-readable form and reason over them.
Finally, so-called full ethical agents are the rarest form of AI, to the extent that they exist at all. They would possess free will, consciousness, intentionality, and other typically human traits.
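As a minimal sketch of how this taxonomy might be encoded, the Python below maps Moor's four categories onto an enum and classifies an agent from crude capability flags. The flags and the heuristic are inventions for illustration, not anything Moor proposes.

```python
from enum import Enum

class MoorAgentKind(Enum):
    ETHICAL_IMPACT = "has ethical effects on its environment"
    IMPLICIT = "constrained by value-sensitive design, not explicit reasoning"
    EXPLICIT = "represents ethical categories and reasons over them"
    FULL = "would possess consciousness, intentionality, and free will"

def classify(designed_with_values: bool, reasons_about_ethics: bool,
             has_intentionality: bool) -> MoorAgentKind:
    """Toy heuristic: test the most demanding criteria first."""
    if has_intentionality:
        return MoorAgentKind.FULL
    if reasons_about_ethics:
        return MoorAgentKind.EXPLICIT
    if designed_with_values:
        return MoorAgentKind.IMPLICIT
    return MoorAgentKind.ETHICAL_IMPACT

# An ATM that enforces withdrawal limits was designed around specific
# values but does no ethical reasoning of its own: an implicit agent.
print(classify(True, False, False))  # MoorAgentKind.IMPLICIT
```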
Adaptability: a value or a risk?
Ethical agents may be able to adapt to new values, but creating artificial agents that can act independently and modify their values when interacting with the environment isn’t the real goal. Building ethical sensitivity into them is.
Previous use cases involving adaptable algorithms indicate that adaptability can be both an asset and a risk. The risk increases if the system's learning process is opaque or based on biased or limited experiences, which can produce undesirable outcomes.
That said, to what extent is creating adaptive AI morally desirable? More specifically, what outcomes can we expect if we create AI that adapts to value change?
To answer this question, we might look for meta-values that any form of AI would have to uphold. Meta-values should be ingrained in the artificial agent so that they either remain constant or can be changed only by humans; the agent should be able to adapt its non-meta values autonomously.
Possible meta-values are accountability, transparency, meaningful human control, and reversibility. Transparency would let humans detect that an artificial agent has revised its values, accountability would let them grasp the reasons, and meaningful human control and reversibility would let them reverse the process if needed.
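A minimal Python sketch of what such an architecture could look like, assuming a design in which meta-values are fixed at construction and every autonomous revision of an ordinary value is logged and reversible. The class, method names, and weights are all hypothetical.

```python
from dataclasses import dataclass, field

# Meta-values are frozen: the agent itself may never touch them.
META_VALUES = frozenset(
    {"accountability", "transparency", "meaningful_human_control", "reversibility"}
)

@dataclass
class ValueAdaptiveAgent:
    values: dict[str, float]  # non-meta values, e.g. {"caution": 0.7}
    history: list[tuple[str, float, str]] = field(default_factory=list)

    def revise(self, value: str, new_weight: float, reason: str) -> None:
        """Autonomous revision: allowed only for non-meta values, always logged."""
        if value in META_VALUES:
            raise PermissionError(f"{value} is a meta-value; only a human may change it")
        self.history.append((value, self.values.get(value, 0.0), reason))
        self.values[value] = new_weight

    def rollback(self) -> None:
        """Human-triggered reversal of the most recent revision."""
        value, old_weight, _ = self.history.pop()
        self.values[value] = old_weight

agent = ValueAdaptiveAgent(values={"caution": 0.7})
agent.revise("caution", 0.9, "observed near-miss in simulation")
agent.rollback()  # restores caution to 0.7
```

Keeping the revision log outside the agent's own control would be one way to make the transparency property hard for the agent to undermine.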
Can AI learn human values?
As AI systems already surpass human intelligence in quite a few fields, we need guidelines for aligning human values with this relatively new form of intelligence. This won't be simple, because humans are often incapable of articulating their own values: we know something matters to us but can't explain why, usually because the value was instilled in childhood. Can we expect AI to develop along the same lines?
Humans don’t understand value representation in the brain very much. The mere definition of a value can be a challenge. AI is rooted in data, while human values are an evolutionary byproduct. They are grounded in social sciences like ethics, psychology, and sociology. Values like justice or fairness are not formulated in neuroscientific terms.
Avoiding bias in AI systems by training them on balanced and fair datasets sounds reasonable enough. Yet in many cases, balance and fairness cannot be described with simple data rules: even the simplest question about preference can be answered in multiple ways depending on emotion or context.
Imagine having to infer values like “responsibility,” “happiness,” or “loyalty” based on a set of data. Can data be sufficient to describe those values? To achieve value alignment between humans and AI systems, we need to turn to the social sciences.
You won’t know the answer if you don’t know the question
OpenAI, the company behind ChatGPT, describes the task of AI value alignment as "making sure AI systems can be trusted to do what humans want." Asking the right questions is the best way to probe the reasoning behind a value judgment based on data. The same question has different answers depending on whom you ask: increasing taxes may be better for the government but worse for the individual, and even a question about the weather will yield different answers depending on the group you direct it to.
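A toy illustration of that point: polling the same question within two hypothetical respondent groups yields opposite majority answers, and naively pooling everyone hides the disagreement. The groups, answers, and counts are invented.

```python
from collections import Counter

# Hypothetical poll: "Is raising taxes good?" asked of two groups.
responses = {
    "government_officials": ["yes", "yes", "yes"],
    "individual_taxpayers": ["no", "no", "no", "yes"],
}

def majority_label(answers: list[str]) -> str:
    """Return the most common answer within one group of respondents."""
    return Counter(answers).most_common(1)[0][0]

# The per-group "right answers" disagree...
for group, answers in responses.items():
    print(group, "->", majority_label(answers))  # yes, then no

# ...while a naive global aggregate erases that disagreement: "yes" wins
# overall even though taxpayers mostly said no.
pooled = [answer for answers in responses.values() for answer in answers]
print("pooled ->", majority_label(pooled))
```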
This form of learning is vulnerable to certain judgment limitations, such as deception and ambiguity. People often can’t find the right answer to a question about values because of ethical or cognitive bias, an unclear definition of what is “right,” or simply a lack of domain knowledge.
We might ease the process by eliminating some contextual limitations. Disagreement and uncertainty frequently keep people from finding the right answer; most planning for the future, for example, involves ambiguity.
Deception is another issue. A believable answer can be incorrect in ways that aren't obvious, and misleading or deceptive behavior can cause misalignment between values and outcomes. Achieving human-AI value alignment requires learning to recognize deceptive behavior, which is a significant challenge in itself: it isn't easy to recognize in humans, and it won't be any easier in AI.
According to a recent Bloomberg report, the latest generation of chatbots can mimic deception and lying by giving intentionally false or misleading answers, but they are "only imitating humans."
Sources
- https://www.bloomberg.com/opinion/articles/2023-03-19/chatgpt-can-lie-but-it-s-only-imitating-humans
- https://www.kdnuggets.com/2020/10/ai-learn-human-values.html
- https://www.valuechange.eu/project/artificial-intelligence/
- https://www.afcea.org/signal-media/artificial-intelligence-will-change-human-values
- Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4), 18–21