CHAPTER 2
What Does It Mean to Be Human in the Era of Artificial Intelligence?

A child may ask, “What is the world's story about?” And a grown man or woman may wonder, “What way will the world go? How does it end and, while we're at it, what's the story about?”

—John Steinbeck, East of Eden

Since the beginning of human cognition, humankind's search for meaning has produced works of art and literature, breakthroughs in science and technology, and voyages across oceans and into outer space. This search for meaning is so fundamental to the human experience as to exist independent of Maslow's hierarchy of needs, as evidenced by cave paintings from the Paleolithic Period, the Pyramids of Giza, and Wolfgang Amadeus Mozart composing his Requiem on his deathbed.

Despite this search's antiquity and the many cultures and individuals who believe that the question of the meaning of humankind has been answered, no collective answer has been found or agreed upon. Perhaps this is why scientific and technological advancements, in particular, recenter the human gaze on the age‐old question: What does it mean to be human?

  • What does it mean to be human if the Earth is not the center of the universe? (1543)
  • if we can journey beneath the ocean? (1776)
  • if we can travel to the moon and beyond? (1969)
  • if machines can beat us at chess? (1997)
  • if machines can create art? (2018)
  • if machines can write convincingly? (2022)

This is not a question asked at the natural history museum. The fact that the upper‐body strength of a gorilla is six times that of an adult human or that whales have larger empathy centers in their brains does not raise an existential question for humankind.

Yet when a new technology is introduced, if it bears any resemblance to what has previously been considered an exclusively human capability or skill, it is met with fear and/or skepticism. As if what it means to be human were a checklist.

When one closes one's eyes and ponders what it means to be human, what comes to mind might be a memory of running through a forest in the golden hour of the afternoon, jumping into a lake amidst the laughter of friends, or being sung to sleep by a loved one. These kinds of experiential memories are building blocks of how we assign meaning to our lives, and, although they reflect what it means to be human, they do not define the human experience. In other words, if someone sang to their computer every night, the computer would not become more human because it experienced the same thing a human child does.

Society stands on the precipice of the next great era of transformation, fueled by technology that is orders of magnitude more complex and powerful than its predecessors. Addressing the existential question of what it means to be human in this era will lay the foundation for culture, the future of which is intertwined with economic growth, both inside and outside of organizations. It will also create common language and focus for coalitions across public and private sectors, nonprofits, and academia to address society's fundamental challenges. Lastly, it will provide individuals with tools to find meaning and grounding in the human–machine paradigm and its impact on the future of work and, more importantly, individual purpose.

It is not within the scope of this book to present a definition of what it means to be human. Rather, this book will highlight the need for these discussions to take place, propose frameworks for clarifying the distinction between humans and machines, and provide parallel economic, strategic, and technological frameworks and principles to guide individuals and organizations into and through their Autonomous Transformation journeys. These frameworks can be leveraged at the whiteboard when building a technological road map, in the boardroom when discussing the transformation agenda, or in a café between sips of coffee.

These frameworks lay the foundation for creating a more human future, which requires a mutual understanding of what it means to be human and therefore what would contribute to a more human future.

The Pain of Uncertainty

In 2016, a group of researchers in London performed a study in which participants were presented with an image of a rock and asked to guess whether there was a snake underneath the rock. After their guess, the correct answer was displayed (either an image of a snake, or text that read “No snake”), and regardless of the accuracy of their guess, each time a snake was presented, the participant would receive a painful electric shock on the back of their left hand. Throughout the experiment, the researchers altered the likelihood that a snake would appear, and observed a link between a higher degree of uncertainty and acute stress responses. They concluded that stress responses are tuned to environmental uncertainty and have a direct impact on task performance.1

This is one of dozens of studies that have linked the experience of uncertainty to physiological impacts, ranging from registering in the brain as physical pain to decreased performance and an impaired ability to learn.

In the context of the era of artificial intelligence, uncertainty bears a significant cost for society, for employees, and for leaders. Given its physiological implications and its impact on the ability to learn and perform, there is a strong business case for reducing uncertainty within the organization, the ecosystem, and society.

In Microsoft's transformation, which began in 2014 when Satya Nadella took over as chief executive officer, this paradigm was memorialized by a nontraditional human resources leader, Joe Whittinghill, who established the leadership principle “Create Clarity” to address the psychological drive for certainty, especially in times of change. Where certainty may not be possible, clarity nevertheless is, as encapsulated in one of Microsoft's leadership mantras: “Get bad news out fast.”

In the era of artificial intelligence, if you are a manager or organizational leader and there is a possibility that your team members are experiencing uncertainty about the future of their livelihoods against the backdrop of technological upheaval, there are both economic and moral reasons to create and communicate clarity.

In the context of the broader market, this phenomenon can be observed in real time with the latest technologies at any given point. With each technological breakthrough, humans experience a cycle of existential reconciliation, navigating uncertainty about the social, economic, and experiential impacts of each new breakthrough.

This dynamic creates a financial risk that should be monitored closely, as the desire for certainty can lead to hasty transactions or misplaced trust in advisors who, regardless of intentions, may not have the expertise to deliver the certainty being sought.

Leaders who approach each new technological breakthrough that reaches the public discourse with a focus on generating clarity for their stakeholders, organizations, and team members will realize an economic benefit and contribute to a healthy workplace in which team members can focus on doing their best work while continuing to learn and grow.

Capability

The first and most basic distinction between humans and machines is capability.

Watching machines at work in a manufacturing plant is an unforgettable experience. When they perform as designed, they move with speed and precision, at times lifting enormously heavy objects or cutting through material with blades or lasers. They perform these tasks tirelessly, pausing only to resolve errors, accommodate production schedule changes, or receive fixes and upgrades.

In parallel, systems in the banking industry detect fraudulent credit card charges by wading through oceans of data in milliseconds, analyzing against an individual's spending pattern, location, recent charges, and a number of other parameters to determine whether a charge should be approved or declined.
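To make the kind of analysis described above concrete, here is a minimal sketch of a rule‐based risk score. All field names, weights, and thresholds here are hypothetical illustrations, not drawn from any real banking system; production systems typically rely on machine‐learned models over far more parameters.

```python
from dataclasses import dataclass

@dataclass
class Charge:
    amount: float
    country: str
    minutes_since_last: float

def fraud_score(charge: Charge, avg_amount: float, home_country: str) -> float:
    """Combine simple signals into a 0-1 risk score (illustrative weights)."""
    score = 0.0
    if charge.amount > 3 * avg_amount:       # unusually large vs. spending pattern
        score += 0.5
    if charge.country != home_country:       # location mismatch
        score += 0.3
    if charge.minutes_since_last < 2:        # rapid-fire charges
        score += 0.2
    return score

def decide(charge: Charge, avg_amount: float, home_country: str,
           threshold: float = 0.6) -> str:
    """Approve or decline based on the combined risk score."""
    return "decline" if fraud_score(charge, avg_amount, home_country) >= threshold else "approve"
```

A charge far above the cardholder's average, from an unfamiliar country, moments after another charge, crosses the threshold and is declined; the point is that each signal is evaluated in milliseconds, at a scale no human reviewer could match.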

In Figure 2.1, the capabilities in which a machine can be considered distinctly better than a human map to the two examples above. If humans were to create products in a manufacturing plant without machines, it would take significantly longer, leading to longer wait times and increased prices, with the potential of rendering the products economically unfeasible. Likewise, analyzing fraudulent credit card charges at the scale machines achieve would require an enormous number of human workers and would inevitably take much longer.

There is a relatively little‐known example of human ingenuity that involves a violinist and a potato chip manufacturer. During the manufacturing process, chips are dipped in grease, which lingers and must be significantly reduced before the chips can move into the next stage of the process. The manufacturer believed a better process could be developed than their existing approach, which was effectively to shake the chips so that grease would slide off. Finding the balance between maximum grease removal and breaking the fewest possible chips was an ongoing and costly challenge.

The chip manufacturer put out a request for proposals for solving this problem, and inevitably received myriad technology‐based proposals to slightly improve the process. The winning proposal, however, came from a violinist who proposed finding the resonant frequency of the grease, and playing a sound at that frequency that would vibrate the grease and not the chip.2 The approach was adopted, creating exponentially more value than the manufacturer had envisioned.


Figure 2.1 Capabilities of Humans and Machines

The ability to imagine and create is among the most fundamental of human characteristics. Artists across media, geographies, and throughout history have created renderings from the fantastic to the mundane that have directly and indirectly shaped cultures. This speaks to the left side of Figure 2.1, where empathy and asymmetrical thinking, for example, have been paired to imagine and create meaningful and impactful art.

For some readers this may bring generative artificial intelligence to mind, and DALL·E 2 and ChatGPT are great examples on which to practice this framework. For those who may not be familiar with DALL·E 2 and ChatGPT: DALL·E 2 renders images that have never existed but look like an artist or photographer created them, and ChatGPT writes text that is convincing enough for a reader to believe that a human could have written it.

At first glance, because each of these models can be focused on the subject of art, it might appear that machines have now developed the capability to imagine and create. At second glance, however, particularly at the technological underpinnings of these models, one can observe that they have been developed with enormously large sets of representative data and examples (DALL·E 2 consists of 3.5 billion parameters and GPT‐3 consists of 175 billion parameters), the patterns of which are then leveraged to generate an image or a passage of text. Through the lens of the diagram in Figure 2.1, DALL·E 2 and ChatGPT have reached new heights of elegance when it comes to the application of pattern recognition and speed, which falls squarely onto the right side of the diagram.

What would it look like for a machine to exhibit human‐level creativity? It would need to move beyond imitation based on instructions or input (e.g., “A painting of a distinguished family of golden retrievers in the style of Rembrandt”3) to creation—bringing something original into existence through imaginative skill. As you read this, you are likely sitting amidst an overwhelming accumulation of applied human creativity. Maybe you are on a flight, hurtling through the air because the Wright Brothers invented a way to defy gravity. Perhaps you are nestled in an armchair by a fireplace because humans have innovated methods to tame fire and bring it into our homes for warmth and ambience. If you are reading this on a digital screen, there are not enough pages in this book to list the technological and scientific breakthroughs orchestrated to create what you are holding in your hands.

The role machines have played in all of this creation is remarkable utility in following instructions and extending human capability, both in the digital and physical spheres. The surface area to which human creativity can be applied has been expanded by several orders of magnitude thanks to machines.

An example of this in action is the development of a new product. A human applies empathy and imagination to conceive of a new product. This product idea is then analyzed for viability. This involves steps such as determining whether there is a market for this new product, whether there are already competitors or preexisting patents, what it would cost to create the product, what could be charged for the product, whether there are channels for distribution, and so on. The application of machines in these analyses greatly expands the scope of research while simultaneously reducing the amount of time required. Throughout this analysis, the human is orchestrating several analytical methods across many different systems, all the while generalizing across these analysis points as answers are found—accruing to a mental model that ultimately determines whether the development of this new product is a worthwhile pursuit.

This is an example of the inherently complementary capabilities of humans and machines at their best. Humans can achieve more both by adding machine capabilities to their solution set and by offloading manual, repetitive tasks. This is possible because machines are capable of reaching parity with humans at perception tasks: vision, transcribing speech, translating, and reading.

Consciousness

The second distinction between humans and machines is consciousness. The first principle of the philosophy of René Descartes, a seventeenth‐century French philosopher, is “Cogito, ergo sum”—“I think, therefore I am.” This is a critical distinction between humans and machines that remains intact as of this writing. There is not a machine in public existence that is conscious of itself from an existential perspective. That is a metaphysical phenomenon that has not been created and could not be achieved by accident (if at all).

As Edith Elkind, a computer science professor at the University of Oxford, put it, “Machines will become conscious when they start to set their own goals and act according to these goals rather than do what they were programmed to do. This is different from autonomy: Even a fully autonomous car would still drive from A to B as told.”4

Machines do not possess the fundamental building blocks of the human psyche. They have no instinctive desires. This can be easy to forget for those familiar with reinforcement learning, a branch of artificial intelligence in which negative and positive reinforcements are leveraged, but there has yet to be any indication that a machine craves positive reinforcement. Machines instead follow instructions, with positive and negative reinforcements indicating whether they are closer to completing the instruction and whether another attempt is required.
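This point can be made concrete with a toy example. In the sketch below (a simple two‐armed bandit with invented payoff probabilities, offered only as an illustration), the “reward” is just a number fed into an update rule: the program mechanically adjusts its value estimates and ends up favoring the better arm, with nothing resembling craving anywhere in the process.

```python
import random

def run_bandit(steps=2000, seed=0):
    """Two-armed bandit: the 'reward' is just a number fed into an update rule."""
    rng = random.Random(seed)
    true_means = [0.2, 0.8]   # arm 1 pays off more often (illustrative values)
    q = [0.0, 0.0]            # the agent's value estimates
    counts = [0, 0]
    for _ in range(steps):
        # Epsilon-greedy: mostly pick the best-looking arm, occasionally explore.
        arm = rng.randrange(2) if rng.random() < 0.1 else q.index(max(q))
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        q[arm] += (reward - q[arm]) / counts[arm]  # incremental average update
    return q
```

After a few thousand steps the estimate for the better arm dominates and the agent favors it, not because it wants the reward, but because the arithmetic of the update rule steers it there.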

In 1980, John Searle, an American philosopher, created a thought experiment called the Chinese Room Argument to demonstrate the narrowness (and therefore lack of consciousness) of machines that merely process symbols. It entails a person sitting alone in a room into which slips of paper bearing Chinese characters are passed under the door. Following a book of instructions, the person selects characters to slip back under the door in response, leading those outside to mistakenly believe that there is a Chinese speaker in the room. The takeaway from this thought experiment, simplified, is that the fact that a machine can produce convincing responses in Chinese does not mean that the machine understands Chinese.
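The room can be caricatured in a few lines of code. In this toy sketch (the rulebook entries are invented for illustration), the program returns fluent‐looking replies by pure lookup, with no representation of meaning anywhere:

```python
# A toy "Chinese Room": a rulebook mapping input symbols to output symbols.
# The program follows instructions; nothing in it understands Chinese.
RULEBOOK = {
    "你好": "你好！",            # "hello" -> "hello!"
    "你会说中文吗？": "会。",    # "do you speak Chinese?" -> "yes."
}

def room(symbols: str) -> str:
    # Look up the reply; fall back to "please say that again."
    return RULEBOOK.get(symbols, "请再说一遍。")
```

To an outside observer the replies look competent, yet the symbols could be swapped for any others without the program noticing, which is precisely Searle's point.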

Consciousness is a more sensitive topic than capability, as it touches on a deeper question, particularly for those with a faith or religious background. The idea that machines could reach a point of consciousness at parity with humans can be interpreted to challenge the idea that the world was created by a higher power. It is important to be aware of this potential sensitivity when approaching this topic so as not to preclude the opportunity for meaningful and productive discussion.

It is worth noting here that leaders in technology and science have spoken publicly about concerns of the risk of machine consciousness. The risk machine consciousness would pose to humankind is inarguably high and unpredictable. However, the likelihood of machine consciousness being developed is a different story, as it lacks economic viability. Achieving this kind of technological breakthrough would likely require unbelievable amounts of data, the best and brightest minds from around the world, significant computing resources, and several preceding breakthroughs (such as quantum computing). At the end of that road of investment, there is little to no indication that profit would await the investing company or independent investor, and a high likelihood of existential risk. This, paired with the question as to whether it is even feasible in the first place, makes the development of machine consciousness unlikely in the present era.

Notes

  1. A. O. de Berker, R. B. Rutledge, C. Mathys, L. Marshall, G. F. Cross, R. J. Dolan, and S. Bestmann, “Computations of Uncertainty Mediate Acute Stress Responses in Humans,” Nature Communications 7 (March 29, 2016): 10996. doi:10.1038/ncomms10996.
  2. Another interesting story on resonating frequencies: Nikola Tesla developed an earthquake machine leveraging the resonating frequency of the earth.
  3. DALL·E 2's answer to this prompt can be found at brianevergreen.com/woofington
  4. M. Weisberger, “Will AI Ever Become Conscious?,” LiveScience, May 24, 2018, https://www.livescience.com/62656-when-will-ai-be-conscious.html (accessed January 15, 2023).