Index

academic approach, to risk mitigation, 157–158

Accenture, 124

accuracy rates, 76

administrative tasks, 2–3

Agrawal, Ajay, 79–87

AI. See artificial intelligence (AI)

AI agents, 103

AI assistants

human traits in, 100

onboarding of, 125–128

training of, 99–100

AI Canvas, 80–87

Aida, 103–104, 106

Aiden, 209

AI operations team, 61–67

in-house, 65–66

third-party, 66

Airbnb, 168–173

AI systems

data-centric approach to building, 40–43

dependability of, 63

employee embrace of, 117–122

employee involvement in design of, 131, 133

failures by, 147–151, 155–156

feedback to, 132–133

flexibility of, 64

no-code platforms, 189–195

production and deployment of, 39–40, 42, 43

scalability and extendibility of, 64–65

sustaining, 101–102

training of, by humans, 99–100

Albert, 121

Alexa, 99

algorithms, 140

biases in, 167–170

consumer reaction to effects of, 172–173

deployment of, 168

detrimental effects of, 168–170

market conditions and, 168, 170

perception of, by targeted users, 170–172

plan for faulty, 147–151

AlphaGo, 205–206

Amazon, 37, 126, 156

Amelia, 32

Amico, Richard, 1–7

Ammanath, Beena, 179–186

amplification, of human capabilities, 102–103

Andreessen, Marc, 195

Apple, 102, 114–115

artificial intelligence (AI)

See also machine learning

adoption of, 37–40, 136–137

case study of, 225–236

human collaboration with, 97–116

impact of, 97–98, 202–203

implementation of, 125–137

scaling, 217–224

understanding, 33–34, 135–136

AT&T, 115

automation

for micro-decisions, 140–141

of processes, 208

using AI, 98

automation projects

case study, 92–93

choosing, 89–94

AutoNLP, 200

autonomous systems, 140, 144, 145

Babic, Boris, 123–137

Baidu, 37

Beane, Matt, 133–134

biases, 15, 135

in algorithms, 151, 155–156, 161, 167–170

detection of, 161

human, 25, 128–129, 132

sources of, 185

big data, 17–18, 40–41, 48

black-box problem, 33, 100, 121, 136, 182–183

Blackman, Reid, 155–166, 179–186

Borealis AI, 209

business processes

automation of, 27–29

decision-making, 110–111

flexibility of, 106–107

personalization, 111, 114

redesigning for collaborative intelligence, 98–116

for reinforcement learning, 210–211

scalability of, 108–109

speed of, 107–108

business redesign, 104–114

Campbell, Craig, 80

capabilities, assessment of existing, 91

capability building, 90, 93

Carnival Corporation, 111, 114

causality, 19, 24

Center of Excellence (COE) model, 220–221

chatbots, 103, 106, 147–148

Chen, Daniel L., 123–137

classification, 74–75

clustering, 22

coachbots, 132–133, 135

cobots, 104, 106–107

cocreation, 105

Codex, 199, 201

cognition, theory of distributed, 134

cognitive engagement, 31–33

cognitive insight, 29–31

collaboration, 5–6, 223

collaborative intelligence, 97–116

collective intelligence, 134–136

company roles, 114–115

complexity, 22–23

compliance issues, 181, 182

confidence rates, 74

consumer internet companies, 37–40

consumer reactions, to AI algorithms, 172–173

control, loss of, 119, 130–131

corporate culture, 51

Cortana, 99, 103, 104

counterfactual explanations, 136

coupled systems, 134–135

creativity, 5, 102–103

credit approval, 101

credit card fraud, 107–108

cross-validation, 20, 23–24

customer interactions, 103–104

customer service, 31–33, 103–104

customization

cost of, 39, 42

human-machine collaboration for, 106–107

DALL·E 2, 199

Danks, David, 135

Danske Bank, 108

data

analytics, 48–50, 53–59

biased, 185

big, 17–18, 40–41, 48

bottlenecks, 63

curation, 30

enterprise data strategy, 93

ethics, 155–166

feedback, 85–86, 132–133

high-quality, 41, 42

image, 50

input, 85

for no-code platforms, 192–193

overfitting, 22

pattern detection in, 29–31

personal, 102

privacy of, 130, 133

separating signal from noise in, 20–24

small data sets, 38–39, 42

sorting, 125–126

text, 200–201

training, 76, 85, 99–100

use of, by AI, 14, 40–41

visualization, 49

wide, 18, 19, 25

data-centric AI development, 40–43

data compliance officers, 101

data governance board, 159–160

data science, 48, 49, 54

data science team, 47–52

data scientists, 33–34, 51, 53, 55, 71, 218, 222

Daugherty, Paul, 97–116

Davenport, Thomas H., 27–35, 55

decision-making

analytics for, 49

deep learning and, 33

explanations for, 100–101, 136, 163, 182–183

human, 3–4

human in the loop (HITL), 141–142

human in the loop for exceptions (HITLFE), 142–143

human on the loop (HOTL), 143–144

human out of the loop (HOOTL), 144

micro-decisions, 139–145

monitoring, 128–131

prediction and, 18–19

sequential, 206–207, 209–210

in uncertainty, 80

user modeling of, 126–127

using AI, 79–87, 120–121

using collaborative intelligence, 110–111

using wide data, 19

decision-making tools, 139–145

deep learning, 30, 32, 33, 48, 50, 52

DeepMind, 199, 202, 205–206

design thinking, 5

digital twins, 110–111

discrimination, 173, 174, 182

See also biases

distributed cognition, 134

Dreamcatcher AI, 102–103

Drucker, Peter, 131

dystopians, 112

efficiency, 12, 13

Elicit, 201–202

embodiment, of AI, 104

employees

adoption of AI and, 117–122, 123–137

decision-making by, 110–111

fear of being replaced by AI, 123–124

feedback for, 131–132

impact of AI on morale of, 225–236

incentivizing to identify AI ethical risks, 164

negative impacts of AI on, 133–134

new roles and skills for, 114–115

resistance to change by, 119–120

employment opportunities, 101

endowment effect, 172

EQT Ventures, 4–5

Esposito, Mark, 61–67

ethical issues, 34, 101–102, 155–166, 223

academic approach to, 157–158

defining ethical AI standards, 182–183

high-level AI ethics principles for, 159

monitoring, 164–165

“on-the-ground” approach to, 158

operationalizing, 159–165

organizational awareness of, 163–164

risk mitigation for, 179–186

ethical risk framework, 160–161

ethics council, 160, 164

ethics managers, 101

Evgeniou, Theodoros, 123–137

exceptions, in decision-making, 142–143

exoskeletons, 104

expert systems, 33

explanations

See also black-box problem

for AI decisions, 100–101, 163, 182–183

counterfactual, 136

meaning of, 136

extended mind, 134

face recognition, 21

failures, 51

cost of, 156–157

ethical issues and, 155–157

plan for dealing with, 147–151

false negatives, 194

false positives, 194

Fast Forward Labs, 47

Fayard, Anne-Laure, 123–137

feature extraction, 20, 21–22

Feature Stores, 219

feedback data, 85–86, 132–133

foundational models, 199, 202–203

fraud detection, 107–108, 130, 194

Frick, Walter, 47–52

fusion skills, 114–115

game-playing systems, 205–206

Gans, Joshua, 79–87

gap analysis, 91

general artificial intelligence, 202–203

General Data Protection Regulation (GDPR), 100–101

General Electric, 110–111

Ghosh, Bhaskar, 89–94

GitHub, 199

Goh, Danny, 61–67

Goldfarb, Avi, 79–87

Goldman Sachs, 155, 156

Google, 37, 126, 156, 159, 199, 206, 209

governance teams, 159–160, 218, 222, 223

GPT-3, 198–199, 202

Gruetzemacher, Ross, 197–204

Harmer, Peter, 5

health care, ethics in, 161–162

health treatment recommendations, 31

hiring processes, 108–109

home security alarms, using AI Canvas for, 80–87

HSBC, 107–108

Hugging Face, 200

human autonomy, 130–133

human in the loop (HITL), 141–142

human in the loop for exceptions (HITLFE), 142–143

human judgment, 3–4, 25, 84

human learning, 14

human on the loop (HOTL), 143–144

human out of the loop (HOOTL), 144

humans

as AI sustainers, 101–102

assistance of machines by, 99–102

collaboration between AI and, 97–116

for explanations, 100–101

machines assisting, 102–104, 124

Hume, Kathryn, 71–77, 205–213

Hutchins, Edwin, 134

Hyundai, 104

IBM, 155, 156

image data, 50

implementation phases, 125–137

AI assistants, 125–128

coach phase, 131–134

monitor phase, 128–131

teammate phase, 134–136

implicit bias, 185

informed consent, 162

InstructGPT, 198–199

intelligence amplification, 141–142

intelligent agents, 31, 32

intelligent machines, as “colleagues,” 4–5

interactions, 103–104

interoperability, 222

investment decisions, 132

job losses, 34, 112–113, 123–124, 133–134

job opportunities, 101

job replacement, 97–98

job skills, 114–115

judgmental bootstrapping, 126–127

judgment work, 3–4

Knickrehm, Mark, 115

knowledge work, automation of, 90

Koko, 100

Kolbjørnsrud, Vegard, 1–7

labor impacts, 97–98, 123–124

of AI, 112–113, 123–124, 133–134

of language-based AI, 201

language-based AI tools, 197–204

language models, 198–199

large language models, 198–199

lead scoring, 191

legacy industries, use of AI in, 37–40

legal decisions, 128–129, 167

legal issues, 181, 182

LIME, 183

linear regression, 73

machine learning

See also artificial intelligence (AI)

about, 12–13, 48

AI and, 48–49

algorithms, 72–74, 99–100

applications of, 13–15, 18–20, 28–30, 32–33

big data and, 18

cross-validation and, 20, 23–24

feature extraction and, 20, 21–22

limitations of, 15–16

mistakes to avoid using, 24–25

opportunities to use, 71–77

predictive analytics and, 55–59

regularization and, 20, 22–23

supervised, 20, 72–77, 207–208

understanding, 13–16, 17–25, 33–34

unsupervised, 22

machine learning operations (MLOps), 40–43, 218

standardization of, 218–219

teams for, 220–221

tools for, 221–223

management, redefined, 1–7

management options, for micro-decisions, 141–145

managers

creativity needed by, 5

decision-making by, 3–4

knowledge of machine learning by, 17–25

social skills of, 5–6

time spent on administrative tasks by, 2–3

Marble Bar Asset Management (MBAM), 126, 129, 132, 133

Martinho-Truswell, Emma, 11–16

Mason, Hilary, 47–52

maturity models, 90

Mayflower Autonomous Ship, 144

medical care prediction algorithm, 167–168

medical ethics, 161–162

medium-size businesses, AI for, 189–195

Mehta, Nitin, 167–177

Mercedes-Benz, 106–107

micro-decisions, 139–145

Microsoft, 147–148, 159, 199

Mizuno, Takaaki, 61–67

MLOps. See machine learning operations

Model Catalogs, 219

model-centric development, 40–41

NASA, 28–29

natural language processing (NLP), 126, 197–204

applications of, 201–202

capabilities of, 198–199

preparing for future of, 200–203

Netflix, 126, 206

Ng, Andrew, 37–43, 72, 73

no-code platforms, 189–195

“on-the-ground” approach, to risk mitigation, 158

OpenAI, 198–199, 201, 202

operational redesign, 104–114

optimistic realists, 113

optimization, of processes, 208

Optum, 155, 156

organizational awareness, of ethical issues, 163–164

outcomes, of actions, 85

out-of-context model, 24–25

out-of-sample accuracy, 24–25

overfitting, 22

Pallail, Gayathri, 89–94

Pandora, 111

pattern detection, 29–31, 163

performance feedback, 131–132

personal data, 102

personalization, 111, 114

personalized recommendations, 19

pod model, 220–221

Power, Brad, 117–122

Prasad, Rajendra, 89–94

predictions, 18–19

algorithms for, 73–74

biases in, 167–168

false results in, 194

lowering cost of, 80

testing accuracy of, 23–24

using AI, 80–87, 207–208

predictive analytics, 53–59, 73

predictive models, 55

Predix application, 110–111

preferences, 19

privacy issues, 102, 130, 133, 135, 137, 161–162

problem solving, 11–12

limitations of AI, 15

process automation, 27–29, 33, 208

processes

See also business processes

bespoke, 219

for building and operationalizing AI models, 218–219

product data science, 49, 50

production environment, 62–65

productivity, 98, 113, 124

productivity skeptics, 113

product managers, ethical guidance for, 162–163

project opportunities

automation projects, 89–94

spotting, 71–77

projects, sequencing, 91

proof of concept, 39–40, 42, 43

racial disparities, 168–174

RAID (Research Analysis and Information Database), 126

reasoning capabilities, 128

recidivism prediction algorithm, 167

recommendation systems, 31, 49, 50, 125–126

regression analysis, 52

regularization, 20, 22–23

regulated industries, “black-box” issue in, 33

Reilly, Jonathon, 189–195

reinforcement learning, 205–213

applications of, 206–207, 209–210

capabilities of, 207–210

opportunities for, 210–213

relationships, disruption of, 120

report writing, 3

research and development (R&D), 49, 51

resistance, to AI, 119–120, 123

risk framework, 160–161

risk mitigation, 157–165

academic approach to, 157–158

for ethical risks, 179–186

in health care, 161–162

high-level AI ethics principles for, 159

“on-the-ground” approach to, 158

technical tools for, 183–184

robotic process automation (RPA), 27–29, 33

robots, 104, 112

accidents caused by, 148

Ronanki, Rajeev, 27–35

Ross, Michael, 139–145

Royal Bank of Canada, 209

rule-based expert systems, 33

safety engineers, 101

sales prospects, 191

scalability, 64–65, 108–109

scaling AI, 217–224

standardization of model building and, 218–219

teams for, 220–221

tools for, 221–223

Schlesinger, Leonard A., 225–236

Schmidt, Eric, 203

search algorithms, 49

SEB, 32, 103–104, 106

Sedol, Lee, 205

selection, 23

self-determination, 161

self-driving cars, 121–122, 140, 182

sequential decision tasks, 206–207, 209–210

SHAP, 183

shrinkage, 23

Sidewalk Labs, 156–157

Siegel, Eric, 53–59

Singh, Param Vir, 167–177

singularity, 112–113

Siri, 99

small businesses, AI for, 189–195

smart-pricing algorithm, 168–173

social inequalities, 167–168

social skills, 5–6

software, 40, 48, 55–56, 126

accidents, 148

no-code platforms, 189–195

software-centric development, 40–43

software development, 201

software engineering, 50–51

spam filters, 49

speed, 107–108

Spotify, 206

Srinivasan, Kannan, 167–177

stakeholders

engaging, in ethical issues, 164–165

mobilization of, 121–122

multiple, 223

standardization, of AI model building, 218–219

Starbucks, 111

statistics, 17, 19, 73, 74

supervised learning, 20, 72–77, 207–208

sustainers, 101–102

sympathy, 100

talent recruitment, 34

talent shortages, 39, 41

tasks

administrative, 2–3

automation of, 76–77

breaking down, 75

examination of, 75–76

Tay chatbot, 147–148

Taylor, James, 139–145

Taylor, Matthew E., 205–213

teams

AI operations, 61–67

assessing existing capabilities of, 91

capability building of, 90, 93

data science, 47–52, 220

for ethics risk mitigation, 180–181

governance, 159–160, 218, 222, 223

for scaling AI, 220–221

technology optimists, 113

text analytics, 200

text data, 200–201

text summarization, 50

theory of distributed cognition, 134

Thomas, Robert J., 1–7

Thompson, Layne, 4

training, of machines by humans, 99–100

training data, 76, 85, 99–100

transparency, 136, 137, 172

trial and error, 208, 212

trust, 120–121, 135, 137, 174

Tse, Terence, 61–67

uncertainty, 51, 80

unconscious bias, 185

See also biases

understanding, trust and, 135

Unilever, 108–109

unsupervised learning, 22

user modeling, 126–127

utopians, 112

Vanguard, 32

variables, 21–22

Vartak, Manasi, 217–224

Verneek, 201

virtual assistants, 103–104

Waymo, 121–122

wearable robotic devices, 104

West, Tessa, 131

wide data, 18, 19, 25

Wilson, Andrew, 124

Wilson, H. James, 97–116

work, future of, 112–113

worker displacement, 34, 112–113, 123–124, 133–134

Yampolskiy, Roman V., 147–151

Yeomans, Mike, 17–25

Zhang, Shunyuan, 167–177
