6.2 Which situations raise ethical concerns? - Video Tutorials & Practice Problems
Video duration: 14m
All right, so one of the things you might be asking yourself as we've gone through all of this information about AI is: what does this all mean for people? Even today, 70% of financial transactions are initiated by algorithms. In case it wasn't bad enough for newspaper reporters, simple sports stories and stock price articles are now being written by robots. And there are predictions that within 20 years, 50% of today's jobs will be threatened by algorithms, and that within 10 years, 40% of today's Fortune 500 companies might not exist. And this is coming from a publication that you don't actually think of as a fearmonger: it's Scientific American. (chuckling) So these are actually fairly conservative predictions that we're talking about here. And they might make you feel a little bit scared, right? Because what does this mean for people? But here's the thing that people aren't thinking about: all technology actually works this way, and we've seen this all before. In 1800, 80% of the world's population were farmers. In 2020, just 27% were farmers. But that didn't mean you suddenly had 53% unemployment. What happened is that people found other things to do. In some ways, technology created a lot of the jobs that those former farmers went to. We always think that if the old is going away, it will be replaced with nothing, and the truth is that that's usually not true. A lot of times this is kind of a personality test, because you hear people make predictions about AI, and some say, "Hey, AI is gonna do a lot of work that humans have to do now, and we're gonna have a lot more leisure time." And others say, "Well, AI is gonna do a lot of work that humans do now, so we'll all be unemployed." And here's the thing I want you to really understand: those are actually the same prediction. It's just a question of whether you're an optimist or a pessimist.
And so, yes, it's true that AI is gonna do things humans do now. The question is, are we gonna find other things to do, maybe things that we like better? And are there gonna be disruptions from AI? Yes, all technology causes disruptions. It doesn't mean that the future is gonna be wonderful for everybody. But I think people who talk about AI as being this somehow different thing that's going to ruin the world are kind of projecting things that we really don't have any idea about. So, will AI take all of our jobs? No. And especially in the near future, you're not gonna lose your job to AI. But if you don't use AI, you might lose your job to someone who does, because AI doesn't in general automate jobs away, it automates tasks away. It takes away some of the tasks that you would be doing now, and then the goal is for you to replace them with some higher-order things you will do with that extra time, things that will be even more valuable than what you're doing with your time now. So one of the things I think you should walk away with is that the effects of technology are actually pretty hard to predict. As human beings, we're not very good at that. In the 1920s, experts predicted that the growth in phone usage would require all women in North America to become switchboard operators. Now, try to ignore the sexist part of that, because that's how people thought back then: only women could be telephone operators. But be that as it may, it was a ridiculous prediction, because all sorts of technologies were invented, including self-dialing and automatic switching, so that you did not have to have human beings as telephone operators doing the things that they did in 1920. And that's what usually happens: we make predictions by extrapolating the future from the present.
And it turns out that the future is different from the present, because human beings tend to make decisions that avoid really bad outcomes happening to them. When we see things that would be bad happening to us, we actually change the trajectory of where things are going so that things come out a little bit better. Now, as I said, I don't wanna be a Pollyanna here. It's not true that this future is gonna work out for absolutely everybody. The future never works out for everybody, any more than the present is working out for everybody. But when you hear all these apocalyptic predictions of doom and how the world is going to hell in a handbasket, I think you should take those with a grain of salt, because human beings are actually pretty good at harnessing technology and using it to make our lives generally better. Now let's look at some other human considerations that might cause some ethical concerns. Think about the idea of correlation versus causation. A lot of you might not have thought about how those things are different, but what AI typically does is tell you when things are correlated with each other. It says, "When I see pattern A, that means that we want to give a little more weight to this outcome." It doesn't necessarily mean that A causes B. And that's something we have to be really careful about, because just by seeing that two things are correlated with each other, you could jump to the conclusion that one of them causes the other. People actually have to make that determination. I'll give an example from marketing. If I said to you, "Which page on your website is most highly correlated with conversion?" I wonder if you'd know what the answer is. I know the answer, and I don't even know your website. The answer is your thank-you page: the page that you show after someone converts.
So after they check out with their shopping cart, or after they sign up for your newsletter, whatever the conversion was, you have a page that comes right after it. And that is 100% correlated with your conversion. Now, does anyone think your thank-you page causes the conversion? Well, no, obviously not. But if all you do is look at statistics, you might be misled into thinking that. Now, that's obviously a ridiculous example. Thank you very much, I come up with ridiculous examples. But there are examples all the time of people who mistake correlation for causation, just because they're not thinking things through. You, the marketer, have to think things through. Don't take at face value that your model is saying something is causation when it's really only telling you correlation. You, the marketer, you, the human being, have to decide if something is causative. And spurious correlation is a real danger. So continuing with our theme of ridiculous examples, here is a correlation of divorce rates in the US state of Maine and per capita consumption of margarine. You can see that they're correlated pretty well, 99% correlation. That's pretty strong. But does anyone think one causes the other? That margarine is causing divorces? Or that divorced people eat more margarine? Those things don't make any sense, but your machine learning and AI models are going to find these types of correlations. And you as the human being have to use your wisdom, your human judgment, to decide, "Hey, this one doesn't make any sense. These ones maybe do make sense. Let's pay attention to these." Now, there are many other types of ethical concerns that AI can bring up. And outside of marketing, it's fairly common for there to be uses of AI that we need to think about a little bit and see what kind of ethical concerns are there.
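The margarine-and-divorce point is easy to demonstrate numerically. The sketch below uses made-up yearly figures (not the actual Maine or margarine data): two series that merely trend downward over the same decade come out almost perfectly correlated, even though neither has anything to do with the other.

```python
import numpy as np

# Made-up yearly figures (NOT the real Maine/margarine data); both
# series simply trend downward over the same ten years.
divorce_rate  = np.array([5.0, 4.7, 4.6, 4.4, 4.3, 4.1, 4.2, 4.2, 4.2, 4.1])
margarine_lbs = np.array([8.2, 7.0, 6.5, 5.3, 5.2, 4.0, 4.6, 4.5, 4.2, 3.7])

# Pearson correlation coefficient between the two series.
r = np.corrcoef(divorce_rate, margarine_lbs)[0, 1]
print(f"correlation: {r:.2f}")  # very high, yet neither causes the other
```

Any two series that share a trend will correlate like this, which is exactly why a model trained on historical data will happily "discover" such relationships.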
So if you are using AI to help a judge decide prison sentences, well, what if people with lower education exhibit higher recidivism, meaning they're more likely to commit crimes in the future? Should you use that as a factor? Is that actually discriminating against people who weren't able to afford to go to school? How about job applications? What if college sports is a common characteristic of current executives? Is that telling you what you think it is, or is it actually telling you that more men are in college sports and more men are currently executives? Is it actually discriminating against women without you realizing it? How about credit risk? What if people in certain neighborhoods default more on loans? Should you do kind of the AI version of redlining and start to say, "Hey, we're going to change the credit scores of those people just based on where they live"? So there are all sorts of perhaps spurious correlations that might punish people for things that are not their fault. And you have to decide if these things are correlations or if they're actually causative. That's where a lot of the ethical concerns come in. So there are all sorts of areas in AI that have these problems, and AI can make bad decisions. Sometimes it's spurious correlations, but sometimes it's also your data. You remember how much I emphasized focusing on making sure that your data is correct? Well, if you have data that's incorrect or incomplete, you're gonna have problems. A lot of early facial recognition algorithms failed to recognize people of color because they were trained on mostly Caucasian faces. And this is going to cause problems if your data isn't representative. So the goal is that your data needs to sample the problems it's actually going to be put to. All sorts of things can happen when you don't do that. So whether it's spurious correlations or bad data, there are all sorts of ways that AI can go wrong and cause serious ethical concerns.
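The representativeness problem behind the facial-recognition example can be caught with a very simple audit. This is just a sketch with invented group labels and proportions: compare the share of each group in your training data against the population the model will actually serve, and flag any large gaps before training.

```python
from collections import Counter

# Hypothetical demographic labels for a training set, and the mix of
# the population the model will actually serve (all numbers invented).
train = ["A"] * 800 + ["B"] * 150 + ["C"] * 50
population = {"A": 0.55, "B": 0.30, "C": 0.15}

counts = Counter(train)
total = len(train)
for group, target in population.items():
    share = counts[group] / total
    if share - target < -0.05:                 # more than 5 points short
        print(f"group {group}: {share:.0%} of training data "
              f"vs {target:.0%} of population -- under-represented")
```

Here groups B and C would be flagged: the model will see far fewer examples of them than it will encounter in use, which is exactly the failure mode the facial-recognition story describes.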
But some people would tell you that marketing doesn't have any of those concerns. What are we worried about? Who cares if my lead scoring system isn't perfect? It just hurts me, right? Maybe I'm not pursuing the right prospects, and so that might hurt my conversion rate, that might hurt my sales. So what do we care in marketing? We don't have to worry about these ethical concerns. I'd say that's not true. I'd say you ought to really think about what your problem is and think about how your system could actually hurt people in ways that really are unethical. What if the leads in your business are applying for credit? What does it mean for you to score them lower? What if your leads are college applicants? So think about what your problem is, because if you don't think through this now, you might have a mountain of bad publicity later. When it comes out that you're doing the things that you're doing and you haven't really thought through what the ethical implications are, that can really cause a lot of trouble. And you might be tempted to just blame the AI: "Well, you know, hey, a computer error. AI, hey, what are you gonna do?" I gotta tell you, that's not really the right answer. The real problem is that you didn't think this through. If you put garbage data in and get garbage predictions out, who's that on? Is that on the AI? No, that's on you. If you don't use the right data, if you don't use the data right, it isn't the technology's fault, it's your fault. And you need to think this through now. People need to be responsible for what their AI is doing. It's not about just unleashing the technology and letting it run amok. It's about us being really careful about how these decisions are being made. And don't blame the technologists either. These are not the people who are going to assess your business risk. That's not what their job is. Who should assess the risk? Well, you should.
The business team should, the lawyers should, your executives should. And the way to think about this is: if people shouldn't do it, your AI shouldn't do it either. There are some technologies that might come to the rescue, especially an interesting one called Explainable AI. Explainable AI means the model is no longer what they call a black box, where you can't see what's inside it. Explainable AI might still get the wrong answer. It might still be guilty, but it's guilty with an explanation. What Explainable AI can do is not only tell you the prediction and a confidence level, but also tell you why the AI made those decisions: what went into the decisions, what the weights were, what the features were that it saw. So it gives you a rationale for the answer. That can help you identify when things are being done for the wrong reasons. It might show you that certain weights are coming in that you weren't expecting. It might show you that it's putting more weight on something that you thought maybe shouldn't be so important. And all of those things can help you make your model fairer and more accurate. But it's up to you to decide if you need it. Right now, the reason that Explainable AI isn't really mainstream is that so far it isn't as effective as the garden-variety stuff we've been talking about, the kind that doesn't explain itself. So what I want you to be thinking about is that Explainable AI might be something for you to look into if you've got these types of ethical concerns that you need to figure out.
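As a toy illustration of what "guilty with an explanation" buys you, here is a minimal sketch; all feature names and numbers are invented. A plain logistic regression is trained on synthetic lead-scoring data where conversion truly depends only on two engagement features, but a proxy feature (here labeled "neighborhood_code") is constructed to correlate with engagement, so the model picks up weight on it anyway. Inspecting the learned weights, the crudest form of explainability, is what surfaces that.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Toy lead-scoring data; all feature names and effect sizes are invented.
features = ["pages_viewed", "email_opens", "neighborhood_code"]
X = rng.normal(size=(n, 3))

# Ground truth: conversion depends only on the first two features.
logits = 1.5 * X[:, 0] + 0.8 * X[:, 1]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

# Make the third feature a near-duplicate (proxy) of pages_viewed,
# so the model can lean on it even though it plays no causal role.
X[:, 2] = 0.9 * X[:, 0] + 0.1 * rng.normal(size=n)

# Plain logistic regression fit by gradient descent on log-loss.
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))        # predicted conversion probability
    w -= 0.1 * X.T @ (p - y) / n        # gradient step

for name, weight in zip(features, w):
    print(f"{name:18s} {weight:+.2f}")
```

The printout shows a clearly nonzero weight on the proxy feature. A human reviewing those weights can then ask the question the model cannot: is that correlation one we should be acting on at all?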