6 Exercise: Project Progress
Video duration: 5m
All right, time to get back to your project. What is your strategy for solving your AI problem within the organization you've chosen?

One thing you can do as you brainstorm what could block your plans is to ask yourself what's gone wrong in your organization in the past with similar proposals. When someone else proposed some new, breathtaking way of doing something, what objections were raised, who raised them, and why? What can your history in the organization tell you about what might go wrong this time?

Another thing to do is a pre-mortem: put yourself in the future and say, "If the project failed, what caused it to fail?" If this went wrong, what derailed it? Sometimes placing yourself in the future and imagining looking back can help you figure out what your obstacles really are.

As you come up with obstacles, remember the power of why. If your obstacle is "we just don't have the funding," that doesn't feel very actionable. So ask yourself why. "Why don't you have the funding?" "Well, it's targeted for higher priorities." "Why?" "Well, management doesn't see the value of AI." "Why?" "Well, they don't see how it'll make us money." "Why?" "Well, we haven't shown them the ROI." "And why haven't you shown the ROI?" "Well, it's true that we probably haven't chosen the right project." That feels a lot more actionable, because now you can say, "If we really choose a project with good ROI, we'll show them how it makes money, they'll see the value, and we'll get the funding that's currently targeted for higher priorities, because we'll become the higher priority." Keep asking why until you get to a root cause, something actionable that you can really do something about.

The other thing to ask yourself is: how risky is it if your AI model makes a mistake? Obviously, you don't want it making mistakes all over the place; you don't want it to be accurate 52% of the time. But your AI model is probably only going to be accurate 80% of the time, maybe a bit more. So the question is, how accurate does it need to be, and how bad is it when it makes a mistake? How bad is a mistake in terms of business impact or embarrassment in front of your customers? How much will it really cost in money, time, and maybe reputation? Also ask yourself: is one way of making the model more accurate to solicit feedback from your users, and is that a safe thing to do? Will your users really do a good job of telling you when it made a mistake, or will it just be their opinion, which won't make your AI any better?

Last, I want you to think about the ethical concerns that affect your plan. Is there anything about the project you're doing that could be problematic, embarrassing, or unfair? And not just in your first initiative, but in the whole strategy of everything you want AI to tackle. Would explainable AI help in your situation? Maybe you need to pursue that.
Or maybe you just need to focus on correlation versus causation, on making your data very accurate, or on your feature analysis, to make sure you're not using features that might be discriminatory in certain situations. Really think about what ethical concerns might affect your plan. That's your job, not the technologist's. You can't just say, "Hey, the technology ran amok. What was I supposed to do?" This is what you were supposed to do, so do it up front.

So, let's summarize what we want you to do here as you continue working on your plan. First, identify any type of obstacle that could derail your plan; doing that will prepare you to put together a plan that overcomes those obstacles. Think about what has gone wrong in the past that might help you spot them. Second, assess the risk to the organization: when your AI makes a mistake, how bad is it, and what impacts will really come from those kinds of mistakes? Finally, brainstorm ethical concerns for both your first AI initiative and your overall strategy. For the problems you hope AI will solve, are there any ethical dilemmas you ought to tackle now, and might explainable AI actually help you overcome them?