Craig Martell is head of machine learning at Lyft and an adjunct professor of machine learning for Northeastern University’s Align program in Seattle. Previously, he was head of machine intelligence at Dropbox, led a number of AI teams and initiatives at LinkedIn, and was a tenured professor at the Naval Postgraduate School in Monterey, California. Martell holds a doctorate in computer science from the University of Pennsylvania and is coauthor of Great Principles of Computing (MIT Press, 2015).
Learn more about Craig Martell’s approach to AI on the Me, Myself, and AI podcast.
| Statement | Response | Comment |
| --- | --- | --- |
| Organizations should tie their responsible AI efforts to their corporate social responsibility efforts. | Disagree | “Responsible AI should be a requirement for every model built. It should be exercised as a matter of course, just as building a safe workspace is necessary as a matter of course. It is independent of the larger corporate social responsibility efforts — which are often targeted at specific efforts.” |
| Responsible AI should be a part of the top management agenda. | Strongly agree | “I think it is a false division to separate responsible AI from responsible product-building in general. It is management’s responsibility to make sure a product works safely and as designed for all customers. You wouldn’t be comfortable releasing a product to market that was 100% safe for 80% of the people and only 75% safe for the other 20%. This doesn’t mean AI has to get it right all the time — but the good it brings and the mistakes it makes should be distributed equally over all customers. And there should always be a human escape hatch for when the AI system gets it wrong (and it will).” |