Competing With Data & Analytics
Customer service and … cognitive computing? Really?
Yes, it’s happening. A recent Forbes article, IBM’s Watson Now a Customer Service Agent, Coming to Smartphones Soon, describes cognitive computing’s growing influence on customer service at a wide range of consumer-facing organizations — think financial services, telecoms, retail and insurance companies. ANZ Bank in Australia offers a glimpse into a present that sounds more fictional than real. Joyce Phillips, CEO of ANZ’s global wealth and private banking group, says that:
ANZ Bank is going to start deploying Watson at its private wealth group, beginning with insurance offerings. “Imagine if you could sit down with an adviser and, in the time it takes to make a cappuccino, Watson will pull up all of your accounts, read all the fine print, and tell you what kinds of insurance protection you’re missing or where you’re overcovered.”
According to IBM’s press release, Watson Engagement Advisor allows companies to “crunch big data in record time to transform the way they engage clients in key functions such as customer service, marketing and sales.”
The IBM Watson Engagement Advisor will help companies make their interactions count by knowing, delivering and learning what each customer wants — in the context of their preferences and actions — sometimes even before the customer does.
The IBM Watson Engagement Advisor “Ask Watson” feature greets, and offers help to, customers via any channel, be it through a website chat window or a mobile push alert, saving consumers the hassle of performing searches, combing through websites and forums, or waiting endlessly for a response about the information they need. Calling upon IBM’s Big Data Analytics technologies, IBM Watson retrieves data about customers to help ensure interactions are tailored to their needs, and searches its corpus of stored information for the best solutions.
But it’s this very capability — the ability to determine consumer wants and desires, even (sometimes!) before customers themselves do so — that raises issues of individual privacy rights. The key question is this: Will Watson’s benefits (and those of other cognitive computational technologies) outweigh the likely public costs of lost privacy?
In Big Data for All: Privacy and User Control in the Age of Analytics, researchers Omer Tene, an affiliate scholar at The Center for Internet and Society at Stanford Law School, and Jules Polonetsky, director and co-chair of the Future of Privacy Forum, explore privacy risks associated with big data:
The harvesting of large sets of personal data and the use of state of the art analytics clearly implicate growing privacy concerns. Protecting privacy becomes harder as information is multiplied and shared ever more widely among multiple parties around the world. As more information regarding individuals’ health, financials, location, electricity usage and online activity percolates, concerns arise about profiling, tracking, discrimination, exclusion, government surveillance and loss of control.
Tene and Polonetsky suggest that in order to balance beneficial uses of data against individual privacy, policymakers must revisit the fundamental concepts of privacy law. This includes defining (or redefining) personally identifiable information (PII), the role of individual control, and the principles of data minimization and purpose limitation.
In their research, Tene and Polonetsky outline a legal framework with two basic tenets to help policymakers protect individual privacy rights in the era of big data. These are:
- Access, Portability and Sharing the Wealth: Provide individuals with access to their data in a machine-readable, usable format, and allow them to take advantage of applications to analyze their own data and draw useful conclusions.
- Enhanced Transparency: Organizations reveal not only the existence of their databases but also the criteria used in their decision-making processes (subject to protection of trade secrets and other IP).
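To make the first tenet concrete: "access and portability" in practice means a service can hand an individual their own records in a machine-readable format they can feed into their own analysis tools. The sketch below is purely illustrative — the function name, record fields, and values are hypothetical and not drawn from Watson, ANZ, or any real system — it simply shows what a JSON-based personal-data export might look like.

```python
import json

def export_user_data(records):
    """Return an individual's records as a machine-readable JSON string,
    so the person can analyze their own data with tools of their choice.
    """
    return json.dumps({"records": records}, indent=2, sort_keys=True)

# Hypothetical examples of the data categories Tene and Polonetsky mention
# (electricity usage, online activity); fields and values are made up.
profile = [
    {"category": "electricity_usage", "month": "2013-04", "kwh": 412},
    {"category": "online_activity", "month": "2013-04", "logins": 57},
]

portable = export_user_data(profile)
print(portable)
```

Because the export is plain JSON rather than a proprietary dump, any third-party application could parse it and "draw useful conclusions" on the individual's behalf, which is exactly the sharing-the-wealth idea behind the tenet.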
The transparency requirement is based on the Federal Trade Commission’s Fair Information Practice Principles of transparency and accuracy, according to the researchers.
The point: “In a big-data world, what calls for scrutiny is often not the accuracy of raw data but rather the accuracy of the inferences drawn from the data. Inaccurate, manipulative or discriminatory conclusions may be drawn from perfectly innocuous, accurate data.”