1-800-Flowers faces the same cold-start problem any consumer-facing business might face: It doesn’t know exactly what its customers need when they first come to its website. What is unique to the platform, which operates through a network of local florists and affiliates worldwide, is that each time a customer comes to its site, they may have a different end goal in mind. Consumers shop for gifts and floral arrangements for occasions as varied as funerals, birthdays, and holidays, which can make it difficult for technology to recommend the best product during a specific online shopping session.
In Season 2, Episode 3, of the Me, Myself, and AI podcast, 1-800-Flowers president Amit Shah explains the company’s unique challenge as a platform business and engagement brand facing this perennial cold-start problem. He also shares his insights into why marketers may have a leg up in working with AI and machine learning, how to foster a team of curious learners, and why it’s important to tolerate failures.
Read more about our show and follow along with the series at https://sloanreview.mit.edu/aipodcast.
Subscribe to Me, Myself, and AI on Apple Podcasts, Spotify, or Google Podcasts.
Transcript
Sam Ransbotham: Flowers are not digital products at all, but digital technologies like artificial intelligence still offer considerable value to companies that sell nondigital products. In today’s episode, Amit Shah, president of 1-800-Flowers, describes how AI and machine learning, offered through a platform like 1-800-Flowers, can empower small businesses to compete with much larger organizations.
Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I’m Sam Ransbotham, professor of information systems at Boston College. I’m also the guest editor for the AI and Business Strategy Big Ideas program at MIT Sloan Management Review.
Shervin Khodabandeh: And I’m Shervin Khodabandeh, senior partner with BCG, and I colead BCG’s AI practice in North America. Together, MIT SMR and BCG have been researching AI for five years, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build, deploy, and scale AI capabilities and really transform the way organizations operate.
Sam Ransbotham: Today we’re joined by Amit Shah. Amit is the president of 1-800-Flowers. Amit, welcome. We’re excited to learn more about what you’re doing.
Amit Shah: Thank you so much, Sam and Shervin. Great to be here.
Sam Ransbotham: Amit, we talked earlier and discovered that we all had some connection to the state of Georgia — you and Shervin and I. But now you’re in Long Island; you’re the president of 1-800-Flowers. Tell us a little bit about your career path to get there.
Amit Shah: I started out as an analyst at McKinsey, which is very well known for helping teach problem-solving both individually and collectively at scale, and then worked at a range of startups in the Northeast area and ended up being part of [the] Liberty Media/ProFlowers group at an early stage of my career and got very much involved, I would say, in the front seat of growth hacking. That’s what led me to [my] current career, as I’ve seen growth hacking, and the hacker’s mindset of looking for continuous change, continuous upliftment, and a continuous desire to provide the best customer experience, both prevail and ultimately become the most critical element of any C-suite or boardroom.
Sam Ransbotham: Amit, starting in a marketing role and then transitioning to president — that marketing background must’ve made a difference. How has being from marketing affected your role as president?
Amit Shah: That’s a great question, Sam. Evolving into the role of leading multiple functions, departments, and colleagues outside of marketing, what really stand out [are] two key elements. The first thing is that I think marketing traditionally was seen as one of the functional competencies in empowering and accelerating problem-solving. Actually, marketing is becoming a growth function, and growth is becoming the key differentiator [for] companies. So I expect to continue to see an acceleration of marketing leaders actually taking on much more leadership responsibility and ultimately starting to steer the ship, because growth has become the essential currency and essential differentiator. So that is one vector. And then I would say the second vector that informs me and [is] prescient to our AI conversation is that I feel like people in the marketing sphere are actually much more contextually aware and contextually practicing advanced problem-solving using machine learning than a lot of their peers. I think there’s a unique ability and a unique mindset that I bring to this leadership role as president of 1-800-Flowers, having been exposed to that quality and quantum of problem-solving, which is surrounding a lot of growth and marketing leaders around us.
Sam Ransbotham: Let’s address the obvious question. Flowers themselves aren’t digital at all. How is 1-800-Flowers using digital data and AI in particular?
Amit Shah: Currently, we are a platform of 15 brands. If you think about it, we are a platform that empowers engagement. We play across the full spectrum of human expression and engagement, starting from birthdays all the way to sympathy and everything in between, so that is who we are. We think that we have built an all-star range of brands to really power an engagement platform. So if you think about what differentiates modern organizations, it is not just the ability to adopt technologies, which has become table stakes, but the ability to out-solve their competitors in facing deep problems. So when I think about AI, I think about our competitiveness on that frontier. Are we better problem solvers?
I’ll give you a terrific example of that. When I started my career 20 years ago as a young analyst at McKinsey, there was a clear differentiator between people who [were] masters of Excel and who [were] not. It was a tool that empowered decision-making at scale and communication of the decision-making. When I think about AI and its power five years down the road, I think every new employee that starts out will actually have an AI toolkit — like we used to get the Excel toolkit — to both solve problems better and communicate that better to clients, to colleagues, to any stakeholder. AI to me is not a skill-set issue; it is a mindset issue. And over the long term, companies that adopt and understand that this is a mindset and a skill-set game actually will be the ones that are more competitive than their peer reference group.
Shervin Khodabandeh: That’s super insightful and right on, and it’s almost exactly the conversation that we were having with Will [Grannis] from Google. It is all about mindset, yet it’s quite daunting, I would say, that so many companies are still viewing it as a technology issue, as a technology black box, as “We need a lot of data; we need a lot of data scientists.” Of course you do need those, but you also need to focus on the right problems, on how you define those problems and go about solving them, and then on how you change the mindset of the actual users. Because I could imagine you have been able to shift the mindset of two kinds of groups: the mindset of the consumers, where 20 years ago, they wouldn’t share information on birthdays and things like that with any digital platform, but of course, you’ve changed that mindset, including myself; but you also must have changed the mindset and the ways of working of so many mom-and-pop local florists that are actually engaging with a platform to do business. Can you comment on that a little bit and how hard [it] was to do that?
Amit Shah: I think, Shervin, you’ve brought up a very important crux of this issue. When we talk about a mindset shift, a metamorphosis of acceptance of mindset over even skill set, it really requires a multistakeholder approach. And certainly, we are very proud that we support a community of 4,000-plus florists who are powering Main Street businesses. And I would say one of the last remaining outposts of successful Main Street businesses in the U.S. is the florist. And it plays a very important part in the community, not just as a trusted provider to all the important occasions for everyone in the community, but also, I would say, as a [signpost] of how the context with AI is evolving.
Let me give you a few examples of it. The question comes down to, is the context around me getting more competitive and evolving? And I would say, for the small florist and a company like us, being surrounded by platforms like Facebook and Google, which are auction-rich machine learning environments set up to extract the highest yield per click, means that any business owner that is seeking growth, that is seeking to get in front of customers, already is being mediated by machine learning and artificial intelligence. So when I think about this multistakeholder empowerment, I think about how do we empower the smallest florist in [the] heartland of America [to] compete with this evolution of context? How do we empower that small business entity to get to that strength? And I think that’s where the mindset comes in, because what this requires is, first of all, understanding that the context is already rich in AI and ML. The second point is that, unless you can assemble a response to it, you are always on the losing side. So our thinking is that by providing [that] suite of services, by providing and working very closely with our florist community, our supplier community, we are actually providing them relevance in a rapidly evolving context, where getting in front of their customers itself is a machine learning problem.
Shervin Khodabandeh: How do you go about doing that? How much of that is technologically driven through the platform, and how much of that is good old-fashioned human [elbow] grease and relationship management and working closely with these little places? How much of that was technology solving the problem versus people and processes and change management and those kinds of things?
Amit Shah: A very strong starting point is realizing how [you can] basically collect data and make inferences at scale. So I’ll give you a simple example. To set up a reminder program on our platform is actually a perpetual cold-start problem, and let me explain what that means. It means that, for example, if you come to our website, or you go to any florist’s website, and let’s say you have come in to express happy birthday to your sister whose birthday is a week away, and you might come and pick an arrangement. Let’s say she loves white calla lilies, and you come and do some clicking on white flowers, white arrangements, and then pick a calla lily arrangement and send it to her.
Most companies will take a record of that data and say that the next time Shervin comes to our site, let’s show him white, for example. But it could be that your next visit is actually right before Valentine’s [Day]; you’re here to buy flowers, which are predominantly red or pink for Valentine’s [Day], and you’re trying to express that, so your entire click history, your entire corpus of digital breadcrumbs that you have given us to solve a machine learning problem, is actually irrelevant, because you’re starting again as a cold-start outcome. And this fact of personalization, the enormity of data, the enormity of decisions required to resolve this outcome so that you can create a better customer experience, is what we are empowering our stakeholders to really realize, so that is one dimension of it.
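To make that occasion dependence concrete, here is a minimal sketch of a session recommender that blends occasion-level popularity with a shopper’s past clicks, discounting the history whenever its occasion does not match the current visit. This is an illustration only, not a description of 1-800-Flowers’ actual system; the occasion labels, weights, product names, and function names are hypothetical.

```python
from collections import Counter

def score_products(click_history, current_occasion, catalog, top_n=5):
    """Score catalog items for the current visit.

    click_history: list of (product_id, occasion) pairs from past sessions.
    current_occasion: occasion inferred for this visit (e.g. "valentines"),
        or None when nothing about the visit is known yet.
    catalog: dict mapping product_id -> set of occasion tags.
    """
    scores = Counter()

    # Occasion-level popularity: what shoppers with this intent tend to buy.
    for product_id, tags in catalog.items():
        if current_occasion and current_occasion in tags:
            scores[product_id] += 1.0

    # Personal history counts only insofar as it matches this visit's occasion;
    # otherwise it is heavily discounted -- the "perpetual cold start" above.
    for product_id, past_occasion in click_history:
        weight = 0.5 if past_occasion == current_occasion else 0.05
        scores[product_id] += weight

    return scores.most_common(top_n)

# A shopper whose history is all sympathy arrangements visits right before
# Valentine's Day: the occasion context outweighs the stored click history.
history = [("white_calla_lily", "sympathy"), ("white_roses", "sympathy")]
catalog = {"red_roses": {"valentines"}, "white_calla_lily": {"sympathy"}}
print(score_products(history, "valentines", catalog))
```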
The second dimension of it is what we talked about — that currently customers are intermediated by extremely expensive, I would say, auction-rich environments controlled by a few major platforms, and to play in those platforms you need to have a baseline competency. So we employ a lot of advanced algorithmic trading and algorithmic models, for example, to understand what should be your bid at any given time of the day, day of the week, and month of the year in order to maximize your yield and minimize your CAC [customer acquisition cost]. And those data sets, that sophistication, that investment is almost outside the realm, I would say, of a lot of localized businesses and outcomes.
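The bidding problem Shah describes can be sketched in a similarly hedged way: assuming per-slot conversion rates have been estimated from historical data (an assumption, not something detailed in the episode), each bid can be capped so that the expected cost per acquired customer stays at or below a target CAC. All names and numbers below are hypothetical.

```python
# Hypothetical per-slot conversion-rate estimates, keyed by (weekday, hour);
# in practice these would come from a model fit on historical click data.
SLOT_CONV_RATE = {
    (6, 10): 0.020,   # Sunday, 10 a.m.
    (4, 21): 0.008,   # Friday, 9 p.m.
}

def choose_bid(weekday, hour, target_cac=25.0, avg_order_value=60.0):
    """Return a per-click bid that keeps expected acquisition cost near target_cac."""
    conv_rate = SLOT_CONV_RATE.get((weekday, hour), 0.005)  # prior for unseen slots
    if conv_rate <= 0:
        return 0.0
    # If a fraction conv_rate of clicks convert, paying target_cac * conv_rate
    # per click spends roughly target_cac per acquired customer.
    bid = target_cac * conv_rate
    # Never pay more per click than the click's expected revenue.
    return round(min(bid, avg_order_value * conv_rate), 2)

print(choose_bid(weekday=6, hour=10))  # Sunday morning slot -> 0.5
print(choose_bid(weekday=4, hour=21))  # Friday night slot   -> 0.2
```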
So this question of building alliances, this question of trusting larger entities, is going to become also more important over time. So when we think about our mission and our vision, we are inspired by what part can we play in catalyzing those outcomes, in empowering, in accelerating those outcomes. Because whether we are talking about florists on Main Street as one of the last remaining independent, important businesses in America, or we think about someone who is trying to get to a funeral home to express something very personal to them, those moments define us and define the communities that we live in. And we think that we have a strong part to play in helping realize that vision. And we look at that vision not just as a financial or a transactional outcome, but we look at that as an outcome for the whole society. For example, we have free e-cards that you can come to our site right now and send. We really want you to just literally express to someone that, hey, you are thinking of them, because we think that it’s way more important for us to appreciate and empower that expression that over time hopefully leads you to have a deeper connection with us as a brand, deeper connection with us as a platform, and then use us to express that emotion. But the empowerment of emotion in and of itself is a very important part of our mission and our vision.
And going back to AI, and the reason I talked about solving fundamental personalized problems at scale, is that all of our expressions are ultimately personalized expressions. So unless you are employing and deploying those technologies and the mindset that customers are here to express and connect, you are not going to be looking at the problem or the solution in the way that empowers that end customer first.
Sam Ransbotham: Is there something specific in your background or education that shaped how you think about that, how you embody that customer-first mindset?
Amit Shah: I think it was a mix of my liberal arts education and a desire to push problem-solving as a key characteristic and an attribute of my skill set as I moved through the various leadership challenges and ranks.
One of the key lessons that I took away from my liberal arts education at Bowdoin was around the importance of this learning quotient and having an LQ-first mindset, because what liberal arts really forces you to do is adopt continuous learning and asking questions which are deeper than the functional competency. And this, I think, over time, actually … when machines start doing repetitive tasks, decisioning will become actually a very important ethical choice as well.
When I mentor college students and I give talks, I always point out the primacy of “Take your nontechnical classes very seriously and consider a liberal arts education.” Because I think the seminal questions faced by a leader 10 years hence, 15 years hence, are not going to be just around how competent they are, but how thoughtful they are and how good they are at learning.
Shervin Khodabandeh: Exactly.
Sam Ransbotham: Well, as a professor at a university that focuses on liberal arts education, I can wholeheartedly agree with that. But I also want to think about, is there an example of a place … you mentioned how you’re trying to learn about individual customers and how difficult that is because in your context it’s not just, here’s what they did last time and we predict that they [will] do more of the same. In fact, last time they told us exactly the opposite of what they’re going to do this time. Can you give us some examples of how you’re using AI to learn about your customers’ needs? And what kind of things have you learned and how have you set up your organization to learn those things?
Amit Shah: It’s exceedingly hard, no matter what AI leaders and the ecosystem like to [say] about it, chiefly because of three reasons. I think all business leaders face a trifecta of issues when they think about AI adoption. The first starts with having cross-functional and competent teams. Generally, what you find within organizations is that the teams are spoken for and, especially, data science and machine learning competencies are extremely hard to find and fund, I would say. The second issue is the data sets are noisy and incomplete, so when we talk about essential ingredients of AI, in most companies actually the data is extremely siloed, extremely difficult to join, and often incomplete. And the third, which is a much more evolving vector, is that it has to be explainable in its end state; it has to be trustworthy as a stack. So, what we actually found is rapidly evolving — and I think this is going to be very true of most organizations — is [adopting] AI as a service.
Most companies, I think, can get very quickly started — to your question, Sam — by adopting AI as a service and then asking a very simple question: What two or three problems can I solve better tomorrow employing this stack that I’m not doing currently? And there [are] very interesting outcomes when you start looking under the layer. One of the problems, as I said, is the cold-start problem for us, so we are working on a recommendation system, which has been very successful, that utilizes a lot of neural learning and learning with very thin data sets to make inferences.
The other place that we found is forecasting, for example. Forecasting is a very difficult exercise, especially if you can imagine that … for example, Valentine’s Day actually moves by day of the week. So last year it was a Friday, this year it was a Sunday, compared to Mother’s Day, which is always on a Sunday, and that has very deep business implications as an outcome. So forecasting is a perfect candidate to put toward this, but the mindset, again, is, are you testing and learning along the way? In some cases, the early attempts at machine learning will be no better than your decision-based engines. But what we have seen is that, actually, persistence over even the medium term has very asymmetric payoffs. And [it’s] extremely important to evangelize and understand those payoffs, because, as I said, the context that most modern companies find themselves in is already awash in machine learning.
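To see why a holiday that shifts by day of the week complicates forecasting, here is a minimal sketch of the calendar features a demand model might consume. It is an assumed example, not a description of the company’s forecasting pipeline.

```python
from datetime import date

def holiday_features(order_date: date, holiday: date) -> dict:
    """Calendar features a demand-forecasting model might use near a holiday."""
    return {
        "days_until_holiday": (holiday - order_date).days,
        "holiday_weekday": holiday.weekday(),   # 0 = Monday ... 6 = Sunday
        "order_weekday": order_date.weekday(),
        "holiday_on_weekend": holiday.weekday() >= 5,
    }

# Valentine's Day fell on a Friday in 2020 and on a Sunday in 2021, so the
# same "four days out" lead time looks different to the model in each year.
print(holiday_features(date(2020, 2, 10), date(2020, 2, 14)))
print(holiday_features(date(2021, 2, 10), date(2021, 2, 14)))
```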
Sam Ransbotham: Two of the three things you mentioned involve cross-platform; it’s the idea of people working together — you mentioned it from a data perspective and also from the team perspective. The tension is, everyone can’t work on everything all the time, otherwise that’s not a team — that’s the whole organization. So how do you set that up within your organization so that you’ve got that nice blend of cross-functional but not everybody involved in everything?
Amit Shah: I would say, to be brutally honest, it’s a field of aspirational tensions. When you’re trying to shift mindsets over skill sets, it’s not about how do you assemble teams and how do you get to a solution, but how do you ultimately sell your vision and how do you get people enthusiastically believing in that vision? So I would say our early attempts at organizing were a lot more command and control, where we were saying that, hey, if you have [a] data science background or you have [an] analytics background, maybe you are prime for this. I think over time what we have realized is actually [that] learning systems have a self-organizing principle at their core, so now we are thinking more about, as I was saying, the early days of just rolling out Excel to everyone. What if we rolled out AI as a service to everyone? If someone is just making a schedule of meetings, do they get more empowered by AI as a service? Will they themselves find out some novel solutions to something that was completely not thought of as an important enough problem? And the reason I say that, Sam, is not to suggest that there’s not a cohesive listing of problems that can be solved by AI and assembling cross-functional teams and doing that. I think that’s the easier part. But what I’m suggesting and egging on my peer reference group to really think about is that the real empowerment and the real transformation in the mindset will come when you roll out AI to every endpoint. We don’t think twice about rolling out email to every new employee; why do we constrain and self-limit ourselves to think about AI as only the domain of specialists? It’s a problem-solving methodology; it’s a problem-solving mindset.
Shervin Khodabandeh: It’s an operating system you build apps on.
Amit Shah: Exactly.
Shervin Khodabandeh: I think that’s quite insightful, because whatever you make available to smart and inquisitive people ends up becoming better. It’s a very good challenge to any organization: “Why not make the suite of AI products self-service, so that lay users can do things with them?”
Amit Shah: To your point, one other thing that comes to mind is the importance of appreciating failures as essential input to better learning. I think what I find in adopting an AI-first mindset is a deep respect and celebration of failure as an organizational currency. If you think about the history of employees within an organization, all the origin stories and the stories thereafter are around successes. But the AI-first mindset, in my mind, is, how do you actually collectively embrace [failure]? Not by putting up posters [that say] “Run fast and fail fast”; those don’t really change people’s activities, their behaviors, and their acceptance of their career trajectory as much as celebrating failures [does]. The reason I say that is that all machine learning, all learning in the future, actually has a very healthy equilibrium between outcomes that are successful and outcomes that failed. Because outcomes that failed actually teach the system equally as much as outcomes that succeeded.
Shervin Khodabandeh: I think it’s a very important point on failure. How do you operationalize that?
Amit Shah: I pray a lot and I toss the coin a lot, but it’s a very important question. I think it has to start from the leadership. I think it has to start from a very, very human manifestation of how decisions are extremely difficult, and from leaders being very open about when their decisions did not lead to successful outcomes. So I think one of the key learnings in my life, and which I’ve tried to follow very deeply, is around radical transparency — around making sure that people appreciate that these were the reasons I took a certain decision and that I’m open enough at the end of it for any inputs, whether it went successfully or it didn’t go successfully. So that is one way of operationalizing it: when the leadership starts living out that outcome.
The second, I think, very important part of it is, how do you incentivize that outcome? For example, we have a constant red team that we call internally that runs up against the main growth team, for example. If the main growth team has $100 million to spend on marketing, I give 10% to a red team that is actually going against the conventional wisdom. The reason it is going against the conventional wisdom is actually to build up a corpus of failures that then can act as a foil to what [we learned] from spending that $100 million. This is a very important part, again, of increasing the collective LQ of the team, because if everything is done by consensus, we know from behavioral economics and a lot of studies done [that] it is not the best decision-making outcome as well, so that is one example of it. So how do you set up team structures and incentives?
And then the last thing I would say, which has been a learning mode of late [for] me, is, how do you actually translate that into ESG or ethical goals? Because what I’ve seen with the newer cohort of employees, of stakeholders that we have had, is that it is not so much just about learning, but learning within a context that I believe in. My newer understanding, more and more, has to be around, hey, if we ingest AI models, are they explainable, are they debiased, can I make sure that the team appreciates that certain choices that we may make may not have the immediate business payoff but actually [are] much better aligned with our vision and our mission?
Sam Ransbotham: Well, we started this discussion talking about mom-and-pop flower shops, and that resonates with me. Actually, I didn’t mention it, but my mom owned a flower shop. So mom-and-pop is actually literal [at] this point. Amit, we really appreciate you taking the time to talk with us today. Thanks for spending some time with us.
Shervin Khodabandeh: Thank you so much. This has been very insightful. Thank you.
Amit Shah: I loved this conversation. Thank you, guys. Appreciate it.
Sam Ransbotham: Shervin, Amit was quite interesting. One thing that struck me was how we talk about, “Oh yeah, the machines can learn from the past,” etc., etc., but how every scenario for them is a bit of a cold-start problem because every holiday is different. Every time someone comes to them, they’re getting something for a different reason, and it wouldn’t be a cold start if they knew the underlying reasons, but they don’t always [know]. When we go to any of the normal collaborative filtering platforms, like a Netflix or other places — or even transportation like Uber and Lyft — those people have a much better ability to build off our history than 1-800-Flowers does. It’s one cold start after another, and tying that to how emotionally aware they need to be, because, by definition, these are very human experiences that they’re involved in. If they screw that up, that’s not good.
Shervin Khodabandeh: Yeah. Also, it’s a cold start, which by definition means it’s a learning opportunity, because the cold-start problem is the same as a learning problem. And if you have many cold-start problems, isn’t that another way of saying you basically have to be comfortable with a very accelerated rate of learning? And that’s your success, because otherwise, yes, everything is a sympathy [arrangement] or everything is that one demographic that I really, really know, rather than being adaptive to all those situations.
The other thing he talked about a lot, which is an emerging theme in our research, was the notion of learning quotient. We asked him about teams, and he said what they care about a lot is an individual’s desire and willingness and ability to want to learn. And that fits so well with what AI itself is: It’s all about learning, and the notion of humans and machines learning from each other, which is also the theme of our work. I found it quite insightful that he picked up on that. And in many ways it sort of also fits into his point around mindset and culture change, because he also talked about [how] it’s not so much about the skill set or the tech; it’s much more about changing the ways of working and changing the operating model and changing the mindset of what you can and should do with AI, with a tool and capability that he thought would become just as commonplace as the Excel spreadsheet is now.
Sam Ransbotham: Exactly.
Shervin Khodabandeh: And so the importance of learning and ongoing learning and adaptability I thought was quite elegant in what he said.
Sam Ransbotham: Well, you’re not going to get an argument from me. I mean, I’m a professor, I’m an academic, so I think that I’m biased.
Shervin Khodabandeh: You love learning.
Sam Ransbotham: I’m biased to think that learning is kind of a big thing, but even more than that, he also mentioned the importance of liberal arts thinking in that learning. And I think that was an interesting angle. You and I, we make fun of our engineering backgrounds a lot, but as we’re seeing these technologies get easier and easier to use, it’s really highlighting the importance of the human and of the human working with a machine. I think if we go back 20 or 30 years ago, there was so much talk about the death of IT. “IT doesn’t matter” — you remember that phase.
Shervin Khodabandeh: Yeah.
Sam Ransbotham: But … nope, that didn’t happen a bit. I mean, as IT became easier to use, companies just wanted more and more of it. And this is the natural extension of that.
Shervin Khodabandeh: Yeah, and I think this notion of technology raising the playing field so that humans can operate at a higher level and then humans inventing better technology so that level again keeps getting raised … I think that’s sort of a common theme that’s happened with technology. Actually, chess is a great example of that, because if you look at how 20 years ago — you talked about, Sam, 20 years ago the death of IT — 20 years ago, 25, 30 years ago was almost like the death of the computer in chess because it was argued that there was no way —
Sam Ransbotham: It was done.
Shervin Khodabandeh: Yeah, right? Like no way a human could be beaten by a computer. And then the game changed when Kasparov lost to Deep Blue, but then what happened is chess players got smarter. So the Elo ratings of the top chess players have been steadily increasing because of how AI has helped humans get smarter.
Sam Ransbotham: Thanks for listening. Next time, we’ll talk with JoAnn Stonier, chief data officer of Mastercard, about how Mastercard uses design thinking to ensure its use of AI supports its overall business strategy.
Allison Ryder: Thanks for listening to Me, Myself, and AI. If you’re enjoying the show, take a minute to write us a review. If you send us a screenshot, we’ll send you a collection of MIT SMR’s best articles on artificial intelligence, free for a limited time. Send your review screenshot to smrfeedback@mit.edu.