Machine Learning and eCommerce Talk at SolidusConf

David Jones · Jun 14, 2016

David Jones speaking at SolidusConf

I had the pleasure of speaking at SolidusConf in Toronto in May about Machine Learning and eCommerce. I had a great time meeting everyone and chatting about all things machine learning. The video of my talk has been made available online. You can watch below:


Talk Transcript

Hey everyone, I'm David Jones from Resolve Digital, and today I want to talk to you about machine learning and e-commerce.

But before I get started you may have noticed I sound a bit different from some of you. That's because I'm from New Zealand. So I thought, to get to know each other a little better I could show you some unique New Zealand things.

Here’s a picture of New Zealand. Fun fact for you: In 1982 we had 70 million sheep and only three million people. When I tell people this they think if you're on your way to work maybe you've stepped over a few sheep and to get to your desk you have to clear sheep from your office chair. In reality most sheep are in rural areas and you wouldn't really see sheep on your day-to-day commute.

This is “Marmite.” Has anyone had Marmite before? Oh, a few more than I thought! OK excellent. Marmite is a yeast spread. It’s unique to New Zealand. We recently had a Rails Camp in upstate New York and we gave it to some unsuspecting victims. It was after a bit of drinking. Some people said it tasted a little cheesy and finished it up, and other people, after their first bite, just hated it and threw it in the bin.

One thing we have in common with Canada is the Queen. We're all very proud of that, aren't we? I recently visited London and went to Buckingham Palace, and you could see the Union Jack flying on the rooftop. This signifies the Queen is not in residence, so unfortunately I didn't get too close to the Queen.

One of the things we like is rugby, and this is the All Blacks playing against Canada. We ended up winning this particular match 79 – 15. I'm pretty sure I know why: Because Canada plays with their eyes shut!

This is my first time in Canada and I'm really happy to be in Toronto. I thought it was pretty cool that you guys made it onto The Simpsons, so we're definitely going to have to come back.

So let's get to it. Machine learning and e-commerce. They're like a fine wine and cheese that go better together. At Resolve Digital we build, maintain and optimize e-commerce stores, and rarely do we see people using machine learning in practice. So what I wanted to do is talk about this, because there can be some great benefits for all of us if we embrace it.

Here are some of the things you can do, to give you an idea of what machine learning looks like in e-commerce.

Recommendations— one of the cornerstones. Based on everything I know about you, all the data I have about you, what products do I think you most want to purchase? This is very useful if you have a lot of products.

Bots— did any of you see Facebook’s F8 Conference? They were talking about integrating bots into the Facebook platform and there's been a lot of chat about “conversational commerce.”

The reality is most of us don't really know how this is going to work in the future, but it's an important part of machine learning because of the natural language processing required to understand what people are talking about when they're chatting with a bot.

Email marketing— there's one fundamental thing I keep coming back to: Why are we sending the same message to everyone in an email marketing campaign when each of us has different and individualized needs and desires? With machine learning we can treat it as a prediction problem and individualize what we send to each person.

It's really not enough to segment groups. I feel there's been a bit of a shift to try and segment lists, but really you need to be doing it on an individual basis.

Fraud detection— we had a client that had a massive fraud problem. To give you an idea of the mindset of someone who’s committing fraud, if you have say 100 stolen credit cards, what you want to do is translate them into cash. So you look for items that are small and easily shippable, have high value and high resale value.

So I can go to a store that doesn't have very good fraud protection, buy a GoPro that's worth $300-$400. Maybe I buy 10 of them at once. Then I can put them into the secondhand market because I know I can get 80% of the value and the shipping is pretty cheap.

That's how to commit fraud — so don't do that! The way we detect fraud with machine learning is we take into consideration a lot of different data points about an order: Address verification (does the billing address match the credit card?). If you have a hundred stolen credit cards it’s going to be difficult to know the addresses unless you stole that information too.

Then there’s shipping. Maybe I’ve ordered in the US but I'm strangely shipping to Canada. That just might be an odd thing for some businesses. There are certain hints like that.

Another one I've seen is figuring out how old an email address is. If you're a fraudster you're generating a lot of new email addresses that have no prior trace on the Internet, and trying to make each order look different.

Machine learning centers around wrapping up these different points, trying to figure out how dodgy an order is, and making a decision.
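To make that concrete, here's a minimal sketch of the idea in Ruby, assuming a hypothetical scoring endpoint and order accessors (none of these names come from a real service):

```ruby
require "net/http"
require "json"
require "uri"
require "date"

# Hypothetical sketch: bundle the signals above into features and ask a
# made-up scoring endpoint how dodgy the order looks.
def fraud_risk(order)
  features = {
    avs_match:        order.billing_address == order.card_address, # address verification
    country_mismatch: order.billing_country != order.shipping_country,
    email_age_days:   (Date.today - order.email_first_seen_on).to_i,
    order_total:      order.total.to_f
  }

  uri = URI("https://fraud.example.com/v1/score")
  response = Net::HTTP.post(uri, features.to_json, "Content-Type" => "application/json")
  JSON.parse(response.body).fetch("risk_score") # 0.0 (clean) through 1.0 (dodgy)
end

# You might then hold suspicious orders for manual review:
# order.hold_for_review! if fraud_risk(order) > 0.8
```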

Pricing— it was interesting earlier hearing about the pricing class and figuring out how you can decide on pricing. If any of you have used Uber (which I imagine most of you have) then you've been subject to dynamic pricing. Uber is making a prediction on demand. If they predict demand is going up in a short space of time, they slap a surcharge on you. It's really annoying, but it keeps supply and demand level and optimizes their revenue.

Finally, let's talk about abandoned cart emails. I'm pretty sure everyone knows what an abandoned cart email is, but what's important is the timing. How long do you wait before you actually send that email? Quite often people include a personalized discount in that email, and you know, some people need a little bit of motivation and some need a lot.

You can treat that as a prediction task to figure out how much discount to give. Obviously you want to minimize the discount and maximize the number of conversions you get from those emails.
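As a rough illustration, treating the discount as a prediction task might look something like this sketch, where `predict_conversion_probability` stands in for a trained model (it's an assumed name, not a real library call):

```ruby
# Try a few candidate discounts and keep the one with the highest expected
# revenue, balancing a smaller discount against a better chance of converting.
CANDIDATE_DISCOUNTS = [0, 5, 10, 15] # percent off

def best_discount(user, cart)
  CANDIDATE_DISCOUNTS.max_by do |discount|
    probability = predict_conversion_probability(user, cart, discount) # hypothetical model
    cart.total * (1 - discount / 100.0) * probability # expected revenue from this email
  end
end
```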

OK so this is obviously Amazon.com which is one of the largest online retailers. Here is their home page and I think one thing we can all agree on is this particular page has probably been iterated on and tested hundreds if not thousands of times to get to what we see today. There's a lot to learn about this particular page.

Aside from learning that I’m interested in running and coffee, what do you notice about this page? This is how I see it. I would guess that 50% or more of this page has come through some sort of machine learning algorithm. I'm talking about the sets of lists they’ve chosen to put here, the order of those lists, the items inside those lists, and the order of those items. Even the ads have probably been chosen because they think they’re going to be most relevant to me.

This is a really large company that has refined this page a lot and is probably making a lot of extra money by going through this process. But the thing is that machine learning is not just for the big guys anymore.

It wasn't too long ago that you actually needed a team of data scientists to run all this in production. It's unlike our day-to-day programming and can be quite complicated. Only big companies used to be able to access this.

This is a little comic I drew up to explain the barriers to entry. Here we have a very tall wall that most of us can't get over and when we have a look over the wall we see unicorns and amazing things on the other side. These big companies have private access and the resources to just be able to pick up this stuff. They've been getting the gains from this for quite a long time.

One solution we can all pick up and utilize today is predictive APIs. They're like a ladder in this situation. They encapsulate a lot of the complexities of machine learning so you can treat it as an API integration problem. The complexities are handled for you behind the API.

Predictive APIs are essentially our secret weapon for machine learning, but I'm pretty sure it doesn't run on a floppy disk; it runs in the cloud.

Let's go into what machine learning is and how it works. In 1959 Arthur Samuel defined machine learning as the “field of study that gives computers the ability to learn without being explicitly programmed.” The key here is the explicitly programmed part. I’m going into that in a little more detail so we can all understand exactly what explicit programming is, and then how not to do that.

Here's a bit of semi-pseudo code showing how you might solve a particular problem. I ask you, based on a certain product, to show me three related products. Let's say we're looking at a fedora. You know a fedora is a type of hat, so you'd expect the category of this product to be "hat."

I might assume if you're looking at fedoras, you're probably interested in the top selling hats in the store. So let’s go ahead and grab some related products. We look at the category and grab the top selling hats and return that.

We put this live and realize that now, every time we're viewing a hat in the store, we're always showing the same three products. That's kind of lame, and it's also going to make those particular products even more popular because they're being shown more.

So you say ok, no problem, we can re-architect this a bit. We're going to take double the amount that we were looking for and randomly grab the amount we want from that. So we could grab six top sellers and then grab three from that.

We put that live and then realize sometimes you also want to recommend a tie to go with the hat. Now we're only recommending the top selling hats so you want to evolve this a little bit and you say, no problem, we can have many categories now.

We'll go through all of those categories and grab six, then grab a final three out of that. The other good thing is we're increasing the serendipity of the results, but it still has limits. Now when we put this live we realize there's actually a specific tie that goes well with this fedora, so we're going to introduce hand-picked related products. Very fancy! Because we're looking for three related products and we've only got one hand-picked product, we have to back-fill the other two.
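The slide's code isn't in the transcript, but a rough Ruby reconstruction of where this explicit-rules version ends up might look like the following (`hand_picked_for`, `top_sellers_in`, and the model methods are all assumed names):

```ruby
# Roughly where the explicit rules end up: hand-picked products first,
# back-filled from the top sellers of every category the product belongs to.
def related_products(product, count = 3)
  related = hand_picked_for(product)                      # e.g. the tie for this fedora

  candidates = product.categories.flat_map do |category|  # now many categories
    top_sellers_in(category, count * 2)                   # over-fetch for serendipity
  end

  # Back-fill the remaining slots with a random sample, avoiding duplicates.
  needed = [count - related.size, 0].max
  (related + (candidates - related).sample(needed)).first(count)
end
```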

This code is really not the point. My point is what we're doing here is basically observing customer behavior, then going back and writing more explicit rules and repeating this process. You're never going to be done. You're going to be constantly theorizing about what new thing we should incorporate to make it better.

That's explicit programming. What it gives you is immense, ever-increasing codebase complexity. You do not want to be him! Coming back to the definition, we're trying to avoid explicit programming.

In order to fully grasp this you have to completely shift your thinking away from what I just showed you. Let's take an example. On the left hand side you’ve got traditional e-commerce where someone maybe every five months is sitting down and trying to decide all the new prices for their products.

What price should we sell the product for? Maybe you take cost and add 50%? That's $12.78. But I'm in a good mood today, and nine is my favorite number after all, so let's just make it $9.99. It's somewhat arbitrary, but perhaps lightly based on some science.

A mind shift is required in order to really understand how this could be automated. So instead of every five months, we're looking every five minutes. Based on everything we know about the history, a stock level of 31 right now, and 230 people who viewed it in the last 24 hours, we predict that a price of $14.32 will optimize sales.
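Sketched as code, the every-five-minutes version might look like this, with the endpoint, payload fields, and product accessors all being illustrative assumptions:

```ruby
require "net/http"
require "json"
require "uri"

# Ask a hypothetical pricing endpoint for the price predicted to optimize
# sales, given the signals we have right now.
def predicted_price(product)
  payload = {
    product_id:     product.id,
    stock_level:    product.stock_level,    # e.g. 31 in stock
    views_last_24h: product.views_last_24h  # e.g. 230 views
  }

  uri = URI("https://pricing.example.com/v1/price")
  response = Net::HTTP.post(uri, payload.to_json, "Content-Type" => "application/json")
  JSON.parse(response.body).fetch("price")  # e.g. 14.32
end
```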

We've flipped the equation. Now it's become a prediction problem. Some companies base their entire business on this as a point of differentiation. Take a wine club, for example. Club W is building their entire business on the fact that you have a palate profile and you're going to give feedback on which wines you like and dislike. It's going to match you completely based off data.

To be honest, I would trust a service like this, one I knew was listening to its data, responding to it, and learning from it, way more than someone who might not have that sort of involvement and maybe is just selling you something they have a lot of in stock or got cheap that week.

This is a new model and process for the way you need to deal with these problems. First you're going to need to collect data. Second you have to train what's called a model. Finally you can make predictions.

I'm going to go through these steps. In e-commerce, the data we're collecting is purchase data, product views, likes and ratings, and wish lists. Any data that connects a product to a user.

If it’s data that indicates someone is going to like or dislike something, even in a small way, that could be considered useful. Pay attention to those kind of connections.

I just want to take a little side step for a moment. Has anyone heard of Maslow's “Hierarchy of Needs?” OK cool. So you know that each one of these things has to be fulfilled in order to get to the next level. In order to feel love/belonging you need to feel safe (from a human standpoint). The thing is that data has human needs too.

Let's take a look at this. If data was a person it would like to be collected. Then it wants to be stored. Then it wants to be analyzed. Then it wants to make predictions, and finally the best thing you can do for data is actually make decisions based off those predictions.

This is the kind of framework you need in order to optimize and think about your data funnel. I imagine most people here may be getting to analysis perhaps, or you're doing some reporting. But it's about taking it right to the top.

This is a quick example of how the equation starts to change. If we implement related products again, what we're doing is calling out to an API every time someone orders something. We're passing over a user ID and the product IDs that have been purchased. Just note this is all metadata. We're not actually sending the full order data. We're just interested in the connections between users and products.

In your products controller every time someone is viewing something, just log the fact that user X has viewed product Y. You'd want to make sure these calls are asynchronous and fast. The predictive API we're calling is essentially ingesting their data. It’s training a model at this point and it's exposing predictions back to you after that stage.
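In a Rails app those two logging calls might look something like this sketch, with `PredictiveApi` standing in for whichever client library you use (in practice you'd push these into a background job so they never slow down the request):

```ruby
class OrdersController < ApplicationController
  def create
    order = Order.create!(order_params.merge(user: current_user))
    # Metadata only: which user bought which products.
    PredictiveApi.log_purchase(user_id: current_user.id,
                               product_ids: order.products.map(&:id))
    redirect_to order
  end
end

class ProductsController < ApplicationController
  def show
    @product = Product.find(params[:id])
    # Log the view; a weaker signal than a purchase, but still a connection.
    PredictiveApi.log_view(user_id: current_user.id, product_id: @product.id)
  end
end
```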

Let's talk about training the model. This is probably the hardest part for me to explain because it's quite technical and there are a lot of different ways to solve this problem. So what I'm going to do is go through a very common approach used in e-commerce, called collaborative filtering.

This is how collaborative filtering works. Imagine you have these people and imagine that you're an online store selling different classes: Skateboarding, surfing, horse-riding and cross-country skiing.

Each of these people has ordered different combinations of these classes. Some haven't ordered anything at all. Let's say the purple man is currently on the website. We're wondering: Should we recommend horse riding to him?

We wrap that diagram up (a social network diagram) into a table. Every time someone has ordered a class we put a tick. We can see at the bottom left you've got the purple man and we’re wondering if he’ll be interested in horse riding.

What you do to figure this out (and how the algorithm works) is you look at users who are similar to this user and have a similar purchase history. In this case the first two people have both purchased skateboarding and cross country skiing classes as well. These people are probably quite a good guide for figuring out if purple man is interested in horse riding or not. It appears that no, those other two people have not purchased a horse riding class, so the best prediction you can make is purple man’s probably not interested in horse riding.

Imagine what I just showed you, except there could be 10,000 products across the top and 100,000 users coming down the side. And each cell is not just a tick or a blank; it could have varying levels of weighting. Maybe if someone viewed a product you treat that as a 0.1 value. If they ordered something, that's quite a strong indication they're interested, so you call that 1.0. If they ordered it twice, then it's a 1.5 value. All of this weighting goes into determining a solution.
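Here's a toy version of that example in plain Ruby, using the weighting scheme just described. The data and the simple cosine-similarity approach are purely illustrative; real systems use far larger matrices and more sophisticated algorithms:

```ruby
# Rows are users, columns are [skateboarding, surfing, horse riding, skiing].
# 0.1 = viewed, 1.0 = ordered, 1.5 = ordered twice, nil = the cell to predict.
HISTORY = {
  alice:  [1.0, 0.0, 0.0, 1.0],
  bob:    [1.0, 0.0, 0.0, 1.5],
  carol:  [0.0, 1.0, 1.0, 0.0],
  purple: [1.0, 0.0, nil, 1.0]
}

# Cosine similarity over the cells both users have values for.
def cosine(a, b)
  pairs = a.zip(b).reject { |x, y| x.nil? || y.nil? }
  dot   = pairs.sum { |x, y| x * y }
  mag_a = Math.sqrt(pairs.sum { |x, _| x * x })
  mag_b = Math.sqrt(pairs.sum { |_, y| y * y })
  mag_a.zero? || mag_b.zero? ? 0.0 : dot / (mag_a * mag_b)
end

# Predict purple's interest in horse riding (column 2) as a similarity-weighted
# average of what the other users did with that class.
purple = HISTORY[:purple]
scores = HISTORY.reject { |name, _| name == :purple }
                .map { |_, row| [cosine(purple, row), row[2] || 0.0] }

weight = scores.sum { |sim, _| sim }
puts weight.zero? ? 0.0 : scores.sum { |sim, val| sim * val } / weight
# => ~0.0, because the users most similar to purple skipped horse riding.
```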

Hopefully this gives you a good high-level understanding of how this works. So now when we go to make predictions, our method for getting related products is quite simple. We just make a call out to a predictive API. We give it a user ID and the product ID they're looking at, and we ask for three related products.
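Against the same hypothetical `PredictiveApi` client from earlier, that call is about this small:

```ruby
# One call per product page: who's looking, what they're looking at,
# and how many related products we want back.
def related_products(user, product)
  PredictiveApi.related_products(user_id: user.id,
                                 product_id: product.id,
                                 count: 3)
end
```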

My point is machine learning is really greater than explicit programming. It's learning all the nuances of who's buying what and why, and automatically updating recommendations based on that data. You're not having to implement explicit rules and theorize about these things. You just treat it as a mathematical problem.

You have to apply this machine learning mindset to e-commerce. To do that you have to think differently. Think about the three steps I outlined around collecting data, building models, and making predictions. That's the language you need to use.

This is how I look at it. If you’re showing the same information to many customers you're probably doing it wrong, or at least you have an opportunity to show them different or more personalized items based off data.

It might be the pricing, or the categories, or the way you're showing categories or taxons, how they're named and grouped together, or your search results and the way they're ordered, or the discounts you're offering. There are so many things you can do. This is what I look for when I look at the things you're showing to customers.

Now I want to talk about a real-world example, a client of ours, United Cellars. They are a wine website and online retailer in New Zealand and Australia. We set out with the goal of increasing their revenue, and thought product recommendations would be a good way to do it. In terms of implementation, we thought of using predictive APIs. Given that this was a consulting gig, we needed to be very productive and build these features quickly.

The site had 16,000 product views, 60,000 orders, and 3,000 product ratings. That's only 79,000 rows of data. Where I live in the San Francisco Bay Area, people are always talking about millions, tens of millions, even hundreds of millions of rows of data. That's definitely not what we're talking about here.

I wanted to state this because it means if you're a smaller store, you just need to start collecting this data, trying to build it up. The quality of the data is key. If you have clear intentions about what you're trying to achieve you can increase the quality and therefore the effectiveness of your data. You don't need as much to achieve the same thing.

As we were building this we realized that taste preferences for wine are vastly different. It depends on your background, what you like and maybe how much wine experience you’ve had. This store offers 10,000 different wines and if I think about trying to match the preferences of everyone here it’s a difficult problem. I don't think any one individual could do a very good job of that. It seemed like a really good fit because we have this range of tastes.

We decided to implement two things. Personalized product recommendations (within the context of the entire store, which products are most relevant to this person?) and similar products. This is not in the context of the entire store. It’s in the context of exactly what I'm looking at right now. If I'm looking at Product X, which products are similar based on everything we know about the way people have been ordering it. That's going to change between pages as I'm looking at different types of wine.

We set up an A/B test to see how effective this could be. We showed 50% of the users the original site, and 50% with the recommendations in place. This is what it looked like when we put it live. Across the left side you have “recommended for you” which is those storewide recommendations, and across the bottom we've got “more red wine to consider” which is a similar product recommendation based on the context of the specific wines we're looking at right now.

As soon as we put this live we realized we'd made one big mistake. We were recommending products that were out of stock. Those products were the best recommendations for the customer, but if you can't buy them, why recommend them? You need to filter the results. And I'd extend this beyond just out-of-stock items to anything that lessens the relevance of those recommendations.
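One simple way to handle that, sketched against the same hypothetical client, is to over-fetch and filter:

```ruby
# Ask for more recommendations than we need, then drop anything the
# customer can't actually buy right now.
def purchasable_recommendations(user, product, count = 3)
  candidates = PredictiveApi.related_products(user_id: user.id,
                                              product_id: product.id,
                                              count: count * 3) # over-fetch
  candidates.select(&:in_stock?).first(count)
end
```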

Another thing we were able to see with the existing orders is that people would commonly buy either red wine or white wine. Not many people would buy equal amounts of the two. I suggest you think about and understand the purchasing decisions people make on your store and how they shop. That might give you insight into how to best present recommendations.

We got some results back: A 45% longer average session, a 22% increase in conversion rate, and a 37% increase in average order size. This resulted in a 71% increase in revenue. This is because we increased the conversion rate and customers were also buying more. It's those two things combined that gave us fantastic results, really beyond our expectations.

Now I just want to rally the troops a little bit. If you're not doing this sort of stuff right now, now is a really good time to get started. Hardware costs are falling a lot. Running this stuff is very computationally expensive, especially if you have a lot of data, so falling hardware costs are one significant factor in making it cost-effective.

There are also a lot of advancements. Academic research is focused on making everything more efficient and effective, and lowering error rates. Hardware costs and advancements tie together, because the libraries being built are becoming more tightly coupled with the hardware, so we can combine those two things to be even more powerful. For example, swapping in GPUs for CPUs to do this processing is one implementation trick people have been using.

If you’ve looked at any tech news over the last while you’ll see there’s so much investment in this space. There's a lot of promise, and a lot of hype too. Be careful of that.

To give you an example, here’s a graph from Y Combinator (a company that invests in a lot of startups). There are application rounds where people submit their ideas to get investment. Y Combinator gets something like 2,000 applicants each round.

They decided to graph the number of mentions of words in their applications over time. You can see there's clearly a lot of startup people getting involved in AI. They even said this was an underestimation of what’s actually going on.

OK, we've been running our predictive APIs in production, so I just wanted to give a couple of tips.

You have to be careful about monitoring performance. All the benefits of machine learning can be entirely reversed if you slow the application down while you're doing it. You have to be very careful about the speed at which you log the data, and then do all the processing externally from the main app.

People talk about real-time predictions quite often. I think it's overrated. Most of the time you don't actually need real-time predictions at all. If you have 100,000 orders, you're doing product recommendations, and a new order just came in, the chances of that new order significantly impacting what you're showing users right now are really low. To get the best performance, I'd encourage you to cache everything instead and only update at set times. For example, Netflix updates their model every day, so that gives you something to anchor off.
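In a Rails app that caching strategy might look as simple as this sketch (the 24-hour expiry mirrors the Netflix-style daily update; tune it to your store):

```ruby
# Serve cached predictions and only recompute once a day, instead of
# hitting the predictive API on every request.
def cached_recommendations(user)
  Rails.cache.fetch(["recommendations", user.id], expires_in: 24.hours) do
    PredictiveApi.recommendations(user_id: user.id, count: 10)
  end
end
```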

I'm sorry this is a very small picture, but I just wanted to mention this incredible machine learning canvas. You need to be really clear about what you're trying to do and how you're trying to solve the problem: Who's going to see what, where it's displayed, how it's going to work, and what data is needed to calculate all that.

This machine learning canvas, by Louis Dorard, helps you scope out the problem and understand how the bits and pieces come together, making sure you have the full flow there. It's a really nice way to approach a project like this.

Some conclusions. Machine learning is greater and more powerful than explicit programming. It allows us to automatically adapt, take in and react to all of the data, and offer a very personalized experience.

Predictive APIs are great. They encapsulate the complexities of running machine learning in production, letting us treat it like any regular API and incorporate these features into production more easily.

Small data can be enough. You saw in the United Cellars example we had relatively small data. I’d argue that quality is key so be clear about what you're trying to achieve and make sure you have good quality data.

Finally machine learning is highly effective in e-commerce. It's not like the moment I step into a physical store you can automatically re-arrange the products, instantly know everything I want and sort by how I’d like it. We have a real opportunity. We’re incredibly lucky to be able to collect the data and customize everything we’re showing to each individual.

We have a unique opportunity to do this because of the way people interact with e-commerce sites. It just makes the customer experience more personalized. It's a win-win. It makes stores more effective and more useful and helpful to the customer. They’re less frustrated because they get what they want quickly.

The slides for this presentation will be up at resolve.digital. You can follow me, @d_jones if you like, and thank you very much!


"Machine Learning and eCommerce Talk at SolidusConf" written by David Jones. A huge thanks to Barry Harrison for compiling the transcript.
