Artificial Intelligence Comes to Spend Management

Paddy Lawton
General Manager, Coupa Spend Analytics, Coupa Software

Paddy Lawton is the founder of London-based Spend360, which was acquired by Coupa in January of 2017. Prior to that, he was the CEO of Digital Union. He holds a BSc in Computing and Software Engineering from Oxford Brookes University.

Read time: 25 mins

Back in January, Coupa acquired Spend360, a leader in artificial intelligence for spend management founded by Paddy Lawton, the author of today’s blog post.

What happens when a mathematician falls down a rabbit hole into the world of sourcing and procurement? Artificial intelligence (AI) comes to spend management, that’s what.

It’s a tale that illustrates how AI has been evolving over the past several years, to the point where it’s finally starting to deliver real value. I’ll tell you all about how that happened and what it means, but first I want to give some background so you understand exactly what it is we’re talking about, and why it’s happening now.

Intelligent--in theory

It seems that every third business article you read these days makes some mention of artificial intelligence and machine learning, and how they’re going to transform the world. If you’ve been around for a while, you may have heard this before, since the foundations of artificial intelligence were laid back in the 1940s and 1950s. Machine learning, which is actually a subset of artificial intelligence, has been an active field since the 1980s.


Truth be told, up until recently, they weren’t very good. The theory was solid, but artificial intelligence systems learn from data—lots and lots of data. It’s only with the big data explosion of the internet that we’ve had enough data to feed into these systems for them to learn enough to become intelligent.

Even if we’d had the data, it wouldn’t have mattered, because we didn’t have the computing power to be able to handle it. It was almost like Einstein’s theories: You couldn’t prove them because there was nothing big enough to run the math.

It’s only been in the past few years that we’ve had the massive volumes of data and cheap computing power we needed to train machines to become intelligent. But there’s also something else without which we could not have done this: Word2vec.

The killer app

Word2vec is a tool created by a team of researchers at Google that helps technologists turn words into vectors—“quantities having direction as well as magnitude, especially as determining the position of one point in space relative to another.” That’s according to the definition that comes up when you type, “what are vectors,” into Google.

To deliver that perfect answer, Google has spent the better part of two decades parsing billions of internet searches in order to train their algorithms to recognize what a person wants based on even the most cryptic, poorly worded or misspelled query. Using word vectors, they built a neural network that can derive meaning not just from the patterns of characters that make up words, but from their context and relationship to other words.
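
To make that concrete, here is a minimal sketch of training word vectors on a toy corpus, assuming the open-source gensim library (version 4.x). It is not the code Google released, and the corpus is invented; the point is just that words which appear in similar contexts end up close together.

```python
# Minimal word-vector sketch, assuming gensim 4.x and a toy corpus of
# invoice-style phrases (invented data, not any real spend file).
from gensim.models import Word2Vec

corpus = [
    ["plastic", "bottle", "500ml", "water"],
    ["sparkling", "water", "glass", "bottle"],
    ["laptop", "computer", "14in", "charger"],
    ["desktop", "computer", "monitor", "keyboard"],
]

# Train a tiny skip-gram model; real models learn from millions of lines.
model = Word2Vec(sentences=corpus, vector_size=50, window=2,
                 min_count=1, sg=1, epochs=200)

# Words used in similar contexts sit close together in vector space.
print(model.wv.most_similar("water", topn=3))
print(model.wv.similarity("laptop", "desktop"))
```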

A few years ago, they released their models and code as open source, for all the world to build their own neural networks. That’s leading to breakthroughs in deep learning, which is what’s really new in this field. This is the clever stuff.

Naan bread

In fact, deep learning is what most people are talking about when they say “artificial intelligence and machine learning.” Putting those together is like going to an Indian restaurant and ordering naan bread. You don’t need to say the ‘bread’ bit as it’s already implied. So if you want to sound really smart, say “machine learning and deep learning” instead.

And if you want to really be smart, here’s the difference:

Machine learning, in the sense I’m using it here, is driven by rules and statistics, and doesn’t require a neural network to become intelligent.

For example, let’s say you’ve got a list of suppliers, along with various spellings of their names. IBM might be International Business Machines, Intl Bus Machines, IBM France, and so on. With machine learning, once you’ve given the machine the rules, it can normalize all the variations into one.

The other thing it can do is check whether a new supplier matches any existing supplier. You don’t need deep learning for this kind of identification and data normalization, because it’s only a name and variations of it.
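
Here is a minimal sketch of that kind of name matching, using only the Python standard library. The alias table, canonical names and similarity cut-off are invented for illustration; they are not how any particular product does it.

```python
# Supplier-name normalization sketch: explicit rules first, then a fuzzy
# match for near-miss spellings. All names and thresholds are illustrative.
from difflib import get_close_matches

CANONICAL = ["International Business Machines", "Microsoft Corporation", "Crystal Geyser"]

ALIASES = {
    "ibm": "International Business Machines",
    "intl bus machines": "International Business Machines",
    "ibm france": "International Business Machines",
}

def normalize(raw_name):
    """Map a raw supplier string to a canonical name, or flag it as new."""
    key = raw_name.strip().lower()
    if key in ALIASES:                       # explicit rule hit
        return ALIASES[key]
    match = get_close_matches(raw_name.strip(), CANONICAL, n=1, cutoff=0.6)
    return match[0] if match else "NEW SUPPLIER: " + raw_name

for name in ["IBM France", "Intl Bus Machines", "Microsft Corp", "Acme Widgets Ltd"]:
    print(name, "->", normalize(name))
```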

The clever stuff

Deep learning is when you train a system to understand whole sentences and the context around them. For this you need a neural network, which is a mathematical approximation of how your brain works. With this kind of system, you train it with all the data that you’ve ever seen before and it learns to make inferences.

For example, if you have an invoice with a line of text that says something like, “plastic, 500 ml, Crystal Geyser,” it will infer that it’s bottled water even though it doesn’t say so because it’s been given some prior knowledge to establish context.
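
To show the shape of that, here is a minimal sketch of a neural classifier for invoice lines, assuming PyTorch. The toy vocabulary, categories and tiny training loop are mine, not a description of the actual system; the point is that the model labels a line it has never seen from the context words it shares with lines it has.

```python
# Tiny neural invoice-line classifier: average word embeddings, then a small
# feed-forward head. Assumes PyTorch; all data here is invented.
import torch
import torch.nn as nn

lines = [
    ("plastic 500 ml crystal geyser", "bottled water"),
    ("evian still water 1l case", "bottled water"),
    ("lenovo thinkpad 14in laptop", "it hardware"),
    ("dell monitor 27in", "it hardware"),
]
vocab = {w: i for i, w in enumerate(sorted({w for text, _ in lines for w in text.split()}))}
labels = {c: i for i, c in enumerate(sorted({c for _, c in lines}))}

def encode(text):
    return torch.tensor([vocab[w] for w in text.split() if w in vocab], dtype=torch.long)

class LineClassifier(nn.Module):
    def __init__(self, vocab_size, num_classes, dim=32):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim, mode="mean")  # averages word vectors
        self.head = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, num_classes))

    def forward(self, tokens):
        # EmbeddingBag takes a flat token tensor plus an offset per line.
        return self.head(self.embed(tokens, torch.tensor([0])))

model = LineClassifier(len(vocab), len(labels))
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):  # tiny training loop on the toy data
    for text, cat in lines:
        opt.zero_grad()
        loss = loss_fn(model(encode(text)), torch.tensor([labels[cat]]))
        loss.backward()
        opt.step()

# The new line never says "water", but it shares context with the training data.
with torch.no_grad():
    pred = model(encode("crystal geyser plastic bottle")).argmax(dim=1).item()
print([c for c, i in labels.items() if i == pred][0])  # expected: "bottled water"
```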

Machine learning and deep learning are different but related and have different but related uses inside and outside of spend management. Google, Facebook, Microsoft—these companies can make powerful neural networks because they can train them with tons and tons of highly accurate data. They’re trying to do things like face recognition—identifying someone’s face as it’s in motion--or teaching a digital assistant to “converse” with you.

That’s pretty sophisticated stuff, but there are a lot of types of data and applications for machine learning and deep learning that are appropriate to different industries and problems. Which brings us round to spend management.

Down the rabbit hole

Back in 1999, I was working at a “gun for hire” tech services company in the UK when a client came to us for help automating their tendering process. This was right around when Ariba and Commerce One were getting going, so it was an interesting time, and I spent the Christmas holiday reading up on it.

There really weren’t any automated sourcing tools at that time, so this was quite an interesting problem. Over the course of a year, we built a reverse auction tool called Easy Market, which came to be used by about 150 retailers around the world.

That led to us becoming aware of a problem that lent itself to machine learning. A typical retailer would have about 2,000 big suppliers it knew and sourced from, and another 8,000 to 10,000 suppliers that were lost in what we learned was called the tail. What these retailers would do is keep sourcing the same stuff from the same top 15-20 percent of suppliers, because they couldn’t see the rest of their spending.

So, they would keep bashing these people over and over again. But that doesn’t work forever. People either get tired of that and stop doing business with you, or the price bottoms and then comes back up as the weaker players go out.

No one was doing spend analysis, or if they were, they were getting a big consulting firm to come in with a team of people who’d manually classify everything. That would get them to where they could source about another 10 percent of their spending. They’d never get to the long tail.

A massive job

We thought that there must be a better way of doing it. Then we realized what a massive job it was. It could be anywhere from a million to five million invoices to categorize. It was quite an interesting problem, having to classify all this unstructured data. We sold Easy Market in 2006 and started working on it.

Because of my background in machine learning, I saw it as a classic situation where we could use something called a Naive Bayes classifier, which is based on Bayesian math from the 18th century. It’s a short, brilliant, very elegant statistical probability formula: if you have some knowledge about what something is, then you can use statistics from that knowledge base to predict what something else could be. That’s machine learning. You give the machine some data to learn from, and boom, it goes.
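
As a minimal sketch, assuming scikit-learn and invented invoice text and categories, the whole idea fits in a few lines:

```python
# Naive Bayes spend classification sketch. Bag-of-words counts feed Bayes'
# rule: P(category | words) is proportional to P(category) * the product of
# P(word | category). All training text and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_text = [
    "crystal geyser 500 ml plastic bottle",
    "evian sparkling water case of 12",
    "regional trucking pallet delivery",
    "afs logistics freight haulage",
    "lenovo thinkpad laptop 14in",
]
train_labels = ["Bottled Water", "Bottled Water", "Road Freight", "Road Freight", "IT Hardware"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(train_text, train_labels)

print(clf.predict(["pallet haulage to regional depot"]))   # expected: Road Freight
print(clf.predict_proba(["crystal geyser bottle"]).round(2))
```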

We thought, “that’s what we need,” but we needed quite a lot of data and we’d got none. So, initially, we had to do what the consultants were doing--we got a gang of people to say, “What’s that? That’s a bottle. That’s a computer,” and so forth, classifying the data manually.

The Mechanical Turk

To move into the automation of it, we had to build rules. A rule is, “if AFS Logistics, then Regional Trucking.” The problem is that’s not scalable because you’d have to build a rule for every single supplier—approximately 2.6 million rules. It’s really a chicken and egg problem. You can build the machine on its own, but it’s useless without the data. The manual bit let us build the rules engine so that we could start making it less manual. The plan was always to move into deep learning, but we weren’t ready, and neither was the technology. So, the evolution was manual until 2012.
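
A hand-written rule of that sort amounts to nothing more than a lookup. A toy version (supplier names and categories invented) makes the scaling problem obvious:

```python
# Rules-engine sketch: exact supplier-to-category lookups, one per supplier.
SUPPLIER_RULES = {
    "afs logistics": "Regional Trucking",
    "international business machines": "IT Services",
    "crystal geyser": "Bottled Water",
}

def classify_by_rule(supplier):
    """Return a category only if an exact rule exists; otherwise give up."""
    return SUPPLIER_RULES.get(supplier.strip().lower())

print(classify_by_rule("AFS Logistics"))       # hits a rule -> "Regional Trucking"
print(classify_by_rule("AFS Logistics GmbH"))  # no rule -> None, hence millions of rules
```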

We manually classified the data for about 200 engagements. It was sort of a Mechanical Turk operation, but to be fair, even if we’d had the data in 2010-2011, it would have been quite an expensive hobby. The mathematical processing capability that’s needed for a neural network is similar to what’s needed to render all the polygons that you see in a video game and make it look like real life. Up until very recently, that kind of computing power was prohibitively expensive.

Now we’ve got the cloud, we’ve got graphics cards on Amazon Web Services (AWS), and virtually unlimited computing power. With AWS, it probably costs $15 to rent what in effect would be a $10 million supercomputer for the amount of time we need it to do our processing.

The big breakthrough

The big breakthrough, though, was when we were able to turn all of our data into vectors using Word2vec. That meant that even if the machine hadn’t seen a thing before, if the words were similar, or sat a very short distance apart in vector space, it could infer their meaning. We were able to take all of the learning from the previous 200 engagements and use the data from the next 200 to train the machine in deep learning.
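
Here is a minimal sketch of that kind of nearest-neighbour inference, using plain NumPy. The vectors are random stand-ins for real Word2vec embeddings of lines the machine has already classified; the mechanics of “shortest distance wins” are the same.

```python
# Nearest-neighbour inference over embeddings: a new line inherits the
# category of the closest known vector. Vectors here are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)

known_vectors = {
    "Bottled Water": rng.normal(size=50),
    "Road Freight": rng.normal(size=50),
    "IT Hardware": rng.normal(size=50),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def infer_category(new_vector):
    """Pick the category whose known vector sits closest to the new line."""
    return max(known_vectors, key=lambda cat: cosine(new_vector, known_vectors[cat]))

# Simulate an unseen line that sits a short distance from a known one.
unseen = known_vectors["Bottled Water"] + rng.normal(scale=0.1, size=50)
print(infer_category(unseen))  # expected: Bottled Water
```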

Along the way, the machine has gotten smarter and faster. We can categorize a million invoices in about a week. Now that we’ve joined Coupa, we’re using data from hundreds of engagements to make our machine even better.  

What can the machine do now? We can take an invoice or purchase order. We can take purchasing card information, travel and expense data, data about anything that people have spent money on. We join all the data together so it’s consistent and we can work with it. Then we run it through three processes—two machine learning and one deep learning. The machine learning is primarily for normalization, and the deep learning for classification. It’s a blend because machine learning is quick, but neural networks can take hours.
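
A minimal sketch of that blend, with placeholder function names rather than the actual components, looks like a confidence cascade: the cheap stages run on every line, and the slow neural model only sees the lines they cannot handle confidently.

```python
# Blended pipeline sketch: normalize, classify quickly, and fall back to the
# deep model when confidence is low. Function names and the threshold are
# placeholders, not real product components.
CONFIDENCE_THRESHOLD = 0.9  # illustrative cut-off

def classify_spend(raw_lines, normalize, fast_classify, deep_classify):
    """Run every raw spend line through the cheap stages first."""
    results = []
    for raw in raw_lines:
        line = normalize(raw)                        # machine learning: normalization
        category, confidence = fast_classify(line)   # machine learning: quick classification
        if confidence < CONFIDENCE_THRESHOLD:
            category = deep_classify(line)           # deep learning: slower, handles context
        results.append((raw, category))
    return results

# Toy stand-ins just to show the control flow.
print(classify_spend(
    ["Plastic, 500 ml, Crystal Geyser "],
    normalize=lambda s: s.lower().strip(),
    fast_classify=lambda s: ("unknown", 0.4),  # low confidence...
    deep_classify=lambda s: "Bottled Water",   # ...so the neural model decides
))
```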

Every single engagement we do becomes knowledge within the system. This is the true definition of artificial intelligence. It combines both types of learning and it references all of the growing knowledge bases.


The tip of the tail

What this means is that we can go all the way down to almost the tip of the tail because we don’t need a rule for it. After running it across a few different knowledge bases, we can classify 85-90 percent of spending using machine learning. It’s that last bit that requires deep learning. With that you may get to 95 percent. Over time, with more learning, you may get to 98 percent or 99 percent.

This changes things. You’re going to find all the fragments--loads of them--that people have been dumping into general ledger (GL) accounts. You can track what’s called maverick spend, people buying things on credit cards that you actually have contracts for.

You can roll up all those fragments and take advantage of economies of scale. You can cut down the number of suppliers. You can find all that IBM spending that goes by all those other names, and roll it all into your sourcing program. You can find IBM spending that goes through value added resellers and add that in.
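
As a minimal sketch of that roll-up, assuming pandas and invented figures, once every line carries a normalized supplier and a category the consolidation itself is a one-liner:

```python
# Spend roll-up sketch: group fragmented lines by normalized supplier and
# category to see total addressable spend. All figures are invented.
import pandas as pd

spend = pd.DataFrame({
    "supplier_raw": ["IBM", "IBM France", "Intl Bus Machines", "Acme VAR Ltd"],
    "supplier":     ["IBM", "IBM", "IBM", "IBM"],   # after normalization and VAR mapping
    "category":     ["IT Services"] * 4,
    "amount":       [120000, 45000, 30500, 80000],
})

rollup = spend.groupby(["supplier", "category"], as_index=False)["amount"].sum()
print(rollup)
```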

These are all the classic spend optimization activities, but now you’re looking at the whole picture. It’s what procurement people have always been trying to do. With artificial intelligence, now they can.

New levers

They can also find efficiencies that don’t necessarily require grinding your suppliers into the ground. Cost used to be the only lever, and negotiating discounts is still going to be part of it, but since we started, the world has begun to realize that there’s a limit to that.

Ethical sourcing has become quite prevalent, with a focus on doing things properly. No company wants to read about an accident at a sweatshop in Bangladesh and find out, along with the rest of the world, that it was buying from that factory. Not only is your supply chain done in; your brand is damaged, and maybe you get sued as well.

Risk and compliance have become more important considerations. Maybe some company is providing a unique and vital piece of your supply chain, but you don’t know it until there’s an earthquake and the factory is wiped out. That’s the kind of thing that can happen when you can’t see into the tail. Having sight of more things lets you see what you’re buying (and from whom) so you can apply your policies equally across the board.

Changing the game

Does this put the consultants out of business? No, but they have to change their game. They’re now interpreting the data and advising you, which is what they’re really good at, but previously they had to do all that manual work to get to it. Now they can provide more valuable services.

That’s about it, really, for the machine. The techniques are the same as we’ve known about for years, but now we have the tools, technology and data to make it work. The rest is just making it more clever with more data.

The real power is in what people do with it. Now that you can see just about all of the tail without ever having to do this job of normalizing 2.6 million rows of data again, it opens up an enormous number of possibilities.

There are a very limited number of people who can do this kind of work, and up until this point spend analytics has always been sort of a niche deep in the back office. I don't know that we would have found it had we not stumbled upon it. But zoom out, and total visibility via analytics is the keystone to addressing the issues that all businesses face across the globe. Utilising these technologies will help companies save money and act more effectively and responsibly. That’s very exciting for us, because we know we can make a massive difference.

Paddy Lawton is General Manager, Coupa Spend Analytics. He is the founder of London-based Spend360, which was acquired by Coupa in January of 2017. Prior to that, he was the CEO of Digital Union. He holds a BSc in Computing and Software Engineering from Oxford Brookes University. To hear more from Paddy, download our webinar, “Cutting Through the Noise: A Pragmatic Look at Artificial Intelligence’s Impact on Spend Management.”