How Coupa Defines Artificial Intelligence (AI)
Even though AI is a common term and has been around for a long time, there is a lot of confusion around it.
Some think it is just a buzzword. I don’t think it is. However, it does have some different meanings and those meanings can get tangled up quickly. I’m going to try to untangle the term, and let you in on how we are thinking about it at Coupa.
To understand the term AI, we should first explain Analytics.
Analytics, as a term, got started in 2006/7 with the publication of the HBR article and subsequent book, both titled Competing on Analytics. The article and book caught the imagination of the business community. Leaders realized that if they weren’t taking advantage of the data available to them, their organizations would fall behind. And, people within organizations realized that if they didn’t become more data-driven, they were risking their careers.
By 2013, the term was widespread. Most organizations were convinced of the value of analytics. Universities were quickly re-tooling to create analytics programs. Analytics professionals were rebranding themselves as data scientists. And, it became normal to use the term “machine learning” in a business setting.
If there was one complaint I had about the term analytics at the time, it was that a lot of organizations took it to mean simply reporting or visualizing data. Even though analytics was heavily associated with reporting, it was quickly agreed that analytics was more than that and could really be broken down into three distinct categories:
- Descriptive Analytics — to describe what happened or to understand new patterns and relationships in the data
- Predictive Analytics — to predict what will happen
- Prescriptive Analytics — to prescribe what to do, or to make a decision
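To make the three categories concrete, here is a toy sketch of my own (the data and rules are hypothetical, not from the post), applying each kind of analytics to a small monthly-sales series:

```python
# Hypothetical monthly sales figures for illustration only.
monthly_sales = [100, 110, 120, 130, 140, 150]

# Descriptive: what happened? Summarize the data we already have.
average = sum(monthly_sales) / len(monthly_sales)

# Predictive: what will happen? A naive extrapolation of the recent trend.
trend = monthly_sales[-1] - monthly_sales[-2]
forecast = monthly_sales[-1] + trend

# Prescriptive: what should we do? A simple reorder rule based on the forecast.
stock_on_hand = 40  # assumed current inventory
reorder_qty = max(0, forecast - stock_on_hand)

print(average, forecast, reorder_qty)  # 125.0 160 120
```

Real descriptive, predictive, and prescriptive systems are far more sophisticated, but the division of labor is the same: summarize, forecast, decide.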
But, around 2012, there was a major breakthrough in image recognition. For the first time, machines could recognize images better than people. This was driven by innovation in deep learning (neural network) algorithms, access to data, and new computing power.
This breakthrough generated a lot of excitement. And, since deep learning algorithms work the way we think the brain works, and recognizing images is a very human activity, the term AI was re-born. I say re-born because the term first appeared in 1956 and then went in and out of style over the years as it failed to live up to its promises. The “out” periods became known as “AI Winters.”
This time, researchers felt deep learning was the path to making machines capable of thinking like humans. And, the research took off as a result. Besides images, this research was applied to understanding video, translation, and even self-driving cars.
The research also spawned a widespread concern that AI would quickly make many jobs obsolete or, worse, make humans obsolete. With the latter in mind, people started to worry about Ray Kurzweil’s idea of the singularity: the point at which machine intelligence surpasses human intelligence and then continues to grow beyond human control.
Around 2017, while the research that started in 2012 was still going strong, AI also seemed to replace the term “Analytics” in business. That is, AI started to appear in the annual reports of many traditional businesses, and software and consulting firms began using AI to describe their solutions.
I think this is where the confusion set in — the single term AI was being used to describe two different things. We think there are two basic definitions of AI.
In the first definition, AI means machines can think, learn, and reason like humans. This is what I think researchers had in mind when they used the term back in 2012. A more exact term for this is Artificial General Intelligence or AGI.
With this first definition, it makes sense for groups like OpenAI to think about safe AGI, or for people to debate what consciousness is and how we would know if a machine had it. Although I think the hype around AGI died down some in 2020, it is still going strong. I tend to be skeptical that we are that close to true AGI and have appreciated Melanie Mitchell’s book and Rodney Brooks’s blog posts on the subject.
But, the business community is using a different definition. In this second definition, AI is the use of sophisticated algorithms, data, and computing power to solve specific problems. These algorithms can be very complex and solve hard problems, but they are in no way general — they simply do the task they are programmed to do and no more.
This is the definition of AI we use at Coupa. This might also be called Practical AI or Enterprise AI.
At Coupa, we use sophisticated algorithms to help detect new patterns (e.g., fraud), read documents (e.g., invoices or receipts), make interesting predictions (e.g., when a shipment will arrive, what demand will be, or how much cash you will have on-hand), or make a complicated decision (e.g., how to optimize flow through the supply chain, route trucks, or set inventory levels).
To be clear, this definition is how most people in industry are using the term. And, there is cutting edge work being done in this area as well. Two great books highlighting this work are Prediction Machines and Competing in the Age of AI.
A cynical view might suggest that, for business use, the term AI has simply replaced the term Analytics. I think there is more to it than that. Analytics has always been strongly associated with visualization and reporting. I like the term AI because it gets away from mere visualization and implies much more. It makes business leaders think harder about applying algorithms to re-think processes, create new ways of working, and make better decisions. It also implies a mindset: you can’t just build a few reports and think you are done; you have to continually look for ways to insert AI to improve.
At Coupa we see the long-term value in practical AI. It makes our products stronger and allows our customers to harness the power of Community Intelligence.