- October 01, 2015
- Rajiv Ramachandran
- IT & Technology
In his recent blog post, “No User Interface,” Coupa CEO Rob Bernshteyn introduces the idea of intelligent applications that understand our behavior, perform work on our behalf, and even make intelligent suggestions. He uses the example of how a next-generation smart business expense application will auto-populate expense reports with details like time of day, geo-location and itinerary data. The human traveler will then only have to use the interface to review and approve the reports.
I agree that the interfaces of today will soon become obsolete, though applications will not be able to completely read our minds in the foreseeable future. So, there still has to be a way to communicate with applications, which is all an interface is.
But we won’t be using our fingers to communicate via keyboards and mice. Instead we’ll be using a more direct and flexible means that is so intuitive it doesn’t even feel like an interface: Our voices. Voice recognition technology is advancing rapidly and in the not too distant future we will be able to communicate with our applications simply by “conversing” with them.
At the same time, advances in analytics will make the applications we’ll be conversing with so much smarter. Massive high-speed computing power will let them read, analyze and deliver insights from huge volumes of unstructured data in seconds.
We will speak to these applications and tell them what we want, and they will offer us not only relevant information but insights beyond what we can muster with the comparatively limited computing power of our brains. I think of it like IBM’s Watson married to Apple’s Siri.
If “Watsiri” as the latest celebrity power couple sounds far-fetched, think again. Apple and IBM have already formed an alliance and co-developed over a dozen mobile apps combining Apple’s design sense with IBM’s enterprise savvy. It seems only a matter of time before rising stars Siri and Watson share double billing in one app.
The voice of the future
Using voice to interact with software has been possible for decades, but it was cumbersome and impractical.
Apple’s Siri was the first user-friendly voice interface that gained mass adoption by performing simple tasks for iPhone users, accompanied by automated attempts at witty repartee. Now with Amazon’s Echo standalone smart speaker, we have Alexa, a voice-activated personal assistant the company says can answer questions, read the news, find and play music, write your shopping list and more.
Not only are these popular applications making voice-based interaction easy, they are making it part of our everyday lives. Notice that these technologies have human names. If you watch the ad from Amazon, the message is that Alexa is like part of your extended family. That’s where we’re going - talking to an application like you would to another person. People are already doing it, and the technology will get better and better as it grows up.
Data is now artistic
While voice technology has been maturing to adolescence over the last few years, big data computing has also been outgrowing its ugly duckling phase. The world of data is moving from small to big, but it’s also moving from mathematical to artistic, making information and analysis easier and easier to digest.
We’ve gone from mind-numbing columnar tables that you needed sophisticated programming to make sense of, to maps, graphs and other visualizations that people can configure themselves in minutes with a minimum of training. We’re getting to a place where users can now get insight into data without writing code or even going into the database. With these self-service analytics tools, data-driven decision-making is blazing its own trail into the mainstream.
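To make the idea of self-service analysis concrete, here is a minimal sketch (the records and category names are invented for illustration) of the kind of one-step aggregation these tools perform for a user who never touches SQL or the underlying database:

```python
from collections import defaultdict

# Hypothetical expense records of the kind a self-service analytics
# tool would let a user slice and summarize without writing code.
expenses = [
    {"category": "Travel",   "amount": 1200.0},
    {"category": "Travel",   "amount": 800.0},
    {"category": "Software", "amount": 499.0},
    {"category": "Software", "amount": 99.0},
    {"category": "Office",   "amount": 250.0},
]

def total_by_category(records):
    """Group records by category and sum the amounts in each group."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec["category"]] += rec["amount"]
    return dict(totals)

print(total_by_category(expenses))
# {'Travel': 2000.0, 'Software': 598.0, 'Office': 250.0}
```

A visualization layer on top of an aggregation like this is what turns a columnar table into a chart a non-programmer can configure in minutes.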
The birth of the automated advisor
Given where we are with both these fields of computing, it’s only a matter of time before these two technologies merge. When that happens, we’ll witness the birth of a powerful, voice-controlled, data-driven class of applications I call ‘personal advisors.’
These are an evolution from the personal assistants like Siri, Alexa and Microsoft’s Cortana. These assistants largely do what we tell them to do and nothing more. Personal advisors will be capable of learning about our wants, needs and habits and scanning data to proactively offer us smarter ways to do things.
What we’re really looking for is not just a less intrusive interface. We’re looking for systems with intelligence. Today, we are all drinking information from a fire hose. There’s so much data out there that no human can assimilate, analyze and derive insight from all of it. But systems are rapidly becoming capable of doing that.
They can understand our voices, interpret our instructions, look at data, find patterns, and come back and have a conversation about their findings. If the results don’t seem exactly right, we can ask more questions or refine our instructions.
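The ask-and-refine loop described above can be sketched in a few lines. This is a toy model, not a real voice system: the “advisor” does naive keyword matching over an invented sales dataset, and a follow-up question simply narrows the same query.

```python
# Illustrative data only; region names and figures are made up.
SALES = [
    {"region": "EMEA", "quarter": "Q1", "revenue": 120},
    {"region": "EMEA", "quarter": "Q2", "revenue": 95},
    {"region": "APAC", "quarter": "Q1", "revenue": 80},
    {"region": "APAC", "quarter": "Q2", "revenue": 140},
]

def answer(question):
    """Very naive intent parsing: filter by any known region
    mentioned in the question, then total the revenue."""
    rows = SALES
    for region in {"EMEA", "APAC"}:
        if region.lower() in question.lower():
            rows = [r for r in rows if r["region"] == region]
    return sum(r["revenue"] for r in rows)

print(answer("What was total revenue?"))       # 435
print(answer("Okay, just revenue for EMEA?"))  # 215
```

A real personal advisor would replace the keyword match with speech recognition and natural language understanding, but the conversational shape — broad question, inspect the answer, refine — is the same.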
The computer-human coaching team
If we look at the analytics that are being applied to sports today, we can see how this technology-as-personal-advisor scenario might play out. We have world champion tennis players who are working hard to condition themselves and play their game every day. They have a certain amount of knowledge and skill, and they have top-notch coaches at their disposal. But human coaches can only tell you so many things.
Analytic systems can analyze scores, video footage and player data and deliver insights such as: “Over the past 10 years, you’ve missed an unusually high percentage of backhands when playing on clay courts.” They might even make suggestions, such as changing your shoes, your racquet, your footwork or your path to the ball.
Armed with this information, the player and coach might go back and look at the relevant footage and statistics, apply their own knowledge and insights and map out a plan for improvement.
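The kind of insight in the tennis example boils down to computing a rate per category and flagging outliers. Here is a minimal sketch with invented shot records (surfaces and outcomes are hypothetical), flagging any surface where the backhand miss rate is well above the player’s overall rate:

```python
# Hypothetical backhand shot log: (surface, was_miss).
shots = [
    ("clay", True), ("clay", True), ("clay", False), ("clay", True),
    ("grass", False), ("grass", True), ("grass", False),
    ("hard", False), ("hard", False), ("hard", True), ("hard", False),
]

def miss_rates(records):
    """Return the fraction of missed shots per surface."""
    stats = {}  # surface -> (shot count, miss count)
    for surface, missed in records:
        count, misses = stats.get(surface, (0, 0))
        stats[surface] = (count + 1, misses + int(missed))
    return {s: m / n for s, (n, m) in stats.items()}

rates = miss_rates(shots)
overall = sum(missed for _, missed in shots) / len(shots)

# Flag surfaces where the miss rate is at least 1.5x the overall rate
# (the threshold is an arbitrary choice for illustration).
flagged = [s for s, r in rates.items() if r > 1.5 * overall]
print(flagged)  # ['clay']
```

A production system would work from millions of tracked shots and control for opponent and match conditions, but the statistical core — rate by category, compare to baseline — is this simple.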
With supercomputers like IBM Watson capable of performing billions of computations and matching patterns at scale, and natural language processing applications like Siri and Alexa able to understand an ever-wider range of words and accents, there’s potential for personal advisors in any number of areas.
Current user interfaces require us to “speak” to our applications in their own language. This next generation of voice interfaces will be easier and more intuitive because we’ll be able to speak to applications in our own language. Couple that with exponential advances in data interpretation and we may soon be entering a new era of harmony and partnership between humans and machines.
All trademarks and registered trademarks are the property of their respective owners.