Latency and the Case for Conversational B2B Software

One of the more popular buzzwords floating around the IT world these days is “conversational [blank].” The term takes its cue from the convergence of ambient computing, AI, and voice interfaces. The simplest example is talking to a tool like Amazon’s Echo/Alexa.

To date, the use cases for these types of applications have been consumer-oriented: adding items to a shopping list, doing a quick weather search, and so on. These are simple yet useful ways to leverage a conversational interface. But what about more complex tasks? Systems like Alexa or Google Home are often simply not programmed to pull together data from anywhere but the web or your limited home network – they can do a web search for you, or connect to pre-integrated smart devices (such as turning lights or TVs on and off).

In addition, when we think about conversational uses outside the home, there is an issue of latency on two fronts. The first is the typical latency associated with processing data and returning a response. But there’s also what I’d call real-world latency with mobile conversational front ends: users typically need to have a specific application open to perform any meaningful task, which can sometimes take multiple steps. And when a device gets older, opening apps and getting a response from a voice command can take what seems like ages.

Now, when we think about building conversational front ends into B2B applications, what happens to these latency and performance considerations? In many cases, they are actually diminished.

Well, when we think of a consumer-oriented conversational interaction with an AI or broad-use application like Alexa, there’s a huge array of potential commands, questions, and phrases – and as such, a far greater data source and volume are required to answer these questions satisfactorily. But with B2B operations, we typically know about 80% of the actions, use cases, and questions a user might encounter in a given day. For example, a sales rep might always ask “when is my next call?” or “what is my biggest opportunity in forecast?”
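To see why a small, known intent space matters, here is a minimal sketch of routing those predictable questions. The intent names and phrase catalog are hypothetical illustrations, not Sugar’s actual voice API:

```python
# A minimal sketch of routing a small, known set of B2B voice intents.
# The intent names and phrase catalog here are hypothetical examples.

def classify_intent(utterance: str) -> str:
    """Match an utterance against a small catalog of known phrasings."""
    text = utterance.lower()
    catalog = {
        "next_call": ("next call", "upcoming call"),
        "top_opportunity": ("biggest opportunity", "top opportunity in forecast"),
    }
    for intent, phrases in catalog.items():
        if any(phrase in text for phrase in phrases):
            return intent
    # Only the long tail (the other ~20%) needs open-ended handling.
    return "unknown"
```

Because the catalog covers most of what a user will actually say, the common path is a cheap in-memory lookup rather than a broad AI pipeline.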

Also, in a B2B scenario, we have a captive and controlled pool of data from which to retrieve answers and insights more quickly. In Sugar’s case, the CRM database holds a wealth of valuable data, and by pairing machine learning overlays with voice commands we can give users access to insights they might otherwise have to perform multiple searches – or possibly build a report – to find. So in a B2B scenario, conversational UIs drive productivity and improve the user experience in many ways, driving down overall real-world latency compared to traditional usage paradigms. And when a user, such as a customer service representative, has an application like a CRM open on their desktop all day, there is very little latency, fast access to answers, and an ease of use that can drive first-call resolution and increase overall customer satisfaction.
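To make the captive-data-pool point concrete, here is a sketch of answering “when is my next call?” entirely from local data, with no web search involved. The in-memory list is a hypothetical stand-in; a real implementation would query the CRM database:

```python
from datetime import datetime
from typing import Optional

# Hypothetical in-memory stand-in for the CRM's captive data pool;
# a real implementation would query the CRM database instead.
CALLS = [
    {"contact": "Acme Corp", "time": datetime(2018, 6, 1, 9, 30)},
    {"contact": "Globex", "time": datetime(2018, 6, 1, 14, 0)},
]

def next_call(now: datetime) -> Optional[dict]:
    """Answer "when is my next call?" straight from local CRM data."""
    upcoming = [call for call in CALLS if call["time"] > now]
    return min(upcoming, key=lambda call: call["time"]) if upcoming else None
```

Because the data source is bounded and already at hand, the answer comes back in one step instead of a search-and-filter session.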

In addition, when we consider how voice-enabled front ends allow us to enter data into a CRM much more quickly, the value of such usage models increases further.
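As an illustration of voice-driven data entry, a dictated command can be parsed into structured CRM fields rather than typed into a multi-step form. The command grammar below is a hypothetical sketch, not Sugar’s actual parser:

```python
import re
from typing import Optional

# Hypothetical sketch: turn a dictated phrase into structured CRM fields,
# skipping the multi-step form a user would otherwise click through.
def parse_log_call(utterance: str) -> Optional[dict]:
    match = re.match(r"log a call with (?P<account>.+) about (?P<topic>.+)",
                     utterance.lower())
    return match.groupdict() if match else None
```

A single spoken sentence yields a ready-to-save record, which is where much of the data-entry speedup comes from.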

For these reasons and more, the SugarCRM engineering team is constantly thinking about and developing new ways to give more back to every Sugar user for every action they take in the system. Stay tuned for some cool announcements, especially at SugarCon 2018…