Question Variations

I set up the following question “I want to see my last paycheck”
A question variation was configured with “last paycheck”
In the chat dialogue I entered “see may last paycheck”
The system then returned a message asking me to select which of four topics I wanted to cover. One of those topics was “I want to see my last paycheck”

Questions:

  1. “see my last paycheck” is in the chatbot and is configured as part of the question variations, so why did the system not know I wanted to see my last paycheck?
  2. When I select option 1, does the system automatically update (“learn”) the question variations so that the next time I come through and say “See My Paycheck” it knows to show me the last paycheck?

Hi @Sunil_Vatave,

Containing the phrase does not mean it will be matched with high confidence. This is a case where the chatbot was not confident enough to give the answer right away, so it gave the user a choice to confirm. This also helps disambiguate the user’s intent and produces a more relevant answer. In this particular case it may be a bit overdone, but on average this technique seems to make the conversation more satisfying. Let us know if you have any suggestions.

Once a question gets selected from the choices in real chat (not in preview), the system does remember that selection. However, the system does not update the Q&As automatically, because sometimes the chatbot designer wants to answer the question differently. What the system does instead is show the question as unanswered on the Q&A board, and offer the question that users selected most often as a suggestion. The designer can submit/update the Q&A right away without manually inputting any answer if they agree with the suggestion, and the unanswered question will become a variation of that existing question.
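The flow above (tally user selections for an unanswered question, then surface the most-picked existing question as the designer-facing suggestion) could be sketched roughly as follows. This is purely illustrative — the function name and data shapes are hypothetical, not Juji’s actual API:

```python
from collections import Counter

def suggest_variation(unanswered_text, selections):
    """Hypothetical sketch of the suggestion step.

    unanswered_text: the user input that got no confident match.
    selections: list of question IDs that real users picked from the
    clarification choices for this input.

    Returns the most frequently selected question as a suggestion the
    designer can approve, turning unanswered_text into a variation.
    """
    if not selections:
        return None  # nothing to suggest yet
    top_question, votes = Counter(selections).most_common(1)[0]
    return {
        "unanswered": unanswered_text,
        "suggested_question": top_question,
        "votes": votes,
    }

print(suggest_variation(
    "see may last paycheck",
    ["q_last_paycheck", "q_last_paycheck", "q_benefits"],
))
```

If the designer accepts the suggestion, “see may last paycheck” would simply be appended to the variation list of `q_last_paycheck`; no manual answer entry is needed.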

@Sunil_Vatave
As mentioned by @wchen, Juji uses machine learning to measure how semantically close your input is to one of your training data samples (i.e., the examples you put in).

Only when the semantic similarity exceeds a certain threshold does the system automatically display the corresponding answer. When the similarity is below the threshold, it suggests possible matches instead.

For example, your training example “last paycheck” may mean many things, and your input “see my last paycheck” matched your training examples only marginally and didn’t exceed the threshold to get an answer automatically. Our system currently takes a relatively conservative approach to favor a high degree of matching. We believe this approach scales better, especially as the system grows with more knowledge. For example, in the future your chatbot may support actions such as “mail my last paycheck” or “missed last paycheck”, and you want the matches to be precise to retrieve the right answers. See my recent blog on this point about ensuring tighter semantic matches vs. only keyword matches.
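To make the threshold idea concrete, here is a minimal sketch of the decision logic. Juji’s actual similarity model and threshold value are not public, so this stand-in uses a simple bag-of-words cosine similarity and an assumed threshold of 0.9 — real semantic matching would use learned embeddings:

```python
import math
from collections import Counter

THRESHOLD = 0.9  # assumed value, for illustration only

def cosine(a, b):
    """Bag-of-words cosine similarity between two phrases (a stand-in
    for a real semantic similarity model)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

def match(user_input, variations):
    """Answer directly above the threshold; otherwise offer choices."""
    scored = sorted(((cosine(user_input, v), v) for v in variations),
                    reverse=True)
    best_score, best = scored[0]
    if best_score >= THRESHOLD:
        return ("answer", best)
    return ("clarify", [v for _, v in scored])

# Marginal overlap -> below threshold -> ask the user to pick a topic
print(match("see may last paycheck",
            ["I want to see my last paycheck", "last paycheck"]))
```

With this toy metric, “see may last paycheck” scores well below 0.9 against both variations, so the bot falls back to a clarification prompt — the same behavior described in the thread.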

On Wenxi’s second point, the system currently also requires a human to verify the answer to ensure the coherence of your knowledge base, because users’ choices or system suggestions may be unintended or wrong. Having a person (e.g., a human agent) verify the answer before updating the knowledge base is safer and goes a long way.