As mentioned by @wchen, Juji uses machine learning to measure how semantically close your input is to one of your training data samples (i.e., the examples you put in).
Only when the semantic similarity exceeds a certain threshold does the system automatically display the corresponding answer. When the similarity falls below the threshold, the system suggests possible matches instead.
For example, your training example “last paycheck” may mean many things, and your input “see my last paycheck” matched your training examples only marginally, not exceeding the threshold required to return an answer automatically. Our system currently takes a relatively conservative approach that favors a high degree of matching. We believe this approach scales better, especially as the system grows with more knowledge. For example, in the future your chatbot may support actions such as “mail my last paycheck” or “missed last paycheck”, and you want the matches to be precise so the right answers are retrieved. See my recent blog post on this point about ensuring tighter semantic matches vs. keyword-only matches.
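To make the threshold behavior concrete, here is a minimal sketch of how threshold-based semantic matching can work. Juji’s actual model, embeddings, and threshold values are internal; the `cosine_similarity` function, the two threshold constants, and the `match_question` helper below are all illustrative assumptions, not Juji’s implementation.

```python
# Hypothetical sketch of threshold-based semantic matching.
# The thresholds and the embedding representation are assumptions
# for illustration only, not Juji's actual values.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

ANSWER_THRESHOLD = 0.85   # assumed: auto-answer above this score
SUGGEST_THRESHOLD = 0.60  # assumed: offer suggestions above this score

def match_question(user_embedding, training_examples):
    """training_examples: list of (embedding, answer) pairs.

    Returns ("answer", answer) when the best match clears the
    high threshold, ("suggest", [answers]) for marginal matches,
    and ("no_match", None) otherwise.
    """
    scored = sorted(
        ((cosine_similarity(user_embedding, emb), ans)
         for emb, ans in training_examples),
        reverse=True,
    )
    best_score, best_answer = scored[0]
    if best_score >= ANSWER_THRESHOLD:
        return ("answer", best_answer)
    if best_score >= SUGGEST_THRESHOLD:
        return ("suggest", [a for s, a in scored if s >= SUGGEST_THRESHOLD])
    return ("no_match", None)
```

An input like “see my last paycheck” would fall into the middle band in this sketch: similar enough to suggest, but not similar enough to answer automatically.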
On Wenxi’s second point: the system currently also requires a human to verify the answer to ensure the coherence of your knowledge base, because users’ choices or system suggestions may be unintended or wrong. Having a person (e.g., a human agent) verify an answer before updating the knowledge base is safer and goes a long way.