San Francisco – Google has claimed that its new chatbot, called Meena, can chat about virtually anything and is better at it than similar conversational agents.

In a paper published on pre-print repository arXiv.org, Google scientists showed that Meena can conduct conversations that are more sensible and specific than existing state-of-the-art chatbots.

To put Meena, an end-to-end trained neural conversational model, to the test, the scientists used a new human evaluation metric for open-domain chatbots, called Sensibleness and Specificity Average (SSA), which captures basic but important attributes of human conversation.
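As the name suggests, SSA averages two quantities judged by human raters: the fraction of a chatbot's responses that are sensible, and the fraction that are specific to the context. A minimal sketch of that calculation, using invented labels for illustration (this is not Google's evaluation code):

```python
def ssa(labels):
    """Compute Sensibleness and Specificity Average.

    labels: list of (sensible, specific) boolean pairs, one per
    chatbot response, as judged by human raters (invented here).
    """
    n = len(labels)
    sensibleness = sum(s for s, _ in labels) / n  # fraction judged sensible
    specificity = sum(p for _, p in labels) / n   # fraction judged specific
    return (sensibleness + specificity) / 2

# Four hypothetical rated responses: 3 sensible, 2 specific.
labels = [(True, True), (True, False), (False, False), (True, True)]
print(ssa(labels))  # sensibleness 0.75, specificity 0.50 -> SSA 0.625
```

A response can be sensible without being specific (e.g. "I don't know"), which is why the two attributes are scored separately before averaging.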

“Remarkably, we demonstrate that perplexity, an automatic metric that is readily available to any neural conversational model, highly correlates with SSA,” study authors Daniel Adiwardana, Senior Research Engineer, and Thang Luong, Senior Research Scientist, Google Research, Brain Team, said in a blog post this week.

Modern chatbots tend to be highly specialised — they perform well as long as users do not stray too far from their expected usage.

To better handle a wide variety of conversational topics, open-domain dialogue research explores a complementary approach: developing a chatbot that is not specialised but can still chat about virtually anything a user wants.

Besides being a fascinating research problem, such a conversational agent could lead to many interesting applications, such as further humanising computer interactions, improving foreign language practice, and making relatable interactive movie and videogame characters.

However, current open-domain chatbots have a critical flaw — they often don’t make sense. They sometimes say things that are inconsistent with what has been said so far, or lack common sense and basic knowledge about the world.

Moreover, chatbots often give responses that are not specific to the current context.

For example, “I don’t know” is a sensible response to any question, but it is not specific. Current chatbots produce such responses much more often than people do, because they cover many possible user inputs.

Meena, on the other hand, learns to respond sensibly to a given conversational context, according to the paper titled “Towards a Human-like Open-Domain Chatbot”.

“The training objective is to minimise perplexity, the uncertainty of predicting the next token (in this case, the next word in a conversation),” Adiwardana and Luong said in the blog post.
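In other words, perplexity is the exponential of the average negative log-likelihood the model assigns to each observed next token; lower perplexity means the model is less "surprised" by real conversations. A minimal illustrative sketch (not Meena's actual code, and the probabilities below are invented):

```python
import math

def perplexity(token_probs):
    """token_probs: the model's predicted probability for each
    actual next token in a held-out conversation (invented here)."""
    # Average negative log-likelihood per token.
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Probabilities assigned to three observed tokens.
probs = [0.5, 0.25, 0.125]
print(round(perplexity(probs), 6))  # 4.0 -- as confusing as a uniform 4-way choice
```

Intuitively, a perplexity of k means the model is, on average, as uncertain about the next word as if it were choosing uniformly among k options, which is why driving it down tends to make responses more sensible.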

“At its heart lies the Evolved Transformer seq2seq architecture, a Transformer architecture discovered by evolutionary neural architecture search to improve perplexity,” they added. (IANS)