Amazon Alexa seems to be coming of age at last. The Amazon digital assistant is picking up skills that let it express emotion, making it sound more human-like than ever before. In other words, Alexa is maturing and learning new abilities as it grows.

Amazon made the announcement alongside the launch of new tools that developers can use to instil emotion into the way Alexa speaks. These include making the assistant sound excited or disappointed, with the intensity of the emotion varying depending on what the assistant is expressing at the time.

Amazon announced two emotional capabilities that developers can program Alexa to respond with: happy/excited and disappointed/empathetic. The retail giant said Alexa's new emotional expressions have been built using Neural TTS technology, and that they apply to two speaking styles of the digital assistant in the United States: news and music.

Users in Australia will have the option to invoke a news speaking style unique to their region. With the news and music speaking styles, Alexa adjusts different aspects of its voice to suit the content being delivered. That way, Alexa knows which words to emphasise, where to pause and for how long, and so on.
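For developers, the speaking styles are applied through Alexa's SSML markup via the `amazon:domain` tag. A minimal sketch, assuming a skill response that already returns SSML (the sample sentence is illustrative):

```xml
<speak>
    <!-- Request the conversational "news" speaking style for this passage -->
    <amazon:domain name="news">
        In tonight's headlines, local engineers unveiled a new battery design.
    </amazon:domain>
</speak>
```

Swapping `name="news"` for `name="music"` would instead apply the style tuned for announcing songs and artists.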

Elaborating further, Amazon said there are three preset intensities at which Alexa can express its emotions: high, medium, and low. So, if a user asks for the result of a game and Alexa finds the user's favourite team has lost, there will be a distinct note of disappointment in Alexa's voice as it answers the query.
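In SSML, the emotion and its intensity are expressed with the `amazon:emotion` tag, whose `name` attribute takes `excited` or `disappointed` and whose `intensity` attribute takes `low`, `medium`, or `high`. A hedged sketch of the sports-score scenario (the wording of the response is hypothetical):

```xml
<speak>
    <!-- Deliver bad news for the user's favourite team with strong disappointment -->
    <amazon:emotion name="disappointed" intensity="high">
        I'm sorry, your team lost tonight's game three to one.
    </amazon:emotion>
</speak>
```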

Similarly, if the team has won, Alexa would answer in a joyous tone. Amazon justified the introduction of emotion in Alexa's voice by claiming that such a style of speaking sounds far more natural and therefore makes it easier for people to relate to the assistant. In contrast, anything spoken in a flat tone sounds machine-like, even when it is factually correct.

Amazon also said blind listening tests it conducted found the news speaking style was rated 31 percent more natural than the standard voice. Similarly, the music style was rated 84 percent more natural than the standard voice.
