Long-time Slashdot reader Ammalgam shares a blog post from Unirobotica.com about a new and more accurate facial recognition technology developed by Fujitsu Laboratories:

Fujitsu wanted a way to better track emotions, even though other companies already offer emotion-recognition tools for facial expressions. Microsoft, for example, is one such company, but its AI tool is limited to eight core states (anger, contempt, fear, disgust, happiness, sadness, surprise, and neutral) and achieves an accuracy rate of just 60%.

The current technology works by identifying action units (AUs), which are essentially specific facial muscle movements. For example, if the technology detects raised cheeks and lip corners pulled up, the AI concludes that the person it is analyzing is happy. Fujitsu's approach, however, goes much further.
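The AU-to-emotion logic described above can be sketched as a simple rule lookup. The AU numbers below are standard Facial Action Coding System (FACS) codes (AU6 is the cheek raiser, AU12 the lip corner puller); the exact rules any vendor uses are not public, so this mapping is an illustrative assumption, not Fujitsu's or Microsoft's method.

```python
# Illustrative rule-based mapping from detected FACS action units to emotions.
# AU6 (cheek raiser) + AU12 (lip corner puller) is the combination the article
# describes for happiness; the other rules are common textbook examples.
EMOTION_RULES = {
    frozenset({6, 12}): "happiness",       # cheeks raised + lip corners up
    frozenset({1, 4, 15}): "sadness",      # inner brow raiser + brow lowerer + lip corner depressor
    frozenset({1, 2, 5, 26}): "surprise",  # brow raisers + upper lid raiser + jaw drop
}

def classify(detected_aus):
    """Return the emotion whose rule is fully contained in the detected AUs."""
    detected = frozenset(detected_aus)
    for rule, emotion in EMOTION_RULES.items():
        if rule <= detected:  # every AU the rule requires was detected
            return emotion
    return "neutral"

print(classify({6, 12}))  # cheeks raised and lip corners pulled up -> happiness
```

In practice a real system scores AU intensities rather than binary presence, but the containment check captures the basic idea.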

According to Fujitsu, instead of collecting more images to train the AI, researchers at its lab built a tool to extract more data from a single picture. This is done through what it calls a normalization process, in which pictures taken from an angle are converted into images that resemble a frontal shot. After the technology appropriately enlarges, reduces, or rotates the newly created frontal picture, the AI detects the AUs more easily and more accurately.
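Fujitsu has not published the details of its normalization step, but the general idea of rotating and rescaling a face toward a canonical frontal layout can be sketched with a similarity transform. The template coordinates and the use of eye centers as landmarks are assumptions for illustration only.

```python
import numpy as np

# Assumed canonical frontal template: where the eyes should land after
# normalization (coordinates are arbitrary for this sketch).
CANONICAL_LEFT_EYE = np.array([30.0, 40.0])
CANONICAL_RIGHT_EYE = np.array([70.0, 40.0])

def frontalize_transform(left_eye, right_eye):
    """Return a 2x3 affine matrix (rotation + scale + translation) that maps
    the detected eye centers onto the canonical frontal positions."""
    src_vec = np.asarray(right_eye, float) - np.asarray(left_eye, float)
    dst_vec = CANONICAL_RIGHT_EYE - CANONICAL_LEFT_EYE
    # Scale and rotation that align the inter-ocular vectors.
    scale = np.linalg.norm(dst_vec) / np.linalg.norm(src_vec)
    angle = np.arctan2(dst_vec[1], dst_vec[0]) - np.arctan2(src_vec[1], src_vec[0])
    c, s = scale * np.cos(angle), scale * np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    # Translation that pins the left eye to its canonical position.
    t = CANONICAL_LEFT_EYE - rot @ np.asarray(left_eye, float)
    return np.hstack([rot, t[:, None]])

def apply(matrix, point):
    """Apply the 2x3 affine matrix to a 2D point."""
    return matrix[:, :2] @ np.asarray(point, float) + matrix[:, 2]

# A tilted, differently scaled face with eyes at these pixel coordinates:
M = frontalize_transform(left_eye=(40, 60), right_eye=(60, 70))
print(apply(M, (40, 60)))  # left eye lands on the canonical position [30. 40.]
```

Warping every pixel of the image with this matrix yields the "frontal-looking" picture from which AUs are then detected; a full pipeline would use more landmarks and a perspective model, but the rotate/scale/translate step is the same in spirit.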


Source: Slashdot