A New Era for Emotion and Communication – Digital, Talking, Emoting Heads

Meet Zoe…

Mar. 19, 2013 — This is Zoe – a digital talking head which can express human emotions on demand with “unprecedented realism” and ushers in a new period of human-computer interaction.

Zoe can express a nearly complete spectrum of human emotions and could serve as a digital personal assistant, or even help replace texting with “face messaging.”

Whereas texting suffers from a lack of emotionality, Zoe can display emotions such as happiness, anger, and fear, and modifies its voice to suit the emotion the user wants it to communicate. Users can type in any message, specifying the requisite emotion as well, and the face recites the text. According to its designers, it is the most expressive controllable avatar ever created, replicating human emotions with unprecedented realism.

The system, called “Zoe,” is the result of a collaboration between researchers at Toshiba’s Cambridge Research Lab and the University of Cambridge’s Department of Engineering. Students have already spotted a striking resemblance between the disembodied head and Holly, the ship’s computer in the British sci-fi comedy, Red Dwarf.

Appropriately enough, the face is actually that of Zoe Lister, an actress perhaps best-known as Zoe Carpenter in the Channel 4 series, Hollyoaks. To recreate her face and voice, researchers spent several days recording Zoe’s speech and facial expressions. The result is a system that is light enough to work in mobile technology, and could be used as a personal assistant in smartphones, or to “face message” friends.

The framework behind “Zoe” is also a template that, before long, could enable people to upload their own faces and voices — but in a matter of seconds, rather than days. That means that in the future, users will be able to customise and personalise their own, emotionally realistic, digital assistants.

If this can be developed, then a user could, for example, text the message “I’m going to be late” and ask it to set the emotion to “frustrated.” Their friend would then receive a “face message” that looked like the sender, repeating the message in a frustrated way.

The team who created Zoe are currently looking for applications, and are also working with a school for autistic and deaf children, where the technology could be used to help pupils to “read” emotions and lip-read. Ultimately, the system could have multiple uses — including in gaming, in audio-visual books, as a means of delivering online lectures, and in other user interfaces.

“This technology could be the start of a whole new generation of interfaces which make interacting with a computer much more like talking to another human being,” Professor Roberto Cipolla, from the Department of Engineering, University of Cambridge, said.

“It took us days to create Zoe, because we had to start from scratch and teach the system to understand language and expression. Now that it already understands those things, it shouldn’t be too hard to transfer the same blueprint to a different voice and face.”

As well as being more expressive than any previous system, Zoe is also remarkably data-light. The program used to run her is just tens of megabytes in size, which means that it can be easily incorporated into even the smallest computer devices, including tablets and smartphones.

It works by using a set of fundamental, “primary colour” emotions. Zoe’s voice, for example, has six basic settings — Happy, Sad, Tender, Angry, Afraid and Neutral. The user can adjust these settings to different levels, as well as altering the pitch, speed and depth of the voice itself.

By combining these levels, it becomes possible to pre-set or create an almost infinite number of emotional combinations. For instance, combining happiness with tenderness and slightly increasing the speed and depth of the voice makes it sound friendly and welcoming. A combination of speed, anger and fear makes Zoe sound as if she is panicking. This allows for a level of emotional subtlety which, the designers say, has not been possible in other avatars until now.
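The “primary colour” mixing described above can be sketched in a few lines. This is a purely illustrative assumption about how such settings might be represented; the actual Zoe engine is not public, and the function name and blend values below are hypothetical.

```python
# Hypothetical sketch of the "primary colour" emotion mixing described above.
# The six basic settings come from the article; everything else is invented.

BASIC_EMOTIONS = ["happy", "sad", "tender", "angry", "afraid", "neutral"]

def make_voice_profile(levels, pitch=1.0, speed=1.0, depth=1.0):
    """Combine per-emotion levels (0.0-1.0) with global voice controls."""
    unknown = set(levels) - set(BASIC_EMOTIONS)
    if unknown:
        raise ValueError(f"unknown emotions: {unknown}")
    # Fill unspecified emotions with 0 so every profile has all six axes.
    profile = {e: float(levels.get(e, 0.0)) for e in BASIC_EMOTIONS}
    profile.update(pitch=pitch, speed=speed, depth=depth)
    return profile

# "Friendly and welcoming": happiness plus tenderness, slightly faster and deeper.
friendly = make_voice_profile({"happy": 0.7, "tender": 0.5}, speed=1.1, depth=1.1)

# "Panicking": anger plus fear at high speed.
panicked = make_voice_profile({"angry": 0.6, "afraid": 0.8}, speed=1.4)
```

The point of the sketch is that a small set of continuous axes, combined, spans a very large space of expressive states.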

To make the system as realistic as possible, the research team collected a dataset of thousands of sentences, which they used to train the speech model with the help of real-life actress Zoe Lister. They also tracked Lister’s face while she was speaking, using computer vision software. This was converted into voice- and face-modelling mathematical algorithms, which gave them the voice and image data they needed to recreate expressions on a digital face directly from text alone.

The effectiveness of the system was tested with volunteers via a crowd-sourcing website. The participants were each given either a video or an audio clip of a single sentence from the test set and asked to identify which of the six basic emotions it was replicating. Ten sentences were evaluated, each by 20 different people.

Volunteers who had video but no sound successfully recognised the emotion in only 52% of cases. With audio alone, the success rate was 68%. The two together, however, produced a recognition rate of 77%, slightly higher than the 73% achieved by the real-life Zoe! This edge over real life is probably because the synthetic talking head is deliberately more stylised in its manner.
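The percentages above come from a simple tally: each clip’s recognition rate is the share of viewers who identified the intended emotion. A minimal sketch of that tally, with invented votes:

```python
# Minimal sketch of how the recognition rates above are computed.
# The vote data here are invented for illustration.

def recognition_rate(responses, true_emotion):
    """Fraction of viewer guesses matching the intended emotion."""
    correct = sum(1 for guess in responses if guess == true_emotion)
    return correct / len(responses)

votes = ["angry", "angry", "afraid", "angry"]  # 4 viewers rating one clip
rate = recognition_rate(votes, "angry")        # 3 of 4 correct -> 0.75
```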

As well as finding applications for their new creation, the research team will now work on creating a version of the system which can be personalised by users themselves.

“Present-day human-computer interaction still revolves around typing at a keyboard or moving and pointing with a mouse,” Cipolla added. “For a lot of people, that makes computers difficult and frustrating to use. In the future, we will be able to open up computing to far more people if they can speak and gesture to machines in a more natural way. That is why we created Zoe — a more expressive, emotionally responsive face that human beings can actually have a conversation with.”

To life, love and laughter,

 

 

John Schinnerer, Ph.D.

Executive Coach

Author of the award-winning Guide To Self: The Beginner’s Guide To Managing Emotion & Thought

Guide To Self, Inc.

913 San Ramon Valley Blvd. #280

Danville CA 94526

GuideToSelf.com – Web site

WebAngerManagement.com – 10-week online anger management course

DrJohnBlog.GuideToSelf.com –  Awarded #1 Blog in Positive Psychology by PostRank, Top 100 Blog by Daily Reviewer

@johnschin – Twitter


Story Source:

The above story is reprinted from materials provided by University of Cambridge. The original story is licensed under a Creative Commons Licence.

University of Cambridge (2013, March 19). Face of the future rears its head: Digital talking head expresses human emotions on demand. ScienceDaily. Retrieved March 20, 2013, from http://www.sciencedaily.com/releases/2013/03/130319160046.htm

 

Ads Targeted At How You Feel – Beware the Next Level of Marketing!

June 15, 2012

Microsoft has applied for a patent on targeting ads to users based on their emotional state, using a Kinect-type device, GeekWire reports.

Do you look happy? You’ll see ads for vacation packages and consumer electronics, but not weight-loss programs or self-help products. Do you look sad? You won’t see that over-the-top animated ad for children’s birthday parties at the local bowling alley. Feeling frustrated? It’s PC support ads for you.

Those are actual examples from the patent application, which incorporates some of the same ideas as the earlier filing for deducing the user’s mood — including scanning messages and social media postings.

Also included: audio and video capture devices (to detect facial expressions and tone of voice) in addition to the company’s Kinect sensor, which would be used to analyze body movements as another input for the emotion-detecting algorithm.
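The patent’s examples amount to a rule table mapping a detected emotion to ad categories to show or suppress. Here is a minimal sketch of that idea; the category lists echo the examples above, but the selection logic is an assumption for illustration, not Microsoft’s actual design.

```python
# Illustrative rule table from the patent examples above; the fusion and
# selection logic is a hypothetical sketch, not the real system.

AD_RULES = {
    "happy":      {"show": ["vacation packages", "consumer electronics"],
                   "suppress": ["weight-loss programs", "self-help products"]},
    "sad":        {"show": [], "suppress": ["children's birthday parties"]},
    "frustrated": {"show": ["PC support"], "suppress": []},
}

def select_ads(emotion, inventory):
    """Filter an ad inventory using the detected emotional state."""
    rules = AD_RULES.get(emotion, {"show": [], "suppress": []})
    preferred = [ad for ad in inventory if ad in rules["show"]]
    allowed = [ad for ad in inventory if ad not in rules["suppress"]]
    return preferred or allowed  # prefer targeted ads, else anything not suppressed

ads = select_ads("happy", ["vacation packages", "weight-loss programs", "PC support"])
# preferred list is non-empty, so only the targeted ad is returned
```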

Protect your mind. It’s the only one you get!

Peace,
John


Reading terrorists’ minds about imminent attacks – Specific brain waves related to guilty knowledge

July 30, 2010

Imagine technology that allows you to get inside the mind of a terrorist to know how, when and where the next attack will occur.

That’s not nearly as far-fetched as it seems, according to a new Northwestern University study.

Say, for purposes of illustration, that chatter about an imminent terrorist attack is mounting, and specifics about the plan emerge: the weapons that will be used, the date of the dreaded event, and its location.

If the test used in the Northwestern lab is ultimately employed in such real-world scenarios, and performs as it did in the lab, the research suggests that law enforcement officials may be able to confirm details about an attack – date, location, weapon – that emerge from terrorist chatter.

In the Northwestern study, when researchers knew in advance specifics of the planned attacks by the make-believe “terrorists,” they were able to correlate P300 brain waves to guilty knowledge with 100 percent accuracy in the lab, said J. Peter Rosenfeld, professor of psychology in Northwestern’s Weinberg College of Arts and Sciences.

For the first time, the Northwestern researchers used the P300 testing in a mock terrorism scenario in which the subjects are planning, rather than perpetrating, a crime. The P300 brain waves were measured by electrodes attached to the scalp of the make-believe “persons of interest” in the lab.

The most intriguing part of the study in terms of real-world implications, Rosenfeld said, is that even when the researchers had no advance details about mock terrorism plans, the technology was still accurate in identifying critical concealed information.

“Without any prior knowledge of the planned crime in our mock terrorism scenarios, we were able to identify 10 out of 12 terrorists and, among them, 20 out of 30 crime-related details,” Rosenfeld said. “The test was 83 percent accurate in predicting concealed knowledge, suggesting that our complex protocol could identify future terrorist activity.”

Rosenfeld is a leading scholar in the study of P300 testing to reveal concealed information. Basically, electrodes are attached to the scalp to record P300 brain activity — or brief electrical patterns in the cortex — that occur, according to the research, when meaningful information is presented to a person with “guilty knowledge.”

Research on the P300 testing emerged in the 1980s as a handful of scientists looked for an alternative to polygraph tests for lie detection. Since it was invented in the 1920s, polygraphy has been under fire, especially by academics, with critics insisting that such testing measures emotion rather than knowledge.

Rosenfeld and Northwestern graduate student John B. Meixner are co-investigators of the study, outlined in a paper titled “A Mock Terrorism Application of the P300-based Concealed Information Test,” published recently in the journal Psychophysiology.

Study participants (29 Northwestern students) planned a mock attack based on information they were given about bombs and other deadly weapons. They then had to write a letter detailing the rationale of their plan to encode the information in memory.

Then, with electrodes attached to their scalps, they looked at a computer display monitor that presented names of stimuli. The names of Boston, Houston, New York, Chicago and Phoenix, for example, were shuffled and presented at random. The city that study participants chose for the major terrorist attack evoked the largest P300 brainwave responses.

The test includes four classes of stimuli known as targets, non-targets, probes and irrelevants. Targets are sights, sounds or other stimuli the person being questioned already knows or is taught to recognize before the test. Probes are stimuli only a guilty suspect would be likely to know. And irrelevants are stimuli unlikely to be recognized.
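The decision rule implied by this design can be sketched simply: a subject “knows” the probe if it evokes a reliably larger P300 than the irrelevant items do. The amplitudes below are invented, and real protocols use bootstrap statistics on many EEG epochs rather than a single mean comparison.

```python
# Toy sketch of the concealed-information decision rule described above.
# Amplitudes are invented; real P300 analysis uses bootstrap statistics.
import statistics

def knows_probe(probe_amplitudes, irrelevant_amplitudes):
    """Flag concealed knowledge if mean probe P300 exceeds mean irrelevant P300."""
    return statistics.mean(probe_amplitudes) > statistics.mean(irrelevant_amplitudes)

# Mean P300 amplitude (microvolts) per stimulus across repeated presentations.
p300 = {"Houston": [9.1, 8.7, 9.4],                             # probe: the chosen city
        "Boston": [4.2, 5.0, 4.6], "Phoenix": [4.8, 4.1, 5.2]}  # irrelevants

guilty = knows_probe(p300["Houston"], p300["Boston"] + p300["Phoenix"])
```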

“Since 9/11 preventing terrorism is a priority,” Rosenfeld said. “Sometimes you catch suspicious people entering a building. You suspect that they’re terrorists, and you have some leads from the chatter. You’ve heard they’re going to attack one city or another in one fashion or another on one date or another. Our hope is that our new complex protocol – different from the first P300 technology developed in the 1980s – will one day confirm such chatter in the real world.”

In the laboratory setting, study participants only had about 30 minutes to learn about the attack and to detail their plans. Thus, Rosenfeld said, encoding of guilty knowledge was relatively shallow. It is assumed that real terrorists rehearse details central to a planned attack repeatedly, leading to deeper encoding of related memories, he said. “We suspect if our test was employed in the real world the deeper encoding of planned crime-related knowledge could further boost detection of terrorist intentions.”

Provided by Northwestern University

The implications of this are far-reaching, disturbing and reassuring simultaneously.

Disturbing, since this same procedure, once perfected, could be used on any of us (which is fine as long as you’re staying away from involvement in destructive activities, OR activities which arouse guilt in you!).

Reassuring as it will provide a better means of discovering solid leads on imminent attacks by domestic threats. 

Far-reaching because this technology can, and likely will, be extended far beyond the scope of hunting terrorists. Easy rationalizations can be made to use it to fight drug trafficking and other major, clear-cut illegal operations. But where does the line get drawn once we get into lesser, gray areas?

Obviously, it will be many years before the technology is accessible and affordable enough to use ubiquitously. But what if the IRS uses it around issues of tax evasion? Or the courts use it in child custody evaluations? At what point are our civil liberties breached?

This will be an ongoing issue as we head into the next decade because, like it or not, it’s coming!

Best,

John Schinnerer, Ph.D.


The Next Step is Here – Software To Measure Emotion While Surfing Web

From Science Daily…

New Software to Measure Emotional Reactions to Web

ScienceDaily (June 9, 2010) — While most people have intuitive reactions to Web sites, a group of Canadian scientists is developing software that can actually measure those emotions and more. Aude Dufresne, a professor at the University of Montreal Department of Communications, led a team of researchers designing new software to evaluate the biological responses of Internet users. Simply put, the new software measures everything in Web users from body heat to eye movements to facial expressions, and analyzes how they relate to online activities.

The technology is now being tested at the newly opened Bell User Experience Centre, which is located at the telecom giant’s Nun’s Island campus. Bell will use the University of Montreal technology to investigate how people react to Web sites. Such studies will provide companies with facts on how they can improve online experiences.

“With e-commerce and the multiplication of retail Web sites, it has become crucial for companies to consider the emotions of Web users,” says Professor Dufresne. “Our software is the first designed to measure emotions at conscious and preconscious levels, which will give companies a better sense of the likes and dislikes of Web users.”

For full article, click here.

With fMRI, neuromarketing, and emotion-measurement software all in play, we have to be more mindful of our media consumption.

Cheers,

John Schinnerer, Ph.D.

Real Emotions for Real Men

Guide To Self, Inc.

First Intelligent System To Scan & Recognize Emotions – Help for Autistic Children

From ScienceDaily (Oct. 19, 2009) — Computer scientists from Nanyang Technological University in Singapore are developing an efficient and intelligent facial expression recognition system. The system locates the face region using derivative-based filtering and recognizes facial expressions using a boosting classifier. The portable device is being developed to help autistic children understand the emotions of the people around them.

A paper detailing the specifics of the device will be published in the journal Intelligent Decision Technologies. 

Teik-Toe Teoh, Yok-Yen Nguwi and Siu-Yeung Cho of the Centre for Computational Intelligence of the School of Computer Engineering of Nanyang Technological University state that “emotion is a state of feeling involving thoughts, physiological changes, and an outward expression. In this paper, we propose a system that synergizes the use of derivative filtering and boosting classifier.”

The portable facial expression recognizer locates the edges of the human face using Gaussian and Laplacian derivatives, and filters out non-face images using AdaBoost. Next, the feature locator finds crucial fiducial points for subsequent feature extraction and selection. Finally, the meaningful features are classified into the corresponding expression classes.
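The three-stage structure of that pipeline can be sketched with toy stand-ins. Every function body below is a simplified assumption (a plain discrete Laplacian plus stub stages), not the authors’ actual filters or AdaBoost cascade; it only shows how the stages chain together.

```python
# Structural sketch of the three-stage pipeline: derivative-based face
# localisation, fiducial-point extraction, then classification.
# All stage implementations are toy stand-ins, not the published system.

def laplacian(image):
    """Discrete Laplacian edge response for a 2D grayscale image (list of lists)."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (image[y-1][x] + image[y+1][x] + image[y][x-1]
                         + image[y][x+1] - 4 * image[y][x])
    return out

def locate_face(image, threshold=1.0):
    """Stage 1: keep the image only if it has enough edge energy (toy filter)."""
    edges = laplacian(image)
    energy = sum(abs(v) for row in edges for v in row)
    return image if energy > threshold else None  # the real system uses AdaBoost here

def extract_fiducial_points(face):
    """Stage 2 stub: a real locator finds eye corners, mouth corners, etc."""
    return [(0, 0)] if face is not None else []

def classify_expression(points):
    """Stage 3 stub: map extracted features to an expression class."""
    return "neutral" if points else "no face"

flat = [[0.0] * 5 for _ in range(5)]  # no edges -> rejected as non-face
face = [[0.0] * 5 for _ in range(5)]
face[2][2] = 10.0                     # a bright spot -> strong edge response
```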

For full article, click here.

Have a tremendous Tuesday!

John Schinnerer, Ph.D.

A Curious Guy

Lifelong Learner

Well-Versed in the Foundations of Positive Psychology