The CSAIL group at MIT has a new paper out titled ‘Predicting Latent Narrative Mood using Audio and Physiologic Data’ that combines audio and physiological data from a Samsung Simband to show how wearables might be used to predict moods. Initial tests show an accuracy of roughly 83%.

Figure 3: Real-time estimation of the emotional content in 30 seconds of collected data, using our optimized NN. The color of the text at the top of the plot reflects the ground truth labels generated by the research assistant (blue for negative, red for positive, black for neutral). The predictions of the network (y-axis) reflect the underlying emotional state of the narrator.
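To make the idea concrete, here is a minimal sketch of the kind of model the figure describes: a small neural network that maps per-segment features to one of three emotion classes. Everything here is an assumption for illustration — the feature set, layer sizes, and untrained random weights are placeholders, not the paper's actual architecture (the paper trains its network on labeled conversation segments).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-segment features, e.g. [audio energy, pitch,
# heart rate, skin temperature] -- illustrative only.
X = rng.normal(size=(6, 4))

# One-hidden-layer network with placeholder random weights;
# in the paper these would be learned from labeled data.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 3))  # 3 classes: negative, neutral, positive

def predict(x):
    h = np.tanh(x @ W1)            # hidden layer activation
    logits = h @ W2                # one score per class
    return int(np.argmax(logits))  # 0=negative, 1=neutral, 2=positive

labels = ["negative", "neutral", "positive"]
for seg in X:
    print(labels[predict(seg)])
```

With trained weights, running this over consecutive 5-second segments would produce the kind of real-time prediction trace shown in Figure 3.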