FACE AND EMOTION RECOGNITION IN REAL TIME USING MACHINE LEARNING

Authors

  • Santosh Laxman Rathod

DOI:

https://doie.org/10.5281/hpvr8486

Keywords:

Face and Emotion Recognition in Real Time using Machine Learning

Abstract

Humans can tell how someone is feeling simply by looking at their face, but performing the same task with a computer program is very difficult. Recent advances in computer vision and machine learning have made it possible to recognize emotions from images. The proposed system determines a person's emotional state from real-time video. A dataset of more than 20,000 facial images is used to train a deep learning model to recognize emotions. The Haar Cascade frontal-face algorithm is used to locate the face in each video frame, and a convolutional neural network (CNN) then predicts the emotion in real time from a live camera stream, the facial expression being the most informative cue. Emotion recognition is used in many different fields, for example to understand why people behave as they do, to detect mental illness, and to gauge the mood of a crowd. The proposed framework performs face-emotion recognition in three steps.
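The three steps described above (face detection, preprocessing of the face crop, and CNN classification) can be sketched as glue logic like the following. This is a minimal illustration, not the paper's implementation: the emotion label set is assumed from common facial-expression datasets, and the face detector and CNN are passed in as callables. In a real system the detector would be OpenCV's `cv2.CascadeClassifier` loaded with `haarcascade_frontalface_default.xml`, the CNN a trained Keras/TensorFlow model, and the crop would be resized (e.g. with `cv2.resize`) before classification; those parts are stubbed out here to keep the sketch self-contained.

```python
import numpy as np

# Label set assumed from common facial-expression datasets;
# the paper does not list its classes.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def preprocess(face_crop):
    """Step 2: normalize a grayscale face crop (assumed already 48x48)
    to [0, 1] and add batch/channel dimensions for the CNN."""
    x = face_crop.astype(np.float32) / 255.0
    return x.reshape(1, 48, 48, 1)

def label_from_softmax(probs):
    """Step 3: map the CNN's softmax vector to (label, confidence)."""
    i = int(np.argmax(probs))
    return EMOTIONS[i], float(probs[i])

def recognize(frame, detect_faces, cnn_predict):
    """Full pipeline over one frame.

    detect_faces: returns (x, y, w, h) boxes -- in the real system this is
        a Haar cascade's detectMultiScale on the grayscale frame.
    cnn_predict: returns a softmax probability vector per preprocessed crop.
    """
    results = []
    for (x, y, w, h) in detect_faces(frame):
        crop = frame[y:y + h, x:x + w]
        probs = cnn_predict(preprocess(crop))
        results.append(((x, y, w, h), label_from_softmax(probs)))
    return results
```

Wrapping this in a `while` loop over frames from `cv2.VideoCapture(0)` would give the real-time behaviour the abstract describes; the structure above separates the three stages so each can be swapped independently.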




Published

2024-11-05

How to Cite

FACE AND EMOTION RECOGNITION IN REAL TIME USING MACHINE LEARNING. (2024). Phoenix: International Multidisciplinary Research Journal (Peer Reviewed High Impact Journal), 2(4), 1-5. https://doi.org/10.5281/hpvr8486