REFERENCES

Abboud, B. , Davoine, F. , & Dang, M. (2004, September). Facial expression recognition and synthesis based on appearance model. Signal Processing Image Communication, 19(8), 723-740. doi:10.1016/j.image.2004.05.009

Adolphs, R. (2002). Recognizing emotion from facial expressions: psychological and neurological mechanisms. Behavioral and Cognitive Neuroscience Reviews, 1, 21-61. doi:10.1177/1534582302001001003

Ambadar, Z. , Schooler, J. W. , & Cohn, J. (2005). The importance of facial dynamics in interpreting subtle facial expressions. Psychological Science, 16(5), 403-410. doi:10.1111/j.0956-7976.2005.01548.x

Amin, M. A. , Afzulpurkar, N. V. , Dailey, M. N. , Esichaikul, V. E. , & Batanov, D. N. (2005). Fuzzy-C-Mean determines the principle component pairs to estimate the degree of emotion from facial expressions. In Fuzzy Systems and Knowledge Discovery (LNCS 3613, pp. 484-493).

Ashraf, A. B. , Lucey, S. , Chen, T. , Prkachin, K. , Solomon, P. , Ambadar, Z. , & Cohn, J. (2007). The painful face: Pain expression recognition using active appearance models. In Proc. of the ACM International Conference on Multimodal Interfaces (pp. 9-14).

Bartlett, M. S. , Littlewort, G. , Frank, M. G. , Lainscsek, C. , Fasel, I. , & Movellan, J. (2005). Recognizing facial expression: machine learning and application to spontaneous behavior. In Proc. IEEE Computer Vision and Pattern Recognition (pp. 568-573).

Bartlett, M. S. , Littlewort, G. , Frank, M. G. , Lainscsek, C. , Fasel, I. , & Movellan, J. (2006). Fully automatic facial action recognition in spontaneous behavior. In Proc. IEEE Automatic Face and Gesture Recognition (pp. 223-230).

Bassili, J. N. (1978). Facial motion in the perception of faces and of emotional expression. Journal of Experimental Psychology: Human Perception and Performance, 4, 373-379. doi:10.1037/0096-1523.4.3.373

Bassili, J. N. (1979). Emotion recognition: The role of facial movement and the relative importance of upper and lower areas of the face. Journal of Personality and Social Psychology, 37, 2049-2058. doi:10.1037/0022-3514.37.11.2049

Beaudot, W. (1994). The neural information processing in the vertebrate retina: a melting pot of ideas for artificial vision. PhD thesis, TIRF laboratory, Grenoble, France.

Boucher, J. D. , & Ekman, P. (1975). Facial areas and emotional information. The Journal of Communication, 25, 21-29. doi:10.1111/j.1460-2466.1975.tb00577.x

Cohn, J. F. (2006). Foundations of Human Computing: Facial Expression and Emotion. In Proc. Int. Conf. on Multimodal Interfaces (pp. 233-238).

Cohn, J. F. , & Schmidt, K. L. (2004). The timing of facial motion in posed and spontaneous smiles. International Journal of Wavelets, Multiresolution and Information Processing, 2(2), 121-132. doi:10.1142/S021969130400041X

Cootes, T. F. , Edwards, G. J. , & Taylor, C. J. (1998). Active appearance models. Lecture Notes in Computer Science, 484-491. doi:10.1007/BFb0054760

Dailey, M. N. , Cottrell, G. W. , Padgett, C. , & Adolphs, R. (2002). EMPATH: A neural network that categorizes facial expressions. Journal of Cognitive Neuroscience, 14(8), 1158-1173. doi:10.1162/089892902760807177

Darwin, C. (1872). The expression of the emotions in man and animals. London: Murray. doi:10.1037/10001-000

Deng, X. , Chang, C. H. , & Brandle, E. (2004). A new method for eye extraction from facial image. In Proc. 2nd IEEE International Workshop on Electronic Design, Test and Applications (DELTA) (Vol. 2, pp. 29-34). Perth, Australia.

Descartes, R. (1649). Les Passions de l'âme. Paris: Henry Le Gras.

Donato, G. , Bartlett, M. S. , Hager, J. C. , Ekman, P. , & Sejnowski, T. J. (1999). Classifying facial actions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(10), 974-989. doi:10.1109/34.799905

Dornaika, F. , & Davoine, F. (2004). Head and facial animation tracking using appearance adaptive models and particle filters. In Proc. Workshop on Real-Time Vision for Human-Computer Interaction (RTV4HCI), in conjunction with CVPR, Washington, DC, USA, July.

Ekman, P. , & Friesen, W. V. (1978). The Facial Action Coding System (FACS): A technique for the measurement of facial action. Palo Alto, CA: Consulting Psychologists Press.

Ekman, P. , Friesen, W. V. , & Ellsworth, P. (1972). Emotion in the human face. New York: Pergamon Press.

Ekman, P. , Matsumoto, D. , & Friesen, W. V. (2005). Facial Expression in Affective Disorders. In Ekman, P. , & Rosenberg, E. L. (Eds.), What the Face Reveals (pp. 429-439).

El Kaliouby, R. , & Robinson, P. (2004). Mind reading machines: automated inference of cognitive mental states from video. In Proc. IEEE International Conference on Systems, Man and Cybernetics (Vol. 1, pp. 682-688).

Fasel, I. , Fortenberry, B. , & Movellan, J. (2005). A generative framework for real time object detection and classification. Computer Vision and Image Understanding, 98, 182-210. doi:10.1016/j.cviu.2004.07.014

Gao, Y. , Leung, M. K. H. , Hui, S. C. , & Tananda, M. W. (2003, May). Facial expression recognition from line-based caricatures. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 33(3).

Gouta, K. , & Miyamoto, M. (2000). Facial areas and emotional information. Japanese Journal of Psychology, 71, 211-218.

Gunes, H. , Piccardi, M. , & Pantic, M. (2008). From the Lab to the Real World: Affect Recognition using Multiple Cues and Modalities. In Or, J. (Ed.), Affective Computing: Focus on Emotion Expression, Synthesis, and Recognition (pp. 185-218). Vienna, Austria: I-Tech Education and Publishing.

Hammal, Z. (2006, February). Dynamic facial expression understanding based on temporal modeling of transferable belief model. In Proceedings of the International Conference on Computer Vision Theory and Applications, Setubal, Portugal.

Hammal, Z. , Arguin, M. , & Gosselin, F. (2009). Comparing a Novel Model Based on the Transferable Belief Model with Humans During the Recognition of Partially Occluded Facial Expressions. Journal of Vision, 9(2), 1-19. doi:10.1167/9.2.22

Hammal, Z. , Caplier, A. , & Rombaut, M. (2005). A fusion process based on belief theory for classification of facial basic emotions. In Proceedings of the 8th International Conference on Information Fusion, Philadelphia, PA, USA.

Hammal, Z. , Couvreur, L. , Caplier, A. , & Rombaut, M. (2007). Facial expressions classification: A new approach based on Transferable Belief Model. International Journal of Approximate Reasoning, 46, 542-567. doi:10.1016/j.ijar.2007.02.003

Hammal, Z. , Eveno, N. , Caplier, A. , & Coulon, P. Y. (2006a). Parametric models for facial features segmentation. Signal Processing, 86, 399-413. doi:10.1016/j.sigpro.2005.06.006

Hammal, Z. , Kunz, M. , Arguin, M. , & Gosselin, F. (2008). Spontaneous pain expression recognition in video sequences. In BCS International Academic Conference 2008-Visions of Computer Science (pp. 191-210).

Hammal, Z. , & Massot, C. (2010). Holistic and Feature-Based Information Towards Dynamic Multi-Expressions Recognition. In International Conference on Computer Vision Theory and Applications, 17-21 May, Angers, France.

Harwood, N. K. , Hall, L. J. , & Shinkfield, A. J. (1999). Recognition of facial emotional expressions from moving and static displays by individuals with mental retardation. American Journal of Mental Retardation, 104(3), 270-278. doi:10.1352/0895-8017(1999)104<0270:ROFEEF>2.0.CO;2

Haxby, J. , Hoffman, E. , & Gobbini, M. (2000). The distributed human neural system for face perception. Trends in Cognitive Sciences, 4(6), 223-233. doi:10.1016/S1364-6613(00)01482-0

Haxby, J. V. , Hoffman, E. A. , & Gobbini, M. I. (2002). Human neural systems for face recognition and social communication. Biological Psychiatry, 51(1), 59-67. doi:10.1016/S0006-3223(01)01330-0

Izard, C. E. , Dougherty, L. M. , & Hembree, E. A. (1983). A system for identifying affect expressions by holistic judgments. Newark, Delaware: Instructional Resources Center, University of Delaware.

Ji, Q. , Lan, P. , & Looney, C. (2006). A probabilistic framework for modeling and real-time monitoring human fatigue. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 36(5), 862-875. doi:10.1109/TSMCA.2005.855922

Kapoor, A. , Burleson, W. , & Picard, R. W. (2007). Automatic Prediction of Frustration. International Journal of Human-Computer Studies, 65(8), 724-736. doi:10.1016/j.ijhcs.2007.02.003

Kashima, H. , Hongo, H. , Kato, K. , & Yamamoto, K. (2001, June). A robust iris detection method of facial and eye movement. In Proc. Vision Interface Annual Conference, Ottawa, Canada.

Kimura, S. , & Yachida, M. (1997). Facial expression recognition and its degree estimation. In Proc. Computer Vision and Pattern Recognition (pp. 295-300).

Lien, J. J. , Kanade, T. , Cohn, J. F. , & Li, C. (1998). Subtly different facial expression recognition and expression intensity estimation. In Proceedings of IEEE Computer Vision and Pattern Recognition, Santa Barbara, CA (pp. 853-859).

Littlewort, G. C. , Bartlett, M. S. , & Kang, L. (2007, November). Faces of Pain: Automated Measurement of Spontaneous Facial Expressions of Genuine and Posed Pain. In Proc. ICMI, Nagoya, Aichi, Japan.

Lucas, B. D. , & Kanade, T. (1981). An iterative image registration technique with an application to stereo vision. In Image Understanding Workshop (pp. 121-130). US Defense Advanced Research Projects Agency.

Lyons, M. J. , & Akamatsu, S. (1998, April). Coding facial expressions with Gabor wavelets. In Proc. Third IEEE International Conference on Automatic Face and Gesture Recognition, Nara, Japan (pp. 200-205).

Malciu, M. , & Preteux, F. (2001, May). MPEG-4 compliant tracking of facial features in video sequences. In Proc. of International Conference on Augmented, Virtual Environments and 3D Imaging, Mykonos, Greece (pp. 108-111).

Mehrabian, A. (1968). Communication without words. Psychology Today, 2(4), 53-56.

Nusseck, M. , Cunningham, D. W. , Wallraven, C. , & Bulthoff, H. (2008). The contribution of different facial regions to the recognition of conversational expressions. Journal of Vision, 8(8). doi:10.1167/8.8.1

Oliver, N. , Pentland, A. , & Berard, F. (2000). LAFTER: a real-time face and lips tracker with facial expression recognition. Pattern Recognition, 33, 1369-1382. doi:10.1016/S0031-3203(99)00113-2

Pantic, M. , & Bartlett, M. S. (2007). Machine analysis of facial expressions. In Delac, K. , & Grgic, M. (Eds.), Face recognition (pp. 377-416). Vienna, Austria: I-Tech Education and Publishing.

Pantic, M. , & Patras, I. (2006). Dynamics of facial expression: Recognition of facial actions and their temporal segments from face profile image sequences. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 36, 433-449. doi:10.1109/TSMCB.2005.859075

Pardas, M. (2000, June). Extraction and tracking of the eyelids. In Proc. International Conference on Acoustics, Speech and Signal Processing, Istanbul, Turkey (Vol. 4, pp. 2357-2360).

Pardas, M. , & Bonafonte, A. (2002). Facial animation parameters extraction and expression detection using HMM. Signal Processing Image Communication, 17, 675-688. doi:10.1016/S0923-5965(02)00078-4

Pardas, M. , & Sayrol, E. (2001, November). Motion estimation based tracking of active contours. Pattern Recognition Letters, 22(13), 1447-1456. doi:10.1016/S0167-8655(01)00084-8

Rosenblum, M. , Yacoob, Y. , & Davis, L. S. (1996). Human expression recognition from motion using a radial basis function network architecture. IEEE Transactions on Neural Networks, 7, 1121-1137. doi:10.1109/72.536309

Roy, S. , Roy, C. , Hammal, Z. , Fiset, D. , Blais, C. , Jemel, B. , & Gosselin, F. (2008, May). The use of spatio-temporal information in decoding facial expression of emotions. In Proc. Vision Sciences Society, Naples, Florida.

Smets, P. (1998). The transferable belief model for quantified belief representation. In Handbook of defeasible reasoning and uncertainty management systems (Vol. 1, pp. 267-301). Dordrecht: Kluwer Academic.

Smets, P. (2000, July). Data fusion in the transferable belief model. In Proc. of International Conference on Information Fusion, Paris, France (pp. 21-33).

Smets, P. (2005). Decision making in the TBM: the necessity of the pignistic transformation. International Journal of Approximate Reasoning, 38, 133-147. doi:10.1016/j.ijar.2004.05.003

Smith, M. , Cottrell, G. , Gosselin, F. , & Schyns, P. G. (2005). Transmitting and decoding facial expressions of emotions. Psychological Science, 16, 184-189. doi:10.1111/j.0956-7976.2005.00801.x

Tekalp, M. (1999). Face and 2D mesh animation in MPEG-4. Tutorial Issue on the MPEG-4 Standard, Image Communication Journal.

Tian, Y. , Kanade, T. , & Cohn, J. (2000, March). Dual state parametric eye tracking. In Proc. 4th IEEE International Conference on Automatic Face and Gesture Recognition, Grenoble (pp. 110-115).

Tian, Y. , Kanade, T. , & Cohn, J. F. (2001). Recognizing action units for facial expression analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23, 97-115. doi:10.1109/34.908962

Tian, Y. L. , Kanade, T. , & Cohn, J. F. (2005). Facial expression analysis. In Li, S. Z. , & Jain, A. K. (Eds.), Handbook of face recognition (pp. 247-276). New York: Springer. doi:10.1007/0-387-27257-7_12

Tong, Y. , Liao, W. , & Ji, Q. (2006). Inferring facial action units with causal relations. In Proc. IEEE Computer Vision and Pattern Recognition (pp. 1623-1630).

Tsapatsoulis, N. , Karpouzis, K. , Stamou, G. , Piat, F. , & Kollias, S. (2000, September). A fuzzy system for emotion classification based on the MPEG-4 facial definition parameter set. In Proceedings of the 10th European Signal Processing Conference, Tampere, Finland.

Tsekeridou, S. , & Pitas, I. (1998, September). Facial feature extraction in frontal views using biometric analogies. In Proc. 9th European Signal Processing Conference, Island of Rhodes, Greece (Vol. 1, pp. 315-318).

Valstar, M. F. , Gunes, H. , & Pantic, M. (2007, November). How to Distinguish Posed from Spontaneous Smiles using Geometric Features. In Proc. ACM Int'l Conf on Multimodal Interfaces, Nagoya, Japan (pp. 38-45).

Valstar, M. F. , Pantic, M. , Ambadar, Z. , & Cohn, J. F. (2006). Spontaneous vs. posed facial behavior: automatic analysis of brow actions. In Proc. ACM Intl. Conference on Multimodal Interfaces (pp. 162-170).

Wallraven, C. , Breidt, M. , Cunningham, D. W. , & Bulthoff, H. (2008). Evaluating the perceptual realism of animated facial expressions. ACM Transactions on Applied Perception, 4(4).

Wang, J. G. , Sung, E. , & Venkateswarlu, R. (2005). Estimating the eye gaze from one eye. Computer Vision and Image Understanding, 98, 83-103. doi:10.1016/j.cviu.2004.07.008

Wehrle, T. , Kaiser, S. , Schmidt, S. , & Scherer, K. R. (2000). Studying the dynamics of emotional expression using synthesized facial muscle movements. Journal of Personality and Social Psychology, 78(1), 105-119. doi:10.1037/0022-3514.78.1.105

Weyers, P. , Muhlberger, A. , Hefele, C. , & Pauli, P. (2006). Electromyographic responses to static and dynamic avatar emotional facial expressions. Psychophysiology, 43, 450-453. doi:10.1111/j.1469-8986.2006.00451.x

Yacoob, Y. , & Davis, L. S. (1996). Recognizing human facial expressions from long image sequences using optical flow. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18, 636-642. doi:10.1109/34.506414

Young, A. W. , Rowland, D. , Calder, A. J. , Etcoff, N. L. , Seth, A. , & Perrett, D. I. (1997). Facial expression megamix: Tests of dimensional and category accounts of emotion recognition. Cognition, 63, 271-313. doi:10.1016/S0010-0277(97)00003-6

Yuille, A. , Hallinan, P. , & Cohen, D. (1992, August). Feature extraction from faces using deformable templates. International Journal of Computer Vision, 8(2), 99-111. doi:10.1007/BF00127169

Zeng, Z. , Pantic, M. , Roisman, G. I. , & Huang, T. S. (2009, January). A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(1), 39-58. doi:10.1109/TPAMI.2008.52

Zhang, Y. , & Qiang, J. (2005). Active and dynamic information fusion for facial expression understanding from image sequences. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27, 699-714. doi:10.1109/TPAMI.2005.93


  
