Nowadays, multimedia content, such as photographs and movies, is ingrained in every aspect of human life and has become a vital component of entertainment. Videos and movie clips are typically created with the intent to evoke certain feelings or emotions in viewers; thus, by examining a viewer's cognitive state while watching such content, its affectiveness can be evaluated. Considering the emotional aspect of videos, this paper proposes a deep learning-based paradigm for affective tagging of video clips, in which participants' involuntary EEG responses are used to examine how people perceive videos. The information carried by different brain regions, frequency bands, and the connections among them plays an important role in understanding a human's cognitive state. A contribution is therefore made toward the effective modeling of EEG signals through two different representations: a spatial feature matrix and combined power spectral density maps. These feature representations highlight the spatial features of EEG signals and are used to train a convolutional neural network for implicit tagging of videos into two categories in the arousal domain, "Low Arousal" and "High Arousal." The arousal emotional space represents the excitement level of the viewer; this domain is therefore selected to analyze the viewer's engagement while watching video clips. The proposed model is developed using EEG data taken from the publicly available datasets AMIGOS and DREAMER. The model is tested using two different approaches, single-subject classification and multi-subject classification, achieving average accuracies of 90%-95% and 90%-93%, respectively. The simulations presented in this paper show the pioneering applicability of the proposed framework for the development of brain-computer interface (BCI) devices for affective tagging of videos.
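The abstract's "spatial feature matrix" can be illustrated with a minimal sketch: estimate the power spectral density of each EEG channel and integrate it over the standard frequency bands, yielding a channels-by-bands matrix that a CNN could consume. The band edges, the 128 Hz sampling rate, and the 14-channel layout (the Emotiv headset used by AMIGOS and DREAMER) are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np

FS = 128  # assumed sampling rate (Hz); AMIGOS and DREAMER record EEG at 128 Hz
# Illustrative band definitions; the paper's exact bands may differ
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(signal, fs=FS):
    """Periodogram of one EEG channel, integrated over each band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    return [psd[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in BANDS.values()]

def spatial_feature_matrix(eeg):
    """eeg: array of shape (channels, samples) -> (channels, bands) matrix."""
    return np.array([band_powers(ch) for ch in eeg])

# Example with synthetic data: 14 channels, 4 seconds of signal
rng = np.random.default_rng(0)
eeg = rng.standard_normal((14, 4 * FS))
features = spatial_feature_matrix(eeg)
print(features.shape)  # (14, 4): one row per channel, one column per band
```

Stacking such matrices over time windows (or rendering them as combined PSD maps, the paper's second representation) produces image-like inputs suited to a 2-D convolutional network.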