If anyone has an idea about the ER diagram of this project, please help and post it.
Abstract— The human face is an important part of an individual's body, and it plays a key role in conveying an individual's behaviour and emotional state. Manually segregating a list of songs and generating an appropriate playlist based on an individual's emotional features is a tedious, time-consuming, labour-intensive and uphill task. Various algorithms have been proposed and developed to automate the playlist-generation process. However, the existing algorithms are computationally slow, less accurate, and sometimes even require additional hardware such as EEG devices or sensors. The proposed system generates a playlist automatically from extracted facial expressions, thereby reducing the effort and time involved in performing the process manually. The proposed system thus reduces both the computational time involved in obtaining the results and the overall cost of the designed system, while increasing its overall accuracy. The system is tested on both user-dependent (dynamic) and user-independent (static) datasets. Facial expressions are captured using an inbuilt camera. The accuracy of the emotion-detection algorithm is around 85-90% for real-time images and around 98-100% for static images. On average, the proposed algorithm takes around 0.95-1.05 s to generate an emotion-based music playlist. Thus, it yields better accuracy in terms of performance and computational time, and reduces the design cost, compared to the algorithms surveyed in the literature.
INTRODUCTION
Music plays a very important role in enhancing an individual's life: it is an important medium of entertainment for music lovers and listeners, and sometimes even serves a therapeutic purpose. In today's world, with ever-increasing advancements in the fields of multimedia and technology, various music players have been developed with features such as fast forward, reverse, variable playback speed (seek and time compression), local playback, and streaming playback with multicast streams. Although these features satisfy the user's basic requirements, the user still faces the task of manually browsing through the playlist and selecting songs based on his or her current mood and behaviour. The introduction of Audio Emotion Recognition (AER) and Music Information Retrieval (MIR) into traditional music players made it possible to automatically parse a playlist into various classes of emotions and moods.
AER is a technique that classifies a received audio signal into various classes of emotions and moods by considering its audio features, whereas MIR is a field that extracts critical information from an audio signal by exploring audio features such as pitch, energy, MFCC, and flux. Although both AER and MIR avoid the manual segregation of songs and generation of playlists, they are unable to fully realize a human-emotion-controlled music player. Human speech and gestures are common ways of expressing emotions, but facial expression is the most ancient and natural way of expressing feelings, emotions and mood.
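As a rough illustration of the low-level feature extraction that MIR systems rely on, the sketch below computes two simple per-frame audio features, short-time energy and zero-crossing rate, on a synthetic tone. The frame length, hop size, and test signal are illustrative assumptions; features such as MFCC and spectral flux would additionally require a full spectral pipeline.

```python
import math

def frame_features(signal, frame_len=256, hop=128):
    """Compute short-time energy and zero-crossing rate for each frame.

    These are two of the simplest low-level features (alongside pitch,
    MFCC, and spectral flux) that MIR pipelines extract from audio.
    """
    features = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        # Mean squared amplitude of the frame.
        energy = sum(s * s for s in frame) / frame_len
        # Fraction of consecutive sample pairs that change sign.
        zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / (frame_len - 1)
        features.append((energy, zcr))
    return features

# A synthetic 1-second 440 Hz tone at 8 kHz stands in for a real audio clip.
sr = 8000
tone = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr)]
feats = frame_features(tone)
```

For a pure sine wave the per-frame energy sits near 0.5 (the mean of sin²) and the zero-crossing rate near 2·440/8000 ≈ 0.11, which is how such features separate, say, bright energetic tracks from slow mellow ones.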
The main objective of this paper is to design an efficient and accurate algorithm that generates a playlist based on the current emotional state and behaviour of the user. The designed algorithm requires less memory overhead and less computational and processing time, and it eliminates the cost of additional hardware such as EEG devices or sensors.
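A minimal sketch of the playlist-selection step described above: once a facial-expression classifier returns an emotion label, the player filters a song library tagged by mood. The emotion labels, song titles, and the `detect_emotion` stub are illustrative assumptions, not the paper's actual dataset or classifier.

```python
# Hypothetical mood-tagged song library; a real system would read
# this from the user's music collection and its metadata.
SONG_LIBRARY = {
    "happy":   ["Upbeat Track A", "Dance Track B"],
    "sad":     ["Ballad C", "Slow Track D"],
    "angry":   ["Rock Track E"],
    "neutral": ["Ambient Track F", "Acoustic Track G"],
}

def detect_emotion(frame):
    """Stub for the facial-expression classifier. A real system would
    run a trained model on a camera frame and return its label."""
    return "happy"

def generate_playlist(frame):
    """Map the detected emotion to a playlist, falling back to a
    neutral playlist for unrecognized labels."""
    emotion = detect_emotion(frame)
    return SONG_LIBRARY.get(emotion, SONG_LIBRARY["neutral"])

playlist = generate_playlist(frame=None)
```

The dictionary lookup keeps playlist generation constant-time once the classifier has produced a label, which is consistent with the paper's emphasis on low computational overhead.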