Added Emotion Based Music Player Project #911

Closed
17 changes: 17 additions & 0 deletions Emotion based music player/Dataset/README.md
@@ -0,0 +1,17 @@
<h2>Emotion Based Music Player</h2>

### Goal 🎯
The objective of the emotion-based music player project is to create an intelligent system that detects and analyzes users' emotions in real-time through techniques like facial recognition, voice analysis, or biosensors. Based on the detected emotional state, the player automatically curates and adjusts music playlists to enhance the user's mood and provide a personalized listening experience. The system aims to reduce the burden of manual song selection, adapt to emotional changes dynamically, and offer privacy-conscious and culturally relevant music suggestions, while giving users the flexibility to override or customize the music based on their preferences.

### Model(s) used for the Web App 🧮

The models and technologies used in the emotion-based music player project include:

1. Pretrained Keras Model (model.h5): A deep learning model, likely a Convolutional Neural Network (CNN), is loaded to predict emotions based on processed facial landmarks and hand movements.

2. Mediapipe Library: Mediapipe is used for extracting facial landmarks and hand landmarks, which serve as input features for emotion recognition. It captures key points from the user's face and hands for emotion detection.

3. Streamlit and WebRTC: Used for the web interface and real-time video streaming, capturing the user's face for emotion recognition through a web camera.

4. The project leverages deep learning (Keras) and computer vision (Mediapipe) to detect emotions from facial and hand landmark data, then uses the model's prediction to drive the music recommendation; a minimal sketch of this prediction step is shown below.
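
For reference, a minimal sketch of how the pretrained model and labels fit together. The file names match this project; the 1x1020 input size is an assumption based on the landmark vector built in music.py (468 face landmarks plus 2 x 21 hand landmarks, with x and y for each).

```python
# Minimal sketch (assumed input size; not the full app)
import numpy as np
from keras.models import load_model

model = load_model("model.h5")     # pretrained emotion classifier
labels = np.load("labels.npy")     # emotion names, one per output class

features = np.zeros((1, 1020))     # placeholder for a real landmark feature vector
pred = labels[np.argmax(model.predict(features))]
print("Predicted emotion:", pred)
```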

Binary file added Emotion based music player/Dataset/emotion.npy
Binary file not shown.
Binary file added Emotion based music player/Dataset/labels.npy
Binary file not shown.
Binary file added Emotion based music player/Images/Capture.png
Binary file added Emotion based music player/Images/Information.png
Binary file added Emotion based music player/Images/Output.png
Binary file added Emotion based music player/Images/emotion.jpg
Binary file added Emotion based music player/Images/open page.png
11 changes: 11 additions & 0 deletions Emotion based music player/Model/README.md
@@ -0,0 +1,11 @@
<h1>Emotion Based Music Player</h1>

<h1>Description</h1>
Welcome to the emotion-based music player, built with Mediapipe and Keras. OpenCV and Streamlit are used to create the web app, and the streamlit-webrtc module captures the webcam feed in the browser; a simplified sketch of that capture loop is shown below.
<br><br>
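As a rough idea of how the webcam capture is wired up, here is a simplified, stand-alone sketch (not the full app; the processor only mirrors each frame):

```python
# Simplified sketch of the Streamlit + streamlit-webrtc capture loop
import av
import cv2
import streamlit as st
from streamlit_webrtc import webrtc_streamer

st.header("Webcam preview")

class MirrorProcessor:
    def recv(self, frame):
        img = frame.to_ndarray(format="bgr24")   # frame as a BGR NumPy array
        img = cv2.flip(img, 1)                   # mirror it, as the real app does
        return av.VideoFrame.from_ndarray(img, format="bgr24")

webrtc_streamer(key="preview", video_processor_factory=MirrorProcessor)
```

Like the full app, it is launched with streamlit run followed by the script name.
<br>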

<h1>Connect with me</h1>
If you have any queries regarding any of these topics, feel free to reach out to me using the links below:<br>

GitHub: https://github.com/shanmukhi-developer<br>
LinkedIn: https://www.linkedin.com/in/nadipudi-shanmukhi-satya-6904a0242/<br>
Binary file added Emotion based music player/Model/model.h5
Binary file not shown.
13 changes: 13 additions & 0 deletions Emotion based music player/Requirements.txt
@@ -0,0 +1,13 @@
*Requirements for Running the Project*

Python 3.x
Python libraries:
1. streamlit
2. streamlit-webrtc
3. opencv-python
4. mediapipe
5. keras
6. numpy

-->A pre-trained Keras model (model.h5) and a NumPy labels file (labels.npy), both included in the project.
-->A webcam to capture live video input.
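-->The libraries can be installed in one go with: pip install streamlit streamlit-webrtc opencv-python mediapipe keras numpy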
47 changes: 47 additions & 0 deletions Emotion based music player/Web apps/README.md
@@ -0,0 +1,47 @@
*Emotion Based Music Player*<br>
*🎯 Goal*<br>
An emotion-based music player solves key challenges by using emotion detection techniques, such as facial recognition or voice analysis, to automatically curate playlists that match the user’s mood. It adapts to real-time emotional changes, providing a dynamic music experience tailored to the user's feelings.<br>

*🧵 Dataset*<br>
The dataset used in this project consists of emotion.npy and labels.npy, NumPy (.npy) files that store arrays such as recorded landmark features and the corresponding emotion labels used by the model; a short snippet for inspecting them is shown below.<br>
<br>
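For reference, a quick way to inspect these files (the exact shapes and contents depend on how the data was recorded, so treat this as a sketch):

```python
# Sketch: inspect the bundled .npy files
import numpy as np

labels = np.load("labels.npy")    # emotion class names used by the model
data = np.load("emotion.npy")     # stored array (e.g. recorded landmark features)

print(labels)
print(data.shape)
```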

*🧾 Description*<br>
This project uses Python libraries such as Streamlit, Keras, and NumPy to build the model, take input from the user, and produce the required output.<br>
<br>


<br>

*🚀 Models Implemented*<br>
1. Pretrained Deep Learning Model (model.h5):<br>

-->Likely a Convolutional Neural Network (CNN) used for emotion recognition, processing facial and hand landmarks to detect the user's emotion.<br>
-->The model is loaded with Keras' load_model function, indicating it is a neural network trained on emotion-labeled data.<br>

2. Mediapipe's Holistic and Hands Models:<br>

-->Mediapipe Holistic: used for detecting key facial and body landmarks (a minimal sketch of this step is shown below).<br>
-->Mediapipe Hands: used for detecting hand landmarks, so that gestures can also feed into the emotion prediction.<br>
<br>
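A minimal sketch of the Mediapipe landmark-extraction step (simplified from music.py; the image path is hypothetical, since the app uses live webcam frames):

```python
# Sketch: extract face-landmark features the way music.py does
import cv2
import mediapipe as mp

holis = mp.solutions.holistic.Holistic()

frame = cv2.imread("face.jpg")    # hypothetical input image
res = holis.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

features = []
if res.face_landmarks:
    ref = res.face_landmarks.landmark[1]           # reference point, as in music.py
    for lm in res.face_landmarks.landmark:
        features.extend([lm.x - ref.x, lm.y - ref.y])

print(len(features))   # 936 face features when a face is detected
```
<br>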


*📚 Libraries Needed*<br>
1. Streamlit
2. Streamlit-webrtc
3. Opencv-python
4. Mediapipe
5. Keras
6. Numpy

<br>


*📢 Conclusion*<br>
The emotion-based music player successfully integrates deep learning and computer vision techniques to create a personalized, emotion-driven music experience. By leveraging facial expression and hand gesture recognition through Mediapipe, combined with a pretrained deep learning model, the system can detect the user's emotional state in real-time. This allows for dynamic music recommendations that adapt to the user's mood, enhancing the listening experience.<br>

The project demonstrates how artificial intelligence can transform user interaction with media, making it more intuitive, personalized, and engaging. With future improvements, such as more advanced emotion recognition and enhanced music recommendations, this system could revolutionize how users interact with digital content, making it more emotionally responsive and contextually aware.<br>

<br>

*✒️ Signature*<br>
Nadipudi Shanmukhi satya<br>
102 changes: 102 additions & 0 deletions Emotion based music player/Web apps/music.py
@@ -0,0 +1,102 @@
import streamlit as st
from streamlit_webrtc import webrtc_streamer
import av
import cv2
import numpy as np
import mediapipe as mp
from keras.models import load_model
import webbrowser

# Load the pretrained emotion classifier and its label names.
model = load_model("model.h5")
label = np.load("labels.npy")

# Mediapipe solutions: Holistic for face/body landmarks, Hands for hand landmarks.
holistic = mp.solutions.holistic
hands = mp.solutions.hands
holis = holistic.Holistic()
drawing = mp.solutions.drawing_utils

st.header("Emotion Based Music Recommender")

if "run" not in st.session_state:
st.session_state["run"] = "true"

try:
emotion = np.load("emotion.npy")[0]
except:
emotion=""

if not(emotion):
st.session_state["run"] = "true"
else:
st.session_state["run"] = "false"

class EmotionProcessor:
    def recv(self, frame):
        frm = frame.to_ndarray(format="bgr24")

        ##############################
        frm = cv2.flip(frm, 1)  # mirror the frame like a selfie camera

        res = holis.process(cv2.cvtColor(frm, cv2.COLOR_BGR2RGB))

        # Build the feature vector: face landmarks relative to landmark 1,
        # then each hand's landmarks relative to landmark 8 (zeros when a hand is absent).
        lst = []

        if res.face_landmarks:
            for i in res.face_landmarks.landmark:
                lst.append(i.x - res.face_landmarks.landmark[1].x)
                lst.append(i.y - res.face_landmarks.landmark[1].y)

            if res.left_hand_landmarks:
                for i in res.left_hand_landmarks.landmark:
                    lst.append(i.x - res.left_hand_landmarks.landmark[8].x)
                    lst.append(i.y - res.left_hand_landmarks.landmark[8].y)
            else:
                for i in range(42):
                    lst.append(0.0)

            if res.right_hand_landmarks:
                for i in res.right_hand_landmarks.landmark:
                    lst.append(i.x - res.right_hand_landmarks.landmark[8].x)
                    lst.append(i.y - res.right_hand_landmarks.landmark[8].y)
            else:
                for i in range(42):
                    lst.append(0.0)

            lst = np.array(lst).reshape(1, -1)

            # Predict the emotion, overlay it on the frame, and persist it for the recommender.
            pred = label[np.argmax(model.predict(lst))]

            print(pred)
            cv2.putText(frm, pred, (50, 50), cv2.FONT_ITALIC, 1, (255, 0, 0), 2)

            np.save("emotion.npy", np.array([pred]))

        # Draw the detected landmarks on the outgoing frame.
        drawing.draw_landmarks(frm, res.face_landmarks, holistic.FACEMESH_TESSELATION,
                               landmark_drawing_spec=drawing.DrawingSpec(color=(0, 0, 255), thickness=-1, circle_radius=1),
                               connection_drawing_spec=drawing.DrawingSpec(thickness=1))
        drawing.draw_landmarks(frm, res.left_hand_landmarks, hands.HAND_CONNECTIONS)
        drawing.draw_landmarks(frm, res.right_hand_landmarks, hands.HAND_CONNECTIONS)

        ##############################

        return av.VideoFrame.from_ndarray(frm, format="bgr24")

lang = st.text_input("Language")
singer = st.text_input("singer")
choose = st.text_input("Select")

# Stream the webcam only while an emotion still needs to be captured.
if lang and singer and st.session_state["run"] != "false":
    webrtc_streamer(key="key", desired_playing_state=True,
                    video_processor_factory=EmotionProcessor)

btn = st.button("Recommend me ")

if btn:
    if not emotion:
        st.warning("Please let me capture your emotion first")
        st.session_state["run"] = "true"
    else:
        if choose == "youtube":
            webbrowser.open(f"https://www.youtube.com/results?search_query={lang}+{emotion}+song+{singer}")
        else:
            webbrowser.open(f"https://open.spotify.com/search/{lang}%20{emotion}%20songs%20{singer}")
        # Clear the saved emotion so a fresh one is captured for the next recommendation.
        np.save("emotion.npy", np.array([""]))
        st.session_state["run"] = "false"
8 changes: 8 additions & 0 deletions Emotion based music player/Web apps/tempCodeRunnerFile.py
@@ -0,0 +1,8 @@
import streamlit as st
from streamlit_webrtc import webrtc_streamer
import av
import cv2
import numpy as np
import mediapipe as mp
from keras.models import load_model
import webbrowser