AI Resume Analyzer #975 #980

Merged
merged 5 commits into from
Nov 10, 2024
11 changes: 11 additions & 0 deletions AI Resume Analyzer/Resume Analyzer Sentence Transformers/README.md
@@ -0,0 +1,11 @@
<br/>

![Screenshot (5313)](https://github.com/user-attachments/assets/ff312d66-7b70-4479-b1cf-06c9a871d853)

<br/>

![Screenshot (5314)](https://github.com/user-attachments/assets/aed37f5f-361f-40e7-86a2-4ad77234db0d)

<br/>

![Screenshot (5315)](https://github.com/user-attachments/assets/7ea6d358-40e3-4f57-908e-fd0a78d494b1)
49 changes: 49 additions & 0 deletions AI Resume Analyzer/Resume Analyzer Sentence Transformers/app.py
@@ -0,0 +1,49 @@


import streamlit as st
from resume_analyzer import ResumeAnalyzer
import os

def main():
    st.set_page_config(page_title="Smart Resume Analyzer", layout="wide")
    st.title("Smart Resume Analyzer")

    job_description = st.text_area("Enter the Job Description", height=200)
    st.write("Upload resumes as PDF files:")
    uploaded_files = st.file_uploader("Upload PDF files", accept_multiple_files=True, type="pdf")

    if st.button("Analyze Resumes") and uploaded_files:
        # Save each upload to disk so the analyzer can read it by path
        resume_files = []
        for uploaded_file in uploaded_files:
            file_path = os.path.join("/tmp", uploaded_file.name)
            with open(file_path, "wb") as f:
                f.write(uploaded_file.getbuffer())
            resume_files.append(file_path)

        analyzer = ResumeAnalyzer()
        analysis_results = analyzer.analyze_resumes(resume_files, job_description)

        for result in analysis_results:
            st.write(f"**Resume Name**: {result['Resume Name']}")
            st.write(f"**Job Titles**: {result['Job Titles']}")
            st.write(f"**Primary Skills**: {result['Primary Skills']}")
            st.write(f"**Secondary Skills**: {result['Secondary Skills']}")
            st.write(f"**Total Experience (Years)**: {result['Total Experience (Years)']}")
            st.write(f"**Relevant Experience Duration (Years)**: {result['Relevant Experience Duration (Years)']:.2f}")
            st.write(f"**Average Experience Relevance**: {result['Average Experience Relevance']:.2f}")
            st.write(f"**Relevant Projects**: {result['Relevant Projects']}")
            st.write(f"**Average Project Relevance**: {result['Average Project Relevance']:.2f}")
            st.write(f"**Score**: {result['Score']:.2f}")
            st.write("-" * 50)

        report_path = analyzer.generate_ranking_report(analysis_results)
        with open(report_path, "rb") as report_file:
            st.download_button(
                label="Download Analysis Report as PDF",
                data=report_file,
                file_name="resume_analysis_report.pdf",
                mime="application/pdf"
            )

if __name__ == "__main__":
    main()
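The `ResumeAnalyzer` class itself is not shown in this diff, but given the `sentence-transformers` dependency, the per-section relevance scores above are presumably cosine similarity between the embedding of a resume section and the embedding of the job description. A minimal stdlib sketch of that scoring step (the embedding vectors here are hypothetical placeholders, not real model output):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for a resume section and the job description
resume_vec = [0.2, 0.8, 0.1]
job_vec = [0.25, 0.7, 0.05]
score = cosine_similarity(resume_vec, job_vec)
```

With real sentence-transformer embeddings the vectors have several hundred dimensions, but the scoring formula is the same.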
@@ -0,0 +1,16 @@
streamlit
sentence-transformers
PyMuPDF
rapidfuzz
fpdf
python-dateutil
spacy
# en_core_web_sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz
# pandas
# spacy==3.5.0
# https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.5.0/en_core_web_sm-3.5.0-py3-none-any.whl

# python -m spacy download en_core_web_sm
# import nltk
# nltk.download('punkt')
# nltk.download('stopwords')

Large diffs are not rendered by default.

12 changes: 12 additions & 0 deletions AI Resume Analyzer/Resume Analyzer Sentence Transformers/utils.py
@@ -0,0 +1,12 @@

import streamlit as st
import spacy
from sentence_transformers import SentenceTransformer

# @st.cache_resource
def load_nlp_model():
return spacy.load("en_core_web_sm")

# @st.cache_resource
def load_transformer_model():
return SentenceTransformer('all-MiniLM-L6-v2')
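The commented-out `@st.cache_resource` decorators would keep these models loaded across Streamlit reruns instead of reloading them on every interaction. The same load-once pattern can be sketched with the standard library's `functools.lru_cache` (the dict returned here is a stand-in for an expensive load such as `spacy.load("en_core_web_sm")`):

```python
from functools import lru_cache

@lru_cache(maxsize=1)
def load_model():
    # Stand-in for an expensive model load; runs only on the first call
    return {"name": "en_core_web_sm", "loaded": True}

m1 = load_model()
m2 = load_model()
assert m1 is m2  # second call returns the cached object, no reload
```

In a Streamlit app, `st.cache_resource` is the idiomatic choice because it also shares the object across user sessions.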
@@ -0,0 +1,72 @@


# ATS Resume Expert

ATS Resume Expert is a Streamlit-based web application that uses Google's Generative AI (Gemini) model to analyze resumes in PDF format against specific job descriptions. The application evaluates the resume content, providing insights and match percentages to help users understand how well their resume aligns with job requirements.

## Features
- **Resume Analysis**: Upload a PDF resume, and the AI evaluates it based on a provided job description.
- **Job Match Scoring**: The AI provides a match percentage between the resume and job description, highlighting strengths, weaknesses, missing keywords, and more.
- **Streamlit UI**: User-friendly interface with text input for job description and resume upload capability.

## Getting Started

### Prerequisites
1. **Python**: Make sure you have Python 3.7+ installed.
2. **Google API Key**: This project requires access to Google Generative AI's Gemini model. Obtain an API key and configure it in the environment.

### Installation
1. Clone this repository:
```bash
git clone https://github.com/your-username/ATS-Resume-Expert.git
cd ATS-Resume-Expert
```
2. Install the required packages:
```bash
pip install -r requirements.txt
```
Here is a sample `requirements.txt`:
```
streamlit
python-dotenv
pdf2image
pillow
google-generativeai
```

3. Install **poppler** (required for `pdf2image`):
- **Windows**: [Download Poppler for Windows](http://blog.alivate.com.au/poppler-windows/), extract, and add `poppler/bin` to your PATH.
- **Linux**: Run `sudo apt install poppler-utils`.
- **macOS**: Run `brew install poppler`.

4. Create a `.env` file in the project root with your Google API key:
```
GOOGLE_API_KEY=your_google_api_key
```

### Running the App
1. Start the Streamlit app:
```bash
streamlit run app.py
```

2. Open the provided local URL to access the ATS Resume Expert app.

## Usage
1. **Job Description**: Enter the job description in the text area.
2. **Resume Upload**: Upload a PDF version of the resume.
3. **Analyze Resume**:
- Click **Tell Me About the Resume** to get an evaluation of the resume based on job requirements.
- Click **Percentage Match** to receive a match score along with suggestions for improvement.

## File Structure
- **app.py**: Main application code.
- **README.md**: Documentation for the app.
- **requirements.txt**: List of required Python libraries.
- **.env**: Environment file for API keys (not included in repository).

## Troubleshooting
1. **Poppler Installation**: Ensure Poppler is installed and accessible in your PATH if you encounter PDF processing errors.
2. **API Errors**: Check your Google API key and usage limits if there are issues with the AI model responses.


@@ -0,0 +1,93 @@
import base64
import streamlit as st
import os
import io
from dotenv import load_dotenv
from PIL import Image
import pdf2image
import google.generativeai as genai

load_dotenv()
genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))

def get_gemini_response(input_text, pdf_content, prompt):
    model = genai.GenerativeModel('gemini-pro-vision')
    response = model.generate_content([input_text, pdf_content[0], prompt])
    return response.text

def input_pdf_setup(uploaded_file):
    if uploaded_file is not None:
        # Convert the first page of the PDF to an image
        images = pdf2image.convert_from_bytes(uploaded_file.read())
        first_page = images[0]

        # Convert the page image to JPEG bytes
        img_byte_arr = io.BytesIO()
        first_page.save(img_byte_arr, format='JPEG')
        img_byte_arr = img_byte_arr.getvalue()

        pdf_parts = [
            {
                "mime_type": "image/jpeg",
                "data": base64.b64encode(img_byte_arr).decode()  # encode to base64
            }
        ]
        return pdf_parts
    else:
        raise FileNotFoundError("No file uploaded")

## Streamlit App

st.set_page_config(page_title="ATS Resume Expert")
st.header("ATS Tracking System")
input_text = st.text_area("Job Description: ", key="input")
uploaded_file = st.file_uploader("Upload your resume (PDF)...", type=["pdf"])


if uploaded_file is not None:
    st.write("PDF Uploaded Successfully")


submit1 = st.button("Tell Me About the Resume")

# submit2 = st.button("How Can I Improve My Skills")

submit3 = st.button("Percentage Match")

input_prompt1 = """
You are an experienced Technical Human Resource Manager. Your task is to review the provided resume against the job description.
Please share your professional evaluation on whether the candidate's profile aligns with the role.
Highlight the strengths and weaknesses of the applicant in relation to the specified job requirements.
"""

input_prompt3 = """
You are a skilled ATS (Applicant Tracking System) scanner with a deep understanding of data science and ATS functionality.
Your task is to evaluate the resume against the provided job description. Give the percentage match between the resume and
the job description. State the percentage first, then the missing keywords, and finally your overall thoughts.
"""

if submit1:
    if uploaded_file is not None:
        pdf_content = input_pdf_setup(uploaded_file)
        response = get_gemini_response(input_prompt1, pdf_content, input_text)
        st.subheader("The Response is")
        st.write(response)
    else:
        st.write("Please upload the resume")

elif submit3:
    if uploaded_file is not None:
        pdf_content = input_pdf_setup(uploaded_file)
        response = get_gemini_response(input_prompt3, pdf_content, input_text)
        st.subheader("The Response is")
        st.write(response)
    else:
        st.write("Please upload the resume")
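`input_pdf_setup` ships the first page of the PDF to Gemini as a base64-encoded JPEG inside a `mime_type`/`data` dict. That encoding step is plain stdlib `base64` and can be checked in isolation (the JPEG bytes here are a placeholder, not a real image):

```python
import base64

def to_image_part(image_bytes, mime_type="image/jpeg"):
    # Wrap raw image bytes the way input_pdf_setup does: base64 text plus MIME type
    return {"mime_type": mime_type, "data": base64.b64encode(image_bytes).decode()}

part = to_image_part(b"\xff\xd8\xff fake-jpeg-bytes")
# Round-trip check: decoding the base64 text recovers the original bytes
assert base64.b64decode(part["data"]) == b"\xff\xd8\xff fake-jpeg-bytes"
```

The base64 text form is what the `google-generativeai` client expects for inline image parts.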









@@ -0,0 +1,7 @@
streamlit
google-generativeai
python-dotenv
langchain
PyPDF2
faiss-cpu
langchain_google_genai