
[Project Addition]: Next Word Predictor using Deep Learning #818

Open
Karthik110505 opened this issue Jun 24, 2024 · 5 comments
Labels
Status: Up for Grabs Up for grabs issue.

Comments

@Karthik110505
Contributor

Deep Learning Simplified Repository (Proposing new issue)

🔴 Project Title : Next Word Predictor using Deep Learning

🔴 Aim : To predict the next word of a given sentence using deep learning algorithms

🔴 Dataset : https://www.kaggle.com/datasets/thedevastator/wikipedia-biographies-text-generation-dataset

🔴 Approach : Try to use 3-4 algorithms to implement the models, and compare them to find the best-fitting algorithm by checking their accuracy scores. Also, do not forget to do an exploratory data analysis before creating any model.


📍 Follow the Guidelines to Contribute in the Project :

  • You need to create a separate folder named after the Project Title.
  • Inside that folder, there will be four main components.
    • Images - To store the required images.
    • Dataset - To store the dataset or, information/source about the dataset.
    • Model - To store the machine learning model you've created using the dataset.
    • requirements.txt - This file will contain the required packages/libraries to run the project in other machines.
  • Inside the Model folder, the README.md file must be filled up properly, with proper visualizations and conclusions.

🔴🟡 Points to Note :

  • The issues will be assigned on a first-come, first-served basis; 1 Issue == 1 PR.
  • The "Issue Title" and "PR Title" should be the same. Include the issue number along with it.
  • Follow the Contributing Guidelines & Code of Conduct before you start contributing.

To be Mentioned while taking the issue :

  • Full name : Veera Venkata Karthik Barrenkala

  • GitHub Profile Link : https://github.com/Karthik110505

  • Email ID :[email protected]

  • Approach for this Project :

    • Load the Dataset
    • Exploratory Data Analysis (EDA):
      • Visualize common words and text length distributions.
    • Model Implementation:
      • Artificial Neural Network (ANN)
      • Long Short-Term Memory (LSTM)
      • BiLSTM
    • Train and evaluate each model.
    • Compare performance using accuracy and loss metrics.
  • What is your participant role? (Mention the Open Source program) GSSoC24
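Before reaching for the deep models listed above, the next-word task itself can be illustrated with a minimal pure-Python baseline: a trigram counter that predicts the word most often seen after the last two words of a prompt. This is a sketch for intuition only; the function and variable names are illustrative and not part of the proposed implementation.

```python
from collections import Counter, defaultdict

def train_trigram(corpus: str):
    """Count which word most often follows each two-word context."""
    words = corpus.lower().split()
    following = defaultdict(Counter)
    for a, b, c in zip(words, words[1:], words[2:]):
        following[(a, b)][c] += 1
    return following

def predict_next(model, sentence: str):
    """Return the most frequent continuation of the sentence's last two words."""
    words = sentence.lower().split()
    candidates = model.get(tuple(words[-2:]))
    return candidates.most_common(1)[0][0] if candidates else None

corpus = ("the cat sat on the mat . the cat sat on the sofa . "
          "the cat sat on the mat .")
model = train_trigram(corpus)
print(predict_next(model, "the cat sat on the"))  # → mat
```

The deep models replace this frequency table with learned representations, which is what lets them generalize to contexts never seen verbatim in training.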


Happy Contributing 🚀

All the best. Enjoy your open source journey ahead. 😎


Thank you for creating this issue! We'll look into it as soon as possible. Your contributions are highly appreciated! 😊

@abhisheks008
Owner

You can start working on it. Assigned @Karthik110505

@abhisheks008 added the Status: Assigned, level 2 (Level 2 for GSSoC), and gssoc (GirlScript Summer of Code 2024) labels on Jun 26, 2024
@abhisheks008 added the Status: Up for Grabs label and removed the Status: Assigned, level 2, and gssoc labels on Aug 11, 2024
@Sakeebhasan123456

Hi @abhisheks008, I would like to work on this project.

Approach

Data Preprocessing

  • Lowercasing: Converted all text to lowercase.
  • Punctuation Removal: Removed all punctuation marks.
  • Tokenization: Split text into individual words.
  • Stop Words Removal: Removed common English stop words.
  • Sequence Creation: Created sequences of 5 words each to use as input data.
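The five preprocessing steps above can be sketched in plain Python. This is a minimal, self-contained version: the stop-word list is a small illustrative subset (a real run would use e.g. NLTK's English list), and the whitespace tokenizer stands in for a proper tokenizer.

```python
import string

# Small illustrative stop-word list; a real run would use e.g. NLTK's list.
STOP_WORDS = {"the", "a", "an", "is", "are", "and", "or", "of", "to", "in"}

def preprocess(text: str, seq_len: int = 5):
    # 1. Lowercasing
    text = text.lower()
    # 2. Punctuation removal
    text = text.translate(str.maketrans("", "", string.punctuation))
    # 3. Tokenization (simple whitespace split)
    tokens = text.split()
    # 4. Stop-word removal
    tokens = [t for t in tokens if t not in STOP_WORDS]
    # 5. Sequence creation: sliding windows of `seq_len` words
    return [tokens[i:i + seq_len] for i in range(len(tokens) - seq_len + 1)]

sample = "The quick brown Fox jumps over the lazy Dog, again and again!"
for seq in preprocess(sample):
    print(seq)
```

In training, each 5-word window would typically be split into the first words as input and the last word as the prediction target.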

Models Implemented

  1. Basic LSTM Model
    • Single LSTM layer.
  2. Stacked LSTM Model
    • Two LSTM layers stacked.
  3. Bidirectional LSTM Model
    • LSTM layer that processes input in both directions.
  4. LSTM with Dropout
    • LSTM layer with dropout regularization to prevent overfitting.
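The four variants could be defined along these lines in Keras (assuming TensorFlow is the backend; vocabulary size, sequence length, and layer widths are placeholder values, not the contributor's actual settings):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Bidirectional, Dense, Dropout

VOCAB = 10000   # placeholder vocabulary size
EMB = 100       # placeholder embedding dimension

def basic_lstm():
    return Sequential([
        Embedding(VOCAB, EMB),
        LSTM(128),                         # single LSTM layer
        Dense(VOCAB, activation="softmax"),
    ])

def stacked_lstm():
    return Sequential([
        Embedding(VOCAB, EMB),
        LSTM(128, return_sequences=True),  # pass the full sequence onward
        LSTM(128),
        Dense(VOCAB, activation="softmax"),
    ])

def bidirectional_lstm():
    return Sequential([
        Embedding(VOCAB, EMB),
        Bidirectional(LSTM(128)),          # reads the sequence both ways
        Dense(VOCAB, activation="softmax"),
    ])

def lstm_with_dropout():
    return Sequential([
        Embedding(VOCAB, EMB),
        LSTM(128),
        Dropout(0.3),                      # regularization against overfitting
        Dense(VOCAB, activation="softmax"),
    ])
```

Each model ends in a softmax over the vocabulary, so "predicting the next word" is a multi-class classification over all known words.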

Fine-tuning

Used Optuna to search for the best hyperparameters.

Training

  • Optimizer: Adam
  • Loss Function: Categorical Crossentropy
  • Metrics: Accuracy
  • Epochs:
  • Batch Size:
  • Validation Split:

Results

  • Accuracy: (Include accuracy results for each model)
  • Loss: (Include loss results for each model)
  • Plots: Accuracy and loss plots are saved in the Images/ folder.

@Sakeebhasan123456

@abhisheks008 Actually, I want to do this for the GSSoC Extended program. Please give me a chance to show my skills.

@abhisheks008
Owner

Assigning this issue to you @Sakeebhasan123456
