Deep Learning Simplified Repository (Proposing new issue)
🔴 Project Title : Automated Legal Document Summarizer
🔴 Aim : Create a model that can read and summarize lengthy legal documents, preserving the key legal points and clauses.
🔴 Dataset : Collected from diverse sources to ensure a variety of content for comprehensive testing.
🔴 Approach :
Use a pre-trained transformer model fine-tuned on a legal text dataset.
Incorporate Named Entity Recognition (NER) to identify and highlight important entities (e.g., names, dates, legal terms).
Evaluate the summaries for accuracy and completeness by comparing them to human-generated summaries.
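A minimal sketch of these three steps, using the Hugging Face `pipeline` API and the `rouge_score` package, could look like the following. The checkpoint names, the sample clause, and the reference summary are placeholders; in the actual project the summarizer would first be fine-tuned on a legal text dataset.

```python
# Minimal sketch of the proposed approach (summarize -> extract entities -> score).
# Checkpoint names and the sample texts below are placeholders, not project decisions.
from transformers import pipeline
from rouge_score import rouge_scorer

# 1. Pre-trained seq2seq summarizer (to be fine-tuned on legal documents).
summarizer = pipeline("summarization", model="t5-small")

# 2. NER pipeline to highlight names, dates, organizations, etc.
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

document = (
    "This Agreement is entered into on 1 January 2024 between Acme Corp and "
    "John Doe, and shall remain in force for a period of five years."
)
reference_summary = "Acme Corp and John Doe sign a five-year agreement starting 1 January 2024."

# Generate the summary and collect the highlighted entities.
summary = summarizer(document, max_length=64, min_length=10, truncation=True)[0]["summary_text"]
entities = [(e["word"], e["entity_group"]) for e in ner(document)]

# 3. Compare the generated summary with the human-written reference using ROUGE.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(reference_summary, summary)

print("Summary :", summary)
print("Entities:", entities)
print("ROUGE-L :", round(scores["rougeL"].fmeasure, 3))
```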
📍 Follow the Guidelines to Contribute to the Project :
You need to create a separate folder named after the Project Title.
Inside that folder, there will be four main components.
Images - To store the required images.
Dataset - To store the dataset or information/source about the dataset.
Model - To store the machine learning model you've created using the dataset.
requirements.txt - This file will contain the packages/libraries required to run the project on other machines (see the illustrative example after this list).
Inside the Model folder, the README.md file must be filled out properly, with proper visualizations and conclusions.
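For reference, a requirements.txt for this project might look like the sketch below; the exact package list and versions depend on the final model and are only illustrative:

```text
transformers>=4.30
torch>=2.0
datasets>=2.14
rouge-score>=0.1.2
sentencepiece
pandas
numpy
```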
🔴🟡 Points to Note :
The issues will be assigned on a first-come, first-served basis, 1 Issue == 1 PR.
"Issue Title" and "PR Title should be the same. Include issue number along with it.
Follow Contributing Guidelines & Code of Conduct before start Contributing.
✅ To be Mentioned while taking the issue :
Approach for this Project :
Use a pre-trained transformer model fine-tuned on a legal text dataset.
Incorporate Named Entity Recognition (NER) to identify and highlight important entities (e.g., names, dates, legal terms).
Evaluate the summaries for accuracy and completeness by comparing them to human-generated summaries.
What is your participant role? (Mention the Open Source program)
Happy Contributing 🚀
All the best. Enjoy your open source journey ahead. 😎
Proposed Models/Architectures:
BERTSUM: A variant of BERT (Bidirectional Encoder Representations from Transformers) specifically designed for extractive summarization.
T5 (Text-To-Text Transfer Transformer): A versatile model that can be fine-tuned for summarization tasks by framing them as text-to-text problems.
PEGASUS: A model designed for abstractive summarization with a focus on generating high-quality summaries.
Longformer: A transformer model designed to handle long documents, making it suitable for summarizing lengthy legal texts.
Tools: Hugging Face Transformers, TensorFlow, PyTorch
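Because legal documents frequently exceed the input limits of standard transformers, the Longformer option above can be exercised through the Longformer Encoder-Decoder (LED) checkpoints. A minimal inference sketch is given below; the `allenai/led-base-16384` checkpoint is only one possible choice, and it would still need fine-tuning on a legal summarization dataset before the generated summaries are useful.

```python
# Minimal long-document summarization sketch with Longformer Encoder-Decoder (LED).
# The checkpoint and the input file path are examples; fine-tuning on legal data is assumed.
import torch
from transformers import LEDTokenizer, LEDForConditionalGeneration

checkpoint = "allenai/led-base-16384"  # accepts inputs up to 16k tokens
tokenizer = LEDTokenizer.from_pretrained(checkpoint)
model = LEDForConditionalGeneration.from_pretrained(checkpoint)

long_document = open("sample_contract.txt").read()  # placeholder path to a lengthy legal text

inputs = tokenizer(long_document, return_tensors="pt", truncation=True, max_length=16384)

# LED expects global attention on at least the first token.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs["input_ids"],
    global_attention_mask=global_attention_mask,
    max_length=256,
    num_beams=4,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```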