Update DocSum readme (#24)
* Update DocSum readme

Signed-off-by: lvliang-intel <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: lvliang-intel <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
lvliang-intel and pre-commit-ci[bot] authored Mar 28, 2024
1 parent 03a6df8 commit 83a885f
Showing 1 changed file with 12 additions and 1 deletion.
13 changes: 12 additions & 1 deletion DocSum/README.md
@@ -1,4 +1,15 @@
Text summarization is an NLP task that creates a concise and informative summary of a longer text. LLMs can be used to create summaries of news articles, research papers, technical documents, and other types of text. Suppose you have a set of documents (PDFs, Notion pages, customer questions, etc.) and you want to summarize the content. In this example use case, we use LangChain to apply some summarization strategies and run LLM inference using Text Generation Inference on Intel Gaudi2.
# Document Summarization

In a world where data, information, and legal complexities are prevalent, the volume of legal documents is growing rapidly. Law firms, legal professionals, and businesses are dealing with an ever-increasing number of legal texts, including contracts, court rulings, statutes, and regulations.
These documents contain important insights, but understanding them can be overwhelming. This is where the demand for legal document summarization comes in.

Large Language Models (LLMs) have revolutionized the way we interact with text; they can be used to create summaries of news articles, research papers, technical documents, and other types of text. Suppose you have a set of documents (PDFs, Notion pages, customer questions, etc.) and you want to summarize the content. In this example use case, we use LangChain to apply some summarization strategies and run LLM inference using Text Generation Inference on Intel Xeon and Gaudi2.
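
For illustration only (this is not code from this repository), a minimal LangChain sketch of that flow might look like the following. It assumes a Text Generation Inference server is already serving a model; the endpoint URL, input file, and chunking parameters are placeholders.

```python
# Minimal sketch: summarize a long document with LangChain against a running TGI endpoint.
# The endpoint URL, file name, and chunk sizes below are illustrative assumptions.
from langchain_community.llms import HuggingFaceTextGenInference
from langchain.chains.summarize import load_summarize_chain
from langchain.text_splitter import CharacterTextSplitter
from langchain.docstore.document import Document

# Point LangChain at the TGI server (e.g. running on Gaudi2 or Xeon).
llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8080/",  # placeholder TGI endpoint
    max_new_tokens=512,
    temperature=0.01,
)

with open("legal_document.txt") as f:  # placeholder input document
    text = f.read()

# Split the long text into chunks and apply a map-reduce summarization strategy.
splitter = CharacterTextSplitter(chunk_size=2000, chunk_overlap=100)
docs = [Document(page_content=chunk) for chunk in splitter.split_text(text)]

chain = load_summarize_chain(llm, chain_type="map_reduce")
print(chain.run(docs))
```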

The document summarization architecture is shown below:

![Architecture](https://i.imgur.com/XT0YUhu.png)

![Workflow](https://i.imgur.com/m9Ac9wy.png)

# Environment Setup
