HTML-Text-Parser

This project extracts text from documents and prepares it for processing by Large Language Models (LLMs). It not only pulls out the text but also preserves its styles and decorations by converting everything into structured data. This approach keeps the style information, carried by tags or classes, so the text's original formatting and emphasis are retained.

Handling large blocks of text directly is often impractical for LLMs, which can struggle to process and interpret long, undivided text effectively. To solve this, we implement a chunking strategy where text is divided based on its styling cues, such as font size, boldness, and italics. Text with larger fonts or emphasized styles is typically more significant, often representing headings or subheadings; such segments are given higher scores and treated as the start of separate chunks that include the context that follows them. This method enhances the readability and usability of the text in LLM applications.
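The scoring idea above can be sketched as follows. The score function and its weights here are illustrative assumptions, not the project's actual score dictionary:

```python
# Hypothetical style-based scoring: larger fonts and emphasis earn higher
# scores, and segments scoring at or above a cutoff start a new chunk.

def style_score(font_size, bold=False, italic=False):
    """Score a text segment by its styling cues (illustrative weights)."""
    score = font_size // 2          # bigger fonts score higher
    if bold:
        score += 2                  # bold text is a strong heading cue
    if italic:
        score += 1                  # italics add mild emphasis
    return score

segments = [
    ("1. Overview", 24, True, False),         # likely a heading
    ("Body paragraph text...", 11, False, False),
    ("1.1 Scope", 16, True, False),           # likely a subheading
]

cutoff = 7
for text, size, bold, italic in segments:
    boundary = style_score(size, bold, italic) >= cutoff
    print(text, "->", "new chunk" if boundary else "same chunk")
```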

Installation

First, clone the GitHub repository.

git clone https://github.com/ChenTaHung/HTML-Text-Parser.git path/to/clone/the/repository # HTTPS
git clone [email protected]:ChenTaHung/HTML-Text-Parser.git path/to/clone/the/repository # SSH

Then, switch to the directory where the repository has been cloned.

import os
os.chdir('/path/to/the/cloned/repository')
from src.main.TextParsing.HTMLParser import HTMLParser
from src.main.TextParsing.TextChunker import TextChunker

Usage

Input an HTML file (.html):

If the original documents are in PDF format, please convert them into HTML files first (Adobe's converter is recommended) and make sure the HTML contents are parsable.

# Open the html file and read into the program
with open('data/FASB_2022_html/ASU_2022-01.html', 'r') as html_file:
    html_content = html_file.read()

# instantiate the Parser object
parser = HTMLParser(html_content)
text_info_df = parser.parse()

# Get all the text out:
allText = parser.get_text()

The text_info_df holds all the extracted text along with its styles and decorations in a structured format.
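As an illustration of the kind of extraction involved, here is a minimal standard-library sketch (not the project's HTMLParser class, whose internals may differ) that pairs each text node with the inline style attribute of its enclosing tag:

```python
# Illustrative only: a stdlib parser that records (text, style) pairs,
# roughly the kind of structured data text_info_df holds.
from html.parser import HTMLParser as _StdHTMLParser

class StyleAwareExtractor(_StdHTMLParser):
    def __init__(self):
        super().__init__()
        self._styles = []   # stack of style attributes for open tags
        self.rows = []      # (text, style) pairs, one per text node

    def handle_starttag(self, tag, attrs):
        self._styles.append(dict(attrs).get("style", ""))

    def handle_endtag(self, tag):
        if self._styles:
            self._styles.pop()

    def handle_data(self, data):
        text = data.strip()
        if text:
            style = self._styles[-1] if self._styles else ""
            self.rows.append((text, style))

extractor = StyleAwareExtractor()
extractor.feed('<p style="font-size:24px;font-weight:bold">Title</p>'
               '<p style="font-size:11px">Body text.</p>')
print(extractor.rows)
# [('Title', 'font-size:24px;font-weight:bold'), ('Body text.', 'font-size:11px')]
```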


Now that we have the dataframe containing all the text segments, we can use the chunker to break the text into smaller pieces, making it more manageable for processing by the LLMs.

# instantiate the Chunker object
# The constructor accepts the dataframe we parsed out as the input
chunker = TextChunker(text_info_df)

# Chunk text
result_chunks_list = chunker.chunk_text()

The critical step here is the chunk_text() function, which accepts the following parameters:

def chunk_text(self, 
               cutoff = 7, 
               auto_adjust_cutoff=False, 
               keep_text_only=True, 
               refine=True, 
               sel_metric='words', 
               lower_bound=100, 
               upper_bound=650
               )

The chunk_text function is designed to segment text into smaller chunks based on various criteria, making it easier for Large Language Models to process the text effectively. Here’s how you can utilize this function:

  • cutoff (int, optional): This parameter sets the threshold for including a row in a chunk. The default value is 7.

  • auto_adjust_cutoff (bool, optional): Enables automatic adjustment of the cutoff value based on the data. It is set to False by default.

  • keep_text_only (bool, optional): If set to True, the function returns only the concatenated text of each chunk, omitting any DataFrame structure. This is the default behavior.

  • refine (bool, optional): Activates a refinement process on the chunks using the selected metric. This is set to True by default.

  • sel_metric (str, optional): Specifies the metric used for refining the chunks, with 'words' as the default option.

  • lower_bound (int, optional): Sets the minimum size of a chunk when refining. The default is set at 100 words.

  • upper_bound (int, optional): Sets the maximum size of a chunk when refining. The default is set at 650 words.

The function returns a list of chunks. These chunks are either simple concatenated text contents or DataFrames, depending on the keep_text_only parameter. This function is essential for preparing large texts in a format that is more manageable for LLMs to process.
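For intuition, here is a minimal sketch of the kind of refinement the lower_bound parameter suggests: merging undersized chunks into their neighbour, assuming word counts as the metric. This is illustrative logic, not the project's implementation:

```python
# Hypothetical refinement pass: any chunk shorter than lower_bound words
# is merged into the chunk that follows it.
def refine_chunks(chunks, lower_bound=100):
    refined = []
    for chunk in chunks:
        if refined and len(refined[-1].split()) < lower_bound:
            # previous chunk is undersized: merge this chunk into it
            refined[-1] = refined[-1] + " " + chunk
        else:
            refined.append(chunk)
    return refined

result = refine_chunks(
    ["short heading", ("word " * 120).strip(), "tail"],
    lower_bound=5,
)
print([len(c.split()) for c in result])
# → [122, 1]  (the 2-word heading was merged into the 120-word body)
```

Note that a trailing chunk can still end up below the bound; the real refinement (and the upper_bound split) is presumably more involved.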

Potential Future Works

  1. Handle external CSS files that define the classes used in the document.
  2. Optimize the logic for refining chunks.
  3. Optimize and generalize the score dictionary used to score each text segment.

Environment

OS : macOS Sonoma 14.5

IDE: Visual Studio Code 

Language : Python       3.9.7 
    - numpy             1.20.3
    - numpydoc          1.1.0
    - pandas            1.5.3
    - regex             2021.8.3
    - beautifulsoup4    4.10.0

Developers

Denny Chen
