Original file line number | Diff line number | Diff line change |
---|---|---|
@@ -0,0 +1,345 @@ | ||
{ | ||
"cells": [ | ||
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"# Summarization\n", | ||
"\n", | ||
"Summarizing and compressing information, whether from a text, a book or freely from previous experience, is something that is inherently useful for many different types of knowledge work.\n", | ||
"Large language models are adept at summarizing due to their sophisticated understanding of language structure, semantics, and context derived from the vast amounts of text they have been trained on.\n", | ||
"\n", | ||
"Let's take a Luminous model and try this out!\n" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"from intelligence_layer.core import CompleteInput, LuminousControlModel, NoOpTracer\n", | ||
"\n", | ||
"\n", | ||
"# first, we define a `LuminousControlModel`; this model holds much more information than a simple string...\n", | ||
"# it knows how to call the API, how to complete, how to tokenize and more...\n", | ||
"model = LuminousControlModel(\"luminous-base-control-20240215\")\n", | ||
"\n", | ||
"summarization_instruction = (\n", | ||
" \"Summarize the main events leading up to the 1969 moon landing.\"\n", | ||
")\n", | ||
"# different models require different prompting formats\n", | ||
"# our model \"knows\" its prompt format and can thus transform the above instruction into the correct format\n", | ||
"summarization_prompt = model.to_instruct_prompt(instruction=summarization_instruction)\n", | ||
"\n", | ||
"# we want to run a completion on our model, let's build the correct input object\n", | ||
"completion_input = CompleteInput(prompt=summarization_prompt)\n", | ||
"\n", | ||
"# we allow the tracing of subresults, but let's ignore this for now -> NoOpTracer()\n", | ||
"completion_output = model.complete(completion_input, NoOpTracer())\n", | ||
"\n", | ||
"print(completion_output.completion)" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"In practive, however, we likely want to summarize some text or document.\n", | ||
"Let's try this out by simply including some document in our prompt." | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"document = \"\"\"Title: The Historic Journey to the 1969 Moon Landing: A Chronicle\n", | ||
"\n", | ||
"Introduction:\n", | ||
"The Apollo 11 mission, culminating in the first human steps on the moon on July 20, 1969, stands as a pinnacle achievement in human history and a landmark in the annals of space exploration. This document presents a detailed overview of the key events and milestones leading up to this historic moon landing.\n", | ||
"\n", | ||
"Background:\n", | ||
"The roots of the 1969 moon landing trace back to the geopolitical and scientific rivalries of the Cold War era, most notably between the United States of America and the Soviet Union. The launch of Sputnik by the Soviet Union in 1957 marked the beginning of the space race, a competition that would define much of the late 20th century's scientific endeavors.\n", | ||
"\n", | ||
"Formation of NASA and the Space Race:\n", | ||
"In response to Sputnik, the United States created the National Aeronautics and Space Administration (NASA) in 1958. NASA's primary goal was to surpass Soviet space achievements, a challenge underscored by President John F. Kennedy's ambitious declaration in 1961 that America would send a man to the moon and return him safely to Earth before the decade's end.\n", | ||
"\n", | ||
"Project Apollo:\n", | ||
"The Apollo program, initiated in 1961, was the United States' answer to the space race. It followed the earlier Mercury and Gemini programs, which developed the necessary technology and skills for space travel. Apollo's objective was lunar landing, and it underwent a series of unmanned and manned flights to test and perfect the required technology.\n", | ||
"\n", | ||
"Apollo 1 Tragedy:\n", | ||
"The journey to the moon was not without its setbacks. In 1967, a tragic fire during a test for Apollo 1 claimed the lives of astronauts Virgil \"Gus\" Grissom, Edward H. White, and Roger B. Chaffee. This incident led to major design and engineering overhauls in the Apollo program.\n", | ||
"\n", | ||
"Technological and Engineering Feats:\n", | ||
"The success of the Apollo 11 mission relied on the Saturn V rocket, the most powerful rocket ever built. The Lunar Module, known as the Eagle, was another critical component, designed to land on the moon's surface and return to the orbiting Command Module.\n", | ||
"\n", | ||
"The Apollo 11 Mission:\n", | ||
"On July 16, 1969, Apollo 11 was launched from Kennedy Space Center. Astronauts Neil Armstrong, Buzz Aldrin, and Michael Collins embarked on a historic journey. Four days later, on July 20, Armstrong and Aldrin descended to the lunar surface in the Eagle, while Collins remained in lunar orbit.\n", | ||
"\n", | ||
"\"One Small Step for Man\":\n", | ||
"The world watched in awe as Neil Armstrong became the first human to set foot on the moon, uttering the iconic words, \"That's one small step for man, one giant leap for mankind.\" Aldrin joined him shortly after, and together they spent approximately two hours walking on the moon, conducting experiments, and collecting samples.\n", | ||
"\n", | ||
"Conclusion:\n", | ||
"The 1969 moon landing was a moment of unparalleled human achievement. It was not just a victory for the United States, but a triumph for mankind, symbolizing the boundless possibilities of human ingenuity and perseverance. The Apollo 11 mission set a precedent for all future space exploration, leaving an indelible mark on history.\"\"\"\n", | ||
"\n", | ||
"# the instruction from before should still work fine, but let's now include the document\n", | ||
"summarization_prompt = model.to_instruct_prompt(\n", | ||
" instruction=summarization_instruction, input=document\n", | ||
")\n", | ||
"# we set maximum tokens to make sure that we generate the full, unabridged, summary\n", | ||
"completion_input = CompleteInput(prompt=summarization_prompt, maximum_tokens=256)\n", | ||
"completion_output = model.complete(completion_input, NoOpTracer())\n", | ||
"\n", | ||
"print(completion_output.completion)" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"Cool, our summary now uses information from the document!\n", | ||
"\n", | ||
"At the core of the Intelligence Layer is the concept of a `Task`; a task could be any process involving an LLM, for example our summary use-case here.\n", | ||
"By using a summarization-task, instead of the above method, we can isolate the responsibility for summarizing in said task.\n", | ||
"We can then simply export this task, evaluate it or deploy it into productive settings.\n", | ||
"\n", | ||
"The IL has pre-buiilt `Task`s for summarizing texts. Let's try this out.\n" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"from intelligence_layer.core import Chunk, Language\n", | ||
"from intelligence_layer.use_cases import (\n", | ||
" SteerableSingleChunkSummarize,\n", | ||
" SingleChunkSummarizeInput,\n", | ||
")\n", | ||
"\n", | ||
"\n", | ||
"# instantiating a `SteerableSingleChunkSummarize` with our model from before\n", | ||
"single_chunk_summarize = SteerableSingleChunkSummarize(model)\n", | ||
"\n", | ||
"# this task needs a matching input;\n", | ||
"single_chunk_summarize_input = SingleChunkSummarizeInput(\n", | ||
" chunk=Chunk(\n", | ||
" document\n", | ||
" ), # a `Chunk` is any text that fits into the model's context window\n", | ||
")\n", | ||
"single_chunk_summarize_output = single_chunk_summarize.run(\n", | ||
" single_chunk_summarize_input, NoOpTracer()\n", | ||
")\n", | ||
"\n", | ||
"print(single_chunk_summarize_output.summary)" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"Cool!\n", | ||
"\n", | ||
"Notice how the output is quite similar but we did not have to provide the instruction. It is in fact embedded in our task.\n", | ||
"\n", | ||
"Let's adjust this instruction to steer the format of our summary. The task is called `SteerableSingleChunkSummarize` after all.\n" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# to do this, we can instantiate a new task instance, this time overwriting the `instruction_configs`\n", | ||
"bullet_point_summarize = SteerableSingleChunkSummarize(\n", | ||
" model, instruction_configs={Language(\"en\"): \"Summarize the text in bullet points.\"}\n", | ||
")\n", | ||
"# we can run the same input as before, because the interface (input/output) of our `Task` did not change\n", | ||
"bullet_point_summarize_output = bullet_point_summarize.run(\n", | ||
" single_chunk_summarize_input, NoOpTracer()\n", | ||
")\n", | ||
"\n", | ||
"print(bullet_point_summarize_output.summary)" | ||
] | ||
}, | ||
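{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"As promised, let's sketch what a custom `Task` could look like. The following cell is illustrative and rests on a few assumptions: that `Task` is generic over its input and output types, that subclasses implement a `do_run` hook receiving a `TaskSpan`, and that a `TaskSpan` can be passed to `model.complete` in place of a tracer. `ExtractKeywords` and its input/output types are hypothetical names; check the API of your installed Intelligence Layer version.\n" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"from pydantic import BaseModel\n", | ||
"\n", | ||
"from intelligence_layer.core import Task, TaskSpan\n", | ||
"\n", | ||
"\n", | ||
"class KeywordInput(BaseModel):\n", | ||
"    text: str\n", | ||
"\n", | ||
"\n", | ||
"class KeywordOutput(BaseModel):\n", | ||
"    keywords: list[str]\n", | ||
"\n", | ||
"\n", | ||
"# a hypothetical task that extracts keywords instead of writing a summary\n", | ||
"class ExtractKeywords(Task[KeywordInput, KeywordOutput]):\n", | ||
"    def __init__(self, model: LuminousControlModel) -> None:\n", | ||
"        self._model = model\n", | ||
"\n", | ||
"    # assumption: `do_run` is the hook a subclass implements to do the actual work\n", | ||
"    def do_run(self, input: KeywordInput, task_span: TaskSpan) -> KeywordOutput:\n", | ||
"        prompt = self._model.to_instruct_prompt(\n", | ||
"            instruction=\"Extract the five most important keywords, separated by commas.\",\n", | ||
"            input=input.text,\n", | ||
"        )\n", | ||
"        completion = self._model.complete(CompleteInput(prompt=prompt), task_span)\n", | ||
"        keywords = [k.strip() for k in completion.completion.split(\",\") if k.strip()]\n", | ||
"        return KeywordOutput(keywords=keywords)\n", | ||
"\n", | ||
"\n", | ||
"# a custom task is run like any pre-built one\n", | ||
"print(ExtractKeywords(model).run(KeywordInput(text=document), NoOpTracer()).keywords)" | ||
] | ||
}, | ||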
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"The beauty of prompting!\n", | ||
"\n", | ||
"We successfully summarized a short document.\n", | ||
"Most often, however, we will encounter longer texts; they may not fit into the model's context window in one go.\n", | ||
"\n", | ||
"Luckily, we provide means to handle this. You could, for example, instantiate a `SteerableLongContextSummarize`-task.\n" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"from intelligence_layer.use_cases import SteerableLongContextSummarize\n", | ||
"\n", | ||
"# again, we can simply insert our model from before\n", | ||
"long_context_summarize = SteerableLongContextSummarize(model)" | ||
] | ||
}, | ||
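{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"What does \"chunking\" mean here? Under the hood, the long-context task splits the text into chunks that each fit into the model's context window and summarizes them one by one. The library brings its own token-aware chunking; purely for illustration, here is a naive word-based toy chunker (not the Intelligence Layer's implementation):\n" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# purely illustrative: split a text into pieces of at most `max_words` words;\n", | ||
"# the real task chunks by model tokens rather than words\n", | ||
"def naive_chunks(text: str, max_words: int = 200) -> list[str]:\n", | ||
"    words = text.split()\n", | ||
"    return [\" \".join(words[i : i + max_words]) for i in range(0, len(words), max_words)]\n", | ||
"\n", | ||
"\n", | ||
"# each resulting chunk is small enough to be summarized on its own\n", | ||
"print([len(chunk.split()) for chunk in naive_chunks(\"lorem ipsum \" * 500)])" | ||
] | ||
}, | ||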
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"Ok, let's find a longer text to summarize.\n", | ||
"\n", | ||
"Lately, I've been into wooden scyscrapers, so let's hit up Wikipedia." | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"from intelligence_layer.use_cases import LongContextSummarizeInput\n", | ||
"\n", | ||
"long_document = \"\"\"Plyscraper\n", | ||
"\n", | ||
"A plyscraper, or timber tower is a skyscraper made (at least partly) of wood. They may alternatively be known as mass timber buildings.\n", | ||
"\n", | ||
"Materials\n", | ||
"There are four main types of engineered wood used for mass timber including cross-laminated timber (CLT), glued laminated timber (glulam), laminated strand lumber (LSL), and laminated veneer lumber (LVL). Of these three wood systems, CLT is the most commonly used.[1]\n", | ||
"\n", | ||
"When other materials, such as concrete or steel, are used in conjunction with engineered wood, these plyscrapers are called “hybrids”. For hybrid buildings, there are some approaches to how different materials can be used including the “Cree’s System” which was developed by Cree Buildings, and the “Finding the Forest Through the Trees\" (FFTT) construction model” developed by Michael Green. Cree's System combines the use of concrete and wood mainly in its hybrid flooring systems. In some instances, concrete can also be used as a core or for the foundation of a building because wood is too light. The FFTT construction model incorporates a wooden core and wooden floor slabs mixed with steel beams to provide ductility to the building.[1][2]\n", | ||
"\n", | ||
"Advantages and disadvantages\n", | ||
"When considering which engineered wood system to use for a plyscraper the individual benefits of each must be compared. CLT has a high fire resistance due to the fire-resistant adhesive used and the surface char layer that forms when it is exposed to fire. The surface char layer protects the interior of the wood from further damage. Glulam is typically used for columns and beams as an alternative to commonly used steel and concrete.[1][3] This is because it has a greater tensile strength-to-weight ratio than steel and can resist compression better than concrete. LVL also has the same strength as concrete.[4] As plyscrapers are made from wood, they sequester carbon during construction and are renewable if the forests that they are sourced from are sustainably managed.[1][3]\n", | ||
"\n", | ||
"Despite these benefits, there are bound to be some drawbacks when using the various engineered woods. Steel overall has a greater strength and durability for the same sized profile when compared to its wood counterpart.[5] Thus, a building made with steel beams would require smaller beams than the same building constructed with wooden beams. Walls and columns in the interior spaces of these plyscrapers can get so thick that the size of said interior space gets heavily reduced. This issue however, does not occur within shorter buildings.\n", | ||
"\n", | ||
"Tallest plyscrapers\n", | ||
"\n", | ||
"Mjøstårnet in Brumunddal, Norway\n", | ||
"Over the years, many plyscrapers have been constructed, each becoming taller than the last. In 2017, the tallest habitable plyscraper was Brock Commons Tallwood House, on the campus of the University of British Columbia near Vancouver, Canada and measured 53 meters tall (174 feet).[6] It was overtaken in 2019, when the Mjøstårnet was built in Brumunddal, Norway. The Mjøstårnet measured 85.4 meters (280 ft).[7] However, in 2022, the title of the tallest habitable plyscraper shifted once again when the Ascent MKE Building, located in Milwaukee, Wisconsin was built measuring 86.56 meters (284 ft).[8]\n", | ||
"\n", | ||
"Future\n", | ||
"The use of mass timber has been popular in Europe for a few years but has started to gain traction in the United States as knowledge of engineered woods has developed and experience with them as construction materials has increased.[9] Plyscrapers are still in their infancy stage, but as we learn more about mass timber and become more proficient using it, many mass-timber buildings are currently being proposed. In fact, they are becoming increasingly popular among contractors and builders due to the ease of construction, as putting together a plyscraper is faster and quieter compared to its steel and concrete counterparts.[8]\n", | ||
"\n", | ||
"Several proposals to create more plyscrapers have been made all over the world. For example, in Tokyo, a plyscraper that is 350 meters (1,150 ft) tall called the W350 Project has been proposed with plans to be complete in 2041.[10] The W350 Project plans to be a hybrid plyscraper using only 10% steel and the remainder engineered wood. In London, research and planning are underway for the Oakwood Tower which is estimated to be 300 meters (980 ft) tall.[11] This would be an 80-story building integrated into the London skyline. In Chicago, there is a concept proposal for a 228-meter-tall (748 ft) building called the River Beech Tower.[2] The team designing this building plans to implement a system called the exterior diagrid system which would take advantage of the natural axial strength of timber allowing for efficient load distribution throughout the whole building.\"\"\"\n", | ||
"\n", | ||
"long_context_summarize_input = LongContextSummarizeInput(\n", | ||
" text=long_document,\n", | ||
")\n", | ||
"long_context_summarize_output = long_context_summarize.run(\n", | ||
" long_context_summarize_input, NoOpTracer()\n", | ||
")\n", | ||
"\n", | ||
"# to be able to handle longer text, our `LongContextSummarize`-task chunks the text into smaller pieces under the hood\n", | ||
"# by default, it returns multiple summaries, one per chunk\n", | ||
"print(\n", | ||
" \"\\n\".join(\n", | ||
" partial_summary.summary\n", | ||
" for partial_summary in long_context_summarize_output.partial_summaries\n", | ||
" )\n", | ||
")" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"Let's try to write a single summary!\n", | ||
"\n", | ||
"We may do so by using yet another task, the `RecursiveSummarize`-task. It will compress a text into smaller summaries until only a single, coherent, summary is left. This approach is not bound by the model's context window.\n" | ||
] | ||
}, | ||
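{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"Before running it, here is the recursive idea in miniature. This is an illustrative sketch only, reusing the toy chunker from above and a fake \"summarizer\" that just truncates; it is not the library's implementation:\n" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# stand-in for a model call: \"summarize\" by keeping the first 30 words\n", | ||
"def toy_summarize(text: str) -> str:\n", | ||
"    return \" \".join(text.split()[:30])\n", | ||
"\n", | ||
"\n", | ||
"# summarize chunk-wise, join the partial summaries, and repeat until the\n", | ||
"# remaining text fits into a single \"context window\" of `max_words` words\n", | ||
"def recursive_summarize_sketch(text: str, max_words: int = 200) -> str:\n", | ||
"    while len(text.split()) > max_words:\n", | ||
"        partials = [toy_summarize(chunk) for chunk in naive_chunks(text, max_words)]\n", | ||
"        text = \" \".join(partials)\n", | ||
"    return toy_summarize(text)\n", | ||
"\n", | ||
"\n", | ||
"print(recursive_summarize_sketch(long_document))" | ||
] | ||
}, | ||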
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"from intelligence_layer.use_cases import RecursiveSummarize\n", | ||
"\n", | ||
"\n", | ||
"# notice how tasks are composable\n", | ||
"# we instantiate our recursive summarization strategy with our previous long context strategy\n", | ||
"recursive_summarize = RecursiveSummarize(long_context_summarize)\n", | ||
"\n", | ||
"# running the same input as before\n", | ||
"recursive_summarize_output = recursive_summarize.run(\n", | ||
" long_context_summarize_input, NoOpTracer()\n", | ||
")\n", | ||
"\n", | ||
"print(recursive_summarize_output.summary)" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"Cool, this new summary is a combination of the two subsummaries, combining information from different parts of the input text.\n", | ||
"\n", | ||
"Finally, let's have a look at the Intelligence Layer tracing. Any task may be given a tracer when run. This will automatically record what is happening under the hood.\n" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"from intelligence_layer.core import InMemoryTracer\n", | ||
"from IPython.display import Pretty\n", | ||
"\n", | ||
"\n", | ||
"# for our experiment, we use an `InMemoryTracer`\n", | ||
"# you may also use other tracers, for example the `FileTracer` if you want to persist the traces\n", | ||
"tracer = InMemoryTracer()\n", | ||
"traced_recursive_summarize_output = recursive_summarize.run(\n", | ||
" long_context_summarize_input, tracer\n", | ||
")\n", | ||
"\n", | ||
"# printing the traces\n", | ||
"tracer" | ||
] | ||
}, | ||
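{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"As the comment above mentions, traces can also be persisted. A sketch, assuming the `FileTracer` from `intelligence_layer.core` accepts a path to a log file (check the exact signature in your installed version):\n" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"from pathlib import Path\n", | ||
"\n", | ||
"from intelligence_layer.core import FileTracer\n", | ||
"\n", | ||
"\n", | ||
"# assumption: `FileTracer(path)` appends trace entries to the given file\n", | ||
"file_tracer = FileTracer(Path(\"summarize_trace.log\"))\n", | ||
"recursive_summarize.run(long_context_summarize_input, file_tracer)" | ||
] | ||
}, | ||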
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"Don't be afraid, I promise this trace makes sense!\n", | ||
"\n", | ||
"Notice how each block is labeled with the task that was run as well as the respective inputs and outputs.\n", | ||
"We can now can a better insight into which tasks are doing what. If we find a mistake, we can double down to figure out what went wrong.\n", | ||
"\n", | ||
"Cool! You are now familiar with the basics concepts of a `Model`, a `Task` and summarization using the Intelligence Layer.\n" | ||
] | ||
} | ||
], | ||
"metadata": { | ||
"kernelspec": { | ||
"display_name": "Python 3 (ipykernel)", | ||
"language": "python", | ||
"name": "python3" | ||
}, | ||
"language_info": { | ||
"codemirror_mode": { | ||
"name": "ipython", | ||
"version": 3 | ||
}, | ||
"file_extension": ".py", | ||
"mimetype": "text/x-python", | ||
"name": "python", | ||
"nbconvert_exporter": "python", | ||
"pygments_lexer": "ipython3", | ||
"version": "3.11.0" | ||
} | ||
}, | ||
"nbformat": 4, | ||
"nbformat_minor": 4 | ||
} |