From 84f9bae1294bfd6b9da62e2ab0aa734ae12255e9 Mon Sep 17 00:00:00 2001
From: Nayeon Kim <76769919+n-y-kim@users.noreply.github.com>
Date: Tue, 20 Aug 2024 03:21:40 +0900
Subject: [PATCH] Update 0-architecture.md (#961)

---
 docsite/posts/index/0-architecture.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docsite/posts/index/0-architecture.md b/docsite/posts/index/0-architecture.md
index c00e6e8802..5b9a691fa7 100644
--- a/docsite/posts/index/0-architecture.md
+++ b/docsite/posts/index/0-architecture.md
@@ -65,7 +65,7 @@ Our underlying dataframe technology may change over time, but our primary goal i
 
 ### LLM Caching
 
-The GraphRAG library was designed with LLM interactions in mind, and a common setback when working with LLM APIs is various errors errors due to network latency, throttling, etc..
+The GraphRAG library was designed with LLM interactions in mind, and a common setback when working with LLM APIs is various errors due to network latency, throttling, etc..
 Because of these potential error cases, we've added a cache layer around LLM interactions.
 When completion requests are made using the same input set (prompt and tuning parameters), we return a cached result if one exists.
 This allows our indexer to be more resilient to network issues, to act idempotently, and to provide a more efficient end-user experience.
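
The doc text touched by this patch describes caching completion requests keyed on the full input set (prompt plus tuning parameters). Below is a minimal sketch of that idea, not GraphRAG's actual implementation: `llm_cache`, `cache_key`, and `cached_completion` are hypothetical names, and the real library persists its cache rather than holding it in memory.

```python
import hashlib
import json
from typing import Callable

# Hypothetical in-memory cache; GraphRAG's real cache layer is more elaborate.
llm_cache: dict[str, str] = {}

def cache_key(prompt: str, params: dict) -> str:
    """Derive a stable key from the prompt and the tuning parameters."""
    payload = json.dumps({"prompt": prompt, "params": params}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def cached_completion(prompt: str, params: dict, call_llm: Callable[[str, dict], str]) -> str:
    """Return a cached completion if one exists; otherwise call the LLM once."""
    key = cache_key(prompt, params)
    if key not in llm_cache:
        # Only hit the API on a cache miss; retries of the same input set
        # become idempotent lookups instead of repeated network calls.
        llm_cache[key] = call_llm(prompt, params)
    return llm_cache[key]
```

Keying on both the prompt and the tuning parameters matters: the same prompt with a different temperature or model is a different request and must not return a stale cached result.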