diff --git a/tutorials/how-to-implement-rag-generativeapis/index.mdx b/tutorials/how-to-implement-rag-generativeapis/index.mdx
index 94753e525f..20e190fbc9 100644
--- a/tutorials/how-to-implement-rag-generativeapis/index.mdx
+++ b/tutorials/how-to-implement-rag-generativeapis/index.mdx
@@ -156,7 +156,7 @@ Then, we will embed them as vectors and store these vectors in your PostgreSQL d
         print('Vectors successfully added for document',file.metadata['source'])
     ```

-    The chunk size of 500 characters is chosen to fit within the context size limit of the embedding model used in this tutorial, but could be raised up to 4096 characters for `bge-multilingual-gemma2` model (or slightly more as context size is counted in tokens). Keeping chunks small also optimize performance during inference.
+    The chunk size of 500 characters is chosen to fit within the context size limit of the embedding model used in this tutorial, but could be raised to up to 4096 characters for the `bge-multilingual-gemma2` model (or slightly more, as the context size is counted in tokens). Keeping chunks small also optimizes performance during inference.

 9. You can now run you vector embedding script with:
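
The chunk-size trade-off discussed in the hunk above can be sketched without the tutorial's dependencies. This is a minimal illustration, not the tutorial's actual splitter: the function name, the 50-character overlap, and the placeholder text are assumptions chosen only to show how a 500-character limit partitions a document before embedding.

```python
def split_into_chunks(text, chunk_size=500, overlap=50):
    """Split text into fixed-size character chunks with a small overlap,
    mirroring the 500-character setting discussed in the tutorial.
    (Illustrative sketch; the tutorial itself uses a library splitter.)"""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    start = 0
    step = chunk_size - overlap
    while start < len(text):
        # Each chunk stays at or under chunk_size characters, so it fits
        # within the embedding model's context limit.
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks

# ~2000 characters of placeholder text standing in for a real document.
doc = "word " * 400
chunks = split_into_chunks(doc, chunk_size=500, overlap=50)
print(len(chunks), max(len(c) for c in chunks))  # prints: 5 500
```

Note that model context limits are expressed in tokens, not characters; since a token usually covers several characters, a 500-character chunk sits comfortably inside a 4096-token window, which is why the tutorial can afford larger chunks with `bge-multilingual-gemma2`.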