[Core][Retrieval] Implement NDCG metric (run-llama#14100)
* Implement NDCG metric

* Update notebook
DarkLight1337 authored Jun 14, 2024
1 parent f175139 commit 3795036
Showing 3 changed files with 216 additions and 36 deletions.
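
For reference, the metric added in this commit uses binary relevance (rel_i = 1 if the i-th retrieved document is in `expected_ids`, else 0) and, in the default "linear" mode of the code in `metrics.py` below, computes

$$\mathrm{DCG@}k=\sum_{i=1}^{k}\frac{rel_i}{\log_2(i+1)},\qquad \mathrm{NDCG@}k=\frac{\mathrm{DCG@}k}{\sum_{i=1}^{k}\frac{1}{\log_2(i+1)}}$$

where k is the number of retrieved results and the denominator (IDCG) treats all k positions as relevant, so the score can stay below 1 even when every expected document is retrieved.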
89 changes: 57 additions & 32 deletions docs/docs/examples/evaluation/retrieval/retriever_eval.ipynb
@@ -18,7 +18,7 @@
"\n",
"This notebook uses our `RetrieverEvaluator` to evaluate the quality of any Retriever module defined in LlamaIndex.\n",
"\n",
"We specify a set of different evaluation metrics: this includes hit-rate and MRR. For any given question, these will compare the quality of retrieved results from the ground-truth context.\n",
"We specify a set of different evaluation metrics: this includes hit-rate, MRR, and NDCG. For any given question, these will compare the quality of retrieved results from the ground-truth context.\n",
"\n",
"To ease the burden of creating the eval dataset in the first place, we can rely on synthetic data generation."
]
@@ -40,13 +40,14 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install llama-index-llms-openai"
"%pip install llama-index-llms-openai\n",
"%pip install llama-index-readers-file"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bb6fecf4-7215-4ae9-b02b-3cb7c6000f2c",
"id": "285cfab2",
"metadata": {},
"outputs": [],
"source": [
@@ -62,7 +63,6 @@
"metadata": {},
"outputs": [],
"source": [
"from llama_index.core.evaluation import generate_question_context_pairs\n",
"from llama_index.core import VectorStoreIndex, SimpleDirectoryReader\n",
"from llama_index.core.node_parser import SentenceSplitter\n",
"from llama_index.llms.openai import OpenAI"
@@ -82,7 +82,25 @@
"execution_count": null,
"id": "589c112d",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"--2024-06-12 23:57:02-- https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt\n",
"Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.110.133, 185.199.109.133, ...\n",
"Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.\n",
"HTTP request sent, awaiting response... 200 OK\n",
"Length: 75042 (73K) [text/plain]\n",
"Saving to: ‘data/paul_graham/paul_graham_essay.txt’\n",
"\n",
"data/paul_graham/pa 100%[===================>] 73.28K --.-KB/s in 0.08s \n",
"\n",
"2024-06-12 23:57:03 (864 KB/s) - ‘data/paul_graham/paul_graham_essay.txt’ saved [75042/75042]\n",
"\n"
]
}
],
"source": [
"!mkdir -p 'data/paul_graham/'\n",
"!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'"
@@ -171,15 +189,11 @@
{
"data": {
"text/markdown": [
"**Node ID:** node_0<br>**Similarity:** 0.8181379514114543<br>**Text:** What I Worked On\n",
"\n",
"February 2021\n",
"**Node ID:** node_38<br>**Similarity:** 0.814377909267451<br>**Text:** I also worked on spam filters, and did some more painting. I used to have dinners for a group of friends every thursday night, which taught me how to cook for groups. And I bought another building in Cambridge, a former candy factory (and later, twas said, porn studio), to use as an office.\n",
"\n",
"Before college the two main things I worked on, outside of school, were writing and programming. I didn't write essays. I wrote what beginning writers were supposed to write then, and probably still are: short stories. My stories were awful. They had hardly any plot, just characters with strong feelings, which I imagined made them deep.\n",
"\n",
"The first programs I tried writing were on the IBM 1401 that our school district used for what was then called \"data processing.\" This was in 9th grade, so I was 13 or 14. The school district's 1401 happened to be in the basement of our junior high school, and my friend Rich Draves and I got permission to use it. It was like a mini Bond villain's lair down there, with all these alien-looking machines — CPU, disk drives, printer, card reader — sitting up on a raised floor under bright fluorescent lights.\n",
"One night in October 2003 there was a big party at my house. It was a clever idea of my friend Maria Daniels, who was one of the thursday diners. Three separate hosts would all invite their friends to one party. So for every guest, two thirds of the other guests would be people they didn't know but would probably like. One of the guests was someone I didn't know but would turn out to like a lot: a woman called Jessica Livingston. A couple days later I asked her out.\n",
"\n",
"The language we used was an early version of Fortran. You had to type programs on punch cards, then stack them in ...<br>"
"Jessica was in charge of marketing at a Boston investment bank. This bank thought it understood startups, but over the next year, as she met friends of mine from the startup world, she was surprised how different reality was. And ho...<br>"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
@@ -191,13 +205,15 @@
{
"data": {
"text/markdown": [
"**Node ID:** node_52<br>**Similarity:** 0.8143530600618721<br>**Text:** It felt like I was doing life right. I remember that because I was slightly dismayed at how novel it felt. The good news is that I had more moments like this over the next few years.\n",
"**Node ID:** node_0<br>**Similarity:** 0.8122448657654567<br>**Text:** What I Worked On\n",
"\n",
"In the summer of 2016 we moved to England. We wanted our kids to see what it was like living in another country, and since I was a British citizen by birth, that seemed the obvious choice. We only meant to stay for a year, but we liked it so much that we still live there. So most of Bel was written in England.\n",
"February 2021\n",
"\n",
"Before college the two main things I worked on, outside of school, were writing and programming. I didn't write essays. I wrote what beginning writers were supposed to write then, and probably still are: short stories. My stories were awful. They had hardly any plot, just characters with strong feelings, which I imagined made them deep.\n",
"\n",
"In the fall of 2019, Bel was finally finished. Like McCarthy's original Lisp, it's a spec rather than an implementation, although like McCarthy's Lisp it's a spec expressed as code.\n",
"The first programs I tried writing were on the IBM 1401 that our school district used for what was then called \"data processing.\" This was in 9th grade, so I was 13 or 14. The school district's 1401 happened to be in the basement of our junior high school, and my friend Rich Draves and I got permission to use it. It was like a mini Bond villain's lair down there, with all these alien-looking machines — CPU, disk drives, printer, card reader — sitting up on a raised floor under bright fluorescent lights.\n",
"\n",
"Now that I could write essays again, I wrote a bunch about topics I'd had stacked up. I kept writing essays through 2020, but I also started to think about other things I could work on. How should I choose what to do? Well, how had I chosen what to work on in the past? I wrote an essay for myself to answer that ques...<br>"
"The language we used was an early version of Fortran. You had to type programs on punch cards, then stack them in ...<br>"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
@@ -246,7 +262,15 @@
"execution_count": null,
"id": "2d29a159-9a4f-4d44-9c0d-1cd683f8bb9b",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"100%|██████████| 61/61 [04:59<00:00, 4.91s/it]\n"
]
}
],
"source": [
"qa_dataset = generate_question_context_pairs(\n",
" nodes, llm=llm, num_questions_per_chunk=2\n",
@@ -263,7 +287,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\"Describe the transition from using the IBM 1401 to microcomputers, as mentioned in the text. What were the key differences in terms of user interaction and programming capabilities?\"\n"
"\"Describe the transition from using the IBM 1401 to microcomputers, as mentioned in the text. How did this change impact the way programs were written and executed?\"\n"
]
}
],
@@ -319,7 +343,7 @@
"metadata": {},
"outputs": [],
"source": [
"include_cohere_rerank = True\n",
"include_cohere_rerank = False\n",
"\n",
"if include_cohere_rerank:\n",
" !pip install cohere -q"
@@ -334,7 +358,7 @@
"source": [
"from llama_index.core.evaluation import RetrieverEvaluator\n",
"\n",
"metrics = [\"mrr\", \"hit_rate\"]\n",
"metrics = [\"mrr\", \"hit_rate\", \"ndcg\"]\n",
"\n",
"if include_cohere_rerank:\n",
" metrics.append(\n",
@@ -356,8 +380,8 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Query: In the context provided, the author describes his early experiences with programming on an IBM 1401. Based on his description, what were some of the limitations and challenges he faced while trying to write programs on this machine?\n",
"Metrics: {'mrr': 1.0, 'hit_rate': 1.0, 'cohere_rerank_relevancy': 0.99620515}\n",
"Query: In the context, the author mentions his early experiences with programming on an IBM 1401. Describe the process he used to write and run a program on this machine, and explain why he found it challenging to create meaningful programs on this system.\n",
"Metrics: {'mrr': 1.0, 'hit_rate': 1.0, 'ndcg': 0.6131471927654584}\n",
"\n"
]
}
@@ -402,9 +426,10 @@
"\n",
" full_df = pd.DataFrame(metric_dicts)\n",
"\n",
" hit_rate = full_df[\"hit_rate\"].mean()\n",
" mrr = full_df[\"mrr\"].mean()\n",
" columns = {\"retrievers\": [name], \"hit_rate\": [hit_rate], \"mrr\": [mrr]}\n",
" columns = {\n",
" \"retrievers\": [name],\n",
" **{k: [full_df[k].mean()] for k in metrics},\n",
" }\n",
"\n",
" if include_cohere_rerank:\n",
" crr_relevancy = full_df[\"cohere_rerank_relevancy\"].mean()\n",
@@ -443,26 +468,26 @@
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>retrievers</th>\n",
" <th>hit_rate</th>\n",
" <th>mrr</th>\n",
" <th>cohere_rerank_relevancy</th>\n",
" <th>hit_rate</th>\n",
" <th>ndcg</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>top-2 eval</td>\n",
" <td>0.801724</td>\n",
" <td>0.685345</td>\n",
" <td>0.946009</td>\n",
" <td>0.643443</td>\n",
" <td>0.745902</td>\n",
" <td>0.410976</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" retrievers hit_rate mrr cohere_rerank_relevancy\n",
"0 top-2 eval 0.801724 0.685345 0.946009"
" retrievers mrr hit_rate ndcg\n",
"0 top-2 eval 0.643443 0.745902 0.410976"
]
},
"execution_count": null,
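
The diff above truncates the notebook cells that build the retriever and run the evaluation, so the sketch below is only a rough, hedged reconstruction of the evaluation step with the new metric: `retriever` and `qa_dataset` come from the truncated cells, and the exact arguments may differ from the notebook.

```python
# Sketch of the notebook's evaluation step with the new "ndcg" metric
# (assumes `retriever` and `qa_dataset` were created in the truncated cells).
from llama_index.core.evaluation import RetrieverEvaluator

metrics = ["mrr", "hit_rate", "ndcg"]

retriever_evaluator = RetrieverEvaluator.from_metric_names(
    metrics, retriever=retriever
)

# Notebook-style top-level await; evaluates every (query, expected ids) pair.
eval_results = await retriever_evaluator.aevaluate_dataset(qa_dataset)
```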
94 changes: 91 additions & 3 deletions llama-index-core/llama_index/core/evaluation/retrieval/metrics.py
@@ -1,3 +1,4 @@
import math
import os
from typing import Any, Callable, ClassVar, Dict, List, Literal, Optional, Type

@@ -7,6 +8,7 @@
BaseRetrievalMetric,
RetrievalMetricResult,
)
from typing_extensions import assert_never

_AGG_FUNC: Dict[str, Callable] = {"mean": np.mean, "median": np.median, "max": np.max}

@@ -18,8 +20,8 @@ class HitRate(BaseRetrievalMetric):
- The more granular method checks for all potential matches between retrieved docs and expected docs.
Attributes:
use_granular_hit_rate (bool): Determines whether to use the granular method for calculation.
metric_name (str): The name of the metric.
use_granular_hit_rate (bool): Determines whether to use the granular method for calculation.
"""

metric_name: ClassVar[str] = "hit_rate"
@@ -77,11 +79,11 @@ class MRR(BaseRetrievalMetric):
- The more granular method sums the reciprocal ranks of all relevant retrieved documents and divides by the count of relevant documents.
Attributes:
use_granular_mrr (bool): Determines whether to use the granular method for calculation.
metric_name (str): The name of the metric.
use_granular_mrr (bool): Determines whether to use the granular method for calculation.
"""

metric_name: str = "mrr"
metric_name: ClassVar[str] = "mrr"
use_granular_mrr: bool = False

def compute(
@@ -140,6 +142,91 @@ def compute(
return RetrievalMetricResult(score=mrr_score)


DiscountedGainMode = Literal["linear", "exponential"]


def discounted_gain(*, rel: float, i: int, mode: DiscountedGainMode) -> float:
# Avoid unnecessary calculations. Note that `False == 0` and `True == 1`.
if rel == 0:
return 0
if rel == 1:
return 1 / math.log2(i + 1)

if mode == "linear":
return rel / math.log2(i + 1)
elif mode == "exponential":
return (2**rel - 1) / math.log2(i + 1)
else:
assert_never(mode)


class NDCG(BaseRetrievalMetric):
"""NDCG (Normalized Discounted Cumulative Gain) metric.
The position `p` is taken as the size of the query results (which is usually
`top_k` of the retriever).
Currently only supports binary relevance
(``rel=1`` if document is in ``expected_ids``, otherwise ``rel=0``)
since we assume that ``expected_ids`` is unordered.
Attributes:
metric_name (str): The name of the metric.
mode (DiscountedGainMode): Determines the formula for each item in the summation.
"""

metric_name: ClassVar[str] = "ndcg"
mode: DiscountedGainMode = "linear"

def compute(
self,
query: Optional[str] = None,
expected_ids: Optional[List[str]] = None,
retrieved_ids: Optional[List[str]] = None,
expected_texts: Optional[List[str]] = None,
retrieved_texts: Optional[List[str]] = None,
) -> RetrievalMetricResult:
"""Compute NDCG based on the provided inputs and selected method.
Parameters:
query (Optional[str]): The query string (not used in the current implementation).
expected_ids (Optional[List[str]]): Expected document IDs, unordered by relevance.
retrieved_ids (Optional[List[str]]): Retrieved document IDs, ordered by relevance from highest to lowest.
expected_texts (Optional[List[str]]): Expected texts (not used in the current implementation).
retrieved_texts (Optional[List[str]]): Retrieved texts (not used in the current implementation).
Raises:
ValueError: If the necessary IDs are not provided.
Returns:
RetrievalMetricResult: The result with the computed NDCG score.
"""
# Checking for the required arguments
if (
retrieved_ids is None
or expected_ids is None
or not retrieved_ids
or not expected_ids
):
raise ValueError("Retrieved ids and expected ids must be provided")

mode = self.mode
expected_set = set(expected_ids)

dcg = sum(
discounted_gain(rel=docid in expected_set, i=i, mode=mode)
for i, docid in enumerate(retrieved_ids, start=1)
)
idcg = sum(
discounted_gain(rel=True, i=i, mode=mode)
for i in range(1, len(retrieved_ids) + 1)
)

ndcg_score = dcg / idcg

return RetrievalMetricResult(score=ndcg_score)


class CohereRerankRelevancyMetric(BaseRetrievalMetric):
"""Cohere rerank relevancy metric."""

@@ -209,6 +296,7 @@ def compute(
METRIC_REGISTRY: Dict[str, Type[BaseRetrievalMetric]] = {
"hit_rate": HitRate,
"mrr": MRR,
"ndcg": NDCG,
"cohere_rerank_relevancy": CohereRerankRelevancyMetric,
}

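
For a self-contained view of what the new metric computes, here is a minimal standalone sketch of the same calculation (binary relevance, default "linear" mode), independent of the LlamaIndex classes; the helper name and node IDs are illustrative only. For the sample row printed in the notebook above (one relevant document retrieved at rank 1 with top_k = 2), DCG = 1/log2(2) = 1 and IDCG = 1/log2(2) + 1/log2(3) ≈ 1.631, giving NDCG ≈ 0.613, which matches the 0.6131... value in the output.

```python
import math


def ndcg_binary_linear(expected_ids, retrieved_ids):
    """Standalone sketch of the NDCG added in metrics.py: binary relevance
    ("linear" mode), normalized by the DCG of an all-relevant ranking of the
    same length as `retrieved_ids`."""
    expected = set(expected_ids)
    dcg = sum(
        1.0 / math.log2(i + 1)
        for i, doc_id in enumerate(retrieved_ids, start=1)
        if doc_id in expected
    )
    idcg = sum(1.0 / math.log2(i + 1) for i in range(1, len(retrieved_ids) + 1))
    return dcg / idcg


# Illustrative IDs: the expected node is retrieved at rank 1 out of top_k=2.
print(ndcg_binary_linear(["node_0"], ["node_0", "node_7"]))  # ≈ 0.6131
```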