Update index.md
fmplaza authored Jun 26, 2024
1 parent 472a06d commit 83e7f56
Showing 1 changed file with 3 additions and 3 deletions.
content/publication/2023-label-variation-llms/index.md
6 changes: 3 additions & 3 deletions
@@ -16,8 +16,8 @@ publishDate: 2023-07-24T14:48:20+01:00
 publication_types: ["3"]
 
 # Publication name and optional abbreviated publication name.
-publication: "arXiv preprint arXiv:2307.12973"
-publication_short: "arXiv preprint arXiv:2307.12973"
+publication: "Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024"
+publication_short: "NLPerspectives (LREC-COLING 2024)"
 
 abstract: "Large Language Models (LLMs) exhibit remarkable text classification capabilities, excelling in zero- and few-shot learning (ZSL and FSL) scenarios. However, since they are trained on different datasets, performance varies widely across tasks between those models. Recent studies emphasize the importance of considering human label variation in data annotation. However, how this human label variation also applies to LLMs remains unexplored. Given this likely model specialization, we ask: Do aggregate LLM labels improve over individual models (as for human annotators)? We evaluate four recent instruction-tuned LLMs as annotators on five subjective tasks across four languages. We use ZSL and FSL setups and label aggregation from human annotation. Aggregations are indeed substantially better than any individual model, benefiting from specialization in diverse tasks or languages. Surprisingly, FSL does not surpass ZSL, as it depends on the quality of the selected examples. However, there seems to be no good information-theoretical strategy to select those. We find that no LLM method rivals even simple supervised models. We also discuss the tradeoffs in accuracy, cost, and moral/ethical considerations between LLM and human annotation."

@@ -37,7 +37,7 @@ featured: false
 # icon_pack: fab
 # icon: twitter
 
-url_pdf: https://arxiv.org/pdf/2307.12973.pdf
+url_pdf: https://aclanthology.org/2024.nlperspectives-1.2.pdf
 url_code:
 url_dataset:
 url_poster:
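The abstract quoted in the diff reports that aggregating labels across several instruction-tuned LLMs beats any individual model, mirroring label aggregation over human annotators. As an illustration only, the sketch below implements one plausible aggregation scheme, a per-item plurality vote; the function name, label set, and tie-breaking rule are assumptions for this example, not the paper's actual method.

```python
from collections import Counter

def aggregate_labels(model_labels: list[list[str]]) -> list[str]:
    """Plurality-vote aggregation of per-item labels from several annotators.

    model_labels holds one label list per model, all of equal length.
    Ties are broken by whichever tied label an earlier-listed model
    produced, a simplifying assumption for this sketch.
    """
    aggregated = []
    for item_labels in zip(*model_labels):  # one tuple of labels per item
        counts = Counter(item_labels)       # label -> vote count
        aggregated.append(counts.most_common(1)[0][0])
    return aggregated

# Three hypothetical models annotating four items of a subjective task.
votes = [
    ["toxic", "safe",  "toxic", "safe"],  # model A
    ["toxic", "toxic", "toxic", "safe"],  # model B
    ["safe",  "toxic", "toxic", "safe"],  # model C
]
print(aggregate_labels(votes))  # ['toxic', 'toxic', 'toxic', 'safe']
```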
