
Centralise OTEL metrics from OpenAI responses #300

Closed
dylanratcliffe opened this issue Aug 19, 2024 · 1 comment

@dylanratcliffe
Member

Since we're calling OpenAI in multiple locations, it would be good if we could centralise the helper functions: https://github.com/overmindtech/api-server/blob/main/server/risks/shared.go#L81-L127

This would standardise the metrics across all usage.
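
A minimal sketch of what such a shared helper could look like, using the OpenTelemetry Go metric API. The `llmmetrics` package name, the `Usage` struct, and the metric names are illustrative assumptions, not the actual api-server code:

```go
package llmmetrics

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

// Usage mirrors the token-usage block in OpenAI chat completion
// responses. The field names here are hypothetical; a real helper
// would take the usage type from whichever OpenAI client the
// codebase already depends on.
type Usage struct {
	PromptTokens     int
	CompletionTokens int
}

var (
	meter = otel.Meter("llm")

	// Instrument-creation errors are ignored for brevity; a real
	// helper should handle them once at startup.
	promptTokens, _ = meter.Int64Counter("llm.prompt_tokens",
		metric.WithDescription("Prompt tokens sent to the model"))
	completionTokens, _ = meter.Int64Counter("llm.completion_tokens",
		metric.WithDescription("Completion tokens returned by the model"))
)

// RecordUsage publishes token counts from a single OpenAI response,
// so every call site emits the same metric names and attributes.
func RecordUsage(ctx context.Context, model string, u Usage) {
	attrs := metric.WithAttributes(attribute.String("model", model))
	promptTokens.Add(ctx, int64(u.PromptTokens), attrs)
	completionTokens.Add(ctx, int64(u.CompletionTokens), attrs)
}
```

With something like this in one package, each call site only needs a one-line `RecordUsage` call after it gets a response, instead of duplicating the instrument setup.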

Extra Credit: Perplexity

As per this article: https://medium.com/@furqanshaikh/measuring-llm-confusion-9529a4b5e907

It's possible to calculate the perplexity of a given response: roughly, a metric for how "confused" the model is. This would be an interesting metric when comparing one prompt against another. It also makes me wonder whether there are existing OTEL integrations that could calculate this, and potentially other metrics, for us easily.
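
For reference, perplexity is the exponential of the negative mean token log-probability over the response. A minimal sketch, assuming the per-token logprobs the OpenAI API can return when requested (the `Perplexity` function name is illustrative):

```go
package llmmetrics

import "math"

// Perplexity returns exp(-mean(logprobs)) over a response's tokens.
// Lower values mean the model assigned higher probability to its own
// output, i.e. it was less "confused". Inputs must be natural-log
// probabilities, as the OpenAI API returns them.
func Perplexity(logprobs []float64) float64 {
	if len(logprobs) == 0 {
		return math.NaN()
	}
	var sum float64
	for _, lp := range logprobs {
		sum += lp
	}
	return math.Exp(-sum / float64(len(logprobs)))
}
```

The result could then be recorded as an OTEL histogram through the same shared meter as the token counters above.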

@dylanratcliffe dylanratcliffe added the sustain Work that keeps the gears greased but doesn't contribute directly to our roadmap label Sep 2, 2024
@dylanratcliffe dylanratcliffe self-assigned this Oct 27, 2024
@dylanratcliffe
Member Author

Fixed in #320
