chore: Incremental Docs Reorg Pt. 1 (#2262)
* part 1

* part 2

* part 2 - undo a lot

* part 2 - undo a lot

* part 2 - undo a lot

* tone down the tools section

* small adjustments

* small adjustments

* updated

* lint

* Apply suggestions from code review

Andrew changes

Co-authored-by: Andrew Truong <[email protected]>

---------

Co-authored-by: Andrew Truong <[email protected]>
tssweeney and andrewtruong authored Aug 29, 2024
1 parent 144ea88 commit 8c1b193
Showing 26 changed files with 789 additions and 673 deletions.
8 changes: 0 additions & 8 deletions docs/docs/guides/_category_.json

This file was deleted.

2 changes: 1 addition & 1 deletion docs/docs/guides/core-types/evaluations.md
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
# Evaluation
# Evaluations

Evaluation-driven development helps you reliably iterate on an application. The `Evaluation` class is designed to assess the performance of a `Model` on a given `Dataset` or set of examples using scoring functions.
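The idea can be sketched in plain Python. This is a hypothetical illustration of what an evaluation does conceptually; the function and field names below are illustrative, not Weave's actual `Evaluation` API.

```python
# Toy sketch of evaluation-driven development (not Weave's API):
# run a model over a dataset and apply scoring functions to each row.
def evaluate(model, dataset, scorers):
    results = []
    for example in dataset:
        output = model(example["question"])
        scores = {scorer.__name__: scorer(example, output) for scorer in scorers}
        results.append({"output": output, **scores})
    return results

def exact_match(example, output):
    # a scoring function: compares model output to the expected answer
    return output == example["expected"]

dataset = [
    {"question": "capital of France?", "expected": "Paris"},
    {"question": "capital of Japan?", "expected": "Tokyo"},
]

def model(question):
    # stand-in "model": a lookup table with one wrong answer
    answers = {"capital of France?": "Paris", "capital of Japan?": "Kyoto"}
    return answers[question]

results = evaluate(model, dataset, [exact_match])
print([row["exact_match"] for row in results])  # [True, False]
```

Because scores are computed per example, iterating on the model means re-running the same dataset and comparing score columns across versions.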

8 changes: 0 additions & 8 deletions docs/docs/guides/core-types/index.md

This file was deleted.

53 changes: 36 additions & 17 deletions docs/docs/guides/integrations/index.md
@@ -1,18 +1,37 @@
# Weave Integrations

Weave contains automatic logging integrations for popular LLMs and orchestration frameworks. Weave will automatically trace calls made via the following libraries:

- **[OpenAI](/guides/integrations/openai)**
- **[Anthropic](/guides/integrations/anthropic)**
- **[Cohere](/guides/integrations/cohere)**
- **[MistralAI](/guides/integrations/mistral)**
- **[LangChain](/guides/integrations/langchain)**
- **[LlamaIndex](/guides/integrations/llamaindex)**
- **[DSPy](/guides/integrations/dspy)**
- **[Google Gemini](/guides/integrations/google-gemini)**
- **[Together AI](/guides/integrations/together_ai)**
- **[Groq](/guides/integrations/groq)**
- **[Open Router](/guides/integrations/openrouter)**
- **[Local Models](/guides/integrations/local_models)**
- **[LiteLLM](/guides/integrations/litellm)**
# Integrations

:::success[Automatic Tracking]
In most cases, all you need to do is call `weave.init()` at the top of your script or program in order for Weave to automatically patch and track any of these libraries!
:::
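The mechanism behind this can be illustrated with a stdlib-only toy. This is not Weave's real implementation; `FakeClient` and `init` below are stand-ins showing how an init call can patch a client library so every call is recorded automatically.

```python
# Toy illustration of automatic tracking via patching (not Weave's code):
# init() wraps a library method so each call is logged with no other
# changes to the calling script.
import functools

trace_log = []  # records (function name, input, output) tuples

class FakeClient:
    """Stand-in for an LLM client library."""
    def complete(self, prompt):
        return f"echo: {prompt}"

def init():
    """Patch FakeClient.complete so each call is logged, analogous to
    weave.init() patching the supported libraries."""
    original = FakeClient.complete

    @functools.wraps(original)
    def traced(self, prompt):
        result = original(self, prompt)
        trace_log.append(("FakeClient.complete", prompt, result))
        return result

    FakeClient.complete = traced

init()                 # one call at the top of the script
client = FakeClient()
client.complete("hi")  # recorded automatically
print(trace_log)
```

The calling code never mentions tracing, which is why a single `weave.init()` at the top of a script is enough for the supported integrations.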

Weave provides automatic logging integrations for popular LLM providers and orchestration frameworks. These integrations allow you to seamlessly trace calls made through various libraries, enhancing your ability to monitor and analyze your AI applications.

## LLM Providers

LLM providers are the vendors that offer access to large language models for generating predictions. Weave integrates with these providers to log and trace the interactions with their APIs:

- **[OpenAI](/guides/integrations/openai)**
- **[Anthropic](/guides/integrations/anthropic)**
- **[Cerebras](/guides/integrations/cerebras)**
- **[Cohere](/guides/integrations/cohere)**
- **[MistralAI](/guides/integrations/mistral)**
- **[Google Gemini](/guides/integrations/google-gemini)**
- **[Together AI](/guides/integrations/together_ai)**
- **[Groq](/guides/integrations/groq)**
- **[Open Router](/guides/integrations/openrouter)**
- **[LiteLLM](/guides/integrations/litellm)**


**[Local Models](/guides/integrations/local_models)**: For when you're running models on your own infrastructure.

## Frameworks

Frameworks help orchestrate the actual execution pipelines in AI applications. They provide tools and abstractions for building complex workflows. Weave integrates with these frameworks to trace the entire pipeline:

- **[LangChain](/guides/integrations/langchain)**
- **[LlamaIndex](/guides/integrations/llamaindex)**
- **[DSPy](/guides/integrations/dspy)**



Choose an integration from the lists above to learn more about how to use Weave with your preferred LLM provider or framework. Whether you're directly accessing LLM APIs or building complex pipelines with orchestration frameworks, Weave provides the tools to trace and analyze your AI applications effectively.
23 changes: 20 additions & 3 deletions docs/docs/guides/tools/index.md
@@ -1,4 +1,21 @@
# Tools
# Tools & Utilities

- [serve](/guides/tools/serve): Serve Weave ops and models
- [deploy](/guides/tools/deploy): Deploy Weave ops and models to various targets
Weave is developing a set of tools and utilities to help with your workflow and deployment process for AI applications. These are currently in early alpha stages and subject to change. Here's an overview of what we're working on:

## Serve (experimental)

[Serve](/guides/tools/serve) is a feature to expose your Weave ops and models as API endpoints. We're exploring possibilities such as:

- Creating web services for your Weave components
- Integrating Weave components into existing applications
- Providing a way to test models in a more production-like setting

## Deploy (experimental)

[Deploy](/guides/tools/deploy) is another alpha-stage utility we're developing to help with deploying Weave ops and models. Some potential features we're considering include:

- Pushing models to cloud platforms
- Managing different deployment environments
- Exploring ways to automate parts of the deployment process

Please note that these tools are still in very early stages of development. They may not be fully functional, could change significantly, or might be discontinued. We recommend using them for experimental purposes only at this time.
34 changes: 28 additions & 6 deletions docs/docs/guides/tracking/index.md
@@ -1,8 +1,30 @@
# Tracking
# Tracing

Weave track and versions objects and function calls.
Weave provides powerful tracing capabilities to track and version objects and function calls in your applications. This comprehensive system enables better monitoring, debugging, and iterative development of AI-powered applications, allowing you to "track insights between commits."

- [objects](/guides/tracking/objects): Weave's extensible serialization layer versions Python objects.
- [ops](/guides/tracking/ops): A Weave op is a function that is automatically versioned and tracked
- [tracing](/guides/tracking/tracing)
- [feedback](/guides/tracking/feedback): Annotate calls with emojis and notes
## Key Tracing Features

Weave's tracing functionality comprises three main components:

### Calls

[Calls](/guides/tracking/tracing) trace function calls, inputs, and outputs, enabling you to:
- Analyze data flow through your application
- Debug complex interactions between components
- Optimize application performance based on call patterns
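The shape of a recorded call can be sketched with a hypothetical record format; Weave's actual call schema differs, and `record_call` below is purely illustrative.

```python
# Toy sketch of what a traced call captures (not Weave's schema):
# the op name, its inputs, its output, and timing.
import time

calls = []

def record_call(fn, *args, **kwargs):
    start = time.perf_counter()
    output = fn(*args, **kwargs)
    calls.append({
        "op": fn.__name__,
        "inputs": {"args": args, "kwargs": kwargs},
        "output": output,
        "latency_s": time.perf_counter() - start,
    })
    return output

def summarize(text):
    # stand-in for an LLM-backed function
    return text[:10]

record_call(summarize, "a long piece of text")
print(calls[0]["op"], repr(calls[0]["output"]))
```

Records like these are what make it possible to analyze data flow, debug interactions, and spot slow calls.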

### Ops

[Ops](/guides/tracking/ops) are automatically versioned and tracked functions (which produce Calls) that allow you to:
- Monitor function performance and behavior
- Maintain a record of function modifications
- Ensure experiment reproducibility
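Automatic versioning can be illustrated with a toy: derive a version id from the function itself, so any edit to the body yields a new version. This is not Weave's versioning algorithm; hashing the compiled bytecode here is a stand-in for hashing the source.

```python
# Toy illustration of op versioning (not Weave's algorithm):
# a function's version id changes whenever its body changes.
import hashlib

def op_version(fn):
    # hash the compiled bytecode as a stand-in for hashing the source
    return hashlib.sha256(fn.__code__.co_code).hexdigest()[:8]

def add(a, b):
    return a + b

def add_v2(a, b):
    return a + b + 0  # edited body -> different bytecode -> new version

v1 = op_version(add)
print(v1, op_version(add_v2))
```

Pinning results to a version id like this is what makes experiments reproducible: re-running an op only reuses prior results if the code is byte-for-byte the same.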

### Objects

[Objects](/guides/tracking/objects) form Weave's extensible serialization layer, automatically versioning runtime objects (often the inputs and outputs of Calls). This feature allows you to:
- Track changes in data structures over time
- Maintain a clear history of object modifications
- Easily revert to previous versions when needed
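The versioning behavior can be sketched with a minimal snapshot store; `VersionedObject` below is a hypothetical illustration, not Weave's serialization layer.

```python
# Toy sketch of object versioning (not Weave's implementation):
# each save snapshots the value, so any earlier version can be recovered.
import copy

class VersionedObject:
    def __init__(self, value):
        self._versions = []
        self.save(value)

    def save(self, value):
        # deep copy so later mutations don't rewrite history
        self._versions.append(copy.deepcopy(value))

    @property
    def latest(self):
        return self._versions[-1]

    def get(self, version):
        return self._versions[version]

cfg = VersionedObject({"model": "gpt", "temperature": 0.0})
cfg.save({"model": "gpt", "temperature": 0.7})
print(cfg.latest["temperature"])  # 0.7
print(cfg.get(0)["temperature"])  # 0.0, the revert target
```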

By leveraging these tracing capabilities, you can gain deeper insights into your application's behavior, streamline your development process, and build more robust AI-powered systems.
4 changes: 2 additions & 2 deletions docs/docs/guides/tracking/tracing.md
@@ -1,6 +1,6 @@
# Tracing
# Calls

Tracing is a powerful feature in Weave that allows you to track the inputs and outputs of functions seamlessly. Follow these steps to get started:
Calls are a powerful Weave feature that lets you seamlessly track the inputs and outputs of functions. Follow these steps to get started:

To track specific functions, decorate them with `@weave.op()`. This decorator tells Weave to monitor the inputs, outputs, and any code changes for the function. Every call to a decorated function is then tracked and logged. Weave also records how functions relate to one another, keeping track of parent and child calls to build a trace tree.

36 changes: 0 additions & 36 deletions docs/docs/introduction.md
@@ -20,42 +20,6 @@ You can use Weave to:
- Build rigorous, apples-to-apples evaluations for language model use cases
- Organize all the information generated across the LLM workflow, from experimentation to evaluations to production

## Key concepts

Weave's **core types** layer contains everything you need for organizing Generative AI projects, with built-in lineage, tracking, and reproducibility.

- **[Datasets](/guides/core-types/datasets)**: Version, store, and share rich tabular data.
- **[Models](/guides/core-types/models)**: Version, store, and share parameterized functions.
- **[Evaluations](/guides/core-types/evaluations)**: Test suites for AI models.
- [soon] Agents: ...

Weave's **tracking** layer brings immutable tracing and versioning to your programs and experiments.

- **[Objects](/guides/tracking/objects)**: Weave's extensible serialization lets you easily version, track, and share Python objects.
- **[Ops](/guides/tracking/ops)**: Versioned, reproducible functions, with automatic tracing.
- **[Tracing](/guides/tracking/tracing)**: Automatic organization of function calls and data lineage.
- **[Feedback](/guides/tracking/feedback)**: Simple utilities to capture user feedback and attach them to the underlying tracked call.

Weave offers **integrations** with many language model APIs and LLM frameworks to streamline tracking and evaluation:

- **[OpenAI](/guides/integrations/openai)**: automatic tracking for openai api calls
- **[Anthropic](/guides/integrations/anthropic)**
- **[Cohere](/guides/integrations/cohere)**
- **[MistralAI](/guides/integrations/mistral)**
- **[LangChain](/guides/integrations/langchain)**
- **[LlamaIndex](/guides/integrations/llamaindex)**
- **[DSPy](/guides/integrations/dspy)**
- **[Google Gemini](/guides/integrations/google-gemini)**
- **[Together AI](/guides/integrations/together_ai)**
- **[Open Router](/guides/integrations/openrouter)**
- **[Local Models](/guides/integrations/local_models)**
- **[LiteLLM](/guides/integrations/litellm)**

Weave's **tools** layer contains utilities for making use of Weave objects.

- **[Serve](/guides/tools/serve)**: FastAPI server for Weave Ops and Models
- **[Deploy](/guides/tools/deploy)**: Deploy Weave Ops and Models to various targets

## What's next?

Try the [Quickstart](/quickstart) to see Weave in action.