0.1.0-alpha.5 (#20)
* bump version to 0.1.0-alpha.5

* add publish:all

* rename publish:all to publish

* even more README edits

* update README again

* docs initial commit

* reorganize tests/; run integration test during ci?

* fix circleci conf

* ok

* add context

* test if this will cause test to fail

* Revert "test if this will cause test to fail"

This reverts commit 389e377.

* hf int test

* s/pass/fail/

* use more types

* incremental updates to types

* incremental updates to types

* add typeDefs tsc path

* fix paths

* remove createHttpModelProvider

* incremental type improvements

* use import type

* refresh docs

* README updates

* more doc improvements

* more docs

* more docs

* fleshing out ModelApi docs

* full docs for createOpenAiChatModelProvider

* full docs for createLmStudioModelProvider

* full docs for createHuggingfaceInferenceModelProvider and small fixes elsewhere

* full docs for createGroqModelProvider

* full docs for createCohereLegacyModelProvider

* full docs for createAwsBedrockModelProvider

* doc categories

* full openai ChatCompletion api

* move some files around; rest of ai21j2complete

* comment about lmstudio

* lint

* add ChatCompletionRequestMessage to OpenAiChatOptions

* add mistral provider

* finish docs for mistral provider

* doc updates

* docs:gen

* proper integration tests

* track req/resp history

* replace cohereLegacy with cohere; add cohere chat api (initial)

* union type for createAwsBedrockModelProvider api param

* union type for createHuggingfaceInferenceModelProvider api param

* docs:gen

* fix builds

* add optionals to cohere-chat resp

* fix composite() 😬

* fix typo (really need some snapshot tests of these)

* complete cohere /chat api

* (WIP) cohere tools int test

* better clean script

* remove comment

* update keywords

* remove commented out config

* fix whole build procedure; overly redundant now but should be fully working

* remove old script

* more docs

* template fix for browsers

* fix lmstudio modelIds in e2e tests
jnaglick authored Jun 11, 2024
1 parent b6c9643 commit 4d686fe
Showing 205 changed files with 8,442 additions and 2,996 deletions.
10 changes: 8 additions & 2 deletions .circleci/config.yml
@@ -20,6 +20,9 @@ jobs:
- run:
name: Test
command: npm run test
- run:
name: Integration Test
command: npm run test:integration:ci
publish:
executor:
name: node/default
@@ -43,7 +46,7 @@
name: Publish
command: |
npm set //registry.npmjs.org/:_authToken=$GENERATIVE_TS_NPM_TOKEN
npm publish
npm run publish
workflows:
version: 2
@@ -57,7 +60,10 @@ workflows:
pattern: '^release\/.*$'
value: << pipeline.git.branch >>
jobs:
- test
- test:
context:
- econify-aws
- generative-ts-hf-token
push_release_branch:
jobs:
- publish:
1 change: 1 addition & 0 deletions .env.example
@@ -5,3 +5,4 @@ OPENAI_API_KEY=
HUGGINGFACE_API_TOKEN=
GROQ_API_KEY=
COHERE_API_KEY=
MISTRAL_API_KEY=
232 changes: 178 additions & 54 deletions README.md
@@ -1,18 +1,18 @@
# generative-ts

**a typescript-first utility library for building LLM applications+agents for node and the browser**
**a typescript library for building LLM applications+agents**

generative-ts provides a strongly-typed interface for invoking LLMs from various service providers as defined by their own APIs. It’s not a "universal interface," set of heavy abstractions, or wrapper around existing SDKs. Instead, it offers an easy way to get type-safe requests and responses from multiple LLM providers using their actual APIs. It has some useful features purpose-built for LLMs, and it’s designed for the TypeScript ecosystem with a minimal footprint and high portability in mind.
[![Documentation](https://img.shields.io/badge/docs-generative--ts-blue)](https://econify.github.io/generative-ts/)

## Design Goals
generative-ts is a web-first library for programming LLM applications. Its core feature is letting you use a wide variety of [model providers](#model-providers) with minimal code and dependencies, while still exposing their native APIs so as not to get in your way. On top of that, it provides useful features for common applications like Chatbots, Tool Use, RAG, and Agents.

- **Simple**: Invoke many different popular models and providers out of the box, using their native interfaces, with a couple lines of code
- **Typesafe**: todo
- **Customizable**: Built on interfaces and injectable dependencies, you can define your own APIs and Providers, supply your own Template implementation, or use your own HTTP client.
- **Portable**: Runs in node or the browser, ships with cjs, esm, and browser-optimized bundles
- **Minimal**: Minimal dependencies, <100KB bundles, and [scoped packages](#packages) for fine-grain installs
- **HTTP-level control**: It uses native fetch (or optionally lets you [pass your own client](#custom-http-client)) to interact with model APIs, giving you uniform control of the request at the HTTP level
- **Useful**: todo
## Features

- **Simple**: *NOT* a heavy-duty abstraction or framework. The library is easy to understand, and model APIs are exposed 1:1.
- **Minimal**: *NOT* a wrapper around a bunch of different SDKs. It uses a small number of dependencies and also provides [scoped packages](#packages) for fine-grained installs.
- **Portable**: Can run in node or entirely in the browser
- **Just HTTP**: It uses native fetch out of the box, giving you uniform control of timeouts, retries, and proxies. You can also [inject your own HTTP client](#custom-http-client) as an alternative.
- **Versatile**: Provides utilities for things like Chatbots, Tool Use, RAG, and Agents (mostly coming in beta)

## Install

@@ -24,58 +24,164 @@ npm i generative-ts

You can also do more granular installs of scoped packages if you want to optimize your builds further (see [packages](#packages))

## How it Works
## Usage

TODO explain concept of ModelAPIs vs ModelProviders here
### AWS Bedrock

### ModelAPIs
**[API docs: `createAwsBedrockModelProvider` ](https://econify.github.io/generative-ts/functions/createAwsBedrockModelProvider.html)**

* AI21 Jurassic
* Amazon Titan Text
* Cohere: Chat; Generate
* Huggingface: Text Generation task; Conversational task
* Meta: Llama 2 Chat; Llama 3 Chat
* Mistral: ChatCompletion; Bedrock
* OpenAI: ChatCompletion
* Anthropic: ? (COMING SOON)
* Google: Gemini (COMING SOON)
<!-- TEST [Bedrock] -->
```ts
import {
AmazonTitanTextApi,
createAwsBedrockModelProvider
} from "generative-ts";

// Bedrock supports many different APIs and models. See API docs (above) for full list.
const titanText = createAwsBedrockModelProvider({
api: AmazonTitanTextApi,
modelId: "amazon.titan-text-express-v1",
// auth will be read from process.env and properly handled for the AWS environment on which the code is running
});

### ModelProviders
const response = await titanText.sendRequest({
prompt: "Brief history of NY Mets:"
// all other options for the specified `api` available here
});

* AWS Bedrock
* Cohere
* Groq
* Huggingface Inference
* OpenAI
* Replicate (TODO!)
* Mistral (TODO!)
* LMStudio
* Llamafile (TODO - OpenAI ChatCompletion running locally)
* vLLM (TODO - OpenAI ChatCompletion running locally)
* Google Vertex AI (COMING SOON)
* Microsoft Azure (COMING SOON?)
console.log(response.results[0]?.outputText);
```

It's also easy to add your own APIs and ModelProviders (TODO section)
### Cohere

## Usage
**[API docs: `createCohereModelProvider` ](https://econify.github.io/generative-ts/functions/createCohereModelProvider.html)**

<!-- TEST [Cohere] -->
```ts
import { createCohereModelProvider } from "generative-ts";

const commandR = createCohereModelProvider({
modelId: "command-r-plus", // Cohere defined model ID
// you can explicitly pass auth here, otherwise by default it is read from process.env
});

const response = await commandR.sendRequest({
prompt: "Brief History of NY Mets:",
preamble: "Talk like Jafar from Aladdin",
// all other Cohere /generate options available here
});

console.log(response.text);
```

### Groq

**[API docs: `createGroqModelProvider` ](https://econify.github.io/generative-ts/functions/createGroqModelProvider.html)**

<!-- TEST [Groq] -->
```ts
import { createGroqModelProvider } from "generative-ts";

const llama3 = createGroqModelProvider({
modelId: "llama3-70b-8192", // Groq defined model ID
// you can explicitly pass auth here, otherwise by default it is read from process.env
});

const response = await llama3.sendRequest({
prompt: "Brief History of NY Mets:"
// all other OpenAI ChatCompletion options available here (Groq uses the OpenAI ChatCompletion API for all the models it hosts)
});

console.log(response.choices[0]?.message.content);
```

### Huggingface Inference

**[API docs: `createHuggingfaceInferenceModelProvider` ](https://econify.github.io/generative-ts/functions/createHuggingfaceInferenceModelProvider.html)**

<!-- TEST [Huggingface] -->
```ts
import {
createHuggingfaceInferenceModelProvider,
HfTextGenerationTaskApi
} from "generative-ts";

// Huggingface Inference supports many different APIs and models. See API docs (above) for full list.
const gpt2 = createHuggingfaceInferenceModelProvider({
api: HfTextGenerationTaskApi,
modelId: "gpt2",
// you can explicitly pass auth here, otherwise by default it is read from process.env
});

const response = await gpt2.sendRequest({
prompt: "Hello,"
// all other options for the specified `api` available here
});

console.log(response[0]?.generated_text);
```

### LMStudio

**[API docs: `createLmStudioModelProvider` ](https://econify.github.io/generative-ts/functions/createLmStudioModelProvider.html)**

<!-- TEST [LMStudio] -->
```ts
import { createLmStudioModelProvider } from "generative-ts";

const llama3 = createLmStudioModelProvider({
modelId: "lmstudio-community/Meta-Llama-3-70B-Instruct-GGUF", // the ID of a model you have downloaded in LMStudio
});

const response = await llama3.sendRequest({
prompt: "Brief History of NY Mets:"
// all other OpenAI ChatCompletion options available here (LMStudio uses the OpenAI ChatCompletion API for all the models it hosts)
});

console.log(response.choices[0]?.message.content);
```

### Mistral

**[API docs: `createMistralModelProvider` ](https://econify.github.io/generative-ts/functions/createMistralModelProvider.html)**

<!-- TEST [Mistral] -->
```ts
import { createMistralModelProvider } from "generative-ts";

const mistralLarge = createMistralModelProvider({
modelId: "mistral-large-latest", // Mistral defined model ID
// you can explicitly pass auth here, otherwise by default it is read from process.env
});

const response = await mistralLarge.sendRequest({
prompt: "Brief History of NY Mets:"
// all other Mistral ChatCompletion API options available here
});

console.log(response.choices[0]?.message.content);
```

### OpenAI

**[API docs: `createOpenAiChatModelProvider` ](https://econify.github.io/generative-ts/functions/createOpenAiChatModelProvider.html)**

<!-- TEST [OpenAI] -->
```ts
import { createOpenAiChatModelProvider } from "generative-ts";

const gpt = createOpenAiChatModelProvider({
modelId: "gpt-4-turbo", // OpenAI defined model ID
// you can explicitly pass auth here, otherwise by default it is read from process.env
});

const response = await gpt.sendRequest({
prompt: "Brief History of NY Mets:",
temperature: 1.0,
max_tokens: 100,
// all other OpenAI ChatCompletion options available here
});

console.log(response.choices[0]?.message.content);
```

### Custom HTTP Client
@@ -88,6 +194,27 @@ todo;
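This subsection is still marked `todo` in this revision. As a hedged sketch of the idea, here is a fetch-compatible client that adds a timeout; the `HttpClient` type and the `client` option shown in the comment are illustrative assumptions, not the library's actual interface:

```typescript
// Sketch only: `HttpClient` and the `client` option below are assumptions
// for illustration, not the actual generative-ts interface.
type HttpClient = (url: string, init?: RequestInit) => Promise<Response>;

// Wrap native fetch so every request aborts after `ms` milliseconds.
function fetchWithTimeout(ms: number): HttpClient {
  return async (url, init) => {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), ms);
    try {
      return await fetch(url, { ...init, signal: controller.signal });
    } finally {
      clearTimeout(timer);
    }
  };
}

// Hypothetical usage, assuming a provider factory accepts a client:
// const gpt = createOpenAiChatModelProvider({
//   modelId: "gpt-4-turbo",
//   client: fetchWithTimeout(10_000), // assumed option name
// });
```

The same wrapper shape works for retries or proxying, since the injected client only has to be call-compatible with fetch.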

For more examples, please refer to the /examples folder in the repository.

## Supported Providers and Models

See [Usage](#usage) for how to use each provider.

|Provider|Models|Model APIs|
|-|-|-|
|AWS Bedrock|[Multiple hosted models](https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html#model-ids-arns)|[Native model APIs](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html)|
|Cohere|Command / Command R+|Cohere /generate and /chat|
|Groq|[Multiple hosted models](https://console.groq.com/docs/models)|OpenAI ChatCompletion|
|Huggingface Inference|Open-source|[Huggingface Inference APIs](https://huggingface.co/docs/api-inference/detailed_parameters)|
|LMStudio (localhost)|Open-source (must be downloaded)|OpenAI ChatCompletion|
|Mistral|Mistral x.y|Mistral ChatCompletion|
|OpenAI|GPT x.y|OpenAI ChatCompletion|
|Azure (coming soon)|||
|Google Vertex AI (coming soon)|||
|Replicate (coming soon)|||
|Anthropic (coming soon)|||
|Fireworks (coming soon)|||

It's also easy to add your own APIs and ModelProviders **TODO LINK**
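The shapes below are purely hypothetical (the real `ModelApi` interface lives in the library and differs), but they sketch the extension idea: an API definition pairs request serialization with response validation, and a provider decides where requests are sent:

```typescript
// Illustrative only: these are NOT the actual generative-ts interfaces.
interface ModelApi<TReq, TResp> {
  requestBody: (options: TReq) => string;   // serialize request options
  parseResponse: (raw: unknown) => TResp;   // validate/narrow the response
}

// A toy API definition following that hypothetical shape:
const toyApi: ModelApi<{ prompt: string }, { text: string }> = {
  requestBody: (o) => JSON.stringify({ prompt: o.prompt }),
  parseResponse: (raw) => {
    if (typeof raw === "object" && raw !== null && "text" in raw) {
      return raw as { text: string };
    }
    throw new Error("unexpected response shape");
  },
};

console.log(toyApi.requestBody({ prompt: "Brief History of NY Mets:" }));
// → {"prompt":"Brief History of NY Mets:"}
```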

## Packages

If you're using a modern bundler, just install generative-ts to get everything. Modern bundlers support tree-shaking, so your final bundle won't include unused code. (Note: we distribute both ESM and CJS bundles for compatibility.) If you prefer to avoid unnecessary downloads, or you're operating under constraints where tree-shaking isn't an option, we offer scoped packages under @generative-ts/ with specific functionality for more fine-grained installs.
@@ -100,28 +227,25 @@ If you're using a modern bundler, just install generative-ts to get everything.
| `@generative-ts/provider-bedrock` | AWS Bedrock provider | This is its own package because it uses the `aws4` dependency to properly authenticate when running in AWS environments |
| `@generative-ts/apis` | Model APIs | `ModelAPI` implementations. These use some internal dependencies (like `ejs` for templating) which aren't strictly necessary because you can implement your own (see docs of `ModelAPI` for full details -- **TODO**) |

## Report Bugs / Submit Feature Requests

Please submit all issues here: https://github.com/Econify/generative-ts/issues

## Contributing

We welcome contributions! To get started developing, clone the repository and run:
To get started developing, fork (optional) and clone the repository, then run:

```sh
nvm use
npm install
npm ci
```

From there you can run the examples (`npm run example`) or the "e2e tests" in `tests/`
To run examples and integration/e2e tests you'll need to create a `.env` file by running `cp .env.example .env` and then add values where necessary. This section needs a lot more work :)

## Report Bugs / Submit Feature Requests
## Publishing

Encountered a bug or have a feature request? Please submit issues here: https://github.com/Econify/generative-ts/issues
The "main" `generative-ts` package and the scoped `@generative-ts` packages are both controlled by the generative-ts npm organization. Releases are published via a CircleCI job when a tag whose name starts with `release/` is pushed. The job requires an NPM token with publishing permissions to both `generative-ts` and `@generative-ts`. Currently this is a "granular" token set to expire every 30 days, created by @jnaglick, set in a CircleCI context.

## License

**TODO**: License details will be added here.

## Package publishing and ownership

Both the "main" `generative-ts` package and the scoped `@generative-ts` packages are controlled by the generative-ts npm organization. Currently the 'developer' team in the org only has read permissions. The only way the packages can be published is via ci/cd.

Releases are published via circleci job upon pushes to branches that have a name starting with `release/`. The job requires an NPM token that has publishing permissions to both `generative-ts` and `@generative-ts`. Currently this is a "granular" token set to expire every 30 days, created by @jnaglick, set in a circleci context.
**TODO**
1 change: 1 addition & 0 deletions docs/.nojekyll
@@ -0,0 +1 @@
TypeDoc added this file to prevent GitHub Pages from using Jekyll. You can turn off this behavior by setting the `githubPages` option to false.
1 change: 1 addition & 0 deletions docs/assets/category-nav.css
