
0.1.0-alpha.6 (#42)
* bump version to dev/0.1.0-alpha.6

* docs:gen

* add skeleton package for google vertexAi provider; externalize bundle deps

* no need to use these

* prettier output from scripts/

* finish createVertexAiModelProvider; most of gemini api

* docs:gen

* reorg imports

* more gemini

* even prettier scripts/ output

* publishing script cleanup

* add vertexai to READMEs

* checkEjs script (currently unused)

* gemini tool use

* docs:gen

* createVertexAiModelProvider specs

* add references to tsconfig; these don't do anything but are nice to have

* start of client opts

* remove log from spec

* ability to pass client options to sendRequest

* add timeout

* cleanup

* add necessary generics to rest of provider factories (except aws...)

* remove BaseHttpModelProvider :)

* docs:gen

* docs:gen - new

* cleanup

* docs:gen

* small refactor

* better handling of AWS_REGION

* docs:gen

* run ci integration test in us-east-1

* rename package google-vertex-ai to gcloud-vertex-ai

* update some names/imports

* missed this in package lock

* add test:pattern script

* add more gemini specs

* OpenAiChatApi spec

* specs of all apis

* start of ejs removal conversion

* remove ejs dep :)

* docs:gen

* remove

* maybe new api

* reorg e2e tests

* rollup cleanup

* use snapshots for gemini api test

* umd bundle; better e2e browser tests

* correct use of tslib

* unbreak umd build; add TODO

* temp stop point on Tools

* move stuff around

* incremental improvement of Tool; better handling of prompt in gemini api

* organize gemini api files

* rename prompt -> $prompt

* note

* test all options in gemini api spec

* remove bad copilot-generated jsdoc

* remove eslint-disable

* add some custom errors

* move tool-related interfaces from typeDefs to Tool

* rename function

* export TODOs

* refactor

* move tool interfaces to typeDefs; make Tool implement ToolDescriptor instead of build it

* integration test cleanup

* add license

* docs:gen
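
Several of the bullets above ("start of client opts", "ability to pass client options to sendRequest", "add timeout") add per-request client options. A rough sketch of the timeout mechanism is below — the `sendWithTimeout` helper and its signature are illustrative assumptions, not this library's actual API:

```typescript
// Sketch: enforcing a per-request timeout around any async request function.
// `sendWithTimeout` is a hypothetical helper for illustration only.
async function sendWithTimeout<T>(
  doRequest: (signal: AbortSignal) => Promise<T>,
  timeoutMs: number,
): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    // doRequest would typically call fetch(url, { signal }) under the hood
    return await doRequest(controller.signal);
  } finally {
    clearTimeout(timer);
  }
}
```

The released `sendRequest` accepts its client options directly; this only illustrates how a timeout can be enforced over native-fetch-style requests.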
jnaglick authored Jul 15, 2024
1 parent d11d0c5 commit 15e7107
Showing 241 changed files with 11,351 additions and 2,180 deletions.
6 changes: 6 additions & 0 deletions .babelrc
@@ -0,0 +1,6 @@
{
"presets": [
"@babel/preset-env",
"@babel/preset-typescript"
]
}
2 changes: 1 addition & 1 deletion .circleci/config.yml
@@ -22,7 +22,7 @@ jobs:
command: npm run test
- run:
name: Integration Test
command: npm run test:integration:ci
command: AWS_REGION=us-east-1 npm run test:integration:ci
publish:
executor:
name: node/default
7 changes: 7 additions & 0 deletions LICENSE.md
@@ -0,0 +1,7 @@
Copyright 2024 Econify, LLC.

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
68 changes: 36 additions & 32 deletions README.md
@@ -2,17 +2,9 @@

**a typescript library for building LLM applications+agents**

[![Documentation](https://img.shields.io/badge/docs-generative--ts-blue)](https://econify.github.io/generative-ts/)

generative-ts is a web-first library for programming LLM applications. Its core feature is letting you use a wide variety of model providers with minimal code and dependencies, while still exposing their native APIs so the library doesn't get in your way. On top of that, it provides useful features for common applications like Chatbots, Tool Use, RAG, and Agents.

## Features

- **Simple**: *NOT* a heavy duty abstraction or framework. The library is easy to understand and model APIs are exposed 1:1.
- **Minimal**: *NOT* a wrapper of a bunch of different SDKs. It uses a small number of dependencies and also provides [scoped packages](#packages) for fine-grained installs.
- **Portable**: Can run in node or entirely in the browser
- **Just HTTP**: It uses native fetch out of the box, giving you universal control of timeout, retries, and proxies. You can also [inject your own HTTP client](#custom-http-client) as an alternative.
- **Versatile**: Provides utilities for things like Chatbots, Tool Use, RAG, and Agents (mostly coming in beta)
[![Documentation](https://img.shields.io/badge/API-documentation-blue)](https://econify.github.io/generative-ts/)
[![NPM](https://img.shields.io/badge/npm-alpha--6-yellow)](https://www.npmjs.com/package/generative-ts)
[![License](https://img.shields.io/badge/license-MIT-green)](https://github.com/Econify/generative-ts/blob/main/LICENSE.md)

## Install

@@ -41,11 +33,11 @@ import {
const titanText = createAwsBedrockModelProvider({
api: AmazonTitanTextApi,
modelId: "amazon.titan-text-express-v1",
// auth will be read from process.env and properly handled for the AWS environment on which the code is running
// If your code is running in an AWS environment (e.g., Lambda), authorization happens automatically. Otherwise, explicitly pass in `auth`
});

const response = await titanText.sendRequest({
prompt: "Brief history of NY Mets:"
$prompt: "Brief history of NY Mets:"
// all other options for the specified `api` available here
});

@@ -66,14 +58,35 @@ const commandR = createCohereModelProvider({
});

const response = await commandR.sendRequest({
prompt: "Brief History of NY Mets:",
$prompt: "Brief History of NY Mets:",
preamble: "Talk like Jafar from Aladdin",
// all other Cohere /generate options available here
});

console.log(response.text);
```

### Google Cloud VertexAI

**[API docs: `createVertexAiModelProvider` ](https://econify.github.io/generative-ts/functions/createVertexAiModelProvider.html)**

<!-- TEST [VertexAI] -->
```ts
import { createVertexAiModelProvider } from "@generative-ts/gcloud-vertex-ai";

const gemini = await createVertexAiModelProvider({
modelId: "gemini-1.0-pro", // VertexAI defined model ID
// you can explicitly pass auth here; otherwise it is read from process.env by default
});

const response = await gemini.sendRequest({
$prompt: "Brief History of NY Mets:",
// all other Gemini options available here
});

console.log(response.data.candidates[0]);
```
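
The `console.log(response.data.candidates[0])` above prints a whole Gemini candidate object. A small sketch for unwrapping just the generated text — the field shape follows the Gemini REST API (`content.parts[].text`); treat the names as assumptions if the library re-maps the response:

```typescript
// Sketch: extracting generated text from a Gemini-style candidate.
// The shape below follows the Gemini REST API; treat it as an assumption
// if the library wraps responses differently.
interface GeminiCandidate {
  content: {
    role: string;
    parts: { text: string }[];
  };
}

function candidateText(candidate: GeminiCandidate): string {
  return candidate.content.parts.map((part) => part.text).join("");
}
```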

### Groq

**[API docs: `createGroqModelProvider` ](https://econify.github.io/generative-ts/functions/createGroqModelProvider.html)**
@@ -88,7 +101,7 @@ const llama3 = createGroqModelProvider({
});

const response = await llama3.sendRequest({
prompt: "Brief History of NY Mets:"
$prompt:"Brief History of NY Mets:"
// all other OpenAI ChatCompletion options available here (Groq uses the OpenAI ChatCompletion API for all the models it hosts)
});

@@ -114,7 +127,7 @@ const gpt2 = createHuggingfaceInferenceModelProvider({
});

const response = await gpt2.sendRequest({
prompt: "Hello,"
$prompt:"Hello,"
// all other options for the specified `api` available here
});

@@ -134,7 +147,7 @@ const llama3 = createLmStudioModelProvider({
});

const response = await llama3.sendRequest({
prompt: "Brief History of NY Mets:"
$prompt:"Brief History of NY Mets:"
// all other OpenAI ChatCompletion options available here (LMStudio uses the OpenAI ChatCompletion API for all the models it hosts)
});

@@ -155,7 +168,7 @@ const mistralLarge = createMistralModelProvider({
});

const response = await mistralLarge.sendRequest({
prompt: "Brief History of NY Mets:"
$prompt:"Brief History of NY Mets:"
// all other Mistral ChatCompletion API options available here
});

@@ -176,7 +189,7 @@ const gpt = createOpenAiChatModelProvider({
});

const response = await gpt.sendRequest({
prompt: "Brief History of NY Mets:",
$prompt:"Brief History of NY Mets:",
max_tokens: 100,
// all other OpenAI ChatCompletion options available here
});
@@ -190,10 +203,6 @@ console.log(response.choices[0]?.message.content);
todo;
```

### Additional Examples

For more examples, please refer to the /examples folder in the repository.

## Supported Providers and Models

See [Usage](#usage) for how to use each provider.
@@ -202,13 +211,13 @@ See [Usage](#usage) for how to use each provider.
|-|-|-|
|AWS Bedrock|[Multiple hosted models](https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html#model-ids-arns)|[Native model APIs](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html)|
|Cohere|Command / Command R+|Cohere /generate and /chat|
|Google Vertex AI|Gemini x.y|Gemini; OpenAI in preview|
|Groq|[Multiple hosted models](https://console.groq.com/docs/models)|OpenAI ChatCompletion|
|Huggingface Inference|Open-source|[Huggingface Inference APIs](https://huggingface.co/docs/api-inference/detailed_parameters)|
|LMStudio (localhost)|Open-source (must be downloaded)|OpenAI ChatCompletion|
|Mistral|Mistral x.y|Mistral ChatCompletion|
|OpenAI|GPT x.y|OpenAI ChatCompletion|
|Azure (coming soon)|||
|Google Vertex AI (coming soon)|||
|Replicate (coming soon)|||
|Anthropic (coming soon)|||
|Fireworks (coming soon)|||
@@ -223,9 +232,8 @@ If you're using a modern bundler, just install generative-ts to get everything.
|-|-|-|
| `generative-ts` | Everything | Includes all scoped packages listed below |
| `@generative-ts/core` | Core functionality (zero dependencies) | Interfaces, classes, utilities, etc |
| `@generative-ts/providers` | All Model Providers | All `ModelProvider` implementations that aren't in their own packages. Most providers don't require any special dependencies so are here |
| `@generative-ts/provider-bedrock` | AWS Bedrock provider | This is its own package because it uses the `aws4` dependency to properly authenticate when running in AWS environments |
| `@generative-ts/apis` | Model APIs | `ModelAPI` implementations. These use some internal dependencies (like `ejs` for templating) which aren't strictly necessary because you can implement your own (see docs of `ModelAPI` for full details -- **TODO**) |
| `@generative-ts/gcloud-vertex-ai` | Google Cloud VertexAI `ModelProvider` | Uses Application Default Credentials (ADC) to properly authenticate in GCloud environments |
| `@generative-ts/aws-bedrock` | AWS Bedrock `ModelProvider` | Uses `aws4` to properly authenticate when running in AWS environments |

## Report Bugs / Submit Feature Requests

@@ -240,12 +248,8 @@ nvm use
npm ci
```

To run examples and integration/e2e tests you'll need to create an .env file by running `cp .env.example .env` and then add values where necessary. This section needs a lot more work :)
To run examples and integration/e2e tests, create a `.env` file by running `cp .env.example .env`, then add values where necessary.

## Publishing

The main `generative-ts` package and the scoped `@generative-ts` packages are both controlled by the generative-ts npm organization. Releases are published via a CircleCI job when a tag whose name starts with `release/` is pushed. The job requires an NPM token with publishing permissions for both `generative-ts` and `@generative-ts`; currently this is a "granular" token set to expire every 30 days, created by @jnaglick and set in a CircleCI context.

## License

**TODO**
6 changes: 6 additions & 0 deletions babel.config.js
@@ -0,0 +1,6 @@
module.exports = {
presets: [
'@babel/preset-env',
'@babel/preset-typescript',
],
};
12 changes: 6 additions & 6 deletions docs/assets/highlight.css
@@ -9,12 +9,12 @@
--dark-hl-3: #C586C0;
--light-hl-4: #001080;
--dark-hl-4: #9CDCFE;
--light-hl-5: #0000FF;
--dark-hl-5: #569CD6;
--light-hl-6: #0070C1;
--dark-hl-6: #4FC1FF;
--light-hl-7: #008000;
--dark-hl-7: #6A9955;
--light-hl-5: #008000;
--dark-hl-5: #6A9955;
--light-hl-6: #0000FF;
--dark-hl-6: #569CD6;
--light-hl-7: #0070C1;
--dark-hl-7: #4FC1FF;
--light-hl-8: #098658;
--dark-hl-8: #B5CEA8;
--light-hl-9: #267F99;
2 changes: 1 addition & 1 deletion docs/assets/navigation.js
