
fix(deps): update dependency @anthropic-ai/sdk to v0.33.1 #225

Merged
merged 1 commit on Dec 30, 2024

Conversation

renovate[bot] (Contributor) commented Dec 25, 2024

This PR contains the following updates:

| Package | Change | Age | Adoption | Passing | Confidence |
| --- | --- | --- | --- | --- | --- |
| @anthropic-ai/sdk | 0.32.1 -> 0.33.1 | age | adoption | passing | confidence |

Release Notes

anthropics/anthropic-sdk-typescript (@anthropic-ai/sdk)

v0.33.1

Full Changelog: sdk-v0.33.0...sdk-v0.33.1

Bug Fixes
  • vertex: remove anthropic_version deletion for token counting (88221be)
Chores
  • internal: fix some typos (#633) (a0298f5)

v0.33.0

Full Changelog: sdk-v0.32.1...sdk-v0.33.0

Features
  • api: general availability updates (93d1316)
  • api: general availability updates (#631) (b5c92e5)
  • client: add ._request_id property to object responses (#​596) (9d6d584)
  • internal: make git install file structure match npm (#​617) (d3dd7d5)
  • vertex: support token counting (9e76b4d)
Bug Fixes
  • docs: add missing await to pagination example (#609) (e303077)
  • types: remove anthropic-instant-1.2 model (#599) (e222a4d)
Chores
  • api: update spec version (#607) (ea44f9a)
  • api: update spec version (#629) (a25295c)
  • bedrock,vertex: remove unsupported countTokens method (#597) (17b7da5)
  • bedrock: remove unsupported methods (6458dc1)
  • ci: remove unneeded workflow (#594) (7572e48)
  • client: drop unused devDependency (#610) (5d0d523)
  • improve browser error message (#613) (c26121e)
  • internal: bump cross-spawn to v7.0.6 (#624) (e58ba9a)
  • internal: remove unnecessary getRequestClient function (#623) (882c45f)
  • internal: update isAbsoluteURL (#627) (2528ea0)
  • internal: update spec (#630) (82cac06)
  • internal: use reexports not destructuring (#604) (e4daff2)
  • remove redundant word in comment (#615) (ef57a10)
  • tests: limit array example length (#611) (91dc181)
  • types: nicer error class types + jsdocs (#626) (0287993)
Documentation
  • remove suggestion to use npm call out (#614) (6369261)
  • use latest sonnet in example snippets (#625) (f70882b)

Configuration

📅 Schedule: Branch creation - "* 0-4 * * 3" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.
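
For reference, Renovate settings that would produce the behavior listed above might look like this `renovate.json` sketch (field names are from Renovate's configuration schema; the exact contents of this repository's config are an assumption):

```json
{
  "extends": ["config:recommended"],
  "schedule": ["* 0-4 * * 3"],
  "timezone": "UTC",
  "automerge": false,
  "rebaseWhen": "conflicted"
}
```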


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

openai debug - [puLL-Merge] - anthropics/[email protected]

Diff
diff --git .github/workflows/handle-release-pr-title-edit.yml .github/workflows/handle-release-pr-title-edit.yml
deleted file mode 100644
index 8144aaae..00000000
--- .github/workflows/handle-release-pr-title-edit.yml
+++ /dev/null
@@ -1,26 +0,0 @@
-name: Handle release PR title edits
-on:
-  pull_request:
-    types:
-      - edited
-      - unlabeled
-
-jobs:
-  update_pr_content:
-    name: Update pull request content
-    if: |
-      ((github.event.action == 'edited' && github.event.changes.title.from != github.event.pull_request.title) ||
-      (github.event.action == 'unlabeled' && github.event.label.name == 'autorelease: custom version')) &&
-      startsWith(github.event.pull_request.head.ref, 'release-please--') &&
-      github.event.pull_request.state == 'open' &&
-      github.event.sender.login != 'stainless-bot' &&
-      github.event.sender.login != 'stainless-app' &&
-      github.repository == 'anthropics/anthropic-sdk-typescript'
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v4
-      - uses: stainless-api/trigger-release-please@v1
-        with:
-          repo: ${{ github.event.repository.full_name }}
-          stainless-api-key: ${{ secrets.STAINLESS_API_KEY }}
-
diff --git .release-please-manifest.json .release-please-manifest.json
index e6b9ab03..2053c67b 100644
--- .release-please-manifest.json
+++ .release-please-manifest.json
@@ -1,5 +1,5 @@
 {
-  ".": "0.32.1",
-  "packages/vertex-sdk": "0.5.2",
-  "packages/bedrock-sdk": "0.11.2"
+  ".": "0.33.1",
+  "packages/vertex-sdk": "0.6.1",
+  "packages/bedrock-sdk": "0.12.0"
 }
diff --git .stats.yml .stats.yml
index ebe0695a..19e9daeb 100644
--- .stats.yml
+++ .stats.yml
@@ -1,2 +1,2 @@
-configured_endpoints: 10
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/anthropic-25f83d91f601c1962b3701fedf829f678f306aca0758af286ee1586cc9931f75.yml
+configured_endpoints: 19
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/anthropic-be055148d227480fcacc9086c37ac8009dcb487731069ada51af35044f65bee4.yml
diff --git CHANGELOG.md CHANGELOG.md
index f332e42a..a1a57c52 100644
--- CHANGELOG.md
+++ CHANGELOG.md
@@ -1,5 +1,61 @@
 # Changelog
 
+## 0.33.1 (2024-12-17)
+
+Full Changelog: [sdk-v0.33.0...sdk-v0.33.1](https://github.com/anthropics/anthropic-sdk-typescript/compare/sdk-v0.33.0...sdk-v0.33.1)
+
+### Bug Fixes
+
+* **vertex:** remove `anthropic_version` deletion for token counting ([88221be](https://github.com/anthropics/anthropic-sdk-typescript/commit/88221be305d6e13ccf92e6e9cdb00daba45b57db))
+
+
+### Chores
+
+* **internal:** fix some typos ([#633](https://github.com/anthropics/anthropic-sdk-typescript/issues/633)) ([a0298f5](https://github.com/anthropics/anthropic-sdk-typescript/commit/a0298f5f67b8ecd25de416dbb3eada68b86befd7))
+
+## 0.33.0 (2024-12-17)
+
+Full Changelog: [sdk-v0.32.1...sdk-v0.33.0](https://github.com/anthropics/anthropic-sdk-typescript/compare/sdk-v0.32.1...sdk-v0.33.0)
+
+### Features
+
+* **api:** general availability updates ([93d1316](https://github.com/anthropics/anthropic-sdk-typescript/commit/93d13168f950b2cdfc3b7c6664205b06418fea79))
+* **api:** general availability updates ([#631](https://github.com/anthropics/anthropic-sdk-typescript/issues/631)) ([b5c92e5](https://github.com/anthropics/anthropic-sdk-typescript/commit/b5c92e5b74c370ac3f9ba28e915bd54588a42be0))
+* **client:** add ._request_id property to object responses ([#596](https://github.com/anthropics/anthropic-sdk-typescript/issues/596)) ([9d6d584](https://github.com/anthropics/anthropic-sdk-typescript/commit/9d6d58430a216df9888434158bf628ae4b067aba))
+* **internal:** make git install file structure match npm ([#617](https://github.com/anthropics/anthropic-sdk-typescript/issues/617)) ([d3dd7d5](https://github.com/anthropics/anthropic-sdk-typescript/commit/d3dd7d5f8cad460dd18725d5c0f3c8db3f00115d))
+* **vertex:** support token counting ([9e76b4d](https://github.com/anthropics/anthropic-sdk-typescript/commit/9e76b4dc22d62b1239b382bb771b69ad8cff9442))
+
+
+### Bug Fixes
+
+* **docs:** add missing await to pagination example ([#609](https://github.com/anthropics/anthropic-sdk-typescript/issues/609)) ([e303077](https://github.com/anthropics/anthropic-sdk-typescript/commit/e303077ebab73c41adee7d25375b767c3fc78998))
+* **types:** remove anthropic-instant-1.2 model ([#599](https://github.com/anthropics/anthropic-sdk-typescript/issues/599)) ([e222a4d](https://github.com/anthropics/anthropic-sdk-typescript/commit/e222a4d0518aa80671c66ee2a25d87dc87a51316))
+
+
+### Chores
+
+* **api:** update spec version ([#607](https://github.com/anthropics/anthropic-sdk-typescript/issues/607)) ([ea44f9a](https://github.com/anthropics/anthropic-sdk-typescript/commit/ea44f9ac49dcc25a5dfa53880ebf61318ee90f6c))
+* **api:** update spec version ([#629](https://github.com/anthropics/anthropic-sdk-typescript/issues/629)) ([a25295c](https://github.com/anthropics/anthropic-sdk-typescript/commit/a25295cd6db7b57162fdd9049eb8a3c37bb94f08))
+* **bedrock,vertex:** remove unsupported countTokens method ([#597](https://github.com/anthropics/anthropic-sdk-typescript/issues/597)) ([17b7da5](https://github.com/anthropics/anthropic-sdk-typescript/commit/17b7da5ee6f35ea2bdd53a66a662871affae6341))
+* **bedrock:** remove unsupported methods ([6458dc1](https://github.com/anthropics/anthropic-sdk-typescript/commit/6458dc14544c16240a6580a21a36fcf5bde594b2))
+* **ci:** remove unneeded workflow ([#594](https://github.com/anthropics/anthropic-sdk-typescript/issues/594)) ([7572e48](https://github.com/anthropics/anthropic-sdk-typescript/commit/7572e48dbccb2090562399c7ff2d01503c86f445))
+* **client:** drop unused devDependency ([#610](https://github.com/anthropics/anthropic-sdk-typescript/issues/610)) ([5d0d523](https://github.com/anthropics/anthropic-sdk-typescript/commit/5d0d523390d8c34cae836c423940b67defb9d2aa))
+* improve browser error message ([#613](https://github.com/anthropics/anthropic-sdk-typescript/issues/613)) ([c26121e](https://github.com/anthropics/anthropic-sdk-typescript/commit/c26121e84039b7430995b6363876ea9795ba31ed))
+* **internal:** bump cross-spawn to v7.0.6 ([#624](https://github.com/anthropics/anthropic-sdk-typescript/issues/624)) ([e58ba9a](https://github.com/anthropics/anthropic-sdk-typescript/commit/e58ba9a177ec5c8545fd3a3f4fd3d2e7c722f023))
+* **internal:** remove unnecessary getRequestClient function ([#623](https://github.com/anthropics/anthropic-sdk-typescript/issues/623)) ([882c45f](https://github.com/anthropics/anthropic-sdk-typescript/commit/882c45f5a0bd1f4b996d59e6589a205c2111f46b))
+* **internal:** update isAbsoluteURL ([#627](https://github.com/anthropics/anthropic-sdk-typescript/issues/627)) ([2528ea0](https://github.com/anthropics/anthropic-sdk-typescript/commit/2528ea0dcfc83f38e76b58eaadaa5e8c5c0b188d))
+* **internal:** update spec ([#630](https://github.com/anthropics/anthropic-sdk-typescript/issues/630)) ([82cac06](https://github.com/anthropics/anthropic-sdk-typescript/commit/82cac065e2711467773c0ea62848cdf139ed5a11))
+* **internal:** use reexports not destructuring ([#604](https://github.com/anthropics/anthropic-sdk-typescript/issues/604)) ([e4daff2](https://github.com/anthropics/anthropic-sdk-typescript/commit/e4daff2b6a3fb42876ebd06ed4947c88cff919d8))
+* remove redundant word in comment ([#615](https://github.com/anthropics/anthropic-sdk-typescript/issues/615)) ([ef57a10](https://github.com/anthropics/anthropic-sdk-typescript/commit/ef57a103bcfc922a724a7c878f970dbd369b305e))
+* **tests:** limit array example length ([#611](https://github.com/anthropics/anthropic-sdk-typescript/issues/611)) ([91dc181](https://github.com/anthropics/anthropic-sdk-typescript/commit/91dc1812db2cc9e1f4660a13106bad932518b7cf))
+* **types:** nicer error class types + jsdocs ([#626](https://github.com/anthropics/anthropic-sdk-typescript/issues/626)) ([0287993](https://github.com/anthropics/anthropic-sdk-typescript/commit/0287993912ef81bd2c49603d120f49f4f979d75e))
+
+
+### Documentation
+
+* remove suggestion to use `npm` call out ([#614](https://github.com/anthropics/anthropic-sdk-typescript/issues/614)) ([6369261](https://github.com/anthropics/anthropic-sdk-typescript/commit/6369261e3597351f17b8f1a3945ca56b00eba177))
+* use latest sonnet in example snippets ([#625](https://github.com/anthropics/anthropic-sdk-typescript/issues/625)) ([f70882b](https://github.com/anthropics/anthropic-sdk-typescript/commit/f70882b0e8119a414b01b9f0b85fbe1ccb06f122))
+
 ## 0.32.1 (2024-11-05)
 
 Full Changelog: [sdk-v0.32.0...sdk-v0.32.1](https://github.com/anthropics/anthropic-sdk-typescript/compare/sdk-v0.32.0...sdk-v0.32.1)
diff --git README.md README.md
index daba6e63..da3db48e 100644
--- README.md
+++ README.md
@@ -28,7 +28,7 @@ async function main() {
   const message = await client.messages.create({
     max_tokens: 1024,
     messages: [{ role: 'user', content: 'Hello, Claude' }],
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
   });
 
   console.log(message.content);
@@ -49,7 +49,7 @@ const client = new Anthropic();
 const stream = await client.messages.create({
   max_tokens: 1024,
   messages: [{ role: 'user', content: 'Hello, Claude' }],
-  model: 'claude-3-opus-20240229',
+  model: 'claude-3-5-sonnet-latest',
   stream: true,
 });
 for await (const messageStreamEvent of stream) {
@@ -76,7 +76,7 @@ async function main() {
   const params: Anthropic.MessageCreateParams = {
     max_tokens: 1024,
     messages: [{ role: 'user', content: 'Hello, Claude' }],
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
   };
   const message: Anthropic.Message = await client.messages.create(params);
 }
@@ -108,7 +108,7 @@ const anthropic = new Anthropic();
 async function main() {
   const stream = anthropic.messages
     .stream({
-      model: 'claude-3-opus-20240229',
+      model: 'claude-3-5-sonnet-latest',
       max_tokens: 1024,
       messages: [
         {
@@ -146,7 +146,7 @@ await anthropic.beta.messages.batches.create({
     {
       custom_id: 'my-first-request',
       params: {
-        model: 'claude-3-5-sonnet-20240620',
+        model: 'claude-3-5-sonnet-latest',
         max_tokens: 1024,
         messages: [{ role: 'user', content: 'Hello, world' }],
       },
@@ -154,7 +154,7 @@ await anthropic.beta.messages.batches.create({
     {
       custom_id: 'my-second-request',
       params: {
-        model: 'claude-3-5-sonnet-20240620',
+        model: 'claude-3-5-sonnet-latest',
         max_tokens: 1024,
         messages: [{ role: 'user', content: 'Hi again, friend' }],
       },
@@ -198,7 +198,7 @@ async function main() {
     .create({
       max_tokens: 1024,
       messages: [{ role: 'user', content: 'Hello, Claude' }],
-      model: 'claude-3-opus-20240229',
+      model: 'claude-3-5-sonnet-latest',
     })
     .catch(async (err) => {
       if (err instanceof Anthropic.APIError) {
@@ -227,6 +227,18 @@ Error codes are as followed:
 | >=500       | `InternalServerError`      |
 | N/A         | `APIConnectionError`       |
 
+## Request IDs
+
+> For more information on debugging requests, see [these docs](https://docs.anthropic.com/en/api/errors#request-id)
+
+All object responses in the SDK provide a `_request_id` property which is added from the `request-id` response header so that you can quickly log failing requests and report them back to Anthropic.
+
```ts
const message = await client.messages.create({ max_tokens: 1024, messages: [{ role: 'user', content: 'Hello, Claude' }], model: 'claude-3-5-sonnet-latest' });
console.log(message._request_id); // req_018EeWyXxfu5pfWkrYcMdjWG
```
+
+
 ### Retries
 
 Certain errors will be automatically retried 2 times by default, with a short exponential backoff.
@@ -243,7 +255,7 @@ const client = new Anthropic({
 });
 
 // Or, configure per-request:
-await client.messages.create({ max_tokens: 1024, messages: [{ role: 'user', content: 'Hello, Claude' }], model: 'claude-3-opus-20240229' }, {
+await client.messages.create({ max_tokens: 1024, messages: [{ role: 'user', content: 'Hello, Claude' }], model: 'claude-3-5-sonnet-latest' }, {
   maxRetries: 5,
 });

@@ -260,7 +272,7 @@ const client = new Anthropic({
});

// Override per-request:
-await client.messages.create({ max_tokens: 1024, messages: [{ role: 'user', content: 'Hello, Claude' }], model: 'claude-3-opus-20240229' }, {
+await client.messages.create({ max_tokens: 1024, messages: [{ role: 'user', content: 'Hello, Claude' }], model: 'claude-3-5-sonnet-latest' }, {
timeout: 5 * 1000,
});

@@ -295,7 +307,7 @@ for (const betaMessageBatch of page.data) {

// Convenience methods are provided for manually paginating:
while (page.hasNextPage()) {
-  page = page.getNextPage();
+  page = await page.getNextPage();
  // ...
}

@@ -317,7 +329,7 @@ const message = await client.messages.create(
   {
     max_tokens: 1024,
     messages: [{ role: 'user', content: 'Hello, Claude' }],
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
   },
   { headers: { 'anthropic-version': 'My-Custom-Value' } },
 );
@@ -339,7 +351,7 @@ const response = await client.messages
   .create({
     max_tokens: 1024,
     messages: [{ role: 'user', content: 'Hello, Claude' }],
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
   })
   .asResponse();
 console.log(response.headers.get('X-My-Header'));
@@ -349,7 +361,7 @@ const { data: message, response: raw } = await client.messages
   .create({
     max_tokens: 1024,
     messages: [{ role: 'user', content: 'Hello, Claude' }],
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
   })
   .withResponse();
 console.log(raw.headers.get('X-My-Header'));
@@ -461,7 +473,7 @@ await client.messages.create(
   {
     max_tokens: 1024,
     messages: [{ role: 'user', content: 'Hello, Claude' }],
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
   },
   {
     httpAgent: new http.Agent({ keepAlive: false }),
@@ -488,7 +500,7 @@ TypeScript >= 4.5 is supported.
 The following runtimes are supported:
 
 - Node.js 18 LTS or later (non-EOL) versions.
-- Deno v1.28.0 or higher, using `import Anthropic from "npm:@anthropic-ai/sdk"`.
+- Deno v1.28.0 or higher.
 - Bun 1.0 or later.
 - Cloudflare Workers.
 - Vercel Edge Runtime.
diff --git api.md api.md
index ab1abd4c..48d1c9a8 100644
--- api.md
+++ api.md
@@ -1,49 +1,103 @@

# Anthropic

+# Shared
+
+Types:
+
+- APIErrorObject
+- AuthenticationError
+- BillingError
+- ErrorObject
+- ErrorResponse
+- GatewayTimeoutError
+- InvalidRequestError
+- NotFoundError
+- OverloadedError
+- PermissionError
+- RateLimitError
+

# Messages

Types:

-- ContentBlock
-- ContentBlockDeltaEvent
-- ContentBlockStartEvent
-- ContentBlockStopEvent
-- ImageBlockParam
-- InputJSONDelta
-- Message
-- MessageDeltaEvent
-- MessageDeltaUsage
-- MessageParam
-- MessageStartEvent
-- MessageStopEvent
-- MessageStreamEvent
-- Metadata
-- Model
-- RawContentBlockDeltaEvent
-- RawContentBlockStartEvent
-- RawContentBlockStopEvent
-- RawMessageDeltaEvent
-- RawMessageStartEvent
-- RawMessageStopEvent
-- RawMessageStreamEvent
-- TextBlock
-- TextBlockParam
-- TextDelta
-- Tool
-- ToolChoice
-- ToolChoiceAny
-- ToolChoiceAuto
-- ToolChoiceTool
-- ToolResultBlockParam
-- ToolUseBlock
-- ToolUseBlockParam
-- Usage
+- Base64PDFSource
+- CacheControlEphemeral
+- ContentBlock
+- ContentBlockDeltaEvent
+- ContentBlockParam
+- ContentBlockStartEvent
+- ContentBlockStopEvent
+- DocumentBlockParam
+- ImageBlockParam
+- InputJSONDelta
+- Message
+- MessageDeltaEvent
+- MessageDeltaUsage
+- MessageParam
+- MessageStartEvent
+- MessageStopEvent
+- MessageStreamEvent
+- MessageTokensCount
+- Metadata
+- Model
+- RawContentBlockDeltaEvent
+- RawContentBlockStartEvent
+- RawContentBlockStopEvent
+- RawMessageDeltaEvent
+- RawMessageStartEvent
+- RawMessageStopEvent
+- RawMessageStreamEvent
+- TextBlock
+- TextBlockParam
+- TextDelta
+- Tool
+- ToolChoice
+- ToolChoiceAny
+- ToolChoiceAuto
+- ToolChoiceTool
+- ToolResultBlockParam
+- ToolUseBlock
+- ToolUseBlockParam
+- Usage

Methods:

-- client.messages.create({ ...params }) -> Message
+- client.messages.create({ ...params }) -> Message
+- client.messages.countTokens({ ...params }) -> MessageTokensCount

- client.messages.stream(body, options?) -> MessageStream

+## Batches
+
+Types:
+
+- MessageBatch
+- MessageBatchCanceledResult
+- MessageBatchErroredResult
+- MessageBatchExpiredResult
+- MessageBatchIndividualResponse
+- MessageBatchRequestCounts
+- MessageBatchResult
+- MessageBatchSucceededResult
+
+Methods:
+
+- client.messages.batches.create({ ...params }) -> MessageBatch
+- client.messages.batches.retrieve(messageBatchId) -> MessageBatch
+- client.messages.batches.list({ ...params }) -> MessageBatchesPage
+- client.messages.batches.cancel(messageBatchId) -> MessageBatch
+- client.messages.batches.results(messageBatchId) -> Response
+
+# Models
+
+Types:
+
+- ModelInfo
+
+Methods:
+
+- client.models.retrieve(modelId) -> ModelInfo
+- client.models.list({ ...params }) -> ModelInfosPage
+

# Beta

Types:
@@ -51,14 +105,27 @@ Types:

+## Models
+
+Types:
+
+- BetaModelInfo
+
+Methods:
+
+- client.beta.models.retrieve(modelId) -> BetaModelInfo
+- client.beta.models.list({ ...params }) -> BetaModelInfosPage
+

## Messages

Types:
@@ -124,26 +191,3 @@ Methods:

- client.beta.messages.batches.list({ ...params }) -> BetaMessageBatchesPage
- client.beta.messages.batches.cancel(messageBatchId, { ...params }) -> BetaMessageBatch
- client.beta.messages.batches.results(messageBatchId, { ...params }) -> Response

-## PromptCaching
-
-### Messages
-
-Types:
-
-- PromptCachingBetaCacheControlEphemeral
-- PromptCachingBetaImageBlockParam
-- PromptCachingBetaMessage
-- PromptCachingBetaMessageParam
-- PromptCachingBetaTextBlockParam
-- PromptCachingBetaTool
-- PromptCachingBetaToolResultBlockParam
-- PromptCachingBetaToolUseBlockParam
-- PromptCachingBetaUsage
-- RawPromptCachingBetaMessageStartEvent
-- RawPromptCachingBetaMessageStreamEvent
-
-Methods:
-
-- client.beta.promptCaching.messages.create({ ...params }) -> PromptCachingBetaMessage
-- client.beta.promptCaching.messages.stream({ ...params }) -> PromptCachingBetaMessageStream
diff --git examples/cancellation.ts examples/cancellation.ts
index 23fb7ec9..fc8bb0c7 100755
--- examples/cancellation.ts
+++ examples/cancellation.ts
@@ -16,7 +16,7 @@ async function main() {
   const question = 'Hey Claude! How can I recursively list all files in a directory in Rust?';
 
   const stream = await client.messages.create({
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
     stream: true,
     max_tokens: 500,
     messages: [{ role: 'user', content: question }],
diff --git a/examples/count-tokens.ts b/examples/count-tokens.ts
new file mode 100755
index 00000000..e69de29b
diff --git examples/demo.ts examples/demo.ts
index 609e63ef..33fc2d87 100755
--- examples/demo.ts
+++ examples/demo.ts
@@ -12,7 +12,7 @@ async function main() {
         content: 'Hey Claude!?',
       },
     ],
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
     max_tokens: 1024,
   });
   console.dir(result);
diff --git examples/raw-streaming.ts examples/raw-streaming.ts
index 559a6cac..916f2a4d 100755
--- examples/raw-streaming.ts
+++ examples/raw-streaming.ts
@@ -6,7 +6,7 @@ const client = new Anthropic(); // gets API Key from environment variable ANTHRO
 
 async function main() {
   const stream = await client.messages.create({
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
     stream: true,
     max_tokens: 500,
     messages: [
diff --git examples/streaming.ts examples/streaming.ts
index 9ac2da60..bc2c74bd 100755
--- examples/streaming.ts
+++ examples/streaming.ts
@@ -13,7 +13,7 @@ async function main() {
         content: 'Hey Claude! How can I recursively list all files in a directory in Rust?',
       },
     ],
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
     max_tokens: 1024,
   })
     // Once a content block is fully streamed, this event will fire
diff --git examples/tools-streaming.ts examples/tools-streaming.ts
index 96d9cbdc..816201f2 100644
--- examples/tools-streaming.ts
+++ examples/tools-streaming.ts
@@ -33,7 +33,7 @@ async function main() {
         },
       },
     ],
-    model: 'claude-3-haiku-20240307',
+    model: 'claude-3-5-sonnet-latest',
     max_tokens: 1024,
   })
     // When a JSON content block delta is encountered this
diff --git examples/tools.ts examples/tools.ts
index b237043b..1a696bc0 100644
--- examples/tools.ts
+++ examples/tools.ts
@@ -22,7 +22,7 @@ async function main() {
   ];
 
   const message = await client.messages.create({
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
     max_tokens: 1024,
     messages: [userMessage],
     tools,
@@ -38,7 +38,7 @@ async function main() {
   assert(tool);
 
   const result = await client.messages.create({
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
     max_tokens: 1024,
     messages: [
       userMessage,
diff --git package.json package.json
index f713c04f..d8f88f57 100644
--- package.json
+++ package.json
@@ -1,6 +1,6 @@
 {
   "name": "@anthropic-ai/sdk",
-  "version": "0.32.1",
+  "version": "0.33.1",
   "description": "The official TypeScript library for the Anthropic API",
   "author": "Anthropic [email protected]",
   "types": "dist/index.d.ts",
@@ -18,7 +18,7 @@
     "build": "./scripts/build-all",
     "prepublishOnly": "echo 'to publish, run yarn build && (cd dist; yarn publish)' && exit 1",
     "format": "prettier --write --cache --cache-strategy metadata . !dist",
-    "prepare": "if ./scripts/utils/check-is-in-git-install.sh; then ./scripts/build; fi",
+    "prepare": "if ./scripts/utils/check-is-in-git-install.sh; then ./scripts/build && ./scripts/utils/git-swap.sh; fi",
     "tsn": "ts-node -r tsconfig-paths/register",
     "lint": "./scripts/lint",
     "fix": "./scripts/format"
@@ -45,7 +45,6 @@
     "jest": "^29.4.0",
     "prettier": "^3.0.0",
     "ts-jest": "^29.1.0",
-    "ts-morph": "^19.0.0",
     "ts-node": "^10.5.0",
     "tsc-multi": "^1.1.0",
     "tsconfig-paths": "^4.0.0",
diff --git packages/bedrock-sdk/CHANGELOG.md packages/bedrock-sdk/CHANGELOG.md
index 174cbb90..837af37e 100644
--- packages/bedrock-sdk/CHANGELOG.md
+++ packages/bedrock-sdk/CHANGELOG.md
@@ -1,5 +1,24 @@
 # Changelog
 
+## 0.12.0 (2024-12-17)
+
+Full Changelog: bedrock-sdk-v0.11.2...bedrock-sdk-v0.12.0
+
+### Features
+
+* **api:** general availability updates (#631) (b5c92e5)
+
+
+### Chores
+
+* **bedrock,vertex:** remove unsupported countTokens method (#597) (17b7da5)
+* **bedrock:** remove unsupported methods (6458dc1)
+
+
+### Documentation
+
+* use latest sonnet in example snippets (#625) (f70882b)
+
 ## 0.11.2 (2024-11-05)
 
 Full Changelog: bedrock-sdk-v0.11.1...bedrock-sdk-v0.11.2
diff --git packages/bedrock-sdk/README.md packages/bedrock-sdk/README.md
index f6eca6f5..74765c47 100644
--- packages/bedrock-sdk/README.md
+++ packages/bedrock-sdk/README.md
@@ -27,7 +27,7 @@ const client = new AnthropicBedrock();
 
 async function main() {
   const message = await client.messages.create({
-    model: 'anthropic.claude-3-sonnet-20240229-v1:0',
+    model: 'anthropic.claude-3-5-sonnet-20241022-v2:0',
     messages: [
       {
         role: 'user',
diff --git packages/bedrock-sdk/examples/demo.ts packages/bedrock-sdk/examples/demo.ts
index 810514e8..a918b9ca 100644
--- packages/bedrock-sdk/examples/demo.ts
+++ packages/bedrock-sdk/examples/demo.ts
@@ -11,7 +11,7 @@ const anthropic = new AnthropicBedrock();
 
 async function main() {
   const message = await anthropic.messages.create({
-    model: 'anthropic.claude-3-sonnet-20240229-v1:0',
+    model: 'anthropic.claude-3-5-sonnet-20241022-v2:0',
     messages: [
       {
         role: 'user',
diff --git packages/bedrock-sdk/examples/streaming.ts packages/bedrock-sdk/examples/streaming.ts
index e1fac81f..5c577a2d 100644
--- packages/bedrock-sdk/examples/streaming.ts
+++ packages/bedrock-sdk/examples/streaming.ts
@@ -11,7 +11,7 @@ const client = new AnthropicBedrock();
 
 async function main() {
   const stream = await client.messages.create({
-    model: 'anthropic.claude-3-sonnet-20240229-v1:0',
+    model: 'anthropic.claude-3-5-sonnet-20241022-v2:0',
     messages: [
       {
         role: 'user',
diff --git packages/bedrock-sdk/package.json packages/bedrock-sdk/package.json
index a0e56703..352931a5 100644
--- packages/bedrock-sdk/package.json
+++ packages/bedrock-sdk/package.json
@@ -1,6 +1,6 @@
 {
   "name": "@anthropic-ai/bedrock-sdk",
-  "version": "0.11.2",
+  "version": "0.12.0",
   "description": "The official TypeScript library for the Anthropic Bedrock API",
   "author": "Anthropic [email protected]",
   "types": "dist/index.d.ts",
diff --git packages/bedrock-sdk/src/client.ts packages/bedrock-sdk/src/client.ts
index 523df8ba..86bd17ef 100644
--- packages/bedrock-sdk/src/client.ts
+++ packages/bedrock-sdk/src/client.ts
@@ -74,7 +74,7 @@ export class AnthropicBedrock extends Core.APIClient {
     this.awsSessionToken = awsSessionToken;
   }
 
-  messages: Resources.Messages = new Resources.Messages(this);
+  messages: MessagesResource = makeMessagesResource(this);
   completions: Resources.Completions = new Resources.Completions(this);
   beta: BetaResource = makeBetaResource(this);
 
@@ -159,10 +159,27 @@ export class AnthropicBedrock extends Core.APIClient {
 }
 
 /**
- * The Bedrock API does not currently support prompt caching or the Batch API.
+ * The Bedrock API does not currently support token counting or the Batch API.
+ */
+type MessagesResource = Omit<Resources.Messages, 'batches' | 'countTokens'>;
+
+function makeMessagesResource(client: AnthropicBedrock): MessagesResource {
+  const resource = new Resources.Messages(client);
+  // @ts-expect-error we're deleting non-optional properties
+  delete resource.batches;
+  // @ts-expect-error we're deleting non-optional properties
+  delete resource.countTokens;
+  return resource;
+}
+
+/**
+ * The Bedrock API does not currently support prompt caching, token counting or the Batch API.
  */
 type BetaResource = Omit<Resources.Beta, 'promptCaching' | 'messages'> & {
-  messages: Omit<Resources.Beta['messages'], 'batches'>;
+  messages: Omit<Resources.Beta['messages'], 'batches' | 'countTokens'>;
 };
 
 function makeBetaResource(client: AnthropicBedrock): BetaResource {
@@ -174,5 +191,8 @@ function makeBetaResource(client: AnthropicBedrock): BetaResource {
   // @ts-expect-error we're deleting non-optional properties
   delete resource.messages.batches;
 
+  // @ts-expect-error we're deleting non-optional properties
+  delete resource.messages.countTokens;
+
   return resource;
 }
diff --git packages/vertex-sdk/CHANGELOG.md packages/vertex-sdk/CHANGELOG.md
index 418af52a..94191164 100644
--- packages/vertex-sdk/CHANGELOG.md
+++ packages/vertex-sdk/CHANGELOG.md
@@ -1,5 +1,32 @@
 # Changelog
 
+## 0.6.1 (2024-12-17)
+
+Full Changelog: vertex-sdk-v0.6.0...vertex-sdk-v0.6.1
+
+### Bug Fixes
+
+* **vertex:** remove anthropic_version deletion for token counting (88221be)
+
+## 0.6.0 (2024-12-17)
+
+Full Changelog: vertex-sdk-v0.5.2...vertex-sdk-v0.6.0
+
+### Features
+
+* **api:** general availability updates (#631) (b5c92e5)
+* **vertex:** support token counting (9e76b4d)
+
+
+### Chores
+
+* **bedrock,vertex:** remove unsupported countTokens method (#597) (17b7da5)
+
+
+### Documentation
+
+* use latest sonnet in example snippets (#625) (f70882b)
+
 ## 0.5.2 (2024-11-05)
 
 Full Changelog: vertex-sdk-v0.5.1...vertex-sdk-v0.5.2
diff --git packages/vertex-sdk/README.md packages/vertex-sdk/README.md
index 6e63a8c5..6c9a9c93 100644
--- packages/vertex-sdk/README.md
+++ packages/vertex-sdk/README.md
@@ -30,7 +30,7 @@ async function main() {
         content: 'Hey Claude!',
       },
     ],
-    model: 'claude-3-sonnet@20240229',
+    model: 'claude-3-5-sonnet-v2@20241022',
     max_tokens: 300,
   });
   console.log(JSON.stringify(result, null, 2));
diff --git packages/vertex-sdk/examples/vertex.ts packages/vertex-sdk/examples/vertex.ts
index 62474cc7..75aba347 100644
--- packages/vertex-sdk/examples/vertex.ts
+++ packages/vertex-sdk/examples/vertex.ts
@@ -14,7 +14,7 @@ async function main() {
         content: 'Hello!',
       },
     ],
-    model: 'claude-3-sonnet@20240229',
+    model: 'claude-3-5-sonnet-v2@20241022',
     max_tokens: 300,
   });
   console.log(JSON.stringify(result, null, 2));
diff --git packages/vertex-sdk/package.json packages/vertex-sdk/package.json
index 210c96d5..43fc356d 100644
--- packages/vertex-sdk/package.json
+++ packages/vertex-sdk/package.json
@@ -1,6 +1,6 @@
 {
   "name": "@anthropic-ai/vertex-sdk",
-  "version": "0.5.2",
+  "version": "0.6.1",
   "description": "The official TypeScript library for the Anthropic Vertex API",
   "author": "Anthropic [email protected]",
   "types": "dist/index.d.ts",
diff --git packages/vertex-sdk/src/client.ts packages/vertex-sdk/src/client.ts
index 06231649..f1046455 100644
--- packages/vertex-sdk/src/client.ts
+++ packages/vertex-sdk/src/client.ts
@@ -83,7 +83,7 @@ export class AnthropicVertex extends Core.APIClient {
     this._authClientPromise = this._auth.getClient();
   }
 
-  messages: Resources.Messages = new Resources.Messages(this);
+  messages: MessagesResource = makeMessagesResource(this);
   beta: BetaResource = makeBetaResource(this);
 
   protected override defaultQuery(): Core.DefaultQuery | undefined {
@@ -147,15 +147,42 @@ export class AnthropicVertex extends Core.APIClient {
       options.path = `/projects/${this.projectId}/locations/${this.region}/publishers/anthropic/models/${model}:${specifier}`;
     }
 
+    if (
+      options.path === '/v1/messages/count_tokens' ||
+      (options.path == '/v1/messages/count_tokens?beta=true' && options.method === 'post')
+    ) {
+      if (!this.projectId) {
+        throw new Error(
+          'No projectId was given and it could not be resolved from credentials. The client should be instantiated with the `projectId` option or the `ANTHROPIC_VERTEX_PROJECT_ID` environment variable should be set.',
+        );
+      }
+
+      options.path = `/projects/${this.projectId}/locations/${this.region}/publishers/anthropic/models/count-tokens:rawPredict`;
+    }
+
     return super.buildRequest(options);
   }
 }
 
 /**
- * The Vertex API does not currently support prompt caching or the Batch API.
+ * The Vertex SDK does not currently support the Batch API.
  */
+type MessagesResource = Omit<Resources.Messages, 'batches'>;
+
+function makeMessagesResource(client: AnthropicVertex): MessagesResource {
+  const resource = new Resources.Messages(client);
+
+  // @ts-expect-error we're deleting non-optional properties
+  delete resource.batches;
+
+  return resource;
+}
+
+/**
+ * The Vertex API does not currently support prompt caching, token counting or the Batch API.
+ */
 type BetaResource = Omit<Resources.Beta, 'promptCaching' | 'messages'> & {
-  messages: Omit<Resources.Beta['messages'], 'batches'>;
+  messages: Omit<Resources.Beta['messages'], 'batches' | 'countTokens'>;
 };
 
 function makeBetaResource(client: AnthropicVertex): BetaResource {
@@ -167,5 +194,8 @@ function makeBetaResource(client: AnthropicVertex): BetaResource {
   // @ts-expect-error we're deleting non-optional properties
   delete resource.messages.batches;
 
+  // @ts-expect-error we're deleting non-optional properties
+  delete resource.messages.countTokens;
+
   return resource;
 }
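The path rewrite added to `buildRequest` above can be sketched in isolation. The following standalone function (`rewriteCountTokensPath` is a hypothetical name, not part of the SDK) mimics how the Vertex client maps the generic count-tokens endpoint onto the project/region-scoped `:rawPredict` URL:

```typescript
// Sketch of the count_tokens path rewrite from AnthropicVertex.buildRequest.
// This is an illustrative helper, not the SDK's actual API.
function rewriteCountTokensPath(
  path: string,
  method: string,
  projectId: string | undefined,
  region: string,
): string {
  const isCountTokens =
    path === '/v1/messages/count_tokens' ||
    (path === '/v1/messages/count_tokens?beta=true' && method === 'post');
  if (!isCountTokens) return path;

  // Token counting needs a resolved project, unlike some other requests.
  if (!projectId) {
    throw new Error('No projectId was given and it could not be resolved from credentials.');
  }
  return `/projects/${projectId}/locations/${region}/publishers/anthropic/models/count-tokens:rawPredict`;
}

console.log(rewriteCountTokensPath('/v1/messages/count_tokens', 'post', 'my-project', 'us-east5'));
// → /projects/my-project/locations/us-east5/publishers/anthropic/models/count-tokens:rawPredict
```

Any other path is returned unchanged, which is why ordinary message requests are unaffected by this hunk.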
diff --git scripts/build scripts/build
index ed2b9941..0bee923e 100755
--- scripts/build
+++ scripts/build
@@ -32,7 +32,7 @@ npm exec tsc-multi
 # copy over handwritten .js/.mjs/.d.ts files
 cp src/_shims/*.{d.ts,js,mjs,md} dist/_shims
 cp src/_shims/auto/*.{d.ts,js,mjs} dist/_shims/auto
-# we need to add exports = module.exports = Anthropic TypeScript to index.js;
+# we need to add exports = module.exports = Anthropic to index.js;
 # No way to get that from index.ts because it would cause compile errors
 # when building .mjs
 node scripts/utils/fix-index-exports.cjs
diff --git scripts/utils/check-is-in-git-install.sh scripts/utils/check-is-in-git-install.sh
index 36bcedc2..1354eb43 100755
--- scripts/utils/check-is-in-git-install.sh
+++ scripts/utils/check-is-in-git-install.sh
@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 # Check if you happen to call prepare for a repository that's already in node_modules.
 [ "$(basename "$(dirname "$PWD")")" = 'node_modules' ] ||
 # The name of the containing directory that `npm` uses, which looks like
diff --git a/scripts/utils/git-swap.sh b/scripts/utils/git-swap.sh
new file mode 100755
index 00000000..79d1888e
--- /dev/null
+++ scripts/utils/git-swap.sh
@@ -0,0 +1,13 @@
+#!/usr/bin/env bash
+set -exuo pipefail
+# the package is published to NPM from ./dist
+# we want the final file structure for git installs to match the npm installs, so we:
+
+# delete everything except ./dist and ./node_modules
+find . -maxdepth 1 -mindepth 1 ! -name 'dist' ! -name 'node_modules' -exec rm -rf '{}' +
+
+# move everything from ./dist to .
+mv dist/* .
+
+# delete the now-empty ./dist
+rmdir dist
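The delete/move/cleanup sequence in the new script can be exercised safely in a throwaway directory. This sketch (file names are made up for illustration) reproduces the three steps in a `mktemp` sandbox so nothing outside it is touched:

```shell
# Demonstrate the git-swap.sh sequence inside a temporary sandbox.
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p dist node_modules src
echo 'built output' > dist/index.js
echo 'source file' > src/index.ts

# delete everything except ./dist and ./node_modules
find . -maxdepth 1 -mindepth 1 ! -name 'dist' ! -name 'node_modules' -exec rm -rf '{}' +

# move everything from ./dist to ., then drop the now-empty ./dist
mv dist/* .
rmdir dist

ls
```

After the run, `index.js` sits at the top level next to `node_modules`, matching the layout of an npm install published from `./dist`.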
diff --git src/core.ts src/core.ts
index a3e22246..ea8d8dca 100644
--- src/core.ts
+++ src/core.ts
@@ -37,7 +37,7 @@ type APIResponseProps = {
   controller: AbortController;
 };
 
-async function defaultParseResponse<T>(props: APIResponseProps): Promise<T> {
+async function defaultParseResponse<T>(props: APIResponseProps): Promise<WithRequestID<T>> {
   const { response } = props;
   if (props.options.stream) {
     debug('response', response.status, response.url, response.headers, response.body);
@@ -54,11 +54,11 @@ async function defaultParseResponse<T>(props: APIResponseProps): Promise<T> {
 
   // fetch refuses to read the body when the status code is 204.
   if (response.status === 204) {
-    return null as T;
+    return null as WithRequestID<T>;
   }
 
   if (props.options.__binaryResponse) {
-    return response as unknown as T;
+    return response as unknown as WithRequestID<T>;
   }
 
   const contentType = response.headers.get('content-type');
@@ -69,26 +69,44 @@ async function defaultParseResponse<T>(props: APIResponseProps): Promise<T> {
 
     debug('response', response.status, response.url, response.headers, json);
 
-    return json as T;
+    return _addRequestID(json as T, response);
   }
 
   const text = await response.text();
   debug('response', response.status, response.url, response.headers, text);
 
   // TODO handle blob, arraybuffer, other content types, etc.
-  return text as unknown as T;
+  return text as unknown as WithRequestID<T>;
+}
+
+type WithRequestID<T> =
+  T extends Array<any> | Response | AbstractPage<any> ? T
+  : T extends Record<string, any> ? T & { _request_id?: string | null }
+  : T;
+
+function _addRequestID<T>(value: T, response: Response): WithRequestID<T> {
+  if (!value || typeof value !== 'object' || Array.isArray(value)) {
+    return value as WithRequestID<T>;
+  }
+
+  return Object.defineProperty(value, '_request_id', {
+    value: response.headers.get('request-id'),
+    enumerable: false,
+  }) as WithRequestID<T>;
 }
 
 /**
  * A subclass of `Promise` providing additional helper methods
  * for interacting with the SDK.
  */
-export class APIPromise<T> extends Promise<T> {
-  private parsedPromise: Promise<T> | undefined;
+export class APIPromise<T> extends Promise<WithRequestID<T>> {
+  private parsedPromise: Promise<WithRequestID<T>> | undefined;
 
   constructor(
     private responsePromise: Promise<APIResponseProps>,
-    private parseResponse: (props: APIResponseProps) => PromiseOrValue<T> = defaultParseResponse,
+    private parseResponse: (
+      props: APIResponseProps,
+    ) => PromiseOrValue<WithRequestID<T>> = defaultParseResponse,
   ) {
     super((resolve) => {
       // this is maybe a bit weird but this has to be a no-op to not implicitly
@@ -100,7 +118,7 @@ export class APIPromise<T> extends Promise<T> {
 
   _thenUnwrap<U>(transform: (data: T, props: APIResponseProps) => U): APIPromise<U> {
     return new APIPromise(this.responsePromise, async (props) =>
-      transform(await this.parseResponse(props), props),
+      _addRequestID(transform(await this.parseResponse(props), props), props.response),
     );
   }
 
@@ -120,33 +138,35 @@ export class APIPromise<T> extends Promise<T> {
   asResponse(): Promise<Response> {
     return this.responsePromise.then((p) => p.response);
   }
+
   /**
-   * Gets the parsed response data and the raw `Response` instance.
+   * Gets the parsed response data, the raw `Response` instance and the ID of the request,
+   * returned via the `request-id` header which is useful for debugging requests and reporting
+   * issues to Anthropic.
    *
    * If you just want to get the raw `Response` instance without parsing it,
    * you can use {@link asResponse()}.
    *
   * 👋 Getting the wrong TypeScript type for `Response`?
   * Try setting `"moduleResolution": "NodeNext"` if you can,
   * or add one of these imports before your first `import … from '@anthropic-ai/sdk'`:
   * - `import '@anthropic-ai/sdk/shims/node'` (if you're running on Node)
   * - `import '@anthropic-ai/sdk/shims/web'` (otherwise)
    */
-  async withResponse(): Promise<{ data: T; response: Response }> {
+  async withResponse(): Promise<{ data: T; response: Response; request_id: string | null | undefined }> {
     const [data, response] = await Promise.all([this.parse(), this.asResponse()]);
-    return { data, response };
+    return { data, response, request_id: response.headers.get('request-id') };
   }
 
-  private parse(): Promise<T> {
+  private parse(): Promise<WithRequestID<T>> {
     if (!this.parsedPromise) {
-      this.parsedPromise = this.responsePromise.then(this.parseResponse);
+      this.parsedPromise = this.responsePromise.then(this.parseResponse) as any as Promise<WithRequestID<T>>;
     }
     return this.parsedPromise;
   }
 
-  override then<TResult1 = T, TResult2 = never>(
-    onfulfilled?: ((value: T) => TResult1 | PromiseLike<TResult1>) | undefined | null,
+  override then<TResult1 = WithRequestID<T>, TResult2 = never>(
+    onfulfilled?: ((value: WithRequestID<T>) => TResult1 | PromiseLike<TResult1>) | undefined | null,
     onrejected?: ((reason: any) => TResult2 | PromiseLike<TResult2>) | undefined | null,
   ): Promise<TResult1 | TResult2> {
     return this.parse().then(onfulfilled, onrejected);
@@ -154,11 +174,11 @@ export class APIPromise<T> extends Promise<T> {
 
   override catch<TResult = never>(
     onrejected?: ((reason: any) => TResult | PromiseLike<TResult>) | undefined | null,
-  ): Promise<T | TResult> {
+  ): Promise<WithRequestID<T> | TResult> {
     return this.parse().catch(onrejected);
   }
 
-  override finally(onfinally?: (() => void) | undefined | null): Promise<T> {
+  override finally(onfinally?: (() => void) | undefined | null): Promise<WithRequestID<T>> {
     return this.parse().finally(onfinally);
   }
 }
@@ -177,7 +197,7 @@ export abstract class APIClient {
     maxRetries = 2,
     timeout = 600000, // 10 minutes
     httpAgent,
-    fetch: overridenFetch,
+    fetch: overriddenFetch,
   }: {
     baseURL: string;
     maxRetries?: number | undefined;
@@ -190,7 +210,7 @@ export abstract class APIClient {
     this.timeout = validatePositiveInteger('timeout', timeout);
     this.httpAgent = httpAgent;
 
-    this.fetch = overridenFetch ?? fetch;
+    this.fetch = overriddenFetch ?? fetch;
   }
 
   protected authHeaders(opts: FinalRequestOptions): Headers {
@@ -537,19 +557,13 @@ export abstract class APIClient {
     const timeout = setTimeout(() => controller.abort(), ms);
 
     return (
-      this.getRequestClient()
-        // use undefined this binding; fetch errors if bound to something else in browser/cloudflare
-        .fetch.call(undefined, url, { signal: controller.signal as any, ...options })
-        .finally(() => {
-          clearTimeout(timeout);
-        })
+      // use undefined this binding; fetch errors if bound to something else in browser/cloudflare
+      this.fetch.call(undefined, url, { signal: controller.signal as any, ...options }).finally(() => {
+        clearTimeout(timeout);
+      })
     );
   }
 
-  protected getRequestClient(): RequestClient {
-    return { fetch: this.fetch };
-  }
-
   private shouldRetry(response: Response): boolean {
     // Note this is not a standard header.
     const shouldRetryHeader = response.headers.get('x-should-retry');
@@ -724,7 +738,13 @@ export class PagePromise<
   ) {
     super(
       request,
-      async (props) => new Page(client, props.response, await defaultParseResponse(props), props.options),
+      async (props) =>
+        new Page(
+          client,
+          props.response,
+          await defaultParseResponse(props),
+          props.options,
+        ) as WithRequestID<PageClass>,
     );
   }
 
@@ -992,8 +1012,8 @@ export const safeJSON = (text: string) => {
   }
 };
 
-// https://stackoverflow.com/a/19709846
-const startsWithSchemeRegexp = new RegExp('^(?:[a-z]+:)?//', 'i');
+// https://url.spec.whatwg.org/#url-scheme-string
+const startsWithSchemeRegexp = /^[a-z][a-z0-9+.-]*:/i;
const isAbsoluteURL = (url: string): boolean => {
  return startsWithSchemeRegexp.test(url);
};
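The effect of the new `_addRequestID` helper can be sketched in isolation. This minimal reimplementation (a `Map` stands in for the `fetch` `Response` headers; the function name mirrors the diff but this is an illustrative sketch, not the SDK's export) shows why the attached `_request_id` never leaks into `JSON.stringify` or `Object.keys`:

```typescript
// Sketch of the non-enumerable request-id attachment shown in the diff above.
function addRequestID<T>(value: T, headers: Map<string, string>): T {
  // Only plain object responses get the property; arrays and primitives pass through.
  if (!value || typeof value !== 'object' || Array.isArray(value)) return value;

  return Object.defineProperty(value, '_request_id', {
    value: headers.get('request-id') ?? null,
    enumerable: false, // invisible to JSON.stringify / Object.keys / for...in
  });
}

const headers = new Map([['request-id', 'req_abc123']]);
const message = addRequestID({ role: 'assistant' }, headers) as {
  role: string;
  _request_id?: string | null;
};

console.log(message._request_id); // req_abc123
console.log(JSON.stringify(message)); // {"role":"assistant"}
```

Making the property non-enumerable is what lets the SDK surface the ID for debugging without changing the serialized shape of any response object.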
diff --git src/error.ts src/error.ts
index e9f24916..64525004 100644
--- src/error.ts
+++ src/error.ts
@@ -4,19 +4,21 @@ import { castToError, Headers } from './core';
 
 export class AnthropicError extends Error {}
 
-export class APIError extends AnthropicError {
-  readonly status: number | undefined;
-  readonly headers: Headers | undefined;
-  readonly error: Object | undefined;
+export class APIError<
+  TStatus extends number | undefined = number | undefined,
+  THeaders extends Headers | undefined = Headers | undefined,
+  TError extends Object | undefined = Object | undefined,
+> extends AnthropicError {
+  /** HTTP status for the response that caused the error */
+  readonly status: TStatus;
+  /** HTTP headers for the response that caused the error */
+  readonly headers: THeaders;
+  /** JSON body of the response that caused the error */
+  readonly error: TError;
 
   readonly request_id: string | null | undefined;
 
-  constructor(
-    status: number | undefined,
-    error: Object | undefined,
-    message: string | undefined,
-    headers: Headers | undefined,
-  ) {
+  constructor(status: TStatus, error: TError, message: string | undefined, headers: THeaders) {
     super(`${APIError.makeMessage(status, error, message)}`);
     this.status = status;
     this.headers = headers;
@@ -51,7 +53,7 @@ export class APIError extends AnthropicError {
     message: string | undefined,
     headers: Headers | undefined,
   ): APIError {
-    if (!status) {
+    if (!status || !headers) {
       return new APIConnectionError({ message, cause: castToError(errorResponse) });
     }
 
@@ -93,17 +95,13 @@ export class APIError extends AnthropicError {
   }
 }
 
-export class APIUserAbortError extends APIError {
-  override readonly status: undefined = undefined;
-
+export class APIUserAbortError extends APIError<undefined, undefined, undefined> {
   constructor({ message }: { message?: string } = {}) {
     super(undefined, undefined, message || 'Request was aborted.', undefined);
   }
 }
 
-export class APIConnectionError extends APIError {
-  override readonly status: undefined = undefined;
-
+export class APIConnectionError extends APIError<undefined, undefined, undefined> {
   constructor({ message, cause }: { message?: string | undefined; cause?: Error | undefined }) {
     super(undefined, undefined, message || 'Connection error.', undefined);
     // in some environments the 'cause' property is already declared
@@ -118,32 +116,18 @@ export class APIConnectionTimeoutError extends APIConnectionError {
   }
 }
 
-export class BadRequestError extends APIError {
-  override readonly status: 400 = 400;
-}
+export class BadRequestError extends APIError<400, Headers> {}
 
-export class AuthenticationError extends APIError {
-  override readonly status: 401 = 401;
-}
+export class AuthenticationError extends APIError<401, Headers> {}
 
-export class PermissionDeniedError extends APIError {
-  override readonly status: 403 = 403;
-}
+export class PermissionDeniedError extends APIError<403, Headers> {}
 
-export class NotFoundError extends APIError {
-  override readonly status: 404 = 404;
-}
+export class NotFoundError extends APIError<404, Headers> {}
 
-export class ConflictError extends APIError {
-  override readonly status: 409 = 409;
-}
+export class ConflictError extends APIError<409, Headers> {}
 
-export class UnprocessableEntityError extends APIError {
-  override readonly status: 422 = 422;
-}
+export class UnprocessableEntityError extends APIError<422, Headers> {}
 
-export class RateLimitError extends APIError {
-  override readonly status: 429 = 429;
-}
+export class RateLimitError extends APIError<429, Headers> {}
 
-export class InternalServerError extends APIError {}
+export class InternalServerError extends APIError<number, Headers> {}
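The move from per-class `status` field overrides to generic type parameters can be illustrated with a self-contained sketch (`ApiErr` and `BadRequest` are simplified stand-ins, not the SDK's classes). The subclass fixes the status to the literal type `400`, so an `instanceof` check narrows `err.status` at compile time while the runtime value stays the same:

```typescript
// Sketch of the generic-error pattern adopted in the diff above.
class ApiErr<TStatus extends number | undefined = number | undefined> extends Error {
  constructor(
    readonly status: TStatus, // narrowed to a literal type in subclasses
    message: string,
  ) {
    super(message);
  }
}

class BadRequest extends ApiErr<400> {
  constructor(message: string) {
    super(400, message);
  }
}

const err: ApiErr = new BadRequest('invalid model name');
if (err instanceof BadRequest) {
  // err.status has type 400 here, not number | undefined.
  console.log(err.status); // 400
}
```

Compared with `override readonly status: 400 = 400;`, the generic form removes a duplicated field declaration per subclass and also lets `headers` and `error` be typed as definitely present for HTTP-status errors.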
diff --git src/index.ts src/index.ts
index 70c9d5d7..bfca4fc8 100644
--- src/index.ts
+++ src/index.ts
@@ -14,14 +14,35 @@ import {
   CompletionCreateParamsStreaming,
   Completions,
 } from './resources/completions';
+import { ModelInfo, ModelInfosPage, ModelListParams, Models } from './resources/models';
 import {
+  AnthropicBeta,
+  Beta,
+  BetaAPIError,
+  BetaAuthenticationError,
+  BetaBillingError,
+  BetaError,
+  BetaErrorResponse,
+  BetaGatewayTimeoutError,
+  BetaInvalidRequestError,
+  BetaNotFoundError,
+  BetaOverloadedError,
+  BetaPermissionError,
+  BetaRateLimitError,
+} from './resources/beta/beta';
+import {
+  Base64PDFSource,
+  CacheControlEphemeral,
   ContentBlock,
   ContentBlockDeltaEvent,
+  ContentBlockParam,
   ContentBlockStartEvent,
   ContentBlockStopEvent,
+  DocumentBlockParam,
   ImageBlockParam,
   InputJSONDelta,
   Message,
+  MessageCountTokensParams,
   MessageCreateParams,
   MessageCreateParamsNonStreaming,
   MessageCreateParamsStreaming,
@@ -32,6 +53,7 @@ import {
   MessageStopEvent,
   MessageStreamEvent,
   MessageStreamParams,
+  MessageTokensCount,
   Messages,
   Metadata,
   Model,
@@ -54,20 +76,7 @@ import {
   ToolUseBlock,
   ToolUseBlockParam,
   Usage,
-} from './resources/messages';
-import {
-  AnthropicBeta,
-  Beta,
-  BetaAPIError,
-  BetaAuthenticationError,
-  BetaError,
-  BetaErrorResponse,
-  BetaInvalidRequestError,
-  BetaNotFoundError,
-  BetaOverloadedError,
-  BetaPermissionError,
-  BetaRateLimitError,
-} from './resources/beta/beta';
+} from './resources/messages/messages';
 
 export interface ClientOptions {
   /**
@@ -181,7 +190,7 @@ export class Anthropic extends Core.APIClient {
 
     if (!options.dangerouslyAllowBrowser && Core.isRunningInBrowser()) {
       throw new Errors.AnthropicError(
-        "It looks like you're running in a browser-like environment.\n\nThis is disabled by default, as it risks exposing your secret API credentials to attackers.\nIf you understand the risks and have appropriate mitigations in place,\nyou can set the `dangerouslyAllowBrowser` option to `true`, e.g.,\n\nnew Anthropic({ apiKey, dangerouslyAllowBrowser: true });\n\nTODO: link!\n",
+        "It looks like you're running in a browser-like environment.\n\nThis is disabled by default, as it risks exposing your secret API credentials to attackers.\nIf you understand the risks and have appropriate mitigations in place,\nyou can set the `dangerouslyAllowBrowser` option to `true`, e.g.,\n\nnew Anthropic({ apiKey, dangerouslyAllowBrowser: true });\n",
       );
     }
 
@@ -201,6 +210,7 @@ export class Anthropic extends Core.APIClient {
 
   completions: API.Completions = new API.Completions(this);
   messages: API.Messages = new API.Messages(this);
+  models: API.Models = new API.Models(this);
   beta: API.Beta = new API.Beta(this);
 
   protected override defaultQuery(): Core.DefaultQuery | undefined {
@@ -289,31 +299,11 @@ export class Anthropic extends Core.APIClient {
   static fileFromPath = Uploads.fileFromPath;
 }
 
-export const { HUMAN_PROMPT, AI_PROMPT } = Anthropic;
-
-export {
-  AnthropicError,
-  APIError,
-  APIConnectionError,
-  APIConnectionTimeoutError,
-  APIUserAbortError,
-  NotFoundError,
-  ConflictError,
-  RateLimitError,
-  BadRequestError,
-  AuthenticationError,
-  InternalServerError,
-  PermissionDeniedError,
-  UnprocessableEntityError,
-} from './error';
-
-export import toFile = Uploads.toFile;
-export import fileFromPath = Uploads.fileFromPath;
-
 Anthropic.Completions = Completions;
 Anthropic.Messages = Messages;
+Anthropic.Models = Models;
+Anthropic.ModelInfosPage = ModelInfosPage;
 Anthropic.Beta = Beta;
 
 export declare namespace Anthropic {
   export type RequestOptions = Core.RequestOptions;
@@ -330,10 +320,14 @@ export declare namespace Anthropic {
 
   export {
     Messages as Messages,
+    type Base64PDFSource as Base64PDFSource,
+    type CacheControlEphemeral as CacheControlEphemeral,
     type ContentBlock as ContentBlock,
     type ContentBlockDeltaEvent as ContentBlockDeltaEvent,
+    type ContentBlockParam as ContentBlockParam,
     type ContentBlockStartEvent as ContentBlockStartEvent,
     type ContentBlockStopEvent as ContentBlockStopEvent,
+    type DocumentBlockParam as DocumentBlockParam,
     type ImageBlockParam as ImageBlockParam,
     type InputJSONDelta as InputJSONDelta,
     type Message as Message,
@@ -343,6 +337,7 @@ export declare namespace Anthropic {
     type MessageStartEvent as MessageStartEvent,
     type MessageStopEvent as MessageStopEvent,
     type MessageStreamEvent as MessageStreamEvent,
+    type MessageTokensCount as MessageTokensCount,
     type Metadata as Metadata,
     type Model as Model,
     type RawContentBlockDeltaEvent as RawContentBlockDeltaEvent,
@@ -368,6 +363,14 @@ export declare namespace Anthropic {
     type MessageCreateParamsNonStreaming as MessageCreateParamsNonStreaming,
     type MessageCreateParamsStreaming as MessageCreateParamsStreaming,
     type MessageStreamParams as MessageStreamParams,
+    type MessageCountTokensParams as MessageCountTokensParams,
+  };
+
+  export {
+    Models as Models,
+    type ModelInfo as ModelInfo,
+    ModelInfosPage as ModelInfosPage,
+    type ModelListParams as ModelListParams,
   };
 
   export {
@@ -375,14 +378,46 @@ export declare namespace Anthropic {
     type AnthropicBeta as AnthropicBeta,
     type BetaAPIError as BetaAPIError,
     type BetaAuthenticationError as BetaAuthenticationError,
+    type BetaBillingError as BetaBillingError,
     type BetaError as BetaError,
     type BetaErrorResponse as BetaErrorResponse,
+    type BetaGatewayTimeoutError as BetaGatewayTimeoutError,
     type BetaInvalidRequestError as BetaInvalidRequestError,
     type BetaNotFoundError as BetaNotFoundError,
     type BetaOverloadedError as BetaOverloadedError,
     type BetaPermissionError as BetaPermissionError,
     type BetaRateLimitError as BetaRateLimitError,
   };
+
+  export type APIErrorObject = API.APIErrorObject;
+  export type AuthenticationError = API.AuthenticationError;
+  export type BillingError = API.BillingError;
+  export type ErrorObject = API.ErrorObject;
+  export type ErrorResponse = API.ErrorResponse;
+  export type GatewayTimeoutError = API.GatewayTimeoutError;
+  export type InvalidRequestError = API.InvalidRequestError;
+  export type NotFoundError = API.NotFoundError;
+  export type OverloadedError = API.OverloadedError;
+  export type PermissionError = API.PermissionError;
+  export type RateLimitError = API.RateLimitError;
 }
+
+export const { HUMAN_PROMPT, AI_PROMPT } = Anthropic;
+
+export { toFile, fileFromPath } from './uploads';
+export {
+  AnthropicError,
+  APIError,
+  APIConnectionError,
+  APIConnectionTimeoutError,
+  APIUserAbortError,
+  NotFoundError,
+  ConflictError,
+  RateLimitError,
+  BadRequestError,
+  AuthenticationError,
+  InternalServerError,
+  PermissionDeniedError,
+  UnprocessableEntityError,
+} from './error';
 
 export default Anthropic;
diff --git src/lib/PromptCachingBetaMessageStream.ts src/lib/PromptCachingBetaMessageStream.ts
deleted file mode 100644
index 0e742cba..00000000
--- src/lib/PromptCachingBetaMessageStream.ts
+++ /dev/null
@@ -1,579 +0,0 @@
-import * as Core from '@anthropic-ai/sdk/core';
-import { AnthropicError, APIUserAbortError } from '@anthropic-ai/sdk/error';
-import { type ContentBlock, type TextBlock } from '@anthropic-ai/sdk/resources/messages';
-import {

  • Messages,
  • type PromptCachingBetaMessage,
  • type RawPromptCachingBetaMessageStreamEvent,
  • type PromptCachingBetaMessageParam,
  • type MessageCreateParams,
  • type MessageCreateParamsBase,
    -} from '@anthropic-ai/sdk/resources/beta/prompt-caching/messages';
    -import { type ReadableStream } from '@anthropic-ai/sdk/_shims/index';
    -import { Stream } from '@anthropic-ai/sdk/streaming';
    -import { partialParse } from '../_vendor/partial-json-parser/parser';

-export interface PromptCachingBetaMessageStreamEvents {

  • connect: () => void;
  • streamEvent: (event: RawPromptCachingBetaMessageStreamEvent, snapshot: PromptCachingBetaMessage) => void;
  • text: (textDelta: string, textSnapshot: string) => void;
  • inputJson: (partialJson: string, jsonSnapshot: unknown) => void;
  • message: (message: PromptCachingBetaMessage) => void;
  • contentBlock: (content: ContentBlock) => void;
  • finalPromptCachingBetaMessage: (message: PromptCachingBetaMessage) => void;
  • error: (error: AnthropicError) => void;
  • abort: (error: APIUserAbortError) => void;
  • end: () => void;
    -}

-type PromptCachingBetaMessageStreamEventListeners =

  • {
  • listener: PromptCachingBetaMessageStreamEvents[Event];
  • once?: boolean;
  • }[];

-const JSON_BUF_PROPERTY = '__json_buf';

-export class PromptCachingBetaMessageStream implements AsyncIterable {

  • messages: PromptCachingBetaMessageParam[] = [];
  • receivedMessages: PromptCachi,ngBetaMessage[] = [];
  • #currentMessageSnapshot: PromptCachingBetaMessage | undefined;
  • controller: AbortController = new AbortController();
  • #connectedPromise: Promise;
  • #resolveConnectedPromise: () => void = () => {};
  • #rejectConnectedPromise: (error: AnthropicError) => void = () => {};
  • #endPromise: Promise;
  • #resolveEndPromise: () => void = () => {};
  • #rejectEndPromise: (error: AnthropicError) => void = () => {};
  • #listeners: {
  • [Event in keyof PromptCachingBetaMessageStreamEvents]?: PromptCachingBetaMessageStreamEventListeners;
  • } = {};
  • #ended = false;
  • #errored = false;
  • #aborted = false;
  • #catchingPromiseCreated = false;
  • constructor() {
  • this.#connectedPromise = new Promise((resolve, reject) => {
  •  this.#resolveConnectedPromise = resolve;
    
  •  this.#rejectConnectedPromise = reject;
    
  • });
  • this.#endPromise = new Promise((resolve, reject) => {
  •  this.#resolveEndPromise = resolve;
    
  •  this.#rejectEndPromise = reject;
    
  • });
  • // Don't let these promises cause unhandled rejection errors.
  • // we will manually cause an unhandled rejection error later
  • // if the user hasn't registered any error listener or called
  • // any promise-returning method.
  • this.#connectedPromise.catch(() => {});
  • this.#endPromise.catch(() => {});
  • }
  • /**
    • Intended for use on the frontend, consuming a stream produced with
    • .toReadableStream() on the backend.
    • Note that messages sent to the model do not appear in .on('message')
    • in this context.
  • */
  • static fromReadableStream(stream: ReadableStream): PromptCachingBetaMessageStream {
  • const runner = new PromptCachingBetaMessageStream();
  • runner._run(() => runner._fromReadableStream(stream));
  • return runner;
  • }
  • static createMessage(
  • messages: Messages,
  • params: MessageCreateParamsBase,
  • options?: Core.RequestOptions,
  • ): PromptCachingBetaMessageStream {
  • const runner = new PromptCachingBetaMessageStream();
  • for (const message of params.messages) {
  •  runner._addPromptCachingBetaMessageParam(message);
    
  • }
  • runner._run(() =>
  •  runner._createPromptCachingBetaMessage(
    
  •    messages,
    
  •    { ...params, stream: true },
    
  •    { ...options, headers: { ...options?.headers, 'X-Stainless-Helper-Method': 'stream' } },
    
  •  ),
    
  • );
  • return runner;
  • }
  • protected _run(executor: () => Promise) {
  • executor().then(() => {
  •  this._emitFinal();
    
  •  this._emit('end');
    
  • }, this.#handleError);
  • }
  • protected _addPromptCachingBetaMessageParam(message: PromptCachingBetaMessageParam) {
  • this.messages.push(message);
  • }
  • protected _addPromptCachingBetaMessage(message: PromptCachingBetaMessage, emit = true) {
  • this.receivedMessages.push(message);
  • if (emit) {
  •  this._emit('message', message);
    
  • }
  • }
  • protected async _createPromptCachingBetaMessage(
  • messages: Messages,
  • params: MessageCreateParams,
  • options?: Core.RequestOptions,
  • ): Promise {
  • const signal = options?.signal;
  • if (signal) {
  •  if (signal.aborted) this.controller.abort();
    
  •  signal.addEventListener('abort', () => this.controller.abort());
    
  • }
  • this.#beginRequest();
  • const stream = await messages.create(
  •  { ...params, stream: true },
    
  •  { ...options, signal: this.controller.signal },
    
  • );
  • this._connected();
  • for await (const event of stream) {
  •  this.#addStreamEvent(event);
    
  • }
  • if (stream.controller.signal?.aborted) {
  •  throw new APIUserAbortError();
    
  • }
  • this.#endRequest();
  • }
  • protected _connected() {
  • if (this.ended) return;
  • this.#resolveConnectedPromise();
  • this._emit('connect');
  • }
  • get ended(): boolean {
  • return this.#ended;
  • }
  • get errored(): boolean {
  • return this.#errored;
  • }
  • get aborted(): boolean {
  • return this.#aborted;
  • }
  • abort() {
  • this.controller.abort();
  • }
  • /**
    • Adds the listener function to the end of the listeners array for the event.
    • No checks are made to see if the listener has already been added. Multiple calls passing
    • the same combination of event and listener will result in the listener being added, and
    • called, multiple times.
    • @returns this PromptCachingBetaMessageStream, so that calls can be chained
  • */
  • on(
  • event: Event,
  • listener: PromptCachingBetaMessageStreamEvents[Event],
  • ): this {
  • const listeners: PromptCachingBetaMessageStreamEventListeners =
  •  this.#listeners[event] || (this.#listeners[event] = []);
    
  • listeners.push({ listener });
  • return this;
  • }
  • /**
    • Removes the specified listener from the listener array for the event.
    • off() will remove, at most, one instance of a listener from the listener array. If any single
    • listener has been added multiple times to the listener array for the specified event, then
    • off() must be called multiple times to remove each instance.
    • @returns this PromptCachingBetaMessageStream, so that calls can be chained
  • */
  • off(
  • event: Event,
  • listener: PromptCachingBetaMessageStreamEvents[Event],
  • ): this {
  • const listeners = this.#listeners[event];
  • if (!listeners) return this;
  • const index = listeners.findIndex((l) => l.listener === listener);
  • if (index >= 0) listeners.splice(index, 1);
  • return this;
  • }
  • /**
    • Adds a one-time listener function for the event. The next time the event is triggered,
    • this listener is removed and then invoked.
    • @returns this PromptCachingBetaMessageStream, so that calls can be chained
  • */
  • once(
  • event: Event,
  • listener: PromptCachingBetaMessageStreamEvents[Event],
  • ): this {
  • const listeners: PromptCachingBetaMessageStreamEventListeners =
  •  this.#listeners[event] || (this.#listeners[event] = []);
    
  • listeners.push({ listener, once: true });
  • return this;
  • }
  • /**
    • This is similar to .once(), but returns a Promise that resolves the next time
    • the event is triggered, instead of calling a listener callback.
    • @returns a Promise that resolves the next time given event is triggered,
    • or rejects if an error is emitted. (If you request the 'error' event,
    • returns a promise that resolves with the error).
    • Example:
    • const message = await stream.emitted('message') // rejects if the stream errors
  • */
  • emitted(
  • event: Event,
  • ): Promise<
  • Parameters<PromptCachingBetaMessageStreamEvents[Event]> extends [infer Param] ? Param
  • : Parameters<PromptCachingBetaMessageStreamEvents[Event]> extends [] ? void
  • : Parameters<PromptCachingBetaMessageStreamEvents[Event]>
  • {

  • return new Promise((resolve, reject) => {
  •  this.#catchingPromiseCreated = true;
    
  •  if (event !== 'error') this.once('error', reject);
    
  •  this.once(event, resolve as any);
    
  • });
  • }
  • async done(): Promise {
  • this.#catchingPromiseCreated = true;
  • await this.#endPromise;
  • }
  • get currentMessage(): PromptCachingBetaMessage | undefined {
  • return this.#currentMessageSnapshot;
  • }
  • #getFinalMessage(): PromptCachingBetaMessage {
  • if (this.receivedMessages.length === 0) {
  •  throw new AnthropicError(
    
  •    'stream ended without producing a PromptCachingBetaMessage with role=assistant',
    
  •  );
    
  • }
  • return this.receivedMessages.at(-1)!;
  • }
-
-  /**
-   * @returns a promise that resolves with the final assistant PromptCachingBetaMessage response,
-   * or rejects if an error occurred or the stream ended prematurely without producing a PromptCachingBetaMessage.
-   */
-  async finalMessage(): Promise<PromptCachingBetaMessage> {
-    await this.done();
-    return this.#getFinalMessage();
-  }
-
-  #getFinalText(): string {
-    if (this.receivedMessages.length === 0) {
-      throw new AnthropicError(
-        'stream ended without producing a PromptCachingBetaMessage with role=assistant',
-      );
-    }
-    const textBlocks = this.receivedMessages
-      .at(-1)!
-      .content.filter((block): block is TextBlock => block.type === 'text')
-      .map((block) => block.text);
-    if (textBlocks.length === 0) {
-      throw new AnthropicError('stream ended without producing a content block with type=text');
-    }
-    return textBlocks.join(' ');
-  }
-
-  /**
-   * @returns a promise that resolves with the final assistant PromptCachingBetaMessage's text response, concatenated
-   * together if there are more than one text blocks.
-   * Rejects if an error occurred or the stream ended prematurely without producing a PromptCachingBetaMessage.
-   */
-  async finalText(): Promise<string> {
-    await this.done();
-    return this.#getFinalText();
-  }
-
-  #handleError = (error: unknown) => {
-    this.#errored = true;
-    if (error instanceof Error && error.name === 'AbortError') {
-      error = new APIUserAbortError();
-    }
-    if (error instanceof APIUserAbortError) {
-      this.#aborted = true;
-      return this._emit('abort', error);
-    }
-    if (error instanceof AnthropicError) {
-      return this._emit('error', error);
-    }
-    if (error instanceof Error) {
-      const anthropicError: AnthropicError = new AnthropicError(error.message);
-      // @ts-ignore
-      anthropicError.cause = error;
-      return this._emit('error', anthropicError);
-    }
-    return this._emit('error', new AnthropicError(String(error)));
-  };
-
-  protected _emit<Event extends keyof PromptCachingBetaMessageStreamEvents>(
-    event: Event,
-    ...args: Parameters<PromptCachingBetaMessageStreamEvents[Event]>
-  ) {
-    // make sure we don't emit any PromptCachingBetaMessageStreamEvents after end
-    if (this.#ended) return;
-
-    if (event === 'end') {
-      this.#ended = true;
-      this.#resolveEndPromise();
-    }
-
-    const listeners: PromptCachingBetaMessageStreamEventListeners<Event> | undefined = this.#listeners[event];
-    if (listeners) {
-      this.#listeners[event] = listeners.filter((l) => !l.once) as any;
-      listeners.forEach(({ listener }: any) => listener(...args));
-    }
-
-    if (event === 'abort') {
-      const error = args[0] as APIUserAbortError;
-      if (!this.#catchingPromiseCreated && !listeners?.length) {
-        Promise.reject(error);
-      }
-      this.#rejectConnectedPromise(error);
-      this.#rejectEndPromise(error);
-      this._emit('end');
-      return;
-    }
-
-    if (event === 'error') {
-      // NOTE: _emit('error', error) should only be called from #handleError().
-      const error = args[0] as AnthropicError;
-      if (!this.#catchingPromiseCreated && !listeners?.length) {
-        // Trigger an unhandled rejection if the user hasn't registered any error handlers.
-        // If you are seeing stack traces here, make sure to handle errors via either:
-        // - runner.on('error', () => ...)
-        // - await runner.done()
-        // - await runner.final...()
-        // - etc.
-        Promise.reject(error);
-      }
-      this.#rejectConnectedPromise(error);
-      this.#rejectEndPromise(error);
-      this._emit('end');
-    }
-  }
-
-  protected _emitFinal() {
-    const finalPromptCachingBetaMessage = this.receivedMessages.at(-1);
-    if (finalPromptCachingBetaMessage) {
-      this._emit('finalPromptCachingBetaMessage', this.#getFinalMessage());
-    }
-  }
-
-  #beginRequest() {
-    if (this.ended) return;
-    this.#currentMessageSnapshot = undefined;
-  }
-
-  #addStreamEvent(event: RawPromptCachingBetaMessageStreamEvent) {
-    if (this.ended) return;
-    const messageSnapshot = this.#accumulateMessage(event);
-    this._emit('streamEvent', event, messageSnapshot);
-
-    switch (event.type) {
-      case 'content_block_delta': {
-        const content = messageSnapshot.content.at(-1)!;
-        if (event.delta.type === 'text_delta' && content.type === 'text') {
-          this._emit('text', event.delta.text, content.text || '');
-        } else if (event.delta.type === 'input_json_delta' && content.type === 'tool_use') {
-          if (content.input) {
-            this._emit('inputJson', event.delta.partial_json, content.input);
-          }
-        }
-        break;
-      }
-      case 'message_stop': {
-        this._addPromptCachingBetaMessageParam(messageSnapshot);
-        this._addPromptCachingBetaMessage(messageSnapshot, true);
-        break;
-      }
-      case 'content_block_stop': {
-        this._emit('contentBlock', messageSnapshot.content.at(-1)!);
-        break;
-      }
-      case 'message_start': {
-        this.#currentMessageSnapshot = messageSnapshot;
-        break;
-      }
-      case 'content_block_start':
-      case 'message_delta':
-        break;
-    }
-  }
-
-  #endRequest(): PromptCachingBetaMessage {
-    if (this.ended) {
-      throw new AnthropicError(`stream has ended, this shouldn't happen`);
-    }
-    const snapshot = this.#currentMessageSnapshot;
-    if (!snapshot) {
-      throw new AnthropicError(`request ended without sending any chunks`);
-    }
-    this.#currentMessageSnapshot = undefined;
-    return snapshot;
-  }
-
-  protected async _fromReadableStream(
-    readableStream: ReadableStream,
-    options?: Core.RequestOptions,
-  ): Promise<void> {
-    const signal = options?.signal;
-    if (signal) {
-      if (signal.aborted) this.controller.abort();
-      signal.addEventListener('abort', () => this.controller.abort());
-    }
-    this.#beginRequest();
-    this._connected();
-    const stream = Stream.fromReadableStream<RawPromptCachingBetaMessageStreamEvent>(
-      readableStream,
-      this.controller,
-    );
-    for await (const event of stream) {
-      this.#addStreamEvent(event);
-    }
-    if (stream.controller.signal?.aborted) {
-      throw new APIUserAbortError();
-    }
-    this.#endRequest();
-  }
-
-  /**
-   * Mutates this.#currentMessageSnapshot with the current event. Accumulation across
-   * multiple messages must be handled by the caller; this method throws if you try to
-   * accumulate events for more than one message.
-   */
-  #accumulateMessage(event: RawPromptCachingBetaMessageStreamEvent): PromptCachingBetaMessage {
-    let snapshot = this.#currentMessageSnapshot;
-
-    if (event.type === 'message_start') {
-      if (snapshot) {
-        throw new AnthropicError(`Unexpected event order, got ${event.type} before receiving "message_stop"`);
-      }
-      return event.message;
-    }
-
-    if (!snapshot) {
-      throw new AnthropicError(`Unexpected event order, got ${event.type} before "message_start"`);
-    }
-
-    switch (event.type) {
-      case 'message_stop':
-        return snapshot;
-      case 'message_delta':
-        snapshot.stop_reason = event.delta.stop_reason;
-        snapshot.stop_sequence = event.delta.stop_sequence;
-        snapshot.usage.output_tokens = event.usage.output_tokens;
-        return snapshot;
-      case 'content_block_start':
-        snapshot.content.push(event.content_block);
-        return snapshot;
-      case 'content_block_delta': {
-        const snapshotContent = snapshot.content.at(event.index);
-        if (snapshotContent?.type === 'text' && event.delta.type === 'text_delta') {
-          snapshotContent.text += event.delta.text;
-        } else if (snapshotContent?.type === 'tool_use' && event.delta.type === 'input_json_delta') {
-          // we need to keep track of the raw JSON string as well so that we can
-          // re-parse it for each delta, for now we just store it as an untyped
-          // non-enumerable property on the snapshot
-          let jsonBuf = (snapshotContent as any)[JSON_BUF_PROPERTY] || '';
-          jsonBuf += event.delta.partial_json;
-          Object.defineProperty(snapshotContent, JSON_BUF_PROPERTY, {
-            value: jsonBuf,
-            enumerable: false,
-            writable: true,
-          });
-          if (jsonBuf) {
-            snapshotContent.input = partialParse(jsonBuf);
-          }
-        }
-        return snapshot;
-      }
-      case 'content_block_stop':
-        return snapshot;
-    }
-  }
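The `input_json_delta` branch above appends each `partial_json` fragment to a hidden buffer and re-parses it on every delta. That accumulation can be sketched standalone; note that `tryParse` below is our simplified stand-in for the SDK's `partialParse` helper, which (unlike plain `JSON.parse`) can recover objects from incomplete JSON prefixes:

```typescript
// Accumulates `partial_json` fragments the way the snapshot logic above does.
// `tryParse` is a naive stand-in for the SDK's `partialParse`: it only succeeds
// once the buffer is complete, valid JSON.
function tryParse(buf: string): unknown | undefined {
  try {
    return JSON.parse(buf);
  } catch {
    return undefined;
  }
}

class ToolInputAccumulator {
  private jsonBuf = '';
  input: unknown = undefined;

  addDelta(partialJson: string): void {
    // Keep the raw string so every new fragment re-parses the whole buffer.
    this.jsonBuf += partialJson;
    const parsed = tryParse(this.jsonBuf);
    if (parsed !== undefined) {
      this.input = parsed;
    }
  }
}

const acc = new ToolInputAccumulator();
acc.addDelta('{"ticker":'); // incomplete: `input` stays undefined
acc.addDelta(' "^GSPC"}'); // buffer is now complete JSON
console.log(JSON.stringify(acc.input)); // {"ticker":"^GSPC"}
```

Storing the buffer separately from the parsed `input` is the key design point: the parsed value can be replaced wholesale on each delta without ever losing the raw stream contents.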
-
-  [Symbol.asyncIterator](): AsyncIterator<RawPromptCachingBetaMessageStreamEvent> {
-    const pushQueue: RawPromptCachingBetaMessageStreamEvent[] = [];
-    const readQueue: {
-      resolve: (chunk: RawPromptCachingBetaMessageStreamEvent | undefined) => void;
-      reject: (error: unknown) => void;
-    }[] = [];
-    let done = false;
-
-    this.on('streamEvent', (event) => {
-      const reader = readQueue.shift();
-      if (reader) {
-        reader.resolve(event);
-      } else {
-        pushQueue.push(event);
-      }
-    });
-
-    this.on('end', () => {
-      done = true;
-      for (const reader of readQueue) {
-        reader.resolve(undefined);
-      }
-      readQueue.length = 0;
-    });
-
-    this.on('abort', (err) => {
-      done = true;
-      for (const reader of readQueue) {
-        reader.reject(err);
-      }
-      readQueue.length = 0;
-    });
-
-    this.on('error', (err) => {
-      done = true;
-      for (const reader of readQueue) {
-        reader.reject(err);
-      }
-      readQueue.length = 0;
-    });
-
-    return {
-      next: async (): Promise<IteratorResult<RawPromptCachingBetaMessageStreamEvent>> => {
-        if (!pushQueue.length) {
-          if (done) {
-            return { value: undefined, done: true };
-          }
-          return new Promise<RawPromptCachingBetaMessageStreamEvent | undefined>((resolve, reject) =>
-            readQueue.push({ resolve, reject }),
-          ).then((chunk) => (chunk ? { value: chunk, done: false } : { value: undefined, done: true }));
-        }
-        const chunk = pushQueue.shift()!;
-        return { value: chunk, done: false };
-      },
-      return: async () => {
-        this.abort();
-        return { value: undefined, done: true };
-      },
-    };
-  }
-
-  toReadableStream(): ReadableStream {
-    const stream = new Stream(this[Symbol.asyncIterator].bind(this), this.controller);
-    return stream.toReadableStream();
-  }
-}
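The deleted stream class bridges listener-style events to an `AsyncIterator` with a push queue (events that arrive before any reader) and a read queue (readers parked waiting for the next event). A minimal standalone sketch of that same pattern, using a hypothetical `TinyEmitter` in place of the SDK stream:

```typescript
// Minimal emitter standing in for the stream's `.on(...)` surface.
// All names here are illustrative; none of them exist in the SDK.
type Listener<T> = (value: T | undefined) => void;

class TinyEmitter<T> {
  private listeners: { event: 'data' | 'end'; fn: Listener<T> }[] = [];
  on(event: 'data' | 'end', fn: Listener<T>) {
    this.listeners.push({ event, fn });
  }
  emit(event: 'data' | 'end', value?: T) {
    for (const l of this.listeners) if (l.event === event) l.fn(value);
  }
}

// Same push-queue / read-queue bridge as the stream class: events that arrive
// before a reader are buffered; readers that arrive first park a pending
// promise that the next event resolves.
function toAsyncIterator<T>(emitter: TinyEmitter<T>): AsyncIterator<T> {
  const pushQueue: T[] = [];
  const readQueue: { resolve: (chunk: T | undefined) => void }[] = [];
  let done = false;

  emitter.on('data', (value) => {
    const reader = readQueue.shift();
    if (reader) reader.resolve(value as T);
    else pushQueue.push(value as T);
  });
  emitter.on('end', () => {
    done = true;
    for (const reader of readQueue) reader.resolve(undefined);
    readQueue.length = 0;
  });

  return {
    next: async (): Promise<IteratorResult<T>> => {
      if (pushQueue.length) return { value: pushQueue.shift()!, done: false };
      if (done) return { value: undefined as any, done: true };
      return new Promise<T | undefined>((resolve) => readQueue.push({ resolve })).then((chunk) =>
        chunk !== undefined ? { value: chunk, done: false } : { value: undefined as any, done: true },
      );
    },
  };
}

async function demo(): Promise<string> {
  const emitter = new TinyEmitter<string>();
  const iter = toAsyncIterator(emitter);
  const received: string[] = [];
  const consumer = (async () => {
    for await (const chunk of { [Symbol.asyncIterator]: () => iter }) received.push(chunk);
  })();
  emitter.emit('data', 'hello');
  emitter.emit('data', 'world');
  emitter.emit('end');
  await consumer;
  return received.join(' ');
}

demo().then(console.log); // hello world
```

The two queues are what make the bridge safe in both directions: producers never block, and consumers never miss an event that fired between two `next()` calls.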
diff --git src/resources/beta/beta.ts src/resources/beta/beta.ts
index ee3c6ca5..e29a187c 100644
--- src/resources/beta/beta.ts
+++ src/resources/beta/beta.ts
@@ -1,6 +1,8 @@
 // File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

 import { APIResource } from '../../resource';
+import * as ModelsAPI from './models';
+import { BetaModelInfo, BetaModelInfosPage, ModelListParams, Models } from './models';
 import * as MessagesAPI from './messages/messages';
 import {
   BetaBase64PDFBlock,
@@ -44,12 +46,10 @@ import {
   MessageCreateParamsStreaming,
   Messages,
 } from './messages/messages';
-import * as PromptCachingAPI from './prompt-caching/prompt-caching';
-import { PromptCaching } from './prompt-caching/prompt-caching';

 export class Beta extends APIResource {
+  models: ModelsAPI.Models = new ModelsAPI.Models(this._client);
   messages: MessagesAPI.Messages = new MessagesAPI.Messages(this._client);
-  promptCaching: PromptCachingAPI.PromptCaching = new PromptCachingAPI.PromptCaching(this._client);
 }

 export type AnthropicBeta =
@@ -72,12 +72,20 @@ export interface BetaAuthenticationError {
   type: 'authentication_error';
 }

+export interface BetaBillingError {
+  message: string;
+
+  type: 'billing_error';
+}
+
 export type BetaError =
   | BetaInvalidRequestError
   | BetaAuthenticationError
+  | BetaBillingError
   | BetaPermissionError
   | BetaNotFoundError
   | BetaRateLimitError
+  | BetaGatewayTimeoutError
   | BetaAPIError
   | BetaOverloadedError;

@@ -87,6 +95,12 @@ export interface BetaErrorResponse {
   type: 'error';
 }

+export interface BetaGatewayTimeoutError {
+  message: string;
+
+  type: 'timeout_error';
+}
+
 export interface BetaInvalidRequestError {
   message: string;

@@ -117,16 +131,19 @@ export interface BetaRateLimitError {
   type: 'rate_limit_error';
 }

+Beta.Models = Models;
+Beta.BetaModelInfosPage = BetaModelInfosPage;
 Beta.Messages = Messages;
-Beta.PromptCaching = PromptCaching;

 export declare namespace Beta {
   export {
     type AnthropicBeta as AnthropicBeta,
     type BetaAPIError as BetaAPIError,
     type BetaAuthenticationError as BetaAuthenticationError,
+    type BetaBillingError as BetaBillingError,
     type BetaError as BetaError,
     type BetaErrorResponse as BetaErrorResponse,
+    type BetaGatewayTimeoutError as BetaGatewayTimeoutError,
     type BetaInvalidRequestError as BetaInvalidRequestError,
     type BetaNotFoundError as BetaNotFoundError,
     type BetaOverloadedError as BetaOverloadedError,
@@ -134,6 +151,13 @@ export declare namespace Beta {
     type BetaRateLimitError as BetaRateLimitError,
   };

+  export {
+    Models as Models,
+    type BetaModelInfo as BetaModelInfo,
+    BetaModelInfosPage as BetaModelInfosPage,
+    type ModelListParams as ModelListParams,
+  };
+
   export {
     Messages as Messages,
     type BetaBase64PDFBlock as BetaBase64PDFBlock,
@@ -176,6 +200,4 @@ export declare namespace Beta {
     type MessageCreateParamsStreaming as MessageCreateParamsStreaming,
     type MessageCountTokensParams as MessageCountTokensParams,
   };
-
-  export { PromptCaching as PromptCaching };
 }
diff --git src/resources/beta/index.ts src/resources/beta/index.ts
index 6e2b0a89..a68f2327 100644
--- src/resources/beta/index.ts
+++ src/resources/beta/index.ts
@@ -5,14 +5,17 @@ export {
   type AnthropicBeta,
   type BetaAPIError,
   type BetaAuthenticationError,
+  type BetaBillingError,
   type BetaError,
   type BetaErrorResponse,
+  type BetaGatewayTimeoutError,
   type BetaInvalidRequestError,
   type BetaNotFoundError,
   type BetaOverloadedError,
   type BetaPermissionError,
   type BetaRateLimitError,
 } from './beta';
+export { BetaModelInfosPage, Models, type BetaModelInfo, type ModelListParams } from './models';
 export {
   Messages,
   type BetaBase64PDFBlock,
@@ -55,4 +58,3 @@ export {
   type MessageCreateParamsStreaming,
   type MessageCountTokensParams,
 } from './messages/index';
-export { PromptCaching } from './prompt-caching/index';
diff --git src/resources/beta/messages/messages.ts src/resources/beta/messages/messages.ts
index 3f39ca3a..186a6c36 100644
--- src/resources/beta/messages/messages.ts
+++ src/resources/beta/messages/messages.ts
@@ -4,8 +4,8 @@ import { APIResource } from '../../../resource';
 import { APIPromise } from '../../../core';
 import * as Core from '../../../core';
 import * as MessagesMessagesAPI from './messages';
-import * as MessagesAPI from '../../messages';
 import * as BetaAPI from '../beta';
+import * as MessagesAPI from '../../messages/messages';
 import * as BatchesAPI from './batches';
 import {
   BatchCancelParams,
diff --git a/src/resources/beta/models.ts b/src/resources/beta/models.ts
new file mode 100644
index 00000000..48036273
--- /dev/null
+++ b/src/resources/beta/models.ts
@@ -0,0 +1,78 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+import { APIResource } from '../../resource';
+import { isRequestOptions } from '../../core';
+import * as Core from '../../core';
+import { Page, type PageParams } from '../../pagination';
+
+export class Models extends APIResource {
+  /**
+   * Get a specific model.
+   *
+   * The Models API response can be used to determine information about a specific
+   * model or resolve a model alias to a model ID.
+   */
+  retrieve(modelId: string, options?: Core.RequestOptions): Core.APIPromise<BetaModelInfo> {
+    return this._client.get(`/v1/models/${modelId}?beta=true`, options);
+  }
+
+  /**
+   * List available models.
+   *
+   * The Models API response can be used to determine which models are available for
+   * use in the API. More recently released models are listed first.
+   */
+  list(
+    query?: ModelListParams,
+    options?: Core.RequestOptions,
+  ): Core.PagePromise<BetaModelInfosPage, BetaModelInfo>;
+  list(options?: Core.RequestOptions): Core.PagePromise<BetaModelInfosPage, BetaModelInfo>;
+  list(
+    query: ModelListParams | Core.RequestOptions = {},
+    options?: Core.RequestOptions,
+  ): Core.PagePromise<BetaModelInfosPage, BetaModelInfo> {
+    if (isRequestOptions(query)) {
+      return this.list({}, query);
+    }
+    return this._client.getAPIList('/v1/models?beta=true', BetaModelInfosPage, { query, ...options });
+  }
+}
+
+export class BetaModelInfosPage extends Page<BetaModelInfo> {}
+
+export interface BetaModelInfo {
+  /**
+   * Unique model identifier.
+   */
+  id: string;
+
+  /**
+   * RFC 3339 datetime string representing the time at which the model was released.
+   * May be set to an epoch value if the release date is unknown.
+   */
+  created_at: string;
+
+  /**
+   * A human-readable name for the model.
+   */
+  display_name: string;
+
+  /**
+   * Object type.
+   *
+   * For Models, this is always `"model"`.
+   */
+  type: 'model';
+}
+
+export interface ModelListParams extends PageParams {}
+
+Models.BetaModelInfosPage = BetaModelInfosPage;
+
+export declare namespace Models {
+  export {
+    type BetaModelInfo as BetaModelInfo,
+    BetaModelInfosPage as BetaModelInfosPage,
+    type ModelListParams as ModelListParams,
+  };
+}
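The new `Models.list` endpoint returns a `Page` that the SDK can auto-paginate. Assuming the SDK's usual cursor-style page params (an `after_id`-like cursor plus a `limit`), the pagination loop can be sketched standalone against a mock fetcher; the page shape (`data`, `has_more`, `last_id`) and the helper names below are illustrative, not the SDK's internals:

```typescript
// Mock of a cursor-paginated list endpoint like `/v1/models?beta=true`.
interface ModelInfo {
  id: string;
  display_name: string;
  type: 'model';
}

interface PageResult {
  data: ModelInfo[];
  has_more: boolean;
  last_id: string | null;
}

const allModels: ModelInfo[] = [
  { id: 'model-a', display_name: 'Model A', type: 'model' },
  { id: 'model-b', display_name: 'Model B', type: 'model' },
  { id: 'model-c', display_name: 'Model C', type: 'model' },
];

async function fetchPage(afterId: string | undefined, limit: number): Promise<PageResult> {
  const start = afterId ? allModels.findIndex((m) => m.id === afterId) + 1 : 0;
  const data = allModels.slice(start, start + limit);
  return {
    data,
    has_more: start + limit < allModels.length,
    last_id: data.length ? data[data.length - 1].id : null,
  };
}

// Auto-pagination: keep requesting pages, threading `last_id` back in as the
// cursor, until the endpoint reports no more results.
async function* listAllModels(limit = 2): AsyncGenerator<ModelInfo> {
  let afterId: string | undefined = undefined;
  while (true) {
    const page = await fetchPage(afterId, limit);
    for (const model of page.data) yield model;
    if (!page.has_more || page.last_id === null) return;
    afterId = page.last_id;
  }
}

(async () => {
  const ids: string[] = [];
  for await (const model of listAllModels()) ids.push(model.id);
  console.log(ids.join(',')); // model-a,model-b,model-c
})();
```

With the real SDK the same iteration collapses to `for await (const model of client.beta.models.list())`, since `PagePromise` implements the async-iterator protocol over all pages.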
diff --git src/resources/beta/prompt-caching/index.ts src/resources/beta/prompt-caching/index.ts
deleted file mode 100644
index 78b4e747..00000000
--- src/resources/beta/prompt-caching/index.ts
+++ /dev/null
@@ -1,20 +0,0 @@
-// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
-
-export {
-  Messages,
-  type PromptCachingBetaCacheControlEphemeral,
-  type PromptCachingBetaImageBlockParam,
-  type PromptCachingBetaMessage,
-  type PromptCachingBetaMessageParam,
-  type PromptCachingBetaTextBlockParam,
-  type PromptCachingBetaTool,
-  type PromptCachingBetaToolResultBlockParam,
-  type PromptCachingBetaToolUseBlockParam,
-  type PromptCachingBetaUsage,
-  type RawPromptCachingBetaMessageStartEvent,
-  type RawPromptCachingBetaMessageStreamEvent,
-  type MessageCreateParams,
-  type MessageCreateParamsNonStreaming,
-  type MessageCreateParamsStreaming,
-} from './messages';
-export { PromptCaching } from './prompt-caching';
diff --git src/resources/beta/prompt-caching/messages.ts src/resources/beta/prompt-caching/messages.ts
deleted file mode 100644
index 4ae7449b..00000000
--- src/resources/beta/prompt-caching/messages.ts
+++ /dev/null
@@ -1,642 +0,0 @@
-// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
-
-import { APIResource } from '../../../resource';
-import { APIPromise } from '../../../core';
-import * as Core from '../../../core';
-import * as PromptCachingMessagesAPI from './messages';
-import * as MessagesAPI from '../../messages';
-import * as BetaAPI from '../beta';
-import { Stream } from '../../../streaming';
-import { PromptCachingBetaMessageStream } from '../../../lib/PromptCachingBetaMessageStream';
-
-export class Messages extends APIResource {
-  /**
-   * Send a structured list of input messages with text and/or image content, and the
-   * model will generate the next message in the conversation.
-   *
-   * The Messages API can be used for either single queries or stateless multi-turn
-   * conversations.
-   */
-  create(
-    params: MessageCreateParamsNonStreaming,
-    options?: Core.RequestOptions,
-  ): APIPromise<PromptCachingBetaMessage>;
-  create(
-    params: MessageCreateParamsStreaming,
-    options?: Core.RequestOptions,
-  ): APIPromise<Stream<RawPromptCachingBetaMessageStreamEvent>>;
-  create(
-    params: MessageCreateParamsBase,
-    options?: Core.RequestOptions,
-  ): APIPromise<Stream<RawPromptCachingBetaMessageStreamEvent> | PromptCachingBetaMessage>;
-  create(
-    params: MessageCreateParams,
-    options?: Core.RequestOptions,
-  ): APIPromise<PromptCachingBetaMessage> | APIPromise<Stream<RawPromptCachingBetaMessageStreamEvent>> {
-    const { betas, ...body } = params;
-    return this._client.post('/v1/messages?beta=prompt_caching', {
-      body,
-      timeout: (this._client as any)._options.timeout ?? 600000,
-      ...options,
-      headers: {
-        'anthropic-beta': [...(betas ?? []), 'prompt-caching-2024-07-31'].toString(),
-        ...options?.headers,
-      },
-      stream: params.stream ?? false,
-    }) as APIPromise<PromptCachingBetaMessage> | APIPromise<Stream<RawPromptCachingBetaMessageStreamEvent>>;
-  }
-
-  /**
-   * Create a Message stream
-   */
-  stream(body: MessageStreamParams, options?: Core.RequestOptions): PromptCachingBetaMessageStream {
-    return PromptCachingBetaMessageStream.createMessage(this, body, options);
-  }
-}
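The removed `create()` merged caller-supplied beta flags with the hard-coded prompt-caching flag when building the `anthropic-beta` header. That one-liner can be isolated as follows (the header value and flag are taken from the diff above; the helper name is ours):

```typescript
// Builds the `anthropic-beta` header value the way the removed create() did:
// caller-supplied beta flags first, then the hard-coded prompt-caching flag.
// Array.prototype.toString() joins with commas, which is the comma-separated
// format the header expects.
function promptCachingBetaHeader(betas?: string[]): string {
  return [...(betas ?? []), 'prompt-caching-2024-07-31'].toString();
}

console.log(promptCachingBetaHeader()); // prompt-caching-2024-07-31
console.log(promptCachingBetaHeader(['token-counting-2024-11-01']));
// token-counting-2024-11-01,prompt-caching-2024-07-31
```

With prompt caching now generally available in v0.33.0, this header plumbing (and the whole endpoint around it) is no longer needed, which is why the file is deleted.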

-export type MessageStreamParams = MessageCreateParamsBase;
-
-export interface PromptCachingBetaCacheControlEphemeral {
-  type: 'ephemeral';
-}
-
-export interface PromptCachingBetaImageBlockParam {
-  source: PromptCachingBetaImageBlockParam.Source;
-
-  type: 'image';
-
-  cache_control?: PromptCachingBetaCacheControlEphemeral | null;
-}
-
-export namespace PromptCachingBetaImageBlockParam {
-  export interface Source {
-    data: string;
-
-    media_type: 'image/jpeg' | 'image/png' | 'image/gif' | 'image/webp';
-
-    type: 'base64';
-  }
-}
-
-export interface PromptCachingBetaMessage {
-  /**
-   * Unique object identifier.
-   *
-   * The format and length of IDs may change over time.
-   */
-  id: string;
-
-  /**
-   * Content generated by the model.
-   *
-   * This is an array of content blocks, each of which has a `type` that determines
-   * its shape.
-   *
-   * Example:
-   *
-   *   [{ "type": "text", "text": "Hi, I'm Claude." }]
-   *
-   * If the request input `messages` ended with an `assistant` turn, then the
-   * response `content` will continue directly from that last turn. You can use this
-   * to constrain the model's output.
-   *
-   * For example, if the input `messages` were:
-   *
-   *   [
-   *     {
-   *       "role": "user",
-   *       "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
-   *     },
-   *     { "role": "assistant", "content": "The best answer is (" }
-   *   ]
-   *
-   * Then the response `content` might be:
-   *
-   *   [{ "type": "text", "text": "B)" }]
-   */
-  content: Array<MessagesAPI.ContentBlock>;
-
-  /**
-   * The model that will complete your prompt.\n\nSee
-   * details and options.
-   */
-  model: MessagesAPI.Model;
-
-  /**
-   * Conversational role of the generated message.
-   *
-   * This will always be `"assistant"`.
-   */
-  role: 'assistant';
-
-  /**
-   * The reason that we stopped.
-   *
-   * This may be one of the following values:
-   *
-   * - `"end_turn"`: the model reached a natural stopping point
-   * - `"max_tokens"`: we exceeded the requested `max_tokens` or the model's maximum
-   * - `"stop_sequence"`: one of your provided custom `stop_sequences` was generated
-   * - `"tool_use"`: the model invoked one or more tools
-   *
-   * In non-streaming mode this value is always non-null. In streaming mode, it is
-   * null in the `message_start` event and non-null otherwise.
-   */
-  stop_reason: 'end_turn' | 'max_tokens' | 'stop_sequence' | 'tool_use' | null;
-
-  /**
-   * Which custom stop sequence was generated, if any.
-   *
-   * This value will be a non-null string if one of your custom stop sequences was
-   * generated.
-   */
-  stop_sequence: string | null;
-
-  /**
-   * Object type.
-   *
-   * For Messages, this is always `"message"`.
-   */
-  type: 'message';
-
-  /**
-   * Billing and rate-limit usage.
-   *
-   * Anthropic's API bills and rate-limits by token counts, as tokens represent the
-   * underlying cost to our systems.
-   *
-   * Under the hood, the API transforms requests into a format suitable for the
-   * model. The model's output then goes through a parsing stage before becoming an
-   * API response. As a result, the token counts in `usage` will not match one-to-one
-   * with the exact visible content of an API request or response.
-   *
-   * For example, `output_tokens` will be non-zero, even for an empty string response
-   * from Claude.
-   */
-  usage: PromptCachingBetaUsage;
-}

-export interface PromptCachingBetaMessageParam {
-  content:
-    | string
-    | Array<
-        | PromptCachingBetaTextBlockParam
-        | PromptCachingBetaImageBlockParam
-        | PromptCachingBetaToolUseBlockParam
-        | PromptCachingBetaToolResultBlockParam
-      >;
-
-  role: 'user' | 'assistant';
-}
-
-export interface PromptCachingBetaTextBlockParam {
-  text: string;
-
-  type: 'text';
-
-  cache_control?: PromptCachingBetaCacheControlEphemeral | null;
-}
-
-export interface PromptCachingBetaTool {
-  /**
-   * This defines the shape of the `input` that your tool accepts and that the model
-   * will produce.
-   */
-  input_schema: PromptCachingBetaTool.InputSchema;
-
-  /**
-   * Name of the tool.
-   *
-   * This is how the tool will be called by the model and in `tool_use` blocks.
-   */
-  name: string;
-
-  cache_control?: PromptCachingBetaCacheControlEphemeral | null;
-
-  /**
-   * Description of what this tool does.
-   *
-   * Tool descriptions should be as detailed as possible. The more information that
-   * the model has about what the tool is and how to use it, the better it will
-   * perform. You can use natural language descriptions to reinforce important
-   * aspects of the tool input JSON schema.
-   */
-  description?: string;
-}
-
-export namespace PromptCachingBetaTool {
-  /**
-   * This defines the shape of the `input` that your tool accepts and that the model
-   * will produce.
-   */
-  export interface InputSchema {
-    type: 'object';
-
-    properties?: unknown | null;
-  }
-}
-
-export interface PromptCachingBetaToolResultBlockParam {
-  tool_use_id: string;
-
-  type: 'tool_result';
-
-  cache_control?: PromptCachingBetaCacheControlEphemeral | null;
-
-  content?: string | Array<PromptCachingBetaTextBlockParam | PromptCachingBetaImageBlockParam>;
-
-  is_error?: boolean;
-}
-
-export interface PromptCachingBetaToolUseBlockParam {
-  id: string;
-
-  input: unknown;
-
-  name: string;
-
-  type: 'tool_use';
-
-  cache_control?: PromptCachingBetaCacheControlEphemeral | null;
-}
-
-export interface PromptCachingBetaUsage {
-  /**
-   * The number of input tokens used to create the cache entry.
-   */
-  cache_creation_input_tokens: number | null;
-
-  /**
-   * The number of input tokens read from the cache.
-   */
-  cache_read_input_tokens: number | null;
-
-  /**
-   * The number of input tokens which were used.
-   */
-  input_tokens: number;
-
-  /**
-   * The number of output tokens which were used.
-   */
-  output_tokens: number;
-}
-
-export interface RawPromptCachingBetaMessageStartEvent {
-  message: PromptCachingBetaMessage;
-
-  type: 'message_start';
-}
-
-export type RawPromptCachingBetaMessageStreamEvent =
-  | RawPromptCachingBetaMessageStartEvent
-  | MessagesAPI.RawMessageDeltaEvent
-  | MessagesAPI.RawMessageStopEvent
-  | MessagesAPI.RawContentBlockStartEvent
-  | MessagesAPI.RawContentBlockDeltaEvent
-  | MessagesAPI.RawContentBlockStopEvent;

-export type MessageCreateParams = MessageCreateParamsNonStreaming | MessageCreateParamsStreaming;

-export interface MessageCreateParamsBase {

  • /**
    • Body param: The maximum number of tokens to generate before stopping.
    • Note that our models may stop before reaching this maximum. This parameter
    • only specifies the absolute maximum number of tokens to generate.
    • Different models have different maximum values for this parameter. See
  • */
  • max_tokens: number;
  • /**
    • Body param: Input messages.
    • Our models are trained to operate on alternating user and assistant
    • conversational turns. When creating a new Message, you specify the prior
    • conversational turns with the messages parameter, and the model then generates
    • the next Message in the conversation. Consecutive user or assistant turns
    • in your request will be combined into a single turn.
    • Each input message must be an object with a role and content. You can
    • specify a single user-role message, or you can include multiple user and
    • assistant messages.
    • If the final message uses the assistant role, the response content will
    • continue immediately from the content in that message. This can be used to
    • constrain part of the model's response.
    • Example with a single user message:
    • [{ "role": "user", "content": "Hello, Claude" }]
    • Example with multiple conversational turns:
    • [
    • { "role": "user", "content": "Hello there." },
    • { "role": "assistant", "content": "Hi, I'm Claude. How can I help you?" },
    • { "role": "user", "content": "Can you explain LLMs in plain English?" }
    • ]
    • Example with a partially-filled response from Claude:
    • [
    • {
    • "role": "user",
      
    • "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
      
    • },
    • { "role": "assistant", "content": "The best answer is (" }
    • ]
    • Each input message content may be either a single string or an array of
    • content blocks, where each block has a specific type. Using a string for
    • content is shorthand for an array of one content block of type "text". The
    • following input messages are equivalent:
    • { "role": "user", "content": "Hello, Claude" }
    • { "role": "user", "content": [{ "type": "text", "text": "Hello, Claude" }] }
    • Starting with Claude 3 models, you can also send image content blocks:
    • {
    • "role": "user",
    • "content": [
    • {
      
    •   "type": "image",
      
    •   "source": {
      
    •     "type": "base64",
      
    •     "media_type": "image/jpeg",
      
    •     "data": "/9j/4AAQSkZJRg..."
      
    •   }
      
    • },
      
    • { "type": "text", "text": "What is in this image?" }
      
    • ]
    • }
    • We currently support the base64 source type for images, and the image/jpeg,
    • image/png, image/gif, and image/webp media types.
    • more input examples.
    • Note that if you want to include a
    • the top-level system parameter — there is no "system" role for input
    • messages in the Messages API.
  • */
  • messages: Array;
  • /**
    • Body param: The model that will complete your prompt.\n\nSee
    • details and options.
  • */
  • model: MessagesAPI.Model;
  • /**
    • Body param: An object describing metadata about the request.
  • */
  • metadata?: MessagesAPI.Metadata;
  • /**
    • Body param: Custom text sequences that will cause the model to stop generating.
    • Our models will normally stop when they have naturally completed their turn,
    • which will result in a response stop_reason of "end_turn".
    • If you want the model to stop generating when it encounters custom strings of
    • text, you can use the stop_sequences parameter. If the model encounters one of
    • the custom sequences, the response stop_reason value will be "stop_sequence"
    • and the response stop_sequence value will contain the matched stop sequence.
  • */
  • stop_sequences?: Array;
  • /**
    • Body param: Whether to incrementally stream the response using server-sent
    • events.
    • details.
  • */
  • stream?: boolean;
  • /**
    • Body param: System prompt.
    • A system prompt is a way of providing context and instructions to Claude, such
    • as specifying a particular goal or role. See our
  • */
  • system?: string | Array;
  • /**
    • Body param: Amount of randomness injected into the response.
    • Defaults to 1.0. Ranges from 0.0 to 1.0. Use temperature closer to 0.0
    • for analytical / multiple choice, and closer to 1.0 for creative and
    • generative tasks.
    • Note that even with temperature of 0.0, the results will not be fully
    • deterministic.
  • */
  • temperature?: number;
  • /**
    • Body param: How the model should use the provided tools. The model can use a
    • specific tool, any available tool, or decide by itself.
  • */
  • tool_choice?: MessagesAPI.ToolChoice;
  • /**
    • Body param: Definitions of tools that the model may use.
    • If you include tools in your API request, the model may return tool_use
    • content blocks that represent the model's use of those tools. You can then run
    • those tools using the tool input generated by the model and then optionally
    • return results back to the model using tool_result content blocks.
    • Each tool definition includes:
      • name: Name of the tool.
      • description: Optional, but strongly-recommended description of the tool.
      • input_schema: JSON schema for the tool input shape that the model
        will produce in tool_use output content blocks.
    • For example, if you defined tools as:
    • [
    • {
    • "name": "get_stock_price",
      
    • "description": "Get the current stock price for a given ticker symbol.",
      
    • "input_schema": {
      
    •   "type": "object",
      
    •   "properties": {
      
    •     "ticker": {
      
    •       "type": "string",
      
    •       "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
      
    •     }
      
    •   },
      
    •   "required": ["ticker"]
      
    • }
      
    • }
    • ]
    • And then asked the model "What's the S&P 500 at today?", the model might produce
    • tool_use content blocks in the response like this:
    • [
    • {
    • "type": "tool_use",
      
    • "id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
      
    • "name": "get_stock_price",
      
    • "input": { "ticker": "^GSPC" }
      
    • }
    • ]
    • You might then run your get_stock_price tool with {"ticker": "^GSPC"} as an
    • input, and return the following back to the model in a subsequent user
    • message:
    • [
    • {
    • "type": "tool_result",
      
    • "tool_use_id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
      
    • "content": "259.75 USD"
      
    • }
    • ]
    • Tools can be used for workflows that include running client-side tools and
    • functions, or more generally whenever you want the model to produce a particular
    • JSON structure of output.
    • See our guide for more details.
  • */
  • tools?: Array&lt;PromptCachingBetaTool&gt;;
  • /**
    • Body param: Only sample from the top K options for each subsequent token.
    • Used to remove "long tail" low probability responses.
    • Recommended for advanced use cases only. You usually only need to use
    • temperature.
  • */
  • top_k?: number;
  • /**
    • Body param: Use nucleus sampling.
    • In nucleus sampling, we compute the cumulative distribution over all the options
    • for each subsequent token in decreasing probability order and cut it off once it
    • reaches a particular probability specified by top_p. You should either alter
    • temperature or top_p, but not both.
    • Recommended for advanced use cases only. You usually only need to use
    • temperature.
  • */
  • top_p?: number;
  • /**
    • Header param: Optional header to specify the beta version(s) you want to use.
  • */
  • betas?: Array<BetaAPI.AnthropicBeta>;
    -}

-export namespace MessageCreateParams {

  • /**
  • */
  • export type Metadata = MessagesAPI.Metadata;
  • /**
    • @deprecated use Anthropic.Messages.ToolChoiceAuto instead
  • */
  • export type ToolChoiceAuto = MessagesAPI.ToolChoiceAuto;
  • /**
    • @deprecated use Anthropic.Messages.ToolChoiceAny instead
  • */
  • export type ToolChoiceAny = MessagesAPI.ToolChoiceAny;
  • /**
    • @deprecated use Anthropic.Messages.ToolChoiceTool instead
  • */
  • export type ToolChoiceTool = MessagesAPI.ToolChoiceTool;
  • export type MessageCreateParamsNonStreaming = PromptCachingMessagesAPI.MessageCreateParamsNonStreaming;
  • export type MessageCreateParamsStreaming = PromptCachingMessagesAPI.MessageCreateParamsStreaming;
    -}

-export interface MessageCreateParamsNonStreaming extends MessageCreateParamsBase {

  • /**
    • Body param: Whether to incrementally stream the response using server-sent
    • events. See our streaming documentation for details.
  • */
  • stream?: false;
    -}

-export interface MessageCreateParamsStreaming extends MessageCreateParamsBase {

  • /**
    • Body param: Whether to incrementally stream the response using server-sent
    • events. See our streaming documentation for details.
  • */
  • stream: true;
    -}

-export declare namespace Messages {

  • export {
  • type PromptCachingBetaCacheControlEphemeral as PromptCachingBetaCacheControlEphemeral,
  • type PromptCachingBetaImageBlockParam as PromptCachingBetaImageBlockParam,
  • type PromptCachingBetaMessage as PromptCachingBetaMessage,
  • type PromptCachingBetaMessageParam as PromptCachingBetaMessageParam,
  • type PromptCachingBetaTextBlockParam as PromptCachingBetaTextBlockParam,
  • type PromptCachingBetaTool as PromptCachingBetaTool,
  • type PromptCachingBetaToolResultBlockParam as PromptCachingBetaToolResultBlockParam,
  • type PromptCachingBetaToolUseBlockParam as PromptCachingBetaToolUseBlockParam,
  • type PromptCachingBetaUsage as PromptCachingBetaUsage,
  • type RawPromptCachingBetaMessageStartEvent as RawPromptCachingBetaMessageStartEvent,
  • type RawPromptCachingBetaMessageStreamEvent as RawPromptCachingBetaMessageStreamEvent,
  • type MessageCreateParams as MessageCreateParams,
  • type MessageCreateParamsNonStreaming as MessageCreateParamsNonStreaming,
  • type MessageCreateParamsStreaming as MessageCreateParamsStreaming,
  • };
    -}
    diff --git src/resources/beta/prompt-caching/prompt-caching.ts src/resources/beta/prompt-caching/prompt-caching.ts
    deleted file mode 100644
    index 421f8621..00000000
    --- src/resources/beta/prompt-caching/prompt-caching.ts
    +++ /dev/null
    @@ -1,47 +0,0 @@
    -// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

-import { APIResource } from '../../../resource';
-import * as MessagesAPI from './messages';
-import {

  • MessageCreateParams,
  • MessageCreateParamsNonStreaming,
  • MessageCreateParamsStreaming,
  • Messages,
  • PromptCachingBetaCacheControlEphemeral,
  • PromptCachingBetaImageBlockParam,
  • PromptCachingBetaMessage,
  • PromptCachingBetaMessageParam,
  • PromptCachingBetaTextBlockParam,
  • PromptCachingBetaTool,
  • PromptCachingBetaToolResultBlockParam,
  • PromptCachingBetaToolUseBlockParam,
  • PromptCachingBetaUsage,
  • RawPromptCachingBetaMessageStartEvent,
  • RawPromptCachingBetaMessageStreamEvent,
    -} from './messages';

-export class PromptCaching extends APIResource {

  • messages: MessagesAPI.Messages = new MessagesAPI.Messages(this._client);
    -}

-PromptCaching.Messages = Messages;

-export declare namespace PromptCaching {

  • export {
  • Messages as Messages,
  • type PromptCachingBetaCacheControlEphemeral as PromptCachingBetaCacheControlEphemeral,
  • type PromptCachingBetaImageBlockParam as PromptCachingBetaImageBlockParam,
  • type PromptCachingBetaMessage as PromptCachingBetaMessage,
  • type PromptCachingBetaMessageParam as PromptCachingBetaMessageParam,
  • type PromptCachingBetaTextBlockParam as PromptCachingBetaTextBlockParam,
  • type PromptCachingBetaTool as PromptCachingBetaTool,
  • type PromptCachingBetaToolResultBlockParam as PromptCachingBetaToolResultBlockParam,
  • type PromptCachingBetaToolUseBlockParam as PromptCachingBetaToolUseBlockParam,
  • type PromptCachingBetaUsage as PromptCachingBetaUsage,
  • type RawPromptCachingBetaMessageStartEvent as RawPromptCachingBetaMessageStartEvent,
  • type RawPromptCachingBetaMessageStreamEvent as RawPromptCachingBetaMessageStreamEvent,
  • type MessageCreateParams as MessageCreateParams,
  • type MessageCreateParamsNonStreaming as MessageCreateParamsNonStreaming,
  • type MessageCreateParamsStreaming as MessageCreateParamsStreaming,
  • };
    -}
    diff --git src/resources/completions.ts src/resources/completions.ts
    index a2ef4d98..2260681d 100644
    --- src/resources/completions.ts
    +++ src/resources/completions.ts
    @@ -4,7 +4,7 @@ import { APIResource } from '../resource';
    import { APIPromise } from '../core';
    import * as Core from '../core';
    import * as CompletionsAPI from './completions';
    -import * as MessagesAPI from './messages';
    +import * as MessagesAPI from './messages/messages';
    import { Stream } from '../streaming';

export class Completions extends APIResource {
diff --git src/resources/index.ts src/resources/index.ts
index 59b714ff..23366973 100644
--- src/resources/index.ts
+++ src/resources/index.ts
@@ -1,12 +1,15 @@
// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

+export * from './shared';
export {
Beta,
type AnthropicBeta,
type BetaAPIError,
type BetaAuthenticationError,

  • type BetaBillingError,
    type BetaError,
    type BetaErrorResponse,
  • type BetaGatewayTimeoutError,
    type BetaInvalidRequestError,
    type BetaNotFoundError,
    type BetaOverloadedError,
    @@ -22,10 +25,14 @@ export {
    } from './completions';
    export {
    Messages,
  • type Base64PDFSource,
  • type CacheControlEphemeral,
    type ContentBlock,
    type ContentBlockDeltaEvent,
  • type ContentBlockParam,
    type ContentBlockStartEvent,
    type ContentBlockStopEvent,
  • type DocumentBlockParam,
    type ImageBlockParam,
    type InputJsonDelta,
    type InputJSONDelta,
    @@ -37,6 +44,7 @@ export {
    type MessageStopEvent,
    type MessageStreamEvent,
    type MessageStreamParams,
  • type MessageTokensCount,
    type Metadata,
    type Model,
    type RawContentBlockDeltaEvent,
    @@ -61,4 +69,6 @@ export {
    type MessageCreateParams,
    type MessageCreateParamsNonStreaming,
    type MessageCreateParamsStreaming,
    -} from './messages';
  • type MessageCountTokensParams,
    +} from './messages/messages';
    +export { ModelInfosPage, Models, type ModelInfo, type ModelListParams } from './models';
    diff --git a/src/resources/messages/batches.ts b/src/resources/messages/batches.ts
    new file mode 100644
    index 00000000..b4fd45e8
    --- /dev/null
    +++ src/resources/messages/batches.ts
    @@ -0,0 +1,298 @@
    +// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

+import { APIResource } from '../../resource';
+import { isRequestOptions } from '../../core';
+import * as Core from '../../core';
+import * as Shared from '../shared';
+import * as MessagesAPI from './messages';
+import { Page, type PageParams } from '../../pagination';
+import { JSONLDecoder } from '../../internal/decoders/jsonl';
+import { AnthropicError } from '../../error';
+
+export class Batches extends APIResource {

  • /**
    • Send a batch of Message creation requests.
    • The Message Batches API can be used to process multiple Messages API requests at
    • once. Once a Message Batch is created, it begins processing immediately. Batches
    • can take up to 24 hours to complete.
  • */
  • create(body: BatchCreateParams, options?: Core.RequestOptions): Core.APIPromise&lt;MessageBatch&gt; {
  • return this._client.post('/v1/messages/batches', { body, ...options });
  • }
  • /**
    • This endpoint is idempotent and can be used to poll for Message Batch
    • completion. To access the results of a Message Batch, make a request to the
    • results_url field in the response.
  • */
  • retrieve(messageBatchId: string, options?: Core.RequestOptions): Core.APIPromise&lt;MessageBatch&gt; {
  • return this._client.get(`/v1/messages/batches/${messageBatchId}`, options);
  • }
  • /**
    • List all Message Batches within a Workspace. Most recently created batches are
    • returned first.
  • */
  • list(
  • query?: BatchListParams,
  • options?: Core.RequestOptions,
  • ): Core.PagePromise<MessageBatchesPage, MessageBatch>;
  • list(options?: Core.RequestOptions): Core.PagePromise<MessageBatchesPage, MessageBatch>;
  • list(
  • query: BatchListParams | Core.RequestOptions = {},
  • options?: Core.RequestOptions,
  • ): Core.PagePromise<MessageBatchesPage, MessageBatch> {
  • if (isRequestOptions(query)) {
  •  return this.list({}, query);
    
  • }
  • return this._client.getAPIList('/v1/messages/batches', MessageBatchesPage, { query, ...options });
  • }
  • /**
    • Batches may be canceled any time before processing ends. Once cancellation is
    • initiated, the batch enters a canceling state, at which time the system may
    • complete any in-progress, non-interruptible requests before finalizing
    • cancellation.
    • The number of canceled requests is specified in request_counts. To determine
    • which requests were canceled, check the individual results within the batch.
    • Note that cancellation may not result in any canceled requests if they were
    • non-interruptible.
  • */
  • cancel(messageBatchId: string, options?: Core.RequestOptions): Core.APIPromise&lt;MessageBatch&gt; {
  • return this._client.post(`/v1/messages/batches/${messageBatchId}/cancel`, options);
  • }
  • /**
    • Streams the results of a Message Batch as a .jsonl file.
    • Each line in the file is a JSON object containing the result of a single request
    • in the Message Batch. Results are not guaranteed to be in the same order as
    • requests. Use the custom_id field to match results to requests.
  • */
  • async results(
  • messageBatchId: string,
  • options?: Core.RequestOptions,
  • ): Promise&lt;JSONLDecoder&lt;MessageBatchIndividualResponse&gt;&gt; {
  • const batch = await this.retrieve(messageBatchId);
  • if (!batch.results_url) {
  •  throw new AnthropicError(
    
  •    `No batch \`results_url\`; Has it finished processing? ${batch.processing_status} - ${batch.id}`,
    
  •  );
    
  • }
  • return this._client
  •  .get(batch.results_url, { ...options, __binaryResponse: true })
    
  •  ._thenUnwrap((_, props) => JSONLDecoder.fromResponse(props.response, props.controller));
    
  • }
    +}
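The batch lifecycle implied by this class (create, poll `processing_status`, then stream `results`) can be sketched as follows. This is a hypothetical illustration, not part of the diff: the `client` calls require an API key and network access, so they appear as comments, and only the local request construction and status check are live code.

```typescript
// Sketch of the Message Batches flow (assumed usage, not from the diff).
// Each request carries a unique custom_id so results, which may arrive out
// of order, can be matched back to their originating requests.

type ProcessingStatus = 'in_progress' | 'canceling' | 'ended';

const requests = [
  {
    custom_id: 'greeting-1',
    params: {
      model: 'claude-3-5-sonnet-20241022',
      max_tokens: 256,
      messages: [{ role: 'user' as const, content: 'Hello, Claude' }],
    },
  },
  {
    custom_id: 'greeting-2',
    params: {
      model: 'claude-3-5-sonnet-20241022',
      max_tokens: 256,
      messages: [{ role: 'user' as const, content: 'Explain LLMs in one line.' }],
    },
  },
];

// results_url is only populated once the batch reaches the 'ended' state.
function batchFinished(status: ProcessingStatus): boolean {
  return status === 'ended';
}

// With a configured client (assumed: `const client = new Anthropic()`):
// const batch = await client.messages.batches.create({ requests });
// let status = batch.processing_status;
// while (!batchFinished(status)) {
//   await new Promise((r) => setTimeout(r, 60_000));
//   status = (await client.messages.batches.retrieve(batch.id)).processing_status;
// }
// for await (const entry of await client.messages.batches.results(batch.id)) {
//   console.log(entry.custom_id, entry.result.type);
// }
```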

+export class MessageBatchesPage extends Page&lt;MessageBatch&gt; {}
+
+export interface MessageBatch {

  • /**
    • Unique object identifier.
    • The format and length of IDs may change over time.
  • */
  • id: string;
  • /**
    • RFC 3339 datetime string representing the time at which the Message Batch was
    • archived and its results became unavailable.
  • */
  • archived_at: string | null;
  • /**
    • RFC 3339 datetime string representing the time at which cancellation was
    • initiated for the Message Batch. Specified only if cancellation was initiated.
  • */
  • cancel_initiated_at: string | null;
  • /**
    • RFC 3339 datetime string representing the time at which the Message Batch was
    • created.
  • */
  • created_at: string;
  • /**
    • RFC 3339 datetime string representing the time at which processing for the
    • Message Batch ended. Specified only once processing ends.
    • Processing ends when every request in a Message Batch has either succeeded,
    • errored, canceled, or expired.
  • */
  • ended_at: string | null;
  • /**
    • RFC 3339 datetime string representing the time at which the Message Batch will
    • expire and end processing, which is 24 hours after creation.
  • */
  • expires_at: string;
  • /**
    • Processing status of the Message Batch.
  • */
  • processing_status: 'in_progress' | 'canceling' | 'ended';
  • /**
    • Tallies requests within the Message Batch, categorized by their status.
    • Requests start as processing and move to one of the other statuses only once
    • processing of the entire batch ends. The sum of all values always matches the
    • total number of requests in the batch.
  • */
  • request_counts: MessageBatchRequestCounts;
  • /**
    • URL to a .jsonl file containing the results of the Message Batch requests.
    • Specified only once processing ends.
    • Results in the file are not guaranteed to be in the same order as requests. Use
    • the custom_id field to match results to requests.
  • */
  • results_url: string | null;
  • /**
    • Object type.
    • For Message Batches, this is always "message_batch".
  • */
  • type: 'message_batch';
    +}

+export interface MessageBatchCanceledResult {

  • type: 'canceled';
    +}

+export interface MessageBatchErroredResult {

  • error: Shared.ErrorResponse;
  • type: 'errored';
    +}

+export interface MessageBatchExpiredResult {

  • type: 'expired';
    +}

+export interface MessageBatchIndividualResponse {

  • /**
    • Developer-provided ID created for each request in a Message Batch. Useful for
    • matching results to requests, as results may be given out of request order.
    • Must be unique for each request within the Message Batch.
  • */
  • custom_id: string;
  • /**
    • Processing result for this request.
    • Contains a Message output if processing was successful, an error response if
    • processing failed, or the reason why processing was not attempted, such as
    • cancellation or expiration.
  • */
  • result: MessageBatchResult;
    +}

+export interface MessageBatchRequestCounts {

  • /**
    • Number of requests in the Message Batch that have been canceled.
    • This is zero until processing of the entire Message Batch has ended.
  • */
  • canceled: number;
  • /**
    • Number of requests in the Message Batch that encountered an error.
    • This is zero until processing of the entire Message Batch has ended.
  • */
  • errored: number;
  • /**
    • Number of requests in the Message Batch that have expired.
    • This is zero until processing of the entire Message Batch has ended.
  • */
  • expired: number;
  • /**
    • Number of requests in the Message Batch that are processing.
  • */
  • processing: number;
  • /**
    • Number of requests in the Message Batch that have completed successfully.
    • This is zero until processing of the entire Message Batch has ended.
  • */
  • succeeded: number;
    +}

+/**

    • Processing result for this request.
    • Contains a Message output if processing was successful, an error response if
    • processing failed, or the reason why processing was not attempted, such as
    • cancellation or expiration.
  • */
    +export type MessageBatchResult =
  • | MessageBatchSucceededResult
  • | MessageBatchErroredResult
  • | MessageBatchCanceledResult
  • | MessageBatchExpiredResult;

+export interface MessageBatchSucceededResult {

  • message: MessagesAPI.Message;
  • type: 'succeeded';
    +}

+export interface BatchCreateParams {

  • /**
    • List of requests for prompt completion. Each is an individual request to create
    • a Message.
  • */
  • requests: Array<BatchCreateParams.Request>;
    +}

+export namespace BatchCreateParams {

  • export interface Request {
  • /**
  • * Developer-provided ID created for each request in a Message Batch. Useful for
    
  • * matching results to requests, as results may be given out of request order.
    
  • *
    
  • * Must be unique for each request within the Message Batch.
    
  • */
    
  • custom_id: string;
  • /**
  • * Messages API creation parameters for the individual request.
    
  • *
    
  • * See the [Messages API reference](/en/api/messages) for full documentation on
    
  • * available parameters.
    
  • */
    
  • params: MessagesAPI.MessageCreateParamsNonStreaming;
  • }
    +}

+export interface BatchListParams extends PageParams {}
+
+Batches.MessageBatchesPage = MessageBatchesPage;
+
+export declare namespace Batches {

  • export {
  • type MessageBatch as MessageBatch,
  • type MessageBatchCanceledResult as MessageBatchCanceledResult,
  • type MessageBatchErroredResult as MessageBatchErroredResult,
  • type MessageBatchExpiredResult as MessageBatchExpiredResult,
  • type MessageBatchIndividualResponse as MessageBatchIndividualResponse,
  • type MessageBatchRequestCounts as MessageBatchRequestCounts,
  • type MessageBatchResult as MessageBatchResult,
  • type MessageBatchSucceededResult as MessageBatchSucceededResult,
  • MessageBatchesPage as MessageBatchesPage,
  • type BatchCreateParams as BatchCreateParams,
  • type BatchListParams as BatchListParams,
  • };
    +}
    diff --git a/src/resources/messages/index.ts b/src/resources/messages/index.ts
    new file mode 100644
    index 00000000..10308d2a
    --- /dev/null
    +++ src/resources/messages/index.ts
    @@ -0,0 +1,63 @@
    +// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

+export {

  • MessageBatchesPage,
  • Batches,
  • type MessageBatch,
  • type MessageBatchCanceledResult,
  • type MessageBatchErroredResult,
  • type MessageBatchExpiredResult,
  • type MessageBatchIndividualResponse,
  • type MessageBatchRequestCounts,
  • type MessageBatchResult,
  • type MessageBatchSucceededResult,
  • type BatchCreateParams,
  • type BatchListParams,
    +} from './batches';
    +export {
  • Messages,
  • type Base64PDFSource,
  • type CacheControlEphemeral,
  • type ContentBlock,
  • type ContentBlockDeltaEvent,
  • type ContentBlockParam,
  • type ContentBlockStartEvent,
  • type ContentBlockStopEvent,
  • type DocumentBlockParam,
  • type ImageBlockParam,
  • type InputJSONDelta,
  • type Message,
  • type MessageDeltaEvent,
  • type MessageDeltaUsage,
  • type MessageParam,
  • type MessageStartEvent,
  • type MessageStopEvent,
  • type MessageStreamEvent,
  • type MessageTokensCount,
  • type Metadata,
  • type Model,
  • type RawContentBlockDeltaEvent,
  • type RawContentBlockStartEvent,
  • type RawContentBlockStopEvent,
  • type RawMessageDeltaEvent,
  • type RawMessageStartEvent,
  • type RawMessageStopEvent,
  • type RawMessageStreamEvent,
  • type TextBlock,
  • type TextBlockParam,
  • type TextDelta,
  • type Tool,
  • type ToolChoice,
  • type ToolChoiceAny,
  • type ToolChoiceAuto,
  • type ToolChoiceTool,
  • type ToolResultBlockParam,
  • type ToolUseBlock,
  • type ToolUseBlockParam,
  • type Usage,
  • type MessageCreateParams,
  • type MessageCreateParamsBase,
  • type MessageCreateParamsNonStreaming,
  • type MessageCreateParamsStreaming,
  • type MessageCountTokensParams,
    +} from './messages';
    diff --git src/resources/messages.ts src/resources/messages/messages.ts
    similarity index 71%
    rename from src/resources/messages.ts
    rename to src/resources/messages/messages.ts
    index a13c43f4..a1affbf5 100644
    --- src/resources/messages.ts
    +++ src/resources/messages/messages.ts
    @@ -1,15 +1,32 @@
    // File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

-import { APIResource } from '../resource';
-import { APIPromise } from '../core';
-import * as Core from '../core';
+import { APIResource } from '../../resource';
+import { APIPromise } from '../../core';
+import * as Core from '../../core';
import * as MessagesAPI from './messages';
-import { Stream } from '../streaming';
-import { MessageStream } from '../lib/MessageStream';

-export { MessageStream } from '../lib/MessageStream';
+import * as BatchesAPI from './batches';
+import {

  • BatchCreateParams,
  • BatchListParams,
  • Batches,
  • MessageBatch,
  • MessageBatchCanceledResult,
  • MessageBatchErroredResult,
  • MessageBatchExpiredResult,
  • MessageBatchIndividualResponse,
  • MessageBatchRequestCounts,
  • MessageBatchResult,
  • MessageBatchSucceededResult,
  • MessageBatchesPage,
    +} from './batches';
    +import { Stream } from '../../streaming';
    +import { MessageStream } from '../../lib/MessageStream';

+export { MessageStream } from '../../lib/MessageStream';

export class Messages extends APIResource {

  • batches: BatchesAPI.Batches = new BatchesAPI.Batches(this._client);
  • /**
    • Send a structured list of input messages with text and/or image content, and the
    • model will generate the next message in the conversation.
      @@ -51,20 +68,62 @@ export class Messages extends APIResource {
      stream(body: MessageStreamParams, options?: Core.RequestOptions): MessageStream {
      return MessageStream.createMessage(this, body, options);
      }
  • /**
    • Count the number of tokens in a Message.
    • The Token Count API can be used to count the number of tokens in a Message,
    • including tools, images, and documents, without creating it.
  • */
  • countTokens(
  • body: MessageCountTokensParams,
  • options?: Core.RequestOptions,
  • ): Core.APIPromise&lt;MessageTokensCount&gt; {
  • return this._client.post('/v1/messages/count_tokens', { body, ...options });
  • }
    +}
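The new `countTokens` method added above can be exercised as sketched below. The response shape mirrors `MessageTokensCount` from this diff; the actual call needs an API key, so it is shown commented out with an offline stand-in (the `input_tokens` value is hypothetical).

```typescript
// Sketch of the Token Count endpoint usage (assumed client, hypothetical values).

interface MessageTokensCount {
  input_tokens: number;
}

const countParams = {
  model: 'claude-3-5-sonnet-20241022',
  messages: [{ role: 'user' as const, content: 'Hello, Claude' }],
};

// With a configured client:
// const count: MessageTokensCount = await client.messages.countTokens(countParams);

// Offline stand-in illustrating the response shape:
const count: MessageTokensCount = { input_tokens: 10 };
```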

+export interface Base64PDFSource {

  • data: string;
  • media_type: 'application/pdf';
  • type: 'base64';
    +}

+export interface CacheControlEphemeral {

  • type: 'ephemeral';
    }

export type ContentBlock = TextBlock | ToolUseBlock;

export type ContentBlockDeltaEvent = RawContentBlockDeltaEvent;

+export type ContentBlockParam =

  • | TextBlockParam
  • | ImageBlockParam
  • | ToolUseBlockParam
  • | ToolResultBlockParam
  • | DocumentBlockParam;

export type ContentBlockStartEvent = RawContentBlockStartEvent;

export type ContentBlockStopEvent = RawContentBlockStopEvent;

+export interface DocumentBlockParam {

  • source: Base64PDFSource;
  • type: 'document';
  • cache_control?: CacheControlEphemeral | null;
    +}

export interface ImageBlockParam {
source: ImageBlockParam.Source;

type: 'image';
+

  • cache_control?: CacheControlEphemeral | null;
    }

export namespace ImageBlockParam {
@@ -200,7 +259,7 @@ export interface MessageDeltaUsage {
}

export interface MessageParam {

  • content: string | Array<TextBlockParam | ImageBlockParam | ToolUseBlockParam | ToolResultBlockParam>;
  • content: string | Array&lt;ContentBlockParam&gt;;

    role: 'user' | 'assistant';
    }
    @@ -211,6 +270,14 @@ export type MessageStopEvent = RawMessageStopEvent;

export type MessageStreamEvent = RawMessageStreamEvent;

+export interface MessageTokensCount {

  • /**
    • The total number of tokens across the provided list of messages, system prompt,
    • and tools.
  • */
  • input_tokens: number;
    +}

export interface Metadata {
/**
* An external identifier for the user who is associated with the request.
@@ -239,8 +306,7 @@ export type Model =
| 'claude-3-sonnet-20240229'
| 'claude-3-haiku-20240307'
| 'claude-2.1'

  • | 'claude-2.0'
  • | 'claude-instant-1.2';
  • | 'claude-2.0';

type DeprecatedModelsType = {
[K in Model]?: string;
@@ -334,6 +400,8 @@ export interface TextBlockParam {
text: string;

type: 'text';
+

  • cache_control?: CacheControlEphemeral | null;
    }
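The `cache_control` field threaded through these block params enables prompt caching; a cacheable text block can be marked as sketched below. This is assumed usage, not part of the diff: the interfaces are local copies mirroring the diff's shapes, and the surrounding `messages.create` call is only referenced in a comment.

```typescript
// Sketch: marking a large, stable system block as cacheable via the
// cache_control field added in this diff. Cache hits on later requests are
// reported through cache_read_input_tokens in the Usage object.

interface CacheControlEphemeral {
  type: 'ephemeral';
}

interface TextBlockParam {
  text: string;
  type: 'text';
  cache_control?: CacheControlEphemeral | null;
}

const cachedSystemBlock: TextBlockParam = {
  type: 'text',
  text: 'You are a helpful assistant. <large, stable context goes here>',
  cache_control: { type: 'ephemeral' },
};

// Passed as e.g. `system: [cachedSystemBlock]` in a messages.create call.
```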

export interface TextDelta {
@@ -358,6 +426,8 @@ export interface Tool {
*/
name: string;

  • cache_control?: CacheControlEphemeral | null;
  • /**
    • Description of what this tool does.

@@ -445,6 +515,8 @@ export interface ToolResultBlockParam {

type: 'tool_result';

  • cache_control?: CacheControlEphemeral | null;

  • content?: string | Array<TextBlockParam | ImageBlockParam>;

    is_error?: boolean;
    @@ -468,9 +540,21 @@ export interface ToolUseBlockParam {
    name: string;

    type: 'tool_use';

  • cache_control?: CacheControlEphemeral | null;
    }

export interface Usage {

  • /**
    • The number of input tokens used to create the cache entry.
  • */
  • cache_creation_input_tokens: number | null;
  • /**
    • The number of input tokens read from the cache.
  • */
  • cache_read_input_tokens: number | null;
  • /**
    • The number of input tokens which were used.
      */
      @@ -790,12 +874,205 @@ export interface MessageCreateParamsStreaming extends MessageCreateParamsBase {

export type MessageStreamParams = MessageCreateParamsBase;

+export interface MessageCountTokensParams {

  • /**
    • Input messages.
    • Our models are trained to operate on alternating user and assistant
    • conversational turns. When creating a new Message, you specify the prior
    • conversational turns with the messages parameter, and the model then generates
    • the next Message in the conversation. Consecutive user or assistant turns
    • in your request will be combined into a single turn.
    • Each input message must be an object with a role and content. You can
    • specify a single user-role message, or you can include multiple user and
    • assistant messages.
    • If the final message uses the assistant role, the response content will
    • continue immediately from the content in that message. This can be used to
    • constrain part of the model's response.
    • Example with a single user message:
    • [{ "role": "user", "content": "Hello, Claude" }]
    • Example with multiple conversational turns:
    • [
    • { "role": "user", "content": "Hello there." },
    • { "role": "assistant", "content": "Hi, I'm Claude. How can I help you?" },
    • { "role": "user", "content": "Can you explain LLMs in plain English?" }
    • ]
    • Example with a partially-filled response from Claude:
    • [
    • {
    • "role": "user",
      
    • "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
      
    • },
    • { "role": "assistant", "content": "The best answer is (" }
    • ]
    • Each input message content may be either a single string or an array of
    • content blocks, where each block has a specific type. Using a string for
    • content is shorthand for an array of one content block of type "text". The
    • following input messages are equivalent:
    • { "role": "user", "content": "Hello, Claude" }
    • { "role": "user", "content": [{ "type": "text", "text": "Hello, Claude" }] }
    • Starting with Claude 3 models, you can also send image content blocks:
    • {
    • "role": "user",
    • "content": [
    • {
      
    •   "type": "image",
      
    •   "source": {
      
    •     "type": "base64",
      
    •     "media_type": "image/jpeg",
      
    •     "data": "/9j/4AAQSkZJRg..."
      
    •   }
      
    • },
      
    • { "type": "text", "text": "What is in this image?" }
      
    • ]
    • }
    • We currently support the base64 source type for images, and the image/jpeg,
    • image/png, image/gif, and image/webp media types.
    • See our examples documentation for more input examples.
    • Note that if you want to include a system prompt, you can use
    • the top-level system parameter — there is no "system" role for input
    • messages in the Messages API.
  • */
  • messages: Array&lt;MessageParam&gt;;
  • /**
    • The model that will complete your prompt. See our models documentation for
    • details and options.
  • */
  • model: Model;
  • /**
    • System prompt.
    • A system prompt is a way of providing context and instructions to Claude, such
    • as specifying a particular goal or role. See our guide to system prompts.
  • system?: string | Array&lt;TextBlockParam&gt;;
  • /**
    • How the model should use the provided tools. The model can use a specific tool,
    • any available tool, or decide by itself.
  • */
  • tool_choice?: ToolChoice;
  • /**
    • Definitions of tools that the model may use.
    • If you include tools in your API request, the model may return tool_use
    • content blocks that represent the model's use of those tools. You can then run
    • those tools using the tool input generated by the model and then optionally
    • return results back to the model using tool_result content blocks.
    • Each tool definition includes:
      • name: Name of the tool.
      • description: Optional, but strongly-recommended description of the tool.
      • input_schema: JSON schema for the tool input shape that the model
        will produce in tool_use output content blocks.
    • For example, if you defined tools as:
    • [
    • {
    • "name": "get_stock_price",
      
    • "description": "Get the current stock price for a given ticker symbol.",
      
    • "input_schema": {
      
    •   "type": "object",
      
    •   "properties": {
      
    •     "ticker": {
      
    •       "type": "string",
      
    •       "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
      
    •     }
      
    •   },
      
    •   "required": ["ticker"]
      
    • }
      
    • }
    • ]
    • And then asked the model "What's the S&P 500 at today?", the model might produce
    • tool_use content blocks in the response like this:
    • [
    • {
    • "type": "tool_use",
      
    • "id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
      
    • "name": "get_stock_price",
      
    • "input": { "ticker": "^GSPC" }
      
    • }
    • ]
    • You might then run your get_stock_price tool with {"ticker": "^GSPC"} as an
    • input, and return the following back to the model in a subsequent user
    • message:
    • [
    • {
    • "type": "tool_result",
      
    • "tool_use_id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
      
    • "content": "259.75 USD"
      
    • }
    • ]
    • Tools can be used for workflows that include running client-side tools and
    • functions, or more generally whenever you want the model to produce a particular
    • JSON structure of output.
    • See our guide for more details.
  • */
  • tools?: Array;
    +}
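The tool-use round trip described in the `tools` docstring above (run the requested tool, then send a `tool_result` back keyed by the originating `tool_use` id) can be sketched with plain local shapes. This is a minimal sketch: `ToolUseBlock`/`ToolResultBlock` are local stand-ins mirroring the JSON in the docstring, and `runTools`/`handlers` are illustrative helpers, not SDK API.

```typescript
// Local stand-ins for the tool_use / tool_result block shapes shown above.
interface ToolUseBlock {
  type: 'tool_use';
  id: string;
  name: string;
  input: Record<string, unknown>;
}

interface ToolResultBlock {
  type: 'tool_result';
  tool_use_id: string;
  content: string;
}

// Run each requested tool and pair its result with the originating block's id,
// which is how tool_result blocks are correlated with tool_use blocks.
function runTools(
  blocks: ToolUseBlock[],
  handlers: Record<string, (input: Record<string, unknown>) => string>,
): ToolResultBlock[] {
  return blocks.map((block) => ({
    type: 'tool_result',
    tool_use_id: block.id,
    content: handlers[block.name](block.input),
  }));
}

// Mirrors the get_stock_price example: the model asked about ^GSPC, we answer.
const results = runTools(
  [
    {
      type: 'tool_use',
      id: 'toolu_01D7FLrfh4GYq7yT1ULFeyMV',
      name: 'get_stock_price',
      input: { ticker: '^GSPC' },
    },
  ],
  { get_stock_price: () => '259.75 USD' },
);
console.log(results[0]);
```

The resulting `tool_result` blocks would then go into the next `user` message's content, as the docstring describes.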

+Messages.Batches = Batches;
+Messages.MessageBatchesPage = MessageBatchesPage;
+
 export declare namespace Messages {
   export {
+    type Base64PDFSource as Base64PDFSource,
+    type CacheControlEphemeral as CacheControlEphemeral,
     type ContentBlock as ContentBlock,
     type ContentBlockDeltaEvent as ContentBlockDeltaEvent,
+    type ContentBlockParam as ContentBlockParam,
     type ContentBlockStartEvent as ContentBlockStartEvent,
     type ContentBlockStopEvent as ContentBlockStopEvent,
+    type DocumentBlockParam as DocumentBlockParam,
     type ImageBlockParam as ImageBlockParam,
     type InputJsonDelta as InputJsonDelta,
     type InputJSONDelta as InputJSONDelta,
@@ -806,6 +1083,7 @@ export declare namespace Messages {
     type MessageStartEvent as MessageStartEvent,
     type MessageStopEvent as MessageStopEvent,
     type MessageStreamEvent as MessageStreamEvent,
+    type MessageTokensCount as MessageTokensCount,
     type Metadata as Metadata,
     type Model as Model,
     type RawContentBlockDeltaEvent as RawContentBlockDeltaEvent,
@@ -831,5 +1109,21 @@ export declare namespace Messages {
     type MessageCreateParamsNonStreaming as MessageCreateParamsNonStreaming,
     type MessageCreateParamsStreaming as MessageCreateParamsStreaming,
     type MessageStreamParams as MessageStreamParams,
+    type MessageCountTokensParams as MessageCountTokensParams,
+  };
+
+  export {
+    Batches as Batches,
+    type MessageBatch as MessageBatch,
+    type MessageBatchCanceledResult as MessageBatchCanceledResult,
+    type MessageBatchErroredResult as MessageBatchErroredResult,
+    type MessageBatchExpiredResult as MessageBatchExpiredResult,
+    type MessageBatchIndividualResponse as MessageBatchIndividualResponse,
+    type MessageBatchRequestCounts as MessageBatchRequestCounts,
+    type MessageBatchResult as MessageBatchResult,
+    type MessageBatchSucceededResult as MessageBatchSucceededResult,
+    MessageBatchesPage as MessageBatchesPage,
+    type BatchCreateParams as BatchCreateParams,
+    type BatchListParams as BatchListParams,
   };
 }
    diff --git a/src/resources/models.ts b/src/resources/models.ts
    new file mode 100644
    index 00000000..50e80399
    --- /dev/null
    +++ src/resources/models.ts
    @@ -0,0 +1,75 @@
    +// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

+import { APIResource } from '../resource';
+import { isRequestOptions } from '../core';
+import * as Core from '../core';
+import { Page, type PageParams } from '../pagination';
+
+export class Models extends APIResource {

+  /**
+   * Get a specific model.
+   *
+   * The Models API response can be used to determine information about a specific
+   * model or resolve a model alias to a model ID.
+   */
+  retrieve(modelId: string, options?: Core.RequestOptions): Core.APIPromise<ModelInfo> {
+    return this._client.get(`/v1/models/${modelId}`, options);
+  }
+
+  /**
+   * List available models.
+   *
+   * The Models API response can be used to determine which models are available for
+   * use in the API. More recently released models are listed first.
+   */
+  list(query?: ModelListParams, options?: Core.RequestOptions): Core.PagePromise<ModelInfosPage, ModelInfo>;
+  list(options?: Core.RequestOptions): Core.PagePromise<ModelInfosPage, ModelInfo>;
+  list(
+    query: ModelListParams | Core.RequestOptions = {},
+    options?: Core.RequestOptions,
+  ): Core.PagePromise<ModelInfosPage, ModelInfo> {
+    if (isRequestOptions(query)) {
+      return this.list({}, query);
+    }
+    return this._client.getAPIList('/v1/models', ModelInfosPage, { query, ...options });
+  }
+}

+export class ModelInfosPage extends Page<ModelInfo> {}
+
+export interface ModelInfo {

+  /**
+   * Unique model identifier.
+   */
+  id: string;
+
+  /**
+   * RFC 3339 datetime string representing the time at which the model was released.
+   * May be set to an epoch value if the release date is unknown.
+   */
+  created_at: string;
+
+  /**
+   * A human-readable name for the model.
+   */
+  display_name: string;
+
+  /**
+   * Object type.
+   *
+   * For Models, this is always `"model"`.
+   */
+  type: 'model';
    +}

+export interface ModelListParams extends PageParams {}
+
+Models.ModelInfosPage = ModelInfosPage;
+
+export declare namespace Models {

+  export {
+    type ModelInfo as ModelInfo,
+    ModelInfosPage as ModelInfosPage,
+    type ModelListParams as ModelListParams,
+  };
    +}
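The new `Models.list` endpoint above documents that more recently released models come first. A small sketch of consuming that ordering locally; `ModelInfo` here is a local copy of the interface from this diff, and `newestFirst` is an illustrative helper, not SDK API.

```typescript
// Local copy of the ModelInfo shape introduced in src/resources/models.ts.
interface ModelInfo {
  id: string;
  created_at: string; // RFC 3339 datetime string
  display_name: string;
  type: 'model';
}

// Sort a page of models into the order the API documents: newest first.
// RFC 3339 timestamps with a common UTC offset compare correctly as strings.
function newestFirst(models: ModelInfo[]): ModelInfo[] {
  return [...models].sort((a, b) => b.created_at.localeCompare(a.created_at));
}

const page: ModelInfo[] = [
  { id: 'claude-3-opus-20240229', created_at: '2024-02-29T00:00:00Z', display_name: 'Claude 3 Opus', type: 'model' },
  { id: 'claude-3-5-sonnet-20241022', created_at: '2024-10-22T00:00:00Z', display_name: 'Claude 3.5 Sonnet', type: 'model' },
];

console.log(newestFirst(page).map((m) => m.id));
```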
    diff --git a/src/resources/shared.ts b/src/resources/shared.ts
    new file mode 100644
    index 00000000..d731c1f9
    --- /dev/null
    +++ src/resources/shared.ts
    @@ -0,0 +1,72 @@
    +// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

+export interface APIErrorObject {

+  message: string;
+  type: 'api_error';
+}

+export interface AuthenticationError {
+  message: string;
+  type: 'authentication_error';
+}

+export interface BillingError {
+  message: string;
+  type: 'billing_error';
+}

+export type ErrorObject =
+  | InvalidRequestError
+  | AuthenticationError
+  | BillingError
+  | PermissionError
+  | NotFoundError
+  | RateLimitError
+  | GatewayTimeoutError
+  | APIErrorObject
+  | OverloadedError;

+export interface ErrorResponse {
+  error: ErrorObject;
+  type: 'error';
+}

+export interface GatewayTimeoutError {
+  message: string;
+  type: 'timeout_error';
+}

+export interface InvalidRequestError {
+  message: string;
+  type: 'invalid_request_error';
+}

+export interface NotFoundError {
+  message: string;
+  type: 'not_found_error';
+}

+export interface OverloadedError {
+  message: string;
+  type: 'overloaded_error';
+}

+export interface PermissionError {
+  message: string;
+  type: 'permission_error';
+}

+export interface RateLimitError {
+  message: string;
+  type: 'rate_limit_error';
+}
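One way the shared error taxonomy above might be consumed on the caller side: branching on the `type` discriminant. The literal types come from the interfaces in this diff; the retry policy itself (which error types are worth retrying) is an illustrative assumption, not something the SDK prescribes.

```typescript
// The type discriminants defined in src/resources/shared.ts above.
type ErrorType =
  | 'invalid_request_error'
  | 'authentication_error'
  | 'billing_error'
  | 'permission_error'
  | 'not_found_error'
  | 'rate_limit_error'
  | 'timeout_error'
  | 'api_error'
  | 'overloaded_error';

// Transient server-side conditions are candidates for retry with backoff;
// client-side problems (bad request, auth, billing, permissions) are not.
function isRetryable(type: ErrorType): boolean {
  return (
    type === 'rate_limit_error' ||
    type === 'timeout_error' ||
    type === 'api_error' ||
    type === 'overloaded_error'
  );
}

console.log(isRetryable('overloaded_error')); // transient server condition
console.log(isRetryable('invalid_request_error')); // caller bug, do not retry
```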
    diff --git src/version.ts src/version.ts
    index ab5165fb..4a46c186 100644
    --- src/version.ts
    +++ src/version.ts
    @@ -1 +1 @@
    -export const VERSION = '0.32.1'; // x-release-please-version
    +export const VERSION = '0.33.1'; // x-release-please-version
    diff --git tests/api-resources/MessageStream.test.ts tests/api-resources/MessageStream.test.ts
    index 81b9c81e..0051d397 100644
    --- tests/api-resources/MessageStream.test.ts
    +++ tests/api-resources/MessageStream.test.ts
    @@ -149,7 +149,12 @@ describe('MessageStream class', () => {
    model: 'claude-3-opus-20240229',
    stop_reason: 'end_turn',
    stop_sequence: null,
-    usage: { output_tokens: 6, input_tokens: 10 },
+    usage: {
+      output_tokens: 6,
+      input_tokens: 10,
+      cache_creation_input_tokens: null,
+      cache_read_input_tokens: null,
+    },
     }),
    
    );

@@ -209,22 +214,22 @@ describe('MessageStream class', () => {
},
{
"args": [

-        "{"type":"message_start","message":{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10}}}",
-        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10}}",
+        "{"type":"message_start","message":{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10,"cache_creation_input_tokens":null,"cache_read_input_tokens":null}}}",
+        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10,"cache_creation_input_tokens":null,"cache_read_input_tokens":null}}",
         ],
         "type": "streamEvent",
       },
       {
         "args": [
           "{"type":"content_block_start","content_block":{"type":"text","text":""},"index":0}",
    
-        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":""}],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10}}",
+        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":""}],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10,"cache_creation_input_tokens":null,"cache_read_input_tokens":null}}",
         ],
         "type": "streamEvent",
       },
       {
         "args": [
           "{"type":"content_block_delta","delta":{"type":"text_delta","text":"Hello"},"index":0}",
    
-        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello"}],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10}}",
+        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello"}],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10,"cache_creation_input_tokens":null,"cache_read_input_tokens":null}}",
         ],
         "type": "streamEvent",
       },
    

@@ -238,7 +243,7 @@ describe('MessageStream class', () => {
{
"args": [
"{"type":"content_block_delta","delta":{"type":"text_delta","text":" ther"},"index":0}",

-        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello ther"}],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10}}",
+        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello ther"}],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10,"cache_creation_input_tokens":null,"cache_read_input_tokens":null}}",
         ],
         "type": "streamEvent",
       },
    

@@ -252,7 +257,7 @@ describe('MessageStream class', () => {
{
"args": [
"{"type":"content_block_delta","delta":{"type":"text_delta","text":"e!"},"index":0}",

-        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello there!"}],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10}}",
+        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello there!"}],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10,"cache_creation_input_tokens":null,"cache_read_input_tokens":null}}",
         ],
         "type": "streamEvent",
       },
    

@@ -266,7 +271,7 @@ describe('MessageStream class', () => {
{
"args": [
"{"type":"content_block_stop","index":0}",

-        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello there!"}],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10}}",
+        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello there!"}],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10,"cache_creation_input_tokens":null,"cache_read_input_tokens":null}}",
         ],
         "type": "streamEvent",
       },
    

@@ -279,26 +284,26 @@ describe('MessageStream class', () => {
{
"args": [
"{"type":"message_delta","usage":{"output_tokens":6},"delta":{"stop_reason":"end_turn","stop_sequence":null}}",

-        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello there!"}],"model":"claude-3-opus-20240229","stop_reason":"end_turn","stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10}}",
+        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello there!"}],"model":"claude-3-opus-20240229","stop_reason":"end_turn","stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10,"cache_creation_input_tokens":null,"cache_read_input_tokens":null}}",
         ],
         "type": "streamEvent",
       },
       {
         "args": [
           "{"type":"message_stop"}",
    
-        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello there!"}],"model":"claude-3-opus-20240229","stop_reason":"end_turn","stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10}}",
+        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello there!"}],"model":"claude-3-opus-20240229","stop_reason":"end_turn","stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10,"cache_creation_input_tokens":null,"cache_read_input_tokens":null}}",
         ],
         "type": "streamEvent",
       },
       {
         "args": [
    
-        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello there!"}],"model":"claude-3-opus-20240229","stop_reason":"end_turn","stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10}}",
+        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello there!"}],"model":"claude-3-opus-20240229","stop_reason":"end_turn","stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10,"cache_creation_input_tokens":null,"cache_read_input_tokens":null}}",
         ],
         "type": "message",
       },
       {
         "args": [
    
-        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello there!"}],"model":"claude-3-opus-20240229","stop_reason":"end_turn","stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10}}",
+        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello there!"}],"model":"claude-3-opus-20240229","stop_reason":"end_turn","stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10,"cache_creation_input_tokens":null,"cache_read_input_tokens":null}}",
         ],
         "type": "finalMessage",
       },
    

@@ -326,6 +331,8 @@ describe('MessageStream class', () => {
"stop_sequence": null,
"type": "message",
"usage": {

+      "cache_creation_input_tokens": null,
+      "cache_read_input_tokens": null,
         "input_tokens": 10,
         "output_tokens": 6,
       },
    

@@ -353,7 +360,12 @@ describe('MessageStream class', () => {
model: 'claude-3-opus-20240229',
stop_reason: 'end_turn',
stop_sequence: null,

-    usage: { output_tokens: 6, input_tokens: 10 },
+    usage: {
+      output_tokens: 6,
+      input_tokens: 10,
+      cache_creation_input_tokens: null,
+      cache_read_input_tokens: null,
+    },
     }),
    
    );
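The snapshot updates above add two nullable fields to `usage`: `cache_creation_input_tokens` and `cache_read_input_tokens`. A sketch of totalling input-side tokens across them; the `Usage` shape mirrors the test fixtures, and the totalling policy (null counts as zero) is an illustrative assumption, not SDK behavior.

```typescript
// Mirrors the usage object shape in the updated test fixtures.
interface Usage {
  input_tokens: number;
  output_tokens: number;
  cache_creation_input_tokens: number | null;
  cache_read_input_tokens: number | null;
}

// Treat the nullable cache counters as zero when absent, as in the fixtures
// above where both are null.
function totalInputTokens(usage: Usage): number {
  return (
    usage.input_tokens +
    (usage.cache_creation_input_tokens ?? 0) +
    (usage.cache_read_input_tokens ?? 0)
  );
}

const fixture: Usage = {
  output_tokens: 6,
  input_tokens: 10,
  cache_creation_input_tokens: null,
  cache_read_input_tokens: null,
};

console.log(totalInputTokens(fixture)); // 10
```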

diff --git tests/api-resources/beta/messages/batches.test.ts tests/api-resources/beta/messages/batches.test.ts
index ed2027c8..e395910a 100644
--- tests/api-resources/beta/messages/batches.test.ts
+++ tests/api-resources/beta/messages/batches.test.ts
@@ -20,22 +20,6 @@ describe('resource batches', () => {
model: 'claude-3-5-sonnet-20241022',
},
},

-    {
-      custom_id: 'my-custom-id-1',
-      params: {
-        max_tokens: 1024,
-        messages: [{ content: 'Hello, world', role: 'user' }],
-        model: 'claude-3-5-sonnet-20241022',
-      },
-    },
-    {
-      custom_id: 'my-custom-id-1',
-      params: {
-        max_tokens: 1024,
-        messages: [{ content: 'Hello, world', role: 'user' }],
-        model: 'claude-3-5-sonnet-20241022',
-      },
-    },
     ],
    
    });
    const rawResponse = await responsePromise.asResponse();
    @@ -57,143 +41,7 @@ describe('resource batches', () => {
    messages: [{ content: 'Hello, world', role: 'user' }],
    model: 'claude-3-5-sonnet-20241022',
    metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
-        stop_sequences: ['string', 'string', 'string'],
-        stream: false,
-        system: [
-          { text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } },
-        ],
-        temperature: 1,
-        tool_choice: { type: 'auto', disable_parallel_tool_use: true },
-        tools: [
-          {
-            input_schema: {
-              type: 'object',
-              properties: {
-                location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-                unit: {
-                  description: 'Unit for the output - one of (celsius, fahrenheit)',
-                  type: 'string',
-                },
-              },
-            },
-            name: 'x',
-            cache_control: { type: 'ephemeral' },
-            description: 'Get the current weather in a given location',
-            type: 'custom',
-          },
-          {
-            input_schema: {
-              type: 'object',
-              properties: {
-                location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-                unit: {
-                  description: 'Unit for the output - one of (celsius, fahrenheit)',
-                  type: 'string',
-                },
-              },
-            },
-            name: 'x',
-            cache_control: { type: 'ephemeral' },
-            description: 'Get the current weather in a given location',
-            type: 'custom',
-          },
-          {
-            input_schema: {
-              type: 'object',
-              properties: {
-                location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-                unit: {
-                  description: 'Unit for the output - one of (celsius, fahrenheit)',
-                  type: 'string',
-                },
-              },
-            },
-            name: 'x',
-            cache_control: { type: 'ephemeral' },
-            description: 'Get the current weather in a given location',
-            type: 'custom',
-          },
-        ],
-        top_k: 5,
-        top_p: 0.7,
-      },
-    },
-    {
-      custom_id: 'my-custom-id-1',
-      params: {
-        max_tokens: 1024,
-        messages: [{ content: 'Hello, world', role: 'user' }],
-        model: 'claude-3-5-sonnet-20241022',
-        metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
-        stop_sequences: ['string', 'string', 'string'],
-        stream: false,
-        system: [
-          { text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } },
-        ],
-        temperature: 1,
-        tool_choice: { type: 'auto', disable_parallel_tool_use: true },
-        tools: [
-          {
-            input_schema: {
-              type: 'object',
-              properties: {
-                location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-                unit: {
-                  description: 'Unit for the output - one of (celsius, fahrenheit)',
-                  type: 'string',
-                },
-              },
-            },
-            name: 'x',
-            cache_control: { type: 'ephemeral' },
-            description: 'Get the current weather in a given location',
-            type: 'custom',
-          },
-          {
-            input_schema: {
-              type: 'object',
-              properties: {
-                location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-                unit: {
-                  description: 'Unit for the output - one of (celsius, fahrenheit)',
-                  type: 'string',
-                },
-              },
-            },
-            name: 'x',
-            cache_control: { type: 'ephemeral' },
-            description: 'Get the current weather in a given location',
-            type: 'custom',
-          },
-          {
-            input_schema: {
-              type: 'object',
-              properties: {
-                location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-                unit: {
-                  description: 'Unit for the output - one of (celsius, fahrenheit)',
-                  type: 'string',
-                },
-              },
-            },
-            name: 'x',
-            cache_control: { type: 'ephemeral' },
-            description: 'Get the current weather in a given location',
-            type: 'custom',
-          },
-        ],
-        top_k: 5,
-        top_p: 0.7,
-      },
-    },
-    {
-      custom_id: 'my-custom-id-1',
-      params: {
-        max_tokens: 1024,
-        messages: [{ content: 'Hello, world', role: 'user' }],
-        model: 'claude-3-5-sonnet-20241022',
-        metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
-        stop_sequences: ['string', 'string', 'string'],
+        stop_sequences: ['string'],
           stream: false,
           system: [
             { text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } },
    

@@ -217,45 +65,13 @@ describe('resource batches', () => {
description: 'Get the current weather in a given location',
type: 'custom',
},

-          {
-            input_schema: {
-              type: 'object',
-              properties: {
-                location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-                unit: {
-                  description: 'Unit for the output - one of (celsius, fahrenheit)',
-                  type: 'string',
-                },
-              },
-            },
-            name: 'x',
-            cache_control: { type: 'ephemeral' },
-            description: 'Get the current weather in a given location',
-            type: 'custom',
-          },
-          {
-            input_schema: {
-              type: 'object',
-              properties: {
-                location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-                unit: {
-                  description: 'Unit for the output - one of (celsius, fahrenheit)',
-                  type: 'string',
-                },
-              },
-            },
-            name: 'x',
-            cache_control: { type: 'ephemeral' },
-            description: 'Get the current weather in a given location',
-            type: 'custom',
-          },
           ],
           top_k: 5,
           top_p: 0.7,
         },
       },
     ],
    
-  betas: ['string', 'string', 'string'],
+  betas: ['string'],
    
    });
    });

@@ -282,7 +98,7 @@ describe('resource batches', () => {
await expect(
client.beta.messages.batches.retrieve(
'message_batch_id',

-    { betas: ['string', 'string', 'string'] },
+    { betas: ['string'] },
       { path: '/_stainless_unknown_path' },
     ),
    
    ).rejects.toThrow(Anthropic.NotFoundError);
    @@ -310,7 +126,7 @@ describe('resource batches', () => {
    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
    await expect(
    client.beta.messages.batches.list(
-    { after_id: 'after_id', before_id: 'before_id', limit: 1, betas: ['string', 'string', 'string'] },
+    { after_id: 'after_id', before_id: 'before_id', limit: 1, betas: ['string'] },
       { path: '/_stainless_unknown_path' },
     ),
    
    ).rejects.toThrow(Anthropic.NotFoundError);
    @@ -339,7 +155,7 @@ describe('resource batches', () => {
    await expect(
    client.beta.messages.batches.cancel(
    'message_batch_id',
-    { betas: ['string', 'string', 'string'] },
+    { betas: ['string'] },
       { path: '/_stainless_unknown_path' },
     ),
    
    ).rejects.toThrow(Anthropic.NotFoundError);
    @@ -357,7 +173,7 @@ describe('resource batches', () => {
    await expect(
    client.beta.messages.batches.results(
    'message_batch_id',
-    { betas: ['string', 'string', 'string'] },
+    { betas: ['string'] },
       { path: '/_stainless_unknown_path' },
     ),
    
    ).rejects.toThrow(Anthropic.NotFoundError);
    diff --git tests/api-resources/beta/messages/messages.test.ts tests/api-resources/beta/messages/messages.test.ts
    index 64b6299c..ec73d9c0 100644
    --- tests/api-resources/beta/messages/messages.test.ts
    +++ tests/api-resources/beta/messages/messages.test.ts
    @@ -30,7 +30,7 @@ describe('resource messages', () => {
    messages: [{ content: 'Hello, world', role: 'user' }],
    model: 'claude-3-5-sonnet-20241022',
    metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
-  stop_sequences: ['string', 'string', 'string'],
+  stop_sequences: ['string'],
     stream: false,
     system: [{ text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } }],
     temperature: 1,
    

@@ -49,46 +49,16 @@ describe('resource messages', () => {
description: 'Get the current weather in a given location',
type: 'custom',
},

-    {
-      input_schema: {
-        type: 'object',
-        properties: {
-          location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-          unit: { description: 'Unit for the output - one of (celsius, fahrenheit)', type: 'string' },
-        },
-      },
-      name: 'x',
-      cache_control: { type: 'ephemeral' },
-      description: 'Get the current weather in a given location',
-      type: 'custom',
-    },
-    {
-      input_schema: {
-        type: 'object',
-        properties: {
-          location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-          unit: { description: 'Unit for the output - one of (celsius, fahrenheit)', type: 'string' },
-        },
-      },
-      name: 'x',
-      cache_control: { type: 'ephemeral' },
-      description: 'Get the current weather in a given location',
-      type: 'custom',
-    },
     ],
     top_k: 5,
     top_p: 0.7,
    
-  betas: ['string', 'string', 'string'],
+  betas: ['string'],
    

    });
    });

    test('countTokens: only required params', async () => {
    const responsePromise = client.beta.messages.countTokens({

-  messages: [
-    { content: 'string', role: 'user' },
-    { content: 'string', role: 'user' },
-    { content: 'string', role: 'user' },
-  ],
+  messages: [{ content: 'string', role: 'user' }],
     model: 'string',
    

    });
    const rawResponse = await responsePromise.asResponse();
    @@ -102,11 +72,7 @@ describe('resource messages', () => {

    test('countTokens: required and optional params', async () => {
    const response = await client.beta.messages.countTokens({

-  messages: [
-    { content: 'string', role: 'user' },
-    { content: 'string', role: 'user' },
-    { content: 'string', role: 'user' },
-  ],
+  messages: [{ content: 'string', role: 'user' }],
     model: 'string',
     system: [{ text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } }],
     tool_choice: { type: 'auto', disable_parallel_tool_use: true },
    

@@ -124,34 +90,8 @@ describe('resource messages', () => {
description: 'Get the current weather in a given location',
type: 'custom',
},

-    {
-      input_schema: {
-        type: 'object',
-        properties: {
-          location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-          unit: { description: 'Unit for the output - one of (celsius, fahrenheit)', type: 'string' },
-        },
-      },
-      name: 'x',
-      cache_control: { type: 'ephemeral' },
-      description: 'Get the current weather in a given location',
-      type: 'custom',
-    },
-    {
-      input_schema: {
-        type: 'object',
-        properties: {
-          location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-          unit: { description: 'Unit for the output - one of (celsius, fahrenheit)', type: 'string' },
-        },
-      },
-      name: 'x',
-      cache_control: { type: 'ephemeral' },
-      description: 'Get the current weather in a given location',
-      type: 'custom',
-    },
     ],
    
-  betas: ['string', 'string', 'string'],
+  betas: ['string'],
    
    });
    });
    });
    diff --git a/tests/api-resources/beta/models.test.ts b/tests/api-resources/beta/models.test.ts
    new file mode 100644
    index 00000000..f155b632
    --- /dev/null
    +++ tests/api-resources/beta/models.test.ts
    @@ -0,0 +1,57 @@
    +// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

+import Anthropic from '@anthropic-ai/sdk';
+import { Response } from 'node-fetch';
+
+const client = new Anthropic({
+  apiKey: 'my-anthropic-api-key',
+  baseURL: process.env['TEST_API_BASE_URL'] ?? 'http://127.0.0.1:4010',
+});
+
+describe('resource models', () => {
+  test('retrieve', async () => {
+    const responsePromise = client.beta.models.retrieve('model_id');
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('retrieve: request options instead of params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(
+      client.beta.models.retrieve('model_id', { path: '/_stainless_unknown_path' }),
+    ).rejects.toThrow(Anthropic.NotFoundError);
+  });
+
+  test('list', async () => {
+    const responsePromise = client.beta.models.list();
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('list: request options instead of params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(client.beta.models.list({ path: '/_stainless_unknown_path' })).rejects.toThrow(
+      Anthropic.NotFoundError,
+    );
+  });
+
+  test('list: request options and params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(
+      client.beta.models.list(
+        { after_id: 'after_id', before_id: 'before_id', limit: 1 },
+        { path: '/_stainless_unknown_path' },
+      ),
+    ).rejects.toThrow(Anthropic.NotFoundError);
+  });
+});
    diff --git tests/api-resources/beta/prompt-caching/messages.test.ts tests/api-resources/beta/prompt-caching/messages.test.ts
    deleted file mode 100644
    index dd94b3a7..00000000
    --- tests/api-resources/beta/prompt-caching/messages.test.ts
    +++ /dev/null
    @@ -1,81 +0,0 @@
-// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
-
-import Anthropic from '@anthropic-ai/sdk';
-import { Response } from 'node-fetch';
-
-const client = new Anthropic({
-  apiKey: 'my-anthropic-api-key',
-  baseURL: process.env['TEST_API_BASE_URL'] ?? 'http://127.0.0.1:4010',
-});
-
-describe('resource messages', () => {
-  test('create: only required params', async () => {
-    const responsePromise = client.beta.promptCaching.messages.create({
-      max_tokens: 1024,
-      messages: [{ content: 'Hello, world', role: 'user' }],
-      model: 'claude-3-5-sonnet-20241022',
-    });
-    const rawResponse = await responsePromise.asResponse();
-    expect(rawResponse).toBeInstanceOf(Response);
-    const response = await responsePromise;
-    expect(response).not.toBeInstanceOf(Response);
-    const dataAndResponse = await responsePromise.withResponse();
-    expect(dataAndResponse.data).toBe(response);
-    expect(dataAndResponse.response).toBe(rawResponse);
-  });
-
-  test('create: required and optional params', async () => {
-    const response = await client.beta.promptCaching.messages.create({
-      max_tokens: 1024,
-      messages: [{ content: 'Hello, world', role: 'user' }],
-      model: 'claude-3-5-sonnet-20241022',
-      metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
-      stop_sequences: ['string', 'string', 'string'],
-      stream: false,
-      system: [{ text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } }],
-      temperature: 1,
-      tool_choice: { type: 'auto', disable_parallel_tool_use: true },
-      tools: [
-        {
-          input_schema: {
-            type: 'object',
-            properties: {
-              location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-              unit: { description: 'Unit for the output - one of (celsius, fahrenheit)', type: 'string' },
-            },
-          },
-          name: 'x',
-          cache_control: { type: 'ephemeral' },
-          description: 'Get the current weather in a given location',
-        },
-        {
-          input_schema: {
-            type: 'object',
-            properties: {
-              location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-              unit: { description: 'Unit for the output - one of (celsius, fahrenheit)', type: 'string' },
-            },
-          },
-          name: 'x',
-          cache_control: { type: 'ephemeral' },
-          description: 'Get the current weather in a given location',
-        },
-        {
-          input_schema: {
-            type: 'object',
-            properties: {
-              location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-              unit: { description: 'Unit for the output - one of (celsius, fahrenheit)', type: 'string' },
-            },
-          },
-          name: 'x',
-          cache_control: { type: 'ephemeral' },
-          description: 'Get the current weather in a given location',
-        },
-      ],
-      top_k: 5,
-      top_p: 0.7,
-      betas: ['string', 'string', 'string'],
-    });
-  });
-});
    diff --git tests/api-resources/completions.test.ts tests/api-resources/completions.test.ts
    index aa326cf2..fcd0a68c 100644
    --- tests/api-resources/completions.test.ts
    +++ tests/api-resources/completions.test.ts
    @@ -30,7 +30,7 @@ describe('resource completions', () => {
     model: 'string',
     prompt: '\n\nHuman: Hello, world!\n\nAssistant:',
     metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
-    stop_sequences: ['string', 'string', 'string'],
+    stop_sequences: ['string'],
     stream: false,
     temperature: 1,
     top_k: 5,
    

diff --git a/tests/api-resources/messages/batches.test.ts b/tests/api-resources/messages/batches.test.ts
new file mode 100644
index 00000000..26efdbc8
--- /dev/null
+++ tests/api-resources/messages/batches.test.ts
@@ -0,0 +1,145 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+import Anthropic from '@anthropic-ai/sdk';
+import { Response } from 'node-fetch';
+
+const client = new Anthropic({
+  apiKey: 'my-anthropic-api-key',
+  baseURL: process.env['TEST_API_BASE_URL'] ?? 'http://127.0.0.1:4010',
+});
+
+describe('resource batches', () => {
+  test('create: only required params', async () => {
+    const responsePromise = client.messages.batches.create({
+      requests: [
+        {
+          custom_id: 'my-custom-id-1',
+          params: {
+            max_tokens: 1024,
+            messages: [{ content: 'Hello, world', role: 'user' }],
+            model: 'claude-3-5-sonnet-20241022',
+          },
+        },
+      ],
+    });
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('create: required and optional params', async () => {
+    const response = await client.messages.batches.create({
+      requests: [
+        {
+          custom_id: 'my-custom-id-1',
+          params: {
+            max_tokens: 1024,
+            messages: [{ content: 'Hello, world', role: 'user' }],
+            model: 'claude-3-5-sonnet-20241022',
+            metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
+            stop_sequences: ['string'],
+            system: [
+              { text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } },
+            ],
+            temperature: 1,
+            tool_choice: { type: 'auto', disable_parallel_tool_use: true },
+            tools: [
+              {
+                input_schema: {
+                  type: 'object',
+                  properties: {
+                    location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
+                    unit: {
+                      description: 'Unit for the output - one of (celsius, fahrenheit)',
+                      type: 'string',
+                    },
+                  },
+                },
+                name: 'x',
+                cache_control: { type: 'ephemeral' },
+                description: 'Get the current weather in a given location',
+              },
+            ],
+            top_k: 5,
+            top_p: 0.7,
+          },
+        },
+      ],
+    });
+  });
+
+  test('retrieve', async () => {
+    const responsePromise = client.messages.batches.retrieve('message_batch_id');
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('retrieve: request options instead of params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(
+      client.messages.batches.retrieve('message_batch_id', { path: '/_stainless_unknown_path' }),
+    ).rejects.toThrow(Anthropic.NotFoundError);
+  });
+
+  test('list', async () => {
+    const responsePromise = client.messages.batches.list();
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('list: request options instead of params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(client.messages.batches.list({ path: '/_stainless_unknown_path' })).rejects.toThrow(
+      Anthropic.NotFoundError,
+    );
+  });
+
+  test('list: request options and params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(
+      client.messages.batches.list(
+        { after_id: 'after_id', before_id: 'before_id', limit: 1 },
+        { path: '/_stainless_unknown_path' },
+      ),
+    ).rejects.toThrow(Anthropic.NotFoundError);
+  });
+
+  test('cancel', async () => {
+    const responsePromise = client.messages.batches.cancel('message_batch_id');
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('cancel: request options instead of params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(
+      client.messages.batches.cancel('message_batch_id', { path: '/_stainless_unknown_path' }),
+    ).rejects.toThrow(Anthropic.NotFoundError);
+  });
+
+  test('results: request options instead of params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(
+      client.messages.batches.results('message_batch_id', { path: '/_stainless_unknown_path' }),
+    ).rejects.toThrow(Anthropic.NotFoundError);
+  });
+});
    diff --git tests/api-resources/messages.test.ts tests/api-resources/messages/messages.test.ts
    similarity index 74%
    rename from tests/api-resources/messages.test.ts
    rename to tests/api-resources/messages/messages.test.ts
    index 0497742e..3ae41d32 100644
    --- tests/api-resources/messages.test.ts
    +++ tests/api-resources/messages/messages.test.ts
    @@ -30,9 +30,9 @@ describe('resource messages', () => {
     messages: [{ content: 'Hello, world', role: 'user' }],
     model: 'claude-3-5-sonnet-20241022',
     metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
-    stop_sequences: ['string', 'string', 'string'],
+    stop_sequences: ['string'],
     stream: false,
-    system: [{ text: "Today's date is 2024-06-01.", type: 'text' }],
+    system: [{ text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } }],
     temperature: 1,
     tool_choice: { type: 'auto', disable_parallel_tool_use: true },
     tools: [
    

@@ -45,8 +45,36 @@ describe('resource messages', () => {
             },
           },
           name: 'x',
+          cache_control: { type: 'ephemeral' },
           description: 'Get the current weather in a given location',
         },
+      ],
+      top_k: 5,
+      top_p: 0.7,
+    });
+  });
+
+  test('countTokens: only required params', async () => {
+    const responsePromise = client.messages.countTokens({
+      messages: [{ content: 'string', role: 'user' }],
+      model: 'string',
+    });
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('countTokens: required and optional params', async () => {
+    const response = await client.messages.countTokens({
+      messages: [{ content: 'string', role: 'user' }],
+      model: 'string',
+      system: [{ text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } }],
+      tool_choice: { type: 'auto', disable_parallel_tool_use: true },
+      tools: [
         {
           input_schema: {
             type: 'object',
    

@@ -56,22 +84,10 @@ describe('resource messages', () => {
             },
           },
           name: 'x',
-          description: 'Get the current weather in a given location',
-        },
-        {
-          input_schema: {
-            type: 'object',
-            properties: {
-              location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-              unit: { description: 'Unit for the output - one of (celsius, fahrenheit)', type: 'string' },
-            },
-          },
-          name: 'x',
+          cache_control: { type: 'ephemeral' },
           description: 'Get the current weather in a given location',
         },
       ],
-      top_k: 5,
-      top_p: 0.7,
     });
   });
 });
    diff --git a/tests/api-resources/models.test.ts b/tests/api-resources/models.test.ts
    new file mode 100644
    index 00000000..7f5c0411
    --- /dev/null
    +++ tests/api-resources/models.test.ts
    @@ -0,0 +1,57 @@
    +// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

+import Anthropic from '@anthropic-ai/sdk';
+import { Response } from 'node-fetch';
+
+const client = new Anthropic({
+  apiKey: 'my-anthropic-api-key',
+  baseURL: process.env['TEST_API_BASE_URL'] ?? 'http://127.0.0.1:4010',
+});
+
+describe('resource models', () => {
+  test('retrieve', async () => {
+    const responsePromise = client.models.retrieve('model_id');
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('retrieve: request options instead of params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(client.models.retrieve('model_id', { path: '/_stainless_unknown_path' })).rejects.toThrow(
+      Anthropic.NotFoundError,
+    );
+  });
+
+  test('list', async () => {
+    const responsePromise = client.models.list();
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('list: request options instead of params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(client.models.list({ path: '/_stainless_unknown_path' })).rejects.toThrow(
+      Anthropic.NotFoundError,
+    );
+  });
+
+  test('list: request options and params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(
+      client.models.list(
+        { after_id: 'after_id', before_id: 'before_id', limit: 1 },
+        { path: '/_stainless_unknown_path' },
+      ),
+    ).rejects.toThrow(Anthropic.NotFoundError);
+  });
+});
    diff --git tests/index.test.ts tests/index.test.ts
    index 0d4a6bba..b6398085 100644
    --- tests/index.test.ts
    +++ tests/index.test.ts
    @@ -183,7 +183,7 @@ describe('instantiate client', () => {
     expect(client.apiKey).toBe('my-anthropic-api-key');
   });

-  test('with overriden environment variable arguments', () => {
+  test('with overridden environment variable arguments', () => {
     // set options via env var
     process.env['ANTHROPIC_API_KEY'] = 'another my-anthropic-api-key';
     const client = new Anthropic({ apiKey: 'my-anthropic-api-key' });
    diff --git tests/responses.test.ts tests/responses.test.ts
    index d0db2b1f..db58e0b7 100644
    --- tests/responses.test.ts
    +++ tests/responses.test.ts
    @@ -1,5 +1,8 @@
    -import { createResponseHeaders } from '@anthropic-ai/sdk/core';
    +import { APIPromise, createResponseHeaders } from '@anthropic-ai/sdk/core';
    +import Anthropic from '@anthropic-ai/sdk/index';
    import { Headers } from '@anthropic-ai/sdk/_shims/index';
    +import { Response } from 'node-fetch';
    +import { compareType } from './utils/typing';

describe('response parsing', () => {
// TODO: test unicode characters
@@ -23,3 +26,129 @@ describe('response parsing', () => {
expect(headers['content-type']).toBe('text/xml, application/json');
});
});
+
+describe('request id', () => {

+  test('types', () => {
+    compareType<Awaited<APIPromise<string>>, string>(true);
+    compareType<Awaited<APIPromise<number>>, number>(true);
+    compareType<Awaited<APIPromise<null>>, null>(true);
+    compareType<Awaited<APIPromise<void>>, void>(true);
+    compareType<Awaited<APIPromise<Response>>, Response>(true);
+    compareType<Awaited<APIPromise<Response>>, Response>(true);
+    compareType<Awaited<APIPromise<{ foo: string }>>, { foo: string } & { _request_id?: string | null }>(
+      true,
+    );
+    compareType<Awaited<APIPromise<Array<{ foo: string }>>>, Array<{ foo: string }>>(true);
+  });
+
+  test('withResponse', async () => {
+    const client = new Anthropic({
+      apiKey: 'dummy',
+      fetch: async () =>
+        new Response(JSON.stringify({ id: 'bar' }), {
+          headers: { 'request-id': 'req_xxx', 'content-type': 'application/json' },
+        }),
+    });
+
+    const {
+      data: message,
+      response,
+      request_id,
+    } = await client.messages
+      .create({ messages: [], model: 'claude-3-opus-20240229', max_tokens: 1024 })
+      .withResponse();
+
+    expect(request_id).toBe('req_xxx');
+    expect(response.headers.get('request-id')).toBe('req_xxx');
+    expect(message.id).toBe('bar');
+    expect(JSON.stringify(message)).toBe('{"id":"bar"}');
+  });
+
+  test('object response', async () => {
+    const client = new Anthropic({
+      apiKey: 'dummy',
+      fetch: async () =>
+        new Response(JSON.stringify({ id: 'bar' }), {
+          headers: { 'request-id': 'req_xxx', 'content-type': 'application/json' },
+        }),
+    });
+
+    const rsp = await client.messages.create({
+      messages: [],
+      model: 'claude-3-opus-20240229',
+      max_tokens: 1024,
+    });
+    expect(rsp.id).toBe('bar');
+    expect(rsp._request_id).toBe('req_xxx');
+    expect(JSON.stringify(rsp)).toBe('{"id":"bar"}');
+  });
+
+  test('envelope response', async () => {
+    const promise = new APIPromise<{ data: { foo: string } }>(
+      (async () => {
+        return {
+          response: new Response(JSON.stringify({ data: { foo: 'bar' } }), {
+            headers: { 'request-id': 'req_xxx', 'content-type': 'application/json' },
+          }),
+          controller: {} as any,
+          options: {} as any,
+        };
+      })(),
+    )._thenUnwrap((d) => d.data);
+
+    const rsp = await promise;
+    expect(rsp.foo).toBe('bar');
+    expect(rsp._request_id).toBe('req_xxx');
+  });
+
+  test('page response', async () => {
+    const client = new Anthropic({
+      apiKey: 'dummy',
+      fetch: async () =>
+        new Response(JSON.stringify({ data: [{ foo: 'bar' }] }), {
+          headers: { 'request-id': 'req_xxx', 'content-type': 'application/json' },
+        }),
+    });
+
+    const page = await client.beta.messages.batches.list();
+    expect(page.data).toMatchObject([{ foo: 'bar' }]);
+    expect((page as any)._request_id).toBeUndefined();
+  });
+
+  test('array response', async () => {
+    const promise = new APIPromise<Array<{ foo: string }>>(
+      (async () => {
+        return {
+          response: new Response(JSON.stringify([{ foo: 'bar' }]), {
+            headers: { 'request-id': 'req_xxx', 'content-type': 'application/json' },
+          }),
+          controller: {} as any,
+          options: {} as any,
+        };
+      })(),
+    );
+
+    const rsp = await promise;
+    expect(rsp.length).toBe(1);
+    expect(rsp[0]).toMatchObject({ foo: 'bar' });
+    expect((rsp as any)._request_id).toBeUndefined();
+  });
+
+  test('string response', async () => {
+    const promise = new APIPromise<string>(
+      (async () => {
+        return {
+          response: new Response('hello world', {
+            headers: { 'request-id': 'req_xxx', 'content-type': 'application/text' },
+          }),
+          controller: {} as any,
+          options: {} as any,
+        };
+      })(),
+    );
+
+    const result = await promise;
+    expect(result).toBe('hello world');
+    expect((result as any)._request_id).toBeUndefined();
+  });
+});
    diff --git a/tests/utils/typing.ts b/tests/utils/typing.ts
    new file mode 100644
    index 00000000..4a791d2a
    --- /dev/null
    +++ tests/utils/typing.ts
    @@ -0,0 +1,9 @@
+type Equal<X, Y> = (<T>() => T extends X ? 1 : 2) extends <T>() => T extends Y ? 1 : 2 ? true : false;
+
+export const expectType = <T>(_expression: T): void => {
+  return;
+};
+
+export const compareType = <T1, T2>(_expression: Equal<T1, T2>): void => {
+  return;
+};

</details>
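The new `tests/utils/typing.ts` helper in the diff above leans on a well-known compile-time type-equality trick; the garbled rendering drops its `<T>` generics, so here is a self-contained sketch of how it works (the helper names mirror the new file; the usage examples are illustrative, not from the PR):

```typescript
// Sketch of the Equal<X, Y> conditional-type trick: two generic function
// signatures are mutually assignable only when X and Y are exactly the same
// type, so the conditional collapses to the literal `true` or `false` at
// compile time.
type Equal<X, Y> = (<T>() => T extends X ? 1 : 2) extends <T>() => T extends Y ? 1 : 2 ? true : false;

// Passing `true` only type-checks when the two type arguments are identical.
const compareType = <T1, T2>(_expression: Equal<T1, T2>): void => {
  return;
};

compareType<string, string>(true); // compiles: identical types
compareType<{ a: number }, { a: number }>(true); // compiles: structurally identical
// compareType<string, number>(true); // would not compile: Equal resolves to `false`
```

This is why `tests/responses.test.ts` can assert, at compile time, that `Awaited<APIPromise<{ foo: string }>>` carries the new `_request_id` property.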

### Description
This PR primarily introduces the following changes:

1. **Deletion of a GitHub Actions Workflow**:
    - Removed `.github/workflows/handle-release-pr-title-edit.yml`, which re-triggered release-please whenever a release PR's title was edited or its `autorelease: custom version` label was removed.
2. **Version Updates**:
    - Updated versions in `.release-please-manifest.json` for several packages:
      - Root package: `0.32.1` → `0.33.1`
      - Vertex SDK: `0.5.2` → `0.6.1`
      - Bedrock SDK: `0.11.2` → `0.12.0`
3. **Configuration Changes**:
    - Adjusted `.stats.yml` to reflect new configured endpoints and OpenAPI spec URL.
4. **Changelog Updates**:
    - Updated `CHANGELOG.md` for respective versions with detailed logs.
    - Added changelogs for other packages like `bedrock-sdk` and `vertex-sdk`.
5. **README and Example Updates**:
    - Updated model names in several example files and the README from `claude-3-opus-20240229` to `claude-3-5-sonnet-latest`.
6. **New and Updated SDK Features**:
    - Added and updated several types and methods across SDK files.
    - Added token counting support.
    - Introduced a new `.git-swap.sh` script.
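One behavior worth calling out from the feature list: the new `._request_id` property is present on object responses but deliberately absent from their JSON serialization (the updated `tests/responses.test.ts` asserts `rsp._request_id === 'req_xxx'` while `JSON.stringify(rsp)` still yields only the API payload). A minimal sketch of that mechanic, assuming the SDK attaches the `request-id` header as a non-enumerable property (names here are illustrative, not the SDK's internals):

```typescript
// Attach a response header value to a parsed JSON object without it leaking
// into JSON.stringify or for...in enumeration.
function attachRequestId<T extends object>(
  parsed: T,
  requestId: string | null,
): T & { _request_id?: string | null } {
  return Object.defineProperty(parsed, '_request_id', {
    value: requestId,
    enumerable: false, // keeps the property out of JSON.stringify output
    writable: false,
  }) as T & { _request_id?: string | null };
}

const rsp = attachRequestId({ id: 'bar' }, 'req_xxx');
console.log(rsp._request_id); // 'req_xxx'
console.log(JSON.stringify(rsp)); // '{"id":"bar"}'
```

The same observable behavior is what the PR's tests pin down for object, envelope, and paginated responses.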

### Possible Issues
None identified.

### Security Hotspots
None identified.

<details>
<summary><i>Changes</i></summary>

### Changes
#### Deleted Files
- `.github/workflows/handle-release-pr-title-edit.yml`

#### Updated Files
1. **Updating Versions**:
    - `.release-please-manifest.json`
    - `package.json`
    - `src/version.ts`
    
2. **Configuration**:
    - `.stats.yml`

3. **Documentation and Examples**:
    - `README.md`
    - `CHANGELOG.md`
    - Several example files (e.g., `examples/demo.ts`, `examples/streaming.ts`)

4. **New Features and Enhancements**:
    - **SDK Changes**:
        - `src/core.ts`
        - `src/error.ts`
        - `src/index.ts`
        - `src/resources/beta/beta.ts`
        - `src/resources/beta/messages/batches.ts`
        - `src/resources/beta/models.ts`
        - `src/resources/completions.ts`
        - `src/resources/models.ts`
        - `src/resources/shared.ts`
    - **New Script**:
        - `scripts/utils/git-swap.sh`

5. **New Tests**:
    - `tests/api-resources/models.test.ts`
    - `tests/api-resources/beta/models.test.ts`
    - `tests/api-resources/messages/batches.test.ts`
    - `tests/responses.test.ts` (request-ID handling)
    - `tests/utils/typing.ts` (type-equality test helpers)

```mermaid
sequenceDiagram
    actor Developer
    participant Github
    participant CI
    participant SDK

    Developer->>Github: Create PR
    Github->>CI: Trigger CI
    CI->>Github: CI Status Update
    Developer->>Github: Update PR Description
    Github->>SDK: Workflow handle-release-pr-title-edit.yml (Deleted)
    Developer->>Github: Merge PR
    Github->>CI: Trigger new release
    CI->>SDK: Update versions, README, Examples, Changelogs
```

</details>


anthropic debug - [puLL-Merge] - anthropics/[email protected]

Diff
diff --git .github/workflows/handle-release-pr-title-edit.yml .github/workflows/handle-release-pr-title-edit.yml
deleted file mode 100644
index 8144aaae..00000000
--- .github/workflows/handle-release-pr-title-edit.yml
+++ /dev/null
@@ -1,26 +0,0 @@
-name: Handle release PR title edits
-on:
-  pull_request:
-    types:
-      - edited
-      - unlabeled
-
-jobs:
-  update_pr_content:
-    name: Update pull request content
-    if: |
-      ((github.event.action == 'edited' && github.event.changes.title.from != github.event.pull_request.title) ||
-      (github.event.action == 'unlabeled' && github.event.label.name == 'autorelease: custom version')) &&
-      startsWith(github.event.pull_request.head.ref, 'release-please--') &&
-      github.event.pull_request.state == 'open' &&
-      github.event.sender.login != 'stainless-bot' &&
-      github.event.sender.login != 'stainless-app' &&
-      github.repository == 'anthropics/anthropic-sdk-typescript'
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v4
-      - uses: stainless-api/trigger-release-please@v1
-        with:
-          repo: ${{ github.event.repository.full_name }}
-          stainless-api-key: ${{ secrets.STAINLESS_API_KEY }}
-
diff --git .release-please-manifest.json .release-please-manifest.json
index e6b9ab03..2053c67b 100644
--- .release-please-manifest.json
+++ .release-please-manifest.json
@@ -1,5 +1,5 @@
 {
-  ".": "0.32.1",
-  "packages/vertex-sdk": "0.5.2",
-  "packages/bedrock-sdk": "0.11.2"
+  ".": "0.33.1",
+  "packages/vertex-sdk": "0.6.1",
+  "packages/bedrock-sdk": "0.12.0"
 }
diff --git .stats.yml .stats.yml
index ebe0695a..19e9daeb 100644
--- .stats.yml
+++ .stats.yml
@@ -1,2 +1,2 @@
-configured_endpoints: 10
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/anthropic-25f83d91f601c1962b3701fedf829f678f306aca0758af286ee1586cc9931f75.yml
+configured_endpoints: 19
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/anthropic-be055148d227480fcacc9086c37ac8009dcb487731069ada51af35044f65bee4.yml
diff --git CHANGELOG.md CHANGELOG.md
index f332e42a..a1a57c52 100644
--- CHANGELOG.md
+++ CHANGELOG.md
@@ -1,5 +1,61 @@
 # Changelog
 
+## 0.33.1 (2024-12-17)
+
+Full Changelog: [sdk-v0.33.0...sdk-v0.33.1](https://github.com/anthropics/anthropic-sdk-typescript/compare/sdk-v0.33.0...sdk-v0.33.1)
+
+### Bug Fixes
+
+* **vertex:** remove `anthropic_version` deletion for token counting ([88221be](https://github.com/anthropics/anthropic-sdk-typescript/commit/88221be305d6e13ccf92e6e9cdb00daba45b57db))
+
+
+### Chores
+
+* **internal:** fix some typos ([#633](https://github.com/anthropics/anthropic-sdk-typescript/issues/633)) ([a0298f5](https://github.com/anthropics/anthropic-sdk-typescript/commit/a0298f5f67b8ecd25de416dbb3eada68b86befd7))
+
+## 0.33.0 (2024-12-17)
+
+Full Changelog: [sdk-v0.32.1...sdk-v0.33.0](https://github.com/anthropics/anthropic-sdk-typescript/compare/sdk-v0.32.1...sdk-v0.33.0)
+
+### Features
+
+* **api:** general availability updates ([93d1316](https://github.com/anthropics/anthropic-sdk-typescript/commit/93d13168f950b2cdfc3b7c6664205b06418fea79))
+* **api:** general availability updates ([#631](https://github.com/anthropics/anthropic-sdk-typescript/issues/631)) ([b5c92e5](https://github.com/anthropics/anthropic-sdk-typescript/commit/b5c92e5b74c370ac3f9ba28e915bd54588a42be0))
+* **client:** add ._request_id property to object responses ([#596](https://github.com/anthropics/anthropic-sdk-typescript/issues/596)) ([9d6d584](https://github.com/anthropics/anthropic-sdk-typescript/commit/9d6d58430a216df9888434158bf628ae4b067aba))
+* **internal:** make git install file structure match npm ([#617](https://github.com/anthropics/anthropic-sdk-typescript/issues/617)) ([d3dd7d5](https://github.com/anthropics/anthropic-sdk-typescript/commit/d3dd7d5f8cad460dd18725d5c0f3c8db3f00115d))
+* **vertex:** support token counting ([9e76b4d](https://github.com/anthropics/anthropic-sdk-typescript/commit/9e76b4dc22d62b1239b382bb771b69ad8cff9442))
+
+
+### Bug Fixes
+
+* **docs:** add missing await to pagination example ([#609](https://github.com/anthropics/anthropic-sdk-typescript/issues/609)) ([e303077](https://github.com/anthropics/anthropic-sdk-typescript/commit/e303077ebab73c41adee7d25375b767c3fc78998))
+* **types:** remove anthropic-instant-1.2 model ([#599](https://github.com/anthropics/anthropic-sdk-typescript/issues/599)) ([e222a4d](https://github.com/anthropics/anthropic-sdk-typescript/commit/e222a4d0518aa80671c66ee2a25d87dc87a51316))
+
+
+### Chores
+
+* **api:** update spec version ([#607](https://github.com/anthropics/anthropic-sdk-typescript/issues/607)) ([ea44f9a](https://github.com/anthropics/anthropic-sdk-typescript/commit/ea44f9ac49dcc25a5dfa53880ebf61318ee90f6c))
+* **api:** update spec version ([#629](https://github.com/anthropics/anthropic-sdk-typescript/issues/629)) ([a25295c](https://github.com/anthropics/anthropic-sdk-typescript/commit/a25295cd6db7b57162fdd9049eb8a3c37bb94f08))
+* **bedrock,vertex:** remove unsupported countTokens method ([#597](https://github.com/anthropics/anthropic-sdk-typescript/issues/597)) ([17b7da5](https://github.com/anthropics/anthropic-sdk-typescript/commit/17b7da5ee6f35ea2bdd53a66a662871affae6341))
+* **bedrock:** remove unsupported methods ([6458dc1](https://github.com/anthropics/anthropic-sdk-typescript/commit/6458dc14544c16240a6580a21a36fcf5bde594b2))
+* **ci:** remove unneeded workflow ([#594](https://github.com/anthropics/anthropic-sdk-typescript/issues/594)) ([7572e48](https://github.com/anthropics/anthropic-sdk-typescript/commit/7572e48dbccb2090562399c7ff2d01503c86f445))
+* **client:** drop unused devDependency ([#610](https://github.com/anthropics/anthropic-sdk-typescript/issues/610)) ([5d0d523](https://github.com/anthropics/anthropic-sdk-typescript/commit/5d0d523390d8c34cae836c423940b67defb9d2aa))
+* improve browser error message ([#613](https://github.com/anthropics/anthropic-sdk-typescript/issues/613)) ([c26121e](https://github.com/anthropics/anthropic-sdk-typescript/commit/c26121e84039b7430995b6363876ea9795ba31ed))
+* **internal:** bump cross-spawn to v7.0.6 ([#624](https://github.com/anthropics/anthropic-sdk-typescript/issues/624)) ([e58ba9a](https://github.com/anthropics/anthropic-sdk-typescript/commit/e58ba9a177ec5c8545fd3a3f4fd3d2e7c722f023))
+* **internal:** remove unnecessary getRequestClient function ([#623](https://github.com/anthropics/anthropic-sdk-typescript/issues/623)) ([882c45f](https://github.com/anthropics/anthropic-sdk-typescript/commit/882c45f5a0bd1f4b996d59e6589a205c2111f46b))
+* **internal:** update isAbsoluteURL ([#627](https://github.com/anthropics/anthropic-sdk-typescript/issues/627)) ([2528ea0](https://github.com/anthropics/anthropic-sdk-typescript/commit/2528ea0dcfc83f38e76b58eaadaa5e8c5c0b188d))
+* **internal:** update spec ([#630](https://github.com/anthropics/anthropic-sdk-typescript/issues/630)) ([82cac06](https://github.com/anthropics/anthropic-sdk-typescript/commit/82cac065e2711467773c0ea62848cdf139ed5a11))
+* **internal:** use reexports not destructuring ([#604](https://github.com/anthropics/anthropic-sdk-typescript/issues/604)) ([e4daff2](https://github.com/anthropics/anthropic-sdk-typescript/commit/e4daff2b6a3fb42876ebd06ed4947c88cff919d8))
+* remove redundant word in comment ([#615](https://github.com/anthropics/anthropic-sdk-typescript/issues/615)) ([ef57a10](https://github.com/anthropics/anthropic-sdk-typescript/commit/ef57a103bcfc922a724a7c878f970dbd369b305e))
+* **tests:** limit array example length ([#611](https://github.com/anthropics/anthropic-sdk-typescript/issues/611)) ([91dc181](https://github.com/anthropics/anthropic-sdk-typescript/commit/91dc1812db2cc9e1f4660a13106bad932518b7cf))
+* **types:** nicer error class types + jsdocs ([#626](https://github.com/anthropics/anthropic-sdk-typescript/issues/626)) ([0287993](https://github.com/anthropics/anthropic-sdk-typescript/commit/0287993912ef81bd2c49603d120f49f4f979d75e))
+
+
+### Documentation
+
+* remove suggestion to use `npm` call out ([#614](https://github.com/anthropics/anthropic-sdk-typescript/issues/614)) ([6369261](https://github.com/anthropics/anthropic-sdk-typescript/commit/6369261e3597351f17b8f1a3945ca56b00eba177))
+* use latest sonnet in example snippets ([#625](https://github.com/anthropics/anthropic-sdk-typescript/issues/625)) ([f70882b](https://github.com/anthropics/anthropic-sdk-typescript/commit/f70882b0e8119a414b01b9f0b85fbe1ccb06f122))
+
 ## 0.32.1 (2024-11-05)
 
 Full Changelog: [sdk-v0.32.0...sdk-v0.32.1](https://github.com/anthropics/anthropic-sdk-typescript/compare/sdk-v0.32.0...sdk-v0.32.1)
diff --git README.md README.md
index daba6e63..da3db48e 100644
--- README.md
+++ README.md
@@ -28,7 +28,7 @@ async function main() {
   const message = await client.messages.create({
     max_tokens: 1024,
     messages: [{ role: 'user', content: 'Hello, Claude' }],
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
   });
 
   console.log(message.content);
@@ -49,7 +49,7 @@ const client = new Anthropic();
 const stream = await client.messages.create({
   max_tokens: 1024,
   messages: [{ role: 'user', content: 'Hello, Claude' }],
-  model: 'claude-3-opus-20240229',
+  model: 'claude-3-5-sonnet-latest',
   stream: true,
 });
 for await (const messageStreamEvent of stream) {
@@ -76,7 +76,7 @@ async function main() {
   const params: Anthropic.MessageCreateParams = {
     max_tokens: 1024,
     messages: [{ role: 'user', content: 'Hello, Claude' }],
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
   };
   const message: Anthropic.Message = await client.messages.create(params);
 }
@@ -108,7 +108,7 @@ const anthropic = new Anthropic();
 async function main() {
   const stream = anthropic.messages
     .stream({
-      model: 'claude-3-opus-20240229',
+      model: 'claude-3-5-sonnet-latest',
       max_tokens: 1024,
       messages: [
         {
@@ -146,7 +146,7 @@ await anthropic.beta.messages.batches.create({
     {
       custom_id: 'my-first-request',
       params: {
-        model: 'claude-3-5-sonnet-20240620',
+        model: 'claude-3-5-sonnet-latest',
         max_tokens: 1024,
         messages: [{ role: 'user', content: 'Hello, world' }],
       },
@@ -154,7 +154,7 @@ await anthropic.beta.messages.batches.create({
     {
       custom_id: 'my-second-request',
       params: {
-        model: 'claude-3-5-sonnet-20240620',
+        model: 'claude-3-5-sonnet-latest',
         max_tokens: 1024,
         messages: [{ role: 'user', content: 'Hi again, friend' }],
       },
@@ -198,7 +198,7 @@ async function main() {
     .create({
       max_tokens: 1024,
       messages: [{ role: 'user', content: 'Hello, Claude' }],
-      model: 'claude-3-opus-20240229',
+      model: 'claude-3-5-sonnet-latest',
     })
     .catch(async (err) => {
       if (err instanceof Anthropic.APIError) {
@@ -227,6 +227,18 @@ Error codes are as followed:
 | >=500       | `InternalServerError`      |
 | N/A         | `APIConnectionError`       |
 
+## Request IDs
+
+> For more information on debugging requests, see [these docs](https://docs.anthropic.com/en/api/errors#request-id)
+
+All object responses in the SDK provide a `_request_id` property which is added from the `request-id` response header so that you can quickly log failing requests and report them back to Anthropic.
+
+```ts
+const message = await client.messages.create({ max_tokens: 1024, messages: [{ role: 'user', content: 'Hello, Claude' }], model: 'claude-3-5-sonnet-latest' });
+console.log(message._request_id); // req_018EeWyXxfu5pfWkrYcMdjWG
+```
+
+
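The `_request_id` described above is attached as a non-enumerable property, so it rides along on the response object without leaking into serialization. A minimal self-contained sketch of that pattern (the `withRequestId` helper name is illustrative, not part of the SDK's public API):

```typescript
// Sketch: attach a request ID the way the SDK does, as a non-enumerable
// property so JSON.stringify output stays clean.
function withRequestId<T extends Record<string, unknown>>(
  value: T,
  requestId: string | null,
): T & { _request_id?: string | null } {
  return Object.defineProperty(value, '_request_id', {
    value: requestId,
    enumerable: false,
  }) as T & { _request_id?: string | null };
}

const message = withRequestId({ id: 'msg_123', content: 'Hello!' }, 'req_018EeWyXxfu5pfWkrYcMdjWG');
console.log(message._request_id); // readable for logging failing requests
console.log(JSON.stringify(message)); // ...but absent from serialized output
```

Because the property is non-enumerable, persisting or re-sending the object does not accidentally include the debugging metadata.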
 ### Retries
 
 Certain errors will be automatically retried 2 times by default, with a short exponential backoff.
@@ -243,7 +255,7 @@ const client = new Anthropic({
 });
 
 // Or, configure per-request:
-await client.messages.create({ max_tokens: 1024, messages: [{ role: 'user', content: 'Hello, Claude' }], model: 'claude-3-opus-20240229' }, {
+await client.messages.create({ max_tokens: 1024, messages: [{ role: 'user', content: 'Hello, Claude' }], model: 'claude-3-5-sonnet-latest' }, {
   maxRetries: 5,
 });

@@ -260,7 +272,7 @@ const client = new Anthropic({
});

// Override per-request:
-await client.messages.create({ max_tokens: 1024, messages: [{ role: 'user', content: 'Hello, Claude' }], model: 'claude-3-opus-20240229' }, {
+await client.messages.create({ max_tokens: 1024, messages: [{ role: 'user', content: 'Hello, Claude' }], model: 'claude-3-5-sonnet-latest' }, {
timeout: 5 * 1000,
});

@@ -295,7 +307,7 @@ for (const betaMessageBatch of page.data) {

// Convenience methods are provided for manually paginating:
while (page.hasNextPage()) {
-  page = page.getNextPage();
+  page = await page.getNextPage();
  // ...
}
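The pagination fix above matters because `getNextPage()` is async: without `await`, the loop variable becomes a pending `Promise` and the next `hasNextPage()` call fails. A self-contained sketch of the corrected loop (the `Page` class here is a toy stand-in, not the SDK's pager):

```typescript
// Minimal stand-in pager mirroring the hasNextPage()/getNextPage() shape.
class Page<T> {
  constructor(
    public data: T[],
    private rest: T[][],
  ) {}
  hasNextPage(): boolean {
    return this.rest.length > 0;
  }
  async getNextPage(): Promise<Page<T>> {
    const [next, ...rest] = this.rest;
    return new Page(next, rest);
  }
}

async function collectAll<T>(first: Page<T>): Promise<T[]> {
  let page = first;
  const all = [...page.data];
  while (page.hasNextPage()) {
    page = await page.getNextPage(); // the await is essential: getNextPage() returns a Promise
    all.push(...page.data);
  }
  return all;
}
```

Dropping the `await` assigns the unresolved `Promise` to `page`, so `page.hasNextPage` is no longer a function on the next iteration.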

@@ -317,7 +329,7 @@ const message = await client.messages.create(
   {
     max_tokens: 1024,
     messages: [{ role: 'user', content: 'Hello, Claude' }],
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
   },
   { headers: { 'anthropic-version': 'My-Custom-Value' } },
 );
@@ -339,7 +351,7 @@ const response = await client.messages
   .create({
     max_tokens: 1024,
     messages: [{ role: 'user', content: 'Hello, Claude' }],
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
   })
   .asResponse();
 console.log(response.headers.get('X-My-Header'));
@@ -349,7 +361,7 @@ const { data: message, response: raw } = await client.messages
   .create({
     max_tokens: 1024,
     messages: [{ role: 'user', content: 'Hello, Claude' }],
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
   })
   .withResponse();
 console.log(raw.headers.get('X-My-Header'));
@@ -461,7 +473,7 @@ await client.messages.create(
   {
     max_tokens: 1024,
     messages: [{ role: 'user', content: 'Hello, Claude' }],
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
   },
   {
     httpAgent: new http.Agent({ keepAlive: false }),
@@ -488,7 +500,7 @@ TypeScript >= 4.5 is supported.
 The following runtimes are supported:
 
 - Node.js 18 LTS or later (non-EOL) versions.
-- Deno v1.28.0 or higher, using `import Anthropic from "npm:@anthropic-ai/sdk"`.
+- Deno v1.28.0 or higher.
 - Bun 1.0 or later.
 - Cloudflare Workers.
 - Vercel Edge Runtime.

diff --git api.md api.md
index ab1abd4c..48d1c9a8 100644
--- api.md
+++ api.md
@@ -1,49 +1,103 @@

# Anthropic

+# Shared
+
+Types:
+
+- APIErrorObject
+- AuthenticationError
+- BillingError
+- ErrorObject
+- ErrorResponse
+- GatewayTimeoutError
+- InvalidRequestError
+- NotFoundError
+- OverloadedError
+- PermissionError
+- RateLimitError
+

# Messages

Types:

-- ContentBlock
-- ContentBlockDeltaEvent
-- ContentBlockStartEvent
-- ContentBlockStopEvent
-- ImageBlockParam
-- InputJSONDelta
-- Message
-- MessageDeltaEvent
-- MessageDeltaUsage
-- MessageParam
-- MessageStartEvent
-- MessageStopEvent
-- MessageStreamEvent
-- Metadata
-- Model
-- RawContentBlockDeltaEvent
-- RawContentBlockStartEvent
-- RawContentBlockStopEvent
-- RawMessageDeltaEvent
-- RawMessageStartEvent
-- RawMessageStopEvent
-- RawMessageStreamEvent
-- TextBlock
-- TextBlockParam
-- TextDelta
-- Tool
-- ToolChoice
-- ToolChoiceAny
-- ToolChoiceAuto
-- ToolChoiceTool
-- ToolResultBlockParam
-- ToolUseBlock
-- ToolUseBlockParam
-- Usage
+- Base64PDFSource
+- CacheControlEphemeral
+- ContentBlock
+- ContentBlockDeltaEvent
+- ContentBlockParam
+- ContentBlockStartEvent
+- ContentBlockStopEvent
+- DocumentBlockParam
+- ImageBlockParam
+- InputJSONDelta
+- Message
+- MessageDeltaEvent
+- MessageDeltaUsage
+- MessageParam
+- MessageStartEvent
+- MessageStopEvent
+- MessageStreamEvent
+- MessageTokensCount
+- Metadata
+- Model
+- RawContentBlockDeltaEvent
+- RawContentBlockStartEvent
+- RawContentBlockStopEvent
+- RawMessageDeltaEvent
+- RawMessageStartEvent
+- RawMessageStopEvent
+- RawMessageStreamEvent
+- TextBlock
+- TextBlockParam
+- TextDelta
+- Tool
+- ToolChoice
+- ToolChoiceAny
+- ToolChoiceAuto
+- ToolChoiceTool
+- ToolResultBlockParam
+- ToolUseBlock
+- ToolUseBlockParam
+- Usage

Methods:

-- client.messages.create({ ...params }) -> Message
+- client.messages.create({ ...params }) -> Message
+- client.messages.countTokens({ ...params }) -> MessageTokensCount

- client.messages.stream(body, options?) -> MessageStream

+## Batches
+
+Types:
+
+- MessageBatch
+- MessageBatchCanceledResult
+- MessageBatchErroredResult
+- MessageBatchExpiredResult
+- MessageBatchIndividualResponse
+- MessageBatchRequestCounts
+- MessageBatchResult
+- MessageBatchSucceededResult
+
+Methods:
+
+- client.messages.batches.create({ ...params }) -> MessageBatch
+- client.messages.batches.retrieve(messageBatchId) -> MessageBatch
+- client.messages.batches.list({ ...params }) -> MessageBatchesPage
+- client.messages.batches.cancel(messageBatchId) -> MessageBatch
+- client.messages.batches.results(messageBatchId) -> Response
+
+# Models
+
+Types:
+
+- ModelInfo
+
+Methods:
+
+- client.models.retrieve(modelId) -> ModelInfo
+- client.models.list({ ...params }) -> ModelInfosPage
+

# Beta

Types:
@@ -51,14 +105,27 @@ Types:

+## Models
+
+Types:
+
+- BetaModelInfo
+
+Methods:
+
+- client.beta.models.retrieve(modelId) -> BetaModelInfo
+- client.beta.models.list({ ...params }) -> BetaModelInfosPage
+

## Messages

Types:
@@ -124,26 +191,3 @@ Methods:

- client.beta.messages.batches.list({ ...params }) -> BetaMessageBatchesPage
- client.beta.messages.batches.cancel(messageBatchId, { ...params }) -> BetaMessageBatch
- client.beta.messages.batches.results(messageBatchId, { ...params }) -> Response

-## PromptCaching

-### Messages

-Types:

-- PromptCachingBetaCacheControlEphemeral
-- PromptCachingBetaImageBlockParam
-- PromptCachingBetaMessage
-- PromptCachingBetaMessageParam
-- PromptCachingBetaTextBlockParam
-- PromptCachingBetaTool
-- PromptCachingBetaToolResultBlockParam
-- PromptCachingBetaToolUseBlockParam
-- PromptCachingBetaUsage
-- RawPromptCachingBetaMessageStartEvent
-- RawPromptCachingBetaMessageStreamEvent

-Methods:

-- client.beta.promptCaching.messages.create({ ...params }) -> PromptCachingBetaMessage
-- client.beta.promptCaching.messages.stream({ ...params }) -> PromptCachingBetaMessageStream
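With prompt caching generally available in this release, the whole `beta.promptCaching` namespace removed above goes away; `cache_control` is now set inline on ordinary content block params. A sketch of the request shape only (plain object literals, no SDK call is made here):

```typescript
// GA prompt caching: cache_control lives directly on a content block param.
const cachedSystemBlock = {
  type: 'text' as const,
  text: 'You are an assistant with a very long, reusable system prompt...',
  cache_control: { type: 'ephemeral' as const },
};

// An object of this shape would be passed in the `system` array (or message
// content) of a regular messages.create call instead of the old
// client.beta.promptCaching.messages.create.
```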
diff --git examples/cancellation.ts examples/cancellation.ts
index 23fb7ec9..fc8bb0c7 100755
--- examples/cancellation.ts
+++ examples/cancellation.ts
@@ -16,7 +16,7 @@ async function main() {
   const question = 'Hey Claude! How can I recursively list all files in a directory in Rust?';
 
   const stream = await client.messages.create({
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
     stream: true,
     max_tokens: 500,
     messages: [{ role: 'user', content: question }],
diff --git a/examples/count-tokens.ts b/examples/count-tokens.ts
new file mode 100755
index 00000000..e69de29b
diff --git examples/demo.ts examples/demo.ts
index 609e63ef..33fc2d87 100755
--- examples/demo.ts
+++ examples/demo.ts
@@ -12,7 +12,7 @@ async function main() {
         content: 'Hey Claude!?',
       },
     ],
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
     max_tokens: 1024,
   });
   console.dir(result);
diff --git examples/raw-streaming.ts examples/raw-streaming.ts
index 559a6cac..916f2a4d 100755
--- examples/raw-streaming.ts
+++ examples/raw-streaming.ts
@@ -6,7 +6,7 @@ const client = new Anthropic(); // gets API Key from environment variable ANTHRO
 
 async function main() {
   const stream = await client.messages.create({
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
     stream: true,
     max_tokens: 500,
     messages: [
diff --git examples/streaming.ts examples/streaming.ts
index 9ac2da60..bc2c74bd 100755
--- examples/streaming.ts
+++ examples/streaming.ts
@@ -13,7 +13,7 @@ async function main() {
         content: `Hey Claude! How can I recursively list all files in a directory in Rust?`,
       },
     ],
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
     max_tokens: 1024,
   })
   // Once a content block is fully streamed, this event will fire
diff --git examples/tools-streaming.ts examples/tools-streaming.ts
index 96d9cbdc..816201f2 100644
--- examples/tools-streaming.ts
+++ examples/tools-streaming.ts
@@ -33,7 +33,7 @@ async function main() {
         },
       },
     ],
-    model: 'claude-3-haiku-20240307',
+    model: 'claude-3-5-sonnet-latest',
     max_tokens: 1024,
   })
   // When a JSON content block delta is encountered this
diff --git examples/tools.ts examples/tools.ts
index b237043b..1a696bc0 100644
--- examples/tools.ts
+++ examples/tools.ts
@@ -22,7 +22,7 @@ async function main() {
   ];
 
   const message = await client.messages.create({
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
     max_tokens: 1024,
     messages: [userMessage],
     tools,
@@ -38,7 +38,7 @@ async function main() {
   assert(tool);
 
   const result = await client.messages.create({
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
     max_tokens: 1024,
     messages: [
       userMessage,
diff --git package.json package.json
index f713c04f..d8f88f57 100644
--- package.json
+++ package.json
@@ -1,6 +1,6 @@
 {
   "name": "@anthropic-ai/sdk",
-  "version": "0.32.1",
+  "version": "0.33.1",
   "description": "The official TypeScript library for the Anthropic API",
   "author": "Anthropic [email protected]",
   "types": "dist/index.d.ts",
@@ -18,7 +18,7 @@
     "build": "./scripts/build-all",
     "prepublishOnly": "echo 'to publish, run yarn build && (cd dist; yarn publish)' && exit 1",
     "format": "prettier --write --cache --cache-strategy metadata . !dist",
-    "prepare": "if ./scripts/utils/check-is-in-git-install.sh; then ./scripts/build; fi",
+    "prepare": "if ./scripts/utils/check-is-in-git-install.sh; then ./scripts/build && ./scripts/utils/git-swap.sh; fi",
     "tsn": "ts-node -r tsconfig-paths/register",
     "lint": "./scripts/lint",
     "fix": "./scripts/format"
@@ -45,7 +45,6 @@
     "jest": "^29.4.0",
     "prettier": "^3.0.0",
     "ts-jest": "^29.1.0",
-    "ts-morph": "^19.0.0",
     "ts-node": "^10.5.0",
     "tsc-multi": "^1.1.0",
     "tsconfig-paths": "^4.0.0",
    diff --git packages/bedrock-sdk/CHANGELOG.md packages/bedrock-sdk/CHANGELOG.md
    index 174cbb90..837af37e 100644
    --- packages/bedrock-sdk/CHANGELOG.md
    +++ packages/bedrock-sdk/CHANGELOG.md
    @@ -1,5 +1,24 @@

# Changelog

+## 0.12.0 (2024-12-17)
+
+Full Changelog: bedrock-sdk-v0.11.2...bedrock-sdk-v0.12.0
+
+### Features
+
+* api: general availability updates (#631) (b5c92e5)
+
+
+### Chores
+
+* bedrock,vertex: remove unsupported countTokens method (#597) (17b7da5)
+* bedrock: remove unsupported methods (6458dc1)
+
+
+### Documentation
+
+* use latest sonnet in example snippets (#625) (f70882b)
+

## 0.11.2 (2024-11-05)

Full Changelog: bedrock-sdk-v0.11.1...bedrock-sdk-v0.11.2
diff --git packages/bedrock-sdk/README.md packages/bedrock-sdk/README.md
index f6eca6f5..74765c47 100644
--- packages/bedrock-sdk/README.md
+++ packages/bedrock-sdk/README.md
@@ -27,7 +27,7 @@ const client = new AnthropicBedrock();
 
 async function main() {
   const message = await client.messages.create({
-    model: 'anthropic.claude-3-sonnet-20240229-v1:0',
+    model: 'anthropic.claude-3-5-sonnet-20241022-v2:0',
     messages: [
       {
         role: 'user',
    diff --git packages/bedrock-sdk/examples/demo.ts packages/bedrock-sdk/examples/demo.ts
    index 810514e8..a918b9ca 100644
    --- packages/bedrock-sdk/examples/demo.ts
    +++ packages/bedrock-sdk/examples/demo.ts
@@ -11,7 +11,7 @@ const anthropic = new AnthropicBedrock();
 
 async function main() {
   const message = await anthropic.messages.create({
-    model: 'anthropic.claude-3-sonnet-20240229-v1:0',
+    model: 'anthropic.claude-3-5-sonnet-20241022-v2:0',
     messages: [
       {
         role: 'user',
    diff --git packages/bedrock-sdk/examples/streaming.ts packages/bedrock-sdk/examples/streaming.ts
    index e1fac81f..5c577a2d 100644
    --- packages/bedrock-sdk/examples/streaming.ts
    +++ packages/bedrock-sdk/examples/streaming.ts
@@ -11,7 +11,7 @@ const client = new AnthropicBedrock();
 
 async function main() {
   const stream = await client.messages.create({
-    model: 'anthropic.claude-3-sonnet-20240229-v1:0',
+    model: 'anthropic.claude-3-5-sonnet-20241022-v2:0',
     messages: [
       {
         role: 'user',
    diff --git packages/bedrock-sdk/package.json packages/bedrock-sdk/package.json
    index a0e56703..352931a5 100644
    --- packages/bedrock-sdk/package.json
    +++ packages/bedrock-sdk/package.json
@@ -1,6 +1,6 @@
 {
   "name": "@anthropic-ai/bedrock-sdk",
-  "version": "0.11.2",
+  "version": "0.12.0",
   "description": "The official TypeScript library for the Anthropic Bedrock API",
   "author": "Anthropic [email protected]",
   "types": "dist/index.d.ts",
    diff --git packages/bedrock-sdk/src/client.ts packages/bedrock-sdk/src/client.ts
    index 523df8ba..86bd17ef 100644
    --- packages/bedrock-sdk/src/client.ts
    +++ packages/bedrock-sdk/src/client.ts
@@ -74,7 +74,7 @@ export class AnthropicBedrock extends Core.APIClient {
     this.awsSessionToken = awsSessionToken;
   }
 
-  messages: Resources.Messages = new Resources.Messages(this);
+  messages: MessagesResource = makeMessagesResource(this);
   completions: Resources.Completions = new Resources.Completions(this);
   beta: BetaResource = makeBetaResource(this);
 
@@ -159,10 +159,27 @@ export class AnthropicBedrock extends Core.APIClient {
 }
 
 /**
- * The Bedrock API does not currently support prompt caching or the Batch API.
+ * The Bedrock API does not currently support token counting or the Batch API.
+ */
+type MessagesResource = Omit<Resources.Messages, 'batches' | 'countTokens'>;
+
+function makeMessagesResource(client: AnthropicBedrock): MessagesResource {
+  const resource = new Resources.Messages(client);
+  // @ts-expect-error we're deleting non-optional properties
+  delete resource.batches;
+  // @ts-expect-error we're deleting non-optional properties
+  delete resource.countTokens;
+  return resource;
+}
+
+/**
+ * The Bedrock API does not currently support prompt caching, token counting or the Batch API.
  */
 type BetaResource = Omit<Resources.Beta, 'promptCaching' | 'messages'> & {
-  messages: Omit<Resources.Beta['messages'], 'batches'>;
+  messages: Omit<Resources.Beta['messages'], 'batches' | 'countTokens'>;
 };
 
 function makeBetaResource(client: AnthropicBedrock): BetaResource {
@@ -174,5 +191,8 @@ function makeBetaResource(client: AnthropicBedrock): BetaResource {
   // @ts-expect-error we're deleting non-optional properties
   delete resource.messages.batches;
 
+  // @ts-expect-error we're deleting non-optional properties
+  delete resource.messages.countTokens;
+
   return resource;
 }
    diff --git packages/vertex-sdk/CHANGELOG.md packages/vertex-sdk/CHANGELOG.md
    index 418af52a..94191164 100644
    --- packages/vertex-sdk/CHANGELOG.md
    +++ packages/vertex-sdk/CHANGELOG.md
    @@ -1,5 +1,32 @@

# Changelog

+## 0.6.1 (2024-12-17)
+
+Full Changelog: vertex-sdk-v0.6.0...vertex-sdk-v0.6.1
+
+### Bug Fixes
+
+* vertex: remove anthropic_version deletion for token counting (88221be)
+
+## 0.6.0 (2024-12-17)
+
+Full Changelog: vertex-sdk-v0.5.2...vertex-sdk-v0.6.0
+
+### Features
+
+* api: general availability updates (#631) (b5c92e5)
+* vertex: support token counting (9e76b4d)
+
+
+### Chores
+
+* bedrock,vertex: remove unsupported countTokens method (#597) (17b7da5)
+
+
+### Documentation
+
+* use latest sonnet in example snippets (#625) (f70882b)
+

## 0.5.2 (2024-11-05)

Full Changelog: vertex-sdk-v0.5.1...vertex-sdk-v0.5.2
diff --git packages/vertex-sdk/README.md packages/vertex-sdk/README.md
index 6e63a8c5..6c9a9c93 100644
--- packages/vertex-sdk/README.md
+++ packages/vertex-sdk/README.md
@@ -30,7 +30,7 @@ async function main() {
         content: 'Hey Claude!',
       },
     ],
-    model: 'claude-3-sonnet@20240229',
+    model: 'claude-3-5-sonnet-v2@20241022',
     max_tokens: 300,
   });
   console.log(JSON.stringify(result, null, 2));
    diff --git packages/vertex-sdk/examples/vertex.ts packages/vertex-sdk/examples/vertex.ts
    index 62474cc7..75aba347 100644
    --- packages/vertex-sdk/examples/vertex.ts
    +++ packages/vertex-sdk/examples/vertex.ts
@@ -14,7 +14,7 @@ async function main() {
         content: 'Hello!',
       },
     ],
-    model: 'claude-3-sonnet@20240229',
+    model: 'claude-3-5-sonnet-v2@20241022',
     max_tokens: 300,
   });
   console.log(JSON.stringify(result, null, 2));
    diff --git packages/vertex-sdk/package.json packages/vertex-sdk/package.json
    index 210c96d5..43fc356d 100644
    --- packages/vertex-sdk/package.json
    +++ packages/vertex-sdk/package.json
@@ -1,6 +1,6 @@
 {
   "name": "@anthropic-ai/vertex-sdk",
-  "version": "0.5.2",
+  "version": "0.6.1",
   "description": "The official TypeScript library for the Anthropic Vertex API",
   "author": "Anthropic [email protected]",
   "types": "dist/index.d.ts",
    diff --git packages/vertex-sdk/src/client.ts packages/vertex-sdk/src/client.ts
    index 06231649..f1046455 100644
    --- packages/vertex-sdk/src/client.ts
    +++ packages/vertex-sdk/src/client.ts
@@ -83,7 +83,7 @@ export class AnthropicVertex extends Core.APIClient {
     this._authClientPromise = this._auth.getClient();
   }
 
-  messages: Resources.Messages = new Resources.Messages(this);
+  messages: MessagesResource = makeMessagesResource(this);
   beta: BetaResource = makeBetaResource(this);
 
   protected override defaultQuery(): Core.DefaultQuery | undefined {
@@ -147,15 +147,42 @@ export class AnthropicVertex extends Core.APIClient {
       options.path = `/projects/${this.projectId}/locations/${this.region}/publishers/anthropic/models/${model}:${specifier}`;
     }
 
+    if (
+      options.path === '/v1/messages/count_tokens' ||
+      (options.path == '/v1/messages/count_tokens?beta=true' && options.method === 'post')
+    ) {
+      if (!this.projectId) {
+        throw new Error(
+          'No projectId was given and it could not be resolved from credentials. The client should be instantiated with the `projectId` option or the `ANTHROPIC_VERTEX_PROJECT_ID` environment variable should be set.',
+        );
+      }
+
+      options.path = `/projects/${this.projectId}/locations/${this.region}/publishers/anthropic/models/count-tokens:rawPredict`;
+    }
+
     return super.buildRequest(options);
   }
 }
 
 /**
- * The Vertex API does not currently support prompt caching or the Batch API.
+ * The Vertex SDK does not currently support the Batch API.
+ */
+type MessagesResource = Omit<Resources.Messages, 'batches'>;
+
+function makeMessagesResource(client: AnthropicVertex): MessagesResource {
+  const resource = new Resources.Messages(client);
+  // @ts-expect-error we're deleting non-optional properties
+  delete resource.batches;
+  return resource;
+}
+
+/**
+ * The Vertex API does not currently support prompt caching, token counting or the Batch API.
  */
 type BetaResource = Omit<Resources.Beta, 'promptCaching' | 'messages'> & {
-  messages: Omit<Resources.Beta['messages'], 'batches'>;
+  messages: Omit<Resources.Beta['messages'], 'batches' | 'countTokens'>;
 };
 
 function makeBetaResource(client: AnthropicVertex): BetaResource {
@@ -167,5 +194,8 @@ function makeBetaResource(client: AnthropicVertex): BetaResource {
   // @ts-expect-error we're deleting non-optional properties
   delete resource.messages.batches;
 
+  // @ts-expect-error we're deleting non-optional properties
+  delete resource.messages.countTokens;
+
   return resource;
 }
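The `buildRequest` override above rewrites the generic count-tokens path to the project- and region-scoped Vertex endpoint. A standalone sketch of just that rewrite (the function name and signature are illustrative, not the SDK's internals):

```typescript
// Illustrative rewrite of the count_tokens path to the Vertex rawPredict endpoint.
function resolveVertexPath(path: string, projectId: string | null, region: string): string {
  if (path === '/v1/messages/count_tokens' || path === '/v1/messages/count_tokens?beta=true') {
    if (!projectId) {
      throw new Error('projectId is required to resolve the count-tokens endpoint');
    }
    return `/projects/${projectId}/locations/${region}/publishers/anthropic/models/count-tokens:rawPredict`;
  }
  return path; // all other paths pass through unchanged
}
```

Regular message paths are untouched; only the two count-tokens variants get the model-independent `count-tokens:rawPredict` target.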
    diff --git scripts/build scripts/build
    index ed2b9941..0bee923e 100755
    --- scripts/build
    +++ scripts/build
@@ -32,7 +32,7 @@ npm exec tsc-multi
 
 # copy over handwritten .js/.mjs/.d.ts files
 cp src/_shims/*.{d.ts,js,mjs,md} dist/_shims
 cp src/_shims/auto/*.{d.ts,js,mjs} dist/_shims/auto
-# we need to add exports = module.exports = Anthropic TypeScript to index.js;
+# we need to add exports = module.exports = Anthropic to index.js;
 # No way to get that from index.ts because it would cause compile errors
 # when building .mjs
 node scripts/utils/fix-index-exports.cjs
diff --git scripts/utils/check-is-in-git-install.sh scripts/utils/check-is-in-git-install.sh
index 36bcedc2..1354eb43 100755
--- scripts/utils/check-is-in-git-install.sh
+++ scripts/utils/check-is-in-git-install.sh
@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 # Check if you happen to call prepare for a repository that's already in node_modules.
 [ "$(basename "$(dirname "$PWD")")" = 'node_modules' ] ||
 # The name of the containing directory that `npm` uses, which looks like

diff --git a/scripts/utils/git-swap.sh b/scripts/utils/git-swap.sh
new file mode 100755
index 00000000..79d1888e
--- /dev/null
+++ scripts/utils/git-swap.sh
@@ -0,0 +1,13 @@
+#!/usr/bin/env bash
+set -exuo pipefail
+# the package is published to NPM from ./dist
+# we want the final file structure for git installs to match the npm installs, so we
+
+# delete everything except ./dist and ./node_modules
+find . -maxdepth 1 -mindepth 1 ! -name 'dist' ! -name 'node_modules' -exec rm -rf '{}' +
+
+# move everything from ./dist to .
+mv dist/* .
+
+# delete the now-empty ./dist
+rmdir dist
diff --git src/core.ts src/core.ts
index a3e22246..ea8d8dca 100644
--- src/core.ts
+++ src/core.ts
@@ -37,7 +37,7 @@ type APIResponseProps = {
controller: AbortController;
};

-async function defaultParseResponse(props: APIResponseProps): Promise {
+async function defaultParseResponse(props: APIResponseProps): Promise<WithRequestID> {
const { response } = props;
if (props.options.stream) {
debug('response', response.status, response.url, response.headers, response.body);
@@ -54,11 +54,11 @@ async function defaultParseResponse(props: APIResponseProps): Promise {

// fetch refuses to read the body when the status code is 204.
if (response.status === 204) {

  • return null as T;
  • return null as WithRequestID;
    }

if (props.options.__binaryResponse) {

  • return response as unknown as T;
  • return response as unknown as WithRequestID;
    }

const contentType = response.headers.get('content-type');
@@ -69,26 +69,44 @@ async function defaultParseResponse(props: APIResponseProps): Promise {

 debug('response', response.status, response.url, response.headers, json);
  • return json as T;
  • return _addRequestID(json as T, response);
    }

const text = await response.text();
debug('response', response.status, response.url, response.headers, text);

// TODO handle blob, arraybuffer, other content types, etc.

  • return text as unknown as T;
  • return text as unknown as WithRequestID;
    +}

+type WithRequestID =

  • T extends Array | Response | AbstractPage ? T
  • : T extends Record<string, any> ? T & { _request_id?: string | null }
  • : T;

+function _addRequestID(value: T, response: Response): WithRequestID {

  • if (!value || typeof value !== 'object' || Array.isArray(value)) {
  • return value as WithRequestID;
  • }
  • return Object.defineProperty(value, '_request_id', {
  • value: response.headers.get('request-id'),
  • enumerable: false,
  • }) as WithRequestID;
    }

 /**
  * A subclass of `Promise` providing additional helper methods
  * for interacting with the SDK.
  */
-export class APIPromise<T> extends Promise<T> {
-  private parsedPromise: Promise<T> | undefined;
+export class APIPromise<T> extends Promise<WithRequestID<T>> {
+  private parsedPromise: Promise<WithRequestID<T>> | undefined;

   constructor(
     private responsePromise: Promise<APIResponseProps>,
-    private parseResponse: (props: APIResponseProps) => PromiseOrValue<T> = defaultParseResponse,
+    private parseResponse: (
+      props: APIResponseProps,
+    ) => PromiseOrValue<WithRequestID<T>> = defaultParseResponse,
   ) {
     super((resolve) => {
       // this is maybe a bit weird but this has to be a no-op to not implicitly
@@ -100,7 +118,7 @@ export class APIPromise<T> extends Promise<T> {

   _thenUnwrap<U>(transform: (data: T, props: APIResponseProps) => U): APIPromise<U> {
     return new APIPromise(this.responsePromise, async (props) =>
-      transform(await this.parseResponse(props), props),
+      _addRequestID(transform(await this.parseResponse(props), props), props.response),
     );
   }

@@ -120,33 +138,35 @@ export class APIPromise<T> extends Promise<T> {
   asResponse(): Promise<Response> {
     return this.responsePromise.then((p) => p.response);
   }
+
   /**
-   * Gets the parsed response data and the raw `Response` instance.
+   * Gets the parsed response data, the raw `Response` instance and the ID of the request,
+   * returned via the `request-id` header, which is useful for debugging requests and reporting
+   * issues to Anthropic.
    *
    * If you just want to get the raw `Response` instance without parsing it,
    * you can use {@link asResponse()}.
    *
    * 👋 Getting the wrong TypeScript type for `Response`?
    * Try setting `"moduleResolution": "NodeNext"` if you can,
    * or add one of these imports before your first `import … from '@anthropic-ai/sdk'`:
    * - `import '@anthropic-ai/sdk/shims/node'` (if you're running on Node)
    * - `import '@anthropic-ai/sdk/shims/web'` (otherwise)
    */
-  async withResponse(): Promise<{ data: T; response: Response }> {
+  async withResponse(): Promise<{ data: T; response: Response; request_id: string | null | undefined }> {
     const [data, response] = await Promise.all([this.parse(), this.asResponse()]);
-    return { data, response };
+    return { data, response, request_id: response.headers.get('request-id') };
   }

-  private parse(): Promise<T> {
+  private parse(): Promise<WithRequestID<T>> {
     if (!this.parsedPromise) {
-      this.parsedPromise = this.responsePromise.then(this.parseResponse);
+      this.parsedPromise = this.responsePromise.then(this.parseResponse) as any as Promise<WithRequestID<T>>;
     }
     return this.parsedPromise;
   }

-  override then<TResult1 = T, TResult2 = never>(
-    onfulfilled?: ((value: T) => TResult1 | PromiseLike<TResult1>) | undefined | null,
+  override then<TResult1 = WithRequestID<T>, TResult2 = never>(
+    onfulfilled?: ((value: WithRequestID<T>) => TResult1 | PromiseLike<TResult1>) | undefined | null,
     onrejected?: ((reason: any) => TResult2 | PromiseLike<TResult2>) | undefined | null,
   ): Promise<TResult1 | TResult2> {
     return this.parse().then(onfulfilled, onrejected);
@@ -154,11 +174,11 @@ export class APIPromise<T> extends Promise<T> {

   override catch<TResult = never>(
     onrejected?: ((reason: any) => TResult | PromiseLike<TResult>) | undefined | null,
-  ): Promise<T | TResult> {
+  ): Promise<WithRequestID<T> | TResult> {
     return this.parse().catch(onrejected);
   }

-  override finally(onfinally?: (() => void) | undefined | null): Promise<T> {
+  override finally(onfinally?: (() => void) | undefined | null): Promise<WithRequestID<T>> {
     return this.parse().finally(onfinally);
   }
 }
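The `_addRequestID` helper introduced in this hunk attaches the `request-id` header value as a non-enumerable property, so it travels with the parsed object for debugging without leaking into serialization or key listings. A standalone sketch of that mechanic (simplified from the diff; `addRequestID` and the inline header value here are illustrative, not the SDK module itself):

```typescript
type WithRequestID<T> =
  T extends Record<string, any> ? T & { _request_id?: string | null } : T;

function addRequestID<T>(value: T, requestId: string | null): WithRequestID<T> {
  // Primitives and arrays pass through untouched, as in the diff.
  if (!value || typeof value !== 'object' || Array.isArray(value)) {
    return value as WithRequestID<T>;
  }
  // enumerable: false keeps _request_id out of JSON.stringify and Object.keys.
  return Object.defineProperty(value, '_request_id', {
    value: requestId,
    enumerable: false,
  }) as WithRequestID<T>;
}

const message = addRequestID({ id: 'msg_123', role: 'assistant' }, 'req_abc');
console.log(message._request_id);     // 'req_abc'
console.log(JSON.stringify(message)); // {"id":"msg_123","role":"assistant"} — no _request_id
```

Because the property descriptor defaults to non-writable as well, downstream code cannot accidentally clobber the ID.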
@@ -177,7 +197,7 @@ export abstract class APIClient {
     maxRetries = 2,
     timeout = 600000, // 10 minutes
     httpAgent,
-    fetch: overridenFetch,
+    fetch: overriddenFetch,
   }: {
     baseURL: string;
     maxRetries?: number | undefined;
@@ -190,7 +210,7 @@ export abstract class APIClient {
     this.timeout = validatePositiveInteger('timeout', timeout);
     this.httpAgent = httpAgent;
-    this.fetch = overridenFetch ?? fetch;
+    this.fetch = overriddenFetch ?? fetch;
   }

   protected authHeaders(opts: FinalRequestOptions): Headers {
@@ -537,19 +557,13 @@ export abstract class APIClient {
     const timeout = setTimeout(() => controller.abort(), ms);

     return (
-      this.getRequestClient()
-        // use undefined this binding; fetch errors if bound to something else in browser/cloudflare
-        .fetch.call(undefined, url, { signal: controller.signal as any, ...options })
-        .finally(() => {
-          clearTimeout(timeout);
-        })
+      // use undefined this binding; fetch errors if bound to something else in browser/cloudflare
+      this.fetch.call(undefined, url, { signal: controller.signal as any, ...options }).finally(() => {
+        clearTimeout(timeout);
+      })
     );
   }

-  protected getRequestClient(): RequestClient {
-    return { fetch: this.fetch };
-  }
-
   private shouldRetry(response: Response): boolean {
     // Note this is not a standard header.
     const shouldRetryHeader = response.headers.get('x-should-retry');
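The timeout path above pairs an `AbortController` with a `setTimeout`, clearing the timer in `finally` regardless of outcome. A standalone sketch of that shape (with an injected `fetch`-like function so it runs offline; `FetchLike` and `fetchWithTimeout` here are illustrative names, not the SDK's exports):

```typescript
type FetchLike = (url: string, init: { signal: AbortSignal }) => Promise<unknown>;

async function fetchWithTimeout(fetchFn: FetchLike, url: string, ms: number): Promise<unknown> {
  const controller = new AbortController();
  // Abort the in-flight request once the deadline passes.
  const timeout = setTimeout(() => controller.abort(), ms);
  // Call with an undefined `this` binding, as the diff notes fetch requires
  // in browser/Cloudflare environments, and always clear the timer.
  return fetchFn.call(undefined, url, { signal: controller.signal }).finally(() => {
    clearTimeout(timeout);
  });
}
```

Clearing the timer in `finally` matters: without it, a fast response would leave a dangling timer that later aborts an already-settled controller and keeps the event loop alive.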
@@ -724,7 +738,13 @@ export class PagePromise<
   ) {
     super(
       request,
-      async (props) => new Page(client, props.response, await defaultParseResponse(props), props.options),
+      async (props) =>
+        new Page(
+          client,
+          props.response,
+          await defaultParseResponse(props),
+          props.options,
+        ) as WithRequestID<PageClass>,
     );
   }

@@ -992,8 +1012,8 @@ export const safeJSON = (text: string) => {
}
};

-// https://stackoverflow.com/a/19709846
-const startsWithSchemeRegexp = new RegExp('^(?:[a-z]+:)?//', 'i');
+// https://url.spec.whatwg.org/#url-scheme-string
+const startsWithSchemeRegexp = /^[a-z][a-z0-9+.-]*:/i;
const isAbsoluteURL = (url: string): boolean => {
return startsWithSchemeRegexp.test(url);
};
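The new scheme regex changes which URLs `isAbsoluteURL` accepts: any RFC-style scheme (`mailto:`, `s3:`, and so on) now counts as absolute, while protocol-relative `//…` URLs no longer do. A quick standalone comparison of the two patterns taken from this hunk:

```typescript
// Old pattern (https://stackoverflow.com/a/19709846): optional scheme followed by "//".
const oldPattern = new RegExp('^(?:[a-z]+:)?//', 'i');
// New pattern (https://url.spec.whatwg.org/#url-scheme-string): any scheme prefix.
const newPattern = /^[a-z][a-z0-9+.-]*:/i;

const urls = ['https://example.com', '//cdn.example.com/a.js', 'mailto:[email protected]', '/v1/messages'];
for (const url of urls) {
  console.log(url, 'old:', oldPattern.test(url), 'new:', newPattern.test(url));
}
// https://example.com     old: true   new: true
// //cdn.example.com/a.js  old: true   new: false  (protocol-relative no longer "absolute")
// mailto:[email protected]    old: false  new: true
// /v1/messages            old: false  new: false
```

The practical effect is that any URL with a scheme bypasses base-URL joining, while protocol-relative URLs are now resolved against the client's `baseURL`.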
diff --git src/error.ts src/error.ts
index e9f24916..64525004 100644
--- src/error.ts
+++ src/error.ts
@@ -4,19 +4,21 @@ import { castToError, Headers } from './core';

 export class AnthropicError extends Error {}

-export class APIError extends AnthropicError {
-  readonly status: number | undefined;
-  readonly headers: Headers | undefined;
-  readonly error: Object | undefined;
+export class APIError<
+  TStatus extends number | undefined = number | undefined,
+  THeaders extends Headers | undefined = Headers | undefined,
+  TError extends Object | undefined = Object | undefined,
+> extends AnthropicError {
+  /** HTTP status for the response that caused the error */
+  readonly status: TStatus;
+  /** HTTP headers for the response that caused the error */
+  readonly headers: THeaders;
+  /** JSON body of the response that caused the error */
+  readonly error: TError;

   readonly request_id: string | null | undefined;

-  constructor(
-    status: number | undefined,
-    error: Object | undefined,
-    message: string | undefined,
-    headers: Headers | undefined,
-  ) {
+  constructor(status: TStatus, error: TError, message: string | undefined, headers: THeaders) {
     super(`${APIError.makeMessage(status, error, message)}`);
     this.status = status;
     this.headers = headers;
@@ -51,7 +53,7 @@ export class APIError extends AnthropicError {
     message: string | undefined,
     headers: Headers | undefined,
   ): APIError {
-    if (!status) {
+    if (!status || !headers) {
       return new APIConnectionError({ message, cause: castToError(errorResponse) });
     }

@@ -93,17 +95,13 @@ export class APIError extends AnthropicError {
   }
 }

-export class APIUserAbortError extends APIError {
-  override readonly status: undefined = undefined;
-
+export class APIUserAbortError extends APIError<undefined, undefined, undefined> {
   constructor({ message }: { message?: string } = {}) {
     super(undefined, undefined, message || 'Request was aborted.', undefined);
   }
 }

-export class APIConnectionError extends APIError {
-  override readonly status: undefined = undefined;
-
+export class APIConnectionError extends APIError<undefined, undefined, undefined> {
   constructor({ message, cause }: { message?: string | undefined; cause?: Error | undefined }) {
     super(undefined, undefined, message || 'Connection error.', undefined);
     // in some environments the 'cause' property is already declared
@@ -118,32 +116,18 @@ export class APIConnectionTimeoutError extends APIConnectionError {
   }
 }

-export class BadRequestError extends APIError {
-  override readonly status: 400 = 400;
-}
+export class BadRequestError extends APIError<400, Headers> {}

-export class AuthenticationError extends APIError {
-  override readonly status: 401 = 401;
-}
+export class AuthenticationError extends APIError<401, Headers> {}

-export class PermissionDeniedError extends APIError {
-  override readonly status: 403 = 403;
-}
+export class PermissionDeniedError extends APIError<403, Headers> {}

-export class NotFoundError extends APIError {
-  override readonly status: 404 = 404;
-}
+export class NotFoundError extends APIError<404, Headers> {}

-export class ConflictError extends APIError {
-  override readonly status: 409 = 409;
-}
+export class ConflictError extends APIError<409, Headers> {}

-export class UnprocessableEntityError extends APIError {
-  override readonly status: 422 = 422;
-}
+export class UnprocessableEntityError extends APIError<422, Headers> {}

-export class RateLimitError extends APIError {
-  override readonly status: 429 = 429;
-}
+export class RateLimitError extends APIError<429, Headers> {}

-export class InternalServerError extends APIError {}
+export class InternalServerError extends APIError<number, Headers> {}
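With the generic `APIError<TStatus, THeaders, TError>` above, each subclass pins its `status` at the type level rather than re-declaring an `override readonly` field at runtime. A simplified self-contained sketch of the pattern (single type parameter, minimal constructor; not the SDK's full class):

```typescript
// Generic base: the status type is carried as a type parameter.
class APIError<TStatus extends number | undefined = number | undefined> extends Error {
  constructor(readonly status: TStatus, message: string) {
    super(message);
  }
}

// The literal type argument makes `status` narrow to exactly 404 for consumers,
// with no extra runtime field needed.
class NotFoundError extends APIError<404> {
  constructor(message: string) {
    super(404, message);
  }
}

const err = new NotFoundError('model not found');
const code: 404 = err.status; // typechecks because status is the literal 404
console.log(code); // 404
```

The runtime behavior is unchanged; the win is that `instanceof NotFoundError` checks now also narrow `status` (and, in the SDK, `headers`) without redundant overrides.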
diff --git src/index.ts src/index.ts
index 70c9d5d7..bfca4fc8 100644
--- src/index.ts
+++ src/index.ts
@@ -14,14 +14,35 @@ import {
   CompletionCreateParamsStreaming,
   Completions,
 } from './resources/completions';
+import { ModelInfo, ModelInfosPage, ModelListParams, Models } from './resources/models';
 import {
+  AnthropicBeta,
+  Beta,
+  BetaAPIError,
+  BetaAuthenticationError,
+  BetaBillingError,
+  BetaError,
+  BetaErrorResponse,
+  BetaGatewayTimeoutError,
+  BetaInvalidRequestError,
+  BetaNotFoundError,
+  BetaOverloadedError,
+  BetaPermissionError,
+  BetaRateLimitError,
+} from './resources/beta/beta';
+import {
+  Base64PDFSource,
+  CacheControlEphemeral,
   ContentBlock,
   ContentBlockDeltaEvent,
+  ContentBlockParam,
   ContentBlockStartEvent,
   ContentBlockStopEvent,
+  DocumentBlockParam,
   ImageBlockParam,
   InputJSONDelta,
   Message,
+  MessageCountTokensParams,
   MessageCreateParams,
   MessageCreateParamsNonStreaming,
   MessageCreateParamsStreaming,
@@ -32,6 +53,7 @@ import {
   MessageStopEvent,
   MessageStreamEvent,
   MessageStreamParams,
+  MessageTokensCount,
   Messages,
   Metadata,
   Model,
@@ -54,20 +76,7 @@ import {
   ToolUseBlock,
   ToolUseBlockParam,
   Usage,
-} from './resources/messages';
-import {
-  AnthropicBeta,
-  Beta,
-  BetaAPIError,
-  BetaAuthenticationError,
-  BetaError,
-  BetaErrorResponse,
-  BetaInvalidRequestError,
-  BetaNotFoundError,
-  BetaOverloadedError,
-  BetaPermissionError,
-  BetaRateLimitError,
-} from './resources/beta/beta';
+} from './resources/messages/messages';

 export interface ClientOptions {
   /**
@@ -181,7 +190,7 @@ export class Anthropic extends Core.APIClient {

     if (!options.dangerouslyAllowBrowser && Core.isRunningInBrowser()) {
       throw new Errors.AnthropicError(
-        "It looks like you're running in a browser-like environment.\n\nThis is disabled by default, as it risks exposing your secret API credentials to attackers.\nIf you understand the risks and have appropriate mitigations in place,\nyou can set the `dangerouslyAllowBrowser` option to `true`, e.g.,\n\nnew Anthropic({ apiKey, dangerouslyAllowBrowser: true });\n\nTODO: link!\n",
+        "It looks like you're running in a browser-like environment.\n\nThis is disabled by default, as it risks exposing your secret API credentials to attackers.\nIf you understand the risks and have appropriate mitigations in place,\nyou can set the `dangerouslyAllowBrowser` option to `true`, e.g.,\n\nnew Anthropic({ apiKey, dangerouslyAllowBrowser: true });\n",
       );
     }

@@ -201,6 +210,7 @@ export class Anthropic extends Core.APIClient {

   completions: API.Completions = new API.Completions(this);
   messages: API.Messages = new API.Messages(this);
+  models: API.Models = new API.Models(this);
   beta: API.Beta = new API.Beta(this);

   protected override defaultQuery(): Core.DefaultQuery | undefined {
@@ -289,31 +299,11 @@ export class Anthropic extends Core.APIClient {
   static fileFromPath = Uploads.fileFromPath;
 }

-export const { HUMAN_PROMPT, AI_PROMPT } = Anthropic;
-
-export {
-  AnthropicError,
-  APIError,
-  APIConnectionError,
-  APIConnectionTimeoutError,
-  APIUserAbortError,
-  NotFoundError,
-  ConflictError,
-  RateLimitError,
-  BadRequestError,
-  AuthenticationError,
-  InternalServerError,
-  PermissionDeniedError,
-  UnprocessableEntityError,
-} from './error';
-
-export import toFile = Uploads.toFile;
-export import fileFromPath = Uploads.fileFromPath;
-
 Anthropic.Completions = Completions;
 Anthropic.Messages = Messages;
+Anthropic.Models = Models;
+Anthropic.ModelInfosPage = ModelInfosPage;
 Anthropic.Beta = Beta;

 export declare namespace Anthropic {
   export type RequestOptions = Core.RequestOptions;

@@ -330,10 +320,14 @@ export declare namespace Anthropic {

   export {
     Messages as Messages,
+    type Base64PDFSource as Base64PDFSource,
+    type CacheControlEphemeral as CacheControlEphemeral,
     type ContentBlock as ContentBlock,
     type ContentBlockDeltaEvent as ContentBlockDeltaEvent,
+    type ContentBlockParam as ContentBlockParam,
     type ContentBlockStartEvent as ContentBlockStartEvent,
     type ContentBlockStopEvent as ContentBlockStopEvent,
+    type DocumentBlockParam as DocumentBlockParam,
     type ImageBlockParam as ImageBlockParam,
     type InputJSONDelta as InputJSONDelta,
     type Message as Message,
@@ -343,6 +337,7 @@ export declare namespace Anthropic {
     type MessageStartEvent as MessageStartEvent,
     type MessageStopEvent as MessageStopEvent,
     type MessageStreamEvent as MessageStreamEvent,
+    type MessageTokensCount as MessageTokensCount,
     type Metadata as Metadata,
     type Model as Model,
     type RawContentBlockDeltaEvent as RawContentBlockDeltaEvent,
@@ -368,6 +363,14 @@ export declare namespace Anthropic {
     type MessageCreateParamsNonStreaming as MessageCreateParamsNonStreaming,
     type MessageCreateParamsStreaming as MessageCreateParamsStreaming,
     type MessageStreamParams as MessageStreamParams,
+    type MessageCountTokensParams as MessageCountTokensParams,
+  };
+
+  export {
+    Models as Models,
+    type ModelInfo as ModelInfo,
+    ModelInfosPage as ModelInfosPage,
+    type ModelListParams as ModelListParams,
   };

   export {
@@ -375,14 +378,46 @@ export declare namespace Anthropic {
     type AnthropicBeta as AnthropicBeta,
     type BetaAPIError as BetaAPIError,
     type BetaAuthenticationError as BetaAuthenticationError,
+    type BetaBillingError as BetaBillingError,
     type BetaError as BetaError,
     type BetaErrorResponse as BetaErrorResponse,
+    type BetaGatewayTimeoutError as BetaGatewayTimeoutError,
     type BetaInvalidRequestError as BetaInvalidRequestError,
     type BetaNotFoundError as BetaNotFoundError,
     type BetaOverloadedError as BetaOverloadedError,
     type BetaPermissionError as BetaPermissionError,
     type BetaRateLimitError as BetaRateLimitError,
   };
+
+  export type APIErrorObject = API.APIErrorObject;
+  export type AuthenticationError = API.AuthenticationError;
+  export type BillingError = API.BillingError;
+  export type ErrorObject = API.ErrorObject;
+  export type ErrorResponse = API.ErrorResponse;
+  export type GatewayTimeoutError = API.GatewayTimeoutError;
+  export type InvalidRequestError = API.InvalidRequestError;
+  export type NotFoundError = API.NotFoundError;
+  export type OverloadedError = API.OverloadedError;
+  export type PermissionError = API.PermissionError;
+  export type RateLimitError = API.RateLimitError;
 }
+
+export const { HUMAN_PROMPT, AI_PROMPT } = Anthropic;
+
+export { toFile, fileFromPath } from './uploads';
+export {
+  AnthropicError,
+  APIError,
+  APIConnectionError,
+  APIConnectionTimeoutError,
+  APIUserAbortError,
+  NotFoundError,
+  ConflictError,
+  RateLimitError,
+  BadRequestError,
+  AuthenticationError,
+  InternalServerError,
+  PermissionDeniedError,
+  UnprocessableEntityError,
+} from './error';

 export default Anthropic;
diff --git src/lib/PromptCachingBetaMessageStream.ts src/lib/PromptCachingBetaMessageStream.ts
deleted file mode 100644
index 0e742cba..00000000
--- src/lib/PromptCachingBetaMessageStream.ts
+++ /dev/null
@@ -1,579 +0,0 @@
-import * as Core from '@anthropic-ai/sdk/core';
-import { AnthropicError, APIUserAbortError } from '@anthropic-ai/sdk/error';
-import { type ContentBlock, type TextBlock } from '@anthropic-ai/sdk/resources/messages';
-import {

  • Messages,
  • type PromptCachingBetaMessage,
  • type RawPromptCachingBetaMessageStreamEvent,
  • type PromptCachingBetaMessageParam,
  • type MessageCreateParams,
  • type MessageCreateParamsBase,
    -} from '@anthropic-ai/sdk/resources/beta/prompt-caching/messages';
    -import { type ReadableStream } from '@anthropic-ai/sdk/_shims/index';
    -import { Stream } from '@anthropic-ai/sdk/streaming';
    -import { partialParse } from '../_vendor/partial-json-parser/parser';

-export interface PromptCachingBetaMessageStreamEvents {

  • connect: () => void;
  • streamEvent: (event: RawPromptCachingBetaMessageStreamEvent, snapshot: PromptCachingBetaMessage) => void;
  • text: (textDelta: string, textSnapshot: string) => void;
  • inputJson: (partialJson: string, jsonSnapshot: unknown) => void;
  • message: (message: PromptCachingBetaMessage) => void;
  • contentBlock: (content: ContentBlock) => void;
  • finalPromptCachingBetaMessage: (message: PromptCachingBetaMessage) => void;
  • error: (error: AnthropicError) => void;
  • abort: (error: APIUserAbortError) => void;
  • end: () => void;
    -}

-type PromptCachingBetaMessageStreamEventListeners =

  • {
  • listener: PromptCachingBetaMessageStreamEvents[Event];
  • once?: boolean;
  • }[];

-const JSON_BUF_PROPERTY = '__json_buf';

-export class PromptCachingBetaMessageStream implements AsyncIterable {

  • messages: PromptCachingBetaMessageParam[] = [];
  • receivedMessages: PromptCachingBetaMessage[] = [];
  • #currentMessageSnapshot: PromptCachingBetaMessage | undefined;
  • controller: AbortController = new AbortController();
  • #connectedPromise: Promise;
  • #resolveConnectedPromise: () => void = () => {};
  • #rejectConnectedPromise: (error: AnthropicError) => void = () => {};
  • #endPromise: Promise;
  • #resolveEndPromise: () => void = () => {};
  • #rejectEndPromise: (error: AnthropicError) => void = () => {};
  • #listeners: {
  • [Event in keyof PromptCachingBetaMessageStreamEvents]?: PromptCachingBetaMessageStreamEventListeners;
  • } = {};
  • #ended = false;
  • #errored = false;
  • #aborted = false;
  • #catchingPromiseCreated = false;
  • constructor() {
  • this.#connectedPromise = new Promise((resolve, reject) => {
  •  this.#resolveConnectedPromise = resolve;
    
  •  this.#rejectConnectedPromise = reject;
    
  • });
  • this.#endPromise = new Promise((resolve, reject) => {
  •  this.#resolveEndPromise = resolve;
    
  •  this.#rejectEndPromise = reject;
    
  • });
  • // Don't let these promises cause unhandled rejection errors.
  • // we will manually cause an unhandled rejection error later
  • // if the user hasn't registered any error listener or called
  • // any promise-returning method.
  • this.#connectedPromise.catch(() => {});
  • this.#endPromise.catch(() => {});
  • }
  • /**
    • Intended for use on the frontend, consuming a stream produced with
    • .toReadableStream() on the backend.
    • Note that messages sent to the model do not appear in .on('message')
    • in this context.
  • */
  • static fromReadableStream(stream: ReadableStream): PromptCachingBetaMessageStream {
  • const runner = new PromptCachingBetaMessageStream();
  • runner._run(() => runner._fromReadableStream(stream));
  • return runner;
  • }
  • static createMessage(
  • messages: Messages,
  • params: MessageCreateParamsBase,
  • options?: Core.RequestOptions,
  • ): PromptCachingBetaMessageStream {
  • const runner = new PromptCachingBetaMessageStream();
  • for (const message of params.messages) {
  •  runner._addPromptCachingBetaMessageParam(message);
    
  • }
  • runner._run(() =>
  •  runner._createPromptCachingBetaMessage(
    
  •    messages,
    
  •    { ...params, stream: true },
    
  •    { ...options, headers: { ...options?.headers, 'X-Stainless-Helper-Method': 'stream' } },
    
  •  ),
    
  • );
  • return runner;
  • }
  • protected _run(executor: () => Promise) {
  • executor().then(() => {
  •  this._emitFinal();
    
  •  this._emit('end');
    
  • }, this.#handleError);
  • }
  • protected _addPromptCachingBetaMessageParam(message: PromptCachingBetaMessageParam) {
  • this.messages.push(message);
  • }
  • protected _addPromptCachingBetaMessage(message: PromptCachingBetaMessage, emit = true) {
  • this.receivedMessages.push(message);
  • if (emit) {
  •  this._emit('message', message);
    
  • }
  • }
  • protected async _createPromptCachingBetaMessage(
  • messages: Messages,
  • params: MessageCreateParams,
  • options?: Core.RequestOptions,
  • ): Promise {
  • const signal = options?.signal;
  • if (signal) {
  •  if (signal.aborted) this.controller.abort();
    
  •  signal.addEventListener('abort', () => this.controller.abort());
    
  • }
  • this.#beginRequest();
  • const stream = await messages.create(
  •  { ...params, stream: true },
    
  •  { ...options, signal: this.controller.signal },
    
  • );
  • this._connected();
  • for await (const event of stream) {
  •  this.#addStreamEvent(event);
    
  • }
  • if (stream.controller.signal?.aborted) {
  •  throw new APIUserAbortError();
    
  • }
  • this.#endRequest();
  • }
  • protected _connected() {
  • if (this.ended) return;
  • this.#resolveConnectedPromise();
  • this._emit('connect');
  • }
  • get ended(): boolean {
  • return this.#ended;
  • }
  • get errored(): boolean {
  • return this.#errored;
  • }
  • get aborted(): boolean {
  • return this.#aborted;
  • }
  • abort() {
  • this.controller.abort();
  • }
  • /**
    • Adds the listener function to the end of the listeners array for the event.
    • No checks are made to see if the listener has already been added. Multiple calls passing
    • the same combination of event and listener will result in the listener being added, and
    • called, multiple times.
    • @returns this PromptCachingBetaMessageStream, so that calls can be chained
  • */
  • on(
  • event: Event,
  • listener: PromptCachingBetaMessageStreamEvents[Event],
  • ): this {
  • const listeners: PromptCachingBetaMessageStreamEventListeners =
  •  this.#listeners[event] || (this.#listeners[event] = []);
    
  • listeners.push({ listener });
  • return this;
  • }
  • /**
    • Removes the specified listener from the listener array for the event.
    • off() will remove, at most, one instance of a listener from the listener array. If any single
    • listener has been added multiple times to the listener array for the specified event, then
    • off() must be called multiple times to remove each instance.
    • @returns this PromptCachingBetaMessageStream, so that calls can be chained
  • */
  • off(
  • event: Event,
  • listener: PromptCachingBetaMessageStreamEvents[Event],
  • ): this {
  • const listeners = this.#listeners[event];
  • if (!listeners) return this;
  • const index = listeners.findIndex((l) => l.listener === listener);
  • if (index >= 0) listeners.splice(index, 1);
  • return this;
  • }
  • /**
    • Adds a one-time listener function for the event. The next time the event is triggered,
    • this listener is removed and then invoked.
    • @returns this PromptCachingBetaMessageStream, so that calls can be chained
  • */
  • once(
  • event: Event,
  • listener: PromptCachingBetaMessageStreamEvents[Event],
  • ): this {
  • const listeners: PromptCachingBetaMessageStreamEventListeners =
  •  this.#listeners[event] || (this.#listeners[event] = []);
    
  • listeners.push({ listener, once: true });
  • return this;
  • }
  • /**
    • This is similar to .once(), but returns a Promise that resolves the next time
    • the event is triggered, instead of calling a listener callback.
    • @returns a Promise that resolves the next time given event is triggered,
    • or rejects if an error is emitted. (If you request the 'error' event,
    • returns a promise that resolves with the error).
    • Example:
    • const message = await stream.emitted('message') // rejects if the stream errors
  • */
  • emitted(
  • event: Event,
  • ): Promise<
  • Parameters<PromptCachingBetaMessageStreamEvents[Event]> extends [infer Param] ? Param
  • : Parameters<PromptCachingBetaMessageStreamEvents[Event]> extends [] ? void
  • : Parameters<PromptCachingBetaMessageStreamEvents[Event]>
  • {

  • return new Promise((resolve, reject) => {
  •  this.#catchingPromiseCreated = true;
    
  •  if (event !== 'error') this.once('error', reject);
    
  •  this.once(event, resolve as any);
    
  • });
  • }
  • async done(): Promise {
  • this.#catchingPromiseCreated = true;
  • await this.#endPromise;
  • }
  • get currentMessage(): PromptCachingBetaMessage | undefined {
  • return this.#currentMessageSnapshot;
  • }
  • #getFinalMessage(): PromptCachingBetaMessage {
  • if (this.receivedMessages.length === 0) {
  •  throw new AnthropicError(
    
  •    'stream ended without producing a PromptCachingBetaMessage with role=assistant',
    
  •  );
    
  • }
  • return this.receivedMessages.at(-1)!;
  • }
  • /**
    • @returns a promise that resolves with the final assistant PromptCachingBetaMessage response,
    • or rejects if an error occurred or the stream ended prematurely without producing a PromptCachingBetaMessage.
  • */
  • async finalMessage(): Promise {
  • await this.done();
  • return this.#getFinalMessage();
  • }
  • #getFinalText(): string {
  • if (this.receivedMessages.length === 0) {
  •  throw new AnthropicError(
    
  •    'stream ended without producing a PromptCachingBetaMessage with role=assistant',
    
  •  );
    
  • }
  • const textBlocks = this.receivedMessages
  •  .at(-1)!
    
  •  .content.filter((block): block is TextBlock => block.type === 'text')
    
  •  .map((block) => block.text);
    
  • if (textBlocks.length === 0) {
  •  throw new AnthropicError('stream ended without producing a content block with type=text');
    
  • }
  • return textBlocks.join(' ');
  • }
  • /**
    • @returns a promise that resolves with the final assistant PromptCachingBetaMessage's text response, concatenated
    • together if there are more than one text blocks.
    • Rejects if an error occurred or the stream ended prematurely without producing a PromptCachingBetaMessage.
  • */
  • async finalText(): Promise {
  • await this.done();
  • return this.#getFinalText();
  • }
  • #handleError = (error: unknown) => {
  • this.#errored = true;
  • if (error instanceof Error && error.name === 'AbortError') {
  •  error = new APIUserAbortError();
    
  • }
  • if (error instanceof APIUserAbortError) {
  •  this.#aborted = true;
    
  •  return this._emit('abort', error);
    
  • }
  • if (error instanceof AnthropicError) {
  •  return this._emit('error', error);
    
  • }
  • if (error instanceof Error) {
  •  const anthropicError: AnthropicError = new AnthropicError(error.message);
    
  •  // @ts-ignore
    
  •  anthropicError.cause = error;
    
  •  return this._emit('error', anthropicError);
    
  • }
  • return this._emit('error', new AnthropicError(String(error)));
  • };
  • protected _emit(
  • event: Event,
  • ...args: Parameters<PromptCachingBetaMessageStreamEvents[Event]>
  • ) {
  • // make sure we don't emit any PromptCachingBetaMessageStreamEvents after end
  • if (this.#ended) return;
  • if (event === 'end') {
  •  this.#ended = true;
    
  •  this.#resolveEndPromise();
    
  • }
  • const listeners: PromptCachingBetaMessageStreamEventListeners | undefined = this.#listeners[event];
  • if (listeners) {
  •  this.#listeners[event] = listeners.filter((l) => !l.once) as any;
    
  •  listeners.forEach(({ listener }: any) => listener(...args));
    
  • }
  • if (event === 'abort') {
  •  const error = args[0] as APIUserAbortError;
    
  •  if (!this.#catchingPromiseCreated && !listeners?.length) {
    
  •    Promise.reject(error);
    
  •  }
    
  •  this.#rejectConnectedPromise(error);
    
  •  this.#rejectEndPromise(error);
    
  •  this._emit('end');
    
  •  return;
    
  • }
  • if (event === 'error') {
  •  // NOTE: _emit('error', error) should only be called from #handleError().
    
  •  const error = args[0] as AnthropicError;
    
  •  if (!this.#catchingPromiseCreated && !listeners?.length) {
    
  •    // Trigger an unhandled rejection if the user hasn't registered any error handlers.
    
  •    // If you are seeing stack traces here, make sure to handle errors via either:
    
  •    // - runner.on('error', () => ...)
    
  •    // - await runner.done()
    
  •    // - await runner.final...()
    
  •    // - etc.
    
  •    Promise.reject(error);
    
  •  }
    
  •  this.#rejectConnectedPromise(error);
    
  •  this.#rejectEndPromise(error);
    
  •  this._emit('end');
    
  • }
  • }
  • protected _emitFinal() {
  • const finalPromptCachingBetaMessage = this.receivedMessages.at(-1);
  • if (finalPromptCachingBetaMessage) {
  •  this._emit('finalPromptCachingBetaMessage', this.#getFinalMessage());
    
  • }
  • }
  • #beginRequest() {
  • if (this.ended) return;
  • this.#currentMessageSnapshot = undefined;
  • }
  • #addStreamEvent(event: RawPromptCachingBetaMessageStreamEvent) {
  • if (this.ended) return;
  • const messageSnapshot = this.#accumulateMessage(event);
  • this._emit('streamEvent', event, messageSnapshot);
  • switch (event.type) {
  •  case 'content_block_delta': {
    
  •    const content = messageSnapshot.content.at(-1)!;
    
-        if (event.delta.type === 'text_delta' && content.type === 'text') {
-          this._emit('text', event.delta.text, content.text || '');
-        } else if (event.delta.type === 'input_json_delta' && content.type === 'tool_use') {
-          if (content.input) {
-            this._emit('inputJson', event.delta.partial_json, content.input);
-          }
-        }
-        break;
-      }
-      case 'message_stop': {
-        this._addPromptCachingBetaMessageParam(messageSnapshot);
-        this._addPromptCachingBetaMessage(messageSnapshot, true);
-        break;
-      }
-      case 'content_block_stop': {
-        this._emit('contentBlock', messageSnapshot.content.at(-1)!);
-        break;
-      }
-      case 'message_start': {
-        this.#currentMessageSnapshot = messageSnapshot;
-        break;
-      }
-      case 'content_block_start':
-      case 'message_delta':
-        break;
-    }
-  }
-
-  #endRequest(): PromptCachingBetaMessage {
-    if (this.ended) {
-      throw new AnthropicError(`stream has ended, this shouldn't happen`);
-    }
-    const snapshot = this.#currentMessageSnapshot;
-    if (!snapshot) {
-      throw new AnthropicError(`request ended without sending any chunks`);
-    }
-    this.#currentMessageSnapshot = undefined;
-    return snapshot;
-  }
-
-  protected async _fromReadableStream(
-    readableStream: ReadableStream,
-    options?: Core.RequestOptions,
-  ): Promise<void> {
-    const signal = options?.signal;
-    if (signal) {
-      if (signal.aborted) this.controller.abort();
-      signal.addEventListener('abort', () => this.controller.abort());
-    }
-    this.#beginRequest();
-    this._connected();
-    const stream = Stream.fromReadableStream<RawPromptCachingBetaMessageStreamEvent>(
-      readableStream,
-      this.controller,
-    );
-    for await (const event of stream) {
-      this.#addStreamEvent(event);
-    }
-    if (stream.controller.signal?.aborted) {
-      throw new APIUserAbortError();
-    }
-    this.#endRequest();
-  }
-
-  /**
-   * Mutates this.#currentPromptCachingBetaMessage with the current event. Handling the accumulation of multiple messages
-   * will be needed to be handled by the caller, this method will throw if you try to accumulate for multiple
-   * messages.
-   */
-  #accumulateMessage(event: RawPromptCachingBetaMessageStreamEvent): PromptCachingBetaMessage {
-    let snapshot = this.#currentMessageSnapshot;
-
-    if (event.type === 'message_start') {
-      if (snapshot) {
-        throw new AnthropicError(`Unexpected event order, got ${event.type} before receiving "message_stop"`);
-      }
-      return event.message;
-    }
-
-    if (!snapshot) {
-      throw new AnthropicError(`Unexpected event order, got ${event.type} before "message_start"`);
-    }
-
-    switch (event.type) {
-      case 'message_stop':
-        return snapshot;
-      case 'message_delta':
-        snapshot.stop_reason = event.delta.stop_reason;
-        snapshot.stop_sequence = event.delta.stop_sequence;
-        snapshot.usage.output_tokens = event.usage.output_tokens;
-        return snapshot;
-      case 'content_block_start':
-        snapshot.content.push(event.content_block);
-        return snapshot;
-      case 'content_block_delta': {
-        const snapshotContent = snapshot.content.at(event.index);
-        if (snapshotContent?.type === 'text' && event.delta.type === 'text_delta') {
-          snapshotContent.text += event.delta.text;
-        } else if (snapshotContent?.type === 'tool_use' && event.delta.type === 'input_json_delta') {
-          // we need to keep track of the raw JSON string as well so that we can
-          // re-parse it for each delta, for now we just store it as an untyped
-          // non-enumerable property on the snapshot
-          let jsonBuf = (snapshotContent as any)[JSON_BUF_PROPERTY] || '';
-          jsonBuf += event.delta.partial_json;
-
-          Object.defineProperty(snapshotContent, JSON_BUF_PROPERTY, {
-            value: jsonBuf,
-            enumerable: false,
-            writable: true,
-          });
-
-          if (jsonBuf) {
-            snapshotContent.input = partialParse(jsonBuf);
-          }
-        }
-        return snapshot;
-      }
-      case 'content_block_stop':
-        return snapshot;
-    }
-  }
-
-  [Symbol.asyncIterator](): AsyncIterator<RawPromptCachingBetaMessageStreamEvent> {
-    const pushQueue: RawPromptCachingBetaMessageStreamEvent[] = [];
-    const readQueue: {
-      resolve: (chunk: RawPromptCachingBetaMessageStreamEvent | undefined) => void;
-      reject: (error: unknown) => void;
-    }[] = [];
-    let done = false;
-
-    this.on('streamEvent', (event) => {
-      const reader = readQueue.shift();
-      if (reader) {
-        reader.resolve(event);
-      } else {
-        pushQueue.push(event);
-      }
-    });
-
-    this.on('end', () => {
-      done = true;
-      for (const reader of readQueue) {
-        reader.resolve(undefined);
-      }
-      readQueue.length = 0;
-    });
-
-    this.on('abort', (err) => {
-      done = true;
-      for (const reader of readQueue) {
-        reader.reject(err);
-      }
-      readQueue.length = 0;
-    });
-
-    this.on('error', (err) => {
-      done = true;
-      for (const reader of readQueue) {
-        reader.reject(err);
-      }
-      readQueue.length = 0;
-    });
-
-    return {
-      next: async (): Promise<IteratorResult<RawPromptCachingBetaMessageStreamEvent>> => {
-        if (!pushQueue.length) {
-          if (done) {
-            return { value: undefined, done: true };
-          }
-          return new Promise<RawPromptCachingBetaMessageStreamEvent | undefined>((resolve, reject) =>
-            readQueue.push({ resolve, reject }),
-          ).then((chunk) => (chunk ? { value: chunk, done: false } : { value: undefined, done: true }));
-        }
-        const chunk = pushQueue.shift()!;
-        return { value: chunk, done: false };
-      },
-      return: async () => {
-        this.abort();
-        return { value: undefined, done: true };
-      },
-    };
-  }
-
-  toReadableStream(): ReadableStream {
-    const stream = new Stream(this[Symbol.asyncIterator].bind(this), this.controller);
-    return stream.toReadableStream();
-  }
-}
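The deleted stream helper above bridges an event emitter to an async iterator with two queues: events that arrive before a consumer asks for them wait in a push queue, and pending consumer promises wait in a read queue until the next event (or the end of the stream) resolves them. A minimal self-contained sketch of that pattern, with illustrative names rather than the SDK's:

```typescript
import { EventEmitter } from 'node:events';

// Bridge an EventEmitter to an AsyncIterator. Early events buffer in
// pushQueue; early consumers park a resolver in readQueue. 'end' resolves
// all parked consumers with undefined, which terminates the for-await loop.
function iterate<T>(emitter: EventEmitter): AsyncIterableIterator<T> {
  const pushQueue: T[] = [];
  const readQueue: { resolve: (chunk: T | undefined) => void }[] = [];
  let done = false;

  emitter.on('event', (event: T) => {
    const reader = readQueue.shift();
    if (reader) reader.resolve(event);
    else pushQueue.push(event);
  });
  emitter.on('end', () => {
    done = true;
    for (const reader of readQueue) reader.resolve(undefined);
    readQueue.length = 0;
  });

  return {
    [Symbol.asyncIterator]() {
      return this;
    },
    async next(): Promise<IteratorResult<T>> {
      if (pushQueue.length) return { value: pushQueue.shift()!, done: false };
      if (done) return { value: undefined as any, done: true };
      const chunk = await new Promise<T | undefined>((resolve) => readQueue.push({ resolve }));
      return chunk === undefined ? { value: undefined as any, done: true } : { value: chunk, done: false };
    },
  };
}

async function demo() {
  const emitter = new EventEmitter();
  const events: string[] = [];
  const consumer = (async () => {
    for await (const e of iterate<string>(emitter)) events.push(e);
  })();
  emitter.emit('event', 'message_start');
  emitter.emit('event', 'message_stop');
  emitter.emit('end');
  await consumer;
  console.log(events.join(',')); // message_start,message_stop
}

demo();
```

The SDK's real implementation additionally wires `abort` and `error` events to reject parked readers, as shown in the deleted code.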
diff --git src/resources/beta/beta.ts src/resources/beta/beta.ts
index ee3c6ca5..e29a187c 100644
--- src/resources/beta/beta.ts
+++ src/resources/beta/beta.ts
@@ -1,6 +1,8 @@
 // File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
 
 import { APIResource } from '../../resource';
+import * as ModelsAPI from './models';
+import { BetaModelInfo, BetaModelInfosPage, ModelListParams, Models } from './models';
 import * as MessagesAPI from './messages/messages';
 import {
   BetaBase64PDFBlock,
@@ -44,12 +46,10 @@ import {
   MessageCreateParamsStreaming,
   Messages,
 } from './messages/messages';
-import * as PromptCachingAPI from './prompt-caching/prompt-caching';
-import { PromptCaching } from './prompt-caching/prompt-caching';
 
 export class Beta extends APIResource {
+  models: ModelsAPI.Models = new ModelsAPI.Models(this._client);
   messages: MessagesAPI.Messages = new MessagesAPI.Messages(this._client);
-  promptCaching: PromptCachingAPI.PromptCaching = new PromptCachingAPI.PromptCaching(this._client);
 }
 
 export type AnthropicBeta =
@@ -72,12 +72,20 @@ export interface BetaAuthenticationError {
   type: 'authentication_error';
 }
 
+export interface BetaBillingError {
+  message: string;
+
+  type: 'billing_error';
+}
+
 export type BetaError =
   | BetaInvalidRequestError
   | BetaAuthenticationError
+  | BetaBillingError
   | BetaPermissionError
   | BetaNotFoundError
   | BetaRateLimitError
+  | BetaGatewayTimeoutError
   | BetaAPIError
   | BetaOverloadedError;
 
@@ -87,6 +95,12 @@ export interface BetaErrorResponse {
   type: 'error';
 }
 
+export interface BetaGatewayTimeoutError {
+  message: string;
+
+  type: 'timeout_error';
+}
+
 export interface BetaInvalidRequestError {
   message: string;
 
@@ -117,16 +131,19 @@ export interface BetaRateLimitError {
   type: 'rate_limit_error';
 }
 
+Beta.Models = Models;
+Beta.BetaModelInfosPage = BetaModelInfosPage;
 Beta.Messages = Messages;
-Beta.PromptCaching = PromptCaching;
 
 export declare namespace Beta {
   export {
     type AnthropicBeta as AnthropicBeta,
     type BetaAPIError as BetaAPIError,
     type BetaAuthenticationError as BetaAuthenticationError,
+    type BetaBillingError as BetaBillingError,
     type BetaError as BetaError,
     type BetaErrorResponse as BetaErrorResponse,
+    type BetaGatewayTimeoutError as BetaGatewayTimeoutError,
     type BetaInvalidRequestError as BetaInvalidRequestError,
     type BetaNotFoundError as BetaNotFoundError,
     type BetaOverloadedError as BetaOverloadedError,
@@ -134,6 +151,13 @@ export declare namespace Beta {
     type BetaRateLimitError as BetaRateLimitError,
   };
 
+  export {
+    Models as Models,
+    type BetaModelInfo as BetaModelInfo,
+    BetaModelInfosPage as BetaModelInfosPage,
+    type ModelListParams as ModelListParams,
+  };
+
   export {
     Messages as Messages,
     type BetaBase64PDFBlock as BetaBase64PDFBlock,
@@ -176,6 +200,4 @@ export declare namespace Beta {
     type MessageCreateParamsStreaming as MessageCreateParamsStreaming,
     type MessageCountTokensParams as MessageCountTokensParams,
   };
-
-  export { PromptCaching as PromptCaching };
 }
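With the `promptCaching` beta resource removed from `Beta` above, cached prompts go through the regular Messages API; the `cache_control: { type: 'ephemeral' }` marker on a content block (the `PromptCachingBetaCacheControlEphemeral` shape deleted later in this diff, now generally available) is what opts a block into caching. A hedged sketch of the request shape as a plain object, with no SDK types and a placeholder model id:

```typescript
// Shape of a messages.create() request that caches a large system prompt.
// The model id and document text are placeholders, not from this diff.
const params = {
  model: 'claude-3-5-sonnet-latest', // placeholder model id
  max_tokens: 1024,
  system: [
    {
      type: 'text' as const,
      text: 'You are a helpful assistant. <large reference document here>',
      cache_control: { type: 'ephemeral' as const }, // opts this block into caching
    },
  ],
  messages: [{ role: 'user' as const, content: 'Summarize the document.' }],
};

// With the real SDK this would be passed to client.messages.create(params).
console.log(params.system[0].cache_control.type); // ephemeral
```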
diff --git src/resources/beta/index.ts src/resources/beta/index.ts
index 6e2b0a89..a68f2327 100644
--- src/resources/beta/index.ts
+++ src/resources/beta/index.ts
@@ -5,14 +5,17 @@ export {
   type AnthropicBeta,
   type BetaAPIError,
   type BetaAuthenticationError,
+  type BetaBillingError,
   type BetaError,
   type BetaErrorResponse,
+  type BetaGatewayTimeoutError,
   type BetaInvalidRequestError,
   type BetaNotFoundError,
   type BetaOverloadedError,
   type BetaPermissionError,
   type BetaRateLimitError,
 } from './beta';
+export { BetaModelInfosPage, Models, type BetaModelInfo, type ModelListParams } from './models';
 export {
   Messages,
   type BetaBase64PDFBlock,
@@ -55,4 +58,3 @@ export {
   type MessageCreateParamsStreaming,
   type MessageCountTokensParams,
 } from './messages/index';
-export { PromptCaching } from './prompt-caching/index';
diff --git src/resources/beta/messages/messages.ts src/resources/beta/messages/messages.ts
index 3f39ca3a..186a6c36 100644
--- src/resources/beta/messages/messages.ts
+++ src/resources/beta/messages/messages.ts
@@ -4,8 +4,8 @@ import { APIResource } from '../../../resource';
 import { APIPromise } from '../../../core';
 import * as Core from '../../../core';
 import * as MessagesMessagesAPI from './messages';
-import * as MessagesAPI from '../../messages';
 import * as BetaAPI from '../beta';
+import * as MessagesAPI from '../../messages/messages';
 import * as BatchesAPI from './batches';
 import {
   BatchCancelParams,
diff --git a/src/resources/beta/models.ts b/src/resources/beta/models.ts
new file mode 100644
index 00000000..48036273
--- /dev/null
+++ b/src/resources/beta/models.ts
@@ -0,0 +1,78 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+import { APIResource } from '../../resource';
+import { isRequestOptions } from '../../core';
+import * as Core from '../../core';
+import { Page, type PageParams } from '../../pagination';
+
+export class Models extends APIResource {
+  /**
+   * Get a specific model.
+   *
+   * The Models API response can be used to determine information about a specific
+   * model or resolve a model alias to a model ID.
+   */
+  retrieve(modelId: string, options?: Core.RequestOptions): Core.APIPromise<BetaModelInfo> {
+    return this._client.get(`/v1/models/${modelId}?beta=true`, options);
+  }
+
+  /**
+   * List available models.
+   *
+   * The Models API response can be used to determine which models are available for
+   * use in the API. More recently released models are listed first.
+   */
+  list(
+    query?: ModelListParams,
+    options?: Core.RequestOptions,
+  ): Core.PagePromise<BetaModelInfosPage, BetaModelInfo>;
+  list(options?: Core.RequestOptions): Core.PagePromise<BetaModelInfosPage, BetaModelInfo>;
+  list(
+    query: ModelListParams | Core.RequestOptions = {},
+    options?: Core.RequestOptions,
+  ): Core.PagePromise<BetaModelInfosPage, BetaModelInfo> {
+    if (isRequestOptions(query)) {
+      return this.list({}, query);
+    }
+    return this._client.getAPIList('/v1/models?beta=true', BetaModelInfosPage, { query, ...options });
+  }
+}
+
+export class BetaModelInfosPage extends Page<BetaModelInfo> {}
+
+export interface BetaModelInfo {
+  /**
+   * Unique model identifier.
+   */
+  id: string;
+
+  /**
+   * RFC 3339 datetime string representing the time at which the model was released.
+   * May be set to an epoch value if the release date is unknown.
+   */
+  created_at: string;
+
+  /**
+   * A human-readable name for the model.
+   */
+  display_name: string;
+
+  /**
+   * Object type.
+   *
+   * For Models, this is always `"model"`.
+   */
+  type: 'model';
+}
+
+export interface ModelListParams extends PageParams {}
+
+Models.BetaModelInfosPage = BetaModelInfosPage;
+
+export declare namespace Models {
+  export {
+    type BetaModelInfo as BetaModelInfo,
+    BetaModelInfosPage as BetaModelInfosPage,
+    type ModelListParams as ModelListParams,
+  };
+}
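The new `beta.models.list()` above returns a cursor-paginated `Page`, and `ModelListParams extends PageParams`, so listing accepts the SDK's standard cursor parameters. A self-contained sketch of that cursor scheme over an in-memory list (the `after_id`/`limit`/`has_more` names are assumed from the SDK's pagination helpers, not defined in this diff):

```typescript
interface ModelInfo {
  id: string;
  display_name: string;
}

// Minimal cursor pagination over a fixed model list, mimicking the
// after_id/limit scheme used by the SDK's Page helper (names assumed).
function listModels(
  all: ModelInfo[],
  params: { after_id?: string; limit?: number } = {},
): { data: ModelInfo[]; has_more: boolean } {
  const start = params.after_id ? all.findIndex((m) => m.id === params.after_id) + 1 : 0;
  const limit = params.limit ?? 20;
  const data = all.slice(start, start + limit);
  return { data, has_more: start + limit < all.length };
}

const models: ModelInfo[] = [
  { id: 'model-a', display_name: 'Model A' },
  { id: 'model-b', display_name: 'Model B' },
  { id: 'model-c', display_name: 'Model C' },
];

// Walk all pages, carrying the last id forward as the cursor.
let page = listModels(models, { limit: 2 });
const seen = [...page.data.map((m) => m.id)];
while (page.has_more) {
  page = listModels(models, { after_id: seen[seen.length - 1], limit: 2 });
  seen.push(...page.data.map((m) => m.id));
}
console.log(seen.join(',')); // model-a,model-b,model-c
```

With the real SDK the same walk collapses to `for await (const model of client.beta.models.list())`, since `PagePromise` handles the cursor internally.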
diff --git src/resources/beta/prompt-caching/index.ts src/resources/beta/prompt-caching/index.ts
deleted file mode 100644
index 78b4e747..00000000
--- src/resources/beta/prompt-caching/index.ts
+++ /dev/null
@@ -1,20 +0,0 @@
-// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
-
-export {
-  Messages,
-  type PromptCachingBetaCacheControlEphemeral,
-  type PromptCachingBetaImageBlockParam,
-  type PromptCachingBetaMessage,
-  type PromptCachingBetaMessageParam,
-  type PromptCachingBetaTextBlockParam,
-  type PromptCachingBetaTool,
-  type PromptCachingBetaToolResultBlockParam,
-  type PromptCachingBetaToolUseBlockParam,
-  type PromptCachingBetaUsage,
-  type RawPromptCachingBetaMessageStartEvent,
-  type RawPromptCachingBetaMessageStreamEvent,
-  type MessageCreateParams,
-  type MessageCreateParamsNonStreaming,
-  type MessageCreateParamsStreaming,
-} from './messages';
-export { PromptCaching } from './prompt-caching';
diff --git src/resources/beta/prompt-caching/messages.ts src/resources/beta/prompt-caching/messages.ts
deleted file mode 100644
index 4ae7449b..00000000
--- src/resources/beta/prompt-caching/messages.ts
+++ /dev/null
@@ -1,642 +0,0 @@
-// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
-
-import { APIResource } from '../../../resource';
-import { APIPromise } from '../../../core';
-import * as Core from '../../../core';
-import * as PromptCachingMessagesAPI from './messages';
-import * as MessagesAPI from '../../messages';
-import * as BetaAPI from '../beta';
-import { Stream } from '../../../streaming';
-import { PromptCachingBetaMessageStream } from '../../../lib/PromptCachingBetaMessageStream';
-
-export class Messages extends APIResource {
-  /**
-   * Send a structured list of input messages with text and/or image content, and the
-   * model will generate the next message in the conversation.
-   *
-   * The Messages API can be used for either single queries or stateless multi-turn
-   * conversations.
-   */
-  create(
-    params: MessageCreateParamsNonStreaming,
-    options?: Core.RequestOptions,
-  ): APIPromise<PromptCachingBetaMessage>;
-  create(
-    params: MessageCreateParamsStreaming,
-    options?: Core.RequestOptions,
-  ): APIPromise<Stream<RawPromptCachingBetaMessageStreamEvent>>;
-  create(
-    params: MessageCreateParamsBase,
-    options?: Core.RequestOptions,
-  ): APIPromise<Stream<RawPromptCachingBetaMessageStreamEvent> | PromptCachingBetaMessage>;
-  create(
-    params: MessageCreateParams,
-    options?: Core.RequestOptions,
-  ): APIPromise<PromptCachingBetaMessage> | APIPromise<Stream<RawPromptCachingBetaMessageStreamEvent>> {
-    const { betas, ...body } = params;
-    return this._client.post('/v1/messages?beta=prompt_caching', {
-      body,
-      timeout: (this._client as any)._options.timeout ?? 600000,
-      ...options,
-      headers: {
-        'anthropic-beta': [...(betas ?? []), 'prompt-caching-2024-07-31'].toString(),
-        ...options?.headers,
-      },
-      stream: params.stream ?? false,
-    }) as APIPromise<PromptCachingBetaMessage> | APIPromise<Stream<RawPromptCachingBetaMessageStreamEvent>>;
-  }
-
-  /**
-   * Create a Message stream
-   */
-  stream(body: MessageStreamParams, options?: Core.RequestOptions): PromptCachingBetaMessageStream {
-    return PromptCachingBetaMessageStream.createMessage(this, body, options);
-  }
-}
-
-export type MessageStreamParams = MessageCreateParamsBase;
-
-export interface PromptCachingBetaCacheControlEphemeral {
-  type: 'ephemeral';
-}
-
-export interface PromptCachingBetaImageBlockParam {
-  source: PromptCachingBetaImageBlockParam.Source;
-
-  type: 'image';
-
-  cache_control?: PromptCachingBetaCacheControlEphemeral | null;
-}
-
-export namespace PromptCachingBetaImageBlockParam {
-  export interface Source {
-    data: string;
-
-    media_type: 'image/jpeg' | 'image/png' | 'image/gif' | 'image/webp';
-
-    type: 'base64';
-  }
-}
-
-export interface PromptCachingBetaMessage {
-  /**
-   * Unique object identifier.
-   *
-   * The format and length of IDs may change over time.
-   */
-  id: string;
-
-  /**
-   * Content generated by the model.
-   *
-   * This is an array of content blocks, each of which has a `type` that determines
-   * its shape.
-   *
-   * Example:
-   *
-   * ```json
-   * [{ "type": "text", "text": "Hi, I'm Claude." }]
-   * ```
-   *
-   * If the request input `messages` ended with an `assistant` turn, then the
-   * response `content` will continue directly from that last turn. You can use this
-   * to constrain the model's output.
-   *
-   * For example, if the input `messages` were:
-   *
-   * ```json
-   * [
-   *   {
-   *     "role": "user",
-   *     "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
-   *   },
-   *   { "role": "assistant", "content": "The best answer is (" }
-   * ]
-   * ```
-   *
-   * Then the response `content` might be:
-   *
-   * ```json
-   * [{ "type": "text", "text": "B)" }]
-   * ```
-   */
-  content: Array<MessagesAPI.ContentBlock>;
-
-  /**
-   * The model that will complete your prompt.\n\nSee
-   * [models](https://docs.anthropic.com/en/docs/models-overview) for additional
-   * details and options.
-   */
-  model: MessagesAPI.Model;
-
-  /**
-   * Conversational role of the generated message.
-   *
-   * This will always be `"assistant"`.
-   */
-  role: 'assistant';
-
-  /**
-   * The reason that we stopped.
-   *
-   * This may be one the following values:
-   *
-   * - `"end_turn"`: the model reached a natural stopping point
-   * - `"max_tokens"`: we exceeded the requested `max_tokens` or the model's maximum
-   * - `"stop_sequence"`: one of your provided custom `stop_sequences` was generated
-   * - `"tool_use"`: the model invoked one or more tools
-   *
-   * In non-streaming mode this value is always non-null. In streaming mode, it is
-   * null in the `message_start` event and non-null otherwise.
-   */
-  stop_reason: 'end_turn' | 'max_tokens' | 'stop_sequence' | 'tool_use' | null;
-
-  /**
-   * Which custom stop sequence was generated, if any.
-   *
-   * This value will be a non-null string if one of your custom stop sequences was
-   * generated.
-   */
-  stop_sequence: string | null;
-
-  /**
-   * Object type.
-   *
-   * For Messages, this is always `"message"`.
-   */
-  type: 'message';
-
-  /**
-   * Billing and rate-limit usage.
-   *
-   * Anthropic's API bills and rate-limits by token counts, as tokens represent the
-   * underlying cost to our systems.
-   *
-   * Under the hood, the API transforms requests into a format suitable for the
-   * model. The model's output then goes through a parsing stage before becoming an
-   * API response. As a result, the token counts in `usage` will not match one-to-one
-   * with the exact visible content of an API request or response.
-   *
-   * For example, `output_tokens` will be non-zero, even for an empty string response
-   * from Claude.
-   */
-  usage: PromptCachingBetaUsage;
-}
-
-export interface PromptCachingBetaMessageParam {
-  content:
-    | string
-    | Array<
-        | PromptCachingBetaTextBlockParam
-        | PromptCachingBetaImageBlockParam
-        | PromptCachingBetaToolUseBlockParam
-        | PromptCachingBetaToolResultBlockParam
-      >;
-
-  role: 'user' | 'assistant';
-}
-
-export interface PromptCachingBetaTextBlockParam {
-  text: string;
-
-  type: 'text';
-
-  cache_control?: PromptCachingBetaCacheControlEphemeral | null;
-}
-
-export interface PromptCachingBetaTool {
-  /**
-   * This defines the shape of the `input` that your tool accepts and that the model
-   * will produce.
-   */
-  input_schema: PromptCachingBetaTool.InputSchema;
-
-  /**
-   * Name of the tool.
-   *
-   * This is how the tool will be called by the model and in `tool_use` blocks.
-   */
-  name: string;
-
-  cache_control?: PromptCachingBetaCacheControlEphemeral | null;
-
-  /**
-   * Description of what this tool does.
-   *
-   * Tool descriptions should be as detailed as possible. The more information that
-   * the model has about what the tool is and how to use it, the better it will
-   * perform. You can use natural language descriptions to reinforce important
-   * aspects of the tool input JSON schema.
-   */
-  description?: string;
-}
-
-export namespace PromptCachingBetaTool {
-  /**
-   * This defines the shape of the `input` that your tool accepts and that the model
-   * will produce.
-   */
-  export interface InputSchema {
-    type: 'object';
-
-    properties?: unknown | null;
-  }
-}
-
-export interface PromptCachingBetaToolResultBlockParam {
-  tool_use_id: string;
-
-  type: 'tool_result';
-
-  cache_control?: PromptCachingBetaCacheControlEphemeral | null;
-
-  content?: string | Array<PromptCachingBetaTextBlockParam | PromptCachingBetaImageBlockParam>;
-
-  is_error?: boolean;
-}
-
-export interface PromptCachingBetaToolUseBlockParam {
-  id: string;
-
-  input: unknown;
-
-  name: string;
-
-  type: 'tool_use';
-
-  cache_control?: PromptCachingBetaCacheControlEphemeral | null;
-}
-
-export interface PromptCachingBetaUsage {
-  /**
-   * The number of input tokens used to create the cache entry.
-   */
-  cache_creation_input_tokens: number | null;
-
-  /**
-   * The number of input tokens read from the cache.
-   */
-  cache_read_input_tokens: number | null;
-
-  /**
-   * The number of input tokens which were used.
-   */
-  input_tokens: number;
-
-  /**
-   * The number of output tokens which were used.
-   */
-  output_tokens: number;
-}
-
-export interface RawPromptCachingBetaMessageStartEvent {
-  message: PromptCachingBetaMessage;
-
-  type: 'message_start';
-}
-
-export type RawPromptCachingBetaMessageStreamEvent =
-  | RawPromptCachingBetaMessageStartEvent
-  | MessagesAPI.RawMessageDeltaEvent
-  | MessagesAPI.RawMessageStopEvent
-  | MessagesAPI.RawContentBlockStartEvent
-  | MessagesAPI.RawContentBlockDeltaEvent
-  | MessagesAPI.RawContentBlockStopEvent;
-
-export type MessageCreateParams = MessageCreateParamsNonStreaming | MessageCreateParamsStreaming;
-
-export interface MessageCreateParamsBase {
-  /**
-   * Body param: The maximum number of tokens to generate before stopping.
-   *
-   * Note that our models may stop before reaching this maximum. This parameter
-   * only specifies the absolute maximum number of tokens to generate.
-   *
-   * Different models have different maximum values for this parameter. See
-   * [models](https://docs.anthropic.com/en/docs/models-overview) for details.
-   */
-  max_tokens: number;
-
-  /**
-   * Body param: Input messages.
-   *
-   * Our models are trained to operate on alternating `user` and `assistant`
-   * conversational turns. When creating a new `Message`, you specify the prior
-   * conversational turns with the `messages` parameter, and the model then generates
-   * the next `Message` in the conversation. Consecutive `user` or `assistant` turns
-   * in your request will be combined into a single turn.
-   *
-   * Each input message must be an object with a `role` and `content`. You can
-   * specify a single `user`-role message, or you can include multiple `user` and
-   * `assistant` messages.
-   *
-   * If the final message uses the `assistant` role, the response content will
-   * continue immediately from the content in that message. This can be used to
-   * constrain part of the model's response.
-   *
-   * Example with a single `user` message:
-   *
-   * ```json
-   * [{ "role": "user", "content": "Hello, Claude" }]
-   * ```
-   *
-   * Example with multiple conversational turns:
-   *
-   * ```json
-   * [
-   *   { "role": "user", "content": "Hello there." },
-   *   { "role": "assistant", "content": "Hi, I'm Claude. How can I help you?" },
-   *   { "role": "user", "content": "Can you explain LLMs in plain English?" }
-   * ]
-   * ```
-   *
-   * Example with a partially-filled response from Claude:
-   *
-   * ```json
-   * [
-   *   {
-   *     "role": "user",
-   *     "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
-   *   },
-   *   { "role": "assistant", "content": "The best answer is (" }
-   * ]
-   * ```
-   *
-   * Each input message `content` may be either a single `string` or an array of
-   * content blocks, where each block has a specific `type`. Using a `string` for
-   * `content` is shorthand for an array of one content block of type `"text"`. The
-   * following input messages are equivalent:
-   *
-   * ```json
-   * { "role": "user", "content": "Hello, Claude" }
-   * ```
-   *
-   * ```json
-   * { "role": "user", "content": [{ "type": "text", "text": "Hello, Claude" }] }
-   * ```
-   *
-   * Starting with Claude 3 models, you can also send image content blocks:
-   *
-   * ```json
-   * {
-   *   "role": "user",
-   *   "content": [
-   *     {
-   *       "type": "image",
-   *       "source": {
-   *         "type": "base64",
-   *         "media_type": "image/jpeg",
-   *         "data": "/9j/4AAQSkZJRg..."
-   *       }
-   *     },
-   *     { "type": "text", "text": "What is in this image?" }
-   *   ]
-   * }
-   * ```
-   *
-   * We currently support the `base64` source type for images, and the `image/jpeg`,
-   * `image/png`, `image/gif`, and `image/webp` media types.
-   *
-   * See [examples](https://docs.anthropic.com/en/api/messages-examples) for more
-   * input examples.
-   *
-   * Note that if you want to include a
-   * [system prompt](https://docs.anthropic.com/en/docs/system-prompts), you can use
-   * the top-level `system` parameter - there is no `"system"` role for input
-   * messages in the Messages API.
-   */
-  messages: Array<PromptCachingBetaMessageParam>;
-
-  /**
-   * Body param: The model that will complete your prompt.\n\nSee
-   * [models](https://docs.anthropic.com/en/docs/models-overview) for additional
-   * details and options.
-   */
-  model: MessagesAPI.Model;
-
-  /**
-   * Body param: An object describing metadata about the request.
-   */
-  metadata?: MessagesAPI.Metadata;
-
-  /**
-   * Body param: Custom text sequences that will cause the model to stop generating.
-   *
-   * Our models will normally stop when they have naturally completed their turn,
-   * which will result in a response `stop_reason` of `"end_turn"`.
-   *
-   * If you want the model to stop generating when it encounters custom strings of
-   * text, you can use the `stop_sequences` parameter. If the model encounters one of
-   * the custom sequences, the response `stop_reason` value will be `"stop_sequence"`
-   * and the response `stop_sequence` value will contain the matched stop sequence.
-   */
-  stop_sequences?: Array<string>;
-
-  /**
-   * Body param: Whether to incrementally stream the response using server-sent
-   * events.
-   *
-   * See [streaming](https://docs.anthropic.com/en/api/messages-streaming) for
-   * details.
-   */
-  stream?: boolean;
-
-  /**
-   * Body param: System prompt.
-   *
-   * A system prompt is a way of providing context and instructions to Claude, such
-   * as specifying a particular goal or role. See our
-   * [guide to system prompts](https://docs.anthropic.com/en/docs/system-prompts).
-   */
-  system?: string | Array<PromptCachingBetaTextBlockParam>;
-
-  /**
-   * Body param: Amount of randomness injected into the response.
-   *
-   * Defaults to `1.0`. Ranges from `0.0` to `1.0`. Use `temperature` closer to `0.0`
-   * for analytical / multiple choice, and closer to `1.0` for creative and
-   * generative tasks.
-   *
-   * Note that even with `temperature` of `0.0`, the results will not be fully
-   * deterministic.
-   */
-  temperature?: number;
-
-  /**
-   * Body param: How the model should use the provided tools. The model can use a
-   * specific tool, any available tool, or decide by itself.
-   */
-  tool_choice?: MessagesAPI.ToolChoice;
-
-  /**
-   * Body param: Definitions of tools that the model may use.
-   *
-   * If you include `tools` in your API request, the model may return `tool_use`
-   * content blocks that represent the model's use of those tools. You can then run
-   * those tools using the tool input generated by the model and then optionally
-   * return results back to the model using `tool_result` content blocks.
-   *
-   * Each tool definition includes:
-   *
-   * - `name`: Name of the tool.
-   * - `description`: Optional, but strongly-recommended description of the tool.
-   * - `input_schema`: [JSON schema](https://json-schema.org/) for the tool `input`
-   *   shape that the model will produce in `tool_use` output content blocks.
-   *
-   * For example, if you defined `tools` as:
-   *
-   * ```json
-   * [
-   *   {
-   *     "name": "get_stock_price",
-   *     "description": "Get the current stock price for a given ticker symbol.",
-   *     "input_schema": {
-   *       "type": "object",
-   *       "properties": {
-   *         "ticker": {
-   *           "type": "string",
-   *           "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
-   *         }
-   *       },
-   *       "required": ["ticker"]
-   *     }
-   *   }
-   * ]
-   * ```
-   *
-   * And then asked the model "What's the S&P 500 at today?", the model might produce
-   * `tool_use` content blocks in the response like this:
-   *
-   * ```json
-   * [
-   *   {
-   *     "type": "tool_use",
-   *     "id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
-   *     "name": "get_stock_price",
-   *     "input": { "ticker": "^GSPC" }
-   *   }
-   * ]
-   * ```
-   *
-   * You might then run your `get_stock_price` tool with `{"ticker": "^GSPC"}` as an
-   * input, and return the following back to the model in a subsequent `user`
-   * message:
-   *
-   * ```json
-   * [
-   *   {
-   *     "type": "tool_result",
-   *     "tool_use_id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
-   *     "content": "259.75 USD"
-   *   }
-   * ]
-   * ```
-   *
-   * Tools can be used for workflows that include running client-side tools and
-   * functions, or more generally whenever you want the model to produce a particular
-   * JSON structure of output.
-   *
-   * See our [guide](https://docs.anthropic.com/en/docs/tool-use) for more details.
-   */
-  tools?: Array<PromptCachingBetaTool>;
-
-  /**
-   * Body param: Only sample from the top K options for each subsequent token.
-   *
-   * Used to remove "long tail" low probability responses.
-   *
-   * Recommended for advanced use cases only. You usually only need to use
-   * `temperature`.
-   */
-  top_k?: number;
-
-  /**
-   * Body param: Use nucleus sampling.
-   *
-   * In nucleus sampling, we compute the cumulative distribution over all the options
-   * for each subsequent token in decreasing probability order and cut it off once it
-   * reaches a particular probability specified by `top_p`. You should either alter
-   * `temperature` or `top_p`, but not both.
-   *
-   * Recommended for advanced use cases only. You usually only need to use
-   * `temperature`.
-   */
-  top_p?: number;
-
-  /**
-   * Header param: Optional header to specify the beta version(s) you want to use.
-   */
-  betas?: Array<BetaAPI.AnthropicBeta>;
-}
-
-export namespace MessageCreateParams {
-  /**
-   * @deprecated use Anthropic.Messages.Metadata instead
-   */
-  export type Metadata = MessagesAPI.Metadata;
-
-  /**
-   * @deprecated use Anthropic.Messages.ToolChoiceAuto instead
-   */
-  export type ToolChoiceAuto = MessagesAPI.ToolChoiceAuto;
-
-  /**
-   * @deprecated use Anthropic.Messages.ToolChoiceAny instead
-   */
-  export type ToolChoiceAny = MessagesAPI.ToolChoiceAny;
-
-  /**
-   * @deprecated use Anthropic.Messages.ToolChoiceTool instead
-   */
-  export type ToolChoiceTool = MessagesAPI.ToolChoiceTool;
-
-  export type MessageCreateParamsNonStreaming = PromptCachingMessagesAPI.MessageCreateParamsNonStreaming;
-  export type MessageCreateParamsStreaming = PromptCachingMessagesAPI.MessageCreateParamsStreaming;
-}
-
-export interface MessageCreateParamsNonStreaming extends MessageCreateParamsBase {
-  /**
-   * Body param: Whether to incrementally stream the response using server-sent
-   * events.
-   *
-   * See [streaming](https://docs.anthropic.com/en/api/messages-streaming) for
-   * details.
-   */
-  stream?: false;
-}
-
-export interface MessageCreateParamsStreaming extends MessageCreateParamsBase {
-  /**
-   * Body param: Whether to incrementally stream the response using server-sent
-   * events.
-   *
-   * See [streaming](https://docs.anthropic.com/en/api/messages-streaming) for
-   * details.
-   */
-  stream: true;
-}
-
-export declare namespace Messages {
-  export {
-    type PromptCachingBetaCacheControlEphemeral as PromptCachingBetaCacheControlEphemeral,
-    type PromptCachingBetaImageBlockParam as PromptCachingBetaImageBlockParam,
-    type PromptCachingBetaMessage as PromptCachingBetaMessage,
-    type PromptCachingBetaMessageParam as PromptCachingBetaMessageParam,
-    type PromptCachingBetaTextBlockParam as PromptCachingBetaTextBlockParam,
-    type PromptCachingBetaTool as PromptCachingBetaTool,
-    type PromptCachingBetaToolResultBlockParam as PromptCachingBetaToolResultBlockParam,
-    type PromptCachingBetaToolUseBlockParam as PromptCachingBetaToolUseBlockParam,
-    type PromptCachingBetaUsage as PromptCachingBetaUsage,
-    type RawPromptCachingBetaMessageStartEvent as RawPromptCachingBetaMessageStartEvent,
-    type RawPromptCachingBetaMessageStreamEvent as RawPromptCachingBetaMessageStreamEvent,
-    type MessageCreateParams as MessageCreateParams,
-    type MessageCreateParamsNonStreaming as MessageCreateParamsNonStreaming,
-    type MessageCreateParamsStreaming as MessageCreateParamsStreaming,
-  };
-}
diff --git src/resources/beta/prompt-caching/prompt-caching.ts src/resources/beta/prompt-caching/prompt-caching.ts
deleted file mode 100644
index 421f8621..00000000
--- src/resources/beta/prompt-caching/prompt-caching.ts
+++ /dev/null
@@ -1,47 +0,0 @@
-// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
-
-import { APIResource } from '../../../resource';
-import * as MessagesAPI from './messages';
-import {
-  MessageCreateParams,
-  MessageCreateParamsNonStreaming,
-  MessageCreateParamsStreaming,
-  Messages,
-  PromptCachingBetaCacheControlEphemeral,
-  PromptCachingBetaImageBlockParam,
-  PromptCachingBetaMessage,
-  PromptCachingBetaMessageParam,
-  PromptCachingBetaTextBlockParam,
-  PromptCachingBetaTool,
-  PromptCachingBetaToolResultBlockParam,
-  PromptCachingBetaToolUseBlockParam,
-  PromptCachingBetaUsage,
-  RawPromptCachingBetaMessageStartEvent,
-  RawPromptCachingBetaMessageStreamEvent,
-} from './messages';
-
-export class PromptCaching extends APIResource {
-  messages: MessagesAPI.Messages = new MessagesAPI.Messages(this._client);
-}
-
-PromptCaching.Messages = Messages;
-
-export declare namespace PromptCaching {
-  export {
-    Messages as Messages,
-    type PromptCachingBetaCacheControlEphemeral as PromptCachingBetaCacheControlEphemeral,
-    type PromptCachingBetaImageBlockParam as PromptCachingBetaImageBlockParam,
-    type PromptCachingBetaMessage as PromptCachingBetaMessage,
-    type PromptCachingBetaMessageParam as PromptCachingBetaMessageParam,
-    type PromptCachingBetaTextBlockParam as PromptCachingBetaTextBlockParam,
-    type PromptCachingBetaTool as PromptCachingBetaTool,
-    type PromptCachingBetaToolResultBlockParam as PromptCachingBetaToolResultBlockParam,
-    type PromptCachingBetaToolUseBlockParam as PromptCachingBetaToolUseBlockParam,
-    type PromptCachingBetaUsage as PromptCachingBetaUsage,
-    type RawPromptCachingBetaMessageStartEvent as RawPromptCachingBetaMessageStartEvent,
-    type RawPromptCachingBetaMessageStreamEvent as RawPromptCachingBetaMessageStreamEvent,
-    type MessageCreateParams as MessageCreateParams,
-    type MessageCreateParamsNonStreaming as MessageCreateParamsNonStreaming,
-    type MessageCreateParamsStreaming as MessageCreateParamsStreaming,
-  };
-}
    diff --git src/resources/completions.ts src/resources/completions.ts
    index a2ef4d98..2260681d 100644
    --- src/resources/completions.ts
    +++ src/resources/completions.ts
    @@ -4,7 +4,7 @@ import { APIResource } from '../resource';
    import { APIPromise } from '../core';
    import * as Core from '../core';
    import * as CompletionsAPI from './completions';
    -import * as MessagesAPI from './messages';
    +import * as MessagesAPI from './messages/messages';
    import { Stream } from '../streaming';

export class Completions extends APIResource {
diff --git src/resources/index.ts src/resources/index.ts
index 59b714ff..23366973 100644
--- src/resources/index.ts
+++ src/resources/index.ts
@@ -1,12 +1,15 @@
// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

+export * from './shared';
export {
Beta,
type AnthropicBeta,
type BetaAPIError,
type BetaAuthenticationError,

  • type BetaBillingError,
    type BetaError,
    type BetaErrorResponse,
  • type BetaGatewayTimeoutError,
    type BetaInvalidRequestError,
    type BetaNotFoundError,
    type BetaOverloadedError,
    @@ -22,10 +25,14 @@ export {
    } from './completions';
    export {
    Messages,
  • type Base64PDFSource,
  • type CacheControlEphemeral,
    type ContentBlock,
    type ContentBlockDeltaEvent,
  • type ContentBlockParam,
    type ContentBlockStartEvent,
    type ContentBlockStopEvent,
  • type DocumentBlockParam,
    type ImageBlockParam,
    type InputJsonDelta,
    type InputJSONDelta,
    @@ -37,6 +44,7 @@ export {
    type MessageStopEvent,
    type MessageStreamEvent,
    type MessageStreamParams,
  • type MessageTokensCount,
    type Metadata,
    type Model,
    type RawContentBlockDeltaEvent,
    @@ -61,4 +69,6 @@ export {
    type MessageCreateParams,
    type MessageCreateParamsNonStreaming,
    type MessageCreateParamsStreaming,
    -} from './messages';
  • type MessageCountTokensParams,
    +} from './messages/messages';
    +export { ModelInfosPage, Models, type ModelInfo, type ModelListParams } from './models';
    diff --git a/src/resources/messages/batches.ts b/src/resources/messages/batches.ts
    new file mode 100644
    index 00000000..b4fd45e8
    --- /dev/null
    +++ src/resources/messages/batches.ts
    @@ -0,0 +1,298 @@
    +// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

+import { APIResource } from '../../resource';
+import { isRequestOptions } from '../../core';
+import * as Core from '../../core';
+import * as Shared from '../shared';
+import * as MessagesAPI from './messages';
+import { Page, type PageParams } from '../../pagination';
+import { JSONLDecoder } from '../../internal/decoders/jsonl';
+import { AnthropicError } from '../../error';
+
+export class Batches extends APIResource {

  • /**
    • Send a batch of Message creation requests.
    • The Message Batches API can be used to process multiple Messages API requests at
    • once. Once a Message Batch is created, it begins processing immediately. Batches
    • can take up to 24 hours to complete.
  • */
  • create(body: BatchCreateParams, options?: Core.RequestOptions): Core.APIPromise<MessageBatch> {
  • return this._client.post('/v1/messages/batches', { body, ...options });
  • }
  • /**
    • This endpoint is idempotent and can be used to poll for Message Batch
    • completion. To access the results of a Message Batch, make a request to the
    • results_url field in the response.
  • */
  • retrieve(messageBatchId: string, options?: Core.RequestOptions): Core.APIPromise<MessageBatch> {
  • return this._client.get(`/v1/messages/batches/${messageBatchId}`, options);
  • }
  • /**
    • List all Message Batches within a Workspace. Most recently created batches are
    • returned first.
  • */
  • list(
  • query?: BatchListParams,
  • options?: Core.RequestOptions,
  • ): Core.PagePromise<MessageBatchesPage, MessageBatch>;
  • list(options?: Core.RequestOptions): Core.PagePromise<MessageBatchesPage, MessageBatch>;
  • list(
  • query: BatchListParams | Core.RequestOptions = {},
  • options?: Core.RequestOptions,
  • ): Core.PagePromise<MessageBatchesPage, MessageBatch> {
  • if (isRequestOptions(query)) {
  •  return this.list({}, query);
    
  • }
  • return this._client.getAPIList('/v1/messages/batches', MessageBatchesPage, { query, ...options });
  • }
  • /**
    • Batches may be canceled any time before processing ends. Once cancellation is
    • initiated, the batch enters a canceling state, at which time the system may
    • complete any in-progress, non-interruptible requests before finalizing
    • cancellation.
    • The number of canceled requests is specified in request_counts. To determine
    • which requests were canceled, check the individual results within the batch.
    • Note that cancellation may not result in any canceled requests if they were
    • non-interruptible.
  • */
  • cancel(messageBatchId: string, options?: Core.RequestOptions): Core.APIPromise<MessageBatch> {
  • return this._client.post(`/v1/messages/batches/${messageBatchId}/cancel`, options);
  • }
  • /**
    • Streams the results of a Message Batch as a .jsonl file.
    • Each line in the file is a JSON object containing the result of a single request
    • in the Message Batch. Results are not guaranteed to be in the same order as
    • requests. Use the custom_id field to match results to requests.
  • */
  • async results(
  • messageBatchId: string,
  • options?: Core.RequestOptions,
  • ): Promise<JSONLDecoder<MessageBatchIndividualResponse>> {
  • const batch = await this.retrieve(messageBatchId);
  • if (!batch.results_url) {
  •  throw new AnthropicError(
    
  •    `No batch \`results_url\`; Has it finished processing? ${batch.processing_status} - ${batch.id}`,
    
  •  );
    
  • }
  • return this._client
  •  .get(batch.results_url, { ...options, __binaryResponse: true })
    
  •  ._thenUnwrap((_, props) => JSONLDecoder.fromResponse(props.response, props.controller));
    
  • }
    +}
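Since `retrieve` is documented above as idempotent and suitable for polling, the create-then-poll flow can be sketched as a small standalone helper. Everything here is illustrative: `BatchLike` is a local mirror of the relevant `MessageBatch` fields, and `waitForBatch` is a hypothetical helper rather than an SDK export.

```typescript
type BatchStatus = 'in_progress' | 'canceling' | 'ended';

// Local mirror of the MessageBatch fields the polling loop needs.
interface BatchLike {
  id: string;
  processing_status: BatchStatus;
  results_url: string | null;
}

// Poll until the batch ends. The retrieve function is injected so the
// sketch stays decoupled from the SDK client.
async function waitForBatch(
  retrieve: (id: string) => Promise<BatchLike>,
  id: string,
  intervalMs = 1000,
): Promise<BatchLike> {
  for (;;) {
    const batch = await retrieve(id);
    if (batch.processing_status === 'ended') return batch;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

In real code the injected function would be `(id) => client.messages.batches.retrieve(id)` with a much longer interval, since the comments above note that batches can take up to 24 hours to complete.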

+export class MessageBatchesPage extends Page<MessageBatch> {}
+
+export interface MessageBatch {

  • /**
    • Unique object identifier.
    • The format and length of IDs may change over time.
  • */
  • id: string;
  • /**
    • RFC 3339 datetime string representing the time at which the Message Batch was
    • archived and its results became unavailable.
  • */
  • archived_at: string | null;
  • /**
    • RFC 3339 datetime string representing the time at which cancellation was
    • initiated for the Message Batch. Specified only if cancellation was initiated.
  • */
  • cancel_initiated_at: string | null;
  • /**
    • RFC 3339 datetime string representing the time at which the Message Batch was
    • created.
  • */
  • created_at: string;
  • /**
    • RFC 3339 datetime string representing the time at which processing for the
    • Message Batch ended. Specified only once processing ends.
    • Processing ends when every request in a Message Batch has either succeeded,
    • errored, canceled, or expired.
  • */
  • ended_at: string | null;
  • /**
    • RFC 3339 datetime string representing the time at which the Message Batch will
    • expire and end processing, which is 24 hours after creation.
  • */
  • expires_at: string;
  • /**
    • Processing status of the Message Batch.
  • */
  • processing_status: 'in_progress' | 'canceling' | 'ended';
  • /**
    • Tallies requests within the Message Batch, categorized by their status.
    • Requests start as processing and move to one of the other statuses only once
    • processing of the entire batch ends. The sum of all values always matches the
    • total number of requests in the batch.
  • */
  • request_counts: MessageBatchRequestCounts;
  • /**
    • URL to a .jsonl file containing the results of the Message Batch requests.
    • Specified only once processing ends.
    • Results in the file are not guaranteed to be in the same order as requests. Use
    • the custom_id field to match results to requests.
  • */
  • results_url: string | null;
  • /**
    • Object type.
    • For Message Batches, this is always "message_batch".
  • */
  • type: 'message_batch';
    +}

+export interface MessageBatchCanceledResult {

  • type: 'canceled';
    +}

+export interface MessageBatchErroredResult {

  • error: Shared.ErrorResponse;
  • type: 'errored';
    +}

+export interface MessageBatchExpiredResult {

  • type: 'expired';
    +}

+export interface MessageBatchIndividualResponse {

  • /**
    • Developer-provided ID created for each request in a Message Batch. Useful for
    • matching results to requests, as results may be given out of request order.
    • Must be unique for each request within the Message Batch.
  • */
  • custom_id: string;
  • /**
    • Processing result for this request.
    • Contains a Message output if processing was successful, an error response if
    • processing failed, or the reason why processing was not attempted, such as
    • cancellation or expiration.
  • */
  • result: MessageBatchResult;
    +}

+export interface MessageBatchRequestCounts {

  • /**
    • Number of requests in the Message Batch that have been canceled.
    • This is zero until processing of the entire Message Batch has ended.
  • */
  • canceled: number;
  • /**
    • Number of requests in the Message Batch that encountered an error.
    • This is zero until processing of the entire Message Batch has ended.
  • */
  • errored: number;
  • /**
    • Number of requests in the Message Batch that have expired.
    • This is zero until processing of the entire Message Batch has ended.
  • */
  • expired: number;
  • /**
    • Number of requests in the Message Batch that are processing.
  • */
  • processing: number;
  • /**
    • Number of requests in the Message Batch that have completed successfully.
    • This is zero until processing of the entire Message Batch has ended.
  • */
  • succeeded: number;
    +}

+/**

    • Processing result for this request.
    • Contains a Message output if processing was successful, an error response if
    • processing failed, or the reason why processing was not attempted, such as
    • cancellation or expiration.
  • */
    +export type MessageBatchResult =
  • | MessageBatchSucceededResult
  • | MessageBatchErroredResult
  • | MessageBatchCanceledResult
  • | MessageBatchExpiredResult;
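The `MessageBatchResult` union above is discriminated on `type`, so results decoded from the `.jsonl` stream can be tallied without any casting. The local `BatchResult` type is a trimmed-down mirror of the SDK union, used only to keep this sketch self-contained; `tallyResults` is an illustrative helper.

```typescript
// Trimmed local mirror of the MessageBatchResult discriminated union.
type BatchResult =
  | { type: 'succeeded'; message: { id: string } }
  | { type: 'errored'; error: { type: 'error' } }
  | { type: 'canceled' }
  | { type: 'expired' };

// Tally results by discriminant, mirroring the shape of request_counts.
function tallyResults(results: BatchResult[]): Record<BatchResult['type'], number> {
  const counts: Record<BatchResult['type'], number> = {
    succeeded: 0,
    errored: 0,
    canceled: 0,
    expired: 0,
  };
  for (const r of results) counts[r.type] += 1;
  return counts;
}
```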

+export interface MessageBatchSucceededResult {

  • message: MessagesAPI.Message;
  • type: 'succeeded';
    +}

+export interface BatchCreateParams {

  • /**
    • List of requests for prompt completion. Each is an individual request to create
    • a Message.
  • */
  • requests: Array<BatchCreateParams.Request>;
    +}

+export namespace BatchCreateParams {

  • export interface Request {
  • /**
  • * Developer-provided ID created for each request in a Message Batch. Useful for
    
  • * matching results to requests, as results may be given out of request order.
    
  • *
    
  • * Must be unique for each request within the Message Batch.
    
  • */
    
  • custom_id: string;
  • /**
  • * Messages API creation parameters for the individual request.
    
  • *
    
  • * See the [Messages API reference](/en/api/messages) for full documentation on
    
  • * available parameters.
    
  • */
    
  • params: MessagesAPI.MessageCreateParamsNonStreaming;
  • }
    +}

+export interface BatchListParams extends PageParams {}
+
+Batches.MessageBatchesPage = MessageBatchesPage;
+
+export declare namespace Batches {

  • export {
  • type MessageBatch as MessageBatch,
  • type MessageBatchCanceledResult as MessageBatchCanceledResult,
  • type MessageBatchErroredResult as MessageBatchErroredResult,
  • type MessageBatchExpiredResult as MessageBatchExpiredResult,
  • type MessageBatchIndividualResponse as MessageBatchIndividualResponse,
  • type MessageBatchRequestCounts as MessageBatchRequestCounts,
  • type MessageBatchResult as MessageBatchResult,
  • type MessageBatchSucceededResult as MessageBatchSucceededResult,
  • MessageBatchesPage as MessageBatchesPage,
  • type BatchCreateParams as BatchCreateParams,
  • type BatchListParams as BatchListParams,
  • };
    +}
    diff --git a/src/resources/messages/index.ts b/src/resources/messages/index.ts
    new file mode 100644
    index 00000000..10308d2a
    --- /dev/null
    +++ src/resources/messages/index.ts
    @@ -0,0 +1,63 @@
    +// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

+export {

  • MessageBatchesPage,
  • Batches,
  • type MessageBatch,
  • type MessageBatchCanceledResult,
  • type MessageBatchErroredResult,
  • type MessageBatchExpiredResult,
  • type MessageBatchIndividualResponse,
  • type MessageBatchRequestCounts,
  • type MessageBatchResult,
  • type MessageBatchSucceededResult,
  • type BatchCreateParams,
  • type BatchListParams,
    +} from './batches';
    +export {
  • Messages,
  • type Base64PDFSource,
  • type CacheControlEphemeral,
  • type ContentBlock,
  • type ContentBlockDeltaEvent,
  • type ContentBlockParam,
  • type ContentBlockStartEvent,
  • type ContentBlockStopEvent,
  • type DocumentBlockParam,
  • type ImageBlockParam,
  • type InputJSONDelta,
  • type Message,
  • type MessageDeltaEvent,
  • type MessageDeltaUsage,
  • type MessageParam,
  • type MessageStartEvent,
  • type MessageStopEvent,
  • type MessageStreamEvent,
  • type MessageTokensCount,
  • type Metadata,
  • type Model,
  • type RawContentBlockDeltaEvent,
  • type RawContentBlockStartEvent,
  • type RawContentBlockStopEvent,
  • type RawMessageDeltaEvent,
  • type RawMessageStartEvent,
  • type RawMessageStopEvent,
  • type RawMessageStreamEvent,
  • type TextBlock,
  • type TextBlockParam,
  • type TextDelta,
  • type Tool,
  • type ToolChoice,
  • type ToolChoiceAny,
  • type ToolChoiceAuto,
  • type ToolChoiceTool,
  • type ToolResultBlockParam,
  • type ToolUseBlock,
  • type ToolUseBlockParam,
  • type Usage,
  • type MessageCreateParams,
  • type MessageCreateParamsBase,
  • type MessageCreateParamsNonStreaming,
  • type MessageCreateParamsStreaming,
  • type MessageCountTokensParams,
    +} from './messages';
    diff --git src/resources/messages.ts src/resources/messages/messages.ts
    similarity index 71%
    rename from src/resources/messages.ts
    rename to src/resources/messages/messages.ts
    index a13c43f4..a1affbf5 100644
    --- src/resources/messages.ts
    +++ src/resources/messages/messages.ts
    @@ -1,15 +1,32 @@
    // File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

-import { APIResource } from '../resource';
-import { APIPromise } from '../core';
-import * as Core from '../core';
+import { APIResource } from '../../resource';
+import { APIPromise } from '../../core';
+import * as Core from '../../core';
import * as MessagesAPI from './messages';
-import { Stream } from '../streaming';
-import { MessageStream } from '../lib/MessageStream';

-export { MessageStream } from '../lib/MessageStream';
+import * as BatchesAPI from './batches';
+import {

  • BatchCreateParams,
  • BatchListParams,
  • Batches,
  • MessageBatch,
  • MessageBatchCanceledResult,
  • MessageBatchErroredResult,
  • MessageBatchExpiredResult,
  • MessageBatchIndividualResponse,
  • MessageBatchRequestCounts,
  • MessageBatchResult,
  • MessageBatchSucceededResult,
  • MessageBatchesPage,
    +} from './batches';
    +import { Stream } from '../../streaming';
    +import { MessageStream } from '../../lib/MessageStream';

+export { MessageStream } from '../../lib/MessageStream';

export class Messages extends APIResource {

  • batches: BatchesAPI.Batches = new BatchesAPI.Batches(this._client);
  • /**
    • Send a structured list of input messages with text and/or image content, and the
    • model will generate the next message in the conversation.
      @@ -51,20 +68,62 @@ export class Messages extends APIResource {
      stream(body: MessageStreamParams, options?: Core.RequestOptions): MessageStream {
      return MessageStream.createMessage(this, body, options);
      }
  • /**
    • Count the number of tokens in a Message.
    • The Token Count API can be used to count the number of tokens in a Message,
    • including tools, images, and documents, without creating it.
  • */
  • countTokens(
  • body: MessageCountTokensParams,
  • options?: Core.RequestOptions,
  • ): Core.APIPromise<MessageTokensCount> {
  • return this._client.post('/v1/messages/count_tokens', { body, ...options });
  • }
    +}
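The new `countTokens` method accepts the same message shape as message creation. As a sketch, a flat transcript can be packed into that params shape before calling the endpoint; `CountParams` and `toCountParams` are illustrative stand-ins here, not SDK exports.

```typescript
type Role = 'user' | 'assistant';

// Local mirror of the fields of MessageCountTokensParams used below.
interface CountParams {
  model: string;
  messages: { role: Role; content: string }[];
  system?: string;
}

// Hypothetical helper: pack a flat transcript into count_tokens-style params.
function toCountParams(model: string, turns: [Role, string][], system?: string): CountParams {
  const params: CountParams = {
    model,
    messages: turns.map(([role, content]) => ({ role, content })),
  };
  if (system !== undefined) params.system = system;
  return params;
}
```

With the real client this would feed `await client.messages.countTokens(...)`, which per the interface above resolves to a `MessageTokensCount` with a single `input_tokens` field.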

+export interface Base64PDFSource {

  • data: string;
  • media_type: 'application/pdf';
  • type: 'base64';
    +}

+export interface CacheControlEphemeral {

  • type: 'ephemeral';
    }

export type ContentBlock = TextBlock | ToolUseBlock;

export type ContentBlockDeltaEvent = RawContentBlockDeltaEvent;

+export type ContentBlockParam =

  • | TextBlockParam
  • | ImageBlockParam
  • | ToolUseBlockParam
  • | ToolResultBlockParam
  • | DocumentBlockParam;

export type ContentBlockStartEvent = RawContentBlockStartEvent;

export type ContentBlockStopEvent = RawContentBlockStopEvent;

+export interface DocumentBlockParam {

  • source: Base64PDFSource;
  • type: 'document';
  • cache_control?: CacheControlEphemeral | null;
    +}

export interface ImageBlockParam {
source: ImageBlockParam.Source;

type: 'image';
+

  • cache_control?: CacheControlEphemeral | null;
    }

export namespace ImageBlockParam {
@@ -200,7 +259,7 @@ export interface MessageDeltaUsage {
}

export interface MessageParam {

  • content: string | Array<TextBlockParam | ImageBlockParam | ToolUseBlockParam | ToolResultBlockParam>;
  • content: string | Array<ContentBlockParam>;

    role: 'user' | 'assistant';
    }
    @@ -211,6 +270,14 @@ export type MessageStopEvent = RawMessageStopEvent;

export type MessageStreamEvent = RawMessageStreamEvent;

+export interface MessageTokensCount {

  • /**
    • The total number of tokens across the provided list of messages, system prompt,
    • and tools.
  • */
  • input_tokens: number;
    +}

export interface Metadata {
/**
* An external identifier for the user who is associated with the request.
@@ -239,8 +306,7 @@ export type Model =
| 'claude-3-sonnet-20240229'
| 'claude-3-haiku-20240307'
| 'claude-2.1'

  • | 'claude-2.0'
  • | 'claude-instant-1.2';
  • | 'claude-2.0';

type DeprecatedModelsType = {
[K in Model]?: string;
@@ -334,6 +400,8 @@ export interface TextBlockParam {
text: string;

type: 'text';
+

  • cache_control?: CacheControlEphemeral | null;
    }

export interface TextDelta {
@@ -358,6 +426,8 @@ export interface Tool {
*/
name: string;

  • cache_control?: CacheControlEphemeral | null;
  • /**
    • Description of what this tool does.

@@ -445,6 +515,8 @@ export interface ToolResultBlockParam {

type: 'tool_result';

  • cache_control?: CacheControlEphemeral | null;

  • content?: string | Array<TextBlockParam | ImageBlockParam>;

    is_error?: boolean;
    @@ -468,9 +540,21 @@ export interface ToolUseBlockParam {
    name: string;

    type: 'tool_use';

  • cache_control?: CacheControlEphemeral | null;
    }

export interface Usage {

  • /**
    • The number of input tokens used to create the cache entry.
  • */
  • cache_creation_input_tokens: number | null;
  • /**
    • The number of input tokens read from the cache.
  • */
  • cache_read_input_tokens: number | null;
  • /**
    • The number of input tokens which were used.
      */
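With the two nullable cache counters added to `Usage`, computing a total input figure means coalescing them; this assumes, as in the prompt-caching beta, that `input_tokens` reports only uncached tokens. `UsageLike` and `totalInputTokens` are illustrative helpers, not SDK exports.

```typescript
// Local mirror of the input-side Usage fields.
interface UsageLike {
  input_tokens: number;
  cache_creation_input_tokens: number | null;
  cache_read_input_tokens: number | null;
}

// Sum uncached and cached input tokens; the cache_* fields are nullable,
// so responses without cache data still total correctly.
function totalInputTokens(u: UsageLike): number {
  return (
    u.input_tokens +
    (u.cache_creation_input_tokens ?? 0) +
    (u.cache_read_input_tokens ?? 0)
  );
}
```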
      @@ -790,12 +874,205 @@ export interface MessageCreateParamsStreaming extends MessageCreateParamsBase {

export type MessageStreamParams = MessageCreateParamsBase;

+export interface MessageCountTokensParams {

  • /**
    • Input messages.
    • Our models are trained to operate on alternating user and assistant
    • conversational turns. When creating a new Message, you specify the prior
    • conversational turns with the messages parameter, and the model then generates
    • the next Message in the conversation. Consecutive user or assistant turns
    • in your request will be combined into a single turn.
    • Each input message must be an object with a role and content. You can
    • specify a single user-role message, or you can include multiple user and
    • assistant messages.
    • If the final message uses the assistant role, the response content will
    • continue immediately from the content in that message. This can be used to
    • constrain part of the model's response.
    • Example with a single user message:
    • [{ "role": "user", "content": "Hello, Claude" }]
    • Example with multiple conversational turns:
    • [
    • { "role": "user", "content": "Hello there." },
    • { "role": "assistant", "content": "Hi, I'm Claude. How can I help you?" },
    • { "role": "user", "content": "Can you explain LLMs in plain English?" }
    • ]
    • Example with a partially-filled response from Claude:
    • [
    • {
    • "role": "user",
      
    • "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
      
    • },
    • { "role": "assistant", "content": "The best answer is (" }
    • ]
    • Each input message content may be either a single string or an array of
    • content blocks, where each block has a specific type. Using a string for
    • content is shorthand for an array of one content block of type "text". The
    • following input messages are equivalent:
    • { "role": "user", "content": "Hello, Claude" }
    • { "role": "user", "content": [{ "type": "text", "text": "Hello, Claude" }] }
    • Starting with Claude 3 models, you can also send image content blocks:
    • {
    • "role": "user",
    • "content": [
    • {
      
    •   "type": "image",
      
    •   "source": {
      
    •     "type": "base64",
      
    •     "media_type": "image/jpeg",
      
    •     "data": "/9j/4AAQSkZJRg..."
      
    •   }
      
    • },
      
    • { "type": "text", "text": "What is in this image?" }
      
    • ]
    • }
    • We currently support the base64 source type for images, and the image/jpeg,
    • image/png, image/gif, and image/webp media types.
    • See our guide for more input examples.
    • Note that if you want to include a system prompt, use
    • the top-level system parameter — there is no "system" role for input
    • messages in the Messages API.
  • */
  • messages: Array<MessageParam>;
  • /**
    • The model that will complete your prompt. See our models documentation for
    • details and options.
  • */
  • model: Model;
  • /**
    • System prompt.
    • A system prompt is a way of providing context and instructions to Claude, such
    • as specifying a particular goal or role. See our guide to system prompts.
  • */
  • system?: string | Array<TextBlockParam>;
  • /**
    • How the model should use the provided tools. The model can use a specific tool,
    • any available tool, or decide by itself.
  • */
  • tool_choice?: ToolChoice;
  • /**
    • Definitions of tools that the model may use.
    • If you include tools in your API request, the model may return tool_use
    • content blocks that represent the model's use of those tools. You can then run
    • those tools using the tool input generated by the model and then optionally
    • return results back to the model using tool_result content blocks.
    • Each tool definition includes:
      • name: Name of the tool.
      • description: Optional, but strongly-recommended description of the tool.
      • input_schema: JSON schema for the tool input shape that the model will produce in tool_use output content blocks.
    • For example, if you defined tools as:
    • [
    • {
    • "name": "get_stock_price",
      
    • "description": "Get the current stock price for a given ticker symbol.",
      
    • "input_schema": {
      
    •   "type": "object",
      
    •   "properties": {
      
    •     "ticker": {
      
    •       "type": "string",
      
    •       "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
      
    •     }
      
    •   },
      
    •   "required": ["ticker"]
      
    • }
      
    • }
    • ]
    • And then asked the model "What's the S&P 500 at today?", the model might produce
    • tool_use content blocks in the response like this:
    • [
    • {
    • "type": "tool_use",
      
    • "id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
      
    • "name": "get_stock_price",
      
    • "input": { "ticker": "^GSPC" }
      
    • }
    • ]
    • You might then run your get_stock_price tool with {"ticker": "^GSPC"} as an
    • input, and return the following back to the model in a subsequent user
    • message:
    • [
    • {
    • "type": "tool_result",
      
    • "tool_use_id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
      
    • "content": "259.75 USD"
      
    • }
    • ]
    • Tools can be used for workflows that include running client-side tools and
    • functions, or more generally whenever you want the model to produce a particular
    • JSON structure of output.
    • See our guide for more details.
  • */
  • tools?: Array<Tool>;
    +}

+Messages.Batches = Batches;
+Messages.MessageBatchesPage = MessageBatchesPage;
+
export declare namespace Messages {
export {

  • type Base64PDFSource as Base64PDFSource,
  • type CacheControlEphemeral as CacheControlEphemeral,
    type ContentBlock as ContentBlock,
    type ContentBlockDeltaEvent as ContentBlockDeltaEvent,
  • type ContentBlockParam as ContentBlockParam,
    type ContentBlockStartEvent as ContentBlockStartEvent,
    type ContentBlockStopEvent as ContentBlockStopEvent,
  • type DocumentBlockParam as DocumentBlockParam,
    type ImageBlockParam as ImageBlockParam,
    type InputJsonDelta as InputJsonDelta,
    type InputJSONDelta as InputJSONDelta,
    @@ -806,6 +1083,7 @@ export declare namespace Messages {
    type MessageStartEvent as MessageStartEvent,
    type MessageStopEvent as MessageStopEvent,
    type MessageStreamEvent as MessageStreamEvent,
  • type MessageTokensCount as MessageTokensCount,
    type Metadata as Metadata,
    type Model as Model,
    type RawContentBlockDeltaEvent as RawContentBlockDeltaEvent,
    @@ -831,5 +1109,21 @@ export declare namespace Messages {
    type MessageCreateParamsNonStreaming as MessageCreateParamsNonStreaming,
    type MessageCreateParamsStreaming as MessageCreateParamsStreaming,
    type MessageStreamParams as MessageStreamParams,
  • type MessageCountTokensParams as MessageCountTokensParams,
  • };
  • export {
  • Batches as Batches,
  • type MessageBatch as MessageBatch,
  • type MessageBatchCanceledResult as MessageBatchCanceledResult,
  • type MessageBatchErroredResult as MessageBatchErroredResult,
  • type MessageBatchExpiredResult as MessageBatchExpiredResult,
  • type MessageBatchIndividualResponse as MessageBatchIndividualResponse,
  • type MessageBatchRequestCounts as MessageBatchRequestCounts,
  • type MessageBatchResult as MessageBatchResult,
  • type MessageBatchSucceededResult as MessageBatchSucceededResult,
  • MessageBatchesPage as MessageBatchesPage,
  • type BatchCreateParams as BatchCreateParams,
  • type BatchListParams as BatchListParams,
    };
    }
    diff --git a/src/resources/models.ts b/src/resources/models.ts
    new file mode 100644
    index 00000000..50e80399
    --- /dev/null
    +++ src/resources/models.ts
    @@ -0,0 +1,75 @@
    +// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

+import { APIResource } from '../resource';
+import { isRequestOptions } from '../core';
+import * as Core from '../core';
+import { Page, type PageParams } from '../pagination';
+
+export class Models extends APIResource {

  • /**
    • Get a specific model.
    • The Models API response can be used to determine information about a specific
    • model or resolve a model alias to a model ID.
  • */
  • retrieve(modelId: string, options?: Core.RequestOptions): Core.APIPromise<ModelInfo> {
  • return this._client.get(`/v1/models/${modelId}`, options);
  • }
  • /**
    • List available models.
    • The Models API response can be used to determine which models are available for
    • use in the API. More recently released models are listed first.
  • */
  • list(query?: ModelListParams, options?: Core.RequestOptions): Core.PagePromise<ModelInfosPage, ModelInfo>;
  • list(options?: Core.RequestOptions): Core.PagePromise<ModelInfosPage, ModelInfo>;
  • list(
  • query: ModelListParams | Core.RequestOptions = {},
  • options?: Core.RequestOptions,
  • ): Core.PagePromise<ModelInfosPage, ModelInfo> {
  • if (isRequestOptions(query)) {
  •  return this.list({}, query);
    
  • }
  • return this._client.getAPIList('/v1/models', ModelInfosPage, { query, ...options });
  • }
    +}
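The doc comment above notes that more recently released models are listed first; re-sorting a locally cached page by `created_at` reproduces that order. `ModelInfoLike` mirrors the relevant `ModelInfo` fields, and `newestFirst` is an illustrative helper only.

```typescript
// Local mirror of the ModelInfo fields used for ordering.
interface ModelInfoLike {
  id: string;
  created_at: string; // RFC 3339 datetime string
  display_name: string;
}

// Return a copy sorted newest-first, matching the API's documented list order.
function newestFirst(models: ModelInfoLike[]): ModelInfoLike[] {
  return [...models].sort(
    (a, b) => Date.parse(b.created_at) - Date.parse(a.created_at),
  );
}
```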

+export class ModelInfosPage extends Page<ModelInfo> {}
+
+export interface ModelInfo {

  • /**
    • Unique model identifier.
  • */
  • id: string;
  • /**
    • RFC 3339 datetime string representing the time at which the model was released.
    • May be set to an epoch value if the release date is unknown.
  • */
  • created_at: string;
  • /**
    • A human-readable name for the model.
  • */
  • display_name: string;
  • /**
    • Object type.
    • For Models, this is always "model".
  • */
  • type: 'model';
    +}

+export interface ModelListParams extends PageParams {}
+
+Models.ModelInfosPage = ModelInfosPage;
+
+export declare namespace Models {

  • export {
  • type ModelInfo as ModelInfo,
  • ModelInfosPage as ModelInfosPage,
  • type ModelListParams as ModelListParams,
  • };
    +}
diff --git a/src/resources/shared.ts b/src/resources/shared.ts
new file mode 100644
index 00000000..d731c1f9
--- /dev/null
+++ b/src/resources/shared.ts
@@ -0,0 +1,72 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+export interface APIErrorObject {
+  message: string;
+
+  type: 'api_error';
+}
+
+export interface AuthenticationError {
+  message: string;
+
+  type: 'authentication_error';
+}
+
+export interface BillingError {
+  message: string;
+
+  type: 'billing_error';
+}
+
+export type ErrorObject =
+  | InvalidRequestError
+  | AuthenticationError
+  | BillingError
+  | PermissionError
+  | NotFoundError
+  | RateLimitError
+  | GatewayTimeoutError
+  | APIErrorObject
+  | OverloadedError;
+
+export interface ErrorResponse {
+  error: ErrorObject;
+
+  type: 'error';
+}
+
+export interface GatewayTimeoutError {
+  message: string;
+
+  type: 'timeout_error';
+}
+
+export interface InvalidRequestError {
+  message: string;
+
+  type: 'invalid_request_error';
+}
+
+export interface NotFoundError {
+  message: string;
+
+  type: 'not_found_error';
+}
+
+export interface OverloadedError {
+  message: string;
+
+  type: 'overloaded_error';
+}
+
+export interface PermissionError {
+  message: string;
+
+  type: 'permission_error';
+}
+
+export interface RateLimitError {
+  message: string;
+
+  type: 'rate_limit_error';
+}
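Every shape in the new `shared.ts` carries a distinct literal `type`, so `ErrorObject` works as a discriminated union. A hedged sketch of narrowing it (interfaces trimmed to three members for brevity; the retry policy is illustrative, not the SDK's):

```typescript
// Trimmed copies of the shared error shapes, enough to show narrowing.
interface RateLimitError {
  message: string;
  type: 'rate_limit_error';
}

interface OverloadedError {
  message: string;
  type: 'overloaded_error';
}

interface InvalidRequestError {
  message: string;
  type: 'invalid_request_error';
}

type ErrorObject = RateLimitError | OverloadedError | InvalidRequestError;

// Because `type` is a literal on every member, a switch on it narrows the
// union without casts. The retryable set here is an example policy only.
function isRetryable(err: ErrorObject): boolean {
  switch (err.type) {
    case 'rate_limit_error':
    case 'overloaded_error':
      return true;
    default:
      return false;
  }
}
```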
diff --git a/src/version.ts b/src/version.ts
index ab5165fb..4a46c186 100644
--- a/src/version.ts
+++ b/src/version.ts
@@ -1 +1 @@
    -export const VERSION = '0.32.1'; // x-release-please-version
    +export const VERSION = '0.33.1'; // x-release-please-version
diff --git a/tests/api-resources/MessageStream.test.ts b/tests/api-resources/MessageStream.test.ts
index 81b9c81e..0051d397 100644
--- a/tests/api-resources/MessageStream.test.ts
+++ b/tests/api-resources/MessageStream.test.ts
@@ -149,7 +149,12 @@ describe('MessageStream class', () => {
         model: 'claude-3-opus-20240229',
         stop_reason: 'end_turn',
         stop_sequence: null,
-        usage: { output_tokens: 6, input_tokens: 10 },
+        usage: {
+          output_tokens: 6,
+          input_tokens: 10,
+          cache_creation_input_tokens: null,
+          cache_read_input_tokens: null,
+        },
       }),
     );

@@ -209,22 +214,22 @@ describe('MessageStream class', () => {
       },
       {
         "args": [
-          "{\"type\":\"message_start\",\"message\":{\"type\":\"message\",\"id\":\"msg_01hhptzfxdaeehfxfv070yb6b8\",\"role\":\"assistant\",\"content\":[],\"model\":\"claude-3-opus-20240229\",\"stop_reason\":null,\"stop_sequence\":null,\"usage\":{\"output_tokens\":6,\"input_tokens\":10}}}",
-          "{\"type\":\"message\",\"id\":\"msg_01hhptzfxdaeehfxfv070yb6b8\",\"role\":\"assistant\",\"content\":[],\"model\":\"claude-3-opus-20240229\",\"stop_reason\":null,\"stop_sequence\":null,\"usage\":{\"output_tokens\":6,\"input_tokens\":10}}",
+          "{\"type\":\"message_start\",\"message\":{\"type\":\"message\",\"id\":\"msg_01hhptzfxdaeehfxfv070yb6b8\",\"role\":\"assistant\",\"content\":[],\"model\":\"claude-3-opus-20240229\",\"stop_reason\":null,\"stop_sequence\":null,\"usage\":{\"output_tokens\":6,\"input_tokens\":10,\"cache_creation_input_tokens\":null,\"cache_read_input_tokens\":null}}}",
+          "{\"type\":\"message\",\"id\":\"msg_01hhptzfxdaeehfxfv070yb6b8\",\"role\":\"assistant\",\"content\":[],\"model\":\"claude-3-opus-20240229\",\"stop_reason\":null,\"stop_sequence\":null,\"usage\":{\"output_tokens\":6,\"input_tokens\":10,\"cache_creation_input_tokens\":null,\"cache_read_input_tokens\":null}}",
         ],
         "type": "streamEvent",
       },
       {
         "args": [
           "{\"type\":\"content_block_start\",\"content_block\":{\"type\":\"text\",\"text\":\"\"},\"index\":0}",
-          "{\"type\":\"message\",\"id\":\"msg_01hhptzfxdaeehfxfv070yb6b8\",\"role\":\"assistant\",\"content\":[{\"type\":\"text\",\"text\":\"\"}],\"model\":\"claude-3-opus-20240229\",\"stop_reason\":null,\"stop_sequence\":null,\"usage\":{\"output_tokens\":6,\"input_tokens\":10}}",
+          "{\"type\":\"message\",\"id\":\"msg_01hhptzfxdaeehfxfv070yb6b8\",\"role\":\"assistant\",\"content\":[{\"type\":\"text\",\"text\":\"\"}],\"model\":\"claude-3-opus-20240229\",\"stop_reason\":null,\"stop_sequence\":null,\"usage\":{\"output_tokens\":6,\"input_tokens\":10,\"cache_creation_input_tokens\":null,\"cache_read_input_tokens\":null}}",
         ],
         "type": "streamEvent",
       },
       {
         "args": [
           "{\"type\":\"content_block_delta\",\"delta\":{\"type\":\"text_delta\",\"text\":\"Hello\"},\"index\":0}",
-          "{\"type\":\"message\",\"id\":\"msg_01hhptzfxdaeehfxfv070yb6b8\",\"role\":\"assistant\",\"content\":[{\"type\":\"text\",\"text\":\"Hello\"}],\"model\":\"claude-3-opus-20240229\",\"stop_reason\":null,\"stop_sequence\":null,\"usage\":{\"output_tokens\":6,\"input_tokens\":10}}",
+          "{\"type\":\"message\",\"id\":\"msg_01hhptzfxdaeehfxfv070yb6b8\",\"role\":\"assistant\",\"content\":[{\"type\":\"text\",\"text\":\"Hello\"}],\"model\":\"claude-3-opus-20240229\",\"stop_reason\":null,\"stop_sequence\":null,\"usage\":{\"output_tokens\":6,\"input_tokens\":10,\"cache_creation_input_tokens\":null,\"cache_read_input_tokens\":null}}",
         ],
         "type": "streamEvent",
       },

@@ -238,7 +243,7 @@ describe('MessageStream class', () => {
       {
         "args": [
           "{\"type\":\"content_block_delta\",\"delta\":{\"type\":\"text_delta\",\"text\":\" ther\"},\"index\":0}",
-          "{\"type\":\"message\",\"id\":\"msg_01hhptzfxdaeehfxfv070yb6b8\",\"role\":\"assistant\",\"content\":[{\"type\":\"text\",\"text\":\"Hello ther\"}],\"model\":\"claude-3-opus-20240229\",\"stop_reason\":null,\"stop_sequence\":null,\"usage\":{\"output_tokens\":6,\"input_tokens\":10}}",
+          "{\"type\":\"message\",\"id\":\"msg_01hhptzfxdaeehfxfv070yb6b8\",\"role\":\"assistant\",\"content\":[{\"type\":\"text\",\"text\":\"Hello ther\"}],\"model\":\"claude-3-opus-20240229\",\"stop_reason\":null,\"stop_sequence\":null,\"usage\":{\"output_tokens\":6,\"input_tokens\":10,\"cache_creation_input_tokens\":null,\"cache_read_input_tokens\":null}}",
         ],
         "type": "streamEvent",
       },

@@ -252,7 +257,7 @@ describe('MessageStream class', () => {
       {
         "args": [
           "{\"type\":\"content_block_delta\",\"delta\":{\"type\":\"text_delta\",\"text\":\"e!\"},\"index\":0}",
-          "{\"type\":\"message\",\"id\":\"msg_01hhptzfxdaeehfxfv070yb6b8\",\"role\":\"assistant\",\"content\":[{\"type\":\"text\",\"text\":\"Hello there!\"}],\"model\":\"claude-3-opus-20240229\",\"stop_reason\":null,\"stop_sequence\":null,\"usage\":{\"output_tokens\":6,\"input_tokens\":10}}",
+          "{\"type\":\"message\",\"id\":\"msg_01hhptzfxdaeehfxfv070yb6b8\",\"role\":\"assistant\",\"content\":[{\"type\":\"text\",\"text\":\"Hello there!\"}],\"model\":\"claude-3-opus-20240229\",\"stop_reason\":null,\"stop_sequence\":null,\"usage\":{\"output_tokens\":6,\"input_tokens\":10,\"cache_creation_input_tokens\":null,\"cache_read_input_tokens\":null}}",
         ],
         "type": "streamEvent",
       },

@@ -266,7 +271,7 @@ describe('MessageStream class', () => {
       {
         "args": [
           "{\"type\":\"content_block_stop\",\"index\":0}",
-          "{\"type\":\"message\",\"id\":\"msg_01hhptzfxdaeehfxfv070yb6b8\",\"role\":\"assistant\",\"content\":[{\"type\":\"text\",\"text\":\"Hello there!\"}],\"model\":\"claude-3-opus-20240229\",\"stop_reason\":null,\"stop_sequence\":null,\"usage\":{\"output_tokens\":6,\"input_tokens\":10}}",
+          "{\"type\":\"message\",\"id\":\"msg_01hhptzfxdaeehfxfv070yb6b8\",\"role\":\"assistant\",\"content\":[{\"type\":\"text\",\"text\":\"Hello there!\"}],\"model\":\"claude-3-opus-20240229\",\"stop_reason\":null,\"stop_sequence\":null,\"usage\":{\"output_tokens\":6,\"input_tokens\":10,\"cache_creation_input_tokens\":null,\"cache_read_input_tokens\":null}}",
         ],
         "type": "streamEvent",
       },

@@ -279,26 +284,26 @@ describe('MessageStream class', () => {
       {
         "args": [
           "{\"type\":\"message_delta\",\"usage\":{\"output_tokens\":6},\"delta\":{\"stop_reason\":\"end_turn\",\"stop_sequence\":null}}",
-          "{\"type\":\"message\",\"id\":\"msg_01hhptzfxdaeehfxfv070yb6b8\",\"role\":\"assistant\",\"content\":[{\"type\":\"text\",\"text\":\"Hello there!\"}],\"model\":\"claude-3-opus-20240229\",\"stop_reason\":\"end_turn\",\"stop_sequence\":null,\"usage\":{\"output_tokens\":6,\"input_tokens\":10}}",
+          "{\"type\":\"message\",\"id\":\"msg_01hhptzfxdaeehfxfv070yb6b8\",\"role\":\"assistant\",\"content\":[{\"type\":\"text\",\"text\":\"Hello there!\"}],\"model\":\"claude-3-opus-20240229\",\"stop_reason\":\"end_turn\",\"stop_sequence\":null,\"usage\":{\"output_tokens\":6,\"input_tokens\":10,\"cache_creation_input_tokens\":null,\"cache_read_input_tokens\":null}}",
         ],
         "type": "streamEvent",
       },
       {
         "args": [
           "{\"type\":\"message_stop\"}",
-          "{\"type\":\"message\",\"id\":\"msg_01hhptzfxdaeehfxfv070yb6b8\",\"role\":\"assistant\",\"content\":[{\"type\":\"text\",\"text\":\"Hello there!\"}],\"model\":\"claude-3-opus-20240229\",\"stop_reason\":\"end_turn\",\"stop_sequence\":null,\"usage\":{\"output_tokens\":6,\"input_tokens\":10}}",
+          "{\"type\":\"message\",\"id\":\"msg_01hhptzfxdaeehfxfv070yb6b8\",\"role\":\"assistant\",\"content\":[{\"type\":\"text\",\"text\":\"Hello there!\"}],\"model\":\"claude-3-opus-20240229\",\"stop_reason\":\"end_turn\",\"stop_sequence\":null,\"usage\":{\"output_tokens\":6,\"input_tokens\":10,\"cache_creation_input_tokens\":null,\"cache_read_input_tokens\":null}}",
         ],
         "type": "streamEvent",
       },
       {
         "args": [
-          "{\"type\":\"message\",\"id\":\"msg_01hhptzfxdaeehfxfv070yb6b8\",\"role\":\"assistant\",\"content\":[{\"type\":\"text\",\"text\":\"Hello there!\"}],\"model\":\"claude-3-opus-20240229\",\"stop_reason\":\"end_turn\",\"stop_sequence\":null,\"usage\":{\"output_tokens\":6,\"input_tokens\":10}}",
+          "{\"type\":\"message\",\"id\":\"msg_01hhptzfxdaeehfxfv070yb6b8\",\"role\":\"assistant\",\"content\":[{\"type\":\"text\",\"text\":\"Hello there!\"}],\"model\":\"claude-3-opus-20240229\",\"stop_reason\":\"end_turn\",\"stop_sequence\":null,\"usage\":{\"output_tokens\":6,\"input_tokens\":10,\"cache_creation_input_tokens\":null,\"cache_read_input_tokens\":null}}",
         ],
         "type": "message",
       },
       {
         "args": [
-          "{\"type\":\"message\",\"id\":\"msg_01hhptzfxdaeehfxfv070yb6b8\",\"role\":\"assistant\",\"content\":[{\"type\":\"text\",\"text\":\"Hello there!\"}],\"model\":\"claude-3-opus-20240229\",\"stop_reason\":\"end_turn\",\"stop_sequence\":null,\"usage\":{\"output_tokens\":6,\"input_tokens\":10}}",
+          "{\"type\":\"message\",\"id\":\"msg_01hhptzfxdaeehfxfv070yb6b8\",\"role\":\"assistant\",\"content\":[{\"type\":\"text\",\"text\":\"Hello there!\"}],\"model\":\"claude-3-opus-20240229\",\"stop_reason\":\"end_turn\",\"stop_sequence\":null,\"usage\":{\"output_tokens\":6,\"input_tokens\":10,\"cache_creation_input_tokens\":null,\"cache_read_input_tokens\":null}}",
         ],
         "type": "finalMessage",
       },

@@ -326,6 +331,8 @@ describe('MessageStream class', () => {
       "stop_sequence": null,
       "type": "message",
       "usage": {
+        "cache_creation_input_tokens": null,
+        "cache_read_input_tokens": null,
         "input_tokens": 10,
         "output_tokens": 6,
       },

@@ -353,7 +360,12 @@ describe('MessageStream class', () => {
         model: 'claude-3-opus-20240229',
         stop_reason: 'end_turn',
         stop_sequence: null,
-        usage: { output_tokens: 6, input_tokens: 10 },
+        usage: {
+          output_tokens: 6,
+          input_tokens: 10,
+          cache_creation_input_tokens: null,
+          cache_read_input_tokens: null,
+        },
       }),
     );
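The hunks above extend every usage snapshot with the two nullable cache-token fields. How a `message_delta` folds into an accumulated usage snapshot can be sketched as follows; the field names come from the snapshots above, but the merge rule (overwrite only when the delta supplies a value) is an assumption for illustration, not the SDK's exact accumulation logic:

```typescript
// Usage shape as it appears in the updated snapshots.
interface Usage {
  input_tokens: number;
  output_tokens: number;
  cache_creation_input_tokens: number | null;
  cache_read_input_tokens: number | null;
}

// Fold a (possibly partial) delta usage into the running snapshot,
// keeping fields the delta does not mention.
function mergeUsage(snapshot: Usage, delta: Partial<Usage>): Usage {
  const merged: Usage = { ...snapshot };
  if (delta.input_tokens !== undefined) merged.input_tokens = delta.input_tokens;
  if (delta.output_tokens !== undefined) merged.output_tokens = delta.output_tokens;
  if (delta.cache_creation_input_tokens !== undefined)
    merged.cache_creation_input_tokens = delta.cache_creation_input_tokens;
  if (delta.cache_read_input_tokens !== undefined)
    merged.cache_read_input_tokens = delta.cache_read_input_tokens;
  return merged;
}
```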

diff --git a/tests/api-resources/beta/messages/batches.test.ts b/tests/api-resources/beta/messages/batches.test.ts
index ed2027c8..e395910a 100644
--- a/tests/api-resources/beta/messages/batches.test.ts
+++ b/tests/api-resources/beta/messages/batches.test.ts
@@ -20,22 +20,6 @@ describe('resource batches', () => {
             model: 'claude-3-5-sonnet-20241022',
           },
         },
-        {
-          custom_id: 'my-custom-id-1',
-          params: {
-            max_tokens: 1024,
-            messages: [{ content: 'Hello, world', role: 'user' }],
-            model: 'claude-3-5-sonnet-20241022',
-          },
-        },
-        {
-          custom_id: 'my-custom-id-1',
-          params: {
-            max_tokens: 1024,
-            messages: [{ content: 'Hello, world', role: 'user' }],
-            model: 'claude-3-5-sonnet-20241022',
-          },
-        },
       ],
     });
     const rawResponse = await responsePromise.asResponse();
    @@ -57,143 +41,7 @@ describe('resource batches', () => {
             messages: [{ content: 'Hello, world', role: 'user' }],
             model: 'claude-3-5-sonnet-20241022',
             metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
-            stop_sequences: ['string', 'string', 'string'],
-            stream: false,
-            system: [
-              { text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } },
-            ],
-            temperature: 1,
-            tool_choice: { type: 'auto', disable_parallel_tool_use: true },
-            tools: [
-              {
-                input_schema: {
-                  type: 'object',
-                  properties: {
-                    location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-                    unit: {
-                      description: 'Unit for the output - one of (celsius, fahrenheit)',
-                      type: 'string',
-                    },
-                  },
-                },
-                name: 'x',
-                cache_control: { type: 'ephemeral' },
-                description: 'Get the current weather in a given location',
-                type: 'custom',
-              },
-              {
-                input_schema: {
-                  type: 'object',
-                  properties: {
-                    location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-                    unit: {
-                      description: 'Unit for the output - one of (celsius, fahrenheit)',
-                      type: 'string',
-                    },
-                  },
-                },
-                name: 'x',
-                cache_control: { type: 'ephemeral' },
-                description: 'Get the current weather in a given location',
-                type: 'custom',
-              },
-              {
-                input_schema: {
-                  type: 'object',
-                  properties: {
-                    location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-                    unit: {
-                      description: 'Unit for the output - one of (celsius, fahrenheit)',
-                      type: 'string',
-                    },
-                  },
-                },
-                name: 'x',
-                cache_control: { type: 'ephemeral' },
-                description: 'Get the current weather in a given location',
-                type: 'custom',
-              },
-            ],
-            top_k: 5,
-            top_p: 0.7,
-          },
-        },
-        {
-          custom_id: 'my-custom-id-1',
-          params: {
-            max_tokens: 1024,
-            messages: [{ content: 'Hello, world', role: 'user' }],
-            model: 'claude-3-5-sonnet-20241022',
-            metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
-            stop_sequences: ['string', 'string', 'string'],
-            stream: false,
-            system: [
-              { text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } },
-            ],
-            temperature: 1,
-            tool_choice: { type: 'auto', disable_parallel_tool_use: true },
-            tools: [
-              {
-                input_schema: {
-                  type: 'object',
-                  properties: {
-                    location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-                    unit: {
-                      description: 'Unit for the output - one of (celsius, fahrenheit)',
-                      type: 'string',
-                    },
-                  },
-                },
-                name: 'x',
-                cache_control: { type: 'ephemeral' },
-                description: 'Get the current weather in a given location',
-                type: 'custom',
-              },
-              {
-                input_schema: {
-                  type: 'object',
-                  properties: {
-                    location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-                    unit: {
-                      description: 'Unit for the output - one of (celsius, fahrenheit)',
-                      type: 'string',
-                    },
-                  },
-                },
-                name: 'x',
-                cache_control: { type: 'ephemeral' },
-                description: 'Get the current weather in a given location',
-                type: 'custom',
-              },
-              {
-                input_schema: {
-                  type: 'object',
-                  properties: {
-                    location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-                    unit: {
-                      description: 'Unit for the output - one of (celsius, fahrenheit)',
-                      type: 'string',
-                    },
-                  },
-                },
-                name: 'x',
-                cache_control: { type: 'ephemeral' },
-                description: 'Get the current weather in a given location',
-                type: 'custom',
-              },
-            ],
-            top_k: 5,
-            top_p: 0.7,
-          },
-        },
-        {
-          custom_id: 'my-custom-id-1',
-          params: {
-            max_tokens: 1024,
-            messages: [{ content: 'Hello, world', role: 'user' }],
-            model: 'claude-3-5-sonnet-20241022',
-            metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
-            stop_sequences: ['string', 'string', 'string'],
+            stop_sequences: ['string'],
             stream: false,
             system: [
               { text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } },
    

@@ -217,45 +65,13 @@ describe('resource batches', () => {
                 description: 'Get the current weather in a given location',
                 type: 'custom',
               },
-              {
-                input_schema: {
-                  type: 'object',
-                  properties: {
-                    location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-                    unit: {
-                      description: 'Unit for the output - one of (celsius, fahrenheit)',
-                      type: 'string',
-                    },
-                  },
-                },
-                name: 'x',
-                cache_control: { type: 'ephemeral' },
-                description: 'Get the current weather in a given location',
-                type: 'custom',
-              },
-              {
-                input_schema: {
-                  type: 'object',
-                  properties: {
-                    location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-                    unit: {
-                      description: 'Unit for the output - one of (celsius, fahrenheit)',
-                      type: 'string',
-                    },
-                  },
-                },
-                name: 'x',
-                cache_control: { type: 'ephemeral' },
-                description: 'Get the current weather in a given location',
-                type: 'custom',
-              },
             ],
             top_k: 5,
             top_p: 0.7,
           },
         },
       ],
-      betas: ['string', 'string', 'string'],
+      betas: ['string'],
     });
   });

@@ -282,7 +98,7 @@ describe('resource batches', () => {
     await expect(
       client.beta.messages.batches.retrieve(
         'message_batch_id',
-        { betas: ['string', 'string', 'string'] },
+        { betas: ['string'] },
         { path: '/_stainless_unknown_path' },
       ),
     ).rejects.toThrow(Anthropic.NotFoundError);
@@ -310,7 +126,7 @@ describe('resource batches', () => {
     // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
     await expect(
       client.beta.messages.batches.list(
-        { after_id: 'after_id', before_id: 'before_id', limit: 1, betas: ['string', 'string', 'string'] },
+        { after_id: 'after_id', before_id: 'before_id', limit: 1, betas: ['string'] },
        { path: '/_stainless_unknown_path' },
      ),
    ).rejects.toThrow(Anthropic.NotFoundError);
@@ -339,7 +155,7 @@ describe('resource batches', () => {
    await expect(
      client.beta.messages.batches.cancel(
        'message_batch_id',
-        { betas: ['string', 'string', 'string'] },
+        { betas: ['string'] },
        { path: '/_stainless_unknown_path' },
      ),
    ).rejects.toThrow(Anthropic.NotFoundError);
@@ -357,7 +173,7 @@ describe('resource batches', () => {
    await expect(
      client.beta.messages.batches.results(
        'message_batch_id',
-        { betas: ['string', 'string', 'string'] },
+        { betas: ['string'] },
        { path: '/_stainless_unknown_path' },
      ),
    ).rejects.toThrow(Anthropic.NotFoundError);
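The batch fixtures above were trimmed from three identical entries sharing one `custom_id` to a single entry. A hypothetical helper (not part of the SDK) sketches why unique `custom_id`s matter when building a real `requests` array — they are what lets you correlate batch results back to inputs:

```typescript
// Minimal local shapes standing in for the SDK's batch request params.
interface BatchParams {
  max_tokens: number;
  messages: { content: string; role: 'user' | 'assistant' }[];
  model: string;
}

interface BatchRequest {
  custom_id: string;
  params: BatchParams;
}

// Build one batch entry per prompt, numbering custom_ids so each result
// can be matched to its originating prompt.
function buildRequests(prompts: string[], model: string): BatchRequest[] {
  return prompts.map((content, i) => ({
    custom_id: `my-custom-id-${i + 1}`,
    params: {
      max_tokens: 1024,
      messages: [{ content, role: 'user' }],
      model,
    },
  }));
}
```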
diff --git a/tests/api-resources/beta/messages/messages.test.ts b/tests/api-resources/beta/messages/messages.test.ts
index 64b6299c..ec73d9c0 100644
--- a/tests/api-resources/beta/messages/messages.test.ts
+++ b/tests/api-resources/beta/messages/messages.test.ts
@@ -30,7 +30,7 @@ describe('resource messages', () => {
       messages: [{ content: 'Hello, world', role: 'user' }],
       model: 'claude-3-5-sonnet-20241022',
       metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
-      stop_sequences: ['string', 'string', 'string'],
+      stop_sequences: ['string'],
       stream: false,
       system: [{ text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } }],
       temperature: 1,

@@ -49,46 +49,16 @@ describe('resource messages', () => {
           description: 'Get the current weather in a given location',
           type: 'custom',
         },
-        {
-          input_schema: {
-            type: 'object',
-            properties: {
-              location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-              unit: { description: 'Unit for the output - one of (celsius, fahrenheit)', type: 'string' },
-            },
-          },
-          name: 'x',
-          cache_control: { type: 'ephemeral' },
-          description: 'Get the current weather in a given location',
-          type: 'custom',
-        },
-        {
-          input_schema: {
-            type: 'object',
-            properties: {
-              location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-              unit: { description: 'Unit for the output - one of (celsius, fahrenheit)', type: 'string' },
-            },
-          },
-          name: 'x',
-          cache_control: { type: 'ephemeral' },
-          description: 'Get the current weather in a given location',
-          type: 'custom',
-        },
       ],
       top_k: 5,
       top_p: 0.7,
-      betas: ['string', 'string', 'string'],
+      betas: ['string'],
     });
   });

   test('countTokens: only required params', async () => {
     const responsePromise = client.beta.messages.countTokens({
-      messages: [
-        { content: 'string', role: 'user' },
-        { content: 'string', role: 'user' },
-        { content: 'string', role: 'user' },
-      ],
+      messages: [{ content: 'string', role: 'user' }],
       model: 'string',
     });
     const rawResponse = await responsePromise.asResponse();
    @@ -102,11 +72,7 @@ describe('resource messages', () => {

   test('countTokens: required and optional params', async () => {
     const response = await client.beta.messages.countTokens({
-      messages: [
-        { content: 'string', role: 'user' },
-        { content: 'string', role: 'user' },
-        { content: 'string', role: 'user' },
-      ],
+      messages: [{ content: 'string', role: 'user' }],
       model: 'string',
       system: [{ text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } }],
       tool_choice: { type: 'auto', disable_parallel_tool_use: true },
@@ -124,34 +90,8 @@ describe('resource messages', () => {
           description: 'Get the current weather in a given location',
           type: 'custom',
         },
-        {
-          input_schema: {
-            type: 'object',
-            properties: {
-              location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-              unit: { description: 'Unit for the output - one of (celsius, fahrenheit)', type: 'string' },
-            },
-          },
-          name: 'x',
-          cache_control: { type: 'ephemeral' },
-          description: 'Get the current weather in a given location',
-          type: 'custom',
-        },
-        {
-          input_schema: {
-            type: 'object',
-            properties: {
-              location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-              unit: { description: 'Unit for the output - one of (celsius, fahrenheit)', type: 'string' },
-            },
-          },
-          name: 'x',
-          cache_control: { type: 'ephemeral' },
-          description: 'Get the current weather in a given location',
-          type: 'custom',
-        },
       ],
-      betas: ['string', 'string', 'string'],
+      betas: ['string'],
     });
   });
 });
diff --git a/tests/api-resources/beta/models.test.ts b/tests/api-resources/beta/models.test.ts
new file mode 100644
index 00000000..f155b632
--- /dev/null
+++ b/tests/api-resources/beta/models.test.ts
@@ -0,0 +1,57 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+import Anthropic from '@anthropic-ai/sdk';
+import { Response } from 'node-fetch';
+
+const client = new Anthropic({
+  apiKey: 'my-anthropic-api-key',
+  baseURL: process.env['TEST_API_BASE_URL'] ?? 'http://127.0.0.1:4010',
+});
+
+describe('resource models', () => {
+  test('retrieve', async () => {
+    const responsePromise = client.beta.models.retrieve('model_id');
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('retrieve: request options instead of params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(
+      client.beta.models.retrieve('model_id', { path: '/_stainless_unknown_path' }),
+    ).rejects.toThrow(Anthropic.NotFoundError);
+  });
+
+  test('list', async () => {
+    const responsePromise = client.beta.models.list();
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('list: request options instead of params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(client.beta.models.list({ path: '/_stainless_unknown_path' })).rejects.toThrow(
+      Anthropic.NotFoundError,
+    );
+  });
+
+  test('list: request options and params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(
+      client.beta.models.list(
+        { after_id: 'after_id', before_id: 'before_id', limit: 1 },
+        { path: '/_stainless_unknown_path' },
+      ),
+    ).rejects.toThrow(Anthropic.NotFoundError);
+  });
+});
diff --git a/tests/api-resources/beta/prompt-caching/messages.test.ts b/tests/api-resources/beta/prompt-caching/messages.test.ts
deleted file mode 100644
index dd94b3a7..00000000
--- a/tests/api-resources/beta/prompt-caching/messages.test.ts
+++ /dev/null
@@ -1,81 +0,0 @@
-// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
-
-import Anthropic from '@anthropic-ai/sdk';
-import { Response } from 'node-fetch';
-
-const client = new Anthropic({
-  apiKey: 'my-anthropic-api-key',
-  baseURL: process.env['TEST_API_BASE_URL'] ?? 'http://127.0.0.1:4010',
-});
-
-describe('resource messages', () => {
-  test('create: only required params', async () => {
-    const responsePromise = client.beta.promptCaching.messages.create({
-      max_tokens: 1024,
-      messages: [{ content: 'Hello, world', role: 'user' }],
-      model: 'claude-3-5-sonnet-20241022',
-    });
-    const rawResponse = await responsePromise.asResponse();
-    expect(rawResponse).toBeInstanceOf(Response);
-    const response = await responsePromise;
-    expect(response).not.toBeInstanceOf(Response);
-    const dataAndResponse = await responsePromise.withResponse();
-    expect(dataAndResponse.data).toBe(response);
-    expect(dataAndResponse.response).toBe(rawResponse);
-  });
-
-  test('create: required and optional params', async () => {
-    const response = await client.beta.promptCaching.messages.create({
-      max_tokens: 1024,
-      messages: [{ content: 'Hello, world', role: 'user' }],
-      model: 'claude-3-5-sonnet-20241022',
-      metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
-      stop_sequences: ['string', 'string', 'string'],
-      stream: false,
-      system: [{ text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } }],
-      temperature: 1,
-      tool_choice: { type: 'auto', disable_parallel_tool_use: true },
-      tools: [
-        {
-          input_schema: {
-            type: 'object',
-            properties: {
-              location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-              unit: { description: 'Unit for the output - one of (celsius, fahrenheit)', type: 'string' },
-            },
-          },
-          name: 'x',
-          cache_control: { type: 'ephemeral' },
-          description: 'Get the current weather in a given location',
-        },
-        {
-          input_schema: {
-            type: 'object',
-            properties: {
-              location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-              unit: { description: 'Unit for the output - one of (celsius, fahrenheit)', type: 'string' },
-            },
-          },
-          name: 'x',
-          cache_control: { type: 'ephemeral' },
-          description: 'Get the current weather in a given location',
-        },
-        {
-          input_schema: {
-            type: 'object',
-            properties: {
-              location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-              unit: { description: 'Unit for the output - one of (celsius, fahrenheit)', type: 'string' },
-            },
-          },
-          name: 'x',
-          cache_control: { type: 'ephemeral' },
-          description: 'Get the current weather in a given location',
-        },
-      ],
-      top_k: 5,
-      top_p: 0.7,
-      betas: ['string', 'string', 'string'],
-    });
-  });
-});
    diff --git tests/api-resources/completions.test.ts tests/api-resources/completions.test.ts
    index aa326cf2..fcd0a68c 100644
    --- tests/api-resources/completions.test.ts
    +++ tests/api-resources/completions.test.ts
    @@ -30,7 +30,7 @@ describe('resource completions', () => {
    model: 'string',
    prompt: '\n\nHuman: Hello, world!\n\nAssistant:',
    metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
-  stop_sequences: ['string', 'string', 'string'],
+  stop_sequences: ['string'],
     stream: false,
     temperature: 1,
     top_k: 5,
    

diff --git a/tests/api-resources/messages/batches.test.ts b/tests/api-resources/messages/batches.test.ts
new file mode 100644
index 00000000..26efdbc8
--- /dev/null
+++ tests/api-resources/messages/batches.test.ts
@@ -0,0 +1,145 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+import Anthropic from '@anthropic-ai/sdk';
+import { Response } from 'node-fetch';
+
+const client = new Anthropic({
+  apiKey: 'my-anthropic-api-key',
+  baseURL: process.env['TEST_API_BASE_URL'] ?? 'http://127.0.0.1:4010',
+});
+
+describe('resource batches', () => {
+  test('create: only required params', async () => {
+    const responsePromise = client.messages.batches.create({
+      requests: [
+        {
+          custom_id: 'my-custom-id-1',
+          params: {
+            max_tokens: 1024,
+            messages: [{ content: 'Hello, world', role: 'user' }],
+            model: 'claude-3-5-sonnet-20241022',
+          },
+        },
+      ],
+    });
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('create: required and optional params', async () => {
+    const response = await client.messages.batches.create({
+      requests: [
+        {
+          custom_id: 'my-custom-id-1',
+          params: {
+            max_tokens: 1024,
+            messages: [{ content: 'Hello, world', role: 'user' }],
+            model: 'claude-3-5-sonnet-20241022',
+            metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
+            stop_sequences: ['string'],
+            system: [
+              { text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } },
+            ],
+            temperature: 1,
+            tool_choice: { type: 'auto', disable_parallel_tool_use: true },
+            tools: [
+              {
+                input_schema: {
+                  type: 'object',
+                  properties: {
+                    location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
+                    unit: {
+                      description: 'Unit for the output - one of (celsius, fahrenheit)',
+                      type: 'string',
+                    },
+                  },
+                },
+                name: 'x',
+                cache_control: { type: 'ephemeral' },
+                description: 'Get the current weather in a given location',
+              },
+            ],
+            top_k: 5,
+            top_p: 0.7,
+          },
+        },
+      ],
+    });
+  });
+
+  test('retrieve', async () => {
+    const responsePromise = client.messages.batches.retrieve('message_batch_id');
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('retrieve: request options instead of params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(
+      client.messages.batches.retrieve('message_batch_id', { path: '/_stainless_unknown_path' }),
+    ).rejects.toThrow(Anthropic.NotFoundError);
+  });
+
+  test('list', async () => {
+    const responsePromise = client.messages.batches.list();
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('list: request options instead of params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(client.messages.batches.list({ path: '/_stainless_unknown_path' })).rejects.toThrow(
+      Anthropic.NotFoundError,
+    );
+  });
+
+  test('list: request options and params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(
+      client.messages.batches.list(
+        { after_id: 'after_id', before_id: 'before_id', limit: 1 },
+        { path: '/_stainless_unknown_path' },
+      ),
+    ).rejects.toThrow(Anthropic.NotFoundError);
+  });
+
+  test('cancel', async () => {
+    const responsePromise = client.messages.batches.cancel('message_batch_id');
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('cancel: request options instead of params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(
+      client.messages.batches.cancel('message_batch_id', { path: '/_stainless_unknown_path' }),
+    ).rejects.toThrow(Anthropic.NotFoundError);
+  });
+
+  test('results: request options instead of params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(
+      client.messages.batches.results('message_batch_id', { path: '/_stainless_unknown_path' }),
+    ).rejects.toThrow(Anthropic.NotFoundError);
+  });
+});
    diff --git tests/api-resources/messages.test.ts tests/api-resources/messages/messages.test.ts
    similarity index 74%
    rename from tests/api-resources/messages.test.ts
    rename to tests/api-resources/messages/messages.test.ts
    index 0497742e..3ae41d32 100644
    --- tests/api-resources/messages.test.ts
    +++ tests/api-resources/messages/messages.test.ts
    @@ -30,9 +30,9 @@ describe('resource messages', () => {
       messages: [{ content: 'Hello, world', role: 'user' }],
       model: 'claude-3-5-sonnet-20241022',
       metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
-      stop_sequences: ['string', 'string', 'string'],
+      stop_sequences: ['string'],
       stream: false,
-      system: [{ text: "Today's date is 2024-06-01.", type: 'text' }],
+      system: [{ text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } }],
       temperature: 1,
       tool_choice: { type: 'auto', disable_parallel_tool_use: true },
       tools: [

@@ -45,8 +45,36 @@ describe('resource messages', () => {
             },
           },
           name: 'x',
+          cache_control: { type: 'ephemeral' },
           description: 'Get the current weather in a given location',
         },
+      ],
+      top_k: 5,
+      top_p: 0.7,
+    });
+  });
+
+  test('countTokens: only required params', async () => {
+    const responsePromise = client.messages.countTokens({
+      messages: [{ content: 'string', role: 'user' }],
+      model: 'string',
+    });
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('countTokens: required and optional params', async () => {
+    const response = await client.messages.countTokens({
+      messages: [{ content: 'string', role: 'user' }],
+      model: 'string',
+      system: [{ text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } }],
+      tool_choice: { type: 'auto', disable_parallel_tool_use: true },
+      tools: [
         {
           input_schema: {
             type: 'object',

@@ -56,22 +84,10 @@ describe('resource messages', () => {
             },
           },
           name: 'x',
-          description: 'Get the current weather in a given location',
-        },
-        {
-          input_schema: {
-            type: 'object',
-            properties: {
-              location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-              unit: { description: 'Unit for the output - one of (celsius, fahrenheit)', type: 'string' },
-            },
-          },
-          name: 'x',
+          cache_control: { type: 'ephemeral' },
           description: 'Get the current weather in a given location',
         },
       ],
-      top_k: 5,
-      top_p: 0.7,
     });
   });
 });
    diff --git a/tests/api-resources/models.test.ts b/tests/api-resources/models.test.ts
    new file mode 100644
    index 00000000..7f5c0411
    --- /dev/null
    +++ tests/api-resources/models.test.ts
    @@ -0,0 +1,57 @@
    +// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

+import Anthropic from '@anthropic-ai/sdk';
+import { Response } from 'node-fetch';
+
+const client = new Anthropic({
+  apiKey: 'my-anthropic-api-key',
+  baseURL: process.env['TEST_API_BASE_URL'] ?? 'http://127.0.0.1:4010',
+});
+
+describe('resource models', () => {
+  test('retrieve', async () => {
+    const responsePromise = client.models.retrieve('model_id');
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('retrieve: request options instead of params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(client.models.retrieve('model_id', { path: '/_stainless_unknown_path' })).rejects.toThrow(
+      Anthropic.NotFoundError,
+    );
+  });
+
+  test('list', async () => {
+    const responsePromise = client.models.list();
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('list: request options instead of params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(client.models.list({ path: '/_stainless_unknown_path' })).rejects.toThrow(
+      Anthropic.NotFoundError,
+    );
+  });
+
+  test('list: request options and params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(
+      client.models.list(
+        { after_id: 'after_id', before_id: 'before_id', limit: 1 },
+        { path: '/_stainless_unknown_path' },
+      ),
+    ).rejects.toThrow(Anthropic.NotFoundError);
+  });
+});
    diff --git tests/index.test.ts tests/index.test.ts
    index 0d4a6bba..b6398085 100644
    --- tests/index.test.ts
    +++ tests/index.test.ts
    @@ -183,7 +183,7 @@ describe('instantiate client', () => {
    expect(client.apiKey).toBe('my-anthropic-api-key');
    });
-  test('with overriden environment variable arguments', () => {
+  test('with overridden environment variable arguments', () => {
    // set options via env var
    process.env['ANTHROPIC_API_KEY'] = 'another my-anthropic-api-key';
    const client = new Anthropic({ apiKey: 'my-anthropic-api-key' });
    diff --git tests/responses.test.ts tests/responses.test.ts
    index d0db2b1f..db58e0b7 100644
    --- tests/responses.test.ts
    +++ tests/responses.test.ts
    @@ -1,5 +1,8 @@
    -import { createResponseHeaders } from '@anthropic-ai/sdk/core';
    +import { APIPromise, createResponseHeaders } from '@anthropic-ai/sdk/core';
    +import Anthropic from '@anthropic-ai/sdk/index';
    import { Headers } from '@anthropic-ai/sdk/_shims/index';
    +import { Response } from 'node-fetch';
    +import { compareType } from './utils/typing';

describe('response parsing', () => {
// TODO: test unicode characters
@@ -23,3 +26,129 @@ describe('response parsing', () => {
expect(headers['content-type']).toBe('text/xml, application/json');
});
});
+
+describe('request id', () => {
+  test('types', () => {
+    compareType<Awaited<APIPromise<string>>, string>(true);
+    compareType<Awaited<APIPromise<number>>, number>(true);
+    compareType<Awaited<APIPromise<null>>, null>(true);
+    compareType<Awaited<APIPromise<void>>, void>(true);
+    compareType<Awaited<APIPromise<Response>>, Response>(true);
+    compareType<Awaited<APIPromise<Response>>, Response>(true);
+    compareType<Awaited<APIPromise<{ foo: string }>>, { foo: string } & { _request_id?: string | null }>(
+      true,
+    );
+    compareType<Awaited<APIPromise<Array<{ foo: string }>>>, Array<{ foo: string }>>(true);
+  });
+
+  test('withResponse', async () => {
+    const client = new Anthropic({
+      apiKey: 'dummy',
+      fetch: async () =>
+        new Response(JSON.stringify({ id: 'bar' }), {
+          headers: { 'request-id': 'req_xxx', 'content-type': 'application/json' },
+        }),
+    });
+    const {
+      data: message,
+      response,
+      request_id,
+    } = await client.messages
+      .create({ messages: [], model: 'claude-3-opus-20240229', max_tokens: 1024 })
+      .withResponse();
+    expect(request_id).toBe('req_xxx');
+    expect(response.headers.get('request-id')).toBe('req_xxx');
+    expect(message.id).toBe('bar');
+    expect(JSON.stringify(message)).toBe('{"id":"bar"}');
+  });
+
+  test('object response', async () => {
+    const client = new Anthropic({
+      apiKey: 'dummy',
+      fetch: async () =>
+        new Response(JSON.stringify({ id: 'bar' }), {
+          headers: { 'request-id': 'req_xxx', 'content-type': 'application/json' },
+        }),
+    });
+    const rsp = await client.messages.create({
+      messages: [],
+      model: 'claude-3-opus-20240229',
+      max_tokens: 1024,
+    });
+    expect(rsp.id).toBe('bar');
+    expect(rsp._request_id).toBe('req_xxx');
+    expect(JSON.stringify(rsp)).toBe('{"id":"bar"}');
+  });
+
+  test('envelope response', async () => {
+    const promise = new APIPromise<{ data: { foo: string } }>(
+      (async () => {
+        return {
+          response: new Response(JSON.stringify({ data: { foo: 'bar' } }), {
+            headers: { 'request-id': 'req_xxx', 'content-type': 'application/json' },
+          }),
+          controller: {} as any,
+          options: {} as any,
+        };
+      })(),
+    )._thenUnwrap((d) => d.data);
+    const rsp = await promise;
+    expect(rsp.foo).toBe('bar');
+    expect(rsp._request_id).toBe('req_xxx');
+  });
+
+  test('page response', async () => {
+    const client = new Anthropic({
+      apiKey: 'dummy',
+      fetch: async () =>
+        new Response(JSON.stringify({ data: [{ foo: 'bar' }] }), {
+          headers: { 'request-id': 'req_xxx', 'content-type': 'application/json' },
+        }),
+    });
+    const page = await client.beta.messages.batches.list();
+    expect(page.data).toMatchObject([{ foo: 'bar' }]);
+    expect((page as any)._request_id).toBeUndefined();
+  });
+
+  test('array response', async () => {
+    const promise = new APIPromise<Array<{ foo: string }>>(
+      (async () => {
+        return {
+          response: new Response(JSON.stringify([{ foo: 'bar' }]), {
+            headers: { 'request-id': 'req_xxx', 'content-type': 'application/json' },
+          }),
+          controller: {} as any,
+          options: {} as any,
+        };
+      })(),
+    );
+    const rsp = await promise;
+    expect(rsp.length).toBe(1);
+    expect(rsp[0]).toMatchObject({ foo: 'bar' });
+    expect((rsp as any)._request_id).toBeUndefined();
+  });
+
+  test('string response', async () => {
+    const promise = new APIPromise<string>(
+      (async () => {
+        return {
+          response: new Response('hello world', {
+            headers: { 'request-id': 'req_xxx', 'content-type': 'application/text' },
+          }),
+          controller: {} as any,
+          options: {} as any,
+        };
+      })(),
+    );
+    const result = await promise;
+    expect(result).toBe('hello world');
+    expect((result as any)._request_id).toBeUndefined();
+  });
+});
    diff --git a/tests/utils/typing.ts b/tests/utils/typing.ts
    new file mode 100644
    index 00000000..4a791d2a
    --- /dev/null
    +++ tests/utils/typing.ts
    @@ -0,0 +1,9 @@
+type Equal<X, Y> = (<T>() => T extends X ? 1 : 2) extends <T>() => T extends Y ? 1 : 2 ? true : false;
+
+export const expectType = <T>(_expression: T): void => {
+  return;
+};
+
+export const compareType = <T1, T2>(_expression: Equal<T1, T2>): void => {
+  return;
+};

</details>

### Description

This PR updates the Anthropic TypeScript SDK, introducing new features, fixing bugs, and updating API models. Key changes include adding support for token counting, implementing the Models API, removing prompt caching, updating the Batches API, and various other improvements and bug fixes.

<details>
<summary><i>Changes</i></summary>

### Changes

1. `messages.ts`:
   - Added token counting functionality
   - Updated Message types and parameters
   - Removed deprecated models

2. `models.ts`:
   - Added new Models API resource

3. `batches.ts`:
   - Updated Batches API implementation

4. `beta/`:
   - Removed prompt caching
   - Updated beta models and messages

5. `core.ts`:
   - Added support for request IDs
   - Improved error handling

6. `README.md`:
   - Updated examples and documentation

7. `CHANGELOG.md`:
   - Added entries for new versions (0.33.1 and 0.33.0)

8. Various test files:
   - Updated and added tests for new functionalities

9. Package updates:
   - Updated version to 0.33.1
   - Updated dependencies
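Not part of the PR itself, but as a quick orientation: the request shapes for the endpoints this update adds can be read directly off the new test files quoted above. The sketch below transcribes those payloads; the commented-out calls are illustrative only and assume an `Anthropic` client configured with an API key.

```typescript
// Request payloads for the endpoints added in 0.33.0, transcribed from the
// diff's test files (messages.countTokens, messages.batches.create, models.list).
const countTokensRequest = {
  messages: [{ content: 'Hello, world', role: 'user' }],
  model: 'claude-3-5-sonnet-20241022',
};

const batchCreateRequest = {
  requests: [
    {
      custom_id: 'my-custom-id-1',
      params: {
        max_tokens: 1024,
        messages: [{ content: 'Hello, world', role: 'user' }],
        model: 'claude-3-5-sonnet-20241022',
      },
    },
  ],
};

// Illustrative calls (require a configured client and network access):
// const count = await client.messages.countTokens(countTokensRequest);
// const batch = await client.messages.batches.create(batchCreateRequest);
// const models = await client.models.list();

console.log(batchCreateRequest.requests.map((r) => r.custom_id)); // → [ 'my-custom-id-1' ]
```

The batch API wraps ordinary `messages.create` params in per-request `params` objects keyed by `custom_id`, which is why the same message/model fields appear at both levels of the tests.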

```mermaid
sequenceDiagram
    participant Client
    participant Anthropic
    participant Messages
    participant Models
    participant Batches

    Client->>Anthropic: Create client
    Anthropic->>Messages: Initialize Messages resource
    Anthropic->>Models: Initialize Models resource
    Anthropic->>Batches: Initialize Batches resource
    Client->>Messages: Create message
    Messages->>Anthropic: Send API request
    Anthropic-->>Messages: Return response with request ID
    Messages-->>Client: Return parsed response
    Client->>Models: List models
    Models->>Anthropic: Send API request
    Anthropic-->>Models: Return response
    Models-->>Client: Return parsed response
    Client->>Batches: Create message batch
    Batches->>Anthropic: Send API request
    Anthropic-->>Batches: Return response
    Batches-->>Client: Return parsed response
```

</details>

### Possible Issues

- The removal of prompt caching and some deprecated models might cause issues for users relying on these features.
- Changes to the API structure and types may require updates to existing code using the SDK.

### Security Hotspots

No significant security issues were identified in this change.


bedrock debug - [puLL-Merge] - anthropics/[email protected]

Diff
diff --git .github/workflows/handle-release-pr-title-edit.yml .github/workflows/handle-release-pr-title-edit.yml
deleted file mode 100644
index 8144aaae..00000000
--- .github/workflows/handle-release-pr-title-edit.yml
+++ /dev/null
@@ -1,26 +0,0 @@
-name: Handle release PR title edits
-on:
-  pull_request:
-    types:
-      - edited
-      - unlabeled
-
-jobs:
-  update_pr_content:
-    name: Update pull request content
-    if: |
-      ((github.event.action == 'edited' && github.event.changes.title.from != github.event.pull_request.title) ||
-      (github.event.action == 'unlabeled' && github.event.label.name == 'autorelease: custom version')) &&
-      startsWith(github.event.pull_request.head.ref, 'release-please--') &&
-      github.event.pull_request.state == 'open' &&
-      github.event.sender.login != 'stainless-bot' &&
-      github.event.sender.login != 'stainless-app' &&
-      github.repository == 'anthropics/anthropic-sdk-typescript'
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v4
-      - uses: stainless-api/trigger-release-please@v1
-        with:
-          repo: ${{ github.event.repository.full_name }}
-          stainless-api-key: ${{ secrets.STAINLESS_API_KEY }}
-
diff --git .release-please-manifest.json .release-please-manifest.json
index e6b9ab03..2053c67b 100644
--- .release-please-manifest.json
+++ .release-please-manifest.json
@@ -1,5 +1,5 @@
 {
-  ".": "0.32.1",
-  "packages/vertex-sdk": "0.5.2",
-  "packages/bedrock-sdk": "0.11.2"
+  ".": "0.33.1",
+  "packages/vertex-sdk": "0.6.1",
+  "packages/bedrock-sdk": "0.12.0"
 }
diff --git .stats.yml .stats.yml
index ebe0695a..19e9daeb 100644
--- .stats.yml
+++ .stats.yml
@@ -1,2 +1,2 @@
-configured_endpoints: 10
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/anthropic-25f83d91f601c1962b3701fedf829f678f306aca0758af286ee1586cc9931f75.yml
+configured_endpoints: 19
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/anthropic-be055148d227480fcacc9086c37ac8009dcb487731069ada51af35044f65bee4.yml
diff --git CHANGELOG.md CHANGELOG.md
index f332e42a..a1a57c52 100644
--- CHANGELOG.md
+++ CHANGELOG.md
@@ -1,5 +1,61 @@
 # Changelog
 
+## 0.33.1 (2024-12-17)
+
+Full Changelog: [sdk-v0.33.0...sdk-v0.33.1](https://github.com/anthropics/anthropic-sdk-typescript/compare/sdk-v0.33.0...sdk-v0.33.1)
+
+### Bug Fixes
+
+* **vertex:** remove `anthropic_version` deletion for token counting ([88221be](https://github.com/anthropics/anthropic-sdk-typescript/commit/88221be305d6e13ccf92e6e9cdb00daba45b57db))
+
+
+### Chores
+
+* **internal:** fix some typos ([#633](https://github.com/anthropics/anthropic-sdk-typescript/issues/633)) ([a0298f5](https://github.com/anthropics/anthropic-sdk-typescript/commit/a0298f5f67b8ecd25de416dbb3eada68b86befd7))
+
+## 0.33.0 (2024-12-17)
+
+Full Changelog: [sdk-v0.32.1...sdk-v0.33.0](https://github.com/anthropics/anthropic-sdk-typescript/compare/sdk-v0.32.1...sdk-v0.33.0)
+
+### Features
+
+* **api:** general availability updates ([93d1316](https://github.com/anthropics/anthropic-sdk-typescript/commit/93d13168f950b2cdfc3b7c6664205b06418fea79))
+* **api:** general availability updates ([#631](https://github.com/anthropics/anthropic-sdk-typescript/issues/631)) ([b5c92e5](https://github.com/anthropics/anthropic-sdk-typescript/commit/b5c92e5b74c370ac3f9ba28e915bd54588a42be0))
+* **client:** add ._request_id property to object responses ([#596](https://github.com/anthropics/anthropic-sdk-typescript/issues/596)) ([9d6d584](https://github.com/anthropics/anthropic-sdk-typescript/commit/9d6d58430a216df9888434158bf628ae4b067aba))
+* **internal:** make git install file structure match npm ([#617](https://github.com/anthropics/anthropic-sdk-typescript/issues/617)) ([d3dd7d5](https://github.com/anthropics/anthropic-sdk-typescript/commit/d3dd7d5f8cad460dd18725d5c0f3c8db3f00115d))
+* **vertex:** support token counting ([9e76b4d](https://github.com/anthropics/anthropic-sdk-typescript/commit/9e76b4dc22d62b1239b382bb771b69ad8cff9442))
+
+
+### Bug Fixes
+
+* **docs:** add missing await to pagination example ([#609](https://github.com/anthropics/anthropic-sdk-typescript/issues/609)) ([e303077](https://github.com/anthropics/anthropic-sdk-typescript/commit/e303077ebab73c41adee7d25375b767c3fc78998))
+* **types:** remove anthropic-instant-1.2 model ([#599](https://github.com/anthropics/anthropic-sdk-typescript/issues/599)) ([e222a4d](https://github.com/anthropics/anthropic-sdk-typescript/commit/e222a4d0518aa80671c66ee2a25d87dc87a51316))
+
+
+### Chores
+
+* **api:** update spec version ([#607](https://github.com/anthropics/anthropic-sdk-typescript/issues/607)) ([ea44f9a](https://github.com/anthropics/anthropic-sdk-typescript/commit/ea44f9ac49dcc25a5dfa53880ebf61318ee90f6c))
+* **api:** update spec version ([#629](https://github.com/anthropics/anthropic-sdk-typescript/issues/629)) ([a25295c](https://github.com/anthropics/anthropic-sdk-typescript/commit/a25295cd6db7b57162fdd9049eb8a3c37bb94f08))
+* **bedrock,vertex:** remove unsupported countTokens method ([#597](https://github.com/anthropics/anthropic-sdk-typescript/issues/597)) ([17b7da5](https://github.com/anthropics/anthropic-sdk-typescript/commit/17b7da5ee6f35ea2bdd53a66a662871affae6341))
+* **bedrock:** remove unsupported methods ([6458dc1](https://github.com/anthropics/anthropic-sdk-typescript/commit/6458dc14544c16240a6580a21a36fcf5bde594b2))
+* **ci:** remove unneeded workflow ([#594](https://github.com/anthropics/anthropic-sdk-typescript/issues/594)) ([7572e48](https://github.com/anthropics/anthropic-sdk-typescript/commit/7572e48dbccb2090562399c7ff2d01503c86f445))
+* **client:** drop unused devDependency ([#610](https://github.com/anthropics/anthropic-sdk-typescript/issues/610)) ([5d0d523](https://github.com/anthropics/anthropic-sdk-typescript/commit/5d0d523390d8c34cae836c423940b67defb9d2aa))
+* improve browser error message ([#613](https://github.com/anthropics/anthropic-sdk-typescript/issues/613)) ([c26121e](https://github.com/anthropics/anthropic-sdk-typescript/commit/c26121e84039b7430995b6363876ea9795ba31ed))
+* **internal:** bump cross-spawn to v7.0.6 ([#624](https://github.com/anthropics/anthropic-sdk-typescript/issues/624)) ([e58ba9a](https://github.com/anthropics/anthropic-sdk-typescript/commit/e58ba9a177ec5c8545fd3a3f4fd3d2e7c722f023))
+* **internal:** remove unnecessary getRequestClient function ([#623](https://github.com/anthropics/anthropic-sdk-typescript/issues/623)) ([882c45f](https://github.com/anthropics/anthropic-sdk-typescript/commit/882c45f5a0bd1f4b996d59e6589a205c2111f46b))
+* **internal:** update isAbsoluteURL ([#627](https://github.com/anthropics/anthropic-sdk-typescript/issues/627)) ([2528ea0](https://github.com/anthropics/anthropic-sdk-typescript/commit/2528ea0dcfc83f38e76b58eaadaa5e8c5c0b188d))
+* **internal:** update spec ([#630](https://github.com/anthropics/anthropic-sdk-typescript/issues/630)) ([82cac06](https://github.com/anthropics/anthropic-sdk-typescript/commit/82cac065e2711467773c0ea62848cdf139ed5a11))
+* **internal:** use reexports not destructuring ([#604](https://github.com/anthropics/anthropic-sdk-typescript/issues/604)) ([e4daff2](https://github.com/anthropics/anthropic-sdk-typescript/commit/e4daff2b6a3fb42876ebd06ed4947c88cff919d8))
+* remove redundant word in comment ([#615](https://github.com/anthropics/anthropic-sdk-typescript/issues/615)) ([ef57a10](https://github.com/anthropics/anthropic-sdk-typescript/commit/ef57a103bcfc922a724a7c878f970dbd369b305e))
+* **tests:** limit array example length ([#611](https://github.com/anthropics/anthropic-sdk-typescript/issues/611)) ([91dc181](https://github.com/anthropics/anthropic-sdk-typescript/commit/91dc1812db2cc9e1f4660a13106bad932518b7cf))
+* **types:** nicer error class types + jsdocs ([#626](https://github.com/anthropics/anthropic-sdk-typescript/issues/626)) ([0287993](https://github.com/anthropics/anthropic-sdk-typescript/commit/0287993912ef81bd2c49603d120f49f4f979d75e))
+
+
+### Documentation
+
+* remove suggestion to use `npm` call out ([#614](https://github.com/anthropics/anthropic-sdk-typescript/issues/614)) ([6369261](https://github.com/anthropics/anthropic-sdk-typescript/commit/6369261e3597351f17b8f1a3945ca56b00eba177))
+* use latest sonnet in example snippets ([#625](https://github.com/anthropics/anthropic-sdk-typescript/issues/625)) ([f70882b](https://github.com/anthropics/anthropic-sdk-typescript/commit/f70882b0e8119a414b01b9f0b85fbe1ccb06f122))
+
 ## 0.32.1 (2024-11-05)
 
 Full Changelog: [sdk-v0.32.0...sdk-v0.32.1](https://github.com/anthropics/anthropic-sdk-typescript/compare/sdk-v0.32.0...sdk-v0.32.1)
diff --git README.md README.md
index daba6e63..da3db48e 100644
--- README.md
+++ README.md
@@ -28,7 +28,7 @@ async function main() {
   const message = await client.messages.create({
     max_tokens: 1024,
     messages: [{ role: 'user', content: 'Hello, Claude' }],
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
   });
 
   console.log(message.content);
@@ -49,7 +49,7 @@ const client = new Anthropic();
 const stream = await client.messages.create({
   max_tokens: 1024,
   messages: [{ role: 'user', content: 'Hello, Claude' }],
-  model: 'claude-3-opus-20240229',
+  model: 'claude-3-5-sonnet-latest',
   stream: true,
 });
 for await (const messageStreamEvent of stream) {
@@ -76,7 +76,7 @@ async function main() {
   const params: Anthropic.MessageCreateParams = {
     max_tokens: 1024,
     messages: [{ role: 'user', content: 'Hello, Claude' }],
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
   };
   const message: Anthropic.Message = await client.messages.create(params);
 }
@@ -108,7 +108,7 @@ const anthropic = new Anthropic();
 async function main() {
   const stream = anthropic.messages
     .stream({
-      model: 'claude-3-opus-20240229',
+      model: 'claude-3-5-sonnet-latest',
       max_tokens: 1024,
       messages: [
         {
@@ -146,7 +146,7 @@ await anthropic.beta.messages.batches.create({
     {
       custom_id: 'my-first-request',
       params: {
-        model: 'claude-3-5-sonnet-20240620',
+        model: 'claude-3-5-sonnet-latest',
         max_tokens: 1024,
         messages: [{ role: 'user', content: 'Hello, world' }],
       },
@@ -154,7 +154,7 @@ await anthropic.beta.messages.batches.create({
     {
       custom_id: 'my-second-request',
       params: {
-        model: 'claude-3-5-sonnet-20240620',
+        model: 'claude-3-5-sonnet-latest',
         max_tokens: 1024,
         messages: [{ role: 'user', content: 'Hi again, friend' }],
       },
@@ -198,7 +198,7 @@ async function main() {
     .create({
       max_tokens: 1024,
       messages: [{ role: 'user', content: 'Hello, Claude' }],
-      model: 'claude-3-opus-20240229',
+      model: 'claude-3-5-sonnet-latest',
     })
     .catch(async (err) => {
       if (err instanceof Anthropic.APIError) {
@@ -227,6 +227,18 @@ Error codes are as followed:
 | >=500       | `InternalServerError`      |
 | N/A         | `APIConnectionError`       |
 
+## Request IDs
+
+> For more information on debugging requests, see [these docs](https://docs.anthropic.com/en/api/errors#request-id)
+
+All object responses in the SDK provide a `_request_id` property which is added from the `request-id` response header so that you can quickly log failing requests and report them back to Anthropic.
+
+```ts
+const message = await client.messages.create({ max_tokens: 1024, messages: [{ role: 'user', content: 'Hello, Claude' }], model: 'claude-3-5-sonnet-latest' });
+console.log(message._request_id); // req_018EeWyXxfu5pfWkrYcMdjWG
+```
+
+
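For context on how this property behaves, here is a minimal sketch mirroring the `_addRequestID` helper added to `src/core.ts` in this diff (the standalone `addRequestID` name and simplified signature are ours, for illustration): the header value is attached as a non-enumerable property, so it is readable on the response object but invisible to `JSON.stringify` and object spreads.

```typescript
// Sketch of how the SDK attaches the request ID (mirrors _addRequestID in src/core.ts).
// The property is defined non-enumerable, so serialization of the response is unchanged.
function addRequestID<T extends Record<string, unknown>>(
  value: T,
  requestId: string | null,
): T & { _request_id?: string | null } {
  return Object.defineProperty(value, '_request_id', {
    value: requestId,
    enumerable: false,
  }) as T & { _request_id?: string | null };
}

const message = addRequestID({ id: 'msg_123', role: 'assistant' }, 'req_018EeWyXxfu5pfWkrYcMdjWG');
console.log(message._request_id); // req_018EeWyXxfu5pfWkrYcMdjWG
console.log(JSON.stringify(message)); // {"id":"msg_123","role":"assistant"} — no _request_id
```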
 ### Retries
 
 Certain errors will be automatically retried 2 times by default, with a short exponential backoff.
@@ -243,7 +255,7 @@ const client = new Anthropic({
 });
 
 // Or, configure per-request:
-await client.messages.create({ max_tokens: 1024, messages: [{ role: 'user', content: 'Hello, Claude' }], model: 'claude-3-opus-20240229' }, {
+await client.messages.create({ max_tokens: 1024, messages: [{ role: 'user', content: 'Hello, Claude' }], model: 'claude-3-5-sonnet-latest' }, {
   maxRetries: 5,
 });

@@ -260,7 +272,7 @@ const client = new Anthropic({
});

// Override per-request:
-await client.messages.create({ max_tokens: 1024, messages: [{ role: 'user', content: 'Hello, Claude' }], model: 'claude-3-opus-20240229' }, {
+await client.messages.create({ max_tokens: 1024, messages: [{ role: 'user', content: 'Hello, Claude' }], model: 'claude-3-5-sonnet-latest' }, {
timeout: 5 * 1000,
});

@@ -295,7 +307,7 @@ for (const betaMessageBatch of page.data) {

// Convenience methods are provided for manually paginating:
while (page.hasNextPage()) {
-  page = page.getNextPage();
+  page = await page.getNextPage();
  // ...
}

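The `await` added above matters because `getNextPage()` returns a Promise; without it, `page` would be reassigned to a pending Promise and the loop would break. A toy sketch (the `Page` class here is a stand-in for illustration, not the SDK's pagination classes):

```typescript
// Toy stand-in for the SDK's page objects, showing why the snippet now awaits getNextPage().
class Page<T> {
  constructor(
    public data: T[],
    private next: Page<T> | null,
  ) {}
  hasNextPage(): boolean {
    return this.next !== null;
  }
  async getNextPage(): Promise<Page<T>> {
    if (!this.next) throw new Error('no next page');
    return this.next;
  }
}

async function collectAll<T>(first: Page<T>): Promise<T[]> {
  const items = [...first.data];
  let page = first;
  while (page.hasNextPage()) {
    page = await page.getNextPage(); // dropping `await` would assign a Promise, not a Page
    items.push(...page.data);
  }
  return items;
}

collectAll(new Page([1, 2], new Page([3], null))).then((all) => console.log(all)); // [ 1, 2, 3 ]
```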
@@ -317,7 +329,7 @@ const message = await client.messages.create(
   {
     max_tokens: 1024,
     messages: [{ role: 'user', content: 'Hello, Claude' }],
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
   },
   { headers: { 'anthropic-version': 'My-Custom-Value' } },
 );
@@ -339,7 +351,7 @@ const response = await client.messages
   .create({
     max_tokens: 1024,
     messages: [{ role: 'user', content: 'Hello, Claude' }],
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
   })
   .asResponse();
 console.log(response.headers.get('X-My-Header'));
@@ -349,7 +361,7 @@ const { data: message, response: raw } = await client.messages
   .create({
     max_tokens: 1024,
     messages: [{ role: 'user', content: 'Hello, Claude' }],
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
   })
   .withResponse();
 console.log(raw.headers.get('X-My-Header'));
@@ -461,7 +473,7 @@ await client.messages.create(
   {
     max_tokens: 1024,
     messages: [{ role: 'user', content: 'Hello, Claude' }],
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
   },
   {
     httpAgent: new http.Agent({ keepAlive: false }),
@@ -488,7 +500,7 @@ TypeScript >= 4.5 is supported.
 The following runtimes are supported:
 
 - Node.js 18 LTS or later (non-EOL) versions.
-- Deno v1.28.0 or higher, using `import Anthropic from "npm:@anthropic-ai/sdk"`.
+- Deno v1.28.0 or higher.
 - Bun 1.0 or later.
 - Cloudflare Workers.
 - Vercel Edge Runtime.
diff --git api.md api.md
index ab1abd4c..48d1c9a8 100644
--- api.md
+++ api.md
    @@ -1,49 +1,103 @@

# Anthropic

+# Shared
+
+Types:
+
+- APIErrorObject
+- AuthenticationError
+- BillingError
+- ErrorObject
+- ErrorResponse
+- GatewayTimeoutError
+- InvalidRequestError
+- NotFoundError
+- OverloadedError
+- PermissionError
+- RateLimitError
+

# Messages

Types:

-- ContentBlock
-- ContentBlockDeltaEvent
-- ContentBlockStartEvent
-- ContentBlockStopEvent
-- ImageBlockParam
-- InputJSONDelta
-- Message
-- MessageDeltaEvent
-- MessageDeltaUsage
-- MessageParam
-- MessageStartEvent
-- MessageStopEvent
-- MessageStreamEvent
-- Metadata
-- Model
-- RawContentBlockDeltaEvent
-- RawContentBlockStartEvent
-- RawContentBlockStopEvent
-- RawMessageDeltaEvent
-- RawMessageStartEvent
-- RawMessageStopEvent
-- RawMessageStreamEvent
-- TextBlock
-- TextBlockParam
-- TextDelta
-- Tool
-- ToolChoice
-- ToolChoiceAny
-- ToolChoiceAuto
-- ToolChoiceTool
-- ToolResultBlockParam
-- ToolUseBlock
-- ToolUseBlockParam
-- Usage
+- Base64PDFSource
+- CacheControlEphemeral
+- ContentBlock
+- ContentBlockDeltaEvent
+- ContentBlockParam
+- ContentBlockStartEvent
+- ContentBlockStopEvent
+- DocumentBlockParam
+- ImageBlockParam
+- InputJSONDelta
+- Message
+- MessageDeltaEvent
+- MessageDeltaUsage
+- MessageParam
+- MessageStartEvent
+- MessageStopEvent
+- MessageStreamEvent
+- MessageTokensCount
+- Metadata
+- Model
+- RawContentBlockDeltaEvent
+- RawContentBlockStartEvent
+- RawContentBlockStopEvent
+- RawMessageDeltaEvent
+- RawMessageStartEvent
+- RawMessageStopEvent
+- RawMessageStreamEvent
+- TextBlock
+- TextBlockParam
+- TextDelta
+- Tool
+- ToolChoice
+- ToolChoiceAny
+- ToolChoiceAuto
+- ToolChoiceTool
+- ToolResultBlockParam
+- ToolUseBlock
+- ToolUseBlockParam
+- Usage

Methods:

-- client.messages.create({ ...params }) -> Message
+- client.messages.create({ ...params }) -> Message
+- client.messages.countTokens({ ...params }) -> MessageTokensCount

- client.messages.stream(body, options?) -> MessageStream

+## Batches
+
+Types:
+
+- MessageBatch
+- MessageBatchCanceledResult
+- MessageBatchErroredResult
+- MessageBatchExpiredResult
+- MessageBatchIndividualResponse
+- MessageBatchRequestCounts
+- MessageBatchResult
+- MessageBatchSucceededResult
+
+Methods:
+
+- client.messages.batches.create({ ...params }) -> MessageBatch
+- client.messages.batches.retrieve(messageBatchId) -> MessageBatch
+- client.messages.batches.list({ ...params }) -> MessageBatchesPage
+- client.messages.batches.cancel(messageBatchId) -> MessageBatch
+- client.messages.batches.results(messageBatchId) -> Response
+
+# Models
+
+Types:
+
+- ModelInfo
+
+Methods:
+
+- client.models.retrieve(modelId) -> ModelInfo
+- client.models.list({ ...params }) -> ModelInfosPage
+

# Beta

Types:
@@ -51,14 +105,27 @@ Types:

+## Models
+
+Types:
+
+- BetaModelInfo
+
+Methods:
+
+- client.beta.models.retrieve(modelId) -> BetaModelInfo
+- client.beta.models.list({ ...params }) -> BetaModelInfosPage
+

## Messages

Types:
@@ -124,26 +191,3 @@ Methods:

- client.beta.messages.batches.list({ ...params }) -> BetaMessageBatchesPage
- client.beta.messages.batches.cancel(messageBatchId, { ...params }) -> BetaMessageBatch
- client.beta.messages.batches.results(messageBatchId, { ...params }) -> Response

-## PromptCaching
-
-### Messages
-
-Types:
-
-- PromptCachingBetaCacheControlEphemeral
-- PromptCachingBetaImageBlockParam
-- PromptCachingBetaMessage
-- PromptCachingBetaMessageParam
-- PromptCachingBetaTextBlockParam
-- PromptCachingBetaTool
-- PromptCachingBetaToolResultBlockParam
-- PromptCachingBetaToolUseBlockParam
-- PromptCachingBetaUsage
-- RawPromptCachingBetaMessageStartEvent
-- RawPromptCachingBetaMessageStreamEvent
-
-Methods:
-
-- client.beta.promptCaching.messages.create({ ...params }) -> PromptCachingBetaMessage
-- client.beta.promptCaching.messages.stream({ ...params }) -> PromptCachingBetaMessageStream
diff --git examples/cancellation.ts examples/cancellation.ts
index 23fb7ec9..fc8bb0c7 100755
--- examples/cancellation.ts
+++ examples/cancellation.ts
@@ -16,7 +16,7 @@ async function main() {
   const question = 'Hey Claude! How can I recursively list all files in a directory in Rust?';
 
   const stream = await client.messages.create({
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
     stream: true,
     max_tokens: 500,
     messages: [{ role: 'user', content: question }],
diff --git a/examples/count-tokens.ts b/examples/count-tokens.ts
new file mode 100755
index 00000000..e69de29b
diff --git examples/demo.ts examples/demo.ts
index 609e63ef..33fc2d87 100755
--- examples/demo.ts
+++ examples/demo.ts
@@ -12,7 +12,7 @@ async function main() {
       content: 'Hey Claude!?',
     },
   ],
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
     max_tokens: 1024,
   });
   console.dir(result);
diff --git examples/raw-streaming.ts examples/raw-streaming.ts
index 559a6cac..916f2a4d 100755
--- examples/raw-streaming.ts
+++ examples/raw-streaming.ts
@@ -6,7 +6,7 @@ const client = new Anthropic(); // gets API Key from environment variable ANTHRO
 
 async function main() {
   const stream = await client.messages.create({
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
     stream: true,
     max_tokens: 500,
     messages: [
diff --git examples/streaming.ts examples/streaming.ts
index 9ac2da60..bc2c74bd 100755
--- examples/streaming.ts
+++ examples/streaming.ts
@@ -13,7 +13,7 @@ async function main() {
         content: `Hey Claude! How can I recursively list all files in a directory in Rust?`,
       },
     ],
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
     max_tokens: 1024,
   })
   // Once a content block is fully streamed, this event will fire
diff --git examples/tools-streaming.ts examples/tools-streaming.ts
index 96d9cbdc..816201f2 100644
--- examples/tools-streaming.ts
+++ examples/tools-streaming.ts
@@ -33,7 +33,7 @@ async function main() {
       },
     },
   ],
-      model: 'claude-3-haiku-20240307',
+      model: 'claude-3-5-sonnet-latest',
       max_tokens: 1024,
     })
     // When a JSON content block delta is encountered this
diff --git examples/tools.ts examples/tools.ts
index b237043b..1a696bc0 100644
--- examples/tools.ts
+++ examples/tools.ts
@@ -22,7 +22,7 @@ async function main() {
   ];
 
   const message = await client.messages.create({
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
     max_tokens: 1024,
     messages: [userMessage],
     tools,
@@ -38,7 +38,7 @@ async function main() {
   assert(tool);
 
   const result = await client.messages.create({
-    model: 'claude-3-opus-20240229',
+    model: 'claude-3-5-sonnet-latest',
     max_tokens: 1024,
     messages: [
       userMessage,
diff --git package.json package.json
index f713c04f..d8f88f57 100644
--- package.json
+++ package.json
@@ -1,6 +1,6 @@
 {
   "name": "@anthropic-ai/sdk",
-  "version": "0.32.1",
+  "version": "0.33.1",
   "description": "The official TypeScript library for the Anthropic API",
   "author": "Anthropic [email protected]",
   "types": "dist/index.d.ts",
@@ -18,7 +18,7 @@
     "build": "./scripts/build-all",
     "prepublishOnly": "echo 'to publish, run yarn build && (cd dist; yarn publish)' && exit 1",
     "format": "prettier --write --cache --cache-strategy metadata . !dist",
-    "prepare": "if ./scripts/utils/check-is-in-git-install.sh; then ./scripts/build; fi",
+    "prepare": "if ./scripts/utils/check-is-in-git-install.sh; then ./scripts/build && ./scripts/utils/git-swap.sh; fi",
     "tsn": "ts-node -r tsconfig-paths/register",
     "lint": "./scripts/lint",
     "fix": "./scripts/format"
@@ -45,7 +45,6 @@
     "jest": "^29.4.0",
     "prettier": "^3.0.0",
     "ts-jest": "^29.1.0",
-    "ts-morph": "^19.0.0",
     "ts-node": "^10.5.0",
     "tsc-multi": "^1.1.0",
     "tsconfig-paths": "^4.0.0",
diff --git packages/bedrock-sdk/CHANGELOG.md packages/bedrock-sdk/CHANGELOG.md
index 174cbb90..837af37e 100644
--- packages/bedrock-sdk/CHANGELOG.md
+++ packages/bedrock-sdk/CHANGELOG.md
@@ -1,5 +1,24 @@
 # Changelog
 
+## 0.12.0 (2024-12-17)
+
+Full Changelog: bedrock-sdk-v0.11.2...bedrock-sdk-v0.12.0
+
+### Features
+
+* api: general availability updates (#631) (b5c92e5)
+
+
+### Chores
+
+* bedrock,vertex: remove unsupported countTokens method (#597) (17b7da5)
+* bedrock: remove unsupported methods (6458dc1)
+
+
+### Documentation
+
+* use latest sonnet in example snippets (#625) (f70882b)
+
 ## 0.11.2 (2024-11-05)
 
 Full Changelog: bedrock-sdk-v0.11.1...bedrock-sdk-v0.11.2
diff --git packages/bedrock-sdk/README.md packages/bedrock-sdk/README.md
index f6eca6f5..74765c47 100644
--- packages/bedrock-sdk/README.md
+++ packages/bedrock-sdk/README.md
@@ -27,7 +27,7 @@ const client = new AnthropicBedrock();
 
 async function main() {
   const message = await client.messages.create({
-    model: 'anthropic.claude-3-sonnet-20240229-v1:0',
+    model: 'anthropic.claude-3-5-sonnet-20241022-v2:0',
     messages: [
       {
         role: 'user',
diff --git packages/bedrock-sdk/examples/demo.ts packages/bedrock-sdk/examples/demo.ts
index 810514e8..a918b9ca 100644
--- packages/bedrock-sdk/examples/demo.ts
+++ packages/bedrock-sdk/examples/demo.ts
@@ -11,7 +11,7 @@ const anthropic = new AnthropicBedrock();
 
 async function main() {
   const message = await anthropic.messages.create({
-    model: 'anthropic.claude-3-sonnet-20240229-v1:0',
+    model: 'anthropic.claude-3-5-sonnet-20241022-v2:0',
     messages: [
       {
         role: 'user',
diff --git packages/bedrock-sdk/examples/streaming.ts packages/bedrock-sdk/examples/streaming.ts
index e1fac81f..5c577a2d 100644
--- packages/bedrock-sdk/examples/streaming.ts
+++ packages/bedrock-sdk/examples/streaming.ts
@@ -11,7 +11,7 @@ const client = new AnthropicBedrock();
 
 async function main() {
   const stream = await client.messages.create({
-    model: 'anthropic.claude-3-sonnet-20240229-v1:0',
+    model: 'anthropic.claude-3-5-sonnet-20241022-v2:0',
     messages: [
       {
         role: 'user',
diff --git packages/bedrock-sdk/package.json packages/bedrock-sdk/package.json
index a0e56703..352931a5 100644
--- packages/bedrock-sdk/package.json
+++ packages/bedrock-sdk/package.json
@@ -1,6 +1,6 @@
 {
   "name": "@anthropic-ai/bedrock-sdk",
-  "version": "0.11.2",
+  "version": "0.12.0",
   "description": "The official TypeScript library for the Anthropic Bedrock API",
   "author": "Anthropic [email protected]",
   "types": "dist/index.d.ts",
diff --git packages/bedrock-sdk/src/client.ts packages/bedrock-sdk/src/client.ts
index 523df8ba..86bd17ef 100644
--- packages/bedrock-sdk/src/client.ts
+++ packages/bedrock-sdk/src/client.ts
@@ -74,7 +74,7 @@ export class AnthropicBedrock extends Core.APIClient {
     this.awsSessionToken = awsSessionToken;
   }
 
-  messages: Resources.Messages = new Resources.Messages(this);
+  messages: MessagesResource = makeMessagesResource(this);
   completions: Resources.Completions = new Resources.Completions(this);
   beta: BetaResource = makeBetaResource(this);
 
@@ -159,10 +159,27 @@ export class AnthropicBedrock extends Core.APIClient {
 }
 
 /**
- * The Bedrock API does not currently support prompt caching or the Batch API.
+ * The Bedrock API does not currently support token counting or the Batch API.
+ */
+type MessagesResource = Omit<Resources.Messages, 'batches' | 'countTokens'>;
+
+function makeMessagesResource(client: AnthropicBedrock): MessagesResource {
+  const resource = new Resources.Messages(client);
+
+  // @ts-expect-error we're deleting non-optional properties
+  delete resource.batches;
+
+  // @ts-expect-error we're deleting non-optional properties
+  delete resource.countTokens;
+
+  return resource;
+}
+
+/**
+ * The Bedrock API does not currently support prompt caching, token counting or the Batch API.
  */
 type BetaResource = Omit<Resources.Beta, 'promptCaching' | 'messages'> & {
-  messages: Omit<Resources.Beta['messages'], 'batches'>;
+  messages: Omit<Resources.Beta['messages'], 'batches' | 'countTokens'>;
 };
 
 function makeBetaResource(client: AnthropicBedrock): BetaResource {
@@ -174,5 +191,8 @@ function makeBetaResource(client: AnthropicBedrock): BetaResource {
   // @ts-expect-error we're deleting non-optional properties
   delete resource.messages.batches;
 
+  // @ts-expect-error we're deleting non-optional properties
+  delete resource.messages.countTokens;
+
   return resource;
 }
diff --git packages/vertex-sdk/CHANGELOG.md packages/vertex-sdk/CHANGELOG.md
index 418af52a..94191164 100644
--- packages/vertex-sdk/CHANGELOG.md
+++ packages/vertex-sdk/CHANGELOG.md
@@ -1,5 +1,32 @@
 # Changelog
 
+## 0.6.1 (2024-12-17)
+
+Full Changelog: vertex-sdk-v0.6.0...vertex-sdk-v0.6.1
+
+### Bug Fixes
+
+* vertex: remove anthropic_version deletion for token counting (88221be)
+
+## 0.6.0 (2024-12-17)
+
+Full Changelog: vertex-sdk-v0.5.2...vertex-sdk-v0.6.0
+
+### Features
+
+* api: general availability updates (#631) (b5c92e5)
+* vertex: support token counting (9e76b4d)
+
+
+### Chores
+
+* bedrock,vertex: remove unsupported countTokens method (#597) (17b7da5)
+
+
+### Documentation
+
+* use latest sonnet in example snippets (#625) (f70882b)
+
 ## 0.5.2 (2024-11-05)
 
 Full Changelog: vertex-sdk-v0.5.1...vertex-sdk-v0.5.2
diff --git packages/vertex-sdk/README.md packages/vertex-sdk/README.md
index 6e63a8c5..6c9a9c93 100644
--- packages/vertex-sdk/README.md
+++ packages/vertex-sdk/README.md
@@ -30,7 +30,7 @@ async function main() {
       content: 'Hey Claude!',
     },
   ],
-    model: 'claude-3-sonnet@20240229',
+    model: 'claude-3-5-sonnet-v2@20241022',
     max_tokens: 300,
   });
   console.log(JSON.stringify(result, null, 2));
diff --git packages/vertex-sdk/examples/vertex.ts packages/vertex-sdk/examples/vertex.ts
index 62474cc7..75aba347 100644
--- packages/vertex-sdk/examples/vertex.ts
+++ packages/vertex-sdk/examples/vertex.ts
@@ -14,7 +14,7 @@ async function main() {
       content: 'Hello!',
     },
   ],
-    model: 'claude-3-sonnet@20240229',
+    model: 'claude-3-5-sonnet-v2@20241022',
     max_tokens: 300,
   });
   console.log(JSON.stringify(result, null, 2));
diff --git packages/vertex-sdk/package.json packages/vertex-sdk/package.json
index 210c96d5..43fc356d 100644
--- packages/vertex-sdk/package.json
+++ packages/vertex-sdk/package.json
@@ -1,6 +1,6 @@
 {
   "name": "@anthropic-ai/vertex-sdk",
-  "version": "0.5.2",
+  "version": "0.6.1",
   "description": "The official TypeScript library for the Anthropic Vertex API",
   "author": "Anthropic [email protected]",
   "types": "dist/index.d.ts",
diff --git packages/vertex-sdk/src/client.ts packages/vertex-sdk/src/client.ts
index 06231649..f1046455 100644
--- packages/vertex-sdk/src/client.ts
+++ packages/vertex-sdk/src/client.ts
@@ -83,7 +83,7 @@ export class AnthropicVertex extends Core.APIClient {
     this._authClientPromise = this._auth.getClient();
   }
 
-  messages: Resources.Messages = new Resources.Messages(this);
+  messages: MessagesResource = makeMessagesResource(this);
   beta: BetaResource = makeBetaResource(this);
 
   protected override defaultQuery(): Core.DefaultQuery | undefined {
@@ -147,15 +147,42 @@ export class AnthropicVertex extends Core.APIClient {
       options.path = `/projects/${this.projectId}/locations/${this.region}/publishers/anthropic/models/${model}:${specifier}`;
     }
 
+    if (
+      options.path === '/v1/messages/count_tokens' ||
+      (options.path == '/v1/messages/count_tokens?beta=true' && options.method === 'post')
+    ) {
+      if (!this.projectId) {
+        throw new Error(
+          'No projectId was given and it could not be resolved from credentials. The client should be instantiated with the `projectId` option or the `ANTHROPIC_VERTEX_PROJECT_ID` environment variable should be set.',
+        );
+      }
+
+      options.path = `/projects/${this.projectId}/locations/${this.region}/publishers/anthropic/models/count-tokens:rawPredict`;
+    }
+
     return super.buildRequest(options);
   }
 }
 
 /**
- * The Vertex API does not currently support prompt caching or the Batch API.
+ * The Vertex SDK does not currently support the Batch API.
+ */
+type MessagesResource = Omit<Resources.Messages, 'batches'>;
+
+function makeMessagesResource(client: AnthropicVertex): MessagesResource {
+  const resource = new Resources.Messages(client);
+
+  // @ts-expect-error we're deleting non-optional properties
+  delete resource.batches;
+
+  return resource;
+}
+
+/**
+ * The Vertex API does not currently support prompt caching, token counting or the Batch API.
  */
 type BetaResource = Omit<Resources.Beta, 'promptCaching' | 'messages'> & {
-  messages: Omit<Resources.Beta['messages'], 'batches'>;
+  messages: Omit<Resources.Beta['messages'], 'batches' | 'countTokens'>;
 };
 
 function makeBetaResource(client: AnthropicVertex): BetaResource {
@@ -167,5 +194,8 @@ function makeBetaResource(client: AnthropicVertex): BetaResource {
   // @ts-expect-error we're deleting non-optional properties
   delete resource.messages.batches;
 
+  // @ts-expect-error we're deleting non-optional properties
+  delete resource.messages.countTokens;
+
   return resource;
 }
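The new Vertex `buildRequest` branch above redirects count-token requests to the project/region-scoped `rawPredict` endpoint. Pulled out as a standalone sketch for illustration (the `rewriteCountTokensPath` name and free-function shape are ours; the logic mirrors the hunk above):

```typescript
// Mirrors the path rewrite the Vertex client now performs for token counting:
// count_tokens requests must be routed to the regional count-tokens:rawPredict endpoint.
function rewriteCountTokensPath(
  path: string,
  method: string,
  projectId: string | null,
  region: string,
): string {
  if (
    path === '/v1/messages/count_tokens' ||
    (path === '/v1/messages/count_tokens?beta=true' && method === 'post')
  ) {
    if (!projectId) {
      throw new Error('No projectId was given and it could not be resolved from credentials.');
    }
    return `/projects/${projectId}/locations/${region}/publishers/anthropic/models/count-tokens:rawPredict`;
  }
  return path; // all other paths are left for the normal model-based routing
}

console.log(rewriteCountTokensPath('/v1/messages/count_tokens', 'post', 'my-project', 'us-east5'));
// /projects/my-project/locations/us-east5/publishers/anthropic/models/count-tokens:rawPredict
```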
diff --git scripts/build scripts/build
index ed2b9941..0bee923e 100755
--- scripts/build
+++ scripts/build
@@ -32,7 +32,7 @@ npm exec tsc-multi
 # copy over handwritten .js/.mjs/.d.ts files
 cp src/_shims/*.{d.ts,js,mjs,md} dist/_shims
 cp src/_shims/auto/*.{d.ts,js,mjs} dist/_shims/auto
-# we need to add exports = module.exports = Anthropic TypeScript to index.js;
+# we need to add exports = module.exports = Anthropic to index.js;
 # No way to get that from index.ts because it would cause compile errors
 # when building .mjs
 node scripts/utils/fix-index-exports.cjs
diff --git scripts/utils/check-is-in-git-install.sh scripts/utils/check-is-in-git-install.sh
index 36bcedc2..1354eb43 100755
--- scripts/utils/check-is-in-git-install.sh
+++ scripts/utils/check-is-in-git-install.sh
@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 # Check if you happen to call prepare for a repository that's already in node_modules.
 [ "$(basename "$(dirname "$PWD")")" = 'node_modules' ] ||
 # The name of the containing directory that `npm` uses, which looks like
diff --git a/scripts/utils/git-swap.sh b/scripts/utils/git-swap.sh
new file mode 100755
index 00000000..79d1888e
--- /dev/null
+++ scripts/utils/git-swap.sh
@@ -0,0 +1,13 @@
+#!/usr/bin/env bash
+set -exuo pipefail
+# the package is published to NPM from ./dist
+# we want the final file structure for git installs to match the npm installs, so we
+
+# delete everything except ./dist and ./node_modules
+find . -maxdepth 1 -mindepth 1 ! -name 'dist' ! -name 'node_modules' -exec rm -rf '{}' +
+
+# move everything from ./dist to .
+mv dist/* .
+
+# delete the now-empty ./dist
+rmdir dist
diff --git src/core.ts src/core.ts
index a3e22246..ea8d8dca 100644
--- src/core.ts
+++ src/core.ts
@@ -37,7 +37,7 @@ type APIResponseProps = {
   controller: AbortController;
 };
 
-async function defaultParseResponse<T>(props: APIResponseProps): Promise<T> {
+async function defaultParseResponse<T>(props: APIResponseProps): Promise<WithRequestID<T>> {
   const { response } = props;
   if (props.options.stream) {
     debug('response', response.status, response.url, response.headers, response.body);
@@ -54,11 +54,11 @@ async function defaultParseResponse<T>(props: APIResponseProps): Promise<T> {
 
   // fetch refuses to read the body when the status code is 204.
   if (response.status === 204) {
-    return null as T;
+    return null as WithRequestID<T>;
   }
 
   if (props.options.__binaryResponse) {
-    return response as unknown as T;
+    return response as unknown as WithRequestID<T>;
   }
 
   const contentType = response.headers.get('content-type');
@@ -69,26 +69,44 @@ async function defaultParseResponse<T>(props: APIResponseProps): Promise<T> {
 
     debug('response', response.status, response.url, response.headers, json);
 
-    return json as T;
+    return _addRequestID(json as T, response);
   }
 
   const text = await response.text();
   debug('response', response.status, response.url, response.headers, text);
 
   // TODO handle blob, arraybuffer, other content types, etc.
-  return text as unknown as T;
+  return text as unknown as WithRequestID<T>;
+}
+
+type WithRequestID<T> =
+  T extends Array<any> | Response | AbstractPage<any> ? T
+  : T extends Record<string, any> ? T & { _request_id?: string | null }
+  : T;
+
+function _addRequestID<T>(value: T, response: Response): WithRequestID<T> {
+  if (!value || typeof value !== 'object' || Array.isArray(value)) {
+    return value as WithRequestID<T>;
+  }
+
+  return Object.defineProperty(value, '_request_id', {
+    value: response.headers.get('request-id'),
+    enumerable: false,
+  }) as WithRequestID<T>;
 }
 
 /**
  * A subclass of `Promise` providing additional helper methods
  * for interacting with the SDK.
  */
-export class APIPromise<T> extends Promise<T> {
-  private parsedPromise: Promise<T> | undefined;
+export class APIPromise<T> extends Promise<WithRequestID<T>> {
+  private parsedPromise: Promise<WithRequestID<T>> | undefined;
 
   constructor(
     private responsePromise: Promise<APIResponseProps>,
-    private parseResponse: (props: APIResponseProps) => PromiseOrValue<T> = defaultParseResponse,
+    private parseResponse: (
+      props: APIResponseProps,
+    ) => PromiseOrValue<WithRequestID<T>> = defaultParseResponse,
   ) {
     super((resolve) => {
       // this is maybe a bit weird but this has to be a no-op to not implicitly
@@ -100,7 +118,7 @@ export class APIPromise<T> extends Promise<T> {
 
   _thenUnwrap<U>(transform: (data: T, props: APIResponseProps) => U): APIPromise<U> {
     return new APIPromise(this.responsePromise, async (props) =>
-      transform(await this.parseResponse(props), props),
+      _addRequestID(transform(await this.parseResponse(props), props), props.response),
     );
   }
 
@@ -120,33 +138,35 @@ export class APIPromise<T> extends Promise<T> {
   asResponse(): Promise<Response> {
     return this.responsePromise.then((p) => p.response);
   }
+
   /**
-   * Gets the parsed response data and the raw `Response` instance.
+   * Gets the parsed response data, the raw `Response` instance and the ID of the request,
+   * returned via the `request-id` header which is useful for debugging requests and reporting
+   * issues to Anthropic.
    *
    * If you just want to get the raw `Response` instance without parsing it,
    * you can use {@link asResponse()}.
    *
    * 👋 Getting the wrong TypeScript type for `Response`?
    * Try setting `"moduleResolution": "NodeNext"` if you can,
    * or add one of these imports before your first `import … from '@anthropic-ai/sdk'`:
    * - `import '@anthropic-ai/sdk/shims/node'` (if you're running on Node)
    * - `import '@anthropic-ai/sdk/shims/web'` (otherwise)
    */
-  async withResponse(): Promise<{ data: T; response: Response }> {
+  async withResponse(): Promise<{ data: T; response: Response; request_id: string | null | undefined }> {
     const [data, response] = await Promise.all([this.parse(), this.asResponse()]);
-    return { data, response };
+    return { data, response, request_id: response.headers.get('request-id') };
   }
 
-  private parse(): Promise<T> {
+  private parse(): Promise<WithRequestID<T>> {
     if (!this.parsedPromise) {
-      this.parsedPromise = this.responsePromise.then(this.parseResponse);
+      this.parsedPromise = this.responsePromise.then(this.parseResponse) as any as Promise<WithRequestID<T>>;
     }
     return this.parsedPromise;
   }
 
-  override then<TResult1 = T, TResult2 = never>(
-    onfulfilled?: ((value: T) => TResult1 | PromiseLike<TResult1>) | undefined | null,
+  override then<TResult1 = WithRequestID<T>, TResult2 = never>(
+    onfulfilled?: ((value: WithRequestID<T>) => TResult1 | PromiseLike<TResult1>) | undefined | null,
     onrejected?: ((reason: any) => TResult2 | PromiseLike<TResult2>) | undefined | null,
   ): Promise<TResult1 | TResult2> {
     return this.parse().then(onfulfilled, onrejected);
@@ -154,11 +174,11 @@ export class APIPromise<T> extends Promise<T> {
 
   override catch<TResult = never>(
     onrejected?: ((reason: any) => TResult | PromiseLike<TResult>) | undefined | null,
-  ): Promise<T | TResult> {
+  ): Promise<WithRequestID<T> | TResult> {
     return this.parse().catch(onrejected);
   }
 
-  override finally(onfinally?: (() => void) | undefined | null): Promise<T> {
+  override finally(onfinally?: (() => void) | undefined | null): Promise<WithRequestID<T>> {
     return this.parse().finally(onfinally);
   }
 }
@@ -177,7 +197,7 @@ export abstract class APIClient {
     maxRetries = 2,
     timeout = 600000, // 10 minutes
     httpAgent,
-    fetch: overridenFetch,
+    fetch: overriddenFetch,
   }: {
     baseURL: string;
     maxRetries?: number | undefined;
@@ -190,7 +210,7 @@ export abstract class APIClient {
 
     this.timeout = validatePositiveInteger('timeout', timeout);
     this.httpAgent = httpAgent;
-    this.fetch = overridenFetch ?? fetch;
+    this.fetch = overriddenFetch ?? fetch;
   }
 
   protected authHeaders(opts: FinalRequestOptions): Headers {
@@ -537,19 +557,13 @@ export abstract class APIClient {
     const timeout = setTimeout(() => controller.abort(), ms);
 
     return (
-      this.getRequestClient()
-        // use undefined this binding; fetch errors if bound to something else in browser/cloudflare
-        .fetch.call(undefined, url, { signal: controller.signal as any, ...options })
-        .finally(() => {
-          clearTimeout(timeout);
-        })
+      // use undefined this binding; fetch errors if bound to something else in browser/cloudflare
+      this.fetch.call(undefined, url, { signal: controller.signal as any, ...options }).finally(() => {
+        clearTimeout(timeout);
+      })
     );
   }
 
-  protected getRequestClient(): RequestClient {
-    return { fetch: this.fetch };
-  }
-
   private shouldRetry(response: Response): boolean {
     // Note this is not a standard header.
     const shouldRetryHeader = response.headers.get('x-should-retry');
@@ -724,7 +738,13 @@ export class PagePromise<
   ) {
     super(
       request,
-      async (props) => new Page(client, props.response, await defaultParseResponse(props), props.options),
+      async (props) =>
+        new Page(
+          client,
+          props.response,
+          await defaultParseResponse(props),
+          props.options,
+        ) as WithRequestID<PageClass>,
     );
   }
 
@@ -992,8 +1012,8 @@ export const safeJSON = (text: string) => {
   }
 };
 
-// https://stackoverflow.com/a/19709846
-const startsWithSchemeRegexp = new RegExp('^(?:[a-z]+:)?//', 'i');
+// https://url.spec.whatwg.org/#url-scheme-string
+const startsWithSchemeRegexp = /^[a-z][a-z0-9+.-]*:/i;
 const isAbsoluteURL = (url: string): boolean => {
   return startsWithSchemeRegexp.test(url);
 };
diff --git src/error.ts src/error.ts
index e9f24916..64525004 100644
--- src/error.ts
+++ src/error.ts
@@ -4,19 +4,21 @@ import { castToError, Headers } from './core';

export class AnthropicError extends Error {}

-export class APIError extends AnthropicError {

  • readonly status: number | undefined;
  • readonly headers: Headers | undefined;
  • readonly error: Object | undefined;
    +export class APIError<
  • TStatus extends number | undefined = number | undefined,

  • THeaders extends Headers | undefined = Headers | undefined,

  • TError extends Object | undefined = Object | undefined,
    +> extends AnthropicError {

  • /** HTTP status for the response that caused the error */

  • readonly status: TStatus;

  • /** HTTP headers for the response that caused the error */

  • readonly headers: THeaders;

  • /** JSON body of the response that caused the error */

  • readonly error: TError;

    readonly request_id: string | null | undefined;

  • constructor(
  • status: number | undefined,
  • error: Object | undefined,
  • message: string | undefined,
  • headers: Headers | undefined,
  • ) {
  • constructor(status: TStatus, error: TError, message: string | undefined, headers: THeaders) {
    super(${APIError.makeMessage(status, error, message)});
    this.status = status;
    this.headers = headers;
    @@ -51,7 +53,7 @@ export class APIError extends AnthropicError {
    message: string | undefined,
    headers: Headers | undefined,
    ): APIError {
-    if (!status) {
+    if (!status || !headers) {
    return new APIConnectionError({ message, cause: castToError(errorResponse) });
    }

@@ -93,17 +95,13 @@ export class APIError extends AnthropicError {
}
}

-export class APIUserAbortError extends APIError {
-  override readonly status: undefined = undefined;
-
+export class APIUserAbortError extends APIError<undefined, undefined, undefined> {
constructor({ message }: { message?: string } = {}) {
super(undefined, undefined, message || 'Request was aborted.', undefined);
}
}

-export class APIConnectionError extends APIError {
-  override readonly status: undefined = undefined;
-
+export class APIConnectionError extends APIError<undefined, undefined, undefined> {
constructor({ message, cause }: { message?: string | undefined; cause?: Error | undefined }) {
super(undefined, undefined, message || 'Connection error.', undefined);
// in some environments the 'cause' property is already declared
@@ -118,32 +116,18 @@ export class APIConnectionTimeoutError extends APIConnectionError {
}
}

-export class BadRequestError extends APIError {
-  override readonly status: 400 = 400;
-}
+export class BadRequestError extends APIError<400, Headers> {}

-export class AuthenticationError extends APIError {
-  override readonly status: 401 = 401;
-}
+export class AuthenticationError extends APIError<401, Headers> {}

-export class PermissionDeniedError extends APIError {
-  override readonly status: 403 = 403;
-}
+export class PermissionDeniedError extends APIError<403, Headers> {}

-export class NotFoundError extends APIError {
-  override readonly status: 404 = 404;
-}
+export class NotFoundError extends APIError<404, Headers> {}

-export class ConflictError extends APIError {
-  override readonly status: 409 = 409;
-}
+export class ConflictError extends APIError<409, Headers> {}

-export class UnprocessableEntityError extends APIError {
-  override readonly status: 422 = 422;
-}
+export class UnprocessableEntityError extends APIError<422, Headers> {}

-export class RateLimitError extends APIError {
-  override readonly status: 429 = 429;
-}
+export class RateLimitError extends APIError<429, Headers> {}

-export class InternalServerError extends APIError {}
+export class InternalServerError extends APIError<number, Headers> {}
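The rewrite above replaces the per-subclass `override readonly status: 404 = 404` fields with type parameters, so `status`, `headers`, and `error` are narrowed at the type level. A minimal standalone re-creation of the pattern (the simplified `HeadersLike` type and constructor are assumptions for illustration, not the SDK's actual code):

```typescript
// HeadersLike stands in for the SDK's Headers type in this sketch.
type HeadersLike = Record<string, string | null | undefined>;

class APIError<
  TStatus extends number | undefined = number | undefined,
  THeaders extends HeadersLike | undefined = HeadersLike | undefined,
> extends Error {
  readonly status: TStatus;
  readonly headers: THeaders;

  constructor(status: TStatus, headers: THeaders, message?: string) {
    super(message ?? `HTTP ${status}`);
    this.status = status;
    this.headers = headers;
  }
}

// Each concrete subclass just fixes the type parameters; no overridden field needed.
class NotFoundError extends APIError<404, HeadersLike> {}

const err = new NotFoundError(404, { 'request-id': 'req_123' });
// `err.status` is typed as the literal 404 and `err.headers` is non-optional:
const narrowed: 404 = err.status;
console.log(narrowed); // 404
```

This is why the diff can also type `InternalServerError` as `APIError<number, Headers>`: the status is unknown at compile time but the headers are guaranteed present.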
diff --git src/index.ts src/index.ts
index 70c9d5d7..bfca4fc8 100644
--- src/index.ts
+++ src/index.ts
@@ -14,14 +14,35 @@ import {
CompletionCreateParamsStreaming,
Completions,
} from './resources/completions';
+import { ModelInfo, ModelInfosPage, ModelListParams, Models } from './resources/models';
 import {
+  AnthropicBeta,
+  Beta,
+  BetaAPIError,
+  BetaAuthenticationError,
+  BetaBillingError,
+  BetaError,
+  BetaErrorResponse,
+  BetaGatewayTimeoutError,
+  BetaInvalidRequestError,
+  BetaNotFoundError,
+  BetaOverloadedError,
+  BetaPermissionError,
+  BetaRateLimitError,
+} from './resources/beta/beta';
+import {
+  Base64PDFSource,
+  CacheControlEphemeral,
   ContentBlock,
   ContentBlockDeltaEvent,
+  ContentBlockParam,
   ContentBlockStartEvent,
   ContentBlockStopEvent,
+  DocumentBlockParam,
   ImageBlockParam,
   InputJSONDelta,
   Message,
+  MessageCountTokensParams,
   MessageCreateParams,
   MessageCreateParamsNonStreaming,
   MessageCreateParamsStreaming,
    @@ -32,6 +53,7 @@ import {
    MessageStopEvent,
    MessageStreamEvent,
    MessageStreamParams,
+  MessageTokensCount,
    Messages,
    Metadata,
    Model,
    @@ -54,20 +76,7 @@ import {
    ToolUseBlock,
    ToolUseBlockParam,
    Usage,
-} from './resources/messages';
-import {
-  AnthropicBeta,
-  Beta,
-  BetaAPIError,
-  BetaAuthenticationError,
-  BetaError,
-  BetaErrorResponse,
-  BetaInvalidRequestError,
-  BetaNotFoundError,
-  BetaOverloadedError,
-  BetaPermissionError,
-  BetaRateLimitError,
-} from './resources/beta/beta';
+} from './resources/messages/messages';

export interface ClientOptions {
/**
@@ -181,7 +190,7 @@ export class Anthropic extends Core.APIClient {

 if (!options.dangerouslyAllowBrowser && Core.isRunningInBrowser()) {
   throw new Errors.AnthropicError(
-      "It looks like you're running in a browser-like environment.\n\nThis is disabled by default, as it risks exposing your secret API credentials to attackers.\nIf you understand the risks and have appropriate mitigations in place,\nyou can set the `dangerouslyAllowBrowser` option to `true`, e.g.,\n\nnew Anthropic({ apiKey, dangerouslyAllowBrowser: true });\n\nTODO: link!\n",
+      "It looks like you're running in a browser-like environment.\n\nThis is disabled by default, as it risks exposing your secret API credentials to attackers.\nIf you understand the risks and have appropriate mitigations in place,\nyou can set the `dangerouslyAllowBrowser` option to `true`, e.g.,\n\nnew Anthropic({ apiKey, dangerouslyAllowBrowser: true });\n",
     );
    
    }

@@ -201,6 +210,7 @@ export class Anthropic extends Core.APIClient {

completions: API.Completions = new API.Completions(this);
messages: API.Messages = new API.Messages(this);

+  models: API.Models = new API.Models(this);
    beta: API.Beta = new API.Beta(this);

    protected override defaultQuery(): Core.DefaultQuery | undefined {
    @@ -289,31 +299,11 @@ export class Anthropic extends Core.APIClient {
    static fileFromPath = Uploads.fileFromPath;
    }

-export const { HUMAN_PROMPT, AI_PROMPT } = Anthropic;
-
-export {
-  AnthropicError,
-  APIError,
-  APIConnectionError,
-  APIConnectionTimeoutError,
-  APIUserAbortError,
-  NotFoundError,
-  ConflictError,
-  RateLimitError,
-  BadRequestError,
-  AuthenticationError,
-  InternalServerError,
-  PermissionDeniedError,
-  UnprocessableEntityError,
-} from './error';
-
-export import toFile = Uploads.toFile;
-export import fileFromPath = Uploads.fileFromPath;

Anthropic.Completions = Completions;
Anthropic.Messages = Messages;
+Anthropic.Models = Models;
+Anthropic.ModelInfosPage = ModelInfosPage;
Anthropic.Beta = Beta;

export declare namespace Anthropic {
export type RequestOptions = Core.RequestOptions;

@@ -330,10 +320,14 @@ export declare namespace Anthropic {

export {
Messages as Messages,

+    type Base64PDFSource as Base64PDFSource,
+    type CacheControlEphemeral as CacheControlEphemeral,
    type ContentBlock as ContentBlock,
    type ContentBlockDeltaEvent as ContentBlockDeltaEvent,
+    type ContentBlockParam as ContentBlockParam,
    type ContentBlockStartEvent as ContentBlockStartEvent,
    type ContentBlockStopEvent as ContentBlockStopEvent,
+    type DocumentBlockParam as DocumentBlockParam,
    type ImageBlockParam as ImageBlockParam,
    type InputJSONDelta as InputJSONDelta,
    type Message as Message,
    @@ -343,6 +337,7 @@ export declare namespace Anthropic {
    type MessageStartEvent as MessageStartEvent,
    type MessageStopEvent as MessageStopEvent,
    type MessageStreamEvent as MessageStreamEvent,
+    type MessageTokensCount as MessageTokensCount,
    type Metadata as Metadata,
    type Model as Model,
    type RawContentBlockDeltaEvent as RawContentBlockDeltaEvent,
    @@ -368,6 +363,14 @@ export declare namespace Anthropic {
    type MessageCreateParamsNonStreaming as MessageCreateParamsNonStreaming,
    type MessageCreateParamsStreaming as MessageCreateParamsStreaming,
    type MessageStreamParams as MessageStreamParams,
+    type MessageCountTokensParams as MessageCountTokensParams,
+  };
+
+  export {
+    Models as Models,
+    type ModelInfo as ModelInfo,
+    ModelInfosPage as ModelInfosPage,
+    type ModelListParams as ModelListParams,
   };

export {
@@ -375,14 +378,46 @@ export declare namespace Anthropic {
type AnthropicBeta as AnthropicBeta,
type BetaAPIError as BetaAPIError,
type BetaAuthenticationError as BetaAuthenticationError,

+    type BetaBillingError as BetaBillingError,
    type BetaError as BetaError,
    type BetaErrorResponse as BetaErrorResponse,
+    type BetaGatewayTimeoutError as BetaGatewayTimeoutError,
    type BetaInvalidRequestError as BetaInvalidRequestError,
    type BetaNotFoundError as BetaNotFoundError,
    type BetaOverloadedError as BetaOverloadedError,
    type BetaPermissionError as BetaPermissionError,
    type BetaRateLimitError as BetaRateLimitError,
    };
+  export type APIErrorObject = API.APIErrorObject;
+  export type AuthenticationError = API.AuthenticationError;
+  export type BillingError = API.BillingError;
+  export type ErrorObject = API.ErrorObject;
+  export type ErrorResponse = API.ErrorResponse;
+  export type GatewayTimeoutError = API.GatewayTimeoutError;
+  export type InvalidRequestError = API.InvalidRequestError;
+  export type NotFoundError = API.NotFoundError;
+  export type OverloadedError = API.OverloadedError;
+  export type PermissionError = API.PermissionError;
+  export type RateLimitError = API.RateLimitError;
    }
    +export const { HUMAN_PROMPT, AI_PROMPT } = Anthropic;

+export { toFile, fileFromPath } from './uploads';
+export {
+  AnthropicError,
+  APIError,
+  APIConnectionError,
+  APIConnectionTimeoutError,
+  APIUserAbortError,
+  NotFoundError,
+  ConflictError,
+  RateLimitError,
+  BadRequestError,
+  AuthenticationError,
+  InternalServerError,
+  PermissionDeniedError,
+  UnprocessableEntityError,
+} from './error';

export default Anthropic;
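The index changes above wire in a new top-level `Models` resource (`client.models`), whose `list()` returns a `ModelInfosPage`. Page objects in this SDK are async-iterable, so `for await` walks items across page boundaries. Since the live call needs an API key and network access, here is a self-contained mock of that iteration shape (`FakePage` and the model entries are invented for illustration, not SDK code):

```typescript
// Mimics the async-iterable surface of the SDK's page classes.
interface ModelInfoLike {
  id: string;
  type: 'model';
  display_name: string;
}

class FakePage implements AsyncIterable<ModelInfoLike> {
  pages: ModelInfoLike[][];

  constructor(pages: ModelInfoLike[][]) {
    this.pages = pages;
  }

  async *[Symbol.asyncIterator]() {
    for (const page of this.pages) {
      // The real SDK would lazily fetch the next page here.
      for (const item of page) yield item;
    }
  }
}

async function main() {
  const page = new FakePage([
    [{ id: 'claude-3-5-sonnet-latest', type: 'model', display_name: 'Claude 3.5 Sonnet' }],
    [{ id: 'claude-3-5-haiku-latest', type: 'model', display_name: 'Claude 3.5 Haiku' }],
  ]);

  const ids: string[] = [];
  // With the real client this loop would be: for await (const model of client.models.list()) { ... }
  for await (const model of page) {
    ids.push(model.id);
  }
  console.log(ids); // both ids, in page order
}
main();
```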
diff --git src/lib/PromptCachingBetaMessageStream.ts src/lib/PromptCachingBetaMessageStream.ts
deleted file mode 100644
index 0e742cba..00000000
--- src/lib/PromptCachingBetaMessageStream.ts
+++ /dev/null
@@ -1,579 +0,0 @@
-import * as Core from '@anthropic-ai/sdk/core';
-import { AnthropicError, APIUserAbortError } from '@anthropic-ai/sdk/error';
-import { type ContentBlock, type TextBlock } from '@anthropic-ai/sdk/resources/messages';
-import {

  • Messages,
  • type PromptCachingBetaMessage,
  • type RawPromptCachingBetaMessageStreamEvent,
  • type PromptCachingBetaMessageParam,
  • type MessageCreateParams,
  • type MessageCreateParamsBase,
    -} from '@anthropic-ai/sdk/resources/beta/prompt-caching/messages';
    -import { type ReadableStream } from '@anthropic-ai/sdk/_shims/index';
    -import { Stream } from '@anthropic-ai/sdk/streaming';
    -import { partialParse } from '../_vendor/partial-json-parser/parser';

-export interface PromptCachingBetaMessageStreamEvents {

  • connect: () => void;
  • streamEvent: (event: RawPromptCachingBetaMessageStreamEvent, snapshot: PromptCachingBetaMessage) => void;
  • text: (textDelta: string, textSnapshot: string) => void;
  • inputJson: (partialJson: string, jsonSnapshot: unknown) => void;
  • message: (message: PromptCachingBetaMessage) => void;
  • contentBlock: (content: ContentBlock) => void;
  • finalPromptCachingBetaMessage: (message: PromptCachingBetaMessage) => void;
  • error: (error: AnthropicError) => void;
  • abort: (error: APIUserAbortError) => void;
  • end: () => void;
    -}

-type PromptCachingBetaMessageStreamEventListeners<Event extends keyof PromptCachingBetaMessageStreamEvents> = {
  • listener: PromptCachingBetaMessageStreamEvents[Event];
  • once?: boolean;
  • }[];

-const JSON_BUF_PROPERTY = '__json_buf';

-export class PromptCachingBetaMessageStream implements AsyncIterable<RawPromptCachingBetaMessageStreamEvent> {

  • messages: PromptCachingBetaMessageParam[] = [];
  • receivedMessages: PromptCachingBetaMessage[] = [];
  • #currentMessageSnapshot: PromptCachingBetaMessage | undefined;
  • controller: AbortController = new AbortController();
  • #connectedPromise: Promise<void>;
  • #resolveConnectedPromise: () => void = () => {};
  • #rejectConnectedPromise: (error: AnthropicError) => void = () => {};
  • #endPromise: Promise<void>;
  • #resolveEndPromise: () => void = () => {};
  • #rejectEndPromise: (error: AnthropicError) => void = () => {};
  • #listeners: {
  • [Event in keyof PromptCachingBetaMessageStreamEvents]?: PromptCachingBetaMessageStreamEventListeners;
  • } = {};
  • #ended = false;
  • #errored = false;
  • #aborted = false;
  • #catchingPromiseCreated = false;
  • constructor() {
  • this.#connectedPromise = new Promise((resolve, reject) => {
  •  this.#resolveConnectedPromise = resolve;
    
  •  this.#rejectConnectedPromise = reject;
    
  • });
  • this.#endPromise = new Promise((resolve, reject) => {
  •  this.#resolveEndPromise = resolve;
    
  •  this.#rejectEndPromise = reject;
    
  • });
  • // Don't let these promises cause unhandled rejection errors.
  • // we will manually cause an unhandled rejection error later
  • // if the user hasn't registered any error listener or called
  • // any promise-returning method.
  • this.#connectedPromise.catch(() => {});
  • this.#endPromise.catch(() => {});
  • }
  • /**
    • Intended for use on the frontend, consuming a stream produced with
    • .toReadableStream() on the backend.
    • Note that messages sent to the model do not appear in .on('message')
    • in this context.
  • */
  • static fromReadableStream(stream: ReadableStream): PromptCachingBetaMessageStream {
  • const runner = new PromptCachingBetaMessageStream();
  • runner._run(() => runner._fromReadableStream(stream));
  • return runner;
  • }
  • static createMessage(
  • messages: Messages,
  • params: MessageCreateParamsBase,
  • options?: Core.RequestOptions,
  • ): PromptCachingBetaMessageStream {
  • const runner = new PromptCachingBetaMessageStream();
  • for (const message of params.messages) {
  •  runner._addPromptCachingBetaMessageParam(message);
    
  • }
  • runner._run(() =>
  •  runner._createPromptCachingBetaMessage(
    
  •    messages,
    
  •    { ...params, stream: true },
    
  •    { ...options, headers: { ...options?.headers, 'X-Stainless-Helper-Method': 'stream' } },
    
  •  ),
    
  • );
  • return runner;
  • }
  • protected _run(executor: () => Promise<any>) {
  • executor().then(() => {
  •  this._emitFinal();
    
  •  this._emit('end');
    
  • }, this.#handleError);
  • }
  • protected _addPromptCachingBetaMessageParam(message: PromptCachingBetaMessageParam) {
  • this.messages.push(message);
  • }
  • protected _addPromptCachingBetaMessage(message: PromptCachingBetaMessage, emit = true) {
  • this.receivedMessages.push(message);
  • if (emit) {
  •  this._emit('message', message);
    
  • }
  • }
  • protected async _createPromptCachingBetaMessage(
  • messages: Messages,
  • params: MessageCreateParams,
  • options?: Core.RequestOptions,
  • ): Promise<void> {
  • const signal = options?.signal;
  • if (signal) {
  •  if (signal.aborted) this.controller.abort();
    
  •  signal.addEventListener('abort', () => this.controller.abort());
    
  • }
  • this.#beginRequest();
  • const stream = await messages.create(
  •  { ...params, stream: true },
    
  •  { ...options, signal: this.controller.signal },
    
  • );
  • this._connected();
  • for await (const event of stream) {
  •  this.#addStreamEvent(event);
    
  • }
  • if (stream.controller.signal?.aborted) {
  •  throw new APIUserAbortError();
    
  • }
  • this.#endRequest();
  • }
  • protected _connected() {
  • if (this.ended) return;
  • this.#resolveConnectedPromise();
  • this._emit('connect');
  • }
  • get ended(): boolean {
  • return this.#ended;
  • }
  • get errored(): boolean {
  • return this.#errored;
  • }
  • get aborted(): boolean {
  • return this.#aborted;
  • }
  • abort() {
  • this.controller.abort();
  • }
  • /**
    • Adds the listener function to the end of the listeners array for the event.
    • No checks are made to see if the listener has already been added. Multiple calls passing
    • the same combination of event and listener will result in the listener being added, and
    • called, multiple times.
    • @returns this PromptCachingBetaMessageStream, so that calls can be chained
  • */
  • on<Event extends keyof PromptCachingBetaMessageStreamEvents>(
  • event: Event,
  • listener: PromptCachingBetaMessageStreamEvents[Event],
  • ): this {
  • const listeners: PromptCachingBetaMessageStreamEventListeners<Event> =
  •  this.#listeners[event] || (this.#listeners[event] = []);
    
  • listeners.push({ listener });
  • return this;
  • }
  • /**
    • Removes the specified listener from the listener array for the event.
    • off() will remove, at most, one instance of a listener from the listener array. If any single
    • listener has been added multiple times to the listener array for the specified event, then
    • off() must be called multiple times to remove each instance.
    • @returns this PromptCachingBetaMessageStream, so that calls can be chained
  • */
  • off<Event extends keyof PromptCachingBetaMessageStreamEvents>(
  • event: Event,
  • listener: PromptCachingBetaMessageStreamEvents[Event],
  • ): this {
  • const listeners = this.#listeners[event];
  • if (!listeners) return this;
  • const index = listeners.findIndex((l) => l.listener === listener);
  • if (index >= 0) listeners.splice(index, 1);
  • return this;
  • }
  • /**
    • Adds a one-time listener function for the event. The next time the event is triggered,
    • this listener is removed and then invoked.
    • @returns this PromptCachingBetaMessageStream, so that calls can be chained
  • */
  • once<Event extends keyof PromptCachingBetaMessageStreamEvents>(
  • event: Event,
  • listener: PromptCachingBetaMessageStreamEvents[Event],
  • ): this {
  • const listeners: PromptCachingBetaMessageStreamEventListeners<Event> =
  •  this.#listeners[event] || (this.#listeners[event] = []);
    
  • listeners.push({ listener, once: true });
  • return this;
  • }
  • /**
    • This is similar to .once(), but returns a Promise that resolves the next time
    • the event is triggered, instead of calling a listener callback.
    • @returns a Promise that resolves the next time given event is triggered,
    • or rejects if an error is emitted. (If you request the 'error' event,
    • returns a promise that resolves with the error).
    • Example:
    • const message = await stream.emitted('message') // rejects if the stream errors
  • */
  • emitted<Event extends keyof PromptCachingBetaMessageStreamEvents>(
  • event: Event,
  • ): Promise<
  • Parameters<PromptCachingBetaMessageStreamEvents[Event]> extends [infer Param] ? Param
  • : Parameters<PromptCachingBetaMessageStreamEvents[Event]> extends [] ? void
  • : Parameters<PromptCachingBetaMessageStreamEvents[Event]>
  • > {

  • return new Promise((resolve, reject) => {
  •  this.#catchingPromiseCreated = true;
    
  •  if (event !== 'error') this.once('error', reject);
    
  •  this.once(event, resolve as any);
    
  • });
  • }
  • async done(): Promise<void> {
  • this.#catchingPromiseCreated = true;
  • await this.#endPromise;
  • }
  • get currentMessage(): PromptCachingBetaMessage | undefined {
  • return this.#currentMessageSnapshot;
  • }
  • #getFinalMessage(): PromptCachingBetaMessage {
  • if (this.receivedMessages.length === 0) {
  •  throw new AnthropicError(
    
  •    'stream ended without producing a PromptCachingBetaMessage with role=assistant',
    
  •  );
    
  • }
  • return this.receivedMessages.at(-1)!;
  • }
  • /**
    • @returns a promise that resolves with the final assistant PromptCachingBetaMessage response,
    • or rejects if an error occurred or the stream ended prematurely without producing a PromptCachingBetaMessage.
  • */
  • async finalMessage(): Promise<PromptCachingBetaMessage> {
  • await this.done();
  • return this.#getFinalMessage();
  • }
  • #getFinalText(): string {
  • if (this.receivedMessages.length === 0) {
  •  throw new AnthropicError(
    
  •    'stream ended without producing a PromptCachingBetaMessage with role=assistant',
    
  •  );
    
  • }
  • const textBlocks = this.receivedMessages
  •  .at(-1)!
    
  •  .content.filter((block): block is TextBlock => block.type === 'text')
    
  •  .map((block) => block.text);
    
  • if (textBlocks.length === 0) {
  •  throw new AnthropicError('stream ended without producing a content block with type=text');
    
  • }
  • return textBlocks.join(' ');
  • }
  • /**
    • @returns a promise that resolves with the final assistant PromptCachingBetaMessage's text response, concatenated
    • together if there are more than one text blocks.
    • Rejects if an error occurred or the stream ended prematurely without producing a PromptCachingBetaMessage.
  • */
  • async finalText(): Promise<string> {
  • await this.done();
  • return this.#getFinalText();
  • }
  • #handleError = (error: unknown) => {
  • this.#errored = true;
  • if (error instanceof Error && error.name === 'AbortError') {
  •  error = new APIUserAbortError();
    
  • }
  • if (error instanceof APIUserAbortError) {
  •  this.#aborted = true;
    
  •  return this._emit('abort', error);
    
  • }
  • if (error instanceof AnthropicError) {
  •  return this._emit('error', error);
    
  • }
  • if (error instanceof Error) {
  •  const anthropicError: AnthropicError = new AnthropicError(error.message);
    
  •  // @ts-ignore
    
  •  anthropicError.cause = error;
    
  •  return this._emit('error', anthropicError);
    
  • }
  • return this._emit('error', new AnthropicError(String(error)));
  • };
  • protected _emit<Event extends keyof PromptCachingBetaMessageStreamEvents>(
  • event: Event,
  • ...args: Parameters<PromptCachingBetaMessageStreamEvents[Event]>
  • ) {
  • // make sure we don't emit any PromptCachingBetaMessageStreamEvents after end
  • if (this.#ended) return;
  • if (event === 'end') {
  •  this.#ended = true;
    
  •  this.#resolveEndPromise();
    
  • }
  • const listeners: PromptCachingBetaMessageStreamEventListeners<Event> | undefined = this.#listeners[event];
  • if (listeners) {
  •  this.#listeners[event] = listeners.filter((l) => !l.once) as any;
    
  •  listeners.forEach(({ listener }: any) => listener(...args));
    
  • }
  • if (event === 'abort') {
  •  const error = args[0] as APIUserAbortError;
    
  •  if (!this.#catchingPromiseCreated && !listeners?.length) {
    
  •    Promise.reject(error);
    
  •  }
    
  •  this.#rejectConnectedPromise(error);
    
  •  this.#rejectEndPromise(error);
    
  •  this._emit('end');
    
  •  return;
    
  • }
  • if (event === 'error') {
  •  // NOTE: _emit('error', error) should only be called from #handleError().
    
  •  const error = args[0] as AnthropicError;
    
  •  if (!this.#catchingPromiseCreated && !listeners?.length) {
    
  •    // Trigger an unhandled rejection if the user hasn't registered any error handlers.
    
  •    // If you are seeing stack traces here, make sure to handle errors via either:
    
  •    // - runner.on('error', () => ...)
    
  •    // - await runner.done()
    
  •    // - await runner.final...()
    
  •    // - etc.
    
  •    Promise.reject(error);
    
  •  }
    
  •  this.#rejectConnectedPromise(error);
    
  •  this.#rejectEndPromise(error);
    
  •  this._emit('end');
    
  • }
  • }
  • protected _emitFinal() {
  • const finalPromptCachingBetaMessage = this.receivedMessages.at(-1);
  • if (finalPromptCachingBetaMessage) {
  •  this._emit('finalPromptCachingBetaMessage', this.#getFinalMessage());
    
  • }
  • }
  • #beginRequest() {
  • if (this.ended) return;
  • this.#currentMessageSnapshot = undefined;
  • }
  • #addStreamEvent(event: RawPromptCachingBetaMessageStreamEvent) {
  • if (this.ended) return;
  • const messageSnapshot = this.#accumulateMessage(event);
  • this._emit('streamEvent', event, messageSnapshot);
  • switch (event.type) {
  •  case 'content_block_delta': {
    
  •    const content = messageSnapshot.content.at(-1)!;
    
  •    if (event.delta.type === 'text_delta' && content.type === 'text') {
    
  •      this._emit('text', event.delta.text, content.text || '');
    
  •    } else if (event.delta.type === 'input_json_delta' && content.type === 'tool_use') {
    
  •      if (content.input) {
    
  •        this._emit('inputJson', event.delta.partial_json, content.input);
    
  •      }
    
  •    }
    
  •    break;
    
  •  }
    
  •  case 'message_stop': {
    
  •    this._addPromptCachingBetaMessageParam(messageSnapshot);
    
  •    this._addPromptCachingBetaMessage(messageSnapshot, true);
    
  •    break;
    
  •  }
    
  •  case 'content_block_stop': {
    
  •    this._emit('contentBlock', messageSnapshot.content.at(-1)!);
    
  •    break;
    
  •  }
    
  •  case 'message_start': {
    
  •    this.#currentMessageSnapshot = messageSnapshot;
    
  •    break;
    
  •  }
    
  •  case 'content_block_start':
    
  •  case 'message_delta':
    
  •    break;
    
  • }
  • }
  • #endRequest(): PromptCachingBetaMessage {
  • if (this.ended) {
  •  throw new AnthropicError(`stream has ended, this shouldn't happen`);
    
  • }
  • const snapshot = this.#currentMessageSnapshot;
  • if (!snapshot) {
  •  throw new AnthropicError(`request ended without sending any chunks`);
    
  • }
  • this.#currentMessageSnapshot = undefined;
  • return snapshot;
  • }
  • protected async _fromReadableStream(
  • readableStream: ReadableStream,
  • options?: Core.RequestOptions,
  • ): Promise<void> {
  • const signal = options?.signal;
  • if (signal) {
  •  if (signal.aborted) this.controller.abort();
    
  •  signal.addEventListener('abort', () => this.controller.abort());
    
  • }
  • this.#beginRequest();
  • this._connected();
  • const stream = Stream.fromReadableStream<RawPromptCachingBetaMessageStreamEvent>(
  •  readableStream,
    
  •  this.controller,
    
  • );
  • for await (const event of stream) {
  •  this.#addStreamEvent(event);
    
  • }
  • if (stream.controller.signal?.aborted) {
  •  throw new APIUserAbortError();
    
  • }
  • this.#endRequest();
  • }
  • /**
    • Mutates this.#currentPromptCachingBetaMessage with the current event. Handling the accumulation of multiple messages
    • will be needed to be handled by the caller, this method will throw if you try to accumulate for multiple
    • messages.
  • */
  • #accumulateMessage(event: RawPromptCachingBetaMessageStreamEvent): PromptCachingBetaMessage {
  • let snapshot = this.#currentMessageSnapshot;
  • if (event.type === 'message_start') {
  •  if (snapshot) {
    
  •    throw new AnthropicError(`Unexpected event order, got ${event.type} before receiving "message_stop"`);
    
  •  }
    
  •  return event.message;
    
  • }
  • if (!snapshot) {
  •  throw new AnthropicError(`Unexpected event order, got ${event.type} before "message_start"`);
    
  • }
  • switch (event.type) {
  •  case 'message_stop':
    
  •    return snapshot;
    
  •  case 'message_delta':
    
  •    snapshot.stop_reason = event.delta.stop_reason;
    
  •    snapshot.stop_sequence = event.delta.stop_sequence;
    
  •    snapshot.usage.output_tokens = event.usage.output_tokens;
    
  •    return snapshot;
    
  •  case 'content_block_start':
    
  •    snapshot.content.push(event.content_block);
    
  •    return snapshot;
    
  •  case 'content_block_delta': {
    
  •    const snapshotContent = snapshot.content.at(event.index);
    
  •    if (snapshotContent?.type === 'text' && event.delta.type === 'text_delta') {
    
  •      snapshotContent.text += event.delta.text;
    
  •    } else if (snapshotContent?.type === 'tool_use' && event.delta.type === 'input_json_delta') {
    
  •      // we need to keep track of the raw JSON string as well so that we can
    
  •      // re-parse it for each delta, for now we just store it as an untyped
    
  •      // non-enumerable property on the snapshot
    
  •      let jsonBuf = (snapshotContent as any)[JSON_BUF_PROPERTY] || '';
    
  •      jsonBuf += event.delta.partial_json;
    
  •      Object.defineProperty(snapshotContent, JSON_BUF_PROPERTY, {
    
  •        value: jsonBuf,
    
  •        enumerable: false,
    
  •        writable: true,
    
  •      });
    
  •      if (jsonBuf) {
    
  •        snapshotContent.input = partialParse(jsonBuf);
    
  •      }
    
  •    }
    
  •    return snapshot;
    
  •  }
    
  •  case 'content_block_stop':
    
  •    return snapshot;
    
  • }
  • }
  • [Symbol.asyncIterator](): AsyncIterator<RawPromptCachingBetaMessageStreamEvent> {
  • const pushQueue: RawPromptCachingBetaMessageStreamEvent[] = [];
  • const readQueue: {
  •  resolve: (chunk: RawPromptCachingBetaMessageStreamEvent | undefined) => void;
    
  •  reject: (error: unknown) => void;
    
  • }[] = [];
  • let done = false;
  • this.on('streamEvent', (event) => {
  •  const reader = readQueue.shift();
    
  •  if (reader) {
    
  •    reader.resolve(event);
    
  •  } else {
    
  •    pushQueue.push(event);
    
  •  }
    
  • });
  • this.on('end', () => {
  •  done = true;
    
  •  for (const reader of readQueue) {
    
  •    reader.resolve(undefined);
    
  •  }
    
  •  readQueue.length = 0;
    
  • });
  • this.on('abort', (err) => {
  •  done = true;
    
  •  for (const reader of readQueue) {
    
  •    reader.reject(err);
    
  •  }
    
  •  readQueue.length = 0;
    
  • });
  • this.on('error', (err) => {
  •  done = true;
    
  •  for (const reader of readQueue) {
    
  •    reader.reject(err);
    
  •  }
    
  •  readQueue.length = 0;
    
  • });
  • return {
  •  next: async (): Promise<IteratorResult<RawPromptCachingBetaMessageStreamEvent>> => {
    
  •    if (!pushQueue.length) {
    
  •      if (done) {
    
  •        return { value: undefined, done: true };
    
  •      }
    
  •      return new Promise<RawPromptCachingBetaMessageStreamEvent | undefined>((resolve, reject) =>
    
  •        readQueue.push({ resolve, reject }),
    
  •      ).then((chunk) => (chunk ? { value: chunk, done: false } : { value: undefined, done: true }));
    
  •    }
    
  •    const chunk = pushQueue.shift()!;
    
  •    return { value: chunk, done: false };
    
  •  },
    
  •  return: async () => {
    
  •    this.abort();
    
  •    return { value: undefined, done: true };
    
  •  },
    
  • };
  • }
  • toReadableStream(): ReadableStream {
  • const stream = new Stream(this[Symbol.asyncIterator].bind(this), this.controller);
  • return stream.toReadableStream();
  • }
    -}
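The dedicated prompt-caching beta helper above is deleted because cache controls moved into the generally available Messages surface in this release (note that `CacheControlEphemeral` is now re-exported from `resources/messages` in the index.ts diff). A shape sketch of a cached system block on the GA API (the `*Like` interfaces are local stand-ins for the SDK types, invented for illustration):

```typescript
// Local stand-ins for CacheControlEphemeral / TextBlockParam from resources/messages.
interface CacheControlEphemeralLike {
  type: 'ephemeral';
}

interface TextBlockParamLike {
  type: 'text';
  text: string;
  cache_control?: CacheControlEphemeralLike;
}

// A system block marked for caching; with the real client this would be passed
// to client.messages.create(...) directly, with no beta namespace required.
const systemBlock: TextBlockParamLike = {
  type: 'text',
  text: 'You are a helpful assistant. <large reusable context here>',
  cache_control: { type: 'ephemeral' },
};

console.log(JSON.stringify(systemBlock.cache_control)); // → {"type":"ephemeral"}
```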
    diff --git src/resources/beta/beta.ts src/resources/beta/beta.ts
    index ee3c6ca5..e29a187c 100644
    --- src/resources/beta/beta.ts
    +++ src/resources/beta/beta.ts
    @@ -1,6 +1,8 @@
    // File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

import { APIResource } from '../../resource';
+import * as ModelsAPI from './models';
+import { BetaModelInfo, BetaModelInfosPage, ModelListParams, Models } from './models';
import * as MessagesAPI from './messages/messages';
import {
BetaBase64PDFBlock,
@@ -44,12 +46,10 @@ import {
MessageCreateParamsStreaming,
Messages,
} from './messages/messages';
-import * as PromptCachingAPI from './prompt-caching/prompt-caching';
-import { PromptCaching } from './prompt-caching/prompt-caching';

export class Beta extends APIResource {

  • models: ModelsAPI.Models = new ModelsAPI.Models(this._client);
    messages: MessagesAPI.Messages = new MessagesAPI.Messages(this._client);
  • promptCaching: PromptCachingAPI.PromptCaching = new PromptCachingAPI.PromptCaching(this._client);
    }

export type AnthropicBeta =
@@ -72,12 +72,20 @@ export interface BetaAuthenticationError {
type: 'authentication_error';
}

+export interface BetaBillingError {
+  message: string;
+
+  type: 'billing_error';
+}
+
 export type BetaError =
   | BetaInvalidRequestError
   | BetaAuthenticationError
+  | BetaBillingError
   | BetaPermissionError
   | BetaNotFoundError
   | BetaRateLimitError
+  | BetaGatewayTimeoutError
   | BetaAPIError
   | BetaOverloadedError;

@@ -87,6 +95,12 @@ export interface BetaErrorResponse {
type: 'error';
}

+export interface BetaGatewayTimeoutError {
+  message: string;
+
+  type: 'timeout_error';
+}

export interface BetaInvalidRequestError {
message: string;

@@ -117,16 +131,19 @@ export interface BetaRateLimitError {
type: 'rate_limit_error';
}

+Beta.Models = Models;
+Beta.BetaModelInfosPage = BetaModelInfosPage;
Beta.Messages = Messages;
-Beta.PromptCaching = PromptCaching;

 export declare namespace Beta {
   export {
     type AnthropicBeta as AnthropicBeta,
     type BetaAPIError as BetaAPIError,
     type BetaAuthenticationError as BetaAuthenticationError,
+    type BetaBillingError as BetaBillingError,
     type BetaError as BetaError,
     type BetaErrorResponse as BetaErrorResponse,
+    type BetaGatewayTimeoutError as BetaGatewayTimeoutError,
     type BetaInvalidRequestError as BetaInvalidRequestError,
     type BetaNotFoundError as BetaNotFoundError,
     type BetaOverloadedError as BetaOverloadedError,
@@ -134,6 +151,13 @@ export declare namespace Beta {
     type BetaRateLimitError as BetaRateLimitError,
   };

+  export {
+    Models as Models,
+    type BetaModelInfo as BetaModelInfo,
+    BetaModelInfosPage as BetaModelInfosPage,
+    type ModelListParams as ModelListParams,
+  };
+
   export {
     Messages as Messages,
     type BetaBase64PDFBlock as BetaBase64PDFBlock,
@@ -176,6 +200,4 @@ export declare namespace Beta {
     type MessageCreateParamsStreaming as MessageCreateParamsStreaming,
     type MessageCountTokensParams as MessageCountTokensParams,
   };
-
-  export { PromptCaching as PromptCaching };
 }
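The `BetaError` union above gains two members (`BetaBillingError` and `BetaGatewayTimeoutError`), so any downstream `switch` over `error.type` needs new cases. A minimal standalone sketch of exhaustively classifying the union; the interfaces are reduced stand-ins for the SDK types, and the retry classification is an illustrative assumption, not documented SDK behavior:

```typescript
// Hypothetical, simplified mirror of the Beta*Error interfaces in this diff.
type BetaErrorLike =
  | { type: 'invalid_request_error'; message: string }
  | { type: 'authentication_error'; message: string }
  | { type: 'billing_error'; message: string }
  | { type: 'permission_error'; message: string }
  | { type: 'not_found_error'; message: string }
  | { type: 'rate_limit_error'; message: string }
  | { type: 'timeout_error'; message: string }
  | { type: 'api_error'; message: string }
  | { type: 'overloaded_error'; message: string };

// Treat transient conditions as retryable; everything else (caller bugs,
// account problems) should surface to the user immediately.
function isRetryable(err: BetaErrorLike): boolean {
  switch (err.type) {
    case 'rate_limit_error':
    case 'timeout_error':
    case 'overloaded_error':
    case 'api_error':
      return true;
    default:
      return false;
  }
}
```

Because the union is discriminated on `type`, TypeScript narrows each `case` automatically; adding the two new members without updating such a switch is caught at compile time only if the default branch is replaced with an exhaustiveness check.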
    diff --git src/resources/beta/index.ts src/resources/beta/index.ts
    index 6e2b0a89..a68f2327 100644
    --- src/resources/beta/index.ts
    +++ src/resources/beta/index.ts
@@ -5,14 +5,17 @@
   type AnthropicBeta,
   type BetaAPIError,
   type BetaAuthenticationError,
+  type BetaBillingError,
   type BetaError,
   type BetaErrorResponse,
+  type BetaGatewayTimeoutError,
   type BetaInvalidRequestError,
   type BetaNotFoundError,
   type BetaOverloadedError,
   type BetaPermissionError,
   type BetaRateLimitError,
 } from './beta';
+export { BetaModelInfosPage, Models, type BetaModelInfo, type ModelListParams } from './models';
 export {
   Messages,
   type BetaBase64PDFBlock,
@@ -55,4 +58,3 @@ export {
   type MessageCreateParamsStreaming,
   type MessageCountTokensParams,
 } from './messages/index';
-export { PromptCaching } from './prompt-caching/index';
    diff --git src/resources/beta/messages/messages.ts src/resources/beta/messages/messages.ts
    index 3f39ca3a..186a6c36 100644
    --- src/resources/beta/messages/messages.ts
    +++ src/resources/beta/messages/messages.ts
    @@ -4,8 +4,8 @@ import { APIResource } from '../../../resource';
    import { APIPromise } from '../../../core';
    import * as Core from '../../../core';
    import * as MessagesMessagesAPI from './messages';
    -import * as MessagesAPI from '../../messages';
    import * as BetaAPI from '../beta';
    +import * as MessagesAPI from '../../messages/messages';
    import * as BatchesAPI from './batches';
    import {
    BatchCancelParams,
    diff --git a/src/resources/beta/models.ts b/src/resources/beta/models.ts
    new file mode 100644
    index 00000000..48036273
    --- /dev/null
    +++ src/resources/beta/models.ts
    @@ -0,0 +1,78 @@
    +// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

+import { APIResource } from '../../resource';
+import { isRequestOptions } from '../../core';
+import * as Core from '../../core';
+import { Page, type PageParams } from '../../pagination';
+
+export class Models extends APIResource {
+  /**
+   * Get a specific model.
+   *
+   * The Models API response can be used to determine information about a specific
+   * model or resolve a model alias to a model ID.
+   */
+  retrieve(modelId: string, options?: Core.RequestOptions): Core.APIPromise<BetaModelInfo> {
+    return this._client.get(`/v1/models/${modelId}?beta=true`, options);
+  }
+
+  /**
+   * List available models.
+   *
+   * The Models API response can be used to determine which models are available for
+   * use in the API. More recently released models are listed first.
+   */
+  list(
+    query?: ModelListParams,
+    options?: Core.RequestOptions,
+  ): Core.PagePromise<BetaModelInfosPage, BetaModelInfo>;
+  list(options?: Core.RequestOptions): Core.PagePromise<BetaModelInfosPage, BetaModelInfo>;
+  list(
+    query: ModelListParams | Core.RequestOptions = {},
+    options?: Core.RequestOptions,
+  ): Core.PagePromise<BetaModelInfosPage, BetaModelInfo> {
+    if (isRequestOptions(query)) {
+      return this.list({}, query);
+    }
+    return this._client.getAPIList('/v1/models?beta=true', BetaModelInfosPage, { query, ...options });
+  }
+}
+
+export class BetaModelInfosPage extends Page<BetaModelInfo> {}
+
+export interface BetaModelInfo {
+  /**
+   * Unique model identifier.
+   */
+  id: string;
+
+  /**
+   * RFC 3339 datetime string representing the time at which the model was released.
+   * May be set to an epoch value if the release date is unknown.
+   */
+  created_at: string;
+
+  /**
+   * A human-readable name for the model.
+   */
+  display_name: string;
+
+  /**
+   * Object type.
+   *
+   * For Models, this is always `"model"`.
+   */
+  type: 'model';
+}
+
+export interface ModelListParams extends PageParams {}
+
+Models.BetaModelInfosPage = BetaModelInfosPage;
+
+export declare namespace Models {
+  export {
+    type BetaModelInfo as BetaModelInfo,
+    BetaModelInfosPage as BetaModelInfosPage,
+    type ModelListParams as ModelListParams,
+  };
+}
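The `Models.list()` method above uses the SDK's query-or-options overload pattern: the first positional argument may be either a query object or a request-options object, and the implementation shifts arguments when it detects the latter. A standalone sketch of the same trick, with a simplified stand-in for the SDK's `isRequestOptions` helper and a URL string in place of the real `PagePromise`:

```typescript
// Simplified stand-ins; the real SDK types carry many more fields.
interface RequestOptions { headers?: Record<string, string> }
interface ListParams { limit?: number }

// Assumption: in this sketch only options objects carry `headers`.
function isRequestOptions(x: unknown): x is RequestOptions {
  return typeof x === 'object' && x !== null && 'headers' in x;
}

function list(query?: ListParams, options?: RequestOptions): string;
function list(options?: RequestOptions): string;
function list(query: ListParams | RequestOptions = {}, options?: RequestOptions): string {
  if (isRequestOptions(query)) {
    // Caller passed options first; re-dispatch with an empty query.
    return list({}, query);
  }
  // Build the request path from whichever query survived the shift.
  const limit = (query as ListParams).limit;
  return limit !== undefined ? `/v1/models?limit=${limit}` : '/v1/models';
}
```

The re-dispatch keeps the body of the method oblivious to which overload was called, which is why the generated resources can expose both call shapes from a single implementation.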
    diff --git src/resources/beta/prompt-caching/index.ts src/resources/beta/prompt-caching/index.ts
    deleted file mode 100644
    index 78b4e747..00000000
    --- src/resources/beta/prompt-caching/index.ts
    +++ /dev/null
    @@ -1,20 +0,0 @@
    -// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

-export {
-  Messages,
-  type PromptCachingBetaCacheControlEphemeral,
-  type PromptCachingBetaImageBlockParam,
-  type PromptCachingBetaMessage,
-  type PromptCachingBetaMessageParam,
-  type PromptCachingBetaTextBlockParam,
-  type PromptCachingBetaTool,
-  type PromptCachingBetaToolResultBlockParam,
-  type PromptCachingBetaToolUseBlockParam,
-  type PromptCachingBetaUsage,
-  type RawPromptCachingBetaMessageStartEvent,
-  type RawPromptCachingBetaMessageStreamEvent,
-  type MessageCreateParams,
-  type MessageCreateParamsNonStreaming,
-  type MessageCreateParamsStreaming,
    -} from './messages';
    -export { PromptCaching } from './prompt-caching';
    diff --git src/resources/beta/prompt-caching/messages.ts src/resources/beta/prompt-caching/messages.ts
    deleted file mode 100644
    index 4ae7449b..00000000
    --- src/resources/beta/prompt-caching/messages.ts
    +++ /dev/null
    @@ -1,642 +0,0 @@
    -// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

-import { APIResource } from '../../../resource';
-import { APIPromise } from '../../../core';
-import * as Core from '../../../core';
-import * as PromptCachingMessagesAPI from './messages';
-import * as MessagesAPI from '../../messages';
-import * as BetaAPI from '../beta';
-import { Stream } from '../../../streaming';
-import { PromptCachingBetaMessageStream } from '../../../lib/PromptCachingBetaMessageStream';

-export class Messages extends APIResource {
-  /**
-   * Send a structured list of input messages with text and/or image content, and the
-   * model will generate the next message in the conversation.
-   *
-   * The Messages API can be used for either single queries or stateless multi-turn
-   * conversations.
-   */
-  create(
-    params: MessageCreateParamsNonStreaming,
-    options?: Core.RequestOptions,
-  ): APIPromise<PromptCachingBetaMessage>;
-  create(
-    params: MessageCreateParamsStreaming,
-    options?: Core.RequestOptions,
-  ): APIPromise<Stream<RawPromptCachingBetaMessageStreamEvent>>;
-  create(
-    params: MessageCreateParamsBase,
-    options?: Core.RequestOptions,
-  ): APIPromise<Stream<RawPromptCachingBetaMessageStreamEvent> | PromptCachingBetaMessage>;
-  create(
-    params: MessageCreateParams,
-    options?: Core.RequestOptions,
-  ): APIPromise<PromptCachingBetaMessage> | APIPromise<Stream<RawPromptCachingBetaMessageStreamEvent>> {
-    const { betas, ...body } = params;
-    return this._client.post('/v1/messages?beta=prompt_caching', {
-      body,
-      timeout: (this._client as any)._options.timeout ?? 600000,
-      ...options,
-      headers: {
-        'anthropic-beta': [...(betas ?? []), 'prompt-caching-2024-07-31'].toString(),
-        ...options?.headers,
-      },
-      stream: params.stream ?? false,
-    }) as APIPromise<PromptCachingBetaMessage> | APIPromise<Stream<RawPromptCachingBetaMessageStreamEvent>>;
-  }
-
-  /**
-   * Create a Message stream
-   */
-  stream(body: MessageStreamParams, options?: Core.RequestOptions): PromptCachingBetaMessageStream {
-    return PromptCachingBetaMessageStream.createMessage(this, body, options);
-  }
-}

-export type MessageStreamParams = MessageCreateParamsBase;

-export interface PromptCachingBetaCacheControlEphemeral {
-  type: 'ephemeral';
-}
-
-export interface PromptCachingBetaImageBlockParam {
-  source: PromptCachingBetaImageBlockParam.Source;
-
-  type: 'image';
-
-  cache_control?: PromptCachingBetaCacheControlEphemeral | null;
-}
-
-export namespace PromptCachingBetaImageBlockParam {
-  export interface Source {
-    data: string;
-
-    media_type: 'image/jpeg' | 'image/png' | 'image/gif' | 'image/webp';
-
-    type: 'base64';
-  }
-}

-export interface PromptCachingBetaMessage {
-  /**
-   * Unique object identifier.
-   *
-   * The format and length of IDs may change over time.
-   */
-  id: string;
-
-  /**
-   * Content generated by the model.
-   *
-   * This is an array of content blocks, each of which has a `type` that determines
-   * its shape.
-   *
-   * Example:
-   *
-   * ```json
-   * [{ "type": "text", "text": "Hi, I'm Claude." }]
-   * ```
-   *
-   * If the request input `messages` ended with an `assistant` turn, then the
-   * response `content` will continue directly from that last turn. You can use this
-   * to constrain the model's output.
-   *
-   * For example, if the input `messages` were:
-   *
-   * ```json
-   * [
-   *   {
-   *     "role": "user",
-   *     "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
-   *   },
-   *   { "role": "assistant", "content": "The best answer is (" }
-   * ]
-   * ```
-   *
-   * Then the response `content` might be:
-   *
-   * ```json
-   * [{ "type": "text", "text": "B)" }]
-   * ```
-   */
-  content: Array<MessagesAPI.ContentBlock>;
-
-  /**
-   * The model that will complete your prompt.\n\nSee
-   * details and options.
-   */
-  model: MessagesAPI.Model;
-
-  /**
-   * Conversational role of the generated message.
-   *
-   * This will always be `"assistant"`.
-   */
-  role: 'assistant';
-
-  /**
-   * The reason that we stopped.
-   *
-   * This may be one the following values:
-   *
-   * - `"end_turn"`: the model reached a natural stopping point
-   * - `"max_tokens"`: we exceeded the requested `max_tokens` or the model's maximum
-   * - `"stop_sequence"`: one of your provided custom `stop_sequences` was generated
-   * - `"tool_use"`: the model invoked one or more tools
-   *
-   * In non-streaming mode this value is always non-null. In streaming mode, it is
-   * null in the `message_start` event and non-null otherwise.
-   */
-  stop_reason: 'end_turn' | 'max_tokens' | 'stop_sequence' | 'tool_use' | null;
-
-  /**
-   * Which custom stop sequence was generated, if any.
-   *
-   * This value will be a non-null string if one of your custom stop sequences was
-   * generated.
-   */
-  stop_sequence: string | null;
-
-  /**
-   * Object type.
-   *
-   * For Messages, this is always `"message"`.
-   */
-  type: 'message';
-
-  /**
-   * Billing and rate-limit usage.
-   *
-   * Anthropic's API bills and rate-limits by token counts, as tokens represent the
-   * underlying cost to our systems.
-   *
-   * Under the hood, the API transforms requests into a format suitable for the
-   * model. The model's output then goes through a parsing stage before becoming an
-   * API response. As a result, the token counts in `usage` will not match one-to-one
-   * with the exact visible content of an API request or response.
-   *
-   * For example, `output_tokens` will be non-zero, even for an empty string response
-   * from Claude.
-   */
-  usage: PromptCachingBetaUsage;
-}

-export interface PromptCachingBetaMessageParam {
-  content:
-    | string
-    | Array<
-        | PromptCachingBetaTextBlockParam
-        | PromptCachingBetaImageBlockParam
-        | PromptCachingBetaToolUseBlockParam
-        | PromptCachingBetaToolResultBlockParam
-      >;
-
-  role: 'user' | 'assistant';
-}
-
-export interface PromptCachingBetaTextBlockParam {
-  text: string;
-
-  type: 'text';
-
-  cache_control?: PromptCachingBetaCacheControlEphemeral | null;
-}
-
-export interface PromptCachingBetaTool {
-  /**
-   * This defines the shape of the `input` that your tool accepts and that the model
-   * will produce.
-   */
-  input_schema: PromptCachingBetaTool.InputSchema;
-
-  /**
-   * Name of the tool.
-   *
-   * This is how the tool will be called by the model and in `tool_use` blocks.
-   */
-  name: string;
-
-  cache_control?: PromptCachingBetaCacheControlEphemeral | null;
-
-  /**
-   * Description of what this tool does.
-   *
-   * Tool descriptions should be as detailed as possible. The more information that
-   * the model has about what the tool is and how to use it, the better it will
-   * perform. You can use natural language descriptions to reinforce important
-   * aspects of the tool input JSON schema.
-   */
-  description?: string;
-}
-
-export namespace PromptCachingBetaTool {
-  /**
-   * This defines the shape of the `input` that your tool accepts and that the model
-   * will produce.
-   */
-  export interface InputSchema {
-    type: 'object';
-
-    properties?: unknown | null;
-  }
-}
-
-export interface PromptCachingBetaToolResultBlockParam {
-  tool_use_id: string;
-
-  type: 'tool_result';
-
-  cache_control?: PromptCachingBetaCacheControlEphemeral | null;
-
-  content?: string | Array<PromptCachingBetaTextBlockParam | PromptCachingBetaImageBlockParam>;
-
-  is_error?: boolean;
-}
-
-export interface PromptCachingBetaToolUseBlockParam {
-  id: string;
-
-  input: unknown;
-
-  name: string;
-
-  type: 'tool_use';
-
-  cache_control?: PromptCachingBetaCacheControlEphemeral | null;
-}
-
-export interface PromptCachingBetaUsage {
-  /**
-   * The number of input tokens used to create the cache entry.
-   */
-  cache_creation_input_tokens: number | null;
-
-  /**
-   * The number of input tokens read from the cache.
-   */
-  cache_read_input_tokens: number | null;
-
-  /**
-   * The number of input tokens which were used.
-   */
-  input_tokens: number;
-
-  /**
-   * The number of output tokens which were used.
-   */
-  output_tokens: number;
-}

-export interface RawPromptCachingBetaMessageStartEvent {
-  message: PromptCachingBetaMessage;
-
-  type: 'message_start';
-}
-
-export type RawPromptCachingBetaMessageStreamEvent =
-  | RawPromptCachingBetaMessageStartEvent
-  | MessagesAPI.RawMessageDeltaEvent
-  | MessagesAPI.RawMessageStopEvent
-  | MessagesAPI.RawContentBlockStartEvent
-  | MessagesAPI.RawContentBlockDeltaEvent
-  | MessagesAPI.RawContentBlockStopEvent;

-export type MessageCreateParams = MessageCreateParamsNonStreaming | MessageCreateParamsStreaming;

-export interface MessageCreateParamsBase {
-  /**
-   * Body param: The maximum number of tokens to generate before stopping.
-   *
-   * Note that our models may stop _before_ reaching this maximum. This parameter
-   * only specifies the absolute maximum number of tokens to generate.
-   *
-   * Different models have different maximum values for this parameter. See
-   */
-  max_tokens: number;
-
-  /**
-   * Body param: Input messages.
-   *
-   * Our models are trained to operate on alternating `user` and `assistant`
-   * conversational turns. When creating a new `Message`, you specify the prior
-   * conversational turns with the `messages` parameter, and the model then generates
-   * the next `Message` in the conversation. Consecutive `user` or `assistant` turns
-   * in your request will be combined into a single turn.
-   *
-   * Each input message must be an object with a `role` and `content`. You can
-   * specify a single `user`-role message, or you can include multiple `user` and
-   * `assistant` messages.
-   *
-   * If the final message uses the `assistant` role, the response content will
-   * continue immediately from the content in that message. This can be used to
-   * constrain part of the model's response.
-   *
-   * Example with a single `user` message:
-   *
-   * ```json
-   * [{ "role": "user", "content": "Hello, Claude" }]
-   * ```
-   *
-   * Example with multiple conversational turns:
-   *
-   * ```json
-   * [
-   *   { "role": "user", "content": "Hello there." },
-   *   { "role": "assistant", "content": "Hi, I'm Claude. How can I help you?" },
-   *   { "role": "user", "content": "Can you explain LLMs in plain English?" }
-   * ]
-   * ```
-   *
-   * Example with a partially-filled response from Claude:
-   *
-   * ```json
-   * [
-   *   {
-   *     "role": "user",
-   *     "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
-   *   },
-   *   { "role": "assistant", "content": "The best answer is (" }
-   * ]
-   * ```
-   *
-   * Each input message `content` may be either a single `string` or an array of
-   * content blocks, where each block has a specific `type`. Using a `string` for
-   * `content` is shorthand for an array of one content block of type `"text"`. The
-   * following input messages are equivalent:
-   *
-   * ```json
-   * { "role": "user", "content": "Hello, Claude" }
-   * ```
-   *
-   * ```json
-   * { "role": "user", "content": [{ "type": "text", "text": "Hello, Claude" }] }
-   * ```
-   *
-   * Starting with Claude 3 models, you can also send image content blocks:
-   *
-   * ```json
-   * {
-   *   "role": "user",
-   *   "content": [
-   *     {
-   *       "type": "image",
-   *       "source": {
-   *         "type": "base64",
-   *         "media_type": "image/jpeg",
-   *         "data": "/9j/4AAQSkZJRg..."
-   *       }
-   *     },
-   *     { "type": "text", "text": "What is in this image?" }
-   *   ]
-   * }
-   * ```
-   *
-   * We currently support the `base64` source type for images, and the `image/jpeg`,
-   * `image/png`, `image/gif`, and `image/webp` media types.
-   *
-   * more input examples.
-   *
-   * Note that if you want to include a
-   * the top-level `system` parameter — there is no `"system"` role for input
-   * messages in the Messages API.
-   */
-  messages: Array<PromptCachingBetaMessageParam>;
-
-  /**
-   * Body param: The model that will complete your prompt.\n\nSee
-   * details and options.
-   */
-  model: MessagesAPI.Model;
-
-  /**
-   * Body param: An object describing metadata about the request.
-   */
-  metadata?: MessagesAPI.Metadata;
-
-  /**
-   * Body param: Custom text sequences that will cause the model to stop generating.
-   *
-   * Our models will normally stop when they have naturally completed their turn,
-   * which will result in a response `stop_reason` of `"end_turn"`.
-   *
-   * If you want the model to stop generating when it encounters custom strings of
-   * text, you can use the `stop_sequences` parameter. If the model encounters one of
-   * the custom sequences, the response `stop_reason` value will be `"stop_sequence"`
-   * and the response `stop_sequence` value will contain the matched stop sequence.
-   */
-  stop_sequences?: Array<string>;
-
-  /**
-   * Body param: Whether to incrementally stream the response using server-sent
-   * events.
-   *
-   * details.
-   */
-  stream?: boolean;
-
-  /**
-   * Body param: System prompt.
-   *
-   * A system prompt is a way of providing context and instructions to Claude, such
-   * as specifying a particular goal or role. See our
-   */
-  system?: string | Array<PromptCachingBetaTextBlockParam>;
-
-  /**
-   * Body param: Amount of randomness injected into the response.
-   *
-   * Defaults to `1.0`. Ranges from `0.0` to `1.0`. Use `temperature` closer to `0.0`
-   * for analytical / multiple choice, and closer to `1.0` for creative and
-   * generative tasks.
-   *
-   * Note that even with `temperature` of `0.0`, the results will not be fully
-   * deterministic.
-   */
-  temperature?: number;
-
-  /**
-   * Body param: How the model should use the provided tools. The model can use a
-   * specific tool, any available tool, or decide by itself.
-   */
-  tool_choice?: MessagesAPI.ToolChoice;
-
-  /**
-   * Body param: Definitions of tools that the model may use.
-   *
-   * If you include `tools` in your API request, the model may return `tool_use`
-   * content blocks that represent the model's use of those tools. You can then run
-   * those tools using the tool input generated by the model and then optionally
-   * return results back to the model using `tool_result` content blocks.
-   *
-   * Each tool definition includes:
-   *
-   * - `name`: Name of the tool.
-   * - `description`: Optional, but strongly-recommended description of the tool.
-   * - `input_schema`: JSON schema for the tool `input`
-   *   shape that the model will produce in `tool_use` output content blocks.
-   *
-   * For example, if you defined `tools` as:
-   *
-   * ```json
-   * [
-   *   {
-   *     "name": "get_stock_price",
-   *     "description": "Get the current stock price for a given ticker symbol.",
-   *     "input_schema": {
-   *       "type": "object",
-   *       "properties": {
-   *         "ticker": {
-   *           "type": "string",
-   *           "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
-   *         }
-   *       },
-   *       "required": ["ticker"]
-   *     }
-   *   }
-   * ]
-   * ```
-   *
-   * And then asked the model "What's the S&P 500 at today?", the model might produce
-   * `tool_use` content blocks in the response like this:
-   *
-   * ```json
-   * [
-   *   {
-   *     "type": "tool_use",
-   *     "id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
-   *     "name": "get_stock_price",
-   *     "input": { "ticker": "^GSPC" }
-   *   }
-   * ]
-   * ```
-   *
-   * You might then run your `get_stock_price` tool with `{"ticker": "^GSPC"}` as an
-   * input, and return the following back to the model in a subsequent `user`
-   * message:
-   *
-   * ```json
-   * [
-   *   {
-   *     "type": "tool_result",
-   *     "tool_use_id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
-   *     "content": "259.75 USD"
-   *   }
-   * ]
-   * ```
-   *
-   * Tools can be used for workflows that include running client-side tools and
-   * functions, or more generally whenever you want the model to produce a particular
-   * JSON structure of output.
-   *
-   * See our guide for more details.
-   */
-  tools?: Array<PromptCachingBetaTool>;
-
-  /**
-   * Body param: Only sample from the top K options for each subsequent token.
-   *
-   * Used to remove "long tail" low probability responses.
-   *
-   * Recommended for advanced use cases only. You usually only need to use
-   * `temperature`.
-   */
-  top_k?: number;
-
-  /**
-   * Body param: Use nucleus sampling.
-   *
-   * In nucleus sampling, we compute the cumulative distribution over all the options
-   * for each subsequent token in decreasing probability order and cut it off once it
-   * reaches a particular probability specified by `top_p`. You should either alter
-   * `temperature` or `top_p`, but not both.
-   *
-   * Recommended for advanced use cases only. You usually only need to use
-   * `temperature`.
-   */
-  top_p?: number;
-
-  /**
-   * Header param: Optional header to specify the beta version(s) you want to use.
-   */
-  betas?: Array<BetaAPI.AnthropicBeta>;
-}

-export namespace MessageCreateParams {
-  /**
-   * @deprecated use Anthropic.Messages.Metadata instead
-   */
-  export type Metadata = MessagesAPI.Metadata;
-
-  /**
-   * @deprecated use Anthropic.Messages.ToolChoiceAuto instead
-   */
-  export type ToolChoiceAuto = MessagesAPI.ToolChoiceAuto;
-
-  /**
-   * @deprecated use Anthropic.Messages.ToolChoiceAny instead
-   */
-  export type ToolChoiceAny = MessagesAPI.ToolChoiceAny;
-
-  /**
-   * @deprecated use Anthropic.Messages.ToolChoiceTool instead
-   */
-  export type ToolChoiceTool = MessagesAPI.ToolChoiceTool;
-
-  export type MessageCreateParamsNonStreaming = PromptCachingMessagesAPI.MessageCreateParamsNonStreaming;
-  export type MessageCreateParamsStreaming = PromptCachingMessagesAPI.MessageCreateParamsStreaming;
-}
-
-export interface MessageCreateParamsNonStreaming extends MessageCreateParamsBase {
-  /**
-   * Body param: Whether to incrementally stream the response using server-sent
-   * events.
-   *
-   * details.
-   */
-  stream?: false;
-}
-
-export interface MessageCreateParamsStreaming extends MessageCreateParamsBase {
-  /**
-   * Body param: Whether to incrementally stream the response using server-sent
-   * events.
-   *
-   * details.
-   */
-  stream: true;
-}
-
-export declare namespace Messages {
-  export {
-    type PromptCachingBetaCacheControlEphemeral as PromptCachingBetaCacheControlEphemeral,
-    type PromptCachingBetaImageBlockParam as PromptCachingBetaImageBlockParam,
-    type PromptCachingBetaMessage as PromptCachingBetaMessage,
-    type PromptCachingBetaMessageParam as PromptCachingBetaMessageParam,
-    type PromptCachingBetaTextBlockParam as PromptCachingBetaTextBlockParam,
-    type PromptCachingBetaTool as PromptCachingBetaTool,
-    type PromptCachingBetaToolResultBlockParam as PromptCachingBetaToolResultBlockParam,
-    type PromptCachingBetaToolUseBlockParam as PromptCachingBetaToolUseBlockParam,
-    type PromptCachingBetaUsage as PromptCachingBetaUsage,
-    type RawPromptCachingBetaMessageStartEvent as RawPromptCachingBetaMessageStartEvent,
-    type RawPromptCachingBetaMessageStreamEvent as RawPromptCachingBetaMessageStreamEvent,
-    type MessageCreateParams as MessageCreateParams,
-    type MessageCreateParamsNonStreaming as MessageCreateParamsNonStreaming,
-    type MessageCreateParamsStreaming as MessageCreateParamsStreaming,
-  };
-}
    diff --git src/resources/beta/prompt-caching/prompt-caching.ts src/resources/beta/prompt-caching/prompt-caching.ts
    deleted file mode 100644
    index 421f8621..00000000
    --- src/resources/beta/prompt-caching/prompt-caching.ts
    +++ /dev/null
    @@ -1,47 +0,0 @@
    -// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

-import { APIResource } from '../../../resource';
-import * as MessagesAPI from './messages';
-import {
-  MessageCreateParams,
-  MessageCreateParamsNonStreaming,
-  MessageCreateParamsStreaming,
-  Messages,
-  PromptCachingBetaCacheControlEphemeral,
-  PromptCachingBetaImageBlockParam,
-  PromptCachingBetaMessage,
-  PromptCachingBetaMessageParam,
-  PromptCachingBetaTextBlockParam,
-  PromptCachingBetaTool,
-  PromptCachingBetaToolResultBlockParam,
-  PromptCachingBetaToolUseBlockParam,
-  PromptCachingBetaUsage,
-  RawPromptCachingBetaMessageStartEvent,
-  RawPromptCachingBetaMessageStreamEvent,
-} from './messages';
-
-export class PromptCaching extends APIResource {
-  messages: MessagesAPI.Messages = new MessagesAPI.Messages(this._client);
-}
-
-PromptCaching.Messages = Messages;
-
-export declare namespace PromptCaching {
-  export {
-    Messages as Messages,
-    type PromptCachingBetaCacheControlEphemeral as PromptCachingBetaCacheControlEphemeral,
-    type PromptCachingBetaImageBlockParam as PromptCachingBetaImageBlockParam,
-    type PromptCachingBetaMessage as PromptCachingBetaMessage,
-    type PromptCachingBetaMessageParam as PromptCachingBetaMessageParam,
-    type PromptCachingBetaTextBlockParam as PromptCachingBetaTextBlockParam,
-    type PromptCachingBetaTool as PromptCachingBetaTool,
-    type PromptCachingBetaToolResultBlockParam as PromptCachingBetaToolResultBlockParam,
-    type PromptCachingBetaToolUseBlockParam as PromptCachingBetaToolUseBlockParam,
-    type PromptCachingBetaUsage as PromptCachingBetaUsage,
-    type RawPromptCachingBetaMessageStartEvent as RawPromptCachingBetaMessageStartEvent,
-    type RawPromptCachingBetaMessageStreamEvent as RawPromptCachingBetaMessageStreamEvent,
-    type MessageCreateParams as MessageCreateParams,
-    type MessageCreateParamsNonStreaming as MessageCreateParamsNonStreaming,
-    type MessageCreateParamsStreaming as MessageCreateParamsStreaming,
-  };
-}
    diff --git src/resources/completions.ts src/resources/completions.ts
    index a2ef4d98..2260681d 100644
    --- src/resources/completions.ts
    +++ src/resources/completions.ts
    @@ -4,7 +4,7 @@ import { APIResource } from '../resource';
    import { APIPromise } from '../core';
    import * as Core from '../core';
    import * as CompletionsAPI from './completions';
    -import * as MessagesAPI from './messages';
    +import * as MessagesAPI from './messages/messages';
    import { Stream } from '../streaming';

export class Completions extends APIResource {
diff --git src/resources/index.ts src/resources/index.ts
index 59b714ff..23366973 100644
--- src/resources/index.ts
+++ src/resources/index.ts
@@ -1,12 +1,15 @@
 // File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

+export * from './shared';
 export {
   Beta,
   type AnthropicBeta,
   type BetaAPIError,
   type BetaAuthenticationError,
+  type BetaBillingError,
   type BetaError,
   type BetaErrorResponse,
+  type BetaGatewayTimeoutError,
   type BetaInvalidRequestError,
   type BetaNotFoundError,
   type BetaOverloadedError,
@@ -22,10 +25,14 @@
 } from './completions';
 export {
   Messages,
+  type Base64PDFSource,
+  type CacheControlEphemeral,
   type ContentBlock,
   type ContentBlockDeltaEvent,
+  type ContentBlockParam,
   type ContentBlockStartEvent,
   type ContentBlockStopEvent,
+  type DocumentBlockParam,
   type ImageBlockParam,
   type InputJsonDelta,
   type InputJSONDelta,
@@ -37,6 +44,7 @@ export {
   type MessageStopEvent,
   type MessageStreamEvent,
   type MessageStreamParams,
+  type MessageTokensCount,
   type Metadata,
   type Model,
   type RawContentBlockDeltaEvent,
@@ -61,4 +69,6 @@ export {
   type MessageCreateParams,
   type MessageCreateParamsNonStreaming,
   type MessageCreateParamsStreaming,
-} from './messages';
+  type MessageCountTokensParams,
+} from './messages/messages';
+export { ModelInfosPage, Models, type ModelInfo, type ModelListParams } from './models';
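The new `Batches.results()` method in the diff below streams a `.jsonl` body in which each line is one JSON object, and results are matched to requests via `custom_id` rather than by order. A minimal standalone sketch of decoding such a body into a lookup table; `BatchResultLine` is a simplified stand-in for the SDK's result type:

```typescript
// Simplified shape for illustration; the real per-line payload carries the
// full Message or error object under `result`.
interface BatchResultLine {
  custom_id: string;
  result: { type: string };
}

// Split a .jsonl body into lines, parse each one, and index by custom_id so
// callers can look up a result regardless of the order it arrived in.
function indexResults(jsonl: string): Map<string, BatchResultLine['result']> {
  const byId = new Map<string, BatchResultLine['result']>();
  for (const line of jsonl.split('\n')) {
    if (line.trim() === '') continue; // tolerate a trailing newline
    const parsed = JSON.parse(line) as BatchResultLine;
    byId.set(parsed.custom_id, parsed.result);
  }
  return byId;
}
```

The SDK's `JSONLDecoder` does this incrementally over the HTTP stream; the sketch above assumes the whole body is already in memory.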
    diff --git a/src/resources/messages/batches.ts b/src/resources/messages/batches.ts
    new file mode 100644
    index 00000000..b4fd45e8
    --- /dev/null
    +++ src/resources/messages/batches.ts
    @@ -0,0 +1,298 @@
    +// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

+import { APIResource } from '../../resource';
+import { isRequestOptions } from '../../core';
+import * as Core from '../../core';
+import * as Shared from '../shared';
+import * as MessagesAPI from './messages';
+import { Page, type PageParams } from '../../pagination';
+import { JSONLDecoder } from '../../internal/decoders/jsonl';
+import { AnthropicError } from '../../error';
+
export class Batches extends APIResource {
  /**
   * Send a batch of Message creation requests.
   *
   * The Message Batches API can be used to process multiple Messages API requests at
   * once. Once a Message Batch is created, it begins processing immediately. Batches
   * can take up to 24 hours to complete.
   */
  create(body: BatchCreateParams, options?: Core.RequestOptions): Core.APIPromise<MessageBatch> {
    return this._client.post('/v1/messages/batches', { body, ...options });
  }

  /**
   * This endpoint is idempotent and can be used to poll for Message Batch
   * completion. To access the results of a Message Batch, make a request to the
   * `results_url` field in the response.
   */
  retrieve(messageBatchId: string, options?: Core.RequestOptions): Core.APIPromise<MessageBatch> {
    return this._client.get(`/v1/messages/batches/${messageBatchId}`, options);
  }

  /**
   * List all Message Batches within a Workspace. Most recently created batches are
   * returned first.
   */
  list(
    query?: BatchListParams,
    options?: Core.RequestOptions,
  ): Core.PagePromise<MessageBatchesPage, MessageBatch>;
  list(options?: Core.RequestOptions): Core.PagePromise<MessageBatchesPage, MessageBatch>;
  list(
    query: BatchListParams | Core.RequestOptions = {},
    options?: Core.RequestOptions,
  ): Core.PagePromise<MessageBatchesPage, MessageBatch> {
    if (isRequestOptions(query)) {
      return this.list({}, query);
    }
    return this._client.getAPIList('/v1/messages/batches', MessageBatchesPage, { query, ...options });
  }

  /**
   * Batches may be canceled any time before processing ends. Once cancellation is
   * initiated, the batch enters a `canceling` state, at which time the system may
   * complete any in-progress, non-interruptible requests before finalizing
   * cancellation.
   *
   * The number of canceled requests is specified in `request_counts`. To determine
   * which requests were canceled, check the individual results within the batch.
   * Note that cancellation may not result in any canceled requests if they were
   * non-interruptible.
   */
  cancel(messageBatchId: string, options?: Core.RequestOptions): Core.APIPromise<MessageBatch> {
    return this._client.post(`/v1/messages/batches/${messageBatchId}/cancel`, options);
  }

  /**
   * Streams the results of a Message Batch as a `.jsonl` file.
   *
   * Each line in the file is a JSON object containing the result of a single request
   * in the Message Batch. Results are not guaranteed to be in the same order as
   * requests. Use the `custom_id` field to match results to requests.
   */
  async results(
    messageBatchId: string,
    options?: Core.RequestOptions,
  ): Promise<JSONLDecoder<MessageBatchIndividualResponse>> {
    const batch = await this.retrieve(messageBatchId);
    if (!batch.results_url) {
      throw new AnthropicError(
        `No batch \`results_url\`; Has it finished processing? ${batch.processing_status} - ${batch.id}`,
      );
    }

    return this._client
      .get(batch.results_url, { ...options, __binaryResponse: true })
      ._thenUnwrap((_, props) => JSONLDecoder.fromResponse(props.response, props.controller));
  }
}
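The `results()` method above streams a `.jsonl` file whose entries are not guaranteed to arrive in request order; the docs say to match them back to requests via `custom_id`. A minimal sketch of that matching step, using simplified local types that stand in for the SDK's `MessageBatchIndividualResponse` (the field names mirror the interfaces in this diff, but the types here are illustrative, not the SDK's exports):

```typescript
// Simplified stand-ins for the SDK's batch result types.
type BatchResult =
  | { type: 'succeeded'; message: { content: string } }
  | { type: 'errored' }
  | { type: 'canceled' }
  | { type: 'expired' };

interface IndividualResponse {
  custom_id: string;
  result: BatchResult;
}

// Index streamed .jsonl entries by custom_id so each original request can
// look up its own outcome regardless of the order the results arrived in.
function indexByCustomId(entries: IndividualResponse[]): Map<string, BatchResult> {
  const index = new Map<string, BatchResult>();
  for (const entry of entries) {
    index.set(entry.custom_id, entry.result);
  }
  return index;
}
```

In real usage the entries would come from `for await (const entry of await client.messages.batches.results(batchId))` rather than an in-memory array.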

export class MessageBatchesPage extends Page<MessageBatch> {}

export interface MessageBatch {
  /**
   * Unique object identifier.
   *
   * The format and length of IDs may change over time.
   */
  id: string;

  /**
   * RFC 3339 datetime string representing the time at which the Message Batch was
   * archived and its results became unavailable.
   */
  archived_at: string | null;

  /**
   * RFC 3339 datetime string representing the time at which cancellation was
   * initiated for the Message Batch. Specified only if cancellation was initiated.
   */
  cancel_initiated_at: string | null;

  /**
   * RFC 3339 datetime string representing the time at which the Message Batch was
   * created.
   */
  created_at: string;

  /**
   * RFC 3339 datetime string representing the time at which processing for the
   * Message Batch ended. Specified only once processing ends.
   *
   * Processing ends when every request in a Message Batch has either succeeded,
   * errored, canceled, or expired.
   */
  ended_at: string | null;

  /**
   * RFC 3339 datetime string representing the time at which the Message Batch will
   * expire and end processing, which is 24 hours after creation.
   */
  expires_at: string;

  /**
   * Processing status of the Message Batch.
   */
  processing_status: 'in_progress' | 'canceling' | 'ended';

  /**
   * Tallies requests within the Message Batch, categorized by their status.
   *
   * Requests start as `processing` and move to one of the other statuses only once
   * processing of the entire batch ends. The sum of all values always matches the
   * total number of requests in the batch.
   */
  request_counts: MessageBatchRequestCounts;

  /**
   * URL to a `.jsonl` file containing the results of the Message Batch requests.
   * Specified only once processing ends.
   *
   * Results in the file are not guaranteed to be in the same order as requests. Use
   * the `custom_id` field to match results to requests.
   */
  results_url: string | null;

  /**
   * Object type.
   *
   * For Message Batches, this is always `"message_batch"`.
   */
  type: 'message_batch';
}

export interface MessageBatchCanceledResult {
  type: 'canceled';
}

export interface MessageBatchErroredResult {
  error: Shared.ErrorResponse;

  type: 'errored';
}

export interface MessageBatchExpiredResult {
  type: 'expired';
}

export interface MessageBatchIndividualResponse {
  /**
   * Developer-provided ID created for each request in a Message Batch. Useful for
   * matching results to requests, as results may be given out of request order.
   *
   * Must be unique for each request within the Message Batch.
   */
  custom_id: string;

  /**
   * Processing result for this request.
   *
   * Contains a Message output if processing was successful, an error response if
   * processing failed, or the reason why processing was not attempted, such as
   * cancellation or expiration.
   */
  result: MessageBatchResult;
}

export interface MessageBatchRequestCounts {
  /**
   * Number of requests in the Message Batch that have been canceled.
   *
   * This is zero until processing of the entire Message Batch has ended.
   */
  canceled: number;

  /**
   * Number of requests in the Message Batch that encountered an error.
   *
   * This is zero until processing of the entire Message Batch has ended.
   */
  errored: number;

  /**
   * Number of requests in the Message Batch that have expired.
   *
   * This is zero until processing of the entire Message Batch has ended.
   */
  expired: number;

  /**
   * Number of requests in the Message Batch that are processing.
   */
  processing: number;

  /**
   * Number of requests in the Message Batch that have completed successfully.
   *
   * This is zero until processing of the entire Message Batch has ended.
   */
  succeeded: number;
}
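The docs above state that the five tallies always sum to the total number of requests, and that requests only leave `processing` once the whole batch ends. Those two facts can be turned into small helpers; a minimal sketch, with a local type mirroring `MessageBatchRequestCounts`:

```typescript
// Shape mirrors the MessageBatchRequestCounts interface above.
interface RequestCounts {
  canceled: number;
  errored: number;
  expired: number;
  processing: number;
  succeeded: number;
}

// The five tallies always sum to the total number of requests in the batch,
// so the total can be derived directly from the counts.
function totalRequests(counts: RequestCounts): number {
  return counts.canceled + counts.errored + counts.expired + counts.processing + counts.succeeded;
}

// Requests move out of `processing` only once the entire batch has ended.
function batchFullyProcessed(counts: RequestCounts): boolean {
  return counts.processing === 0;
}
```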

/**
 * Processing result for this request.
 *
 * Contains a Message output if processing was successful, an error response if
 * processing failed, or the reason why processing was not attempted, such as
 * cancellation or expiration.
 */
export type MessageBatchResult =
  | MessageBatchSucceededResult
  | MessageBatchErroredResult
  | MessageBatchCanceledResult
  | MessageBatchExpiredResult;

export interface MessageBatchSucceededResult {
  message: MessagesAPI.Message;

  type: 'succeeded';
}
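`MessageBatchResult` is a discriminated union on the `type` field, so a `switch` narrows each variant without casts. A sketch with a simplified local union (the `type` discriminants match the SDK's; the payloads are pared down for illustration):

```typescript
// Local stand-in for the MessageBatchResult union above.
type BatchOutcome =
  | { type: 'succeeded'; message: { id: string } }
  | { type: 'errored'; error: { message: string } }
  | { type: 'canceled' }
  | { type: 'expired' };

// Switching on the `type` discriminant lets TypeScript narrow the payload
// in each branch: `result.message` is only accessible under 'succeeded'.
function describeOutcome(result: BatchOutcome): string {
  switch (result.type) {
    case 'succeeded':
      return `succeeded: ${result.message.id}`;
    case 'errored':
      return `errored: ${result.error.message}`;
    case 'canceled':
      return 'canceled before processing';
    case 'expired':
      return 'expired after 24 hours';
  }
}
```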

export interface BatchCreateParams {
  /**
   * List of requests for prompt completion. Each is an individual request to create
   * a Message.
   */
  requests: Array<BatchCreateParams.Request>;
}

export namespace BatchCreateParams {
  export interface Request {
    /**
     * Developer-provided ID created for each request in a Message Batch. Useful for
     * matching results to requests, as results may be given out of request order.
     *
     * Must be unique for each request within the Message Batch.
     */
    custom_id: string;

    /**
     * Messages API creation parameters for the individual request.
     *
     * See the [Messages API reference](/en/api/messages) for full documentation on
     * available parameters.
     */
    params: MessagesAPI.MessageCreateParamsNonStreaming;
  }
}

export interface BatchListParams extends PageParams {}

Batches.MessageBatchesPage = MessageBatchesPage;

export declare namespace Batches {
  export {
    type MessageBatch as MessageBatch,
    type MessageBatchCanceledResult as MessageBatchCanceledResult,
    type MessageBatchErroredResult as MessageBatchErroredResult,
    type MessageBatchExpiredResult as MessageBatchExpiredResult,
    type MessageBatchIndividualResponse as MessageBatchIndividualResponse,
    type MessageBatchRequestCounts as MessageBatchRequestCounts,
    type MessageBatchResult as MessageBatchResult,
    type MessageBatchSucceededResult as MessageBatchSucceededResult,
    MessageBatchesPage as MessageBatchesPage,
    type BatchCreateParams as BatchCreateParams,
    type BatchListParams as BatchListParams,
  };
}
    diff --git a/src/resources/messages/index.ts b/src/resources/messages/index.ts
    new file mode 100644
    index 00000000..10308d2a
    --- /dev/null
    +++ src/resources/messages/index.ts
    @@ -0,0 +1,63 @@
    +// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

export {
  MessageBatchesPage,
  Batches,
  type MessageBatch,
  type MessageBatchCanceledResult,
  type MessageBatchErroredResult,
  type MessageBatchExpiredResult,
  type MessageBatchIndividualResponse,
  type MessageBatchRequestCounts,
  type MessageBatchResult,
  type MessageBatchSucceededResult,
  type BatchCreateParams,
  type BatchListParams,
} from './batches';
export {
  Messages,
  type Base64PDFSource,
  type CacheControlEphemeral,
  type ContentBlock,
  type ContentBlockDeltaEvent,
  type ContentBlockParam,
  type ContentBlockStartEvent,
  type ContentBlockStopEvent,
  type DocumentBlockParam,
  type ImageBlockParam,
  type InputJSONDelta,
  type Message,
  type MessageDeltaEvent,
  type MessageDeltaUsage,
  type MessageParam,
  type MessageStartEvent,
  type MessageStopEvent,
  type MessageStreamEvent,
  type MessageTokensCount,
  type Metadata,
  type Model,
  type RawContentBlockDeltaEvent,
  type RawContentBlockStartEvent,
  type RawContentBlockStopEvent,
  type RawMessageDeltaEvent,
  type RawMessageStartEvent,
  type RawMessageStopEvent,
  type RawMessageStreamEvent,
  type TextBlock,
  type TextBlockParam,
  type TextDelta,
  type Tool,
  type ToolChoice,
  type ToolChoiceAny,
  type ToolChoiceAuto,
  type ToolChoiceTool,
  type ToolResultBlockParam,
  type ToolUseBlock,
  type ToolUseBlockParam,
  type Usage,
  type MessageCreateParams,
  type MessageCreateParamsBase,
  type MessageCreateParamsNonStreaming,
  type MessageCreateParamsStreaming,
  type MessageCountTokensParams,
} from './messages';
    diff --git src/resources/messages.ts src/resources/messages/messages.ts
    similarity index 71%
    rename from src/resources/messages.ts
    rename to src/resources/messages/messages.ts
    index a13c43f4..a1affbf5 100644
    --- src/resources/messages.ts
    +++ src/resources/messages/messages.ts
    @@ -1,15 +1,32 @@
    // File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

-import { APIResource } from '../resource';
-import { APIPromise } from '../core';
-import * as Core from '../core';
+import { APIResource } from '../../resource';
+import { APIPromise } from '../../core';
+import * as Core from '../../core';
import * as MessagesAPI from './messages';
-import { Stream } from '../streaming';
-import { MessageStream } from '../lib/MessageStream';

-export { MessageStream } from '../lib/MessageStream';
+import * as BatchesAPI from './batches';
+import {
+  BatchCreateParams,
+  BatchListParams,
+  Batches,
+  MessageBatch,
+  MessageBatchCanceledResult,
+  MessageBatchErroredResult,
+  MessageBatchExpiredResult,
+  MessageBatchIndividualResponse,
+  MessageBatchRequestCounts,
+  MessageBatchResult,
+  MessageBatchSucceededResult,
+  MessageBatchesPage,
+} from './batches';
    +import { Stream } from '../../streaming';
    +import { MessageStream } from '../../lib/MessageStream';

+export { MessageStream } from '../../lib/MessageStream';

export class Messages extends APIResource {

  batches: BatchesAPI.Batches = new BatchesAPI.Batches(this._client);

  /**
   * Send a structured list of input messages with text and/or image content, and the
   * model will generate the next message in the conversation.
      @@ -51,20 +68,62 @@ export class Messages extends APIResource {
  stream(body: MessageStreamParams, options?: Core.RequestOptions): MessageStream {
    return MessageStream.createMessage(this, body, options);
  }

  /**
   * Count the number of tokens in a Message.
   *
   * The Token Count API can be used to count the number of tokens in a Message,
   * including tools, images, and documents, without creating it.
   */
  countTokens(
    body: MessageCountTokensParams,
    options?: Core.RequestOptions,
  ): Core.APIPromise<MessageTokensCount> {
    return this._client.post('/v1/messages/count_tokens', { body, ...options });
  }
}

export interface Base64PDFSource {
  data: string;

  media_type: 'application/pdf';

  type: 'base64';
}

export interface CacheControlEphemeral {
  type: 'ephemeral';
}

export type ContentBlock = TextBlock | ToolUseBlock;

export type ContentBlockDeltaEvent = RawContentBlockDeltaEvent;

export type ContentBlockParam =
  | TextBlockParam
  | ImageBlockParam
  | ToolUseBlockParam
  | ToolResultBlockParam
  | DocumentBlockParam;

export type ContentBlockStartEvent = RawContentBlockStartEvent;

export type ContentBlockStopEvent = RawContentBlockStopEvent;

export interface DocumentBlockParam {
  source: Base64PDFSource;

  type: 'document';

  cache_control?: CacheControlEphemeral | null;
}

export interface ImageBlockParam {
  source: ImageBlockParam.Source;

  type: 'image';

  cache_control?: CacheControlEphemeral | null;
}

export namespace ImageBlockParam {
@@ -200,7 +259,7 @@ export interface MessageDeltaUsage {
}

export interface MessageParam {
  content: string | Array<ContentBlockParam>;

  role: 'user' | 'assistant';
}
    @@ -211,6 +270,14 @@ export type MessageStopEvent = RawMessageStopEvent;

export type MessageStreamEvent = RawMessageStreamEvent;

export interface MessageTokensCount {
  /**
   * The total number of tokens across the provided list of messages, system prompt,
   * and tools.
   */
  input_tokens: number;
}

export interface Metadata {
/**
* An external identifier for the user who is associated with the request.
@@ -239,8 +306,7 @@ export type Model =
| 'claude-3-sonnet-20240229'
| 'claude-3-haiku-20240307'
| 'claude-2.1'

  | 'claude-2.0';

type DeprecatedModelsType = {
[K in Model]?: string;
@@ -334,6 +400,8 @@ export interface TextBlockParam {
text: string;

type: 'text';
  cache_control?: CacheControlEphemeral | null;
    }

export interface TextDelta {
@@ -358,6 +426,8 @@ export interface Tool {
*/
name: string;

  cache_control?: CacheControlEphemeral | null;

  /**
   * Description of what this tool does.
@@ -445,6 +515,8 @@ export interface ToolResultBlockParam {

type: 'tool_result';

  cache_control?: CacheControlEphemeral | null;

  content?: string | Array<TextBlockParam | ImageBlockParam>;

  is_error?: boolean;
    @@ -468,9 +540,21 @@ export interface ToolUseBlockParam {
    name: string;

    type: 'tool_use';

  cache_control?: CacheControlEphemeral | null;
    }

export interface Usage {
  /**
   * The number of input tokens used to create the cache entry.
   */
  cache_creation_input_tokens: number | null;

  /**
   * The number of input tokens read from the cache.
   */
  cache_read_input_tokens: number | null;

  /**
   * The number of input tokens which were used.
   */
      @@ -790,12 +874,205 @@ export interface MessageCreateParamsStreaming extends MessageCreateParamsBase {

export type MessageStreamParams = MessageCreateParamsBase;

export interface MessageCountTokensParams {
  /**
   * Input messages.
   *
   * Our models are trained to operate on alternating `user` and `assistant`
   * conversational turns. When creating a new `Message`, you specify the prior
   * conversational turns with the `messages` parameter, and the model then generates
   * the next `Message` in the conversation. Consecutive `user` or `assistant` turns
   * in your request will be combined into a single turn.
   *
   * Each input message must be an object with a `role` and `content`. You can
   * specify a single `user`-role message, or you can include multiple `user` and
   * `assistant` messages.
   *
   * If the final message uses the `assistant` role, the response content will
   * continue immediately from the content in that message. This can be used to
   * constrain part of the model's response.
   *
   * Example with a single `user` message:
   *
   * ```json
   * [{ "role": "user", "content": "Hello, Claude" }]
   * ```
   *
   * Example with multiple conversational turns:
   *
   * ```json
   * [
   *   { "role": "user", "content": "Hello there." },
   *   { "role": "assistant", "content": "Hi, I'm Claude. How can I help you?" },
   *   { "role": "user", "content": "Can you explain LLMs in plain English?" }
   * ]
   * ```
   *
   * Example with a partially-filled response from Claude:
   *
   * ```json
   * [
   *   {
   *     "role": "user",
   *     "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
   *   },
   *   { "role": "assistant", "content": "The best answer is (" }
   * ]
   * ```
   *
   * Each input message `content` may be either a single `string` or an array of
   * content blocks, where each block has a specific `type`. Using a `string` for
   * `content` is shorthand for an array of one content block of type `"text"`. The
   * following input messages are equivalent:
   *
   * ```json
   * { "role": "user", "content": "Hello, Claude" }
   * ```
   *
   * ```json
   * { "role": "user", "content": [{ "type": "text", "text": "Hello, Claude" }] }
   * ```
   *
   * Starting with Claude 3 models, you can also send image content blocks:
   *
   * ```json
   * {
   *   "role": "user",
   *   "content": [
   *     {
   *       "type": "image",
   *       "source": {
   *         "type": "base64",
   *         "media_type": "image/jpeg",
   *         "data": "/9j/4AAQSkZJRg..."
   *       }
   *     },
   *     { "type": "text", "text": "What is in this image?" }
   *   ]
   * }
   * ```
   *
   * We currently support the `base64` source type for images, and the `image/jpeg`,
   * `image/png`, `image/gif`, and `image/webp` media types.
   *
   * See [examples](https://docs.anthropic.com/en/api/messages-examples) for more
   * input examples.
   *
   * Note that if you want to include a system prompt, you can use
   * the top-level `system` parameter — there is no `"system"` role for input
   * messages in the Messages API.
   */
  messages: Array<MessageParam>;

  /**
   * The model that will complete your prompt. See
   * [models](https://docs.anthropic.com/en/docs/models-overview) for additional
   * details and options.
   */
  model: Model;

  /**
   * System prompt.
   *
   * A system prompt is a way of providing context and instructions to Claude, such
   * as specifying a particular goal or role. See our
   * [guide to system prompts](https://docs.anthropic.com/en/docs/system-prompts).
   */
  system?: string | Array<TextBlockParam>;

  /**
   * How the model should use the provided tools. The model can use a specific tool,
   * any available tool, or decide by itself.
   */
  tool_choice?: ToolChoice;

  /**
   * Definitions of tools that the model may use.
   *
   * If you include `tools` in your API request, the model may return `tool_use`
   * content blocks that represent the model's use of those tools. You can then run
   * those tools using the tool input generated by the model and then optionally
   * return results back to the model using `tool_result` content blocks.
   *
   * Each tool definition includes:
   *
   * - `name`: Name of the tool.
   * - `description`: Optional, but strongly-recommended description of the tool.
   * - `input_schema`: [JSON schema](https://json-schema.org/) for the tool `input`
   *   shape that the model will produce in `tool_use` output content blocks.
   *
   * For example, if you defined `tools` as:
   *
   * ```json
   * [
   *   {
   *     "name": "get_stock_price",
   *     "description": "Get the current stock price for a given ticker symbol.",
   *     "input_schema": {
   *       "type": "object",
   *       "properties": {
   *         "ticker": {
   *           "type": "string",
   *           "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
   *         }
   *       },
   *       "required": ["ticker"]
   *     }
   *   }
   * ]
   * ```
   *
   * And then asked the model "What's the S&P 500 at today?", the model might produce
   * `tool_use` content blocks in the response like this:
   *
   * ```json
   * [
   *   {
   *     "type": "tool_use",
   *     "id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
   *     "name": "get_stock_price",
   *     "input": { "ticker": "^GSPC" }
   *   }
   * ]
   * ```
   *
   * You might then run your `get_stock_price` tool with `{"ticker": "^GSPC"}` as an
   * input, and return the following back to the model in a subsequent `user`
   * message:
   *
   * ```json
   * [
   *   {
   *     "type": "tool_result",
   *     "tool_use_id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
   *     "content": "259.75 USD"
   *   }
   * ]
   * ```
   *
   * Tools can be used for workflows that include running client-side tools and
   * functions, or more generally whenever you want the model to produce a particular
   * JSON structure of output.
   *
   * See our [guide](https://docs.anthropic.com/en/docs/tool-use) for more details.
   */
  tools?: Array<Tool>;
}

+Messages.Batches = Batches;
+Messages.MessageBatchesPage = MessageBatchesPage;
+
export declare namespace Messages {
export {

  • type Base64PDFSource as Base64PDFSource,
  • type CacheControlEphemeral as CacheControlEphemeral,
    type ContentBlock as ContentBlock,
    type ContentBlockDeltaEvent as ContentBlockDeltaEvent,
  • type ContentBlockParam as ContentBlockParam,
    type ContentBlockStartEvent as ContentBlockStartEvent,
    type ContentBlockStopEvent as ContentBlockStopEvent,
  • type DocumentBlockParam as DocumentBlockParam,
    type ImageBlockParam as ImageBlockParam,
    type InputJsonDelta as InputJsonDelta,
    type InputJSONDelta as InputJSONDelta,
    @@ -806,6 +1083,7 @@ export declare namespace Messages {
    type MessageStartEvent as MessageStartEvent,
    type MessageStopEvent as MessageStopEvent,
    type MessageStreamEvent as MessageStreamEvent,
  • type MessageTokensCount as MessageTokensCount,
    type Metadata as Metadata,
    type Model as Model,
    type RawContentBlockDeltaEvent as RawContentBlockDeltaEvent,
    @@ -831,5 +1109,21 @@ export declare namespace Messages {
    type MessageCreateParamsNonStreaming as MessageCreateParamsNonStreaming,
    type MessageCreateParamsStreaming as MessageCreateParamsStreaming,
    type MessageStreamParams as MessageStreamParams,
  type MessageCountTokensParams as MessageCountTokensParams,
  };
  export {
    Batches as Batches,
    type MessageBatch as MessageBatch,
    type MessageBatchCanceledResult as MessageBatchCanceledResult,
    type MessageBatchErroredResult as MessageBatchErroredResult,
    type MessageBatchExpiredResult as MessageBatchExpiredResult,
    type MessageBatchIndividualResponse as MessageBatchIndividualResponse,
    type MessageBatchRequestCounts as MessageBatchRequestCounts,
    type MessageBatchResult as MessageBatchResult,
    type MessageBatchSucceededResult as MessageBatchSucceededResult,
    MessageBatchesPage as MessageBatchesPage,
    type BatchCreateParams as BatchCreateParams,
    type BatchListParams as BatchListParams,
    };
    }
    diff --git a/src/resources/models.ts b/src/resources/models.ts
    new file mode 100644
    index 00000000..50e80399
    --- /dev/null
    +++ src/resources/models.ts
    @@ -0,0 +1,75 @@
    +// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

+import { APIResource } from '../resource';
+import { isRequestOptions } from '../core';
+import * as Core from '../core';
+import { Page, type PageParams } from '../pagination';
+
export class Models extends APIResource {
  /**
   * Get a specific model.
   *
   * The Models API response can be used to determine information about a specific
   * model or resolve a model alias to a model ID.
   */
  retrieve(modelId: string, options?: Core.RequestOptions): Core.APIPromise<ModelInfo> {
    return this._client.get(`/v1/models/${modelId}`, options);
  }

  /**
   * List available models.
   *
   * The Models API response can be used to determine which models are available for
   * use in the API. More recently released models are listed first.
   */
  list(query?: ModelListParams, options?: Core.RequestOptions): Core.PagePromise<ModelInfosPage, ModelInfo>;
  list(options?: Core.RequestOptions): Core.PagePromise<ModelInfosPage, ModelInfo>;
  list(
    query: ModelListParams | Core.RequestOptions = {},
    options?: Core.RequestOptions,
  ): Core.PagePromise<ModelInfosPage, ModelInfo> {
    if (isRequestOptions(query)) {
      return this.list({}, query);
    }
    return this._client.getAPIList('/v1/models', ModelInfosPage, { query, ...options });
  }
}

export class ModelInfosPage extends Page<ModelInfo> {}

export interface ModelInfo {
  /**
   * Unique model identifier.
   */
  id: string;

  /**
   * RFC 3339 datetime string representing the time at which the model was released.
   * May be set to an epoch value if the release date is unknown.
   */
  created_at: string;

  /**
   * A human-readable name for the model.
   */
  display_name: string;

  /**
   * Object type.
   *
   * For Models, this is always `"model"`.
   */
  type: 'model';
}

export interface ModelListParams extends PageParams {}

Models.ModelInfosPage = ModelInfosPage;

export declare namespace Models {
  export {
    type ModelInfo as ModelInfo,
    ModelInfosPage as ModelInfosPage,
    type ModelListParams as ModelListParams,
  };
}
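The `list()` docs above note that more recently released models come first, keyed by the RFC 3339 `created_at` field. That ordering can be reproduced locally over `ModelInfo`-shaped data; a sketch with a simplified stand-in type (illustrative only, not the SDK's export):

```typescript
// Minimal stand-in for the ModelInfo interface above.
interface ModelInfoLike {
  id: string;
  created_at: string; // RFC 3339 datetime
  display_name: string;
}

// RFC 3339 timestamps in the same UTC offset compare correctly as strings,
// so a lexicographic sort in descending order puts the newest model first,
// matching the API's own listing order.
function newestFirst(models: ModelInfoLike[]): ModelInfoLike[] {
  return [...models].sort((a, b) => b.created_at.localeCompare(a.created_at));
}
```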
    diff --git a/src/resources/shared.ts b/src/resources/shared.ts
    new file mode 100644
    index 00000000..d731c1f9
    --- /dev/null
    +++ src/resources/shared.ts
    @@ -0,0 +1,72 @@
    +// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

export interface APIErrorObject {
  message: string;

  type: 'api_error';
}

export interface AuthenticationError {
  message: string;

  type: 'authentication_error';
}

export interface BillingError {
  message: string;

  type: 'billing_error';
}

export type ErrorObject =
  | InvalidRequestError
  | AuthenticationError
  | BillingError
  | PermissionError
  | NotFoundError
  | RateLimitError
  | GatewayTimeoutError
  | APIErrorObject
  | OverloadedError;

export interface ErrorResponse {
  error: ErrorObject;

  type: 'error';
}

export interface GatewayTimeoutError {
  message: string;

  type: 'timeout_error';
}

export interface InvalidRequestError {
  message: string;

  type: 'invalid_request_error';
}

export interface NotFoundError {
  message: string;

  type: 'not_found_error';
}

export interface OverloadedError {
  message: string;

  type: 'overloaded_error';
}

export interface PermissionError {
  message: string;

  type: 'permission_error';
}

export interface RateLimitError {
  message: string;

  type: 'rate_limit_error';
}
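`ErrorObject` is a discriminated union on the `type` string, which makes it easy to branch on error category, for instance when deciding whether a failed request is worth retrying. A minimal sketch; the discriminant strings mirror the interfaces above, but the retry policy itself is an illustrative assumption, not part of the SDK:

```typescript
// Discriminant strings mirror the error interfaces above.
type ErrorKind =
  | 'invalid_request_error'
  | 'authentication_error'
  | 'billing_error'
  | 'permission_error'
  | 'not_found_error'
  | 'rate_limit_error'
  | 'timeout_error'
  | 'api_error'
  | 'overloaded_error';

// Assumed policy: transient server-side conditions (rate limits, timeouts,
// internal errors, overload) are retried with backoff; client-side errors
// such as bad requests or auth failures are not.
function isRetryable(type: ErrorKind): boolean {
  return (
    type === 'rate_limit_error' ||
    type === 'timeout_error' ||
    type === 'api_error' ||
    type === 'overloaded_error'
  );
}
```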
    diff --git src/version.ts src/version.ts
    index ab5165fb..4a46c186 100644
    --- src/version.ts
    +++ src/version.ts
    @@ -1 +1 @@
    -export const VERSION = '0.32.1'; // x-release-please-version
    +export const VERSION = '0.33.1'; // x-release-please-version
    diff --git tests/api-resources/MessageStream.test.ts tests/api-resources/MessageStream.test.ts
    index 81b9c81e..0051d397 100644
    --- tests/api-resources/MessageStream.test.ts
    +++ tests/api-resources/MessageStream.test.ts
    @@ -149,7 +149,12 @@ describe('MessageStream class', () => {
    model: 'claude-3-opus-20240229',
    stop_reason: 'end_turn',
    stop_sequence: null,
-      usage: { output_tokens: 6, input_tokens: 10 },
+      usage: {
+        output_tokens: 6,
+        input_tokens: 10,
+        cache_creation_input_tokens: null,
+        cache_read_input_tokens: null,
+      },
     }),
    
    );

@@ -209,22 +214,22 @@ describe('MessageStream class', () => {
},
{
"args": [

  •        "{"type":"message_start","message":{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10}}}",
    
  •        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10}}",
    
  •        "{"type":"message_start","message":{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10,"cache_creation_input_tokens":null,"cache_read_input_tokens":null}}}",
    
  •        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10,"cache_creation_input_tokens":null,"cache_read_input_tokens":null}}",
         ],
         "type": "streamEvent",
       },
       {
         "args": [
           "{"type":"content_block_start","content_block":{"type":"text","text":""},"index":0}",
    
  •        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":""}],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10}}",
    
  •        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":""}],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10,"cache_creation_input_tokens":null,"cache_read_input_tokens":null}}",
         ],
         "type": "streamEvent",
       },
       {
         "args": [
           "{"type":"content_block_delta","delta":{"type":"text_delta","text":"Hello"},"index":0}",
    
  •        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello"}],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10}}",
    
  •        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello"}],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10,"cache_creation_input_tokens":null,"cache_read_input_tokens":null}}",
         ],
         "type": "streamEvent",
       },
    

@@ -238,7 +243,7 @@ describe('MessageStream class', () => {
{
"args": [
"{"type":"content_block_delta","delta":{"type":"text_delta","text":" ther"},"index":0}",

  •        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello ther"}],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10}}",
    
  •        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello ther"}],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10,"cache_creation_input_tokens":null,"cache_read_input_tokens":null}}",
         ],
         "type": "streamEvent",
       },
    

@@ -252,7 +257,7 @@ describe('MessageStream class', () => {
{
"args": [
"{"type":"content_block_delta","delta":{"type":"text_delta","text":"e!"},"index":0}",

  •        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello there!"}],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10}}",
    
  •        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello there!"}],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10,"cache_creation_input_tokens":null,"cache_read_input_tokens":null}}",
         ],
         "type": "streamEvent",
       },
    

@@ -266,7 +271,7 @@ describe('MessageStream class', () => {
{
"args": [
"{"type":"content_block_stop","index":0}",

  •        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello there!"}],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10}}",
    
  •        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello there!"}],"model":"claude-3-opus-20240229","stop_reason":null,"stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10,"cache_creation_input_tokens":null,"cache_read_input_tokens":null}}",
         ],
         "type": "streamEvent",
       },
    

@@ -279,26 +284,26 @@ describe('MessageStream class', () => {
{
"args": [
"{"type":"message_delta","usage":{"output_tokens":6},"delta":{"stop_reason":"end_turn","stop_sequence":null}}",

-        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello there!"}],"model":"claude-3-opus-20240229","stop_reason":"end_turn","stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10}}",
+        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello there!"}],"model":"claude-3-opus-20240229","stop_reason":"end_turn","stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10,"cache_creation_input_tokens":null,"cache_read_input_tokens":null}}",
         ],
         "type": "streamEvent",
       },
       {
         "args": [
           "{"type":"message_stop"}",
    
-        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello there!"}],"model":"claude-3-opus-20240229","stop_reason":"end_turn","stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10}}",
+        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello there!"}],"model":"claude-3-opus-20240229","stop_reason":"end_turn","stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10,"cache_creation_input_tokens":null,"cache_read_input_tokens":null}}",
         ],
         "type": "streamEvent",
       },
       {
         "args": [
    
-        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello there!"}],"model":"claude-3-opus-20240229","stop_reason":"end_turn","stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10}}",
+        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello there!"}],"model":"claude-3-opus-20240229","stop_reason":"end_turn","stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10,"cache_creation_input_tokens":null,"cache_read_input_tokens":null}}",
         ],
         "type": "message",
       },
       {
         "args": [
    
-        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello there!"}],"model":"claude-3-opus-20240229","stop_reason":"end_turn","stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10}}",
+        "{"type":"message","id":"msg_01hhptzfxdaeehfxfv070yb6b8","role":"assistant","content":[{"type":"text","text":"Hello there!"}],"model":"claude-3-opus-20240229","stop_reason":"end_turn","stop_sequence":null,"usage":{"output_tokens":6,"input_tokens":10,"cache_creation_input_tokens":null,"cache_read_input_tokens":null}}",
         ],
         "type": "finalMessage",
       },
    

@@ -326,6 +331,8 @@ describe('MessageStream class', () => {
"stop_sequence": null,
"type": "message",
"usage": {

+      "cache_creation_input_tokens": null,
+      "cache_read_input_tokens": null,
         "input_tokens": 10,
         "output_tokens": 6,
       },
    

@@ -353,7 +360,12 @@ describe('MessageStream class', () => {
model: 'claude-3-opus-20240229',
stop_reason: 'end_turn',
stop_sequence: null,

-    usage: { output_tokens: 6, input_tokens: 10 },
+    usage: {
+      output_tokens: 6,
+      input_tokens: 10,
+      cache_creation_input_tokens: null,
+      cache_read_input_tokens: null,
+    },
     }),
    
    );

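The snapshot churn above boils down to two new nullable fields on the message `usage` object: `cache_creation_input_tokens` and `cache_read_input_tokens`. A minimal sketch of consuming them defensively (the `Usage` interface mirrors the fields in the snapshots; `billableInputTokens` is a hypothetical helper, not part of the SDK):

```typescript
// Shape of the usage block after this update: the cache token counters
// are present but may be null (field names taken from the snapshot diff
// above; this local interface is an approximation, not the SDK export).
interface Usage {
  input_tokens: number;
  output_tokens: number;
  cache_creation_input_tokens: number | null;
  cache_read_input_tokens: number | null;
}

// Treat null cache counters as zero when summing input-side tokens.
function billableInputTokens(usage: Usage): number {
  return (
    usage.input_tokens +
    (usage.cache_creation_input_tokens ?? 0) +
    (usage.cache_read_input_tokens ?? 0)
  );
}

const usage: Usage = {
  input_tokens: 10,
  output_tokens: 6,
  cache_creation_input_tokens: null,
  cache_read_input_tokens: null,
};
console.log(billableInputTokens(usage)); // 10
```

Treating `null` as zero keeps token accounting stable across SDK versions that do or do not report cache usage.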
diff --git tests/api-resources/beta/messages/batches.test.ts tests/api-resources/beta/messages/batches.test.ts
index ed2027c8..e395910a 100644
--- tests/api-resources/beta/messages/batches.test.ts
+++ tests/api-resources/beta/messages/batches.test.ts
@@ -20,22 +20,6 @@ describe('resource batches', () => {
model: 'claude-3-5-sonnet-20241022',
},
},

-    {
-      custom_id: 'my-custom-id-1',
-      params: {
-        max_tokens: 1024,
-        messages: [{ content: 'Hello, world', role: 'user' }],
-        model: 'claude-3-5-sonnet-20241022',
-      },
-    },
-    {
-      custom_id: 'my-custom-id-1',
-      params: {
-        max_tokens: 1024,
-        messages: [{ content: 'Hello, world', role: 'user' }],
-        model: 'claude-3-5-sonnet-20241022',
-      },
-    },
     ],
    
    });
    const rawResponse = await responsePromise.asResponse();
    @@ -57,143 +41,7 @@ describe('resource batches', () => {
    messages: [{ content: 'Hello, world', role: 'user' }],
    model: 'claude-3-5-sonnet-20241022',
    metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
-        stop_sequences: ['string', 'string', 'string'],
-        stream: false,
-        system: [
-          { text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } },
-        ],
-        temperature: 1,
-        tool_choice: { type: 'auto', disable_parallel_tool_use: true },
-        tools: [
-          {
-            input_schema: {
-              type: 'object',
-              properties: {
-                location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-                unit: {
-                  description: 'Unit for the output - one of (celsius, fahrenheit)',
-                  type: 'string',
-                },
-              },
-            },
-            name: 'x',
-            cache_control: { type: 'ephemeral' },
-            description: 'Get the current weather in a given location',
-            type: 'custom',
-          },
-          {
-            input_schema: {
-              type: 'object',
-              properties: {
-                location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-                unit: {
-                  description: 'Unit for the output - one of (celsius, fahrenheit)',
-                  type: 'string',
-                },
-              },
-            },
-            name: 'x',
-            cache_control: { type: 'ephemeral' },
-            description: 'Get the current weather in a given location',
-            type: 'custom',
-          },
-          {
-            input_schema: {
-              type: 'object',
-              properties: {
-                location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-                unit: {
-                  description: 'Unit for the output - one of (celsius, fahrenheit)',
-                  type: 'string',
-                },
-              },
-            },
-            name: 'x',
-            cache_control: { type: 'ephemeral' },
-            description: 'Get the current weather in a given location',
-            type: 'custom',
-          },
-        ],
-        top_k: 5,
-        top_p: 0.7,
-      },
-    },
-    {
-      custom_id: 'my-custom-id-1',
-      params: {
-        max_tokens: 1024,
-        messages: [{ content: 'Hello, world', role: 'user' }],
-        model: 'claude-3-5-sonnet-20241022',
-        metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
-        stop_sequences: ['string', 'string', 'string'],
-        stream: false,
-        system: [
-          { text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } },
-        ],
-        temperature: 1,
-        tool_choice: { type: 'auto', disable_parallel_tool_use: true },
-        tools: [
-          {
-            input_schema: {
-              type: 'object',
-              properties: {
-                location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-                unit: {
-                  description: 'Unit for the output - one of (celsius, fahrenheit)',
-                  type: 'string',
-                },
-              },
-            },
-            name: 'x',
-            cache_control: { type: 'ephemeral' },
-            description: 'Get the current weather in a given location',
-            type: 'custom',
-          },
-          {
-            input_schema: {
-              type: 'object',
-              properties: {
-                location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-                unit: {
-                  description: 'Unit for the output - one of (celsius, fahrenheit)',
-                  type: 'string',
-                },
-              },
-            },
-            name: 'x',
-            cache_control: { type: 'ephemeral' },
-            description: 'Get the current weather in a given location',
-            type: 'custom',
-          },
-          {
-            input_schema: {
-              type: 'object',
-              properties: {
-                location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-                unit: {
-                  description: 'Unit for the output - one of (celsius, fahrenheit)',
-                  type: 'string',
-                },
-              },
-            },
-            name: 'x',
-            cache_control: { type: 'ephemeral' },
-            description: 'Get the current weather in a given location',
-            type: 'custom',
-          },
-        ],
-        top_k: 5,
-        top_p: 0.7,
-      },
-    },
-    {
-      custom_id: 'my-custom-id-1',
-      params: {
-        max_tokens: 1024,
-        messages: [{ content: 'Hello, world', role: 'user' }],
-        model: 'claude-3-5-sonnet-20241022',
-        metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
-        stop_sequences: ['string', 'string', 'string'],
+        stop_sequences: ['string'],
           stream: false,
           system: [
             { text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } },
    

@@ -217,45 +65,13 @@ describe('resource batches', () => {
description: 'Get the current weather in a given location',
type: 'custom',
},

-          {
-            input_schema: {
-              type: 'object',
-              properties: {
-                location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-                unit: {
-                  description: 'Unit for the output - one of (celsius, fahrenheit)',
-                  type: 'string',
-                },
-              },
-            },
-            name: 'x',
-            cache_control: { type: 'ephemeral' },
-            description: 'Get the current weather in a given location',
-            type: 'custom',
-          },
-          {
-            input_schema: {
-              type: 'object',
-              properties: {
-                location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-                unit: {
-                  description: 'Unit for the output - one of (celsius, fahrenheit)',
-                  type: 'string',
-                },
-              },
-            },
-            name: 'x',
-            cache_control: { type: 'ephemeral' },
-            description: 'Get the current weather in a given location',
-            type: 'custom',
-          },
           ],
           top_k: 5,
           top_p: 0.7,
         },
       },
     ],
    
-  betas: ['string', 'string', 'string'],
+  betas: ['string'],
    
    });
    });

@@ -282,7 +98,7 @@ describe('resource batches', () => {
await expect(
client.beta.messages.batches.retrieve(
'message_batch_id',

-    { betas: ['string', 'string', 'string'] },
+    { betas: ['string'] },
       { path: '/_stainless_unknown_path' },
     ),
    
    ).rejects.toThrow(Anthropic.NotFoundError);
    @@ -310,7 +126,7 @@ describe('resource batches', () => {
    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
    await expect(
    client.beta.messages.batches.list(
-    { after_id: 'after_id', before_id: 'before_id', limit: 1, betas: ['string', 'string', 'string'] },
+    { after_id: 'after_id', before_id: 'before_id', limit: 1, betas: ['string'] },
       { path: '/_stainless_unknown_path' },
     ),
    
    ).rejects.toThrow(Anthropic.NotFoundError);
    @@ -339,7 +155,7 @@ describe('resource batches', () => {
    await expect(
    client.beta.messages.batches.cancel(
    'message_batch_id',
-    { betas: ['string', 'string', 'string'] },
+    { betas: ['string'] },
       { path: '/_stainless_unknown_path' },
     ),
    
    ).rejects.toThrow(Anthropic.NotFoundError);
    @@ -357,7 +173,7 @@ describe('resource batches', () => {
    await expect(
    client.beta.messages.batches.results(
    'message_batch_id',
-    { betas: ['string', 'string', 'string'] },
+    { betas: ['string'] },
       { path: '/_stainless_unknown_path' },
     ),
    
    ).rejects.toThrow(Anthropic.NotFoundError);
    diff --git tests/api-resources/beta/messages/messages.test.ts tests/api-resources/beta/messages/messages.test.ts
    index 64b6299c..ec73d9c0 100644
    --- tests/api-resources/beta/messages/messages.test.ts
    +++ tests/api-resources/beta/messages/messages.test.ts
    @@ -30,7 +30,7 @@ describe('resource messages', () => {
    messages: [{ content: 'Hello, world', role: 'user' }],
    model: 'claude-3-5-sonnet-20241022',
    metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
-  stop_sequences: ['string', 'string', 'string'],
+  stop_sequences: ['string'],
     stream: false,
     system: [{ text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } }],
     temperature: 1,
    

@@ -49,46 +49,16 @@ describe('resource messages', () => {
description: 'Get the current weather in a given location',
type: 'custom',
},

-    {
-      input_schema: {
-        type: 'object',
-        properties: {
-          location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-          unit: { description: 'Unit for the output - one of (celsius, fahrenheit)', type: 'string' },
-        },
-      },
-      name: 'x',
-      cache_control: { type: 'ephemeral' },
-      description: 'Get the current weather in a given location',
-      type: 'custom',
-    },
-    {
-      input_schema: {
-        type: 'object',
-        properties: {
-          location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-          unit: { description: 'Unit for the output - one of (celsius, fahrenheit)', type: 'string' },
-        },
-      },
-      name: 'x',
-      cache_control: { type: 'ephemeral' },
-      description: 'Get the current weather in a given location',
-      type: 'custom',
-    },
     ],
     top_k: 5,
     top_p: 0.7,
    
-  betas: ['string', 'string', 'string'],
+  betas: ['string'],
    

    });
    });

    test('countTokens: only required params', async () => {
    const responsePromise = client.beta.messages.countTokens({

-  messages: [
-    { content: 'string', role: 'user' },
-    { content: 'string', role: 'user' },
-    { content: 'string', role: 'user' },
-  ],
+  messages: [{ content: 'string', role: 'user' }],
     model: 'string',
    

    });
    const rawResponse = await responsePromise.asResponse();
    @@ -102,11 +72,7 @@ describe('resource messages', () => {

    test('countTokens: required and optional params', async () => {
    const response = await client.beta.messages.countTokens({

-  messages: [
-    { content: 'string', role: 'user' },
-    { content: 'string', role: 'user' },
-    { content: 'string', role: 'user' },
-  ],
+  messages: [{ content: 'string', role: 'user' }],
     model: 'string',
     system: [{ text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } }],
     tool_choice: { type: 'auto', disable_parallel_tool_use: true },
    

@@ -124,34 +90,8 @@ describe('resource messages', () => {
description: 'Get the current weather in a given location',
type: 'custom',
},

-    {
-      input_schema: {
-        type: 'object',
-        properties: {
-          location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-          unit: { description: 'Unit for the output - one of (celsius, fahrenheit)', type: 'string' },
-        },
-      },
-      name: 'x',
-      cache_control: { type: 'ephemeral' },
-      description: 'Get the current weather in a given location',
-      type: 'custom',
-    },
-    {
-      input_schema: {
-        type: 'object',
-        properties: {
-          location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-          unit: { description: 'Unit for the output - one of (celsius, fahrenheit)', type: 'string' },
-        },
-      },
-      name: 'x',
-      cache_control: { type: 'ephemeral' },
-      description: 'Get the current weather in a given location',
-      type: 'custom',
-    },
     ],
    
-  betas: ['string', 'string', 'string'],
+  betas: ['string'],
    
    });
    });
    });
    diff --git a/tests/api-resources/beta/models.test.ts b/tests/api-resources/beta/models.test.ts
    new file mode 100644
    index 00000000..f155b632
    --- /dev/null
    +++ tests/api-resources/beta/models.test.ts
    @@ -0,0 +1,57 @@
    +// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

+import Anthropic from '@anthropic-ai/sdk';
+import { Response } from 'node-fetch';
+
+const client = new Anthropic({

+  apiKey: 'my-anthropic-api-key',
+  baseURL: process.env['TEST_API_BASE_URL'] ?? 'http://127.0.0.1:4010',
    +});

+describe('resource models', () => {

+  test('retrieve', async () => {
+    const responsePromise = client.beta.models.retrieve('model_id');
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('retrieve: request options instead of params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(
+      client.beta.models.retrieve('model_id', { path: '/_stainless_unknown_path' }),
+    ).rejects.toThrow(Anthropic.NotFoundError);
+  });
+
+  test('list', async () => {
+    const responsePromise = client.beta.models.list();
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('list: request options instead of params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(client.beta.models.list({ path: '/_stainless_unknown_path' })).rejects.toThrow(
+      Anthropic.NotFoundError,
+    );
+  });
+
+  test('list: request options and params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(
+      client.beta.models.list(
+        { after_id: 'after_id', before_id: 'before_id', limit: 1 },
+        { path: '/_stainless_unknown_path' },
+      ),
+    ).rejects.toThrow(Anthropic.NotFoundError);
+  });
    +});
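The list tests above exercise cursor pagination via `after_id`, `before_id`, and `limit`. A small sketch of how those params map onto a query string (`listQuery` is a local illustration helper; the SDK builds this internally):

```typescript
// Assemble the cursor-pagination query parameters used by the list
// endpoints (after_id / before_id / limit). Omitted fields are skipped.
function listQuery(opts: { after_id?: string; before_id?: string; limit?: number }): string {
  const q = new URLSearchParams();
  if (opts.after_id) q.set('after_id', opts.after_id);
  if (opts.before_id) q.set('before_id', opts.before_id);
  if (opts.limit !== undefined) q.set('limit', String(opts.limit));
  return q.toString();
}

console.log(listQuery({ after_id: 'after_id', limit: 1 })); // after_id=after_id&limit=1
```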
    diff --git tests/api-resources/beta/prompt-caching/messages.test.ts tests/api-resources/beta/prompt-caching/messages.test.ts
    deleted file mode 100644
    index dd94b3a7..00000000
    --- tests/api-resources/beta/prompt-caching/messages.test.ts
    +++ /dev/null
    @@ -1,81 +0,0 @@
    -// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

-import Anthropic from '@anthropic-ai/sdk';
-import { Response } from 'node-fetch';

-const client = new Anthropic({

-  apiKey: 'my-anthropic-api-key',
-  baseURL: process.env['TEST_API_BASE_URL'] ?? 'http://127.0.0.1:4010',
    -});

-describe('resource messages', () => {

-  test('create: only required params', async () => {
-    const responsePromise = client.beta.promptCaching.messages.create({
-      max_tokens: 1024,
-      messages: [{ content: 'Hello, world', role: 'user' }],
-      model: 'claude-3-5-sonnet-20241022',
-    });
-    const rawResponse = await responsePromise.asResponse();
-    expect(rawResponse).toBeInstanceOf(Response);
-    const response = await responsePromise;
-    expect(response).not.toBeInstanceOf(Response);
-    const dataAndResponse = await responsePromise.withResponse();
-    expect(dataAndResponse.data).toBe(response);
-    expect(dataAndResponse.response).toBe(rawResponse);
-  });
-
-  test('create: required and optional params', async () => {
-    const response = await client.beta.promptCaching.messages.create({
-      max_tokens: 1024,
-      messages: [{ content: 'Hello, world', role: 'user' }],
-      model: 'claude-3-5-sonnet-20241022',
-      metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
-      stop_sequences: ['string', 'string', 'string'],
-      stream: false,
-      system: [{ text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } }],
-      temperature: 1,
-      tool_choice: { type: 'auto', disable_parallel_tool_use: true },
-      tools: [
-        {
-          input_schema: {
-            type: 'object',
-            properties: {
-              location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-              unit: { description: 'Unit for the output - one of (celsius, fahrenheit)', type: 'string' },
-            },
-          },
-          name: 'x',
-          cache_control: { type: 'ephemeral' },
-          description: 'Get the current weather in a given location',
-        },
-        {
-          input_schema: {
-            type: 'object',
-            properties: {
-              location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-              unit: { description: 'Unit for the output - one of (celsius, fahrenheit)', type: 'string' },
-            },
-          },
-          name: 'x',
-          cache_control: { type: 'ephemeral' },
-          description: 'Get the current weather in a given location',
-        },
-        {
-          input_schema: {
-            type: 'object',
-            properties: {
-              location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-              unit: { description: 'Unit for the output - one of (celsius, fahrenheit)', type: 'string' },
-            },
-          },
-          name: 'x',
-          cache_control: { type: 'ephemeral' },
-          description: 'Get the current weather in a given location',
-        },
-      ],
-      top_k: 5,
-      top_p: 0.7,
-      betas: ['string', 'string', 'string'],
-    });
-  });
    -});
    diff --git tests/api-resources/completions.test.ts tests/api-resources/completions.test.ts
    index aa326cf2..fcd0a68c 100644
    --- tests/api-resources/completions.test.ts
    +++ tests/api-resources/completions.test.ts
    @@ -30,7 +30,7 @@ describe('resource completions', () => {
    model: 'string',
    prompt: '\n\nHuman: Hello, world!\n\nAssistant:',
    metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
-  stop_sequences: ['string', 'string', 'string'],
+  stop_sequences: ['string'],
     stream: false,
     temperature: 1,
     top_k: 5,
    

diff --git a/tests/api-resources/messages/batches.test.ts b/tests/api-resources/messages/batches.test.ts
new file mode 100644
index 00000000..26efdbc8
--- /dev/null
+++ tests/api-resources/messages/batches.test.ts
@@ -0,0 +1,145 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+import Anthropic from '@anthropic-ai/sdk';
+import { Response } from 'node-fetch';
+
+const client = new Anthropic({

+  apiKey: 'my-anthropic-api-key',
+  baseURL: process.env['TEST_API_BASE_URL'] ?? 'http://127.0.0.1:4010',
    +});

+describe('resource batches', () => {

+  test('create: only required params', async () => {
+    const responsePromise = client.messages.batches.create({
+      requests: [
+        {
+          custom_id: 'my-custom-id-1',
+          params: {
+            max_tokens: 1024,
+            messages: [{ content: 'Hello, world', role: 'user' }],
+            model: 'claude-3-5-sonnet-20241022',
+          },
+        },
+      ],
+    });
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('create: required and optional params', async () => {
+    const response = await client.messages.batches.create({
+      requests: [
+        {
+          custom_id: 'my-custom-id-1',
+          params: {
+            max_tokens: 1024,
+            messages: [{ content: 'Hello, world', role: 'user' }],
+            model: 'claude-3-5-sonnet-20241022',
+            metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
+            stop_sequences: ['string'],
+            system: [
+              { text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } },
+            ],
+            temperature: 1,
+            tool_choice: { type: 'auto', disable_parallel_tool_use: true },
+            tools: [
+              {
+                input_schema: {
+                  type: 'object',
+                  properties: {
+                    location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
+                    unit: {
+                      description: 'Unit for the output - one of (celsius, fahrenheit)',
+                      type: 'string',
+                    },
+                  },
+                },
+                name: 'x',
+                cache_control: { type: 'ephemeral' },
+                description: 'Get the current weather in a given location',
+              },
+            ],
+            top_k: 5,
+            top_p: 0.7,
+          },
+        },
+      ],
+    });
+  });
+
+  test('retrieve', async () => {
+    const responsePromise = client.messages.batches.retrieve('message_batch_id');
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('retrieve: request options instead of params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(
+      client.messages.batches.retrieve('message_batch_id', { path: '/_stainless_unknown_path' }),
+    ).rejects.toThrow(Anthropic.NotFoundError);
+  });
+
+  test('list', async () => {
+    const responsePromise = client.messages.batches.list();
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('list: request options instead of params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(client.messages.batches.list({ path: '/_stainless_unknown_path' })).rejects.toThrow(
+      Anthropic.NotFoundError,
+    );
+  });
+
+  test('list: request options and params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(
+      client.messages.batches.list(
+        { after_id: 'after_id', before_id: 'before_id', limit: 1 },
+        { path: '/_stainless_unknown_path' },
+      ),
+    ).rejects.toThrow(Anthropic.NotFoundError);
+  });
+
+  test('cancel', async () => {
+    const responsePromise = client.messages.batches.cancel('message_batch_id');
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('cancel: request options instead of params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(
+      client.messages.batches.cancel('message_batch_id', { path: '/_stainless_unknown_path' }),
+    ).rejects.toThrow(Anthropic.NotFoundError);
+  });
+
+  test('results: request options instead of params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(
+      client.messages.batches.results('message_batch_id', { path: '/_stainless_unknown_path' }),
+    ).rejects.toThrow(Anthropic.NotFoundError);
+  });
    +});
    diff --git tests/api-resources/messages.test.ts tests/api-resources/messages/messages.test.ts
    similarity index 74%
    rename from tests/api-resources/messages.test.ts
    rename to tests/api-resources/messages/messages.test.ts
    index 0497742e..3ae41d32 100644
    --- tests/api-resources/messages.test.ts
    +++ tests/api-resources/messages/messages.test.ts
@@ -30,9 +30,9 @@ describe('resource messages', () => {
       messages: [{ content: 'Hello, world', role: 'user' }],
       model: 'claude-3-5-sonnet-20241022',
       metadata: { user_id: '13803d75-b4b5-4c3e-b2a2-6f21399b021b' },
-      stop_sequences: ['string', 'string', 'string'],
+      stop_sequences: ['string'],
       stream: false,
-      system: [{ text: "Today's date is 2024-06-01.", type: 'text' }],
+      system: [{ text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } }],
       temperature: 1,
       tool_choice: { type: 'auto', disable_parallel_tool_use: true },
       tools: [

@@ -45,8 +45,36 @@ describe('resource messages', () => {
           },
         },
         name: 'x',
+        cache_control: { type: 'ephemeral' },
         description: 'Get the current weather in a given location',
       },
+    ],
+    top_k: 5,
+    top_p: 0.7,
+  });
+});
+
+test('countTokens: only required params', async () => {
+  const responsePromise = client.messages.countTokens({
+    messages: [{ content: 'string', role: 'user' }],
+    model: 'string',
+  });
+  const rawResponse = await responsePromise.asResponse();
+  expect(rawResponse).toBeInstanceOf(Response);
+  const response = await responsePromise;
+  expect(response).not.toBeInstanceOf(Response);
+  const dataAndResponse = await responsePromise.withResponse();
+  expect(dataAndResponse.data).toBe(response);
+  expect(dataAndResponse.response).toBe(rawResponse);
+});
+
+test('countTokens: required and optional params', async () => {
+  const response = await client.messages.countTokens({
+    messages: [{ content: 'string', role: 'user' }],
+    model: 'string',
+    system: [{ text: "Today's date is 2024-06-01.", type: 'text', cache_control: { type: 'ephemeral' } }],
+    tool_choice: { type: 'auto', disable_parallel_tool_use: true },
+    tools: [
+      {
+        input_schema: {
+          type: 'object',

@@ -56,22 +84,10 @@ describe('resource messages', () => {
           },
         },
         name: 'x',
-        description: 'Get the current weather in a given location',
-      },
-      {
-        input_schema: {
-          type: 'object',
-          properties: {
-            location: { description: 'The city and state, e.g. San Francisco, CA', type: 'string' },
-            unit: { description: 'Unit for the output - one of (celsius, fahrenheit)', type: 'string' },
-          },
-        },
-        name: 'x',
+        cache_control: { type: 'ephemeral' },
         description: 'Get the current weather in a given location',
       },
     ],
-    top_k: 5,
-    top_p: 0.7,
   });
 });
 });
diff --git a/tests/api-resources/models.test.ts b/tests/api-resources/models.test.ts
new file mode 100644
index 00000000..7f5c0411
--- /dev/null
+++ b/tests/api-resources/models.test.ts
@@ -0,0 +1,57 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+import Anthropic from '@anthropic-ai/sdk';
+import { Response } from 'node-fetch';
+
+const client = new Anthropic({
+  apiKey: 'my-anthropic-api-key',
+  baseURL: process.env['TEST_API_BASE_URL'] ?? 'http://127.0.0.1:4010',
+});
+
+describe('resource models', () => {
+  test('retrieve', async () => {
+    const responsePromise = client.models.retrieve('model_id');
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('retrieve: request options instead of params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(client.models.retrieve('model_id', { path: '/_stainless_unknown_path' })).rejects.toThrow(
+      Anthropic.NotFoundError,
+    );
+  });
+
+  test('list', async () => {
+    const responsePromise = client.models.list();
+    const rawResponse = await responsePromise.asResponse();
+    expect(rawResponse).toBeInstanceOf(Response);
+    const response = await responsePromise;
+    expect(response).not.toBeInstanceOf(Response);
+    const dataAndResponse = await responsePromise.withResponse();
+    expect(dataAndResponse.data).toBe(response);
+    expect(dataAndResponse.response).toBe(rawResponse);
+  });
+
+  test('list: request options instead of params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(client.models.list({ path: '/_stainless_unknown_path' })).rejects.toThrow(
+      Anthropic.NotFoundError,
+    );
+  });
+
+  test('list: request options and params are passed correctly', async () => {
+    // ensure the request options are being passed correctly by passing an invalid HTTP method in order to cause an error
+    await expect(
+      client.models.list(
+        { after_id: 'after_id', before_id: 'before_id', limit: 1 },
+        { path: '/_stainless_unknown_path' },
+      ),
+    ).rejects.toThrow(Anthropic.NotFoundError);
+  });
+});
    diff --git tests/index.test.ts tests/index.test.ts
    index 0d4a6bba..b6398085 100644
    --- tests/index.test.ts
    +++ tests/index.test.ts
@@ -183,7 +183,7 @@ describe('instantiate client', () => {
     expect(client.apiKey).toBe('my-anthropic-api-key');
   });
 
-  test('with overriden environment variable arguments', () => {
+  test('with overridden environment variable arguments', () => {
     // set options via env var
     process.env['ANTHROPIC_API_KEY'] = 'another my-anthropic-api-key';
     const client = new Anthropic({ apiKey: 'my-anthropic-api-key' });
    diff --git tests/responses.test.ts tests/responses.test.ts
    index d0db2b1f..db58e0b7 100644
    --- tests/responses.test.ts
    +++ tests/responses.test.ts
    @@ -1,5 +1,8 @@
    -import { createResponseHeaders } from '@anthropic-ai/sdk/core';
    +import { APIPromise, createResponseHeaders } from '@anthropic-ai/sdk/core';
    +import Anthropic from '@anthropic-ai/sdk/index';
    import { Headers } from '@anthropic-ai/sdk/_shims/index';
    +import { Response } from 'node-fetch';
    +import { compareType } from './utils/typing';

describe('response parsing', () => {
// TODO: test unicode characters
@@ -23,3 +26,129 @@ describe('response parsing', () => {
     expect(headers['content-type']).toBe('text/xml, application/json');
   });
 });
+
+describe('request id', () => {
+  test('types', () => {
+    compareType<Awaited<APIPromise<string>>, string>(true);
+    compareType<Awaited<APIPromise<number>>, number>(true);
+    compareType<Awaited<APIPromise<null>>, null>(true);
+    compareType<Awaited<APIPromise<void>>, void>(true);
+    compareType<Awaited<APIPromise<Response>>, Response>(true);
+    compareType<Awaited<APIPromise<Response>>, Response>(true);
+    compareType<Awaited<APIPromise<{ foo: string }>>, { foo: string } & { _request_id?: string | null }>(
+      true,
+    );
+    compareType<Awaited<APIPromise<Array<{ foo: string }>>>, Array<{ foo: string }>>(true);
+  });
+
+  test('withResponse', async () => {
+    const client = new Anthropic({
+      apiKey: 'dummy',
+      fetch: async () =>
+        new Response(JSON.stringify({ id: 'bar' }), {
+          headers: { 'request-id': 'req_xxx', 'content-type': 'application/json' },
+        }),
+    });
+
+    const {
+      data: message,
+      response,
+      request_id,
+    } = await client.messages
+      .create({ messages: [], model: 'claude-3-opus-20240229', max_tokens: 1024 })
+      .withResponse();
+
+    expect(request_id).toBe('req_xxx');
+    expect(response.headers.get('request-id')).toBe('req_xxx');
+    expect(message.id).toBe('bar');
+    expect(JSON.stringify(message)).toBe('{"id":"bar"}');
+  });
+
+  test('object response', async () => {
+    const client = new Anthropic({
+      apiKey: 'dummy',
+      fetch: async () =>
+        new Response(JSON.stringify({ id: 'bar' }), {
+          headers: { 'request-id': 'req_xxx', 'content-type': 'application/json' },
+        }),
+    });
+
+    const rsp = await client.messages.create({
+      messages: [],
+      model: 'claude-3-opus-20240229',
+      max_tokens: 1024,
+    });
+    expect(rsp.id).toBe('bar');
+    expect(rsp._request_id).toBe('req_xxx');
+    expect(JSON.stringify(rsp)).toBe('{"id":"bar"}');
+  });
+
+  test('envelope response', async () => {
+    const promise = new APIPromise<{ data: { foo: string } }>(
+      (async () => {
+        return {
+          response: new Response(JSON.stringify({ data: { foo: 'bar' } }), {
+            headers: { 'request-id': 'req_xxx', 'content-type': 'application/json' },
+          }),
+          controller: {} as any,
+          options: {} as any,
+        };
+      })(),
+    )._thenUnwrap((d) => d.data);
+
+    const rsp = await promise;
+    expect(rsp.foo).toBe('bar');
+    expect(rsp._request_id).toBe('req_xxx');
+  });
+
+  test('page response', async () => {
+    const client = new Anthropic({
+      apiKey: 'dummy',
+      fetch: async () =>
+        new Response(JSON.stringify({ data: [{ foo: 'bar' }] }), {
+          headers: { 'request-id': 'req_xxx', 'content-type': 'application/json' },
+        }),
+    });
+    const page = await client.beta.messages.batches.list();
+    expect(page.data).toMatchObject([{ foo: 'bar' }]);
+    expect((page as any)._request_id).toBeUndefined();
+  });
+
+  test('array response', async () => {
+    const promise = new APIPromise<Array<{ foo: string }>>(
+      (async () => {
+        return {
+          response: new Response(JSON.stringify([{ foo: 'bar' }]), {
+            headers: { 'request-id': 'req_xxx', 'content-type': 'application/json' },
+          }),
+          controller: {} as any,
+          options: {} as any,
+        };
+      })(),
+    );
+
+    const rsp = await promise;
+    expect(rsp.length).toBe(1);
+    expect(rsp[0]).toMatchObject({ foo: 'bar' });
+    expect((rsp as any)._request_id).toBeUndefined();
+  });
+
+  test('string response', async () => {
+    const promise = new APIPromise<string>(
+      (async () => {
+        return {
+          response: new Response('hello world', {
+            headers: { 'request-id': 'req_xxx', 'content-type': 'application/text' },
+          }),
+          controller: {} as any,
+          options: {} as any,
+        };
+      })(),
+    );
+
+    const result = await promise;
+    expect(result).toBe('hello world');
+    expect((result as any)._request_id).toBeUndefined();
+  });
+});
diff --git a/tests/utils/typing.ts b/tests/utils/typing.ts
new file mode 100644
index 00000000..4a791d2a
--- /dev/null
+++ b/tests/utils/typing.ts
@@ -0,0 +1,9 @@
+type Equal<X, Y> = (<T>() => T extends X ? 1 : 2) extends <T>() => T extends Y ? 1 : 2 ? true : false;
+
+export const expectType = <T>(_expression: T): void => {
+  return;
+};
+
+export const compareType = <T1, T2>(_expression: Equal<T1, T2>): void => {
+  return;
+};
</details>
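
The new `tests/utils/typing.ts` helper compares types with a standard conditional-type trick: two generic function signatures are mutually assignable only when the compared types are identical, so `Equal<X, Y>` resolves to `true` or `false` at compile time. A standalone illustration of the same idiom (local names, independent of the SDK):

```typescript
// Same conditional-type trick as the Equal helper in tests/utils/typing.ts:
// the two generic function types are mutually assignable only when X and Y
// are identical, so Equal<X, Y> collapses to the literal type true or false.
type Equal<X, Y> = (<T>() => T extends X ? 1 : 2) extends <T>() => T extends Y ? 1 : 2 ? true : false;

const same: Equal<{ foo: string }, { foo: string }> = true; // compiles
const different: Equal<{ foo: string }, { foo: number }> = false; // assigning `true` here would be a type error

console.log(same, different);
```

Because the check happens entirely in the type system, `compareType<T1, T2>(true)` fails to compile when the two types diverge, which is how the `request id` tests pin down the awaited type of `APIPromise`.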

### Description
This PR makes significant updates to the Anthropic TypeScript SDK, including API changes, new features, and various improvements. The changes include updates to model versions, new endpoints, and modifications to existing functionality.

### Possible Issues
- Removal of the `prompt-caching` beta feature might break existing code that relies on it.
- Changes to model names and versions may require updates in client code.

### Security Hotspots
No significant security issues identified in this change.

<details>
<summary><i>Changes</i></summary>

### Changes

1. `index.ts`:
   - Added new types and exports for Models API
   - Updated error types and exports
   - Added `_request_id` property to API responses

2. `core.ts`:
   - Implemented `_request_id` handling in API responses
   - Updated `APIPromise` to support `_request_id`
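
   The new tests expect `_request_id` to be readable on a response object yet absent from its `JSON.stringify` output. One way to get that behavior is a non-enumerable property; the sketch below uses a hypothetical `withRequestId` helper for illustration, not the SDK's actual implementation:

   ```typescript
   // Hypothetical sketch: attach a non-enumerable `_request_id` so it is
   // readable on the object but invisible to JSON.stringify and key iteration.
   function withRequestId<T extends object>(
     data: T,
     requestId: string | null,
   ): T & { _request_id?: string | null } {
     Object.defineProperty(data, '_request_id', {
       value: requestId,
       enumerable: false, // keeps JSON.stringify output unchanged
       writable: false,
       configurable: true,
     });
     return data as T & { _request_id?: string | null };
   }

   const rsp = withRequestId({ id: 'bar' }, 'req_xxx');
   console.log(rsp._request_id); // readable on the object
   console.log(JSON.stringify(rsp)); // `_request_id` not serialized
   ```

   This matches the assertions in the diff's `responses.test.ts`, where `rsp._request_id` is `'req_xxx'` while `JSON.stringify(rsp)` remains `'{"id":"bar"}'`.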

3. `error.ts`:
   - Updated error classes with more specific types

4. `messages/messages.ts`:
   - Added `countTokens` method
   - Updated `Messages` resource with new types and parameters
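
   As the tests in this diff show, `countTokens` takes the same message-shaped params as `messages.create`, with `messages` and `model` required and `system`/`tools`/`tool_choice` optional. A minimal sketch of the param shape (the local `CountTokensParams` type is illustrative only; the real network call is shown commented out):

   ```typescript
   // Illustrative local type mirroring the params the countTokens tests pass.
   type CountTokensParams = {
     model: string;
     messages: Array<{ role: 'user' | 'assistant'; content: string }>;
     system?: Array<{ type: 'text'; text: string }>;
   };

   const params: CountTokensParams = {
     model: 'claude-3-5-sonnet-20241022',
     messages: [{ role: 'user', content: 'Hello, world' }],
   };

   // With a configured client this would be called roughly as:
   //   const result = await client.messages.countTokens(params);
   console.log(params.messages.length);
   ```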

5. `messages/batches.ts`:
   - Added new `Batches` resource for batch message processing
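
   A batch submission pairs a caller-chosen `custom_id` with ordinary message-create params, so results can be matched back to requests later. The sketch below builds such a request list with hypothetical local types (`BatchEntry` is illustrative, not the SDK's type name):

   ```typescript
   // Hypothetical local type sketching the shape of one batch entry:
   // a caller-chosen custom_id plus regular message-create params.
   type BatchEntry = {
     custom_id: string;
     params: {
       model: string;
       max_tokens: number;
       messages: Array<{ role: 'user'; content: string }>;
     };
   };

   const requests: BatchEntry[] = ['first', 'second'].map((name, i) => ({
     custom_id: `request-${i}`,
     params: {
       model: 'claude-3-5-sonnet-20241022',
       max_tokens: 1024,
       messages: [{ role: 'user', content: `Say hello to ${name}` }],
     },
   }));

   // With a configured client, submission would look roughly like:
   //   const batch = await client.messages.batches.create({ requests });
   console.log(requests.map((r) => r.custom_id));
   ```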

6. `models.ts`:
   - Added new `Models` resource for retrieving model information
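
   The `models.list` tests exercise the page params `after_id`, `before_id`, and `limit`, which the client serializes into a query string. A self-contained sketch of that serialization (hypothetical helper, not the SDK's internal code):

   ```typescript
   // Page params as exercised by the models.list tests in this diff.
   type PageParams = { after_id?: string; before_id?: string; limit?: number };

   // Hypothetical helper: build the query string the client would send.
   function toQuery(params: PageParams): string {
     const q = new URLSearchParams();
     for (const [key, value] of Object.entries(params)) {
       if (value !== undefined) q.set(key, String(value));
     }
     return q.toString();
   }

   console.log(toQuery({ after_id: 'after_id', limit: 1 })); // → after_id=after_id&limit=1
   ```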

7. `beta/beta.ts`:
   - Removed `promptCaching` feature
   - Updated beta types and resources

8. Various test files:
   - Updated and added tests for new functionality
   - Removed tests for deprecated features

9. `README.md` and other documentation:
   - Updated examples and documentation to reflect new API changes
   - Changed default model to `claude-3-5-sonnet-latest`

```mermaid
sequenceDiagram
    participant Client
    participant Anthropic
    participant Messages
    participant Models
    participant Batches

    Client->>Anthropic: Create client
    Client->>Messages: Create message
    Messages-->>Client: Return message
    Client->>Messages: Count tokens
    Messages-->>Client: Return token count
    Client->>Models: Retrieve model info
    Models-->>Client: Return model info
    Client->>Batches: Create message batch
    Batches-->>Client: Return batch info
    Client->>Batches: Retrieve batch results
    Batches-->>Client: Stream batch results
```

</details>

@thypon thypon merged commit 7055794 into main Dec 30, 2024
8 checks passed
@thypon thypon deleted the renovate/anthropic-ai-sdk-0.x branch December 30, 2024 19:45