feat: Updated OpenAPI spec
github-actions[bot] authored and HavenDV committed Sep 14, 2024
1 parent f7529fa commit bccc4ed
Showing 28 changed files with 965 additions and 806 deletions.
1,492 changes: 748 additions & 744 deletions src/libs/OpenAI/Generated/JsonSerializerContextTypes.g.cs

Large diffs are not rendered by default.

@@ -131,8 +131,8 @@ partial void ProcessCreateAssistantResponseContent(
/// </param>
/// <param name="responseFormat">
/// Specifies the format that the model must output. Compatible with [GPT-4o](/docs/models/gpt-4o), [GPT-4 Turbo](/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.<br/>
-/// Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which guarantees the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).<br/>
-/// Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.<br/>
+/// Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).<br/>
+/// Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the message the model generates is valid JSON.<br/>
/// **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
/// </param>
/// <param name="cancellationToken">The token to cancel the operation with</param>
@@ -168,8 +168,8 @@ partial void ProcessCreateRunResponseContent(
/// </param>
/// <param name="responseFormat">
/// Specifies the format that the model must output. Compatible with [GPT-4o](/docs/models/gpt-4o), [GPT-4 Turbo](/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.<br/>
-/// Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which guarantees the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).<br/>
-/// Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.<br/>
+/// Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).<br/>
+/// Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the message the model generates is valid JSON.<br/>
/// **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
/// </param>
/// <param name="cancellationToken">The token to cancel the operation with</param>
@@ -151,8 +151,8 @@ partial void ProcessCreateThreadAndRunResponseContent(
/// </param>
/// <param name="responseFormat">
/// Specifies the format that the model must output. Compatible with [GPT-4o](/docs/models/gpt-4o), [GPT-4 Turbo](/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.<br/>
-/// Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which guarantees the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).<br/>
-/// Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.<br/>
+/// Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).<br/>
+/// Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the message the model generates is valid JSON.<br/>
/// **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
/// </param>
/// <param name="cancellationToken">The token to cancel the operation with</param>
@@ -137,8 +137,8 @@ partial void ProcessModifyAssistantResponseContent(
/// </param>
/// <param name="responseFormat">
/// Specifies the format that the model must output. Compatible with [GPT-4o](/docs/models/gpt-4o), [GPT-4 Turbo](/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.<br/>
-/// Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which guarantees the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).<br/>
-/// Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.<br/>
+/// Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).<br/>
+/// Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the message the model generates is valid JSON.<br/>
/// **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
/// </param>
/// <param name="cancellationToken">The token to cancel the operation with</param>
@@ -120,9 +120,8 @@ partial void ProcessCreateChatCompletionResponseContent(
/// <param name="topLogprobs">
/// An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. `logprobs` must be set to `true` if this parameter is used.
/// </param>
/// <param name="maxTokens">
/// The maximum number of [tokens](/tokenizer) that can be generated in the chat completion.<br/>
/// The total length of input tokens and generated tokens is limited by the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
/// <param name="maxCompletionTokens">
/// An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and [reasoning tokens](/docs/guides/reasoning).
/// </param>
/// <param name="n">
/// How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep `n` as `1` to minimize costs.<br/>
Expand All @@ -136,8 +135,8 @@ partial void ProcessCreateChatCompletionResponseContent(
/// </param>
/// <param name="responseFormat">
/// An object specifying the format that the model must output. Compatible with [GPT-4o](/docs/models/gpt-4o), [GPT-4o mini](/docs/models/gpt-4o-mini), [GPT-4 Turbo](/docs/models/gpt-4-and-gpt-4-turbo) and all GPT-3.5 Turbo models newer than `gpt-3.5-turbo-1106`.<br/>
-/// Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which guarantees the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).<br/>
-/// Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.<br/>
+/// Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).<br/>
+/// Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the message the model generates is valid JSON.<br/>
/// **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
/// </param>
/// <param name="seed">
Expand All @@ -147,7 +146,8 @@ partial void ProcessCreateChatCompletionResponseContent(
/// </param>
/// <param name="serviceTier">
/// Specifies the latency tier to use for processing the request. This parameter is relevant for customers subscribed to the scale tier service:<br/>
-/// - If set to 'auto', the system will utilize scale tier credits until they are exhausted.<br/>
+/// - If set to 'auto', and the Project is Scale tier enabled, the system will utilize scale tier credits until they are exhausted.<br/>
+/// - If set to 'auto', and the Project is not Scale tier enabled, the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.<br/>
/// - If set to 'default', the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.<br/>
/// - When not set, the default behavior is 'auto'.<br/>
/// When this parameter is set, the response body will include the `service_tier` utilized.
@@ -201,7 +201,7 @@ partial void ProcessCreateChatCompletionResponseContent(
global::OpenAI.CreateChatCompletionRequestLogitBias? logitBias = default,
bool? logprobs = false,
int? topLogprobs = default,
-int? maxTokens = default,
+int? maxCompletionTokens = default,
int? n = 1,
double? presencePenalty = 0,
global::System.OneOf<global::OpenAI.ResponseFormatText, global::OpenAI.ResponseFormatJsonObject, global::OpenAI.ResponseFormatJsonSchema>? responseFormat = default,
Expand All @@ -226,7 +226,7 @@ partial void ProcessCreateChatCompletionResponseContent(
LogitBias = logitBias,
Logprobs = logprobs,
TopLogprobs = topLogprobs,
-MaxTokens = maxTokens,
+MaxCompletionTokens = maxCompletionTokens,
N = n,
PresencePenalty = presencePenalty,
ResponseFormat = responseFormat,
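For callers, the visible effect of this file's changes is the rename from `maxTokens` to `maxCompletionTokens`, whose bound now also counts hidden reasoning tokens. A hedged sketch, continuing the assumed `api` object from the earlier example; only the renamed parameter and its meaning come from the diff:

// Hypothetical sketch -- method shape and `messages` are assumed.
// Before this commit the same call took `maxTokens: 256`.
var completion = await api.Chat.CreateChatCompletionAsync(
    model: "gpt-4o",
    messages: messages, // a previously built message list
    // Upper bound on generated tokens, including reasoning tokens that are
    // billed but never returned as visible output.
    maxCompletionTokens: 256);
// serviceTier defaults to 'auto'; when set, the response reports the tier actually used.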
@@ -118,7 +118,7 @@ partial void ProcessCreateFineTuningJobResponseContent(
/// The hyperparameters used for the fine-tuning job.
/// </param>
/// <param name="suffix">
-/// A string of up to 18 characters that will be added to your fine-tuned model name.<br/>
+/// A string of up to 64 characters that will be added to your fine-tuned model name.<br/>
/// For example, a `suffix` of "custom-model-name" would produce a model name like `ft:gpt-4o-mini:openai:custom-model-name:7p4lURel`.
/// </param>
/// <param name="validationFile">
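Only the raised 64-character `suffix` limit comes from this hunk; the call shape below is an assumption for illustration, continuing the sketch above:

// Hypothetical sketch -- method and parameter names assumed.
var job = await api.FineTuning.CreateFineTuningJobAsync(
    model: "gpt-4o-mini",
    trainingFile: "file-abc123",  // placeholder file id
    suffix: "custom-model-name"); // now up to 64 characters (previously documented as 18)
// Produces a model name like: ft:gpt-4o-mini:openai:custom-model-name:7p4lURel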
4 changes: 2 additions & 2 deletions src/libs/OpenAI/Generated/OpenAI.Models.AssistantObject.g.cs
@@ -99,8 +99,8 @@ public sealed partial class AssistantObject

/// <summary>
/// Specifies the format that the model must output. Compatible with [GPT-4o](/docs/models/gpt-4o), [GPT-4 Turbo](/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.<br/>
-/// Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which guarantees the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).<br/>
-/// Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.<br/>
+/// Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).<br/>
+/// Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the message the model generates is valid JSON.<br/>
/// **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
/// </summary>
[global::System.Text.Json.Serialization.JsonPropertyName("response_format")]
@@ -16,7 +16,7 @@ public sealed partial class AssistantToolsFileSearchFileSearch
public int MaxNumResults { get; set; }

/// <summary>
-/// The ranking options for the file search.<br/>
+/// The ranking options for the file search. If not specified, the file search tool will use the `auto` ranker and a score_threshold of 0.<br/>
/// See the [file search tool documentation](/docs/assistants/tools/file-search/customizing-file-search-settings) for more information.
/// </summary>
[global::System.Text.Json.Serialization.JsonPropertyName("ranking_options")]
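A sketch of what the newly documented default means when configuring the file search tool: leaving `RankingOptions` unset is now explicitly equivalent to the `auto` ranker with a score threshold of 0. The option type's name and members are assumptions based on the file search documentation, not this hunk:

var fileSearch = new AssistantToolsFileSearchFileSearch
{
    MaxNumResults = 20,
    // Omit RankingOptions for the documented default (`auto` ranker, score_threshold 0),
    // or set it explicitly -- type and member names here are assumed:
    RankingOptions = new FileSearchRankingOptions
    {
        Ranker = "auto",
        ScoreThreshold = 0.5, // drop results scoring below 0.5
    },
};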
@@ -7,8 +7,8 @@ namespace OpenAI
{
/// <summary>
/// Specifies the format that the model must output. Compatible with [GPT-4o](/docs/models/gpt-4o), [GPT-4 Turbo](/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.<br/>
-/// Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which guarantees the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).<br/>
-/// Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.<br/>
+/// Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).<br/>
+/// Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the message the model generates is valid JSON.<br/>
/// **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
/// </summary>
public readonly partial struct AssistantsApiResponseFormatOption : global::System.IEquatable<AssistantsApiResponseFormatOption>
6 changes: 6 additions & 0 deletions src/libs/OpenAI/Generated/OpenAI.Models.CompletionUsage.g.cs
@@ -29,6 +29,12 @@ public sealed partial class CompletionUsage
[global::System.Text.Json.Serialization.JsonRequired]
public required int TotalTokens { get; set; }

/// <summary>
/// Breakdown of tokens used in a completion.
/// </summary>
[global::System.Text.Json.Serialization.JsonPropertyName("completion_tokens_details")]
public global::OpenAI.CompletionUsageCompletionTokensDetails? CompletionTokensDetails { get; set; }

/// <summary>
/// Additional properties that are not explicitly defined in the schema
/// </summary>
@@ -0,0 +1,23 @@

+#nullable enable
+
+namespace OpenAI
+{
+    /// <summary>
+    /// Breakdown of tokens used in a completion.
+    /// </summary>
+    public sealed partial class CompletionUsageCompletionTokensDetails
+    {
+        /// <summary>
+        /// Tokens generated by the model for reasoning.
+        /// </summary>
+        [global::System.Text.Json.Serialization.JsonPropertyName("reasoning_tokens")]
+        public int ReasoningTokens { get; set; }
+
+        /// <summary>
+        /// Additional properties that are not explicitly defined in the schema
+        /// </summary>
+        [global::System.Text.Json.Serialization.JsonExtensionData]
+        public global::System.Collections.Generic.IDictionary<string, object> AdditionalProperties { get; set; } = new global::System.Collections.Generic.Dictionary<string, object>();
+    }
+}
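A sketch of consuming the new type above: `CompletionTokensDetails` and `ReasoningTokens` are defined in this commit, while the surrounding response and the `CompletionUsage.CompletionTokens` property are assumed for illustration, continuing the chat completion sketch earlier:

var usage = completion.Usage; // assumed response shape
if (usage?.CompletionTokensDetails is { } details)
{
    // Reasoning tokens are generated and billed but not returned as visible text.
    Console.WriteLine($"reasoning tokens: {details.ReasoningTokens}");
    Console.WriteLine($"visible tokens:   {usage.CompletionTokens - details.ReasoningTokens}");
}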
@@ -73,8 +73,8 @@ public sealed partial class CreateAssistantRequest

/// <summary>
/// Specifies the format that the model must output. Compatible with [GPT-4o](/docs/models/gpt-4o), [GPT-4 Turbo](/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.<br/>
-/// Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which guarantees the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).<br/>
-/// Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.<br/>
+/// Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).<br/>
+/// Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the message the model generates is valid JSON.<br/>
/// **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
/// </summary>
[global::System.Text.Json.Serialization.JsonPropertyName("response_format")]