From 91a4d5250089c21234678efc382c363fca6f0137 Mon Sep 17 00:00:00 2001 From: HavenDV Date: Fri, 14 Jun 2024 02:16:48 +0400 Subject: [PATCH] test: Replaced `<br/>` tags with newlines in snapshot summaries. --- .../LangSmith/Methods/_.verified.txt | 4 +- .../Snapshots/Ollama/AnyOfs/_.verified.txt | 2 +- .../Snapshots/Ollama/Methods/_.verified.txt | 50 +-- .../Snapshots/Ollama/Models/_.verified.txt | 108 +++--- .../Snapshots/OpenAi/Methods/_.verified.txt | 202 +++++------ .../Snapshots/OpenAi/Models/_.verified.txt | 316 +++++++++--------- .../Replicate/Methods/_.verified.txt | 50 +-- 7 files changed, 366 insertions(+), 366 deletions(-) diff --git a/src/tests/OpenApiGenerator.UnitTests/Snapshots/LangSmith/Methods/_.verified.txt b/src/tests/OpenApiGenerator.UnitTests/Snapshots/LangSmith/Methods/_.verified.txt index 6481f720c9..464fb3a42e 100644 --- a/src/tests/OpenApiGenerator.UnitTests/Snapshots/LangSmith/Methods/_.verified.txt +++ b/src/tests/OpenApiGenerator.UnitTests/Snapshots/LangSmith/Methods/_.verified.txt @@ -181,7 +181,7 @@ GenerateJsonSerializerContextTypes: false, HttpMethod: Patch, Summary: -Update Run&#10;
+Update Run Update a run., BaseUrlSummary: , RequestType: { @@ -835,7 +835,7 @@ Update a run., GenerateJsonSerializerContextTypes: false, HttpMethod: Post, Summary: -Create Run
+Create Run Create a new run., BaseUrlSummary: , RequestType: { diff --git a/src/tests/OpenApiGenerator.UnitTests/Snapshots/Ollama/AnyOfs/_.verified.txt b/src/tests/OpenApiGenerator.UnitTests/Snapshots/Ollama/AnyOfs/_.verified.txt index b86d8bdc07..971c05e9c8 100644 --- a/src/tests/OpenApiGenerator.UnitTests/Snapshots/Ollama/AnyOfs/_.verified.txt +++ b/src/tests/OpenApiGenerator.UnitTests/Snapshots/Ollama/AnyOfs/_.verified.txt @@ -65,7 +65,7 @@ Namespace: G, Name: PullModelStatus, Summary: -Status pulling the model.
+Status pulling the model. Example: pulling manifest, Types: [ { diff --git a/src/tests/OpenApiGenerator.UnitTests/Snapshots/Ollama/Methods/_.verified.txt b/src/tests/OpenApiGenerator.UnitTests/Snapshots/Ollama/Methods/_.verified.txt index d4e8cb5b23..9372d1816a 100644 --- a/src/tests/OpenApiGenerator.UnitTests/Snapshots/Ollama/Methods/_.verified.txt +++ b/src/tests/OpenApiGenerator.UnitTests/Snapshots/Ollama/Methods/_.verified.txt @@ -33,7 +33,7 @@ The model name. Model names follow a `model:tag` format. Some examples are `orca-mini:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version. -
+ Example: llama2:7b, ConverterType: , ParameterName: model, @@ -62,7 +62,7 @@ Example: llama2:7b, IsRequired: true, IsDeprecated: false, Summary: -The prompt to generate a response.
+The prompt to generate a response. Example: Why is the sky blue?, ConverterType: , ParameterName: prompt, @@ -327,7 +327,7 @@ You may choose to use the `raw` parameter if you are specifying a full templated IsDeprecated: false, Summary: If `false` the response will be returned as a single response object, otherwise the response will be streamed as a series of objects. -
+ Default Value: true, ConverterType: , ParameterName: stream, @@ -374,7 +374,7 @@ How long (in minutes) to keep the model loaded in memory. GenerateJsonSerializerContextTypes: true, HttpMethod: Post, Summary: -Generate a response for a given prompt with a provided model.
+Generate a response for a given prompt with a provided model. The final response object will include statistics and additional data from the request., BaseUrlSummary: , RequestType: { @@ -531,7 +531,7 @@ The final response object will include statistics and additional data from the r The model name. Model names follow a `model:tag` format. Some examples are `orca-mini:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version. -
+ Example: llama2:7b, ConverterType: , ParameterName: model, @@ -684,7 +684,7 @@ Note: it's important to instruct the model to use JSON in the prompt. Otherwise, IsDeprecated: false, Summary: If `false` the response will be returned as a single response object, otherwise the response will be streamed as a series of objects. -
+ Default Value: true, ConverterType: , ParameterName: stream, @@ -731,7 +731,7 @@ How long (in minutes) to keep the model loaded in memory. GenerateJsonSerializerContextTypes: true, HttpMethod: Post, Summary: -Generate the next message in a chat with a provided model.
+Generate the next message in a chat with a provided model. This is a streaming endpoint, so there will be a series of responses. The final response object will include statistics and additional data from the request., BaseUrlSummary: , RequestType: { @@ -878,7 +878,7 @@ This is a streaming endpoint, so there will be a series of responses. The final The model name. Model names follow a `model:tag` format. Some examples are `orca-mini:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version. -
+ Example: llama2:7b, ConverterType: , ParameterName: model, @@ -907,7 +907,7 @@ Example: llama2:7b, IsRequired: true, IsDeprecated: false, Summary: -Text to generate embeddings for.
+Text to generate embeddings for. Example: Here is an article about llamas..., ConverterType: , ParameterName: prompt, @@ -1133,7 +1133,7 @@ How long (in minutes) to keep the model loaded in memory. The model name. Model names follow a `model:tag` format. Some examples are `orca-mini:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version. -
+ Example: mario, ConverterType: , ParameterName: model, @@ -1162,7 +1162,7 @@ Example: mario, IsRequired: true, IsDeprecated: false, Summary: -The contents of the Modelfile.
+The contents of the Modelfile. Example: FROM llama2\nSYSTEM You are mario from Super Mario Bros., ConverterType: , ParameterName: modelfile, @@ -1247,7 +1247,7 @@ Example: FROM llama2\nSYSTEM You are mario from Super Mario Bros., IsDeprecated: false, Summary: If `false` the response will be returned as a single response object, otherwise the response will be streamed as a series of objects. -
+ Default Value: true, ConverterType: , ParameterName: stream, @@ -1260,7 +1260,7 @@ Default Value: true, GenerateJsonSerializerContextTypes: true, HttpMethod: Post, Summary: -Create a model from a Modelfile.
+Create a model from a Modelfile. It is recommended to set `modelfile` to the content of the Modelfile rather than just set `path`. This is a requirement for remote create. Remote model creation should also create any file blobs, fields such as `FROM` and `ADAPTER`, explicitly with the server using Create a Blob and the value to the path indicated in the response., BaseUrlSummary: , RequestType: { @@ -1459,7 +1459,7 @@ It is recommended to set `modelfile` to the content of the Modelfile rather than The model name. Model names follow a `model:tag` format. Some examples are `orca-mini:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version. -
+ Example: llama2:7b, ConverterType: , ParameterName: model, @@ -1596,7 +1596,7 @@ Example: llama2:7b, IsRequired: true, IsDeprecated: false, Summary: -Name of the model to copy.
+Name of the model to copy. Example: llama2:7b, ConverterType: , ParameterName: source, @@ -1625,7 +1625,7 @@ Example: llama2:7b, IsRequired: true, IsDeprecated: false, Summary: -Name of the new model.
+Name of the new model. Example: llama2-backup, ConverterType: , ParameterName: destination, @@ -1735,7 +1735,7 @@ Example: llama2-backup, The model name. Model names follow a `model:tag` format. Some examples are `orca-mini:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version. -
+ Example: llama2:13b, ConverterType: , ParameterName: model, @@ -1843,7 +1843,7 @@ Example: llama2:13b, The model name. Model names follow a `model:tag` format. Some examples are `orca-mini:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version. -
+ Example: llama2:7b, ConverterType: , ParameterName: model, @@ -1876,7 +1876,7 @@ Example: llama2:7b, Allow insecure connections to the library. Only use this if you are pulling from your own library during development. -
+ Default Value: false, ConverterType: , ParameterName: insecure, @@ -1961,7 +1961,7 @@ Default Value: false, IsDeprecated: false, Summary: If `false` the response will be returned as a single response object, otherwise the response will be streamed as a series of objects. -
+ Default Value: true, ConverterType: , ParameterName: stream, @@ -1974,7 +1974,7 @@ Default Value: true, GenerateJsonSerializerContextTypes: true, HttpMethod: Post, Summary: -Download a model from the ollama library.
+Download a model from the ollama library. Cancelled pulls are resumed from where they left off, and multiple calls will share the same download progress., BaseUrlSummary: , RequestType: { @@ -2102,7 +2102,7 @@ Cancelled pulls are resumed from where they left off, and multiple calls will sh IsRequired: true, IsDeprecated: false, Summary: -The name of the model to push in the form of <namespace>/<model>:<tag>.
+The name of the model to push in the form of <namespace>/<model>:<tag>. Example: mattw/pygmalion:latest, ConverterType: , ParameterName: model, @@ -2135,7 +2135,7 @@ Example: mattw/pygmalion:latest, Allow insecure connections to the library. Only use this if you are pushing to your library during development. -
+ Default Value: false, ConverterType: , ParameterName: insecure, @@ -2220,7 +2220,7 @@ Default Value: false, IsDeprecated: false, Summary: If `false` the response will be returned as a single response object, otherwise the response will be streamed as a series of objects. -
+ Default Value: true, ConverterType: , ParameterName: stream, @@ -2233,7 +2233,7 @@ Default Value: true, GenerateJsonSerializerContextTypes: true, HttpMethod: Post, Summary: -Upload a model to a model library.
+Upload a model to a model library. Requires registering for ollama.ai and adding a public key first., BaseUrlSummary: , RequestType: { diff --git a/src/tests/OpenApiGenerator.UnitTests/Snapshots/Ollama/Models/_.verified.txt b/src/tests/OpenApiGenerator.UnitTests/Snapshots/Ollama/Models/_.verified.txt index f0fc701246..3c3c8ab951 100644 --- a/src/tests/OpenApiGenerator.UnitTests/Snapshots/Ollama/Models/_.verified.txt +++ b/src/tests/OpenApiGenerator.UnitTests/Snapshots/Ollama/Models/_.verified.txt @@ -31,7 +31,7 @@ The model name. Model names follow a `model:tag` format. Some examples are `orca-mini:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version. -
+ Example: llama2:7b, ConverterType: , ParameterName: model, @@ -60,7 +60,7 @@ Example: llama2:7b, IsRequired: true, IsDeprecated: false, Summary: -The prompt to generate a response.
+The prompt to generate a response. Example: Why is the sky blue?, ConverterType: , ParameterName: prompt, @@ -325,7 +325,7 @@ You may choose to use the `raw` parameter if you are specifying a full templated IsDeprecated: false, Summary: If `false` the response will be returned as a single response object, otherwise the response will be streamed as a series of objects. -
+ Default Value: true, ConverterType: , ParameterName: stream, @@ -1347,7 +1347,7 @@ Note: it's important to instruct the model to use JSON in the prompt. Otherwise, The model name. Model names follow a `model:tag` format. Some examples are `orca-mini:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version. -
+ Example: llama2:7b, ConverterType: , ParameterName: model, @@ -1403,7 +1403,7 @@ Example: llama2:7b, IsRequired: false, IsDeprecated: false, Summary: -The response for a given prompt with a provided model.
+The response for a given prompt with a provided model. Example: The sky appears blue because of a phenomenon called Rayleigh scattering., ConverterType: , ParameterName: response, @@ -1432,7 +1432,7 @@ Example: The sky appears blue because of a phenomenon called Rayleigh scattering IsRequired: false, IsDeprecated: false, Summary: -Whether the response has completed.
+Whether the response has completed. Example: true, ConverterType: , ParameterName: done, @@ -1462,7 +1462,7 @@ Example: true, IsDeprecated: false, Summary: An encoding of the conversation used in this response, this can be sent in the next request to keep a conversational memory. -
+ Example: [1, 2, 3], ConverterType: , ParameterName: context, @@ -1491,7 +1491,7 @@ Example: [1, 2, 3], IsRequired: false, IsDeprecated: false, Summary: -Time spent generating the response.
+Time spent generating the response. Example: 5589157167, ConverterType: , ParameterName: totalDuration, @@ -1520,7 +1520,7 @@ Example: 5589157167, IsRequired: false, IsDeprecated: false, Summary: -Time spent in nanoseconds loading the model.
+Time spent in nanoseconds loading the model. Example: 3013701500, ConverterType: , ParameterName: loadDuration, @@ -1549,7 +1549,7 @@ Example: 3013701500, IsRequired: false, IsDeprecated: false, Summary: -Number of tokens in the prompt.
+Number of tokens in the prompt. Example: 46, ConverterType: , ParameterName: promptEvalCount, @@ -1578,7 +1578,7 @@ Example: 46, IsRequired: false, IsDeprecated: false, Summary: -Time spent in nanoseconds evaluating the prompt.
+Time spent in nanoseconds evaluating the prompt. Example: 1160282000, ConverterType: , ParameterName: promptEvalDuration, @@ -1607,7 +1607,7 @@ Example: 1160282000, IsRequired: false, IsDeprecated: false, Summary: -Number of tokens the response.
+Number of tokens the response. Example: 113, ConverterType: , ParameterName: evalCount, @@ -1636,7 +1636,7 @@ Example: 113, IsRequired: false, IsDeprecated: false, Summary: -Time in nanoseconds spent generating the response.
+Time in nanoseconds spent generating the response. Example: 1325948000, ConverterType: , ParameterName: evalDuration, @@ -1686,7 +1686,7 @@ Example: 1325948000, The model name. Model names follow a `model:tag` format. Some examples are `orca-mini:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version. -
+ Example: llama2:7b, ConverterType: , ParameterName: model, @@ -1839,7 +1839,7 @@ Note: it's important to instruct the model to use JSON in the prompt. Otherwise, IsDeprecated: false, Summary: If `false` the response will be returned as a single response object, otherwise the response will be streamed as a series of objects. -
+ Default Value: true, ConverterType: , ParameterName: stream, @@ -1954,7 +1954,7 @@ How long (in minutes) to keep the model loaded in memory. The model name. Model names follow a `model:tag` format. Some examples are `orca-mini:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version. -
+ Example: llama2:7b, ConverterType: , ParameterName: model, @@ -2010,7 +2010,7 @@ Example: llama2:7b, IsRequired: false, IsDeprecated: false, Summary: -Whether the response has completed.
+Whether the response has completed. Example: true, ConverterType: , ParameterName: done, @@ -2066,7 +2066,7 @@ Example: true, IsRequired: false, IsDeprecated: false, Summary: -Time spent generating the response.
+Time spent generating the response. Example: 5589157167, ConverterType: , ParameterName: totalDuration, @@ -2095,7 +2095,7 @@ Example: 5589157167, IsRequired: false, IsDeprecated: false, Summary: -Time spent in nanoseconds loading the model.
+Time spent in nanoseconds loading the model. Example: 3013701500, ConverterType: , ParameterName: loadDuration, @@ -2124,7 +2124,7 @@ Example: 3013701500, IsRequired: false, IsDeprecated: false, Summary: -Number of tokens in the prompt.
+Number of tokens in the prompt. Example: 46, ConverterType: , ParameterName: promptEvalCount, @@ -2153,7 +2153,7 @@ Example: 46, IsRequired: false, IsDeprecated: false, Summary: -Time spent in nanoseconds evaluating the prompt.
+Time spent in nanoseconds evaluating the prompt. Example: 1160282000, ConverterType: , ParameterName: promptEvalDuration, @@ -2182,7 +2182,7 @@ Example: 1160282000, IsRequired: false, IsDeprecated: false, Summary: -Number of tokens the response.
+Number of tokens the response. Example: 113, ConverterType: , ParameterName: evalCount, @@ -2211,7 +2211,7 @@ Example: 113, IsRequired: false, IsDeprecated: false, Summary: -Time in nanoseconds spent generating the response.
+Time in nanoseconds spent generating the response. Example: 1325948000, ConverterType: , ParameterName: evalDuration, @@ -2293,7 +2293,7 @@ Example: 1325948000, IsRequired: true, IsDeprecated: false, Summary: -The content of the message
+The content of the message Example: Why is the sky blue?, ConverterType: , ParameterName: content, @@ -2470,7 +2470,7 @@ Example: Why is the sky blue?, The model name. Model names follow a `model:tag` format. Some examples are `orca-mini:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version. -
+ Example: llama2:7b, ConverterType: , ParameterName: model, @@ -2499,7 +2499,7 @@ Example: llama2:7b, IsRequired: true, IsDeprecated: false, Summary: -Text to generate embeddings for.
+Text to generate embeddings for. Example: Here is an article about llamas..., ConverterType: , ParameterName: prompt, @@ -2638,7 +2638,7 @@ How long (in minutes) to keep the model loaded in memory. IsRequired: false, IsDeprecated: false, Summary: -The embedding for the prompt.
+The embedding for the prompt. Example: [0.5670403838157654, 0.009260174818336964, ...], ConverterType: , ParameterName: embedding, @@ -2688,7 +2688,7 @@ Example: [0.5670403838157654, 0.009260174818336964, ...], The model name. Model names follow a `model:tag` format. Some examples are `orca-mini:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version. -
+ Example: mario, ConverterType: , ParameterName: model, @@ -2717,7 +2717,7 @@ Example: mario, IsRequired: true, IsDeprecated: false, Summary: -The contents of the Modelfile.
+The contents of the Modelfile. Example: FROM llama2\nSYSTEM You are mario from Super Mario Bros., ConverterType: , ParameterName: modelfile, @@ -2802,7 +2802,7 @@ Example: FROM llama2\nSYSTEM You are mario from Super Mario Bros., IsDeprecated: false, Summary: If `false` the response will be returned as a single response object, otherwise the response will be streamed as a series of objects. -
+ Default Value: true, ConverterType: , ParameterName: stream, @@ -3143,7 +3143,7 @@ Default Value: true, The model name. Model names follow a `model:tag` format. Some examples are `orca-mini:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version. -
+ Example: llama2:7b, ConverterType: , ParameterName: model, @@ -3199,7 +3199,7 @@ Example: llama2:7b, IsRequired: false, IsDeprecated: false, Summary: -Size of the model on disk.
+Size of the model on disk. Example: 7323310500, ConverterType: , ParameterName: size, @@ -3228,7 +3228,7 @@ Example: 7323310500, IsRequired: false, IsDeprecated: false, Summary: -The model's digest.
+The model's digest. Example: sha256:bc07c81de745696fdf5afca05e065818a8149fb0c77266fb584d9b2cba3711a, ConverterType: , ParameterName: digest, @@ -3492,7 +3492,7 @@ Example: sha256:bc07c81de745696fdf5afca05e065818a8149fb0c77266fb584d9b2cba3711a, The model name. Model names follow a `model:tag` format. Some examples are `orca-mini:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version. -
+ Example: llama2:7b, ConverterType: , ParameterName: model, @@ -3539,7 +3539,7 @@ Example: llama2:7b, IsRequired: false, IsDeprecated: false, Summary: -The model's license.
+The model's license. Example: <contents of license block>, ConverterType: , ParameterName: license, @@ -3568,7 +3568,7 @@ Example: <contents of license block>, IsRequired: false, IsDeprecated: false, Summary: -The modelfile associated with the model.
+The modelfile associated with the model. Example: Modelfile generated by \"ollama show\"\n# To build a new Modelfile based on this one, replace the FROM line with:\n# FROM llama2:latest\n\nFROM /Users/username/.ollama/models/blobs/sha256:8daa9615cce30c259a9555b1cc250d461d1bc69980a274b44d7eda0be78076d8\nTEMPLATE \"\"\"[INST] {{ if and .First .System }}<<SYS>>{{ .System }}<</SYS>>\n\n{{ end }}{{ .Prompt }} [/INST] \"\"\"\nSYSTEM \"\"\"\"\"\"\nPARAMETER stop [INST]\nPARAMETER stop [/INST]\nPARAMETER stop <<SYS>>\nPARAMETER stop <</SYS>>\n", ConverterType: , ParameterName: modelfile, @@ -3597,7 +3597,7 @@ Example: Modelfile generated by \"ollama show\"\n# To build a new Modelfile base IsRequired: false, IsDeprecated: false, Summary: -The model parameters.
+The model parameters. Example: stop [INST]\nstop [/INST]\nstop <<SYS>>\nstop <</SYS>>, ConverterType: , ParameterName: parameters, @@ -3626,7 +3626,7 @@ Example: stop [INST]\nstop [/INST]\nstop <<SYS>>\nstop <</SYS& IsRequired: false, IsDeprecated: false, Summary: -The prompt template for the model.
+The prompt template for the model. Example: [INST] {{ if and .First .System }}<<SYS>>{{ .System }}<</SYS>>\n\n{{ end }}{{ .Prompt }} [/INST], ConverterType: , ParameterName: template, @@ -3761,7 +3761,7 @@ Example: [INST] {{ if and .First .System }}<<SYS>>{{ .System }}<& IsRequired: true, IsDeprecated: false, Summary: -Name of the model to copy.
+Name of the model to copy. Example: llama2:7b, ConverterType: , ParameterName: source, @@ -3790,7 +3790,7 @@ Example: llama2:7b, IsRequired: true, IsDeprecated: false, Summary: -Name of the new model.
+Name of the new model. Example: llama2-backup, ConverterType: , ParameterName: destination, @@ -3840,7 +3840,7 @@ Example: llama2-backup, The model name. Model names follow a `model:tag` format. Some examples are `orca-mini:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version. -
+ Example: llama2:13b, ConverterType: , ParameterName: model, @@ -3890,7 +3890,7 @@ Example: llama2:13b, The model name. Model names follow a `model:tag` format. Some examples are `orca-mini:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version. -
+ Example: llama2:7b, ConverterType: , ParameterName: model, @@ -3923,7 +3923,7 @@ Example: llama2:7b, Allow insecure connections to the library. Only use this if you are pulling from your own library during development. -
+ Default Value: false, ConverterType: , ParameterName: insecure, @@ -4008,7 +4008,7 @@ Default Value: false, IsDeprecated: false, Summary: If `false` the response will be returned as a single response object, otherwise the response will be streamed as a series of objects. -
+ Default Value: true, ConverterType: , ParameterName: stream, @@ -4056,7 +4056,7 @@ Default Value: true, IsRequired: false, IsDeprecated: false, Summary: -Status pulling the model.
+Status pulling the model. Example: pulling manifest, ConverterType: global::OpenApiGenerator.JsonConverters.AnyOfJsonConverterFactory2, ParameterName: status, @@ -4085,7 +4085,7 @@ Example: pulling manifest, IsRequired: false, IsDeprecated: false, Summary: -The model's digest.
+The model's digest. Example: sha256:bc07c81de745696fdf5afca05e065818a8149fb0c77266fb584d9b2cba3711a, ConverterType: , ParameterName: digest, @@ -4114,7 +4114,7 @@ Example: sha256:bc07c81de745696fdf5afca05e065818a8149fb0c77266fb584d9b2cba3711a, IsRequired: false, IsDeprecated: false, Summary: -Total size of the model.
+Total size of the model. Example: 2142590208, ConverterType: , ParameterName: total, @@ -4143,7 +4143,7 @@ Example: 2142590208, IsRequired: false, IsDeprecated: false, Summary: -Total bytes transferred.
+Total bytes transferred. Example: 2142590208, ConverterType: , ParameterName: completed, @@ -4558,7 +4558,7 @@ The number of files to be downloaded depends on the number of layers specified i IsRequired: true, IsDeprecated: false, Summary: -The name of the model to push in the form of <namespace>/<model>:<tag>.
+The name of the model to push in the form of <namespace>/<model>:<tag>. Example: mattw/pygmalion:latest, ConverterType: , ParameterName: model, @@ -4591,7 +4591,7 @@ Example: mattw/pygmalion:latest, Allow insecure connections to the library. Only use this if you are pushing to your library during development. -
+ Default Value: false, ConverterType: , ParameterName: insecure, @@ -4676,7 +4676,7 @@ Default Value: false, IsDeprecated: false, Summary: If `false` the response will be returned as a single response object, otherwise the response will be streamed as a series of objects. -
+ Default Value: true, ConverterType: , ParameterName: stream, @@ -4751,7 +4751,7 @@ Default Value: true, IsRequired: false, IsDeprecated: false, Summary: -the model's digest
+the model's digest Example: sha256:bc07c81de745696fdf5afca05e065818a8149fb0c77266fb584d9b2cba3711a, ConverterType: , ParameterName: digest, @@ -4780,7 +4780,7 @@ Example: sha256:bc07c81de745696fdf5afca05e065818a8149fb0c77266fb584d9b2cba3711a, IsRequired: false, IsDeprecated: false, Summary: -total size of the model
+total size of the model Example: 2142590208, ConverterType: , ParameterName: total, @@ -4809,7 +4809,7 @@ Example: 2142590208, IsRequired: false, IsDeprecated: false, Summary: -Total bytes transferred.
+Total bytes transferred. Example: 2142590208, ConverterType: , ParameterName: completed, diff --git a/src/tests/OpenApiGenerator.UnitTests/Snapshots/OpenAi/Methods/_.verified.txt b/src/tests/OpenApiGenerator.UnitTests/Snapshots/OpenAi/Methods/_.verified.txt index ddb53dd038..168203be28 100644 --- a/src/tests/OpenApiGenerator.UnitTests/Snapshots/OpenAi/Methods/_.verified.txt +++ b/src/tests/OpenApiGenerator.UnitTests/Snapshots/OpenAi/Methods/_.verified.txt @@ -58,7 +58,7 @@ IsRequired: true, IsDeprecated: false, Summary: -ID of the model to use. See the [model endpoint compatibility](/docs/models/model-endpoint-compatibility) table for details on which models work with the Chat API.
+ID of the model to use. See the [model endpoint compatibility](/docs/models/model-endpoint-compatibility) table for details on which models work with the Chat API. Example: gpt-4-turbo, ConverterType: global::OpenApiGenerator.JsonConverters.AnyOfJsonConverterFactory2, ParameterName: model, @@ -91,7 +91,7 @@ Example: gpt-4-turbo, Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details) -
+ Default Value: 0, ConverterType: , ParameterName: frequencyPenalty, @@ -153,7 +153,7 @@ Accepts a JSON object that maps tokens (specified by their token ID in the token DefaultValue: false, IsDeprecated: false, Summary: -Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`.
+Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`. Default Value: false, ConverterType: , ParameterName: logprobs, @@ -241,8 +241,8 @@ The total length of input tokens and generated tokens is limited by the model's DefaultValue: 1, IsDeprecated: false, Summary: -How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep `n` as `1` to minimize costs.
-Default Value: 1
+How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep `n` as `1` to minimize costs. +Default Value: 1 Example: 1, ConverterType: , ParameterName: n, @@ -275,7 +275,7 @@ Example: 1, Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details) -
+ Default Value: 0, ConverterType: , ParameterName: presencePenalty, @@ -400,7 +400,7 @@ Up to 4 sequences where the API will stop generating further tokens. IsDeprecated: false, Summary: If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions). -
+ Default Value: false, ConverterType: , ParameterName: stream, @@ -464,8 +464,8 @@ Options for streaming response. Only set this when you set `stream: true`. What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. -
-Default Value: 1
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: temperature, @@ -498,8 +498,8 @@ Example: 1, An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. -
-Default Value: 1
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: topP, @@ -594,7 +594,7 @@ Specifying a particular tool via `{"type": "function", "function": {"name": "my_ IsDeprecated: false, Summary: A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids). -
+ Example: user-1234, ConverterType: , ParameterName: user, @@ -874,7 +874,7 @@ ID of the model to use. You can use the [List models](/docs/api-reference/models The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays. Note that <|endoftext|> is the document separator that the model sees during training, so if a prompt is not specified the model will generate as if from the beginning of a new document. -
+ Default Value: <|endoftext|>, ConverterType: global::OpenApiGenerator.JsonConverters.OneOfJsonConverterFactory4, ParameterName: prompt, @@ -909,7 +909,7 @@ Generates `best_of` completions server-side and returns the "best" (the one with When used with `n`, `best_of` controls the number of candidate completions and `n` specifies how many to return – `best_of` must be greater than `n`. **Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for `max_tokens` and `stop`. -
+ Default Value: 1, ConverterType: , ParameterName: bestOf, @@ -940,7 +940,7 @@ Default Value: 1, IsDeprecated: false, Summary: Echo back the prompt in addition to the completion -
+ Default Value: false, ConverterType: , ParameterName: echo, @@ -973,7 +973,7 @@ Default Value: false, Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details) -
+ Default Value: 0, ConverterType: , ParameterName: frequencyPenalty, @@ -1071,8 +1071,8 @@ The maximum value for `logprobs` is 5. The maximum number of [tokens](/tokenizer) that can be generated in the completion. The token count of your prompt plus `max_tokens` cannot exceed the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens. -
-Default Value: 16<br/>
+ +Default Value: 16 Example: 16, ConverterType: , ParameterName: maxTokens, @@ -1105,8 +1105,8 @@ Example: 16, How many completions to generate for each prompt. **Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for `max_tokens` and `stop`. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: n, @@ -1139,7 +1139,7 @@ Example: 1, Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details) -
+ Default Value: 0, ConverterType: , ParameterName: presencePenalty, @@ -1231,7 +1231,7 @@ Up to 4 sequences where the API will stop generating further tokens. The returne IsDeprecated: false, Summary: Whether to stream back partial progress. If set, tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions). -
+ Default Value: false, ConverterType: , ParameterName: stream, @@ -1294,7 +1294,7 @@ Options for streaming response. Only set this when you set `stream: true`. The suffix that comes after a completion of inserted text. This parameter is only supported for `gpt-3.5-turbo-instruct`. -
+ Example: test., ConverterType: , ParameterName: suffix, @@ -1327,8 +1327,8 @@ Example: test., What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: temperature, @@ -1361,8 +1361,8 @@ Example: 1, An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: topP, @@ -1392,7 +1392,7 @@ Example: 1, IsDeprecated: false, Summary: A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids). -
+ Example: user-1234, ConverterType: , ParameterName: user, @@ -1563,7 +1563,7 @@ Example: user-1234, IsRequired: true, IsDeprecated: false, Summary: -A text description of the desired image(s). The maximum length is 1000 characters for `dall-e-2` and 4000 characters for `dall-e-3`.
+A text description of the desired image(s). The maximum length is 1000 characters for `dall-e-2` and 4000 characters for `dall-e-3`. Example: A cute baby sea otter, ConverterType: , ParameterName: prompt, @@ -1594,8 +1594,8 @@ Example: A cute baby sea otter, DefaultValue: global::G.CreateImageRequestModel.DallE2, IsDeprecated: false, Summary: -The model to use for image generation.
-Default Value: dall-e-2<br/>
+The model to use for image generation. +Default Value: dall-e-2 Example: dall-e-3, ConverterType: global::OpenApiGenerator.JsonConverters.AnyOfJsonConverterFactory2, ParameterName: model, @@ -1625,8 +1625,8 @@ Example: dall-e-3, DefaultValue: 1, IsDeprecated: false, Summary: -The number of images to generate. Must be between 1 and 10. For `dall-e-3`, only `n=1` is supported.
-Default Value: 1<br/>
+The number of images to generate. Must be between 1 and 10. For `dall-e-3`, only `n=1` is supported. +Default Value: 1 Example: 1, ConverterType: , ParameterName: n, @@ -1662,8 +1662,8 @@ Example: 1, DefaultValue: global::G.CreateImageRequestQuality.Standard, IsDeprecated: false, Summary: -The quality of the image that will be generated. `hd` creates images with finer details and greater consistency across the image. This param is only supported for `dall-e-3`.
-Default Value: standard<br/>
+The quality of the image that will be generated. `hd` creates images with finer details and greater consistency across the image. This param is only supported for `dall-e-3`. +Default Value: standard Example: standard, ConverterType: global::OpenApiGenerator.JsonConverters.CreateImageRequestQualityJsonConverter, ParameterName: quality, @@ -1699,8 +1699,8 @@ Example: standard, DefaultValue: global::G.CreateImageRequestResponseFormat.Url, IsDeprecated: false, Summary: -The format in which the generated images are returned. Must be one of `url` or `b64_json`. URLs are only valid for 60 minutes after the image has been generated.
-Default Value: url<br/>
+The format in which the generated images are returned. Must be one of `url` or `b64_json`. URLs are only valid for 60 minutes after the image has been generated. +Default Value: url Example: url, ConverterType: global::OpenApiGenerator.JsonConverters.CreateImageRequestResponseFormatJsonConverter, ParameterName: responseFormat, @@ -1742,8 +1742,8 @@ Example: url, DefaultValue: global::G.CreateImageRequestSize._1024x1024, IsDeprecated: false, Summary: -The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`. Must be one of `1024x1024`, `1792x1024`, or `1024x1792` for `dall-e-3` models.
-Default Value: 1024x1024<br/>
+The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`. Must be one of `1024x1024`, `1792x1024`, or `1024x1792` for `dall-e-3` models. +Default Value: 1024x1024 Example: 1024x1024, ConverterType: global::OpenApiGenerator.JsonConverters.CreateImageRequestSizeJsonConverter, ParameterName: size, @@ -1779,8 +1779,8 @@ Example: 1024x1024, DefaultValue: global::G.CreateImageRequestStyle.Vivid, IsDeprecated: false, Summary: -The style of the generated images. Must be one of `vivid` or `natural`. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images. This param is only supported for `dall-e-3`.
-Default Value: vivid<br/>
+The style of the generated images. Must be one of `vivid` or `natural`. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images. This param is only supported for `dall-e-3`. +Default Value: vivid Example: vivid, ConverterType: global::OpenApiGenerator.JsonConverters.CreateImageRequestStyleJsonConverter, ParameterName: style, @@ -1810,7 +1810,7 @@ Example: vivid, IsDeprecated: false, Summary: A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids). -
+ Example: user-1234, ConverterType: , ParameterName: user, @@ -1978,7 +1978,7 @@ Example: user-1234, IsRequired: true, IsDeprecated: false, Summary: -A text description of the desired image(s). The maximum length is 1000 characters.
+A text description of the desired image(s). The maximum length is 1000 characters. Example: A cute baby sea otter wearing a beret, ConverterType: , ParameterName: prompt, @@ -2036,8 +2036,8 @@ Example: A cute baby sea otter wearing a beret, DefaultValue: global::G.CreateImageEditRequestModel.DallE2, IsDeprecated: false, Summary: -The model to use for image generation. Only `dall-e-2` is supported at this time.
-Default Value: dall-e-2<br/>
+The model to use for image generation. Only `dall-e-2` is supported at this time. +Default Value: dall-e-2 Example: dall-e-2, ConverterType: global::OpenApiGenerator.JsonConverters.AnyOfJsonConverterFactory2, ParameterName: model, @@ -2067,8 +2067,8 @@ Example: dall-e-2, DefaultValue: 1, IsDeprecated: false, Summary: -The number of images to generate. Must be between 1 and 10.
-Default Value: 1<br/>
+The number of images to generate. Must be between 1 and 10. +Default Value: 1 Example: 1, ConverterType: , ParameterName: n, @@ -2106,8 +2106,8 @@ Example: 1, DefaultValue: global::G.CreateImageEditRequestSize._1024x1024, IsDeprecated: false, Summary: -The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024`.
-Default Value: 1024x1024<br/>
+The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024`. +Default Value: 1024x1024 Example: 1024x1024, ConverterType: global::OpenApiGenerator.JsonConverters.CreateImageEditRequestSizeJsonConverter, ParameterName: size, @@ -2143,8 +2143,8 @@ Example: 1024x1024, DefaultValue: global::G.CreateImageEditRequestResponseFormat.Url, IsDeprecated: false, Summary: -The format in which the generated images are returned. Must be one of `url` or `b64_json`. URLs are only valid for 60 minutes after the image has been generated.
-Default Value: url<br/>
+The format in which the generated images are returned. Must be one of `url` or `b64_json`. URLs are only valid for 60 minutes after the image has been generated. +Default Value: url Example: url, ConverterType: global::OpenApiGenerator.JsonConverters.CreateImageEditRequestResponseFormatJsonConverter, ParameterName: responseFormat, @@ -2174,7 +2174,7 @@ Example: url, IsDeprecated: false, Summary: A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids). -
+ Example: user-1234, ConverterType: , ParameterName: user, @@ -2344,8 +2344,8 @@ Example: user-1234, DefaultValue: global::G.CreateImageVariationRequestModel.DallE2, IsDeprecated: false, Summary: -The model to use for image generation. Only `dall-e-2` is supported at this time.
-Default Value: dall-e-2<br/>
+The model to use for image generation. Only `dall-e-2` is supported at this time. +Default Value: dall-e-2 Example: dall-e-2, ConverterType: global::OpenApiGenerator.JsonConverters.AnyOfJsonConverterFactory2, ParameterName: model, @@ -2375,8 +2375,8 @@ Example: dall-e-2, DefaultValue: 1, IsDeprecated: false, Summary: -The number of images to generate. Must be between 1 and 10. For `dall-e-3`, only `n=1` is supported.
-Default Value: 1<br/>
+The number of images to generate. Must be between 1 and 10. For `dall-e-3`, only `n=1` is supported. +Default Value: 1 Example: 1, ConverterType: , ParameterName: n, @@ -2412,8 +2412,8 @@ Example: 1, DefaultValue: global::G.CreateImageVariationRequestResponseFormat.Url, IsDeprecated: false, Summary: -The format in which the generated images are returned. Must be one of `url` or `b64_json`. URLs are only valid for 60 minutes after the image has been generated.
-Default Value: url<br/>
+The format in which the generated images are returned. Must be one of `url` or `b64_json`. URLs are only valid for 60 minutes after the image has been generated. +Default Value: url Example: url, ConverterType: global::OpenApiGenerator.JsonConverters.CreateImageVariationRequestResponseFormatJsonConverter, ParameterName: responseFormat, @@ -2451,8 +2451,8 @@ Example: url, DefaultValue: global::G.CreateImageVariationRequestSize._1024x1024, IsDeprecated: false, Summary: -The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024`.
-Default Value: 1024x1024<br/>
+The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024`. +Default Value: 1024x1024 Example: 1024x1024, ConverterType: global::OpenApiGenerator.JsonConverters.CreateImageVariationRequestSizeJsonConverter, ParameterName: size, @@ -2482,7 +2482,7 @@ Example: 1024x1024, IsDeprecated: false, Summary: A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids). -
+ Example: user-1234, ConverterType: , ParameterName: user, @@ -2621,7 +2621,7 @@ Example: user-1234, IsDeprecated: false, Summary: Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. The input must not exceed the max input tokens for the model (8192 tokens for `text-embedding-ada-002`), cannot be an empty string, and any array must be 2048 dimensions or less. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens. -
+ Example: The quick brown fox jumped over the lazy dog, ConverterType: global::OpenApiGenerator.JsonConverters.OneOfJsonConverterFactory4, ParameterName: input, @@ -2652,7 +2652,7 @@ Example: The quick brown fox jumped over the lazy dog, IsDeprecated: false, Summary: ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them. -
+ Example: text-embedding-3-small, ConverterType: global::OpenApiGenerator.JsonConverters.AnyOfJsonConverterFactory2, ParameterName: model, @@ -2688,8 +2688,8 @@ Example: text-embedding-3-small, DefaultValue: global::G.CreateEmbeddingRequestEncodingFormat.Float, IsDeprecated: false, Summary: -The format to return the embeddings in. Can be either `float` or [`base64`](https://pypi.org/project/pybase64/).
-Default Value: float<br/>
+The format to return the embeddings in. Can be either `float` or [`base64`](https://pypi.org/project/pybase64/). +Default Value: float Example: float, ConverterType: global::OpenApiGenerator.JsonConverters.CreateEmbeddingRequestEncodingFormatJsonConverter, ParameterName: encodingFormat, @@ -2748,7 +2748,7 @@ The number of dimensions the resulting output embeddings should have. Only suppo IsDeprecated: false, Summary: A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids). -
+ Example: user-1234, ConverterType: , ParameterName: user, @@ -3000,7 +3000,7 @@ One of the available [TTS models](/docs/models/tts): `tts-1` or `tts-1-hd` DefaultValue: global::G.CreateSpeechRequestResponseFormat.Mp3, IsDeprecated: false, Summary: -The format to audio in. Supported formats are `mp3`, `opus`, `aac`, `flac`, `wav`, and `pcm`.
+The format to audio in. Supported formats are `mp3`, `opus`, `aac`, `flac`, `wav`, and `pcm`. Default Value: mp3, ConverterType: global::OpenApiGenerator.JsonConverters.CreateSpeechRequestResponseFormatJsonConverter, ParameterName: responseFormat, @@ -3030,7 +3030,7 @@ Default Value: mp3, DefaultValue: 1, IsDeprecated: false, Summary: -The speed of the generated audio. Select a value from `0.25` to `4.0`. `1.0` is the default.
+The speed of the generated audio. Select a value from `0.25` to `4.0`. `1.0` is the default. Default Value: 1, ConverterType: , ParameterName: speed, @@ -3190,7 +3190,7 @@ The audio file object (not file name) to transcribe, in one of these formats: fl IsDeprecated: false, Summary: ID of the model to use. Only `whisper-1` (which is powered by our open source Whisper V2 model) is currently available. -
+ Example: whisper-1, ConverterType: global::OpenApiGenerator.JsonConverters.AnyOfJsonConverterFactory2, ParameterName: model, @@ -3291,7 +3291,7 @@ An optional text to guide the model's style or continue a previous audio segment IsDeprecated: false, Summary: The format of the transcript output, in one of these options: `json`, `text`, `srt`, `verbose_json`, or `vtt`. -
+ Default Value: json, ConverterType: global::OpenApiGenerator.JsonConverters.CreateTranscriptionRequestResponseFormatJsonConverter, ParameterName: responseFormat, @@ -3322,7 +3322,7 @@ Default Value: json, IsDeprecated: false, Summary: The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit. -
+ Default Value: 0, ConverterType: , ParameterName: temperature, @@ -3486,7 +3486,7 @@ The audio file object (not file name) translate, in one of these formats: flac, IsDeprecated: false, Summary: ID of the model to use. Only `whisper-1` (which is powered by our open source Whisper V2 model) is currently available. -
+ Example: whisper-1, ConverterType: global::OpenApiGenerator.JsonConverters.AnyOfJsonConverterFactory2, ParameterName: model, @@ -3546,7 +3546,7 @@ An optional text to guide the model's style or continue a previous audio segment IsDeprecated: false, Summary: The format of the transcript output, in one of these options: `json`, `text`, `srt`, `verbose_json`, or `vtt`. -
+ Default Value: json, ConverterType: , ParameterName: responseFormat, @@ -3577,7 +3577,7 @@ Default Value: json, IsDeprecated: false, Summary: The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit. -
+ Default Value: 0, ConverterType: , ParameterName: temperature, @@ -4334,7 +4334,7 @@ Please [contact us](https://help.openai.com/) if you need to increase these stor Summary: The name of the model to fine-tune. You can select one of the [supported models](/docs/guides/fine-tuning/what-models-can-be-fine-tuned). -
+ Example: gpt-3.5-turbo, ConverterType: global::OpenApiGenerator.JsonConverters.AnyOfJsonConverterFactory2, ParameterName: model, @@ -4370,7 +4370,7 @@ See [upload file](/docs/api-reference/files/create) for how to upload a file. Your dataset must be formatted as a JSONL file. Additionally, you must upload your file with the purpose `fine-tune`. See the [fine-tuning guide](/docs/guides/fine-tuning) for more details. -
+ Example: file-abc123, ConverterType: , ParameterName: trainingFile, @@ -4467,7 +4467,7 @@ The same data should not be present in both train and validation files. Your dataset must be formatted as a JSONL file. You must upload your file with the purpose `fine-tune`. See the [fine-tuning guide](/docs/guides/fine-tuning) for more details. -
+ Example: file-abc123, ConverterType: , ParameterName: validationFile, @@ -4525,7 +4525,7 @@ Example: file-abc123, Summary: The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases. If a seed is not specified, one will be generated for you. -
+ Example: 42, ConverterType: , ParameterName: seed, @@ -5799,8 +5799,8 @@ List checkpoints for a fine-tuning job. Two content moderations models are available: `text-moderation-stable` and `text-moderation-latest`. The default is `text-moderation-latest` which will be automatically upgraded over time. This ensures you are always using our most accurate model. If you use `text-moderation-stable`, we will provide advanced notice before updating the model. Accuracy of `text-moderation-stable` may be slightly lower than for `text-moderation-latest`. -
-Default Value: text-moderation-latest<br/>
+ +Default Value: text-moderation-latest Example: text-moderation-stable, ConverterType: global::OpenApiGenerator.JsonConverters.AnyOfJsonConverterFactory2, ParameterName: model, @@ -6144,7 +6144,7 @@ Example: text-moderation-stable, IsDeprecated: false, Summary: ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them. -
+ Example: gpt-4-turbo, ConverterType: global::OpenApiGenerator.JsonConverters.AnyOfJsonConverterFactory2, ParameterName: model, @@ -6262,7 +6262,7 @@ The system instructions that the assistant uses. The maximum length is 256,000 c IsDeprecated: false, Summary: A list of tool enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types `code_interpreter`, `file_search`, or `function`. -
+ Default Value: [], ConverterType: , ParameterName: tools, @@ -6352,8 +6352,8 @@ Set of 16 key-value pairs that can be attached to an object. This can be useful IsDeprecated: false, Summary: What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: temperature, @@ -6386,8 +6386,8 @@ Example: 1, An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: topP, @@ -6866,7 +6866,7 @@ The system instructions that the assistant uses. The maximum length is 256,000 c IsDeprecated: false, Summary: A list of tool enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types `code_interpreter`, `file_search`, or `function`. -
+ Default Value: [], ConverterType: , ParameterName: tools, @@ -6956,8 +6956,8 @@ Set of 16 key-value pairs that can be attached to an object. This can be useful IsDeprecated: false, Summary: What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: temperature, @@ -6990,8 +6990,8 @@ Example: 1, An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: topP, @@ -9037,7 +9037,7 @@ Set of 16 key-value pairs that can be attached to an object. This can be useful IsRequired: false, IsDeprecated: false, Summary: -The ID of the [Model](/docs/api-reference/models) to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used.
+The ID of the [Model](/docs/api-reference/models) to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used. Example: gpt-4-turbo, ConverterType: global::OpenApiGenerator.JsonConverters.AnyOfJsonConverterFactory2, ParameterName: model, @@ -9181,8 +9181,8 @@ Set of 16 key-value pairs that can be attached to an object. This can be useful IsDeprecated: false, Summary: What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: temperature, @@ -9215,8 +9215,8 @@ Example: 1, An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: topP, @@ -9903,7 +9903,7 @@ Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the m IsRequired: false, IsDeprecated: false, Summary: -The ID of the [Model](/docs/api-reference/models) to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used.
+The ID of the [Model](/docs/api-reference/models) to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used. Example: gpt-4-turbo, ConverterType: global::OpenApiGenerator.JsonConverters.AnyOfJsonConverterFactory2, ParameterName: model, @@ -10072,8 +10072,8 @@ Set of 16 key-value pairs that can be attached to an object. This can be useful IsDeprecated: false, Summary: What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: temperature, @@ -10106,8 +10106,8 @@ Example: 1, An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: topP, diff --git a/src/tests/OpenApiGenerator.UnitTests/Snapshots/OpenAi/Models/_.verified.txt b/src/tests/OpenApiGenerator.UnitTests/Snapshots/OpenAi/Models/_.verified.txt index c0f3578603..5b08acd01f 100644 --- a/src/tests/OpenApiGenerator.UnitTests/Snapshots/OpenAi/Models/_.verified.txt +++ b/src/tests/OpenApiGenerator.UnitTests/Snapshots/OpenAi/Models/_.verified.txt @@ -460,7 +460,7 @@ ID of the model to use. You can use the [List models](/docs/api-reference/models The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays. Note that <|endoftext|> is the document separator that the model sees during training, so if a prompt is not specified the model will generate as if from the beginning of a new document. -
+ Default Value: <|endoftext|>, ConverterType: global::OpenApiGenerator.JsonConverters.OneOfJsonConverterFactory4, ParameterName: prompt, @@ -495,7 +495,7 @@ Generates `best_of` completions server-side and returns the "best" (the one with When used with `n`, `best_of` controls the number of candidate completions and `n` specifies how many to return – `best_of` must be greater than `n`. **Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for `max_tokens` and `stop`. -
+ Default Value: 1, ConverterType: , ParameterName: bestOf, @@ -526,7 +526,7 @@ Default Value: 1, IsDeprecated: false, Summary: Echo back the prompt in addition to the completion -
+ Default Value: false, ConverterType: , ParameterName: echo, @@ -559,7 +559,7 @@ Default Value: false, Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details) -
+ Default Value: 0, ConverterType: , ParameterName: frequencyPenalty, @@ -657,8 +657,8 @@ The maximum value for `logprobs` is 5. The maximum number of [tokens](/tokenizer) that can be generated in the completion. The token count of your prompt plus `max_tokens` cannot exceed the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens. -
-Default Value: 16<br/>
+ +Default Value: 16 Example: 16, ConverterType: , ParameterName: maxTokens, @@ -691,8 +691,8 @@ Example: 16, How many completions to generate for each prompt. **Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for `max_tokens` and `stop`. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: n, @@ -725,7 +725,7 @@ Example: 1, Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details) -
+ Default Value: 0, ConverterType: , ParameterName: presencePenalty, @@ -817,7 +817,7 @@ Up to 4 sequences where the API will stop generating further tokens. The returne IsDeprecated: false, Summary: Whether to stream back partial progress. If set, tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions). -
+ Default Value: false, ConverterType: , ParameterName: stream, @@ -880,7 +880,7 @@ Options for streaming response. Only set this when you set `stream: true`. The suffix that comes after a completion of inserted text. This parameter is only supported for `gpt-3.5-turbo-instruct`. -
+ Example: test., ConverterType: , ParameterName: suffix, @@ -913,8 +913,8 @@ Example: test., What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: temperature, @@ -947,8 +947,8 @@ Example: 1, An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: topP, @@ -978,7 +978,7 @@ Example: 1, IsDeprecated: false, Summary: A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids). -
+ Example: user-1234, ConverterType: , ParameterName: user, @@ -1872,7 +1872,7 @@ or `content_filter` if content was omitted due to a flag from our content filter DefaultValue: global::G.ChatCompletionRequestMessageContentPartImageImageUrlDetail.Auto, IsDeprecated: false, Summary: -Specifies the detail level of the image. Learn more in the [Vision guide](/docs/guides/vision/low-or-high-fidelity-image-understanding).
+Specifies the detail level of the image. Learn more in the [Vision guide](/docs/guides/vision/low-or-high-fidelity-image-understanding). Default Value: auto, ConverterType: global::OpenApiGenerator.JsonConverters.ChatCompletionRequestMessageContentPartImageImageUrlDetailJsonConverter, ParameterName: detail, @@ -1981,7 +1981,7 @@ Default Value: auto, } ], Summary: -Specifies the detail level of the image. Learn more in the [Vision guide](/docs/guides/vision/low-or-high-fidelity-image-understanding).
+Specifies the detail level of the image. Learn more in the [Vision guide](/docs/guides/vision/low-or-high-fidelity-image-understanding). Default Value: auto, IsDeprecated: false, AdditionalModels: null, @@ -5014,7 +5014,7 @@ Options for streaming response. Only set this when you set `stream: true`. IsRequired: true, IsDeprecated: false, Summary: -ID of the model to use. See the [model endpoint compatibility](/docs/models/model-endpoint-compatibility) table for details on which models work with the Chat API.
+ID of the model to use. See the [model endpoint compatibility](/docs/models/model-endpoint-compatibility) table for details on which models work with the Chat API. Example: gpt-4-turbo, ConverterType: global::OpenApiGenerator.JsonConverters.AnyOfJsonConverterFactory2, ParameterName: model, @@ -5047,7 +5047,7 @@ Example: gpt-4-turbo, Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details) -
+ Default Value: 0, ConverterType: , ParameterName: frequencyPenalty, @@ -5109,7 +5109,7 @@ Accepts a JSON object that maps tokens (specified by their token ID in the token DefaultValue: false, IsDeprecated: false, Summary: -Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`.
+Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`. Default Value: false, ConverterType: , ParameterName: logprobs, @@ -5197,8 +5197,8 @@ The total length of input tokens and generated tokens is limited by the model's DefaultValue: 1, IsDeprecated: false, Summary: -How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep `n` as `1` to minimize costs.
-Default Value: 1<br/>
+How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep `n` as `1` to minimize costs. +Default Value: 1 Example: 1, ConverterType: , ParameterName: n, @@ -5231,7 +5231,7 @@ Example: 1, Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. [See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details) -
+ Default Value: 0, ConverterType: , ParameterName: presencePenalty, @@ -5356,7 +5356,7 @@ Up to 4 sequences where the API will stop generating further tokens. IsDeprecated: false, Summary: If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions). -
+ Default Value: false, ConverterType: , ParameterName: stream, @@ -5420,8 +5420,8 @@ Options for streaming response. Only set this when you set `stream: true`. What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: temperature, @@ -5454,8 +5454,8 @@ Example: 1, An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: topP, @@ -5550,7 +5550,7 @@ Specifying a particular tool via `{"type": "function", "function": {"name": "my_ IsDeprecated: false, Summary: A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids). -
+ Example: user-1234, ConverterType: , ParameterName: user, @@ -5672,8 +5672,8 @@ A list of functions the model may generate JSON inputs for. DefaultValue: global::G.CreateChatCompletionRequestResponseFormatType.Text, IsDeprecated: false, Summary: -Must be one of `text` or `json_object`.
-Default Value: text<br/>
+Must be one of `text` or `json_object`. +Default Value: text Example: json_object, ConverterType: global::OpenApiGenerator.JsonConverters.CreateChatCompletionRequestResponseFormatTypeJsonConverter, ParameterName: type, @@ -5761,8 +5761,8 @@ Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the m } ], Summary: -Must be one of `text` or `json_object`.
-Default Value: text<br/>
+Must be one of `text` or `json_object`. +Default Value: text Example: json_object, IsDeprecated: false, AdditionalModels: null, @@ -8749,7 +8749,7 @@ The reason the model stopped generating tokens. This will be `stop` if the model IsRequired: true, IsDeprecated: false, Summary: -A text description of the desired image(s). The maximum length is 1000 characters for `dall-e-2` and 4000 characters for `dall-e-3`.
+A text description of the desired image(s). The maximum length is 1000 characters for `dall-e-2` and 4000 characters for `dall-e-3`. Example: A cute baby sea otter, ConverterType: , ParameterName: prompt, @@ -8780,8 +8780,8 @@ Example: A cute baby sea otter, DefaultValue: global::G.CreateImageRequestModel.DallE2, IsDeprecated: false, Summary: -The model to use for image generation.
-Default Value: dall-e-2<br/>
+The model to use for image generation. +Default Value: dall-e-2 Example: dall-e-3, ConverterType: global::OpenApiGenerator.JsonConverters.AnyOfJsonConverterFactory2, ParameterName: model, @@ -8811,8 +8811,8 @@ Example: dall-e-3, DefaultValue: 1, IsDeprecated: false, Summary: -The number of images to generate. Must be between 1 and 10. For `dall-e-3`, only `n=1` is supported.
-Default Value: 1<br/>
+The number of images to generate. Must be between 1 and 10. For `dall-e-3`, only `n=1` is supported. +Default Value: 1 Example: 1, ConverterType: , ParameterName: n, @@ -8848,8 +8848,8 @@ Example: 1, DefaultValue: global::G.CreateImageRequestQuality.Standard, IsDeprecated: false, Summary: -The quality of the image that will be generated. `hd` creates images with finer details and greater consistency across the image. This param is only supported for `dall-e-3`.
-Default Value: standard<br/>
+The quality of the image that will be generated. `hd` creates images with finer details and greater consistency across the image. This param is only supported for `dall-e-3`. +Default Value: standard Example: standard, ConverterType: global::OpenApiGenerator.JsonConverters.CreateImageRequestQualityJsonConverter, ParameterName: quality, @@ -8885,8 +8885,8 @@ Example: standard, DefaultValue: global::G.CreateImageRequestResponseFormat.Url, IsDeprecated: false, Summary: -The format in which the generated images are returned. Must be one of `url` or `b64_json`. URLs are only valid for 60 minutes after the image has been generated.
-Default Value: url<br/>
+The format in which the generated images are returned. Must be one of `url` or `b64_json`. URLs are only valid for 60 minutes after the image has been generated. +Default Value: url Example: url, ConverterType: global::OpenApiGenerator.JsonConverters.CreateImageRequestResponseFormatJsonConverter, ParameterName: responseFormat, @@ -8928,8 +8928,8 @@ Example: url, DefaultValue: global::G.CreateImageRequestSize._1024x1024, IsDeprecated: false, Summary: -The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`. Must be one of `1024x1024`, `1792x1024`, or `1024x1792` for `dall-e-3` models.
-Default Value: 1024x1024<br/>
+The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`. Must be one of `1024x1024`, `1792x1024`, or `1024x1792` for `dall-e-3` models. +Default Value: 1024x1024 Example: 1024x1024, ConverterType: global::OpenApiGenerator.JsonConverters.CreateImageRequestSizeJsonConverter, ParameterName: size, @@ -8965,8 +8965,8 @@ Example: 1024x1024, DefaultValue: global::G.CreateImageRequestStyle.Vivid, IsDeprecated: false, Summary: -The style of the generated images. Must be one of `vivid` or `natural`. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images. This param is only supported for `dall-e-3`.
-Default Value: vivid<br/>
+The style of the generated images. Must be one of `vivid` or `natural`. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images. This param is only supported for `dall-e-3`. +Default Value: vivid Example: vivid, ConverterType: global::OpenApiGenerator.JsonConverters.CreateImageRequestStyleJsonConverter, ParameterName: style, @@ -8996,7 +8996,7 @@ Example: vivid, IsDeprecated: false, Summary: A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids). -
+ Example: user-1234, ConverterType: , ParameterName: user, @@ -9078,8 +9078,8 @@ Example: user-1234, } ], Summary: -The quality of the image that will be generated. `hd` creates images with finer details and greater consistency across the image. This param is only supported for `dall-e-3`.
-Default Value: standard<br/>
+The quality of the image that will be generated. `hd` creates images with finer details and greater consistency across the image. This param is only supported for `dall-e-3`. +Default Value: standard Example: standard, IsDeprecated: false, AdditionalModels: null, @@ -9154,8 +9154,8 @@ Example: standard, } ], Summary: -The format in which the generated images are returned. Must be one of `url` or `b64_json`. URLs are only valid for 60 minutes after the image has been generated.
-Default Value: url<br/>
+The format in which the generated images are returned. Must be one of `url` or `b64_json`. URLs are only valid for 60 minutes after the image has been generated. +Default Value: url Example: url, IsDeprecated: false, AdditionalModels: null, @@ -9311,8 +9311,8 @@ Example: url, } ], Summary: -The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`. Must be one of `1024x1024`, `1792x1024`, or `1024x1792` for `dall-e-3` models.
-Default Value: 1024x1024<br/>
+The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`. Must be one of `1024x1024`, `1792x1024`, or `1024x1792` for `dall-e-3` models. +Default Value: 1024x1024 Example: 1024x1024, IsDeprecated: false, AdditionalModels: null, @@ -9387,8 +9387,8 @@ Example: 1024x1024, } ], Summary: -The style of the generated images. Must be one of `vivid` or `natural`. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images. This param is only supported for `dall-e-3`.
-Default Value: vivid<br/>
+The style of the generated images. Must be one of `vivid` or `natural`. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images. This param is only supported for `dall-e-3`. +Default Value: vivid Example: vivid, IsDeprecated: false, AdditionalModels: null, @@ -9699,7 +9699,7 @@ Example: vivid, IsRequired: true, IsDeprecated: false, Summary: -A text description of the desired image(s). The maximum length is 1000 characters.
+A text description of the desired image(s). The maximum length is 1000 characters. Example: A cute baby sea otter wearing a beret, ConverterType: , ParameterName: prompt, @@ -9757,8 +9757,8 @@ Example: A cute baby sea otter wearing a beret, DefaultValue: global::G.CreateImageEditRequestModel.DallE2, IsDeprecated: false, Summary: -The model to use for image generation. Only `dall-e-2` is supported at this time.
-Default Value: dall-e-2<br/>
+The model to use for image generation. Only `dall-e-2` is supported at this time. +Default Value: dall-e-2 Example: dall-e-2, ConverterType: global::OpenApiGenerator.JsonConverters.AnyOfJsonConverterFactory2, ParameterName: model, @@ -9788,8 +9788,8 @@ Example: dall-e-2, DefaultValue: 1, IsDeprecated: false, Summary: -The number of images to generate. Must be between 1 and 10.
-Default Value: 1<br/>
+The number of images to generate. Must be between 1 and 10. +Default Value: 1 Example: 1, ConverterType: , ParameterName: n, @@ -9827,8 +9827,8 @@ Example: 1, DefaultValue: global::G.CreateImageEditRequestSize._1024x1024, IsDeprecated: false, Summary: -The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024`.
-Default Value: 1024x1024<br/>
+The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024`. +Default Value: 1024x1024 Example: 1024x1024, ConverterType: global::OpenApiGenerator.JsonConverters.CreateImageEditRequestSizeJsonConverter, ParameterName: size, @@ -9864,8 +9864,8 @@ Example: 1024x1024, DefaultValue: global::G.CreateImageEditRequestResponseFormat.Url, IsDeprecated: false, Summary: -The format in which the generated images are returned. Must be one of `url` or `b64_json`. URLs are only valid for 60 minutes after the image has been generated.
-Default Value: url<br/>
+The format in which the generated images are returned. Must be one of `url` or `b64_json`. URLs are only valid for 60 minutes after the image has been generated. +Default Value: url Example: url, ConverterType: global::OpenApiGenerator.JsonConverters.CreateImageEditRequestResponseFormatJsonConverter, ParameterName: responseFormat, @@ -9895,7 +9895,7 @@ Example: url, IsDeprecated: false, Summary: A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids). -
+ Example: user-1234, ConverterType: , ParameterName: user, @@ -10004,8 +10004,8 @@ Example: user-1234, } ], Summary: -The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024`.
-Default Value: 1024x1024<br/>
+The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024`. +Default Value: 1024x1024 Example: 1024x1024, IsDeprecated: false, AdditionalModels: null, @@ -10080,8 +10080,8 @@ Example: 1024x1024, } ], Summary: -The format in which the generated images are returned. Must be one of `url` or `b64_json`. URLs are only valid for 60 minutes after the image has been generated.
-Default Value: url<br/>
+The format in which the generated images are returned. Must be one of `url` or `b64_json`. URLs are only valid for 60 minutes after the image has been generated. +Default Value: url Example: url, IsDeprecated: false, AdditionalModels: null, @@ -10196,8 +10196,8 @@ Example: url, DefaultValue: global::G.CreateImageVariationRequestModel.DallE2, IsDeprecated: false, Summary: -The model to use for image generation. Only `dall-e-2` is supported at this time.
-Default Value: dall-e-2<br/>
+The model to use for image generation. Only `dall-e-2` is supported at this time. +Default Value: dall-e-2 Example: dall-e-2, ConverterType: global::OpenApiGenerator.JsonConverters.AnyOfJsonConverterFactory2, ParameterName: model, @@ -10227,8 +10227,8 @@ Example: dall-e-2, DefaultValue: 1, IsDeprecated: false, Summary: -The number of images to generate. Must be between 1 and 10. For `dall-e-3`, only `n=1` is supported.
-Default Value: 1<br/>
+The number of images to generate. Must be between 1 and 10. For `dall-e-3`, only `n=1` is supported. +Default Value: 1 Example: 1, ConverterType: , ParameterName: n, @@ -10264,8 +10264,8 @@ Example: 1, DefaultValue: global::G.CreateImageVariationRequestResponseFormat.Url, IsDeprecated: false, Summary: -The format in which the generated images are returned. Must be one of `url` or `b64_json`. URLs are only valid for 60 minutes after the image has been generated.
-Default Value: url<br/>
+The format in which the generated images are returned. Must be one of `url` or `b64_json`. URLs are only valid for 60 minutes after the image has been generated. +Default Value: url Example: url, ConverterType: global::OpenApiGenerator.JsonConverters.CreateImageVariationRequestResponseFormatJsonConverter, ParameterName: responseFormat, @@ -10303,8 +10303,8 @@ Example: url, DefaultValue: global::G.CreateImageVariationRequestSize._1024x1024, IsDeprecated: false, Summary: -The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024`.
-Default Value: 1024x1024<br/>
+The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024`. +Default Value: 1024x1024 Example: 1024x1024, ConverterType: global::OpenApiGenerator.JsonConverters.CreateImageVariationRequestSizeJsonConverter, ParameterName: size, @@ -10334,7 +10334,7 @@ Example: 1024x1024, IsDeprecated: false, Summary: A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids). -
+ Example: user-1234, ConverterType: , ParameterName: user, @@ -10416,8 +10416,8 @@ Example: user-1234, } ], Summary: -The format in which the generated images are returned. Must be one of `url` or `b64_json`. URLs are only valid for 60 minutes after the image has been generated.
-Default Value: url<br/>
+The format in which the generated images are returned. Must be one of `url` or `b64_json`. URLs are only valid for 60 minutes after the image has been generated. +Default Value: url Example: url, IsDeprecated: false, AdditionalModels: null, @@ -10519,8 +10519,8 @@ Example: url, } ], Summary: -The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024`.
-Default Value: 1024x1024<br/>
+The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024`. +Default Value: 1024x1024 Example: 1024x1024, IsDeprecated: false, AdditionalModels: null, @@ -10639,8 +10639,8 @@ Example: 1024x1024, Two content moderations models are available: `text-moderation-stable` and `text-moderation-latest`. The default is `text-moderation-latest` which will be automatically upgraded over time. This ensures you are always using our most accurate model. If you use `text-moderation-stable`, we will provide advanced notice before updating the model. Accuracy of `text-moderation-stable` may be slightly lower than for `text-moderation-latest`. -
-Default Value: text-moderation-latest<br/>
+ +Default Value: text-moderation-latest Example: text-moderation-stable, ConverterType: global::OpenApiGenerator.JsonConverters.AnyOfJsonConverterFactory2, ParameterName: model, @@ -12052,7 +12052,7 @@ Use "assistants" for [Assistants](/docs/api-reference/assistants) and [Message]( Summary: The name of the model to fine-tune. You can select one of the [supported models](/docs/guides/fine-tuning/what-models-can-be-fine-tuned). -
+ Example: gpt-3.5-turbo, ConverterType: global::OpenApiGenerator.JsonConverters.AnyOfJsonConverterFactory2, ParameterName: model, @@ -12088,7 +12088,7 @@ See [upload file](/docs/api-reference/files/create) for how to upload a file. Your dataset must be formatted as a JSONL file. Additionally, you must upload your file with the purpose `fine-tune`. See the [fine-tuning guide](/docs/guides/fine-tuning) for more details. -
+ Example: file-abc123, ConverterType: , ParameterName: trainingFile, @@ -12185,7 +12185,7 @@ The same data should not be present in both train and validation files. Your dataset must be formatted as a JSONL file. You must upload your file with the purpose `fine-tune`. See the [fine-tuning guide](/docs/guides/fine-tuning) for more details. -
+ Example: file-abc123, ConverterType: , ParameterName: validationFile, @@ -12243,7 +12243,7 @@ Example: file-abc123, Summary: The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases. If a seed is not specified, one will be generated for you. -
+ Example: 42, ConverterType: , ParameterName: seed, @@ -12294,7 +12294,7 @@ Example: 42, Summary: Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance. -
+ Default Value: auto, ConverterType: global::OpenApiGenerator.JsonConverters.OneOfJsonConverterFactory2, ParameterName: batchSize, @@ -12327,7 +12327,7 @@ Default Value: auto, Summary: Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting. -
+ Default Value: auto, ConverterType: global::OpenApiGenerator.JsonConverters.OneOfJsonConverterFactory2, ParameterName: learningRateMultiplier, @@ -12360,7 +12360,7 @@ Default Value: auto, Summary: The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. -
+ Default Value: auto, ConverterType: global::OpenApiGenerator.JsonConverters.OneOfJsonConverterFactory2, ParameterName: nEpochs, @@ -12625,7 +12625,7 @@ to your run, and set a default entity (team, username, etc) to be associated wit IsDeprecated: false, Summary: The name of the project that the new run will be created under. -
+ Example: my-wandb-project, ConverterType: , ParameterName: project, @@ -13238,7 +13238,7 @@ to your run, and set a default entity (team, username, etc) to be associated wit IsDeprecated: false, Summary: Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. The input must not exceed the max input tokens for the model (8192 tokens for `text-embedding-ada-002`), cannot be an empty string, and any array must be 2048 dimensions or less. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens. -
+ Example: The quick brown fox jumped over the lazy dog, ConverterType: global::OpenApiGenerator.JsonConverters.OneOfJsonConverterFactory4, ParameterName: input, @@ -13269,7 +13269,7 @@ Example: The quick brown fox jumped over the lazy dog, IsDeprecated: false, Summary: ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them. -
+ Example: text-embedding-3-small, ConverterType: global::OpenApiGenerator.JsonConverters.AnyOfJsonConverterFactory2, ParameterName: model, @@ -13305,8 +13305,8 @@ Example: text-embedding-3-small, DefaultValue: global::G.CreateEmbeddingRequestEncodingFormat.Float, IsDeprecated: false, Summary: -The format to return the embeddings in. Can be either `float` or [`base64`](https://pypi.org/project/pybase64/).
-Default Value: float<br/>
+The format to return the embeddings in. Can be either `float` or [`base64`](https://pypi.org/project/pybase64/). +Default Value: float Example: float, ConverterType: global::OpenApiGenerator.JsonConverters.CreateEmbeddingRequestEncodingFormatJsonConverter, ParameterName: encodingFormat, @@ -13365,7 +13365,7 @@ The number of dimensions the resulting output embeddings should have. Only suppo IsDeprecated: false, Summary: A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids). -
+ Example: user-1234, ConverterType: , ParameterName: user, @@ -13447,8 +13447,8 @@ Example: user-1234, } ], Summary: -The format to return the embeddings in. Can be either `float` or [`base64`](https://pypi.org/project/pybase64/).
-Default Value: float<br/>
+The format to return the embeddings in. Can be either `float` or [`base64`](https://pypi.org/project/pybase64/). +Default Value: float Example: float, IsDeprecated: false, AdditionalModels: null, @@ -13867,7 +13867,7 @@ The audio file object (not file name) to transcribe, in one of these formats: fl IsDeprecated: false, Summary: ID of the model to use. Only `whisper-1` (which is powered by our open source Whisper V2 model) is currently available. -
+ Example: whisper-1, ConverterType: global::OpenApiGenerator.JsonConverters.AnyOfJsonConverterFactory2, ParameterName: model, @@ -13968,7 +13968,7 @@ An optional text to guide the model's style or continue a previous audio segment IsDeprecated: false, Summary: The format of the transcript output, in one of these options: `json`, `text`, `srt`, `verbose_json`, or `vtt`. -
+ Default Value: json, ConverterType: global::OpenApiGenerator.JsonConverters.CreateTranscriptionRequestResponseFormatJsonConverter, ParameterName: responseFormat, @@ -13999,7 +13999,7 @@ Default Value: json, IsDeprecated: false, Summary: The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit. -
+ Default Value: 0, ConverterType: , ParameterName: temperature, @@ -14163,7 +14163,7 @@ Default Value: 0, ], Summary: The format of the transcript output, in one of these options: `json`, `text`, `srt`, `verbose_json`, or `vtt`. -
+ Default Value: json, IsDeprecated: false, AdditionalModels: null, @@ -14865,7 +14865,7 @@ The audio file object (not file name) translate, in one of these formats: flac, IsDeprecated: false, Summary: ID of the model to use. Only `whisper-1` (which is powered by our open source Whisper V2 model) is currently available. -
+ Example: whisper-1, ConverterType: global::OpenApiGenerator.JsonConverters.AnyOfJsonConverterFactory2, ParameterName: model, @@ -14925,7 +14925,7 @@ An optional text to guide the model's style or continue a previous audio segment IsDeprecated: false, Summary: The format of the transcript output, in one of these options: `json`, `text`, `srt`, `verbose_json`, or `vtt`. -
+ Default Value: json, ConverterType: , ParameterName: responseFormat, @@ -14956,7 +14956,7 @@ Default Value: json, IsDeprecated: false, Summary: The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit. -
+ Default Value: 0, ConverterType: , ParameterName: temperature, @@ -15333,7 +15333,7 @@ One of the available [TTS models](/docs/models/tts): `tts-1` or `tts-1-hd` DefaultValue: global::G.CreateSpeechRequestResponseFormat.Mp3, IsDeprecated: false, Summary: -The format to audio in. Supported formats are `mp3`, `opus`, `aac`, `flac`, `wav`, and `pcm`.
+The format to audio in. Supported formats are `mp3`, `opus`, `aac`, `flac`, `wav`, and `pcm`. Default Value: mp3, ConverterType: global::OpenApiGenerator.JsonConverters.CreateSpeechRequestResponseFormatJsonConverter, ParameterName: responseFormat, @@ -15363,7 +15363,7 @@ Default Value: mp3, DefaultValue: 1, IsDeprecated: false, Summary: -The speed of the generated audio. Select a value from `0.25` to `4.0`. `1.0` is the default.
+The speed of the generated audio. Select a value from `0.25` to `4.0`. `1.0` is the default. Default Value: 1, ConverterType: , ParameterName: speed, @@ -15734,7 +15734,7 @@ Default Value: 1, } ], Summary: -The format to audio in. Supported formats are `mp3`, `opus`, `aac`, `flac`, `wav`, and `pcm`.
+The format to audio in. Supported formats are `mp3`, `opus`, `aac`, `flac`, `wav`, and `pcm`. Default Value: mp3, IsDeprecated: false, AdditionalModels: null, @@ -17391,7 +17391,7 @@ The `fine_tuning.job` object represents a fine-tuning job that has been created IsDeprecated: false, Summary: The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. -"auto" decides the optimal number of epochs based on the size of the dataset. If setting the number manually, we support any number between 1 and 50 epochs.
+"auto" decides the optimal number of epochs based on the size of the dataset. If setting the number manually, we support any number between 1 and 50 epochs. Default Value: auto, ConverterType: global::OpenApiGenerator.JsonConverters.OneOfJsonConverterFactory2, ParameterName: nEpochs, @@ -17792,7 +17792,7 @@ to your run, and set a default entity (team, username, etc) to be associated wit IsDeprecated: false, Summary: The name of the project that the new run will be created under. -
+ Example: my-wandb-project, ConverterType: , ParameterName: project, @@ -19134,8 +19134,8 @@ The `fine_tuning.job.checkpoint` object represents a model checkpoint for a fine DefaultValue: global::G.AssistantsApiResponseFormatType.Text, IsDeprecated: false, Summary: -Must be one of `text` or `json_object`.
-Default Value: text<br/>
+Must be one of `text` or `json_object`. +Default Value: text Example: json_object, ConverterType: global::OpenApiGenerator.JsonConverters.AssistantsApiResponseFormatTypeJsonConverter, ParameterName: type, @@ -19219,8 +19219,8 @@ An object describing the expected output of the model. If `json_object` only `fu } ], Summary: -Must be one of `text` or `json_object`.
-Default Value: text<br/>
+Must be one of `text` or `json_object`. +Default Value: text Example: json_object, IsDeprecated: false, AdditionalModels: null, @@ -19463,7 +19463,7 @@ The system instructions that the assistant uses. The maximum length is 256,000 c IsDeprecated: false, Summary: A list of tool enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types `code_interpreter`, `file_search`, or `function`. -
+ Default Value: [], ConverterType: , ParameterName: tools, @@ -19553,8 +19553,8 @@ Set of 16 key-value pairs that can be attached to an object. This can be useful IsDeprecated: false, Summary: What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: temperature, @@ -19587,8 +19587,8 @@ Example: 1, An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: topP, @@ -19745,7 +19745,7 @@ A set of resources that are used by the assistant's tools. The resources are spe IsDeprecated: false, Summary: A list of [file](/docs/api-reference/files) IDs made available to the `code_interpreter`` tool. There can be a maximum of 20 files associated with the tool. -
+ Default Value: [], ConverterType: , ParameterName: fileIds, @@ -19962,7 +19962,7 @@ The ID of the [vector store](/docs/api-reference/vector-stores/object) attached IsDeprecated: false, Summary: ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them. -
+ Example: gpt-4-turbo, ConverterType: global::OpenApiGenerator.JsonConverters.AnyOfJsonConverterFactory2, ParameterName: model, @@ -20080,7 +20080,7 @@ The system instructions that the assistant uses. The maximum length is 256,000 c IsDeprecated: false, Summary: A list of tool enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types `code_interpreter`, `file_search`, or `function`. -
+ Default Value: [], ConverterType: , ParameterName: tools, @@ -20170,8 +20170,8 @@ Set of 16 key-value pairs that can be attached to an object. This can be useful IsDeprecated: false, Summary: What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: temperature, @@ -20204,8 +20204,8 @@ Example: 1, An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: topP, @@ -20363,7 +20363,7 @@ A set of resources that are used by the assistant's tools. The resources are spe IsDeprecated: false, Summary: A list of [file](/docs/api-reference/files) IDs made available to the `code_interpreter` tool. There can be a maximum of 20 files associated with the tool. -
+ Default Value: [], ConverterType: , ParameterName: fileIds, @@ -21262,7 +21262,7 @@ The system instructions that the assistant uses. The maximum length is 256,000 c IsDeprecated: false, Summary: A list of tool enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types `code_interpreter`, `file_search`, or `function`. -
+ Default Value: [], ConverterType: , ParameterName: tools, @@ -21352,8 +21352,8 @@ Set of 16 key-value pairs that can be attached to an object. This can be useful IsDeprecated: false, Summary: What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: temperature, @@ -21386,8 +21386,8 @@ Example: 1, An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: topP, @@ -21544,7 +21544,7 @@ A set of resources that are used by the assistant's tools. The resources are spe IsDeprecated: false, Summary: Overrides the list of [file](/docs/api-reference/files) IDs made available to the `code_interpreter` tool. There can be a maximum of 20 files associated with the tool. -
+ Default Value: [], ConverterType: , ParameterName: fileIds, @@ -23264,7 +23264,7 @@ Overrides the [vector store](/docs/api-reference/vector-stores/object) attached DefaultValue: , IsDeprecated: false, Summary: -The list of tools that the [assistant](/docs/api-reference/assistants) used for this run.
+The list of tools that the [assistant](/docs/api-reference/assistants) used for this run. Default Value: [], ConverterType: , ParameterName: tools, @@ -24539,7 +24539,7 @@ Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the m IsRequired: false, IsDeprecated: false, Summary: -The ID of the [Model](/docs/api-reference/models) to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used.
+The ID of the [Model](/docs/api-reference/models) to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used. Example: gpt-4-turbo, ConverterType: global::OpenApiGenerator.JsonConverters.AnyOfJsonConverterFactory2, ParameterName: model, @@ -24708,8 +24708,8 @@ Set of 16 key-value pairs that can be attached to an object. This can be useful IsDeprecated: false, Summary: What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: temperature, @@ -24742,8 +24742,8 @@ Example: 1, An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: topP, @@ -26284,7 +26284,7 @@ If `true`, returns a stream of events that happen during the Run as server-sent IsRequired: false, IsDeprecated: false, Summary: -The ID of the [Model](/docs/api-reference/models) to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used.
+The ID of the [Model](/docs/api-reference/models) to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used. Example: gpt-4-turbo, ConverterType: global::OpenApiGenerator.JsonConverters.AnyOfJsonConverterFactory2, ParameterName: model, @@ -26428,8 +26428,8 @@ Set of 16 key-value pairs that can be attached to an object. This can be useful IsDeprecated: false, Summary: What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: temperature, @@ -26462,8 +26462,8 @@ Example: 1, An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. -
-Default Value: 1<br/>
+ +Default Value: 1 Example: 1, ConverterType: , ParameterName: topP, @@ -26771,7 +26771,7 @@ A set of resources that are used by the assistant's tools. The resources are spe IsDeprecated: false, Summary: A list of [file](/docs/api-reference/files) IDs made available to the `code_interpreter` tool. There can be a maximum of 20 files associated with the tool. -
+ Default Value: [], ConverterType: , ParameterName: fileIds, @@ -27785,7 +27785,7 @@ A set of resources that are made available to the assistant's tools in this thre IsDeprecated: false, Summary: A list of [file](/docs/api-reference/files) IDs made available to the `code_interpreter` tool. There can be a maximum of 20 files associated with the tool. -
+ Default Value: [], ConverterType: , ParameterName: fileIds, @@ -28106,7 +28106,7 @@ A set of resources that are made available to the assistant's tools in this thre IsDeprecated: false, Summary: A list of [file](/docs/api-reference/files) IDs made available to the `code_interpreter` tool. There can be a maximum of 20 files associated with the tool. -
+ Default Value: [], ConverterType: , ParameterName: fileIds, @@ -28459,7 +28459,7 @@ A set of resources that are made available to the assistant's tools in this thre IsDeprecated: false, Summary: A list of [file](/docs/api-reference/files) IDs made available to the `code_interpreter` tool. There can be a maximum of 20 files associated with the tool. -
+ Default Value: [], ConverterType: , ParameterName: fileIds, @@ -30827,7 +30827,7 @@ Set of 16 key-value pairs that can be attached to an object. This can be useful DefaultValue: global::G.MessageContentImageFileObjectImageFileDetail.Auto, IsDeprecated: false, Summary: -Specifies the detail level of the image if specified by the user. `low` uses fewer tokens, you can opt in to high resolution using `high`.
+Specifies the detail level of the image if specified by the user. `low` uses fewer tokens, you can opt in to high resolution using `high`. Default Value: auto, ConverterType: global::OpenApiGenerator.JsonConverters.MessageContentImageFileObjectImageFileDetailJsonConverter, ParameterName: detail, @@ -30936,7 +30936,7 @@ Default Value: auto, } ], Summary: -Specifies the detail level of the image if specified by the user. `low` uses fewer tokens, you can opt in to high resolution using `high`.
+Specifies the detail level of the image if specified by the user. `low` uses fewer tokens, you can opt in to high resolution using `high`. Default Value: auto, IsDeprecated: false, AdditionalModels: null, @@ -31161,7 +31161,7 @@ Default Value: auto, DefaultValue: global::G.MessageDeltaContentImageFileObjectImageFileDetail.Auto, IsDeprecated: false, Summary: -Specifies the detail level of the image if specified by the user. `low` uses fewer tokens, you can opt in to high resolution using `high`.
+Specifies the detail level of the image if specified by the user. `low` uses fewer tokens, you can opt in to high resolution using `high`. Default Value: auto, ConverterType: global::OpenApiGenerator.JsonConverters.MessageDeltaContentImageFileObjectImageFileDetailJsonConverter, ParameterName: detail, @@ -31270,7 +31270,7 @@ Default Value: auto, } ], Summary: -Specifies the detail level of the image if specified by the user. `low` uses fewer tokens, you can opt in to high resolution using `high`.
+Specifies the detail level of the image if specified by the user. `low` uses fewer tokens, you can opt in to high resolution using `high`. Default Value: auto, IsDeprecated: false, AdditionalModels: null, @@ -31468,7 +31468,7 @@ Default Value: auto, DefaultValue: global::G.MessageContentImageUrlObjectImageUrlDetail.Auto, IsDeprecated: false, Summary: -Specifies the detail level of the image. `low` uses fewer tokens, you can opt in to high resolution using `high`. Default value is `auto`
+Specifies the detail level of the image. `low` uses fewer tokens, you can opt in to high resolution using `high`. Default value is `auto` Default Value: auto, ConverterType: global::OpenApiGenerator.JsonConverters.MessageContentImageUrlObjectImageUrlDetailJsonConverter, ParameterName: detail, @@ -31577,7 +31577,7 @@ Default Value: auto, } ], Summary: -Specifies the detail level of the image. `low` uses fewer tokens, you can opt in to high resolution using `high`. Default value is `auto`
+Specifies the detail level of the image. `low` uses fewer tokens, you can opt in to high resolution using `high`. Default value is `auto` Default Value: auto, IsDeprecated: false, AdditionalModels: null, @@ -31802,7 +31802,7 @@ Default Value: auto, DefaultValue: global::G.MessageDeltaContentImageUrlObjectImageUrlDetail.Auto, IsDeprecated: false, Summary: -Specifies the detail level of the image. `low` uses fewer tokens, you can opt in to high resolution using `high`.
+Specifies the detail level of the image. `low` uses fewer tokens, you can opt in to high resolution using `high`. Default Value: auto, ConverterType: global::OpenApiGenerator.JsonConverters.MessageDeltaContentImageUrlObjectImageUrlDetailJsonConverter, ParameterName: detail, @@ -31911,7 +31911,7 @@ Default Value: auto, } ], Summary: -Specifies the detail level of the image. `low` uses fewer tokens, you can opt in to high resolution using `high`.
+Specifies the detail level of the image. `low` uses fewer tokens, you can opt in to high resolution using `high`. Default Value: auto, IsDeprecated: false, AdditionalModels: null, diff --git a/src/tests/OpenApiGenerator.UnitTests/Snapshots/Replicate/Methods/_.verified.txt b/src/tests/OpenApiGenerator.UnitTests/Snapshots/Replicate/Methods/_.verified.txt index f177c4aac9..ce9c4e50eb 100644 --- a/src/tests/OpenApiGenerator.UnitTests/Snapshots/Replicate/Methods/_.verified.txt +++ b/src/tests/OpenApiGenerator.UnitTests/Snapshots/Replicate/Methods/_.verified.txt @@ -12,7 +12,7 @@ JsonSerializerContext: , GenerateJsonSerializerContextTypes: false, Summary: -Get the authenticated account
+Get the authenticated account Returns information about the user or organization associated with the provided API token. Example cURL request: @@ -104,7 +104,7 @@ The response will be a JSON object describing the account: JsonSerializerContext: , GenerateJsonSerializerContextTypes: false, Summary: -List collections of models
+List collections of models Example cURL request: ```console @@ -213,7 +213,7 @@ The response will be a paginated JSON list of collection objects: JsonSerializerContext: , GenerateJsonSerializerContextTypes: false, Summary: -Get a collection of models
+Get a collection of models Example cURL request: ```console @@ -286,7 +286,7 @@ The response will be a collection object with a nested list of the models in tha JsonSerializerContext: , GenerateJsonSerializerContextTypes: false, Summary: -List deployments
+List deployments Get a list of deployments associated with the current account, including the latest release configuration for each deployment. Example cURL request: @@ -563,7 +563,7 @@ The response will be a paginated JSON array of deployment objects, sorted with t GenerateJsonSerializerContextTypes: false, HttpMethod: Post, Summary: -Create a deployment
+Create a deployment Create a new deployment: Example cURL request: @@ -758,7 +758,7 @@ The response will be a JSON object describing the deployment: GenerateJsonSerializerContextTypes: false, HttpMethod: Delete, Summary: -Delete a deployment
+Delete a deployment Delete a deployment Deployment deletion has some restrictions: @@ -889,7 +889,7 @@ The response will be an empty 204, indicating the deployment has been deleted. JsonSerializerContext: , GenerateJsonSerializerContextTypes: false, Summary: -Get a deployment
+Get a deployment Get information about a deployment by name including the current release. Example cURL request: @@ -1166,7 +1166,7 @@ The response will be a JSON object describing the deployment: GenerateJsonSerializerContextTypes: false, HttpMethod: Patch, Summary: -Update a deployment
+Update a deployment Update properties of an existing deployment, including hardware, min/max instances, and the deployment's underlying model [version](https://replicate.com/docs/how-does-replicate-work#versions). Example cURL request: @@ -1507,7 +1507,7 @@ Requests for event types `output` and `logs` will be sent at most once every 500 GenerateJsonSerializerContextTypes: false, HttpMethod: Post, Summary: -Create a prediction using a deployment
+Create a prediction using a deployment Start a new prediction for a deployment of a model using inputs you provide. Example request body: @@ -1637,7 +1637,7 @@ Output files are served by `replicate.delivery` and its subdomains. If you use a JsonSerializerContext: , GenerateJsonSerializerContextTypes: false, Summary: -List available hardware for models
+List available hardware for models Example cURL request: ```console @@ -1727,7 +1727,7 @@ The response will be a JSON array of hardware objects: JsonSerializerContext: , GenerateJsonSerializerContextTypes: false, Summary: -List public models
+List public models Get a paginated list of public models. Example cURL request: @@ -2074,7 +2074,7 @@ The `cover_image_url` string is an HTTPS URL for an image file. This can be: GenerateJsonSerializerContextTypes: false, HttpMethod: Post, Summary: -Create a model
+Create a model Create a model. Example cURL request: @@ -2238,7 +2238,7 @@ The response will be a model object in the following format: GenerateJsonSerializerContextTypes: false, HttpMethod: Delete, Summary: -Delete a model
+Delete a model Delete a model Model deletion has some restrictions: @@ -2371,7 +2371,7 @@ The response will be an empty 204, indicating the model has been deleted. JsonSerializerContext: , GenerateJsonSerializerContextTypes: false, Summary: -Get a model
+Get a model Example cURL request: ```console @@ -2676,7 +2676,7 @@ Requests for event types `output` and `logs` will be sent at most once every 500 GenerateJsonSerializerContextTypes: false, HttpMethod: Post, Summary: -Create a prediction using an official model
+Create a prediction using an official model Start a new prediction for an official model using the inputs you provide. Example request body: @@ -2867,7 +2867,7 @@ Output files are served by `replicate.delivery` and its subdomains. If you use a JsonSerializerContext: , GenerateJsonSerializerContextTypes: false, Summary: -List model versions
+List model versions Example cURL request: ```console @@ -3038,7 +3038,7 @@ The response will be a JSON array of model version objects, sorted with the most GenerateJsonSerializerContextTypes: false, HttpMethod: Delete, Summary: -Delete a model version
+Delete a model version Delete a model version and all associated predictions, including all output files. Model version deletion has some restrictions: @@ -3204,7 +3204,7 @@ The response will be an empty 202, indicating the deletion request has been acce JsonSerializerContext: , GenerateJsonSerializerContextTypes: false, Summary: -Get a model version
+Get a model version Example cURL request: ```console @@ -3536,7 +3536,7 @@ Requests for event types `output` and `logs` will be sent at most once every 500 GenerateJsonSerializerContextTypes: false, HttpMethod: Post, Summary: -Create a training
+Create a training Start a new training of the model version you specify. Example request body: @@ -3668,7 +3668,7 @@ To find some models to train on, check out the [trainable language models collec JsonSerializerContext: , GenerateJsonSerializerContextTypes: false, Summary: -List predictions
+List predictions Get a paginated list of predictions that you've created. This will include predictions created from the API and the website. It will return 100 records per page. Example cURL request: @@ -3958,7 +3958,7 @@ Requests for event types `output` and `logs` will be sent at most once every 500 GenerateJsonSerializerContextTypes: false, HttpMethod: Post, Summary: -Create a prediction
+Create a prediction Start a new prediction for the model version and inputs you provide. Example request body: @@ -4122,7 +4122,7 @@ Output files are served by `replicate.delivery` and its subdomains. If you use a JsonSerializerContext: , GenerateJsonSerializerContextTypes: false, Summary: -Get a prediction
+Get a prediction Get the current state of a prediction. Example cURL request: @@ -4316,7 +4316,7 @@ Output files are served by `replicate.delivery` and its subdomains. If you use a JsonSerializerContext: , GenerateJsonSerializerContextTypes: false, Summary: -List trainings
+List trainings Get a paginated list of trainings that you've created. This will include trainings created from the API and the website. It will return 100 records per page. Example cURL request: @@ -4457,7 +4457,7 @@ The response will be a paginated JSON array of training objects, sorted with the JsonSerializerContext: , GenerateJsonSerializerContextTypes: false, Summary: -Get a training
+Get a training Get the current state of a training. Example cURL request: @@ -4650,7 +4650,7 @@ Terminated trainings (with a status of `succeeded`, `failed`, or `canceled`) wil JsonSerializerContext: , GenerateJsonSerializerContextTypes: false, Summary: -Get the signing secret for the default webhook
+Get the signing secret for the default webhook Get the signing secret for the default webhook endpoint. This is used to verify that webhook requests are coming from Replicate. Example cURL request: