This crate simplifies interfacing with the Prediction Guard API from Rust.
To access the API, contact Prediction Guard to obtain an enterprise access token. You will need this access token to continue.
extern crate prediction_guard as pg_client;

use pg_client::{chat, client, models};

#[tokio::main]
async fn main() {
    let clt = client::Client::new().expect("client value");

    let req = chat::Request::<chat::Message>::new("NeuralChat7B".to_string())
        .add_message(
            chat::Roles::User,
            "How do you feel about the world in general?".to_string(),
        )
        .max_tokens(1000)
        .temperature(0.85);

    let result = clt
        .generate_chat_completion(&req)
        .await
        .expect("error from generate chat completion");

    println!("\nchat completion response:\n\n {:?}", result);
}
Take a look at the examples directory for more examples.
You can find the Prediction Guard API docs on the Prediction Guard website.
Once you have your API key, you can use the Makefile to run curl commands against the different API endpoints. For example, make curl-injection connects to the injection endpoint and returns the injection response. The Makefile also lets you run the different examples, such as make run-injection to run the injection example.
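As a sketch of that workflow, the two Makefile targets named above would be invoked from the repository root like this (the exact environment variable the Makefile reads for the API key is defined in the Makefile itself and is not shown here):

```shell
# Hit the injection endpoint directly via curl, as described above.
make curl-injection

# Build and run the bundled injection example instead.
make run-injection
```

Both targets assume you are in a checkout of the repository and have a valid access token configured.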
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Copyright 2024 Prediction Guard