Releases: intelligentnode/IntelliNode
IntelliNode v1.3.7
New Changes 🌟
- Support Llama Code and Python Models: you can now utilize the Llama code- and Python-specialized models in the IntelliNode module.
- Large Data Memory Semantic Search: introducing enhanced semantic search with large data using pagination.
Code Examples 💻
- Install the module:
npm i intellinode
- The Llama coder sample: test_llama_chatbot.
- The large data semantic search using SemanticSearchPaging: test_semantic_search_pagination.
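The pagination idea behind SemanticSearchPaging is to rank documents page by page and keep only a running top-k, so the whole corpus never has to be held in memory at once. Below is a minimal plain-JavaScript sketch of that ranking step over precomputed embedding vectors; it illustrates the technique and is not the library's implementation:

```javascript
// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank paged batches of {id, vector} entries against a query vector,
// keeping only the global top-k across all pages.
function topMatches(queryVec, pages, k) {
  let best = [];
  for (const page of pages) {
    for (const doc of page) {
      best.push({ id: doc.id, score: cosine(queryVec, doc.vector) });
    }
    // trim after each page so memory stays bounded by k
    best.sort((x, y) => y.score - x.score);
    best = best.slice(0, k);
  }
  return best;
}

// Example: two pages of tiny 2-d vectors.
const pages = [
  [{ id: 'a', vector: [1, 0] }, { id: 'b', vector: [0, 1] }],
  [{ id: 'c', vector: [0.9, 0.1] }],
];
console.log(topMatches([1, 0], pages, 2).map(m => m.id)); // → [ 'a', 'c' ]
```

In the library, the embedding vectors would come from your chosen provider; the top-k trimming after each page is what keeps large data sets tractable.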
IntelliNode v1.3.0
New Changes 🌟
- Introduced the Large Language Model (LLM) Evaluator: a quick comparison among multiple models' results based on your target answers. This feature simplifies the assessment of LLMs such as Llama V2 and GPT-4, providing valuable insights into model performance.
- Prompt Engineering Feature: an efficient way to automate the process of creating new prompts.
Code Examples 💻
LLM Evaluator
const { LLMEvaluation } = require('intellinode');
// create the LLMEvaluation with the desired embedding for unified comparison.
const llmEvaluation = new LLMEvaluation(openaiKey, 'openai');
const results = await llmEvaluation.compareModels(inputString, targetAnswers, providerSets)
Check the model evaluation documentation.
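Conceptually, the evaluator scores each model's answer by how close it lands to your target answers under a shared embedding. The sketch below substitutes a simple token-overlap (Jaccard) score for the embedding comparison, just to make the ranking step concrete; it is not the library's implementation:

```javascript
// Token-level Jaccard overlap: a simple lexical stand-in for the
// embedding-based comparison that LLMEvaluation performs.
function overlap(answer, target) {
  const a = new Set(answer.toLowerCase().split(/\s+/));
  const b = new Set(target.toLowerCase().split(/\s+/));
  const inter = [...a].filter(w => b.has(w)).length;
  return inter / new Set([...a, ...b]).size;
}

// Compare two models' answers against a target answer and rank them.
const target = 'paris is the capital of france';
const answers = {
  modelA: 'the capital of france is paris',
  modelB: 'london is a large city',
};
const ranked = Object.entries(answers)
  .map(([name, text]) => ({ name, score: overlap(text, target) }))
  .sort((x, y) => y.score - x.score);
console.log(ranked[0].name); // → modelA
```

With the real evaluator, the unified embedding makes scores comparable even when models phrase answers differently, which pure lexical overlap cannot do.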
Prompt Engineering
Import the Prompt class:
const { Prompt } = require('intellinode');
Generate a new prompt using an existing model:
const promptTemp = await Prompt.fromChatGPT("generate a fantasy image with ninja jumping across buildings", openaiApiKey);
Automatically fill and format a prompt with dynamic data:
const promptTemp = await Prompt.fromText("information retrieval about ${reference} text with ${query} user input");
const inputString = promptTemp.format({reference: '<reference_text>', query: '<user_query>'})
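For reference, the substitution that format performs on a fromText template can be sketched in a few lines of plain JavaScript (an illustration of the behavior, not the library's code; the helper name fillTemplate is hypothetical):

```javascript
// Fill ${name} placeholders in a template with values from a data object.
// Placeholders without a matching key are left untouched.
function fillTemplate(template, data) {
  return template.replace(/\$\{(\w+)\}/g, (match, key) =>
    key in data ? data[key] : match);
}

const template = 'information retrieval about ${reference} text with ${query} user input';
const filled = fillTemplate(template, { reference: 'wiki article', query: 'capital of France' });
console.log(filled);
// → information retrieval about wiki article text with capital of France user input
```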
IntelliNode v1.2.3 🦙
Update Chatbot with Llama V2 Support 🌟
The IntelliNode library now supports Llama 🦙 v2 chat models, enhancing the capabilities of the chatbot function even further.
The Llama model is available with two options:
- Replicate API Integration: a seamless and fast implementation requiring only an API key. Users can integrate Llama v2 into their existing apps with minimal configuration using the intellinode module.
- AWS SageMaker Integration: a more advanced and flexible integration for enterprise-level projects.
Code Examples 💻
The documentation on how to switch the chatbot between ChatGPT and LLama can be found in the IntelliNode chatbot wiki.
IntelliNode v1.1.0
New Changes 🌟
- Upgraded the chatbot to support function calls, which empowers automation.
- Added an automation sample for AWS S3.
- Updated the OpenAI wrapper to support the organization as an optional parameter.
Code Examples 💻
Automation
import:
const { Chatbot, ChatGPTInput, ChatGPTMessage, FunctionModelInput } = require('intellinode');
call:
// define the functions details
const functions_desc = [
new FunctionModelInput('list_buckets', 'List all available S3 buckets')
];
// prepare the input
const systemMsg = "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.";
const input = new ChatGPTInput(systemMsg, {model: "gpt-3.5-turbo-0613"});
input.addMessage(new ChatGPTMessage("List my s3 buckets.", "user"));
// call the chatbot
const bot = new Chatbot(apiKey, SupportedChatModels.OPENAI);
const responses = await bot.chat(input, functions_desc.map(f => f.getFunctionModelInput()));
parse the output:
// if the response type is an object, the model returned a function name and arguments to call.
// otherwise, the model returned a text response asking for clarification.
responses.forEach((response) => {
if (typeof response === "object") {
console.log("- Function call: ", JSON.stringify(response.function_call, null, 2));
} else {
console.log("- " + response);
}
});
Check the full automation example under the samples directory.
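Under the hood, providers like OpenAI expect each callable function as a JSON descriptor: a name, a description, and a JSON-Schema parameters object. A minimal sketch of the shape a definition such as list_buckets roughly maps to (illustrative only; the exact serialization is handled by the library, and the helper name toFunctionDescriptor is hypothetical):

```javascript
// Build an OpenAI-style function descriptor from a name and description.
// `parameters` defaults to a JSON-Schema object with no properties.
function toFunctionDescriptor(name, description, parameters) {
  return {
    name,
    description,
    parameters: parameters || { type: 'object', properties: {} },
  };
}

const desc = toFunctionDescriptor('list_buckets', 'List all available S3 buckets');
console.log(JSON.stringify(desc, null, 2));
```

Adding a `parameters` schema with `properties` and `required` fields lets the model pass arguments to your function instead of calling it bare.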
IntelliNode v1.0.0
🎉 The first stable release of IntelliNode, after testing with hundreds of users via the online playground at show.intellinode.ai.
New Changes 🌟
- Update the Gen function to support the new 16k context version of gpt-3.5-turbo.
- Security enhancements.
- Add a test runner to validate the code before publishing to the npm repository.
IntelliNode v0.0.31
New Changes 🌟
- Support Azure OpenAI backend.
- Add an optional OpenAI proxy URL to bypass regional restrictions (requested by users).
- Improve the Gen function by accepting a backend engine to switch between models in one line of code.
- Enhance the CSV to HTML charts function.
Code Examples 💻
OpenAI Azure Service
- Set Azure backend as proxy
const { ProxyHelper } = require('intellinode');
ProxyHelper.getInstance().setAzureOpenai(resourceName);
- Call any openai function
const { Gen } = require('intellinode');
// generate blog post
const result = await Gen.get_blog_post(prompt, openaiApiKey, 'openai' /* provider */);
Gen content with multi backend example
- Generate marketing content using openai
const marketingDesc = await Gen.get_marketing_desc(prompt, openaiApiKey);
- Or using cohere
const marketingDesc = await Gen.get_marketing_desc(prompt, cohereApiKey, 'cohere' /* provider */);
Gen image with multi backend example
- Generate prompt description using chatGPT and the image using stable diffusion:
const image = await Gen.generate_image_from_desc(prompt, openaiApiKey, stabilityApiKey, true);
- or generate the image using DALL·E:
const image = await Gen.generate_image_from_desc(prompt, openaiApiKey, openaiApiKey, true, 'openai' /* provider */);
IntelliNode v0.0.21
New Changes 🌟
- Introducing a new Gen function: generate charts directly from CSV data.
- Enhanced HTML generation: we have further enhanced the Gen function to generate HTML pages based on user prompts.
Code Examples 💻
- import:
const { Gen } = require('intellinode');
Generate charts
const csv_str_data = '<your csv as string>';
const topic = "<the csv topic>";
const htmlCode = await Gen.generate_dashboard(csv_str_data, topic, openaiKey, 2 /* num_graphs */);
Generate html files
const text = 'a registration page with flat modern theme.';
await Gen.save_html_page(text, 'tempFolder' /* folder */, 'register_page' /* file_name */, '<key>' /* openaiKey */);
Note that the Gen function is in beta; the generated output may not always meet your expectations.
For more code examples, check the sample.
IntelliNode v0.0.19
New Features 🌟
- Add a new Gen function to generate and save HTML pages from user prompts.
Code Examples 💻
- import:
const { Gen } = require('intellinode');
- execute the one line command to generate html:
const text = 'a registration page with flat modern theme.';
await Gen.save_html_page(text, 'tempFolder' /* folder */, 'register_page' /* file_name */, '<key>' /* openaiKey */);
- other existing functions
// one line to generate blog post
const blogPost = await Gen.get_blog_post(prompt, openaiApiKey);
For more code examples, check the sample.
IntelliNode v0.0.18
New Features 🌟
- Introduce the Gen function, an efficient addition to IntelliNode to generate text content, images, and audio in the shortest way possible.
Code Examples 💻
- import:
const { Gen } = require('intellinode');
- execute the one line command:
// one line to generate marketing description
const marketingDesc = await Gen.get_marketing_desc('gaming chair', openaiApiKey);
// or one line to generate blog post
const blogPost = await Gen.get_blog_post(prompt, openaiApiKey);
// or generate images from production description
const image = await Gen.generate_image_from_desc(prompt, openaiApiKey, stabilityApiKey, true /* is_base64 */);
// or one line to generate speech synthesis
const audioContent = await Gen.generate_speech_synthesis(text, apiKey);
For more details, check the sample.
IntelliNode v0.0.15
New Features 🌟
- Add an image to image model to the stable diffusion wrapper (used for image styling or edits).
Changes 🔧
- Update the remote image controller to use stable diffusion XL model.
Code Examples 💻
Remote image model
const { RemoteImageModel, SupportedImageModels, ImageModelInput } = require('intellinode');
const imgModel = new RemoteImageModel('stability-ai-key', SupportedImageModels.STABILITY);
const result_base64 = await imgModel.generateImages(new ImageModelInput({
prompt: imageText,
numberOfImages: 3 }));