Commit e7cdefc ("Best")
ParisNeo committed Feb 28, 2024
1 parent be34aae

Showing 23 changed files with 377 additions and 316 deletions.

28 changes: 28 additions & 0 deletions docs/youtube/lollms_architecture.md
@@ -0,0 +1,28 @@
Hi there! Today, we're diving into the future of artificial intelligence integration with an exciting tool called LOLLMS – the Lord of Large Language and Multimodal Systems. Whether you're a developer, a content creator, or just curious about the possibilities of AI, this video will give you a comprehensive look at a platform that's shaping the way we interact with various AI systems. So, let's get started!

As you see here, we begin with the core of LOLLMS, a clean slate ready to be filled with endless possibilities. It's the foundation upon which all the magic happens.

If you have used lollms, you have probably come across the word bindings. Bindings, which are essentially Python code, serve as the essential link that enables lollms to interact with models through web queries or Python libraries. This unique functionality is what gives lollms the ability to tap into a diverse array of models, regardless of their form or location. It's the key ingredient that allows lollms to seamlessly connect with both local and remote services. Because all bindings follow the same patterns and offer consistent methods, lollms can remain model agnostic while maximizing its capabilities.
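The binding pattern described above can be sketched as a small class hierarchy. This is a minimal illustration, not the actual lollms API: the class and method names here are assumptions chosen only to show how one consistent interface can cover both local libraries and remote web services.

```python
from abc import ABC, abstractmethod

class Binding(ABC):
    """Common interface every binding implements, keeping lollms model agnostic."""

    @abstractmethod
    def generate(self, prompt: str, max_tokens: int = 256) -> str:
        """Produce text from the underlying model (local library or web API)."""

class LocalBinding(Binding):
    """Wraps a locally loaded model object (here: any callable taking a prompt)."""
    def __init__(self, model):
        self.model = model

    def generate(self, prompt: str, max_tokens: int = 256) -> str:
        return self.model(prompt)[:max_tokens]

class RemoteBinding(Binding):
    """Wraps a remote endpoint; a real binding would issue a web query here."""
    def __init__(self, endpoint: str):
        self.endpoint = endpoint

    def generate(self, prompt: str, max_tokens: int = 256) -> str:
        # Placeholder for an HTTP request to self.endpoint.
        return f"[response from {self.endpoint}]"
```

Because every binding exposes the same `generate` signature, the rest of the system never needs to know which model, library, or service sits behind it.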

Alright, let's talk about the next piece of the puzzle - services. These are additional servers created by third-party developers and tailored for lollms' use. What's great is that all of these services are open source and come with permissive licenses. They offer a range of functionalities, from LLM services like ollama, vllm, and text generation, to innovative options like my new petals server. There are even services dedicated to image generation, such as AUTOMATIC1111's stable diffusion webui and daswer123's Xtts server. The best part? Users can easily install these services with just a click and customize their settings directly within lollms.

Moving on to the next exciting topic - generation engines. These engines act as the key to unlocking lollms' potential in generating text, images, and audio by seamlessly leveraging the bindings. Not only do they facilitate intelligent interactions with the bindings, but they also support the execution of code in various programming languages. This allows the AI to create, execute, and test code efficiently, thanks to a unified library of execution engines. The generation engines are crucial in enabling lollms to produce content in a cohesive manner, utilizing the power of bindings to deliver a wide range of engaging and diverse outputs.
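The "unified library of execution engines" idea can be sketched as a single dispatcher that routes generated code to a per-language engine. This toy version supports only Python and is an assumption for illustration; the real lollms engines live in `utilities/execution_engines`.

```python
import subprocess
import sys
import tempfile
import time

def execute_python(code: str) -> dict:
    """Write the code to a temp file, run it, and report output plus timing."""
    start = time.time()
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run([sys.executable, path], capture_output=True, text=True)
    return {"output": proc.stdout + proc.stderr,
            "execution_time": time.time() - start}

# One registry maps language names to engines; adding a language is one entry.
ENGINES = {"python": execute_python}

def execute_code(code: str, language: str) -> dict:
    engine = ENGINES.get(language)
    if engine is None:
        return {"status": False, "error": "Unsupported language", "execution_time": 0}
    return engine(code)
```

The registry keeps the dispatch logic flat: each engine returns the same result shape, so the caller can treat text, diagram, or script execution uniformly.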

The personalities engine is where LOLLMS truly shines. It allows the creation of distinct agents with unique characteristics, whether through text conditioning or custom Python code, enabling a multitude of applications. This engine offers many useful methods: a yes/no method that lets the AI ask itself yes-or-no questions about the prompt, a multichoice Q&A method that lets it select from pre-crafted choices, code extraction tools that ask the model to build code and then extract it and include it in the current code as an element, direct access to RAG and internet search, and workflow-style generation that lets a developer build a workflow to automate data manipulation, or even to code or interact with the PC through function calls.
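Helpers like the yes/no and multichoice methods can be sketched in a few lines. Here `generate` stands in for whatever text-generation callable the personality has access to; the function names and prompt wording are assumptions, not the actual lollms implementation.

```python
def yes_no(generate, question: str, context: str = "") -> bool:
    """Ask the model a yes/no question and parse the first word of its answer."""
    prompt = f"{context}\nAnswer strictly with Yes or No.\nQuestion: {question}\nAnswer:"
    answer = generate(prompt).strip().lower()
    return answer.startswith("yes")

def multichoice(generate, question: str, choices: list[str]) -> int:
    """Ask the model to pick one of several pre-crafted choices; return its index."""
    listing = "\n".join(f"{i}. {c}" for i, c in enumerate(choices))
    prompt = f"{question}\n{listing}\nAnswer with the number only:"
    answer = generate(prompt).strip()
    digits = "".join(ch for ch in answer if ch.isdigit())
    return int(digits) if digits else -1  # -1 signals an unparseable answer
```

Constraining the model to a closed answer set like this is what makes the results machine-checkable, which is why such methods are useful building blocks for agents.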

Beyond these methods, the personalities engine also exposes a state machine interface, giving developers full control when crafting dynamic and interactive content. In lollms, personalities are carefully categorized, spanning from fun tools and games to professional personas capable of handling a significant workload, freeing up time for more engaging pursuits. With over 500 personas developed in the past year and new ones created weekly, the potential of lollms personalities is limitless.

Let's now explore the dynamic capabilities of the RAG engine and the Extensions engine within lollms. These components add both depth and extensibility, transforming lollms from a mere tool into a thriving ecosystem. The RAG engine, or Retrieval Augmented Generation, empowers lollms to analyze your documents or websites and execute tasks with enhanced knowledge. It can even provide sources, boosting confidence in its responses and mitigating hallucinations. The Extensions engine further enriches lollms' functionality, offering a platform for continuous growth and innovation. Together, these engines elevate lollms' capabilities and contribute to its evolution as a versatile and reliable resource.
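The retrieval-augmented flow can be sketched in miniature. This version ranks document chunks by naive word overlap purely for illustration; the real lollms RAG engine uses proper vector search, so treat every name below as an assumption.

```python
def rank_chunks(query: str, chunks: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """chunks are (source, text) pairs, ranked by shared-word count with the query."""
    q = set(query.lower().split())
    return sorted(chunks, key=lambda c: -len(q & set(c[1].lower().split())))

def build_rag_prompt(query: str, chunks: list[tuple[str, str]], k: int = 2) -> str:
    """Inject the top-k chunks, with their sources, ahead of the question."""
    top = rank_chunks(query, chunks)[:k]
    context = "\n".join(f"[source: {src}] {txt}" for src, txt in top)
    return (f"Using only the context below, answer and cite sources.\n"
            f"{context}\nQuestion: {query}\nAnswer:")
```

Keeping the source tag next to each chunk is what lets the model cite where an answer came from, which is exactly the hallucination-mitigation property described above.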

Let's now shine a spotlight on the vibrant world of personalities within the platform. These personalities breathe life into the AI, offering a personalized and engaging interaction experience. Each personality is tailored to a different application, making interaction with the AI not only functional but also enjoyable. Personalities can be built by me or by third parties, and users also have the flexibility to create their own with the personality maker tool, which can craft a full persona from a simple prompt or manually adjust existing personas to suit their needs. All 500 personas available in the zoo are free to use, with the only requirement being to maintain authorship credit. Users can modify and even share these personas with others, fostering a collaborative and creative community.

Now, let's turn our attention to the heart of the operation - the LOLLMS Elf server. This server, with its RESTful interface powered by FastAPI and a socket.io connection for the WebUI, acts as the central hub for all communication between the different components. The Elf server is a versatile tool: it can be configured to serve the webui, or to run as a headless text generation server. In the headless configuration, it can connect with a variety of applications, including other lollms systems and OpenAI-, MistralAI-, Gemini-, Ollama-, and vLLM-compatible clients, enabling them to generate text. The text generation can be raw, or it can be enhanced with personalities to improve the quality and relevance of the output.


Now, let's explore how the elf server and bindings work together to make lollms a versatile switch, enabling any client to use another service, even if they're not initially compatible. For instance, imagine you have a client designed for the OpenAI interface, but you want to use Google Gemini instead. No problem! Simply select the Google Gemini binding and direct your OpenAI-compatible client to lollms. This flexibility works in all directions, allowing clients that exclusively use API services to be used with local models. With lollms, the possibilities are endless, as it breaks down compatibility barriers and unlocks new potential for various clients and services.
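The "switch" idea boils down to this: an OpenAI-compatible client only changes its base URL to point at the lollms Elf server, and lollms forwards the request to whatever binding is selected. The sketch below builds the request such a client would send; the port and model name are assumptions.

```python
import json

def openai_style_request(base_url: str, model: str, prompt: str) -> tuple[str, str]:
    """Return (endpoint, body) for a chat completion aimed at any compatible server."""
    endpoint = f"{base_url}/v1/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return endpoint, body

# The same client code, retargeted from api.openai.com to a local lollms server
# that has (for example) a Google Gemini binding selected:
endpoint, body = openai_style_request("http://localhost:9600", "gemini-pro", "Hello!")
```

Nothing about the payload changes between targets; only the endpoint does, which is why clients built for one API can transparently use another service through lollms.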

Now, let's talk about the development of LOLLMS. It's primarily a one-man show, with occasional support from the community. I work tirelessly on it during my nights, weekends, and vacations to bring you the best possible tool. However, I kindly ask for your patience when it comes to bugs or issues, especially with bindings that frequently change and require constant updates. As an open-source project, LOLLMS welcomes any help in maintaining and improving it. Your assistance, particularly in keeping track of the evolving bindings, would be greatly appreciated. Together, we can make LOLLMS even better!

And that's a wrap, folks! You've just been introduced to the amazing world of LOLLMS and its powerful components. But remember, this is just the tip of the iceberg. There's so much more to explore and discover with this fantastic tool. So, stay tuned for more in-depth tutorials and guides on how to maximize your experience with LOLLMS. Together, we'll unlock its full potential and create something truly extraordinary. Until next time, happy creating!

Thanks for watching, and don't forget to hit that subscribe button for more content on the cutting edge of technology. Drop a like if you're excited about the future of AI, and share your thoughts in the comments below. Until next time, keep innovating! See ya!
12 changes: 6 additions & 6 deletions endpoints/lollms_advanced.py
@@ -95,28 +95,28 @@ async def execute_code(request: CodeRequest):
         if language=="javascript":
             ASCIIColors.info("Executing javascript code:")
             ASCIIColors.yellow(code)
-            return execute_javascript(code, discussion_id, message_id)
+            return execute_javascript(code)
         if language in ["html","html5","svg"]:
             ASCIIColors.info("Executing javascript code:")
             ASCIIColors.yellow(code)
-            return execute_html(code, discussion_id, message_id)
+            return execute_html(code)

         elif language=="latex":
             ASCIIColors.info("Executing latex code:")
             ASCIIColors.yellow(code)
-            return execute_latex(code, discussion_id, message_id)
+            return execute_latex(code, client, message_id)
         elif language in ["bash","shell","cmd","powershell"]:
             ASCIIColors.info("Executing shell code:")
             ASCIIColors.yellow(code)
-            return execute_bash(code, discussion_id, message_id)
+            return execute_bash(code, client)
         elif language in ["mermaid"]:
             ASCIIColors.info("Executing mermaid code:")
             ASCIIColors.yellow(code)
-            return execute_mermaid(code, discussion_id, message_id)
+            return execute_mermaid(code)
         elif language in ["graphviz","dot"]:
             ASCIIColors.info("Executing graphviz code:")
             ASCIIColors.yellow(code)
-            return execute_graphviz(code, discussion_id, message_id)
+            return execute_graphviz(code)
         return {"status": False, "error": "Unsupported language", "execution_time": 0}
     except Exception as ex:
         trace_exception(ex)
2 changes: 1 addition & 1 deletion utilities/execution_engines/graphviz_execution_engine.py
@@ -68,6 +68,6 @@ def build_graphviz_output(code, ifram_name="unnamed"):
     execution_time = time.time() - start_time
     return {"output": rendered, "execution_time": execution_time}

-def execute_graphviz(code, discussion_id, message_id):
+def execute_graphviz(code):

     return build_graphviz_output(code)
2 changes: 1 addition & 1 deletion utilities/execution_engines/html_execution_engine.py
@@ -40,5 +40,5 @@ def build_html_output(code, ifram_name="unnamed"):
     execution_time = time.time() - start_time
     return {"output": rendered, "execution_time": execution_time}

-def execute_html(code, discussion_id, message_id):
+def execute_html(code):
     return build_html_output(code)
2 changes: 1 addition & 1 deletion utilities/execution_engines/javascript_execution_engine.py
@@ -49,5 +49,5 @@ def build_javascript_output(code, ifram_name="unnamed"):
     execution_time = time.time() - start_time
     return {"output": rendered, "execution_time": execution_time}

-def execute_javascript(code, discussion_id, message_id):
+def execute_javascript(code):
     return build_javascript_output(code)
2 changes: 1 addition & 1 deletion utilities/execution_engines/latex_execution_engine.py
@@ -79,6 +79,6 @@ def spawn_process(code):
         host = lollmsElfServer.config.host

         url = f"{host}:{lollmsElfServer.config.port}/{discussion_path_2_url(pdf_file)}"
-        output_json = {"output": f"Pdf file generated at: {pdf_file}\n<a href='{url}'>Click here to show</a>", "execution_time": execution_time}
+        output_json = {"output": f"Pdf file generated at: {pdf_file}\n<a href='{url}' target='_blank'>Click here to show</a>", "execution_time": execution_time}
         return output_json
     return spawn_process(code)
2 changes: 1 addition & 1 deletion utilities/execution_engines/mermaid_execution_engine.py
@@ -82,6 +82,6 @@ def build_mermaid_output(code, ifram_name="unnamed"):



-def execute_mermaid(code, discussion_id, message_id):
+def execute_mermaid(code):

     return build_mermaid_output(code)

482 changes: 241 additions & 241 deletions web/dist/assets/index-36f2b02c.js → web/dist/assets/index-fd262646.js

Large diffs are not rendered by default.

7 changes: 0 additions & 7 deletions web/dist/assets/rec_off-20dfd9fb.svg

This file was deleted.

5 changes: 5 additions & 0 deletions web/dist/assets/rec_off-2c08e836.svg
6 changes: 6 additions & 0 deletions web/dist/assets/rec_on-3b37b566.svg
8 changes: 0 additions & 8 deletions web/dist/assets/rec_on-92331eb8.svg

This file was deleted.

4 changes: 2 additions & 2 deletions web/dist/index.html
@@ -6,8 +6,8 @@

     <meta name="viewport" content="width=device-width, initial-scale=1.0">
     <title>LoLLMS WebUI - Welcome</title>
-    <script type="module" crossorigin src="/assets/index-36f2b02c.js"></script>
-    <link rel="stylesheet" href="/assets/index-13bf9073.css">
+    <script type="module" crossorigin src="/assets/index-fd262646.js"></script>
+    <link rel="stylesheet" href="/assets/index-a12915cf.css">
   </head>
   <body>
     <div id="app"></div>
12 changes: 5 additions & 7 deletions web/src/assets/rec_off.svg
14 changes: 6 additions & 8 deletions web/src/assets/rec_on.svg
1 change: 0 additions & 1 deletion web/src/components/Message.vue
@@ -622,7 +622,6 @@ export default {
     },
     getImgUrl() {
       if (this.avatar) {
-        console.log("Avatar:", bUrl + this.avatar)
         return bUrl + this.avatar
       }
       console.log("No avatar found")
14 changes: 13 additions & 1 deletion web/src/components/TokensHilighter.vue
@@ -4,11 +4,23 @@
       <span :style="{ backgroundColor: colors[index % colors.length] }">{{ token[0] }}</span>
     </span>
   </div>
+  <div>
+    <span v-for="(token, index) in namedTokens" :key="index">
+      <span :style="{ backgroundColor: colors[index % colors.length] }">{{ token[1] }}</span>
+    </span>
+  </div>
 </template>
 <script>
 export default {
-  props: ['namedTokens'],
+  name: "TokensHilighter",
+  props: {
+    namedTokens: {
+      type: Object,
+      required: true
+    }
+  },
   data() {
     return {
       colors: [
12 changes: 10 additions & 2 deletions web/src/components/TopBar.vue
@@ -142,6 +142,7 @@ import feather from 'feather-icons'
 import static_info from "../assets/static_info.svg"
 import animated_info from "../assets/animated_info.svg"
+import { useRouter } from 'vue-router'
 </script>
 <script>
@@ -276,9 +277,16 @@ export default {
       setTimeout(()=>{
         window.close();
       },2000)
     },
     refreshPage() {
-      window.location.href = "/";
+      const hostnameParts = window.location.href.split('/');
+      if(hostnameParts.length > 4){
+        window.location.href='/'
+      }
+      else{
+        window.location.reload(true);
+      }
     },
     handleOk(inputText) {
       console.log("Input text:", inputText);